; HL = SAMPLE TO PLAY (8-bit unsigned)
; DE = LENGTH TO PLAY
; TABLE = ADDRESS OF SAMPLE TABLE
;
	EXX
	LD C,#A1
	EXX
LOOP:
	LD A,(HL)
	EXX
	LD L,A
	LD H,0
	LD D,H
	LD E,L
	ADD HL,HL
	ADD HL,DE	; HL = sample * 3 (entries are 3 bytes)
	LD DE,TABLE
	ADD HL,DE
	LD B,(HL)
	INC HL
	LD D,(HL)
	INC HL
	LD E,(HL)
	LD A,8
	OUT (#A0),A	; select volume register 8
	INC A
	OUT (C),B
	OUT (#A0),A	; select volume register 9
	OUT (C),D
	INC A
	OUT (#A0),A	; select volume register 10
	OUT (C),E
	EXX
	INC HL
	DEC DE
	LD A,D
	OR E
	JP NZ,LOOP
	RET
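For clarity, the routine above indexes a 768-byte table at TABLE with sample * 3, reading one volume byte per PSG channel. A Python sketch of a table builder is below; the volumes_for mapping is a crude placeholder assumption, not the optimised combinations discussed in this thread.

```python
# Sketch: build the 3-byte-per-entry lookup table the Z80 routine
# indexes with "sample * 3 + TABLE". Each entry holds the volumes
# written to PSG registers 8, 9 and 10 (B, D, E in the routine).

def volumes_for(sample: int) -> tuple[int, int, int]:
    # Placeholder assumption: just spread the value crudely over the
    # three channels; a real table uses the optimised combinations.
    v = sample * 46 // 256              # scale 0..255 to 0..45 (= 3 * 15)
    a = min(v, 15)
    b = min(max(v - 15, 0), 15)
    c = min(max(v - 30, 0), 15)
    return a, b, c

def build_table() -> bytes:
    out = bytearray()
    for sample in range(256):
        out += bytes(volumes_for(sample))
    return bytes(out)

table = build_table()                   # 256 entries * 3 bytes = 768
```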

- Find all different PSG register combinations.

- Calculate the analog value for each combination and place the results on a "line". If two identical values are calculated, the combination that contains the bigger (or smaller) number is stored for later use.
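This enumeration step can be sketched as follows. It assumes an idealised logarithmic DAC where each volume step scales the amplitude by √2 and volume 0 is silence; the real chip's curve differs, so the distinct-value count will differ too.

```python
def amplitude(v: int) -> float:
    # Idealised PSG volume curve (assumption): each step is +3 dB,
    # i.e. a factor of sqrt(2); volume 15 outputs 1.0, volume 0 silence.
    return 0.0 if v == 0 else 2.0 ** ((v - 15) / 2)

combos = {}
for a in range(16):
    for b in range(16):
        for c in range(16):
            y = amplitude(a) + amplitude(b) + amplitude(c)
            key = round(y, 9)                 # collapse float noise
            combos.setdefault(key, (a, b, c)) # keep first, drop duplicates

line = sorted(combos)                         # the sorted "line" of outputs
```

With this curve the maximum y(15,15,15) is 3.0 and the value directly below it comes out at about 2.71, in line with the precision remark further down.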

That’s what my C# routine does.

- Take the last number on the line (generated from 15,15,15); let's call that number X

I think that is not desirable for two reasons:

1. the upper ranges have very little precision, e.g. the value directly below 3.0 is 2.7.

2. when all PSG channels output at maximum volume, there is probably distortion. But this is an assumption.

But, I indeed do take an X, it is just not the maximum value. So my routine does this as well.

LOOP:

- divide X into 256 steps on the line

- find the closest value for each step.

Does this.

- subtract the step from the closest value and add the result to an error counter

I do that, but it works a little differently in my routine. I instead count the number of errors with more than a 0.2% deviation.

This 0.2% is not entirely arbitrary: it is the rounding threshold when converting to an 8-bit number. When a value rounds off to the wrong step, it is considered an 'error'. Thus, 1/256 × 0.5 ≈ 0.2%.
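As a quick check of that arithmetic, the 0.2% is half of one 8-bit quantisation step:

```python
step = 1 / 256           # one 8-bit quantisation step
threshold = step * 0.5   # half a step: the point where rounding flips
print(f"{threshold:.4%}")    # 0.1953%, i.e. roughly 0.2%
```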

I tried changing this today to add up the actual errors instead of just counting the number of times the error exceeded a certain value. The error must be divided by the maximum value before adding it, to normalise it, but somehow I couldn't get it to work very well.

- after all steps have been gone through, store the X and the error counter in a table

My routine doesn’t put it in a table, but instead compares it with the lowest error found so far, and if it is lower, stores the X.

- take the value preceding X on the line and repeat LOOP until about 3/4 of the numbers on the line have been gone through.

- sort the list by error counter and you have the optimal solution!

Oh, right. That explains why you started with the maximum output y(15,15,15). Yes, I do that, but over a range of 1.0 ... 1.5 with 0.001 increments. The most optimal value found was 1.097.
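The whole search loop described above can be sketched like this. The bisect lookup and the normalisation of the deviation by the candidate top value X are implementation assumptions, not taken from either post; `line` is the sorted list of attainable outputs from the enumeration step.

```python
import bisect

def nearest(line, target):
    # Return the attainable output closest to target; line is sorted.
    i = bisect.bisect_left(line, target)
    neighbours = line[max(i - 1, 0):i + 1]
    return min(neighbours, key=lambda v: abs(v - target))

def error_count(line, x_top, threshold=0.5 / 256):
    # Count steps whose nearest output deviates by more than the 0.2%
    # threshold (normalising by x_top is an assumption).
    errors = 0
    for step in range(256):
        target = x_top * step / 255
        if abs(nearest(line, target) - target) / x_top > threshold:
            errors += 1
    return errors

def best_x(line, lo=1.0, hi=1.5, inc=0.001):
    # Scan candidate top values and keep the one with the fewest errors.
    best_err, best = None, None
    x = lo
    while x <= hi + 1e-9:
        e = error_count(line, x)
        if best_err is None or e < best_err:
            best_err, best = e, x
        x += inc
    return best
```

With a real output line this scan would report the optimal top value; on an artificial line of exactly 256 evenly spaced outputs the error count is of course zero.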

- sort the register combinations so that biggest/smallest register value is first.

That happens automatically by the ordering of the array and the duplicate detection.

- make the result look nice and put it online

I did all that in the article on the MAP that I updated yesterday.

So.

~Grauw


% T has 849 different values !! This is the full dynamic of our PSG

I got 608 different values. Are you sure there aren't rounding errors which you mistake for different values? They are probably represented internally as floating-point values, after all, which *will* cause errors.

Ok they are! You're right!

241 values in my table T are repeated values.

Anyway, this is not very important, as the line

[m,i] = min(abs(T(:,1)-x(n)));

looks for the best index i in T that gives x(n). Thus, in theory, nothing changes even if I skip the step where I cancel duplicates.
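For reference, that MATLAB one-liner translates to something like this in plain Python. The toy table and its row layout (output value first, then the three channel volumes) are assumptions for illustration only.

```python
# Toy stand-in for table T: first column is the analog output,
# the remaining columns are the PSG channel volumes (assumed layout).
T = [(0.0, 0, 0, 0), (0.5, 8, 0, 0), (1.0, 15, 0, 0)]

def closest_index(table, target):
    # Equivalent of [m,i] = min(abs(T(:,1)-x(n))) in MATLAB.
    return min(range(len(table)), key=lambda i: abs(table[i][0] - target))

idx = [closest_index(T, xn) for xn in (0.1, 0.6, 0.9)]
```

Since min picks the first of several equally distant rows, duplicates in T indeed don't change the result, which supports the point above.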

% cancel duplicates
[h,i] = sort([QQ, -1, 1e6]);
w = h(find(diff(h) > 1e-6));
w = w(2:end);

% store in T non-duplicate outputs with the corresponding PSG volumes
T = [];
for i = 1:length(w)
    j = find(II(:,1) == w(i));
    T = [T; II(j(1),:)];
end

% T has 608 different values!

% Sorry :-)

I did all that in the article on the MAP that I updated yesterday.

So... Which of the tables is better? I must say I can't understand the SNR system that well...

I think at least the one on the MAP is sub-optimal, but not by much.

In effect, they will likely both sound similar. Although if ARTRAG takes a base range of 1.45, his output will probably be louder.

I’m still working on using the real error, instead of counting the number of times the deviation passes a threshold. If both my and ARTRAG’s calculations are correct, we should end up with the same values.

Btw, the MAP one is sub-optimal in the sense that it might not use the most optimal range (that is, the one with the least amount of errors). The values in the table are all correct though.

~Grauw

Ok, next add keyclick bit and start again

Unfortunately I don't think keyclick volume is standard.

I don’t think so either...