Hi - In 1986 I bought a TI-74, and I'm still not done with it...
I'm working on a program to extract various statistical indices from
a table of data.
Data occurs in the range (40 < n < 450). Data outside
this range is not accepted by the routine. The vast majority of the data will
occur in the range (80 < n < 250).
The value is manipulated thus... n = INT( n / 1.8 + .5 ) and the result
stored as one byte.
INT(expression) returns the largest integer less than or equal to
the expression.
When the value is retrieved for summation, mean, std deviation, and so
on, the stored byte is first reconstructed as n = INT( n * 1.8 + .5 ).
My experience with this scheme is that the value returned is always
within +/- 1 of the original value, with the variation seemingly evenly
distributed.
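For what it's worth, the round-trip behavior can be checked exhaustively off the calculator. Here's a quick Python sketch (my own, not TI-74 code) that mimics the BASIC INT(x + .5) rounding and runs every value in the accepted range through a compress/decompress cycle:

```python
import math

def compress(n):
    # Mimics BASIC: INT(n / 1.8 + .5) -- INT is floor, +.5 rounds
    return math.floor(n / 1.8 + 0.5)

def decompress(c):
    # Mimics BASIC: INT(c * 1.8 + .5)
    return math.floor(c * 1.8 + 0.5)

# Every integer in the accepted range (40 < n < 450)
errors = [decompress(compress(n)) - n for n in range(41, 450)]
max_err = max(abs(e) for e in errors)
# max_err comes out to 1, and every compressed value fits in one byte
```

This confirms the +/- 1 observation: rounding to the nearest multiple of 1.8 leaves at most 0.9 of error, and rounding back to an integer keeps the total under 1.5, so the integer difference can never exceed 1.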
Data is evaluated in subsets, ranging from ~100 to 600 values, typically.
Individual data points vary in accuracy as follows...
90% of the data are within 10% of the actual value. All of the results are
within 20% of the actual value.
My sense is that the data compression scheme is not materially affecting
the results, but I don't know how to address this rigorously, so I'd appreciate
any input - or even better, a pointer to how to evaluate the scheme.
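One evaluation I could imagine running (again a Python sketch, with synthetic sample data standing in for my real tables) is to compare the summary statistics of a subset before and after the round trip:

```python
import math
import random
import statistics

def round_trip(n):
    # Store as byte, then reconstruct, using the BASIC-style rounding
    c = math.floor(n / 1.8 + 0.5)
    return math.floor(c * 1.8 + 0.5)

# Synthetic subset of ~300 values in the typical range (80 < n < 250);
# real data would be substituted here.
random.seed(1)
raw = [random.randint(81, 249) for _ in range(300)]
packed = [round_trip(n) for n in raw]

mean_shift = statistics.mean(packed) - statistics.mean(raw)
sd_shift = statistics.stdev(packed) - statistics.stdev(raw)
# Both shifts stay well under the +/- 1 per-datum error bound
```

Since each datum moves by at most 1, and the errors appear evenly distributed, the shift in the mean should shrink roughly with 1/sqrt(N) over a subset of N values, which would support the sense that the compression is immaterial - but I'd still welcome a more rigorous treatment.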
thanks, Jack