Let Tick_Desired be the interval you want.
Let Tick_Actual be the closest you can come given your platform.
You say that a trained musician can hear an error of "tens of
microseconds." Is that consistent (accumulating) error? For example, if
every note ends 50 microseconds too soon, can he/she hear it? What if the
note endings vary from fifty too soon to fifty too late, but "average"
right on time?
If Error = abs (Tick_Desired - Tick_Actual) = one microsecond, and there
are 480 ticks per quarter note, then using Tick_Actual as the interval
for every tick will cause an error of nearly half a millisecond to
accumulate by the time a quarter note should end.
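A minimal Python sketch of that accumulation (the interval values are
hypothetical, chosen so the per-tick error is exactly one microsecond):

```python
# Naive scheduling: advance by Tick_Actual on every tick, so the
# one-microsecond-per-tick error accumulates linearly.
TICK_DESIRED = 1000.0       # desired tick interval, microseconds (hypothetical)
TICK_ACTUAL = 999.0         # closest achievable interval: 1 us short per tick
TICKS_PER_QUARTER = 480

t = 0.0
for _ in range(TICKS_PER_QUARTER):
    t += TICK_ACTUAL        # each tick lands 1 us earlier than it should

ideal = TICKS_PER_QUARTER * TICK_DESIRED
error_us = ideal - t        # accumulated error after one quarter note
print(error_us)             # 480.0 -- nearly half a millisecond
```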
But if you use the method I posted (computing tick times to the precision
needed, and then rounding off EACH Tick to the available resolution), then
no matter how long the song lasts, the worst error on ANY event will be
+/- (Tick_Desired - Tick_Actual) and the errors will average zero.
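A sketch of that per-event rounding, assuming a hypothetical timer
granularity of 7 microseconds (the constant names are mine, not from
any particular platform): each event time is computed at full
precision from the song start, then rounded to the nearest available
timer step, so the error on any single event is bounded and does not
grow with the length of the song.

```python
# Per-event rounding: compute each tick's ideal time exactly, then
# round EACH scheduled time to the timer's resolution.  The error on
# any one event stays within half the resolution, and the signed
# errors average out to roughly zero.
RESOLUTION = 7.0            # hypothetical timer granularity, microseconds
TICK_DESIRED = 1000.0       # desired tick interval, microseconds
N = 10_000                  # number of ticks to simulate

errors = []
for k in range(1, N + 1):
    ideal = k * TICK_DESIRED                          # exact time of tick k
    scheduled = round(ideal / RESOLUTION) * RESOLUTION  # nearest timer step
    errors.append(scheduled - ideal)

worst = max(abs(e) for e in errors)   # bounded by RESOLUTION / 2
mean = sum(errors) / len(errors)      # near zero: errors don't accumulate
print(worst, mean)
```

The contrast with the naive method is that the rounding error on tick
k is independent of the errors on ticks 1 through k-1, so nothing
compounds.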
If Worst_Error = abs (Tick_Desired - Tick_Actual) = ten microseconds, can
a "trained musician" hear it?
Write in * Wes Groleau * for President of the U.S.A.
I pledge to VETO nearly everything.