TEAM-ADA Archives

Team Ada: Ada Programming Language Advocacy

TEAM-ADA@LISTSERV.ACM.ORG

Sender: "Team Ada: Ada Advocacy Issues (83 & 95)" <[log in to unmask]>
From: "W. Wesley Groleau x4923" <[log in to unmask]>
Date: Tue, 7 Nov 2000 09:44:27 -0500
Reply-To: "W. Wesley Groleau x4923" <[log in to unmask]>
Parts/Attachments: text/plain (31 lines)
Let Tick_Desired be the interval you want.
Let Tick_Actual be the closest you can come given your platform.

You say that a trained musician can hear an error of "tens of
microseconds."  Is that a consistent (accumulating) error?  For example,
if every note ends 50 microseconds too soon, can he/she hear it?  What if
the note endings vary from fifty microseconds too soon to fifty too late,
but "average" right on time?

If Error = abs (Tick_Desired - Tick_Actual) = one microsecond, and there
are 480 ticks per quarter note, using

   delay Tick_Actual;
   Send_Clock;

will cause an error of nearly half a millisecond (480 ticks x 1
microsecond = 480 microseconds) to accumulate by the time a quarter note
should end.
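
To make that concrete, here is a minimal compilable sketch of that
drifting loop (Send_Clock is a stub, and the numbers are made up for
illustration, not taken from any real platform):

   procedure Drifting_Clock is
      procedure Send_Clock is
      begin
         null;  --  stand-in for the real MIDI clock output
      end Send_Clock;

      --  Hypothetical numbers: we want 1041 microseconds per tick, but
      --  the closest the platform offers is 1042, so Error = 1 microsecond.
      Tick_Actual : constant Duration := 0.001_042;
   begin
      for Tick in 1 .. 480 loop
         delay Tick_Actual;  --  relative delay: each tick's one-microsecond
                             --  error is added to all the errors before it
         Send_Clock;
      end loop;
      --  By now the 480 errors have piled up to roughly 480 microseconds.
   end Drifting_Clock;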

But if you use the method I posted (computing tick times to the precision
needed, and then rounding off EACH Tick to the available resolution), then
no matter how long the song lasts, the worst error on ANY event will be
+/- (Tick_Desired - Tick_Actual) and the errors will average zero.
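
One way to sketch that idea in Ada 95 is with Ada.Real_Time and "delay
until".  This is not the code from the earlier post, just an illustration
with made-up names and numbers:

   with Ada.Real_Time;  use Ada.Real_Time;

   procedure Steady_Clock is
      procedure Send_Clock is
      begin
         null;  --  stand-in for the real MIDI clock output
      end Send_Clock;

      --  Tick_Desired kept to the precision needed (nanoseconds here);
      --  e.g. 120 BPM at 480 ticks per quarter note is about 1_041_667 ns.
      Tick_Desired : constant Time_Span := Nanoseconds (1_041_667);

      Start     : constant Time := Clock;
      Next_Tick : Time;
   begin
      for Tick in 1 .. 4800 loop
         --  Each tick's time is computed from the start of the song, so
         --  whatever rounding the runtime does to hit the available
         --  resolution never carries over from one tick to the next.
         Next_Tick := Start + Tick * Tick_Desired;
         delay until Next_Tick;
         Send_Clock;
      end loop;
   end Steady_Clock;

Each "delay until" still gets rounded to the platform's resolution, but
because every deadline is measured from Start, the error on any one tick
stays bounded instead of adding up over the length of the song.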

If Worst_Error = abs (Tick_Desired - Tick_Actual) = ten microseconds, can
a "trained musician" hear it?

--
Write in  * Wes Groleau *  for President of the U.S.A.
I  pledge  to  VETO  nearly  everything.
http://freepages.rootsweb.com/~wgroleau
