At 02:56 PM 8/11/1999 , Tucker Taft wrote:
>Roger Racine wrote:
>> At 01:33 PM 8/11/1999 , Harbaugh, John S wrote:
>> > <snip>
>> >We are running into this sort of problem when we are running a preemptive
>> >tasking model on a Unix box that uses time slicing. The Ada code looks
>> >slick, but the actual behavior is not per the design. The hidden time
>> >slicing effectively changes the dynamic semantics of the language.
>> >Unfortunately, it is not always possible to turn off the time-slicing. One
>> >wonders how such an implementation can be validated for compliance with
>> >annex D.
>> Since the semantics are the same whether the same-priority tasks are
>> running completely in parallel (such as on a multiprocessor) or not, I
>> cannot see any problem that could possibly be a result of time slicing.
>> Could you send some more information?
>I am not an expert on scheduling theory. However, based on my
>understanding and discussions during the Ada 9X design process,
>it is clear that doing an accurate analysis of schedulability
>of a hard real-time multi-tasking system depends on knowing how often there
>are task switches. The idea behind FIFO_Within_Priorities is
>to minimize the number of "unnecessary" task switches.
>When you say "the semantics are the same..." that really depends on
>the context. In the "Ada" core, the semantics do not specify anything
>about scheduling of tasks, and do not even mention priorities.
>In the real-time annex, the semantics are much more specific, and provide
>additional guarantees. These guarantees *are* dependent on the
>number of physical processors, so in that sense, the Annex semantics
>do depend on how same-priority tasks are scheduled. In a sense, you
>could define a "real-time" system as one whose semantics do depend
>(at least potentially) on the details of the scheduling algorithms and
>the number of processors.
I am letting my prejudice get the better of me. Sorry. I have always
tried to write multi-tasking programs from the following point of view: The
program should work the same, not counting timing, on a multiprocessor as
on a uniprocessor system.
However, in the interest of performance, I have, on occasion, optimized the
design to work better in a uniprocessor environment. But I only do this
for tasks of different priorities (i.e., if I am running in the lowest-priority
task, I assume all the other tasks must be waiting on some sort of queue). I can see
how a program could be written that assumes data from one task can be used
in another task of the same priority. I would never do that, since my use
of tasks is to implement concurrency. If I want sequential execution, I
put the calls sequentially in the same task.
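To make the fragile pattern concrete, here is a hypothetical Ada sketch (not from anyone's actual code in this thread; the names Producer, Consumer, and Fragile_Ordering are invented for illustration). The Consumer task assumes that Producer, a task of the same priority, has already run. Under strict FIFO_Within_Priorities on a uniprocessor that assumption may happen to hold; with hidden time slicing, or on a multiprocessor, Consumer can read Value before Producer has written it:

```
--  Hypothetical sketch: relying on same-priority dispatching order
--  to pass unprotected data between tasks.
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);

with Ada.Text_IO;

procedure Fragile_Ordering is
   Value : Integer := 0;  --  unprotected shared data: the hidden bug

   task Producer;
   task Consumer;

   task body Producer is
   begin
      Value := 42;
   end Producer;

   task body Consumer is
   begin
      --  Nothing in the core language orders this read after
      --  Producer's write; only the assumed dispatching behavior does.
      Ada.Text_IO.Put_Line ("Value =" & Integer'Image (Value));
   end Consumer;

begin
   null;  --  environment task simply awaits its child tasks
end Fragile_Ordering;
```

The robust alternatives are exactly the ones argued for here: pass Value through a rendezvous or a protected object, or do both steps sequentially in a single task.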
That is why I asked for more information. Is there any good reason to
create a separate task that must run after some other task? That seems
like a waste of effort (just put the processing of the second task after
the processing of the first task, and get rid of the second task). It
also wastes some CPU time, since there is no need for a context switch
when there is no possibility of concurrency.
Draper Laboratory, MS 31
555 Technology Sq.
Cambridge, MA 02139, USA