Real-time Linux at OCE

I attended the Embedded Systems conference in Eindhoven, and the
best presentation I saw was the one by Alex van der Wal,
working at OCE
(a digital printer manufacturer), about "Combining real-time with
non-real-time software on a single Linux image".
The presentation explains how they succeeded in running mixed RT/NRT
applications using the Linux preempt-rt patch. They have RT and NRT
processes communicating over sockets, and they call malloc() from
real-time threads.

It's still not 'flawless': there is an unexplained worst-case
latency of 1200us in one thread (with an average of 10us), but they
did not investigate further, as it was acceptable for their
application (it could be SMI or something similar).

The presentation is here:

There is also a paper, which goes into the technical details:

Good read!

I hope this new 'rt' malloc approach will help simplify the
Orocos implementation as a whole.

Peter

Real-time Linux at OCE

On Wed, 7 Nov 2007, Peter Soetens wrote:

> I've attended the Embedded Systems conference in Eindhoven and the best
> presentation I have seen is the one from Alex van der Wal, working at OCE
> (a digital printer manufacturer) about "Combining real-time with
> non-real-time software on a single Linux image".
> The presentation explains how they succeeded using the Linux preempt-rt patch
> in running mixed RT/NRT applications. They use RT/NRT processes
> communicating over sockets
Why sockets and not shared variables? (Or even Orocos Data Ports :-))

> and use malloc() from real-time threads.
This is also strange: if the memory is not available, it will kill the
real-time behaviour anyway... (Are they making the mistake that
'realtime' means 'fast', instead of 'deterministic in even the worst
case'? I don't think so, because the presentation itself mentions the
right ways to handle memory allocation...)

> It's still not 'flawless', there is an unexplainable worst case latency of
> 1200us in one thread (with average of 10us), but they did not investigate
> that further as it was ok for their application. (could be SMI or something).
>
> The presentation is here:
>
> There is also a paper, which goes into the technical details:
>
>
> Good read !
> I hope this new 'rt' malloc approach will help in simplifying the Orocos
> implementation as a whole.

From the paper you mention, I don't get the essentials of this "new rt malloc"...

Herman

Real-time Linux at OCE

On Wednesday 07 November 2007 16:57:53 Herman Bruyninckx wrote:
> On Wed, 7 Nov 2007, Peter Soetens wrote:
> > I've attended the Embedded Systems conference in Eindhoven and the best
> > presentation I have seen is the one from Alex van der Wal, working at OCE
> > (a digital printer manufacturer) about "Combining real-time with
> > non-real-time software on a single Linux image".
> > The presentation explains how they succeeded using the Linux preempt-rt
> > patch in running mixed RT/NRT applications. They use RT/NRT processes
> > communicating over sockets
>
> Why sockets and not shared variables? (Or even orocos Data Ports :-))

Sockets are for inter-process communication and allow a thread to
wait/block on them in a read() or select() call. With select(), one can
listen for multiple event sources at once (CORBA uses sockets for this
purpose as well). Sockets are also truly transparent with respect to
distribution; shared variables (or shared memory) are not.

>
> > and use malloc() from real-time threads.
>
> This is also strange: if the memory is not available it will kill the
> realtime anyway... (Are they making the mistake that 'realtime' means
> 'fast', instead of 'deterministic in even the worst case'? I don't think
> so, because the presentation itself mentions the right ways to handle
> memory allocation...)

They mean 100% time-bounded, just as we (RTT) mean it. We make the same
assumption they do: the system will not run out of memory. Unavailable
memory will lead to a malloc() that returns NULL. That does not need to
'kill' the real-time behaviour; it should emergency-stop the application
(which hopefully does not rely on malloc()). But neither OCE nor the RTT
has the infrastructure to handle these cases gracefully, although RTT
does not malloc() from an RT thread.

> > The presentation is here:
> >
> > There is also a paper, which goes into the technical details:
> >
> >
> > Good read !
> > I hope this new 'rt' malloc approach will help in simplifying the Orocos
> > implementation as a whole.
> >
> > From the paper you mention, I don't get the essentials of this "new rt
> > malloc"...

The essential point is that what 'breaks' real-time inside
malloc()/free() is *page faults*, and mmap() causes page faults. OCE
could have configured glibc such that it no longer issues mmap() at
run-time, but due to a regression in glibc, they eventually plugged in
their own malloc() which does not use mmap() at all.

Peter

Real-time Linux at OCE

On Mon, 12 Nov 2007, Peter Soetens wrote:

[...]
>>> and use malloc() from real-time threads.
>>
>> This is also strange: if the memory is not available it will kill the
>> realtime anyway... (Are they making the mistake that 'realtime' means
>> 'fast', instead of 'deterministic in even the worst case'? I don't think
>> so, because the presentation itself mentions the right ways to handle
>> memory allocation...)
>
> They mean 100% time bounded, just like we (RTT) mean it. We have the same
> assumption as them:

"We"?!?!? _You_, you mean :-) For me, _all_ resources must be
deterministically assigned before one can call something "realtime"...

> the system will not run out of memory. Unavailable memory
> will lead to a malloc() that returns zero. That does not need to 'kill' the
> real-time. it should emergency-stop the application (which hopefully does not
> rely on malloc()). But both OCE and the RTT do not have the infrastructure to
> handle these cases gracefully, although RTT does not malloc() from an RT
> thread.

That's The Right Thing to do :-)

Herman