Well, I was told these discussions were appropriate for the whole group, so here goes. Continuing another off-ML discussion:
> > To be honest using heap in hard real-time, even TLSF, is intriguing but
> > not something I've ever been willing to use "for real".
>
> Can you name a hard real-time os that offers messaging but that
> doesn't use any kind of allocator in the critical path ?
Fascinating, I think this must be semantic. While I'm willing to say it's painful, limiting, and uses RAM inefficiently, isn't the answer "all of them"?
After careful analysis and testing, create generously sized rt fifos or other buffered I/O channels. A buffer overflow is treated as a catastrophic error. Even when communicating with non-RT the basic approach works fine (you just get really big buffers "just in case"). For a couple of extra dollars in RAM you get rock-solid behaviour. For my applications it's appropriate, YMMV.
In my corner of the world I'm the radical one, in that I don't just do that before execution, but am willing to have a (high-priority, but non-HRT) helper/proxy thread that (attempts to) allocate on the fly and hands off to the HRT portions when complete. The semantics part is that while the proxy uses the RT scheduler, it doesn't have an enforced time requirement, and an enforced time requirement is what I mean by HRT.
But doesn't even a static (non-resizable, non-heap) rt fifo provide messaging?
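For concreteness, here's a minimal sketch of what I mean by a static rt fifo: a fixed-capacity single-producer/single-consumer ring buffer whose storage is created once, outside the critical path, so push/pop never touch the heap. The C++ and the names are purely illustrative, not any particular OS or RTT API.

#include <atomic>
#include <array>
#include <cstddef>

// Fixed-capacity single-producer/single-consumer ring buffer.
// Storage lives inside the object, so it can be allocated (and
// memory-locked) once at startup; push/pop never touch the heap.
template <typename T, std::size_t N>
class StaticFifo {
public:
    bool push(const T& item) {               // producer side
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                     // full: caller treats this as an error
        buf_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& item) {                       // consumer side
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                     // empty
        item = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }
private:
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};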
- alexis.
Allocation in HRT
On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
> Well, I was told these discussions were appropriate for the whole group, so here goes. Continuing another off ML discussion:
>
>> > To be honest using heap in hard real-time, even TLSF, is intriguing but
>> > not something I've ever been willing to use "for real".
>>
>> Can you name a hard real-time os that offers messaging but that
>> doesn't use any kind of allocator in the critical path ?
>
>> Fascinating, I think this must be semantic. While I'm willing to say it's painful, limiting, and uses RAM inefficiently, isn't the answer "All of them" ?
This indeed gets semantic. Pre-allocating and then assigning a part
of that in the critical path is still allocating, in my opinion: you
have a limited resource (even more limited than the total RAM you
have) which you manage and assume will be enough. That sounds
very much like standard allocation behaviour to me.
A major reason to do pre-allocation is to avoid sbrk() calls, which
stall your CPU because new physical memory must be mapped. But still,
once the new piece of memory is available in your process, you need to
manage it. Enter TLSF.
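As a rough sketch of that pattern (the pool-based function names follow the TLSF 2.x reference implementation as far as I recall them, so treat them as an assumption and check your tlsf.h): grab one block up front, lock it in RAM, and let the allocator manage only that block, so nothing in the critical path ever asks the OS for memory.

#include <cstddef>
#include <cstdlib>
#include <sys/mman.h>

// Assumed TLSF 2.x pool API -- verify against the actual header.
extern "C" {
    std::size_t init_memory_pool(std::size_t bytes, void* pool);
    void*       malloc_ex(std::size_t bytes, void* pool);
    void        free_ex(void* ptr, void* pool);
}

static void* rt_pool = 0;

// Called once, before the real-time loop starts.
bool setup_rt_pool(std::size_t pool_bytes) {
    rt_pool = std::malloc(pool_bytes);      // the only sbrk()/mmap(), outside the critical path
    if (!rt_pool) return false;
    mlockall(MCL_CURRENT | MCL_FUTURE);     // keep the pages resident, no page faults later
    init_memory_pool(pool_bytes, rt_pool);  // hand the block to the allocator
    return true;
}

// Called from the critical path: O(1), bounded, never asks the OS for memory.
void* rt_alloc(std::size_t bytes) {
    return malloc_ex(bytes, rt_pool);       // NULL means the pool is exhausted
}

void rt_free(void* p) {
    free_ex(p, rt_pool);
}

Everything after setup is bookkeeping inside that one block; the bookkeeping is exactly what TLSF does in O(1).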
>
> After careful analysis and testing, create generously sized rt fifos or other buffered I/O channels. It's a catastrophic error for the buffer to overflow. Even when communicating with non-RT the basic approach works fine (you just get really big buffers "just in case"). For a couple extra dollars in RAM you get rock solid. For my applications it's appropriate, YMMV.
>
> On my corner of the world I'm the radical one in that I don't just do that before execution, but am willing to have a (high priority, but non-HRT) helper/proxy thread that (attempts to) allocate on the fly and hands off to the HRT portions when complete. The semantics part is that while the proxy uses the RT scheduler, it doesn't have an enforced time requirement, which is what I mean by HRT.
>
> But doesn't even a static (non-resizable, non heap) rt fifo provide messaging?
Same remark as Stephen here: RAM is a limited resource, so if you eat
from your stack, there's less for the rest. TLSF is a resource
manager, which is deterministic, and which allows you to be more
efficient with the available memory, especially when your application
buffers need to pre-allocate 'just in case'.
The RTT has a long history of preallocating everything. We have learnt
the pros and cons. The conclusion as I see it is that many
mechanisms in a controller will do perfectly fine with pre-allocation
+ a real-time allocator *and* that it's better to design with
resource constraints in mind than to assume everything will just work.
That said, we're still in the process of bringing TLSF into practice,
and as I said before on this list, it will be possible to compile the
RTT without it, which will impose limitations similar to the ones we have
now when using commands or events.
Peter
Allocation in HRT
On Mon, 26 Oct 2009, Peter Soetens wrote:
> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>> Well, I was told these discussions were appropriate for the whole group, so here goes. Continuing another off ML discussion:
>>
>>>> To be honest using heap in hard real-time, even TLSF, is intriguing but
>>>> not something I've ever been willing to use "for real".
>>>
>>> Can you name a hard real-time os that offers messaging but that
>>> doesn't use any kind of allocator in the critical path ?
>>
>> Fascinating, I think this must be semantic. While I'm willing to say it's painful, limiting, and uses RAM inefficiently, isn't the answer "All of them" ?
>
> This gets indeed semantical. Pre-allocating and then assigning a part
> of that in the critical path, is still allocating in my opinion: You
> have a limited resource (even more limited than the total RAM you
> have) which you manage and assume there will be enough. That sounds
> very much like a standard allocation behaviour to me.
No! The difference is that you _know_ that it will be enough (since you
know your application's needs and resources). Only _then_ is hard realtime
possible. If you don't know those two things, talking about hard realtime
is nonsense, and "as efficient and fast as possible" is the best you can
hope to achieve.
> A major reason to do pre-allocation is to avoid sbrk() calls, which
> stalls your CPU because new physical memory must be mapped. But still,
> when the new piece of memory is available in your process, you need to
> manage it. Enter TLSF.
That's indeed the right context to present TLSF in. And, yes, then the O(1)
computational complexity of the algorithm makes sense. :-) But whether it
can perform in hard realtime or not depends on the other of the
above-mentioned two conditions: knowing your application's needs in
advance.
>> After careful analysis and testing, create generously sized rt fifos or
>> other buffered I/O channels. It's a catastrophic error for the buffer
>> to overflow. Even when communicating with non-RT the basic approach
>> works fine (you just get really big buffers "just in case"). For a
>> couple extra dollars in RAM you get rock solid. For my applications
>> it's appropriate, YMMV.
>>
>> On my corner of the world I'm the radical one in that I don't
>> just do that before execution, but am willing to have a (high priority,
>> but non-HRT) helper/proxy thread that (attempts to) allocate on the fly
>> and hands off to the HRT portions when complete. The semantics part is
>> that while the proxy uses the RT scheduler, it doesn't have an enforced
>> time requirement, which is what I mean by HRT.
>>
>> But doesn't even a static (non-resizable, non heap) rt fifo provide messaging?
>
> Same remark as Stephen here: RAM is a limited resource, so if you eat
> from your stack, there's less for the rest. TLSF is a resource
> manager, which is deterministic, and which allows you to be more
> efficient with the available memory. Especially when your application
> buffers needs to pre-allocate 'just in case'.
>
> The RTT has a long history in preallocating everything. We have learnt
> the pros and contras. The conclusion as I see it is that many
> mechanisms in a controller will do perfectly fine with pre-allocation
> + a real-time allocator *and* that it's better to design with
> resource constraints in mind than to assume everything will just work.
I follow the first, analysis, part of your comments, but not your
conclusion: as you do rather often, you take one particular set of use
cases and tacitly assume that they cover _all_ use cases :-) So, no, it is
not "better too design with resource constraints in mind than to assume
everything will just work"! In some use cases, it's better to design your
system in such a way that you can be sure that no resource problems will
occur.
But again, there is room (and need) for both "policies" in Orocos/RTT; just
make sure that 2.0 makes them (clearly) configurable.
> That said, we're still in the process of bringing TLSF in practice,
> and as I said before on this list, it will be possible to compile the
> RTT without it, which will impose limitations similar to the ones we have
> now when using commands or events.
Good. This is an answer to my "policy comment" above :-)
How difficult would it be to make it a runtime configuration option...?
Herman
Allocation in HRT
On Mon, Oct 26, 2009 at 13:21, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
> On Mon, 26 Oct 2009, Peter Soetens wrote:
>
>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>>>
>>> Well, I was told these discussions were appropriate for the whole
>>> group, so here goes. Continuing another off ML discussion:
>>>
>>>>> To be honest using heap in hard real-time, even TLSF, is intriguing but
>>>>> not something I've ever been willing to use "for real".
>>>>
>>>> Can you name a hard real-time os that offers messaging but that
>>>> doesn't use any kind of allocator in the critical path ?
>>>
>>> Fascinating, I think this must be semantic. While I'm willing to say
>>> it's painful, limiting, and uses RAM inefficiently, isn't the answer "All of
>>> them" ?
>>
>> This gets indeed semantical. Pre-allocating and then assigning a part
>> of that in the critical path, is still allocating in my opinion: You
>> have a limited resource (even more limited than the total RAM you
>> have) which you manage and assume there will be enough. That sounds
>> very much like a standard allocation behaviour to me.
>
> No! The difference is that you _know_ that it will be enough (since you
> know your application's needs and resources). Only _then_ hard realtime is
> possible. If you don't know those two things, talking about hard realtime
> is nonsense, and "as efficient and fast as possible" is the best you can
> hope to achieve.
The reality is that we don't know *for sure*. We can reason about resource
usage for days, but nothing has been proven. If in the end a dormant
bug causes us to use far more resources than we thought we would,
we're just fooling ourselves with pre-allocation. The reality is that we
want to do hard real-time with buggy programs. Imagine that. So 'we'
double our RAM and CPU capacity, and put an O(1) allocator in there.
I'm not suggesting that either the RTT or user applications should be
designed to be like that or work only like that, but it's at least a
very real part of our development process. We don't know.
>
>> A major reason to do pre-allocation is to avoid sbrk() calls, which
>> stalls your CPU because new physical memory must be mapped. But still,
>> when the new piece of memory is available in your process, you need to
>> manage it. Enter TLSF.
>
> That's indeed the right context to present TLSF in. And, yes, then the O(1)
> computational complexity of the algorithm makes sense. :-) But whether it
> can perform hard realtime or not depends on the other of the
> above-mentioned two conditions: knowing your application's needs in
> advance.
>
...
>> That said, we're still in the process of bringing TLSF in practice,
>> and as I said before on this list, it will be possible to compile the
>> RTT without it, which will impose limitations similar to the ones we have
>> now when using commands or events.
>
> Good. This is an answer to my "policy comment" above :-)
>
> How difficult would it be to make it a runtime configuration option...?
Close to impossible, given the current resources :-)
Peter
Allocation in HRT
On Oct 26, 2009, at 08:21 , Herman Bruyninckx wrote:
> On Mon, 26 Oct 2009, Peter Soetens wrote:
>
>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
<snip>
>
>> That said, we're still in the process of bringing TLSF in practice,
>> and as I said before on this list, it will be possible to compile the
>> RTT without it, which will impose limitations similar to the ones we have
>> now when using commands or events.
>
> Good. This is an answer to my "policy comment" above :-)
>
> How difficult would it be to make it a runtime configuration
> option...?
Are you hoping to be able to turn on/off, or disable, the real-time
allocation and/or logging? I can't speak for RTT's intended use of
real-time allocation, but you can certainly deal with these issues at
deployment-time for the RT allocator and logging. Turning both off at
runtime would probably require wrapping all their functions with a
check on a disable flag. I don't see the benefit of that over either a) not
building them in the first place through compile-time options, or b)
just not loading them at deployment-time. What are you hoping to
achieve here?
Stephen
Allocation in HRT
On Mon, 26 Oct 2009, S Roderick wrote:
> On Oct 26, 2009, at 08:21 , Herman Bruyninckx wrote:
>
>> On Mon, 26 Oct 2009, Peter Soetens wrote:
>>
>>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>
> <snip>
>
>>
>>> That said, we're still in the process of bringing TLSF in practice,
>>> and as I said before on this list, it will be possible to compile the
>>> RTT without it, which will impose limitations similar to the ones we have
>>> now when using commands or events.
>>
>> Good. This is an answer to my "policy comment" above :-)
>>
>> How difficult would it be to make it a runtime configuration
>> option...?
>
> Are you hoping to be able to turn on/off, or disable, the real-time
> allocation and/or logging? I can't speak for RTT's intended use of
> real-time allocation, but you can certainly deal with these issues at
> deployment-time for the RT allocator and logging. Turning both off at
> runtime would probably require wrapping all their functions with a
> check on a disable flag. I don't see the benefit to either a) not
> building them in the first place through compile-time options, or b)
> just not loading them at deployment-time. What are you hoping to
> achieve here?
>
I was in fact thinking about "deployment"-time configuration, and not "run
time" configuration! Sorry, my mistake!
Herman
Allocation in HRT
On Mon, 26 Oct 2009, Peter Soetens wrote:
> On Mon, Oct 26, 2009 at 13:21, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>> On Mon, 26 Oct 2009, Peter Soetens wrote:
>>
>>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>>>>
>>>> Well, I was told these discussions were appropriate for the whole
>>>> group, so here goes. Continuing another off ML discussion:
>>>>
>>>>>> To be honest using heap in hard real-time, even TLSF, is intriguing but
>>>>>> not something I've ever been willing to use "for real".
>>>>>
>>>>> Can you name a hard real-time os that offers messaging but that
>>>>> doesn't use any kind of allocator in the critical path ?
>>>>
>>>> Fascinating, I think this must be semantic. While I'm willing to say
>>>> it's painful, limiting, and uses RAM inefficiently, isn't the answer "All of
>>>> them" ?
>>>
>>> This gets indeed semantical. Pre-allocating and then assigning a part
>>> of that in the critical path, is still allocating in my opinion: You
>>> have a limited resource (even more limited than the total RAM you
>>> have) which you manage and assume there will be enough. That sounds
>>> very much like a standard allocation behaviour to me.
>>
>> No! The difference is that you _know_ that it will be enough (since you
>> know your application's needs and resources). Only _then_ hard realtime is
>> possible. If you don't know those two things, talking about hard realtime
>> is nonsense, and "as efficient and fast as possible" is the best you can
>> hope to achieve.
>
> The reality is we don't know *for sure*. We can reason about resource
> usage for days, but nothing has been proven. If in the end a dormant
> bug causes us to use much more resources than we thought we would,
> we're just fooling ourselves with pre-allocation. The reality is we
> want to do hard real-time with buggy programs.
Come on, Peter! This is way too ambitious/preposterous/unrealistic/... (fill
in your own) to be taken seriously :-)
> Imagine that. So 'we' double our RAM and CPU capacity, and put an O(1)
> allocator in there.
>
> I'm not suggesting that the RTT nor user applications should be
> designed to be like that or work only like that, but it's at least a
> very real part of our development process.
I agree with this fact: it _is_ very interesting to many of our "customers" :-)
So, we _have_ to provide it. But in a wise way :-)
> We don't know.
>
>>
>>> A major reason to do pre-allocation is to avoid sbrk() calls, which
>>> stalls your CPU because new physical memory must be mapped. But still,
>>> when the new piece of memory is available in your process, you need to
>>> manage it. Enter TLSF.
>>
>> That's indeed the right context to present TLSF in. And, yes, then the O(1)
>> computational complexity of the algorithm makes sense. :-) But whether it
>> can perform hard realtime or not depends on the other of the
>> above-mentioned two conditions: knowing your application's needs in
>> advance.
> ...
>
>>> That said, we're still in the process of bringing TLSF in practice,
>>> and as I said before on this list, it will be possible to compile the
>>> RTT without it, which will impose limitations similar to the ones we have
>>> now when using commands or events.
>>
>> Good. This is an answer to my "policy comment" above :-)
>>
>> How difficult would it be to make it a runtime configuration option...?
>
> Close to impossible, given the current resources :-)
Then allocate some more, online and even in realtime!!! :-)))
Herman
Allocation in HRT
On Mon, Oct 26, 2009 at 14:09, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
> On Mon, 26 Oct 2009, Peter Soetens wrote:
>
>> On Mon, Oct 26, 2009 at 13:21, Herman Bruyninckx
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>
>>> On Mon, 26 Oct 2009, Peter Soetens wrote:
>>>
>>>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>>>>>
>>>>> Well, I was told these discussions were appropriate for the whole
>>>>> group, so here goes. Continuing another off ML discussion:
>>>>>
>>>>>>> To be honest using heap in hard real-time, even TLSF, is intriguing
>>>>>>> but
>>>>>>> not something I've ever been willing to use "for real".
>>>>>>
>>>>>> Can you name a hard real-time os that offers messaging but that
>>>>>> doesn't use any kind of allocator in the critical path ?
>>>>>
>>>>> Fascinating, I think this must be semantic. While I'm willing to
>>>>> say
>>>>> it's painful, limiting, and uses RAM inefficiently, isn't the answer
>>>>> "All of
>>>>> them" ?
>>>>
>>>> This gets indeed semantical. Pre-allocating and then assigning a part
>>>> of that in the critical path, is still allocating in my opinion: You
>>>> have a limited resource (even more limited than the total RAM you
>>>> have) which you manage and assume there will be enough. That sounds
>>>> very much like a standard allocation behaviour to me.
>>>
>>> No! The difference is that you _know_ that it will be enough (since you
>>> know your application's needs and resources). Only _then_ hard realtime
>>> is
>>> possible. If you don't know those two things, talking about hard realtime
>>> is nonsense, and "as efficient and fast as possible" is the best you can
>>> hope to achieve.
>>
>> The reality is we don't know *for sure*. We can reason about resource
>> usage for days, but nothing has been proven. If in the end a dormant
>> bug causes us to use much more resources than we thought we would,
>> we're just fooling ourselves with pre-allocation. The reality is we
>> want to do hard real-time with buggy programs.
>
> Come'on Peter! This is way too ambitious/preposterous/unrealistic/... (fill
> in your own) to be taken seriously :-)
For clarity, I'm not being any of those. A wrong use, miscalculation
or misestimation of resources is, in my view, a bug. Bugs don't have to
lead to blue screens or even wrong output. I don't believe bug-free
systems exist. Clearly, people who build airplanes think so too.
Still, we want to be robust against these bugs in an HRT environment.
TLSF allows you to do this efficiently with respect to memory
resources.
Peter
Allocation in HRT
>
> >> The reality is we don't know *for sure*. We can reason about resource
> >> usage for days, but nothing has been proven. If in the end a dormant
> >> bug causes us to use much more resources than we thought we would,
> >> we're just fooling ourselves with pre-allocation. The reality is we
> >> want to do hard real-time with buggy programs.
> >
> > Come'on Peter! This is way too ambitious/preposterous/unrealistic/...
> (fill
> > in your own) to be taken seriously :-)
>
> For clarity, I'm not being any of those. A wrong use, mis-calculation
> or -estimation of resources is in my view a bug. Bugs don't need to
> lead to blue screens or even wrong output. I don't believe there exist
> bug-free systems. Clearly, people that build airplanes think so too.
> Still we want to be robust against these bugs in a HRT environment.
> TLSF allows you to do this efficiently with respect to memory
> resources.
>
Peter,
- How will TLSF handle a bug which causes a memory allocation of size -1? You
are screwed if this happens in a critical rt path.
- If for some reason a buggy (embedded) webserver eats up all heap memory, I
expect my machine to stop in a well-known way. This is impossible when using
memory allocation in a critical rt path. After all, I won't be able to send
an event if my web server runs out of memory.
- Allocating memory in rt adds another risk whose exceptions you need to
handle. Adding a risk in rt is not done unless you have a really
__good__ reason.
- The airplane people would probably lynch you on the spot if you proposed
memory allocation in rt to them.
Butch.
Allocation in HRT
On Mon, Nov 2, 2009 at 10:03, Butch Slayer <butch [dot] slayers [..] ...> wrote:
>> >> The reality is we don't know *for sure*. We can reason about resource
>> >> usage for days, but nothing has been proven. If in the end a dormant
>> >> bug causes us to use much more resources than we thought we would,
>> >> we're just fooling ourselves with pre-allocation. The reality is we
>> >> want to do hard real-time with buggy programs.
>> >
>> > Come'on Peter! This is way too ambitious/preposterous/unrealistic/...
>> > (fill
>> > in your own) to be taken seriously :-)
>>
>> For clarity, I'm not being any of those. A wrong use, mis-calculation
>> or -estimation of resources is in my view a bug. Bugs don't need to
>> lead to blue screens or even wrong output. I don't believe there exist
>> bug-free systems. Clearly, people that build airplanes think so too.
>> Still we want to be robust against these bugs in a HRT environment.
>> TLSF allows you to do this efficiently with respect to memory
>> resources.
>
>
> Peter,
>
> - How will TLSF handle a bug which causes a memory allocation of size -1? You
> are screwed if this happens in a critical rt path.
Can you give an example of when this would happen? We probably only
call TLSF through C++ object construction.
> - If for some reason a buggy (embedded) webserver eats up all heap memory I
> expect my machine to stop in a well known way. This is impossible when using
> memory allocation in a critical rt path. After all I won't be able to send
> an event if my web server runs out of memory
This is not how we intend to use TLSF. TLSF works on a pre-allocated
piece of heap, just as you would pre-allocate when using static
allocation. So picture this: a big block of memory, reserved for
your application; TLSF partitions it and hands out pieces to the
places that require them. We can set up different pieces of memory for
different subsystems (like logging, scripting, ...).
> - Allocating memory in rt is adding another risk of which you need to
> handle exceptions. Adding a risk in rt is not done unless you have a really
> __good__ reason.
This is true. Dynamic allocations require another level of error
handling, because memory might run out. The way we will use it, it
will actually be darn easy to test run-out scenarios. This will become
part of the unit tests surrounding the TLSF.
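A minimal sketch of such a run-out test (the pool handle and the TLSF-style calls are the same assumed names as in my earlier sketch, so purely illustrative): allocate from a small dedicated pool until it refuses, and check that failure is a clean NULL instead of undefined behaviour.

#include <cassert>
#include <cstddef>
#include <vector>

extern void* test_pool;                            // a small pool set up just for this test
extern "C" void* malloc_ex(std::size_t bytes, void* pool);
extern "C" void  free_ex(void* ptr, void* pool);

// Exhaust the pool on purpose and verify the failure mode.
void test_pool_run_out() {
    std::vector<void*> held;
    for (;;) {
        void* p = malloc_ex(128, test_pool);
        if (!p)
            break;                                 // clean failure: this is the case under test
        held.push_back(p);
    }
    assert(!held.empty());                         // we did get memory before running out
    for (std::size_t i = 0; i < held.size(); ++i)  // give everything back ...
        free_ex(held[i], test_pool);
    void* again = malloc_ex(128, test_pool);       // ... and the pool must be usable again
    assert(again != 0);
    free_ex(again, test_pool);
}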
Users will not see it. It works as an internal resource manager for
the RTT. The RTT will have the error handling in place, it will be
unit tested, and behaviour in such a case can be predicted and
documented (i.e. no 'undefined behaviour'). TLSF is capable of
allocating extra memory if the pre-allocated chunk has dried up. This is
enabled with a flag and is clearly only meant for
development/debug sessions. You can get statistics during and after a run.
TLSF doesn't mean pre-allocs will go away. For example, the lock-free
containers will probably keep using pre-allocated chunks of memory
for efficiency reasons. There is a big concern here:
multi-threaded access to TLSF must be guarded by mutexes (Stephen, can
you confirm this wrt logging?). In plain Linux, mutexes are *cheap*:
there's only a syscall when the lock is already taken (and the syscall
needs to happen anyway in that case). In Xenomai and RTAI, this is not
(yet) the case. Since 'messages' are between threads by definition
in RTT, our TLSF will be shared across threads (one thread allocs, the
other frees). If anyone could come up with a lock-free scheme for
TLSF, that would be great, but I fear that is almost impossible. It's
not on my agenda.
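For the shared, cross-thread pool the guard could be as thin as the sketch below (again with the assumed pool handle and TLSF-style calls, and plain pthread mutexes): on Linux/NPTL the uncontended lock/unlock stays in user space and only contention triggers the futex syscall. On Xenomai or RTAI you'd want a priority-inheriting RTOS mutex instead.

#include <pthread.h>
#include <cstddef>

// Assumed pool handle and TLSF-style calls from the earlier sketch.
extern void* rt_pool;
extern "C" void* malloc_ex(std::size_t bytes, void* pool);
extern "C" void  free_ex(void* ptr, void* pool);

// One mutex per shared pool; a real RT setup would set PTHREAD_PRIO_INHERIT
// via a pthread_mutexattr_t instead of using the default initializer.
static pthread_mutex_t pool_mtx = PTHREAD_MUTEX_INITIALIZER;

void* shared_rt_alloc(std::size_t bytes) {
    pthread_mutex_lock(&pool_mtx);
    void* p = malloc_ex(bytes, rt_pool);    // one thread may alloc ...
    pthread_mutex_unlock(&pool_mtx);
    return p;                               // NULL means the pool ran dry
}

void shared_rt_free(void* p) {
    pthread_mutex_lock(&pool_mtx);
    free_ex(p, rt_pool);                    // ... while another frees
    pthread_mutex_unlock(&pool_mtx);
}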
In the end, I think the TLSF will have these features in RTT:
1. Works on pre-allocated pieces of heap.
2. Is only used for RTT internal memory management
3. Uses a mutex internally to synchronize access
4. Is allowed to fetch another chunk of memory for testing and finding
a reasonable worst case.
5. Will have multiple instances to separate critical/less critical
systems (messaging vs logging)
6. Can be turned off, in which case you revert to 1.x-like scenarios.
It's hard to predict which features will suffer most.
> - The airplane people would probably lynch you on the spot if you would
> propose memory allocation in rt to them.
I don't know. Maybe I have to explain it to them better :-)
Markus and I gave a presentation at the Xenomai User's Meeting in
Dresden about Lua+TLSF. See the presentation at:
http://www.denx.de/en/pub/News/Xum2009AbstractsAndPresentations/OROCOS.pdf
I believe his setup was single-threaded.
Peter
Allocation in HRT
On Mon, Nov 02, 2009 at 11:01:36AM +0100, Peter Soetens wrote:
> On Mon, Nov 2, 2009 at 10:03, Butch Slayer <butch [dot] slayers [..] ...> wrote:
> >> >> The reality is we don't know *for sure*. We can reason about resource
> >> >> usage for days, but nothing has been proven. If in the end a dormant
> >> >> bug causes us to use much more resources than we thought we would,
> >> >> we're just fooling ourselves with pre-allocation. The reality is we
> >> >> want to do hard real-time with buggy programs.
> >> >
> >> > Come'on Peter! This is way too ambitious/preposterous/unrealistic/...
> >> > (fill
> >> > in your own) to be taken seriously :-)
> >>
> >> For clarity, I'm not being any of those. A wrong use, mis-calculation
> >> or -estimation of resources is in my view a bug. Bugs don't need to
> >> lead to blue screens or even wrong output. I don't believe there exist
> >> bug-free systems. Clearly, people that build airplanes think so too.
> >> Still we want to be robust against these bugs in a HRT environment.
> >> TLSF allows you to do this efficiently with respect to memory
> >> resources.
> >
> >
> > Peter,
> >
> > - How will TLSF handle a bug which causes a memory allocation of size -1? You
> > are screwed if this happens in a critical rt path.
>
> Can you give an example when this would happen ? We probably only
> call TLSF through C++ object construction.
>
> > - If for some reason a buggy (embedded) webserver eats up all heap memory I
> > expect my machine to stop in a well known way. This is impossible when using
> > memory allocation in a critical rt path. After all I won't be able to send
> > an event if my web server runs out of memory
>
> This is not how we intend to use TLSF. TLSF works on a pre-allocated
> piece of heap, just like you would pre-allocate because of using
> static allocation. So picture this: Big square of memory, reserved for
> your application, TLSF partitions it and hands out pieces to the
> places that require it. We can setup different pieces of memory for
> different subsystems (like logging, scripting,...).
>
> > - Allocating memory in rt is adding another risk of which you need to
> > handle exceptions. Adding a risk in rt is not done unless you have a really
> > __good__ reason.
>
> This is true. Dynamic allocations require another level of error
> handling, because memory might run out. The way we will use it, it
> will be actually darn easy to test run-out scenarios. This will become
> part of the unit tests surrounding the TLSF.
>
> Users will not see it. It works as an internal resource manager for
> the RTT. The RTT will have the error handling in place, this will be
> unit tested, and behaviour in such a case can be predicted and
> documented (ie no 'undefined behaviour'). TLSF is capable of
> allocating extra memory if the prealloc'ed chunk has dried up. This is
> enabled with a flag and is clearly only enabled during
> development/debug sessions. You can get statistics after/during a run.
>
> TLSF doesn't mean pre-allocs will go away. For example, the lock-free
> containers will probably keep using pre-allocated chunks of memory,
> because of efficiency reasons. There is a big concern here:
> multi-threaded access to TLSF must be guarded by mutexes (Stephen, can
> you confirm this wrt logging ?). In plain Linux, mutexes are *cheap*.
> There's only a syscall when the lock is already taken (the syscall
> needs to happen anyway in that case). In Xenomai, RTAI, this is not
> (yet) the case. Since 'messages' are in between threads by definition
> in RTT, our TLSF will be shared across threads (one thread allocs, the
> other frees). If anyone could come up with a lock-free scheme for
> TLSF, that would be great, but I fear that is almost impossible. It's
> not on my agenda.
>
> In the end, I think the TLSF will have these features in RTT:
>
> 1. Works on pre-allocated pieces of heap.
> 2. Is only used for RTT internal memory management
> 3. Uses a mutex internally to synchronize access
> 4. Is allowed to fetch another chunk of memory for testing and finding
> a reasonable worst case.
> 5. Will have multiple instances to separate critical/less critical
> systems (messaging vs logging)
> 6. Can be turned off, in which case you revert to 1.x like scenarios.
> It's hard to predict which features will suffer most.
>
> > - The airplane people would probably lynch you on the spot if you would
> > propose memory allocation in rt to them.
>
> I don't know. Maybe I have to explain them better :-)
>
> Markus and I gave a presentation on the Xenomai User's Meeting in
> Dresden about Lua+TLSF. See the presentation on:
> http://www.denx.de/en/pub/News/Xum2009AbstractsAndPresentations/OROCOS.pdf
>
> I believe his setup was single-threaded.
Yes. But I intend to allocate one memory pool per Lua instance, so
this way all locking goes away as the TLSF allocation function
operates statelessly on the pool given as an argument.
I believe there are two fundamental ways to use a real-time
allocator. One is that you know exactly how much memory you need, but you
don't know when and in which sizes you will need it. But this is like
good old preallocation behind a nicer interface. Allocation never
fails! You can also use separate pools and avoid all locking.
The other (and more dangerous) is that you want the allocator because
either you don't know how much memory you will need, or you want to
save memory because you believe the sum of all memory allocated to all
"tlsf users" at given time is always less than the sum of all
individual worst-case needs. Allocation can fail. You have to lock.
Of course Lua scripting falls into category 2, but wrt the state
machines I intend to force it into category 1 by a) making a very good
guess about the memory required and b) providing a small emergency pool
that can be used in case of an allocation failure to transition to a
safe state.
Can all RT-allocations you intend for the RTT be in class 1?
Markus
Allocation in HRT
> On Behalf Of Markus Klotzbuecher
> On Mon, Nov 02, 2009 at 11:01:36AM +0100, Peter Soetens wrote:
> > On Mon, Nov 2, 2009 at 10:03, Butch Slayer
> <butch [dot] slayers [..] ...> wrote:
> > >> >> The reality is we don't know *for sure*. We can
> reason about resource
> > >> >> usage for days, but nothing has been proven. If in
> the end a dormant
> > >> >> bug causes us to use much more resources than we
> thought we would,
> > >> >> we're just fooling ourselves with pre-allocation. The
> reality is we
> > >> >> want to do hard real-time with buggy programs.
> > >> >
> > >> > Come'on Peter! This is way too
> ambitious/preposterous/unrealistic/...
> > >> > (fill
> > >> > in your own) to be taken seriously :-)
> > >>
> > >> For clarity, I'm not being any of those. A wrong use,
> mis-calculation
> > >> or -estimation of resources is in my view a bug. Bugs
> don't need to
> > >> lead to blue screens or even wrong output. I don't
> believe there exist
> > >> bug-free systems. Clearly, people that build airplanes
> think so too.
> > >> Still we want to be robust against these bugs in a HRT
> environment.
> > >> TLSF allows you to do this efficiently with respect to memory
> > >> resources.
> > >
> > >
> > > Peter,
> > >
> > > - How will TLSF handle a bug which causes a memory
> allocation of size -1? You
> > > are screwed if this happens in a critical rt path.
> >
> > Can you give an example when this would happen ? We probably only
> > call TLSF through C++ object construction.
> >
> > > - If for some reason a buggy (embedded) webserver eats up
> all heap memory I
> > > expect my machine to stop in a well known way. This is
> impossible when using
> > > memory allocation in a critical rt path. After all I
> won't be able to send
> > > an event if my web server runs out of memory
> >
> > This is not how we intend to use TLSF. TLSF works on a pre-allocated
> > piece of heap, just like you would pre-allocate because of using
> > static allocation. So picture this: Big square of memory,
> reserved for
> > your application, TLSF partitions it and hands out pieces to the
> > places that require it. We can setup different pieces of memory for
> > different subsystems (like logging, scripting,...).
> >
> > > - Allocating memory in rt is adding another risk of
> which you need to
> > > handle exceptions. Adding a risk in rt is not done unless
> you have a really
> > > __good__ reason.
> >
> > This is true. Dynamic allocations require another level of error
> > handling, because memory might run out. The way we will use it, it
> > will be actually darn easy to test run-out scenarios. This
> will become
> > part of the unit tests surrounding the TLSF.
> >
> > Users will not see it. It works as an internal resource manager for
> > the RTT. The RTT will have the error handling in place, this will be
> > unit tested, and behaviour in such a case can be predicted and
> > documented (ie no 'undefined behaviour'). TLSF is capable of
> > allocating extra memory if the prealloc'ed chunk has dried
> up. This is
> > enabled with a flag and is clearly only enabled during
> > development/debug sessions. You can get statistics
> after/during a run.
> >
> > TLSF doesn't mean pre-allocs will go away. For example, the
> lock-free
> > containers will probably keep using pre-allocated chunks of memory,
> > because of efficiency reasons. There is a big concern here:
> > multi-threaded access to TLSF must be guarded by mutexes
> (Stephen, can
> > you confirm this wrt logging ?). In plain Linux, mutexes
> are *cheap*.
> > There's only a syscall when the lock is already taken (the syscall
> > needs to happen anyway in that case). In Xenomai, RTAI, this is not
> > (yet) the case. Since 'messages' are in between threads by
> definition
> > in RTT, our TLSF will be shared across threads (one thread
> allocs, the
> > other frees). If anyone could come up with a lock-free scheme for
> > TLSF, that would be great, but I fear that is almost
> impossible. It's
> > not on my agenda.
> >
> > In the end, I think the TLSF will have these features in RTT:
> >
> > 1. Works on pre-allocated pieces of heap.
Then come full circle and honestly look at whether it's needed.
If you can reduce it to a FIFO and/or stack, consider using
the more efficient and deterministic approach.
> > 2. Is only used for RTT internal memory management
Which is the worst case since this means it's all the
way down in the core.
> > 3. Uses a mutex internally to synchronize access
Sigh, as already pointed out, they can be expensive.
> > 4. Is allowed to fetch another chunk of memory for testing
> and finding
> > a reasonable worst case.
> > 5. Will have multiple instances to separate critical/less critical
> > systems (messaging vs logging)
> > 6. Can be turned off, in which case you revert to 1.x like
> scenarios.
> > It's hard to predict which features will suffer most.
What do you anticipate would be lost when TLSF is turned off?
If it's just Lua, and you don't have to use Lua, that's one thing.
If you lose core functionality (e.g. logging), that's a problem.
> >
> > > - The airplane people would probably lynch you on the
> spot if you would
> > > propose memory allocation in rt to them.
Not just the airlines ...
> >
> > I don't know. Maybe I have to explain them better :-)
And you also need to listen. This remains an enormous stumbling
block, and may preclude the use of OROCOS in a number of
applications.
> >
> > Markus and I gave a presentation on the Xenomai User's Meeting in
> > Dresden about Lua+TLSF. See the presentation on:
> >
> http://www.denx.de/en/pub/News/Xum2009AbstractsAndPresentation
> s/OROCOS.pdf
> >
> > I believe his setup was single-threaded.
These are truly impressive results (I'd like to see them published
so more details are available). As always, questions like threading,
other loads, etc. are relevant.
>
> Yes. But I intend to allocate one memory pool per Lua instance, so
> this way all locking goes away as the TLSF allocation function
> operates statelessly on the pool give as an argument.
>
> I believe there are two fundamental ways to use a real-time
> allocator. One is you know exactly how much memory you need, but you
> don't know when and in which sizes you will need it. But this is like
> good old preallocation behind a nicer interface. Allocation never
> fails! You can also use separate pools and avoid all locking.
>
> The other (and more dangerous) is that you want the allocator because
> either you don't know how much memory you will need, or you want to
> save memory because you believe the sum of all memory allocated to all
> "tlsf users" at given time is always less than the sum of all
> individual worst-case needs. Allocation can fail. You have to lock.
>
> Of course Lua scripting falls into category 2, but wrt the state
> machines I intend to force it into category 1 by a) making a very good
> guess about the memory required b) providing a small emergency pool
> that can be used in case of a allocation failure to transition to a
> safe state.
>
> Can all RT-allocations you intend for the RTT be in class 1?
How would the emergency pool work? Is that something TLSF provides, or
something you'd hack around it?
- alexis.
Allocation in HRT
On Mon, Nov 2, 2009 at 18:49, Wieland, Alexis P
<Alexis [dot] P [dot] Wieland [..] ...> wrote:
>> > In the end, I think the TLSF will have these features in RTT:
>> >
>> > 1. Works on pre-allocated pieces of heap.
>
> Then come full circle and honestly look at if it's needed.
> If you can reduce it to a FIFO and/or stack, consider using
> the more efficient and deterministic approach.
TLSF is O(1), thus deterministic. I haven't evaluated its efficiency
yet, so there may be a minus point there. I'm convinced that the
default allocator must be good to cover 95% of the cases, but as
always in the RTT, we'll leave the user the option to change it for
his specific requirements. As this discussion shows, an allocator is
clearly a badly needed point of customization.
>
>> > 2. Is only used for RTT internal memory management
> Which is the worst case since this means it's all the
> way down in the core.
RTT *needs* memory management in place; it has it today too. Any memory
management in place today is indeed all the way down in the core. This
is not a worst case; it's something we need to take into account in
our designs.
>
>> > 3. Uses a mutex internally to synchronize access
> Sigh, as already pointed out, they can be expensive.
>
>> > 4. Is allowed to fetch another chunk of memory for testing
>> and finding
>> > a reasonable worst case.
>> > 5. Will have multiple instances to separate critical/less critical
>> > systems (messaging vs logging)
>> > 6. Can be turned off, in which case you revert to 1.x like
>> scenarios.
>> > It's hard to predict which features will suffer most.
>
> What do you anticipate would be lost when TLSF is turned off?
> If it's just Lua, and you don't have to use Lua, that's one thing.
> If you lose core functionality (e.g. logging) that's a problem.
I'm having an idea here. Today, we pre-allocate a chunk of memory for
each primitive used. So 3 commands lead to 3 chunks, one allocated for
each of them. When a command is sent, you cannot re-send it until the
command has completed and its chunk is available again. This resembles an
allocation strategy with only 1 chunk of memory left available. What
I'm saying is: the behaviour today is equivalent to a TLSF
implementation which has only one chunk of memory available, OR
equivalent to a FIFO implementation which has one chunk of memory
available.
So if you want to imagine what happens *if* you choose a TLSF/FIFO
allocator *and* it runs out of chunks, you'll have to look at today's
RTT 1.x behaviour. That's probably the ideal scenario: graceful (or
robust) degradation. To get more technical (meant for Butch), this
scenario will still require that pre-allocation on a per-instance
level is done, such that each command/.. can be sent after it was
created (a guarantee identical to RTT 1.x). Upon execution, it will
try to allocate a new chunk to 'send in'; if the chunks are exhausted (or
you didn't turn on the allocator), the pre-allocated chunk is used and
recycled when the command is done, good old 1.x style.
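In rough pseudo-C++ (purely illustrative, not actual RTT 1.x or 2.x API; the allocator calls are the assumed helpers from my earlier sketch), the send path of such a primitive could look like this:

#include <cstddef>

// Assumed pool allocator from the earlier sketches (may return NULL).
void* rt_alloc(std::size_t bytes);
void  rt_free(void* p);

// Illustrative only: each command owns one chunk, pre-allocated when the
// command is created, so it can always be sent at least once (the 1.x
// guarantee).  A fresh chunk from the allocator is preferred; when the
// pool is exhausted (or the allocator is turned off) we degrade to the
// owned chunk, which is recycled when the command completes.
class CommandInstance {
public:
    explicit CommandInstance(std::size_t chunk_size)
        : size_(chunk_size), owned_(rt_alloc(chunk_size)), owned_in_use_(false) {}

    void* acquire_chunk() {
        if (void* p = rt_alloc(size_))
            return p;                     // normal path: a fresh chunk
        if (!owned_in_use_) {
            owned_in_use_ = true;         // graceful degradation: reuse our own chunk
            return owned_;
        }
        return 0;                         // busy: caller retries later, as in 1.x
    }

    void release_chunk(void* p) {
        if (p == owned_)
            owned_in_use_ = false;        // ready for the next send
        else
            rt_free(p);
    }

private:
    std::size_t size_;
    void*       owned_;
    bool        owned_in_use_;
};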
>
>> >
>> > > - The airplane people would probably lynch you on the
>> spot if you would
>> > > propose memory allocation in rt to them.
>
> Not just the airlines ...
>
>> >
>> > I don't know. Maybe I have to explain them better :-)
>
> And you also need to listen.
Orocos couldn't have been what it is today if it were just me coding in
my small, dark room. Telling me I'm not listening is saying that the
*countless* contributions (on the ML and in patches) don't
exist.
> This remains an enormous stumbling
> block, and may preclude the use of OROCOS in a number of
> applications.
Our first real/external/industrial Orocos user was/is a zero-malloc /
zero-mutex type of user. These people are our primary test for
checking if we're still on the right hard real-time path. If they stop
following us, we know we screwed up. You'd really like them. We like
to keep them too.
Peter
Allocation in HRT
On Mon, Nov 02, 2009 at 06:49:29PM +0100, Wieland, Alexis P wrote:
> > On Behalf Of Markus Klotzbuecher
What do you mean "on behalf of Markus Klotzbuecher"??? That's not
true. Don't do this!!!
...
> > > Markus and I gave a presentation on the Xenomai User's Meeting in
> > > Dresden about Lua+TLSF. See the presentation on:
> > >
> > http://www.denx.de/en/pub/News/Xum2009AbstractsAndPresentation
> > s/OROCOS.pdf
> > >
> > > I believe his setup was single-threaded.
> These are a truly impressive results (I'd like to see it published
> so more details are available). As always questions like threading,
> other loads, etc are relevant.
It's on the todo list.
> > Yes. But I intend to allocate one memory pool per Lua instance, so
> > this way all locking goes away as the TLSF allocation function
> > operates statelessly on the pool give as an argument.
> >
> > I believe there are two fundamental ways to use a real-time
> > allocator. One is you know exactly how much memory you need, but you
> > don't know when and in which sizes you will need it. But this is like
> > good old preallocation behind a nicer interface. Allocation never
> > fails! You can also use separate pools and avoid all locking.
> >
> > The other (and more dangerous) is that you want the allocator because
> > either you don't know how much memory you will need, or you want to
> > save memory because you believe the sum of all memory allocated to all
> > "tlsf users" at given time is always less than the sum of all
> > individual worst-case needs. Allocation can fail. You have to lock.
> >
> > Of course Lua scripting falls into category 2, but wrt the state
> > machines I intend to force it into category 1 by a) making a very good
> > guess about the memory required b) providing a small emergency pool
> > that can be used in case of a allocation failure to transition to a
> > safe state.
> >
> > Can all RT-allocations you intend for the RTT be in class 1?
>
> How would the emergency pool work? Is that something TLSF provides, or
> something you'd hack around it?
Neither, it's something easily done with TLSF. If the allocation from
the standard memory pool fails, fall back on the emergency pool and
raise the respective event. Determining the right size for the
emergency pool is the tricky part, of course.
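In sketch form (the pool names and the event hook are hypothetical, and the TLSF-style call is the same assumed name as earlier in the thread):

#include <cstddef>

// Two pre-allocated pools; the names are hypothetical.
extern void* main_pool;
extern void* emergency_pool;
extern "C" void* malloc_ex(std::size_t bytes, void* pool);

// Hypothetical hook: tells the supervisor to drive the system to a safe state.
void raise_low_memory_event();

void* alloc_with_fallback(std::size_t bytes) {
    if (void* p = malloc_ex(bytes, main_pool))
        return p;                           // normal case
    // Main pool exhausted: take from the emergency reserve and raise the
    // event; from here on the only goal is reaching a safe state.
    raise_low_memory_event();
    return malloc_ex(bytes, emergency_pool);
}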
Markus
Allocation in HRT
On Oct 26, 2009, at 09:09 , Herman Bruyninckx wrote:
> On Mon, 26 Oct 2009, Peter Soetens wrote:
>
>> On Mon, Oct 26, 2009 at 13:21, Herman Bruyninckx
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>> On Mon, 26 Oct 2009, Peter Soetens wrote:
>>>
>>>> On Sat, Oct 24, 2009 at 20:45, <alexis [..] ...> wrote:
>>>>>
>>>>> Well, I was told these discussions were appropriate for the
>>>>> whole
>>>>> group, so here goes. Continuing another off ML discussion:
>>>>>
>>>>>>> To be honest using heap in hard real-time, even TLSF, is
>>>>>>> intriguing but
>>>>>>> not something I've ever been willing to use "for real".
>>>>>>
>>>>>> Can you name a hard real-time os that offers messaging but that
>>>>>> doesn't use any kind of allocator in the critical path ?
>>>>>
>>>>> Fascinating, I think this must be semantic. While I'm
>>>>> willing to say
>>>>> it's painful, limiting, and uses RAM inefficiently, isn't the
>>>>> answer "All of
>>>>> them" ?
>>>>
>>>> This gets indeed semantical. Pre-allocating and then assigning a
>>>> part
>>>> of that in the critical path, is still allocating in my opinion:
>>>> You
>>>> have a limited resource (even more limited than the total RAM you
>>>> have) which you manage and assume there will be enough. That sounds
>>>> very much like a standard allocation behaviour to me.
>>>
>>> No! The difference is that you _know_ that it will be enough
>>> (since you
>>> know your application's needs and resources). Only _then_ hard
>>> realtime is
>>> possible. If you don't know those two things, talking about hard
>>> realtime
>>> is nonsense, and "as efficient and fast as possible" is the best
>>> you can
>>> hope to achieve.
>>
>> The reality is we don't know *for sure*. We can reason about resource
>> usage for days, but nothing has been proven. If in the end a dormant
>> bug causes us to use much more resources than we thought we would,
>> we're just fooling ourselves with pre-allocation. The reality is we
>> want to do hard real-time with buggy programs.
>
> Come'on Peter! This is way too ambitious/preposterous/
> unrealistic/... (fill
> in your own) to be taken seriously :-)
Wow, that is a bit of a stretch, Peter? Or are we misinterpreting what
you are getting at?
Personally, it seems to me that we've all agreed
a) real-time logging would be incredibly useful to all Orocos users
b) real-time logging requires real-time allocation (if it is not to be
unduly constrained)
c) let's add RT allocation and RT logging to OCL
So can we leave this issue and move on now ...?
S
Allocation in HRT
On Mon, 26 Oct 2009, S Roderick wrote:
[...]
> Personally, it seems to me that we've all agreed
> a) real-time logging would be incredibly useful to all Orocos users
> b) real-time logging requires real-time allocation (if it is not to be
> unduly constrained)
I don't really agree with this: realtime allocation _can_ be a (very
useful) configurable option, but should not become the default, since use
cases exist that can pre-allocate enough for all their logging needs, and
these applications should not carry the overhead of realtime allocation.
> c) let's add RT allocation and RT logging to OCL
Yes.
> So can we leave this issue and move on now ...?
Almost... "Just" implement it :-)
Herman
Allocation in HRT
On Oct 26, 2009, at 09:30 , Herman Bruyninckx wrote:
> On Mon, 26 Oct 2009, S Roderick wrote:
>
> [...]
>> Personally, it seems to me that we've all agreed
>> a) real-time logging would be incredibly useful to all Orocos users
>> b) real-time logging requires real-time allocation (if it is not to
>> be
>> unduly constrained)
>
> I don't really agree with this: realtime allocation _can_ be a (very
> useful) configurable option, but should not become the default!
> Since use
> cases exist that can pre-allocate enough for all their logging
> needs, and
> these applications should not carry the overhead of realtime
> allocation.
>
>> c) let's add RT allocation and RT logging to OCL
> Yes.
c) let's add RT allocation and RT logging to OCL (and make them
configurable build options with the default==OFF)
:-)
>> So can we leave this issue and move on now ...?
>
> Almost... "Just" implement it :-)
Working on it ...
S
Allocation in HRT
> > [...]
> >> Personally, it seems to me that we've all agreed
> >> a) real-time logging would be incredibly useful to all Orocos users
> >> b) real-time logging requires real-time allocation (if it
> is not to
> >> be
> >> unduly constrained)
> >
> > I don't really agree with this: realtime allocation _can_ be a (very
> > useful) configurable option, but should not become the default!
> > Since use
> > cases exist that can pre-allocate enough for all their logging
> > needs, and
> > these applications should not carry the overhead of realtime
> > allocation.
If I have a point, it's that it doesn't *require* it, and in rare
cases it may not be wanted (or allowed).
As an old fart, for decades it's been a requirement that we don't
do resource allocation in anything time-critical, and we've managed
to field a lot of trick systems ... so now claiming that you can't
live without it just seems a bit odd ;-)
> >
> >> c) let's add RT allocation and RT logging to OCL
> > Yes.
>
> c) let's add RT allocation and RT logging to OCL (and make them
> configurable build options with the default==OFF)
>
I'd advocate rt alloc on as the default, and agree that a compile-time
switch is adequate.
> :-)
>
> >> So can we leave this issue and move on now ...?
> >
> > Almost... "Just" implement it :-)
>
> Working on it ...
Eager to try it out ;-)
- alexis.
Allocation in HRT
On Oct 24, 2009, at 15:45 , alexis [..] ... wrote:
> Well, I was told these discussions were appropriate for the
> whole group, so here goes. Continuing another off ML discussion:
>
>>> To be honest using heap in hard real-time, even TLSF, is
>>> intriguing but
>>> not something I've ever been willing to use "for real".
>>
>> Can you name a hard real-time os that offers messaging but that
>> doesn't use any kind of allocator in the critical path ?
>
> Fascinating, I think this must be semantic. While I'm willing to
> say it's painful, limiting, and uses RAM inefficiently, isn't the
> answer "All of them" ?
>
> After careful analysis and testing, create generously sized rt fifos
> or other buffered I/O channels. It's a catastrophic error for the
> buffer to overflow. Even when communicating with non-RT the basic
> approach works fine (you just get really big buffers "just in
> case"). For a couple extra dollars in RAM you get rock solid. For
> my applications it's appropriate, YMMV.
But it isn't always possible to add more RAM. I have worked on
embedded systems where we used virtually every single byte of RAM. We
couldn't afford over-sized buffers, just in case. I'm just saying that
your example approach doesn't always apply.
> On my corner of the world I'm the radical one in that I don't
> just do that before execution, but am willing to have a (high
> priority, but non-HRT) helper/proxy thread that (attempts to)
> allocate on the fly and hands off to the HRT portions when
> complete. The semantics part is that while the proxy uses the RT
> scheduler, it doesn't have an enforced time requirement, which is
> what I mean by HRT.
>
> But doesn't even a static (non-resizable, non heap) rt fifo provide
> messaging?
Yes, but where did the "rt fifo" get its buffer space from?
Stephen
Allocation in HRT
> From: S Roderick [mailto:kiwi [dot] net [..] ...]
> On Oct 24, 2009, at 15:45 , alexis [..] ... wrote:
>
> > Well, I was told these discussions were appropriate for the
> > whole group, so here goes. Continuing another off ML discussion:
> >
> >>> To be honest using heap in hard real-time, even TLSF, is
> >>> intriguing but
> >>> not something I've ever been willing to use "for real".
> >>
> >> Can you name a hard real-time os that offers messaging but that
> >> doesn't use any kind of allocator in the critical path ?
> >
> > Fascinating, I think this must be semantic. While I'm
> willing to
> > say it's painful, limiting, and uses RAM inefficiently, isn't the
> > answer "All of them" ?
> >
> > After careful analysis and testing, create generously sized
> rt fifos
> > or other buffered I/O channels. It's a catastrophic error for the
> > buffer to overflow. Even when communicating with non-RT the basic
> > approach works fine (you just get really big buffers "just in
> > case"). For a couple extra dollars in RAM you get rock
> solid. For
> > my applications it's appropriate, YMMV.
>
> But it isn't always possible to add more RAM. I have worked on
> embedded systems where we used virtually every single byte of
> RAM. We
> couldn't afford over-sized buffers, just in case. I'm just
> saying that
> your example approach doesn't always apply.
I think we're agreeing, it doesn't *always* apply.
I just don't want you to preclude me from meeting
what in some cases are design requirements.
My understanding is that if the logger is implemented in the way
that's currently planned, and then used in the core of RTT
(as I'd hope the logger would be), you will have made it
difficult not to have resource management in critical paths.
>
> > On my corner of the world I'm the radical one in that I don't
> > just do that before execution, but am willing to have a (high
> > priority, but non-HRT) helper/proxy thread that (attempts to)
> > allocate on the fly and hands off to the HRT portions when
> > complete. The semantics part is that while the proxy uses the RT
> > scheduler, it doesn't have an enforced time requirement, which is
> > what I mean by HRT.
> >
> > But doesn't even a static (non-resizable, non heap) rt fifo
> provide
> > messaging?
>
> Yes, but where did the "rt fifo" get its buffer space from?
Before (or at least outside of) the critical path, memlocked down,
never de-allocated/changed, all the normal precautions.
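As a minimal sketch of that startup sequence (sizes and names are only examples): reserve the backing storage once, touch every page, and lock it all down before the first deadline exists.

#include <sys/mman.h>
#include <cstring>
#include <vector>

// Example backing store for a static rt fifo; the size is arbitrary here.
static std::vector<unsigned char> fifo_storage;

// Called once at startup, outside any critical path.
bool init_rt_memory() {
    fifo_storage.resize(1u << 20);                           // allocate the buffer exactly once
    std::memset(&fifo_storage[0], 0, fifo_storage.size());   // touch every page now, not later
    // Lock current and future pages so the critical path never page-faults.
    return mlockall(MCL_CURRENT | MCL_FUTURE) == 0;
}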
- alexis.
Allocation in HRT
On Oct 26, 2009, at 12:16 , Wieland, Alexis P wrote:
>
>> From: S Roderick [mailto:kiwi [dot] net [..] ...]
>> On Oct 24, 2009, at 15:45 , alexis [..] ... wrote:
>>
>>> Well, I was told these discussions were appropriate for the
>>> whole group, so here goes. Continuing another off ML discussion:
>>>
>>>>> To be honest using heap in hard real-time, even TLSF, is
>>>>> intriguing but
>>>>> not something I've ever been willing to use "for real".
>>>>
>>>> Can you name a hard real-time os that offers messaging but that
>>>> doesn't use any kind of allocator in the critical path ?
>>>
>>> Fascinating, I think this must be semantic. While I'm
>> willing to
>>> say it's painful, limiting, and uses RAM inefficiently, isn't the
>>> answer "All of them" ?
>>>
>>> After careful analysis and testing, create generously sized
>> rt fifos
>>> or other buffered I/O channels. It's a catastrophic error for the
>>> buffer to overflow. Even when communicating with non-RT the basic
>>> approach works fine (you just get really big buffers "just in
>>> case"). For a couple extra dollars in RAM you get rock
>> solid. For
>>> my applications it's appropriate, YMMV.
>>
>> But it isn't always possible to add more RAM. I have worked on
>> embedded systems where we used virtually every single byte of
>> RAM. We
>> couldn't afford over-sized buffers, just in case. I'm just
>> saying that
>> your example approach doesn't always apply.
>
> I think we're agreeing, it doesn't *always* apply.
> I just don't want you to preclude me from meeting
> what in some cases are design requirements.
Fair enough.
> My understanding is if the logger is implemented in the way
> that's currently planned, and then used in the core of RTT
> (as I'd hope the logger would be), you will have made it
> difficult to not have resource management in critical paths.
I think this last concern is in everyone's mind, including mine. We'll
have to tread carefully when we get there ... I don't know how this is
going to work out yet. But also, Peter has indicated that such
resource management will be required in RTT for other purposes too ...
Stephen
Allocation in HRT
> From: Stephen Roderick [mailto:kiwi [dot] net [..] ...]
> On Oct 26, 2009, at 12:16 , Wieland, Alexis P wrote:
> >
> >> From: S Roderick [mailto:kiwi [dot] net [..] ...]
> >> On Oct 24, 2009, at 15:45 , alexis [..] ... wrote:
> >>
> >>> Well, I was told these discussions were appropriate for the
> >>> whole group, so here goes. Continuing another off ML discussion:
> >>>
> >>>>> To be honest using heap in hard real-time, even TLSF, is
> >>>>> intriguing but
> >>>>> not something I've ever been willing to use "for real".
> >>>>
> >>>> Can you name a hard real-time os that offers messaging but that
> >>>> doesn't use any kind of allocator in the critical path ?
> >>>
> >>> Fascinating, I think this must be semantic. While I'm
> >> willing to
> >>> say it's painful, limiting, and uses RAM inefficiently, isn't the
> >>> answer "All of them" ?
> >>>
> >>> After careful analysis and testing, create generously sized
> >> rt fifos
> >>> or other buffered I/O channels. It's a catastrophic error for the
> >>> buffer to overflow. Even when communicating with non-RT the basic
> >>> approach works fine (you just get really big buffers "just in
> >>> case"). For a couple extra dollars in RAM you get rock
> >> solid. For
> >>> my applications it's appropriate, YMMV.
> >>
> >> But it isn't always possible to add more RAM. I have worked on
> >> embedded systems where we used virtually every single byte of
> >> RAM. We
> >> couldn't afford over-sized buffers, just in case. I'm just
> >> saying that
> >> your example approach doesn't always apply.
> >
> > I think we're agreeing, it doesn't *always* apply.
> > I just don't want you to preclude me from meeting
> > what in some cases are design requirements.
>
> Fair enough.
>
> > My understanding is if the logger is implemented in the way
> > that's currently planned, and then used in the core of RTT
> > (as I'd hope the logger would be), you will have made it
> > difficult to not have resource management in critical paths.
>
> I think this last concern is in everyone's mind, including
> mine. We'll
> have to tread carefully when we get there ... I don't know
> how this is
> going to work out yet. But also, Peter has indicated that such
> resource management will be required in RTT for other
> purposes too ...
>
> Stephen
>
The question for me then becomes: is Orocos acceptable to use?
- alexis.