Orocos with in-memory database

Hello orocos-users,

I am toying with an idea and wonder if anyone else has already tried it.

1. The data-writing (output) ports of all Orocos TaskContexts in the
system are connected to a special TaskContext. Let's call this special
TaskContext 'DB'.

2. DB contains an in-memory database (something like BerkeleyDB, which
is basically a key-value store). In DB's updateHook, all of its
InputPorts are read and the new data is written to the in-memory
database (possibly just updating the existing values).

3. DB's in-memory database is synchronously replicated to one (or
several) computers on a network. (BerkeleyDB allows for that sort of thing)

Now you have access to the data of your Orocos system in a database on
another computer, where you can run online analysis, logging, plotting,
and other tools.
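
To make the idea concrete, DB's updateHook would look roughly like this
(a minimal sketch in RTT 2.x style; the port name and key are made up,
and a std::map stands in for the BerkeleyDB handle):

#include <rtt/TaskContext.hpp>
#include <rtt/InputPort.hpp>
#include <rtt/Component.hpp>
#include <map>
#include <string>

class DB : public RTT::TaskContext {
    RTT::InputPort<double> speed_in;          // one input per producer port
    std::map<std::string, double> store;      // stand-in for the BerkeleyDB handle
public:
    DB(const std::string& name) : RTT::TaskContext(name) {
        addPort("speed_in", speed_in);
    }
    void updateHook() {
        double v;
        if (speed_in.read(v) == RTT::NewData)
            store["vehicle.speed"] = v;       // Db::put() in the real thing
    }
};
ORO_CREATE_COMPONENT(DB)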

One possible problem could be the performance of the system: it might
be too slow for the concept to actually work if the data in question
consists of high-frequency sensor readings and controller output
parameters. That needs to be tested.

Comments?

Regards,
Sagar

Orocos with in-memory database

On 05/12/2013 12:37 PM, Sagar Behere wrote:
> Now you have access to the data of your Orocos system in a database on
> another computer, where you can run online analysis, logging, plotting,
> and other tools.
What I am wondering is: why would you use the DB's replication system,
since you could directly fill the database on the machine where you are
going to use it?

Orocos with in-memory database

On 05/12/2013 06:41 PM, Sylvain Joyeux wrote:
> On 05/12/2013 12:37 PM, Sagar Behere wrote:
>> Now you have access to the data of your Orocos system in a database on
>> another computer, where you can run online analysis, logging, plotting,
>> and other tools.
> What I am wondering is: why would you use the DB's replication system,
> since you could directly fill the database on the machine where you are
> going to use it?

Because

1. Most in-memory databases are libraries that run in the address space
of the application. They are not 'servers' to which a process on another
computer can connect. That said, it would be relatively easy to write a
simple server that receives data from the taskcontext DB, say in a UDP
stream, and stores it in a local, in-process, in-memory database (see
the sketch below, after point 2). Each computer that wishes to have such
a database would then need to run its own server. This is also true if
non-in-memory databases are used.

2. The in-memory database in the DB taskcontext could later be extended
for application specific purposes. In my case (self-driving cars) I can
imagine extending the database to hold a map of the geographical area
around the car, and populate that map with detected vehicles, road
signs, traffic lights etc.
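
To illustrate point 1, such a receiving server could be as simple as
the following sketch (the port number and the "key=value" datagram
format are made up, and a std::map stands in for the in-process
in-memory database):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <map>
#include <string>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(17000);                // made-up port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    std::map<std::string, std::string> db;       // in-process "in-memory DB"
    char buf[1500];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n <= 0) continue;
        buf[n] = '\0';
        std::string msg(buf);                    // expected: "key=value"
        std::string::size_type eq = msg.find('=');
        if (eq != std::string::npos)
            db[msg.substr(0, eq)] = msg.substr(eq + 1);  // db.put() in the real thing
    }
    close(sock);
}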

Regards,
Sagar

Orocos with in-memory database

On 05/12/2013 06:53 PM, Sagar Behere wrote:
> 2. The in-memory database in the DB taskcontext could later be extended
> for application specific purposes. In my case (self-driving cars) I can
> imagine extending the database to hold a map of the geographical area
> around the car, and populate that map with detected vehicles, road
> signs, traffic lights etc.
On the one hand, if you really want to do that, just go directly for a
GIS-enabled database (i.e. not BerkeleyDB).

On the other hand, I am not sure I conveyed my question properly. Since
you can transmit data from RTT to any machine, you don't really need a
replication system to get the DB onto a "visualization" machine.
Moreover, where visualization is concerned, one usually wants to avoid
sending the whole data stream to other machines (especially if you are
talking about synchronous replication).

The plans we (Rock devs) have for "live" visualization of old data are
more along the lines of log access servers that allow accessing the
logs when needed. Vizkit would then cache already-accessed data locally.

Orocos with in-memory database

On 05/13/2013 10:31 AM, Sylvain Joyeux wrote:
> On 05/12/2013 06:53 PM, Sagar Behere wrote:
>> 2. The in-memory database in the DB taskcontext could later be extended
>> for application specific purposes. In my case (self-driving cars) I can
>> imagine extending the database to hold a map of the geographical area
>> around the car, and populate that map with detected vehicles, road
>> signs, traffic lights etc.
> On the one hand, if you really want to do that, just go directly for a
> GIS-enabled database (i.e. not BerkeleyDB).
>
> On the other hand, I am not sure I conveyed my question properly. Since
> you can transmit data from RTT to any machine,

This is a problem I have looked at over multiple projects, without
finding an ideal solution. The DB replication is just another attempt to
solve it. So I would like to know if you have a preferred solution for
transmitting data out of RTT. The following requirements would be nice
to satisfy:

1. Solution should work on different architectures (i.e. not just x86)
2. The data should be sent out to the network, not just to local storage
3. Should be able to serialize+send data at minimum 100Hz (maybe 1kHz)
in an efficient (probably binary) format that can be picked up at the
receiving computer in a variety of programming languages (including some
way to push it into Matlab, S-functions if necessary, but native
preferable, because we often have dSpace/xPC Target boxes in our system
whose S-function support is limited)
4. Should work with variable length strings (so components can send
error messages and other diagnostic info from exception handlers etc.)

(Note: There are two different tasks here: data streaming and data
serialization)

So far I have tried (together with Orocos): ØMQ (no serialization), a
UDP streaming server/client (ditto, but works nicely with Matlab), and
DDS (comes with built-in serialization, but no easy way to pick up DDS
data in Matlab). For serialization I have tried Boost serialization
(the binary archive is not portable across architectures and
programming languages), JSON/XML/ASCII (not the most efficient), BSON
(the latest BSON C++ library did not compile on ARM), and Google
Protocol Buffers (comes very close, but needs IDL specs and extra code
and scaffolding).
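
For reference, here is roughly what the ØMQ variant boils down to (a
minimal sketch, not my actual code; the endpoint and payload are
placeholders). It shows the gap: ØMQ happily ships the bytes, but the
memcpy'd host-order double is exactly the non-portable "serialization"
that the other libraries are supposed to solve.

#include <zmq.h>
#include <cstring>

int main() {
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");                // placeholder endpoint

    double sample = 42.0;                         // e.g. one controller output
    char buf[sizeof(sample)];
    std::memcpy(buf, &sample, sizeof(sample));    // raw host-order bytes
    zmq_send(pub, buf, sizeof(buf), 0);           // receiver must know the layout

    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}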

Usually, at the other end, I write all the logs to NetCDF and then
examine them with kst2.

What tools/libraries do you use?

/Sagar

> you don't really need a
> replication system to get the DB onto a "visualization" machine.
> Moreover, where visualization is concerned, one usually wants to avoid
> sending the whole data stream to other machines (especially if you are
> talking about synchronous replication).
>
> The plans we (Rock devs) have for "live" visualization of old data are
> more along the lines of log access servers that allow accessing the
> logs when needed. Vizkit would then cache already-accessed data locally.

Orocos with in-memory database

Hi Sagar,

I don't have much insight into this matter, but Ingo Lutkebohle and Tim
Niemueller presented a database recording system at ROSCon; it is quite
independent of ROS, but they mapped ROS message types to a schema-less
database:

http://roscon.ros.org/2013/?page_id=14

You might find further publications by them; there is nothing to
download on the ROSCon website though.

Peter
