About user data types & transports in 2.0

I have been hinting a lot about the future of types without really
spelling out a plan. Here's the plan.

First the invariants:

1. The RTT has always been data type agnostic and will remain so. It will not
assume any function, base class, etc. to be present, except that data is a)
copyable and b) default constructible.

2. In order to display, serialize or transport data types, the RTT relies on
users to fill in the TypeInfoRepository with TypeInfo objects that implement
these functionalities for a given data type. We collect these objects in
'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
plugins. (A minimal sketch of such a typekit registration follows this list.)

3. A single data type can be transported using multiple transports. We now
have CORBA and POSIX mqueues; more may follow.
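
To make invariant 2 concrete, here is a minimal sketch of a hand-written
typekit entry: a plain user struct plus a registration call that fills the
TypeInfoRepository. The header paths, namespaces and the TemplateTypeInfo
helper are written from memory of the 1.x API and are assumptions; they may
not match any RTT release exactly.

    // Illustrative sketch only: registering a user type with the RTT type
    // system. Exact headers and class names are assumed from the 1.x API
    // and may differ in an actual RTT version.
    #include <rtt/TemplateTypeInfo.hpp>
    #include <rtt/TypeInfoRepository.hpp>

    struct Pose2D {            // the user type: the RTT only requires it to be
        double x, y, theta;    // copyable and default constructible
        Pose2D() : x(0.0), y(0.0), theta(0.0) {}
    };

    // A typekit adds a TypeInfo object per user type, so that display,
    // XML (de)serialization and scripting know how to handle values of it.
    void loadPose2DType()
    {
        RTT::TypeInfoRepository::Instance()->addType(
            new RTT::TemplateTypeInfo<Pose2D>("Pose2D"));
    }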

The questions raised during RTT 1.x development are these:

A. How can a typekit be generated from a user specification?
B. How can a typekit support future transports, or at least a certain class
of transports?
C. How can we interoperate with other frameworks, i.e. do some conversion from
our type to a 3rd party type OR use a 3rd party type directly in our component
code?

I'll tackle A, B and C in order:

A. Typekit generation
There are only two ways of *generating* a typekit for a C++ class T: a. by
defining it in an IDL-like language and generating class T and a companion
Typekit from it (CORBA, ROS, OpenRTM), and b. by parsing class T and generating
a typekit from it (orogen).

RTT doesn't exclude any of these, so both may coexist, and even should
coexist. Preferably, a front-end tool manages this and knows what to generate
in which case. The tool needs to run on a bunch of platforms and is written in
an interpreted language which excels at parsing/generating text. I'm only a
dumb C++ programmer, and I would have chosen Python because it's widely
available and others (ROS, OpenRTM) rely on Python too for their tooling. On
the other hand, orogen is Ruby, and Markus is using Ruby too to generate
components from ecore models. I had more difficulty than acceptable installing
Ruby with the correct gems to get orogen working. I'm a bit worried here.

Then there's ROS with genmsg and a pile of predefined message types and tools
that can read/use these types. The first thing an Orocos user could do is to
use ROS data types in his components. Remember, Orocos allows *any* user type.
Next, we could define a RosTemplateTypeInfo that tells the RTT about this type
and in addition create a 'stream based' transport for sending to topics,
very similar to the current MQueue implementation. Maybe this would require
extending genmsg to generate missing free functions (cf. boost::serialization)
that allow serializing any ROS msg to any format. Finally, the Orocos typekit
generator would recognize the ROS message format and create a Typekit for each
ROS msg type.
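
As an illustration of the 'missing free functions' idea: genmsg does not
generate anything like this today, but a boost::serialization-style free
function it could additionally emit per message might look roughly like the
sketch below. geometry_msgs::Point is just a stand-in example; the code is an
assumption, not existing generator output.

    // Illustrative only: a boost::serialization-style free function that an
    // extended genmsg could emit, so any archive implementation can serialize
    // a ROS msg without the msg type knowing about the archive.
    #include <geometry_msgs/Point.h>   // plain ROS message with double x, y, z

    namespace boost { namespace serialization {

    template<class Archive>
    void serialize(Archive& ar, geometry_msgs::Point& p, const unsigned int /*version*/)
    {
        ar & p.x;
        ar & p.y;
        ar & p.z;
    }

    }} // namespace boost::serialization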

My personal commitment would go to option 2. Since far-reaching ROS-Orocos
integration is on my road-map, it makes sense in my vision. I don't feel I
exclude anyone else here, and again, I believe there is a strong need for a
native C++-to-typekit tool too. Another personal opinion is that orogen needs
to be split into two parts: a rttypegen part and a deployment/component
specification part (the latter may depend on the former though). Sylvain
already proposed a prototype earlier; I haven't gone further on that yet. I
would also hope that rttypegen does not require any modification of the
original headers.

B. Typekits and transports
In 1.x, you can query a type for the ways in which it can be transported. This
implies that for each way of transporting, you need to load additional plugins
that implement it. One may wonder if it wouldn't be possible to have 'self-
describing' TypeInfo objects that allow transports to decompose/recompose the
data of the type and transport it over the wire. Such 'clever' transports
would then only need the TypeInfo object and nothing more.
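
To make the idea tangible, here is a hypothetical sketch of what such a
self-describing TypeInfo could expose. None of these names are existing RTT
API; they only illustrate the concept.

    // Hypothetical sketch, not existing RTT API: the minimum a 'clever'
    // transport would need in order to decompose/recompose a value generically.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct MemberDescription {
        std::string name;        // e.g. "position" or "position.x"
        std::string type_name;   // name of the member's own TypeInfo
        std::size_t offset;      // where the member lives inside the parent value
    };

    class SelfDescribingTypeInfo {
    public:
        virtual ~SelfDescribingTypeInfo() {}
        virtual std::string getTypeName() const = 0;
        // Enumerate the direct members; a transport or the scripting service
        // recurses until it reaches primitive types it can handle directly.
        virtual std::vector<MemberDescription> getMembers() const = 0;
    };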

This self-description is necessary anyway in our scripting service, where we
want to read or modify (sub-)members of a structure. The MQueue transport
would benefit from this system too (since it's a binary-only transport). CORBA
wouldn't benefit that much, although it could pump a sequence of Anys (an Any
for each member), which is less efficient.

This topic is non-intrusive and is independent of the choice in point A. I
intend to look at this after I've fixed the RTT-2 CORBA layer.

C. Interoperation with other frameworks
I'm not planning on doing 'in the flow' conversions between Orocos types and
3rd party types. If you want to interoperate with another framework, your
component will have to produce data that is understood by that framework and
use a transport that connects to it. The YARP transport is an example of this.
There is possibly room for improvement here too, but I'm not planning to make
changes here.

Summarizing, tooling around data types is very important, but I can't maintain
such a framework myself. I can only open up the RTT to such tools and adapt one
existing tool as a demonstrator. Preferably, this is done in a coordinated
way, i.e. with an intelligent front-end that knows what to do when it sees a
data type definition in some language.

Peter

About user data types & transports in 2.0

On Tue, 9 Mar 2010, Peter Soetens wrote:

> I have been insinuating a lot about the future of types without really
> pointing out a plan. Here's the plan.
Thanks! (Should end up on the wiki in one form or another...)

> First the invariants:
>
> 1. The RTT has always been data type agnostic and will remain so. It will not
> assume any function, base class etc to be present, except that data is a)
> copyable and b) default constructible.
>
> 2. In order to display, serialize, transport data types, the RTT relies on
> users to fill in the TypeInfoRepository with TypeInfo objects that implement
> these functionalities for a given data type. We collect these objects in
> 'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
> plugins.

It could be nice to use the name "Topics", since that has been standardized
in the DDS standard for data communication <http://www.omgwiki.org/dds/>...

[...]

Herman

About user data types & transports in 2.0

Peter Soetens wrote:
> <snip>

> A. Typekit generation
> There are only two ways for *generating* a typekit for C++ class T: a. by
> defining it in an idl-like language and generating class T and companion
> Typekit from it (CORBA, ROS, OpenRTM) and b. by parsing class T and generating
> a typekit from it (orogen).
>
> RTT doesn't exclude any of these, so both may coexist, and even should co-
> exist. Preferably, a front-end tool manages this and knows in what to generate
> in which case. the tool needs to run on a bunch of platforms and is written in
> an interpreted language which excels at parsing/generating text. I'm only a
> dumb C++ programmer, and I would have chosen Python because it's widely
> available and others (ROS, OpenRTM) rely on Python too for their tooling. On
> the other hand, orogen is ruby, Markus is using ruby too to generate
> components from ecore models. I had more difficulty than acceptable to install
> ruby with the correct gems to get orogen working. I'm a bit worried here.
>
orogen needs typelib, and it is therefore *not* a Ruby-only library
(ergo, you can't get it to work with gem alone). The main reason why I
picked typelib when I wrote orogen was that typelib *was there*: it is a
good intermediate representation for value types, offers self-describing
C++ types and the manipulation tools that come with them (endian swapping,
fast marshalling/demarshalling *from C++*) -- and offers transparent
bridging with Ruby, which is critical for me.

The bottom line is: to make the use of the RTT really streamlined, one
would have to use a meta-build system (like autoproj or rosbuild). I
don't like rosbuild because it is CMake-centric (we have autotools,
qmake and "undefined types" of packages here) and requires the user to
import the packages (not acceptable when you have one git repository per
package, as it should be under git, and even less acceptable when you need to
download tarballs). Moreover, it leads to non-standalone CMake packages,
since you *cannot* build the ROS packages as-is outside of a ROS tree (I
personally feel it is important to have standalone, standard, CMake
packages).

> Then there's ROS with genmsg and a pile of predefined message types and tools
> that can read/use these types. The first thing an Orocos user could do is to use ros data types in his components.
<snip how to integrate ROS into Orocos>

And how standalone is genmsg, by the way? How do you get the C++ base
that the generated types need in order to be used, without having to fork
the ROS package (remember, not standalone CMake ...)?

For the record, there is orogen with -- maybe not a pile, but quite a
few -- predefined message types and the tools that can read/use these
types ;-)

> My personal commitment would go to option 2. Since far-going ROS-Orocos
> integration is on my road-map, it makes sense in my vision. I don't feel I
> exclude anyone else here, and again, I believe there is a strong need for a
> native C++-to-typekit tool too. Another personal opinion is that orogen needs
> to be split into two parts: a rttypegen part and a deployment/component
> specification part (the latter may depend on the former though). Sylvain
> already proposed a prototype earlier, I didn't go further on that yet. I would
> also hope that rttypegen does not require any modification of the original
> headers.
>
On the rttypegen/orogen split: I don't have any issue with that. I do
see the value in having two command-line tools, but don't see the need
to split the software package. On the integration of ROS data types into
orogen, I would not mind parsing it from typelib. This typelib thing is
actually important to me, because if I do add support for it in typelib
then I get the Ruby bindings for free. And Ruby is what I use for all
the advanced deployment and supervision.

> B. typekits and transports.
> In 1.x, you can query a type over which ways it can be transported. This
> implies that for each way of transporting, you need to load additional plugins
> that implement this. One may wonder if it wouldn't be possible to have 'self-
> describing' TypeInfo objects that allow transports to decompose/recompose the
> data of the type and transport it over the wire. Such 'clever' transports
> would then only need the TypeInfo object and nothing more.
>
That's how orogen/orocos.rb works: it gets the typelib descriptions and
then uses typelib to manipulate the data.
> This self-description is necessary anyway in our scripting service, where we
> want to read or modify (sub-)members of a structure. The MQueue transport
> would benefit from this system too (since it's a binary-only transport). CORBA
> wouldn't benefit that much, although it could pump a sequence of any's (an any
> for each member), which is less efficient.
>
> This topic is non-intrusive and is independent of the choice in point A. I
> intend to look at this after I fixed the RTT-2 CORBA layer.
>
> C.interoperation with other frameworks
> I'm not planning on doing 'in the flow' conversions between Orocos types and
> 3rd party types. If you want to interoperate with another framework, your
> component will have to produce data that is understood by that framework and a
> transport that connects to it. The YARP transport is an example of this. There
> is possibly here also room for improvement, but I'm not planning to make
> changes here.
>
> Summarizing, tooling around data types is very important, but I can't maintain
> such a framework myself. I can only open RTT to such tools and adapt one
> existing tool as a demonstrator. Preferably, this is done in a coordinated
> way, ie an intelligent front-end that knows what to do when it sees a data
> type definition in some language.
>
Yes, except that you *will* have an impact on what type generation is
chosen by user X. As an example, right now, I need typelib-generated
descriptions to be able to manipulate RTT components from Ruby (and all
the advanced stuff that comes with it). If someone else uses another
type description whose representation is not understood by typelib, then
it does not work.

That's the issue with "tools": you do lose a bit of freedom for a
greater ease of use.

About user data types & transports in 2.0

On Tue, Mar 9, 2010 at 17:58, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
> Peter Soetens wrote:
>> <snip>

>> A. Typekit generation
>> There are only two ways for *generating* a typekit for C++ class T: a. by
>> defining it in an idl-like language and generating class T and companion
>> Typekit from it (CORBA, ROS, OpenRTM) and b. by parsing class T and generating
>> a typekit from it (orogen).
>>
>> RTT doesn't exclude any of these, so both may coexist, and even should co-
>> exist. Preferably, a front-end tool manages this and knows in what to generate
>> in which case. the tool needs to run on a bunch of platforms and is written in
>> an interpreted language which excels at parsing/generating text. I'm only a
>> dumb C++ programmer, and I would have chosen Python because it's widely
>> available and others (ROS, OpenRTM) rely on Python too for their tooling. On
>> the other hand, orogen is ruby, Markus is using ruby too to generate
>> components from ecore models. I had more difficulty than acceptable to install
>> ruby with the correct gems to get orogen working. I'm a bit worried here.
>>
> orogen needs typelib, and it is therefore *not* a Ruby-only library

I was specifically writing about *ruby*, not typelib. I got
errors/exceptions when installing gems.

> (ergo, you can't get it to work with gem only). The main reason why I
> picked typelib when I wrote orogen was that typelib *was there*, is a
> good intermediate representation for value-types, offers self-describing
> C++ types and the manipulation tools that come with it (endian swapping,
> fast marshalling/demarshalling *from C++*) -- and offers transparent
> bridging with Ruby, which is critical for me.

I have nothing against typelib.

>
> The bottom line is: to make the use of the RTT really streamlined, one
> would have to use a meta-build system (like autoproj or rosbuild). I
> don't like rosbuild because it is CMake-centric

This is a misunderstanding. ROS provides CMake macros for their ROS
packages, but a package may have its own build system and choose not
to use them (afaict). For example, the orocos-rtt ROS package does not
include any of these CMake macros, just the manifest.xml file. From
the ROS website: "The presence of a manifest.xml file in a directory
is significant: any directory within your ROS package path that
contains a manifest.xml file is considered to be a package." Ergo:
making Orocos components ROS packages does not add any dependency on
ROS.

> (we have autotools,
> qmake and "undefined types" of packages here) and require the user to
> import the packages (not acceptable when you have one git repository per
> package, as should be under git, even less acceptable when you need to
> download tarballs). Moreover, it leads to non-standalone CMake packages,
> since you *cannot* build the ROS packages as-is outside of a ROS tree (I
> personally feel it is important to have standalone, standard, cmake
> packages).

I too think standalone is important. ROS does not violate this. Heck,
even KDL is a heavily used ROS package, yet no code in the KDL trunk
resembles ROS.

>
>> Then there's ROS with genmsg and a pile of predefined message types and tools
>> that can read/use these types. The first thing an Orocos user could do is to use ros data types in his components.
> <snip how to integrate ROS into Orocos>
>
> And how standalone is genmsg by the way ? How do you get the C++ base
> that the generated types need to be used without having to fork the ROS
> package (remember, not standalone CMake ...) ?

You really need to rewrite this email without the standalone argument.
To confirm: if an Orocos component depends on a ROS msg, it certainly
will depend on the ROS core tools and on the ROS package that defines
the msg. That's the choice of the component writer at that time
(remember 3rd party types).

>
> For the record, there is orogen with -- maybe not a pile, but quite a
> few -- predefined message types and the tools that can read/use these
> types ;-)
>
>> My personal commitment would go to option 2. Since far-going ROS-Orocos
>> integration is on my road-map, it makes sense in my vision. I don't feel I
>> exclude anyone else here, and again, I believe there is a strong need for a
>> native C++-to-typekit tool too. Another personal opinion is that orogen needs
>> to be split into two parts: a rttypegen part and a deployment/component
>> specification part (the latter may depend on the former though). Sylvain
>> already proposed a prototype earlier, I didn't go further on that yet. I would
>> also hope that rttypegen does not require any modification of the original
>> headers.
>>
> On the rtttypegen/orogen split: I don't have any issue with that. I do
> see the value in having two command-line tools, but don't see the need
> to split the software package.

I was not implying that the software package should be split, just the
command-line tools.

> On the integration of ROS datatypes into
> orogen, I would not mind parsing it from typelib. This typelib thing is
> actually important to me, because if I do add support for it in typelib
> then I get the Ruby bindings for free. And Ruby is what I use for all
> the advanced deployment and supervision.

Maybe you should hold off on that until it's clear how hard I hit the wall :-)

Anyway, we need to standardize/share the tooling as much as possible.
I'm not at all against typelib, and I'm not at all against genmsg/ROS, but
we need a uniform way of providing these tools to the users (and to
ourselves). I'm dreaming of stacks representing these variations of
tools, but maybe that's a bridge too far for now.

Peter

About user data types & transports in 2.0

Peter Soetens wrote:
> On Tue, Mar 9, 2010 at 17:58, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>
>> Peter Soetens wrote:
>>
>>> <snip>

>>> A. Typekit generation
>>> There are only two ways for *generating* a typekit for C++ class T: a. by
>>> defining it in an idl-like language and generating class T and companion
>>> Typekit from it (CORBA, ROS, OpenRTM) and b. by parsing class T and generating
>>> a typekit from it (orogen).
>>>
>>> RTT doesn't exclude any of these, so both may coexist, and even should co-
>>> exist. Preferably, a front-end tool manages this and knows in what to generate
>>> in which case. the tool needs to run on a bunch of platforms and is written in
>>> an interpreted language which excels at parsing/generating text. I'm only a
>>> dumb C++ programmer, and I would have chosen Python because it's widely
>>> available and others (ROS, OpenRTM) rely on Python too for their tooling. On
>>> the other hand, orogen is ruby, Markus is using ruby too to generate
>>> components from ecore models. I had more difficulty than acceptable to install
>>> ruby with the correct gems to get orogen working. I'm a bit worried here.
>>>
>>>
>> orogen needs typelib, and it is therefore *not* a Ruby-only library
>>
>
> I was specifically writing about *ruby*, not typelib. I got
> errors/exceptions when installing gems.
>
>
>> (ergo, you can't get it to work with gem only). The main reason why I
>> picked typelib when I wrote orogen was that typelib *was there*, is a
>> good intermediate representation for value-types, offers self-describing
>> C++ types and the manipulation tools that come with it (endian swapping,
>> fast marshalling/demarshalling *from C++*) -- and offers transparent
>> bridging with Ruby, which is critical for me.
>>
>
> I have nothing against typelib.
>
>
>> The bottom line is: to make the use of the RTT really streamlined, one
>> would have to use a meta-build system (like autoproj or rosbuild). I
>> don't like rosbuild because it is CMake-centric
>>
>
> This is a misunderstanding. ros provides cmake macros for their ros
> packages, but a package may have its own build system and choose not
> to use them (afaikt). For example, the orocos-rtt ROS package does not
> include any of these cmake macros, just the manifest.xml file. From
> the ROS website: "The presence of a manifest.xml file in a directory
> is significant: any directory within your ROS package path that
> contains a manifest.xml file is considered to be a package." ergo:
> making Orocos components ros packages does not add any dependency to
> ROS.
>
>
>> (we have autotools,
>> qmake and "undefined types" of packages here) and require the user to
>> import the packages (not acceptable when you have one git repository per
>> package, as should be under git, even less acceptable when you need to
>> download tarballs). Moreover, it leads to non-standalone CMake packages,
>> since you *cannot* build the ROS packages as-is outside of a ROS tree (I
>> personally feel it is important to have standalone, standard, cmake
>> packages).
>>
>
> I think too standalone is important. ROS does not violate this. Heck,
> even KDL is a heavily used ROS package, yet no code in the kdl trunk
> resembles ROS.
>
You just point out my main issue with ROS: it is easy to integrate
something with ROS, but hard to get something out of ROS to reuse in a
different environment.

They even say it themselves: what they favor is the "extraction of code
out of ROS packages to be reused elsewhere". We all know that having to
*extract* is bad as it means that you have to fork the package (yuk).

But I think we all now have a clear understanding of the different
options/positions. I propose that we all think about it for a while, ask
clarification questions (if there are any), and discuss it when we meet ;-)

About user data types & transports in 2.0

On Wednesday 10 March 2010 10:59:10 Sylvain Joyeux wrote:
> Peter Soetens wrote:
> > On Tue, Mar 9, 2010 at 17:58, Sylvain Joyeux <sylvain [dot] joyeux [..] ...>
wrote:
> >> Peter Soetens wrote:
> >>> <snip>

> >>> A. Typekit generation
> >>> There are only two ways for *generating* a typekit for C++ class T: a.
> >>> by defining it in an idl-like language and generating class T and
> >>> companion Typekit from it (CORBA, ROS, OpenRTM) and b. by parsing class
> >>> T and generating a typekit from it (orogen).
> >>>
> >>> RTT doesn't exclude any of these, so both may coexist, and even should
> >>> co- exist. Preferably, a front-end tool manages this and knows in what
> >>> to generate in which case. the tool needs to run on a bunch of
> >>> platforms and is written in an interpreted language which excels at
> >>> parsing/generating text. I'm only a dumb C++ programmer, and I would
> >>> have chosen Python because it's widely available and others (ROS,
> >>> OpenRTM) rely on Python too for their tooling. On the other hand,
> >>> orogen is ruby, Markus is using ruby too to generate components from
> >>> ecore models. I had more difficulty than acceptable to install ruby
> >>> with the correct gems to get orogen working. I'm a bit worried here.
> >>
> >> orogen needs typelib, and it is therefore *not* a Ruby-only library
> >
> > I was specifically writing about *ruby*, not typelib. I got
> > errors/exceptions when installing gems.
> >
> >> (ergo, you can't get it to work with gem only). The main reason why I
> >> picked typelib when I wrote orogen was that typelib *was there*, is a
> >> good intermediate representation for value-types, offers self-describing
> >> C++ types and the manipulation tools that come with it (endian swapping,
> >> fast marshalling/demarshalling *from C++*) -- and offers transparent
> >> bridging with Ruby, which is critical for me.
> >
> > I have nothing against typelib.
> >
> >> The bottom line is: to make the use of the RTT really streamlined, one
> >> would have to use a meta-build system (like autoproj or rosbuild). I
> >> don't like rosbuild because it is CMake-centric
> >
> > This is a misunderstanding. ros provides cmake macros for their ros
> > packages, but a package may have its own build system and choose not
> > to use them (afaikt). For example, the orocos-rtt ROS package does not
> > include any of these cmake macros, just the manifest.xml file. From
> > the ROS website: "The presence of a manifest.xml file in a directory
> > is significant: any directory within your ROS package path that
> > contains a manifest.xml file is considered to be a package." ergo:
> > making Orocos components ros packages does not add any dependency to
> > ROS.
> >
> >> (we have autotools,
> >> qmake and "undefined types" of packages here) and require the user to
> >> import the packages (not acceptable when you have one git repository per
> >> package, as should be under git, even less acceptable when you need to
> >> download tarballs). Moreover, it leads to non-standalone CMake packages,
> >> since you *cannot* build the ROS packages as-is outside of a ROS tree (I
> >> personally feel it is important to have standalone, standard, cmake
> >> packages).
> >
> > I think too standalone is important. ROS does not violate this. Heck,
> > even KDL is a heavily used ROS package, yet no code in the kdl trunk
> > resembles ROS.
>
> You just point out my main issue with ROS: it is easy to integrate
> something with ROS, but hard to get something out of ROS to reuse in a
> different environment.

I agree partly. ROS nodes/application code is difficult to reuse in non-ROS
settings. We're not going for that stuff. However, Geoff is proving you wrong
wrt the build system and package management. He's really starting to make a
strong case:
* ROS supports federated repositories and pulling packages from version
control.
* ROS supports locating packages and resolving dependencies cross-repository.
* The ROS build tools now depend on manifest.xml and Makefile. Geoff will
extend this to:
- manifest.xml: like now, but without the cflags/ldflags build specifics
- package.pc: contains the build specifics in pkg-config format (he ported
pkg-config to Windows using Python)
- Makefile: used in case 'buildpackage.py' is not present; defines basic build
steps
- buildpackage.py: serves the same purpose as the Makefile, but is
cross-platform (read: Windows)

* The ROS tooling is a perfect fit for 'software in development'; contrast that
with Debian packages, which are for 'released software'.

For my part, I'm convinced that we can even pull the message types out of ROS
for integration into non-ROS software. We'll write a generator that does ROS
IDL to C++ struct (without base class/ROS dependencies). ROS and Orocos in
turn will provide a generator that takes the ROS IDL and converts it to
'messages' or 'typekits' respectively. The user code only uses the original
C++ structs; the behind-the-scenes middleware code uses the transport code.
Libraries like GearBox (and Orocos components!) would benefit greatly from such
an infrastructure. This would even work with orogen, since orogen parses C++
structs.
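
To illustrate what such a ROS-IDL-to-plain-C++ generator could emit, here is a
sketch for a made-up message definition. The message and the mapping rules are
assumptions for the example, not an existing ROS message or tool output.

    // Illustrative only: a hypothetical LaserRange.msg
    //
    //   float32   angle_min
    //   float32   angle_increment
    //   float32[] ranges
    //
    // and the plain C++ struct a generator could emit for it: no ROS base
    // class, no ROS headers, only the standard library.
    #include <vector>

    struct LaserRange {
        float angle_min;
        float angle_increment;
        std::vector<float> ranges;   // float32[] mapped to std::vector<float>

        LaserRange() : angle_min(0.0f), angle_increment(0.0f) {}
    };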

The only problem I see is the Header message, where time is mapped to
ros::Time instead of a standard C++ type or even a Boost type. ros::Time is
not header-only and also mixes in OS-specific calls (it's more than data).
It's an annoying design decision they made, but it's the only annoying one I
found so far. One good thing: they did standardize the representation: uint32
secs, uint32 nsecs. They'll have to accept a time data type in their messages
that is convertible to/from ros::Time and is header-only. They could go for it.
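
A sketch of the kind of header-only time type meant here, under the assumption
that ros::Time keeps its public sec/nsec fields; the TimeStamp name and the
HAVE_ROS guard are made up for the example.

    // Hypothetical header-only time type with the standardized representation;
    // plain data, so it can live in generated structs without ROS dependencies.
    #include <stdint.h>

    struct TimeStamp {
        uint32_t secs;
        uint32_t nsecs;
        TimeStamp() : secs(0), nsecs(0) {}
    };

    #ifdef HAVE_ROS              // only in ROS-enabled builds (assumed guard)
    #include <ros/time.h>
    inline ros::Time toRos(const TimeStamp& t)    { return ros::Time(t.secs, t.nsecs); }
    inline TimeStamp fromRos(const ros::Time& t)  { TimeStamp s; s.secs = t.sec; s.nsecs = t.nsec; return s; }
    #endif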

We'll propose both patches on/with the ROS mailing list.

>
> They even say it themselves: what they favor is the "extraction of code
> out of ROS packages to be reused elsewhere". We all know that having to
> *extract* is bad as it means that you have to fork the package (yuk).

I'm not a forking-kind-of-guy.

>
> But I think we all now have a clear understanding of the different
> options/positions. I propose that we all think about it for a while, ask
> clarification questions (if there are any), and discuss it when we meet ;-)

My first priority is getting the CORBA transport right, that's why I wanted to
start this discussion now such that it has time to mature.

Peter

About user data types & transports in 2.0

Peter Soetens wrote:
>>
>> You just point out my main issue with ROS: it is easy to integrate
>> something with ROS, but hard to get something out of ROS to reuse in a
>> different environment.
>>
>
> I agree partly. Ros nodes/application code is difficult to reuse in non-ROS.
> We're not going for that stuff. However, Geoff is proving you wrong wrt the
> build system and package management. He's really starting to make a strong
> case:
> * ROS supports federated repositories and pulling packages from version
> control
>
Can you point me to some pages about that? I just quickly googled it
and could not find anything about "pulling packages from version
control". The only thing I could find is that someone can *manually* do
"svn co roslocate package". Not good enough when the number of packages
skyrockets.

> * ROS supports locating packages and resolving dependencies cross-repository.
>
> * The ros-build tools now depend on manifest.xml and Makefile. Geoff will extend
> this to :
> - manifest.xml : like now but without the cflags/ldflags build specifics
> - package.pc : contains the build specifics in pkg-config format (he ported
> pkg-config to windows using Python)
>
What does "contain the build specifics" mean?
> - Makefile : used in case 'buildpackage.py' is not present. defines basic build
> steps.
> - buildpackage.py : serves same purpose as Makefile, but is cross-platform
> (read: windows).
>
Not the point. The thing is that ROS packages are encouraged to use ROS
tools in their build system (roslocate, rosdep, ...). Therefore, my
guess (and it is only a guess!) is that some (most?) packages can't be
easily used without having those tools around, because the CMake code is
not enough to find dependencies. Obviously, one can add a normal package
in there, build it and keep it standalone. The issue was in the other
direction.

> * The ROS tooling is perfectly fit for 'software in development', contrast that
> to Debian packages, which is for 'released software'.
>
So does autoproj, which is completely standalone, does not require any
specific file in the package, supports importing CVS, SVN and git alike
(including for its configuration), supports building autotools, CMake,
plain make, genom, orogen and Ruby packages, is cross-platform, has a
federated repository model and can make coffee(*).

In any case, I don't think people at DFKI will be changing build systems
again (as they are starting to get accustomed to autoproj *and* it fits
their needs). Now, autoproj does not need anything special in the package
source, so both build systems can safely coexist as long as the
package's CMake is standalone (did I already mention that? ;-))
> From my part, I'm convinced that we can even pull the message types out of ROS
> for integration into non-ros software. We'll write a generator that does ROS-
> idl to C++ struct (without base class/ROS dependencies). ROS and Orocos in
> turn will provide a generator that takes the ROS-idl and converts it to
> 'messages' or 'typekits' respectively. The user code only uses the original
> C++ structs, the behind the scenes middleware code uses the transport code..
> Libraries like GearBox (and Orocos components!) would benefit greatly from such
> an infrastructure. This would even work with orogen, since orogen parses C++
> structs.
>
Sounds fine.
>> They even say it themselves: what they favor is the "extraction of code
>> out of ROS packages to be reused elsewhere". We all know that having to
>> *extract* is bad as it means that you have to fork the package (yuk).
>>
>
> I'm not a forking-kind-of-guy.
>
I got that. To make this whole plan fruitful, one would actually need to
have a "message package" where there are *only* message definitions (no
actual code), so that people of ROS-package "X" don't start making
changes that impact Orocos-package "Y" (and the other way around).
>> But I think we all now have a clear understanding of the different
>> options/positions. I propose that we all think about it for a while, ask
>> clarification questions (if there are any), and discuss it when we meet ;-)
>>
>
> My first priority is getting the CORBA transport right, that's why I wanted to
> start this discussion now such that it has time to mature.
>
Agreed.

Common data types (was About user data types & transports in 2.0)

This is a separate issue from package management.

>> For my part, I'm convinced that we can even pull the message types out of
>> ROS for integration into non-ROS software. We'll write a generator that does
>> ROS IDL to C++ struct (without base class/ROS dependencies). ROS and Orocos
>> in turn will provide a generator that takes the ROS IDL and converts it to
>> 'messages' or 'typekits' respectively. The user code only uses the original
>> C++ structs; the behind-the-scenes middleware code uses the transport code.
>> Libraries like GearBox (and Orocos components!) would benefit greatly from
>> such an infrastructure. This would even work with orogen, since orogen
>> parses C++ structs.
> Sounds fine.

Peter and I spent some time the other day discussing this, and here's
what we came up with:

A package supplies common data types in some IDL format.

A package supplies a generator that turns the IDL into structures for
the developer's target language (C structs, etc).

A package supplies a functional library written in C++. It depends on
the common messages package and the generator package, and part of its
build is to use the generator to create C structs for the data types it
uses. The library uses these data types internally and through its API.
The package also supplies several component wrappers for different
frameworks.

A package supplies a generator that turns the IDL into data
serialisation code for a specific framework.

A package supplies a transport for a framework (maybe this is a separate
package, maybe it's the entire framework, it doesn't really matter).
This depends on the common data types package and any other packages
necessary for the framework. One of these will be the framework's
serialisation generator.

If I use the functional library directly, no framework, then I only need
the common to-C-structs generator package and the common data types
package. If I use the Orocos component for this library, I also need
Orocos's serialisation generator and its transport stuff. If I use the
ROS component, I need their serialisation generator and their transport
stuff. It is the responsibility of each framework to implement its own
generator for serialisation.
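
A small sketch of the 'functional library' layer in this scheme, reusing the
hypothetical LaserRange struct from earlier in the thread; all names here are
invented for the example. The point is that the library's API only touches the
plain generated data types, so the framework wrappers stay thin.

    // Illustrative only: a framework-free functional library built on the
    // plain generated data types. Orocos/ROS/OpenRTM wrappers just move data
    // in and out of calls like this one.
    #include <cstddef>
    #include "LaserRange.h"   // hypothetical header generated from the common IDL

    namespace scanutils {

    // Pure computation on the common data type; returns -1.0 for an empty scan.
    inline double closestObstacle(const LaserRange& scan)
    {
        double min = -1.0;
        for (std::size_t i = 0; i < scan.ranges.size(); ++i)
            if (min < 0.0 || scan.ranges[i] < min)
                min = scan.ranges[i];
        return min;
    }

    } // namespace scanutils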

Currently Orocos and Gearbox will work well with this approach. OpenRTM
will require some changes, but those are changes we plan to make soon
anyway. ROS will require some changes (they need to change their message
class to be a template type instead of using inheritance, and the time
stamp stuff), but we think that we can convince them to accept this change.

I think I described all that correctly. I'm sure Peter will correct me
if I didn't.

We like the idea of plain data structures separate from
serialisation/management because the serialisation stuff always changes
depending on where you are using the data.

>>> They even say it themselves: what they favor is the "extraction of code
>>> out of ROS packages to be reused elsewhere". We all know that having to
>>> *extract* is bad as it means that you have to fork the package (yuk).
>>>
>>
>> I'm not a forking-kind-of-guy.

I've pointed out why this sucks to some of the WG people myself. Many
agree, but it's always a question of educating developers, and
researchers don't usually have the time or patience to think about the
usability of their code outside their immediate needs. This is something
I learned with Gearbox: people like the idea but few have the time.

> I got that. To make this whole plan fruitful, one would actually need to
> have a "message package", where there is *only* package definitions (no
> actual code), so that people of ROS-package "X" don't start making
> changes that impact Orocos-package "Y" (and the other way around).

Yep, that's the goal.

Geoff

Common package management (was About user data types & transports in 2.0)

This is probably going to be long and rambling. I have a tendency to get
carried away in these sorts of emails. I apologise in advance. :)

On 17/03/10 02:05, Sylvain Joyeux wrote:
> Peter Soetens wrote:
>>>
>>> You just point out my main issue with ROS: it is easy to integrate
>>> something with ROS, but hard to get something out of ROS to reuse in a
>>> different environment.

This is an internal ROS issue, not related to the infrastructure. ROS
package creators do not typically follow good practice and split
functional code from the framework code into a separate library that does
not depend on anything ROS (CMake files or otherwise). Whatever "package
manager" is used, you can still get this issue.

>> I agree partly. Ros nodes/application code is difficult to reuse in non-ROS.
>> We're not going for that stuff. However, Geoff is proving you wrong wrt the
>> build system and package management. He's really starting to make a strong
>> case:
>> * ROS supports federated repositories and pulling packages from version
>> control
>>
> Can you point me to some pages about that ? I just quickly googled it
> and could not find anything about "pulling packages from version
> control". The only things I could find is that someone can *manually* do
> "svn co roslocate package". Not good enough when the number of packages
> skyrocket.

rosinstall. It's poorly named (probably because its original purpose is
to install ROS), but it downloads packages and stacks based on
YAML descriptions.

>> * ROS supports locating packages and resolving dependencies cross-repository.
>>
>> * The ros-build tools now depend on manifest.xml and Makefile. Geoff will extend
>> this to :
>> - manifest.xml : like now but without the cflags/ldflags build specifics
>> - package.pc : contains the build specifics in pkg-config format (he ported
>> pkg-config to windows using Python)
>>
> What does "contain the build specifics" mean ?
>> - Makefile : used in case 'buildpackage.py' is not present. defines basic build
>> steps.
>> - buildpackage.py : serves same purpose as Makefile, but is cross-platform
>> (read: windows).
>>

To clarify what Peter said...

I have been looking at a more generic package manager along the lines of
what ROS offers. As well as ROS's tools, I've also been investigating
using Portage, and have been writing up a proposal. Although I wasn't
planning on announcing anything until I had a finished proposal and a
working prototype using *something* (not necessarily ROS), it seems that
this discussion is getting to the point where it's worth talking about.

As an experiment, I have looked at creating/using a stand-alone Portage
(see the Gentoo Prefix project), and at splitting out the ROS package
management stuff.

I have already split the main package management infrastructure
(rosmake, rosdep, rospack, and so on) out of ROS. They build and install
separate from ROS, get placed in standard bin/ etc directories, and can
be used completely stand-alone. The idea is that you install this set of
software tools, and then use them for whatever framework(s) you like -
or even no framework (e.g. Gearbox).

The ROS tools version currently uses the standard ROS stuff, but it
works right now (on my laptop - no repository yet) for anything I care
to throw at it. This was 2 hours' work.

In a separate project (porting ROS to Windows) I have replaced the
Makefile dependency of rosmake with a call to a Python script, the
internal design of which is completely up to the package developer to
choose. You can see this in action in a branch on the ROS SVN.

The end goal is to have some infrastructure toolkit that a user
downloads and installs, and from there they go off and install
frameworks, packages, etc., as they please. I (and Markus agrees with me
on this point) like the idea of a standard manifest specification that
can work with multiple install systems, as this gives a bit of freedom
of choice (see Gentoo's Package Manager Specification project, which
aims to allow different tools with different advantages to work with the
same packages). Whether or not this is practical is still up for debate,
but it would mean I could create my mega-giga-package-manager using
Gentoo Prefix :). As a first step towards this goal, and whether or
not everyone ends up using it, it is still beneficial for the ROS tools
to be broken out of the framework into a separate toolkit.

The .pc file, by the way, is to fix a bad design decision in the ROS
manifest specification: manifests contain the C flags and link flags for
other packages, in GCC format. This is pretty awful. We propose instead
using pkg-config for this, since it already handles things like putting
flags into the right format for compilers. pkg-config, by the way, is
another option for package introspection: it can already do things like
tell you who wrote a package, what its dependencies are, and so on. In
ROS terminology, it would replace rospack and some of rosdep.

> Not the point. The thing is that ROS packages are encouraged to use ROS
> tools into their build system (roslocate, rosdep, ...). Therefore, my
> guess (and it is only a guess !) is that some (most ?) packages can't be
> easily used without having those tools around, because the cmake code is
> not enough to find dependencies. Obviously, one can add a normal package
> in there, build it and keep it standalone. The issue was on the other
> direction.

Regarding using rospack, etc. in build scripts, this is, again, an
internal ROS issue and not something we can solve other than by saying
"please don't do that." But having a common set of tools that does not
require ROS to be installed certainly makes it less painful. Regarding
ROS users being encouraged to make ROS-only packages, that's also a
question of education, but I've discussed that already...

My ideal package contains a build script for the developer's chosen
build system, which builds a stand-alone library with a well-defined
(and well-documented) API. In sub-directories, the developer places
component wrappers for the frameworks they wish to support (an OpenRTM
subdirectory, an Orocos subdirectory, a ROS subdirectory, an ORCA
subdirectory, and so on). Using options set in the build system (e.g.
via ccmake), I can enable/disable the components for the frameworks I
do/don't want. Naturally, the build system will also perform checks to
see what is present and so what can be compiled. I then provide a
manifest file and a really simple interface script to the package
manager, because I was sensible and made my build system utilisation
follow common methods.

My package can now be distributed to anyone who wishes to use it. The
library can be used without any frameworks, or a framework can be
utilised. The code is all in one place, and a new framework is easy to add.

I plan on discussing moving towards this ideal layout with the other
Gearbox maintainers as soon as the package manager question is settled.

>> * The ROS tooling is perfectly fit for 'software in development', contrast that
>> to Debian packages, which is for 'released software'.
>>
> so does autoproj, which is completely standalone, does not require any
> specific file in the package, supports importing cvs, svn and git alike
> (including for its configuration), supports building autotools, cmake,
> plain make, genom, orogen, ruby packages, is cross platform, has a
> federated repository model and can make coffee(*).
>
> In any case, I don't think people at DFKI will be changing build system
> again (as they start to be accustomed to autoproj *and* it fits their
> needs). Now, autoproj does not need anything special in the package
> source, so both build systems can safely coexist as long as the
> package's cmake is standalone (did I already mention that ? ;-))

I, too, like the idea of standalone CMake, but there are benefits to
having some common CMake modules for a framework, where compiling
components is likely to be the same across many packages. My preferred
approach is that such a collection of CMake scripts is just another
package that the component's package depends on. The beauty of having a
common package manager is that *everything* can be a package, and the
package manager will take care of making sure what your component needs
is available.

If you made it to the end of this email, congratulations. Hopefully it
made some kind of sense...

Geoff

Common package management (was About user data types & transports in 2.0)

For the sake of the discussion, here's how autoproj does things:
* packages can have manifest.xml files and use pkg-config to propagate
cflags and ldflags (our packages *do*, but since we build "normal"
software as well, some don't)
* packages should be buildable without autoproj. This does *not* mean
that there can't be common CMake modules, only that building the package
should not depend on non-standard external tools that are not part of
their build system (pkg-config *is* standard on Linux, ros* is not).
* we *split* functional libraries and orogen modules. To Geoff: I
really don't like the idea of having "subdirectories" IN THE PACKAGE for
each transport/... . Simple reason: keeping them separated *favors*
splitting functionality and glue code.

Obviously, autoproj also requires configuration files. Nothing is added
in the packages. Instead:
* how to build packages is declared in so-called "package sets".
These package sets can be stored in a version control repository.
* additionally, the package sets tell where to get what. This
information can be overridden by other package sets and/or the main
autoproj configuration.
* a manifest file tells autoproj which package sets one wants to get,
and lets you cherry-pick packages from them if you don't want everything.
Moreover, since everything has a Ruby API, one can customize the
installation through this API. autoproj does the rest (import, install,
documentation building).

Additional "niceties" (that are at least very useful for us): support
for configuration parameters. This requires that packages sometimes
declare dependencies dynamically as well. We have for instance the
ability to build Orocos with or without CORBA. Moreover, since we need
to mount a CIFS directory to get access to our git repositories, we also
need the URLs to be installation-dependent.

The only thing I'm personally missing from autoproj is a unification of
the OS dependencies with the packages. The idea is that, on some
systems, one wants to build a package instead of using the OS-provided
one. In that case, I want autoproj to help with the building (instead of
relying on horrible YAML + shell scripts as rosdep does).

Now, why do I think that it is way better to have the build "scripts"
outside of the packages? Simply because *most* of the software out
there is *not* robot-specific and will probably never have a
manifest.xml and a Python build file. Since I don't want to have two-thirds
of the packages managed by my "build manager" and one third not, I'd
rather design the whole thing so that it does not require the packages
to be designed with the "build manager" in mind.

Common package management (was About user data types & transports in 2.0)

On 17/03/10 20:53, Sylvain Joyeux wrote:
> * we *split* functional libraries and orogen modules. To Geoff: I really
> don't like the idea of having "subdirectories" IN THE PACKAGE for each
> transport/... . Simple reason: having it separated *favors* splitting
> functionality and glue code.

We tried that approach with Gearbox. It didn't gain much acceptance,
mainly for organisational reasons. People didn't like splitting up their
work flow. Perhaps without the centralisation of Gearbox it would work
better, but I still like having related code in one place. Splitting up
functionality and glue code is more of an education issue than a package
manager issue.

Either way you prefer to do it, though, the decision on where to put the
functional and the glue code has no bearing on the design of a package
manager, it is purely a decision by the creator of a package.

Geoff

Common package management (was About user data types & transports in 2.0)

Geoffrey Biggs wrote:
> On 17/03/10 20:53, Sylvain Joyeux wrote:
>
>> * we *split* functional libraries and orogen modules. To Geoff: I really
>> don't like the idea of having "subdirectories" IN THE PACKAGE for each
>> transport/... . Simple reason: having it separated *favors* splitting
>> functionality and glue code.
>>
>
> We tried that approach with Gearbox. It didn't gain much acceptance,
> mainly for organisational reasons. People didn't like splitting up their
> work flow.
That's the issue: they should not have to see it as splitting their work
flow. If they do see it that way, it means that the integration module
and the functional library are too tied together in their mind and
probably in the code -- and that's why the policy here is to force
people to split.
> Perhaps without the centralisation of Gearbox it would work
> better, but I still like having related code in one place.
Then have one huge package, as everything is "related" ;-)
> Splitting up
> functionality and glue code is more of an education issue than a package
> manager issue. Either way you prefer to do it, though, the decision on where to put the
> functional and the glue code has no bearing on the design of a package
> manager, it is purely a decision by the creator of a package.
>
Not entirely true, as any code generation you have will have to be
integrated into your package management. I took the option, with oroGen,
to follow Genom's legacy of having orogen generate the CMake code ...
integrating something like that into a pure CMake package would be
challenging.

Common package management (was About user data types & transports in 2.0)

On 17/03/10 22:08, Sylvain Joyeux wrote:
> That's the issue: they should not have to see it as splitting their work
> flow. If they do see it that way, it means that the integration module
> and the functional library are too tied together in their mind and
> probably in the code -- and that's why the policy here is to force
> people splitting.

Yeah, that was the approach we were pushing with Gearbox. We wanted to
really encourage people to think about the separation of functional code
and frameworks. Everyone said it was a good idea, but when you can't
force it on people via policy...

I do think that people like WG could do more to encourage this sort of
policy, though. We try to with OpenRTM (via our tutorials and book), but
as we currently don't have any organisation of components at all, it
doesn't really work.

>> Splitting up
>> functionality and glue code is more of an education issue than a package
>> manager issue. Either way you prefer to do it, though, the decision on
>> where to put the
>> functional and the glue code has no bearing on the design of a package
>> manager, it is purely a decision by the creator of a package.
>>
> Not entirely true, as any code generation you have will have to be
> integrated into your package management. I took the option, with oroGen,
> to follow Genom's legacy of having orogen generate the CMake code ...
> integrating something like that it into a pure CMake package would be
> challenging.

It is true. The package description that says how to build the package,
used by the package manager, may know that it needs to perform some kind
of generation step as part of compiling, but that's not a part of the
_package manager_, nor is it integrated _into_ the package manager. As
long as the package manager has some entry point to the package, that's
all that matters.

ebuilds, ports files, whatever Arch calls its bash-based files, Antony's
robotpkg files, the Makefiles used by ROS - they all do this, and from
your description of autoproj, it does, too. It's a single entry point
that the package manager uses to say to a package "build thyself"
(possibly with trumpet fanfare). Anything beyond that, including code
generation, is up to the package developer to specify. The package
manager itself can make no assumptions, in order to provide the greatest
flexibility.

Any sensible package developer will put all the hard work in their
actual build system, with the PM interface file (for lack of a better
term) just being a sequence of calls to different targets (e.g. generate
followed by build_all or something). For your example, you would put the
commands to generate the CMake code into the PM interface file, followed
by a call to CMake. The package manager has no awareness that this is
what is happening.

Geoff

Common package management (was About user data types & transports in 2.0)

Geoffrey Biggs wrote:
> On 17/03/10 22:08, Sylvain Joyeux wrote:
>
>> That's the issue: they should not have to see it as splitting their work
>> flow. If they do see it that way, it means that the integration module
>> and the functional library are too tied together in their mind and
>> probably in the code -- and that's why the policy here is to force
>> people splitting.
>>
>
> Yeah, that was the approach we were pushing with Gearbox. We wanted to
> really encourage people to think about the separation of functional code
> and frameworks. Everyone said it was a good idea, but when you can't
> force it on people via policy...
>
> I do think that people like WG could do more to encourage this sort of
> policy, though. We try to with OpenRTM (via our tutorials and book), but
> as we currently don't have any organisation of components at all, it
> doesn't really work.
>
>
>>> Splitting up
>>> functionality and glue code is more of an education issue than a package
>>> manager issue. Either way you prefer to do it, though, the decision on
>>> where to put the
>>> functional and the glue code has no bearing on the design of a package
>>> manager, it is purely a decision by the creator of a package.
>>>
>>>
>> Not entirely true, as any code generation you have will have to be
>> integrated into your package management. I took the option, with oroGen,
>> to follow Genom's legacy of having orogen generate the CMake code ...
>> integrating something like that it into a pure CMake package would be
>> challenging.
>>
>
> It is true. The package description that says how to build the package,
> used by the package manager, may know that it needs to perform some kind
> of generation step as part of compiling, but that's not a part of the
> _package manager_, nor is it integrated _into_ the package manager. As
> long as the package manager has some entry point to the package, that's
> all that matters.
>
> ebuilds, ports files, whatever Arch calls its bash-based files, Antony's
> robotpkg files, the Makefiles used by ROS - they all do this, and from
> your description of autoproj, it does, too. It's a single entry point
> that the package manager uses to say to a package "build thyself"
> (possibly with trumpet fanfare). Anything beyond that, including code
> generation, is up to the package developer to specify. The package
> manager itself can make no assumptions, in order to provide the greatest
> flexibility.
>
> Any sensible package developer will put all the hard work in their
> actual build system, with the PM interface file (for lack of a better
> term) just being a sequence of calls to different targets (e.g. generate
> followed by build_all or something). For your example, you would put the
> commands to generate the CMake code into the PM interface file, followed
> by a call to CMake. The package manager has no awareness that this is
> what is happening.
>
There's a power vs. flexibility problem here.

autoproj does *not* work the way you describe. The "preferred" way to
work with it is to integrate different types of packages as subclasses
of Autobuild::Package and use that type to build whatever package you
have handy. I.e. there *is* an orogen-specific and a genom-specific class
in autoproj (to name a few).

Obviously, autoproj can integrate your scheme (this is the simplest
package type). Now, what you will lose is the ability to do partial
builds -- i.e. to avoid calling the code generation, the CMake
configuration and/or the build/install phase if calling them is
unnecessary. As I said, this is a tradeoff.

Interestingly, it allows working around a build system's limitations.
For instance, I added to our CMake handler the ability to remove the
CMakeCache.txt before any needed reconfiguration. This is to work around
CMake's aggressive caching of pkg-config values, which meant that
changes to .pc files were not propagated. You *do* lose this kind of
"intrinsic knowledge of the underlying build system" if you rely on the
developer to provide a script.

Finally, there are things I added to the VCS handling that rosinstall
does not give me: "autoproj status", which checks the state of the
current checkout against the repository, and proper handling of git branches
(i.e. being able to switch branches on already exported packages). You
can't live without these if you have more than 30 packages.

Common package management (was: About user data types&transports in 2.0)

On 17/03/10 22:45, Sylvain Joyeux wrote:
> There's a power vs. flexibility problem here.
>
> autoproj does *not* work the way you describe. The "preferred" way to
> work with it is to integrate different types of packages as subclasses
> of Autobuild::Package and use that type to build whatever package you
> have handy. I.e. there *is* a orogen-specific and a genom-specific class
> in autoproj (to name a few).
>
> Obviously, autoproj can integrate your scheme (this is the simplest
> package type). Now, what you will lose is the ability to do partial
> builds -- i.e. avoid calling the code generation, the cmake
> configuration and/or the build / install phase if calling them is
> unnecessary. As I said, this is a tradeoff.
>
> Interestingly, it allows to work around the build systems limitations.
> For instance, I added to our cmake handler the ability to remove the
> CMakeCache.txt before any needed reconfiguration. This is to work around
> cmake's aggressive caching of pkg-config values, which led to having
> changes to .pc files not being propagated. You *do* lose this kind of
> "intrinsic knowledge of the underlying build system" if you rely on the
> developer to provide a script.

This is an interesting approach, and I can certainly see the advantages
it offers. I imagine that, after a period of time, the package manager
would contain most of the common cases and adding a new one would be
uncommon. It's almost like a meta-build system combined with a package
manager, which gives intriguing new possibilities.

You can look at this approach from the other side, too. Portage ebuilds
inherit from a base ebuild file that defines the basics and the
defaults, and Portage comes with a large number of common ebuild types
(e.g. if you create a package that uses the KDE libraries to compile,
inherit the KDE ebuild stuff and you get all the macros and so on
necessary for your package to depend on and use KDE). Arch does
something similar for its packages. ROS does as well, providing some
default Makefiles that can be included in the package's one for rosmake
to interact with. Relying on the developer to provide a script doesn't
preclude providing lots of defaults they can use (the Windows stuff I
did for ROS also did this in one iteration).

> Finally, there is one thing I added to the VCS handling that rosinstall
> does not give me: "autoproj status", which checks the state of the
> current checkout vs. the repository, and proper handling of git branches
> (i.e. being able to switch branches on already exported packages). You
> can't live without it if you have more than 30 packages.

I didn't mean to say that rosinstall is a complete system. It's a tool
in its infancy, and it remains to be seen if WG decides they need
to take it to its logical conclusion.

Geoff

Common package management (was: About user data types&transports in 2.0)

Of course, by this I mean any sensible developer who can. There are
always cases where you can't, like your orogen-generated CMake files. I
didn't mean to imply that there was something bad about your case; just
that a PM interface file should not be treated as a replacement for a
build system (which it is in some of the worst ebuilds I've seen).

Geoff

> Any sensible package developer will put all the hard work in their
> actual build system, with the PM interface file (for lack of a better

About user data types&transports in 2.0

On Tuesday 16 March 2010 18:05:27 Sylvain Joyeux wrote:
> Peter Soetens wrote:
> >> You just point out my main issue with ROS: it is easy to integrate
> >> something with ROS, but hard to get something out of ROS to reuse in a
> >> different environment.
> >
> > I agree partly. Ros nodes/application code is difficult to reuse in
> > non-ROS. We're not going for that stuff. However, Geoff is proving you
> > wrong wrt the build system and package management. He's really starting
> > to make a strong case:
> > * ROS supports federated repositories and pulling packages from version
> > control
>
> Can you point me to some pages about that ? I just quickly googled it
> and could not find anything about "pulling packages from version
> control". The only things I could find is that someone can *manually* do
> "svn co roslocate package". Not good enough when the number of packages
> skyrocket.

It's roslocate that I had in mind. The ROS/WG track record shows that this kind
of management tool is high on the priority list, even if it is not fully
operational yet.

>
> > * ROS supports locating packages and resolving dependencies
> > cross-repository.
> >
> > * The ros-build tools now depend on manifest.xml and Makefile. Geoff will
> > extend this to :
> > - manifest.xml : like now but without the cflags/ldflags build specifics
> > - package.pc : contains the build specifics in pkg-config format (he
> > ported pkg-config to windows using Python)
>
> What does "contain the build specifics" mean ?

Well, the stuff that is typically in a .pc file: install dir, include paths, link
libraries, etc. That's now part of the manifest.xml file, which is plain wrong
in our C world (for example, RTT is compiled against an externally installed
boost).

>
> > - Makefile : used in case 'buildpackage.py' is not present. defines
> > basic build steps.
> > - buildpackage.py : serves same purpose as Makefile, but is
> > cross-platform (read: windows).
>
> Not the point. The thing is that ROS packages are encouraged to use ROS
> tools into their build system (roslocate, rosdep, ...). Therefore, my
> guess (and it is only a guess !) is that some (most ?) packages can't be
> easily used without having those tools around, because the cmake code is
> not enough to find dependencies.

I agree. That's why we need the .pc files, to provide this info back to
external projects instead of the manifest.xml file.

> Obviously, one can add a normal package
> in there, build it and keep it standalone. The issue was on the other
> direction.

Ok.

>
> > * The ROS tooling is perfectly fit for 'software in development',
> > contrast that to Debian packages, which is for 'released software'.
>
> so does autoproj, which is completely standalone, does not require any
> specific file in the package, supports importing cvs, svn and git alike
> (including for its configuration), supports building autotools, cmake,
> plain make, genom, orogen, ruby packages, is cross platform, has a
> federated repository model and can make coffee(*).

Are you making this up ? :-) It's not (only) technical superiority that I am
looking for, it's third-party maintenance of something I need but don't have
the expertise for. I don't want to pull your leg by throwing a 'ROS has more
users/maintainers' at you, because maybe they are going in the wrong
direction... but their build system is really moving towards the development
model I would like to have for Orocos. It does need the .pc files though, since
the RTT needs these to express the 'target' (xenomai, gnulinux, ...).

Also, I wouldn't mind if 'modern' robotics software had to provide a
(standardized) manifest.xml file in order to keep the tooling lean.

>
> In any case, I don't think people at DFKI will be changing build system
> again (as they start to be accustomed to autoproj *and* it fits their
> needs). Now, autoproj does not need anything special in the package
> source, so both build systems can safely coexist as long as the
> package's cmake is standalone (did I already mention that ? ;-))

I follow your concern. I might however provide ros-like cmake macros for
Orocos components. The dependency on them will be expressed in the
manifest.xml file, and components using such macros will depend on a rospack-
like tool (similar to depending on pkg-config to find the compile flags of
installed libs).

rosbuild users are equally hard to convince to leave their tool chain.

>
> > From my part, I'm convinced that we can even pull the message types out
> > of ROS for integration into non-ros software. We'll write a generator
> > that does ROS- idl to C++ struct (without base class/ROS dependencies).
> > ROS and Orocos in turn will provide a generator that takes the ROS-idl
> > and converts it to 'messages' or 'typekits' respectively. The user code
> > only uses the original C++ structs, the behind the scenes middleware code
> > uses the transport code.. Libraries like GearBox (and Orocos components!)
> > would benefit greatly from such an infrastructure. This would even work
> > with orogen, since orogen parses C++ structs.
>
> Sounds fine.

It means a lot when you agree.[*]

>
> >> They even say it themselves: what they favor is the "extraction of code
> >> out of ROS packages to be reused elsewhere". We all know that having to
> >> *extract* is bad as it means that you have to fork the package (yuk).
> >
> > I'm not a forking-kind-of-guy.
>
> I got that. To make this whole plan fruitful, one would actually need to
> have a "message package", where there is *only* package definitions (no
> actual code), so that people of ROS-package "X" don't start making
> changes that impact Orocos-package "Y" (and the other way around).

Yes, that's the idea !

>
> >> But I think we all now have a clear understanding of the different
> >> options/positions. I propose that we all think about it for a while, ask
> >> clarification questions (if there are any), and discuss it when we meet
> >> ;-)
> >
> > My first priority is getting the CORBA transport right, that's why I
> > wanted to start this discussion now such that it has time to mature.
>
> Agreed.
>

Peter

[*] Also when you don't :)

About user data types&transports in 2.0

Peter Soetens wrote:
> On Tuesday 16 March 2010 18:05:27 Sylvain Joyeux wrote:
>
>> Peter Soetens wrote:
>>
>>>> You just point out my main issue with ROS: it is easy to integrate
>>>> something with ROS, but hard to get something out of ROS to reuse in a
>>>> different environment.
>>>>
>>> I agree partly. Ros nodes/application code is difficult to reuse in
>>> non-ROS. We're not going for that stuff. However, Geoff is proving you
>>> wrong wrt the build system and package management. He's really starting
>>> to make a strong case:
>>> * ROS supports federated repositories and pulling packages from version
>>> control
>>>
>> Can you point me to some pages about that ? I just quickly googled it
>> and could not find anything about "pulling packages from version
>> control". The only things I could find is that someone can *manually* do
>> "svn co roslocate package". Not good enough when the number of packages
>> skyrocket.
>>
>
> It's roslocate that I had in mind. ROS/WG track record shows that this kind of
> management tools is high on the priority list, even if it is not fully
> operational.
>
>
>>> * ROS supports locating packages and resolving dependencies
>>> cross-repository.
>>>
>>> * The ros-build tools now depend on manifest.xml and Makefile. Geoff will
>>> extend this to :
>>> - manifest.xml : like now but without the cflags/ldflags build specifics
>>> - package.pc : contains the build specifics in pkg-config format (he
>>> ported pkg-config to windows using Python)
>>>
>> What does "contain the build specifics" mean ?
>>
>
> Well, the stuff that is typically in a .pc file: install dir, include paths link
> libraries etc. That's now part of the manifest.xml file, which is plain wrong
> in our C world. (for example, RTT is compiled against externally installed
> boost)
>
>
>>> - Makefile : used in case 'buildpackage.py' is not present. defines
>>> basic build steps.
>>> - buildpackage.py : serves same purpose as Makefile, but is
>>> cross-platform (read: windows).
>>>
>> Not the point. The thing is that ROS packages are encouraged to use ROS
>> tools into their build system (roslocate, rosdep, ...). Therefore, my
>> guess (and it is only a guess !) is that some (most ?) packages can't be
>> easily used without having those tools around, because the cmake code is
>> not enough to find dependencies.
>>
>
> I agree. That's why we need the .pc files, to provide this info back to
> external projects instead of the manifest.xml file.
>
>
>> Obviously, one can add a normal package
>> in there, build it and keep it standalone. The issue was on the other
>> direction.
>>
>
> Ok.
>
>
>>> * The ROS tooling is perfectly fit for 'software in development',
>>> contrast that to Debian packages, which is for 'released software'.
>>>
>> so does autoproj, which is completely standalone, does not require any
>> specific file in the package, supports importing cvs, svn and git alike
>> (including for its configuration), supports building autotools, cmake,
>> plain make, genom, orogen, ruby packages, is cross platform, has a
>> federated repository model and can make coffee(*).
>>
>
> Are you making this up ? :-) It's not (only) technical superiority that I am
> looking for, it's third party maintenance of something I need, but don't have
> the expertise for. I don't want to pull your leg by throwing a 'ros has more
> users/maintainers' to you, because maybe they are going the wrong
> direction.... but their build system is really going to the development model
> I would like to have for Orocos. But it needs the .pc files since the RTT needs
> these to express the 'target' (xenomai, gnulinux,...).
>
> Also, I wouldn't mind that 'modern' robotics software would need to provide a
> (standardized) manifest.xml file in order to get the tooling lean.
>
>
>> In any case, I don't think people at DFKI will be changing build system
>> again (as they start to be accustomed to autoproj *and* it fits their
>> needs). Now, autoproj does not need anything special in the package
>> source, so both build systems can safely coexist as long as the
>> package's cmake is standalone (did I already mention that ? ;-))
>>
>
> I follow your concern. I might however provide ros-like cmake macros for
> Orocos components. The dependency on them will be expressed in the
> manifest.xml file and components using such macros will depend on a rospack-
> like tool (similar like depending on pkg-config to find compile flags of
> installed libs).
>
> rosbuild users equally are hard to convince to leave their tool chain.
>
>
>>> From my part, I'm convinced that we can even pull the message types out
>>> of ROS for integration into non-ros software. We'll write a generator
>>> that does ROS- idl to C++ struct (without base class/ROS dependencies).
>>> ROS and Orocos in turn will provide a generator that takes the ROS-idl
>>> and converts it to 'messages' or 'typekits' respectively. The user code
>>> only uses the original C++ structs, the behind the scenes middleware code
>>> uses the transport code.. Libraries like GearBox (and Orocos components!)
>>> would benefit greatly from such an infrastructure. This would even work
>>> with orogen, since orogen parses C++ structs.
>>>
>> Sounds fine.
>>
> It means a lot when you agree.[*]
>
Actually, I still have a problem with having "plain" datatypes. I still
feel that having data types without the methods to manage them is
annoying (at best).

What I thought is that, as an *alternative* to what you propose (i.e.
both methods would be supported by orogen), orogen could verify that a
human-designed C++ structure does match a ros-IDL type. That would allow
to extend the C++ type while still making sure that we retain
compatibility with the common datatype pool.

Thoughts ?

About user data types&transports in 2.0

On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>
> Actually, I still have a problem with having "plain" datatypes. I still feel
> that having data types without the methods to manage them is annoying (at
> best).

I've been thinking about this when I looked into ros::Time and
ros::Duration. The solution I had there was that you can write classes
(with methods) that accept initialisation from the plain data type.
The class could inherit from the plain data type too. It's not
something we need to enforce; they are just options. What I do like is
that the transport only transports plain data, and nothing else. CORBA
and other middlewares went way too far trying to pass on objects. It
adds way too much coupling.
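
To make that concrete, here is a minimal sketch of what I mean
(hypothetical names, not actual RTT or ROS declarations): the transport
only ever sees the plain struct, and the convenience class is purely a
user-side add-on.

  #include <stdint.h>

  // Plain data type: default constructible and copyable, nothing more.
  // This is all a typekit/transport needs to know about.
  struct TimeData {
      int32_t sec;
      int32_t nsec;
  };

  // User-side convenience class: initialised from (or inheriting from)
  // the plain type, adding the methods people actually want.
  class Time : public TimeData {
  public:
      Time() { sec = 0; nsec = 0; }
      Time(const TimeData& d) : TimeData(d) {}
      double toSeconds() const { return sec + 1e-9 * nsec; }
  };

A port would be typed on TimeData; component code is free to wrap what it
reads into Time, or to ignore the wrapper completely.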

>
> What I thought is that, as an *alternative* to what you propose (i.e. both
> methods would be supported by orogen), orogen could verify that a
> human-designed C++ structure does match a ros-IDL type. That would allow to
> extend the C++ type while still making sure that we retain compatibility
> with the common datatype pool.
>
> Thoughts ?
>

By extending, do you mean inheriting from the structure and adding
member functions ?

Peter

About user data types&transports in 2.0

Peter Soetens wrote:
> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>
>> Actually, I still have a problem with having "plain" datatypes. I still feel
>> that having data types without the methods to manage them is annoying (at
>> best).
>>
>
> I've been thinking about this when I looked into ros::Time and
> ros::Duration. The solution I had there was that you can write classes
> (with methods) that accept initialisation from the plain data type.
> The class could inherit from the plain data type too. It's not
> something we need to enforce, they are just options. What I do like is
> that the transport only transports plain data, and nothing else. CORBA
> and other middlewares went way to far trying to pass on objects. It
> adds way to much coupling.
>
>
>> What I thought is that, as an *alternative* to what you propose (i.e. both
>> methods would be supported by orogen), orogen could verify that a
>> human-designed C++ structure does match a ros-IDL type. That would allow to
>> extend the C++ type while still making sure that we retain compatibility
>> with the common datatype pool.
>>
>> Thoughts ?
>>
>>
>
> By extending, do you mean, inheriting from the structure and adding
> member functions ?
>
What I meant is the following: the data type compatibility is actually
useful when modules communicate with each other. We want that
compatibility because it makes interoperability better (at least
theoretically).

Now, I want to be able to design nice, easy to use C++ data structures.

I thought that *since the compatibility is needed only at the component
communication level*, people could use their own data structures, and
have orogen verify that this hand-made data structure actually matches
the data type that the ROS IDL describes. This way, the component
integrator makes sure that the data types are compatible with each other
(i.e. modules can talk to each other).
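
As a small sketch of what such a check would look at (the ROS message is
a real one, the C++ side is of course hypothetical):

  // ROS IDL side (geometry_msgs/Vector3.msg):
  //   float64 x
  //   float64 y
  //   float64 z

  // Hand-written C++ side, designed for nice in-code use:
  struct Vector3 {
      double x, y, z;
      // extra convenience methods could live here; the check only cares
      // that the field names and types above line up with the IDL
  };

orogen would, conceptually, compare the field list it extracts from the
C++ struct with the field list described by the IDL, and report a
mismatch instead of silently generating an incompatible typekit.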

About user data types&transports in 2.0

I'm interested in hearing more about how this approach might work. I've
seen "typelib" thrown around a bit in this thread, but I don't know
anything about it; I assume it's related.

What would you verify with orogen? That the names match? Or that the
individual types match? How well would it work with a dynamically-typed
language? Better, because you could inspect the types of the actual data
at run time? Or worse, because you wouldn't have type information
available just from scanning the code prior to run time? How would you
encourage the use of similar types (ROS didn't go with KDL's types, even
though they existed first)? Would it still benefit from having a package
in the ether containing some common types to encourage reuse of the same
types?

Geoff

On 18/03/10 01:12, Sylvain Joyeux wrote:
> I thought that *since the compatibility is needed only at the component
> communication level*, people could use their own data structures, and
> have orogen verify that this hand-made data structure actually matches
> the data type that the ROS IDL describes. This way, the component
> integrator makes sure that the data types are compatible with each other
> (i.e. modules can talk to each other).

About user data types&transports in 2.0

Geoffrey Biggs wrote:
> I'm interested in hearing more about how this approach might work. I've
> seen "typelib" thrown around a bit in this thread, but I don't know
> anything about it; I assume it's related.
>
Typelib is a C++ library that
* provides an intermediate representation of C-like types. Right now,
I only have a C-importer, a CORBA-IDL exporter, and an importer/exporter
into typelib's own XML representation.
* provides a way from C++ to manipulate (inspect and transform) memory
that is known to contain a value of a given type that is represented
in Typelib. Logging works by using typelib's marshalling operator.
* provides a Ruby extension that uses Typelib to inspect/modify C++
types directly (i.e. everything that can be represented in Typelib
can be manipulated from Ruby without a marshalling/demarshalling step)
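
To give a feel for what that intermediate representation holds (this is
*not* Typelib's actual API, just the idea sketched in C++):

  #include <cstddef>
  #include <string>
  #include <vector>

  // A runtime description of a C-like type: enough information for
  // generic code (logging, marshalling, language bindings) to walk a
  // raw block of memory field by field.
  struct FieldDescription {
      std::string name;     // e.g. "sec"
      std::string type;     // e.g. "int32_t"
      std::size_t offset;   // byte offset inside the containing struct
      std::size_t size;     // size of the field in bytes
  };

  struct TypeDescription {
      std::string name;                       // e.g. "/base/Time"
      std::size_t size;                       // sizeof() the whole type
      std::vector<FieldDescription> fields;   // in declaration order
  };

Everything that can be expressed in such a description can be dumped,
transported or exposed to Ruby without writing per-type code.
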
> What would you verify with orogen? That the names match? Or that the
> individual types match?
I would check both. Name matching is needed, as a field with a given type
could be swapped with a completely unrelated one that has the same type.
Type matching is needed because typelib does not have enough information
to safely do type conversion (i.e. to know whether type conversion is
allowed in a particular context or not).
> How well would it work with a dynamically-typed
> language? Better, because you could inspect the types of the actual data
> at run time? Or worse, because you wouldn't have type information
> available just from scanning the code prior to run time?
I would say "better". How it works right now, though, is that the
dynamic language directly manipulates a C++ type and that this type is
then fed (in Orocos) to the typekit for marshalling/demarshalling.
> How would you
> encourage the use of similar types (ROS didn't go with KDL's types, even
> though they existed first)?
I don't know KDL, but I guess that KDL types use inheritance and virtual
methods ? That is not supported by typelib at all ...

I would encourage functional library writers to use the KDL types, and
have a common set of wrapper types for marshalling/demarshalling. orogen
even has the functionality to use so-called "opaque types", for which
the user provides a conversion -- for marshalling purposes -- to a type
that typelib can understand. That's how (for instance) smart pointers
are supported in orogen-generated typekits, and no conversion is needed
in the normal RTT flow (as C++ types are passed directly there).

The only types that do *not* work that way are the types that contain
Eigen matrices, as Eigen has strong alignment requirements that are not
guaranteed in the RTT dataflow. For those, we have to define our wrapper
types and do the conversion in the task contexts.
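
For illustration, such a wrapper pair could look like this (hypothetical
type names; only the Eigen types are real):

  #include <Eigen/Core>

  // Plain wrapper that is safe to send through the dataflow: no
  // alignment requirement, just four doubles.
  struct Vector4Data {
      double data[4];
  };

  // Conversions are done inside the task context, where the alignment
  // of the local Eigen object is under our control.
  inline Vector4Data toWire(const Eigen::Vector4d& v) {
      Vector4Data w;
      w.data[0] = v.x(); w.data[1] = v.y();
      w.data[2] = v.z(); w.data[3] = v.w();
      return w;
  }

  inline Eigen::Vector4d fromWire(const Vector4Data& w) {
      return Eigen::Vector4d(w.data[0], w.data[1], w.data[2], w.data[3]);
  }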

> Would it still benefit from having a package
> in the ether containing some common types to encourage reuse of the same
> types?
>
I don't believe that the approach *by itself* benefits from such a
package. Now, standardizing the base types obviously *does* benefit the
community as a whole.

Though, I don't think it is practical to enforce that "functional
libraries" adopt these types, because some very nice C++ libraries (KDL,
Eigen) exist out there. For me, these types are meant so that we have
common interfaces at the integration level. Using them in functional
libraries should be favored only when it makes sense.

About user data types&transports in 2.0

On 19/03/2010 6:30 p.m., Sylvain Joyeux wrote:
>> Would it still benefit from having a package
>> in the ether containing some common types to encourage reuse of the same
>> types?
> I don't believe that the approach *by itself* does benefit for such a
> package. Now, standardizing the base types obviously *does* benefit the
> community as a whole.
>
> Though, I don't think it is practical to enforce that "functional
> libraries" adopt these types, because some very nice C++ libraries (KDL,
> Eigen) exist out there. For me, these types are meant so that we have
> common interfaces at the integration level. Using them in functional
> libraries should be favored only when it makes sense.

I agree. It would provide something that libraries *could* use if they
wanted to, but obviously what makes the best API for the library in
question should be given a higher priority.

I'll reply to the rest when I'm less jetlagged (it's 2:17am and I'm wide
awake... plus I have to go to the airport in 4 hours).

Geoff

About user data types&transports in 2.0

On Wed, 17 Mar 2010, Sylvain Joyeux wrote:

> Peter Soetens wrote:
>> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>
>>> Actually, I still have a problem with having "plain" datatypes. I still feel
>>> that having data types without the methods to manage them is annoying (at
>>> best).
>>>
>>
>> I've been thinking about this when I looked into ros::Time and
>> ros::Duration. The solution I had there was that you can write classes
>> (with methods) that accept initialisation from the plain data type.
>> The class could inherit from the plain data type too. It's not
>> something we need to enforce, they are just options. What I do like is
>> that the transport only transports plain data, and nothing else. CORBA
>> and other middlewares went way to far trying to pass on objects. It
>> adds way to much coupling.
>>
>>
>>> What I thought is that, as an *alternative* to what you propose (i.e. both
>>> methods would be supported by orogen), orogen could verify that a
>>> human-designed C++ structure does match a ros-IDL type. That would allow to
>>> extend the C++ type while still making sure that we retain compatibility
>>> with the common datatype pool.
>>>
>>> Thoughts ?

Yes :-) I beg you to accept the advice of a veteran of many of these
"interoperability wars" (all of which I lost, by the way...) and put the
priority on _first_ defining the _semantics_ of the data types, in a formal,
symbolic language, and _only then_ think about data structures in whatever
programming language (preferably generated automatically). Otherwise, you
will be bitten severely by the "semantic gap" effect: at the C++ level,
both communicating sides agree completely, but the meaning of one or
several of the data fields that match in IDL or C++ is inconsistent with the
real physical world.

Examples:
- whatever coordinate representation that one uses, one _must_ define
physical units, frame reference points, order of relative frames,
velocity reference point, order of angular and linear components,...
of all data types. That means at least a dozen or so 'semantic tags' for
each definition of a 6D geometric or kinematic data structure.
- whatever control algorithm that one uses, one _must_ define in what order
every "control error" is calculated, as the difference between which two
other data structures, their physical units, the interpretation of bounds
(saturation, thresholds, uncertainty, ...)
I can give some more examples of data interoperability wars that I have
lost...

>> By extending, do you mean, inheriting from the structure and adding
>> member functions ?
>>
> What I meant is the following: the data type compatibility is actually
> useful when modules communicate with each other. We want that
> compatibility because it makes interoperability better (at least
> theoretically).
>
> Now, I want to be able to design nice, easy to use C++ data structures.
>
> I thought that *since the compatibility is needed only at the component
> communication level*, people could use their own data structures, and
> have orogen verify that this hand-made data structure actually matches
> the data type that the ROS IDL describes. This way, the component
> integrator makes sure that the data types are compatible with each other
> (i.e. modules can talk to each other).

This is "The Way To Go!"(R). I would add the obvious: tools like orogen can
not only _verify_ but also _generate_ the interoperability layer.

Herman

About user data types&transports in 2.0

On Wednesday 17 March 2010 20:43:11 Herman Bruyninckx wrote:
> On Wed, 17 Mar 2010, Sylvain Joyeux wrote:
> > Peter Soetens wrote:
> >> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...>
wrote:
> >>> Actually, I still have a problem with having "plain" datatypes. I still
> >>> feel that having data types without the methods to manage them is
> >>> annoying (at best).
> >>
> >> I've been thinking about this when I looked into ros::Time and
> >> ros::Duration. The solution I had there was that you can write classes
> >> (with methods) that accept initialisation from the plain data type.
> >> The class could inherit from the plain data type too. It's not
> >> something we need to enforce, they are just options. What I do like is
> >> that the transport only transports plain data, and nothing else. CORBA
> >> and other middlewares went way to far trying to pass on objects. It
> >> adds way to much coupling.
> >>
> >>> What I thought is that, as an *alternative* to what you propose (i.e.
> >>> both methods would be supported by orogen), orogen could verify that a
> >>> human-designed C++ structure does match a ros-IDL type. That would
> >>> allow to extend the C++ type while still making sure that we retain
> >>> compatibility with the common datatype pool.
> >>>
> >>> Thoughts ?
>
> Yes :-) I beg you to accept the advice of a veteran of many of these
> "interoperability wars" (that I all lost, by the way...) and put the
> priority at _first_ defining the _semantics_ of the data types, in a
> formal, symbolic language, and _only then_ think about data structures in
> whatever programming language (preferably generated automatically).
> Otherwise, you will be bitten severely by the "semantic gap" effect: at
> the C++ level, both communicating sides agree completely, but the meaning
> of each or several of the data fields that match in IDL or C++ are
> inconsistent in the real physical world.
>
> Examples:
> - whatever coordinate representation that one uses, one _must_ define
> physical units, frame reference points, order of relative frames,
> velocity reference point, order of angular and linear components,...
> of all data types. That means at least a dozen or so 'semantic tags' for
> each definition of a 6D geometric or kinematic data structure.
> - whatever control algorithm that one uses, one _must_ define in what order
> every "control error" is calculated as the difference which two other
> data structures, their physical units, the interpretation of bounds
> (saturation, thresholds, uncertainty, ...)
> I can give some more examples of data interoperability wars that I have
> lost...

I agree fully and acknowledge that missing semantic information causes
disasters. However, the type transport layer (i.e. serialization) does not need
to care about semantics, only about name->value mappings. An example: the
KDL::Frame lacks any semantic information, but contains the mappings "vector"
-> p and "rotation" -> M. The KDL::SemanticFrame is the former extended with
the semantics. These semantics are encoded as name->value pairs as well, so
for the transport, KDL::SemanticFrame is just a 'bigger' version of
KDL::Frame.
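
A rough sketch of what the transport sees in both cases (hypothetical
layouts, not the actual KDL declarations):

  #include <string>

  // KDL::Frame-like plain data: just name -> value mappings.
  struct FrameData {
      double p[3];    // "vector": position
      double M[9];    // "rotation": 3x3 rotation matrix, row-major
  };

  // The 'semantic' variant is, for the transport, simply more fields.
  struct SemanticFrameData {
      FrameData frame;
      std::string reference_frame;   // e.g. "base"
      std::string target_frame;      // e.g. "gripper"
      std::string unit;              // e.g. "m"
  };

The serialization layer treats both identically; whether the semantic
fields are filled in consistently is checked (or not) at a higher level.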

This view on the problem allows misconfiguration to be detected at run time
rather than at compile time, but I believe that's the 'flexibility' vs. 'proof'
trade-off.

Peter

About user data types&transports in 2.0

Herman Bruyninckx wrote:
> On Wed, 17 Mar 2010, Sylvain Joyeux wrote:
>
>
>> Peter Soetens wrote:
>>
>>> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>>
>>>
>>>> Actually, I still have a problem with having "plain" datatypes. I still feel
>>>> that having data types without the methods to manage them is annoying (at
>>>> best).
>>>>
>>>>
>>> I've been thinking about this when I looked into ros::Time and
>>> ros::Duration. The solution I had there was that you can write classes
>>> (with methods) that accept initialisation from the plain data type.
>>> The class could inherit from the plain data type too. It's not
>>> something we need to enforce, they are just options. What I do like is
>>> that the transport only transports plain data, and nothing else. CORBA
>>> and other middlewares went way to far trying to pass on objects. It
>>> adds way to much coupling.
>>>
>>>
>>>
>>>> What I thought is that, as an *alternative* to what you propose (i.e. both
>>>> methods would be supported by orogen), orogen could verify that a
>>>> human-designed C++ structure does match a ros-IDL type. That would allow to
>>>> extend the C++ type while still making sure that we retain compatibility
>>>> with the common datatype pool.
>>>>
>>>> Thoughts ?
>>>>
>
> Yes :-) I beg you to accept the advice of a veteran of many of these
> "interoperability wars" (that I all lost, by the way...) and put the
> priority at _first_ defining the _semantics_ of the data types, in a formal,
> symbolic language, and _only then_ think about data structures in whatever
> programming language (preferably generated automatically). Otherwise, you
> will be bitten severely by the "semantic gap" effect: at the C++ level,
> both communicating sides agree completely, but the meaning of each or
> several of the data fields that match in IDL or C++ are inconsistent in the
> real physical world.
>
> Examples:
> - whatever coordinate representation that one uses, one _must_ define
> physical units, frame reference points, order of relative frames,
> velocity reference point, order of angular and linear components,...
> of all data types. That means at least a dozen or so 'semantic tags' for
> each definition of a 6D geometric or kinematic data structure.
> - whatever control algorithm that one uses, one _must_ define in what order
> every "control error" is calculated as the difference which two other
> data structures, their physical units, the interpretation of bounds
> (saturation, thresholds, uncertainty, ...)
> I can give some more examples of data interoperability wars that I have
> lost...
>
Yes, this "semantic tagging" as you call it is important. Unfortunately,
enforcing that at the code level would either require a lot of C++
templating or a static code analysis tool a-la sparse (which checks for
mixing incompatible memory pointers in the Linux kernel).
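
Just to illustrate what the "C++ templating" route would mean (a
hypothetical sketch, nothing that exists in RTT or KDL):

  // Tag types naming reference frames.
  struct BaseFrame {};
  struct GripperFrame {};

  // A position whose reference frame is part of its type.
  template <typename Frame>
  struct Position {
      double x, y, z;
  };

  // Only positions expressed in the same frame can be combined;
  // mixing frames becomes a compile error instead of a silent bug.
  template <typename Frame>
  Position<Frame> operator-(const Position<Frame>& a, const Position<Frame>& b)
  {
      Position<Frame> r;
      r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z;
      return r;
  }

With this, subtracting a Position<BaseFrame> from a Position<GripperFrame>
simply does not compile. Doing the same for units, reference points,
angular/linear ordering, ... is where the "a lot of" comes from.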

So we're left with a simple "tagging" (i.e. documentation), which is
already done -- at least in my lab -- by documenting the data structures.

Now, I agree that the practicality of such an approach (how to make it
happen) would be an interesting around-a-beer discussion ;-)

About user data types&transports in 2.0

On Fri, 19 Mar 2010, Sylvain Joyeux wrote:

> Herman Bruyninckx wrote:
>> On Wed, 17 Mar 2010, Sylvain Joyeux wrote:
>>
>>
>>> Peter Soetens wrote:
>>>
>>>> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>>>
>>>>
>>>>> Actually, I still have a problem with having "plain" datatypes. I still feel
>>>>> that having data types without the methods to manage them is annoying (at
>>>>> best).
>>>>>
>>>>>
>>>> I've been thinking about this when I looked into ros::Time and
>>>> ros::Duration. The solution I had there was that you can write classes
>>>> (with methods) that accept initialisation from the plain data type.
>>>> The class could inherit from the plain data type too. It's not
>>>> something we need to enforce, they are just options. What I do like is
>>>> that the transport only transports plain data, and nothing else. CORBA
>>>> and other middlewares went way to far trying to pass on objects. It
>>>> adds way to much coupling.
>>>>
>>>>
>>>>
>>>>> What I thought is that, as an *alternative* to what you propose (i.e. both
>>>>> methods would be supported by orogen), orogen could verify that a
>>>>> human-designed C++ structure does match a ros-IDL type. That would allow to
>>>>> extend the C++ type while still making sure that we retain compatibility
>>>>> with the common datatype pool.
>>>>>
>>>>> Thoughts ?
>>>>>
>>
>> Yes :-) I beg you to accept the advice of a veteran of many of these
>> "interoperability wars" (that I all lost, by the way...) and put the
>> priority at _first_ defining the _semantics_ of the data types, in a formal,
>> symbolic language, and _only then_ think about data structures in whatever
>> programming language (preferably generated automatically). Otherwise, you
>> will be bitten severely by the "semantic gap" effect: at the C++ level,
>> both communicating sides agree completely, but the meaning of each or
>> several of the data fields that match in IDL or C++ are inconsistent in the
>> real physical world.
>>
>> Examples:
>> - whatever coordinate representation that one uses, one _must_ define
>> physical units, frame reference points, order of relative frames,
>> velocity reference point, order of angular and linear components,...
>> of all data types. That means at least a dozen or so 'semantic tags' for
>> each definition of a 6D geometric or kinematic data structure.
>> - whatever control algorithm that one uses, one _must_ define in what order
>> every "control error" is calculated as the difference which two other
>> data structures, their physical units, the interpretation of bounds
>> (saturation, thresholds, uncertainty, ...)
>> I can give some more examples of data interoperability wars that I have
>> lost...
>>
> Yes, this "semantic tagging" as you call it is important. Unfortunately,
> enforcing that at the code level would either require a lot of C++
> templating or a static code analysis tool a-la sparse (which checks for
> mixing incompatible memory pointers in the Linux kernel).
>
> So we're left with a simple "tagging" (i.e. documentation), which is
> already done -- at least in my lab -- by documenting the data structures.

But that is NOT ENOUGH!!!! And without having seen your documentation, I am
quite sure it is _not_ semantically complete...

> Now, I agree that the practicality of such an approach (how to make it
> happen) would be an interesting around-a-beer discussion ;-)

We will need much more than a couple of beers! :-)

Herman

About user data types&transports in 2.0

Herman Bruyninckx wrote:
> On Fri, 19 Mar 2010, Sylvain Joyeux wrote:
>
>> Herman Bruyninckx wrote:
>>> On Wed, 17 Mar 2010, Sylvain Joyeux wrote:
>>>
>>>
>>>> Peter Soetens wrote:
>>>>
>>>>> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux
>>>>> <sylvain [dot] joyeux [..] ...> wrote:
>>>>>
>>>>>
>>>>>> Actually, I still have a problem with having "plain" datatypes. I
>>>>>> still feel
>>>>>> that having data types without the methods to manage them is
>>>>>> annoying (at
>>>>>> best).
>>>>>>
>>>>>>
>>>>> I've been thinking about this when I looked into ros::Time and
>>>>> ros::Duration. The solution I had there was that you can write
>>>>> classes
>>>>> (with methods) that accept initialisation from the plain data type.
>>>>> The class could inherit from the plain data type too. It's not
>>>>> something we need to enforce, they are just options. What I do
>>>>> like is
>>>>> that the transport only transports plain data, and nothing else.
>>>>> CORBA
>>>>> and other middlewares went way to far trying to pass on objects. It
>>>>> adds way to much coupling.
>>>>>
>>>>>
>>>>>
>>>>>> What I thought is that, as an *alternative* to what you propose
>>>>>> (i.e. both
>>>>>> methods would be supported by orogen), orogen could verify that a
>>>>>> human-designed C++ structure does match a ros-IDL type. That
>>>>>> would allow to
>>>>>> extend the C++ type while still making sure that we retain
>>>>>> compatibility
>>>>>> with the common datatype pool.
>>>>>>
>>>>>> Thoughts ?
>>>>>>
>>>
>>> Yes :-) I beg you to accept the advice of a veteran of many of these
>>> "interoperability wars" (that I all lost, by the way...) and put the
>>> priority at _first_ defining the _semantics_ of the data types, in a
>>> formal,
>>> symbolic language, and _only then_ think about data structures in
>>> whatever
>>> programming language (preferably generated automatically).
>>> Otherwise, you
>>> will be bitten severely by the "semantic gap" effect: at the C++ level,
>>> both communicating sides agree completely, but the meaning of each or
>>> several of the data fields that match in IDL or C++ are inconsistent
>>> in the
>>> real physical world.
>>>
>>> Examples:
>>> - whatever coordinate representation that one uses, one _must_ define
>>> physical units, frame reference points, order of relative frames,
>>> velocity reference point, order of angular and linear components,...
>>> of all data types. That means at least a dozen or so 'semantic
>>> tags' for
>>> each definition of a 6D geometric or kinematic data structure.
>>> - whatever control algorithm that one uses, one _must_ define in
>>> what order
>>> every "control error" is calculated as the difference which two
>>> other
>>> data structures, their physical units, the interpretation of bounds
>>> (saturation, thresholds, uncertainty, ...)
>>> I can give some more examples of data interoperability wars that I have
>>> lost...
>>>
>> Yes, this "semantic tagging" as you call it is important. Unfortunately,
>> enforcing that at the code level would either require a lot of C++
>> templating or a static code analysis tool a-la sparse (which checks for
>> mixing incompatible memory pointers in the Linux kernel).
>>
>> So we're left with a simple "tagging" (i.e. documentation), which is
>> already done -- at least in my lab -- by documenting the data
>> structures.
>
> But that is NOT ENOUGH!!!! And without having seen your documentation,
> I am
> quite sure it is _not_ semantically complete...
Yes. That was the exact point I was trying to make. Except that getting
to the point where it becomes *usable on a real-world system* will need a
lot of work.

Here's the thing: I already have too much trouble getting people to
understand that *basic* model-based deployment is worth the effort. I
personally have other priorities, and, anyway, since I'm not a PhD
student anymore, I cannot spend years until I have something that can be
published. Moreover, I am not in a position to decide on this kind of
resource investment in a lab, so that it covers the needs of a
real-world system.

So, at least on my side, it will stay at the discussion level for the
foreseeable future.

About user data types&transports in 2.0

On Fri, 19 Mar 2010, Sylvain Joyeux wrote:

> Herman Bruyninckx wrote:
>> On Fri, 19 Mar 2010, Sylvain Joyeux wrote:
>>
>>> Herman Bruyninckx wrote:
>>>> On Wed, 17 Mar 2010, Sylvain Joyeux wrote:
>>>>
>>>>
>>>>> Peter Soetens wrote:
>>>>>
>>>>>> On Wed, Mar 17, 2010 at 10:04, Sylvain Joyeux
>>>>>> <sylvain [dot] joyeux [..] ...> wrote:
>>>>>>
>>>>>>
>>>>>>> Actually, I still have a problem with having "plain" datatypes. I
>>>>>>> still feel
>>>>>>> that having data types without the methods to manage them is
>>>>>>> annoying (at
>>>>>>> best).
>>>>>>>
>>>>>>>
>>>>>> I've been thinking about this when I looked into ros::Time and
>>>>>> ros::Duration. The solution I had there was that you can write
>>>>>> classes
>>>>>> (with methods) that accept initialisation from the plain data type.
>>>>>> The class could inherit from the plain data type too. It's not
>>>>>> something we need to enforce, they are just options. What I do
>>>>>> like is
>>>>>> that the transport only transports plain data, and nothing else.
>>>>>> CORBA
>>>>>> and other middlewares went way to far trying to pass on objects. It
>>>>>> adds way to much coupling.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> What I thought is that, as an *alternative* to what you propose
>>>>>>> (i.e. both
>>>>>>> methods would be supported by orogen), orogen could verify that a
>>>>>>> human-designed C++ structure does match a ros-IDL type. That
>>>>>>> would allow to
>>>>>>> extend the C++ type while still making sure that we retain
>>>>>>> compatibility
>>>>>>> with the common datatype pool.
>>>>>>>
>>>>>>> Thoughts ?
>>>>>>>
>>>>
>>>> Yes :-) I beg you to accept the advice of a veteran of many of these
>>>> "interoperability wars" (that I all lost, by the way...) and put the
>>>> priority at _first_ defining the _semantics_ of the data types, in a
>>>> formal,
>>>> symbolic language, and _only then_ think about data structures in
>>>> whatever
>>>> programming language (preferably generated automatically).
>>>> Otherwise, you
>>>> will be bitten severely by the "semantic gap" effect: at the C++ level,
>>>> both communicating sides agree completely, but the meaning of each or
>>>> several of the data fields that match in IDL or C++ are inconsistent
>>>> in the
>>>> real physical world.
>>>>
>>>> Examples:
>>>> - whatever coordinate representation that one uses, one _must_ define
>>>> physical units, frame reference points, order of relative frames,
>>>> velocity reference point, order of angular and linear components,...
>>>> of all data types. That means at least a dozen or so 'semantic
>>>> tags' for
>>>> each definition of a 6D geometric or kinematic data structure.
>>>> - whatever control algorithm that one uses, one _must_ define in
>>>> what order
>>>> every "control error" is calculated as the difference which two
>>>> other
>>>> data structures, their physical units, the interpretation of bounds
>>>> (saturation, thresholds, uncertainty, ...)
>>>> I can give some more examples of data interoperability wars that I have
>>>> lost...
>>>>
>>> Yes, this "semantic tagging" as you call it is important. Unfortunately,
>>> enforcing that at the code level would either require a lot of C++
>>> templating or a static code analysis tool a-la sparse (which checks for
>>> mixing incompatible memory pointers in the Linux kernel).
>>>
>>> So we're left with a simple "tagging" (i.e. documentation), which is
>>> already done -- at least in my lab -- by documenting the data
>>> structures.
>>
>> But that is NOT ENOUGH!!!! And without having seen your documentation,
>> I am
>> quite sure it is _not_ semantically complete...
> Yes. That was the exact point I was trying to make. Except that going up
> to the point that it becomes *usable on a real-world system* will need a
> lot of work.
>
> Here's the thing: I already have too much trouble having people
> undestand that *basic* model based deployment is worth the effort. I
> personally have other priorities, and, anyway, since I'm not a PhD
> anymore, I could not spend years until I have something that can be
> published. Moreover, I am not in the position of deciding this kind of
> resouce investment in a lab, so that it manages to cover the needs of a
> real-world system.
>
> So, at least for my side, it will stay at the discussion level in the
> foreseeable future.

Very understandable! Because that missing semantic layer is a _huge_
investment, one that no lab could (or should!) make on its own...

I have some hope of realising some progress in this matter in two
different ways:
- there is a growing awareness of the problem, also within the large
European research projects such as Rosetta, RoboEarth, Cotesys, ... and I
think something will come out of their cooperation.
- there is the idea of setting up a PhD School on robotics at the European
level, where students could work on the semantic layer,
since that's exactly the layer that is used during teaching.
But don't hold your breath :-)

Herman

About user data types&transports in 2.0

On Wed, Mar 10, 2010 at 12:12:33AM +0100, Peter Soetens wrote:
> On Tue, Mar 9, 2010 at 17:58, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:

> > (we have autotools,
> > qmake and "undefined types" of packages here) and require the user to
> > import the packages (not acceptable when you have one git repository per
> > package, as should be under git, even less acceptable when you need to
> > download tarballs). Moreover, it leads to non-standalone CMake packages,
> > since you *cannot* build the ROS packages as-is outside of a ROS tree (I
> > personally feel it is important to have standalone, standard, cmake
> > packages).
>
> I think too standalone is important. ROS does not violate this. Heck,
> even KDL is a heavily used ROS package, yet no code in the kdl trunk
> resembles ROS.

The second point Sylvain points out here is a much larger problem
IMO. In rosbuild there is no clean separation between build
specification and source code: the latter is usually checked out of VC
together with the build spec, or gets downloaded as a side effect
of building the package. This works OK for a limited set of packages
but will become a major pain once many packages are available.

Markus

About user data types&transports in 2.0

On Mar 9, 2010, at 07:56 , Peter Soetens wrote:

> I have been insinuating a lot about the future of types without really
> pointing out a plan. Here's the plan.
>
> First the invariants:
>
> 1. The RTT has always been data type agnostic and will remain so. It will not
> assume any function, base class etc to be present, except that data is a)
> copyable and b) default constructible.
>
> 2. In order to display, serialize, transport data types, the RTT relies on
> users to fill in the TypeInfoRepository with TypeInfo objects that implement
> these functionalities for a given data type. We collect these objects in
> 'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
> plugins.
>
> 3. A single data types can be transported using multiple transports. We now
> have CORBA and POSIX mqueues more may follow.
>
> The questions raised during RTT 1.x development are these:
>
> A How could a typekit be generated from user specification ?
> B How could a typekit support future transports, or at least a certain class
> of transports ?
> C How can we interoperate with other frameworks, ie, do some conversion from
> our type to a 3rd party type OR use a 3rd party type directly in our component
> code.

<snip>

One thing that was not mentioned above, but that has come up multiple times on the ML, is output formatting. There is a default precision/width assumption built into RTT v1. Will that still exist in v2? Will there be any way to configure it? Will we be able to specify formatting on a per-type basis?

The rest all sounds very good, but it will take a while to get our heads around it, I think ...

Stephen

About user data types&transports in 2.0

S Roderick wrote:
> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>
>
>> I have been insinuating a lot about the future of types without really
>> pointing out a plan. Here's the plan.
>>
>> First the invariants:
>>
>> 1. The RTT has always been data type agnostic and will remain so. It will not
>> assume any function, base class etc to be present, except that data is a)
>> copyable and b) default constructible.
>>
>> 2. In order to display, serialize, transport data types, the RTT relies on
>> users to fill in the TypeInfoRepository with TypeInfo objects that implement
>> these functionalities for a given data type. We collect these objects in
>> 'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
>> plugins.
>>
>> 3. A single data types can be transported using multiple transports. We now
>> have CORBA and POSIX mqueues more may follow.
>>
>> The questions raised during RTT 1.x development are these:
>>
>> A How could a typekit be generated from user specification ?
>> B How could a typekit support future transports, or at least a certain class
>> of transports ?
>> C How can we interoperate with other frameworks, ie, do some conversion from
>> our type to a 3rd party type OR use a 3rd party type directly in our component
>> code.
>>
>
> <snip>

>
> One thing that was not mentioned above, but that has come up multiple times on the ML, is output formatting. There is a default precision/width assumption built into RTT v1. Will that still exist in v2? Will there be any way to configure it? Will we be able to specify formatting on a per-type basis?
>
For my understanding of how you guys use the RTT, can I actually ask for
what reason you seem to rely on ostream formatting ? I don't use it at
all, so I am a bit at a loss here...

About user data types&transports in 2.0

On Mar 9, 2010, at 10:17 , Sylvain Joyeux wrote:

> S Roderick wrote:
>> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>>
>>
>>> I have been insinuating a lot about the future of types without really
>>> pointing out a plan. Here's the plan.
>>>
>>> First the invariants:
>>>
>>> 1. The RTT has always been data type agnostic and will remain so. It will not
>>> assume any function, base class etc to be present, except that data is a)
>>> copyable and b) default constructible.
>>>
>>> 2. In order to display, serialize, transport data types, the RTT relies on
>>> users to fill in the TypeInfoRepository with TypeInfo objects that implement
>>> these functionalities for a given data type. We collect these objects in
>>> 'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
>>> plugins.
>>>
>>> 3. A single data types can be transported using multiple transports. We now
>>> have CORBA and POSIX mqueues more may follow.
>>>
>>> The questions raised during RTT 1.x development are these:
>>>
>>> A How could a typekit be generated from user specification ?
>>> B How could a typekit support future transports, or at least a certain class
>>> of transports ?
>>> C How can we interoperate with other frameworks, ie, do some conversion from
>>> our type to a 3rd party type OR use a 3rd party type directly in our component
>>> code.
>>>
>>
>> <snip>

>>
>> One thing that was not mentioned above, but that has come up multiple times on the ML, is output formatting. There is a default precision/width assumption built into RTT v1. Will that still exist in v2? Will there be any way to configure it? Will we be able to specify formatting on a per-type basis?
>>
> For my understanding of how you guys use the RTT, can I actually ask for
> what reason you seem to rely on ostream formatting ? I don't use it at
> all, so I am a bit at a loss here...

It was there when I started using Orocos. It is built into the underlying type system, and so is most commonly visible in the OCL::ReportingComponent output files.

How do you specify the formatting of output data files that involve multiple, possibly complex, types?

Stephen

About user data types&transports in 2.0

S Roderick wrote:
> On Mar 9, 2010, at 10:17 , Sylvain Joyeux wrote:
>
>
>> S Roderick wrote:
>>
>>> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>>>
>>>
>>>
>>>> I have been insinuating a lot about the future of types without really
>>>> pointing out a plan. Here's the plan.
>>>>
>>>> First the invariants:
>>>>
>>>> 1. The RTT has always been data type agnostic and will remain so. It will not
>>>> assume any function, base class etc to be present, except that data is a)
>>>> copyable and b) default constructible.
>>>>
>>>> 2. In order to display, serialize, transport data types, the RTT relies on
>>>> users to fill in the TypeInfoRepository with TypeInfo objects that implement
>>>> these functionalities for a given data type. We collect these objects in
>>>> 'Typekits' (formerly 'Toolkits'), which are run-time loadable libraries, aka
>>>> plugins.
>>>>
>>>> 3. A single data types can be transported using multiple transports. We now
>>>> have CORBA and POSIX mqueues more may follow.
>>>>
>>>> The questions raised during RTT 1.x development are these:
>>>>
>>>> A How could a typekit be generated from user specification ?
>>>> B How could a typekit support future transports, or at least a certain class
>>>> of transports ?
>>>> C How can we interoperate with other frameworks, ie, do some conversion from
>>>> our type to a 3rd party type OR use a 3rd party type directly in our component
>>>> code.
>>>>
>>>>
>>> <snip>

>>>
>>> One thing that was not mentioned above, but that has come up multiple times on the ML, is output formatting. There is a default precision/width assumption built into RTT v1. Will that still exist in v2? Will there be any way to configure it? Will we be able to specify formatting on a per-type basis?
>>>
>>>
>> For my understanding of how you guys use the RTT, can I actually ask for
>> what reason you seem to rely on ostream formatting ? I don't use it at
>> all, so I am a bit at a loss here...
>>
>
> It was there when I started using Orocos. It is built-in to the underlying type system, and so is most commonly visible in the output of the OCL::ReportingComponent output files.
>
> How do you specify the formatting of output data files that involve multiple, possibly complex, types?
>
orogen uses a type introspection library that I originally developed at
LAAS to interface Ruby with genom (the LAAS modular framework). Based on
that representation, it is trivial (and very fast) to dump the data into
self-describing binary files -- i.e. files in which the data type is
described, allowing the data to be reloaded from the file without any
additional information.

In other words, we have a typelib-based binary logger
(http://github.com/doudou/orogen-logger) and interpret the log files in
Ruby directly (using http://github.com/doudou/orogen-logger). Or we
convert them to CSV and load them in other tools afterwards; since the
conversion is done in Ruby as well, we have complete control over the
formatting. This is only possible for data types for which orogen has a
typelib representation (which means, for now, orogen-generated
components).
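
For readers who have not seen it, here is a minimal C++ sketch of what
"self-describing" means in practice: the type layout is written once,
ahead of the samples, so a reader can decode the data without the
original headers. The record layout and all names below are illustrative
assumptions, not the actual typelib/pocosim format.

#include <cstdint>
#include <ostream>
#include <string>
#include <vector>

// One entry of the (hypothetical) type description: field name, offset, size.
struct FieldDesc { std::string name; uint32_t offset; uint32_t size; };

static void writeString(std::ostream& os, const std::string& s) {
    uint32_t n = static_cast<uint32_t>(s.size());
    os.write(reinterpret_cast<const char*>(&n), sizeof n);
    os.write(s.data(), n);
}

// Write the self-describing header once: type name plus flat field layout.
void writeTypeHeader(std::ostream& os, const std::string& typeName,
                     const std::vector<FieldDesc>& fields) {
    writeString(os, typeName);
    uint32_t nfields = static_cast<uint32_t>(fields.size());
    os.write(reinterpret_cast<const char*>(&nfields), sizeof nfields);
    for (size_t i = 0; i < fields.size(); ++i) {
        writeString(os, fields[i].name);
        os.write(reinterpret_cast<const char*>(&fields[i].offset), sizeof(uint32_t));
        os.write(reinterpret_cast<const char*>(&fields[i].size), sizeof(uint32_t));
    }
}

// Append one raw sample; the header above is all a reader needs to decode it.
void writeSample(std::ostream& os, const void* data, uint32_t size) {
    os.write(reinterpret_cast<const char*>(&size), sizeof size);
    os.write(static_cast<const char*>(data), size);
}

A reader parses the header first, rebuilds the field list, and then
interprets each sample accordingly -- which is, roughly, what the Ruby
tooling can do with the typelib description.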

--
Sylvain Joyeux (Dr. Ing.)
Researcher - Space and Security Robotics
DFKI Robotics Innovation Center
Bremen, Robert-Hooke-Straße 5, 28359 Bremen, Germany

Phone: +49 421 218-64136
Fax: +49 421 218-64150
Email: sylvain [dot] joyeux [..] ...

Further information: http://www.dfki.de

About user data types&transports in 2.0

Sylvain Joyeux wrote:
> S Roderick wrote:
>
>> On Mar 9, 2010, at 10:17 , Sylvain Joyeux wrote:
>>
>>
>>
>>> S Roderick wrote:
>>>
>>>
>>>> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>>>>
>>>>
>>>>
>>>>
>>>> <snip>

>>>>
>>>> One thing that was not mentioned above, but that has come up multiple times on the ML, is output formatting. There is a default precision/width assumption built into RTT v1. Will that still exist in v2? Will there be any way to configure it? Will we be able to specify formatting on a per-type basis?
>>>>
>>>>
>>>>
>>> For my understanding of how you guys use the RTT, can I actually ask for
>>> what reason you seem to rely on ostream formatting ? I don't use it at
>>> all, so I am a bit at a loss here...
>>>
>>>
>> It was there when I started using Orocos. It is built-in to the underlying type system, and so is most commonly visible in the output of the OCL::ReportingComponent output files.
>>
>> How do you specify the formatting of output data files that involve multiple, possibly complex, types?
>>
>>
> orogen uses a type introspection library that I originally developped at
> LAAS to interface Ruby with genom (the LAAS modular framework). It is
> only trivial (and very fast), based on that representation, to dump the
> data into self-describing binary files -- i.e. files in which the data
> type is described, allowing to reload the data from the file without the
> need for any additional information.
>
> In other words, we have a typelib-based binary logger
> (http://github.com/doudou/orogen-logger) and interpret the log files in
> Ruby directly (using http://github.com/doudou/orogen-logger). Or convert
>
Make that http://github.com/doudou/pocosim-log

About user data types&transports in 2.0

On Tuesday 09 March 2010 14:06:18 S Roderick wrote:
> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>
> <snip>

>
> One thing that was not mentioned above, but that has come up multiple times
> on the ML, is output formatting. There is a default precision/width
> assumption built into RTT v1. Will that still exist in v2? Will there be
> any way to configure it? Will we be able to specify formatting on a
> per-type basis?

Good point. One possibility is to let you pass a stream object, with the
necessary formatting already set, to the conversion code. This would
override any default or previously set parameters.

Alternatively, we could add a 'setWidth' method to TypeInfo so that all
subsequent output requests use that width. This is then a global setting,
subject to 'races'.

Probably both are needed.
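
For illustration, a small sketch of the first option. writeType() below
is a stand-in for whatever conversion hook the typekit exposes; the
actual RTT call and signature are not decided by this.

#include <iomanip>
#include <iostream>
#include <sstream>

struct JointAngles { double q1, q2, q3; };   // example user type

// Hypothetical typekit conversion code: it streams the values but does
// not impose any width/precision of its own.
std::ostream& writeType(std::ostream& os, const JointAngles& v) {
    return os << v.q1 << " " << v.q2 << " " << v.q3;
}

int main() {
    JointAngles j = { 0.123456789, 1.0, -2.5 };

    std::ostringstream os;
    os << std::fixed << std::setprecision(3);  // the caller decides the formatting
    writeType(os, j);
    std::cout << os.str() << std::endl;        // prints: 0.123 1.000 -2.500
    return 0;
}

Note that precision and fixed/scientific are 'sticky' stream flags, while
std::setw only applies to the next insertion, so a width setting would
have to be re-applied per field by the conversion code anyway.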

Peter

About user data types&transports in 2.0

On Mar 9, 2010, at 09:00 , Peter Soetens wrote:

> On Tuesday 09 March 2010 14:06:18 S Roderick wrote:
>> On Mar 9, 2010, at 07:56 , Peter Soetens wrote:
>>
>> <snip>

>>
>> One thing that was not mentioned above, but that has come up multiple times
>> on the ML, is output formatting. There is a default precision/width
>> assumption built into RTT v1. Will that still exist in v2? Will there be
>> any way to configure it? Will we be able to specify formatting on a
>> per-type basis?
>
> Good point. One possibility is to allow you to pass a stream object to the
> conversion code that got the necessary formatting set. This would override any
> default/previous set parameters.
>
> Alternatively, we could add a 'setWidth' method to TypeInfo such that all
> following outputting requests will use the available width. This is then a
> global setting, subject to 'races'.
>
> Probably both are needed.
>
> Peter

Yeah, I'm really not sure about the overall best approach here. This is a tough one. BTW, we would need more than just setWidth(): also precision and fixed/scientific, at least, for some of my projects.

Maybe an overall default compiled in (setWidth, setPrecision and fixed/scientific), plus methods to change these defaults at runtime, and then per-type overrides. The first provides backward compatibility and some small extensions over v1, and would also work for many small projects IMHO. The second approach would provide fine-grained control, but with the additional responsibility of dealing with "races", etc.
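
A rough sketch of that layering, with every name invented for illustration (none of this is existing RTT API): compiled-in defaults, a runtime-changeable global default, and per-type overrides keyed on the type name.

#include <map>
#include <string>

struct DisplayFormat {
    int  width;        // total field width
    int  precision;    // digits after the decimal point
    bool scientific;   // fixed vs. scientific notation
};

// Compiled-in defaults, playing the role of the v1 built-in assumption.
static const DisplayFormat kBuiltinDefault = { 12, 6, false };

class FormatRegistry {
public:
    FormatRegistry() : default_(kBuiltinDefault) {}

    // Change the global default at runtime; affects every type without an override.
    void setDefault(const DisplayFormat& f) { default_ = f; }

    // Fine-grained control: a per-type override, keyed on the TypeInfo name.
    void setOverride(const std::string& typeName, const DisplayFormat& f) {
        overrides_[typeName] = f;
    }

    // What the reporting/serialization code would consult before writing a value.
    DisplayFormat lookup(const std::string& typeName) const {
        std::map<std::string, DisplayFormat>::const_iterator it =
            overrides_.find(typeName);
        return it != overrides_.end() ? it->second : default_;
    }

private:
    DisplayFormat default_;
    std::map<std::string, DisplayFormat> overrides_;
};

Concurrent access to such a registry is exactly where the "races" mentioned above would have to be dealt with.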

Do you know how ROS or others deal with this?

Stephen