[ANN] DNG - beyond deployment scripts

I'm pleased to announce DNG [1] (deployment next generation). This is
a proof of concept implementation of the Coordinator-Configurator
pattern (described here [2]). The basic idea is:

Split the coordinator into:

- a slim coordinator that only raises and receives events

- a Configurator, which is configured with a set of
  configurations. When a configuration event is received, the
  respective configuration is applied.

The Configurator-Configuration allows importing packages, creating
components, changing component state, setting properties, writing to
ports and calling operations. It is essentially a simple model of RTT
primitives that can be executed to change the system state.

Additionally, deployment can be specified as just another case of
coordination (i.e. applying a configuration that creates and
configures components).
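For readers who want something concrete, here is a minimal sketch of the pattern in Python (DNG itself is implemented in Lua; every name below is hypothetical and not DNG's actual API): a slim coordinator that only maps events to configuration names, and a Configurator that applies the named configuration as a list of primitive steps.

```python
class Configurator:
    """Holds a set of named configurations; applies one when asked."""
    def __init__(self, configurations):
        self.configurations = configurations  # name -> list of primitive steps

    def apply(self, name, system):
        # Each step is a simple model of an RTT primitive: a (verb, args)
        # tuple such as ("set_property", ...) or ("create_component", ...).
        for verb, *args in self.configurations[name]:
            getattr(system, verb)(*args)

class Coordinator:
    """Slim coordinator: only maps received events to configurations."""
    def __init__(self, configurator, system, event_map):
        self.configurator = configurator
        self.system = system
        self.event_map = event_map  # event name -> configuration name

    def on_event(self, event):
        conf = self.event_map.get(event)
        if conf is not None:
            self.configurator.apply(conf, self.system)

class DummySystem:
    """Stand-in for the RTT layer; records the primitives executed."""
    def __init__(self):
        self.log = []
    def import_package(self, pkg):
        self.log.append(("import", pkg))
    def create_component(self, name, ctype):
        self.log.append(("create", name, ctype))
    def set_property(self, comp, prop, val):
        self.log.append(("prop", comp, prop, val))

# Deployment expressed as just another configuration:
deploy = [
    ("import_package", "ocl"),
    ("create_component", "controller", "MyController"),
    ("set_property", "controller", "gain", 2.0),
]
system = DummySystem()
coord = Coordinator(Configurator({"deploy": deploy}), system,
                    {"e_deploy": "deploy"})
coord.on_event("e_deploy")
```

Note how "deploy" is just another configuration here: creating and configuring components is triggered by an event like any other state switch.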

Note: this is a prototype. It works for me, but is not tested very
well and is probably not very efficient.

For docs, see the README.md on the repository website.

Markus

[1] https://bitbucket.org/kmarkus/dng

[2] "Pure Coordination using the Coordinator–Configurator Pattern",
Markus Klotzbuecher, Geoffrey Biggs and Herman Bruyninckx. In
Proceedings of the 3rd International Workshop on Domain-Specific
Languages and models for ROBotic systems, November 2012. Japan.

[ANN] DNG - beyond deployment scripts

On 11/14/2012 04:17 PM, Markus Klotzbuecher wrote:
> I'm pleased to announce DNG [1] (deployment next generation). This is
> a proof of concept implementation of the Coordinator-Configurator
> pattern (described here [2]). The basic idea is:
>
> Split the coordinator into:
>
> - a slim coordinator that only raises and receives events
>
> - a Configurator, who is configured with a set of
> configurations. When a configuration event is received, the
> respective configuration is applied.
>
> The Configurator-Configuration allows importing packages, creating
> components, changing component state, setting properties, writing to
> ports and calling operations. It is essentially a simple model of RTT
> primitives that can be executed to change the system state.
>
> Additionally, deployment can be specified as just another case of
> coordination (i.e. applying a configuration that creates and
> configures components).
>
> Note: this is a prototype. It works for me, but is not tested very
> well and is probably not very efficient.
You should maybe have a look at (or did you already?)

http://www.rock-robotics.org/master/documentation/system/index.html

which does all of this and much more, is not a prototype, and has been
working for years now on pretty complex systems.

[ANN] DNG - beyond deployment scripts

On Wed, Nov 14, 2012 at 04:34:34PM +0100, Sylvain Joyeux wrote:
> On 11/14/2012 04:17 PM, Markus Klotzbuecher wrote:
> >I'm pleased to announce DNG [1] (deployment next generation). This is
> >a proof of concept implementation of the Coordinator-Configurator
> >pattern (described here [2]). The basic idea is:
> >
> >Split the coordinator into:
> > - a slim coordinator that only raises and receives events
> >
> > - a Configurator, who is configured with a set of
> > configurations. When a configuration event is received, the
> > respective configuration is applied.
> >
> >The Configurator-Configuration allows importing packages, creating
> >components, changing component state, setting properties, writing to
> >ports and calling operations. It is essentially a simple model of RTT
> >primitives that can be executed to change the system state.
> >
> >Additionally, deployment can be specified as just another case of
> >coordination (i.e. applying a configuration that creates and
> >configures components).
> >
> >Note: this is a prototype. It works for me, but is not tested very
> >well and is probably not very efficient.
> You should maybe have had a look at (or did you already ?)
>
> http://www.rock-robotics.org/master/documentation/system/index.html
>
> which does all of this and much more, is not a prototype, works for
> years now on pretty complex systems already.

Yes, Roby is sort of an intelligent configurator.

But... the major problem is it's not remotely real-time safe. Yet,
even the simple example in the paper has a real-time constraint: the
switch from impedance control to gravity compensation upon exceeding
the force threshold.

Hence, I see the need for two levels: the Roby cerebrum and the DNG
medulla oblongata.
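To illustrate the constraint (a Python sketch; the function name and threshold value are made up for illustration, and the real switch would live in the Lua/RTT layer): the force check and the resulting mode switch must happen inside the same fast control cycle, which is why they cannot be delegated to a non-real-time layer.

```python
FORCE_THRESHOLD = 5.0  # hypothetical value, for illustration only

def control_step(force, mode):
    """One cycle of the low-level coordinator: check the force reading
    and switch mode in the same cycle, without a round trip to any
    non-real-time planner."""
    if mode == "impedance" and force > FORCE_THRESHOLD:
        return "gravity_compensation"  # apply the pre-loaded configuration
    return mode

mode = "impedance"
for force in [1.0, 3.0, 6.2, 2.0]:
    mode = control_step(force, mode)
# once the threshold was exceeded, the mode stays "gravity_compensation"
```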

Markus

[ANN] DNG - beyond deployment scripts

On 11/15/2012 10:01 AM, Markus Klotzbuecher wrote:
> Yes, Roby is sort of an intelligent configurator.
>
> But... the major problem is it's not remotely real-time safe. Yet,
> even the simple example in the paper has a real-time constraint: the
> switch from impedance control to gravity compensation upon exceeding
> the force threshold.
>
> Hence, I see the need for two levels: the Roby cerebrum and the DNG
> medulla oblongata.
As usual, it is easy to find reasons to duplicate other people's work.

I actually do agree on the need for two levels Roby/lua except that I
believe that you *are* unnecessarily duplicating what Syskit (the new
name for the component-oriented configuration / deployment plugin for
Roby) does. Modelling the switching behaviour at the lua level is not
required at all ... IMO one should compute it at the Syskit level and
generate a state machine for the lua level.

The benefit? You don't isolate information (i.e. you have a system
view of your coordination), and you can decide at any time to migrate
any part or subpart of the switching management to the realtime level.
An example of a problem there: what if the lua level "decides" to
switch, but that affects other parts of the system (you change a
configuration that other parts of the system required)? Boom. And so
on, and so forth.
Obviously, you also benefit from the (very extensive) work done in
syskit -- including the graphical interfaces.
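The division of labour proposed here can be sketched as follows (Python for illustration; all names are hypothetical, and this is not Syskit's or DNG's actual API): the high level computes a transition table offline, and the low level merely looks transitions up, taking no decisions of its own.

```python
def plan_switching():
    """Stand-in for the high-level (Syskit-style) computation: emit a
    state-machine transition table for the lower level to execute."""
    return {
        ("impedance", "e_force_exceeded"): "gravity_compensation",
        ("gravity_compensation", "e_force_ok"): "impedance",
    }

class TableExecutor:
    """The 'dumb' low-level executor: no decisions, only table lookups,
    so it can stay small and real-time safe."""
    def __init__(self, table, initial):
        self.table = table
        self.state = initial

    def on_event(self, event):
        # unknown (state, event) pairs leave the state unchanged
        self.state = self.table.get((self.state, event), self.state)

ex = TableExecutor(plan_switching(), "impedance")
ex.on_event("e_force_exceeded")
```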

[ANN] DNG - beyond deployment scripts

On Thu, 15 Nov 2012, Sylvain Joyeux wrote:

> On 11/15/2012 10:01 AM, Markus Klotzbuecher wrote:
>> Yes, Roby is sort of an intelligent configurator.
>>
>> But... the major problem is it's not remotely real-time safe. Yet,
>> even the simple example in the paper has a real-time constraint: the
>> switch from impedance control to gravity compensation upon exceeding
>> the force threshold.
>>
>> Hence, I see the need for two levels: the Roby cerebrum and the DNG
>> medulla oblongata.
> As usual, it is easy to find reasons to duplicate other people's work.
>
> I actually do agree on the need for two levels Roby/lua except that I
> believe that you *are* unnecessarily duplicating what Syskit (the new
> name for the component-oriented configuration / deployment plugin for
> Roby) does. Modelling the switching behaviour at the lua level is not
> required at all ... IMO one should compute it at the Syskit level and
> generate a state machine for the lua level.
>
> The benefit ? You don't isolate information (i.e. you have a system view
> of your coordination), and you can decide at any time to migrate any
> part or subpart of the switching management to the realtime switching.
> An example of a problem there: if the lua level "decides" to switch but
> that affects other parts of the system (you change configuration that
> other parts of the system required) ? Boom. And so on, and so forth.
> Obviously, you also benefit from the (very extensive) work done in
> syskit -- including the graphical interfaces.

Lua is both a model and an executable program, so there is no
contradiction in what Markus and you "defend": the Lua script can be
generated by Syskit, great!

But Sylvain, you only reacted to half of Markus' message, namely the
part that deployment is Coordination. His message also has a second
part: the separation of pure Coordination from Configuration. Maybe
you have also solved that long before our poor minds tried to put it
into a "best practice", but then I may have overlooked that :-)

> Sylvain Joyeux (Dr.Ing.)

Herman

> Senior Researcher
>
> Space & Security Robotics
> Underwater Robotics
>
> !!! Achtung, neue Telefonnummer!!!
>
> Standort Bremen:
> DFKI GmbH
> Robotics Innovation Center
> Robert-Hooke-Straße 5
> 28359 Bremen, Germany
>
> Phone: +49 (0)421 178-454136
> Fax: +49 (0)421 218-454150
> E-Mail: robotik [..] ...
>
> Weitere Informationen: http://www.dfki.de/robotik
> -----------------------------------------------------------------------
> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
> Firmensitz: Trippstadter Straße 122, D-67663 Kaiserslautern
> Geschaeftsfuehrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster
> (Vorsitzender) Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
> Amtsgericht Kaiserslautern, HRB 2313
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973
> Steuernummer: 19/673/0060/3
> -----------------------------------------------------------------------
>

[ANN] DNG - beyond deployment scripts

On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
> Lua is, both, a model and an executable program, so there is no
> contradiction in what Markus and you "defend": the Lua script can be
> generated by Syskit, great!
"Can" and "will" are two different things. Last time I heard something
like that, it was about typelib vs. TypeInfo. In the end, we ended up
having two different type systems "just because".

[ANN] DNG - beyond deployment scripts

On Thu, 15 Nov 2012, Sylvain Joyeux wrote:

> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>> Lua is, both, a model and an executable program, so there is no
>> contradiction in what Markus and you "defend": the Lua script can be
>> generated by Syskit, great!
> "Can" and "will" are two different things. Last time I heard something like
> that, it was about typelib vs. TypeInfo. In the end, we ended up having two
> different type systems "just because".

Agreed!

So let's come up with a "document" guiding the community towards better
practices in the future! :-)

> Sylvain Joyeux (Dr.Ing.)

Herman


--
KU Leuven, Mechanical Engineering, Robotics Research Group
<http://people.mech.kuleuven.be/~bruyninc> Tel: +32 16 328056
Vice-President Research euRobotics <http://www.eu-robotics.net>
Open RObot COntrol Software <http://www.orocos.org>
Associate Editor JOSER <http://www.joser.org>, IJRR <http://www.ijrr.org>

[ANN] DNG - beyond deployment scripts

On 11/15/2012 08:06 PM, Herman Bruyninckx wrote:
> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>
>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>> Lua is, both, a model and an executable program, so there is no
>>> contradiction in what Markus and you "defend": the Lua script can be
>>> generated by Syskit, great!
>> "Can" and "will" are two different things. Last time I heard
>> something like that, it was about typelib vs. TypeInfo. In the end,
>> we ended up having two different type systems "just because".
>
> Agreed!
>
> So let's come up with a "document" guiding the community towards better
> practices in the future! :-)
That's where we disagree. I know you are a big fan of writing papers
and harmonizing meta-models, but I am personally more looking for a
pooling of development effort, which is really where the money goes in
the end.

We are, here, on orocos-users, and all (I thought) supposedly working
towards having a common toolchain. At least, I believed that was the
plan when we made the effort of creating something called "the orocos
toolchain".

I say "thought" and "supposedly" because I don't see that being
realized. No amount of document writing will change it. In my opinion,
it is not about "best practices", it is about sharing development
effort. Writing papers is cheap: professors can do it, and there is
only a limited amount of debugging involved. Writing software is a lot
more complicated, and it is IMO a waste of time and money to try to
define things before you have practical experience with them anyway
(at which point it is too late to write papers to "harmonize" or
whatever, since the software is already there and nobody has the
amount of resources required to rewrite it to match the harmonized
stuff).

[ANN] DNG - beyond deployment scripts

On Fri, Nov 16, 2012 at 2:41 PM, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
> On 11/15/2012 08:06 PM, Herman Bruyninckx wrote:
>> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>>
>>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>>> Lua is, both, a model and an executable program, so there is no
>>>> contradiction in what Markus and you "defend": the Lua script can be
>>>> generated by Syskit, great!
>>> "Can" and "will" are two different things. Last time I heard
>>> something like that, it was about typelib vs. TypeInfo. In the end,
>>> we ended up having two different type systems "just because".
>>
>> Agreed!
>>
>> So let's come up with a "document" guiding the community towards better
>> practices in the future! :-)
> That's where we disagree. I know you are a big fan of writing papers and
> harmonizing meta-models, but I am personally more looking for pooling of
> development effort, which is really where the money goes in the end.
>
> We are, here, on orocos-users and all (I thought) supposedly working
> towards having a common toolchain. At least, I believed that was the
> plan when we made the effort of creating something called "the orocos
> toolchain"
>
> I say "thought" and "supposedly" because I don't see that being
> realized.

I don't fully agree with your perspective. Allow me also to reply to
your 'just because' argument about the type system:

The reason we are not as far as we hoped can be captured in one word: 'legacy'.

- Typekits could not be dynamically extended, meaning that if
typegen/orogen was used to generate a typekit, a hand-written one
could no longer be used, and vice versa (typekit legacy). -> we fixed
this one in 2.6 / master.
- We have two type systems because, at that point in time, typelib
could not easily replace scripting functionality, such as the
operators and constructors we had in RTT (scripting legacy).
- Not many people switched to orogen, because they have existing
components which can't be picked up by orogen deployments (component
legacy).

Compare that last point to the adoption of 'rttlua': it works with any
existing Orocos component and with any typekit, generated with any
tool or hand-written. It's so much easier to be 'tricked' into lua,
because you can just start it in an existing system. I know very well
that, because of this, it has fewer capabilities than orogen, but it
does explain this evolution.

- There's also the big issue of running things on Mac OS X and
Windows, which influenced decisions (operating system legacy).

On the positive side, we did at least benefit from the 'orocos
toolchain', having typegen for quite some typekits. Also, typelib is
better now, relying on gccxml (for Linux systems!) and rock-stable.
So it won't go away imho.

Maybe we can reopen this 'legacy' discussion vs orogen again and see how:

- generated orogen components can be used in existing OCL deployments
(define some unit tests that check this for every release)
- existing legacy components can be used in new orogen deployments
(load component, introspect the interface and dump it to .orogen ?
Markus wrote a tool that does dump the interface to various formats)

> No amount of document writing will change it. In my opinion, it is
> not about "best practices", it is about sharing development effort.
> Writing papers is cheap: professors can do it, and there is only a
> limited amount of debugging involved. Writing software is a lot more
> complicated, and it is IMO a waste of time and money to try to define
> things before you have practical experience with them anyway (at
> which point it is too late to write papers to "harmonize" or
> whatever, since the software is already there and nobody has the
> amount of resources required to rewrite it to match the harmonized
> stuff).

Ack. That's why people should have time to do this together, but we don't.

Peter

[ANN] DNG - beyond deployment scripts

On 11/16/2012 05:34 PM, Peter Soetens wrote:
>
> I don't fully agree with your perspective. Allow me also to reply on
> your 'just because' argument of the type system:
>
> The reason we are not as far as we hoped can be captured in one word: 'legacy'.
Yes, and as we discussed in the ERF, this is a point I do understand.

Now, quite a big amount of effort has been put into *extending* the
existing solutions to also match the "new" use cases. Especially the
lua stuff, which could have used the typelib solution as much as the
RTT typekit solution. A choice has been made there between ignoring
the typelib stuff or spending what was maybe an equivalent development
effort to converge towards a single solution.

The point where the "community" part was lacking is that this decision
has been made "behind closed doors", i.e. without even trying to sketch
a common solution. That is what I see happening again. And again. We
have discussed ways to try to converge again ... but, honestly, how much
effort are you ready to put into this?
> - Typekits could not be dynamically extended, meaning that if
> typegen/orogen was used to generate a typekit, a hand-written one
> could no longer be used and vice versa (typekit legacy). -> we fixed
> this one in 2.6 / master.
Ack.
> - We have two type systems because at that point in time, typelib
> could not replace easily scripting functionality, such as operators
> and constructors, we had in RTT (scripting legacy).
Moot point in the rttlua stuff, as rttlua can "introspect" types the
same way as the Ruby side does. I.e. you can define these operators on
the lua side.
> - Not so much people switched to orogen because they have existing
> components which can't be picked up by orogen deployments (component
> legacy).
I am really wondering why there is this fixation on oroGen and
deployments. I saw, a year or so ago, to my great surprise, a line on
the orocos wiki saying that "oroGen is a static deployment generator".
What? Where is that coming from? oroGen has always been and still is
mostly a component generator. You can continue using the OCL deployer
with oroGen components, and you can actually make oroGen generate
deployments that use "legacy" components. Once upon a time, oroGen
deployments even contained task browsers. That is not documented
anywhere, since it is not a use case I had even thought about. Now,
again, in a common-project state of mind, the issue could have been
raised by other fellow developers that knew about it.
> Compare that last point to the adoption of 'rttlua': it works with any
> existing Orocos component and with any typekit, generated with any
> tool or hand written.
That only works if someone is nice enough to make the typekits have
the required interface. I did that, and spent effort doing it, in
typegen ... but it could just as well not have been done. In
particular, if I had only thought about "the Rock use case", then I
would definitely NOT have spent that effort.
> It's so much easier to be 'tricked' into lua,
> because you can just start it in an existing system. I know very well
> that because of that, it has less capabilities than orogen, but it
> does explain this evolution.
A more suitable comparison would be ruby vs. rttlua. lua and oroGen
are two completely orthogonal things, and I was and still am vocal in
supporting the existence of the lua scripting. At least until it
starts extending itself along a path where it is going to become a lot
bigger and duplicate existing RTT-oriented efforts. At that point I am
raising a red flag. Just so that, at least, this duplication is done
willingly (and people can't pretend that "they did not know").

> On the positive side, we did at least benefit from the 'orocos
> toolchain', having typegen for quite some typekits.
You did, we (Rock) did not. Think about that.
> Maybe we can reopen this 'legacy' discussion vs orogen again and see how:
>
> - generated orogen components can be used in existing OCL deployments
> (define some unit tests that checks this for every release)
This has already worked for quite some time now. You have to realize
that I don't have an OCL legacy setup, so it would take me quite some
time to first test this properly.
> - existing legacy components can be used in new orogen deployments (
> load component, introspect interface and dump to .orogen ? Markus
> wrote a tool that does dump the interface to various formats)
The only issue with this is that the task context constructors are
supposed to follow a certain interface (which is the one from the main
TaskContext class).
>> No amount of document writing will change it. In my opinion, it is
>> not about "best practices", it is about sharing development effort.
>> Writing papers is cheap: professors can do it, and there is only a
>> limited amount of debugging involved. Writing software is a lot more
>> complicated, and it is IMO a waste of time and money to try to define
>> things before you have practical experience with them anyway (at
>> which point it is too late to write papers to "harmonize" or
>> whatever, since the software is already there and nobody has the
>> amount of resources required to rewrite it to match the harmonized
>> stuff).
> Ack. That's why people should have time to do this together, but we don't.
What does your "this" refer to?

---
Sylvain Joyeux (Dr.Ing.)

[ANN] DNG - beyond deployment scripts

On Fri, Nov 16, 2012 at 6:15 PM, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
> On 11/16/2012 05:34 PM, Peter Soetens wrote:
>>
>>
>> I don't fully agree with your perspective. Allow me also to reply on
>> your 'just because' argument of the type system:
>>
>> The reason we are not as far as we hoped can be captured in one word:
>> 'legacy'.
>
> Yes, and as we discussed in the ERF, this is a point I do understand.
>
> Now, quite a big amount of effort has been put in *extending* the existing
> solutions to also match the "new" use cases. Especially the lua stuff ...
> Which could have used the typelib solution as much as the RTT typekit
> solution. A choice has been made there between ignoring the typelib stuff or
> spending what was maybe an equivalent development effort to converge towards
> a single solution.

Maybe. The irony is that the typekit was initially only written to
suit RTT scripting and XML marshalling. That same code then became
usable in other places as well, such as RTT transports (per 2.0) or
rttlua. From an implementation point of view, only a little code &
time was invested in typekits compared to the code using the typekits.
The typekit API is, for example, still inferior to typelib, exposing
less information about the type; Markus has been complaining about
this wrt rttlua, btw.

>
> The point where the "community" part was lacking is that this decision has
> been made "behind closed doors", i.e. without even trying to sketch a common
> solution. That is what I see happening again. And again. We have discussed
> ways to try to converge again ... but, honestly, how much effort are you
> ready to put into this ?

My personal effort will be limited to paid-for work (directly or
indirectly, if permitted) or to guiding/assisting others in getting
their patches onto the mainline.

>
>> - Typekits could not be dynamically extended, meaning that if
>> typegen/orogen was used to generate a typekit, a hand-written one
>> could no longer be used and vice versa (typekit legacy). -> we fixed
>> this one in 2.6 / master.
>
> Ack.
>
>> - We have two type systems because at that point in time, typelib
>> could not replace easily scripting functionality, such as operators
>> and constructors, we had in RTT (scripting legacy).
>
> Moot point in the rttlua stuff, as rttlua can "introspect" types the same
> way than the Ruby side does. I.e. you can define these operators on the lua
> side.

Point taken. There was no lua back then though, only 'classical'
scripting. The operators we're thinking of are the ones KDL defines,
which are not available in lua, or at least not efficiently. It's not
a major argument, I guess, but if you have the same functions in your
rapid-prototyping language as in C++, that's an advantage.

>
>> - Not so much people switched to orogen because they have existing
>> components which can't be picked up by orogen deployments (component
>> legacy).
>
> I am really wondering why there is this fixation about oroGen and
> deployments. I saw, a year or so ago, to my great surprise, a line on the
> orocos wiki saying that "oroGen is a static deployment generator". What ?
> Where is that coming from ? oroGen has always been and still is mostly a
> component generator. You can continue using the OCL deployer with oroGen
> components and you actually can make oroGen generate deployments that use
> "legacy" components. Once upon a time, oroGen deployments were containing
> task browsers. Not documented anywhere since it is not a use case I even
> thought about. Now, again, in a common-project state of mind, the issue
> could have been raised by other fellow developers that knew about the issue.

Hmm... you're right of course. I saw the added value of oroGen in
generating deployments, though, and would not consider it for solely
generating some boilerplate code. But that's probably based more on
prejudice than on experimental findings.
We definitely need to look into the rock libraries and tools, i.e.
even beyond oroGen.

>
>> Compare that last point to the adoption of 'rttlua': it works with any
>> existing Orocos component and with any typekit, generated with any
>> tool or hand written.
>
> If someone is nice enough to make the typekits have the required interface.
> I did that, and spent effort doing it, in typegen ... But could not have
> been. Particularly, if I only thought about "the Rock use case", then I
> would definitely NOT have spent that effort.

Clearly.

>
>> It's so much easier to be 'tricked' into lua,
>> because you can just start it in an existing system. I know very well
>> that because of that, it has less capabilities than orogen, but it
>> does explain this evolution.
>
> A more suitable comparison would be ruby vs. rttlua. lua and oroGen are two
> completely orthogonal things, and I was and still is vocal supporting the
> existence of the lua scripting. At least until it is starting to extend
> itself towards a path where it is going to become a lot bigger and duplicate
> existing RTT-oriented efforts. At which point I am raising a red flag. Just
> so that, at least, this duplication is done willingly (and people can't
> pretend that "they did not know").

Duty well done :-)

>
>
>> On the positive side, we did at least benefit from the 'orocos
>> toolchain', having typegen for quite some typekits.
>
> You did, we(rock) did not. Think about that.

As in: not benefiting from the components/work of other people, or as
in: not having any benefit from the orocos toolchain as a separate
entity?

>
>> Maybe we can reopen this 'legacy' discussion vs orogen again and see how:
>>
>> - generated orogen components can be used in existing OCL deployments
>> (define some unit tests that checks this for every release)
>
> This should already work for quite some time now. You have to realize that I
> don't have an OCL legacy setup, so it would take me quite some time to first
> test this properly.

Actually, I just remembered that a Jenkins job is doing something like that.

>
>> - existing legacy components can be used in new orogen deployments (
>> load component, introspect interface and dump to .orogen ? Markus
>> wrote a tool that does dump the interface to various formats)
>
> The only issue with this is that the task context constructors are supposed
> to follow a certain interface (which is the one from the main TaskContext
> class).

That's a minor requirement....

>
>>> No amount of document writing will change it. In my opinion, it is
>>> not about "best practices", it is about sharing development effort.
>>> Writing papers is cheap: professors can do it, and there is only a
>>> limited amount of debugging involved. Writing software is a lot more
>>> complicated, and it is IMO a waste of time and money to try to define
>>> things before you have practical experience with them anyway (at
>>> which point it is too late to write papers to "harmonize" or
>>> whatever, since the software is already there and nobody has the
>>> amount of resources required to rewrite it to match the harmonized
>>> stuff).
>>
>> Ack. That's why people should have time to do this together, but we don't.
>
> What does your "this" refer to?

"sharing development effort."

Peter

[ANN] DNG - beyond deployment scripts

On 11/17/2012 01:18 AM, Peter Soetens wrote:
>
> Hmm... you're right of course. I saw the added value of oroGen in
> generating deployments though and would not consider it for solely
> generating some boilerplate code.
I don't see how not having to learn the intricate RTT C++ API is not a
clear benefit in general. I just checked the oroGen-generated part of
our graph_slam component ... and can tell you that nobody at DFKI
would have accepted RTT without oroGen doing the interface
declaration. Even I would have given up on it. This
http://www.rock-robotics.org/stable/documentation/orogen/orogen_cheat_sh...

is all that our users need to know about creating and deploying RTT
components. And I am not even talking about the integration of
higher-level stuff like the stream aligner and the transformer (which
the users also only see as a very high-level interface).
>> You did, we(rock) did not. Think about that.
> As in: -not benefiting from the components/work of other people or as
> in: - not having any benefit of the orocos toolchain as a separate
> entity ?
Currently, both. The first part I don't mind so much, given how few
publicly available components there were on the Orocos side from the
very beginning. The second more, as it basically means that any
continued effort on the orocos toolchain from me will have zero
benefit on my side.

>>>> No amount of document writing will change it. In my opinion, it is
>>>> not about "best practices", it is about sharing development effort.
>>>> Writing papers is cheap: professors can do it, and there is only a
>>>> limited amount of debugging involved. Writing software is a lot more
>>>> complicated, and it is IMO a waste of time and money to try to define
>>>> things before you have practical experience with them anyway (at
>>>> which point it is too late to write papers to "harmonize" or
>>>> whatever, since the software is already there and nobody has the
>>>> amount of resources required to rewrite it to match the harmonized
>>>> stuff).
>>> Ack. That's why people should have time to do this together, but we don't.
>> What do your "this" refer to
> "sharing development effort."
This is always a matter of cost vs. benefit. From that statement, I get
that your analysis is that sharing development effort on the toolchain
is costing you more than you will ever get from it. In which case I
don't see how I could continue spending effort on it myself, since the
only benefit for me would be some convergence, i.e. having things done
once and not many times.

[ANN] DNG - beyond deployment scripts

On Sat, Nov 17, 2012 at 01:18:11AM +0100, Peter Soetens wrote:
> On Fri, Nov 16, 2012 at 6:15 PM, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
> > On 11/16/2012 05:34 PM, Peter Soetens wrote:
> >>
> >>
> >> I don't fully agree with your perspective. Allow me also to reply on
> >> your 'just because' argument of the type system:
> >>
> >> The reason we are not as far as we hoped can be captured in one word:
> >> 'legacy'.
> >
> > Yes, and as we discussed in the ERF, this is a point I do understand.
> >
> > Now, quite a big amount of effort has been put in *extending* the existing
> > solutions to also match the "new" use cases. Especially the lua stuff ...
> > Which could have used the typelib solution as much as the RTT typekit
> > solution. A choice has been made there between ignoring the typelib stuff or
> > spending what was maybe an equivalent development effort to converge towards
> > a single solution.
>
> Maybe. The irony is that the typekit was initially only written to
> suit RTT scripting and XML marshalling. That same code then became
> usable in other places as well such as RTT transports (per 2.0) or
> rttlua. From an implementation point of view, only little code&time
> was invested in typekits compared to the code using the typekits. The
> typekit API is for example still inferior to typelib, exposing less
> information about the type, Markus has been complaining about this wrt
> rttlua btw.
>
> >
> > The point where the "community" part was lacking is that this decision has
> > been made "behind closed doors", i.e. without even trying to sketch a common
> > solution. That is what I see happening again. And again. We have discussed
> > ways to try to converge again ... but, honestly, how much effort are you
> > ready to put into this ?
>
> My personal effort will be limited to paid-for stuff (directly or
> indirectly if permitted) or guiding/assisting others into getting
> their patches onto the mainline.
>
> >
> >> - Typekits could not be dynamically extended, meaning that if
> >> typegen/orogen was used to generate a typekit, a hand-written one
> >> could no longer be used and vice versa (typekit legacy). -> we fixed
> >> this one in 2.6 / master.
> >
> > Ack.
> >
> >> - We have two type systems because at that point in time, typelib
> >> could not replace easily scripting functionality, such as operators
> >> and constructors, we had in RTT (scripting legacy).
> >
> > Moot point in the rttlua stuff, as rttlua can "introspect" types the same
> > way than the Ruby side does. I.e. you can define these operators on the lua
> > side.
>
> Point taken. There was no lua back then though, only 'classical'
> scripting. The operators we're thinking about are the ones KDL
> defines, which are not available in lua, or at least not efficiently.
> It's not a major argument I guess, but if you have the same functions
> in your rapid-prototyping language as in C++, that's an advantage.
>
> >
> >> - Not so much people switched to orogen because they have existing
> >> components which can't be picked up by orogen deployments (component
> >> legacy).
> >
> > I am really wondering why there is this fixation about oroGen and
> > deployments. I saw, a year or so ago, to my great surprise, a line on the
> > orocos wiki saying that "oroGen is a static deployment generator". What ?
> > Where is that coming from ? oroGen has always been and still is mostly a
> > component generator. You can continue using the OCL deployer with oroGen
> > components and you actually can make oroGen generate deployments that use
> > "legacy" components. Once upon a time, oroGen deployments were containing
> > task browsers. Not documented anywhere since it is not a use case I even
> > thought about. Now, again, in a common-project state of mind, the issue
> > could have been raised by other fellow developers that knew about the issue.
>
> Hmm... you're right of course. I saw the added value of oroGen in
> generating deployments though and would not consider it for solely
> generating some boilerplate code. But that's probably based more on
> prejudice than on experimental findings.
> We definitely need to look into the rock libraries and tools, ie even
> beyond oroGen.
>
> >
> >> Compare that last point to the adoption of 'rttlua': it works with any
> >> existing Orocos component and with any typekit, generated with any
> >> tool or hand written.
> >
> > If someone is nice enough to make the typekits have the required interface.
> > I did that, and spent effort doing it, in typegen ... But could not have
> > been. Particularly, if I only thought about "the Rock use case", then I
> > would definitely NOT have spent that effort.
>
> Clearly.

Alas, how emotional can a discussion about software toolchains get?
This is not about being nice (or sullen, for that matter) but about
technical advantages. If the ROCK toolchain solves a problem that's
itching hard enough, someone will make the effort to port the tool or
pay somebody to do it. Or maybe just use ROCK. Users will decide for
themselves. You should have known that the path you've chosen of
forking and rebranding does not help to get plain orocos users to adopt
your stuff.

I acknowledge your SysKit and Roby tools, and that they are similar to
and more featureful than this prototype. But I also hold on to the point
that a low-level, real-time safe configurator has its place. If a tool
satisfies an essential property that another can't, it can hardly be
called duplication.

Markus

[ANN] DNG - beyond deployment scripts

On 11/17/2012 10:47 AM, Markus Klotzbuecher wrote:
> Alas, how emotional can a discussion about software toolchains get?
It is *not* a discussion about software toolchains. Or, actually, it is
a discussion about the plural applied to Orocos ToolchainS.
> This is not about being nice (or sullen, for that matter) but by
> technical advantages.
In my case, it is about knowing where "the orocos toolchain" stands.
Maybe I *am* a bit bitter about this whole thing (and probably my
baby-induced lack of sleep for the last two weeks did not help either
...) in which case I apologize. I hope I stayed civilized anyways ...

So far, at ERF, there was this nice bullet point list of "how can we
converge". Similar lists had been made a few times already, but nothing
comes out of them. BRIDE decides to do its own code generator. You start
something that will, in the end, duplicate. So now I am asking again:

Where does the orocos toolchain stand ? Where are you guys leading
it ? How is its progression related to Rock (e.g. is that a
"don't care" I get from you, Markus) ?

Which is related to the underlying question (a question that, obviously,
only I can answer):

Does it make sense that I continue spending effort on it ?

> If the ROCK toolchain solves a problem thats
> itching hard enough, someone will do the effort to port the tool or
> pay somebody to do it.
Agreed. And that is what is currently happening - I get more requests
for cooperation from users and developers that are currently
ROS-oriented than from the orocos toolchain developers. Which I think is
a pity. When I went to the orocos developers' meeting, it was mainly to
get a *developer* community started. This particular comment from you
shows me that it was NOT the general consensus.
> Or maybe just use ROCK. Users will decide for
> themselves. You should have known that the path you've chosen of
> forking and rebranding is not contributing to get plain orocos users
> to adopt your stuff.
The Rock brand started because I asked about making the now-Rock and
then-not-yet-Rock tools (orocos.rb, syskit, the data logger, ...) part
of the orocos toolchain and that has been voted down ... mostly by KUL
guys. Short memory, Markus ?

As for forking, I don't see why you say that we forked. Everything that
is within the orocos toolchain is kept in sync (as much as I can) with
the Rock stuff. Hardly a fork. But maybe it is, in the end, the
underlying question: should I actually make it a fork ? I.e. let you
guys "pick" the typegen / typelib stuff from Rock and stop caring ?

[ANN] DNG - beyond deployment scripts

On Mon, Nov 19, 2012 at 11:19:03AM +0100, Sylvain Joyeux wrote:
> On 11/17/2012 10:47 AM, Markus Klotzbuecher wrote:
> >Alas, how emotional can a discussion about software toolchains get?
> It is *not* a discussion about software toolchains. Or, actually it
> is a discussion the plural applied to Orocos ToolchainS.
> >This is not about being nice (or sullen, for that matter) but by
> >technical advantages.
> In my case, it is about knowing where "the orocos toolchain" stands.
> Maybe I *am* a bit bitter about this whole thing (and probably my
> baby-induced lack of sleep for the last two weeks did not help
> either ...) in which case I apologize. I hope I stayed civilized
> anyways ...

Considering it's you, you're doing quite well :-)

> So far, at ERF, there was this nice bullet point list of "how can we
> converge". Similar lists had been done already a few times, but
> nothing goes out of it. BRIDE decides to do its own code generator.
> You start something that will in the end duplicate. So now I am
> asking again:
>
> Where does the orocos toolchain stands ? Where do you guys are
> leading it ? How does is progression related to Rock (e.g. is that a
> "don't care" I get from you, Markus) ?
> Which is related to the underlying question (question that I can be
> the only one to answer, obviously)
>
> Does it make sense that I continue spending effort on it ?
>
> >If the ROCK toolchain solves a problem thats
> >itching hard enough, someone will do the effort to port the tool or
> >pay somebody to do it.
> Agreed. And that is what currently happening - I get more requests
> for cooperation with users and developers that are currently
> ROS-oriented than with the orocos toolchain developers. Which I
> think is a pity. When I went to the orocos developer's meeting, it
> was mainly to get a *developer* community started. This particular
> comment from you shows me that it was NOT the general consensus.
> >Or maybe just use ROCK. Users will decide for
> >themselves. You should have known that the path you've chosen of
> >forking and rebranding is not contributing to get plain orocos users
> >to adopt your stuff.
> The Rock brand started because I asked about making the now-Rock and
> then-not-yet-Rock tools (orocos.rb, syskit, the data logger, ...)
> part of the orocos toolchain and that has been voted down ... mostly
> by KUL guys. Short memory, Markus ?

Yes. And I also don't remember the reasons anymore. But looking at
things today, I'm starting to think that might have been the wrong
choice. The only reason I can imagine is the fear of "plain" RTT users
(including myself) of being forced into a certain workflow. If this
is not the case (as I would suspect, for some tools at least), I think
the division is unnecessary and even harmful.

> As for forking, I don't see why you say that we forked. Everything
> that is within the orocos toolchain is kept in sync (as much as I
> can) with the Rock stuff. Hardly a fork. But maybe it is, in the
> end, the underlying question: should I actually make it a fork ?
> I.e. let you guys "pick" the typegen / typelib stuff from Rock and
> stop caring ?

It's probably more a sensed fork than a real one, but potentially just
as confusing.

Let me make it clear: I do see a lot of value in the work you have
done for RTT. I'm interested in collaborating to get both our
"Configurator" requirement levels supported consistently and without
overlap. If the easiest way to achieve that means ditching this
prototype, no problem. But one requirement for me to adopt rock tools
is being able to do this incrementally, without having to switch to an
entirely new toolchain + buildsystem.

Markus

[ANN] DNG - beyond deployment scripts

On 11/19/2012 12:25 PM, Markus Klotzbuecher wrote:
> Let me make it clear: I do see a lot of value in the work you have
> done for RTT. I'm interested in collaborating to get both our
> "Configurator" requirement levels supported consistently and without
> overlap. If the easiest way to achieve that means ditching this
> prototype, no problem. But one requirement for me to adopt rock tools
> is to be able to do this incrementally, and not having to switch to an
> entire new toolchain + buildsystem.
Which is a mantra I have heard a few times already.

Given how much I have worked towards this end and what I see as a lack
of results so far, I honestly need more than a statement of intent. I
need somebody to "put his money where his mouth is". In other words, try
it and tell me *specifically* what blockers he is seeing. Get a fix and
iterate.

With what I see(*) as the current behaviour on the side of KUL, I would
never have written the ROS/Rock integration that I am currently doing.
Because, you know, I actually had to look at ROS ! Different toolchain,
different build system. Yuk !

(*) I am not saying that it is necessarily your actual behaviour. Only
the (probably distorted) perception I have of it.

[ANN] DNG - beyond deployment scripts

On Mon, 19 Nov 2012, Sylvain Joyeux wrote:

> On 11/17/2012 10:47 AM, Markus Klotzbuecher wrote:
>> Alas, how emotional can a discussion about software toolchains get?
> It is *not* a discussion about software toolchains. Or, actually it is a
> discussion the plural applied to Orocos ToolchainS.
>> This is not about being nice (or sullen, for that matter) but by
>> technical advantages.
> In my case, it is about knowing where "the orocos toolchain" stands.
> Maybe I *am* a bit bitter about this whole thing (and probably my
> baby-induced lack of sleep for the last two weeks did not help either
> ...) in which case I apologize. I hope I stayed civilized anyways ...

Sure, _and_ concrete and clear. Which is a good basis to work on.

> So far, at ERF, there was this nice bullet point list of "how can we
> converge". Similar lists had been done already a few times, but nothing
> goes out of it. BRIDE decides to do its own code generator. You start
> something that will in the end duplicate. So now I am asking again:
>
> Where does the orocos toolchain stands ? Where do you guys are
> leading it ? How does is progression related to Rock

Convergence is a multi-lateral effort. So, let's try to get constructive.
You refer to some earlier workshop meetings where "the toolchain" was
discussed; the most important message _I_ took home from these workshops is
that we should _first_ try to standardize on the "meta model", that is, the
meaning and the use cases of the "things" we all want to see in "the
toolchain". Before we have reached that point, bits and pieces of toolchain
will develop "in the wild", exactly because of the lack of a common,
"officially" agreed-upon meta model. That might be a pity, but it is
somewhat unavoidable, because of the strong influence of personal itches...
I have seen enough of this before, and I know how such things happen. I
have learned to live with it, _but_ am still very motivated to take
something positive out of the "abundance" of useful tools. :-)

So, my concrete set of questions now is:
- if there is agreement on "standardizing the meta model is the way to go",
then how shall we proceed?
- can deployment be sufficiently separated from other things to come up
with a deployment meta model? (I think it can, but I have no concrete
enough suggestion as of now.)
- can we spend some time on this at the Leuven workshop on February 11th?

If the vocal contributors to this thread agree, I would like to ask them to
start the meta model standardization effort by posting their set of:
- concepts
- relationships
- constraints
that must be part of the meta model we are after. Because a meta model is not more
difficult than this set of three components :-)
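To make that three-part definition concrete, here is a minimal, purely
illustrative sketch (Python; all names are invented for this example and do
not come from any Orocos or Rock tool) of a meta model as nothing more than
concepts, relationships and constraints, plus a check of a concrete model
against it:

```python
# Hypothetical sketch: a meta model as exactly the three ingredients named
# above. Invented names; not an API of any existing tool.

def make_meta_model(concepts, relationships, constraints):
    # relationships: (source concept, relation name, target concept)
    # constraints: predicates mapping a model to (ok, message)
    return {"concepts": set(concepts),
            "relationships": set(relationships),
            "constraints": list(constraints)}

def check(meta, model):
    """Validate a concrete model against the meta model.

    model = {"entities": {name: concept}, "links": [(src, rel, dst)]}
    """
    errors = []
    ent = model["entities"]
    # Every entity must instantiate a known concept.
    for name, concept in ent.items():
        if concept not in meta["concepts"]:
            errors.append("unknown concept: %s (%s)" % (concept, name))
    # Every link must instantiate a declared relationship.
    for src, rel, dst in model["links"]:
        if (ent.get(src), rel, ent.get(dst)) not in meta["relationships"]:
            errors.append("illegal relationship: %s -%s-> %s" % (src, rel, dst))
    # Finally, apply the global constraints.
    for pred in meta["constraints"]:
        ok, msg = pred(model)
        if not ok:
            errors.append(msg)
    return errors
```

A deployment meta model would then declare, say, "component" and
"connection" as concepts, "depends_on" as a relationship, and resource
budgets as constraints; concrete deployments become checkable models.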

A nice addition to the meta model is the identification of the "platforms"
we need to consider (ROS, RTT, plain OS, multi-core, EtherCat and other
fieldbusses, etc.), together with the "platform constraints" each of them
brings into the picture, and that have to be _added_ to the generic meta
model constraints, whenever concrete _models_ (and implementations) are
made.

> (e.g. is that a "don't care" I get from you, Markus) ?
>
> Which is related to the underlying question (question that I can be the
> only one to answer, obviously)
>
> Does it make sense that I continue spending effort on it ?

It would make a lot of sense if we could identify the "weak spots" of all
the existing tooling efforts, together with their "overlaps"...

>> If the ROCK toolchain solves a problem thats
>> itching hard enough, someone will do the effort to port the tool or
>> pay somebody to do it.
> Agreed. And that is what currently happening - I get more requests for
> cooperation with users and developers that are currently ROS-oriented
> than with the orocos toolchain developers. Which I think is a pity. When
> I went to the orocos developer's meeting, it was mainly to get a
> *developer* community started. This particular comment from you shows me
> that it was NOT the general consensus.
>> Or maybe just use ROCK. Users will decide for
>> themselves. You should have known that the path you've chosen of
>> forking and rebranding is not contributing to get plain orocos users
>> to adopt your stuff.
> The Rock brand started because I asked about making the now-Rock and
> then-not-yet-Rock tools (orocos.rb, syskit, the data logger, ...) part
> of the orocos toolchain and that has been voted down ... mostly by KUL
> guys. Short memory, Markus ?
>
> As for forking, I don't see why you say that we forked. Everything that
> is within the orocos toolchain is kept in sync (as much as I can) with
> the Rock stuff. Hardly a fork. But maybe it is, in the end, the
> underlying question: should I actually make it a fork ? I.e. let you
> guys "pick" the typegen / typelib stuff from Rock and stop caring ?

This might be the most pragmatic thing for you to do, for the time being. I
see and understand that you suffer from the current situation, and that
part of the problem is not on your side! I would not like this to happen,
but I would understand it, because, as mentioned above, it takes two (or
more) developers to tango.

Anyway, I have very good experiences with first trying to agree on the meta
model, because that avoids most of the "who forked what" discussions that
are inevitable at the implementation level, sigh...

Thanks to all participants for keeping this thread alive and full of
content. But it is time to stop the current kind of discussions, and start
formulating constructive ways forward.

I also think orocos-dev is the better forum for this discussion, so I post
this message also there, in the hope that we can continue there without
bothering users with the development-related discussions on meta modeling
etc. :-)

Herman

[ANN] DNG - beyond deployment scripts

(I am replying only on orocos-dev from now on)

On 11/19/2012 11:49 AM, Herman Bruyninckx wrote:
> So, my concrete set of questions now is:
> - if there is agreement on "standardizing the meta model is the way to
> go",
> then how shall we proceed?
It depends on what you call meta model in this context.

If it is a set of not necessarily formalized requirements, then yes.
Otherwise, please be more specific (I know you tried already :P)
> - can deployment be sufficiently separated from other things to come up
> with a deployment meta model? (I think it can, but I have no concrete
> enough suggestion by now.)
Then, again, maybe my point before is NOT what you meant, as the rest of
your email shows pretty clearly. I know you are big on this
meta-modelling thing, but putting things in boxes actually has a
tendency to tell people "Oh, X and Y are two different things" even
though, from an implementation P.O.V., they are almost the same.
> - can we spend some time on this on the Leuven workshop at February 11th?
I guess we could, or have a second day just for that purpose.

> This might be the most pragmatic thing for you to do, for the time
> being. I
> see and understand that you suffer from the current situation, and that
> part of the problem is not at your side! I would not like this to happen,
> but I would understand it, because, as mentioned above, it takes two (and
> more developers) to tango.
Let's be clear on something: I *know* that part of the problem is
probably on my side as well ;-)
> Anyway, I have very good experiences with first trying to agree on the
> meta
> model, because that avoids most of the "who forked what" discussions that
> are inevitable at the implementation level, sigh...
But in my experience it actually has a tendency to "hide" commonality.

[ANN] DNG - beyond deployment scripts

On Mon, 19 Nov 2012, Sylvain Joyeux wrote:

> (I am replying only on orocos-dev from now on)
>
> On 11/19/2012 11:49 AM, Herman Bruyninckx wrote:
>> So, my concrete set of questions now is:
>> - if there is agreement on "standardizing the meta model is the way to go",
>> then how shall we proceed?
> It depends on what you call meta model in this context.

That was clarified a little bit later on in my message: what are really the
concepts we have to capture, what are their relationships, and how are they
constrained?

> If it is a set of not necessarily formalized requirements, then yes.

In the beginning, it will be not so formal: human natural language, more or
less.

If we find some agreement on the natural language level, we should then try to
formalize it. Because different tools can only complement each other if
they understand the same model, to the extent they need it.

> Otherwise, please be more specific (I know you tried already :P)

No problem with retrying! :-)

>> - can deployment be sufficiently separated from other things to come up
>> with a deployment meta model? (I think it can, but I have no concrete
>> enough suggestion by now.)
> Then, again, maybe my point before is NOT what you meant. As the rest of your
> email shows pretty clearly. I know you are big on this meta-modelling thing,
> but putting things in boxes has actually a tendency to tell people "Oh, X and
> Y are two different things" even though, from an implementation P.O.V. they
> are almost the same.

That is _the other_ added value of meta modelling :-) In the context of
this thread, I think we are looking for the first added value: what are the
really _different_ things that we have in mind when talking about
deployment. And, in a later stage, which of these differences really
_require_ different tools.

>> - can we spend some time on this on the Leuven workshop at February 11th?
> I guess we could, or have a second day just for that purpose.

Let's see what the 'demand' is; for us, adding one more day is logistically
simple.

>> This might be the most pragmatic thing for you to do, for the time being. I
>> see and understand that you suffer from the current situation, and that
>> part of the problem is not at your side! I would not like this to happen,
>> but I would understand it, because, as mentioned above, it takes two (and
>> more developers) to tango.
> Let's be clear on something: I *know* that part of the problem is probably on
> my side as well ;-)

But, much better, a big part of the solution is also there! :-)

>> Anyway, I have very good experiences with first trying to agree on the meta
>> model, because that avoids most of the "who forked what" discussions that
>> are inevitable at the implementation level, sigh...
> But actually in my experience has a tendency to "hide" commonality.

That's why I focus on the following three pieces of information to be made
explicit, by everyone who feels "involved" in this discussion:
- what are the really different concepts that we need to tackle (in model,
tool, and implementation)?
- what are the fundamental relationships between these concepts?
- what are the constraints that show up: (i) as limits on the ranges of the
concepts' property parameters, and (ii) as limits on the relationship
properties?

For example:
- concepts: components ("activities", "process", "thread",...),
connections,
- relationships: life cycle FSM states, activation (partial) ordering,
protocols to check whether different components have reached the expected
life cycle state, etc.
- constraints: available amount of memory, field busses over which
deployment has to be distributed, single-point-of-failure name servers,
etc.

This is just a start, to give you an impression of what I have in mind...
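
The deployment example above can be sketched in a few lines. This is a
hypothetical illustration only (Python, with invented names; it is not the
API of any existing tool): components with a simplified life-cycle state
and a memory footprint as the concept, activation (partial) ordering as the
relationship, and a total memory budget as the constraint.

```python
# Hypothetical illustration of the deployment example above (invented
# names, not an existing API).

# Concept: a component's life-cycle FSM, heavily simplified.
LIFECYCLE = ("preoperational", "stopped", "running")

def activation_order(names, depends_on):
    """Relationship: linearize the activation partial order.

    depends_on maps a component name to the names that must be
    activated before it; the result lists dependencies first.
    """
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for dep in depends_on.get(n, ()):
            visit(dep)
        order.append(n)
    for n in names:
        visit(n)
    return order

def within_memory_budget(memory_kb, budget_kb):
    """Constraint: the summed footprints must fit the available memory."""
    return sum(memory_kb.values()) <= budget_kb
```

Field-bus distribution or single-point-of-failure name servers would enter
the same way: as further constraints evaluated over the concrete model.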

Herman

--
KU Leuven, Mechanical Engineering, Robotics Research Group
<http://people.mech.kuleuven.be/~bruyninc> Tel: +32 16 328056
Vice-President Research euRobotics <http://www.eu-robotics.net>
Open RObot COntrol Software <http://www.orocos.org>
Associate Editor JOSER <http://www.joser.org>, IJRR <http://www.ijrr.org>

[ANN] DNG - beyond deployment scripts

On 11/19/2012 01:43 PM, Herman Bruyninckx wrote:
>>> - can we spend some time on this on the Leuven workshop at February
>>> 11th?
>> I guess we could, or have a second day just for that purpose.
>
> Let's see what the 'demand' is; for us, adding one more day is
> logistically
> simple.
Rethinking overnight:

One major issue, I think, is communication -- or more precisely the lack
of it. We are too focussed on "let's meet in person" to make things
happen, which is not a sustainable way to move an open-source project
forward. We can't meet every second week.

While I would be happy to meet in February, I think we should try to
make this happen "online". Obviously, if need be, we could have some
IRC / Google Hangout / Skype meetings. But I would try first to use the
wiki and the ML.

[ANN] DNG - beyond deployment scripts

On Tue, 20 Nov 2012, Sylvain Joyeux wrote:

> On 11/19/2012 01:43 PM, Herman Bruyninckx wrote:
>> > > - can we spend some time on this on the Leuven workshop at February
>> > > 11th?
>> > I guess we could, or have a second day just for that purpose.
>>
>> Let's see what the 'demand' is; for us, adding one more day is
>> logistically
>> simple.
> Rethinking overnight:
>
> One major issue, I think, is communication -- or more precisely the lack of
> it. We are too focussed on "let's meet in person" to make things happen,
> which is not a sustainable way to move an open-source project forward. We
> can't meet every second week.

Agreed. But once a year or so, an in-person meeting makes sense. The ERF is
such a possibility.

> While I would be happy to meet in February, I think we should try to make
> this happen "online". Obviously, if need be, we could have some IRC / Google
> Hangout / Skype meetings. But I would try first to use the wiki and the ML.

Sure. That's what we are doing now :-)

Herman

[ANN] DNG - beyond deployment scripts

On 11/20/2012 12:17 PM, Herman Bruyninckx wrote:
>> While I would be happy to meet in February, I think we should try to
>> make this happen "online". Obviously, if need be, we could have some
>> IRC / Google Hangout / Skype meetings. But I would try first to use
>> the wiki and the ML.
>
> Sure. That's what we are doing now :-)
For the starting point, yes. Not for the actual work. The plan you
proposed was to discuss our requirements in February. My point is: this
should be done here.

[ANN] DNG - beyond deployment scripts

On Nov 20, 2012, at 06:29 , Sylvain Joyeux wrote:

> On 11/20/2012 12:17 PM, Herman Bruyninckx wrote:
>>> While I would be happy to meet in February, I think we should try to
>>> make this happen "online". Obviously, if need be, we could have some
>>> IRC / Google Hangout / Skype meetings. But I would try first to use
>>> the wiki and the ML.
>>
>> Sure. That's what we are doing now :-)
> For the starting point, yes. Not for the actual work. The plan you
> proposed was to discuss our requirements in February. My point is: this
> should be done here.

Agreed. This will aid those in the community who won't be able to make such physical meetings ...
S

[ANN] DNG - beyond deployment scripts

On Tue, 20 Nov 2012, Sylvain Joyeux wrote:

> On 11/20/2012 12:17 PM, Herman Bruyninckx wrote:
>> > While I would be happy to meet in February, I think we should try to
>> > make this happen "online". Obviously, if need be, we could have some
>> > IRC / Google Hangout / Skype meetings. But I would try first to use the
>> > wiki and the ML.
>>
>> Sure. That's what we are doing now :-)
> For the starting point, yes. Not for the actual work. The plan you proposed
> was to discuss our requirements in February. My point is: this should be done
> here.

Sure. That's exactly why I invited the mailing-list members to start creating
"meta model" content _now_... :-P

But I have not seen any so far.

Herman

[ANN] DNG - beyond deployment scripts

On Mon, Nov 19, 2012 at 01:43:51PM +0100, Herman Bruyninckx wrote:
> >On 11/19/2012 11:49 AM, Herman Bruyninckx wrote:
> >>So, my concrete set of questions now is:
> >>- if there is agreement on "standardizing the meta model is the way to go",
> >> then how shall we proceed?
> >It depends on what you call meta model in this context.
>
> That was clarified a little bit later on in my message: what are really the
> concepts we have to capture, what are their relationships, and how are they
> constrained?

In the hope of clarifying even further: these are exactly the components of the
'abstract syntax' of a domain-specific modelling language, as defined in [1].
This paper provides a few other interesting definitions, which maybe we could
adopt on this mailing list...

[1] G. Karsai, J. Sztipanovits, A. Ledeczi, and T. Bapty. Model-integrated
development of embedded software. Proceedings of the IEEE, 91(1):145–164, 2003.

[ANN] DNG - beyond deployment scripts

On Mon, 19 Nov 2012, Piotr Trojanek wrote:

> On Mon, Nov 19, 2012 at 01:43:51PM +0100, Herman Bruyninckx wrote:
>> >On 11/19/2012 11:49 AM, Herman Bruyninckx wrote:
>> >>So, my concrete set of questions now is:
>> >>- if there is agreement on "standardizing the meta model is the way to go",
>> >> then how shall we proceed?
>> >It depends on what you call meta model in this context.
>>
>> That was clarified a little bit later on in my message: what are really the
>> concepts we have to capture, what are their relationships, and how are they
>> constrained?
>
> In the hope of clarifying even further: these are exactly the components of the
> 'abstract syntax' of a domain-specific modelling language, as defined in [1].

Indeed. I consider both (DSL and metamodel) to coincide, whenever possible.

> This paper provides a few other interesting definitions, which maybe we could
> adopt on this mailing list...

Which ones exactly are you thinking of?
Your suggestion is very valid, but then we would now be focusing on the
meta-metamodel discussion first :-)
>
> [1] G. Karsai, J. Sztipanovits, A. Ledeczi, and T. Bapty. Model-integrated
> development of embedded software. Proceedings of the IEEE, 91(1):145–164, 2003.

Thanks for this reference! I am now taking a look at all articles in the
special issue that this article appeared in.

> Piotr Trojanek

Herman

[ANN] DNG - beyond deployment scripts

On Mon, Nov 19, 2012 at 06:50:58PM +0100, Herman Bruyninckx wrote:
> >>That was clarified a little bit later on in my message: what are really the
> >>concepts we have to capture, what are their relationships, and how are they
> >>constrained?
> >
> >In the hope of clarifying even further: these are exactly the components of the
> >'abstract syntax' of a domain-specific modelling language, as defined in [1].
>
> Indeed. I consider both (DSL and metamodel) to coincide, whenever possible.
>
> >This paper provides a few other interesting definitions, which maybe we could
> >adopt on this mailing list...
>
> Which ones exactly are you thinking of?

Please excuse me for the delayed reply. The definitions that I like in this paper are:
- model: a formal structure representing selected aspects of the engineering
artifact and its environment (page 2).
- modeling language: a five-tuple of concrete syntax, abstract syntax, semantic
domain, and semantic and syntactic mappings (page 4).

What I do not like are the explanations of "metamodel" as:
- "formal model of domain-specific modeling language" (page 1)
- "concrete, formal specifications of DSML" (page 4).

This suggests that model = specification (I do not agree). Moreover, they
describe fig. 3 as a "metamodel" -- this is clearly wrong, because there is only
an abstract syntax in that figure, i.e. concepts, relationships and integrity
constraints.

It seems that even experts mix two different meanings of the term "metamodel":
1. modelling language (including syntax, semantics and notation)
2. abstract syntax of a DSML (formalized more as a graph than as a tree).
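For reference, the five-tuple (and the two readings of "metamodel") can be
written down directly; a minimal sketch with invented field names, not the
paper's notation:

```python
from collections import namedtuple

# Sketch of the five-tuple definition of a modelling language from
# Karsai et al.: concrete syntax, abstract syntax, semantic domain,
# plus the syntactic and semantic mappings.
ModellingLanguage = namedtuple(
    "ModellingLanguage",
    ["concrete_syntax", "abstract_syntax", "semantic_domain",
     "syntactic_mapping", "semantic_mapping"])

# Meaning (2) of "metamodel" covers only the abstract_syntax slot:
abstract_syntax = {"concepts": ["component", "connection"],
                   "relationships": ["connects"],
                   "integrity_constraints": ["no dangling connection"]}

# Meaning (1) is the whole tuple:
lang = ModellingLanguage(
    concrete_syntax="textual notation",
    abstract_syntax=abstract_syntax,
    semantic_domain="the set of admissible system behaviours",
    syntactic_mapping=lambda model: "render the model in the notation",
    semantic_mapping=lambda model: "interpret the model in the domain")
print(len(lang._fields))  # 5
```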

> Your suggestion is very valid, but then we are now focusing on the
> metametamodel discussion first :-)
> >
> >[1] G. Karsai, J. Sztipanovits, A. Ledeczi, and T. Bapty. Model-integrated
> >development of embedded software. Proceedings of the IEEE, 91(1):145–164, 2003.
>
> Thanks for this reference! I am now taking a look at all articles in the
> special issue that this article appeared in.

Thank you for pointing to other gems hidden inside this issue!

>
> >Piotr Trojanek
>
> Herman

[ANN] DNG - beyond deployment scripts

On Tue, 4 Dec 2012, Piotr Trojanek wrote:

> On Mon, Nov 19, 2012 at 06:50:58PM +0100, Herman Bruyninckx wrote:
>> >>That was clarified a little bit later on in my message: what are really the
>> >>concepts we have to capture, what are their relationships, and how are they
>> >>constrained?
>> >
>> >In the hope of clarifying even further: these are exactly the components of the
>> >'abstract syntax' of a domain-specific modelling language, as defined in [1].
>>
>> Indeed. I consider both (DSL and metamodel) to coincide, whenever possible.
>>
>> >This paper provides a few other interesting definitions, which maybe we could
>> >adopt on this mailing list...
>>
>> Which ones exactly are you thinking of?
>
> Please excuse me for the delayed reply. The definitions that I like in this paper are:
> - model: a formal structure representing selected aspects of the engineering
> artifact and its environment (page 2).

I will try to make this one step more concrete with a constructive definition:
- give the concepts in the model domain a name
- give the relationships between these concepts a name _and_ a structure
- optionally (when applicable): give a name to the objective functions or
design trade-offs the domain provides
- define which constraints can act on concepts, primitives, and objective
functions
- define which tolerances a particular design can use.

> - modeling language: a five-tuple of concrete syntax, abstract syntax, semantic
> domain, and semantic and syntactic mappings (page 4).
>
> What I do not like are the explanations of "metamodel" as:
> - "formal model of domain-specific modeling language" (page 1)
> - "concrete, formal specifications of DSML" (page 4).
>
> This suggests that model=specification (I do not agree).
I have no problem with this "equality".

> Moreover, they
> describe fig. 3 as "metamodel" -- this is clearly wrong, because there is only
> an abstract syntax in that figure, i.e. concepts, relationships and integrity
> constraints.

So, they miss the reference to the ontology/semantics of the concepts,
relationships, and constraints. This is rather "obvious", isn't it?

> It seems that even experts mix two different meanings of the term "metamodel":
> 1. modelling language (including syntax, semantics and notation)
> 2. abstract syntax of a DSML (formalized more as a graph than as a tree).

How and why exactly do you differentiate the two?

>> Your suggestion is very valid, but then we are now focusing on the
>> metametamodel discussion first :-)
>> >
>> >[1] G. Karsai, J. Sztipanovits, A. Ledeczi, and T. Bapty. Model-integrated
>> >development of embedded software. Proceedings of the IEEE, 91(1):145–164, 2003.
>>
>> Thanks for this reference! I am now taking a look at all articles in the
>> special issue that this article appeared in.
>
> Thank you for pointing to other gems hidden inside this issue!

>> >Piotr Trojanek

Herman

[ANN] DNG - beyond deployment scripts

On Nov 17, 2012, at 04:47 , Markus Klotzbuecher wrote:

> On Sat, Nov 17, 2012 at 01:18:11AM +0100, Peter Soetens wrote:
>> On Fri, Nov 16, 2012 at 6:15 PM, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>> On 11/16/2012 05:34 PM, Peter Soetens wrote:
>>>>
>>>>
>>>> I don't fully agree with your perspective. Allow me also to reply on
>>>> your 'just because' argument of the type system:
>>>>
>>>> The reason we are not as far as we hoped can be captured in one word:
>>>> 'legacy'.
>>>
>>> Yes, and as we discussed in the ERF, this is a point I do understand.
>>>
>>> Now, quite a big amount of effort has been put in *extending* the existing
>>> solutions to also match the "new" use cases. Especially the lua stuff ...
>>> Which could have used the typelib solution as much as the RTT typekit
>>> solution. A choice has been made there between ignoring the typelib stuff or
>>> spending what was maybe an equivalent development effort to converge towards
>>> a single solution.
>>
>> Maybe. The irony is that the typekit was initially only written to
>> suit RTT scripting and XML marshalling. That same code then became
>> usable in other places as well such as RTT transports (per 2.0) or
>> rttlua. From an implementation point of view, only little code&time
>> was invested in typekits compared to the code using the typekits. The
>> typekit API is for example still inferior to typelib, exposing less
>> information about the type, Markus has been complaining about this wrt
>> rttlua btw.
>>
>>>
>>> The point where the "community" part was lacking is that this decision has
>>> been made "behind closed doors", i.e. without even trying to sketch a common
>>> solution. That is what I see happening again. And again. We have discussed
>>> ways to try to converge again ... but, honestly, how much effort are you
>>> ready to put into this ?
>>
>> My personal effort will be limited to paid-for stuff (directly or
>> indirectly if permitted) or guiding/assisting others into getting
>> their patches onto the mainline.
>>
>>>
>>>> - Typekits could not be dynamically extended, meaning that if
>>>> typegen/orogen was used to generate a typekit, a hand-written one
>>>> could no longer be used and vice versa (typekit legacy). -> we fixed
>>>> this one in 2.6 / master.
>>>
>>> Ack.
>>>
>>>> - We have two type systems because at that point in time, typelib
>>>> could not replace easily scripting functionality, such as operators
>>>> and constructors, we had in RTT (scripting legacy).
>>>
>>> Moot point in the rttlua stuff, as rttlua can "introspect" types the same
>>> way than the Ruby side does. I.e. you can define these operators on the lua
>>> side.
>>
>> Point taken. There was no lua back then though, only 'classical'
>> scripting. The operators we're thinking about are the ones KDL
>> defines, which are not available in lua, or at least not efficiently.
>> It's not a major argument I guess, but if you have the same functions
>> in your rapid-prototyping language as in C++, that's an advantage.
>>
>>>
>>>> - Not so much people switched to orogen because they have existing
>>>> components which can't be picked up by orogen deployments (component
>>>> legacy).
>>>
>>> I am really wondering why there is this fixation about oroGen and
>>> deployments. I saw, a year or so ago, to my great surprise, a line on the
>>> orocos wiki saying that "oroGen is a static deployment generator". What ?
>>> Where is that coming from ? oroGen has always been and still is mostly a
>>> component generator. You can continue using the OCL deployer with oroGen
>>> components and you actually can make oroGen generate deployments that use
>>> "legacy" components. Once upon a time, oroGen deployments were containing
>>> task browsers. Not documented anywhere since it is not a use case I even
>>> thought about. Now, again, in a common-project state of mind, the issue
>>> could have been raised by other fellow developers that knew about the issue.
>>
>> Hmm... you're right of course. I saw the added value of oroGen in
>> generating deployments though and would not consider it for solely
>> generating some boilerplate code. But that's probably based more on
>> prejudice than on experimental findings.
>> We definitely need to look into the rock libraries and tools, ie even
>> beyond oroGen.
>>
>>>
>>>> Compare that last point to the adoption of 'rttlua': it works with any
>>>> existing Orocos component and with any typekit, generated with any
>>>> tool or hand written.
>>>
>>> If someone is nice enough to make the typekits have the required interface.
>>> I did that, and spent effort doing it, in typegen ... But could not have
>>> been. Particularly, if I only thought about "the Rock use case", then I
>>> would definitely NOT have spent that effort.
>>
>> Clearly.
>
> Alas, how emotional can a discussion about software toolchains get?
> This is not about being nice (or sullen, for that matter) but about
> technical advantages. If the ROCK toolchain solves a problem that's
> itching hard enough, someone will do the effort to port the tool or
> pay somebody to do it. Or maybe just use ROCK. Users will decide for
> themselves. You should have known that the path you've chosen of
> forking and rebranding is not contributing to getting plain orocos users
> to adopt your stuff.
>
> I acknowledge your SysKit and Roby tools and that they are similar and
> more featureful than this prototype. But I also hold on to the point
> that a low-level, real-time safe configurator has its place. If a tool
> satisfies an essential property that another can't, it can hardly be
> called duplication.

So then what was the decision process to not try to extend or modify SysKit and Roby? What was the driving factor to reinvent the wheel? And out of curiosity, when was this communicated to the community?
S

[ANN] DNG - beyond deployment scripts

On Sat, Nov 17, 2012 at 06:51:18AM -0500, S Roderick wrote:
> On Nov 17, 2012, at 04:47 , Markus Klotzbuecher wrote:
>
> >
> > Alas, how emotional can a discussion about software toolchains get?
> > This is not about being nice (or sullen, for that matter) but about
> > technical advantages. If the ROCK toolchain solves a problem that's
> > itching hard enough, someone will do the effort to port the tool or
> > pay somebody to do it. Or maybe just use ROCK. Users will decide for
> > themselves. You should have known that the path you've chosen of
> > forking and rebranding is not contributing to getting plain orocos users
> > to adopt your stuff.
> >
> > I acknowledge your SysKit and Roby tools and that they are similar and
> > more featureful than this prototype. But I also hold on to the point
> > that a low-level, real-time safe configurator has its place. If a tool
> > satisfies an essential property that another can't, it can hardly be
> > called duplication.
>
> So then what was the decision process to not try to extend or modify
> SysKit and Roby? What was the driving factor to reinvent the wheel?

Because there is no reinvention. Roby/Syskit's high-level models need to
be transformed into atomic steps to be applied to a system. Either you
do this implicitly or you introduce something like the proposed
configuration model. I repeat, both tools are complementary.

> And out of curiosity, when was this communicated to the community?

What exactly?

Markus

[ANN] DNG - beyond deployment scripts

On 11/17/2012 02:26 PM, Markus Klotzbuecher wrote:
> Because there is no reinvention. Roby/Syskit's high-level models need to
> be transformed into atomic steps to be applied to a system. Either you
> do this implicitly or you introduce something like the proposed
> configuration model. I repeat, both tools are complementary.
And I repeat: it is only kind of true (if you do generate the steps, a
plain state machine is more than enough). I won't repeat why your model
does not fit the Syskit model: we would have to generate a full state
machine and "hide" it within your configuration model.

The point is: I've seen that route taken already. "Oh but it is
complementary" until it gets extended to the point where it is *not*
complementary. Since there has been no communication, so far, on the
subject, I do think that we will go down this route again. Especially
since you seem to believe that Rock is not worth looking at.

[ANN] DNG - beyond deployment scripts

On Nov 16, 2012, at 12:15 , Sylvain Joyeux wrote:

> On 11/16/2012 05:34 PM, Peter Soetens wrote:
>>
>> I don't fully agree with your perspective. Allow me also to reply on
>> your 'just because' argument of the type system:
>>
>> The reason we are not as far as we hoped can be captured in one word: 'legacy'.
> Yes, and as we discussed in the ERF, this is a point I do understand.
>
> Now, quite a big amount of effort has been put in *extending* the
> existing solutions to also match the "new" use cases. Especially the lua
> stuff ... Which could have used the typelib solution as much as the RTT
> typekit solution. A choice has been made there between ignoring the
> typelib stuff or spending what was maybe an equivalent development
> effort to converge towards a single solution.
>
> The point where the "community" part was lacking is that this decision
> has been made "behind closed doors", i.e. without even trying to sketch
> a common solution. That is what I see happening again. And again. We
> have discussed ways to try to converge again ... but, honestly, how much
> effort are you ready to put into this ?

I'm completely with Sylvain on this general point. I know where he's coming from.
S

[ANN] DNG - beyond deployment scripts

On Nov 16, 2012, at 08:41 , Sylvain Joyeux wrote:

> On 11/15/2012 08:06 PM, Herman Bruyninckx wrote:
>> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>>
>>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>>> Lua is both a model and an executable program, so there is no
>>>> contradiction in what Markus and you "defend": the Lua script can be
>>>> generated by Syskit, great!
>>> "Can" and "will" are two different things. Last time I heard
>>> something like that, it was about typelib vs. TypeInfo. In the end,
>>> we ended up having two different type systems "just because".
>>
>> Agreed!
>>
>> So let's come up with a "document" guiding the community towards better
>> practices in the future! :-)
> That's where we disagree. I know you are a big fan of writing papers and
> harmonizing meta-models, but I am personally more looking for pooling of
> development effort, which is really where the money goes in the end.

+1

> We are, here, on orocos-users and all (I thought) supposedly working
> towards having a common toolchain. At least, I believed that was the
> plan when we made the effort of creating something called "the orocos
> toolchain"
>
> I say "thought" and "supposedly" because I don't see that being
> realized. No amount of document writing will change it. In my opinion,
> it is not about "best practices", it is about sharing development effort.
> Writing papers is cheap: professors can do it and there is a limited
> amount of debugging involved. Writing software is a lot more
> complicated, and it is IMO a waste of time and money to try to define
> things before you have practical experience with them anyway (at which
> point it is too late to write papers to "harmonize" or whatever, since
> the software is already there and nobody has the amount of resources
> required to rewrite it to match the harmonized stuff).

Agreed, though up-front definition and prototyping have their place. But that's not the problem here.
S

[ANN] DNG - beyond deployment scripts

On 11/16/2012 02:41 PM, Sylvain Joyeux wrote:
> On 11/15/2012 08:06 PM, Herman Bruyninckx wrote:
>> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>>
>>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>>> Lua is both a model and an executable program, so there is no
>>>> contradiction in what Markus and you "defend": the Lua script can be
>>>> generated by Syskit, great!
>>> "Can" and "will" are two different things. Last time I heard
>>> something like that, it was about typelib vs. TypeInfo. In the end,
>>> we ended up having two different type systems "just because".
>>
>> Agreed!
>>
>> So let's come up with a "document" guiding the community towards better
>> practices in the future! :-)
> That's where we disagree. I know you are a big fan of writing papers and
> harmonizing meta-models, but I am personally more looking for pooling of
> development effort, which is really where the money goes in the end.
>
> We are, here, on orocos-users and all (I thought) supposedly working
> towards having a common toolchain. At least, I believed that was the
> plan when we made the effort of creating something called "the orocos
> toolchain"
>
> I say "thought" and "supposedly" because I don't see that being
> realized. No amount of document writing will change it. In my opinion,
> it is not about "best practices", it is about sharing development effort.
> Writing papers is cheap: professors can do it and there is a limited
> amount of debugging involved. Writing software is a lot more
> complicated, and it is IMO a waste of time and money to try to define
> things before you have practical experience with them anyway (at which
> point it is too late to write papers to "harmonize" or whatever, since
> the software is already there and nobody has the amount of resources
> required to rewrite it to match the harmonized stuff).

Eh, isn't this _exactly_ why there is a need to properly define things
first, before the software is written? Writing the software before
deciding what is best is putting the cart before the horse. And later,
if and when you do get around to writing things up, it results in a
situation where you know what is theoretically good, but in practice you
continue using the "inferior" code that is already there, tried, and
trusted... et voilà! You now have a gap between theory and practice.

/Sagar

[ANN] DNG - beyond deployment scripts

On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
> But Sylvain, you did only react to half of Markus' message, namely the
> one
> that deployment is Coordination. His message also has a second part: the
> separation of pure Coordination from Configuration. Maybe you have also
> solved that long before our poor minds have tried to put that into a
> "best
> practice", but then I may have overlooked that :-)
You did. Giving a name to the pattern is great, but Syskit has worked
this way from the very beginning (which is almost three years ago ...)

First of all, what you call "configuration" in this paper is not a state
but a transition. Meaning that activating configuration X does not
guarantee at time T does not mean the same thing that activating X at
time T+1. Syskit does define configurations as states, i.e. you are
guaranteed that, when you enable a configuration, you get the same
software state than you expected.
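The distinction can be made concrete with a small sketch (all names are invented for illustration; this is neither the Syskit nor the DNG API): a transition-style configuration applies operations to whatever state happens to be current, so the outcome depends on history, whereas a state-style configuration declares the complete target and lets the engine compute the delta, so the outcome is always the same.

```ruby
# Hypothetical sketch: transition-style vs. state-style configuration.
# None of these names come from Syskit or DNG.

# Transition style: a list of operations applied to the current state;
# the result depends on what the state happened to be beforehand.
def apply_transition(state, ops)
  ops.each { |op| op.call(state) }
  state
end

# State style: declare the full target; the engine computes the delta.
def enforce_state(current, target)
  (current.keys - target.keys).each { |k| current.delete(k) } # stop extras
  target.each { |k, v| current[k] = v }                       # start/reconfigure
  current
end

# Replaying the same transition from two different histories gives
# two different results:
s1 = { camera: :running }
s2 = { camera: :running, lidar: :running }
start_planner = [->(s) { s[:planner] = :running }]
apply_transition(s1, start_planner) # s1 and s2 now still differ

# Enforcing the same target always converges to the same system state:
target = { camera: :running, planner: :running }
enforce_state(s1, target) == enforce_state(s2, target) # => true
```

This is the guarantee being claimed: enforcing the same declared target from two different histories yields identical system states, while replaying the same transition does not.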

Then, these configurations are coordinated using higher-level constructs.

[ANN] DNG - beyond deployment scripts

On Thu, Nov 15, 2012 at 01:19:46PM +0100, Sylvain Joyeux wrote:
> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
> >But Sylvain, you did only react to half of Markus' message, namely
> >the one
> >that deployment is Coordination. His message also has a second part: the
> >separation of pure Coordination from Configuration. Maybe you have also
> >solved that long before our poor minds have tried to put that into
> >a "best
> >practice", but then I may have overlooked that :-)
> You did. Giving a name to the pattern is great, but Syskit works
> this way from the very beginning (which is almost three years ago
> ...)
>
> First of all, what you call "configuration" in this paper is not a
> state but a transition. Meaning that activating configuration X does

No, the configuration is the model; the DNG engine is responsible for
executing the model, which results in the transition.
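As a sketch of that split (hypothetical names only, not the actual DNG implementation): the configuration can be held as inert data built from the primitives the announcement lists (importing packages, creating components, changing component state), and a separate engine interprets it. The transition only happens when the engine runs the model:

```ruby
# Hypothetical sketch of the model/execution split (invented names,
# not the DNG API): the configuration is inert data; only the engine
# turns it into an actual transition.
Configuration = Struct.new(:name, :steps)

class Engine
  def initialize
    @log = []
  end
  attr_reader :log

  # Executing the model is what produces the state transition.
  def execute(conf)
    conf.steps.each do |step|
      case step[:op]
      when :import    then @log << "import #{step[:pkg]}"
      when :create    then @log << "create #{step[:type]} #{step[:name]}"
      when :set_state then @log << "#{step[:name]} -> #{step[:state]}"
      end
    end
  end
end

# The primitives expressed as data; nothing happens until execute is called:
conf = Configuration.new("bringup", [
  { op: :import, pkg: "ocl" },
  { op: :create, type: "OCL::LuaComponent", name: "coord" },
  { op: :set_state, name: "coord", state: :running },
])

engine = Engine.new
engine.execute(conf)
engine.log
# => ["import ocl", "create OCL::LuaComponent coord", "coord -> running"]
```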

> not guarantee at time T does not mean the same thing that activating
> X at time T+1. Syskit does define configurations as states, i.e. you
> are guaranteed that, when you enable a configuration, you get the
> same software state than you expected.

That's nice and useful. But sometimes you don't only need to know what
the state is, but also when that state will be brought about. For that,
you need access to the lower level. I agree that both approaches are
nicely complementary.

> Then, these configurations are coordinated using higher-level constructs.

It might be interesting to discuss how these two levels can be used
together and to agree upon a common representation of the lower one.

Markus

[ANN] DNG - beyond deployment scripts

On Thu, 15 Nov 2012, Sylvain Joyeux wrote:

> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>> But Sylvain, you did only react to half of Markus' message, namely the one
>> that deployment is Coordination. His message also has a second part: the
>> separation of pure Coordination from Configuration. Maybe you have also
>> solved that long before our poor minds have tried to put that into a "best
>> practice", but then I may have overlooked that :-)
> You did. Giving a name to the pattern is great, but Syskit works this way
> from the very beginning (which is almost three years ago ...)

Can you point to information where that pattern is explained?

> First of all, what you call "configuration" in this paper is not a state
> but a transition. Meaning that activating configuration X at time T does
> not mean the same thing as activating X at time T+1. Syskit does define
> configurations as states, i.e. you are guaranteed that, when you enable a
> configuration, you get the same software state as you expected.

This is an interesting remark, because it is _the opposite_ of what our
pattern describes as best practice: modelling configuration as a
transition prevents you from reasoning about it (or only in terms of
nominal success), because transitions are always assumed to take no time
to execute, and no time to compute which transition to take. Our approach
_does_ model both aspects, as 'activities' in a state.
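One way to picture "modelling both aspects as activities in a state" (a hypothetical sketch with invented names, not the pattern's actual implementation): the act of configuring is itself a state whose activity executes incrementally, so it visibly takes time, and its progress or failure can be observed and reasoned about, instead of being assumed instantaneous:

```ruby
# Hypothetical sketch (invented names): "configuring" as a state with an
# activity, rather than a zero-duration transition. The activity runs one
# step per tick, so a coordinator can observe its progress.
class ConfiguringState
  def initialize(steps)
    @steps = steps
    @done = 0
  end

  # The activity: advances incrementally; reports :busy until finished.
  def step
    return :done if @done >= @steps.size
    @steps[@done].call
    @done += 1
    @done >= @steps.size ? :done : :busy
  end
end

applied = []
s = ConfiguringState.new([
  -> { applied << :create_component },
  -> { applied << :connect_ports },
  -> { applied << :start_component },
])

# Ticking the activity makes the duration of configuration observable:
results = []
results << s.step while results.last != :done
results # => [:busy, :busy, :done]
```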

> Then, these configurations are coordinated using higher-level constructs.

What does that mean exactly?

Our approach suggests that one consider both aspects _together_ (the
coordination and the configuration), in one particular architecture.

> Sylvain Joyeux (Dr.Ing.)

Herman

> Senior Researcher
>
> Space & Security Robotics
> Underwater Robotics
>
> !!! Achtung, neue Telefonnummer!!!
>
> Standort Bremen:
> DFKI GmbH
> Robotics Innovation Center
> Robert-Hooke-Straße 5
> 28359 Bremen, Germany
>
> Phone: +49 (0)421 178-454136
> Fax: +49 (0)421 218-454150
> E-Mail: robotik [..] ...
>
> Weitere Informationen: http://www.dfki.de/robotik
> -----------------------------------------------------------------------
> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
> Firmensitz: Trippstadter Straße 122, D-67663 Kaiserslautern
> Geschaeftsfuehrung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster
> (Vorsitzender) Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
> Amtsgericht Kaiserslautern, HRB 2313
> Sitz der Gesellschaft: Kaiserslautern (HRB 2313)
> USt-Id.Nr.: DE 148646973
> Steuernummer: 19/673/0060/3
> -----------------------------------------------------------------------
>
>

--
KU Leuven, Mechanical Engineering, Robotics Research Group
<http://people.mech.kuleuven.be/~bruyninc> Tel: +32 16 328056
Vice-President Research euRobotics <http://www.eu-robotics.net>
Open RObot COntrol Software <http://www.orocos.org>
Associate Editor JOSER <http://www.joser.org>, IJRR <http://www.ijrr.org>

[ANN] DNG - beyond deployment scripts

On 11/15/2012 01:30 PM, Herman Bruyninckx wrote:
> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>
>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>> But Sylvain, you did only react to half of Markus' message, namely
>>> the one
>>> that deployment is Coordination. His message also has a second part:
>>> the
>>> separation of pure Coordination from Configuration. Maybe you have also
>>> solved that long before our poor minds have tried to put that into a
>>> "best
>>> practice", but then I may have overlooked that :-)
>> You did. Giving a name to the pattern is great, but Syskit works this
>> way from the very beginning (which is almost three years ago ...)
>
> Can you point to information where that patterns is being explained?
In your paper ;-) And in the two talks that you and Markus attended ...

Now, don't get me wrong. If I thought that you *willingly* ignored stuff
that you *knew* existed in Rock, I would not even be discussing it.

>
>> First of all, what you call "configuration" in this paper is not a
>> state but a transition. Meaning that activating configuration X at
>> time T does not mean the same thing as activating X at time T+1.
>> Syskit does define configurations as states, i.e. you are guaranteed
>> that, when you enable a configuration, you get the same software
>> state as you expected.
>
> This is an interesting remark, because it is _the opposite_ of what our
> pattern describes as best practice: modelling configuration as a
> transition prevents you from reasoning about it (or only in terms of
> nominal success), because transitions are always assumed to take no time
> to execute, and no time to compute which transition to take. Our
> approach _does_ model both aspects, as 'activities' in a state.
I don't see how it does. If you only model partial transitions, which is
what you advocate in the paper, then what you get when activating a
particular configuration is not always the same thing. Hence, it is hard
to know what configuration X actually *is* (since it will depend on
which configurations have been activated beforehand). Moreover, a system
state change always ends up being a transition. If one thinks about it
as a state, one does know what it is and lets the system compute the
transition automatically.

>> Then, these configurations are coordinated using higher-level
>> constructs.
>
> What does that mean exactly?
>
> Our approach suggest one considers both aspects _together_ (the
> coordination and the configuration), in one particular architecture.
So does Syskit. The configuration modelling defines configurations as
states of the system (i.e. the set of components, their connections and
their configuration properties). The coordination, which is done by Roby
(Syskit being a part of Roby), then allows these configurations to be
coordinated. Admittedly, the syntax was not so nice so far (since one
had to use Roby coordination primitives). Using an extension of a syntax
that people used at this year's SAUC-E, it is probably going to become
something like:

state_machine do
  depends_on eslam_mapping # always needed
  state(:explore) do
    synchronize_localization_map_from_eslam
    explore = corridor_servoing, :target => goal_position
    transition explore.blocked_event => :explore
  end
  state(:navigate) do
    synchronize_localization_map_from_eslam
    navigate = corridor_following, :target => goal_position
    transition navigate.blocked_event => :explore
    transition navigate.out_of_bound_event => :navigate
  end
end

which can then be combined to form more complex behaviours and so on ...
The point is: the system deployment is always done globally, i.e. each
of the sub-configurations is self-contained *and* the compatibility
between simultaneously active configurations is verified. If you can
know in advance which transitions will be needed (and in something like
the above, or like what you are presenting in your paper, it is
possible), the transitions can be pre-computed and then executed by Lua
scripting.
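Pre-computing such a transition between two declared configurations could look like the following minimal sketch (invented names, not Syskit code): the difference between the two component sets yields a stop/start plan that a simple executor, e.g. a Lua script, could later run:

```ruby
# Hypothetical sketch: derive the transition between two declared
# configuration states ahead of time, as a plan of stop/start operations.
def precompute_transition(from, to)
  stops  = (from - to).map { |c| [:stop, c] }   # components only in `from`
  starts = (to - from).map { |c| [:start, c] }  # components only in `to`
  stops + starts
end

# Component sets loosely inspired by the :explore/:navigate states above:
explore  = [:eslam_mapping, :corridor_servoing]
navigate = [:eslam_mapping, :corridor_following]

precompute_transition(explore, navigate)
# => [[:stop, :corridor_servoing], [:start, :corridor_following]]
```

The shared component (`eslam_mapping`) appears in neither list, which matches the "always needed" dependency in the state machine: pre-computation only touches what actually changes.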

The point is: how the transitions are *executed* is obviously important.
How they are *computed* is just as important. I am trying to extend a
hand so that we don't end up AGAIN spending resources solving the same
problems. There's a serious duplication of effort.

[ANN] DNG - beyond deployment scripts

On Thu, 15 Nov 2012, Sylvain Joyeux wrote:

> On 11/15/2012 01:30 PM, Herman Bruyninckx wrote:
>> On Thu, 15 Nov 2012, Sylvain Joyeux wrote:
>>
>>> On 11/15/2012 12:32 PM, Herman Bruyninckx wrote:
>>>> But Sylvain, you did only react to half of Markus' message, namely the
>>>> one
>>>> that deployment is Coordination. His message also has a second part: the
>>>> separation of pure Coordination from Configuration. Maybe you have also
>>>> solved that long before our poor minds have tried to put that into a
>>>> "best
>>>> practice", but then I may have overlooked that :-)
>>> You did. Giving a name to the pattern is great, but Syskit works this way
>>> from the very beginning (which is almost three years ago ...)
>>
>> Can you point to information where that patterns is being explained?
> In your paper ;-) And in the two talks that you and markus attended ...
>
> Now, don't get me wrong. If I thought that you *willingly* ignored stuff that
> you *knew* existed in Rock, I would not even be discussing it.

You should!!! :-)

Since we are formulating the whole thing as a _pattern_, we _need_ to
refer to as much prior art as possible. This is very different from
claiming that we _invented_ the whole thing. So, I would be more than
happy if we could refer to concrete citable sources of prior art, even
if that is on web sites or in code documentation.

>>> First of all, what you call "configuration" in this paper is not a state
>>> but a transition. Meaning that activating configuration X at time T does
>>> not mean the same thing as activating X at time T+1. Syskit does define
>>> configurations as states, i.e. you are guaranteed that, when you enable a
>>> configuration, you get the same software state as you expected.
>>
>> This is an interesting remark, because it is _the opposite_ of what our
>> pattern describes as best practice: modelling configuration as a
>> transition prevents you from reasoning about it (or only in terms of
>> nominal success), because transitions are always assumed to take no time
>> to execute, and no time to compute which transition to take. Our approach
>> _does_ model both aspects, as 'activities' in a state.

> I don't see how it does. If you only model partial transitions, which is what
> you advocate in the paper, then what you get when activating a particular
> configuration is not always the same thing. Hence, it is hard to know what
> configuration X actually *is* (since it will depend on which configurations
> have been activated or not beforehand).

I fully agree with you here: in most current practice it is indeed
"hard" (I would rather say "close to impossible") to know what a
configuration really is, _because_ the constraints that represent a
configuration are not modelled explicitly. But this explicit _modelling_
is another story, although, of course, an extremely connected one.

> Moreover, a system state change always ends up being a transition. If one
> thinks about it as a state, one does know what it is and lets the system
> compute the transition automatically.

>>> Then, these configurations are coordinated using higher-level constructs.
>>
>> What does that mean exactly?
>>
>> Our approach suggests that one consider both aspects _together_ (the
>> coordination and the configuration), in one particular architecture.

> So does syskit. The configuration modelling defines configurations as state
> of the system (i.e. set of components, their connections and their
> configuration properties).

Good! Do you have a link to a formal representation of this modelling? (I
do not find it immediately from the link you gave in a previous post...)

> The coordination, which is done by Roby (syskit
> being a part of Roby) then allows to coordinate these configurations.

This is the _execution_ of the model. Fine. It would be nice if the
robotics domain could come up with separate modelling and execution, so
that the latter can be done in the most appropriate software framework
(Ruby, C++, Lua, ...).

> Admittedly, the syntax was not-so nice so far (since one had to use Roby
> coordination primitives). Using an extension of a syntax that people used at
> this year's Sauc-E, it is probably going to become something like:
>
> state_machine do
>   depends_on eslam_mapping # always needed
>   state(:explore) do
>     synchronize_localization_map_from_eslam
>     explore = corridor_servoing, :target => goal_position
>     transition explore.blocked_event => :explore
>   end
>   state(:navigate) do
>     synchronize_localization_map_from_eslam
>     navigate = corridor_following, :target => goal_position
>     transition navigate.blocked_event => :explore
>     transition navigate.out_of_bound_event => :navigate
>   end
> end
>
> which can then be combined to form more complex behaviours and so on ...

Ok.

> The
> point is: the system deployment is always done globally, i.e. each of the
> sub-configurations are self-contained *and* the compatibility between
> simultaneously active configurations is verified.

What verification is being done exactly?

> If you can know in advance
> which transitions will be needed (and in something like the above or like
> what you are presenting in your paper it is possible), the transitions can be
> pre-computed and then made to be executed by lua scripting.

Ok. This is indeed the third aspect, in addition to modelling and
execution: pre-processing! (or "reasoning", or "optimizing", or whatever
verb one wants to use).

> The point is: how the transitions are *executed* is obviously important. How
> they are *computed* is as important.

I agree! (I think that your "computed" is my "pre-processed"...?)

> I am trying to extend a hand so that we
> don't end up AGAIN spending resources solving the same problems. There's a
> serious duplication of effort

Sigh, you're absolutely right! But as we identified on the Paris workshop
last month: we first have to be able to "standardize" our meta-models
(read: the meaning of the terms we use, and the relationships that we find
important) before we can even realise that we are duplicating efforts...

But I am positive! I expect a lot from the two following workshops that are
planned around this theme: Leuven (Feb 11th, 2013) and ERF Lyon (19-21
March, 2013)... :-) _The_ important evolution in my view is that
standardization...

> Sylvain Joyeux (Dr.Ing.)

Herman

