For your information: <http://code.google.com/p/protobuf-dt/>.
This is an example of a tool to help people write the "model" for a
"Communication" message. It might be possible to apply the same
infrastructure to data models other than Google Protocol Buffers:
NetCDF, ROS typekits, ...
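To make concrete what kind of "model" such a message carries, here is a minimal sketch in plain Python (the field names and the `describe` helper are invented for illustration; protobuf-dt itself works on .proto files, not Python):

```python
from dataclasses import dataclass, fields

# Hypothetical "Communication" message; field names and types are
# invented for illustration, not taken from any real .proto file.
@dataclass
class Communication:
    sender: str
    receiver: str
    payload: bytes

def describe(message_type):
    """Return the (name, type) pairs that make up a message class: the
    machine-readable part of the model that tools such as protobuf-dt
    generate editors and validators from."""
    return [(f.name, f.type) for f in fields(message_type)]
```

Calling `describe(Communication)` yields exactly the information a tool needs to build an editor for the message, which is the point of having the model explicit at all.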
Herman
[software-toolchain] Eclipse editor for "data message", Google P
On 02/18/2012 09:51 PM, Herman Bruyninckx wrote:
>> I know that Herman's point of view is that the only "true" model-driven
>> approach is the one where you start with at least one meta-model (i.e. you
>> are modelling your models). I think that we both agree to disagree on
>> this point. As soon as you have one level of models you *are*
>> model-based.
>
> Sure, but the meta-model allows you to go further, plain and simple. Well,
> "simple" is definitely not really the right word here :-) The meta-model is
> the "introspection" into your model, so you need it if you want your
> components to decide automatically whether or not they understand each
> other's models.
> This is indeed not "needed" in the current robotics state of the practice
> where tons of human developers spend thousands of hours debugging the
> _semantics_ of the code they interchange...
One point that I knew but that got lost in the discussion is that, as soon
as you do some modelling, you have an underlying metamodel.
The issue was about having it explicitly created in a meta-meta-model
framework (the eCore way) or implicitly given by how the modelling system
is implemented (the oroGen way). To me, the jury is still out on the
best approach.
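The practical difference is easiest to see in a toy sketch (Python, with invented names; this is neither the eCore nor the oroGen API): when each component publishes its model as plain data, i.e. the metamodel is explicit, compatibility can be checked mechanically instead of by reading each other's documentation.

```python
# Hypothetical, minimal "explicit metamodel": each component publishes
# its data model as plain data (field name -> type name), so another
# component can check compatibility automatically.
producer_model = {"position": "float64[3]", "stamp": "time"}
consumer_model = {"position": "float64[3]", "stamp": "time", "frame": "string"}

def understands(consumer, producer):
    """True if every field the consumer expects is provided by the
    producer with the same declared type."""
    return all(producer.get(name) == typ for name, typ in consumer.items())
```

Here `understands(producer_model, consumer_model)` holds, while the reverse check fails because the producer never declares a `frame` field; with an implicit metamodel that mismatch would only surface at runtime or in a human's head.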
On Mon, 20 Feb 2012, Sylvain Joyeux wrote:
> On 02/18/2012 09:51 PM, Herman Bruyninckx wrote:
>> > I know that Herman's point of view is that the only "true" model-driven
>> > approach is the one where you start with at least one meta-model (i.e. you
>> > are modelling your models). I think that we both agree to disagree on
>> > this point. As soon as you have one level of models you *are*
>> > model-based.
>>
>> Sure, but the meta-model allows you to go further, plain and simple. Well,
>> "simple" is definitely not really the right word here :-) The meta-model is
>> the "introspection" into your model, so you need it if you want your
>> components to decide automatically whether or not they understand each
>> other's models.
>> This is indeed not "needed" in the current robotics state of the practice
>> where tons of human developers spend thousands of hours debugging the
>> _semantics_ of the code they interchange...
> One point that I knew but that got lost in the discussion is that, as soon as
> you do some modelling, you have an underlying metamodel.
>
> The issue was about having it explicitly created in a meta-meta-model
> framework (the eCore way) or implicitly given by how the modelling system is
> implemented (the oroGen way).
I very much agree with this analysis. As in more and more ICT-driven
applications, "Software is law!"[1], and that is _not_ a good thing.
> To me, the jury is still out on the best approach.
As with all "best" approaches, they are only "best" in particular contexts :-)
My research and impact ambitions are towards multi-vendor applications
(whatever meaning of vendor you want to attach: developer, company,
framework, domain, ...), and in that context it is most often the case
that different component providers do not have a good idea whether they are
using the same meta-model and semantics, so that's where it definitely
makes sense to introduce that overhead. In the kind of applications that
most of us on this mailing list are working on, the whole system still
always fits in the head of one or two core developers, so they only feel
"overhead" when meta-modelling has to be done explicitly.
It is, however, easy to predict that the "multi-vendor" context will only
gain importance in the near and medium term, so I feel rather
comfortable that my "evangelisation" efforts are going to be used sooner or
later :-)
Herman
[1] the software takes the decisions based on how it is implemented, and
not on the basis of what it _should_ be doing. This is particularly
worrisome in software systems that involve copyright decisions, human
rights, privacy, etc.
[software-toolchain] Eclipse editor for "data message", Google P
With your permission, I'll join in and give my two cents.
But I don't want to create too much noise in this thread, which is
already very active!
Honestly I don't care much about Eclipse / ECore versus something else.
In these months in the BRICS project I have seen the same
question arise many times, and no one has answered it: "how do you do modelling?"
It can be an Eclipse-based tool or an IDL but, at the end of the day,
as was said in this thread, what you need are contracts between
humans who understand each other when they talk about something.
Documentation is the first modelling language! If a data message is
made of 6 numbers, the easiest solution is to describe in your Doxygen
comments what these numbers represent.
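A sketch of that 6-number case in Python (the interpretation as a pose, and the field names, are invented for illustration; on the wire the receiver just sees six floats, so the comments *are* the model):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A hypothetical 6-number message. Without the comments below, a
    receiver only knows it gets six floats."""
    x: float      # position along X, metres
    y: float      # position along Y, metres
    z: float      # position along Z, metres
    roll: float   # rotation about X, radians
    pitch: float  # rotation about Y, radians
    yaw: float    # rotation about Z, radians
```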
As Sylvain said:
>The fact is that most people are fine to have the first barrier to
>incompatibilities be the weak-semantic-annotation that is provided with
>typing and then do the rest based on naming. The very fact that ROS is
>"so popular" is a statement to this.
I completely agree with Sylvain that Ada proves that strongly typed
interfaces can greatly _improve_ safety in large projects (for what
it's worth, DDS is also strongly typed), and I agree with him that, sooner
or later during the component composition phase, component wiring will
be done by naming (and, therefore, based on contracts between humans).
On the other hand, in the Robotics Wikibook I propose that a "best
practice" would be to attach some meta-data to the data (information
about the model of the data).
http://www.roboticswikibook.org/conf/x/WgBG
But I don't want to define _which_ meta-data shall be used (in my
example I use units). It can be any information that we _believe_ can
"help" another _human_ to understand the semantics of the data
message.
From a pragmatic point of view, a meta-model of data, in my opinion, is
nothing more than:
1) basic types can be used
2) basic types can be composed into custom types
3) meta-data can be included using the same rules of composition. The
only difference with meta-data is that it is sent only during the wiring
process and when it changes (stateful interface), and not once per
data sample.
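Rule 3 can be sketched in Python (a toy illustration with invented names, not the API of any real framework): the meta-data is composed like any other field, but travels once at wiring time instead of with every sample.

```python
class Port:
    """A hypothetical port holding meta-data about its data stream."""
    def __init__(self, metadata):
        self.metadata = metadata      # e.g. {"distance": "metres"}
        self.samples = []

def wire(out_port, in_port):
    """Connection phase: the meta-data is transferred exactly once."""
    in_port.metadata = dict(out_port.metadata)

def send(out_port, in_port, sample):
    """Data phase: only the raw value travels, no meta-data."""
    in_port.samples.append(sample)
```

After `wire(tx, rx)` the receiver knows the units once and for all; every later `send` carries only the number, which is what makes the interface "stateful" in Davide's sense.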
Davide
On 02/17/2012 11:36 AM, Sylvain Joyeux wrote:
> On 02/17/2012 11:23 AM, Hugo Garcia wrote:
>> What is your problem with models? What is your point?
> I have no problem with models, I'm developing and advocating Rock, which
> is indeed a model-based toolchain (and will hopefully talk about it at
> the ERF if my abstract proposal is accepted).
>
> However, I am against a trend to split between "true" model-driven and
> "false" model-driven, which has been (in my opinion) done quite a few
> times on this ML, where the "true" model-driven approaches are the ones
> doing metamodelling.
>
> I know that Herman's point of view is that the only "true" model-driven
> approach is the one where you start with at least one meta-model (i.e. you
> are modelling your models). I think that we both agree to disagree on
> this point. As soon as you have one level of models you *are*
> model-based.
What is your problem with metamodels? What is your point?
-H