Data Flow 2.0 Example

This mail summarizes my impressions of the new data flow
framework. First of all, the concept behind the new structure is described
on the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

The idea is that *outputs* are send-and-forget, while *inputs* specify
a 'policy': e.g. 'I want to read all samples, so buffer the input',
or 'I want lock-based protection', or ...
The policy is specified in a 'ConnPolicy' object which you can give to
the input port as a default, or override when the ports are connected
during deployment.

This is the basic use case of the new code:

#include <rtt/Port.hpp>
using namespace RTT;
 
// Component A:
OutputPort<double> a_output("MyOutput");
//...
double x = ...;
a_output.write( x );
 
// Component B buffers data produced by A (default buf size==20):
bool init_connection = true; // read last written value after connection
bool pull = true;            // fetch data directly from the output port during read
InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
internal::ConnPolicy::LOCK_FREE, init_connection, pull));
//...
double x;
while ( b_input.read( x ) ) {
   // process sample x...
}
// read() returns false once the buffer is empty
 
// Component C gets the most recent data produced by A:
bool init_connection = true; // read last written value after connection
bool pull = true;            // fetch data directly from the output port during read
InputPort<double> c_input("MyInput",
internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
init_connection, pull));
//...
double x;
if ( c_input.read( x ) ) {
   // use last value of x...
} else {
  // no new data
}
 
// Finally connect some ports. The order/direction of connecting no longer
// matters; it will always do as expected.
a_output.connectTo( b_input ); // or: b_input.connectTo( a_output );
a_output.connectTo( c_input ); // or other way around
 
// Change buffer size for B by giving a policy during connectTo:
b_input.disconnect();
b_input.connectTo( a_output, internal::ConnPolicy::buffer(20,
internal::ConnPolicy::LOCK_FREE, init_connection, pull));

Note: ConnPolicy will probably move to RTT or RTT::base.

Since each InputPort takes a default policy (which is type = DATA,
lock_policy = LOCK_FREE, init=false, pull=false), we can keep using the
old DeploymentComponent + XML scripts. The 'only' addition necessary
is to extend the XML elements such that a connection policy can be
defined to override the default. I propose to update the
deployment manual so that the connection semantics are clearer. What
we now call a 'connection', I propose to call a 'Topic',
analogous to ROS. So you'd define an OutputPort -> Topic and Topic ->
InputPort mapping in your XML file. We could easily generalize Topic
to also allow port names, such that in simple setups you just map
OutputPort -> InputPort. I'm not even proposing a new XML format here,
because when we write:

RTT 1.0, from DeploymentComponent example:

    <struct name="Ports"  type="PropertyBag">
      <simple name="a_output_port"
type="string"><value>AConnection</value></simple>
      <simple name="a_input_port"
type="string"><value>BConnection</value></simple>
    </struct>

We actually mean to write (what's in a name):
RTT 2.0

    <struct name="Topics"  type="PropertyBag">
      <simple name="a_output_port" type="string"><value>ATopic</value></simple>
      <simple name="b_input_port" type="string"><value>BTopic</value></simple>
    </struct>

In 2.0, you need to take care that exactly one port writes the given
topic (so one OutputPort) and that all other ports are InputPorts. If
this is not the case, the deployer refuses to set up the connections.

So much for deployment. The whole mechanism is reported to work
transparently over CORBA, though I still need to verify that statement
personally.

As before, the receiver can subscribe an RTT::Event to receive
notifications when a new data sample is ready. The scripting interface
of ports consists only of 'bool read( sample )' and 'bool write( sample )'.
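
For completeness, here is a minimal sketch of where that read() loop
typically lives inside a receiving component. It reuses only read() from the
example above; registering the port with the component's interface and the
event-based variant are left out, since that part of the API is still
settling, and the class and port names are placeholders:

#include <rtt/TaskContext.hpp>
#include <rtt/Port.hpp>
using namespace RTT;

// Sketch of a receiver that polls its input every cycle. Adding the port
// to the component's port interface is omitted here on purpose.
class Receiver : public TaskContext {
    InputPort<double> input;
public:
    Receiver() : TaskContext("Receiver"), input("MyInput") {}

    void updateHook() {
        double x;
        while ( input.read( x ) ) {
            // process sample x...
        }
    }
};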

Peter

Data Flow 2.0 Example

On Aug 19, 2009, at 10:03 , Peter Soetens wrote:

> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>
>>> In 2.0, you need to take care that exactly one port writes the given
>>> topic (so one OutputPort) and the other ports are all InputPorts. If
>>> this is not correct, the deployer refuses to setup the connections.
>>
>> Really!? You can't connect multiple outputs to one input? That
>> breaks many
>> of our systems, where we have multiple controllers all outputting
>> (say)
>> nAxesDesiredPosition, and then an executive component that turns on
>> only one
>> of these controllers. Those outputs all go to one input, the next
>> component
>> in line (say, a joint position limiter). Note that at any given
>> time, only
>> one output port is active, but we still have multiple connected.
>> Will this
>> no longer be possible?
>
> It's a universal pattern, so it must remain possible imho.

Well, I agree, at least! :-)

>> The intention is for this new dataflow implementation to replace the
>> existing implementation, correct?
>
> Yes. Sylvain discussed this scenario before
> (http://www.orocos.org/node/894#comment-2500). The question is if 1/
> run-time reconnection should take place vs 2/ allowing many-to-many.
> Case 1 guarantees that only one writer is present, by design, but
> reconfiguration is/may not be a real-time mechanism. Case 2. adds
> complexity when everything gets distributed; If data suddenly comes
> from a different process, clients must be notified that they need to
> pull from a different process. In the push scenario, this is not an
> issue.
>
> We need to think this over better with your use case in mind.

I'll throw another one at you then. One of our applications uses N
controller components and M dynamics components. We can switch
controllers and dynamics independently. So each controller is
connected to M dynamics, and each dynamics is connected to N
controllers. The executive/coordinator sees to it that only one
controller and one dynamics is running at any given time. This is
achieved by the executive running two sub-state machines, one for
controllers and one for dynamics. Each controller has a state in its
submachine (similarly for dynamics). Then each state fundamentally has
a start/stop in it. Nice, simple, easy to work with and understand
(there's a little more to it, but you get the general picture).

How does this scenario fit into the new dataflow model? With Sylvain's
multiplexer suggestion, I think you need 3 multiplexers:

1) take N controller outputs and form into one output
2) take the one now-multiplexed controller output, and fan back out
into M dynamics inputs
3) take M dynamics outputs and form into one output

With this approach, you go from having two connections
(ControllerOutput and DynamicsOutput) to having N + M + M connections:
one connection per input to 1), one connection per output of 2), and
one connection per input of 3). That is a _lot_ of extra stuff to
track in the deployer XML files.

You could combine 1 and 2 into one component if you wished, but we
would not as we have scenarios where we deploy just the N controllers
and don't use dynamics. Multiplexer 1) supports both scenarios.

Thoughts?
Stephen

Data Flow 2.0 Example

> How does this fit scenario into the new dataflow model? With Sylvain's
> multiplexer suggestion, I think you need 3 multiplexers:
>
> 1) take N controller outputs and form into one output
> 2) take the one now-multiplexed controller output, and fan back out
> into M dynamics inputs
> 3) take M dynamics outputs and form into one output

The data flow implementation takes care of single-output/multiple-input, so you
need two multiplexers: one that makes a single output from the N controllers
and one that makes a single output from the M dynamics modules.

Sylvain

Data Flow 2.0 Example

On Thu, 20 Aug 2009, Stephen Roderick wrote:

> On Aug 19, 2009, at 10:03 , Peter Soetens wrote:
>
>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>
>>>> In 2.0, you need to take care that exactly one port writes the given
>>>> topic (so one OutputPort) and the other ports are all InputPorts. If
>>>> this is not correct, the deployer refuses to setup the connections.
>>>
>>> Really!? You can't connect multiple outputs to one input? That
>>> breaks many
>>> of our systems, where we have multiple controllers all outputting
>>> (say)
>>> nAxesDesiredPosition, and then an executive component that turns on
>>> only one
>>> of these controllers. Those outputs all go to one input, the next
>>> component
>>> in line (say, a joint position limiter). Note that at any given
>>> time, only
>>> one output port is active, but we still have multiple connected.
>>> Will this
>>> no longer be possible?
>>
>> It's a universal pattern, so it must remain possible imho.
>
> Well, I agree, at least! :-)
>
>>> The intention is for this new dataflow implementation to replace the
>>> existing implementation, correct?
>>
>> Yes. Sylvain discussed this scenario before
>> (http://www.orocos.org/node/894#comment-2500). The question is if 1/
>> run-time reconnection should take place vs 2/ allowing many-to-many.
>> Case 1 guarantees that only one writer is present, by design, but
>> reconfiguration is/may not be a real-time mechanism. Case 2. adds
>> complexity when everything gets distributed; If data suddenly comes
>> from a different process, clients must be notified that they need to
>> pull from a different process. In the push scenario, this is not an
>> issue.
>>
>> We need to think this over better with your use case in mind.
>
> I'll throw another one at you then. One of our applications uses N
> controllers components and M dynamics components. We can switch
> controllers and dynamics independantly. So each controller is
> connected to M dynamics, and each dynamics is connected to N
> controllers. The executive/coordinator sees to it that only one
> controller and one dynamics is running at any given time. This is
> acheived by the executive running two sub state machines, one for
> controllers and one for dynamics. Each controller has a state in its
> submachine (similarly for dynamics). Then each state fundamentally has
> a start/stop in it. Nice, simple, easy to work with and understand
> (there's a little more to it, but you get the general picture).
>
> How does this fit scenario into the new dataflow model? With Sylvain's
> multiplexer suggestion, I think you need 3 multiplexers:
>
> 1) take N controller outputs and form into one output
> 2) take the one now-multiplexed controller output, and fan back out
> into M dynamics inputs
> 3) take M dynamics outputs and form into one output
>
> With this approach, you go from having two connections
> (ControllerOutput and DynamicsOutput) to having N + M + M connections:
> one connection per input to 1), one connection per output of 2), and
> one connection per input of 3). That is a _lot_ of extra stuff to
> track in the deployer XML files.
>
> You could combine 1 and 2 into one component if you wished, but we
> would not as we have scenarios where we deploy just the N controllers
> and don't use dynamics. Multiplexer 1) supports both scenarios.
>
> Thoughts?

If I understand your (very valid) use case, all of the N controllers have
the same data structure as output? Similarly for the dynamics? In that
case, I think your FSM-based _internal_ multiplexing is _the_ way to go.
Because that use case is not a matter of "communication" multiplexing, but
"computation" coordination :-)

There might be other use cases, though... Such as the "Simulink" one.

Anyway, as a general rule I would like to state that the focus of RTT
should be _not_ on providing full communication middleware, but only on
those things that will never be taken care of by real middleware
projects...

Herman

Data Flow 2.0 Example

On Aug 20, 2009, at 8:55, Herman Bruyninckx <Herman [dot] Bruyninckx [..] ...> wrote:

> On Thu, 20 Aug 2009, Stephen Roderick wrote:
>
>> On Aug 19, 2009, at 10:03 , Peter Soetens wrote:
>>
>>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>>
>>>>> In 2.0, you need to take care that exactly one port writes the
>>>>> given
>>>>> topic (so one OutputPort) and the other ports are all
>>>>> InputPorts. If
>>>>> this is not correct, the deployer refuses to setup the
>>>>> connections.
>>>>
>>>> Really!? You can't connect multiple outputs to one input? That
>>>> breaks many
>>>> of our systems, where we have multiple controllers all outputting
>>>> (say)
>>>> nAxesDesiredPosition, and then an executive component that turns on
>>>> only one
>>>> of these controllers. Those outputs all go to one input, the next
>>>> component
>>>> in line (say, a joint position limiter). Note that at any given
>>>> time, only
>>>> one output port is active, but we still have multiple connected.
>>>> Will this
>>>> no longer be possible?
>>>
>>> It's a universal pattern, so it must remain possible imho.
>>
>> Well, I agree, at least! :-)
>>
>>>> The intention is for this new dataflow implementation to replace
>>>> the
>>>> existing implementation, correct?
>>>
>>> Yes. Sylvain discussed this scenario before
>>> (http://www.orocos.org/node/894#comment-2500). The question is if
>>> 1/
>>> run-time reconnection should take place vs 2/ allowing many-to-many.
>>> Case 1 guarantees that only one writer is present, by design, but
>>> reconfiguration is/may not be a real-time mechanism. Case 2. adds
>>> complexity when everything gets distributed; If data suddenly comes
>>> from a different process, clients must be notified that they need to
>>> pull from a different process. In the push scenario, this is not an
>>> issue.
>>>
>>> We need to think this over better with your use case in mind.
>>
>> I'll throw another one at you then. One of our applications uses N
>> controllers components and M dynamics components. We can switch
>> controllers and dynamics independantly. So each controller is
>> connected to M dynamics, and each dynamics is connected to N
>> controllers. The executive/coordinator sees to it that only one
>> controller and one dynamics is running at any given time. This is
>> acheived by the executive running two sub state machines, one for
>> controllers and one for dynamics. Each controller has a state in its
>> submachine (similarly for dynamics). Then each state fundamentally
>> has
>> a start/stop in it. Nice, simple, easy to work with and understand
>> (there's a little more to it, but you get the general picture).
>>
>> How does this fit scenario into the new dataflow model? With
>> Sylvain's
>> multiplexer suggestion, I think you need 3 multiplexers:
>>
>> 1) take N controller outputs and form into one output
>> 2) take the one now-multiplexed controller output, and fan back out
>> into M dynamics inputs
>> 3) take M dynamics outputs and form into one output
>>
>> With this approach, you go from having two connections
>> (ControllerOutput and DynamicsOutput) to having N + M + M
>> connections:
>> one connection per input to 1), one connection per output of 2), and
>> one connection per input of 3). That is a _lot_ of extra stuff to
>> track in the deployer XML files.
>>
>> You could combine 1 and 2 into one component if you wished, but we
>> would not as we have scenarios where we deploy just the N controllers
>> and don't use dynamics. Multiplexer 1) supports both scenarios.
>>
>> Thoughts?
>
> If I understand your (very valid) use case, all of the N controllers
> have
> the same data structure as output? Similarly for the dynamics? In that
> case, I think your FSM-based _internal_ multiplexing is _the_ way to
> go.
> Because that use case is not a matter of "communication"
> multiplexing, but
> "computation" coordination :-)

Yes and yes. You are completely correct. This is all about
coordinating computation, rather than communication.

> There might be other use cases, though... Such as the "Simulink" one.

Agreed, and some of these will definitely gain from the new dataflow.

> Anyway, as a general rule I would like to state that the focus of RTT
> should be _not_ on providing full communication middleware, but only
> on
> those things that will never be taken care of by real middleware
> projects...
>
> Herman

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 07:25:35AM -0400, Stephen Roderick wrote:
> On Aug 19, 2009, at 10:03 , Peter Soetens wrote:
> > We need to think this over better with your use case in mind.
>
> I'll throw another one at you then. One of our applications uses N
> controllers components and M dynamics components. We can switch
> controllers and dynamics independantly. So each controller is
> connected to M dynamics, and each dynamics is connected to N
> controllers. The executive/coordinator sees to it that only one
> controller and one dynamics is running at any given time. This is
> acheived by the executive running two sub state machines, one for
> controllers and one for dynamics. Each controller has a state in its
> submachine (similarly for dynamics). Then each state fundamentally has
> a start/stop in it. Nice, simple, easy to work with and understand
> (there's a little more to it, but you get the general picture).
>
> How does this fit scenario into the new dataflow model? With Sylvain's
> multiplexer suggestion, I think you need 3 multiplexers:
>
> 1) take N controller outputs and form into one output
> 2) take the one now-multiplexed controller output, and fan back out
> into M dynamics inputs
> 3) take M dynamics outputs and form into one output
>
> With this approach, you go from having two connections
> (ControllerOutput and DynamicsOutput) to having N + M + M connections:
> one connection per input to 1), one connection per output of 2), and
> one connection per input of 3). That is a _lot_ of extra stuff to
> track in the deployer XML files.

Hang on. In your original setup you have two connections between each
of the N controller and M dynamics components, right? That would be 2*N*M
connections. With the multiplexer you would have far fewer: only N+M+M
connections in total.

What am I missing? Are you concerned about having to reconfigure the
multiplexer component and the respective controller and dynamic
components?

Markus

Data Flow 2.0 Example

On Aug 20, 2009, at 8:53, Markus Klotzbuecher <markus [dot] klotzbuecher [..] ...> wrote:

> On Thu, Aug 20, 2009 at 07:25:35AM -0400, Stephen Roderick wrote:
>> On Aug 19, 2009, at 10:03 , Peter Soetens wrote:
>>> We need to think this over better with your use case in mind.
>>
>> I'll throw another one at you then. One of our applications uses N
>> controllers components and M dynamics components. We can switch
>> controllers and dynamics independantly. So each controller is
>> connected to M dynamics, and each dynamics is connected to N
>> controllers. The executive/coordinator sees to it that only one
>> controller and one dynamics is running at any given time. This is
>> acheived by the executive running two sub state machines, one for
>> controllers and one for dynamics. Each controller has a state in its
>> submachine (similarly for dynamics). Then each state fundamentally
>> has
>> a start/stop in it. Nice, simple, easy to work with and understand
>> (there's a little more to it, but you get the general picture).
>>
>> How does this fit scenario into the new dataflow model? With
>> Sylvain's
>> multiplexer suggestion, I think you need 3 multiplexers:
>>
>> 1) take N controller outputs and form into one output
>> 2) take the one now-multiplexed controller output, and fan back out
>> into M dynamics inputs
>> 3) take M dynamics outputs and form into one output
>>
>> With this approach, you go from having two connections
>> (ControllerOutput and DynamicsOutput) to having N + M + M
>> connections:
>> one connection per input to 1), one connection per output of 2), and
>> one connection per input of 3). That is a _lot_ of extra stuff to
>> track in the deployer XML files.
>
> Hang on. In your original setup you have two connections between each
> of N controller and M dynamic components, right? That would be 2*N*M
> connections. With the multiplexer you will have much less, only N+M+M
> total connections.
>
> What am I missing? Are you concerned about having to reconfigure the
> multiplexer component and the respective controller and dynamic
> components?
>
> Markus

Now we have N*M + M port to port connections. But we only have 2
topics (as such) to manage in the deployer file.

With multiplexing you have N + N + M port to port connections but you
have N + N + M topics to manage in the deployer file. Plus you have 3
additional multiplex components. I don't see the win here.

For us, N~=10 and M=3. FYI.

Double thumbing from my iPhone

Data Flow 2.0 Example

On Wednesday 19 August 2009 16:03:38 Peter Soetens wrote:
> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
> >> In 2.0, you need to take care that exactly one port writes the given
> >> topic (so one OutputPort) and the other ports are all InputPorts. If
> >> this is not correct, the deployer refuses to setup the connections.
> >
> > Really!? You can't connect multiple outputs to one input? That breaks
> > many of our systems, where we have multiple controllers all outputting
> > (say) nAxesDesiredPosition, and then an executive component that turns on
> > only one of these controllers. Those outputs all go to one input, the
> > next component in line (say, a joint position limiter). Note that at any
> > given time, only one output port is active, but we still have multiple
> > connected. Will this no longer be possible?
>
> It's a universal pattern, so it must remain possible imho.
Well, why don't you just rewire at the same time as you start/stop the
controllers? You can do that with

current_output.disconnect(the_input)
new_output.connect(the_input)

That's two lines of code ...
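
In the port API shown at the top of this thread (where the calls are
disconnect() and connectTo()), that would look roughly like the following;
the port names are placeholders:

// Executed by whoever coordinates the switch, e.g. when stopping the old
// controller and starting the new one:
the_input.disconnect();            // drop the connection to the old output
the_input.connectTo( new_output ); // and wire up the new one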

Sylvain

Data Flow 2.0 Example

On Aug 19, 2009, at 10:21 , Sylvain Joyeux wrote:

> On Wednesday 19 August 2009 16:03:38 Peter Soetens wrote:
>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>> In 2.0, you need to take care that exactly one port writes the
>>>> given
>>>> topic (so one OutputPort) and the other ports are all InputPorts.
>>>> If
>>>> this is not correct, the deployer refuses to setup the connections.
>>>
>>> Really!? You can't connect multiple outputs to one input? That
>>> breaks
>>> many of our systems, where we have multiple controllers all
>>> outputting
>>> (say) nAxesDesiredPosition, and then an executive component that
>>> turns on
>>> only one of these controllers. Those outputs all go to one input,
>>> the
>>> next component in line (say, a joint position limiter). Note that
>>> at any
>>> given time, only one output port is active, but we still have
>>> multiple
>>> connected. Will this no longer be possible?
>>
>> It's a universal pattern, so it must remain possible imho.
> Well. Why don't you just rewire at the same time than you start/stop
> the
> controllers ? You can do that with
>
> current_output.disconnect(the_input)
> new_output.connect(the_input)
>
> That's two lines of code ...

Yes, but now "someone" needs to know what connections to rewire and
when. And anytime you update/add a component's ports that are involved
in one of these runtime-rewired connections, you have to remember to
go and update the "someone" to make sure that they rewire things. Not
scalable. Not something I would sign up to.

Currently, I just deploy a bunch of components, list their ports/
connections to the deployer, and have an executive component that
simply runs a state machine with appropriate component start/stop
calls in each state's entry and exit functions. I would rather not add
more complexity and coupling to this scenario than necessary.

YMMV
Stephen

Re: Data Flow 2.0 Example

snrkiwi wrote:

On Aug 19, 2009, at 10:21 , Sylvain Joyeux wrote:

> On Wednesday 19 August 2009 16:03:38 Peter Soetens wrote:
>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>> In 2.0, you need to take care that exactly one port writes the
>>>> given
>>>> topic (so one OutputPort) and the other ports are all InputPorts.
>>>> If
>>>> this is not correct, the deployer refuses to setup the connections.
>>>
>>> Really!? You can't connect multiple outputs to one input? That
>>> breaks
>>> many of our systems, where we have multiple controllers all
>>> outputting
>>> (say) nAxesDesiredPosition, and then an executive component that
>>> turns on
>>> only one of these controllers. Those outputs all go to one input,
>>> the
>>> next component in line (say, a joint position limiter). Note that
>>> at any
>>> given time, only one output port is active, but we still have
>>> multiple
>>> connected. Will this no longer be possible?
>>
>> It's a universal pattern, so it must remain possible imho.
> Well. Why don't you just rewire at the same time than you start/stop
> the
> controllers ? You can do that with
>
> current_output.disconnect(the_input)
> new_output.connect(the_input)
>
> That's two lines of code ...

Yes, but now "someone" needs to know what connections to rewire and
when. And anytime you update/add a component's ports that are involved
in one of these runtime-rewired connections, you have to remember to
go and update the "someone" to make sure that they rewire things. Not
scalable. Not something I would sign up to.

Currently, I just deploy a bunch of components, list their ports/
connections to the deployer, and have an executive component that
simply runs a state machine with appropriate component start/stop
calls in each state's entry and exit functions. I would rather not add
complexity and (more) coupling to this scenario than necessary.

YMMV
Stepen

I think "someone" or "executive component", which you mentioned, actually is a supervisor/coordinator that control the configured dataflow between components. It means the coordinator must know the structure of control system beforehand. The switching mechanism between partial components/controllers should be decided based on:
- coordination patterns such as fixed-priority, master-slave, parallel, sequencial, cyclic,...
- "want to operate" intention of partial components/controllers.

Regards,
Phong

Data Flow 2.0 Example

On Wed, Aug 19, 2009 at 17:08, S Roderick<kiwi [dot] net [..] ...> wrote:
> On Aug 19, 2009, at 10:21 , Sylvain Joyeux wrote:
>
>> On Wednesday 19 August 2009 16:03:38 Peter Soetens wrote:
>>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>>> In 2.0, you need to take care that exactly one port writes the
>>>>> given
>>>>> topic (so one OutputPort) and the other ports are all InputPorts.
>>>>> If
>>>>> this is not correct, the deployer refuses to setup the connections.
>>>>
>>>> Really!? You can't connect multiple outputs to one input? That
>>>> breaks
>>>> many of our systems, where we have multiple controllers all
>>>> outputting
>>>> (say) nAxesDesiredPosition, and then an executive component that
>>>> turns on
>>>> only one of these controllers. Those outputs all go to one input,
>>>> the
>>>> next component in line (say, a joint position limiter). Note that
>>>> at any
>>>> given time, only one output port is active, but we still have
>>>> multiple
>>>> connected. Will this no longer be possible?
>>>
>>> It's a universal pattern, so it must remain possible imho.
>> Well. Why don't you just rewire at the same time than you start/stop
>> the
>> controllers ? You can do that with
>>
>>  current_output.disconnect(the_input)
>>  new_output.connect(the_input)
>>
>> That's two lines of code ...
>
> Yes, but now "someone" needs to know what connections to rewire and
> when. And anytime you update/add a component's ports that are involved
> in one of these runtime-rewired connections, you have to remember to
> go and update the "someone" to make sure that they rewire things. Not
> scalable. Not something I would sign up to.
>
> Currently, I just deploy a bunch of components, list their ports/
> connections to the deployer, and have an executive component that
> simply runs a state machine with appropriate component start/stop
> calls in each state's entry and exit functions. I would rather not add
> complexity and (more) coupling to this scenario than necessary.

I had seen this coming too after your previous remark. MIMO allows you to
set up all possible flows beforehand and lets the executive switch
flows by simply starting and stopping a component. But in a way you
even want both: when one component is pushing data, the others
shouldn't be allowed to do so (say a method call into a stopped
component accidentally causes a write to an output port).

Reconnections during real/run-time are a no-no. They can cause
unpredictably long delays at times when, I'd bet, timing is everything.
So there must be an alternative for the 'switching components' use
case. This scenario is so common that it should belong in the
'application templates' package (thanks Markus for the hint :-D).

The way RTT 1.0 solved this problem gives a false sense of security,
as I outlined above. The RTT does not and cannot prevent simultaneous
writes from multiple sources.

I wonder how ROS fixes this, because I believe they also allow another
component to take over if the former component fails.

Peter

Data Flow 2.0 Example

On Aug 19, 2009, at 12:02 , Peter Soetens wrote:

> On Wed, Aug 19, 2009 at 17:08, S Roderick<kiwi [dot] net [..] ...> wrote:
>> On Aug 19, 2009, at 10:21 , Sylvain Joyeux wrote:
>>
>>> On Wednesday 19 August 2009 16:03:38 Peter Soetens wrote:
>>>> On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>>>>>> In 2.0, you need to take care that exactly one port writes the
>>>>>> given
>>>>>> topic (so one OutputPort) and the other ports are all InputPorts.
>>>>>> If
>>>>>> this is not correct, the deployer refuses to setup the
>>>>>> connections.
>>>>>
>>>>> Really!? You can't connect multiple outputs to one input? That
>>>>> breaks
>>>>> many of our systems, where we have multiple controllers all
>>>>> outputting
>>>>> (say) nAxesDesiredPosition, and then an executive component that
>>>>> turns on
>>>>> only one of these controllers. Those outputs all go to one input,
>>>>> the
>>>>> next component in line (say, a joint position limiter). Note that
>>>>> at any
>>>>> given time, only one output port is active, but we still have
>>>>> multiple
>>>>> connected. Will this no longer be possible?
>>>>
>>>> It's a universal pattern, so it must remain possible imho.
>>> Well. Why don't you just rewire at the same time than you start/stop
>>> the
>>> controllers ? You can do that with
>>>
>>> current_output.disconnect(the_input)
>>> new_output.connect(the_input)
>>>
>>> That's two lines of code ...
>>
>> Yes, but now "someone" needs to know what connections to rewire and
>> when. And anytime you update/add a component's ports that are
>> involved
>> in one of these runtime-rewired connections, you have to remember to
>> go and update the "someone" to make sure that they rewire things. Not
>> scalable. Not something I would sign up to.
>>
>> Currently, I just deploy a bunch of components, list their ports/
>> connections to the deployer, and have an executive component that
>> simply runs a state machine with appropriate component start/stop
>> calls in each state's entry and exit functions. I would rather not
>> add
>> complexity and (more) coupling to this scenario than necessary.
>
> I had seen this coming too after your previous remark. MIMO allows to
> setup all possible flows on beforehand and let the executive switch
> flows by simply starting and stopping a component. But in a way you
> even want both: when one component is pushing data, the other's
> shouldn't be allowed to do so (say that a method call into a stopped
> component accidentally causes a write to an output port).

That would be ideal, but I can live without it if the expense is
additional coupling. I am *not* trying to prevent every possible error
condition.

> Reconnections during the real/run-time are a no-no. They can cause
> unpredictable long delays at times where I bet timing is everything.
> So there must be an alternative to the 'switching components' use
> case. This scenario is so common that it should belong in the
> 'application templates' package (thanks Markus for hinting me :-D).

Reconnections of ... ? Ports? "Switching components" needs an
alternative? Are you actually saying that having our executive start/
stop components at runtime is not real time? I'm not completely
understanding you here.

> The way RTT 1.0 solved this problem gives a false sense of security,
> as I outlined above. The RTT does and can not prevent simultaneous
> writes from multiple sources.

Like I said, it is fine with us. The programmer has to trust something.
S

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 13:11, Stephen Roderick<kiwi [dot] net [..] ...> wrote:
> On Aug 19, 2009, at 12:02 , Peter Soetens wrote:
>> Reconnections during the real/run-time are a no-no. They can cause
>> unpredictable long delays at times where I bet timing is everything.
>> So there must be an alternative to the 'switching components' use
>> case. This scenario is so common that it should belong in the
>> 'application templates' package (thanks Markus for hinting me :-D).
>
> Reconnections of ... ? Ports? "Switching components" needs an alternative?
> Are you actually saying that having our executive start/stop components at
> runtime is not real time? I'm not completely understanding you here.

With 'reconnection' I mean removing and re-creating a C++ connection object
between two ports.

In RTT 1.0, all ports (readers + writers) shared the same C++ connection
object. So it was the application's responsibility that only one
'writer' wrote to it. When using multiplexing, this object can live as
long as the application. In RTT 2.0, there is a C++ connection object
from the output to each input (so 1:N means N connection objects).
This has the tremendous advantage that the inputs don't need to share
state (they have their own buffer etc.). If we want to add
multiplexing in the data-flow classes (so without a multiplexing
component), we'll need the outputs to be able to share connections
in a similar way, while keeping the N connection objects. So a 2:N
mapping would still have N connection objects, but both output ports
would write to them. This gets tricky when CORBA is involved, but not
impossible.
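
To make the 1:N picture concrete, here is a minimal sketch that reuses only
the OutputPort/InputPort/ConnPolicy calls from the example at the top of this
thread (the exact location of ConnPolicy may still change, and the values are
just for illustration):

#include <rtt/Port.hpp>
using namespace RTT;

int main()
{
    // One writer, two readers: each connectTo() creates its own connection
    // object, so the inputs keep independent state (own buffer / own sample).
    OutputPort<double> out("Out");
    InputPort<double> in_a("InA", internal::ConnPolicy::buffer(10,
        internal::ConnPolicy::LOCK_FREE, true, false));
    InputPort<double> in_b("InB", internal::ConnPolicy::data(
        internal::ConnPolicy::LOCK_FREE, true, false));

    out.connectTo( in_a );  // connection object #1: a buffer of 10 samples
    out.connectTo( in_b );  // connection object #2: a single data sample

    out.write( 1.0 );
    out.write( 2.0 );

    double x;
    while ( in_a.read( x ) ) { /* drains its own buffer: 1.0, then 2.0 */ }
    if ( in_b.read( x ) )    { /* sees only the most recent value: 2.0 */ }
    return 0;
}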

For the multiplexing component idea, I don't have a clear view yet.
But it would require a change in how deployment is done to keep the
red tape to a minimum, such that for an N:M mapping, you'd only need to
define two connections.

I'm still undecided.

Peter

Data Flow 2.0 Example

> Yes, but now "someone" needs to know what connections to rewire and
> when. And anytime you update/add a component's ports that are involved
> in one of these runtime-rewired connections, you have to remember to
> go and update the "someone" to make sure that they rewire things. Not
> scalable. Not something I would sign up to.
>
> Currently, I just deploy a bunch of components, list their ports/
> connections to the deployer, and have an executive component that
> simply runs a state machine with appropriate component start/stop
> calls in each state's entry and exit functions. I would rather not add
> complexity and (more) coupling to this scenario than necessary.

I actually thought about the "coupling thing". My general point is: your
execution component *already* knows about them, so adding the
connection/disconnection calls does not actually make it more coupled, because
you already have a component that "needs to know" about the architecture.

Another option is to make a generic multiplexer component, and make that
component run under the sequential activity. And that component could actually
go in rtt/extras (for instance :P)

Data Flow 2.0 Example

On Wed, Aug 19, 2009 at 17:34, Sylvain Joyeux<sylvain [dot] joyeux [..] ...> wrote:
>> Yes, but now "someone" needs to know what connections to rewire and
>> when. And anytime you update/add a component's ports that are involved
>> in one of these runtime-rewired connections, you have to remember to
>> go and update the "someone" to make sure that they rewire things. Not
>> scalable. Not something I would sign up to.
>>
>> Currently, I just deploy a bunch of components, list their ports/
>> connections to the deployer, and have an executive component that
>> simply runs a state machine with appropriate component start/stop
>> calls in each state's entry and exit functions. I would rather not add
>> complexity and (more) coupling to this scenario than necessary.
>
> I actually though about the "coupling thing". My general point is: your
> execution component *already* knows about them, so adding the
> connection/disconnection thing is actually not making it more coupled, because
> you already have a component that "needs to know" about the architecture.

The issue is not really coupling, but specifying things in two
locations, and keeping them in sync.

>
> Another option is to make a generic multiplexer component, and make that
> component run under the sequential activity. And that component could actually
> go in rtt/extras (for instance :P)

Putting the mechanism in a component instead of in the connection
layer is an alternative. But even then, we don't want to form
connections during run/real-time, so the connections must be set up
beforehand anyway and the multiplexer can only be responsible for the
switching, just like the executive does now. I'm very much against
'resource locking' in this regard, because a crashed application
cannot easily free a resource (unless there is help from the
underlying OS, which would tie us to certain platforms).

Peter

Data Flow 2.0 Example

On Aug 19, 2009, at 12:09 , Peter Soetens wrote:

> On Wed, Aug 19, 2009 at 17:34, Sylvain
> Joyeux<sylvain [dot] joyeux [..] ...> wrote:
>>> Yes, but now "someone" needs to know what connections to rewire and
>>> when. And anytime you update/add a component's ports that are
>>> involved
>>> in one of these runtime-rewired connections, you have to remember to
>>> go and update the "someone" to make sure that they rewire things.
>>> Not
>>> scalable. Not something I would sign up to.
>>>
>>> Currently, I just deploy a bunch of components, list their ports/
>>> connections to the deployer, and have an executive component that
>>> simply runs a state machine with appropriate component start/stop
>>> calls in each state's entry and exit functions. I would rather not
>>> add
>>> complexity and (more) coupling to this scenario than necessary.
>>
>> I actually though about the "coupling thing". My general point is:
>> your
>> execution component *already* knows about them, so adding the
>> connection/disconnection thing is actually not making it more
>> coupled, because
>> you already have a component that "needs to know" about the
>> architecture.
>
> The issue is not really coupling, but specifying things in two
> locations, and keeping them in sync.

Agreed completely.

>> Another option is to make a generic multiplexer component, and make
>> that
>> component run under the sequential activity. And that component
>> could actually
>> go in rtt/extras (for instance :P)
>
> Putting the mechanism in a component instead of in the connection
> layer is an alternative. But even then, we don't want to form
> connections during run/real-time, so the connections must be setup on
> beforehand anyway and the multiplexer can only be responsible for the
> switching, just like the executive now does. I'm very against
> 'resource locking' with this regard, because a crashed application
> can't that easily free a resource (unless there is help from the
> underlying OS, which would tie us to certain platforms).

So if you can't reconnect in real-time, then that leads to allowing
many-to-one or one-to-many connections, doesn't it?
S

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 13:12, Stephen Roderick<kiwi [dot] net [..] ...> wrote:
> On Aug 19, 2009, at 12:09 , Peter Soetens wrote:
>> Putting the mechanism in a component instead of in the connection
>> layer is an alternative. But even then, we don't want to form
>> connections during run/real-time, so the connections must be setup on
>> beforehand anyway and the multiplexer can only be responsible for the
>> switching, just like the executive now does. I'm very against
>> 'resource locking' with this regard, because a crashed application
>> can't that easily free a resource (unless there is help from the
>> underlying OS, which would tie us to certain platforms).
>
> So if you can't reconnect in realtime, than that leads to allowing
> many-to-one or one-to-many connections, doesn't it?

Yes. Or a multiplexer. Or we could set up connections from the outputs
to all inputs and have a mechanism that makes the input pick the
connection with the most recent data. That would create N*M connection
objects of which only M are used at any given time. Doesn't sound that
attractive, does it?

Peter

Data Flow 2.0 Example

On Wednesday 19 August 2009 18:09:23 you wrote:
> On Wed, Aug 19, 2009 at 17:34, Sylvain Joyeux<sylvain [dot] joyeux [..] ...> wrote:
> >> Yes, but now "someone" needs to know what connections to rewire and
> >> when. And anytime you update/add a component's ports that are involved
> >> in one of these runtime-rewired connections, you have to remember to
> >> go and update the "someone" to make sure that they rewire things. Not
> >> scalable. Not something I would sign up to.
> >>
> >> Currently, I just deploy a bunch of components, list their ports/
> >> connections to the deployer, and have an executive component that
> >> simply runs a state machine with appropriate component start/stop
> >> calls in each state's entry and exit functions. I would rather not add
> >> complexity and (more) coupling to this scenario than necessary.
> >
> > I actually though about the "coupling thing". My general point is: your
> > execution component *already* knows about them, so adding the
> > connection/disconnection thing is actually not making it more coupled,
> > because you already have a component that "needs to know" about the
> > architecture.
>
> The issue is not really coupling, but specifying things in two
> locations, and keeping them in sync.
>
> > Another option is to make a generic multiplexer component, and make that
> > component run under the sequential activity. And that component could
> > actually go in rtt/extras (for instance :P)
>
> Putting the mechanism in a component instead of in the connection
> layer is an alternative. But even then, we don't want to form
> connections during run/real-time, so the connections must be setup on
> beforehand anyway and the multiplexer can only be responsible for the
> switching, just like the executive now does. I'm very against
> 'resource locking' with this regard, because a crashed application
> can't that easily free a resource (unless there is help from the
> underlying OS, which would tie us to certain platforms).
Yes, that's the point of the multiplexer component! You set up the
connections at deployment time and let it manage the multiplexing. What I
meant was:
- the multiplexer creates N ports of a given type, based either on its property
configuration or on a method call
- it is triggered whenever one of these ports has new data
- it pushes the data from the input ports to the output ports (sketched below)

The challenge is to implement whatever is needed in RTT's type system to make
this possible *in a completely generic way*. Most of it is there, I believe, but
there is probably a bit of glue needed.

The idea here is the following:
- you can integrate different multiplexing strategies very easily. For
instance: allow only one input, allow multiple inputs, multiplex FIFO,
multiplex with priorities, ...
- you keep the dataflow layer simple
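
A rough sketch of the forwarding step only; the creation of the N ports from
properties, the triggering and the type-system glue are exactly the open
parts above, and read()/write() are the calls already shown in this thread:

#include <rtt/Port.hpp>
#include <vector>
#include <cstddef>
using namespace RTT;

// Forward every pending sample from a set of input ports to one output.
// A different strategy (single input only, FIFO across ports, priorities)
// would replace the loop body below.
void multiplex( std::vector< InputPort<double>* > const& inputs,
                OutputPort<double>& output )
{
    double sample;
    for ( std::size_t i = 0; i < inputs.size(); ++i ) {
        while ( inputs[i]->read( sample ) )
            output.write( sample );
    }
}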

As to ROS, the topic-based approach is naturally multiplexing ... but
without any control (at least, as far as I know).

Sylvain

Data Flow 2.0 Example

On Aug 20, 2009, at 04:07 , Sylvain Joyeux wrote:

> On Wednesday 19 August 2009 18:09:23 you wrote:
>> On Wed, Aug 19, 2009 at 17:34, Sylvain Joyeux<sylvain [dot] joyeux [..] ...> wrote:
>>>> Yes, but now "someone" needs to know what connections to rewire and
>>>> when. And anytime you update/add a component's ports that are involved
>>>> in one of these runtime-rewired connections, you have to remember to
>>>> go and update the "someone" to make sure that they rewire things. Not
>>>> scalable. Not something I would sign up to.
>>>>
>>>> Currently, I just deploy a bunch of components, list their ports/
>>>> connections to the deployer, and have an executive component that
>>>> simply runs a state machine with appropriate component start/stop
>>>> calls in each state's entry and exit functions. I would rather not add
>>>> complexity and (more) coupling to this scenario than necessary.
>>>
>>> I actually thought about the "coupling thing". My general point is: your
>>> execution component *already* knows about them, so adding the
>>> connection/disconnection thing is actually not making it more coupled,
>>> because you already have a component that "needs to know" about the
>>> architecture.
>>
>> The issue is not really coupling, but specifying things in two
>> locations, and keeping them in sync.
>>
>>> Another option is to make a generic multiplexer component, and make that
>>> component run under the sequential activity. And that component could
>>> actually go in rtt/extras (for instance :P)
>>
>> Putting the mechanism in a component instead of in the connection
>> layer is an alternative. But even then, we don't want to form
>> connections during run/real-time, so the connections must be setup on
>> beforehand anyway and the multiplexer can only be responsible for the
>> switching, just like the executive now does. I'm very against
>> 'resource locking' with this regard, because a crashed application
>> can't that easily free a resource (unless there is help from the
>> underlying OS, which would tie us to certain platforms).
> Yes, that's the point of the multiplexer component! You set up the
> connections at deployment time and let it manage the multiplexing. What I
> meant was:
> - the multiplexer creates N ports of a given type, based either on its
> property configuration or on a method call

And now I have a property file that has to match a deployment
scenario. One more thing to track, or get wrong.

I do think that the new dataflow work has merit; please don't think
otherwise. I simply think that it removes a necessary and useful
feature for no apparent gain, at least to me. And I think that by removing
the many-to-one (or one-to-many) capability, we are actually introducing
additional complexity to compensate. That will make systems more
brittle. That's a lose-lose situation in my eyes.

> - it is triggered whenever one of these ports has new data
> - it pushes the data from the input ports to the output ports
>
> The challenge is to implement whatever is needed in RTT's type system to
> make this possible *in a completely generic way*. Most of it is there, I
> believe, but there is probably a bit of glue needed.
>
> The idea here is the following:
> - you can integrate different multiplexing strategies very easily. For
> instance: allow only one input, allow multiple inputs, multiplex FIFO,
> multiplex with priorities, ...
> - you keep the dataflow layer simple

Every single one of our projects employs the many-to-one scenario.
Every single one. I'm sure you can understand why I don't want to
develop new multiplexer components when I currently don't need them and
see no apparent gain in them for us.

Stephen

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 10:07:22AM +0200, Sylvain Joyeux wrote:
> On Wednesday 19 August 2009 18:09:23 you wrote:

> > Putting the mechanism in a component instead of in the connection
> > layer is an alternative. But even then, we don't want to form
> > connections during run/real-time, so the connections must be setup on
> > beforehand anyway and the multiplexer can only be responsible for the
> > switching, just like the executive now does. I'm very against
> > 'resource locking' with this regard, because a crashed application
> > can't that easily free a resource (unless there is help from the
> > underlying OS, which would tie us to certain platforms).
> Yes, that's the point of the multiplexer component ! You set up the
> connections at deployment time and let it manage the multiplexing. What I
> meant was:
> - the multiplexer creates N ports of given type, based either on its property
> configuration or on a method call
> - it is triggered whenever one of these ports has new data
> - it pushes the data from the input ports to the output ports

I agree this is the way to go.

> The challenge is to implement whatever is needed in RTT's type system to make
> this possible *in a completely generic way*. Most of it is there, I believe,
> but there is probably a bit of glue needed.

Hmm, so to be more precise, the problem is being able to create ports
of arbitrary types at runtime, yes? For this, a toolkit would
need to provide some kind of factory method for creating ports of its
types. This is something that would be very useful for (lua) scripting
too!
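
Purely to illustrate the idea (this interface does not exist in RTT; the
names and return types are made up), such a per-type factory could look
like:

#include <string>

namespace RTT { namespace base { class PortInterface; } }  // assumed location

// Hypothetical factory a toolkit could provide for each of its types;
// everything here is invented for illustration only.
struct PortFactory
{
    virtual RTT::base::PortInterface* createInputPort( const std::string& name ) const = 0;
    virtual RTT::base::PortInterface* createOutputPort( const std::string& name ) const = 0;
    virtual ~PortFactory() {}
};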

> The idea here is the following:
> - you can integrate different multiplexing strategies very easily. For
> instance: allow only one input, allow multiple inputs, multiplex FIFO,
> multiplex with priorities, ...
> - you keep the dataflow layer simple

Or filter functions to apply to values read on certain ports...

Regards
Markus

Data Flow 2.0 Example

On Thursday 20 August 2009 10:55:57 Markus Klotzbuecher wrote:
> On Thu, Aug 20, 2009 at 10:07:22AM +0200, Sylvain Joyeux wrote:
> > On Wednesday 19 August 2009 18:09:23 you wrote:
> > > Putting the mechanism in a component instead of in the connection
> > > layer is an alternative. But even then, we don't want to form
> > > connections during run/real-time, so the connections must be setup on
> > > beforehand anyway and the multiplexer can only be responsible for the
> > > switching, just like the executive now does. I'm very against
> > > 'resource locking' with this regard, because a crashed application
> > > can't that easily free a resource (unless there is help from the
> > > underlying OS, which would tie us to certain platforms).
> >
> > Yes, that's the point of the multiplexer component ! You set up the
> > connections at deployment time and let it manage the multiplexing. What I
> > meant was:
> > - the multiplexer creates N ports of given type, based either on its
> > property configuration or on a method call
> > - it is triggered whenever one of these ports has new data
> > - it pushes the data from the input ports to the output ports
>
> I agree this is the way to go.
>
> > The challenge is to implement whatever is needed in RTT's type system to
> > make this possible *in a completely generic way*. Most of it is there, I
> > believe, but there is probably a bit of glue needed.
>
> Hmm, so to be more precise the problem is to be able to create ports
> of arbitrary types during runtime, yes? So for this a toolkit would
> need to provide some kind of factory method for creating ports of its
> type. This is something that would be very useful for (lua) scripting
> too!

Creating ports of arbitrary type during runtime already exists, as I needed it
for my logging component :P. What might be missing is a realtime-friendly
forwarding operator, which pulls from an input port and pushes whatever it
has read to an output port. Maybe it exists, but I don't know.
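
For a known type T the forwarding step itself is only a few lines; this is
a sketch assuming the read()/write() semantics of the example at the top of
the thread (the hard part is the type-erased, toolkit-driven version):

#include <rtt/Port.hpp>
using namespace RTT;

// Sketch of a typed forwarding step: drain whatever is readable on the
// input and push it to the output. Real-time safety then only depends
// on T not doing dynamic allocation on copy.
template<class T>
void forward( InputPort<T>& in, OutputPort<T>& out )
{
    T sample;
    while ( in.read( sample ) )   // returns false once nothing is left
        out.write( sample );
}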

> > The idea here is the following:
> > - you can integrate different multiplexing strategies very easily. For
> > instance: allow only one input, allow multiple inputs, multiplex FIFO,
> > multiplex with priorities, ...
> > - you keep the dataflow layer simple
>
> Or filter functions to apply to values read on certain ports...
Exactly. Well ... in general: you would be very flexible without complexifying
the RTT too much.

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 11:03:06AM +0200, Sylvain Joyeux wrote:
> On Thursday 20 August 2009 10:55:57 Markus Klotzbuecher wrote:
> > On Thu, Aug 20, 2009 at 10:07:22AM +0200, Sylvain Joyeux wrote:

> > > The challenge is to implement whatever is needed in RTT's type system to
> > > make this possible *in a completely generic way*. Most of it is there, I
> > > believe, but there is probably a bit of glue needed.
> >
> > Hmm, so to be more precise the problem is to be able to create ports
> > of arbitrary types during runtime, yes? So for this a toolkit would
> > need to provide some kind of factory method for creating ports of its
> > type. This is something that would be very useful for (lua) scripting
> > too!
>
> Creating ports of arbitrary type during runtime already exists, as I needed it
> for my logging component :P. What might be missing is a realtime friendly

Oh that's nice :-)! Is this possible for the basic types by default?
Can you point me to the code?

> forwarding operator, which pulls from an input port and pushes whatever it
> has read to an output port. Maybe it exists, but I don't know.

It doesn't seem like a hard thing to do.

> > > The idea here is the following:
> > > - you can integrate different multiplexing strategies very easily. For
> > > instance: allow only one input, allow multiple inputs, multiplex FIFO,
> > > multiplex with priorities, ...
> > > - you keep the dataflow layer simple
> >
> > Or filter functions to apply to values read on certain ports...
> Exactly. Well ... in general: you would be very flexible without complexifying
> the RTT too much.

Yes!

Markus

Data Flow 2.0 Example

> > Creating ports of arbitrary type during runtime already exists, as I
> > needed it for my logging component :P. What might be missing is a
> > realtime friendly
>
> Oh that's nice :-)! Is this possible for the basic types by default?
> Can you point me to the code?
It requires toolkit support.

See TypeInfo::inputPort and TypeInfo::outputPort (I don't know where they
live after Peter's refactoring). They are reimplemented in TemplateTypeInfo.
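
A rough usage sketch: the inputPort()/outputPort() calls are the ones named
above, while the header paths, namespaces, repository lookup and return type
are assumptions that may not match the tree before or after the refactoring:

#include <rtt/types/TypeInfoRepository.hpp>  // header paths and namespaces
#include <rtt/types/TypeInfo.hpp>            // are guesses; they may differ
#include <rtt/base/PortInterface.hpp>        // across the refactoring
#include <string>
using namespace RTT;
using namespace RTT::types;

// Look up the TypeInfo registered for a type known only by name and ask it
// to build an input port for it.
base::PortInterface* makeInputPort( const std::string& type_name,
                                    const std::string& port_name )
{
    TypeInfo* ti = TypeInfoRepository::Instance()->type( type_name );
    if ( !ti )
        return 0;                       // no toolkit registered this type
    return ti->inputPort( port_name );  // reimplemented in TemplateTypeInfo
}

// e.g.: base::PortInterface* p = makeInputPort( "double", "DynamicInput" );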