Data Flow 2.0 Example

This mail is to inform you of my impressions of the new data flow
framework. First of all, the concept behind the new structure is given
in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

The idea is that *outputs* are send-and-forget, while *inputs* specify
a 'policy': e.g. 'I want to read all samples, so buffer the input', or
'I want lock-based protection', or ...
The policy is specified in a 'ConnPolicy' object, which you can give to
the input to use as a default, or override when the ports are connected
during deployment.

This is the basic use case of the new code:

#include <rtt/Port.hpp>
using namespace RTT;

// Component A:
OutputPort<double> a_output("MyOutput");
//...
double x = ...;
a_output.write( x );

// Component B buffers data produced by A (default buf size==20):
bool init_connection = true; // read last written value after connection
bool pull = true;            // fetch data directly from output port during read
InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
    internal::ConnPolicy::LOCK_FREE, init_connection, pull));
//...
double x;
while ( b_input.read( x ) ) {
   // process sample x...
}
// read() returned false: buffer empty

// Component C gets the most recent data produced by A:
bool init_connection = true; // read last written value after connection
bool pull = true;            // fetch data directly from output port during read
InputPort<double> c_input("MyInput",
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
    init_connection, pull));
//...
double x;
if ( c_input.read( x ) ) {
   // use last value of x...
} else {
   // no new data
}

// Finally connect some ports. The order/direction of connecting does
// not matter anymore; it will always do as expected!
a_output.connectTo( b_input ); // or: b_input.connectTo( a_output );
a_output.connectTo( c_input ); // or the other way around

// Change buffer size for B by giving a policy during connectTo:
b_input.disconnect();
b_input.connectTo( a_output, internal::ConnPolicy::buffer(20,
    internal::ConnPolicy::LOCK_FREE, init_connection, pull));

Note: ConnPolicy will probably move to RTT or RTT::base.

Since each InputPort takes a default policy (which is type = DATA,
lock_policy = LOCK_FREE, init=false, pull=false), we can keep using the
old DeploymentComponent + XML scripts. The 'only' addition necessary
is to extend the XML elements such that a connection policy can be
defined as well, to override the default. I propose to update the
deployment manual such that the connection semantics are clearer. What
we now call a 'connection', I propose to call a 'Topic', analogous
to ROS. So you'd define an OutputPort -> Topic and Topic ->
InputPort mapping in your XML file. We could easily generalize Topic
to also allow for port names, such that in simple setups you just set
OutputPort -> InputPort. I'm not even proposing a new XML format here,
because when we write:

RTT 1.0, from DeploymentComponent example:

    <struct name="Ports"  type="PropertyBag">
      <simple name="a_output_port"
type="string"><value>AConnection</value></simple>
      <simple name="a_input_port"
type="string"><value>BConnection</value></simple>
    </struct>

We actually mean to write (what's in a name):
RTT 2.0

    <struct name="Topics"  type="PropertyBag">
      <simple name="a_output_port" type="string"><value>ATopic</value></simple>
      <simple name="b_input_port" type="string"><value>BTopic</value></simple>
    </struct>

In 2.0, you need to take care that exactly one port writes a given
topic (so one OutputPort) and that the other ports are all InputPorts. If
this is not the case, the deployer refuses to set up the connections.
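
For illustration, such an override might look like this in the XML; the
'ConnPolicy' struct below and its field names are purely hypothetical,
not a settled format:

    <struct name="Topics"  type="PropertyBag">
      <simple name="b_input_port" type="string"><value>BTopic</value></simple>
      <!-- hypothetical per-topic policy override -->
      <struct name="BTopic_policy" type="ConnPolicy">
        <simple name="type" type="string"><value>BUFFER</value></simple>
        <simple name="size" type="short"><value>20</value></simple>
        <simple name="lock_policy" type="string"><value>LOCK_FREE</value></simple>
      </struct>
    </struct>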

So much for deployment. The whole mechanism is reported to work
transparently over CORBA, but I still need to test that claim
personally.

As before, the receiver can subscribe an RTT::Event to receive
notifications when a new data sample is ready. The scripting interface
of ports is only 'bool read( sample )' or 'bool write( sample )'.
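
For example, a component could be woken up on new data roughly as
follows. This is a sketch based on the addEventPort API documented for
RTT 1.x (see the component builder's manual); the exact 2.0 signature
may differ:

#include <rtt/TaskContext.hpp>
#include <rtt/Port.hpp>
#include <string>

class Listener : public RTT::TaskContext {
  RTT::InputPort<double> in;
public:
  Listener(const std::string& name)
    : RTT::TaskContext(name), in("MyInput")
  {
    // Registering 'in' as an *event* port wakes this component up
    // whenever new data arrives, instead of relying on periodic polling.
    this->ports()->addEventPort( in );
  }
  void updateHook() {
    double x;
    if ( in.read( x ) ) {
      // react to the new sample x...
    }
  }
};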

Peter

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

OK, my brain probably worked while it was supposed to sleep, so I actually
*do* have some kind of a solution.

Caveat: it is a limited solution, i.e. it will be in the category of "for
power users that know what they are doing".

Right now, what happens is that we create one channel per port-to-port
connection. That has the tremendous advantage of simplifying the
implementation and reducing the number of assumptions (in particular, there is
at most one writer and one reader per data channel), while keeping the door
open for some optimizations.

I don't want to break that model. During our discussions, it actually proved
successful in solving some of the problems Peter had (like: how many
concurrent threads can access a lock-free data structure in a data flow
connection? Always 2!)

Now, it is actually possible to have a MO/SI model, by allowing an input port
to have multiple incoming channels, and having InputPort::read round-robin on
those channels. As an added nicety, one can listen to the "new data" event and
access only the port for which we have an indication that new data can be
available. Implementing that would require very little added code, since the
management of multiple channels is already present in OutputPort.
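
In code, such a read could look roughly like the sketch below.
ChannelReader is a minimal stand-in for an incoming channel; the real
RTT channel types and names differ:

#include <cstddef>
#include <vector>

template<typename T>
struct ChannelReader {
  virtual bool read(T& sample) = 0; // true if a sample was available
  virtual ~ChannelReader() {}
};

template<typename T>
bool round_robin_read(std::vector<ChannelReader<T>*>& channels,
                      std::size_t& next, T& sample)
{
  // Start where the previous read left off, so one busy channel
  // cannot starve the others.
  for (std::size_t i = 0; i < channels.size(); ++i) {
    std::size_t idx = (next + i) % channels.size();
    if (channels[idx]->read(sample)) {
      next = (idx + 1) % channels.size();
      return true;
    }
  }
  return false; // no channel had data
}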

However, that is not highly generic: as I already stated, a generic
implementation would require setting up a policy to manage the
multiplexing. Still, it offers people "in the know" a simple
way of doing MO/SI. More complex schemes would still have to rely on a
multiplexing component.

What are your thoughts? Would such a behaviour be acceptable if flagged as
"advanced, use at your own risk"?

NB: I won't have time to switch to Peter's RTT 2.0 branch soon, so either he
will have to do it, or you will have to wait ;-)

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Fri, Aug 21, 2009 at 10:07, Sylvain Joyeux<sylvain [dot] joyeux [..] ...> wrote:
> OK, my brain probably worked while it was supposed to sleep, so I actually
> *do* have some kind of a solution.
>
> Caveat: it is a limited solution, i.e. it will be in the category of "for
> power users that know what they are doing".
>
> Right now, what happens is that we create one channel per port-to-port
> connections. That has the tremendous advantage of simplifying the
> implementation, reducing the number of assumptions (in particular, there is at
> most one writer and one reader per data channel), while keeping the door open
> for some optimizations.
>
> I don't want to break that model. During our discussions, it actually showed
> to be successful in solving some of the problems Peter had (like: how many
> concurrent threads can access a lock-free data structure in a data flow
> connection ? Always 2 !)

Ack.

>
> Now, it is actually possible to have a MO/SI model, by allowing an input port
> to have multiple incoming channels, and having InputPort::read round-robin on
> those channels. As an added nicety, one can listen to the "new data" event and
> access only the port for which we have an indication that new data can be
> available. Implementing that would require very little added code, since the
> management of multiple channels is already present in OutputPort.

Well, polling plus keeping a pointer to the last-read channel, such
that we always try that one first, and only start polling if that
channel turned out 'empty'. This empty detection is problematic for
shared data connections (as opposed to buffered ones), because once they
are written, they always show a valid value. We might need to add that
once an input port is read, the sample is consumed, and a next read
will return false (no new data).
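
That refinement could look like this, reusing the ChannelReader
stand-in from the round-robin sketch earlier in this thread (again
illustrative, not the actual RTT internals):

template<typename T>
bool cached_poll_read(std::vector<ChannelReader<T>*>& channels,
                      std::size_t& last, T& sample)
{
  // Fast path: the channel that delivered data last time is likely
  // to have the next sample as well.
  if (!channels.empty() && channels[last]->read(sample))
    return true;
  // Slow path: poll the remaining channels.
  for (std::size_t i = 0; i < channels.size(); ++i) {
    if (i == last) continue; // already tried above
    if (channels[i]->read(sample)) {
      last = i;              // remember for the next call
      return true;
    }
  }
  return false; // every channel was 'empty' (or already consumed)
}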

>
> However, that is not highly-generic: as I already stated, the generic
> implementation would require the set up of a policy to manage the
> multiplexing. Now, it actually offers the people "in the know" with a simple
> way of doing MO/SI. More complex scheme would still have to rely on a
> multiplexing component.

We all agree here.

>
> What are your thoughts ? Would such a behaviour be acceptable if flagged as
> "advanced, use at your own risks" ?

I like it. It keeps the simplicity/robustness of the data flow
implementation, while offering backwards compatibility. Even more,
the use case is so common that I'm reluctant to force complexity into
each application by making the user add another component to the
data flow each time it occurs. I know that the purists have already hit
their reply button to tell me that all policy should be decided in
components and not in the infrastructure, but unfortunately for them,
I value the word of a single user above the decision of a
committee. If you want something other than what the data flow does,
you'll have to start adding components, and I'm sure they will be
contributed once there's a need for them.

Peter

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:

> > Now, it is actually possible to have a MO/SI model, by allowing an input port
> > to have multiple incoming channels, and having InputPort::read round-robin on
> > those channels. As an added nicety, one can listen to the "new data" event and
> > access only the port for which we have an indication that new data can be
> > available. Implementing that would require very little added code, since the
> > management of multiple channels is already present in OutputPort.
>
> Well, a 'polling' + keeping a pointer to the last read channel, such
> that we try that one always first, and only start polling if that
> channel turned out 'empty'. This empty detection is problematic for
> shared data connections (opposed to buffered), because once they are
> written, they always show a valid value. We might need to add that
> once an input port is read, the sample is consumed, and a next read
> will return false (no new data).

Is there really a use case for multiple incoming but unbuffered
connections? It seems to me that the result would be quite arbitrary.

Regards
Markus

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Mon, Aug 24, 2009 at 11:19:58AM +0200, Markus Klotzbuecher wrote:
> On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:
>
> > > Now, it is actually possible to have a MO/SI model, by allowing an input port
> > > to have multiple incoming channels, and having InputPort::read round-robin on
> > > those channels. As an added nicety, one can listen to the "new data" event and
> > > access only the port for which we have an indication that new data can be
> > > available. Implementing that would require very little added code, since the
> > > management of multiple channels is already present in OutputPort.
> >
> > Well, a 'polling' + keeping a pointer to the last read channel, such
> > that we try that one always first, and only start polling if that
> > channel turned out 'empty'. This empty detection is problematic for
> > shared data connections (opposed to buffered), because once they are
> > written, they always show a valid value. We might need to add that
> > once an input port is read, the sample is consumed, and a next read
> > will return false (no new data).
>
> Is there really a usecase for multiple incoming but unbuffered
> connections? It seems to me that the result would be quite arbitrary.

Of course there is. If you think at a broader scope, there could
be a coordination component controlling the individual components such
that the results are not arbitrary at all.

In fact, this is a good example of explicit vs. implicit coordination.

Regards
Markus

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Aug 24, 2009, at 05:33 , Markus Klotzbuecher wrote:

> On Mon, Aug 24, 2009 at 11:19:58AM +0200, Markus Klotzbuecher wrote:
>> On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:
>>
>>>> Now, it is actually possible to have a MO/SI model, by allowing
>>>> an input port
>>>> to have multiple incoming channels, and having InputPort::read
>>>> round-robin on
>>>> those channels. As an added nicety, one can listen to the "new
>>>> data" event and
>>>> access only the port for which we have an indication that new
>>>> data can be
>>>> available. Implementing that would require very little added
>>>> code, since the
>>>> management of multiple channels is already present in OutputPort.
>>>
>>> Well, a 'polling' + keeping a pointer to the last read channel, such
>>> that we try that one always first, and only start polling if that
>>> channel turned out 'empty'. This empty detection is problematic for
>>> shared data connections (opposed to buffered), because once they
>>> are
>>> written, they always show a valid value. We might need to add that
>>> once an input port is read, the sample is consumed, and a next read
>>> will return false (no new data).
>>
>> Is there really a usecase for multiple incoming but unbuffered
>> connections? It seems to me that the result would be quite arbitrary.
>
> Of course there is. If you think at a more broader scope there could
> be a coordination component controlling the individual components such
> that the results are not arbitrary at all.
>
> In fact this is a good example of explicit vs. implicit coordination.

This is _exactly_ the situation we have in our projects. Multiple
components with unbuffered output connections, to a single input
connection on another component. A coordination component ensures that
only one of the input components is running at a time, but they are
all connected.

Here, we want the latest data value available. No more, no less.

Otherwise, Markus is correct. Having more than one input component
running simultaneously would be arbitrary and give nonsense output data.

Stephen

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Mon, 24 Aug 2009, S Roderick wrote:

> On Aug 24, 2009, at 05:33 , Markus Klotzbuecher wrote:
>
>> On Mon, Aug 24, 2009 at 11:19:58AM +0200, Markus Klotzbuecher wrote:
>>> On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:
>>>
>>>>> Now, it is actually possible to have a MO/SI model, by allowing
>>>>> an input port
>>>>> to have multiple incoming channels, and having InputPort::read
>>>>> round-robin on
>>>>> those channels. As an added nicety, one can listen to the "new
>>>>> data" event and
>>>>> access only the port for which we have an indication that new
>>>>> data can be
>>>>> available. Implementing that would require very little added
>>>>> code, since the
>>>>> management of multiple channels is already present in OutputPort.
>>>>
>>>> Well, a 'polling' + keeping a pointer to the last read channel, such
>>>> that we try that one always first, and only start polling if that
>>>> channel turned out 'empty'. This empty detection is problematic for
>>>> shared data connections (opposed to buffered), because once they
>>>> are
>>>> written, they always show a valid value. We might need to add that
>>>> once an input port is read, the sample is consumed, and a next read
>>>> will return false (no new data).
>>>
>>> Is there really a usecase for multiple incoming but unbuffered
>>> connections? It seems to me that the result would be quite arbitrary.
>>
>> Of course there is. If you think at a more broader scope there could
>> be a coordination component controlling the individual components such
>> that the results are not arbitrary at all.
>>
>> In fact this is a good example of explicit vs. implicit coordination.
>
> This is _exactly_ the situation we have in our projects. Multiple
> components with unbuffered output connections, to a single input
> connection on another component. A coordination component ensures that
> only one of the input components is running at a time, but they are
> all connected.
>
> Here, we want the latest data value available. No more, no less.
>
> Otherwise, Markus is correct. Having more than one input component
> running simultaneously would be arbitrary and give nonsense output data.

Indeed... So, the conclusion I draw from this (sub)discussion is the
following: the _coordinated_ multi-writer use case is so special that it
does not deserve its own feature in the Data Ports part of RTT. (The
Coordinator will (have to) know about all its "data providers", and
make/delete the connections to them explicitly. So, there is no need to
"help him out" by this specific data port policy implementation.)

Herman

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:
> > Now, it is actually possible to have a MO/SI model, by allowing an input port
> > to have multiple incoming channels, and having InputPort::read round-robin on
> > those channels. As an added nicety, one can listen to the "new data" event and
> > access only the port for which we have an indication that new data can be
> > available. Implementing that would require very little added code, since the
> > management of multiple channels is already present in OutputPort.
>
> Well, a 'polling' + keeping a pointer to the last read channel, such
> that we try that one always first, and only start polling if that
> channel turned out 'empty'. This empty detection is problematic for
> shared data connections (opposed to buffered), because once they are
> written, they always show a valid value. We might need to add that
> once an input port is read, the sample is consumed, and a next read
> will return false (no new data).

I don't like the idea of read() returning false on an already initialized data
connection. If you want a connection telling you if it has been written since
last read(), use a buffer. Maybe having read() return a tri-state: NO_SAMPLE,
UPDATED_SAMPLE, OLD_SAMPLE with NO_SAMPLE being false ?
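
For concreteness, such a return type might look like the sketch below
(the enumerator names are Sylvain's; the type name and its integration
into read() are hypothetical):

enum ReadStatus { NO_SAMPLE = 0, OLD_SAMPLE, UPDATED_SAMPLE };

// Since NO_SAMPLE is 0, existing boolean tests keep working:
// 'if ( c_input.read( x ) )' stays true for both OLD_SAMPLE and
// UPDATED_SAMPLE, and is false only when no sample was ever written.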

Sylvain

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Sun, 23 Aug 2009, Sylvain Joyeux wrote:

> On Sun, Aug 23, 2009 at 03:13:19PM +0200, Peter Soetens wrote:
>>> Now, it is actually possible to have a MO/SI model, by allowing an input port
>>> to have multiple incoming channels, and having InputPort::read round-robin on
>>> those channels. As an added nicety, one can listen to the "new data" event and
>>> access only the port for which we have an indication that new data can be
>>> available. Implementing that would require very little added code, since the
>>> management of multiple channels is already present in OutputPort.
>>
>> Well, a 'polling' + keeping a pointer to the last read channel, such
>> that we try that one always first, and only start polling if that
>> channel turned out 'empty'. This empty detection is problematic for
>> shared data connections (opposed to buffered), because once they are
>> written, they always show a valid value. We might need to add that
>> once an input port is read, the sample is consumed, and a next read
>> will return false (no new data).
>
> I don't like the idea of read() returning false on an already initialized data
> connection. If you want a connection telling you if it has been written since
> last read(), use a buffer. Maybe having read() return a tri-state: NO_SAMPLE,
> UPDATED_SAMPLE, OLD_SAMPLE with NO_SAMPLE being false ?

I think RTT should only provide the simplest and easiest-to-implement
policy: each reader gets the last value that was written, and new writes
overwrite that value.

More complex policies belong to dedicated port components, each providing
one (or more) of those policies.

Herman

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Fri, 21 Aug 2009, Sylvain Joyeux wrote:

> OK, my brain probably worked while it was supposed to sleep, so I actually
> *do* have some kind of a solution.
>
> Caveat: it is a limited solution, i.e. it will be in the category of "for
> power users that know what they are doing".

...as is all of RTT, isn't it? :-)

> Right now, what happens is that we create one channel per port-to-port
> connections. That has the tremendous advantage of simplifying the
> implementation, reducing the number of assumptions (in particular, there is at
> most one writer and one reader per data channel), while keeping the door open
> for some optimizations.
>
> I don't want to break that model. During our discussions, it actually showed
> to be successful in solving some of the problems Peter had (like: how many
> concurrent threads can access a lock-free data structure in a data flow
> connection ? Always 2 !)

I agree with this approach. RTT should offer the simplest, most
deterministic and realtime-ready version. Any additional complexity is the
responsibility of: (i) external middleware that is made RTT-interoperable
in one way or another, or (ii) specialized communication _Components_.

> Now, it is actually possible to have a MO/SI model, by allowing an input port
> to have multiple incoming channels, and having InputPort::read round-robin on
> those channels.

Why do you suggest this particular round-robin "scheduling" policy?

> As an added nicety, one can listen to the "new data" event and
> access only the port for which we have an indication that new data can be
> available.
I think this event-driven reading is not an "added nicety", but more
fundamental than round-robin scheduling of reads.

> Implementing that would require very little added code, since the
> management of multiple channels is already present in OutputPort.

> However, that is not highly-generic: as I already stated, the generic
> implementation would require the set up of a policy to manage the
> multiplexing. Now, it actually offers the people "in the know" with a simple
> way of doing MO/SI. More complex scheme would still have to rely on a
> multiplexing component.

Agreed. But Round Robin _is_ already a multiplexing policy.

Herman

> What are your thoughts ? Would such a behaviour be acceptable if flagged as
> "advanced, use at your own risks" ?
>
> NB: I won't have time to switch to Peter's RTT 2.0 branch soon, so either he
> will have to do it, or you will have to wait ;-)

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

> > As an added nicety, one can listen to the "new data" event and
> > access only the port for which we have an indication that new data can be
> > available.
>
> I think this event-driven reading is not an "added nicety", but more
> fundamental than round-robin scheduling of reads.

Wrong terminology, sorry. It is not a round-robin, but it is basically trying
each incoming channel one after the other, starting from the first, until one
has data. It is not a "proposed strategy", it is basically the only one that
can be implemented without bloating the data flow implementation.

The only "selectable policies" we could integrate would deal with the order in
which the ports are read(), i.e. round-robin, trying first last port which had
data, ... Reading the *samples* in a FIFO manner could not be done for
instance.

PROPOSAL: multi-output/single-input behaviour [was: Data Flow 2.0 Example]

On Fri, 21 Aug 2009, Sylvain Joyeux wrote:

>>> As an added nicety, one can listen to the "new data" event and
>>> access only the port for which we have an indication that new data can be
>>> available.
>>
>> I think this event-driven reading is not an "added nicety", but more
>> fundamental than round-robin scheduling of reads.
>
> Wrong terminology, sorry. It is not a round-robin, but it is basically trying
> each incoming channel one after the other, starting from the first, until one
> has data.

Ok, it's _polling_ versus _events_, which is indeed the classical trade-off
question :-) But RTT should not make this trade-off itself; it should offer
_both_ (but not more!), since both have lots of use cases in the scope of
Orocos!

> It is not a "proposed strategy", it is basically the only one that
> can be implemented without bloating the data flow implementation.
I agree.

> The only "selectable policies" we could integrate would deal with the order in
> which the ports are read(), i.e. round-robin, trying first last port which had
> data, ... Reading the *samples* in a FIFO manner could not be done for
> instance.
I fully agree.

Herman

Re: Data Flow 2.0 Example

sspr wrote:

This mail is to inform you of my impressions of the new data flow
framework. First of all, the concept behind the new structure is given
in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

The idea is that *outputs* are send and forget, while *inputs* specify
a 'policy': e.g. 'I want to read all samples -> so buffer the input'
or: ' I want lock-based protection' or:...
The policy is specified in a 'ConnPolicy' object which you can give to
the input, to use as a default, or override during the connection of
the ports during deployment.

cut...

Peter

Thank you, Peter and Sylvain. I'll try to do some tests as soon as I can.

I believe that some kind of multiplexers are indeed a very good solution for making many-to-one connections. Being a person with a "signals" point-of-view, I have always preferred explicit components for merging outputs, hence we have always used 1.0 in that way.
My idea has been to add an optional feature to ports to make them cloneable (or duplicatable): initially, such a port is not instantiated, but every time that one makes a new connection involving that port, a new instance of it is created. People knowing Simulink or 20-sim will recognize it from e.g. a summer.

If I understand correctly, one-to-many connections are still possible in 2.0, right??

I understand and respect the decision to make outputs "send and forget". However, for our systems this seems inconvenient: one output is typically connected to many inputs that do not buffer, and hence the data is copied many times. I haven't checked the code, but would it be doable to implement a policy for outputs to hold data and for unbuffered inputs to get data from a connected output?

Finally: one more vote _against_ "Topic"; I think we have a very nice signal-oriented data flow now, and "Connection" is fine. Why should it change?? The fact that ROS uses Topic is really of no interest for me; I find Topic counter-intuitive, and Connection very much to the point.

Cheers, Theo.

Data Flow 2.0 Example

On Thu, 20 Aug 2009, t [dot] j [dot] a [dot] devries [..] ... wrote:

> sspr wrote:
> This mail is to inform you of my impressions of the new data flow
> framework. First of all, the concept behind the new structure is given
> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow
>
> The idea is that *outputs* are send and forget, while *inputs* specify
> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
> or: ' I want lock-based protection' or:...
> The policy is specified in a 'ConnPolicy' object which you can give to
> the input, to use as a default, or override during the connection of
> the ports during deployment.
>
> cut...
>
> Peter
>

>
> Thank you, Peter and Sylvian. I'll try to do some tests as soon as I can.
>
> I believe that some kind of multiplexers are indeed a very good solution for making many-to-one connections. Being a person with a "signals" point-of-view, I have always preferred explicit components for merging outputs, hence we have always used 1.0 in that way.
> My idea has been to add an optional feature to ports to make them cloneable (or duplicatable): initially, such a port is not instantiated, but every time that one makes a new connection involving that port, a new instance of it is created. People knowing Simulink or 20-sim will recognize it from e.g. a summer.
>
> If I understand correctly, one-to-many connections are still possible in 2.0, right??
>
> I understand and respect the decision to make outputs "send and forget". However, for our systems this seems inconvenient: one output is typically connected to many inputs that do not buffer, and hence the data is copied many times. I haven't checked the code, but would it be doable to implement a policy for outputs to hold data and for unbuffered inputs to get data from a connected output?

My view on the ports is the following: the "Computation" part inside a
component (= your functionality) should _always_ "read and forget", as well
as "send and forget". But:
- it has to provide Quality of Service requirements in its "required" and
"provided" interfaces.
- it's the Deployer's job to check these QoS constraints and connect only
"Communication" components with matching capabilities.
- buffering, multiplexing, filtering, ... in "ports" is to be done by doing
all Communication in full-fledged components. That means such
"Communication" components have their own "Configuration" (what QoS is
needed and how can it be configured?) and "Coordination" (how to realize
buffering, multiplexing, etc.?). They also have their own reporting and
resource coordination.
A toolchain will later be able to optimize out all the "overhead code" if
a particular deployment provides opportunities to do so.

> Finally: one more vote _against_ "Topic"; I think we have a very nice
> signal-oriented data flow now, and "Connection" is fine. Why should it
> change?? The fact that ROS uses Topic is really of no interest for me; I
> find Topic counter-intuitive, and Connection very much to the point.

I fully agree.

Herman

Data Flow 2.0 Example

On Aug 19, 2009, at 08:41 , Peter Soetens wrote:

> This mail is to inform you of my impressions of the new data flow
> framework. First of all, the concept behind the new structure is given
> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow
>
> The idea is that *outputs* are send and forget, while *inputs* specify
> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
> or: ' I want lock-based protection' or:...
> The policy is specified in a 'ConnPolicy' object which you can give to
> the input, to use as a default, or override during the connection of
> the ports during deployment.
>
> This is the basic use case of the new code:
>
>

> #include <rtt/Port.hpp>
> using namespace RTT;
>
> // Component A:
> OutputPort<double> a_output("MyOutput");
> //...
> double x = ...;
> a_output.write( x );
>
> // Component B buffers data produced by A (default buf size==20):
> bool init_connection = true; // read last written value after connection
> bool pull = true;            // fetch data directly from output port during read
> InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
> //...
> double x;
> while ( b_input.read( x ) ) {
>   // process sample x...
> }
> // buffer empty
>
> // Component C gets the most recent data produced by A:
> bool init_connection = true; // read last written value after connection
> bool pull = true;            // fetch data directly from output port during read
> InputPort<double> c_input("MyInput",
> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
> init_connection, pull));
> //...
> double x;
> if ( c_input.read( x ) ) {
>   // use last value of x...
> } else {
>  // no new data
> }
>
> // Finally connect some ports. The order/direction of connecting does
> // not matter anymore,
> // it will always do as expected !
> a_output.connectTo( b_input ); // or: b_input.connectTo( a_output );
> a_output.connectTo( c_input ); // or other way around
>
> //Change buffer size for B by giving a policy during connectTo:
> b_input.disconnect();
> b_input.connectTo( a_output, internal::ConnPolicy::buffer(20,
> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
>
> 

What is the default policy? How does it differ from the current
implementation?

How will we set similar policies within the deployer? In all of our
Orocos code (tens of thousands of source lines of code) we do not
have a *single* instance of manual port connections like the above.
Now, we might be doing things wrong or the hard way, but that aside,
I'm really interested in how we'll gain access to any of the new
features from the deployer.

<snip>

> As before, the receiver can subscribe an RTT::Event to receive
> notifications when a new data sample is ready. The scripting interface
> of ports are only 'bool read( sample )' or 'bool write(sample)'.

Really?! Is there an example or some documentation on this somewhere?
Stephen

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 13:17, S Roderick<kiwi [dot] net [..] ...> wrote:
> On Aug 19, 2009, at 08:41 , Peter Soetens wrote:
>
>> This mail is to inform you of my impressions of the new data flow
>> framework. First of all, the concept behind the new structure is given
>> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow
>>
>> The idea is that *outputs* are send and forget, while *inputs* specify
>> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
>> or: ' I want lock-based protection' or:...
>> The policy is specified in a 'ConnPolicy' object which you can give to
>> the input, to use as a default, or override during the connection of
>> the ports during deployment.
>>
>> This is the basic use case of the new code:
>>
>>

>> #include <rtt/Port.hpp>
>> using namespace RTT;
>>
>> // Component A:
>> OutputPort<double> a_output("MyOutput");
>> //...
>> double x = ...;
>> a_output.write( x );
>>
>> // Component B buffers data produced by A (default buf size==20):
>> bool init_connection = true; // read last written value after connection
>> bool pull = true;            // fetch data directly from output port during read
>> InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
>> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
>> //...
>> double x;
>> while ( b_input.read( x ) ) {
>>  // process sample x...
>> }
>> // buffer empty
>>
>> // Component C gets the most recent data produced by A:
>> bool init_connection = true; // read last written value after connection
>> bool pull = true;            // fetch data directly from output port during read
>> InputPort<double> c_input("MyInput",
>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>> init_connection, pull));
>> //...
>> double x;
>> if ( c_input.read( x ) ) {
>>  // use last value of x...
>> } else {
>>  // no new data
>> }
>>
>> // Finally connect some ports. The order/direction of connecting does
>> // not matter anymore,
>> // it will always do as expected !
>> a_output.connectTo( b_input ); // or: b_input.connectTo( a_output );
>> a_output.connectTo( c_input ); // or other way around
>>
>> //Change buffer size for B by giving a policy during connectTo:
>> b_input.disconnect();
>> b_input.connectTo( a_output, internal::ConnPolicy::buffer(20,
>> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
>>
>> 

>
> What is the default policy? How does it differ from the current
> implementation?

The 1.0 implementation defaults to LOCK_FREE with init_connection+keep,
and pull depends on the direction in which the connectTo was
specified and on whether there was already an existing connection (very
confusing and inflexible). The default in 2.0 is LOCK_FREE, with the
rest false.
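
Spelled out in code (a sketch; that the one-argument constructor is
exactly equivalent is an assumption based on the description above):

InputPort<double> in_default("MyInput");
// ...should behave the same as:
InputPort<double> in_explicit("MyInput",
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               /*init_connection=*/false, /*pull=*/false));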

>
> How will we set similar policies within the deployer? In all of our Orocos
> code (ten's of thousands of source lines of code) we do not have a *single*
> instance of manual port connections like the above. Now, we might be doing
> things wrong or the hard way, but that aside, I'm really interested in how
> we'll gain access to any of the new features from the deployer.

Sorry for the confusion, connectTo is the low-level function that
connects ports, used by the deployer. The deployer XML will be
backwards compatible (using defaults for unspecified stuff, which
most closely resembles the RTT 1.x data flow), with the current exception
that you can have only one output per connection. For the new
features, the Ports struct will need to be extended to allow
specifying data/buffer and the locking policy.

>
<snip>

>
>> As before, the receiver can subscribe an RTT::Event to receive
>> notifications when a new data sample is ready. The scripting interface
>> of ports are only 'bool read( sample )' or 'bool write(sample)'.
>
> Really?! Is there an example or some documentation on this somewhere?
> Stephen

http://www.orocos.org/stable/documentation/rtt/v1.8.x/doc-xml/orocos-com...

look for 'addEventPort'.

Peter

Data Flow 2.0 Example

On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...> wrote:
> This mail is to inform you of my impressions of the new data flow
> framework. First of all, the concept behind the new structure is given
> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

Thx for the heads up!

> The idea is that *outputs* are send and forget, while *inputs* specify
> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
> or: ' I want lock-based protection' or:...
> The policy is specified in a 'ConnPolicy' object which you can give to
> the input, to use as a default, or override during the connection of
> the ports during deployment.

Don't you mean TopicPolicy? (See below ;-)

> This is the basic use case of the new code:
>
>

> #include <rtt/Port.hpp>
> using namespace RTT;
>
> // Component A:
> OutputPort<double> a_output("MyOutput");
> //...
> double x = ...;
> a_output.write( x );
>
> // Component B buffers data produced by A (default buf size==20):
> bool init_connection = true; // read last written value after connection
 
Hmm, I don't understand the semantics of init_connection...
 
> bool pull = true;            // fetch data directly from output port during read
> InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
> //...
> double x;
> while ( b_input.read( x ) ) {
>   // process sample x...
> }
> // buffer empty
 
What's the while() statement doing here?  Does it replace updateHook()?
 
> // Component C gets the most recent data produced by A:
> bool init_connection = true; // read last written value after connection
> bool pull = true;            // fetch data directly from output port during read
> InputPort<double> c_input("MyInput",
> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
> init_connection, pull));
> //...
> double x;
> if ( c_input.read( x ) ) {
>   // use last value of x...
> } else {
>  // no new data
> }
 
If I set pull to 'false' above, does that mean that c_input.read(x)
will always return true (unless disconnected or st.), and the
behaviour will be the same as the 1.x ReadPort connected to a
WritePort (i.e. if no one writes to the port, the read on the ReadPort
will keep reading the same data)?
 
> // Finally connect some ports. The order/direction of connecting does
> // not matter anymore,
> // it will always do as expected !
> a_output.connectTo( b_input ); // or: b_input.connectTo( a_output );
> a_output.connectTo( c_input ); // or other way around
>
> //Change buffer size for B by giving a policy during connectTo:
> b_input.disconnect();
> b_input.connectTo( a_output, internal::ConnPolicy::buffer(20,
> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
>
> 

> Note: ConnPolicy will probably move to RTT or RTT::base.

Makes sense.

> Since each InputPort takes a default policy (which is  type = DATA,
> lock_policy = LOCK_FREE, init=false, pull=false) we can keep using the
> old DeploymentComponent + XML scripts. The 'only' addition necessary
> is to extend the XML elements such that a connection policy can be
> defined in addition, to override the default. I propose to update the
> deployment manual such that the connection semantics are clearer. What
> we call now a 'connection', I would propose to call a 'Topic',
> analogous to ROS. So you'd define a OutputPort -> Topic and Topic ->
> InputPort mapping in your XML file. We could easily generalize Topic
> to also allow for port names, such that in simple setups, you just set
> OutputPort -> InputPort. I'm not even proposing a new XML format here,
> because when we write:

Topic???? I _really_ prefer Connection!! The word "topic" makes as
much sense to me as the word "TaskContext" ;-P

[...]
> We actually mean to write (what's in a name):

Answer to your question: "A lot".

> As before, the receiver can subscribe an RTT::Event to receive
> notifications when a new data sample is ready.

I need to check that possibility real soon!

Best regards,

Klaas

Data Flow 2.0 Example

On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...> wrote:
>> The idea is that *outputs* are send and forget, while *inputs* specify
>> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
>> or: ' I want lock-based protection' or:...
>> The policy is specified in a 'ConnPolicy' object which you can give to
>> the input, to use as a default, or override during the connection of
>> the ports during deployment.
>
> Don't you mean TopicPolicy? (See below ;-)

You can't imagine the amount of regret I'm suffering by starting this topic :-)

>
>> This is the basic use case of the new code:
>>
>>

>> #include <rtt/Port.hpp>
>> using namespace RTT;
>>
>> // Component A:
>> OutputPort<double> a_output("MyOutput");
>> //...
>> double x = ...;
>> a_output.write( x );
>>
>> // Component B buffers data produced by A (default buf size==20):
>> bool init_connection = true; // read last written value after connection
>
> Hmm, I don't understand the semantics of init_connection...
 
The point is this: when a port is connected, it contains no data (no
one ever wrote to its connection). So a read() would return false. In
order to get the most recent data sample from before the connection took
place, the output port needs to set 'keep_last_written_value' and the
input port needs to set 'init_connection'. If both match, the newly
connected input port will be able to read() the last sample once. I
wonder if 'keep_last_written_value' shouldn't default to 'true'. ROS
recently put a similar mechanism in place.
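
In code, the handshake would look roughly like this, building on the
example at the top of the thread. Passing 'keep_last_written_value' as
an OutputPort constructor argument is an assumption for illustration;
the real API may differ:

// Output side: keep the last written sample around for late joiners.
OutputPort<double> out("MyOutput", /*keep_last_written_value=*/true);
out.write( 3.14 ); // written before any connection exists

// Input side: ask for that last value when the connection is made.
bool init_connection = true;
InputPort<double> in("MyInput",
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               init_connection, /*pull=*/false));

out.connectTo( in );
double x;
if ( in.read( x ) ) {
  // succeeds once: x == 3.14, the sample kept from before the connection
}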
 
>
>> bool pull = true;            // fetch data directly from output port during read
>> InputPort<double> b_input("MyInput", internal::ConnPolicy::buffer(20,
>> internal::ConnPolicy::LOCK_FREE, init_connection, pull));
>> //...
>> double x;
>> while ( b_input.read( x ) ) {
>>   // process sample x...
>> }
>> // buffer empty
>
> What's the while() statement doing here?  Does it replace updateHook()?
 
The example is written without a component, so it contains the code
you would put in updateHook().
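
A minimal component shape for that code might be the following sketch
(the boilerplate is illustrative, and port-registration details vary
between RTT versions):

class Consumer : public RTT::TaskContext {
  RTT::InputPort<double> b_input;
public:
  Consumer()
    : RTT::TaskContext("Consumer"),
      b_input("MyInput", RTT::internal::ConnPolicy::buffer(20,
          RTT::internal::ConnPolicy::LOCK_FREE, true, true))
  {
    this->ports()->addPort( b_input ); // or addEventPort, see above
  }
  void updateHook() {
    // Drain whatever arrived since the last trigger.
    double x;
    while ( b_input.read( x ) ) {
      // process sample x...
    }
  }
};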
 
>
>> // Component C gets the most recent data produced by A:
>> bool init_connection = true; // read last written value after connection
>> bool pull = true;            // fetch data directly from output port during read
>> InputPort<double> c_input("MyInput",
>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>> init_connection, pull));
>> //...
>> double x;
>> if ( c_input.read( x ) ) {
>>   // use last value of x...
>> } else {
>>  // no new data
>> }
>
> If I set pull to 'false' above, does that mean that c_input.read(x)
> will always return true (unless disconnected or st.), and the
> behaviour will be the same of the 1.x ReadPort connected to a
> WritePort (i.e. if no one writes to the port, the read on the readPort
> will keep reading the same data)?
 
No. Yes. pull dictates whether the storage is at your input side
(pull=false) or at the output side (pull=true). This has influence in
networked environments.
Data ports always read the last sample, unless the channel is cleared
(I'd like to see how Sylvain uses this feature himself). Buffer ports
return data if available. Reading on a never-written connection always
returns false (unless the init+keep trick is done).
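
For illustration, the two variants differ only in the pull flag, using
the ConnPolicy factory from the example above:

// pull=false: samples are pushed to and stored at the input side;
// read() is then a purely local operation.
internal::ConnPolicy local_store =
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               /*init_connection=*/false, /*pull=*/false);

// pull=true: the sample stays at the output side and read() fetches it;
// this matters mainly when the connection crosses a network (CORBA).
internal::ConnPolicy fetch_on_read =
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               /*init_connection=*/false, /*pull=*/true);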
 
Peter

Data Flow 2.0 Example

On Thu, 20 Aug 2009, Peter Soetens wrote:
> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...> wrote:

[...]
>>> // Component C gets the most recent data produced by A:
>>> bool init_connection = true; // read last written value after connection
>>> bool pull = true; // fetch data directly from output port during read
>>> InputPort<double> c_input("MyInput",
>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>> init_connection, pull));
>>> //...
>>> double x;
>>> if ( c_input.read( x ) ) {
>>>   // use last value of x...
>>> } else {
>>>   // no new data
>>> }
>>
>> If I set pull to 'false' above, does that mean that c_input.read(x)
>> will always return true (unless disconnected or st.), and the
>> behaviour will be the same of the 1.x ReadPort connected to a
>> WritePort (i.e. if no one writes to the port, the read on the readPort
>> will keep reading the same data)?
>
> No. Yes. pull dictates if the storage is at your input side
> (pull=false) or at the output side (pull=true). This has influence in
> networked environments.

Do you mean _only_ in networked environments? Maybe I don't understand exactly what is meant by "input side" and "output side" above: Do they refer to the input and output port? And if yes, are the "pull's" exclusive (i.e. one can only set it either for the input port, or for the output port)?

> Data ports always read the last sample, unless the channel is cleared
> (I'd like to see how Sylvain uses this feature himself). Buffer ports
> return data if available. Reading on a never-written connection always
> returns false (unless the init+keep trick is done).

ACK.

Thx,

Klaas

Data Flow 2.0 Example

Picking this thread up again...

On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>
>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>> wrote:
>>>
>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>> wrote:
>
> [...]
>>>>
>>>> // Component C gets the most recent data produced by A:
>>>> bool init_connection = true; // read last written value after connection
>>>> bool pull = true; // fetch data directly from output port during read
>>>> InputPort<double> c_input("MyInput",
>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>> init_connection, pull));
>>>> //...
>>>> double x;
>>>> if ( c_input.read( x ) ) {
>>>>   // use last value of x...
>>>> } else {
>>>>   // no new data
>>>> }
>>>
>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>> will always return true (unless disconnected or st.), and the
>>> behaviour will be the same of the 1.x ReadPort connected to a
>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>> will keep reading the same data)?
>>
>> No. Yes. pull dictates if the storage is at your input side
>> (pull=false) or at the output side (pull=true). This has influence in
>> networked environments.
>
> Do you mean _only_ in networked environments?  Maybe I don't understand
> exactly what is meant by "input side" and "output side" above: Do they refer
> to the input and output port?

Yes.

> And if yes, are the "pull's" exclusive (i.e.
> one can only set it either for the input port, or for the output port)?

pull vs push is a connection property (not a port property) and has
effects in any environment. In a push, all written data is pushed to
the input port's buffer. So if the input port is across the network,
the push goes over the network, and the producing component 'feels'
this in its latencies. In a pull, all written data is stored in the
output port's buffer, which is local to the output. Now the input
fetches the data, and if the connection is across a network link, the
input will 'feel' this in its latencies. push and pull are exclusive
in the current architecture. So if two real-time components wish to
communicate over a CORBA link, there is a problem. I proposed to fix
this with an extra thread in the proxy, which in practice means that
two buffers are required (output side and input side) and that the
non-real-time proxy thread will do the high-latency part.
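
To illustrate with the ConnPolicy calls from the original example
(a_output, b_input and c_input as declared there):

bool init = false;
bool pull = false; // push: storage at the input side; the producer pays the transport cost
b_input.connectTo( a_output,
   internal::ConnPolicy::buffer(20, internal::ConnPolicy::LOCK_FREE, init, pull) );

pull = true;       // pull: storage at the output side; the reader pays the transport cost
c_input.connectTo( a_output,
   internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE, init, pull) );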

Peter

Data Flow 2.0 Example

On Mon, 31 Aug 2009, Peter Soetens wrote:
> Picking this thread up again...

me too ;-)

> On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>
>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>>> wrote:
>>>>
>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>>> wrote:
>>
>> [...]
>>>>>
>>>>> // Component C gets the most recent data produced by A:
>>>>> bool init_connection = true; // read last written value after connection
>>>>> bool pull = true; // fetch data directly from output port during read
>>>>> InputPort<double> c_input("MyInput",
>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>> init_connection, pull));
>>>>> //...
>>>>> double x;
>>>>> if ( c_input.read( x ) ) {
>>>>>   // use last value of x...
>>>>> } else {
>>>>>   // no new data
>>>>> }
>>>>
>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>> will always return true (unless disconnected or st.), and the
>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>>> will keep reading the same data)?
>>>
>>> No. Yes. pull dictates if the storage is at your input side
>>> (pull=false) or at the output side (pull=true). This has influence in
>>> networked environments.
>>
>> Do you mean _only_ in networked environments?  Maybe I don't understand
>> exactly what is meant by "input side" and "output side" above: Do they refer
>> to the input and output port?
>
> Yes.
>
>> And if yes, are the "pull's" exclusive (i.e.
>> one can only set it either for the input port, or for the output port)?
>
> pull vs push is a connection property (not a port property) and has
> effects in any environment. In a push, all written data is pushed to
> the input port's buffer. So if the input port is across the network,
> the push goes over the network, and the producing component 'feels'
> this in its latencies. In a pull, all written data is stored in the
> output port's buffer, which is local to the output. Now the input
> fetches the data, and if the connection is across a network link, the
> input will 'feel' this in its latencies. push and pull are exclusive
> in the current architecture. So if two real-time components wish to
> communicate over a CORBA link, there is a problem. I proposed to fix
> this with an extra thread in the proxy, which in practice means that
> two buffers are required (output side and input side) and that the
> non-real-time proxy thread will do the high-latency part.

Ok, I finally think I get it now. Thx for the clarifications. If I understood it correctly, this however means that components which only read data can seriously affect the performance/behaviour of the component which writes the data. Let's say I have a hard real-time component which, amongst other things, puts data onto one or more output ports: I can make it fail (to meet its deadline) just by connecting some (remote) readers that all have set 'pull' to true. Is that correct?
And if so, aren't we then introducing the same "coupling" as with synchronous events? (damned, I opened Pandora's box now I guess :-)

Thx,

Klaas

Data Flow 2.0 Example

On Thu, Sep 17, 2009 at 16:06, Klaas Gadeyne <klaas [dot] gadeyne [..] ...> wrote:
> On Mon, 31 Aug 2009, Peter Soetens wrote:
>>
>> Picking this thread up again...
>
> me too ;-)
>
>> On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>>>
>>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>>
>>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>>>> wrote:
>>>>>
>>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>>>> wrote:
>>>
>>> [...]
>>>>>>
>>>>>> // Component C gets the most recent data produced by A:
>>>>>> bool init_connection = true; // read last written value after connection
>>>>>> bool pull = true; // fetch data directly from output port during read
>>>>>> InputPort<double> c_input("MyInput",
>>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>>> init_connection, pull));
>>>>>> //...
>>>>>> double x;
>>>>>> if ( c_input.read( x ) ) {
>>>>>>   // use last value of x...
>>>>>> } else {
>>>>>>   // no new data
>>>>>> }
>>>>>
>>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>>> will always return true (unless disconnected or st.), and the
>>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>>>> will keep reading the same data)?
>>>>
>>>> No. Yes. pull dictates if the storage is at your input side
>>>> (pull=false) or at the output side (pull=true). This has influence in
>>>> networked environments.
>>>
>>> Do you mean _only_ in networked environments?  Maybe I don't understand
>>> exactly what is meant by "input side" and "output side" above: Do they
>>> refer to the input and output port?
>>
>> Yes.
>>
>>> And if yes, are the "pull's" exclusive (i.e.
>>> one can only set it either for the input port, or for the output port)?
>>
>> pull vs push is a connection property (not a port property) and has
>> effects in any environment. In a push, all written data is pushed to
>> the input port's buffer. So if the input port is across the network,
>> the push goes over the network, and the producing component 'feels'
>> this in its latencies. In a pull, all written data is stored in the
>> output port's buffer, which is local to the output. Now the input
>> fetches the data, and if the connection is across a network link, the
>> input will 'feel' this in its latencies. push and pull are exclusive
>> in the current architecture. So if two real-time components wish to
>> communicate over a CORBA link, there is a problem. I proposed to fix
>> this with an extra thread in the proxy, which in practice means that
>> two buffers are required (output side and input side) and that the
>> non-real-time proxy thread will do the high-latency part.
>
> Ok, I finally think I get it now.  Thx for the clarifications.  If I
> understood it correctly, this however means that components which only
> read data can seriously affect the performance/behaviour of the component
> which writes the data.  Let's say I have a hard real-time component
> which, amongst other things, puts data onto one or more output ports: I
> can make it fail (to meet its deadline) just by connecting some (remote)
> readers that all have set 'pull' to true.  Is that correct?
> And if so, aren't we then introducing the same "coupling" as with
> synchronous events? (damned, I opened Pandora's box now I guess :-)

Your analysis is almost entirely correct[1]. First, remote (other
process or node) ports are not subject to this coupling, since the
transport for such connections is set up to use a 'communication/proxy'
thread. But in-process, you can't make an omelet without breaking
eggs. For each reader, the output port will make a copy of the data,
so given enough readers, the thread may/will miss its deadline. The
only work-around for this (but you don't want to) is to send one
sample of data to a 'dispatcher' a la the CORBA EventService, which
then distributes it to all readers. Great, now the producing thread
doesn't overrun anymore, but the readers don't get their data (in
time) either, because there are too many of them.

The difference with synchronous events is that, from a fault-tolerance
perspective, synchronous events are very sensitive to 'bad user code',
such as infinite loops or calls out to non-real-time services. For
data ports, this is impossible: when writing the output port, each
added reader consumes a fixed amount of time, in the 'trusted' RTT
code. That is acceptable.
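
As a rough sketch of why the writer's cost scales with the number of
readers (the loop and port names are illustrative only, reusing the
ConnPolicy calls from earlier in this thread):

OutputPort<double> out("Out");
std::vector< InputPort<double>* > readers;
for (int i = 0; i != 10; ++i) {
   readers.push_back( new InputPort<double>("In") );
   // push connection: out.write() copies the sample into this reader's storage
   readers.back()->connectTo( out,
       internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE, false, false) );
}
out.write( 3.14 ); // ten copies are made here, inside 'trusted' RTT code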

Peter

[1] You know I can't publicly exclaim that you're entirely correct.

Data Flow 2.0 Example

On Mon, 31 Aug 2009, Peter Soetens wrote:

> Picking this thread up again...
>
> On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>
>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>>> wrote:
>>>>
>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>>> wrote:
>>
>> [...]
>>>>>
>>>>> // Component C gets the most recent data produced by A:
>>>>> bool init_connection = true; // read last written value after connection
>>>>> bool pull = true; // fetch data directly from output port during read
>>>>> InputPort<double> c_input("MyInput",
>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>> init_connection, pull));
>>>>> //...
>>>>> double x;
>>>>> if ( c_input.read( x ) ) {
>>>>>   // use last value of x...
>>>>> } else {
>>>>>   // no new data
>>>>> }
>>>>
>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>> will always return true (unless disconnected or st.), and the
>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>>> will keep reading the same data)?
>>>
>>> No. Yes. pull dictates if the storage is at your input side
>>> (pull=false) or at the output side (pull=true). This has influence in
>>> networked environments.
>>
>> Do you mean _only_ in networked environments?  Maybe I don't understand
>> exactly what is meant by "input side" and "output side" above: Do they refer
>> to the input and output port?
>
> Yes.
>
>> And if yes, are the "pull's" exclusive (i.e.
>> one can only set it either for the input port, or for the output port)?
>
> pull vs push is a connection property (not a port property) and has

And even more: it's a _Coordination_ policy. The simplest RTT policy should
be "push", I think.

> effects in any environment. In a push, all written data is pushed to
> the input port's buffer. So if the input port is across the network,
> the push goes over the network, and the producing component 'feels'
> this in its latencies. In a pull, all written data is stored in the
> output port's buffer, which is local to the output. Now the input
> fetches the data, and if the connection is across a network link, the
> input will 'feel' this in its latencies. push and pull are exclusive
> in the current architecture. So if two real-time components wish to
> communicate over a CORBA link, there is a problem. I proposed to fix
> this with an extra thread in the proxy, which in practice means that
> two buffers are required (output side and input side) and that the
> non-real-time proxy thread will do the high-latency part.

As soon as extra threads are needed "behind the scenes", I fear that it is
not a basic RTT property anymore, is it?

Herman

Data Flow 2.0 Example

On Mon, Aug 31, 2009 at 13:24, Herman
Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
> On Mon, 31 Aug 2009, Peter Soetens wrote:
>
>> Picking this thread up again...
>>
>> On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>>>
>>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>>
>>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>>>> wrote:
>>>>>
>>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>>>> wrote:
>>>
>>> [...]
>>>>>>
>>>>>> // Component C gets the most recent data produced by A:
>>>>>> bool init_connection = true; // read last written value after connection
>>>>>> bool pull = true; // fetch data directly from output port during read
>>>>>> InputPort<double> c_input("MyInput",
>>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>>> init_connection, pull));
>>>>>> //...
>>>>>> double x;
>>>>>> if ( c_input.read( x ) ) {
>>>>>>   // use last value of x...
>>>>>> } else {
>>>>>>   // no new data
>>>>>> }
>>>>>
>>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>>> will always return true (unless disconnected or st.), and the
>>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>>>> will keep reading the same data)?
>>>>
>>>> No. Yes. pull dictates if the storage is at your input side
>>>> (pull=false) or at the output side (pull=true). This has influence in
>>>> networked environments.
>>>
>>> Do you mean _only_ in networked environments?  Maybe I don't understand
>>> exactly what is meant by "input side" and "output side" above: Do they
>>> refer to the input and output port?
>>
>> Yes.
>>
>>> And if yes, are the "pull's" exclusive (i.e.
>>> one can only set it either for the input port, or for the output port)?
>>
>> pull vs push is a connection property (not a port property) and has
>
> And even more: it's a _Coordination_ policy. The simplest RTT policy should
> be "push", I think.
>
>> effects in any environment. In a push, all written data is pushed to
>> the input port's buffer. So if the input port is across the network,
>> the push goes over the network, and the producing component 'feels'
>> this in its latencies. In a pull, all written data is stored in the
>> output port's buffer, which is local to the output. Now the input
>> fetches the data, and if the connection is across a network link, the
>> input will 'feel' this in its latencies. push and pull are exclusive
>> in the current architecture. So if two real-time components wish to
>> communicate over a CORBA link, there is a problem. I proposed to fix
>> this with an extra thread in the proxy, which in practice means that
>> two buffers are required (output side and input side) and that the
>> non-real-time proxy thread will do the high-latency part.
>
> As soon as extra threads are needed "behind the scenes", I fear that it is
> not a basic RTT property anymore, is it?

This is not true for distributed processes. It is impossible to have
'transparent' distribution if you're not allowed to have a thread in
your process that represents a remote component. When a remote
component calls you, some thread must be waiting on the socket to
process that message. So for every remote component (in practice,
every proxy) your component communicates with, you need an extra
thread to be able to have the same level of concurrency as you have
when all components live in the same process. There are optimizations
possible, but the simplest and most 'natural' form is one thread per
proxy.

Peter

Orocos Controller-1-Example

Hi all, I'm returning to my studies using the examples available in the
rtt-exercises-1.8.1 dir.

Reading all the documents and proposed exercises, it really looks like
everything is already implemented, isn't it?

So I set up my environment to run the application, and I got some
[Warning] and [Info] messages that appeared to have no influence on
operation.

But I also got the [ERROR] messages shown below:

0.021 [ Warning][SingleThread] Forcing priority (0) of thread with
!SCHED_OTHER policy to 1.

0.023 [ ERROR ][ReportingComponent] Could not report Component Joystick :
no such peer.
0.023 [ ERROR ][ReportingComponent] Could not report Component Joystick :
no such peer.
0.023 [ Info ][DeploymentComponent::configureComponents] Re-setting
activity of Timer
0.024 [ Info ][DeploymentComponent::configureComponents] TimerComponent
correctly configured.
0.024 [ ERROR ][DeploymentComponent::configureComponents] Failed to
configure component Reporting
0.024 [ ERROR ][deployer-gnulinux::main()] Failed to configure a
component: aborting kick-start.
Switched to : Deployer
0.024 [ Info ][deployer-gnulinux::main()] Entering Task Deployer

I couldn't figure out how to solve it, since everything looks to be
implemented. Is there anything left that is supposed to be implemented?

Btw, how do I use this application? How do I check the plant movement, for
example? Or send positions?

Thanks for all,

Breno

Orocos Controller-1-Example

2009/9/7 breno <breno [..] ...>:
> Hi all, I'm returning to my studies using the examples available in the
> rtt-exercises-1.8.1 dir.
>
> Reading all the documents and proposed exercises, it really looks like
> everything is already implemented, isn't it?

I wouldn't think so:

diff -Naur controller-1 controller-1-solution/ | diffstat
 components/controller/Controller.hpp |  2 ++
 components/joystick/Joystick.hpp     |  5 +++++
 components/modeswitch/ModeSwitch.cpp |  3 +++
 components/modeswitch/ModeSwitch.hpp | 32 ++++++++++++++++++++++++++++++-
 deployment/Controller.cpf            |  7 +++++++
 deployment/application.cpf           | 22 ++++++++++++++++++++++
 deployment/logging/log-plant.cpf     |  1 +
 deployment/program.ops               |  7 +++++++
 deployment/statemachine.osd          | 27 +++++++++++++++++++++++++--
 9 files changed, 103 insertions(+), 3 deletions(-)

Did you read the Exercises.txt file in the top-level directory?

The error messages are caused by incomplete deployment XML files.

Peter

Data Flow 2.0 Example

On Mon, 31 Aug 2009, Peter Soetens wrote:

> On Mon, Aug 31, 2009 at 13:24, Herman
> Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
>> On Mon, 31 Aug 2009, Peter Soetens wrote:
>>
>>> Picking this thread up again...
>>>
>>> On Fri, Aug 21, 2009 at 16:15, Klaas Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>>>>
>>>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>>>
>>>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...>
>>>>> wrote:
>>>>>>
>>>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
>>>>>> wrote:
>>>>
>>>> [...]
>>>>>>>
>>>>>>> // Component C gets the most recent data produced by A:
>>>>>>> bool init_connection = true; // read last written value after connection
>>>>>>> bool pull = true; // fetch data directly from output port during read
>>>>>>> InputPort<double> c_input("MyInput",
>>>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>>>> init_connection, pull));
>>>>>>> //...
>>>>>>> double x;
>>>>>>> if ( c_input.read( x ) ) {
>>>>>>>   // use last value of x...
>>>>>>> } else {
>>>>>>>   // no new data
>>>>>>> }
>>>>>>
>>>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>>>> will always return true (unless disconnected or st.), and the
>>>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>>>> WritePort (i.e. if no one writes to the port, the read on the readPort
>>>>>> will keep reading the same data)?
>>>>>
>>>>> No. Yes. pull dictates if the storage is at your input side
>>>>> (pull=false) or at the output side (pull=true). This has influence in
>>>>> networked environments.
>>>>
>>>> Do you mean _only_ in networked environments?  Maybe I don't understand
>>>> exactly what is meant by "input side" and "output side" above: Do they
>>>> refer to the input and output port?
>>>
>>> Yes.
>>>
>>>> And if yes, are the "pull's" exclusive (i.e.
>>>> one can only set it either for the input port, or for the output port)?
>>>
>>> pull vs push is a connection property (not a port property) and has
>>
>> And even more: it's a _Coordination_ policy. The simplest RTT policy should
>> be "push", I think.
>>
>>> effects in any environment. In a push, all written data is pushed to
>>> the input port's buffer. So if the input port is across the network,
>>> the push goes over the network, and the producing component 'feels'
>>> this in its latencies. In a pull, all written data is stored in the
>>> output port's buffer, which is local to the output. Now the input
>>> fetches the data, and if the connection is across a network link, the
>>> input will 'feel' this in its latencies. push and pull are exclusive
>>> in the current architecture. So if two real-time components wish to
>>> communicate over a CORBA link, there is a problem. I proposed to fix
>>> this with an extra thread in the proxy, which in practice means that
>>> two buffers are required (output side and input side) and that the
>>> non-real-time proxy thread will do the high-latency part.
>>
>> As soon as extra threads are needed "behind the scenes", I fear that it is
>> not a basic RTT property anymore, is it?
>
> This is not true for distributed processes. It is impossible to have
> 'transparent' distribution if you're not allowed to have a thread in
> your process that represents a remote component.

Absolutely true! But it is also absolutely true that RTT should _not_
become a distributed middleware! That's up to the (communication)
middleware projects... Alternatively, Orocos (not RTT, but maybe OCL) could
have some support for such middleware (such as CORBA).

How much of your time goes into supporting middleware things? Probably way
too much, compared to what "core RTT" is using up... :-)

> When a remote
> component calls you, some thread must be waiting on the socket to
> process that message. So for every remote component (in practice,
> every proxy) which your component communicates with, you need an extra
> thread to be able to have the same level of concurrency as you have
> when all components live in the same process. There are optimizations
> possible, but the simplest and most 'natural' form is one thread per
> proxy.

Herman

Data Flow 2.0 Example

On Aug 31, 2009, at 07:55 , Herman Bruyninckx wrote:

> On Mon, 31 Aug 2009, Peter Soetens wrote:
>
>> On Mon, Aug 31, 2009 at 13:24, Herman
>> Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
>>> On Mon, 31 Aug 2009, Peter Soetens wrote:
>>>
>>>> Picking this thread up again...
>>>>
>>>> On Fri, Aug 21, 2009 at 16:15, Klaas
>>>> Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>>>>>
>>>>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>>>>
>>>>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...
>>>>>> >
>>>>>> wrote:
>>>>>>>
>>>>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...
>>>>>>> >
>>>>>>> wrote:
>>>>>
>>>>> [...]
>>>>>>>>
>>>>>>>> // Component C gets the most recent data produced by A:
>>>>>>>> bool init_connection = true; // read last written value after connection
>>>>>>>> bool pull = true; // fetch data directly from output port during read
>>>>>>>> InputPort<double> c_input("MyInput",
>>>>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>>>>> init_connection, pull));
>>>>>>>> //...
>>>>>>>> double x;
>>>>>>>> if ( c_input.read( x ) ) {
>>>>>>>>   // use last value of x...
>>>>>>>> } else {
>>>>>>>>   // no new data
>>>>>>>> }
>>>>>>>
>>>>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>>>>> will always return true (unless disconnected or st.), and the
>>>>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>>>>> WritePort (i.e. if no one writes to the port, the read on the
>>>>>>> readPort will keep reading the same data)?
>>>>>>
>>>>>> No. Yes. pull dictates if the storage is at your input side
>>>>>> (pull=false) or at the output side (pull=true). This has influence
>>>>>> in networked environments.
>>>>>
>>>>> Do you mean _only_ in networked environments? Maybe I don't
>>>>> understand exactly what is meant by "input side" and "output side"
>>>>> above: Do they refer to the input and output port?
>>>>
>>>> Yes.
>>>>
>>>>> And if yes, are the "pull's" exclusive (i.e.
>>>>> one can only set it either for the input port, or for the output
>>>>> port)?
>>>>
>>>> pull vs push is a connection property (not a port property) and has
>>>
>>> And even more: it's a _Coordination_ policy. The simplest RTT policy
>>> should be "push", I think.
>>>
>>>> effects in any environment. In a push, all written data is pushed to
>>>> the input port's buffer. So if the input port is across the network,
>>>> the push goes over the network, and the producing component 'feels'
>>>> this in its latencies. In a pull, all written data is stored in the
>>>> output port's buffer, which is local to the output. Now the input
>>>> fetches the data, and if the connection is across a network link, the
>>>> input will 'feel' this in its latencies. push and pull are exclusive
>>>> in the current architecture. So if two real-time components wish to
>>>> communicate over a CORBA link, there is a problem. I proposed to fix
>>>> this with an extra thread in the proxy, which in practice means that
>>>> two buffers are required (output side and input side) and that the
>>>> non-real-time proxy thread will do the high-latency part.
>>>
>>> As soon as extra threads are needed "behind the scenes", I fear
>>> that it is not a basic RTT property anymore, is it?
>>
>> This is not true for distributed processes. It is impossible to have
>> 'transparent' distribution if you're not allowed to have a thread in
>> your process that represents a remote component.

+1

> Absolutely true! But it is also absolutely true that RTT should _not_
> become a distributed middleware! That's up to the (communication)
> middleware projects... Alternatively, Orocos (not RTT, but maybe OCL)
> could have some support for such middleware (such as CORBA).

While I agree with the idealistic aspects of Herman's viewpoint, I
would counter it by saying that having an already-integrated comms
middleware is a huge selling point for Orocos. It saves every newly
interested user from having to rewrite the exact same set of glue to
get comms involved in an Orocos-based system. This was a huge factor
for us in choosing Orocos over similar projects/tools.

YMMV
Stephen

Data Flow 2.0 Example

On Mon, 31 Aug 2009, S Roderick wrote:

> On Aug 31, 2009, at 07:55 , Herman Bruyninckx wrote:
>
>> On Mon, 31 Aug 2009, Peter Soetens wrote:
>>
>>> On Mon, Aug 31, 2009 at 13:24, Herman
>>> Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
>>>> On Mon, 31 Aug 2009, Peter Soetens wrote:
>>>>
>>>>> Picking this thread up again...
>>>>>
>>>>> On Fri, Aug 21, 2009 at 16:15, Klaas
>>>>> Gadeyne<klaas [dot] gadeyne [..] ...> wrote:
>>>>>>
>>>>>> On Thu, 20 Aug 2009, Peter Soetens wrote:
>>>>>>>
>>>>>>> On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne<klaas [dot] gadeyne [..] ...
>>>>>>>>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...
>>>>>>>>>
>>>>>>>> wrote:
>>>>>>
>>>>>> [...]
>>>>>>>>>
>>>>>>>>> // Component C gets the most recent data produced by A:
>>>>>>>>> bool init_connection = true; // read last written value after connection
>>>>>>>>> bool pull = true; // fetch data directly from output port during read
>>>>>>>>> InputPort<double> c_input("MyInput",
>>>>>>>>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
>>>>>>>>> init_connection, pull));
>>>>>>>>> //...
>>>>>>>>> double x;
>>>>>>>>> if ( c_input.read( x ) ) {
>>>>>>>>>   // use last value of x...
>>>>>>>>> } else {
>>>>>>>>>   // no new data
>>>>>>>>> }
>>>>>>>>
>>>>>>>> If I set pull to 'false' above, does that mean that c_input.read(x)
>>>>>>>> will always return true (unless disconnected or st.), and the
>>>>>>>> behaviour will be the same of the 1.x ReadPort connected to a
>>>>>>>> WritePort (i.e. if no one writes to the port, the read on the
>>>>>>>> readPort will keep reading the same data)?
>>>>>>>
>>>>>>> No. Yes. pull dictates if the storage is at your input side
>>>>>>> (pull=false) or at the output side (pull=true). This has influence
>>>>>>> in networked environments.
>>>>>>
>>>>>> Do you mean _only_ in networked environments? Maybe I don't
>>>>>> understand exactly what is meant by "input side" and "output side"
>>>>>> above: Do they refer to the input and output port?
>>>>>
>>>>> Yes.
>>>>>
>>>>>> And if yes, are the "pull's" exclusive (i.e.
>>>>>> one can only set it either for the input port, or for the output
>>>>>> port)?
>>>>>
>>>>> pull vs push is a connection property (not a port property) and has
>>>>
>>>> And even more: it's a _Coordination_ policy. The simplest RTT policy
>>>> should be "push", I think.
>>>>
>>>>> effects in any environment. In a push, all written data is pushed to
>>>>> the input port's buffer. So if the input port is across the network,
>>>>> the push goes over the network, and the producing component 'feels'
>>>>> this in its latencies. In a pull, all written data is stored in the
>>>>> output port's buffer, which is local to the output. Now the input
>>>>> fetches the data, and if the connection is across a network link, the
>>>>> input will 'feel' this in its latencies. push and pull are exclusive
>>>>> in the current architecture. So if two real-time components wish to
>>>>> communicate over a CORBA link, there is a problem. I proposed to fix
>>>>> this with an extra thread in the proxy, which in practice means that
>>>>> two buffers are required (output side and input side) and that the
>>>>> non-real-time proxy thread will do the high-latency part.
>>>>
>>>> As soon as extra threads are needed "behind the scenes", I fear
>>>> that it is not a basic RTT property anymore, is it?
>>>
>>> This is not true for distributed processes. It is impossible to have
>>> 'transparent' distribution if you're not allowed to have a thread in
>>> your process that represents a remote component.
>
> +1
>
>> Absolutely true! But it is also absolutely true that RTT should _not_
>> become a distributed middleware! That's up to the (communication)
>> middleware projects... Alternatively, Orocos (not RTT, but maybe OCL)
>> could have some support for such middleware (such as CORBA).
>
> While I agree with the idealistic aspects of Herman's viewpoint, I
> would counter it by saying that having an already-integrated comms
> middleware is a huge selling point for Orocos. It saves every newly
> interested user from having to rewrite the exact same set of glue to
> get comms involved in an Orocos-based system. This was a huge factor
> for us in choosing Orocos over similar projects/tools.

yes! But what I am saying is that it should _not_ be done in RTT, but in
some other part of Orocos :-) RTT needs to be the core that can work
everywhere, for all use cases, on all platforms (at least, insofar as they
satisfy the basic platform assumptions of RTT, in the sense of
realtime-ness and FOSI-compliance).

Which makes me think again about the fact that we have never explicitly
listed those platform assumptions anywhere, have we...? (Unless in the
FOSI and other header files.)

Herman
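
To make the push/pull distinction concrete, here is a minimal sketch using
the ConnPolicy API from the example at the top of this thread (only the
pull flag differs; the port name is illustrative):

#include <rtt/Port.hpp>
using namespace RTT;

bool init_connection = true; // read last written value after connection

// push (pull=false): samples are buffered at the input side, so a writer
// across a network link pays the latency in write().
InputPort<double> pushed_input("MyInput",
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               init_connection, /* pull = */ false));

// pull (pull=true): samples are buffered at the output side, so a reader
// across a network link pays the latency in read().
InputPort<double> pulled_input("MyInput",
    internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
                               init_connection, /* pull = */ true));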

rtt-exercises-1.8

Hi all, I'm doing some initial examples with orocos-rtt, reading the user
manual and looking at some completed exercises available on the Orocos
web page.

I did a copy and paste of examples showed on sections:

3.8.1. Setting up the Data Flow Interface

3.8.2. Using the Data Flow Interface in C++

from The OROCOS Component Builder's Manual but, without any changes at all,
my compilation process returned:

Mytask.cpp:53: error: call of overloaded
‘addEventPort(RTT::ReadDataPort<double>*, const char [29])’ is ambiguous
/usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:95:
note: candidates are: bool
RTT::DataFlowInterface::addEventPort(RTT::PortInterface*,
boost::function<void ()(RTT::PortInterface*), std::allocator
/usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:120:
note: bool
RTT::DataFlowInterface::addEventPort(RTT::PortInterface*, std::string,
boost::function<void ()(RTT::PortInterface*), std::allocator
make: *** [helloworld] Error 1

I'm using rtt-1.8.4 and gcc 4.1 on an Ubuntu Linux distro.

Does anyone have any suggestions?

Thanks in advance,

Breno

rtt-exercises-1.8

On Sat, Aug 22, 2009 at 23:02, <breno [..] ...> wrote:
> Hi all, I'm doing some initial examples with orocos-rtt, reading the user
> manual and looking at some completed exercises available on the Orocos
> web page.
>
> I did a copy and paste of examples showed on sections:
>
> 3.8.1. Setting up the Data Flow Interface
>
> 3.8.2. Using the Data Flow Interface in C++
>
> from The OROCOS Component Builder's Manual but, without any changes at all,
> my compilation process returned:
>
> Mytask.cpp:53: error: call of overloaded
> ‘addEventPort(RTT::ReadDataPort<double>*, const char [29])’ is ambiguous
> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:95:
> note: candidates are: bool
> RTT::DataFlowInterface::addEventPort(RTT::PortInterface*,
> boost::function<void ()(RTT::PortInterface*), std::allocator
> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:120:
> note:                 bool
> RTT::DataFlowInterface::addEventPort(RTT::PortInterface*, std::string,
> boost::function<void ()(RTT::PortInterface*), std::allocator
> make: *** [helloworld] Error 1
>
> I'm using rtt-1.8.4 and gcc 4.1 on an Ubuntu Linux distro.
>
> Does anyone have any suggestions?

I can't see why this is showing up in your setup, but the work-around
is to add std::string( ".." ) around the string argument on line 53 of
Mytask.cpp.

We'll have to fix this in RTT::DataFlowInterface itself, possibly by
replacing the std::string argument with a const char* argument, as is
done for the other interface elements. That would then break the
work-around again :-(

Peter
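
In code, the work-around looks like this (a minimal sketch; 'bufferport'
stands in for whatever port Mytask.cpp adds on line 53):

// Ambiguous in 1.8.4: a string literal converts equally well to both
// addEventPort overloads, so this does not compile:
// this->ports()->addEventPort( &bufferport, "Event driven Input Data Port" );

// Work-around: construct the std::string explicitly to select the
// overload taking a std::string description:
this->ports()->addEventPort( &bufferport,
                             std::string("Event driven Input Data Port") );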

rtt-exercises-1.8.1/ helloworld-4-methods

Hi folks, I'm striving to complete my exercises here. Fortunately, I
already reached helloworld-rtt-4, available in rtt-exercises-1.8.1.

I made some changes suggested in the comments, but some weird errors have
appeared.

My declaration is below:

Method< void(string) > sayIt;
Method< string(void) > method;

string mymethod() {
    return "Hello World";
}
void saysomethig(string word) {
    log(Info) << word << endlog();
}

public:
    /**
     * This example sets the interface up in the Constructor
     * of the component.
     */
    Hello(std::string name)
        : TaskContext(name),
          sayIt("other_method", &Hello::saysomethig, this),
          method("the_method", &Hello::mymethod, this)
    {
        // Check if all initialisation was ok:
        assert( sayIt.ready() );
        assert( method.ready() );

        this->methods()->addMethod(&sayIt, "'other_method' Description");
        this->methods()->addMethod(&method, "'the_method' Description");
    }

My compilation returned the following error:

/usr/local/orocos/orocos-rtt-1.8.4/include/rtt/MethodRepository.hpp: In
member function ‘bool RTT::MethodRepository::addMethod(MethodT, const
char*) [with MethodT = RTT::Method<void ()(std::string)>*]’:
HelloWorld.cpp:92: instantiated from here
/usr/local/orocos/orocos-rtt-1.8.4/include/rtt/MethodRepository.hpp:175:
error: invalid application of ‘sizeof’ to incomplete type
‘boost::STATIC_ASSERTION_FAILURE<false>’

If I take out the "'other_method' Description" in the declaration above, I
get no compilation errors, but my method doesn't appear in the component
interface.

Has anyone faced this before?

Could anything be wrong in my setup?

Thank you,

Breno

rtt-exercises-1.8.1/ helloworld-4-methods

On Sun, Aug 23, 2009 at 20:16, <breno [..] ...> wrote:
> Hi folks, I'm striving to complete my exercises here. Fortunately, I
> already reached helloworld-rtt-4, available in rtt-exercises-1.8.1.
>
> I made some changes suggested in the comments, but some weird errors have
> appeared.
>
> My declaration is below:
>
>  Method< void(string) > sayIt;
>  Method< string(void) > method;
>
>
> string mymethod() {
>            return "Hello World";
>        }
> void saysomethig(string word){
>            log(Info) << word <<endlog();
>        }
>
>    public:
>        /**
>         * This example sets the interface up in the Constructor
>         * of the component.
>         */
>        Hello(std::string name)
>            : TaskContext(name),
>              sayIt("other_method", &Hello::saysomethig, this),
>              method("the_method", &Hello::mymethod, this)
>
>        {
>            // Check if all initialisation was ok:
>            assert( sayIt.ready() );
>            assert( method.ready() );
>
>            this->methods()->addMethod(&sayIt,"'other_method' Description");
>            this->methods()->addMethod(&method, "'the_method' Description");
>
>        }
>
> My compilation returned the following error:
>
> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/MethodRepository.hpp: In
> member function ‘bool RTT::MethodRepository::addMethod(MethodT, const
> char*) [with MethodT = RTT::Method<void ()(std::string)>*]’:
> HelloWorld.cpp:92:   instantiated from here
> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/MethodRepository.hpp:175:
> error: invalid application of ‘sizeof’ to incomplete type
> ‘boost::STATIC_ASSERTION_FAILURE<false>’
>
> If I take out the "'other_method' Description" in the declaration above, I
> get no compilation errors, but my method doesn't appear in the component
> interface.
>
> Has anyone faced this before?
>
> Could anything be wrong in my setup?

Yes, you did :-) You need to document each argument in the addMethod
function. sayIt has one argument, so you need to add two C-strings: the
first is the argument name (a word), the second is the description of
the argument.

As the exercise explains, see "Exercise 4: Read Orocos Component
Builder's Manual, Chap 2 sect 3.9"

Peter
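
Applied to the declarations above, the documented form would look roughly
like this (a sketch based on Peter's description of the 1.x addMethod
signature; the argument name and description strings are illustrative):

// sayIt takes one argument, so its name and a description of it are
// given after the method description itself:
this->methods()->addMethod( &sayIt, "'other_method' Description",
                            "word", "The text to be logged" );

// 'the_method' takes no arguments, so the description alone suffices:
this->methods()->addMethod( &method, "'the_method' Description" );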

rtt-exercises-1.8

> On Sat, Aug 22, 2009 at 23:02, <breno [..] ...> wrote:
>> Hi all, I'm doing some initial examples with orocos-rtt, reading the user
>> manual and looking at some completed exercises available on the Orocos
>> web page.
>>
>> I did a copy and paste of examples showed on sections:
>>
>> 3.8.1. Setting up the Data Flow Interface
>>
>> 3.8.2. Using the Data Flow Interface in C++
>>
>> from The OROCOS Component Builder's Manual but, without any changes at
>> all, my compilation process returned:
>>
>> Mytask.cpp:53: error: call of overloaded
>> ‘addEventPort(RTT::ReadDataPort<double>*, const char [29])’ is ambiguous
>> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:95:
>> note: candidates are: bool
>> RTT::DataFlowInterface::addEventPort(RTT::PortInterface*,
>> boost::function<void ()(RTT::PortInterface*), std::allocator
>> /usr/local/orocos/orocos-rtt-1.8.4/include/rtt/DataFlowInterface.hpp:120:
>> note:                 bool
>> RTT::DataFlowInterface::addEventPort(RTT::PortInterface*, std::string,
>> boost::function<void ()(RTT::PortInterface*), std::allocator
>> make: *** [helloworld] Error 1
>>
>> I'm using rtt-1.8.4 and gcc 4.1 on an Ubuntu Linux distro.
>>
>> Does anyone have any suggestions?
>
> I can't see why this is showing up in your setup, but the work-around
> is to add std::string( ".." ) around the string argument on line 53 of
> Mytask.cpp.
>
> We'll have to fix this in RTT::DataFlowInterface itself, possibly by
> replacing the std::string argument with a const char* argument, as is
> done for the other interface elements. That would then break the
> work-around again :-(
>
> Peter
>
To anyone who faces this kind of problem: as Peter said,

this->ports()->addEventPort( &bufferport,
                             std::string("Event driven Input Data Port") );

solved the problem mentioned!

Thanks for all,

Breno

Data Flow 2.0 Example

On Friday 21 August 2009 16:15:31 Klaas Gadeyne wrote:
> On Thu, 20 Aug 2009, Peter Soetens wrote:
> > On Thu, Aug 20, 2009 at 09:49, Klaas Gadeyne <klaas [dot] gadeyne [..] ...>
> > wrote:
> >> On Wed, Aug 19, 2009 at 2:41 PM, Peter Soetens<peter [dot] soetens [..] ...>
> >> wrote:
>
> [...]
>
> >>> // Component C gets the most recent data produced by A:
> >>> bool init_connection = true; // read last written value after connection
> >>> bool pull = true; // fetch data directly from output port during read
> >>> InputPort<double> c_input("MyInput",
> >>> internal::ConnPolicy::data(internal::ConnPolicy::LOCK_FREE,
> >>> init_connection, pull));
> >>> //...
> >>> double x;
> >>> if ( c_input.read( x ) ) {
> >>> // use last value of x...
> >>> } else {
> >>> // no new data
> >>> }
> >>
> >> If I set pull to 'false' above, does that mean that c_input.read(x)
> >> will always return true (unless disconnected or something), and the
> >> behaviour will be the same as the 1.x ReadPort connected to a
> >> WritePort (i.e. if no one writes to the port, the read on the ReadPort
> >> will keep reading the same data)?
> >
> > No. Yes. pull dictates if the storage is at your input side
> > (pull=false) or at the output side (pull=true). This has influence in
> > networked environments.
>
> Do you mean _only_ in networked environments? Maybe I don't understand
> exactly what is meant by "input side" and "output side" above: Do they
> refer to the input and output port? And if yes, are the "pull's" exclusive
> (i.e. one can only set it either for the input port, or for the output
> port)?
>
> > Data ports always read the last sample, unless the channel is cleared
> > (I'd like to see how Sylvain uses this feature himself).
I use clear() mainly for buffers, but since I like unified interfaces, I
implemented it for data as well. The point is that, if you have long buffers,
you don't want to use old stuff when your component starts.

My experience with another framework made me realize that it also has a use
when the input of the channel wants to notify that the current data sample is
not valid anymore. I personally don't like it so much, as I prefer using
timestamps and general data quality indicators in the samples themselves. But
I know some people do like it, so ...
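
For the buffer case, the idea is simply this (a minimal sketch, reusing
the buffered b_input from the example at the top of this thread and the
clear() call discussed above):

// Drop whatever is still queued from before the component started:
b_input.clear();

double x;
while ( b_input.read( x ) ) {
    // only samples written after the clear() are processed here
}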

Data Flow 2.0 Example

On Wed, Aug 19, 2009 at 02:41:04PM +0200, Peter Soetens wrote:
> This mail is to inform you of my impressions of the new data flow
> framework. First of all, the concept behind the new structure is given
> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

Thank you, this is useful!

> defined in addition, to override the default. I propose to update the
> deployment manual such that the connection semantics are clearer. What

Why would the semantics be clearer if we called it topic? What's wrong
with connection? I have a clear idea of what a connection is, while a
"Topic" remains a very vague thing (in this context). I vote for
keeping connection!

Best regards
Markus

Data Flow 2.0 Example

On Aug 19, 2009, at 08:41 , Peter Soetens wrote:

> This mail is to inform you of my impressions of the new data flow
> framework. First of all, the concept behind the new structure is given
> in the wiki: http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow
>
> The idea is that *outputs* are send and forget, while *inputs* specify
> a 'policy': e.g. 'I want to read all samples -> so buffer the input'
> or: ' I want lock-based protection' or:...
> The policy is specified in a 'ConnPolicy' object which you can give to
> the input, to use as a default, or override during the connection of
> the ports during deployment.

This all sounds good.

<snip>

> Since each InputPort takes a default policy (which is type = DATA,
> lock_policy = LOCK_FREE, init=false, pull=false) we can keep using the
> old DeploymentComponent + XML scripts. The 'only' addition necessary
> is to extend the XML elements such that a connection policy can be
> defined in addition, to override the default. I propose to update the
> deployment manual such that the connection semantics are clearer. What
> we call now a 'connection', I would propose to call a 'Topic',
> analogous to ROS. So you'd define a OutputPort -> Topic and Topic ->
> InputPort mapping in your XML file. We could easily generalize Topic
> to also allow for port names, such that in simple setups, you just set
> OutputPort -> InputPort. I'm not even proposing a new XML format here,
> because when we write:

Seems reasonable to update connection to topic.

<snip>

> In 2.0, you need to take care that exactly one port writes the given
> topic (so one OutputPort) and the other ports are all InputPorts. If
> this is not correct, the deployer refuses to setup the connections.

Really!? You can't connect multiple outputs to one input? That breaks
many of our systems, where we have multiple controllers all outputting
(say) nAxesDesiredPosition, and then an executive component that turns
on only one of these controllers. Those outputs all go to one input,
the next component in line (say, a joint position limiter). Note that
at any given time, only one output port is active, but we still have
multiple connected. Will this no longer be possible?

The intention is for this new dataflow implementation to replace the
existing implementation, correct?
Stephen
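
The pattern in question, sketched with the 2.0 API (component and port
names are illustrative):

// Two controllers produce the same signal; an executive enables only one
// of them at a time, but both stay connected:
OutputPort<double> ctrl_a_out("nAxesDesiredPosition");
OutputPort<double> ctrl_b_out("nAxesDesiredPosition");
InputPort<double>  limiter_in("nAxesDesiredPosition");

ctrl_a_out.connectTo( limiter_in );
ctrl_b_out.connectTo( limiter_in ); // a second writer on the same input:
                                    // refused under the proposed 2.0 rule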

Data Flow 2.0 Example

On Wed, Aug 19, 2009 at 14:49, S Roderick<kiwi [dot] net [..] ...> wrote:
>
>> In 2.0, you need to take care that exactly one port writes the given
>> topic (so one OutputPort) and the other ports are all InputPorts. If
>> this is not correct, the deployer refuses to setup the connections.
>
> Really!? You can't connect multiple outputs to one input? That breaks many
> of our systems, where we have multiple controllers all outputting (say)
> nAxesDesiredPosition, and then an executive component that turns on only one
> of these controllers. Those outputs all go to one input, the next component
> in line (say, a joint position limiter). Note that at any given time, only
> one output port is active, but we still have multiple connected. Will this
> no longer be possible?

It's a universal pattern, so it must remain possible imho.

>
> The intention is for this new dataflow implementation to replace the
> existing implementation, correct?

Yes. Sylvain discussed this scenario before
(http://www.orocos.org/node/894#comment-2500). The question is whether 1/
run-time reconnection should take place, vs 2/ allowing many-to-many.
Case 1 guarantees that only one writer is present, by design, but
reconfiguration is not (or may not be) a real-time mechanism. Case 2 adds
complexity when everything gets distributed; if data suddenly comes
from a different process, clients must be notified that they need to
pull from a different process. In the push scenario, this is not an
issue.

We need to think this over better with your use case in mind.

Peter
