RTT-Introspection and how to do it ?

Hey,
I'm currently working on adding Introspection somehow.
My first approach was to add the ability to get the TaskName and PortName
for each side in the connection. In the merge request it was mentioned,
that this would not be enough in order to cover all use cases.

So my big question would be, what information do we need
to do a proper connection trace ?
URI of the TaskContextServer ? This seems not to cover
the ROS usecase...

Greetings
Janosch

RTT-Introspection and how to do it ?

2015-03-23 7:16 GMT-03:00 Janosch Machowinski <Janosch [dot] Machowinski [..] ...>:
> I'm currently working on adding Introspection somehow.
> My first approach was to add the ability to get the TaskName and PortName
> for each side in the connection. In the merge request it was mentioned,
> that this would not be enough in order to cover all use cases.

My point in the pull request, that I make again here, is that the
proper, system-wide, "identification" of the ports is
toolchain-specific. Rock would do it differently than you with your
orocos-cpp, than probably the next guy who uses RTT in a different
context. You, yourself, mention TaskContextServer. That's CORBA. What
about people not using CORBA ?

That's the power of RTT, having been very careful about keeping the
core clean of such concerns. And this is why the only functionality
that should be provided by RTT is the ability to store and retrieve a
caller-provided string to identify the channels. Leave the details of
how name resolution is performed, and what information is needed, to
the environment which knows best. I.e., NOT RTT.

For convenience, I suggested to add that information in the policy. I
say "convenience" as the policy is already passed along when
connecting ports.

I would even add that we could use the opportunity to revive the idea
of having a generic key-value pair list in the policies, to ultimately
replace the confusing list of parameters we have right now (who knows
that "pull" only applies to direct or CORBA connections ?). This would
allow storing (and therefore, exposing to the distributed system)
metadata information in connections while, again, leaving RTT out of
them.

THEN you can auto-fill this field with a sane default, if it is not
yet filled, in the current RTT code:
- first in the local input/output ports (plain task and port name)
- then in the CORBA layer to account for the fact that names are not
propagated through the CORBA interface BUT are known at the level of
the proxies.

Sylvain

2015-03-23 7:16 GMT-03:00 Janosch Machowinski <Janosch [dot] Machowinski [..] ...>:
> Hey,
> I'm currently working on adding Introspection somehow.
> My first approach was to add the ability to get the TaskName and PortName
> for each side in the connection. In the merge request it was mentioned,
> that this would not be enough in order to cover all use cases.
>
> So my big question would be, what information do we need
> to do a proper connection trace ?
> URI of the TaskContextServer ? This seems not to cover
> the ROS usecase...
>
> Greetings
> Janosch
>
> --
> Dipl. Inf. Janosch Machowinski
> SAR & Safety Robotics, Robotics Group
> Universität Bremen, FB 3 - Mathematics and Computer Science
> Robert-Hooke-Straße 1, 28359 Bremen, Germany
>
> --
> Orocos-Dev mailing list
> Orocos-Dev [..] ...
> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev

RTT-Introspection and how to do it ?

> Ok with me, still I would like to have a common place to store the
> information, and a standardized format would also be nice.
> Janosch

Cf. the rest of the mail:
place: the policy, as a list of key-value pairs
standardized format: actually, not sure it makes sense as it will be
tooling-specific information (i.e. each toolchain can define its own
standardized set of keys, if multiple toolings want to share one then
they can start discussing)

RTT-Introspection and how to do it ?

On 24.03.2015 13:01, Sylvain Joyeux wrote:
>> Ok with me, still I would like to have a common place to store the
>> information, and a standardized format would also be nice.
>> Janosch
> Cf. the rest of the mail:
> place: the policy, as a list of key-value pairs
> standardized format: actually, not sure it makes sense as it will be
> tooling-specific information (i.e. each toolchain can define its own
> standardized set of keys, if multiple toolings want to share one then
> they can start discussing)
My current problem here is that I would need to patch the
RTT::proxy::RemotePorts class to fill in the connection tracing
information. The connection tracing information should match the
one being filled in by the orocos.rb layer, and should also be
compatible with the ROS stuff, as the goal is to have a tool that
works in all these cases.

I'm still a bit puzzled about what would be good connection tracing
information for the 'standard' RTT/CORBA case. Your point was that
the task name that could be given to a nameserver would not be
sufficient, given the recent CORBA renaming patches.
What would be unique in this case ? A CORBA URI ?
Greetings
Janosch

RTT-Introspection and how to do it ?

> My current problem here is that I would need to patch
> the RTT::proxy::RemotePorts class to fill in the connection tracing
> information. The connection tracing information should match the
> one being filled in by the orocos.rb layer, and should also
> be compatible with the ROS stuff, as the goal is to have
> a tool that works in all these cases.
Right now, the goal would be to have something implemented in RTT that
*can be used* in all these cases. Having all of them fill it in the
same way is a different topic.

> I'm still a bit puzzled about what would be good connection tracing
> information for the 'standard' RTT/CORBA case. Your point was
> that the task name that could be given to a nameserver would
> not be sufficient, given the recent CORBA renaming patches.
> What would be unique in this case ? A CORBA URI ?

If you want to be compatible with orocos.rb, you'll have to either
reimplement the naming service infrastructure (and pray that orocos.rb
and the C++ program share the same general configuration), or allow
the process to be given the NS namespace so that you can fill it.

Sylvain

RTT-Introspection and how to do it ?

On 24.03.2015 16:38, Sylvain Joyeux wrote:
>> My current problem here is that I would need to patch
>> the RTT::proxy::RemotePorts class to fill in the connection tracing
>> information. The connection tracing information should match the
>> one being filled in by the orocos.rb layer, and should also
>> be compatible with the ROS stuff, as the goal is to have
>> a tool that works in all these cases.
> Right now, the goal would be to have something implemented in RTT that
> *can be used* in all these cases. Having all of them fill them the
> same way is a different topic.
Egg vs. hen.
I need to know what each of them would fill in, to know how to
implement something that *can be used* in all these cases...

Therefore my question.

>> I'm still a bit puzzled about what would be good connection tracing
>> information for the 'standard' RTT/CORBA case. Your point was
>> that the task name that could be given to a nameserver would
>> not be sufficient, given the recent CORBA renaming patches.
>> What would be unique in this case ? A CORBA URI ?
> If you want to be compatible with orocos.rb, you'll have to either
> reimplement the naming service infrastructure (and pray that orocos.rb
> and the C++ program share the same general configuration), or allow
> the process to be given the NS namespace so that you can fill it.
AFAIK the standard (implicit) RTT::Corba nameservice does exactly
the same as orocos.rb.
Janosch

RTT-Introspection and how to do it ?

> Egg vs. Hen. I need to know, what each of them would fill, to know how to
> implement something that *can be used* in all these cases...
The string is a very universal way to store any kind of information.
Which is why I advocate for leaving the RTT part simple (list of
key/value pairs).

> AFAIK the standard (implicit) RTT::Corba nameservice does exactly
> the same as orocos.rb.
Except that orocos.rb is architected so that multiple name services
can be used. It namespaces task names based on how the task can be
resolved (so that it can be re-resolved later). This is how orocos.rb
integrates ROS transparently. It basically provides a
TaskContext-compatible API and a ROS-based name service. And, by the
way, it is a mandatory thing to have if you want to build multi-robot
systems (even with CORBA).

Sylvain

RTT-Introspection and how to do it ?

Hey,
we had a discussion about the introspection today,
and came up with a second approach to this.

The idea was to add a flag to ChannelElementBase
bool isRemoteElement()
and a method
std::string getRemoteEndpointURI()

If we afterwards modify the ROS and Corba::RemoteChannelElement
to fill in the information, we should have a solution that works
in all cases. And no tooling patching would be involved.

To gather the introspection information, the TaskContextServer (or
whatever else you use) would need to iterate over the channel elements
of the ConnectionManager and output them. Big plus: if there is a
BufferChannelElement in between, one could also ask it for its
fill status.
Thoughts on this ?
Janosch


RTT-Introspection and how to do it ?

2015-03-25 12:25 GMT-03:00 Janosch Machowinski <Janosch [dot] Machowinski [..] ...>:
> To gather the introspection information, the TaskContextServer (or
> whatever else you use) would need to iterate over the channel elements
> of the ConnectionManager and output them. Big plus: if there is a
> BufferChannelElement in between, one could also ask it for its
> fill status.
> Thoughts on this ?

Could you *please* start by saying what your problem is with the
solution I've proposed, apart from it coming from me ? I literally
spent hours and lots of (virtual) ink trying to tell you why I don't
like your solution. The only thing I get back is two lines ("I don't
like X because it would require me to modify the proxies", yes,
and ?)

To make things absolutely clear: to me, the way it is coded is not the
main issue. The *concept* of how you want to implement this little
part of "introspection" is the issue. Having a getURI method or moving
strings around does not change the underlying problem: such an API has
no place in RTT. *Nothing* beats leaving toolchain concerns to the
toolchain. Identifying tasks and ports ? That's a toolchain concern;
you do NOT know how toolchains might want to represent and resolve
tasks and ports. Heck, you did not even seem to know how orocos.rb
works. And it avoids bogging down RTT's code, which is already complex
enough.

To add insult to injury, you seem determined to pass on the
opportunity of creating a cheap way to store and retrieve
channel-specific metadata.

What's wrong with filling in default values, using something RTT
already has (task and port names), if the expected field(s) are not
already filled ? You get what you want and so do I: the flexibility of
letting the toolchain(s) do their job while still providing a
low-level functionality that is limited but "works" (for a limited
meaning of "works").

(Again: how do you deal with multiple naming services ? Multiple ROS
servers ? You would have to "feed" that information to RTT, which
either means modifying the tooling to do it (!) but still telling RTT
that it has to ignore it (i.e. creating a way for the toolchain to
store a URI that is tooling-specific *and* provide "on the side" the
name RTT expects), or moving a multi-NS system into RTT. Yuk.)

In other words, you force your vision of how RTT should be used on the
developers (present and future) of the toolchains using RTT. Which is
exactly what RTT succeeded in avoiding so far.

Sylvain

RTT-Introspection and how to do it ?

On 25.03.2015 20:29, Sylvain Joyeux wrote:
> 2015-03-25 12:25 GMT-03:00 Janosch Machowinski <Janosch [dot] Machowinski [..] ...>:
>> To gather the introspection information, the TaskContextServer (or
>> whatever else you use) would need to iterate over the channel elements
>> of the ConnectionManager and output it. Big plus, of there is a
>> BufferChannelElement in between, one could also ask it for its
>> fill status.
>> Thoughts on this ?
> Could you *please* start by saying what is your problem with the
> solution I've proposed, apart from it being coming from me ? I
> literally spend hours and lots of (virtual) ink trying to tell you why
> I don't like your solution. The only thing I get back is two lines ("I
> don't like X because it would require me to modify the proxies", yes,
> and ?)
Your proposal won't work in the case where the peer mechanism is used
and the connections are created inside of a task. It also won't work
if the connections are created inside a deployment using connect_to
directly.
It will also fail in the CORBA renaming scenario, if the renaming is
done after the connect, as wrong information is stored in the policy.

Your proposal will also only work if you use enabled tooling that
fills in the information. And as the filled-in information is not
standardized through a proper interface, it will only be useful to the
introspection tool of that specific tooling.
Wouldn't it be better if there were one tool, for RTT, that would work
regardless of the tooling ? I'm thinking of a class that you give a
bunch of TraceFoo<Tooling specific templateClass, TaskContext> and
that returns you a graph of the same template class. Also, I don't see
the need for this class to go inside of the RTT code...
>
> To make things absolutely clear: to me, the way it is coded is not the
> main issue. The *concept* of how you want to implement this little
> part of "introspection" is the issue. Having a getURI method or moving
> strings around do not change the underlying problem: such an API has
> no place in RTT. *Nothing* beats leaving toolchain concerns to the
> toolchain. Identifying tasks and ports ? That's a toolchain concern,
> you do NOT know how toolchains might want to represent and resolve
> tasks and ports. Heck, you did not seem to know how orocos.rb was. And
> it avoids bogging RTT's code, which is already complex enough.
You clearly did not understand the proposal. The getURI returns the
URI of the remote counterpart of the channel element. This information
can then be used to trace back the connections. And this information
is 100% available inside of the channel element, regardless of the
tooling you use, because it is transport specific.
>
> To add insult to injury, you seem determined to pass on the
> opportunity of creating a cheap way to store and retrieve
> channel-specific metadata.
>
> What's wrong with filling default values with something RTT already
> has (task and port names) if the expected field(s) are not already
> filled exactly? You get what you want and so do I: the flexibility of
> giving the toolchain(s) to do their job while still providing a
> low-level functionality that is limited but "works" (for a limited
> meaning of "works").
As you may recall, my first patch tried to do just that: fill in task
and port names.
And you may also recall that it was quite intrusive. So, to quote you,
there is no way to fill in the default information without a lot of
'mumbo-jumbo'.
>
> (Again: how do you deal with multiple naming services ? Multiple ROS
> servers ? You would have to "feed" that information to RTT, which
> either means modifying the tooling to do it (!) but still telling RTT
> that he has to ignore it (i.e. creating a way for the toolchain to
> store a URI that is tooling-specific *and* provide "on the side" the
> name RTT expects) or moving a multi-NS system into RTT. Yuk.)
This needs to be resolved by the introspection tool, using the
tooling-specific information mentioned above.
>
> In other words, you force your vision of how RTT should be used on the
> developers (present and future) of the toolchains using RTT. Which is
> exactly what RTT succeeded in avoiding so far.
What I'm trying to do is to find a lightweight solution that will work
in all cases.
For some reason, you seem to take this personally.
Greetings
Janosch

RTT-Introspection and how to do it ?

> Your proposal won't work in the case that the peer mechanism is used, and
> the connections are created inside of a task. It also won't work, if the
> connections are created inside a deployment using connect_to directly.

And why is that ? connectTo can fill the information in the "I don't
use the tooling case". But that's for another point later.

> It will also fail in the corba renaming scenario, if the renaming is done
> after the connect, as wrong information is stored in the policy.
Interesting. Not a case we have right now, but interesting.

> Your proposal will also only work if you use enabled tooling, that fills in
> the information. And as the filled-in information is not standardized through a proper
> interface, it will only be useful to the introspection tool for the specific tooling.
> Wouldn't it be better if there would be one tool, for RTT, that would work
> regardless of the tooling ?

Better ? It depends on the cost to (1) RTT and (2) the various
toolchains. What is this "introspection tool" exactly doing ?
Explaining that would help in understanding where you're trying to go.

> You clearly did not understand the proposal. The getURI returns the URI of
> the remote counterpart of the channel element. This information can then be used to
> trace back the connections. And this information is 100% available inside of the
> channel element, regardless of the tooling you use, because it is transport specific.
Definitely an interesting idea. However, I don't see what you would
use as a "transport-dependent URI":
- CORBA: the IOR, but of what ? Ports don't have an IOR.
  DataFlowInterface has one, but that's already two steps removed from
  the endpoint. And the channel endpoints are hidden deep inside the
  channel itself (a.k.a. not known from the outside).
- MQ: the mq name itself ?
- ROS: the topic name ? But then that won't work if you have multiple
  ROS masters.

If your idea is to provide an ID that can be matched externally to
rebuild the graph later, then simply generating a UUID per channel
would do the trick (just noticed Peter's message in the github thread
... he was suggesting that already, missed it somehow :(). However,
that's not working in the stream case (e.g. ROS) as there is no other
endpoint to "match" against.

<ironic>
You could even store the UUID in the policy metadata
</ironic>

> As you may recall, my first patch tried to do just that: fill in task and
> port names. And you may also recall that it was quite intrusive. So to quote you, there
> is no way to fill in the default information without a lot of 'mumbojumbo'.
Nice out-of-context quote. You should try and go into journalism. The
whole quote was

> The only thing you need to update is your implementation of TaskProxy.
> It has all the information needed to set the policy field properly without
> adding a lot of mumbo-jumbo in
> RTT itself.

Which was mainly referring to the fact that you were propagating port
and task names all the way through the channels and the CORBA IDL ! If
you limit yourself to connectTo in "plain RTT" and in the CORBA
proxies, that would be a lot less intrusive and would provide the
fill-in-the-defaults quite nicely.

> What I'm trying to do is to find a lightweight solution, that will work in
> all cases.
> For some reasons, you seem to take this personal.
You interpreted it wrong. What you see here is "very annoyed" by
wasting my time talking to a wall. For. So. Many. Emails.

But at least it got you talking.

Sylvain

RTT-Introspection and how to do it ?

2015-03-26 1:15 GMT+01:00 Sylvain Joyeux <sylvain [dot] joyeux [..] ...>:

> > As you may recall, my first patch tried to do just that: fill in task and
> > port names. And you may also recall that it was quite intrusive. So to quote
> > you, there is no way to fill in the default information without a lot of
> > 'mumbojumbo'.
> Nice out-of-context quote. You should try and go into journalism. The
> whole quote was
>

I'm not sure sarcasm will help in any way ... It's a public discussion btw
...

>
> > The only thing you need to update is your implementation of TaskProxy.
> > It has all the information needed to set the policy field properly
> without
> > adding a lot of mumbo-jumbo in
> > RTT itself.
>
> Which was mainly referring to the fact that you were propagating port
> and task names all the way through the channels and CORBA IDL ! If you
> limit yourself to connectTo in "plain RTT" and in the corba proxies,
> that would be a lot less intrusive and would provide the
> fill-in-the-defaults quite nicely.
>
> Sylvain

RTT-Introspection and how to do it ?

On 26.03.2015 01:37, Willy Lambert wrote:
>
>
> 2015-03-26 1:15 GMT+01:00 Sylvain Joyeux <sylvain [dot] joyeux [..] ...>:
>
> > Your proposal won't work in the case that the peer mechanism is used, and
> > the connections are created inside of a task. It also won't
> work, if the
> > connections are created inside a deployment using connect_to
> directly.
>
> And why is that ? connectTo can fill the information in the "I don't
> use the tooling case". But that's for another point later.
>
Actually, no. In the case of connect_to to a remote port, the remote
port is created by the TaskContextServer and attached to the local
task. There is some sort of trick there... At this point, you have no
chance to figure out the name of the original task if you don't
propagate the information all along the way, like my first patch did.
>
>
> > It will also fail in the corba renaming scenario, if the
> renaming is done
> > after the connect, as wrong information is stored in the policy.
> Interesting. Not a case we have right now, but interesting.
>
> > Your proposal will also only work if you use enabled tooling,
> that fills in
> > the information. And as the filled in information is not
> standardized through an proper
> > interface, it will only be useful to the introspection tool for
> the specific tooling.
> > Wouldn't it be better if there would be one tool, for RTT, that
> would work
> > regardless of the tooling ?
>
> Better ? It depends on the cost on (1) RTT and (2) the various
> toolchains. What is this "introspection tool" exactly doing ?
> Explaining that would help understanding where you're trying to go.
>
Is that clear now from the explanation below, or should I retry ?
>
>
> > You clearly did not understand the proposal. The getURI returns
> the URI of
> > the remote counterpart of the channel element. This information
> can then be used to
> > trace back the connections. And this information is 100%
> available inside of the
> > channel element, regardless of the tooling you use, because it
> is transport specific.
> Definitely an interesting idea. However, I don't see what you would
> use as "transport-dependent URI"
> CORBA: the IOR, but of what? Ports don't have a IOR.
>
Rethinking it a bit, we would need two methods: getLocalURI and
getRemoteURI. In the connection tracking, it then comes down to a
simple string match. The URI could also be a unique identifier / UUID.
>
> DataFlowInterface has one, but that's already two
> steps further from the endpoint. And the Channel endpoints
> are hidden deep inside the channel itself (a.k.a. not
> known from the outside)
>
Huh ? I can iterate over the channel elements by using the getInput
and getOutput methods.
>
> MQ: the mq name itself ?
> ROS: the topic name ? But then that won't work if you have
> multiple ROS masters
>
> If your idea is to provide an ID that can be matched externally to
> rebuild the graph later, then simply generating a UUID per channel
> would do the trick (just noticed Peter's message in the github thread
> ... he was suggesting that already, missed it somehow :(). However,
> that's not working in the stream case (e.g. ROS) as there is no other
> endpoint to "match" against.
>
The output graph of the tracking class would contain the tasks, the
ports and the channel elements. If there is no matched remote one, it
will still be in there as a leaf with the URI/UUID. So the information
in the leaf can be used to figure out the ROS endpoint.
Interestingly enough, this would also work in the case where a port is
publishing to a ROS topic and multiple other ports are subscribed to
the same ROS topic.
>
>
> <ironic>
> You could even store the UUID in the policy metadata
> </ironic>
>
> > As you may recall my first patch tried to to just that. Fill in
> task and
> > port names. And you may also recall that is was quite intrusive.
> So to quote you, there
> > is no way to fill in the default information without a lot of
> 'mumbojumbo'.
> Nice out-of-context quote. You should try and go into journalism. The
> whole quote was
>
>
> I'm not sure sarcasm will help in any way ... It's a public discussion
> btw ...
>
>
> > The only thing you need to update is your implementation of
> TaskProxy.
> > It has all the information needed to set the policy field
> properly without
> > adding a lot of mumbo-jumbo in
> > RTT itself.
>
> Which was mainly referring to the fact that you were propagating port
> and task names all the way through the channels and CORBA IDL ! If you
> limit yourself to connectTo in "plain RTT" and in the corba proxies,
> that would be a lot less intrusive and would provide the
> fill-in-the-defaults quite nicely.
>
As I said above, there is no simple way to achieve this. Feel free to
prove me wrong...
Anyway, I would prefer the connection tracking solution; it seems
simpler and would provide the ability to inspect the buffer fill
levels inside of the connections.
Janosch

>
> > What I'm trying to do is to find a lightweight solution, that
> will work in
> > all cases.
> > For some reasons, you seem to take this personal.
> You interpreted it wrong. What you see here is "very annoyed" by
> wasting my time talking to a wall. For. So. Many. Emails.
>
> But at least it got you talking.
>
> Sylvain

RTT-Introspection and how to do it ?

Below is a summary of what I understand from your proposal:

- transports must provide unique IDs that allow identifying endpoints
- the connection graph can be rebuilt by querying remote and local
IDs and pairing them
- somehow we could use this mechanism to get some stats about the
channel itself

Overall, it does seem like a pretty sound starting point.

Some key points in my view:
- the info-gathering process must be purely local
- since we have to query both sides of the connection anyway, it
does not reduce the amount of information we can gather. It is
therefore a performance gain, allows it to work uniformly over one-way
communication channels like MQs or RTT-to-RTT ROS, makes the info
gathering more robust as we don't need to hit the network, and will
allow querying half-broken connections
- obviously, only the side of the connection which contains the
data element would report about its state
- it would mean making the IDs per-channel instead of per-endpoint
- the channel API is private, and should remain so. Any connection
info API should go on the ports. I would propose creating a
ConnectionInfo structure and a getConnectionInfo call on PortInterface
which returns a vector of ConnectionInfo objects
- this structure would contain
- the channel ID
- the channel policy
1. it's important information since it tells how the connection
has been built (buffer size, transport ID, ...)
2. it is guaranteed to contain information about the remote
part in the case of streams (e.g. ROS)
- an additional data structure to store channel statistics (off
the top of my head: number of samples received, number of samples
read, number of samples lost, current fill)
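For illustration, the proposed structure could be sketched roughly as below. This is plain C++ with no RTT dependency; `ConnPolicy` here is a cut-down stand-in for RTT's actual policy class, and `ChannelStatistics`, `ConnectionInfo`, and the commented `getConnectionInfo` signature are hypothetical names taken from the proposal, not an existing API:

```cpp
#include <string>
#include <vector>

// Cut-down stand-in for RTT's ConnPolicy (buffer size, transport ID, ...)
struct ConnPolicy {
    int type = 0;        // e.g. DATA or BUFFER
    int size = 0;        // buffer size, if any
    int transport = 0;   // transport ID (e.g. CORBA, MQ, ROS)
    std::string name_id; // stream name for stream-based transports (e.g. a ROS topic)
};

// Per-channel statistics, as suggested in the summary above
struct ChannelStatistics {
    unsigned long samples_received = 0;
    unsigned long samples_read = 0;
    unsigned long samples_lost = 0;
    unsigned int current_fill = 0; // current buffer fill level
};

// One entry per channel attached to a port
struct ConnectionInfo {
    std::string channel_id; // the unique per-channel ID discussed above
    ConnPolicy policy;      // tells how the connection was built
    ChannelStatistics stats;
};

// PortInterface would then expose something like:
//   std::vector<ConnectionInfo> getConnectionInfo() const;
```

Since the policy is already passed along when connecting ports, returning it inside this structure costs nothing extra and covers the stream case where it is the only record of the remote side.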

RTT-Introspection and how to do it ?

Am 30.03.2015 um 13:49 schrieb Sylvain Joyeux:
> Below is a summary of what I understand from your proposal:
>
> - transports must provide unique IDs that allow identifying endpoints
> - the connection graph can be rebuilt by querying remote and local
> IDs and pairing them
> - somehow we could use this mechanism to get some stats about the
> channel itself
>
> Overall, it does seem like a pretty sound starting point.
>
> Some key points in my view:
> - the info-gathering process must be purely local
> - since we have to query both sides of the connection anyway, this
> does not reduce the amount of information we can gather. It is
> therefore a performance gain, works uniformly over one-way
> communication channels like MQs or RTT-to-RTT ROS, and makes the info
> gathering more robust, since we don't need to hit the network and can
> query half-broken connections
> - obviously, only the side of the connection which contains the
> data element would report about its state
> - it would mean making the IDs per-channel instead of per-endpoint
> - the channel API is private, and should remain so. Any connection
> info API should go on the ports. I would propose creating a
> ConnectionInfo structure and a getConnectionInfo call on PortInterface
> which returns a vector of ConnectionInfo objects
I don't plan to add an API to RTT itself. To keep the RTT changes
minimal, I aim to implement a Service that adds an operation
performing the query. The Service is supposed to be loaded remotely.
(See my mail about that)
> - this structure would contain
> - the channel ID
> - the channel policy
> 1. it's important information since it tells how the connection
> has been built (buffer size, transport ID, ...)
> 2. it is guaranteed to contain information about the remote
> part in the case of streams (e.g. ROS)
> - an additional data structure to store channel statistics (off
> the top of my head: number of samples received, number of samples
> read, number of samples lost, current fill)
That pretty much matches it.
Janosch

RTT-Introspection and how to do it ?

> I don't plan to add an API to RTT itself. To keep the RTT changes
> minimal, I aim to implement a Service that adds an operation
> performing the query. The Service is supposed to be loaded remotely.
> (See my mail about that)

Missing how that service accesses the information.

Sylvain

RTT-Introspection and how to do it ?

Am 30.03.2015 um 14:48 schrieb Sylvain Joyeux:
>> I don't plan to add an API to RTT itself. To keep the RTT changes
>> minimal, I aim to implement a Service that adds an operation
>> performing the query. The Service is supposed to be loaded remotely.
>> (See my mail about that)
> Missing how that service accesses the information.
>
> Sylvain
Since the service adds a method to the task, it can then access the
connection manager of the ports; it is a local method at that point.
One actually only needs a local pointer to a TaskContext to access
the connection manager.
Greetings
Janosch
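A rough, self-contained sketch of that idea follows. All types here are simplified mock stand-ins for the real RTT classes (TaskContext, ports, connection manager), and `queryConnections` is a hypothetical name modelling the operation the dynamically loaded service would add; the point is only that the query needs nothing but a local pointer:

```cpp
#include <string>
#include <vector>

// Mock stand-ins for the RTT classes involved (illustrative only).
struct ConnectionInfo {
    std::string port_name;
    std::string channel_id;
};

struct Port {
    std::string name;
    std::vector<std::string> channel_ids; // stand-in for the connection manager's channel list
};

struct TaskContext {
    std::vector<Port> ports;
};

// The operation the service would add: purely local, it walks the
// task's ports through a local pointer and collects per-channel info.
std::vector<ConnectionInfo> queryConnections(TaskContext* task) {
    std::vector<ConnectionInfo> result;
    for (const Port& p : task->ports)
        for (const std::string& id : p.channel_ids)
            result.push_back(ConnectionInfo{p.name, id});
    return result;
}
```

In the real plugin the operation would be registered on the task (RTT services add operations to their owner), so a remote caller invokes it via the normal operation mechanism while the enumeration itself stays local.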

RTT-Introspection and how to do it ?

... coming back to my previous point:
- the connection manager and channels APIs are private and should not
directly be accessed for this purpose

Moreover, if your aim is to make the connection info feature available
to everyone, then adding it to the public API would make sense.

How it then gets exported remotely is another question (I would have
added it to the IDL directly, but adding a dynamically loaded service
does not sound bad either)

Sylvain

2015-03-30 9:52 GMT-03:00 Janosch Machowinski <Janosch [dot] Machowinski [..] ...>:
> Am 30.03.2015 um 14:48 schrieb Sylvain Joyeux:
>>>
>>> I don't plan to add an API to RTT itself. To keep RTT changes minimal,
>>> I aim at implement a Service, that adds an operation that does the
>>> query. The Service is supposed to be loaded remotely. (See my mail
>>> about that)
>>
>> Missing how that service accesses the information.
>>
>> Sylvain
>
> Since the service adds a method to the task, it can then access the
> connection manager of the ports; it is a local method at that point.
> One actually only needs a local pointer to a TaskContext to access
> the connection manager.
> Greetings
>
> Janosch
>
>
> --
> Dipl. Inf. Janosch Machowinski
> SAR- & Sicherheitsrobotik
>
> Universität Bremen
> FB 3 - Mathematik und Informatik
> AG Robotik
> Robert-Hooke-Straße 1
> 28359 Bremen, Germany
> Zentrale: +49 421 178 45-6611
> Besuchsadresse der Nebengeschäftstelle:
> Robert-Hooke-Straße 5
> 28359 Bremen, Germany
> Tel.: +49 421 178 45-6614
> Empfang: +49 421 178 45-6600
> Fax: +49 421 178 45-4150
> E-Mail: jmachowinski [..] ...
>
> Weitere Informationen: http://www.informatik.uni-bremen.de/robotik
>

RTT-Introspection and how to do it ?

Am 30.03.2015 um 14:56 schrieb Sylvain Joyeux:
> ... coming back to my previous point:
> - the connection manager and channels APIs are private and should not
> directly be accessed for this purpose
Huh? The connection manager is public in PortInterface, and the
documentation says it may be used for exactly this purpose:

/**
* Returns the connection manager of this port (if any).
* This method provides access to the internals of this port
* in order to allow connection introspection.
* @return null if no such manager is available, or the manager
* otherwise.
* @see ConnectionManager::getChannels() for a list of all
* connections of this port.
*/
virtual const internal::ConnectionManager* getManager() const = 0;

>
> Moreover, if you aim is to make the connection info feature available
> to everyone, then adding it to the public API would make sense.
It is public in the sense that you can use it if you compile and
install the query library and its service. I don't see the point of
adding it to the RTT core though.
Greetings
Janosch

RTT-Introspection and how to do it ?

> /**
> * Returns the connection manager of this port (if any).
> * This method provides access to the internals of this port
> * in order to allow connection introspection.
> * @return null if no such manager is available, or the manager
> * otherwise.
> * @see ConnectionManager::getChannels() for a list of all
> * connections of this port.
> */
> virtual const internal::ConnectionManager* getManager() const = 0;

This is pretty horribly bad.

Giving direct access to the channels is a really bad idea.
Channel memory management is very sensitive and currently pretty
horrible. I was very much hoping to clean it up (in particular, to
remove the bloody need for mutexes everywhere). Having it public
pretty much makes that impossible. By the way, enumerating channels is
currently really broken in that respect: doing what you aim at with
getManager() while others are connecting/disconnecting will lead to
big trouble.
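To illustrate the concern: a safer pattern than exposing the live channel list is to hand callers a copy taken under the lock, so concurrent connect/disconnect cannot invalidate what they iterate over. The `ChannelRegistry` below is purely hypothetical, not RTT code:

```cpp
#include <mutex>
#include <string>
#include <vector>

// Sketch of "return a snapshot, not the internals": instead of handing
// out a pointer into the live channel list (which other threads mutate
// while connecting/disconnecting), copy it while holding the lock.
class ChannelRegistry {
public:
    void add(const std::string& id) {
        std::lock_guard<std::mutex> lock(mutex_);
        channels_.push_back(id);
    }
    void remove(const std::string& id) {
        std::lock_guard<std::mutex> lock(mutex_);
        for (auto it = channels_.begin(); it != channels_.end(); ++it) {
            if (*it == id) { channels_.erase(it); break; }
        }
    }
    // Safe enumeration: callers get a private copy, never the live list.
    std::vector<std::string> snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return channels_;
    }
private:
    mutable std::mutex mutex_;
    std::vector<std::string> channels_;
};
```

A `getConnectionInfo()`-style call built on this pattern could stay safe even if the channel internals are later reworked, whereas a raw `getManager()` pointer pins the current implementation.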

I would love to have comments from Peter on this (the original commit
is from RTT2's inception).

Sylvain

RTT-Introspection and how to do it ?

Am 30.03.2015 um 15:19 schrieb Sylvain Joyeux:
>> /**
>> * Returns the connection manager of this port (if any).
>> * This method provides access to the internals of this port
>> * in order to allow connection introspection.
>> * @return null if no such manager is available, or the manager
>> * otherwise.
>> * @see ConnectionManager::getChannels() for a list of all
>> * connections of this port.
>> */
>> virtual const internal::ConnectionManager* getManager() const = 0;
> This is pretty horribly bad.
>
> Giving direct access to the channels is a really bad idea.
> Channel memory management is very sensitive and currently pretty
> horrible. I was very much hoping to clean it up (in particular, to
> remove the bloody need for mutexes everywhere). Having it public
> pretty much makes that impossible. By the way, enumerating channels is
> currently really broken in that respect: doing what you aim at with
> getManager() while others are connecting/disconnecting will lead to
> big trouble.
>
> I would love to have comments from Peter on this (the original commit
> is from RTT2's inception).
>
> Sylvain
Hm,
ok, it sounds like we need to sort this out. I did a first
proof-of-concept implementation using the current API. The results
look pretty promising. The code can be found here:
https://github.com/orocos-toolchain/rtt/pull/90
Greetings
Janosch