Examples and Tutorials

The tutorials and example code are split into two parts, one for new users and one for experienced users of the RTT.

There are several sources where you can find code and tutorials. Some code is listed on wiki pages, some is available as a separate downloadable package, and you can also find code snippets in the manuals.

Simple examples

RTT Examples Get started with simple, ready-to-compile examples of how to create a component

Naming connections, not ports: Orocos' best kept secret

Using omniORBpy to interact with a component from Python

Advanced examples

These advanced examples are mainly about extending and configuring the RTT for your specific needs.

Using plugins and toolkits to support custom data types

Using non-periodic components to implement a simple TCP client

Using XML substitution to manage complex deployments

Using real-time logging

Developing plugins and toolkits

This is a work in progress and applies only to RTT 1.x!

Rationale

Problem: You want to pass custom types between distributed components, be able to see the value(s) of your custom type within a deployer, and be able to read/write the custom type to/from XML files.

Solution: Develop two plugins that tell Orocos about your custom types.

Assumptions

  • The build directory is within the source directory. This helps with dynamic library loading.

Compatibility

Tested on v1.8 trunk on Mac OS X Leopard with omniORB from MacPorts, and Ubuntu Jaunty with ACE/TAO.

Files

See the attachments at the bottom of this page.

Overview

An RTT toolkit plugin provides information to Orocos about one or more custom types. This type of plugin allows RTT to display your types' values in a deployer and to load/save your types to/from XML files, and it provides constructors and operators that can be used to manipulate your types within program scripts and state machines.

An RTT transport plugin provides methods to transport your custom types across CORBA, and hence between distributed Orocos components.

This is a multi-part example demonstrating plugins for two boost::posix_time types: ptime and time_duration.

  • Part 1 Without the plugin creates components that use your custom type, and demonstrates that Orocos does not know anything about these types
  • Part 2 Toolkit plugin demonstrates an Orocos plugin that makes the types available to Orocos. In a deployer, you can now see the values of your custom types
  • Part 3 Transport plugin demonstrates an Orocos transport plugin making your custom types available across CORBA. Now you can pass types between deployers.
  • TBD Part 4 will demonstrate XML manipulation
  • TBD Part 5 will demonstrate accessors and manipulators for use in scripts and state machines

For additional information on plugins and their development, see [1].

Also, the KDL toolkit and transport plugins are good examples. See src/bindings/rtt in the KDL source.

Structure

The overall structure of this example is shown below
.
|-- BoostToolkit.cpp
|-- BoostToolkit.hpp
|-- CMakeLists.txt
|-- config
|   |-- FindACE.cmake
|   |-- FindCorba.cmake
|   |-- FindOmniORB.cmake
|   |-- FindOrocos-OCL.cmake
|   |-- FindOrocos-RTT.cmake
|   |-- FindTAO.cmake
|   |-- UseCorba.cmake
|   `-- UseOrocos.cmake
|-- corba
|   |-- BoostCorbaConversion.hpp
|   |-- BoostCorbaToolkit.cpp
|   |-- BoostCorbaToolkit.hpp
|   |-- BoostTypes.idl
|   |-- CMakeLists.txt
|   `-- tests
|       |-- CMakeLists.txt
|       |-- corba-combined.cpp
|       |-- corba-recv.cpp
|       `-- corba-send.cpp
`-- tests
    |-- CMakeLists.txt
    |-- combined.cpp
    |-- no-toolkit.cpp
    |-- recv.cpp
    |-- recv.hpp
    |-- send.cpp
    `-- send.hpp

The toolkit plugin is in the root directory, with supporting test files in the tests directory.

CMake support files are in the config directory.

The transport plugin is in the corba directory, with supporting test files in the corba/tests directory.

Limitations

Currently, this example does
  • Show how to write a plugin telling Orocos about your custom types
  • Show how to write a transport plugin allowing Orocos to move your custom types between deployers/processes.
  • Demonstrate how to test said plugins.
  • Use either ACE/TAO or OmniORB for CORBA support

Currently, this example does not yet

  • Show how to read/write the custom types to/from XML files
  • Provide manipulators and/or accessors of your custom types that can be used in scripts and state machines.
  • Demonstrate testing of the CORBA transport plugin within a single deployer, using two components. An optimization in RTT bypasses the CORBA mechanism in this case, rendering the test useless.
  • Deal with all of the intricacies of the boost types (eg all of the special values).

NB I could not find a method to get at the underlying raw 64-bit or 96-bit boost representation of ptime. Hence, the transport plugin inefficiently transports a ptime type using two separate data values. If you know of a method to get at the raw representation, I would love to know. Good luck in template land ...

References

[1] Extending the Real-Time Toolkit
Attachments:
  • BoostToolkit.hpp (2.64 KB)
  • BoostToolkit.cpp (3.58 KB)
  • CMakeLists.txt (1.83 KB)
  • corba/BoostCorbaToolkit.hpp (934 bytes)
  • corba/BoostCorbaToolkit.cpp (1.34 KB)
  • corba/BoostCorbaConversion.hpp (5.18 KB)
  • corba/CMakeLists.txt (738 bytes)
  • plugins.tar_.bz2 (14.24 KB)

Part 1 Without the plugin

This is a work in progress

This part creates components that use your custom type, and demonstrates that Orocos does not know anything about these types.

Files

See the attachments at the bottom of Developing plugins and toolkits.

To build

In a shell

cd /path/to/plugins
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=OFF
make

For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

Tested in Mac OS X Leopard 10.5.7.

To run

In a shell

cd /path/to/plugins/build
./no-toolkit

This starts a test case that uses an OCL taskbrowser to show two components: send and recv. If you issue an "ls" or "ls Send" command, you will get output similar to the following:

 Data Flow Ports: 
 RW(C)   unknown_t ptime          = (unknown_t)
 RW(C)   unknown_t timeDuration   = (unknown_t)

Each component has two ports, named ptime and timeDuration. Notice that both ports are connected "(C)", but that Orocos considers each an unknown type with unknown value.

Part 2 Toolkit plugin will build a toolkit plugin that allows Orocos to understand these types.

Part 2 Toolkit plugin

This is a work in progress

This part creates a toolkit plugin making our types known to Orocos.

Files

See the attachments at the bottom of Developing plugins and toolkits

To build

Everything needed for this part was built in Part 1.

To run

In a shell

cd /path/to/plugins/build
./combined

The combined test uses an OCL taskbrowser to show two components: send and recv. Typing an "ls" or "ls Send" command, as in Part 1, gives something like the following:

RW(C) boost_ptime ptime          = 2009-Aug-09 16:14:19.724622
RW(C) boost_timeduration timeDuration   = 00:00:00.200005

Note that Orocos now knows the correct types (eg boost_ptime) and can display each port's value. Issue multiple ls commands and you will see the values change. The ptime is simply the date and time at which the send component set the port value, and the duration is the time between port values being set on each iteration (ie it should be approximately the period of the send component).

Toolkit plugin

The toolkit plugin is defined in BoostToolkit.hpp.

namespace Examples
{
    /// \remark these do not need to be in the same namespace as the plugin
 
    /// put the time onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t);
    /// put the time duration onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d);
    /// get a time from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t);
    /// get a time duration from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d);
The toolkit plugin is contained in an Examples namespace. First, we define input and output stream operators for each of our types.

    class BoostPlugin : public RTT::ToolkitPlugin
    {
    public:
        virtual std::string getName();
 
        virtual bool loadTypes();
        virtual bool loadConstructors();
        virtual bool loadOperators();
    };
 
    /// The singleton for the Toolkit.
    extern BoostPlugin BoostToolkit;
The actual plugin class and singleton object are then defined. The plugin provides a name that is unique across all plugins, and contains information on the types, constructors and operators for each of our custom types.

    /// provide ptime type to RTT type system
    /// \remark the 'true' argument indicates that we supply stream operators
    struct BoostPtimeTypeInfo : 
        public RTT::TemplateTypeInfo<boost::posix_time::ptime,true> 
    {
        BoostPtimeTypeInfo(std::string name) :
                RTT::TemplateTypeInfo<boost::posix_time::ptime,true>(name)
        {};
        bool decomposeTypeImpl(const boost::posix_time::ptime& img, RTT::PropertyBag& targetbag);
        bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::ptime& img);
    };
 
    /// provide time duration type to RTT type system
    /// \remark the 'true' argument indicates that we supply stream operators
    struct BoostTimeDurationTypeInfo : 
        public RTT::TemplateTypeInfo<boost::posix_time::time_duration,true> 
    {
        BoostTimeDurationTypeInfo(std::string name) :
                RTT::TemplateTypeInfo<boost::posix_time::time_duration,true>(name)
        {};
        bool decomposeTypeImpl(const boost::posix_time::time_duration& img, RTT::PropertyBag& targetbag);
        bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::time_duration& img);
    };
 
} // namespace Examples
We then provide a type information class for each of our two custom types. These type info classes are the mechanism for Orocos to work with XML and our custom types. NB the 'true' boolean value passed to each TypeInfo class indicates that stream operators are available (as defined above).

The toolkit plugin implementation is in the BoostToolkit.cpp file.

namespace Examples
{
    using namespace RTT;
    using namespace RTT::detail;
    using namespace std;
 
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t)
    {
        os << boost::posix_time::to_simple_string(t);
        return os;
    }
 
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d)
    {
        os << boost::posix_time::to_simple_string(d);
        return os;
    }
 
    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t)
    {
        is >> t;
        return is;
    }
 
    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d)
    {
        is >> d;
        return is;
    }
After picking up some RTT namespaces, we declare the stream operators to use the underlying boost stream operators. TODO explain why we need these stream operators.

    BoostPlugin BoostToolkit;
 
    std::string BoostPlugin::getName()
    {
        return "Boost";
    }
Next we create the singleton instance of the plugin as BoostToolkit. TODO explain naming scheme. Then we declare the unique name of this plugin, "Boost".

    bool BoostPlugin::loadTypes()
    {
        TypeInfoRepository::shared_ptr ti = TypeInfoRepository::Instance();
 
        /* each quoted name here (eg "boost_ptime") must _EXACTLY_ match that
           in the associated TypeInfo::composeTypeImpl() and
           TypeInfo::decomposeTypeImpl() functions (in this file), as well as
           the name registered in the associated Corba plugin's 
           registerTransport() function (see corba/BoostCorbaToolkit.cpp)
        */
        ti->addType( new BoostPtimeTypeInfo("boost_ptime") );
        ti->addType( new BoostTimeDurationTypeInfo("boost_timeduration") );
 
        return true;
    }
The loadTypes() method provides the actual association for Orocos, from a type name to a TypeInfo class. This is how Orocos identifies a type at runtime. The choice of name is critical - it is what is shown in the deployer for an item's type, and should make immediate sense when you see it. It probably also should not be too long, to keep things readable within the deployer and taskbrowser. The name you use here for each type is very important and must match the names used in other places (TODO list the other places?).
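As a quick sanity check (a sketch only, not part of the attached sources), the registered name can be looked up again from the same repository after the toolkit has been imported:

// sketch: look the type back up by the name registered in loadTypes() above
RTT::TypeInfo* info = RTT::TypeInfoRepository::Instance()->type("boost_ptime");
assert(info && info->getTypeName() == "boost_ptime");
A NULL result here means either that the toolkit was never imported, or that the name does not match the one passed to addType().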

    bool BoostPlugin::loadConstructors()
    {
        // no constructors for these particular types
 
        return true;
    }
 
    bool BoostPlugin::loadOperators()
    {
        // no operators for these particular types
 
        return true;
    }
Currently this example does not provide any constructors or operators usable in program scripts and state machines. TODO update this.

    bool BoostPtimeTypeInfo::decomposeTypeImpl(const boost::posix_time::ptime& source, 
                                             PropertyBag& targetbag)
    {
        targetbag.setType("boost_ptime");
        assert(0);
        return true;
    }
 
    bool BoostPtimeTypeInfo::composeTypeImpl(const PropertyBag& bag, 
                                              boost::posix_time::ptime& result)
    {
        if ( "boost_ptime" == bag.getType() ) // ensure is correct type
        {
            // \todo
            assert(0);
        }
        return false;
    }
The implementation of a TypeInfo class for one of our custom types must use the same type name as used in loadTypes() above. These functions would also provide the mechanism to load/save the type to/from XML. TODO update this.
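To give a flavour of where this is heading (a sketch only - the attached sources leave these functions as asserts/TODOs, and the property layout shown here is just one possibility), decomposeTypeImpl() could fill the bag with a single string property:

bool BoostPtimeTypeInfo::decomposeTypeImpl(const boost::posix_time::ptime& source,
                                           PropertyBag& targetbag)
{
    targetbag.setType("boost_ptime");
    // sketch: store the time as one string property; ownership/cleanup of the
    // Property object is glossed over here
    targetbag.add(new Property<std::string>("time", "time as simple string",
                      boost::posix_time::to_simple_string(source)));
    return true;
}
The matching composeTypeImpl() would check the bag type, fetch the "time" property and parse it back into a ptime.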

ORO_TOOLKIT_PLUGIN(Examples::BoostToolkit)
This macro (lying outside the namespace!) takes the fully qualified singleton, and makes it available to the RTT type system at runtime. It basically makes the singleton identifiable as an RTT toolkit plugin, when Orocos loads the dynamic library formed from this toolkit.

Build system

Now the build system takes this .cpp file, and turns it into a dynamic library. We are going to examine the root CMakeLists.txt to see how to create this library, but for now, we will ignore the corba parts of that file.

cmake_minimum_required(VERSION 2.6)
 
# pick up additional cmake package files (eg FindXXX.cmake) from this directory
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/config")
First we enforce the minimum CMake version we require, and ensure that we can pick up FindXXX.cmake files from our config directory

find_package(Orocos-RTT 1.6.0 REQUIRED corba)
find_package(Orocos-OCL 1.6.0 REQUIRED taskbrowser)
We have to find both Orocos RTT and Orocos OCL, and we require the additional corba component from RTT and the additional taskbrowser component from OCL.

include(${CMAKE_SOURCE_DIR}/config/UseOrocos.cmake)
The UseOrocos.cmake file makes RTT and OCL available to us, and provides us with some useful macros (eg create_component).

create_component(BoostToolkit-${OROCOS_TARGET} 
  VERSION 1.0.0 
  BoostToolkit.cpp)
 
TARGET_LINK_LIBRARIES(BoostToolkit-${OROCOS_TARGET} 
  boost_date_time)
The create_component macro makes an Orocos shared library for us. This library will contain only our toolkit plugin. Note that we make the library name dependent on the Orocos target we are building for (eg macosx or gnulinux). This allows us to have plugins for multiple architectures on the same machine (typically, gnulinux and xenomai, or similar). We also have to link the shared library against the boost "date time" library, as we are using certain boost functionality that is not available in the header files.

SUBDIRS(tests)
Lastly we also build the 'tests' directory.

Tests

There are two very simple test components that communicate each of our custom types between them. Tests are very important when developing plugins. Trying to debug a plugin within a complete system is a daunting challenge - do it in isolation first.

The send component regularly updates the current time on its ptime port, and the duration between ptime port updates on its timeDuration port.

class Send : public RTT::TaskContext
{
public:
    RTT::DataPort<boost::posix_time::ptime>                ptime_port;
    RTT::DataPort<boost::posix_time::time_duration>    timeDuration_port;
public:
    Send(std::string name);
    virtual ~Send();
 
    virtual bool startHook();
    virtual void updateHook();
protected:
    boost::posix_time::ptime    lastNow;
};

The implementation is very simple, and will not be discussed in detail here.

#include "send.hpp"
 
Send::Send(std::string name) :
        RTT::TaskContext(name),
        ptime_port("ptime"),
        timeDuration_port("timeDuration")
{
    ports()->addPort(&ptime_port);
    ports()->addPort(&timeDuration_port);
}
 
Send::~Send()
{
}
 
bool Send::startHook()
{
    // just set last to now
    lastNow            = boost::posix_time::microsec_clock::local_time();
    return true;
}
 
void Send::updateHook()
{
    boost::posix_time::ptime            now;
    boost::posix_time::time_duration    delta;
 
    // send the current time, and the duration since the last updateHook()
    now        = boost::posix_time::microsec_clock::local_time();
    delta   = now - lastNow;
 
    ptime_port.Set(now);
    timeDuration_port.Set(delta);
 
    lastNow = now;
}

The recv component has the same ports but does nothing. It is simply an empty receiver component, that allows us to view its ports within the deployer.

class Recv : public RTT::TaskContext
{
public:
    RTT::DataPort<boost::posix_time::ptime>                ptime_port;
    RTT::DataPort<boost::posix_time::time_duration>        timeDuration_port;
 
public:
    Recv(std::string name);
    virtual ~Recv();
};

And the recv implementation.

#include "recv.hpp"
 
Recv::Recv(std::string name) :
        RTT::TaskContext(name),
        ptime_port("ptime"),
        timeDuration_port("timeDuration")
{
    ports()->addPort(&ptime_port);
    ports()->addPort(&timeDuration_port);
}
 
Recv::~Recv()
{
}

Now the combined test program just combines one of each test component directly within the same executable.

#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
#include <ocl/TaskBrowser.hpp>
 
#include "send.hpp"
#include "recv.hpp"
#include "../BoostToolkit.hpp"
 
using namespace std;
using namespace Orocos;
 
int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import(Examples::BoostToolkit);
This forcibly loads our toolkit plugin.

 
    Recv                recv("Recv");
    PeriodicActivity    recv_activity(ORO_SCHED_OTHER, 0, 0.1, recv.engine());
    Send                 send("Send");
    PeriodicActivity    send_activity(ORO_SCHED_OTHER, 0, 0.2, send.engine());
 
    if ( connectPeers( &send, &recv ) == false )
    {
        log(Error) << "Could not connect peers !"<<endlog();
        return -1;
    }
    if ( connectPorts( &send, &recv) == false )
    {
        log(Error) << "Could not connect ports !"<<endlog();
        return -1;
    }
Connects the ports of the two components and makes them peers.

    send.configure();
    recv.configure();
    send_activity.start();
    recv_activity.start();
 
    TaskBrowser browser( &recv );
    browser.setColorTheme( TaskBrowser::whitebg );
    browser.loop();
Configures and starts both components, and then runs an OCL::TaskBrowser over the receive component.

    send_activity.stop();
    recv_activity.stop();
 
    return 0;
}
Stops and exits cleanly.

The difference between the combined and no-toolkit test programs essentially amounts to the latter not loading the toolkit plugin.

Part 3 Transport plugin will build a transport plugin allowing Orocos to communicate these types across CORBA.

Part 3 Transport plugin

This is a work in progress

This part builds a transport plugin allowing Orocos to communicate these types across CORBA.

Files

See the attachments at the bottom of Developing plugins and toolkits

To build

In a shell

cd /path/to/plugins
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=ON
make

The only difference from building in Part 1 is that CORBA is turned ON.

For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

Tested in Mac OS X Leopard 10.5.7.

To run

In a shell

cd /path/to/plugins/build/corba/tests
./corba-recv

In a second shell

cd /path/to/plugins/build/corba/tests
./corba-send

Now the exact same two test components of Parts 1 and 2 are in separate processes. Typing ls in either process will present the same values (subject to network latency, which is typically not perceptible to a human) - the data and types are now being communicated between deployers.

Now, the transport plugin is responsible for communicating the types between deployers, while the toolkit plugin is responsible for knowing each type and being able to display it. Separate responsibilities. Separate plugins.

NB for the example components, send must be started after recv. Starting only corba-recv and issuing ls will display the default values for each type. Also, quitting the send component and then attempting to use the recv component will lock up the recv deployer. These limitations are not due to the plugins - they are simply due to the limited functionality of these test cases.

Without the transport plugin

Running the same two corba test programs without loading the transport plugin is instructive as to what happens when you do not match up certain things in the toolkit sources. This is very important!

In a shell

cd /path/to/plugins/build/corba/tests
./corba-recv-no-toolkit
An ls in the recv component now gives
 Data Flow Ports: 
 RW(U) boost_ptime ptime          = not-a-date-time
 RW(U) boost_timeduration timeDuration   = 00:00:00
This is expected, as we have not connected the send component yet and so recv has default values.

In a second shell

cd /path/to/plugins/build/corba/tests
./corba-send-no-toolkit

The send component without the transport plugin fails to start, with:

$ ./build/corba/tests/corba-send-no-toolkit 
0.008 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Forcing priority (0) of thread to 0.
0.008 [ Warning][PeriodicThread] Forcing priority (0) of thread to 0.
0.027 [ Warning][SingleThread] Forcing priority (0) of thread to 0.
5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] ControlTask 'Send' already bound \
to CORBA Naming Service.
5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Trying to rebind... done. New \
ControlTask bound to Naming Service.
5.130 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Can not create a proxy for data \
connection.
5.130 [ ERROR  ][./build/corba/tests/corba-send-no-toolkit::main()] Dynamic cast failed \
for 'PN3RTT14DataSourceBaseE', 'unknown_t', 'unknown_t'. Do your typenames not match?
Assertion failed: (doi && "Dynamic cast failed! See log file for details."), function createConnection, \
file /opt/install/include/rtt/DataPort.hpp, line 462.
Abort trap
The culprit here is that we tried to pass unknown types through CORBA. While the toolkit plugin tells Orocos about a type, it takes a transport plugin to tell Orocos how to communicate the type. The above failure indicates that Orocos came across a type named unknown_t and did not know how to deal with it. We will cover this more later in the tutorial, and specifically where and why this occurs. As a matter of interest, comparing the sources of corba/tests/corba-recv.cpp and corba/tests/corba-recv-no-toolkit.cpp, the differences are
*** corba/tests/corba-recv.cpp    2009-07-29 22:08:32.000000000 -0400
--- corba/tests/corba-recv-no-toolkit.cpp    2009-08-09 16:32:03.000000000 -0400
***************
*** 11,17 ****
  #include <rtt/os/main.h>
  #include <rtt/Ports.hpp>
 
- #include "../BoostCorbaToolkit.hpp"
  #include "../../BoostToolkit.hpp"
 
  // use Boost RTT Toolkit test components
--- 11,16 ----
***************
*** 27,33 ****
  int ORO_main(int argc, char* argv[])
  {
      RTT::Toolkit::Import( Examples::BoostToolkit  );
-     RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
 
      Recv                recv("Recv");
      PeriodicActivity    recv_activity(
--- 26,31 ----
We simply did not load the transport plugin.

Transport plugin

The transport plugin implementation spans three files. We will cover them in turn

Defining CORBA types

We define the CORBA types in corba/BoostTypes.idl. This is a file in CORBA's Interface Description Language (IDL). There are plenty of references on the web, for instance [1].

// must be in RTT namespace to match some rtt/corba code
module RTT {
module Corba {
These structures must be in the RTT::Corba namespace.

    struct time_duration
    {
        short    hours;
        short    minutes;
        short    seconds;
        long    nanoseconds;
    };
We send a time duration as individual time components. Note that we avoid boost's fractional_seconds fiasco, and always send nanoseconds even if the sender or receiver implementation only supports microseconds.

    // can't get at underlying type, so send this way (yes, more overhead)
    // see BoostCorbaConversion.hpp::struct AnyConversion<boost::posix_time::ptime>
    // for further details.
    struct ptime
    {
        // julian day
        long            date;    
        time_duration    time_of_day;
    };
};
};
I was not able to find a way to get at the native 64 or 96 bits that define a ptime value. Consequently, we inefficiently send a ptime as a Julian day and a time duration within that day. Adequate for an example, but definitely more data than we would like to send.

Note that CORBA IDL knows about certain types already, e.g. short and long, and that we can use our time_duration structure in later structures.

We will come back to this IDL file during the build process.

The transport plugin

The actual plugin is defined in corba/BoostCorbaToolkit.hpp. This is the equivalent of the BoostToolkit.hpp file, except for a transport plugin.

namespace Examples {
namespace Corba {
 
    class CorbaBoostPlugin : public RTT::TransportPlugin
    {
    public:
        /// register this transport into the RTT type system
        bool registerTransport(std::string name, RTT::TypeInfo* ti);
 
        /// return the name of this transport type (ie "CORBA")
        std::string getTransportName() const;
 
        /// return the name of this transport
        std::string getName() const;
    };
 
    // the global instance
    extern CorbaBoostPlugin     corbaBoostPlugin;
 
// namespace
}
}
The transport plugin provides its name, the name of its transport mechanism, and a function to register the transport into Orocos. Note that no types are mentioned here, as that is taken care of by the toolkit plugin. A transport plugin without a corresponding toolkit plugin is useless: Orocos will not know about the types and hence will never even get as far as looking up transports for them.

The implementation of the plugin is in corba/BoostCorbaToolkit.cpp, and is very straightforward.

namespace Examples {
namespace Corba {
 
bool CorbaBoostPlugin::registerTransport(std::string name, TypeInfo* ti)
{
    assert( name == ti->getTypeName() );
    // name must match that in plugin::loadTypes() and 
    // typeInfo::composeTypeInfo(), etc
    if ( name == "boost_ptime" )
        return ti->addProtocol(ORO_CORBA_PROTOCOL_ID, new CorbaTemplateProtocol< boost::posix_time::ptime >() );
    if ( name == "boost_timeduration" )
        return ti->addProtocol(ORO_CORBA_PROTOCOL_ID, new CorbaTemplateProtocol< boost::posix_time::time_duration >() );
    return false;
}
Registering a transport registers each type for a given transport protocol (the ORO_CORBA_PROTOCOL_ID above, defined in rtt/src/corba/CorbaLib.hpp). Each type of transport must have a unique protocol ID, though currently Orocos only supports one, CORBA. Registration occurs automatically when the transport is loaded.

std::string CorbaBoostPlugin::getTransportName() const {
    return "CORBA";
}
 
std::string CorbaBoostPlugin::getName() const {
    return "CorbaBoost";
}
The plugin's name is CorbaBoost, and must be unique within all plugins (transport and toolkit, I believe). We choose to prefix the name of our toolkit plugin with Corba, to keep them recognizable.

For a CORBA transport plugin, the name returned by getTransportName() should be CORBA.

CorbaBoostPlugin corbaBoostPlugin;
 
// namespace
}
}
 
ORO_TOOLKIT_PLUGIN(Examples::Corba::corbaBoostPlugin);
Finally, the plugin itself is instantiated, and the appropriate macro is used so that Orocos can identify this as a plugin when loading it from a dynamic library.
 

Converting types

I will only cover the code for converting one of the types. The other is very similar - you can examine it yourself in the source file.

#include "BoostTypesC.h"
#include <rtt/corba/CorbaConversion.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>  // no I/O
Here we pick up the RTT CORBA conversion support and the boost posix_time types. We also pick up BoostTypesC.h. This is a file that CORBA generates from our BoostTypes.idl file above, and contains CORBA-specific code. Ignore its contents, but just realise that it is generated from the .idl file.

// must be in RTT namespace to match some rtt/corba code
namespace RTT
{
For some historical reason, I believe this has to be in the RTT namespace. Not sure if that is still true, but ... maybe it is to match the generated output from the .idl file?

template<>
struct AnyConversion< boost::posix_time::time_duration >
{
    // define the Corba and standard (ie non-Corba) types we are using
    typedef Corba::time_duration                CorbaType;
    typedef boost::posix_time::time_duration    StdType;
Here we define some shorthand types, to make typing easier. I also find that having these two type names this way, Corba vs Std, makes it easier to read some of the later code. The actual Corba::time_duration type comes from the files generated from our .idl file.

The last four of the following six functions are required by the CORBA library, to enable conversion between the CORBA and non-CORBA types. The two convert functions are there for convenience, and to save replicating code.

    // convert CorbaType to StdTypes
    static void convert(const CorbaType& orig, StdType& ret) 
    {
        ret = boost::posix_time::time_duration(orig.hours,
                                               orig.minutes,
                                               orig.seconds,
                                               orig.nanoseconds);
    }
 
    // convert StdType to CorbaTypes
    static void convert(const StdType& orig, CorbaType& ret) 
    {
        ret.hours       = orig.hours();
        ret.minutes     = orig.minutes();
        ret.seconds     = orig.seconds();
        ret.nanoseconds = orig.fractional_seconds();
    }
The above two functions do the actual work of converting data to/from the CORBA and standard types. In this case we can basically copy individual data members - more complicated types may require further conversions, manipulation, etc.

    static CorbaType* toAny(const StdType& orig) {
        CorbaType* ret = new CorbaType();
        convert(orig, *ret);
        return ret;
    }
 
    static StdType get(const CorbaType* orig) {
        StdType ret;
        convert(*orig, ret);
        return ret;
    }
 
    static bool update(const CORBA::Any& any, StdType& ret) {
        CorbaType* orig;
        if ( any >>= orig ) 
        {
            convert(*orig, ret);
            return true;
        }
        return false;
    }
 
    static CORBA::Any_ptr createAny( const StdType& t ) {
        CORBA::Any_ptr ret = new CORBA::Any();
        *ret <<= toAny( t );
        return ret;
    }
};
The above four functions are, as previously mentioned, the standard interface to convert types to/from CORBA types. While the syntax might appear a little strange to you (e.g. the "<<=" operator), you can just copy the above for your own custom types (I copy these between transport plugins, an advantage of declaring the CORBA and standard typedefs at the top). Note well the dynamic allocation in the toAny() function: transport plugins are most definitely not real-time capable.

The same six functions then follow for our boost::ptime type. They are not covered in detail here.
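As a sketch of what the two ptime convert() functions amount to (the real code is in the attached corba/BoostCorbaConversion.hpp and may differ in detail; the same CorbaType/StdType typedef pattern as above is assumed):

// sketch: split a ptime into its Julian day number plus the time of day,
// reusing the time_duration conversion shown above
static void convert(const StdType& orig, CorbaType& ret)
{
    ret.date = orig.date().julian_day();
    AnyConversion<boost::posix_time::time_duration>::convert(orig.time_of_day(),
                                                             ret.time_of_day);
}
 
// sketch: rebuild the date from the Julian day number, then add the time of day
static void convert(const CorbaType& orig, StdType& ret)
{
    boost::posix_time::time_duration tod;
    AnyConversion<boost::posix_time::time_duration>::convert(orig.time_of_day, tod);
    boost::gregorian::date d(
        boost::gregorian::gregorian_calendar::from_julian_day_number(orig.date));
    ret = boost::posix_time::ptime(d, tod);
}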

Build system

IF (ENABLE_CORBA)
 
  INCLUDE(${CMAKE_SOURCE_DIR}/config/UseCorba.cmake)
This include ensures we know about the CORBA library, and also picks up some CMake macros we need.

  FILE( GLOB IDLS [^.]*.idl )
  FILE( GLOB CPPS [^.]*.cpp )
  ORO_ADD_CORBA_SERVERS(CPPS HPPS ${IDLS} )
The ORO_ADD_CORBA_SERVERS CMake macro, which we get from UseCorba.cmake, takes a list of source files (CPPS), a list of header files (HPPS - we have none here) and a list of interface description files (IDLS), and creates the necessary CMake code to generate the CORBA files from the IDL files. Basically, this takes our BoostTypes.idl file and produces header and source files to deal with that CORBA type. Note that this macro appends to the existing files listed in CPPS and HPPS - we'll need them shortly.

  INCLUDE_DIRECTORIES( ${CMAKE_CURRENT_BINARY_DIR}/. )
We now have our own source files in the source directory, as well as source files generated into the build directory. This ensures that the generated files in the build directory can be picked up as well.

  CREATE_COMPONENT(BoostToolkit-corba-${OROCOS_TARGET} 
    VERSION 1.0.0
    ${CPPS})
  TARGET_LINK_LIBRARIES(BoostToolkit-corba-${OROCOS_TARGET}     
    ${OROCOS-RTT_CORBA_LIBRARIES}
    ${CORBA_LIBRARIES})
Here we create a component shared library that contains only the transport plugin. Note that the library contains all the source files in the CPPS CMake variable, which now holds all the .cpp files in this directory (due to the FILE(GLOB ...) statement) as well as the source files generated by the ORO_ADD_CORBA_SERVERS macro. These make up our transport toolkit. Fundamentally, the transport toolkit shared library is no different from a shared library of standard components, except for a tiny bit of C++ code that comes out of the ORO_TOOLKIT_PLUGIN() macro at the end of the BoostCorbaToolkit.cpp file. RTT then recognizes this shared library as containing a transport plugin.

  SUBDIRS(tests)
 
ENDIF (ENABLE_CORBA)
And lastly, pick up the tests.

Tests

The corba test programs contain one component each, so that the two components are distributed across processes and hence require the CORBA transport plugin. The exact same send and receive test components from Part 2 are used.

The corba-send test program instantiates a send component, and uses an RTT ControlTaskProxy to represent the remote receive component.

#include <rtt/corba/ControlTaskServer.hpp>
#include <rtt/corba/ControlTaskProxy.hpp>
#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
#include <ocl/TaskBrowser.hpp>
 
#include "../BoostCorbaToolkit.hpp"
#include "../../BoostToolkit.hpp"
 
#include "../../tests/send.hpp"
 
using namespace std;
using namespace Orocos;
using namespace RTT::Corba;
 
int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import( Examples::BoostToolkit  );
    RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
Import both the toolkit and transport plugins.

    Send                send("Send");
    PeriodicActivity    send_activity(
        ORO_SCHED_OTHER, 0, 1.0 / 10, send.engine());   // 10 Hz
 
    // start Corba and find the remote task
    ControlTaskProxy::InitOrb(argc, argv);
    ControlTaskServer::ThreadOrb();
Initialize the CORBA Orb, and then thread it (yes, this does use Proxy and Server functions - this is ok). This puts the CORBA Orb in a background thread, allowing us to run the taskbrowser (below) in the main thread.

    TaskContext* recv = ControlTaskProxy::Create( "Recv" );
    assert(NULL != recv);
Creates a proxy task context for a remote component named "Recv". This will use the name service (by default) to find this component.

    if ( connectPeers( recv, &send ) == false )
    {
        log(Error) << "Could not connect peers !"<<endlog();
    }
    // create data object at recv's side
    if ( connectPorts( recv, &send) == false )
    {
        log(Error) << "Could not connect ports !"<<endlog();
    }
Connect the local send component to the proxy recv component.

    send.configure();
    send_activity.start();
    log(Info) << "Starting task browser" << endlog();
    OCL::TaskBrowser tb( recv );
    tb.loop();
    send_activity.stop();
Start a task browser on the proxy component. We are, after all, interested in the reception of the data. You could have instead run the taskbrowser on the send component.

    ControlTaskProxy::DestroyOrb();
 
    return 0;
}
Cleanly shut down the Orb and exit.

The receive test program has a similar structure to the send test program.

#include <rtt/corba/ControlTaskServer.hpp>
#include <rtt/corba/ControlTaskProxy.hpp>
#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
 
#include "../BoostCorbaToolkit.hpp"
#include "../../BoostToolkit.hpp"
 
#include "../../tests/recv.hpp"
 
#include <ocl/TaskBrowser.hpp>
 
using namespace std;
using namespace Orocos;
using namespace RTT::Corba;
 
 
int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import( Examples::BoostToolkit  );
    RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
 
    Recv                recv("Recv");
    PeriodicActivity    recv_activity(
        ORO_SCHED_OTHER, 0, 1.0 / 5, recv.engine());    // 5 Hz
 
    // Setup Corba and Export:
    ControlTaskServer::InitOrb(argc, argv);
    ControlTaskServer::Create( &recv );
    ControlTaskServer::ThreadOrb();
We make the receive component a CORBA server, meaning that the send component will connect to this component. It could have been done the other way around - in this example, it simply impacts which test program has to be started first (the server must be running for the client to connect to it). Again we thread the ORB to put it in its own background thread.

    // Wait for requests:
    recv.configure();
    recv_activity.start();
    OCL::TaskBrowser tb( &recv );
    tb.loop();
    recv_activity.stop();
Run the taskbrowser on the receive component (in the main thread). Note that the send component is not mentioned anywhere. The "server" does not know about any "clients", but the "clients" do need to know about the server.

    // Cleanup Corba:
    ControlTaskServer::ShutdownOrb();
    ControlTaskServer::DestroyOrb();
 
    return 0;
}
Cleanly shut down and destroy the CORBA Orb and exit.

The no-toolkit versions of the test programs are identical, except they simply do not load the transport plugin, making it impossible to transport the boost types over CORBA.

References

[1] http://www.iona.com/support/docs/manuals/orbix/33/html/orbix33cxx_pguide/IDL.html

Experienced Users

Now located at http://orocos.org/wiki/rtt/examples-and-tutorials

Name connections, not ports (aka Orocos' best kept secret)

Rationale

Problem: How to reuse a component when you need the ports to have different names?

Solution: Name the connection between ports in the deployer. This essentially allows you to rename ports. Unfortunately, this extremely useful feature is not documented anywhere (as of July, 2009).

Assumptions

  • The build directory is within the source directory. This helps with dynamic library loading.
  • Admittedly, this is a contrived example, but the structure is very useful and occurs more frequently than you may realise (say using N copies of a camera component, or deploying components for both a left and a right robot arm within the same deployer).

Files

HMI.hpp

Robot.hpp

OneAxisFilter.hpp

HMI.cpp

Robot.cpp

OneAxisFilter.cpp

Connect-1.xml

Connect-2.xml

Connect-3.xml

Buildable tarball

Example overview

This example occurs in three parts

  1. A Human-Machine-Interface (HMI) component connects to a Robot component, and provides a desired cartesian position.
  2. A one-axis filter is placed between the HMI and Robot component, to zero out one axis (say, you did not want the robot to move in one direction due to an obstacle or something similar)
  3. A second one-axis filter is placed between the first filter and the Robot. The two filters are the exact same component with the same named ports.

Components

class HMI : public RTT::TaskContext
{
protected:
    // *** OUTPUTS ***
 
    /// desired cartesian position
    RTT::WriteDataPort<KDL::Frame>            cartesianPosition_desi_port;
 
public:
    HMI(std::string name);
    virtual ~HMI();
 
protected:
    /// set the desired cartesian position to an initial value
    /// \return true
    virtual bool startHook();
};
The HMI provides one output port that specifies the desired cartesian position. This is set to an initial value in startHook().
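A sketch of what that startHook() might look like (the real code is in the attached HMI.cpp; the initial value here is purely illustrative):

bool HMI::startHook()
{
    // illustrative initial value only: identity orientation at the origin
    cartesianPosition_desi_port.Set(KDL::Frame::Identity());
    return true;
}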

class Robot : public RTT::TaskContext
{
protected:
    // *** INPUTS ***
 
    /// desired cartesian position
    RTT::ReadDataPort<KDL::Frame>            cartesianPosition_desi_port;
 
public:
    Robot(std::string name);
    virtual ~Robot();
};
The robot accepts a desired cartesian position as input (but in this example does nothing with it).

class OneAxisFilter : public RTT::TaskContext
{
protected:
    // *** INPUTS ***
 
    /// desired cartesian position
    RTT::ReadDataPort<KDL::Frame>            inputPosition_port;
 
    // *** OUTPUTS ***
 
    /// desired cartesian position
    RTT::WriteDataPort<KDL::Frame>            outputPosition_port;
 
    // *** CONFIGURATION ***
 
    /// specify which axis to filter (should be one of "x", "y", or "z")
    RTT::Property<std::string>                axis_prop;
 
public:
    OneAxisFilter(std::string name);
    virtual ~OneAxisFilter();
 
protected:
    /// validate axis_prop value
    /// \return true if axis_prop value is valid, otherwise false
    virtual bool configureHook();
    /// filter one translational axis (as specified by axis_prop)
    virtual void updateHook();
};
The OneAxisFilter component takes an input cartesian position, zeroes out one axis (the axis of interest is specified in a property), and then outputs the filtered cartesian position.

Component implementation

The component implementations are not given in this example, as they are not the interesting part of the solution, but are available in the Files section above.
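That said, the core of the filter amounts to roughly the following sketch (not the actual contents of OneAxisFilter.cpp):

void OneAxisFilter::updateHook()
{
    // read the input frame, zero the configured translation axis, write it out
    KDL::Frame frame = inputPosition_port.Get();
    if      (axis_prop.value() == "x") frame.p.x(0.0);
    else if (axis_prop.value() == "y") frame.p.y(0.0);
    else if (axis_prop.value() == "z") frame.p.z(0.0);
    outputPosition_port.Set(frame);
}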

The interesting part is in the deployment files ...

Deployment

Part 1: HMI and Robot

This part simply connects the HMI and robot together (see deployment file Connect-1.xml).

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
 
  <simple name="Import" type="string">
    <value>liborocos-rtt</value>
  </simple>
  <simple name="Import" type="string">
    <value>liborocos-kdl</value>
  </simple>
  <simple name="Import" type="string">
    <value>liborocos-kdltk</value>
  </simple>
  <simple name="Import" type="string">
    <value>libConnectionNaming</value>
  </simple>
The first section of the deployment file simply loads the Orocos libraries we use (including the KDL toolkit, so that we can inspect and modify KDL types within the deployer), and then loads our shared library (libConnectionNaming).

  <struct name="HMI" type="HMI">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
The next section creates an HMI component, with the connection for its output port named "cartesianPosition_desi" (ie the same as the port name). The syntax for port/connection naming is:

      <simple name="portName" type="string">
        <value>connectionName</value>
      </simple>
which makes port portName part of connection connectionName.

  <struct name="Robot" type="Robot">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>HMI</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
</properties>
Lastly, the robot component is created with its input port on a connection named cartesianPosition_desi.

Now, the deployer uses connection names when connecting components between peers, not port names. So it attempts to connect the Robot's cartesianPosition_desi connection to the HMI's cartesianPosition_desi connection (which, in this part, matches the port names).

Build the library, and then run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-1.xml

Examine the HMI and Robot components, and note that each has a connected port, and the port values match.

Part 2: HMI, one filter and a robot

This part adds a filter component between the HMI and the robot (see Connect-2.xml)

As with Part 1, the first part of the file loads the appropriate libraries (left out here, as it is identical to Part 1).

  <struct name="HMI" type="HMI">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>unfiltered_cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
Again an HMI component is deployed, except this time the deployer will connect the cartesianPosition_desi port as part of a connection named unfiltered_cartesianPosition_desi.

  <struct name="Filter" type="OneAxisFilter">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>HMI</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="inputPosition" type="string">
        <value>unfiltered_cartesianPosition_desi</value></simple>
      <simple name="outputPosition" type="string">
        <value>filtered_cartesianPosition_desi</value></simple>
    </struct> 
    <simple name="PropertyFile" type="string">
      <value>../Filter1.cpf</value></simple>
  </struct>
The Filter component is deployed with its input port being part of a connection named unfiltered_cartesianPosition_desi, while its output port is part of a connection named filtered_cartesianPosition_desi. Comparing this with the HMI port/connections above and the Robot port/connections below, you can see that the Filter's input port is connected to the HMI and its output port is connected to the Robot.

  <struct name="Robot" type="Robot">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>Filter</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>filtered_cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
The robot component is the same as Part 1, except that its input port is part of a connection named filtered_cartesianPosition_desi (ie connected to the Filter).

Run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-2.xml

Examine all three components, and note that all ports are connected, and in particular, that the HMI and Filter.inputPosition ports match while the Filter.outputPosition and Robot ports match (ie they have the 'x' axis filtered out).

Using connection naming allows us to connect ports of different names. This is particularly useful with a generic component like this filter, as in one deployment it may connect to a component with ports named cartesianPosition_desi, while in another deployment it may connect to ports named CartDesiPos, or any other names. The filter component is now decoupled from the actual port names used to deploy it.

Part 3: HMI, two filters and a robot

This part adds a second filter between the first filter and the robot.

As with Parts 1 and 2, the libraries are loaded first.

  <struct name="HMI" type="HMI">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>unfiltered_cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
There is no change in the HMI from Part 2.

  <struct name="Filter1" type="OneAxisFilter">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>HMI</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="inputPosition" type="string">
        <value>unfiltered_cartesianPosition_desi</value></simple>
      <simple name="outputPosition" type="string">
        <value>filtered_cartesianPosition_desi</value></simple>
    </struct> 
    <simple name="PropertyFile" type="string">
      <value>../Filter1.cpf</value></simple>
  </struct>
There is no change in the first filter from Part 2.

  <struct name="Filter2" type="OneAxisFilter">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>HMI</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="inputPosition" type="string">
        <value>filtered_cartesianPosition_desi</value></simple>
      <simple name="outputPosition" type="string">
        <value>double_filtered_cartesianPosition_desi</value></simple>
    </struct> 
    <simple name="PropertyFile" type="string">
      <value>../Filter2.cpf</value></simple>
  </struct>
The second filter has its input port as part of a connection named filtered_cartesianPosition_desi (ie it is connected to Filter1's output port), and its output port as part of a connection named double_filtered_cartesianPosition_desi (which, as you will see, is connected to the robot's input port).

  <struct name="Robot" type="Robot">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>Filter2</value></simple>
    </struct> 
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>double_filtered_cartesianPosition_desi</value></simple>
    </struct> 
  </struct>
The only change in the robot component, from Part 2, is to change its peer to Filter2 and to use a connection named double_filtered_cartesianPosition_desi (ie connect it to Filter2).

Run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-3.xml

Examine all components, and note which ports are connected, and what their values are. Note that the Robot has two axes knocked out (x and y).

Points to note

  1. WARNING The deployer displays port names for ports within components, and the OCL reporting component also uses port names. Only the act of connecting ports between peers when deploying a component network makes use of the connection naming shown above.
  2. Using connection naming allows us to reuse a component without resorting to renaming its ports or modifying its code in any way. This is an example of deployment-time configuration. Note that there are certainly instances where run-time configuration of port-names may be needed (eg the component has to name its ports based on the component name itself), but in our experience, deployment-time configuration is more frequent and decouples components better.
  3. Note that as many filters as are required could be chained together in this manner, and that none of the input, output, nor filter components need know that they are connected in such a fashion. Decoupling is your friend, and allowed the Filter component writer to simply concentrate on writing a component that did one thing well: filtered a cartesian position (yes, a trivial example, but a valid point nonetheless).
  4. You may notice that the deployment files do not specify peer combinations in pairs. The peers are mentioned in one direction only. We use this to decouple (yet again) a component from knowing what peers it is connected to, where possible. For example, Filter1 in both Parts 2 and 3 does not know what component is downstream from it. It doesn't know, nor does it care, whether its output is being filtered again, connected to a robot, or whatever. Again, decoupling. This can dramatically help when deploying large systems.

To build

In a shell

cd /path/to/ConnectionNaming
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx
make

For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

Tested in Mac OS X Leopard 10.5.7.

Attachment                       Size
HMI.hpp                          2.16 KB
Robot.hpp                        1.99 KB
OneAxisFilter.hpp                2.52 KB
HMI.cpp                          2.04 KB
Robot.cpp                        1.94 KB
OneAxisFilter.cpp                2.92 KB
Connect-1.xml                    1.96 KB
Connect-2.xml                    2.91 KB
Connect-3.xml                    3.85 KB
connectionNaming.tar_.bz2        7.01 KB

Simple Examples

Now located at http://orocos.org/wiki/rtt/examples-and-tutorials

Simple TCP client using non-periodic component

Rationale

Problem: You want a component that connects to a remote TCP server, and reads data from it (this example could easily write, instead of reading). This component will block for varying amounts of time when reading.

Solution: Use a non-periodic component. This example outlines one method to structure the component, to deal with the non-blocking reads while still being responsive to other components, being able to run a state machine, etc.

<!-- break -->

Assumptions

  • Uses Qt sockets to avoid operating-system intracacies and differences when using actual sockets. The code can easily be modified to use bind(), accept(), listen(), etc. instead. It is the structure of the solution that we are interested in.
  • The build directory is within the source directory. This helps with dynamic library loading.
  • Does not attempt reconnection if unable to connect on the first attempt.
  • Non-robust error handling.
  • Does not validate property values (a robust component would validate that the timeouts were valid, eg. not negative, within a configureHook()).

Files

SimpleNonPeriodicClient.cpp

SimpleNonPeriodicClient.hpp

SimpleNonPeriodicClient.xml

SimpleNonPeriodicClient.cpf

Buildable tarball

The .cpf file has a .txt extension simply to keep the wiki happy. To use the file, rename it to SimpleNonPeriodicClient.cpf.

Component definition

This is the class definition

class SimpleNonPeriodicClient : public RTT::TaskContext
{
protected:
    // DATA INTERFACE
 
    // *** OUTPUTS ***
 
    /// the last read data
    RTT::WriteDataPort<std::string>            lastRead_port;
 
    /// the number of items successfully read
    RTT::Attribute<int>                        countRead_attr;
 
    // *** CONFIGURATION ***
 
    // name to listen for incoming connections on, either FQDN or IPv4 address
    RTT::Property<std::string>                hostName_prop;
    // port to listen on
    RTT::Property<int>                        hostPort_prop;
    // timeout in seconds, when waiting for connection
    RTT::Property<int>                        connectionTimeout_prop;
    // timeout in seconds, when waiting to read
    RTT::Property<int>                        readTimeout_prop;
 
public:
    SimpleNonPeriodicClient(std::string name);
    virtual ~SimpleNonPeriodicClient();
 
protected:
    /// reset count and lastRead, attempt to connect to remote
    virtual bool startHook();
    /// attempt to read and process one packet
    virtual void updateHook();
    /// close the socket and cleanup
    virtual void stopHook();
    /// cause updateHook() to return
    virtual bool breakUpdateHook();
 
    /// Socket used to connect to remote host
    QTcpSocket*    socket;
    /// Flag indicating to updateHook() that we want to quit
    bool        quit;
};

The component has a series of properties specifying the remote host and port to connect to, as well as timeout parameters. It also uses an RTT Attribute to count the number of successful reads that have occurred, and stores the last read data as a string in an RTT data port.

Component implementation

#include "SimpleNonPeriodicClient.hpp"
#include <rtt/Logger.hpp>
#include <ocl/ComponentLoader.hpp>
 
#include <QTcpSocket>

The class definition is included, as well as the RTT logger and, importantly, the OCL component loader that turns this class into a deployable component in a shared library.

Most importantly, all Qt-related headers come after all Orocos headers. This is required as Qt redefines certain words (eg "slot", "emit") which, when used in our code or in Orocos code, cause compilation errors.

SimpleNonPeriodicClient::SimpleNonPeriodicClient(std::string name) :
        RTT::TaskContext(name),
        lastRead_port("lastRead", ""),
        countRead_attr("countRead", 0),
        hostName_prop("HostName", 
                      "Name to listen for incoming connections on (FQDN or IPv4)", ""),
        hostPort_prop("HostPort", 
                      "Port to listen on (1024-65535 inclusive)", 0),
        connectionTimeout_prop("ConnectionTimeout", 
                               "Timeout in seconds, when waiting for connection", 0),
        readTimeout_prop("ReadTimeout", 
                         "Timeout in seconds, when waiting for read", 0),
        socket(new QTcpSocket), 
        quit(false)
{
    ports()->addPort(&lastRead_port);
 
    attributes()->addAttribute(&countRead_attr);
 
    properties()->addProperty(&hostName_prop);
    properties()->addProperty(&hostPort_prop);
    properties()->addProperty(&connectionTimeout_prop);
    properties()->addProperty(&readTimeout_prop);
}

The constructor simply sets up the data interface elements (ie the port, attribute and properties), and gives them appropriate initial values. Note that some of these initial values are deliberately illegal, which would aid any validation code in a configureHook() (not done in this example; a sketch follows below).
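As noted, this example skips validation. A minimal, hypothetical configureHook() that validates the properties might look like the following sketch (not part of the attached files; the limits chosen are assumptions, and you would also need to declare configureHook() in the class definition):

bool SimpleNonPeriodicClient::configureHook()
{
    // validate the properties loaded from the .cpf file before allowing
    // the component to be configured (and hence started)
    if (hostName_prop.rvalue().empty())
    {
        log(Error) << "HostName property is not set" << endlog();
        return false;
    }
    if ((hostPort_prop.rvalue() < 1024) || (65535 < hostPort_prop.rvalue()))
    {
        log(Error) << "HostPort must be between 1024 and 65535 inclusive" << endlog();
        return false;
    }
    if ((connectionTimeout_prop.rvalue() <= 0) || (readTimeout_prop.rvalue() <= 0))
    {
        log(Error) << "Timeouts must be positive" << endlog();
        return false;
    }
    return true;
}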

SimpleNonPeriodicClient::~SimpleNonPeriodicClient()
{
    delete socket;
}

The destructor cleans up by deleting the socket we allocated in the constructor.

Now to the meat of it

bool SimpleNonPeriodicClient::startHook()
{
    bool        rc                    = false;        // prove otherwise
    std::string    hostName            = hostName_prop.rvalue();
    int            hostPort            = hostPort_prop.rvalue();
    int         connectionTimeout    = connectionTimeout_prop.rvalue();
 
    quit = false;
 
    // attempt to connect to remote host/port
    log(Info) << "Connecting to " << hostName << ":" << hostPort << endlog();
    socket->connectToHost(hostName.c_str(), hostPort);
    if (socket->waitForConnected(1000 * connectionTimeout))    // to milliseconds
    {
        log(Info) << "Connected" << endlog();
        rc = true;
    }
    else
    {    
        log(Error) << "Error connecting: " << socket->error() << ", " 
                   << socket->errorString().toStdString() << endlog();
        // as we now return false, this component will fail to start.
    }
 
    return rc;
}
The startHook() uses the properties loaded from the SimpleNonPeriodicClient.cpf file to attempt to connect to the remote host. If the remote port is not ready, the connection attempt will time out.

If the connection is not made successfully, then startHook() returns false, which prevents the component from actually being started. No reconnection is attempted (see Assumptions above).

void SimpleNonPeriodicClient::updateHook()
{
    // wait for some data to arrive, timing out if necessary
    int     readTimeout        = readTimeout_prop.rvalue();
    log(Debug) << "Waiting for data with timeout=" << readTimeout << " seconds" << endlog();
    if (!socket->waitForReadyRead(1000 * readTimeout))
    {
        log(Error) << "Error waiting for data: " << socket->error() << ", " 
                   << socket->errorString().toStdString() 
                   << ". Num bytes = " 
                   << socket->bytesAvailable() << endlog();
        log(Error) << "Disconnecting" << endlog();
        // disconnect socket, and do NOT call this function again
        // ie no engine()->getActivity()->trigger()
        socket->disconnectFromHost();
        return;        
    }
 
    // read and print whatever data is available, but stop if instructed
    // to quit
    while (!quit && (0 < socket->bytesAvailable()))
    {
#define    BUFSIZE        10
        char            str[BUFSIZE + 1];    // +1 for terminator
        qint64            numRead;
 
        numRead = socket->read((char*)&str[0], 
                               min(BUFSIZE, socket->bytesAvailable()));
        if (0 < numRead)
        {
            str[numRead] = '\0';        // terminate at the number of bytes actually read
            log(Info) << "Got " << numRead << " bytes : '" << &str[0] << "'" << endlog();
            countRead_attr.set(countRead_attr.get() + 1);
            lastRead_port.Set(&str[0]);
        }
    }
 
    // if not quitting then trigger another immediate call to this function, to
    // get the next batch of data
    if (!quit)
    {
        engine()->getActivity()->trigger();
    }
}

The updateHook() function attempts to wait until data is available, and then reads the data BUFSIZE characters at a time. If it times out waiting for data, it logs an error and disconnects the socket. This is not a robust approach and a real application would deal with this differently.

As data may be continually arriving, and/or we may get more than BUFSIZE characters at a time, the while loop may iterate several times. The quit flag indicates that the user wants to stop the component and that we should stop reading characters.

Of particular note is the last line

engine()->getActivity()->trigger();
This causes updateHook() to be called again immediately by the execution engine. Essentially, this makes the non-periodic component act as a periodic component with a varying period. Of course, this is not called if the component is being stopped (ie quit==true).

void SimpleNonPeriodicClient::stopHook()
{
    if (socket->isValid() &&
        (QAbstractSocket::ConnectedState == socket->state()))
    {
        log(Info) << "Disconnecting" << endlog();
        socket->disconnectFromHost();
    }
}
The stopHook() simply disconnects the socket if it is currently connected.

bool SimpleNonPeriodicClient::breakUpdateHook()
{
    quit = true;
    return true;
}
The breakUpdateHook() is very important, as it is the only way to inform a blocked updateHook() that it is time to return and quit. In this example we set the quit flag and return true. The quit flag will be picked up by updateHook() when it finishes waiting for data (in socket->waitForReadyRead()). Returning true from breakUpdateHook() tells the execution engine that we successfully told updateHook() to return, and that it should wait (one second, hardcoded) for updateHook() to complete and return. If we returned false, then stop() would also return false.

We could have also done something like socket->abort() to forcibly terminate any blocked socket->waitForReadyRead() calls.
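For example, a hypothetical variant of breakUpdateHook() along those lines might look like this sketch (not in the attached files; note that QTcpSocket is not designed for use across threads, so whether calling abort() here is safe depends on how your component is deployed):

bool SimpleNonPeriodicClient::breakUpdateHook()
{
    quit = true;
    // forcibly terminate any blocked waitForReadyRead() call in updateHook()
    socket->abort();
    return true;
}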

When using system calls (e.g. read()) instead of Qt classes, you could attempt to send a signal to interrupt the system call; however, this might not have the desired effect when the component is deployed ... the reader is advised to be careful here.

ORO_CREATE_COMPONENT(SimpleNonPeriodicClient)
This line of code creates a deployable component for the SimpleNonPeriodicClient class, which the deployer can load from a shared library.

To build

In a shell

cd /path/to/SimpleNonPeriodicClient
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx
make

For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

Tested in Mac OS X Leopard 10.5.7, and Ubuntu Jaunty Linux.

To run

Start one shell and run netcat to act as the server (NB 50001 is the HostPort value from your SimpleNonPeriodicClient.cpf file)

nc -l 50001

Start a second shell and deploy the SimpleNonPeriodicClient component

cd /path/to/SimpleNonPeriodicClient/build
deployer-macosx -s ../SimpleNonPeriodicClient.xml

Now type some text in the first shell. When you hit enter, netcat will send the data and the SimpleNonPeriodicClient component will print it N characters at a time (where N is the size of the buffer in updateHook()).

Points to note:

  1. The SimpleNonPeriodicClient component will time out if you do not hit enter within ReadTimeout seconds (as specified in the SimpleNonPeriodicClient.cpf file).
  2. Setting the ORO_LOGLEVEL environment variable to 5 or 6, or running the deployer with the -linfo or -ldebug options, will generate additional debugging statements.
  3. The component will take up to ReadTimeout seconds to respond to the user typing quit in the deployer, as breakUpdateHook() does not forcibly exit the socket->waitForReadyRead() call.

Attachment                            Size
SimpleNonPeriodicClient.cpp           7.42 KB
SimpleNonPeriodicClient.hpp           3.11 KB
SimpleNonPeriodicClient.xml           1 KB
SimpleNonPeriodicClient-cpf.txt       748 bytes
SimpleNonPeriodicClient.tar_.bz2      7.72 KB

Sample output

The netcat shell, with the text the user typed in.

nc -l 50001 
The quick brown fox jumps
over the lazy dog. 

The deployer shell, showing the text read in chunks, as well as the updated port and attribute within the component.

deployer-macosx -s ../SimpleNonPeriodicClient.xml -linfo
0.009 [ Info   ][deployer-macosx::main()] No plugins present in /usr/lib/rtt/macosx/plugins
0.009 [ Info   ][DeploymentComponent::loadComponents] Loading '../SimpleNonPeriodicClient.xml'.
0.010 [ Info   ][DeploymentComponent::loadComponents] Validating new configuration...
0.011 [ Info   ][DeploymentComponent::loadLibrary] Storing orocos-rtt
0.011 [ Info   ][DeploymentComponent::loadLibrary] Loaded shared library 'liborocos-rtt-macosx.dylib'
0.054 [ Info   ][DeploymentComponent::loadLibrary] Loaded multi component library 'libSimpleNonPeriodicClient.dylib'
0.054 [ Warning][DeploymentComponent::loadLibrary] Component type name SimpleNonPeriodicClient already used: overriding.
0.054 [ Info   ][DeploymentComponent::loadLibrary] Loaded component type 'SimpleNonPeriodicClient'
0.055 [ Info   ][DeploymentComponent::loadLibrary] Storing SimpleNonPeriodicClient
0.058 [ Info   ][DeploymentComponent::loadComponent] Adding SimpleNonPeriodicClient as new peer:  OK.
0.058 [ Warning][SingleThread] Forcing priority (0) of thread to 0.
0.058 [ Info   ][NonPeriodicActivity] SingleThread created with priority 0 and period 0.
0.058 [ Info   ][NonPeriodicActivity] Scheduler type was set to `4'.
0.059 [ Info   ][PropertyLoader:configure] Configuring TaskContext 'SimpleNonPeriodicClient' with '../SimpleNonPeriodicClient.cpf'.
0.059 [ Info   ][DeploymentComponent::configureComponents] Configured Properties of SimpleNonPeriodicClient from ../SimpleNonPeriodicClient.cpf
0.059 [ Info   ][DeploymentComponent::configureComponents] Re-setting activity of SimpleNonPeriodicClient
0.059 [ Info   ][DeploymentComponent::configureComponents] Configuration successful.
0.060 [ Info   ][DeploymentComponent::startComponents] Connecting to 127.0.0.1:50001
0.064 [ Info   ][DeploymentComponent::startComponents] Connected
0.065 [ Info   ][DeploymentComponent::startComponents] Startup successful.
0.065 [ Info   ][deployer-macosx::main()] Successfully loaded, configured and started components from ../SimpleNonPeriodicClient.xml
   Switched to : Deployer
0.066 [ Info   ][SimpleNonPeriodicClient] Entering Task Deployer
 
  This console reader allows you to browse and manipulate TaskContexts.
  You can type in a command, event, method, expression or change variables.
  (type 'help' for instructions)
    TAB completion and HISTORY is available ('bash' like)
 
 In Task Deployer[S]. (Status of last Command : none )
 (type 'ls' for context info) :4.816 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'The quick '
4.816 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'brown fox '
7.448 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'jumps
over'
7.448 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : ' the lazy '
12.448 [ ERROR  ][SimpleNonPeriodicClient] Error waiting for data: 5, Network operation timed out. Num bytes = 5
12.448 [ ERROR  ][SimpleNonPeriodicClient] Disconnecting
 
 
 In Task Deployer[S]. (Status of last Command : none )
 (type 'ls' for context info) :ls SimpleNonPeriodicClient
 
 Listing TaskContext SimpleNonPeriodicClient :
 
 Configuration Properties: 
     string HostName       = 127.0.0.1            (Name to listen for incoming connections on (FQDN or IPv4))
        int HostPort       = 50001                (Port to listen on (1024-65535 inclusive))
        int ConnectionTimeout = 5                    (Timeout in seconds, when waiting for connection)
        int ReadTimeout    = 5                    (Timeout in seconds, when waiting for read)
 
 Execution Interface:
  Attributes   : 
        int countRead      = 4                   
 
  Methods      : activate cleanup configure error getErrorCount getPeriod getWarningCount inFatalError inRunTimeError inRunTimeWarning isActive isConfigured isRunning resetError start stop trigger update warning 
  Commands     : (none)
  Events       : (none)
 
 Data Flow Ports: 
  W(U)      string lastRead       =  the lazy 
 
 Task Objects: 
  this           ( The interface of this TaskContext. ) 
  scripting      ( Access to the Scripting interface. Use this object in order to load or query programs or state machines. ) 
  engine         ( Access to the Execution Engine. Use this object in order to address programs or state machines which may or may not be loaded. ) 
  marshalling    ( Read and write Properties to a file. ) 
  lastRead       ( (No description set for this Port) ) 
 
 Peers        : (none)
 
 In Task Deployer[S]. (Status of last Command : none )
 (type 'ls' for context info) :quit
 
18.089 [ Info   ][DeploymentComponent::stopComponents] Stopped SimpleNonPeriodicClient
18.089 [ Info   ][DeploymentComponent::cleanupComponents] Cleaned up SimpleNonPeriodicClient
18.090 [ Info   ][DeploymentComponent::startComponents] Disconnected and destroyed SimpleNonPeriodicClient
18.090 [ Info   ][DeploymentComponent::startComponents] Kick-out successful.
18.091 [ Info   ][Logger] Orocos Logging Deactivated.

Using XML substitution to manage complex deployments

Rationale

Problem: You deploy multiple configurations of your system, perhaps choosing between a real and a simulated robot, some real and simulated devices, etc. You want to parameterize the deployments to reduce the number of files you have to write for the various configuration combinations.

Solution: Use the XML ENTITY element.

Assumptions

  • Works with Xerces only (v2 tested, v3 should also support this). Will not work with the default TinyXML processor.

Compatibility

Tested on v1.x trunk on Mac OS X Snow Leopard. These instructions should apply identically to RTT 2.x installations.

Files

See the attachments at the bottom of this page.

Approach

This simple example demonstrates how to deploy a tiny system in two configurations, by simply changing the name of the deployed component. This approach can be (and has been) used to manage deployments with many system configurations.

There is a top-level file per configuration, which specifies all the parameters. Each top-level file then includes a child file which instantiates components, etc.

One top level file

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd"
[
  <!-- internal entities for substitution -->
  <!ENTITY name "Console">
  <!ENTITY lib "liborocos-rtt">
  <!-- external entity for file substitution -->
  <!ENTITY FILE_NAME SYSTEM "test-entity-child.xml">
]
>
 
<properties>
 
  &FILE_NAME;
 
</properties>

The internal entity values are used to substitute component names and other basic parameters. The external entity value (&FILE_NAME;) is used to include child files, so that the entity values defined in the top-level file are available within the child file. Using Orocos' built-in include statement does not make the top-level entity values available within the child file.

The child file simply substitutes the two internal entities for a library name, and a component name.

<properties>
 
  <simple name="Import" type="string">
    <value>&lib;</value>
  </simple>
  <simple name="Import" type="string">
    <value>liborocos-ocl-common</value>
  </simple>
 
  <struct name="&name;" type="OCL::HMIConsoleOutput">
  </struct>
 
</properties>

The other top level file differs from the first top level file only in the name of the component.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd"
[
  <!ENTITY name "Console2">
  <!ENTITY lib "liborocos-rtt">
  <!ENTITY file SYSTEM "test-entity-child.xml">
]
>
 
<properties>
 
  &file;
 
</properties>

You can use relative paths within the external entity filename. I have had inconsistent success with this - sometimes the relative path is needed, and other times it isn't. The path appears to be resolved relative to the file doing the including, so if that file was itself loaded via a relative path then you need to specify the child file relative to the parent file, and not relative to the current working directory that you started the deployment in.

Attachment                 Size
test-entity.xml            1.56 KB
test-entity2.xml           278 bytes
test-entity-child.xml      307 bytes

Using real-time logging

This page collects notes and issues on the use of real-time logging. Its contents will eventually become the documentation for this feature.

This feature has been integrated in the Orocos 1.x and 2.x branches but is still considered under development. However, if you need a real-time logging infrastructure (ie text messages to users), this is exactly where you need to be. If you need real-time data-stream logging of ports, use OCL's Reporting components (such as the NetCDFReporting component) instead.

It is noted in the text where Orocos 1.x and 2.x differ.

Restrictions and issues

Restrictions

Start up the logging components first: logging events prior to the logging service's configure() will be dropped. The problem is that the logging service connects categories and appenders, and is itself a component. So until it is configured and the connections are all made, no appenders are available to deal with the events. Therefore, put your appender components and the logging service in a separate deployment XML or script file which is loaded first. This allows your application components to use logging from the start (ie from component creation). See the XML deployment files in ocl/logging/tests/data for examples. OCL's deployer can execute multiple XML or script files in order.

Categories can not be created in real-time: they live on the normal heap via new/delete. Create all categories in your component's constructor, or during configureHook() or similar, as in the sketch below.
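A minimal sketch of doing this at configuration time (assuming a logger member of type OCL::logging::Category* and a hypothetical category name; the full pattern is shown under "Using OCL::Logging in C++" below):

bool MyComponent::configureHook()
{
    // the category is created on the normal heap, so do this here
    // (or in the constructor), never in real-time code such as updateHook()
    logger = dynamic_cast<OCL::logging::Category*>(
        &log4cpp::Category::getInstance("org.me.myapp.MyComponent"));
    return (0 != logger);
}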

NDCs are not supported: they involve std::string and std::vector, which we currently can't replace.

Works only with OCL's deployers: If you use a non-deployer mechanism to bring up your system, you will need to add code to ensure that the log4cpp framework creates our OCL::Category objects, and not the default (non-real-time) log4cpp::Category objects. This should be done early in your application, prior to any components and categories being created.

    log4cpp::HierarchyMaintainer::set_category_factory(
        OCL::logging::Category::createOCLCategory);

Issues

On the mailing list it was requested that we log when events have been lost. There are two places where this would need to be implemented, both annotated with TODOs in the code.
  • When creation of the OCL::String objects in a LoggingEvent exhausts the memory pool
  • When the buffer between a category and its appenders is full

This is not currently dealt with, but could be in future implementations.

In RTT/OCL 1.x, multiple appenders connected to the same category will each receive only some of the incoming logging events, because each appender pops different elements from the category's buffer. This issue has been solved in 2.x.

The size of the buffer between a category and its appenders is currently fixed (see ocl/logging/Category.cpp). This will be fixed later on the 2.x branch. Note that this fixed size, combined with the default consumption rate of the FileAppender, means you can exhaust the default TLSF memory pool in very short order. For a complex application (~40 components, 400 Hz cycle rate) we increased the default buffer size to 200, increased the memory pool to tens of kilobytes (or even megabytes), and increased the FileAppender consumption rate to 500 messages per second.

Viewing logs

We can use standard log viewers for Log4j in two ways:

  1. Use FileAppender which writes log lines to a file and let the viewers read that file
  2. Use Log4cxxAppender which creates a network socket to which Log4cxx/Log4j viewers can connect.

These log viewers are compatible:

Complete application example

As of October 2010, for RTT 1.x this assumes you are using:

And for RTT 2.x, use the Orocos Toolchain 2.2 or later from:

then build in the following order, with these options ON:

  • log4cpp (default options)
  • RTT: ENABLE_RT_MALLOC, OS_RT_MALLOC
  • OCL: BUILD_RTALLOC, BUILD_LOGGING

The deployer now defaults to a 20k real-time memory pool (see the OCL CMake option ORO_DEFAULT_RTALLOC_SIZE), all Orocos RTT::Logger calls end up inside log4cpp, and RTT::Logger logging events are still logged to the file "orocos.log" by default. Same as always, but now you can configure all logging in one place!

IMPORTANT Be aware that there are two logging hierarchies at work here:

  1. a non-real-time, log4cpp-based logging in use by RTT::Logger (currently only for RTT 1.x)
  2. a real-time, OCL::Logging-based logging (with log4cpp underneath) in use by application code

In time, hopefully these two will evolve into just the latter.

Required Build flags

We're assuming here that you used 'orocreate-pkg' to set up a new application, so you're using the UseOrocos CMake macros.

  1. Your application's manifest.xml must depend on ocl.
  2. Your application's CMakeLists.txt must include the line : orocos_use_package(ocl-logging)

Both steps will make sure that your libraries link with the Orocos logging libraries and that include files are found.

Configuring real-time memory pool size

The deployers have command-line options for this

deployer-macosx --rtalloc-mem-size 10k
deployer-corba-macosx --rtalloc-mem-size 30m
deployer-corba-macosx --rtalloc 10240      # understands shortened, but unique, options
See the note under Technical details below regarding TLSF's bookkeeping overhead. The pool needs to be larger than that value.

Configuring RTT::Logger logging

NOTE: this feature is not available in the official release. Skip to the next section (Configuring OCL::logging) if you're not using the log4cpp branch of the RTT.

You can use any of log4cpp's configurator approaches, but the deployers already know about log4cpp's PropertyConfigurator. You can pass a log4cpp property file to the deployer and it will be used to configure the first of the hierarchies above - the non-real-time logging used by RTT::Logger. For example

deployer-macosx --rtt-log4cpp-config-file /z/l/log4cpp.conf
where the file /z/l/log4cpp.conf is something like
# root category logs to application (this level is also the default for all 
# categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender
 
# orocos setup
log4j.category.org.orocos.rtt=INFO, orocosAppender
log4j.additivity.org.orocos.rtt=false   # do not also log to parent categories
 
log4j.appender.orocosAppender=org.apache.log4j.FileAppender
log4j.appender.orocosAppender.fileName=orocos-log4cpp.log
log4j.appender.orocosAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.orocosAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %m%n
This configuration file simply changes the output filename and format. You could also add additional appenders (e.g. to stdout, to socket) and change the logging level for sub-categories, if RTT supported them (e.g. scripting.rtt.orocos.org).

IMPORTANT Note the direction of the category name, from org to rtt. This is specific to log4cpp and other log4j-style frameworks. Using a category "rtt.orocos.org" and sub-category "scripting.rtt.orocos.org" won't do what you, nor log4cpp, expect.

Configuring OCL::logging (XML)

This is how you would set up logging from a Deployer XML file. If you prefer to use a script, see the next section.

See ocl/logging/tests/xxx.xml for complete examples and more detail, but in short

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
 
  <simple name="Import" type="string">
    <value>liborocos-logging</value>
  </simple>
  <simple name="Import" type="string">
    <value>libTestComponent</value>
  </simple>
 
  <struct name="TestComponent" type="OCL::logging::test::Component">
    <struct name="Activity" type="Activity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
  </struct>
 
  <struct name="AppenderA" type="OCL::logging::FileAppender">
    <struct name="Activity" type="Activity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
    <struct name="Properties" type="PropertyBag">
      <simple name="Filename" type="string"><value>appendera.log</value></simple>
      <simple name="LayoutName" type="string"><value>pattern</value></simple>
      <simple name="LayoutPattern" type="string"><value>%d [%t] %-5p %c %x - %m%n</value></simple>
    </struct>
  </struct>
 
  <struct name="LoggingService" type="OCL::logging::LoggingService">
    <struct name="Activity" type="Activity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
 
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>
 
    <struct name="Properties" type="PropertyBag">
      <struct name="Levels" type="PropertyBag">
        <simple name="org.orocos.ocl.logging.tests.TestComponent" 
                type="string"><value>info</value></simple>
      </struct>
 
      <struct name="Appenders" type="PropertyBag">
        <simple name="org.orocos.ocl.logging.tests.TestComponent" 
                type="string"><value>AppenderA</value></simple>
      </struct>
    </struct>
 
    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>AppenderA</value></simple>
    </struct> 
 
  </struct>
 
</properties>
This creates one component that logs to an org.orocos.ocl.logging.tests.TestComponent category, and that category is connected to one appender that logs to the file appendera.log.

To run this XML file, save it as 'setup_logging.xml' and use:

  deployer-gnulinux -s setup_logging.xml

Configuring OCL::logging (Lua)

This is how you would set up logging from a Lua script file. If you prefer to use XML, see the previous section.

require("rttlib")
 
-- Set this to true to write the property files the first time.
write_props=false
 
tc = rtt.getTC()
depl = tc:getPeer("deployer")
 
-- Create components. Enable BUILD_LOGGING and BUILD_TESTS for this to
-- work.
depl:loadComponent("TestComponent","OCL::logging::test::Component")
depl:setActivity("TestComponent", 0.5, 0, 0)
 
depl:loadComponent("AppenderA", "OCL::logging::FileAppender")
depl:setActivity("AppenderA", 0.5, 0, 0)
 
depl:loadComponent("LoggingService", "OCL::logging::LoggingService")
depl:setActivity("LoggingService", 0.5, 0, 0)
 
test = depl:getPeer("TestComponent")
aa = depl:getPeer("AppenderA")
ls = depl:getPeer("LoggingService")
 
depl:addPeer("AppenderA","LoggingService")
 
-- Load marshalling service to read/write components
depl:loadService("LoggingService","marshalling")
depl:loadService("AppenderA","marshalling")
 
if write_props then
   ls:provides("marshalling"):writeProperties("logging_properties.cpf")
   aa:provides("marshalling"):writeProperties("appender_properties.cpf")
   print("Wrote property files. Edit them and set write_props=false")
   os.exit(0)
else
   ls:provides("marshalling"):loadProperties("logging_properties.cpf")
   aa:provides("marshalling"):loadProperties("appender_properties.cpf")
end
 
test:configure()
aa:configure()
ls:configure()
 
test:start()
aa:start()
ls:start()

To run this script, save it in 'setup_logging.lua' and do:

rttlua-gnulinux -i setup_logging.lua

Using OCL::Logging in C++

The component itself uses logging like the following simplified example
// TestComponent.hpp
#include <ocl/LoggingService.hpp>
#include <ocl/Category.hpp>
 
class Component : public RTT::TaskContext
{
...
    /// Our logging category
    OCL::logging::Category* logger;
};
// TestComponent.cpp
#include <rtt/rt_string.hpp>
 
Component::Component(std::string name) :
        RTT::TaskContext(name),
        logger(dynamic_cast<OCL::logging::Category*>(
                   &log4cpp::Category::getInstance("org.orocos.ocl.logging.tests.TestComponent")))
{
}
 
bool Component::startHook()
{
    bool ok = (0 != logger);
    if (!ok)
    {
        log(Error) << "Unable to find existing OCL category '" << categoryName << "'" << endlog();
    }
 
    return ok;
}
 
void Component::updateHook()
{
    // RTT 1.X
    logger->error(OCL::String("Had an error here"));
    logger->debug(OCL::String("Some debug data ..."));
    // RTT 2.X
    logger->error(RTT::rt_string("Had an error here"));
    logger->debug(RTT::rt_string("Some debug data ..."));
    logger->getRTStream(log4cpp::Priority::DEBUG) << "Some debug data and a double value " << i;
}

IMPORTANT You must dynamic_cast to an OCL::logging::Category* to get the logger, as shown in the constructor above. Failure to do this can lead to trouble. You must also explicitly use the OCL::String() syntax (or RTT::rt_string() in 2.x) when logging. Failure to do this produces compiler errors, because otherwise the system defaults to std::string and then you are no longer real-time. See the FAQ below for more detail.

And the output of the above looks something like this:

// file orocos.log, from RTT::Logger configured with log4cpp
20100414T09:50:11.844 [INFO] ControlTask 'HMI' found CORBA Naming Service.
20100414T09:50:11.845 [WARN] ControlTask 'HMI' already bound to CORBA Naming Service.
and from a deployer with OCL::logging (note that here the categories are set as components.something)
20100414T21:41:22.539 [INFO ] components.HMI Started servicing::HMI
20100414T21:41:23.039 [DEBUG] components.Robot Motoman robot started
20100414T21:41:42.539 [INFO ] components.ConnectionMonitor Connected
and if you combine RTT::Logger and your own log4cpp-logging, say in a GUI application
20100414T21:41:41.982 [INFO ] org.orocos.rtt Thread created with scheduler type '1', priority 0 and period 0.
20100414T21:41:41.982 [INFO ] org.orocos.rtt Creating Proxy interface for HMI
20100414T21:41:42.016 [DEBUG] org.me.myapp Connections made successfully
20100414T21:41:44.595 [DEBUG] org.me.myapp.Robot Request position hold

The last one is the most interesting: all RTT::Logger calls have been sent to the same appender that the application logs to. This means you can use the exact same logging setup in both your components (when they use OCL::Logging) and in your GUI code (when it uses log4cpp directly). Less maintenance, less hassle, and only one (more) tool to learn. The configuration file for the last example looks something like

# root category logs to application (this level is also the default for all 
# categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender
 
# orocos setup
log4j.category.org.orocos.rtt=INFO, applicationAppender
log4j.additivity.org.orocos.rtt=false   # do not also log to parent categories
 
# application setup
log4j.category.org.me =INFO, applicationAppender
log4j.additivity.org.me=false         # do not also log to parent categories
 
log4j.category.org.me.gui=WARN
log4j.category.org.me.gui.Robot=DEBUG
log4j.category.org.me.gui.MainWindow=INFO
 
log4j.appender.applicationAppender=org.apache.log4j.FileAppender
log4j.appender.applicationAppender.fileName=application.log
log4j.appender.applicationAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.applicationAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %c %m%n
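For the GUI side of that last example, the application uses plain (non-real-time) log4cpp directly. A minimal sketch, assuming the configuration file above is saved as log4cpp.conf and that org.me.myapp is your application's category:

#include <log4cpp/Category.hh>
#include <log4cpp/PropertyConfigurator.hh>
 
int main()
{
    // load the same property file that configures the appenders and levels above
    log4cpp::PropertyConfigurator::configure("log4cpp.conf");
 
    // plain log4cpp categories and plain strings are fine here,
    // as GUI code does not need to be real-time
    log4cpp::Category& logger = log4cpp::Category::getInstance("org.me.myapp");
    logger.debug("Connections made successfully");
 
    // ... create and run the GUI ...
    return 0;
}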

Technical details

  • We rely on a real-time allocator called TLSF.
  • There is a several kilobyte overhead for TLSF's bookkeeping (~3k on 32-bit Ubuntu, ~6k on 64-bit Snow Leopard). You must take this into account, although the standard OCL TLSF pool size (256k) should cover your needs.
  • Only the OCL::String (in 1.x) and RTT::rt_string (in 2.x) objects in OCL::logging::LoggingEvent objects use the real-time memory pool.
  • When you create a category, all parent categories up to the root are created. For example, "org.me.myapp.cat1" causes creation of five (5) categories: "org.me.myapp.cat1", "org.me.myapp", "org.me", "org", and "" (the root category), presuming none of these already exist. These all occur on the normal heap (see the sketch after this list).
  • Currently, exhausting the real-time memory pool results in logging events being silently dropped (also, see next item).
  • For real-time performance, ensure that TLSF is built with MMAP and SBRK support OFF in RTT's CMake options (-DOS_RT_MALLOC_MMAP=OFF -DOS_RT_MALLOC_SBRK=OFF).
  • TLSF use with multiple threads is currently supported only for non-macosx platforms. Use on macosx will exhibit (understandable) corruption in the TLSF bookkeeping (causing asserts).
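To illustrate the category-creation point above, a minimal sketch (the category name is hypothetical):

#include <log4cpp/Category.hh>
 
void createCategories()
{
    // this single call creates "org.me.myapp.cat1", "org.me.myapp", "org.me",
    // "org" and the root category "" if they do not already exist; they live
    // on the normal heap, so call this at configuration time, not from
    // real-time code
    log4cpp::Category::getInstance("org.me.myapp.cat1");
}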

FAQ

Logging statements are not recorded

Q: You are logging and everything seems fine, but you get no output to file/socket/stdout (depending on what your appender is).

A: Make sure you are using an OCL::logging::Category* and not a log4cpp::Category. The latter will silently compile and run, but it will discard all logging statements. This situation can also mask the fact that you are accidentally using std::string and not OCL::String. For example

log4cpp::Category* logger = &log4cpp::Category::getInstance(name);
logger->debug("Hello world");
When the above is used within the OCL real-time logging framework, no logging statements are recorded and it is not running in real-time. Changing the above to

OCL::logging::Category* logger = 
  dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
logger->debug("Hello world")
will cause a compile error of
/path/to/log4cpp/include/log4cpp/Category.hh: In member function ‘virtual bool MyComponent::configureHook()’:
/path/to/log4cpp/include/log4cpp/Category.hh:310: error: ‘void log4cpp::Category::debug(const char*, ...)’ is inaccessible
/path/to/my/source/MyComponent.cpp:64: error: within this context
because the "Hello world" string is being treated as a std::string, which you can not use with OCL::logging::Category. Finally, correct the code to

OCL::logging::Category* logger = 
  dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
logger->debug(OCL::String("Hello world"));
and the code compiles and runs, and now logging statements are recorded.

omniORBpy - python binding for omniORB

This page describes a working example of using omniORBpy to interact with an Orocos component. The example is very simple, and is intended for people who do not know where to start developing a CORBA client.

Your first stop is: http://omniorb.sourceforge.net/omnipy3/omniORBpy/ The omniORBpy version 3 User’s Guide. Read chapters 1 and 2. Optionally read chapter 6. The example works with and without naming services.

Once you are comfortable with omniORBpy, do the following (I assume you are kind enough to be a Linux user working on a console):

  1. download the RTT examples, and compile the smallnet Orocos component (you might first need to fix the Makefile paths):
    • wget http://www.orocos.org/stable/examples/rtt/rtt-examples-1.10.0.tar.gz
      tar xf rtt-examples-1.10.0.tar.gz 
      cd rtt-examples-1.10.0/corba-example/
      make smallnet
  2. download the corba idls and, for simplicity's sake, copy the IDLs to a new empty directory:
    • svn co http://svn.mech.kuleuven.be/repos/orocos/trunk/rtt/src/corba/ 
      mkdir omniclt 
      cp corba/*idl omniclt/
      cd omniclt
  3. generate the Python stubs; two new directories should appear (namely RTT and RTT__POA)
    • omniidl -bpython *idl 
  4. download the attached python file (orocosclient.py) to your home directory and copy it where your IDLs are (current directory)
    • cp ~/orocosclient.py .
  5. open a new console and run your smallnet application
    • sudo ../smallnet

If you get something like

0.011 [ Warning][SmallNetwork] ControlTask 'ComponentA' could not find CORBA Naming Service.
0.011 [ Warning][SmallNetwork] Writing IOR to 'std::cerr' and file 'ComponentA.ior'
IOR:0...10100
it means your omniNames is either not configured or not running. Try:
sudo ../smallnet -ORBInitRef NameService=corbaname::127.0.0.1
if this works (you see a line like: 0.011 [ Info ][SmallNetwork] ControlTask 'ComponentA' found CORBA Naming Service.) then you need to modify the parameter InitRef in your omniORB4.cfg (or similar, usually found in /etc/) and make it read like:
InitRef=NameService=corbaname::127.0.0.1
  • finally run the python application
    • python orocosclient.py 

If you are not able to make your naming service work, try using the component's IOR. After running your smallnet server, copy the complete IOR printed on screen and paste it as an argument to the Python program (including the word "IOR:").

python orocosclient.py IOR:0...10100

Look at the IDLs and the code to understand how things work. I am no Python expert, so if the coding style looks weird to you, my apologies. Good luck!

Attachment                 Size
orocosclient.py_.txt       1.99 KB