Main Page

This is the Main Orocos.org Wiki page.

From here you can find links to all Orocos-related Wikis.

Orocos Wiki pages are organised in 'books'. In each book you can create child pages, edit them and move them around. The Wiki itself creates an overview of the child pages of each book.

To create a new page, click 'Add Child page' below. To edit a page, click on the Edit tab of that page. You can also link to a not-yet-written page using the Example Page syntax. When such a link is clicked and the page does not exist, you are offered to create and write it.

Currently, the Orocos wiki pages are written in MediaWiki style. You should create your pages in this style as well.

Feel free to click on the 'Edit' tab above to see how this page was written (and to improve it ! ).

Development

This section covers all development related pages.

Contributing

How you can get involved, contribute, and participate in the Orocos project.

Resources

Development strategy

The Orocos toolchain uses git, with the official repositories hosted at gitorious.

Branches

The various branches are
  • master : main development line, latest features. Replaces rtt-2.0-mainline
  • toolchain-2.0 : stable release branch for the 2.0 release series.
  • rtt-1.0-svn-patches: remains for svn bridge of 1.x release series
  • ocl-1.0-svn : remains for svn bridge of 1.x release series

The master branch gets updated when new branches are merged into it by its maintainer. This can be a merge from the bugfix branches (i.e. a merge from toolchain-2.x) or a merge from a development branch.

The stable branch should always point to the latest toolchain-2.x tip. This isn't automated yet, so it lags behind (probably a job for a Hudson build or a git commit hook).

All branches named rtt-2.0-... are no longer updated. The rtt-2.0-mainline branch has been merged into master: if you still have an rtt-2.0-mainline branch, you can just do git pull origin master, which will fast-forward your tree to the master branch, or you can check out the local master instead.
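The fast-forward described above can be sketched in a throwaway repository (the directory layout and commit contents here are purely illustrative, not the real Orocos repository):

```shell
# Sketch: a stale rtt-2.0-mainline checkout is fast-forwarded to master.
set -e
dir=$(mktemp -d)
git init -q -b master "$dir/upstream"
cd "$dir/upstream"
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git branch rtt-2.0-mainline            # the old development line
git commit -q --allow-empty -m "new work on master"
cd "$dir"
git clone -q upstream clone
cd clone
git checkout -q rtt-2.0-mainline       # stale local branch
git pull -q origin master              # fast-forwards it onto master
git log --oneline -1                   # now shows the latest master commit
```

Because rtt-2.0-mainline is an ancestor of master, the pull is a pure fast-forward and no merge commit is created.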

Contributing packages

You may contribute a software package to the community. It must respect the rules set out in the Component Packages section. Packages that are general enough can be adopted by the Orocos Toolchain Gitorious project. Make sure that your package name contains only word characters, numbers and underscores; a dash ('-') is not acceptable in a package name.
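The naming rule can be checked with a small shell test; valid_pkg_name is a hypothetical helper for illustration, not an official Orocos tool:

```shell
# Accept only word characters, digits and underscores; reject dashes.
valid_pkg_name() {
    echo "$1" | grep -Eq '^[A-Za-z0-9_]+$'
}

valid_pkg_name "my_component_pkg"  && echo "accepted"
valid_pkg_name "my-component-pkg"  || echo "rejected: dash not allowed"
```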

Contributing patches

Small contributions should go to the mailing lists as patches. Larger features are best communicated using topic branches in a clone of the official repositories. Send pull requests to the mailing lists. These topic branches should be hosted on a publicly available git server (e.g. github, gitorious).

NB: for Orocos v1, no git branches will be merged (due to the SVN bridge); use individual patches instead. v2 git branches can be merged without problems.

Making suggestions

The easiest way to make suggestions is to use the mailing list (register here). This allows discussion about what you are suggesting (which, after all, someone else may already be working on), as well as informing others of what you are interested in (or are willing to do).

Reporting bugs

Before reporting a bug, please check the Bug Tracker, the Mailing list and the Forum to see whether this is a known issue. If this is a new issue, then TBD: email the mailing lists, OR enter an issue in the bug tracker.

Orocos developers meetup at ERF

Goals of the meeting: discuss the future of the Orocos toolchain w.r.t. Rock and ROS.

Identified major goals

  • make Orocos components usable across all use cases (ROS, Orocos and Rock). There should be no "rock-only" or "ros-only" components
  • make as much of the Rock toolchain as possible usable with "plain Orocos components" (see below)
  • make the parallel usage of the three cases (Orocos toolchain, Orocos in ROS and Rock) as painless as possible

Sharing installations of the orocos toolchain

The main issue is the ability to compile the toolchain and components only once, e.g. in a Rock installation, and use them in ROS or vice-versa.

  • the ignore_packages mechanism already exists. A wiki page needs to be created on how to set it up for sharing an Orocos installation. Manual, but should be working.
  • more automatic mechanisms:
    • allow dependencies between autoproj installations. Fully automatic for orocos toolchain/Rock interoperability
    • point to a prefix and have autoproj find out things (for ROS installs)

Sharing Orocos components across use-cases

Using Rock components on plain Orocos should just work [needs testing and documentation].

Using rock tools on plain Orocos

The use of orogen or typegen would be required.

  • as far as we know, orogen is not missing any "core" functionality needed to make typegen usable for "core" Orocos libraries like KDL. Some functionality, such as opaques, still needs to be made available to typegen (it is currently only available to orogen). This can be done through the ability to make typegen load an oroGen SL file (trivial)
  • allow passing -I options directly to both oroGen and typeGen
  • mechanism to define "side-loading" typekits that define constructors and operators separately for scripting
  • the core of the Rock tooling is orocos.rb. Need to test and update orocos.rb so that it can work without a model. Method: update the test suite by mocking TaskContext#model to return nil and/or getModelName to not exist. From there, test tools like oroconf and vizkit

Dataflow between ROS and Rock / plain Orocos

  • need data conversions: one must be able to publish a C++ type over a ROS topic and vice-versa
  • typegen generation for ROS messages
  • type specification when creating ROS streams (since the ROS topic and the orocos port might have different types)
  • type conversions on the data flow: use already existing constructor infrastructure to do the conversion, need to create the channel element and change the connection code
  • add type conversion support in oroGen (equivalent system than opaques)

Other discussed topics

  • make TypeInfo very thin so that we can register it once per type and never change it. Only transports / constructors / ... could then be overridden
  • make the deployer a library

Roadmap

Where things are going, and how we plan to get there.

See also Roadmap ideas for 3.x for some really long-term ideas ...

TODO Autoproj, RTT, OCL, etc.

Real-time logging

The goal is to provide a real-time safe, low-overhead, flexible logging system usable throughout an entire system (i.e. within both components and any user applications, like GUIs).

We chose to base this on log4cpp, one of the C++-based derivatives of log4j, the respected Java logging system. With only minor customizations, log4cpp is now usable in user component code (but not in RTT code itself, see below). It provides real-time safe, hierarchical logging with multiple levels of logging (e.g. INFO vs DEBUG).

Near future

  • Provide a complete system example demonstrating use of the real-time logging framework in both user components, and a GUI-based application. Based on the v2 toolchain.
  • Provide a component-based appender demonstrating transport of logging events. Most likely demonstrate centralized logging of a distributed system.
  • Add logging system stress tests. (I already have this for v1, but need to port it to v2 and submit.)
  • Allow multiple appenders per category. This is simply a technical limitation of the initial approach, and should be readily changeable.

Long term plans

  • Replace the existing RTT::Logger functionality with the real-time logging framework. This really shouldn't require rewriting all the logging statements in RTT, etc.
  • Provide levels of DEBUG logging. Some logging systems use FINE, FINER, FINEST levels, whilst others use DEBUG plus an integer level within debug (e.g. debug-1 through debug-9, from verbose to most verbose). Choose one approach, and modify log4cpp to support it.
  • Support use by scripting and state machines (possibly also Lua?). This means both being able to log, as well as being able to configure categories, appenders, etc.

Catkin-ROS build-support plan

Target versions

These changes are for Toolchain >= 2.7.0 and ROS >= Hydro.

Goals

Support building in these workflows:

  • Autoproj managed builds (Rock-style)
    • depends on: manifest.xml for meta-build info.
    • Rock users don't use the UseOrocos.cmake macros, since their CMakeLists and .pc files get generated anyway by orogen.
  • CMake managed builds (every-package-its-own-build-dir-style)
    • depends on: pkg-config files to track linking+includes
    • Uses the UseOrocos.cmake file, autolinking is done by parsing manifest.xml file
  • ROSbuild managed builds
    • depends on: manifest.xml file; the rosbuild cmake macros read it and the .pc files, and the Orocos cmake macros read it to link properly (no build flags from the manifest file!)
    • Will only be used if the user explicitly calls rosbuild_init() in their top-level CMakeLists.txt
  • Catkin managed builds
    • depends on: package.xml file, Orocos generated pkg-config files
    • The Catkin .pc files will be generated but ignored by us (?)
    • No Auto linking
    • We'll also generate .pc files in the devel path during the cmake run

Effects on (Runtime) Environment

  • Deployer's import ?
  • ROS Deb packages ?
  • Orocos Deb packages ?

CMake changes or new macros

  • Auto linking
  • orocos_use_package()
  • orocos_find_package()
  • orocos_generate_package()

Roadmap ideas for 3.x

While the project is still in the (heavy?) turmoil of the 1.x-to-2.x transition, it might be useful to start thinking about the next version, 3.x. Below are a number of developments and policies that could eventually become 3.x; please use the project's (user and developer) mailing lists to give your opinions, using a 3.x Roadmap message tag.

Disclaimer: there is nothing official yet about any of the below-mentioned suggestions; on the contrary, they are currently just the reflections of one single person, Herman Bruyninckx. (Please, update this disclaimer if you add your own suggestions.)

General policies to be followed in this Roadmap:

  • the anti-Not Invented Here policy: whenever there exists a FOSS project that has already a solution for (part of) this roadmap, we should try to cooperate with that project, instead of putting efforts in our own version.
  • the big critical mass projects first policy: when being confronted with the situation above, it is much preferred to cooperate with (contribute to) projects that have a high critical mass (Cmake, Linux, Eclipse, Qt, etc.) instead of with single-person or single-team projects, even when the latter currently have better functionalities and ideas. At the same time, promising single-person projects will be stimulated to make their efforts useful in a larger critical mass project.

Orocos distribution

Much can be improved to bring Orocos closer to users, and the concept of a simple-to-install distribution is a proven best practice. However, Orocos should not try to develop its own distribution, but should rather hook on to existing, successful efforts. ROS is the obvious first choice, and orocos_toolchain_ros is the concrete initiative that has already started in this direction. However, this "only" makes "some" relevant low-level Orocos functionality available in a form that is easier to install for many robotics users. In order to allow users to profit from all Orocos functionality, the following extra steps have to be taken:
  • a "Hello Robot!" application, installable as a ROS stack. It could contain a simulated robot, visualised in Morse or Gazebo, and componentized in an RTT component, together with an RTT/KDL/BFL-based set of motion controllers and estimators. (Morse is currently the most promising candidate, from a component-based development point of view.)
  • a (Wiki) book that explains the whole setup, not just from a software point of view, but also a motivation why the presented example could be considered as a "best practice" as a robotics system. This Wiki book should not be an Orocos-only effort, but be useful for the whole community.
  • a similar "Hello Machine!" application, targeting not the robotics community, but the mechatronics, or machine tools community.

Contributors to this part of the Roadmap need not be RTT developers, but motivated users!

RTT

The road towards better decoupling, as started in 2.x, is designed and implemented further:
  • RTT will get a complete and explicit component model, consisting of component, composite component, connection, port, interface, discrete behaviour, and communication:
    • The OROMACS development at the University of Twente has already produced a core of the composite component. That concept is required for full support of the Model-Driven Engineering approach.
    • the connection is the data-less, event-less and command-less representation of the architecture of a system, consisting of only the identification of which components will interact with each other.
    • the difference between a port and an interface is that a port belongs to a component, and implements an interface; the interface in itself must become a first-class citizen of the component model.
    • discrete behaviour is the current state machine. Further developments in this context are probably only to be expected at the implementation and tooling front.
    • communication: Orocos has had, from day one, the ambition to not provide communication middleware, since there are so many other projects that do that. RTT should, however, improve its decoupling of (i) using data structures inside a component, (ii) providing them for communication in a port, and (iii) transporting them from one component's port to another component's port. Maybe this is as easy as cleanly separating the configuration files for all three aspects; maybe it's more involved than that.
  • the mapping on real hardware resources (computational thread, communication field bus) is separated from the definition of a component.
  • the process of defining data flow data structures is supported by an IDL language. This IDL has to be chosen together with other projects, and should not be an Orocos-only effort. A real IDL includes the definition of the meaning of the fields in the data, and not just their computer language representation.
  • the codel idea of GenoM3 is supported for the construction of continuous behaviour inside a component. The important role of the codel idea in the context of realtime systems is that one should give the component designer full control over when which computations are to be executed (instead of relying on the OS scheduler); this requires a design in which computations can be subdivided in pre-emptible pieces (codels), and in which they can be scheduled in efficient Directed Acyclic Graphs.

Contributors to this part of the Roadmap need to be RTT developers!

BFL, KDL, SCL

SCL does not yet exist, but there is a high and natural need for a Systems and Control Library, next to BFL and KDL.

All three libraries share a common fundamental design property: they can all be considered special cases of executable graphs, so common support will be developed for the flexible, configurable scheduling of all computations (codels) in complex networks (Bayesian networks, kinematic/dynamic networks, control diagrams).

Contributors to this part of the Roadmap need not be RTT developers, but domain experts that have become power users of the RTT infrastructure!

iTaSC and beyond

A usable robotics control system consists, of course, not only of RTT, BFL, KDL and/or SCL components: there is an obvious need for a task primitive, the brain that contains all the knowledge about when to use which component, with what configuration, and until what conditions are satisfied.

As a first step, the instantaneous version of a constraint-based optimization approach to task-level control will be provided. Following steps will extend the instantaneous idea towards non-instantaneous tasks. This extension must be focused on tasks that require realtime performance, since non-realtime solutions are provided by other projects, such as ROS.

Contributors to this part of the Roadmap need not be RTT developers, but domain experts that also happen to be average users of the RTT infrastructure! They will open up the functionalities of Orocos to the normal end-user.

Tooling

More and improved tools have been a major feature of the 2.x evolution. The major tooling effort for 3.x will be to bring the above-mentioned component model into the Eclipse eco-system.

The first efforts in this direction have started, in the context of the European project BRICS.

Contributors to this part of the Roadmap need not be RTT developers, but programmers familiar with advanced Eclipse features, such as ecore models, EMF, etc.

European Robotics Forum 2011 Workshop on the Orocos Toolchain

At the European Robotics Forum 2011 Intermodalics, Locomotec and K.U.Leuven are organizing a two-part seminar, appealing to both industry and research institutes, titled:

  1. Real-Time Robotics with state-of-the-art open source software: case studies (45min presentation, open to all)
  2. Exploring the Orocos Toolchain (2 hours hands-on, registration required)

The session will be on April 7, 9h00-10h30 + 11h00-12h30

Remaining seats : 0 out of 20 (last update: 06/04/2011)

Real-Time Robotics with state-of-the-art open source software: case studies

In this presentation, Peter Soetens and Ruben Smits introduce the audience to today's Open Source robotics eco-system. What are the strong and weak points of existing software? Which packages work seamlessly together, and on which operating systems (Windows, Linux, VxWorks, ...)? We will prove our statements with practical examples from both academic and industrial use cases. This presentation is the result of the presenters' long-standing experience with open source technologies in robotics applications and will offer the audience leads and insights to further explore this realm.

Exploring the Orocos Toolchain

In this hands-on session, the participants are invited to bring their own laptop with Orocos and (optionally) ROS installed. We will support Linux, Mac OS-X and Windows users and will provide instructions on how they can prepare to participate. A real and simulated YouBot will be used.

We will let the participants experience the Orocos toolchain hands-on.

If you'll be using the bootable USB sticks prepared by the organisers, you can skip all installation instructions and directly start the assignment at https://github.com/bellenss/euRobotics_orocos_ws/wiki

If you are attending the hands-on session, you can bring your own computer. Depending on your operating system, you should install the necessary software using the following installation instructions:

The workshop will start with making you familiar with the Orocos Toolchain, which does not require the YouBot. The hands-on will then continue on a robot in simulation and on the real hardware. We will use the ROS communication protocol to send instructions to the simulator (Gazebo) or to the YouBot. Installing Gazebo is not required, since the simulation will run on a dedicated machine. Documentation on the workshop application and the assignment can be found at https://github.com/bellenss/euRobotics_orocos_ws/wiki.

Registration

You first need to register for attending the euRobotics Forum. Registration for the workshop is mandatory, but free of charge. For the hands-on session, we will limit the number of participants to 20. The workshop is guided by 6 experienced Orocos users. Please register your participation by sending an email to info at intermodalics dot eu. We will confirm your participation at short notice. Later on, you will receive a second email with more details about how to prepare. You should receive this second, detailed email in the week of March 20, 2011.

euRobotics Forum Linux Setup

Toolchain Installation

The installation instructions depend on whether you have ROS installed or not.

NOTE: ROS is required to participate in the YouBot demo.

With ROS on Ubuntu Lucid/Maverick

Install Diamondback ROS using Debian packages for Ubuntu Lucid (10.04) and Maverick (10.10), or using the ROS install scripts in case you don't run Ubuntu.

  • Install the ros-diamondback-orocos-toolchain-ros Debian package, version 0.3.1 or later: apt-get install ros-diamondback-orocos-toolchain-ros. After this step, proceed to Section 2: Workshop Sources below.

With ROS on Debian, Fedora or other systems

  • We did not succeed in releasing the Diamondback 0.3.0 binary packages of the Orocos Toolchain for your platform. This means that you need to build this 'stack' yourself with 'rosmake' after you have installed ROS (see http://www.ros.org/wiki/diamondback/Installation). This 'rosmake' step may take about 30 minutes to an hour, depending on your laptop.

Instructions after ROS is installed:

source /opt/ros/diamondback/setup.bash
mkdir ~/ros
cd ~/ros
export ROS_PACKAGE_PATH=$HOME/ros:$ROS_PACKAGE_PATH
git clone http://git.mech.kuleuven.be/robotics/orocos_toolchain_ros.git
cd orocos_toolchain_ros
git checkout -b diamondback origin/diamondback
git submodule init
git submodule update --recursive
rosmake --rosdep-install orocos_toolchain_ros

NOTE: setting the ROS_PACKAGE_PATH is mandatory for each shell that will be used. It's a good idea to add the export ROS_PACKAGE_PATH line above to your .bashrc file (or equivalent).
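A sketch of making that setting persistent. It writes to a temporary file here so it is safe to run as-is; on your machine, replace "$rc" with your actual ~/.bashrc:

```shell
# Append the export line to a shell startup file (temp file stands in
# for ~/.bashrc in this sketch).
rc=$(mktemp)
echo 'export ROS_PACKAGE_PATH=$HOME/ros:$ROS_PACKAGE_PATH' >> "$rc"
grep ROS_PACKAGE_PATH "$rc"    # confirm the line was added
```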

Without ROS

  • Run the bootstrap.sh (version 2.3.1) script in an empty directory.

Workshop Sources

An additional package is being prepared that will contain the workshop files. See euRobotics Workshop Sources.

euRobotics Forum Mac OS-X Setup

Toolchain Installation

Due to a dynamic library issue in the current 2.3 release series, Mac OS-X cannot be supported during the workshop. We will make available a bootable USB stick containing a pre-installed Ubuntu environment with all necessary packages.

euRobotics Forum Windows Setup

Toolchain Installation

Windows users can participate in the first part of the hands-on, where Orocos components are created and used. Be aware that installing the Orocos Toolchain on Win32 platforms may take a full day in case you are not familiar with CMake, compiling Boost, or the other dependencies of the RTT and OCL.

Requirements:

  • Visual Studio 2005 or 2008
  • CMake 2.6.3 or newer (2.8.x works too)
  • Boost C++ 1.40.0
  • Readline for Windows (see Taskbrowser with readline on Windows)
  • Cygwin installation (default setup is fine)

See the Compiling on Windows with Visual Studio wiki page for instructions. The TAO/Corba part is not required to participate in the workshop.

You need to follow the instructions for RTT/OCL v2.3.1 or newer, which you can download from the Orocos Toolchain page. We recommend building in Release mode.

In case you have no time nor the experience to set this up, we provide bootable USB sticks that contain Ubuntu Linux with all workshop files.

Workshop Sources

An additional package is being prepared that will contain the workshop files. See euRobotics Workshop Sources for downloading the sources.

Windows users might also want to install Kst, a KDE plotting program. We provide a .kst file for plotting the workshop data. See the Kst download page.

Testing Your Setup

In case you completed building and installing RTT and OCL, you can launch a Cygwin or cmd.exe prompt and run the orocreate-pkg script (located in your c:\orocos\bin directory) to create a new package. Make sure that your PATH variable is properly extended with
set PATH=%PATH%;c:\orocos\bin;c:\orocos\lib;c:\orocos\lib\orocos\win32;c:\orocos\lib\orocos\win32\plugins
(replace c:\orocos with the actual installation path, which might also be c:\Program Files\orocos)

Repeat the classical CMake steps with this package: generate the Solution file, then build and install it. Then start up the deployer with the deployer-win32.exe program and type 'ls'. It should start and show meaningful information. If you see strange characters in the output, you need to turn off the colors with the '.nocolors' command at the Deployer's prompt.

euRobotics Workshop Material

The euRobotics Forum workshop on Orocos has been a great success. About 30 people attended and participated in the hands-on workshop. The Real-Time & Open Source in Robotics track drew more than 60 people. Both tracks were overbooked.

You can find all presentation material in PDF form below.

euRobotics Workshop Sources

There are two ways you can get the sources for the workshop:

  • Using Git
  • Using a zip file

Since the sources are still evolving, it might be necessary to update your version before the workshop.

Hello World application

The first, ROS-independent part uses the classical hello-world examples from the rtt-exercises package.

You can either check it out with

mkdir ~/ros
cd ~/ros
git clone git://gitorious.org/orocos-toolchain/rtt_examples
cd rtt_examples/rtt-exercises

Or you can download the examples from here. You need at least version 2.3.1 of the exercises.

If you're not using ROS, you can download/unzip it in a directory other than ~/ros.

Youbot demo application

The hands-on session involves working on a demo application with a YouBot robot. The application allows you to

  • Drive the YouBot around based on pose estimates using laser scan measurements
  • Simulate the YouBot on your own computer

The YouBot demo application is available at https://github.com/bellenss/euRobotics_orocos_ws (this is still work in progress and will be updated regularly).

You can either check it out with

mkdir ~/ros
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$HOME/ros
cd ~/ros
git clone http://robotics.ccny.cuny.edu/git/ccny-ros-pkg/scan_tools.git
git clone http://git.mech.kuleuven.be/robotics/orocos_bayesian_filtering.git
git clone http://git.mech.kuleuven.be/robotics/orocos_kinematics_dynamics.git
git clone git://github.com/bellenss/euRobotics_orocos_ws.git
roscd youbot_supervisor
rosmake --rosdep-install

Check that ~/ros is in your ROS_PACKAGE_PATH environment variable at all times by also adding the export line above to your .bashrc file.
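A quick sanity check, sketched with an example value for ROS_PACKAGE_PATH (in a real shell the variable would already be set by your setup):

```shell
# Report whether ~/ros appears as a component of ROS_PACKAGE_PATH.
ROS_PACKAGE_PATH="$HOME/ros:/opt/ros/diamondback/stacks"   # example value
case ":$ROS_PACKAGE_PATH:" in
    *":$HOME/ros:"*) echo "~/ros is on the package path" ;;
    *)               echo "~/ros is MISSING from the package path" ;;
esac
```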

Testing your setup

Here are some instructions to see if you're running a system usable for the workshop.

Non-ROS users

You have built RTT with the 'autoproj' tool. This tool generated an 'env.sh' file. You need to source that file in each terminal where you want to build or run an Orocos application:

cd orocos-toolchain
source env.sh

Next, cd to the rtt-exercises directory that you unpacked, enter hello-1-task-execution, and type make:

cd rtt-exercises-2.3.0/hello-1-task-execution
make all
cd build
./HelloWorld-gnulinux

ROS users

You have built RTT from the orocos_toolchain_ros package. Make sure that you source the /opt/ros/diamondback/setup.bash script and that the unpacked exercises are under a directory of the ROS_PACKAGE_PATH:

cd rtt-exercises-2.3.0
source /opt/ros/diamondback/setup.bash
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$(pwd)

Next, go into an example directory and type make:

cd hello-1-task-execution
make all
./HelloWorld-gnulinux

Testing the YouBot Demo

After you have built the youbot_supervisor, you can test the demo by opening two consoles and running the following in them:

First console:

roscd youbot_supervisor
./simulation.sh

Second console:

roscd youbot_supervisor
./changePathKst   # you only have to do this once after installation
kst plotSimulation.kst

If you do not have 'kst', install it with: sudo apt-get install kst kst-plugins

European Robotics Forum 2012: workshops

At the European Robotics Forum 2012 KU Leuven and Intermodalics are organizing a three-part seminar, appealing to both industry and research institutes, titled:

  1. Introduction to state charts and reusable, modular task specification through the Orocos eco-system
  2. Hands-on1: getting started with state charts in the Orocos eco-system
  3. Hands-on2: getting started with instantaneous motion specification using constraints (iTaSC): reusable and modular task specification

The sessions will be on March 6, 8h30-10h30 + 11h00-12h30 + 13h30-15h00 (Track four). For more details, consult the European Robotics Forum program.

Remaining seats: (last update: March 2, 2012)
  • Hands-on 1: 0 out of 20
  • Hands-on 2: 0 out of 20
  • We're fully booked, but don't be shy to come and peek or sit in, although we can't guarantee you a table or a chair!

    (Information on last year's workshop can be found here.)

    Registration

    You first need to register for attending the euRobotics Forum. Registration for the workshop is mandatory, but free of charge. For the hands-on sessions (hands-on 1 and hands-on 2), we will limit the number of participants to 20. The workshops are guided by different experienced Orocos users. Please register your participation by sending an email to info at intermodalics dot eu, indicating which workshops you want to attend. We will confirm your participation at short notice. Later on, you will receive a second email with more details about how to prepare. You should receive this second, detailed email in the week of February 27, 2012.

    Motivation and objective

    The workshop consists of three rather independent parts. It is advised but not required to follow the preceding session(s) when attending session two or three.
    1. The first session is a presentation session; it introduces the basic concepts of Orocos application programming, followed by rFSM state charts and the iTaSC framework.
    2. The second session is a hands-on session that aims at making the participants familiar with rFSM state charts, a powerful yet easy-to-use tool for robotic coordination and supervision tasks.
    3. The third session is also a hands-on session; it introduces the concepts of constraint-based motion specification using the iTaSC framework. This framework and its software implementation were developed at the KU Leuven during the past years. Its key advantages are the composability of (partial) constraints and the reusability of the constraint specification. The software is an open-source project, which has recently reached its 2.0 version.

    Approach

    1. Presentation session, giving a high-level overview of rFSM and iTaSC by introducing the key concepts.
    2. Hands-on session: guided exercise where the participants will have to create an application with interacting state machines, that can be used for example to coordinate the behavior of the iTaSC application of the following session.
    3. Hands-on session: guided exercise where the participants will have to create an application consisting of multiple tasks on a robot in simulation, e.g. drawing a figure on a table and avoiding a moving obstacle with a Kuka YouBot.

    Feedback form

    Participant feedback is greatly appreciated. Please fill in the feedback form. Some browsers/pdf viewers do not support in-browser usage of the form. To avoid problems, please download the form first.

    Presentations

    iTaSC hands-on session

    Attachments:
    • RTT-Overview.pdf (1.67 MB)
    • erf_itasc_theory_opt.pdf (526.82 KB)

    Installation instructions

    Ubuntu Installation with ROS

    Installation

    • Install ROS Electric using the Debian packages for Ubuntu Lucid (10.04) or later. In case you don't run Ubuntu, you can use the ROS install scripts; see the ROS installation instructions.
      • Make sure the following Debian packages are installed: ros-electric-rtt-ros-integration ros-electric-rtt-ros-comm ros-electric-rtt-geometry ros-electric-rtt-common-msgs ros-electric-pr2-controllers ros-electric-pr2-simulator ruby
    • Create a directory in which you want to install all the workshop sources (for instance erf)

    mkdir ~/erf

    • Add this directory to your $ROS_PACKAGE_PATH

    export ROS_PACKAGE_PATH=~/erf:$ROS_PACKAGE_PATH

    • Get rosinstall

    sudo apt-get install python-setuptools
    sudo easy_install -U rosinstall

    • Get the workshop's rosinstall file. Save it as erf.rosinstall in the erf folder.
    • Run rosinstall

    rosinstall ~/erf erf.rosinstall /opt/ros/electric

    • As rosinstall tells you, source the setup script

    source ~/erf/setup.bash

    • Install all dependencies (ignore warnings)

    rosdep install itasc_examples
    rosdep install rFSM

    • Compile the workshop sources

    rosmake itasc_examples

    Setup

    • Add the following functions in your $HOME/.bashrc file:

    useERF(){
        source $HOME/erf/setup.bash;
        source $HOME/erf/setup.sh;
        source /opt/ros/electric/stacks/orocos_toolchain/env.sh;
        setLUA;
    }
     
    setLUA(){
        if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
        if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
        export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find youbot_master_rtt`/lua/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
        export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
        export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
    }
     
    useERF

    Running the demo

    Gazebo simulation

    • Open a terminal and go to the itasc_erf2012_demo package:

    roscd itasc_erf2012_demo/
    • Run the script that starts the Gazebo simulator (and two translator topics to communicate with the iTaSC code)

    ./run_gazebo.sh

    • Open another terminal and go to the itasc_erf2012_demo package:

    roscd itasc_erf2012_demo/
    • Run the script that starts the iTaSC application

    ./run_simulation.sh

    Real youBot

    • Make sure that you are connected to the real youBot.
    • Open another terminal and go to the itasc_erf2012_demo package:

    roscd itasc_erf2012_demo/
    • Check the name of the network connection with the robot (for instance eth0) and put this connection name in the youbot_driver cpf file (cpf/youbot_driver.cpf).
    • Run the script that starts the iTaSC application

    ./run.sh

    KDL-Examples

    Some small usage examples.

    Do not hesitate to add your own small examples.

    rfsm-session

    Additional Information on the Practical Session

    Documentation Links

    Other

    • Markus' slides (see below)
    Attachment:
    • pres.pdf (378.19 KB)

    Geometric relations semantics

    Note that this wiki contains a summary of the theoretical article and the software article, both published as tutorials in the IEEE Robotics and Automation Magazine:
    • De Laet T, Bellens S, Smits R, Aertbeliën E, Bruyninckx H, and De Schutter J (2013), Geometric Relations between Rigid Bodies: Semantics for Standardization, IEEE Robotics & Automation Magazine, Vol. 20, No. 1, pp. 84-93.
    • De Laet T, Bellens S, Bruyninckx H, and De Schutter J (2013), Geometric Relations between Rigid Bodies: From Semantics to Software, IEEE Robotics & Automation Magazine, Vol. 20, No. 2, pp. 91-102.

    The geometric relations semantics software (C++) implements the geometric relation semantics theory, thereby offering semantic checks for your rigid-body relation calculations. This avoids common errors, and hence reduces application and, especially, system integration development time considerably. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.

    The screenshot below shows the output of the semantic checks of the (wrong) composition of two positions and two orientations.

    Output of the semantic checks of the (wrong) composition of two positions and two orientations

    The goal of the software is to provide semantic checking for calculations with geometric relations between rigid bodies on top of existing geometric libraries, which only operate on specific coordinate representations. Since there are already many libraries with good support for geometric calculations on specific coordinate representations (the Orocos Kinematics and Dynamics Library, the ROS geometry library, Boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to implementing about six function template specializations.

    What is it?

    This wiki contains a summary of the article accepted as a tutorial for the IEEE Robotics and Automation Magazine on 4 June 2012.

    Rigid bodies are essential primitives in the modelling of robotic devices, tasks and perception, starting with the basic geometric relations such as relative position, orientation, pose, translational velocity, rotational velocity, and twist. This wiki elaborates on the background and the software for the semantics underlying rigid body relationships. This wiki is based on the research of the KU Leuven robotics group, in this case mainly conducted by Tinne De Laet, to explain semantics of all coordinate-invariant properties and operations, and, more importantly, to document all the choices that are made in coordinate representations of these geometric relations. This resulted in a set of concrete suggestions for standardizing terminology and notation, and software with a fully unambiguous software interface, including automatic checks for semantic correctness of all geometric operations on rigid-body coordinate representations.

    The geometric relations semantics software prevents commonly made errors in geometric rigid-body relations calculations like:

    • Logic errors in geometric relation calculations: A lot of logic errors can occur during geometric relation calculations. For instance (there is no need to understand the details just have a look at the difference in syntax), the inverse of $\textrm{Position} \left(e|\mathcal{C}, f |\mathcal{D}\right)$ is $\textrm{Position} \left(f|\mathcal{D}, e |\mathcal{C}\right)$, while the inverse of the translational velocity $\textrm{TranslationVelocity} \left(e|\mathcal{C}, \mathcal{D}\right)$ is $\textrm{TranslationVelocity} \left(e|\mathcal{D}, \mathcal{C}\right)$. When using the semantic representation proposed in this paper, the semantics of the inverse geometric relation can be automatically derived from the forward geometric relation, preventing logic errors. A second example emerges when composing the relations involving three rigid bodies: in order to get the geometric relation of $\mathcal{C}$ with respect to body $\mathcal{D}$ one can compose the geometric relation between $\mathcal{C}$ and third body $\mathcal{E}$ with the geometric relation between body $\mathcal{E}$ and the body $\mathcal{D}$ (and not the geometric relation between the body $\mathcal{D}$ and the body $\mathcal{E}$ for instance). Such a logic constraint can be checked easily by including, for instance, the body and reference body in the semantic representation of the geometric relations.
    • Composition of twists with different velocity reference point: Composing twists requires a common velocity reference point (i.e. the twists have to express the translational velocity of the same point on the body). By including the velocity reference point of the twist in the semantic representation, this constraint can be checked explicitly.
    • Composition of geometric relations expressed in different coordinate frames: Composing geometric relations using coordinate representations like position vectors, translational and rotational velocity vectors, and 6D vector twists, requires that the coordinates are expressed in the same coordinate frame. By including the coordinate frame in the coordinate semantic representation of the geometric relations, this constraint can be checked explicitly.
    • Composition of poses and orientation coordinate representations in wrong order: The rotation matrix and homogeneous transformation matrix coordinate representations can be composed using simple multiplication. Since matrix multiplication is however not commutative, a common error is to use a wrong multiplication order in the composition. The correct multiplication order can however be directly derived when including the bodies, frames, and points in the coordinate semantic representation of the geometric relations.
    • Integration of twists when velocity reference point and coordinate frame do not belong to same frame: A twist can only be integrated when it expresses the translational velocity of the origin of the coordinate frame the twist is expressed in. When including the velocity reference point and the coordinate frame in the coordinate semantic representation of the twist, this constraint can be explicitly checked.

    Background

    This wiki contains a summary of the article accepted as a tutorial for the IEEE Robotics and Automation Magazine on 4 June 2012.

    Background and terminology

    A rigid body is an idealization of a solid body of infinite or finite size in which deformation is neglected. We often abbreviate “rigid body” to “body”, and denote it by the symbol $\mathcal{A}$. A body in three-dimensional space has six degrees of freedom: three degrees of freedom in translation and three in rotation. The subspace of all body motions that involve only changes in the orientation is often denoted by SO(3) (the Special Orthogonal group in three-dimensional space). It forms a group under the operation of composition of relative motion. The space of all body motions, including translations, is denoted by SE(3) (the Special Euclidean group in three-dimensional space).

    A general six-dimensional displacement between two bodies is called a (relative) pose: it contains both the position and orientation. Note that the position, orientation, and pose of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative position, orientation, and pose between two bodies are relevant geometric relations.

    A general six-dimensional velocity between two bodies is called a (relative) twist: it contains both the rotational and the translational velocity. Similar to the position, orientation, and pose, the translational velocity, rotational velocity, and twist of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative translational velocity, rotational velocity, and twist between two bodies are relevant geometric relations.

    When doing actual calculations with the geometric relations between rigid bodies, one has to use the coordinate representation of the geometric relations, and therefore has to choose a coordinate frame in which the coordinates are expressed in order to obtain numerical values for the geometric relations.

    Semantics

    Geometric primitives

    The geometric relations between bodies are described using a set of geometric primitives:
    • A (spatial) point is the primitive to represent the position of a body. Points have neither volume, area, length, nor any other higher dimensional analogue. We denote points by the symbols $a$, $b$, ...
    • A vector is the geometric primitive that connects a point $a$ to a point $b$. It has a magnitude (the straight-line distance between the two points), and a direction (from $a$ to $b$). To express the magnitude of a vector, a (length) scale must be chosen.
    • An orientation frame represents an orientation, by means of three orthonormal vectors indicating the frame’s X-axis $X$, Y-axis $Y$ , and Z-axis $Z$. We denote orientation frames by the symbols $\left[a\right]$, $\left[b\right]$, ...
    • A (displacement) frame represents position and orientation of a body, by means of an orientation frame and a point (which is the orientation frame’s origin). We denote frames by the symbols $\left\{a\right\}$, $\left\{b\right\}$, ...

    Each of these geometric primitives can be fixed to a body, which means that the geometric primitive coincides with the body not only instantaneously, but also over time. For the point $a$ and the body $\mathcal{C}$ for instance, this is written as $a|\mathcal{C}$. The figure below presents the geometric primitives body, point, vector, orientation frame, and frame graphically.

    Geometric Primitives

    Geometric relations

    The table below summarizes the semantics for the following geometric relations between rigid bodies: position, orientation, pose, translational velocity, rotational velocity, and twist.

    Geometric relations

    Force, Torque, and Wrench

    Screw theory, the algebra and calculus of pairs of vectors that arise in the kinematics and dynamics of rigid bodies, shows the duality between wrenches, consisting of the torque and force vectors, and twists, consisting of translational and rotational velocity vectors. The parallelism between translational, rotational velocity, and twist on the one hand, and torque, force, and wrench on the other hand, is directly reflected in the semantic representation (see the table below) and the coordinate representations.

    Geometric relations force, torque, and wrench

    Design

    The software implements the geometric relation semantics, thereby offering semantic checks for your rigid-body relations. This avoids common errors, and hence reduces application (and, especially, system integration) development time considerably. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.

    The design idea

    The goal of the geometric_relations_semantics library is to provide semantic checking for calculations with geometric relations between rigid bodies on top of existing geometric libraries, which only operate on specific coordinate representations. Since there are already many libraries with good support for geometric calculations on specific coordinate representations (the Orocos Kinematics and Dynamics Library, the ROS geometry library, Boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to implementing about six function template specializations.

    For the semantic checking, we created the (templated) geometric_semantics core library, providing all the necessary semantic support for geometric relations (relative positions, orientations, poses, translational velocities, rotational velocities, twists, forces, torques, and wrenches) and the operations on these geometric relations (composition, integration, inversion, ...).

    If you want to perform actual geometric relation calculations, you need particular coordinate representations (for instance a homogeneous transformation matrix for a pose) and a geometric library offering support for calculations on these coordinate representations (for instance multiplication of homogeneous transformation matrices). To this end, you can build your own library, depending on the geometric_semantics core library, in which you implement a limited number of functions that connect semantic operations (for instance composition) to actual coordinate-representation calculations (for instance multiplication of homogeneous transformation matrices). We already provide support for two geometric libraries, the Orocos Kinematics and Dynamics Library and the ROS geometry library, in the geometric_semantics_kdl and geometric_semantics_tf libraries, respectively.

    The design

    For every geometric relation (position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench) the geometric_semantics library contains four classes. Here we will explain the design with the position geometric relation, but all other geometric relations have a similar design. For the position geometric relation there are four classes:
    • PositionSemantics: This class contains the semantics of the (coordinate-free) Position geometric relation. For instance in this case it contains the information on the point, reference point, body, and reference body.
    • PositionCoordinatesSemantics: This class contains a PositionSemantics object of the geometric relation at hand, plus the extra semantic information needed for the coordinate representation, i.e. the coordinate frame in which the coordinates are expressed.
    • PositionCoordinates: This templated class contains the actual coordinate representation of the geometric relation, for instance a position vector for the position geometric relation. The template is the actual geometry object (of an external library) you will use as a coordinate representation, for instance a KDL::Vector.
    • Position: This templated class is a composition of a PositionCoordinatesSemantics object and a PositionCoordinates object. In case you want both semantic support and want to do actual geometric calculations, this is the level you will work at.

    Again, the template is the actual geometry (of an external library) you will use as a coordinate representation, for instance a KDL::Vector.

    The above described design is illustrated by the figure below.

    Position geometric relation design

    Note that all four of the above 'levels' are of actual use:

    • PositionSemantics: to do coordinate-free semantic checking (without actual geometric calculations);
    • PositionCoordinatesSemantics: to do semantic checking involving coordinate systems (without actual geometric calculations);
    • PositionCoordinates: to do the actual geometric calculations; and
    • Position: to do both semantic checking and the actual geometric calculations.

    Pose, Twist, and Wrench

    We need to give some extra information on the pose, twist, and wrench geometric relations, since each can be represented either as a composition of two other geometric relations (Pose = Position + Orientation, Twist = TranslationalVelocity + RotationalVelocity, Wrench = Force + Torque) or as a new geometric relation in its own right. For example, we may want to use a homogeneous transformation matrix as a coordinate representation of a pose, and in this case we would also want, for efficiency reasons, to do direct calculations on the homogeneous transformation matrices. In another case, we may want to represent the pose as the composition of a position (with for instance a position vector as a coordinate representation) and an orientation (with for instance Euler angles as a coordinate representation). The software allows both designs, as illustrated in the two figures below.

    Pose geometric relation design as a basic geometric relation

    Pose geometric relation design as a composition of a Position and Orientation geometric relation

    The software structure and content

    Core library: geometric_semantics

    KDL support: geometric_semantics_kdl

    ROS tf support: geometric_semantics_tf

    ROS messages: geometric_semantics_msgs

    ROS message conversions: geometric_semantics_msgs_conversions

    ROS tf messages support: geometric_semantics_tf_msgs

    ROS tf message conversions: geometric_semantics_tf_msgs_conversions

    Examples: geometric_semantics_examples

    Quick start

    Overview

    The framework follows an Orocos/ROS approach and consists of one stack:
    • geometric_relations_semantics.

    This stack consists of the following packages:

    • geometric_semantics: geometric_semantics is the core of the geometric_relations_semantics stack and provides C++ code for the semantic support of geometric relations between rigid bodies (relative position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench). If you want to use semantic checking for the geometric relation operations between rigid bodies in your application, check the geometric_semantics_examples package. If you want to create support for your own geometry types on top of the geometric_semantics package, the geometric_semantics_kdl package provides a good starting point.
    • geometric_semantics_examples: geometric_semantics_examples groups some examples showing how geometric_semantics can be used to provide semantic checking for the geometric relations between rigid bodies in your application.
    • geometric_semantics_orocos_typekit: geometric_semantics_orocos_typekit provides Orocos typekit support for the geometric_semantics types, such that the geometric semantics types are visible within Orocos (in the TaskBrowser component, in Orocos scripts, in reporting, and when reading from and writing to files (for instance for properties), ...).
    • geometric_semantics_orocos_typekit_kdl: geometric_semantics_orocos_typekit_kdl provides Orocos typekit support for geometric semantics coordinate representations using KDL types.
    • geometric_semantics_msgs: geometric_semantics_msgs provides ROS messages matching the C++ types defined in the geometric_semantics package, in order to preserve semantic information during message-based communication.
    • geometric_semantics_msgs_conversions: geometric_semantics_msgs_conversions provides conversions between geometric_semantics_msgs and the C++ geometric_semantics types defined in the geometric_semantics package.
    • geometric_semantics_kdl: geometric_semantics_kdl provides support for orocos_kdl types on top of the geometric_semantics package (for instance KDL::Frame to represent the relative pose of two rigid bodies). If you want to create support for your own geometry types on top of the geometric_semantics package, this package provides a good starting point.
    • geometric_semantics_tf: geometric_semantics_tf provides support for tf data types (see http://www.ros.org/wiki/tf/Overview/Data%20Types) on top of the geometric_semantics package (for instance tf::Pose to represent the relative pose of two rigid bodies).
    • geometric_semantics_tf_msgs: geometric_semantics_tf_msgs provides ROS messages matching the C++ types defined in the geometric_semantics_tf package, in order to preserve semantic information for tf types during message-based communication.
    • geometric_semantics_tf_msgs_conversions: geometric_semantics_tf_msgs_conversions provides conversions between geometric_semantics_tf_msgs and the C++ geometric_semantics_tf types defined in the geometric_semantics_tf package.

    Each package contains the following subdirectories:

    • src/ containing the source code of the components (mainly C++, or Python for the ROS message support).

    Installation instructions

    Warning: so far we only provide support for Linux-based systems. For Windows or Mac you are on your own, but we are always interested in your experiences and in extensions of the installation instructions, quick start guide, and user guide.

    Dependencies

    Compiling from source

    • First, get the sources from git:

    git clone https://gitlab.mech.kuleuven.be/rob-dsl/geometric-relations-semantics.git geometric_relations_semantics
    • Go into the geometric_relations_semantics directory:

    cd geometric_relations_semantics
    • Add this directory to your ROS_PACKAGE_PATH environment variable using:

    export ROS_PACKAGE_PATH=$PWD:$ROS_PACKAGE_PATH
    • Install the dependencies and build the library using:

    rosdep install geometric_relations_semantics
    rosmake geometric_relations_semantics
    • Everything should compile out of the box, and you are now ready to start using geometric relation semantics support.
    • If you want to run the tests of a package, use for instance (for the geometric_semantics core package):

    roscd geometric_semantics
    make test

    Setup

    It is strongly recommended that you add the geometric_relations_semantics directory to your ROS_PACKAGE_PATH in your .bashrc file.

    (Re)building the stack or individual packages

    • To build the geometric_relations_semantics stack use:

    rosmake geometric_relations_semantics
    • You can also (re)build any package individually using:

    rosmake PACKAGE_NAME

    Running the tests

    • If you want to run the tests of a package, first go to that package, for instance (for the geometric_semantics core package):

    roscd geometric_semantics
    • And next make and run the tests:

    make test

    User guide

    If you are looking for installation instructions you should read the quick start.

    Setting the build options of the core library

    You can customize the behavior of the semantic checking (checking or not, and screen output or not) by changing the build options of the geometric_semantics library (see the CMakeLists.txt of the geometric_semantics package):
    • add_definitions(-DCHECK): when using this build flag, the semantic checking will be enabled.
    • add_definitions(-DOUTPUT_CORRECT): when using this build flag, you will get screen output for operations that are semantically correct.
    • add_definitions(-DOUTPUT_WRONG): when using this build flag, you will get screen output for operations that are semantically wrong.

    Using the geometric relations semantics in your own application

    Here we will explain how you can use the geometric relations semantics in your application, in particular using the Orocos Kinematics and Dynamics library as a geometry library, supplemented with the semantic support.

    Preparing your own application using the ROS-build system

    • Create a new ROS package (in this case with name myApplication), with a dependency on the geometric_semantics_kdl:

    roscreate-pkg myApplication geometric_semantics_kdl

    This will automatically create a directory named myApplication with a basic build infrastructure (see the roscreate-pkg documentation)

    • Add the newly created directory to your ROS_PACKAGE_PATH environment variable:

    cd myApplication
    export ROS_PACKAGE_PATH=$PWD:$ROS_PACKAGE_PATH

    Writing your own application

    • Go to the application directory:

    roscd myApplication

    • Create a main C++ file

    touch myApplication.cpp

    • Edit the C++ file with your favorite editor
      • Include the necessary headers. For instance:

    #include <Pose/Pose.h>
    #include <Pose/PoseCoordinatesKDL.h>

      • It can be convenient to use the geometric_semantics namespace and, for instance, that of your geometry library (in this case KDL):

    using namespace geometric_semantics;
    using namespace KDL;

      • In your main function, create the necessary geometric relations. For instance, for a pose, first create the KDL coordinates:

    Rotation coordinatesRotB2_B1 = Rotation::EulerZYX(M_PI,0,0);
    Vector coordinatesPosB2_B1(2.2,0,0);
    KDL::Frame coordinatesFrameB2_B1(coordinatesRotB2_B1,coordinatesPosB2_B1);

    Then use these KDL coordinates to create a PoseCoordinates object:

    PoseCoordinates<KDL::Frame> poseCoordB2_B1(coordinatesFrameB2_B1);

    Then create a Pose object using both the semantic information and the PoseCoordinates:

    Pose<KDL::Frame> poseB2_B1("b2","b2","B2","b1","b1","B1","b1",poseCoordB2_B1);

      • Now you are ready to do actual calculations using semantic checking. For instance to take the inverse:

    Pose<KDL::Frame> poseB1_B2 = poseB2_B1.inverse();

    Building your own application

    • To build your application, edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

    rosbuild_add_executable(myApplication myApplication.cpp)

    • Now you are ready to build, so type

    rosmake myApplication

    and the executable will be created in the bin directory.

    • To run the executable do:

    bin/myApplication

    You will get the semantic output on your screen.

    Extending your geometry library with semantic checking

    Imagine you have your own geometry library with support for geometric relation coordinate representations and calculations on these coordinate representations. However, you would like to have semantic support on top of this geometry library. Probably the best thing to do in this case is to mimic our support for the Orocos Kinematics and Dynamics Library. To have a look at it, do:

    roscd geometric_semantics_kdl/

    Template specialization

    The only thing you have to do is write template specializations. For instance, to get support for KDL::Rotation, which is a coordinate representation for an Orientation geometric relation, you write the template specialization of OrientationCoordinates<T>, i.e. OrientationCoordinates<KDL::Rotation>.

    Semantic constraints invoked by your coordinate representations

    The first thing to find out is which semantic constraints are invoked by the particular coordinate representation you use. For instance a KDL::Rotation represents a 3x3 rotation matrix and invokes the semantic constraint that the reference orientation frame is equal to the coordinate frame.

    The possible semantic constraints are listed in the *Coordinates.h files in the geometric_semantics core library. For OrientationCoordinates, for instance, we find an enumeration of the different possible semantic constraints imposed by Orientation coordinate representations:

    /**
     * \brief Constraints imposed by the orientation coordinate representation to the semantics
     */
    enum Constraints{
        noConstraints = 0x00,
        coordinateFrame_equals_referenceOrientationFrame = 0x01, // constraint that the orientation frame on the reference body has to be equal to the coordinate frame
    };

    You should specify this constraint when writing the template specialization of OrientationCoordinates&lt;KDL::Rotation&gt;:

        // template specialization for KDL::Rotation
        template <>
        OrientationCoordinates<KDL::Rotation>::OrientationCoordinates(const KDL::Rotation& coordinates):
            data(coordinates),
            constraints(coordinateFrame_equals_referenceOrientationFrame){}

    Specializing other functions to do actual coordinate calculations

    The other function template specializations specify the actual coordinate calculations that have to be performed for semantic operations like inverse, changing the coordinate frame, changing the orientation frame, ... For instance, to specialize the inverse for KDL::Rotation coordinate representations:

        template <>
        OrientationCoordinates<KDL::Rotation> OrientationCoordinates<KDL::Rotation>::inverse2Impl() const
        {
            return OrientationCoordinates<KDL::Rotation>(this->data.Inverse());
        }

    Tutorials

    Setting up a package and the build system for your application

    This tutorial explains one possibility for setting up a build system for your application using the geometric_relations_semantics. The approach we explain uses the ROS package and build infrastructure, and therefore assumes you have ROS installed and set up on your computer.

    • Create a new ROS package (in this case with name myApplication), with a dependency on the geometric_semantics library and for instance the geometric_semantics_kdl library:

    roscreate-pkg myApplication geometric_semantics geometric_semantics_kdl

    This will automatically create a directory with name myApplication and a basic build infrastructure (see the roscreate-pkg documentation).

    • Add the newly created directory to your ROS_PACKAGE_PATH environment variable:

    export ROS_PACKAGE_PATH=myApplication:$ROS_PACKAGE_PATH

    Your first application using semantic checking on geometric relations (without coordinate checking)

    This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

    In this tutorial we first explain how you can create basic semantic objects (without coordinates and coordinate checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.

    Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

    Prepare the main file

    • Go to the directory of our first application using:

    roscd myApplication
    • Create a main file (in this tutorial called myFirstApplication.cpp) in which we will put the code of our first application.

    touch myFirstApplication.cpp
    • Edit the C++ file with your favorite editor. For instance:

    vim myFirstApplication.cpp
    • Include the necessary headers.

    #include <Position/PositionSemantics.h>
    #include <Orientation/OrientationSemantics.h>
    #include <Pose/PoseSemantics.h>
    #include <LinearVelocity/LinearVelocitySemantics.h>
    #include <AngularVelocity/AngularVelocitySemantics.h>
    #include <Twist/TwistSemantics.h>
    #include <Force/ForceSemantics.h>
    #include <Torque/TorqueSemantics.h>
    #include <Wrench/WrenchSemantics.h>

    • Next we use the geometric_semantics namespace for convenience:

    using namespace geometric_semantics;

    • Create a main program:

    int main (int argc, const char* argv[])
    {
    // Here comes the code of our first application
    }

    Building your first application

    • To build your application you should edit the CMakeLists.txt file created in your application directory.

    vim CMakeLists.txt

    • Add the C++ main file to be built as an executable by adding the following line:

    rosbuild_add_executable(myFirstApplication myFirstApplication.cpp)

    • Now you are ready to build, so type

    rosmake myApplication

    and the executable will be created in the bin directory.

    • To run the executable do:

    bin/myFirstApplication

    You will get the semantic output on your screen.

    Creating the geometric relations semantics

    • We will start with creating the geometric relation semantics objects for the relation between body C with point a and orientation frame [e], and body D with point b and orientation frame [f]:

        // Creating the geometric relations semantics
        PositionSemantics position("a","C","b","D");
        OrientationSemantics orientation("e","C","f","D");
        PoseSemantics pose("a","e","C","b","f","D");
     
        LinearVelocitySemantics linearVelocity("a","C","D");
        AngularVelocitySemantics angularVelocity("C","D");
        TwistSemantics twist("a","C","D");
     
        TorqueSemantics torque("a","C","D");
        ForceSemantics force("C","D");
        WrenchSemantics wrench("a","C","D");

    Doing semantic operations

    • We can for instance take the inverses of the created semantic geometric relations by:

        //Doing semantic operations with the geometric relations
        // inverting
        PositionSemantics positionInv = position.inverse();
        OrientationSemantics orientationInv = orientation.inverse();
        PoseSemantics poseInv = pose.inverse();
        LinearVelocitySemantics linearVelocityInv = linearVelocity.inverse();
        AngularVelocitySemantics angularVelocityInv = angularVelocity.inverse();
        TwistSemantics twistInv = twist.inverse();
        TorqueSemantics torqueInv = torque.inverse();
        ForceSemantics forceInv = force.inverse();
        WrenchSemantics wrenchInv = wrench.inverse();
    And if we print the inverses, we will see they are semantically correct:
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Inverses: " << std::endl;
        std::cout << "     " << positionInv << " is the inverse of " << position << std::endl;
        std::cout << "     " << orientationInv << " is the inverse of " << orientation << std::endl;
        std::cout << "     " << poseInv << " is the inverse of " << pose << std::endl;
        std::cout << "     " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
        std::cout << "     " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
        std::cout << "     " << twistInv << " is the inverse of " << twist << std::endl;
        std::cout << "     " << torqueInv << " is the inverse of " << torque << std::endl;
        std::cout << "     " << forceInv << " is the inverse of " << force << std::endl;
        std::cout << "     " << wrenchInv << " is the inverse of " << wrench << std::endl;

    • Now we can for instance compose the results with their inverses. Note that the order of the arguments to compose does not matter, since the correct order is automatically derived from the semantic information in the objects.

        //Composing
        PositionSemantics positionComp = compose(position,positionInv);
        OrientationSemantics orientationComp = compose(orientation,orientationInv);
        PoseSemantics poseComp = compose(pose,poseInv);
        LinearVelocitySemantics linearVelocityComp = compose(linearVelocity,linearVelocityInv);
        AngularVelocitySemantics angularVelocityComp = compose(angularVelocity,angularVelocityInv);
        TwistSemantics twistComp = compose(twist,twistInv);
        TorqueSemantics torqueComp = compose(torque,torqueInv);
        ForceSemantics forceComp = compose(force,forceInv);
        WrenchSemantics wrenchComp = compose(wrench,wrenchInv);
    If you execute the program you will get screen output on the semantic correctness of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Composed objects: " << std::endl;
        std::cout << "     " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
        std::cout << "     " << orientationComp << " is the composition of " << orientation << " and " << orientationInv <<  std::endl;
        std::cout << "     " << poseComp << " is the composition of " << pose <<  " and " << poseInv << std::endl;
        std::cout << "     " << linearVelocityComp << " is the composition of " << linearVelocity  << " and " << linearVelocityInv << std::endl;
        std::cout << "     " << angularVelocityComp << " is the composition of " << angularVelocity  << " and " << angularVelocityInv << std::endl;
        std::cout << "     " << twistComp << " is the composition of " << twist <<  " and " << twistInv << std::endl;
        std::cout << "     " << torqueComp << " is the composition of " << torque <<  " and " << torqueInv << std::endl;
        std::cout << "     " << forceComp << " is the composition of " << force <<  " and " << forceInv << std::endl;
        std::cout << "     " << wrenchComp << " is the composition of " << wrench <<  " and " << wrenchInv << std::endl;

    Attachment: myFirstApplication.cpp (4.28 KB)

    Your second application using semantic checking on geometric relations including coordinate checking

    This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

    In this tutorial we first explain how you can create basic semantic objects (without actual coordinates but with coordinate-frame checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.

    Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

    Prepare the main file

    • Prepare a mySecondApplication.cpp main file as explained in this tutorial.
    • Edit the C++ file with your favorite editor. For instance:

    vim mySecondApplication.cpp
    • Include the necessary headers.

    #include <Position/PositionCoordinatesSemantics.h>
    #include <Orientation/OrientationCoordinatesSemantics.h>
    #include <Pose/PoseCoordinatesSemantics.h>
    #include <LinearVelocity/LinearVelocityCoordinatesSemantics.h>
    #include <AngularVelocity/AngularVelocityCoordinatesSemantics.h>
    #include <Twist/TwistCoordinatesSemantics.h>
    #include <Force/ForceCoordinatesSemantics.h>
    #include <Torque/TorqueCoordinatesSemantics.h>
    #include <Wrench/WrenchCoordinatesSemantics.h>

    • Next we use the geometric_semantics namespace for convenience:

    using namespace geometric_semantics;

    • Create a main program:

    int main (int argc, const char* argv[])
    {
    // Here comes the code of our second application
    }

    Building your second application

    • To build your application you should edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

    rosbuild_add_executable(mySecondApplication mySecondApplication.cpp)

    • Now you are ready to build, so type

    rosmake myApplication

    and the executable will be created in the bin directory.

    • To run the executable do:

    bin/mySecondApplication

    You will get the semantic output on your screen.

    Creating the geometric relations coordinates semantics

    • We will start with creating the geometric relation coordinates semantics objects for the relation between body C with point a and orientation frame [e], and body D with point b and orientation frame [f], all expressed in coordinate frame [r]:

        // Creating the geometric relations coordinates semantics
        PositionCoordinatesSemantics position("a","C","b","D","r");
        OrientationCoordinatesSemantics orientation("e","C","f","D","r");
        PoseCoordinatesSemantics pose("a","e","C","b","f","D","r");
     
        LinearVelocityCoordinatesSemantics linearVelocity("a","C","D","r");
        AngularVelocityCoordinatesSemantics angularVelocity("C","D","r");
        TwistCoordinatesSemantics twist("a","C","D","r");
     
        TorqueCoordinatesSemantics torque("a","C","D","r");
        ForceCoordinatesSemantics force("C","D","r");
        WrenchCoordinatesSemantics wrench("a","C","D","r");

    Doing semantic coordinate operations

    • We can for instance take the inverses of the created geometric relation coordinates semantics by:

        //Doing semantic operations with the geometric relations
        // inverting
        PositionCoordinatesSemantics positionInv = position.inverse();
        OrientationCoordinatesSemantics orientationInv = orientation.inverse();
        PoseCoordinatesSemantics poseInv = pose.inverse();
        LinearVelocityCoordinatesSemantics linearVelocityInv = linearVelocity.inverse();
        AngularVelocityCoordinatesSemantics angularVelocityInv = angularVelocity.inverse();
        TwistCoordinatesSemantics twistInv = twist.inverse();
        TorqueCoordinatesSemantics torqueInv = torque.inverse();
        ForceCoordinatesSemantics forceInv = force.inverse();
        WrenchCoordinatesSemantics wrenchInv = wrench.inverse();
    And if we print the inverses, we will see they are semantically correct:
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Inverses: " << std::endl;
        std::cout << "     " << positionInv << " is the inverse of " << position << std::endl;
        std::cout << "     " << orientationInv << " is the inverse of " << orientation << std::endl;
        std::cout << "     " << poseInv << " is the inverse of " << pose << std::endl;
        std::cout << "     " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
        std::cout << "     " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
        std::cout << "     " << twistInv << " is the inverse of " << twist << std::endl;
        std::cout << "     " << torqueInv << " is the inverse of " << torque << std::endl;
        std::cout << "     " << forceInv << " is the inverse of " << force << std::endl;
        std::cout << "     " << wrenchInv << " is the inverse of " << wrench << std::endl;

    • Now we can for instance compose the results with their inverses. Note that the order of the arguments to compose does not matter, since the correct order is automatically derived from the semantic information in the objects.

        //Composing
        PositionCoordinatesSemantics positionComp = compose(position,positionInv);
        OrientationCoordinatesSemantics orientationComp = compose(orientation,orientationInv);
        PoseCoordinatesSemantics poseComp = compose(pose,poseInv);
        LinearVelocityCoordinatesSemantics linearVelocityComp = compose(linearVelocity,linearVelocityInv);
        AngularVelocityCoordinatesSemantics angularVelocityComp = compose(angularVelocity,angularVelocityInv);
        TwistCoordinatesSemantics twistComp = compose(twist,twistInv);
        TorqueCoordinatesSemantics torqueComp = compose(torque,torqueInv);
        ForceCoordinatesSemantics forceComp = compose(force,forceInv);
        WrenchCoordinatesSemantics wrenchComp = compose(wrench,wrenchInv);
    If you execute the program you will get screen output on the semantic correctness (and note: in this case also incorrectness) of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Composed objects: " << std::endl;
        std::cout << "     " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
        std::cout << "     " << orientationComp << " is the composition of " << orientation << " and " << orientationInv <<  std::endl;
        std::cout << "     " << poseComp << " is the composition of " << pose <<  " and " << poseInv << std::endl;
        std::cout << "     " << linearVelocityComp << " is the composition of " << linearVelocity  << " and " << linearVelocityInv << std::endl;
        std::cout << "     " << angularVelocityComp << " is the composition of " << angularVelocity  << " and " << angularVelocityInv << std::endl;
        std::cout << "     " << twistComp << " is the composition of " << twist <<  " and " << twistInv << std::endl;
        std::cout << "     " << torqueComp << " is the composition of " << torque <<  " and " << torqueInv << std::endl;
        std::cout << "     " << forceComp << " is the composition of " << force <<  " and " << forceInv << std::endl;
        std::cout << "     " << wrenchComp << " is the composition of " << wrench <<  " and " << wrenchInv << std::endl;

    Attachment: mySecondApplication.cpp (4.72 KB)

    Your third application doing actual geometric calculations on top of the semantic checking

    This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

    In this tutorial we first explain how you can create full geometric relation objects (with semantics and an actual coordinate representation) and perform operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench. To this end we will use the coordinate representations of the Orocos Kinematics and Dynamics Library. The semantic support on top of this geometry library is already provided by the geometric_semantics_kdl package.

    Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

    Prepare the main file

    • Prepare a myThirdApplication.cpp main file as explained in this tutorial.
    • Edit the C++ file with your favorite editor. For instance:

    vim myThirdApplication.cpp
    • Include the necessary headers

    #include <Position/Position.h>
    #include <Orientation/Orientation.h>
    #include <Pose/Pose.h>
    #include <LinearVelocity/LinearVelocity.h>
    #include <AngularVelocity/AngularVelocity.h>
    #include <Twist/Twist.h>
    #include <Force/Force.h>
    #include <Torque/Torque.h>
    #include <Wrench/Wrench.h>
     
    #include <Position/PositionCoordinatesKDL.h>
    #include <Orientation/OrientationCoordinatesKDL.h>
    #include <Pose/PoseCoordinatesKDL.h>
    #include <LinearVelocity/LinearVelocityCoordinatesKDL.h>
    #include <AngularVelocity/AngularVelocityCoordinatesKDL.h>
    #include <Twist/TwistCoordinatesKDL.h>
    #include <Force/ForceCoordinatesKDL.h>
    #include <Torque/TorqueCoordinatesKDL.h>
    #include <Wrench/WrenchCoordinatesKDL.h>
     
    #include <kdl/frames.hpp>
    #include <kdl/frames_io.hpp>

    • Next we use the geometric_semantics namespace and the KDL namespace for convenience:

    using namespace geometric_semantics;
    using namespace KDL;

    • Create a main program:

    int main (int argc, const char* argv[])
    {
    // Here goes the code of our third application
    }

    Building your third application

    • To build your application you should edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

    rosbuild_add_executable(myThirdApplication myThirdApplication.cpp)

    • Now you are ready to build, so type

    rosmake myApplication

    and the executable will be created in the bin directory.

    • To run the executable do:

    bin/myThirdApplication

    You will get the semantic output on your screen.

    Creating the geometric relations

    • We will start with creating the geometric relation objects for the relation between body C with point a and orientation frame [e], and body D with point b and orientation frame [f], expressed in coordinate frame [r] where possible, together with their coordinate representations using KDL types. (Note that KDL::Rotation constrains the coordinate frame to equal the reference orientation frame, which is why the orientation and pose below use [f] as coordinate frame instead of [r].)

      // Creating the geometric relations 
     
        // a Position with a KDL::Vector
        Vector coordinatesPosition(1,2,3);
        Position<Vector> position("a","C","b","D","r",coordinatesPosition);
     
        // an Orientation with KDL::Rotation
        Rotation coordinatesOrientation=Rotation::EulerZYX(M_PI/4,0,0);
        Orientation<Rotation> orientation("e","C","f","D","f",coordinatesOrientation);
     
        // a Pose with a KDL::Frame
        KDL::Frame coordinatesPose(coordinatesOrientation,coordinatesPosition);
        Pose<KDL::Frame> pose1("a","e","C","b","f","D","f",coordinatesPose);
     
        // a Pose as aggregation of a Position and an Orientation
        Pose<Vector,Rotation> pose2(position,orientation);
     
        // a LinearVelocity with a KDL::Vector
        Vector coordinatesLinearVelocity(1,2,3);
        LinearVelocity<Vector> linearVelocity("a","C","D","r",coordinatesLinearVelocity);
     
        // an AngularVelocity with a KDL::Vector
        Vector coordinatesAngularVelocity(1,2,3);
        AngularVelocity<Vector> angularVelocity("C","D","r",coordinatesAngularVelocity);
     
        // a Twist with a KDL::Twist
        KDL::Twist coordinatesTwist(coordinatesLinearVelocity,coordinatesAngularVelocity);
        geometric_semantics::Twist<KDL::Twist> twist1("a","C","D","r",coordinatesTwist);
     
        // a Twist as aggregation of a LinearVelocity and an AngularVelocity
        geometric_semantics::Twist<Vector,Vector> twist2(linearVelocity,angularVelocity);
     
        // a Torque with a KDL::Vector
        Vector coordinatesTorque(1,2,3);
        Torque<Vector> torque("a","C","D","r",coordinatesTorque);
     
        // a Force with a KDL::Vector
        Vector coordinatesForce(1,2,3);
        Force<Vector> force("C","D","r",coordinatesForce);
     
        // a Wrench with a KDL::Wrench
        KDL::Wrench coordinatesWrench(coordinatesForce,coordinatesTorque);
        geometric_semantics::Wrench<KDL::Wrench> wrench1("a","C","D","r",coordinatesWrench);
     
        // a Wrench of a Force and a Torque
        geometric_semantics::Wrench<KDL::Vector,KDL::Vector> wrench2(torque,force);

    Doing geometric operations

    • We can for instance take the inverses of the created geometric relation by:

        //Doing operations with the geometric relations
        // inverting
        Position<Vector> positionInv = position.inverse();
        Orientation<Rotation> orientationInv = orientation.inverse();
        Pose<KDL::Frame> pose1Inv = pose1.inverse();
        Pose<Vector,Rotation> pose2Inv = pose2.inverse();
        LinearVelocity<Vector> linearVelocityInv = linearVelocity.inverse();
        AngularVelocity<Vector> angularVelocityInv = angularVelocity.inverse();
        geometric_semantics::Twist<KDL::Twist> twist1Inv = twist1.inverse();
        geometric_semantics::Twist<Vector,Vector> twist2Inv = twist2.inverse();
        Torque<Vector> torqueInv = torque.inverse();
        Force<Vector> forceInv = force.inverse();
        geometric_semantics::Wrench<KDL::Wrench> wrench1Inv = wrench1.inverse();
        geometric_semantics::Wrench<Vector,Vector> wrench2Inv = wrench2.inverse();
    And if we print the inverses, we will see they are semantically correct:
        // print the inverses
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Inverses: " << std::endl;
        std::cout << "     " << positionInv << " is the inverse of " << position << std::endl;
        std::cout << "     " << orientationInv << " is the inverse of " << orientation << std::endl;
        std::cout << "     " << pose1Inv << " is the inverse of " << pose1 << std::endl;
        std::cout << "     " << pose2Inv << " is the inverse of " << pose2 << std::endl;
        std::cout << "     " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
        std::cout << "     " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
        std::cout << "     " << twist1Inv << " is the inverse of " << twist1 << std::endl;
        std::cout << "     " << twist2Inv << " is the inverse of " << twist2 << std::endl;
        std::cout << "     " << torqueInv << " is the inverse of " << torque << std::endl;
        std::cout << "     " << forceInv << " is the inverse of " << force << std::endl;
        std::cout << "     " << wrench1Inv << " is the inverse of " << wrench1 << std::endl;
        std::cout << "     " << wrench2Inv << " is the inverse of " << wrench2 << std::endl;

    • Now we can for instance compose the results with their inverses. Note that the order of the arguments to compose does not matter, since the correct order is automatically derived from the semantic information in the objects.

        //Composing
        Position<Vector> positionComp = compose(position,positionInv);
        Orientation<Rotation> orientationComp = compose(orientation,orientationInv);
        Pose<KDL::Frame> pose1Comp = compose(pose1,pose1Inv);
        Pose<Vector,Rotation> pose2Comp = compose(pose2,pose2Inv);
        LinearVelocity<Vector> linearVelocityComp = compose(linearVelocity,linearVelocityInv);
        AngularVelocity<Vector> angularVelocityComp = compose(angularVelocity,angularVelocityInv);
        geometric_semantics::Twist<KDL::Twist> twist1Comp = compose(twist1,twist1Inv);
        geometric_semantics::Twist<Vector,Vector> twist2Comp = compose(twist2,twist2Inv);
        Torque<Vector> torqueComp = compose(torque,torqueInv);
        Force<Vector> forceComp = compose(force,forceInv);
        geometric_semantics::Wrench<KDL::Wrench> wrench1Comp = compose(wrench1,wrench1Inv);
        geometric_semantics::Wrench<Vector,Vector> wrench2Comp = compose(wrench2,wrench2Inv);

    If you execute the program you will get screen output on the semantic correctness (and note: in this case also incorrectness) of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:

        // print the composed objects
        std::cout << "-----------------------------------------" << std::endl;
        std::cout << "Composed objects: " << std::endl;
        std::cout << "     " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
        std::cout << "     " << orientationComp << " is the composition of " << orientation << " and " << orientationInv <<  std::endl;
        std::cout << "     " << pose1Comp << " is the composition of " << pose1 <<  " and " << pose1Inv << std::endl;
        std::cout << "     " << pose2Comp << " is the composition of " << pose2 <<  " and " << pose2Inv << std::endl;
        std::cout << "     " << linearVelocityComp << " is the composition of " << linearVelocity  << " and " << linearVelocityInv << std::endl;
        std::cout << "     " << angularVelocityComp << " is the composition of " << angularVelocity  << " and " << angularVelocityInv << std::endl;
        std::cout << "     " << twist1Comp << " is the composition of " << twist1 <<  " and " << twist1Inv << std::endl;
        std::cout << "     " << twist2Comp << " is the composition of " << twist2 <<  " and " << twist2Inv << std::endl;
        std::cout << "     " << torqueComp << " is the composition of " << torque <<  " and " << torqueInv << std::endl;
        std::cout << "     " << forceComp << " is the composition of " << force <<  " and " << forceInv << std::endl;
        std::cout << "     " << wrench1Comp << " is the composition of " << wrench1 <<  " and " << wrench1Inv << std::endl;
        std::cout << "     " << wrench2Comp << " is the composition of " << wrench2 <<  " and " << wrench2Inv << std::endl;

    Attachment: myThirdApplication.cpp (7.63 KB)

    Some extra examples

    In case you are looking for some extra examples you can have a look at the geometric_semantics_examples package. So far it already contains an example showing the advantage of using semantics when integrating twists, and when programming two position controlled robots.

    FAQ

    Use cases

    Semantics Reasoning

    Coordinate Semantics Reasoning

    Coordinate Calculations

    Coordinate Semantics Reasoning and Coordinate Calculations

    KDL wiki

    Kinematic Chain

    Skeleton of a serial robot arm with six revolute joints. This is one example of a kinematic structure, reducing the motion modelling and specification to a geometric problem of relative motion of reference frames. The Kinematics and Dynamics Library (KDL) develops an application independent framework for modelling and computation of kinematic chains, such as robots, biomechanical human models, computer-animated figures, machine tools, etc. It provides class libraries for geometrical objects (point, frame, line,... ), kinematic chains of various families (serial, humanoid, parallel, mobile,... ), and their motion specification and interpolation.

    Installation Manual

    This document is not ready yet, but it is a wiki page, so feel free to contribute.

    Requirements

    Supported Platforms

    • Linux
    • Windows
    • Mac

    Installation

    There are different ways to get the software.

    From debian/ubuntu packages

    Orocos KDL is part of the geometry stack in the ROS distributions pre-Electric.

    • ros-cturtle/diamondback-geometry

    Since ROS Electric it is available stand-alone as the orocos-kinematics-dynamics stack.

    • ros-electric/fuerte-orocos-kinematics-dynamics

    Source using git

    git clone https://github.com/orocos/orocos_kinematics_dynamics.git

    Building source with ROS

    Building source with plain CMake

    • go to your orocos_kdl directory
    • create a build directory inside the orocos_kdl directory and enter it:

    mkdir <kdl-dir>/build ; cd <kdl-dir>/build
    • Launch ccmake

    ccmake ..
    • configure [c], select the bindings you want to create, choose an appropriate installation directory
    • configure[c], generate makefiles [g]
    • make, wait, install and check

    make;make check;make install

    KDL typekit

    The use of KDL types in the TaskBrowser

    Compiling

    You need to check out the rtt_geometry stack which contains the kdl_typekit package here:

     http://github.com/orocos-toolchain/rtt_geometry
    
    and build and install it using the provided Makefile (uses defaults) or CMakeLists.txt (if you want to modify paths).

    Importing

    Import the kdl_typekit in Orocos by using the 'import' Deployment command in the TaskBrowser or the 'Import' Deployment property in your deployment xml file:

     import("kdl_typekit")
    

    Creation of variables of a KDL type

    • Make sure you've loaded the KDL typekit, by checking all available types in the TaskBrowser:

    .types
    • In the list you should find e.g. KDL.Frame, so you can create a variable z of this type with: var KDL.Frame z
    • z now has a default value (the identity frame); there are multiple ways to change it:
      • value by value:
    z.p.X=1 or z.M.X_x=2
      • the full position vector
    z.p = KDL.Vector(1,2,3)
      • the full rotation matrix
    z.M=KDL.Rotation(0,1.57,0) (presumably roll, pitch, yaw angles; check the KDL typekit documentation)
    • You can check the current value by just typing its variable name, for this example, z.
    z
    • Look in the KDL typekit for more details

    User Manual

    Why to use KDL?

    • Extensive support for :
      • Geometric primitives: point, frame, twist ...
      • Kinematic Trees: chain and tree structures. In the literature, multiple definitions exist for a kinematic structure: 'chain' either stands for all types of kinematic structures (chain, tree, graph) or only for the serial version. KDL uses the latter, or, in graph-theory terminology:
        • A closed-loop mechanism is a graph,
        • an open-loop mechanism is a tree, and
        • an unbranched tree is a chain.
    Next to kinematics, parameters for dynamics are also included (inertia...)
    • Realtime-safe operations/functions whenever relevant: they do not lead to dynamic memory allocations and all of them are deterministic in time.
    • Python bindings
    • Typekits and transport-kits for Orocos/RTT
    • Integrated in ROS

    Getting Help

    Geometric primitives

    KDL::Vector

    Browse KDL API Documentation

    A Vector is a 3x1 matrix containing X-Y-Z coordinate values. It is used to represent the 3D position of a point wrt a reference frame, or the rotational or translational part of a 6D motion or force entity: <equation id="vector">$\textrm{KDL::Vector} = \left[ \begin{array}{c} x \\ y \\ z \end{array}\right]$</equation>

    Creating Vectors

      Vector v1; //The default constructor, X-Y-Z are initialized to zero
      Vector v2(x,y,z); //X-Y-Z are initialized with the given values 
      Vector v3(v2); //The copy constructor
      Vector v4 = Vector::Zero(); //All values are set to zero

    Get/Set individual elements

    The operators [ ] and ( ) use indices from 0..2; index checking is enabled/disabled by the DEBUG/NDEBUG definitions:

      v1[0]=v2[1];//copy y value of v2 to x value of v1 
      v2(1)=v3(2);//copy z value of v3 to y value of v2
      v3.x( v4.y() );//copy y value of v4 to x value of v3

    Multiply/Divide with a scalar

    You can multiply or divide a Vector with a double using the operator * and /:

      v2=2*v1;
      v3=v1/2;

    Add and subtract vectors

      v2+=v1;
      v3-=v1;
      v4=v1+v2;
      v5=v2-v3;

    Cross and scalar product

      v3=v1*v2; //Cross product
      double a=dot(v1,v2);//Scalar product

    Resetting

    You can reset the values of a vector to zero:

      SetToZero(v1);

    Comparing vectors

    Element by element comparison with or without user-defined accuracy:
      v1==v2;
      v2!=v3;
      Equal(v3,v4,eps);//with accuracy eps

    KDL::Rotation

    link to API

    A Rotation is the 3x3 matrix that represents the 3D rotation of an object wrt the reference frame.

    <equation id="rotation">$ \textrm{KDL::Rotation} = \left[\begin{array}{ccc}Xx&Yx&Zx\\Xy&Yy&Zy\\Xz&Yz&Zz\end{array}\right] $<equation>

    Creating Rotations

    Safe ways to create a Rotation

    The following always result in consistent Rotations. This means the rows/columns are always normalized and orthogonal:

      Rotation r1; //The default constructor, initializes to a 3x3 identity matrix
      Rotation r1 = Rotation::Identity();//Identity Rotation = zero rotation
      Rotation r2 = Rotation::RPY(roll,pitch,yaw); //Rotation built from Roll-Pitch-Yaw angles
      Rotation r3 = Rotation::EulerZYZ(alpha,beta,gamma); //Rotation built from Euler Z-Y-Z angles
      Rotation r4 = Rotation::EulerZYX(alpha,beta,gamma); //Rotation built from Euler Z-Y-X angles
      Rotation r5 = Rotation::Rot(vector,angle); //Rotation built from an equivalent axis(vector) and an angle.

    Other ways

    The following should be used with care; they can result in inconsistent rotation matrices, since there is no check that columns/rows are normalized or orthogonal:

      Rotation r6( Xx,Yx,Zx,Xy,Yy,Zy,Xz,Yz,Zz);//Give each individual element, row by row
      Rotation r7(vectorX,vectorY,vectorZ);//Give each individual column

    Getting values

    Individual values, the indices go from 0..2:

      double Zx = r1(0,2);
    Getting EulerZYZ, EulerZYX, Roll-Pitch-Yaw angles, or the equivalent rotation axis with angle:
      r1.GetEulerZYZ(alpha,beta,gamma);
      r1.GetEulerZYX(alpha,beta,gamma);
      r1.GetRPY(roll,pitch,yaw);
      axis = r1.GetRot();//gives only rotation axis
      angle = r1.GetRotAngle(axis);//gives both angle and rotation axis
    Getting the Unit vectors:
      vecX=r1.UnitX();//or
      r1.UnitX(vecX);
      vecY=r1.UnitY();//or
      r1.UnitY(vecY);
      vecZ=r1.UnitZ();//or
      r1.UnitZ(vecZ);

    Inverting Rotations

    Replacing a rotation by its inverse:

      r1.SetInverse();//r1 is inverted and overwritten
    Getting the inverse rotation without overwriting the original:
      r2=r1.Inverse();//r2 is the inverse rotation of r1

    Composing rotations

    Compose two rotations into a new rotation; the order of the rotations is important:

      r3=r1*r2;
     
    Compose a rotation with elementary rotations around X-Y-Z:
      r1.DoRotX(angle);
      r2.DoRotY(angle);
      r3.DoRotZ(angle);
    this is the shorthand version of:
      r1 = r1*Rotation::RotX(angle);

    Rotation of a Vector

    Rotating a Vector using a Rotation and the operator *:
      v2=r1*v1;

    Comparing Rotations

    Element by element comparison with or without user-defined accuracy:
      r1==r2;
      r1!=r2;
      Equal(r1,r2,eps);

    KDL::Frame

    link to API documentation

    A Frame is the 4x4 matrix that represents the pose of an object/frame wrt a reference frame. It contains:

    • a Rotation M for the rotation of the object/frame wrt the reference frame.
    • a Vector p for the position of the origin of the object/frame in the reference frame

    <equation id="frame">$ \textrm{KDL::Frame} = \left[\begin{array}{cc}\mathbf{M}(3 \times 3) &p(3 \times 1)\\ 0(1 \times 3)&1 \end{array}\right] $<equation>

    Creating Frames

      Frame f1;//Creates Identity frame
      Frame f1=Frame::Identity();//Creates an identity frame: Rotation::Identity() and Vector::Zero()
      Frame f2(your_rotation);//Create a frame with your_rotation and a zero vector
      Frame f3(your_vector);//Create a frame with your_vector and a identity rotation
      Frame f4(your_rotation,your_vector);//Create a frame with your_rotation and your_vector
      Frame f5(your_vector,your_rotation);//The same, with the argument order reversed
      Frame f6(f5);//The copy constructor

    Getting values

    Individual values from the 4x4 matrix, the indices go from 0..3:
      double x = f1(0,3);
      double Yy = f1(1,1);
    Another way is to go through the underlying Rotation and Vector:
       Vector p = f1.p;
       Rotation M = f1.M;

    Composing frames

    You can use the operator * to compose frames. If you have a Frame F_A_B that expresses the pose of frame B wrt frame A, and a Frame F_B_C that expresses the pose of frame C wrt frame B, the calculation of Frame F_A_C that expresses the pose of frame C wrt frame A is as follows:
      Frame F_A_C = F_A_B * F_B_C;
    F_A_C.p is the location of the origin of frame C expressed in frame A, and F_A_C.M is the rotation of frame C expressed in frame A.

    Inverting Frames

    Replacing a frame by its inverse:

      //not yet implemented
    Getting the inverse:
      f2=f1.Inverse();//f2 is the inverse of f1

    Comparing frames

    Element by element comparison with or without user-defined accuracy:
      f1==f2;
      f1!=f2;
      Equal(f1,f2,eps);

    KDL::Twist

    link to API documentation

    A Twist is the 6x1 matrix that represents the velocity of a Frame using a 3D translational velocity Vector vel and a 3D angular velocity Vector rot:

    <equation id="twist">$\textrm{KDL::Twist} = \left[\begin{array}{c} v_x\\v_y\\v_z\\ \hline \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[\begin{array}{c} \textrm{vel} \\ \hline \textrm{rot}\end{array} \right] $<equation>

    Creating Twists

      Twist t1; //Default constructor, initializes both vel and rot to Zero
      Twist t2(vel,rot);//Vector vel, and Vector rot
      Twist t3 = Twist::Zero();//Zero twist
    Note: in contrast to the creation of Frames, the order in which vel and rot Vectors are supplied to the constructor is important.

    Getting values

    Using the operators [ ] and ( ), the indices from 0..2 return the elements of vel, the indices from 3..5 return the elements of rot:

      double vx = t1(0);
      double omega_y = t1[4];
      t1(1) = vy;
      t1[5] = omega_z;
    Because some robotics literature puts the rotation part on top, it is safer to use the vel and rot members to access the individual elements:
      double vx = t1.vel.x();//or
      vx = t1.vel(0);
      double omega_y = t1.rot.y();//or
      omega_y = t1.rot(1);
      t1.vel.y(v_y);//or
      t1.vel(1)=v_y;
      //etc

    Multiply/Divide with a scalar

    The same operators as for Vector are available:
      t2=2*t1;
      t2=t1*2;
      t2=t1/2;

    Adding/subtracting Twists

    The same operators as for Vector are available:
      t1+=t2;
      t1-=t2;
      t3=t1+t2;
      t3=t1-t2;

    Comparing Twists

    Element by element comparison with or without user-defined accuracy:
      t1==t2;
      t1!=t2;
      Equal(t1,t2,eps);

    KDL::Wrench

    A Wrench is the 6x1 matrix that represents a force on a Frame using a 3D translational force Vector force and a 3D moment Vector torque:

    <equation id="wrench">$\textrm{KDL::Wrench} = \left[\begin{array}{c} f_x\\f_y\\f_z\\ \hline t_x \\ t_y \\ t_z \end{array} \right] = \left[\begin{array}{c} \textrm{force} \\ \hline \textrm{torque}\end{array} \right] $<equation>

    Creating Wrenches

      Wrench w1; //Default constructor, initializes force and torque to Zero
      Wrench w2(force,torque);//Vector force, and Vector torque
      Wrench w3 = Wrench::Zero();//Zero wrench

    Getting values

    Using the operators [ ] and ( ), the indices from 0..2 return the elements of force, the indices from 3..5 return the elements of torque:

      double fx = w1(0);
      double ty = w1[4];
      w1(1) = fy;
      w1[5] = tz;
    Because some robotics literature puts the torque part on top, it is safer to use the force and torque members to access the individual elements:
      double fx = w1.force.x();//or
      fx = w1.force(0);
      double ty = w1.torque.y();//or
      ty = w1.torque(1);
      w1.force.y(fy);//or
      w1.force(1)=fy;//etc

    Multiply/Divide with a scalar

    The same operators as for Vector are available:
      w2=2*w1;
      w2=w1*2;
      w2=w1/2;

    Adding/subtracting Wrenches

    The same operators as for Twist are available:
      w1+=w2;
      w1-=w2;
      w3=w1+w2;
      w3=w1-w2;

    Comparing Wrenches

    Element by element comparison with or without user-defined accuracy:
      w1==w2;
      w1!=w2;
      Equal(w1,w2,eps);

    Twist and Wrench transformations

    Wrenches and Twists are expressed in a certain reference frame; the translational Vector vel of the Twists and the moment Vector torque of the Wrenches represent the velocity of, resp. the moment on, a certain reference point in that frame. Common choices for the reference point are the origin of the reference frame or a task specific point.

    The values of a Wrench or Twist change if the reference frame or reference point is changed.

    Changing only the reference point

    If you want to change the reference point you need the Vector v_old_new from the old reference point to the new reference point expressed in the reference frame of the Wrench or Twist:

    t2 = t1.RefPoint(v_old_new);
    w2 = w1.RefPoint(v_old_new);

    Changing only the reference frame

    If you want to change the reference frame but keep the reference point intact, you can use a Rotation matrix R_AB which expresses the rotation of the current reference frame B wrt the new reference frame A:

    ta = R_AB*tb;
    wa = R_AB*wb;

    Note: This operation seems to multiply a 3x3 matrix R_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from R_AB.

    Changing both the reference frame and the reference point

    If you want to change both the reference frame and the reference point, you can use a Frame F_AB which contains (i) the Rotation matrix R_AB, which expresses the rotation of the current reference frame B wrt the new reference frame A, and (ii) the Vector v_old_new from the old reference point to the new reference point, expressed in A:

    ta = F_AB*tb;
    wa = F_AB*wb;

    Note: This operation seems to multiply a 4x4 matrix F_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from F_AB.

    First order differentiation and integration

    t = diff(F_w_A,F_w_B,timestep);//differentiation
    F_w_B = F_w_A.addDelta(t,timestep);//integration
    t is the twist that moves frame A to frame B in timestep seconds. t is expressed in reference frame w using the origin of A as velocity reference point.

    Kinematic Trees

    A KDL::Chain or KDL::Tree consists of a concatenation of KDL::Segments. A KDL::Segment combines a KDL::Joint with a KDL::RigidBodyInertia, and defines a reference and a tip frame on the segment. The following figures show a KDL::Segment, a KDL::Chain, and a KDL::Tree, respectively. At the bottom of this page you'll find links to a more detailed description.

    KDL segment

    • Black: KDL::Segment:
      • reference frame {F_reference} (implicitly defined by the definition of the other frames wrt. this frame)
      • tip frame {F_tip}: frame from the end of the joint to the tip of the segment, default: Frame::Identity(). The transformation from the joint to the tip is denoted T_tip (in KDL directly represented by a KDL::Frame). In a kinematic chain or tree, a child segment is added to the parent segment's tip frame (tip frame of parent=reference frame of the child(ren)).
      • composes a KDL::Joint (red) and a KDL::RigidBodyInertia (green)
    • Red: KDL::Joint: single-DOF joint around or along an axis of the joint frame {F_joint}. This joint frame has the same orientation as the reference frame {F_reference} but can be offset wrt this reference frame by the vector p_origin (default: no offset).
    • Green: KDL::RigidBodyInertia: Cartesian-space inertia matrix; the arguments are the mass, the vector from the reference frame {F_reference} to the cog (p_cog), and the rotational inertia in the cog frame {F_cog}.

    KDL chain KDL tree

    Select your revision: 1.0.x is the released version; 1.1.x is under discussion (see the kinfam_refactored git branch).

    Kinematic Trees - KDL 1.0.x

    KDL::Joint

    Link to API

    A Joint allows translation or rotation in one degree of freedom between two Segments.

    Creating Joints

    Joint rx = Joint(Joint::RotX);//Rotational Joint about X
    Joint ry = Joint(Joint::RotY);//Rotational Joint about Y
    Joint rz = Joint(Joint::RotZ);//Rotational Joint about Z
    Joint tx = Joint(Joint::TransX);//Translational Joint along X
    Joint ty = Joint(Joint::TransY);//Translational Joint along Y
    Joint tz = Joint(Joint::TransZ);//Translational Joint along Z
    Joint fixed = Joint(Joint::None);//Rigid Connection
    Note: See the API document for a full list of construction possibilities

    Pose and twist of a Joint

    Joint rx = Joint(Joint::RotX);
    double q = M_PI/4;//Joint position
    Frame f = rx.pose(q);
    double qdot = 0.1;//Joint velocity
    Twist t = rx.twist(qdot);
    f is the pose resulting from moving the joint from its zero position to a joint value q. t is the twist, expressed in the frame corresponding to the zero position of the joint, resulting from applying a joint speed qdot.

    KDL::Segment

    Link to API

    A Segment is an ideal rigid body to which one single Joint and one single tip frame are connected. It contains:

    • a Joint located at the root frame of the Segment.
    • a Frame describing the pose between the end of the Joint and the tip frame of the Segment.

    Creating Segments

    Segment s = Segment(Joint(Joint::RotX),
                    Frame(Rotation::RPY(0.0,M_PI/4,0.0),
                              Vector(0.1,0.2,0.3) )
                        );
    Note: The constructor takes copies of the arguments; you cannot change the frame or joint afterwards!

    Pose and twist of a Segment

    double q=M_PI/2;//joint position
    Frame f = s.pose(q);//s constructed as in previous example
    double qdot=0.1;//joint velocity
    Twist t = s.twist(q,qdot);
    f is the pose resulting from moving the joint from its zero position to a joint value q; it expresses the new tip frame wrt the root frame of the Segment s. t is the twist of the tip frame, expressed in the root frame of the Segment s, resulting from applying a joint speed qdot at the joint value q.

    KDL::Chain

    Link to API

    A KDL::Chain is

    • a kinematic description of a serial chain of bodies connected by joints.
    • built out of KDL::Segments.

    A Chain has

    • a default constructor, creating an empty chain without any segments.
    • a copy-constructor, creating a copy of an existing chain.
    • a =-operator.

    Chain chain1;
    Chain chain2(chain3);
    Chain chain4 = chain5;

    Chains are constructed by adding segments or existing chains to the end of the chain. These functions add copies of the arguments, not the arguments themselves!

    chain1.addSegment(segment1);
    chain1.addChain(chain2);

    You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):

    unsigned int nj = chain1.getNrOfJoints();
    unsigned int js = chain1.getNrOfSegments();

    You can iterate over the segments of a chain by getting a reference to each successive segment:

    Segment& segment3 = chain1.getSegment(3);

    KDL::Tree

    Link to API

    A KDL::Tree is

    • a kinematic description of a tree of bodies connected by joints.
    • built out of KDL::Segments.

    A Tree has

    • a constructor that creates an empty tree without any segments and with the given name as its root name. The root name will be "root" if no name is given.
    • a copy-constructor, creating a copy of an existing tree.
    • a =-operator

    Tree tree1;
    Tree tree2("RootName");
    Tree tree3(tree4);
    Tree tree5 = tree6;

    Trees are constructed by adding segments, existing chains or existing trees to a given hook name. The methods will return false if the given hook name is not in the tree. These functions add copies of the arguments, not the arguments themselves!

    bool exit_value;
    exit_value = tree1.addSegment(segment1,"root");
    exit_value = tree1.addChain(chain1,"Segment 1");
    exit_value = tree1.addTree(tree2,"root");

    You can get the number of joints and number of segments (this is not always the same since a segment can have a fixed joint (Joint::None), which is not included in the number of joints):

    unsigned int nj = tree1.getNrOfJoints();
    unsigned int js = tree1.getNrOfSegments();

    You can retrieve the root segment:

    std::map<std::string,TreeElement>::const_iterator root = tree1.getRootSegment();

    You can also retrieve a specific segment in a tree by its name:

    std::map<std::string,TreeElement>::const_iterator segment3 = tree1.getSegment("Segment 3");

    You can retrieve the segments in the tree:

    std::map<std::string,TreeElement>& segments = tree1.getSegments();

    It is possible to request the chain in a tree between a certain root and a tip:

    bool exit_value;
    Chain chain;
    exit_value = tree1.getChain("Segment 1","Segment 3",chain);
    //Segment 1 and segment 3 are included but segment 1 is renamed.
    Chain chain2;
    exit_value = tree1.getChain("Segment 3","Segment 1",chain2);
    //Segment 1 and segment 3 are included but segment 3 is renamed.

    Kinematic Trees - KDL 1.1.x

    KDL::Joint

    Link to API

    A Joint allows translation or rotation in one degree of freedom between two Segments.

    Creating Joints

    Joint rx = Joint(Joint::RotX);//Rotational Joint about X
    Joint ry = Joint(Joint::RotY);//Rotational Joint about Y
    Joint rz = Joint(Joint::RotZ);//Rotational Joint about Z
    Joint tx = Joint(Joint::TransX);//Translational Joint along X
    Joint ty = Joint(Joint::TransY);//Translational Joint along Y
    Joint tz = Joint(Joint::TransZ);//Translational Joint along Z
    Joint fixed = Joint(Joint::None);//Rigid Connection
    Note: See the API document for a full list of construction possibilities

    Pose and twist of a Joint

    Joint rx = Joint(Joint::RotX);
    double q = M_PI/4;//Joint position
    Frame f = rx.pose(q);
    double qdot = 0.1;//Joint velocity
    Twist t = rx.twist(qdot);
    f is the pose resulting from moving the joint from its zero position to a joint value q. t is the twist, expressed in the frame corresponding to the zero position of the joint, resulting from applying a joint speed qdot.

    KDL::Segment

    Link to API

    A Segment is an ideal rigid body to which one single Joint and one single tip frame are connected. It contains:

    • a Joint located at the root frame of the Segment.
    • a Frame describing the pose between the end of the Joint and the tip frame of the Segment.

    Creating Segments

    Segment s = Segment(Joint(Joint::RotX),
                    Frame(Rotation::RPY(0.0,M_PI/4,0.0),
                              Vector(0.1,0.2,0.3) )
                        );
    Note: The constructor takes copies of the arguments; you cannot change the frame or joint afterwards!

    Pose and twist of a Segment

    double q=M_PI/2;//joint position
    Frame f = s.pose(q);//s constructed as in previous example
    double qdot=0.1;//joint velocity
    Twist t = s.twist(q,qdot);
    f is the pose resulting from moving the joint from its zero position to a joint value q; it expresses the new tip frame wrt the root frame of the Segment s. t is the twist of the tip frame, expressed in the root frame of the Segment s, resulting from applying a joint speed qdot at the joint value q.

    KDL::Chain

    Link to API

    A KDL::Chain is

    • a kinematic description of a serial chain of bodies connected by joints.
    • built out of KDL::Segments.

    A Chain has

    • a default constructor, creating an empty chain without any segments.
    • a copy-constructor, creating a copy of an existing chain.
    • a =-operator.

    Chain chain1;
    Chain chain2(chain3);
    Chain chain4 = chain5;

    Chains are constructed by adding segments or existing chains to the end of the chain. All segments must have a different name (or "NoName"), otherwise the methods will return false and the segments will not be added. The functions add copies of the arguments, not the arguments themselves!

    bool exit_value;
    exit_value = chain1.addSegment(segment1);
    exit_value = chain1.addChain(chain2);

    You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):

    unsigned int nj = chain1.getNrOfJoints();
    unsigned int js = chain1.getNrOfSegments();

    You can iterate over the segments of a chain by getting a reference to each successive segment. The method will return false if the index is out of bounds.

    Segment segment3;
    bool exit_value = chain1.getSegment(3, segment3);

    You can also request a segment by name:

    Segment segment3;
    bool exit_value = chain1.getSegment("Segment 3", segment3);

    The root and leaf segment can be requested, as well as all segments in the chain.

    bool exit_value;
    Segment root_segment;
    Segment leaf_segment;
    std::vector<Segment> segments;
    exit_value = chain1.getRootSegment(root_segment);
    exit_value = chain1.getLeafSegment(leaf_segment);
    exit_value = chain1.getSegments(segments);

    You can request a part of the chain between a certain root and a tip:

    bool exit_value;
    Chain part_chain;
     
    exit_value = chain1.getChain_Including(1,3, part_chain);
    exit_value = chain1.getChain_Including("Segment 1","Segment 3", part_chain);
    //Segment 1 and Segment 3 are included in the new chain!
     
    exit_value = chain1.getChain_Excluding(1,3, part_chain);
    exit_value = chain1.getChain_Excluding("Segment 1","Segment 3", part_chain);
    //Segment 1 is not included in the chain. Segment 3 is included in the chain.

    There is a function to copy the chain up to a given segment number or segment name:

    bool exit_value;
    Chain chain_copy;
    exit_value = chain1.copy(3, chain_copy);
    exit_value = chain1.copy("Segment 3", chain_copy);
    //Segment 3, 4,... are not included in the copy!

    KDL::Tree

    Link to API

    A KDL::Tree is

    • a kinematic description of a tree of bodies connected by joints.
    • built out of KDL::Segments.

    A Tree has

    • a constructor that creates an empty tree without any segments and with the given name as its root name. The root name will be "root" if no name is given.
    • a copy-constructor, creating a copy of an existing tree.
    • a =-operator.

    Tree tree1;
    Tree tree2("RootName");
    Tree tree3(tree4);
    Tree tree5 = tree6;

    Trees are constructed by adding segments, existing chains or existing trees to a given hook name. The methods will return false if the given hook name is not in the tree. These functions add copies of the arguments, not the arguments themselves!

    bool exit_value;
    exit_value = tree1.addSegment(segment1,"root");
    exit_value = tree1.addChain(chain1,"Segment 1");
    exit_value = tree1.addTree(tree2,"root");

    You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):

    unsigned int nj = tree1.getNrOfJoints();
    unsigned int js = tree1.getNrOfSegments();

    You can retrieve the root segment and the leaf segments:

    bool exit_value;
    std::map<std::string,TreeElement>::const_iterator root;
    std::map<std::string,TreeElement> leafs;
    exit_value = tree1.getRootSegment(root);
    exit_value = tree1.getLeafSegments(leafs);

    You can also retrieve a specific segment in a tree by its name:

    std::map<std::string,TreeElement>::const_iterator segment3;
    bool exit_value = tree1.getSegment("Segment 3",segment3);

    You can retrieve the segments in the tree:

    std::map<std::string,TreeElement> segments;
    bool exit_value = tree1.getSegments(segments);

    It is possible to request the chain in a tree between a certain root and a tip:

    bool exit_value;
    Chain chain;
    exit_value = tree1.getChain("Segment 1","Segment 3",chain);
    //Segment 1 and segment 3 are included but segment 1 is renamed.
    Chain chain2;
    exit_value = tree1.getChain("Segment 3","Segment 1",chain2);
    //Segment 1 and segment 3 are included but segment 3 is renamed.

    This chain can also be requested as a tree structure with the given root name ("root" if no name is given).

    bool exit_value;
    Tree tree;
    exit_value = tree1.getChain("Segment 1","Segment 3",tree,"RootName");
    Tree tree2;
    exit_value = tree1.getChain("Segment 3","Segment 1",tree2,"RootName");

    There is a function to copy a tree excluding some segments and all their descendants:

    bool exit_value;
    Tree tree_copy;
    exit_value = tree1.copy("Segment 3", tree_copy);
    //tree1 is copied up to segment 3 (excluding segment 3).
    std::vector<std::string> vect;
    vect.push_back("Segment 1");
    vect.push_back("Segment 7");
    exit_value = tree1.copy(vect,tree_copy);

    Kinematic and Dynamic Solvers

    For the moment, KDL contains only generic solvers for kinematic chains. They can be used (with care) for every KDL::Chain.

    The idea behind the generic solvers is to have a uniform API. We do this by inheriting from the abstract classes for each type of solver:

    • ChainFkSolverPos
    • ChainFkSolverVel
    • ChainIkSolverVel
    • ChainIkSolverPos

    A separate solver has to be created for each chain. At construction time, it will allocate all necessary resources.

    A specific type of solver can add solver-specific functions/parameters to the interface, but still has to use the generic interface for its main solving purpose.

    The forward kinematics solvers use the function JntToCart(...) to calculate Cartesian-space values from joint-space values. The inverse kinematics solvers use the function CartToJnt(...) to calculate joint-space values from Cartesian-space values.

    Recursive forward kinematic solvers

    For now there is only one generic solver for forward position and velocity kinematics.

    It recursively adds the poses/velocities of the successive segments, going from the first to the last segment. You can also get intermediate results by giving a segment number:

    ChainFkSolverPos_recursive fksolver(chain1);
    JntArray q(chain1.getNrOfJoints());
    q=...
    Frame F_result;
    fksolver.JntToCart(q,F_result,segment_nr);

    Kuka LBR user group

    This page collects all useful information for the User Group for the KUKA Light-Weight-Robot.

    The following institutes are currently involved:

    [We can add your details here!]

    At K.U.Leuven we released Orocos Components for communicating with the LBR using RSI and FRI interfaces. The RSI component should be usable for all KUKA Robots that offer RSI.

    The FRI interface software can be found at: https://github.com/wdecre/kuka-robot-hardware (replaces http://git.mech.kuleuven.be/robotics/kuka_robot_hardware.git)

    The RSI interface software can be found at: http://svn.mech.kuleuven.be/repos/orocos/orocos-apps/public_release/Kuka_RSI At KU Leuven RSI is currently not actively used.

    The FRI and RSI interfaces provide you with an Orocos component that you can add to your robot application to handle the communication with the robot controller.

    A readme file with the main installation steps is provided with the code (git or svn checkout). All comments, discussions, questions and suggestions are very welcome at the mailing list: see http://lists.mech.kuleuven.be/mailman/listinfo/kuka-lwr for info on how to subscribe.

    Links of Orocos components

    Links collection of Orocos components

    Konrad Banachowicz: https://github.com/konradb3/orocos-components

    OCL v1.x wiki

    This wiki has only information for the OCL 1.x releases. For OCL 2.x, look at the 'Toolchain' wiki.

    Taskbrowser with readline on Windows

    In order to have readline tab-completion in the taskbrowser, you'll need OCL 1.12.0 or 2.1.0 or later.

    Download Readline

    First download the readline-5.2 precompiled libraries from:

    homepage:

    download:

    It is advised to keep copies/backups of these files on your own site, since they are not official readline releases, but patched to work on Windows.

    Build Readline (OPTIONAL)

    The readline.lib can be rebuilt in MSVC by downloading:

    and then open the solution in the directory:

    • src/readline/5.2/readline-5.2/msvc/readline.sln

    The build will place a static readline.lib in the ../lib directory.

    Configuring OCL

    Add paths similar to these lines to your orocos-ocl.cmake file, or to your environment:
     set(CMAKE_INCLUDE_PATH ${CMAKE_INCLUDE_PATH} "C:/Documents and Settings/virtual/My documents/readline5.2/include")
     set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "C:/Documents and Settings/virtual/My documents/readline5.2/lib")
    
    Where 'C:/Documents and Settings/virtual/My documents/' is the directory where you unpacked the downloads.

    Continue configuring OCL in the CMake GUI by turning off the NO_GPL flag (on by default on Windows). CMake will then try to link the taskbrowser with the readline.lib file, which should succeed. After installing OCL, readline should work as on Linux, but only in the standard cygwin or cmd.exe prompts, not in rxvt.

    Quotes from "Really Reusable Robot Code and the Player/Stage Project"

    Below, some significant parts of the paper "Really Reusable Robot Code and the Player/Stage Project" have been copied. The purpose is to present a possible philosophy to drive the development of OCL 2.0 (it is recommended to read the entire paper). Feel free to discuss these concepts in the forum.



    Our design philosophy is heavily influenced by the operating systems (OS) community, which has already solved many of the same problems that we face in robotics research. For example, the principal function of an operating system is to hide the details of the underlying hardware, which may vary from machine to machine. Similarly, we want to hide the details of the underlying robot. Just as I expect my web browser to work with any mouse, I want my navigation system to work with any robot. Where OS programmers have POSIX, we want a common development environment for robotic applications. Operating systems are equipped with standard tools for using and inspecting the system, such as (in UNIX variants) top, bash, ls, and X11. We desire a similar variety of high-quality tools to support experimental robotics.
    Operating systems also support virtually any programming language and style. They do this by allowing the low-level OS interface (usually written in C) to be easily wrapped in other languages, and by providing language-neutral interfaces (e.g., sockets, files) when possible. Importantly, no constraints or normative judgments are made on how best to structure a program that uses the OS. We take the same approach in building robotics infrastructure. Though not strictly part of the OS, another key feature of modern development environments is the availability of standard algorithms and related data structures, such as qsort(), TCP, and the C++ Standard Template Library. We follow this practice of incorporating polished versions of established algorithms into the common code repository, so that each researcher need not re-implement, for example, Monte Carlo localization. Finally, an important but often over-looked aspect of OS design is that access is provided at all levels. While most C programmers will manage memory allocation with the library functions malloc() and free(), when necessary they can dig deeper and invoke the system call brk() directly. We need the same multi-level access for robots; while one researcher may be content to command a robot with high-level “goto” commands, another will want to directly control wheel velocities.
    Player comprises four key abstractions: The Player Abstract Device Interface (PADI), the message protocol, the transport mechanism, and the implementation. Each abstraction represents a reusable and separable layer. For example, the TCP client/server transport could be replaced by a CORBA
    The central abstraction that enables portability and code re-use in Player is the PADI specification. The PADI defines the syntax and semantics of the data that is exchanged between the robot control code and the robot hardware. For ease of use, the PADI is currently specified as a set of C message structures; the same information could instead be written in an Interface Definition Language (IDL), such as the one used in CORBA systems. The PADI’s set of abstract robot control interfaces constitutes a virtual machine, a target platform for robot controllers that is instantiated at run time by particular devices. The goal of the PADI is to provide a virtual machine that is rich enough to support any foreseeable robot control system, but simple enough to allow for an efficient implementation on a wide array of robot hardware. The key concepts used in the PADI, both borrowed from the OS community, are the character device model and the driver/interface model.
    The interface/driver model groups devices by logical functionality, so that devices which do approximately the same job appear identical from the user’s point of view. An interface is a specification for the contents of the data stream, so an interface for a robotic character device maps the input stream into sensor readings, output stream into actuator commands, and ioctls into device configurations. The code that implements the interface, converting between a device’s native formats and the interface’s required formats, is called a driver. Drivers are usually specific to a particular device, or a family of devices from the same vendor. Code that is written to target the interface rather than any specific device is said to be device independent. When multiple devices have drivers that implement the same interface, the controlling code is portable among those devices. Many hardware devices have unique features that do not appear in the standard interface. These features are accessed by device-specific ioctls, while the read and write streams are generally device independent. Interfaces should be designed to be sufficiently complete so as to not require use of device-specific ioctls in normal operation, in order to maintain device independence and portability. There is not a one-to-one mapping between interface definitions and physical hardware components. For example, the Pioneer’s native P2OS interface bundles odometry and sonar data into the same packet, but a Player controller that only wants to log the robot’s position does not need the range data. For portability, Player separates the data into two logical devices, decoupling the logical functionality from the details of the Pioneer’s implementation. The pioneer driver controls one physical piece of hardware, the Pioneer microcontroller, but implements two different devices: position2d and sonar.
These two devices can be opened, closed, and controlled independently, relieving the user of the burden of remembering details about the internals of the robot.
    In order to more conveniently support different devices, we introduced the interface/driver distinction to Player. An interface, such as sonar, is a generic specification of the format for data, command, and configuration interactions that a device allows. A driver, such as pioneer-sonar, specifies how the low-level device control will be carried out. In general, more than one driver may support a given interface; conversely, a given driver may support multiple interfaces. Thus we have extended to robot control the device model that is used in most operating systems, where, for example, a wide variety of joysticks all present the same “joystick” interface to the programmer.
    The primary cost of adherence to a generic interface for an entire class of devices is that the features and functionality that are unique to each device are ignored. Imagine a fiducial-finder interface whose data format includes only the bearing and distance to each fiducial. In order to support that interface, a driver that can also determine a fiducial’s identity will be under-utilized, some of its functionality having been sacrificed for the sake of portability. This issue is usually addressed by either adding configuration requests to the existing interface or defining a new interface that exposes the desired features of the device. Consider Player’s Monte-Carlo localization driver amcl; it can support both the sophisticated localization interface that includes multiple pose hypotheses, and the simple position2d interface that includes one pose and is also used by robot odometry systems.
    These higher-level drivers use other drivers, instead of hardware, as sources of data and sinks for commands. The amcl driver, for example, is an adaptive Monte Carlo localization system [TFBD00] that takes data from a position2d device, a laser device, and a map device, and in turn provides robot pose estimates via the localize interface (as mentioned above, amcl also supports the simpler position2d interface, through which only the most likely pose estimate is provided). Other Player drivers perform functionality such as path-planning, obstacle avoidance, and various image-processing tasks. The development of such higher-level drivers and corresponding interfaces yields three key benefits. First, we save time and effort by implementing well-known and useful algorithms in such a way that they are immediately reusable by the entire community. Just as C programmers can call qsort() instead of reimplementing quicksort, robotics students and researchers should be able to use Player’s vfh driver instead of reimplementing the Vector Field Histogram navigation algorithm [UB98]. The author of the driver benefits by having her code tested by other scientists in environments and with robots to which she may not have access, which can only improve the quality of the algorithm and its implementation. Second, we create a common development environment for implementing such algorithms. Player’s C++ Driver API clearly defines the input/output and startup/shutdown functionality that a driver must have. Code that is written against this API can enter a community repository where it is easily understood and can be reused, either in whole or in part. Finally, we create an environment in which alternative algorithms can be easily substituted. If a new localization driver implements the familiar localize interface, then it is a drop-in replacement for Player’s amcl. The two algorithms can be run in parallel on the same data and the results objectively compared.

    RTT v1.x wiki

    This wiki has only information for the RTT 1.x releases. For RTT 2.x, look at the Toolchain wiki.

    Documentation suggestions

    From recent discussion on ML, simply a place to put down ideas before we forget them ...

    • Use Wiki for FAQ instead of XML doc

    FAQ

    • My shared libraries won't load
    • The deployer won't load my plugins
    • Can I use dynamic memory allocation, and where?
    • How do I run in real time? ie how do I configure my system to allow Orocos to run in real-time

    • Why do I have periodic delays when attaching a remote deployer?
    • Configuring OmniORB instead of TAO
    • OmniORB options for IDL
    • How do I set a client application using pyOmniOrb (OmniORB python bindings)?

    <quote>Actually it's an option of the omniidl compiler... the command to use is

    omniidl -bcxx -Wba myIdlFile.idl

    This will definitely become a FAQ item :-)</quote>

    • My wiki page is blank

    <quote>When your text is not appearing on your wiki page, it's because you ended your wiki page with an indented line. So if your last line is:

    this is my last line

    the wiki code clears the whole page. It's clearly a Drupal/wiki module bug.</quote>

    • Is it possible to log messages from scripts and state machines?

    Check out OCL's HmiConsoleOutput component.

    • What is the coding style used by Orocos?

    You can take a look at the CODING_STYLE.txt file. Also, we worked out the indentation rules for Eclipse and Emacs.

    • Error linking readline with OCL's taskbrowser

    Tutorials

    • Seems like I have to first read up on Activities. Can you point me to a good example for such a component test and mockobject in rtt/tests?
    • Hello world as an application
    • Hello world with the deployer
    • Use of reporting component
    • CMake and non-standard install location for Orocos
    • Distributed deployment - ie how to use more than one deployer, and setting up CORBA
    • Adding types to Orocos
    • Adding types to Orocos+Corba
    • Changing ReadDataPort/WriteDataPort to DataPort for CORBA-based deployment

    System examples

    • Robotics
    Examples like Stephen proposed earlier
    Focus on : Kinematics, path planning, HMI/interfacing
    • Machine control
    Similar to Robotics but without kinematics or path planning
    Master controller that controls slave devices through a state machine
    Focus on : state machines & events
    • Distributed
    Sensor data processing using various distributed components
    Focus on : Data flow

    end

    Examples and Tutorials

    The tutorials and example code are split in two parts, one for new users and one for experienced users of the RTT.

    There are several sources where you can find code and tutorials. Some code is listed on wiki pages, other code is downloadable as a separate package, and you can also find code snippets in the manuals.

    Simple examples

    RTT Examples Get started with simple, ready-to-compile examples of how to create a component

    Naming connections, not ports: Orocos' best kept secret

    Using omniORBpy to interact with a component from Python

    Advanced examples

    These advanced examples are mainly about extending and configuring the RTT for your specific needs.

    Using plugins and toolkits to support custom data types

    Using non-periodic components to implement a simple TCP client

    Using XML substitution to manage complex deployments

    Using real-time logging

    Developing plugins and toolkits

    This is a work in progress and only for RTT 1.x !

    Rationale

    Problem: You want to pass custom types between distributed components, be able to see the value(s) of your custom type with in a deployer, and be able to read/write the custom type to/from XML files.

    Solution: Develop two plugins that tell Orocos about your custom types.

    Assumptions

    • The build directory is within the source directory. This helps with dynamic library loading.

    Compatibility

    Tested on v1.8 trunk on Mac OS X Leopard with omniORB from MacPorts, and Ubuntu Jaunty with ACE/TAO.

    Files

    See the attachments at the bottom of this page.

    Overview

    An RTT toolkit plugin provides information to Orocos about one or more custom types. This type of plugin allows RTT to display your types' values in a deployer and to load/save your types to/from XML files, and it provides constructors and operators that can be used to manipulate your types within program scripts and state machines.

    An RTT transport plugin provides methods to transport your custom types across CORBA, and hence between distributed Orocos components.

    This is a multi-part example demonstrating plugins for two boost::posix_time types: ptime and time_duration.

    • Part 1 Without the plugin creates components that use your custom type, and demonstrates that Orocos does not know anything about these types
    • Part 2 Toolkit plugin demonstrates an Orocos plugin that makes the types available to Orocos. In a deployer, you can now see the values of your custom types
    • Part 3 Transport plugin demonstrates an Orocos transport plugin making your custom types available across CORBA. Now you can pass types between deployers.
    • TBD Part 4 will demonstrate XML manipulation
    • TBD Part 5 will demonstrate accessors and manipulators for use in scripts and state machines

    For additional information on plugins and their development, see [1].

    Also, the KDL toolkit and transport plugins are good examples. See src/bindings/rtt in the KDL source.

    Structure

    The overall structure of this example is shown below
    .
    |-- BoostToolkit.cpp
    |-- BoostToolkit.hpp
    |-- CMakeLists.txt
    |-- config
    |   |-- FindACE.cmake
    |   |-- FindCorba.cmake
    |   |-- FindOmniORB.cmake
    |   |-- FindOrocos-OCL.cmake
    |   |-- FindOrocos-RTT.cmake
    |   |-- FindTAO.cmake
    |   |-- UseCorba.cmake
    |   `-- UseOrocos.cmake
    |-- corba
    |   |-- BoostCorbaConversion.hpp
    |   |-- BoostCorbaToolkit.cpp
    |   |-- BoostCorbaToolkit.hpp
    |   |-- BoostTypes.idl
    |   |-- CMakeLists.txt
    |   `-- tests
    |       |-- CMakeLists.txt
    |       |-- corba-combined.cpp
    |       |-- corba-recv.cpp
    |       `-- corba-send.cpp
    `-- tests
        |-- CMakeLists.txt
        |-- combined.cpp
        |-- no-toolkit.cpp
        |-- recv.cpp
        |-- recv.hpp
        |-- send.cpp
        `-- send.hpp

    The toolkit plugin is in the root directory, with supporting test files in the tests directory.

    CMake support files are in the config directory.

    The transport plugin is in the corba directory, with supporting test files in the corba/tests directory.

    Limitations

    Currently, this example does
    • Show how to write a plugin telling Orocos about your custom types
    • Show how to write a transport plugin allowing Orocos to move your custom types between deployers/processes.
    • Demonstrate how to test said plugins.
    • Use either ACE/TAO or OmniORB for CORBA support

    Currently, this example does not yet

    • Show how to read/write the custom types to/from XML files
    • Provide manipulators and/or accessors of your custom types that can be used in scripts and state machines
    • Demonstrate testing of the CORBA transport plugin within a single deployer, using two components. An optimization in RTT bypasses the CORBA mechanism in this case, rendering the test useless.
    • Deal with all intricacies of the boost types (eg all of the special values)

    NB I could not find a method to get at the underlying raw 64-bit or 96-bit boost representation of ptime. Hence, the transport plugin inefficiently transports a ptime type using two separate data values. If you know of a method to get at the raw representation, I would love to know. Good luck in template land ...

    References

    [1] Extending the Real-Time Toolkit
    Attachment                        Size
    BoostToolkit.hpp                  2.64 KB
    BoostToolkit.cpp                  3.58 KB
    CMakeLists.txt                    1.83 KB
    corba/BoostCorbaToolkit.hpp       934 bytes
    corba/BoostCorbaToolkit.cpp       1.34 KB
    corba/QBoostCorbaConversion.hpp   5.18 KB
    corba/CMakeLists.txt              738 bytes
    plugins.tar_.bz2                  14.24 KB

    Part 1 Without the plugin

    This is a work in progress

    This part creates components that use your custom type, and demonstrates that Orocos does not know anything about these types.

    Files

    See the attachments at the bottom of Developing plugins and toolkits.

    To build

    In a shell

    cd /path/to/plugins
    mkdir build
    cd build
    cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=OFF
    make

    For other operating systems, substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

    Tested in Mac OS X Leopard 10.5.7.

    To run

    In a shell

    cd /path/to/plugins/build
    ./no-toolkit

    This starts a test case that uses an OCL taskbrowser to show two components: send and recv. If you issue an "ls" or "ls Send" command, you will get output similar to the following:

     Data Flow Ports: 
     RW(C)   unknown_t ptime          = (unknown_t)
     RW(C)   unknown_t timeDuration   = (unknown_t)

    Each component has two ports, named ptime and timeDuration. Notice that both ports are connected "(C)", but that Orocos considers each an unknown type with an unknown value.

    Part 2 Toolkit plugin will build a toolkit plugin that allows Orocos to understand these types.

    Part 2 Toolkit plugin

    This is a work in progress

    This part creates a toolkit plugin making our types known to Orocos.

    Files

    See the attachments at the bottom of Developing plugins and toolkits

    To build

    Everything needed for this part was built in Part 1.

    To run

    In a shell

    cd /path/to/plugins/build
    ./combined

    The combined test uses an OCL taskbrowser to show two components: send and recv. Typing an "ls" or "ls Send" command, as in Part 1, you will get something like the following:

    RW(C) boost_ptime ptime          = 2009-Aug-09 16:14:19.724622
    RW(C) boost_timeduration timeDuration   = 00:00:00.200005

    Note that Orocos now knows the correct types (eg boost_ptime) and can display each port's value. Issue multiple ls commands and you will see the values change. The ptime is simply the date and time at which the send component set the port value, and the duration is the time between port values being set on each iteration (ie it should be approximately the period of the send component).

    Toolkit plugin

    The toolkit plugin is defined in BoostToolkit.hpp.

    namespace Examples
    {
        /// \remark these do not need to be in the same namespace as the plugin
     
        /// put the time onto the stream
        std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t);
        /// put the time duration onto the stream
        std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d);
        /// get a time from the stream
        std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t);
        /// get a time duration from the stream
        std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d);
    The toolkit plugin is contained in an Examples namespace. First up we define input and output stream operators for each of our types.

        class BoostPlugin : public RTT::ToolkitPlugin
        {
        public:
            virtual std::string getName();
     
            virtual bool loadTypes();
            virtual bool loadConstructors();
            virtual bool loadOperators();
        };
     
        /// The singleton for the Toolkit.
        extern BoostPlugin BoostToolkit;
    The actual plugin class and singleton object are then defined. The plugin provides a name that is unique across all plugins, and contains information on the types, constructors and operators for each of our custom types.

        /// provide ptime type to RTT type system
        /// \remark the 'true' argument indicates that we supply stream operators
        struct BoostPtimeTypeInfo : 
            public RTT::TemplateTypeInfo<boost::posix_time::ptime,true> 
        {
            BoostPtimeTypeInfo(std::string name) :
                    RTT::TemplateTypeInfo<boost::posix_time::ptime,true>(name)
            {};
            bool decomposeTypeImpl(const boost::posix_time::ptime& img, RTT::PropertyBag& targetbag);
            bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::ptime& img);
        };
     
        /// provide time duration type to RTT type system
        /// \remark the 'true' argument indicates that we supply stream operators
        struct BoostTimeDurationTypeInfo : 
            public RTT::TemplateTypeInfo<boost::posix_time::time_duration,true> 
        {
            BoostTimeDurationTypeInfo(std::string name) :
                    RTT::TemplateTypeInfo<boost::posix_time::time_duration,true>(name)
            {};
            bool decomposeTypeImpl(const boost::posix_time::time_duration& img, RTT::PropertyBag& targetbag);
            bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::time_duration& img);
        };
     
    } // namespace Examples
    We then provide a type information class for each of our two custom types. These type info classes are the mechanism for Orocos to work with XML and our custom types. NB: the true boolean value passed to each TypeInfo class indicates that stream operators are available (as defined above).

    The toolkit plugin implementation is in the BoostToolkit.cpp file.

    namespace Examples
    {
        using namespace RTT;
        using namespace RTT::detail;
        using namespace std;
     
        std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t)
        {
            os << boost::posix_time::to_simple_string(t);
            return os;
        }
     
        std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d)
        {
            os << boost::posix_time::to_simple_string(d);
            return os;
        }
     
    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t)
    {
        // call boost's stream operator explicitly; a plain 'is >> t' here
        // would resolve back to this function and recurse forever
        boost::posix_time::operator>>(is, t);
        return is;
    }
 
    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d)
    {
        boost::posix_time::operator>>(is, d);
        return is;
    }
    After pulling in some RTT namespaces, we define the stream operators in terms of the underlying boost stream operators. TODO explain why these stream operators are needed.

        BoostPlugin BoostToolkit;
     
        std::string BoostPlugin::getName()
        {
            return "Boost";
        }
    Next we create the singleton instance of the plugin as BoostToolkit. TODO explain naming scheme. Then we declare the unique name of this plugin, "Boost".

        bool BoostPlugin::loadTypes()
        {
            TypeInfoRepository::shared_ptr ti = TypeInfoRepository::Instance();
     
            /* each quoted name here (eg "boost_ptime") must _EXACTLY_ match that
               in the associated TypeInfo::composeTypeImpl() and
               TypeInfo::decomposeTypeImpl() functions (in this file), as well as
               the name registered in the associated Corba plugin's 
               registerTransport() function (see corba/BoostCorbaToolkit.cpp)
            */
            ti->addType( new BoostPtimeTypeInfo("boost_ptime") );
            ti->addType( new BoostTimeDurationTypeInfo("boost_timeduration") );
     
            return true;
        }
    The loadTypes() method provides the actual association for Orocos, from a type name to a TypeInfo class. This is how Orocos identifies a type at runtime. The choice of name is critical - it is what is shown in the deployer for an item's type, and should make immediate sense when you see it. It should also not be too long, to keep things readable within the deployer and taskbrowser. The name you use here for each type must exactly match the names used in the associated composeTypeImpl()/decomposeTypeImpl() functions and in the transport plugin's registerTransport() function (see the comment in the code above).

        bool BoostPlugin::loadConstructors()
        {
            // no constructors for these particular types
     
            return true;
        }
     
        bool BoostPlugin::loadOperators()
        {
            // no operators for these particular types
     
            return true;
        }
    Currently this example does not provide any constructors or operators, useable in program scripts and state machines. TODO update this.

        bool BoostPtimeTypeInfo::decomposeTypeImpl(const boost::posix_time::ptime& source, 
                                                 PropertyBag& targetbag)
        {
            targetbag.setType("boost_ptime");
            assert(0);
            return true;
        }
     
        bool BoostPtimeTypeInfo::composeTypeImpl(const PropertyBag& bag, 
                                                  boost::posix_time::ptime& result)
        {
            if ( "boost_ptime" == bag.getType() ) // ensure is correct type
            {
                // \todo
                assert(0);
            }
            return false;
        }
    The implementation of a TypeInfo class for one of our custom types must use the same type name as used in loadTypes() above. These functions would also provide the mechanism to load/save the type to/from XML. TODO update this.

    ORO_TOOLKIT_PLUGIN(Examples::BoostToolkit)
    This macro (lying outside the namespace!) takes the fully qualified singleton, and makes it available to the RTT type system at runtime. It basically makes the singleton identifiable as an RTT toolkit plugin, when Orocos loads the dynamic library formed from this toolkit.

    Build system

    Now the build system takes this .cpp file, and turns it into a dynamic library. We are going to examine the root CMakeLists.txt to see how to create this library, but for now, we will ignore the corba parts of that file.

    cmake_minimum_required(VERSION 2.6)
     
    # pick up additional cmake package files (eg FindXXX.cmake) from this directory
    list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/config")
    First we enforce the minimum CMake version we require, and ensure that we can pick up FindXXX.cmake files from our config directory.

    find_package(Orocos-RTT 1.6.0 REQUIRED corba)
    find_package(Orocos-OCL 1.6.0 REQUIRED taskbrowser)
    We have to find both Orocos RTT and Orocos OCL, and we require the additional corba component from RTT and the additional taskbrowser component from OCL.

    include(${CMAKE_SOURCE_DIR}/config/UseOrocos.cmake)
    The UseOrocos.cmake file makes RTT and OCL available to us, and provides us with some useful macros (eg create_component).

    create_component(BoostToolkit-${OROCOS_TARGET} 
      VERSION 1.0.0 
      BoostToolkit.cpp)
     
    TARGET_LINK_LIBRARIES(BoostToolkit-${OROCOS_TARGET} 
      boost_date_time)
    The create_component macro makes an Orocos shared library for us. This library will contain only our toolkit plugin. Note that we make the library name dependent on the Orocos target we are building for (eg macosx or gnulinux). This allows us to have plugins for multiple architectures on the same machine (typically, gnulinux and xenomai, or similar). We also have to link the shared library against the boost "date time" library, as we are using certain boost functionality that is not available in the header files.

    SUBDIRS(tests)
    Lastly, we also build the 'tests' directory.

    Tests

    There are two very simple test components that communicate each of our custom types between them. Tests are very important when developing plugins. Trying to debug a plugin within a complete system is a daunting challenge - do it in isolation first.

    The send component regularly updates the current time on its ptime port, and the duration between ptime port updates on its timeDuration port.

    class Send : public RTT::TaskContext
    {
    public:
        RTT::DataPort<boost::posix_time::ptime>                ptime_port;
        RTT::DataPort<boost::posix_time::time_duration>    timeDuration_port;
    public:
        Send(std::string name);
        virtual ~Send();
     
        virtual bool startHook();
        virtual void updateHook();
    protected:
        boost::posix_time::ptime    lastNow;
    };

    The implementation is very simple, and will not be discussed in detail here.

    #include "send.hpp"
     
    Send::Send(std::string name) :
            RTT::TaskContext(name),
            ptime_port("ptime"),
            timeDuration_port("timeDuration")
    {
        ports()->addPort(&ptime_port);
        ports()->addPort(&timeDuration_port);
    }
     
    Send::~Send()
    {
    }
     
    bool Send::startHook()
    {
        // just set last to now
        lastNow            = boost::posix_time::microsec_clock::local_time();
        return true;
    }
     
    void Send::updateHook()
    {
        boost::posix_time::ptime            now;
        boost::posix_time::time_duration    delta;
     
        // send the current time, and the duration since the last updateHook()
        now        = boost::posix_time::microsec_clock::local_time();
        delta   = now - lastNow;
     
        ptime_port.Set(now);
        timeDuration_port.Set(delta);
     
        lastNow = now;
    }
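    The timestamp/delta pattern in updateHook() can be sketched standalone in plain C++ with std::chrono (the DeltaClock name is hypothetical, not part of RTT or boost):

```cpp
#include <cassert>
#include <chrono>

// Hypothetical standalone sketch of the Send component's logic: remember
// the previous timestamp, and on each update report the time since it.
class DeltaClock {
    std::chrono::steady_clock::time_point last_;
public:
    // equivalent of startHook(): just set 'last' to now
    void start() { last_ = std::chrono::steady_clock::now(); }

    // equivalent of updateHook(): return the duration since the last call
    std::chrono::nanoseconds update() {
        auto now   = std::chrono::steady_clock::now();
        auto delta = now - last_;
        last_ = now;
        return std::chrono::duration_cast<std::chrono::nanoseconds>(delta);
    }
};
```

    Note that this sketch uses a steady (monotonic) clock, whereas boost's microsec_clock::local_time() above is a wall clock, which can jump backwards.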

    The recv component has the same ports but does nothing. It is simply an empty receiver component, that allows us to view its ports within the deployer.

    class Recv : public RTT::TaskContext
    {
    public:
        RTT::DataPort<boost::posix_time::ptime>                ptime_port;
        RTT::DataPort<boost::posix_time::time_duration>        timeDuration_port;
     
    public:
        Recv(std::string name);
        virtual ~Recv();
    };

    And the recv implementation.

    #include "recv.hpp"
     
    Recv::Recv(std::string name) :
            RTT::TaskContext(name),
            ptime_port("ptime"),
            timeDuration_port("timeDuration")
    {
        ports()->addPort(&ptime_port);
        ports()->addPort(&timeDuration_port);
    }
     
    Recv::~Recv()
    {
    }

    Now the combined test program just combines one of each test component directly within the same executable.

    #include <rtt/RTT.hpp>
    #include <rtt/PeriodicActivity.hpp>
    #include <rtt/TaskContext.hpp>
    #include <rtt/os/main.h>
    #include <rtt/Ports.hpp>
    #include <ocl/TaskBrowser.hpp>
     
    #include "send.hpp"
    #include "recv.hpp"
    #include "../BoostToolkit.hpp"
     
    using namespace std;
    using namespace Orocos;
     
    int ORO_main(int argc, char* argv[])
    {
        RTT::Toolkit::Import(Examples::BoostToolkit);
    This forcibly loads our toolkit plugin.

     
        Recv                recv("Recv");
        PeriodicActivity    recv_activity(ORO_SCHED_OTHER, 0, 0.1, recv.engine());
        Send                 send("Send");
        PeriodicActivity    send_activity(ORO_SCHED_OTHER, 0, 0.2, send.engine());
     
        if ( connectPeers( &send, &recv ) == false )
        {
            log(Error) << "Could not connect peers !"<<endlog();
            return -1;
        }
        if ( connectPorts( &send, &recv) == false )
        {
            log(Error) << "Could not connect ports !"<<endlog();
            return -1;
        }
    Connect the ports of the two components, and make them peers.

        send.configure();
        recv.configure();
        send_activity.start();
        recv_activity.start();
     
        TaskBrowser browser( &recv );
        browser.setColorTheme( TaskBrowser::whitebg );
        browser.loop();
    Configures and starts both components, and then runs an OCL::TaskBrowser over the receive component.

        send_activity.stop();
        recv_activity.stop();
     
        return 0;
    }
    Stops and exits cleanly.

    The differences between the combined and no-toolkit test programs will be covered in Part 2, but essentially amount to not loading the toolkit.

    Part 3 Transport plugin will build a transport plugin allowing Orocos to communicate these types across CORBA.

    Part 3 Transport plugin

    ''This is a work in progress''

    This part builds a transport plugin allowing Orocos to communicate these types across CORBA.

    Files

    See the attachments at the bottom of Developing plugins and toolkits

    To build

    In a shell

    cd /path/to/plugins
    mkdir build
    cd build
    cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=ON
    make

    The only difference from building in Part 1 is that CORBA is turned ON.

    For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

    Tested in Mac OS X Leopard 10.5.7.

    To run

    In a shell

    cd /path/to/plugins/build/corba/tests
    ./corba-recv

    In a second shell

    cd /path/to/plugins/build/corba/tests
    ./corba-send

    Now the exact same two test components of Parts 1 and 2 are in separate processes. Typing ls in either process will present the same values (subject to network latency, which is typically not humanly perceptible) - the data and types are now being communicated between deployers.

    Now, the transport plugin is responsible for communicating the types between deployers, while the toolkit plugin is responsible for knowing each type and being able to display it. Separate responsibilities. Separate plugins.

    NB for the example components, send must be started after recv. Starting only corba-recv and issuing ls will display the default values for each type. Also, quitting the send component and then attempting to use the recv component will lock up the recv deployer. These limitations are not due to the plugins - they are simply due to the limited functionality of these test cases.

    Without the transport plugin

    Running the same two corba test programs but without loading the transport plugin, is instructive as to what happens when you do not match up certain things in the toolkit sources. This is very important!

    In a shell

    cd /path/to/plugins/build/corba/tests
    ./corba-recv-no-toolkit
    An ls in the recv component now gives
     Data Flow Ports: 
     RW(U) boost_ptime ptime          = not-a-date-time
     RW(U) boost_timeduration timeDuration   = 00:00:00
    This is expected, as we have not connected the send component yet and so recv has default values.

    In a second shell

    cd /path/to/plugins/build/corba/tests
    ./corba-send-no-toolkit

    The send component without the transport plugin fails to start, with:

    $ ./build/corba/tests/corba-send-no-toolkit 
    0.008 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Forcing priority (0) of thread to 0.
    0.008 [ Warning][PeriodicThread] Forcing priority (0) of thread to 0.
    0.027 [ Warning][SingleThread] Forcing priority (0) of thread to 0.
    5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] ControlTask 'Send' already bound \
    to CORBA Naming Service.
    5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Trying to rebind... done. New \
    ControlTask bound to Naming Service.
    5.130 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Can not create a proxy for data \
    connection.
    5.130 [ ERROR  ][./build/corba/tests/corba-send-no-toolkit::main()] Dynamic cast failed \
    for 'PN3RTT14DataSourceBaseE', 'unknown_t', 'unknown_t'. Do your typenames not match?
    Assertion failed: (doi && "Dynamic cast failed! See log file for details."), function createConnection, \
    file /opt/install/include/rtt/DataPort.hpp, line 462.
    Abort trap
    The culprit here is that we tried to pass unknown types through CORBA. While the toolkit plugin tells Orocos about a type, it takes a transport plugin to tell Orocos how to communicate the type. The above failure indicates that Orocos came across a type named unknown_t and did not know how to deal with it. We will cover this more later in the tutorial, and specifically where and why this occurs. As a matter of interest, comparing the sources of corba/tests/corba-recv.cpp and corba/tests/corba-recv-no-toolkit.cpp, the differences are
    *** corba/tests/corba-recv.cpp    2009-07-29 22:08:32.000000000 -0400
    --- corba/tests/corba-recv-no-toolkit.cpp    2009-08-09 16:32:03.000000000 -0400
    ***************
    *** 11,17 ****
      #include <rtt/os/main.h>
      #include <rtt/Ports.hpp>
     
    - #include "../BoostCorbaToolkit.hpp"
      #include "../../BoostToolkit.hpp"
     
      // use Boost RTT Toolkit test components
    --- 11,16 ----
    ***************
    *** 27,33 ****
      int ORO_main(int argc, char* argv[])
      {
          RTT::Toolkit::Import( Examples::BoostToolkit  );
    -     RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
     
          Recv                recv("Recv");
          PeriodicActivity    recv_activity(
    --- 26,31 ----
    We simply did not load the transport plugin.

    Transport plugin

    The transport plugin implementation spans three files. We will cover them in turn.

    Defining CORBA types

    We define the CORBA types in corba/BoostTypes.idl. This is a file in CORBA's Interface Definition Language (IDL). There are plenty of references on the web, for instance [1].

    // must be in RTT namespace to match some rtt/corba code
    module RTT {
    module Corba {
    These structures must be in the RTT::Corba namespace.

        struct time_duration
        {
            short    hours;
            short    minutes;
            short    seconds;
            long    nanoseconds;
        };
    We send a time duration as individual time components. Note that we avoid boost's fractional_seconds fiasco, and always send nanoseconds even if the sender or receiver implementations only support microseconds.
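    The decomposition this struct carries can be sketched in plain C++ with std::chrono (the WireDuration/split/join names are hypothetical helpers, not part of the plugin):

```cpp
#include <cassert>
#include <chrono>

// Hypothetical sketch of the decomposition the IDL struct carries:
// a total duration split into hours/minutes/seconds/nanoseconds.
// Assumes a non-negative duration.
struct WireDuration {
    short hours, minutes, seconds;
    long  nanoseconds;
};

WireDuration split(std::chrono::nanoseconds d) {
    using namespace std::chrono;
    WireDuration w{};
    w.hours       = static_cast<short>(duration_cast<hours>(d).count());
    d            -= duration_cast<hours>(d);
    w.minutes     = static_cast<short>(duration_cast<minutes>(d).count());
    d            -= duration_cast<minutes>(d);
    w.seconds     = static_cast<short>(duration_cast<seconds>(d).count());
    d            -= duration_cast<seconds>(d);
    w.nanoseconds = static_cast<long>(d.count());
    return w;
}

std::chrono::nanoseconds join(const WireDuration& w) {
    using namespace std::chrono;
    return hours(w.hours) + minutes(w.minutes) + seconds(w.seconds)
         + nanoseconds(w.nanoseconds);
}
```

    Round-tripping split/join loses nothing, which is exactly the property the transport needs.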

        // can't get at underlying type, so send this way (yes, more overhead)
        // see BoostCorbaConversion.hpp::struct AnyConversion<boost::posix_time::ptime>
        // for further details.
        struct ptime
        {
            // julian day
            long            date;    
            time_duration    time_of_day;
        };
    };
    };
    I was not able to find a way to get to the native 64 or 96 bits that define a ptime value. Consequently, we inefficiently send a ptime as a Julian day and a time duration within the day. Adequate for an example, but definitely more data than we would like to send.
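    For illustration, the Julian day number itself is straightforward to compute; this is the classic Fliegel/Van Flandern integer algorithm (a sketch for reference, not the code boost uses internally):

```cpp
#include <cassert>

// Sketch of the Julian-day representation: map a calendar date to a
// Julian day number and back, which is what the 'date' field above carries.
long toJulianDay(int y, int m, int d) {
    long a = (m - 14) / 12;                 // -1 for Jan/Feb, 0 otherwise
    return (1461L * (y + 4800 + a)) / 4
         + (367L * (m - 2 - 12 * a)) / 12
         - (3L * ((y + 4900 + a) / 100)) / 4
         + d - 32075;
}

void fromJulianDay(long jd, int& y, int& m, int& d) {
    long l = jd + 68569;
    long n = (4 * l) / 146097;
    l = l - (146097 * n + 3) / 4;
    long i = (4000 * (l + 1)) / 1461001;
    l = l - (1461 * i) / 4 + 31;
    long j = (80 * l) / 2447;
    d = static_cast<int>(l - (2447 * j) / 80);
    l = j / 11;
    m = static_cast<int>(j + 2 - 12 * l);
    y = static_cast<int>(100 * (n - 49) + i + l);
}
```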

    Note that CORBA IDL knows about certain types already, e.g. short and long, and that we can use our time_duration structure in later structures.

    We will come back to this IDL file during the build process.

    The transport plugin

    The actual plugin is defined in corba/BoostCorbaToolkit.hpp. This is the equivalent of the BoostToolkit.hpp file, except for a transport plugin.

    namespace Examples {
    namespace Corba {
     
        class CorbaBoostPlugin : public RTT::TransportPlugin
        {
        public:
            /// register this transport into the RTT type system
            bool registerTransport(std::string name, RTT::TypeInfo* ti);
     
            /// return the name of this transport type (ie "CORBA")
            std::string getTransportName() const;
     
            /// return the name of this transport
            std::string getName() const;
        };
     
        // the global instance
        extern CorbaBoostPlugin     corbaBoostPlugin;
     
    // namespace
    }
    }
    The transport plugin provides its name, the name of its transport mechanism, and a function to register the transport into Orocos. Note that no types are mentioned here, as that is taken care of by the toolkit plugin. A transport plugin without a corresponding toolkit plugin is useless: Orocos will not know about the types, and hence will never get as far as looking up transports for them.

    The implementation of the plugin is in corba/BoostCorbaToolkit.cpp, and is very straightforward.

    namespace Examples {
    namespace Corba {
     
    bool CorbaBoostPlugin::registerTransport(std::string name, TypeInfo* ti)
    {
        assert( name == ti->getTypeName() );
        // name must match that in plugin::loadTypes() and 
        // typeInfo::composeTypeInfo(), etc
        if ( name == "boost_ptime" )
            return ti->addProtocol(ORO_CORBA_PROTOCOL_ID, new CorbaTemplateProtocol< boost::posix_time::ptime >() );
        if ( name == "boost_timeduration" )
            return ti->addProtocol(ORO_CORBA_PROTOCOL_ID, new CorbaTemplateProtocol< boost::posix_time::time_duration >() );
        return false;
    }
    Registering a transport registers each type for a given transport protocol (the ORO_CORBA_PROTOCOL_ID above, defined in rtt/src/corba/CorbaLib.hpp). Each type of transport must have a unique protocol ID, though currently Orocos only supports one, CORBA. Registration occurs automatically when the transport is loaded.
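    Conceptually, the registration above populates a per-type table of protocol objects keyed by protocol ID. A minimal sketch of such a registry (the names here are illustrative, not RTT's actual classes):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Hypothetical sketch of what addProtocol() achieves: one protocol object
// is stored per (type name, protocol ID) pair, and duplicates are refused.
struct Protocol { virtual ~Protocol() {} };

class TypeRegistry {
    std::map<std::pair<std::string, int>, std::unique_ptr<Protocol>> protocols_;
public:
    bool addProtocol(const std::string& type, int id, std::unique_ptr<Protocol> p) {
        // emplace fails (returns false) if the pair is already registered
        return protocols_.emplace(std::make_pair(type, id), std::move(p)).second;
    }
    bool hasProtocol(const std::string& type, int id) const {
        return protocols_.count(std::make_pair(type, id)) != 0;
    }
};
```

    RTT's real TypeInfo stores the protocol alongside the rest of the type's metadata; the point is simply that exactly one protocol object exists per (type, protocol ID) pair.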

    std::string CorbaBoostPlugin::getTransportName() const {
        return "CORBA";
    }
     
    std::string CorbaBoostPlugin::getName() const {
        return "CorbaBoost";
    }
    The plugin's name is CorbaBoost, and must be unique within all plugins (transport and toolkit, I believe). We choose to prefix the name of our toolkit plugin with Corba, to keep them recognizable.

    For a CORBA transport plugin, the name returned by getTransportName() should be CORBA.

    CorbaBoostPlugin corbaBoostPlugin;
     
    // namespace
    }
    }
     
    ORO_TOOLKIT_PLUGIN(Examples::Corba::corbaBoostPlugin);
    Finally, the plugin itself is instantiated, and the appropriate macro is used so that Orocos can identify this as a plugin when loading it from a dynamic library.
     

    Converting types

    I will only cover the code for converting one of the types. The other is very similar - you can examine it yourself in the source file.

    #include "BoostTypesC.h"
    #include <rtt/corba/CorbaConversion.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>  // no I/O
    Here we pick up the RTT CORBA conversion support and the definitions of our custom boost types (without their I/O operators). We also pick up BoostTypesC.h. This is a file that CORBA generates from our BoostTypes.idl file above, and contains CORBA-specific code. Ignore its contents, but just realise that it is generated from the .idl file.

    // must be in RTT namespace to match some rtt/corba code
    namespace RTT
    {
    For some historical reason, I believe this has to be in the RTT namespace. Not sure if that is still true, but ... maybe it is to match the generated output from the .idl file?

    template<>
    struct AnyConversion< boost::posix_time::time_duration >
    {
        // define the Corba and standard (ie non-Corba) types we are using
        typedef Corba::time_duration                CorbaType;
        typedef boost::posix_time::time_duration    StdType;
    Here we define some shorthand types, to make typing easier. I also find that having these two type names this way, Corba vs Std, makes it easier to read some of the later code. The actual Corba::time_duration type comes from the files generated from our .idl file.

    The last four of the following six functions are required by the CORBA library, to enable conversion between the CORBA and non-CORBA types. The two convert functions are there for convenience, and to save replicating code.

        // convert CorbaType to StdTypes
        static void convert(const CorbaType& orig, StdType& ret) 
        {
            ret = boost::posix_time::time_duration(orig.hours,
                                                   orig.minutes,
                                                   orig.seconds,
                                                   orig.nanoseconds);
        }
     
        // convert StdType to CorbaTypes
        static void convert(const StdType& orig, CorbaType& ret) 
        {
            ret.hours       = orig.hours();
            ret.minutes     = orig.minutes();
            ret.seconds     = orig.seconds();
            ret.nanoseconds = orig.fractional_seconds();
        }
    The above two functions do the actual work of converting data to/from the CORBA and standard types. In this case we can basically copy individual data members - more complicated types may require further conversions, manipulation, etc.

        static CorbaType* toAny(const StdType& orig) {
            CorbaType* ret = new CorbaType();
            convert(orig, *ret);
            return ret;
        }
     
        static StdType get(const CorbaType* orig) {
            StdType ret;
            convert(*orig, ret);
            return ret;
        }
     
        static bool update(const CORBA::Any& any, StdType& ret) {
            CorbaType* orig;
            if ( any >>= orig ) 
            {
                convert(*orig, ret);
                return true;
            }
            return false;
        }
     
        static CORBA::Any_ptr createAny( const StdType& t ) {
            CORBA::Any_ptr ret = new CORBA::Any();
            *ret <<= toAny( t );
            return ret;
        }
    };
    The above four functions are, as previously mentioned, the standard interface to convert types to/from CORBA types. While the syntax might appear a little strange to you (e.g. the "<<=" operator), you can just copy the above to your own custom types (I copy these between transport plugins, an advantage of defining the CorbaType and StdType typedefs at the top). Note well the one dynamic allocation in the toAny() function: transport plugins are most definitely not real-time capable.

    The same six functions then follow for our boost::ptime type. They are not covered in detail here.

    Build system

    IF (ENABLE_CORBA)
     
      INCLUDE(${CMAKE_SOURCE_DIR}/config/UseCorba.cmake)
    This include ensures we know about the CORBA library, and also picks up some CMake macros we need.

      FILE( GLOB IDLS [^.]*.idl )
      FILE( GLOB CPPS [^.]*.cpp )
      ORO_ADD_CORBA_SERVERS(CPPS HPPS ${IDLS} )
    The ORO_ADD_CORBA_SERVERS CMake macro we got from UseCorba.cmake takes a list of source files (CPPS), a list of header files (HPPS - we have none here) and a list of interface definition files (IDLS), and creates the necessary CMake code to generate the CORBA files from the IDL files. Basically, this takes our BoostTypes.idl file and produces header and source files to deal with that CORBA type. Note that this macro appends to the existing files listed in CPPS and HPPS - we'll need them shortly.

      INCLUDE_DIRECTORIES( ${CMAKE_CURRENT_BINARY_DIR}/. )
    We now have our own source files in the source directory, as well as files generated into the build directory. This ensures we can pick up the generated headers from the build directory as well.

      CREATE_COMPONENT(BoostToolkit-corba-${OROCOS_TARGET} 
        VERSION 1.0.0
        ${CPPS})
      TARGET_LINK_LIBRARIES(BoostToolkit-corba-${OROCOS_TARGET}     
        ${OROCOS-RTT_CORBA_LIBRARIES}
        ${CORBA_LIBRARIES})
    Here we create a component shared library that contains only the transport plugin. Note that the library contains all the source files in the CPPS CMake variable, which now contains all the .cpp files in this directory (due to the "FILE(GLOB ...)" statement) as well as the source files generated by the ORO_ADD_CORBA_SERVERS macro. These make up our transport toolkit. Fundamentally, the transport toolkit shared library is no different than a shared library of standard components, except for a tiny bit of C++ code that comes out of the ORO_TOOLKIT_PLUGIN() macro at the end of the BoostCorbaToolkit.cpp file. RTT then recognizes this shared library as containing a transport plugin.

      SUBDIRS(tests)
     
    ENDIF (ENABLE_CORBA)
    And lastly, pick up the tests.

    Tests

    The corba test programs each contain one component, distributing the two components across processes and hence requiring the CORBA transport plugin. The exact same send and receive test components from Part 2 are used.

    The corba-send test program instantiates a send component, and uses an RTT ControlTaskProxy to represent the remote receive component.

    #include <rtt/corba/ControlTaskServer.hpp>
    #include <rtt/corba/ControlTaskProxy.hpp>
    #include <rtt/RTT.hpp>
    #include <rtt/PeriodicActivity.hpp>
    #include <rtt/TaskContext.hpp>
    #include <rtt/os/main.h>
    #include <rtt/Ports.hpp>
    #include <ocl/TaskBrowser.hpp>
     
    #include "../BoostCorbaToolkit.hpp"
    #include "../../BoostToolkit.hpp"
     
    #include "../../tests/send.hpp"
     
    using namespace std;
    using namespace Orocos;
    using namespace RTT::Corba;
     
    int ORO_main(int argc, char* argv[])
    {
        RTT::Toolkit::Import( Examples::BoostToolkit  );
        RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
    Import both the toolkit and transport plugins.

        Send                send("Send");
        PeriodicActivity    send_activity(
            ORO_SCHED_OTHER, 0, 1.0 / 10, send.engine());   // 10 Hz
     
        // start Corba and find the remote task
        ControlTaskProxy::InitOrb(argc, argv);
        ControlTaskServer::ThreadOrb();
    Initialize the CORBA Orb, and then thread it (yes, this does use Proxy and Server functions - this is ok). This puts the CORBA Orb in a background thread, allowing us to run the taskbrowser (below) in the main thread.

        TaskContext* recv = ControlTaskProxy::Create( "Recv" );
        assert(NULL != recv);
    Creates a proxy task context for a remote component named "Recv". This will use the name service (by default) to find this component.

        if ( connectPeers( recv, &send ) == false )
        {
            log(Error) << "Could not connect peers !"<<endlog();
        }
        // create data object at recv's side
        if ( connectPorts( recv, &send) == false )
        {
            log(Error) << "Could not connect ports !"<<endlog();
        }
    Connect the local send component to the proxy recv component.

        send.configure();
        send_activity.start();
        log(Info) << "Starting task browser" << endlog();
        OCL::TaskBrowser tb( recv );
        tb.loop();
        send_activity.stop();
    Start a task browser on the proxy component. We are, after all, interested in the reception of the data. You could have instead run the taskbrowser on the send component.

        ControlTaskProxy::DestroyOrb();
     
        return 0;
    }
    Cleanly shutdown the orb and exit.

    The receive test program has a similar structure to the send test program.

    #include <rtt/corba/ControlTaskServer.hpp>
    #include <rtt/corba/ControlTaskProxy.hpp>
    #include <rtt/RTT.hpp>
    #include <rtt/PeriodicActivity.hpp>
    #include <rtt/TaskContext.hpp>
    #include <rtt/os/main.h>
    #include <rtt/Ports.hpp>
     
    #include "../BoostCorbaToolkit.hpp"
    #include "../../BoostToolkit.hpp"
     
    #include "../../tests/recv.hpp"
     
    #include <ocl/TaskBrowser.hpp>
     
    using namespace std;
    using namespace Orocos;
    using namespace RTT::Corba;
     
     
    int ORO_main(int argc, char* argv[])
    {
        RTT::Toolkit::Import( Examples::BoostToolkit  );
        RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin  );
     
        Recv                recv("Recv");
        PeriodicActivity    recv_activity(
            ORO_SCHED_OTHER, 0, 1.0 / 5, recv.engine());    // 5 Hz
     
        // Setup Corba and Export:
        ControlTaskServer::InitOrb(argc, argv);
        ControlTaskServer::Create( &recv );
        ControlTaskServer::ThreadOrb();
    We make the receive component a CORBA server, meaning that the send component will connect to this component. It could have been done the other way around - in this example, it simply impacts which test program has to be started first (the server must be running for the client to connect to it). Again we thread the ORB to put it in its own background thread.

        // Wait for requests:
        recv.configure();
        recv_activity.start();
        OCL::TaskBrowser tb( &recv );
        tb.loop();
        recv_activity.stop();
    Run the taskbrowser on the receive component (in the main thread). Note that the send component is not mentioned anywhere. The "server" does not know about any "clients", but the "clients" do need to know about the server.

        // Cleanup Corba:
        ControlTaskServer::ShutdownOrb();
        ControlTaskServer::DestroyOrb();
     
        return 0;
    }
    Cleanly shutdown and destroy the CORBA Orb and exit.

    The no-toolkit versions of the test programs are identical, except they simply do not load the transport plugin, making it impossible to transport the boost types over CORBA.

    References

    [1] http://www.iona.com/support/docs/manuals/orbix/33/html/orbix33cxx_pguide/IDL.html

    Experienced Users

    Now located at http://orocos.org/wiki/rtt/examples-and-tutorials

    Name connections, not ports (aka Orocos' best kept secret)

    Rationale

    Problem: How to reuse a component when you need the ports to have different names?

    Solution: Name the connection between ports in the deployer. This essentially allows you to rename ports. Unfortunately, this extremely useful feature is not documented anywhere (as of July, 2009).


    Assumptions

    • The build directory is within the source directory. This helps with dynamic library loading.
    • Admittedly, this is a contrived example, but the structure is very useful and occurs more frequently than you may realise (say using N copies of a camera component, deploying components for both a left and a right robot arm within the same deployer, etc).

    Files

    HMI.hpp

    Robot.hpp

    OneAxisFilter.hpp

    HMI.cpp

    Robot.cpp

    OneAxisFilter.cpp

    Connect-1.xml

    Connect-2.xml

    Connect-3.xml

    Buildable tarball

    Example overview

    This example occurs in three parts

    1. A Human-Machine-Interface (HMI) component connects to a Robot component, and provides a desired cartesian position.
    2. A one-axis filter is placed between the HMI and Robot component, to zero out one axis (say, you did not want the robot to move in one direction due to an obstacle or something similar)
    3. A second one-axis filter is placed between the first filter and the Robot. The two filters are the exact same component with the same named ports.

    Components

    class HMI : public RTT::TaskContext
    {
    protected:
        // *** OUTPUTS ***
     
        /// desired cartesian position
        RTT::WriteDataPort<KDL::Frame>            cartesianPosition_desi_port;
     
    public:
        HMI(std::string name);
        virtual ~HMI();
     
    protected:
        /// set the desired cartesian position to an initial value
        /// \return true
        virtual bool startHook();
    };
    The HMI provides one output port that specifies the desired cartesian position. This is set to an initial value in startHook().

    class Robot : public RTT::TaskContext
    {
    protected:
        // *** INPUTS ***
     
        /// desired cartesian position
        RTT::ReadDataPort<KDL::Frame>            cartesianPosition_desi_port;
     
    public:
        Robot(std::string name);
        virtual ~Robot();
    };
    The robot accepts a desired cartesian position as input (but in this example does nothing with it).

    class OneAxisFilter : public RTT::TaskContext
    {
    protected:
        // *** INPUTS ***
     
        /// desired cartesian position
        RTT::ReadDataPort<KDL::Frame>            inputPosition_port;
     
        // *** OUTPUTS ***
     
        /// desired cartesian position
        RTT::WriteDataPort<KDL::Frame>            outputPosition_port;
     
        // *** CONFIGURATION ***
     
        /// specify which axis to filter (should be one of "x", "y", or "z")
        RTT::Property<std::string>                axis_prop;
     
    public:
        OneAxisFilter(std::string name);
        virtual ~OneAxisFilter();
     
    protected:
        /// validate axis_prop value
        /// \return true if axis_prop value is valid, otherwise false
        virtual bool configureHook();
        /// filter one translational axis (as specified by axis_prop)
        virtual void updateHook();
    };
    The OneAxisFilter component takes an input cartesian position, zeroes out one axis (the axis of interest is specified in a property), and then outputs the filtered cartesian position.

    Component implementation

    The component implementations are not given in this example, as they are not the interesting part of the solution, but are available in the Files section above.

    The interesting part is in the deployment files ...

    Deployment

    Part 1: HMI and Robot

    This part simply connects the HMI and robot together (see deployment file Connect-1.xml).

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
     
      <simple name="Import" type="string">
        <value>liborocos-rtt</value>
      </simple>
      <simple name="Import" type="string">
        <value>liborocos-kdl</value>
      </simple>
      <simple name="Import" type="string">
        <value>liborocos-kdltk</value>
      </simple>
      <simple name="Import" type="string">
        <value>libConnectionNaming</value>
      </simple>
    The first section of the deployment file simply loads the Orocos libraries we use (including the KDL toolkit, so that we can inspect and modify KDL types within the deployer), and then loads our shared library (libConnectionNaming).

      <struct name="HMI" type="HMI">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    The next section creates an HMI component, with the connection for its output port named "cartesianPosition_desi" (ie the same as the port name). The syntax for port/connection naming is:

          <simple name="portName" type="string">
            <value>connectionName</value>
          </simple>
    which makes port portName part of connection connectionName.

      <struct name="Robot" type="Robot">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>HMI</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    </properties>
    Lastly, the robot component is created with its input port on a connection named cartesianPosition_desi.

    Now, the deployer uses connection names, not port names, when connecting components between peers. So it connects the HMI.cartesianPosition_desi port and the Robot.cartesianPosition_desi port because both are part of the connection named cartesianPosition_desi (which, in this part, matches the port names).

    Build the library, and then run this part with

    cd /path/to/ConnectionNaming/build
    deployer-macosx -s ../Connect-1.xml

    Examine the HMI and Robot components, and note that each has a connected port, and the port values match.

    Part 2: HMI, one filter and a robot

    This part adds a filter component between the HMI and the robot (see Connect-2.xml)

    As with Part 1, the first part of the file loads the appropriate libraries (left out here, as it is identical to Part 1).

      <struct name="HMI" type="HMI">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>unfiltered_cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    Again an HMI component is deployed, except this time the deployer will connect the cartesianPosition_desi port as part of a connection named unfiltered_cartesianPosition_desi.

      <struct name="Filter" type="OneAxisFilter">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>HMI</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="inputPosition" type="string">
            <value>unfiltered_cartesianPosition_desi</value></simple>
          <simple name="outputPosition" type="string">
            <value>filtered_cartesianPosition_desi</value></simple>
        </struct> 
        <simple name="PropertyFile" type="string">
          <value>../Filter1.cpf</value></simple>
      </struct>
    The Filter component is deployed with its input port being part of a connection named unfiltered_cartesianPosition_desi, while its output port is part of a connection named filtered_cartesianPosition_desi. Comparing the HMI port/connections above with the Robot port/connections below, you can see that the Filter's input port is connected to the HMI and its output port is connected to the Robot.

      <struct name="Robot" type="Robot">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>Filter</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>filtered_cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    The robot component is the same as Part 1, except that its input port is part of a connection named filtered_cartesianPosition_desi (ie connected to the Filter).

    Run this part with

    cd /path/to/ConnectionNaming/build
    deployer-macosx -s ../Connect-2.xml

    Examine all three components, and note that all ports are connected, and in particular, that the HMI and Filter.inputPosition ports match while the Filter.outputPosition and Robot ports match (ie they have the 'x' axis filtered out).

    Using connection naming allows us to connect ports of different names. This is particularly useful with a generic component like this filter, as in one deployment it may connect to a component with ports named cartesianPosition_desi, while in another deployment it may connect to ports named CartDesiPos, or any other names. The filter component is now decoupled from the actual port names used to deploy it.
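    For instance, the same OneAxisFilter could be dropped into a hypothetical deployment whose other components use ports named CartDesiPos; only the Ports section of the deployment XML changes, not the component (the connection names below are made up for illustration):

```xml
<struct name="Ports" type="PropertyBag">
  <!-- hypothetical alternative deployment: same component ports,
       different connection names -->
  <simple name="inputPosition" type="string">
    <value>unfiltered_CartDesiPos</value></simple>
  <simple name="outputPosition" type="string">
    <value>filtered_CartDesiPos</value></simple>
</struct>
```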

    Part 3: HMI, two filters and a robot

    This part adds a second filter between the first filter and the robot.

    As with Parts 1 and 2, the libraries are loaded first.

      <struct name="HMI" type="HMI">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>unfiltered_cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    There is no change in the HMI from Part 2.

      <struct name="Filter1" type="OneAxisFilter">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>HMI</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="inputPosition" type="string">
            <value>unfiltered_cartesianPosition_desi</value></simple>
          <simple name="outputPosition" type="string">
            <value>filtered_cartesianPosition_desi</value></simple>
        </struct> 
        <simple name="PropertyFile" type="string">
          <value>../Filter1.cpf</value></simple>
      </struct>
    There is no change in the first filter from Part 2.

      <struct name="Filter2" type="OneAxisFilter">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>HMI</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="inputPosition" type="string">
            <value>filtered_cartesianPosition_desi</value></simple>
          <simple name="outputPosition" type="string">
            <value>double_filtered_cartesianPosition_desi</value></simple>
        </struct> 
        <simple name="PropertyFile" type="string">
          <value>../Filter2.cpf</value></simple>
      </struct>
    The second filter has its input port part of a connection named filtered_cartesianPosition_desi (ie it is connected to Filter1's output port), and the second filter's output port is part of a connection named double_filtered_cartesianPosition_desi (which, as you will see, is connected to the robot's input port).

      <struct name="Robot" type="Robot">
        <struct name="Activity" type="PeriodicActivity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>Filter2</value></simple>
        </struct> 
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>double_filtered_cartesianPosition_desi</value></simple>
        </struct> 
      </struct>
    The only change in the robot component, from Part 2, is to change its peer to Filter2 and to use a connection named double_filtered_cartesianPosition_desi (ie connect it to Filter2).

    Run this part with

    cd /path/to/ConnectionNaming/build
    deployer-macosx -s ../Connect-3.xml

    Examine all components, and note which ports are connected, and what their values are. Note that the robot has two axes knocked out (x and y).

    Points to note

    1. WARNING The deployer displays port names for ports within components, while the OCL reporting component also uses port names. Only the act of connecting ports between peers when deploying a component network makes use of the connection naming shown above.
    2. Using connection naming allows us to reuse a component without resorting to renaming its ports or modifying its code in any way. This is an example of deployment-time configuration. Note that there are certainly instances where run-time configuration of port-names may be needed (eg the component has to name its ports based on the component name itself), but in our experience, deployment-time configuration is more frequent and decouples components better.
    3. Note that as many filters as are required could be chained together in this manner, and that none of the input, output, nor filter components need know that they are connected in such a fashion. Decoupling is your friend, and allowed the Filter component writer to simply concentrate on writing a component that did one thing well: filtered a cartesian position (yes, a trivial example, but a valid point nonetheless).
    4. You may notice that the deployment files do not specify peer combinations in pairs. The peers are mentioned in one direction only. We use this to decouple (yet again) a component from knowing what peers it is connected to, where possible. For example, Filter1 in both Parts 2 and 3 does not know what component is down-stream from it. It doesn't know, nor does it care, whether its output is filtered again, connected to a robot, or whatever. Again, decoupling. This can dramatically help when deploying large systems.

    To build

    In a shell

    cd /path/to/ConnectionNaming
    mkdir build
    cd build
    cmake .. -DOROCOS_TARGET=macosx
    make

    For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

    Tested in Mac OS X Leopard 10.5.7.

    AttachmentSize
    HMI.hpp2.16 KB
    Robot.hpp1.99 KB
    OneAxisFilter.hpp2.52 KB
    HMI.cpp2.04 KB
    Robot.cpp1.94 KB
    OneAxisFilter.cpp2.92 KB
    Connect-1.xml1.96 KB
    Connect-2.xml2.91 KB
    Connect-3.xml3.85 KB
    connectionNaming.tar_.bz27.01 KB

    Simple Examples

    Now located at http://orocos.org/wiki/rtt/examples-and-tutorials

    Simple TCP client using non-periodic component

    Rationale

    Problem: You want a component that connects to a remote TCP server, and reads data from it (this example could easily write, instead of reading). This component will block for varying amounts of time when reading.

    Solution: Use a non-periodic component. This example outlines one method to structure the component, to deal with the non-blocking reads while still being responsive to other components, being able to run a state machine, etc.

    Assumptions

    • Uses Qt sockets to avoid operating-system intricacies and differences when using actual sockets. The code can easily be modified to use bind(), accept(), listen(), etc. instead. It is the structure of the solution that we are interested in.
    • The build directory is within the source directory. This helps with dynamic library loading.
    • Does not attempt reconnection if unable to connect on first attempt.
    • Non-robust error handling.
    • Does not validate property values (a robust component would validate that the timeouts were valid, eg. not negative, within a configureHook()).

    Files

    SimpleNonPeriodicClient.cpp

    SimpleNonPeriodicClient.hpp

    SimpleNonPeriodicClient.xml

    SimpleNonPeriodicClient.cpf

    Buildable tarball

    The .cpf file has a .txt extension simply to keep the wiki happy. To use the file, rename it to SimpleNonPeriodicClient.cpf.

    Component definition

    This is the class definition

    class SimpleNonPeriodicClient : public RTT::TaskContext
    {
    protected:
        // DATA INTERFACE
     
        // *** OUTPUTS ***
     
        /// the last read data
        RTT::WriteDataPort<std::string>            lastRead_port;
     
        /// the number of items successfully read
        RTT::Attribute<int>                        countRead_attr;
     
        // *** CONFIGURATION ***
     
        /// name to listen for incoming connections on, either FQDN or IPv4 address
        RTT::Property<std::string>                hostName_prop;
        /// port to listen on
        RTT::Property<int>                        hostPort_prop;
        /// timeout in seconds, when waiting for connection
        RTT::Property<int>                        connectionTimeout_prop;
        /// timeout in seconds, when waiting to read
        RTT::Property<int>                        readTimeout_prop;
     
    public:
        SimpleNonPeriodicClient(std::string name);
        virtual ~SimpleNonPeriodicClient();
     
    protected:
        /// reset count and lastRead, attempt to connect to remote
        virtual bool startHook();
        /// attempt to read and process one packet
        virtual void updateHook();
        /// close the socket and cleanup
        virtual void stopHook();
        /// cause updateHook() to return
        virtual bool breakUpdateHook();
     
        /// Socket used to connect to remote host
        QTcpSocket*    socket;
        /// Flag indicating to updateHook() that we want to quit
        bool        quit;
    };

    The component has a series of properties specifying the remote host and port to connect to, as well as timeout parameters. It also uses an RTT attribute to count the number of successful reads that have occurred, and stores the last read data as a string in an RTT data port.

    Component implementation

    #include "SimpleNonPeriodicClient.hpp"
    #include <rtt/Logger.hpp>
    #include <ocl/ComponentLoader.hpp>
     
    #include <QTcpSocket>

    The class definition is included as well as the RTT logger, and importantly, the OCL component loader that turns this class into a deployable component in a shared library.

    Most importantly, all Qt-related headers come after all Orocos headers. This is required as Qt redefines certain words (eg "slot", "emit") which, when used in our code or in Orocos code, cause compilation errors.

    SimpleNonPeriodicClient::SimpleNonPeriodicClient(std::string name) :
            RTT::TaskContext(name),
            lastRead_port("lastRead", ""),
            countRead_attr("countRead", 0),
            hostName_prop("HostName", 
                          "Name to listen for incoming connections on (FQDN or IPv4)", ""),
            hostPort_prop("HostPort", 
                          "Port to listen on (1024-65535 inclusive)", 0),
            connectionTimeout_prop("ConnectionTimeout", 
                                   "Timeout in seconds, when waiting for connection", 0),
            readTimeout_prop("ReadTimeout", 
                             "Timeout in seconds, when waiting for read", 0),
            socket(new QTcpSocket), 
            quit(false)
    {
        ports()->addPort(&lastRead_port);
     
        attributes()->addAttribute(&countRead_attr);
     
        properties()->addProperty(&hostName_prop);
        properties()->addProperty(&hostPort_prop);
        properties()->addProperty(&connectionTimeout_prop);
        properties()->addProperty(&readTimeout_prop);
    }

    The constructor simply sets up the data interface elements (ie the port, attribute and properties), and gives them appropriate initial values. Note that some of these initial values are illegal, which would aid any validation code in a configureHook() (which has not been done in this example).

    SimpleNonPeriodicClient::~SimpleNonPeriodicClient()
    {
        delete socket;
    }

    The destructor cleans up by deleting the socket we allocated in the constructor.

    Now to the meat of it

    bool SimpleNonPeriodicClient::startHook()
    {
        bool        rc                    = false;        // prove otherwise
        std::string    hostName            = hostName_prop.rvalue();
        int            hostPort            = hostPort_prop.rvalue();
        int         connectionTimeout    = connectionTimeout_prop.rvalue();
     
        quit = false;
     
        // attempt to connect to remote host/port
        log(Info) << "Connecting to " << hostName << ":" << hostPort << endlog();
        socket->connectToHost(hostName.c_str(), hostPort);
        if (socket->waitForConnected(1000 * connectionTimeout))    // to milliseconds
        {
            log(Info) << "Connected" << endlog();
            rc = true;
        }
        else
        {    
            log(Error) << "Error connecting: " << socket->error() << ", " 
                       << socket->errorString().toStdString() << endlog();
            // as we now return false, this component will fail to start.
        }
     
        return rc;
    }
    The startHook() uses the properties loaded from the SimpleNonPeriodicClient.cpf file to attempt to connect to the remote host. If the remote port is not ready, the attempted connection will time out.

    If the connection is not made successfully, then startHook() will return false, which prevents the component from actually being started. No reconnection is attempted (see Assumptions above).

    void SimpleNonPeriodicClient::updateHook()
    {
        // wait for some data to arrive, timing out if necessary
        int     readTimeout        = readTimeout_prop.rvalue();
        log(Debug) << "Waiting for data with timeout=" << readTimeout << " seconds" << endlog();
        if (!socket->waitForReadyRead(1000 * readTimeout))
        {
            log(Error) << "Error waiting for data: " << socket->error() << ", " 
                       << socket->errorString().toStdString() 
                       << ". Num bytes = " 
                       << socket->bytesAvailable() << endlog();
            log(Error) << "Disconnecting" << endlog();
            // disconnect socket, and do NOT call this function again
            // ie no engine()->getActivity()->trigger()
            socket->disconnectFromHost();
            return;        
        }
     
        // read and print whatever data is available, but stop if instructed
        // to quit
        while (!quit && (0 < socket->bytesAvailable()))
        {
    #define    BUFSIZE        10
            char            str[BUFSIZE + 1];    // +1 for terminator
            qint64            numRead;
     
            numRead = socket->read((char*)&str[0], 
                                   min((qint64)BUFSIZE, socket->bytesAvailable()));
            if (0 < numRead)
            {
                str[numRead] = '\0';    // terminate only the bytes actually read
                log(Info) << "Got " << numRead << " bytes : '" << &str[0] << "'" << endlog();
                countRead_attr.set(countRead_attr.get() + 1);
                lastRead_port.Set(&str[0]);
            }
        }
     
        // if not quitting then trigger another immediate call to this function, to
        // get the next batch of data
        if (!quit)
        {
            engine()->getActivity()->trigger();
        }
    }

    The updateHook() function attempts to wait until data is available, and then reads the data BUFSIZE characters at a time. If it times out waiting for data, then it logs an error and disconnects the socket. This is not a robust approach and a real algorithm would deal with this differently.

    As data may be continually arriving and/or we get more than BUFSIZE characters at a time, the while loop may iterate several times. The quit flag will indicate if the user wants to stop the component, and that we should stop reading characters.

    Of particular note is the last line

    engine()->getActivity()->trigger();
    This causes updateHook() to be called again immediately by the execution engine. Essentially, this makes the non-periodic component act as a periodic component with a varying period. Of course, this is not called if the component is being stopped (ie quit==true).

    void SimpleNonPeriodicClient::stopHook()
    {
        if (socket->isValid() &&
            (QAbstractSocket::ConnectedState == socket->state()))
        {
            log(Info) << "Disconnecting" << endlog();
            socket->disconnectFromHost();
        }
    }
    The stopHook() simply disconnects the socket if it is currently connected.

    bool SimpleNonPeriodicClient::breakUpdateHook()
    {
        quit = true;
        return true;
    }
    The breakUpdateHook() is very important, as it is the only way to inform a blocked updateHook() that it is time to return and quit. In this example we set the quit flag and return true. The quit flag will be picked up by updateHook() when it finishes waiting for data (in socket->waitForReadyRead()). Returning true from breakUpdateHook() tells the execution engine that we successfully told updateHook() to return and that it should wait (one second, hardcoded) for updateHook() to complete and return. If we returned false, then stop would also return false.

    We could have also done something like socket->abort() to forcibly terminate any blocked socket->waitForReadyRead() calls.

    When using system calls (e.g. read() ) instead of Qt classes you could attempt to send a signal to interrupt the system call, however, this might not have the desired effect when the component is deployed ... the reader is advised to be careful here.

    ORO_CREATE_COMPONENT(SimpleNonPeriodicClient)
    This line of code creates a deployable component for the SimpleNonPeriodicClient class, which the deployer can load from a shared library.

    To build

    In a shell

    cd /path/to/SimpleNonPeriodicClient
    mkdir build
    cd build
    cmake .. -DOROCOS_TARGET=macosx
    make

    For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

    Tested in Mac OS X Leopard 10.5.7, and Ubuntu Jaunty Linux.

    To run

    Start one shell and run netcat to act as the server (NB 50001 is the HostPort value from your SimpleNonPeriodicClient.cpf file)

    nc -l 50001

    Start a second shell and deploy the SimpleNonPeriodicClient component

    cd /path/to/SimpleNonPeriodicClient/build
    deployer-macosx -s ../SimpleNonPeriodicClient.xml

    Now type in the first shell; when you hit enter, netcat will send the data and the SimpleNonPeriodicClient component will print it in chunks of up to N characters (where N is the size of the buffer in updateHook()).

    Points to note:

    1. The SimpleNonPeriodicClient component will time out if you do not hit enter within ReadTimeout seconds (as specified in the SimpleNonPeriodicClient.cpf file).
    2. Setting the ORO_LOGLEVEL environment variable to 5 or 6, or running the deployer with the -linfo or -ldebug options, will generate additional debugging statements.
    3. The component will take up to ReadTimeout seconds to respond to the user typing quit in the deployer, as breakUpdateHook() does not forcibly exit the socket->waitForReadyRead() call.

    AttachmentSize
    SimpleNonPeriodicClient.cpp7.42 KB
    SimpleNonPeriodicClient.hpp3.11 KB
    SimpleNonPeriodicClient.xml1 KB
    SimpleNonPeriodicClient-cpf.txt748 bytes
    SimpleNonPeriodicClient.tar_.bz27.72 KB

    Sample output

    The netcat shell, with the text the user typed in.

    nc -l 50001 
    The quick brown fox jumps
    over the lazy dog. 

    The deployer shell, showing the text read in chunks, as well as the updated port and attribute within the component.

    deployer-macosx -s ../SimpleNonPeriodicClient.xml -linfo
    0.009 [ Info   ][deployer-macosx::main()] No plugins present in /usr/lib/rtt/macosx/plugins
    0.009 [ Info   ][DeploymentComponent::loadComponents] Loading '../SimpleNonPeriodicClient.xml'.
    0.010 [ Info   ][DeploymentComponent::loadComponents] Validating new configuration...
    0.011 [ Info   ][DeploymentComponent::loadLibrary] Storing orocos-rtt
    0.011 [ Info   ][DeploymentComponent::loadLibrary] Loaded shared library 'liborocos-rtt-macosx.dylib'
    0.054 [ Info   ][DeploymentComponent::loadLibrary] Loaded multi component library 'libSimpleNonPeriodicClient.dylib'
    0.054 [ Warning][DeploymentComponent::loadLibrary] Component type name SimpleNonPeriodicClient already used: overriding.
    0.054 [ Info   ][DeploymentComponent::loadLibrary] Loaded component type 'SimpleNonPeriodicClient'
    0.055 [ Info   ][DeploymentComponent::loadLibrary] Storing SimpleNonPeriodicClient
    0.058 [ Info   ][DeploymentComponent::loadComponent] Adding SimpleNonPeriodicClient as new peer:  OK.
    0.058 [ Warning][SingleThread] Forcing priority (0) of thread to 0.
    0.058 [ Info   ][NonPeriodicActivity] SingleThread created with priority 0 and period 0.
    0.058 [ Info   ][NonPeriodicActivity] Scheduler type was set to `4'.
    0.059 [ Info   ][PropertyLoader:configure] Configuring TaskContext 'SimpleNonPeriodicClient' with '../SimpleNonPeriodicClient.cpf'.
    0.059 [ Info   ][DeploymentComponent::configureComponents] Configured Properties of SimpleNonPeriodicClient from ../SimpleNonPeriodicClient.cpf
    0.059 [ Info   ][DeploymentComponent::configureComponents] Re-setting activity of SimpleNonPeriodicClient
    0.059 [ Info   ][DeploymentComponent::configureComponents] Configuration successful.
    0.060 [ Info   ][DeploymentComponent::startComponents] Connecting to 127.0.0.1:50001
    0.064 [ Info   ][DeploymentComponent::startComponents] Connected
    0.065 [ Info   ][DeploymentComponent::startComponents] Startup successful.
    0.065 [ Info   ][deployer-macosx::main()] Successfully loaded, configured and started components from ../SimpleNonPeriodicClient.xml
       Switched to : Deployer
    0.066 [ Info   ][SimpleNonPeriodicClient] Entering Task Deployer
     
      This console reader allows you to browse and manipulate TaskContexts.
      You can type in a command, event, method, expression or change variables.
      (type 'help' for instructions)
        TAB completion and HISTORY is available ('bash' like)
     
     In Task Deployer[S]. (Status of last Command : none )
     (type 'ls' for context info) :4.816 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'The quick '
    4.816 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'brown fox '
    7.448 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : 'jumps
    over'
    7.448 [ Info   ][SimpleNonPeriodicClient] Got 10 bytes : ' the lazy '
    12.448 [ ERROR  ][SimpleNonPeriodicClient] Error waiting for data: 5, Network operation timed out. Num bytes = 5
    12.448 [ ERROR  ][SimpleNonPeriodicClient] Disconnecting
     
     
     In Task Deployer[S]. (Status of last Command : none )
     (type 'ls' for context info) :ls SimpleNonPeriodicClient
     
     Listing TaskContext SimpleNonPeriodicClient :
     
     Configuration Properties: 
         string HostName       = 127.0.0.1            (Name to listen for incoming connections on (FQDN or IPv4))
            int HostPort       = 50001                (Port to listen on (1024-65535 inclusive))
            int ConnectionTimeout = 5                    (Timeout in seconds, when waiting for connection)
            int ReadTimeout    = 5                    (Timeout in seconds, when waiting for read)
     
     Execution Interface:
      Attributes   : 
            int countRead      = 4                   
     
      Methods      : activate cleanup configure error getErrorCount getPeriod getWarningCount inFatalError inRunTimeError inRunTimeWarning isActive isConfigured isRunning resetError start stop trigger update warning 
      Commands     : (none)
      Events       : (none)
     
     Data Flow Ports: 
      W(U)      string lastRead       =  the lazy 
     
     Task Objects: 
      this           ( The interface of this TaskContext. ) 
      scripting      ( Access to the Scripting interface. Use this object in order to load or query programs or state machines. ) 
      engine         ( Access to the Execution Engine. Use this object in order to address programs or state machines which may or may not be loaded. ) 
      marshalling    ( Read and write Properties to a file. ) 
      lastRead       ( (No description set for this Port) ) 
     
     Peers        : (none)
     
     In Task Deployer[S]. (Status of last Command : none )
     (type 'ls' for context info) :quit
     
    18.089 [ Info   ][DeploymentComponent::stopComponents] Stopped SimpleNonPeriodicClient
    18.089 [ Info   ][DeploymentComponent::cleanupComponents] Cleaned up SimpleNonPeriodicClient
    18.090 [ Info   ][DeploymentComponent::startComponents] Disconnected and destroyed SimpleNonPeriodicClient
    18.090 [ Info   ][DeploymentComponent::startComponents] Kick-out successful.
    18.091 [ Info   ][Logger] Orocos Logging Deactivated.

    Using XML substitution to manage complex deployments

    Rationale

    Problem: You deploy multiple configurations of your system, perhaps choosing between a real and simulated robot, some real and simulated device, etc. You want to parameterize the deployments to reduce the number of files you have to write for the varying configuration combinations.

    Solution: Use the XML ENTITY element.

    Assumptions

    • Works with Xerces only (v2 tested, v3 should also support this). Will not work with the default TinyXML processor.

    Compatibility

    Tested on v1.x trunk on Mac OS X Snow Leopard. These instructions should apply identically to RTT 2.x installations.

    Files

    See the attachments at the bottom of this page.

    Approach

    This simple example demonstrates how to deploy a tiny system in two configurations, by simply changing the name of the deployed component. This approach can be (and has been) used to manage deployments with many system configurations.

    There is a top-level file per configuration, which specifies all the parameters. Each top-level file then includes a child file which instantiates components, etc.

    One top level file

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd"
    [
      <!-- internal entities for substitution -->
      <!ENTITY name "Console">
      <!ENTITY lib "liborocos-rtt">
      <!-- external entity for file substitution -->
      <!ENTITY FILE_NAME SYSTEM "test-entity-child.xml">
    ]
    >
     
    <properties>
     
      &FILE_NAME;
     
    </properties>

    The internal entity values are used to substitute component names and other basic parameters. The external entity value (&FILE_NAME;) is used to include child files, so that the entity values defined in the top-level file are available within the child file. Orocos' built-in include statement does not make the top-level entity values available within the child file.

    The child file simply substitutes the two internal entities for a library name, and a component name.

    <properties>
     
      <simple name="Import" type="string">
        <value>&lib;</value>
      </simple>
      <simple name="Import" type="string">
        <value>liborocos-ocl-common</value>
      </simple>
     
      <struct name="&name;" type="OCL::HMIConsoleOutput">
      </struct>
     
    </properties>

    The other top level file differs from the first top level file only in the name of the component.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd"
    [
      <!ENTITY name "Console2">
      <!ENTITY lib "liborocos-rtt">
      <!ENTITY file SYSTEM "test-entity-child.xml">
    ]
    >
     
    <properties>
     
      &file;
     
    </properties>

    You can use relative paths within the external entity filename, though I have had inconsistent success with this: sometimes the relative path is needed, and other times it is not. The path appears to be resolved relative to the including file, so if the parent file was itself loaded via a relative path, specify the child file relative to the parent file rather than to the working directory from which you started the deployment.
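The internal-entity mechanism used above is plain XML, so it can be sketched with Python's expat-based parser (this only illustrates the XML substitution itself; the deployer needs Xerces for it, since TinyXML does not expand entities, and external-entity file inclusion needs extra handling not shown here):

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the top-level file: one internal entity,
# substituted into a component name, just as in the deployment example.
doc = """<?xml version="1.0"?>
<!DOCTYPE properties [
  <!ENTITY name "Console">
]>
<properties>
  <struct name="&name;" type="OCL::HMIConsoleOutput"/>
</properties>"""

root = ET.fromstring(doc)
print(root.find("struct").get("name"))  # the entity expands to "Console"
```

Changing the single `<!ENTITY name ...>` line is all it takes to rename the component, which is exactly what the two top-level files above do.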

    Attachments:
    test-entity.xml (1.56 KB)
    test-entity2.xml (278 bytes)
    test-entity-child.xml (307 bytes)

    Using real-time logging

    This page collects notes and issues on the use of real-time logging. Its contents will eventually become the documentation for this feature.

    This feature has been integrated in the Orocos 1.x and 2.x branches but is still considered under development. If you need a real-time logging infrastructure (i.e. text messages to users), this is exactly where you need to be. If you need real-time data-stream logging of ports, use OCL's Reporting or NetcdfReporting component instead.

    It is noted in the text where Orocos 1.x and 2.x differ.

    Restrictions and issues

    Restrictions

    Start the logging components first: logging events issued before the logging service's configure() are dropped. The logging service, itself a component, is what connects categories and appenders; until it is configured and the connections are all made, no appender is available to handle the events. We therefore suggest putting your appender components and the logging service in a separate deployment XML or script file that is loaded first. This allows your application components to use logging from the start (component creation). See the XML deployment files in ocl/logging/tests for examples. OCL's deployer can execute multiple XML or script files, in order.

    Categories cannot be created in real-time: they live on the normal heap via new/delete. Create all categories in your component's constructor, in configureHook(), or similar.

    NDCs are not supported: they involve std::string and std::vector, which we currently cannot replace.

    Works only with OCL's deployers: if you use a non-deployer mechanism to bring up your system, you will need to add code to ensure that the log4cpp framework creates our OCL::Category objects instead of the default (non-real-time) log4cpp::Category objects. Do this early in your application, before any components or categories are created:

        log4cpp::HierarchyMaintainer::set_category_factory(
            OCL::logging::Category::createOCLCategory);

    Issues

    On the mailing list it was requested to log when events have been lost. There are two places where this would need to be implemented, both annotated with TODOs in the code:
    • When creation of the OCL::String objects in a LoggingEvent exhausts the memory pool
    • When the buffer between a category and its appenders is full

    This is not currently dealt with, but could be in future implementations.

    In RTT/OCL 1.x, multiple appenders connected to the same category each receive only some of the incoming logging events, because each appender pops different elements from the category's single buffer. This issue has been solved in 2.x.
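The 1.x behaviour can be pictured with a toy model (plain Python, not the OCL code): appenders taking turns popping a single shared buffer each see only a share of the events, while per-appender buffers (the 2.x design) deliver every event to every appender.

```python
from collections import deque

def drain_shared(events, n_appenders):
    # 1.x model: one buffer per category; each pop removes the event for good,
    # so each appender only receives the events it happened to pop.
    buf = deque(events)
    received = [[] for _ in range(n_appenders)]
    i = 0
    while buf:
        received[i % n_appenders].append(buf.popleft())
        i += 1
    return received

def drain_per_appender(events, n_appenders):
    # 2.x model: each appender drains its own buffer, so all see all events.
    return [list(events) for _ in range(n_appenders)]

print(drain_shared(["e1", "e2", "e3", "e4"], 2))       # [['e1', 'e3'], ['e2', 'e4']]
print(drain_per_appender(["e1", "e2", "e3", "e4"], 2))  # both appenders get all four
```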

    The size of the buffer between a category and its appenders is currently fixed (see ocl/logging/Category.cpp). This will be fixed later on the 2.x branch. Note that this fixed size, combined with the default consumption rate of the FileAppender, means you can exhaust the default TLSF memory pool in very short order. For a complex application (~40 components, 400 Hz cycle rate) we increased the buffer size to 200, increased the memory pool to tens of kilobytes (or megabytes), and increased the FileAppender consumption rate to 500 messages per second.
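A back-of-envelope check shows why the defaults can overflow so quickly. The rates below are hypothetical (one log event per component per cycle), chosen only to match the shape of the complex application described above:

```python
def seconds_until_full(buffer_size, produce_hz, drain_hz):
    """Time until a fixed-size buffer overflows, assuming steady rates."""
    surplus = produce_hz - drain_hz
    if surplus <= 0:
        return float("inf")  # the appender keeps up, so the buffer never fills
    return buffer_size / float(surplus)

# Hypothetical load: 40 components each logging once per 400 Hz cycle,
# drained by an appender consuming 500 messages per second.
print(seconds_until_full(200, 40 * 400, 500))  # ~0.013 s: overflow almost at once
```

Even a buffer of 200 entries survives only milliseconds under such a load, which is why both the buffer size and the appender consumption rate had to be raised together.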

    Viewing logs

    We can use standard Log4j log viewers in two ways:

    1. Use FileAppender, which writes log lines to a file, and let the viewers read that file.
    2. Use Log4cxxAppender, which creates a network socket to which Log4cxx/Log4j viewers can connect.

    These log viewers are compatible:

    Complete application example

    As of October 2010, for RTT 1.x this assumes you are using:

    And for RTT 2.x, use the Orocos Toolchain 2.2 or later from:

    then build in the following order, with these options ON:

    • log4cpp (default options)
    • RTT: ENABLE_RT_MALLOC, OS_RT_MALLOC
    • OCL: BUILD_RTALLOC, BUILD_LOGGING

    The deployer now defaults to a 20k real-time memory pool (see the OCL CMake option ORO_DEFAULT_RTALLOC_SIZE), all Orocos RTT::Logger calls end up inside log4cpp, and the default for RTT::Logger logging events is to log to the file "orocos.log", same as always. But now you can configure all logging in one place!

    IMPORTANT Be aware that there are two logging hierarchies at work here:

    1. a non-real-time, log4cpp-based logging in use by RTT::Logger (currently only for RTT 1.x)
    2. a real-time, OCL::Logging-based (with log4cpp underneath) in use by application code

    In time, hopefully these two will evolve into just the latter.

    Required Build flags

    We assume here that you used 'orocreate-pkg' to set up a new application, so you are using the UseOrocos CMake macros.

    1. Your application's manifest.xml must depend on ocl.
    2. Your application's CMakeLists.txt must include the line: orocos_use_package(ocl-logging)

    Both steps will make sure that your libraries link with the Orocos logging libraries and that include files are found.

    Configuring real-time memory pool size

    The deployers have command-line options for this:

    deployer-macosx --rtalloc-mem-size 10k
    deployer-corba-macosx --rtalloc-mem-size 30m
    deployer-corba-macosx --rtalloc 10240      # understands shortened, but unique, options
    See the note under Technical details below regarding TLSF's bookkeeping overhead; the pool needs to be larger than that value.
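The size suffixes accepted above ("10k", "30m") and the overhead check can be sketched as follows. The helper is hypothetical (the real option parsing lives inside OCL's deployer); the ~3k overhead figure is the 32-bit Linux value from the Technical details section:

```python
def parse_rtalloc_size(text):
    # Hypothetical helper mirroring the "10k" / "30m" size suffixes.
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    text = text.strip().lower()
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)

TLSF_OVERHEAD = 3 * 1024  # ~3k bookkeeping on 32-bit Linux
pool = parse_rtalloc_size("10k")
print(pool, pool - TLSF_OVERHEAD)  # 10240 bytes requested, 7168 left for events
```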

    Configuring RTT::Logger logging

    NOTE: this feature is not available in the official release. Skip to the next section (Configuring OCL::logging) if you are not using the log4cpp branch of the RTT.

    You can use any of log4cpp's configurator approaches, but the deployers already know about PropertyConfigurators. You can pass a log4cpp property file to the deployer, and it will be used to configure the first of the hierarchies above: the non-real-time logging used by RTT::Logger. For example

    deployer-macosx --rtt-log4cpp-config-file /z/l/log4cpp.conf
    where the file /z/l/log4cpp.conf is something like
    # root category logs to application (this level is also the default for all
    # categories whose level is NOT explicitly set in this file)
    log4j.rootCategory=DEBUG, applicationAppender
     
    # orocos setup
    log4j.category.org.orocos.rtt=INFO, orocosAppender
    log4j.additivity.org.orocos.rtt=false   # do not also log to parent categories
     
    log4j.appender.orocosAppender=org.apache.log4j.FileAppender
    log4j.appender.orocosAppender.fileName=orocos-log4cpp.log
    log4j.appender.orocosAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.orocosAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %m%n
    This configuration file simply changes the output filename and format. You could also add additional appenders (e.g. to stdout, to socket) and change the logging level for sub-categories, if RTT supported them (e.g. scripting.rtt.orocos.org).
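The ConversionPattern fields map directly onto the pieces of each log line. A rough Python stand-in (not log4cpp itself; the millisecond field %l is faked as .000 here) shows how %d, %-5p and %m combine, and why the sample output later on this page contains "[INFO ]" with a trailing space:

```python
import time

def format_event(priority, message, when=None):
    # Stand-in for the pattern %d{%Y%m%dT%T.%l} [%-5p] %m%n: timestamp,
    # priority left-justified to width 5, then the message and a newline.
    when = when if when is not None else time.localtime()
    stamp = time.strftime("%Y%m%dT%H:%M:%S", when) + ".000"
    return "%s [%-5s] %s\n" % (stamp, priority, message)

print(format_event("INFO", "found CORBA Naming Service.", time.gmtime(0)))
# 19700101T00:00:00.000 [INFO ] found CORBA Naming Service.
```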

    IMPORTANT Note the direction of the category name, from org to rtt. This is specific to log4cpp and other log4j-style frameworks. Using a category "rtt.orocos.org" and sub-category "scripting.rtt.orocos.org" won't do what you, nor log4cpp, expect.
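The reason the direction matters is level inheritance: a category with no configured level walks up its dotted parent chain until it finds one. A toy model of that lookup (plain Python, not log4cpp) shows that "org.orocos.rtt.scripting" inherits from "org.orocos.rtt", while the reversed name "scripting.rtt.orocos.org" falls all the way through to the root:

```python
def effective_level(category, levels):
    # Walk up the dotted hierarchy (child, parent, ..., root "") until a
    # configured level is found; a sketch of log4j-style inheritance.
    while category not in levels:
        category = category.rsplit(".", 1)[0] if "." in category else ""
    return levels[category]

levels = {"": "DEBUG", "org.orocos.rtt": "INFO"}
print(effective_level("org.orocos.rtt.scripting", levels))  # INFO (inherited)
print(effective_level("scripting.rtt.orocos.org", levels))  # DEBUG (falls to root)
```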

    Configuring OCL::logging (XML)

    This is how you would set up logging from a Deployer XML file. If you prefer to use a script, see the next section.

    See ocl/logging/tests/xxx.xml for complete examples and more detail, but in short

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
     
      <simple name="Import" type="string">
        <value>liborocos-logging</value>
      </simple>
      <simple name="Import" type="string">
        <value>libTestComponent</value>
      </simple>
     
      <struct name="TestComponent" type="OCL::logging::test::Component">
        <struct name="Activity" type="Activity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
      </struct>
     
      <struct name="AppenderA" type="OCL::logging::FileAppender">
        <struct name="Activity" type="Activity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
        <struct name="Properties" type="PropertyBag">
          <simple name="Filename" type="string"><value>appendera.log</value></simple>
          <simple name="LayoutName" type="string"><value>pattern</value></simple>
          <simple name="LayoutPattern" type="string"><value>%d [%t] %-5p %c %x - %m%n</value></simple>
        </struct>
      </struct>
     
      <struct name="LoggingService" type="OCL::logging::LoggingService">
        <struct name="Activity" type="Activity">
          <simple name="Period" type="double"><value>0.5</value></simple>
          <simple name="Priority" type="short"><value>0</value></simple>
          <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
        </struct>
     
        <simple name="AutoConf" type="boolean"><value>1</value></simple>
        <simple name="AutoStart" type="boolean"><value>1</value></simple>
     
        <struct name="Properties" type="PropertyBag">
          <struct name="Levels" type="PropertyBag">
            <simple name="org.orocos.ocl.logging.tests.TestComponent" 
                    type="string"><value>info</value></simple>
          </struct>
     
          <struct name="Appenders" type="PropertyBag">
            <simple name="org.orocos.ocl.logging.tests.TestComponent" 
                    type="string"><value>AppenderA</value></simple>
          </struct>
        </struct>
     
        <struct name="Peers" type="PropertyBag">
          <simple type="string"><value>AppenderA</value></simple>
        </struct> 
     
      </struct>
     
    </properties>
    which creates one component that logs to an org.orocos.ocl.logging.tests.TestComponent category; that category is connected to one appender, which logs to the file appendera.log.

    To run this XML file, save it as 'setup_logging.xml' and use:

      deployer-gnulinux -s setup_logging.xml

    Configuring OCL::logging (Lua)

    This is how you would set up logging from a Lua script file. If you prefer XML, see the previous section.

    require("rttlib")
     
    -- Set this to true to write the property files the first time.
    write_props=false
     
    tc = rtt.getTC()
    depl = tc:getPeer("deployer")
     
    -- Create components. Enable BUILD_LOGGING and BUILD_TESTS for this to
    -- work.
    depl:loadComponent("TestComponent","OCL::logging::test::Component")
    depl:setActivity("TestComponent", 0.5, 0, 0)
     
    depl:loadComponent("AppenderA", "OCL::logging::FileAppender")
    depl:setActivity("AppenderA", 0.5, 0, 0)
     
    depl:loadComponent("LoggingService", "OCL::logging::LoggingService")
    depl:setActivity("LoggingService", 0.5, 0, 0)
     
    test = depl:getPeer("TestComponent")
    aa = depl:getPeer("AppenderA")
    ls = depl:getPeer("LoggingService")
     
    depl:addPeer("AppenderA","LoggingService")
     
    -- Load marshalling service to read/write components
    depl:loadService("LoggingService","marshalling")
    depl:loadService("AppenderA","marshalling")
     
    if write_props then
       ls:provides("marshalling"):writeProperties("logging_properties.cpf")
       aa:provides("marshalling"):writeProperties("appender_properties.cpf")
       print("Wrote property files. Edit them and set write_props=false")
       os.exit(0)
    else
       ls:provides("marshalling"):loadProperties("logging_properties.cpf")
       aa:provides("marshalling"):loadProperties("appender_properties.cpf")
    end
     
    test:configure()
    aa:configure()
    ls:configure()
     
    test:start()
    aa:start()
    ls:start()

    To run this script, save it in 'setup_logging.lua' and do:

    rttlua-gnulinux -i setup_logging.lua

    Using OCL::Logging in C++

    The component itself uses logging like the following simplified example
    // TestComponent.hpp
    #include <ocl/LoggingService.hpp>
    #include <ocl/Category.hpp>
     
    class Component : public RTT::TaskContext
    {
    ...
        /// Our logging category
        OCL::logging::Category* logger;
    };
    // TestComponent.cpp
    #include <rtt/rt_string.hpp>
     
    Component::Component(std::string name) :
            RTT::TaskContext(name),
            logger(dynamic_cast<OCL::logging::Category*>(
                       &log4cpp::Category::getInstance("org.orocos.ocl.logging.tests.TestComponent")))
    {
    }
     
    bool Component::startHook()
    {
        bool ok = (0 != logger);
        if (!ok)
        {
            log(Error) << "Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'" << endlog();
        }
     
        return ok;
    }
     
    void Component::updateHook()
    {
        // RTT 1.X
        logger->error(OCL::String("Had an error here"));
        logger->debug(OCL::String("Some debug data ..."));
        // RTT 2.X
        logger->error(RTT::rt_string("Had an error here"));
        logger->debug(RTT::rt_string("Some debug data ..."));
        logger->getRTStream(log4cpp::Priority::DEBUG) << "Some debug data and a double value " << i;
    }

    IMPORTANT You must dynamic_cast to an OCL::logging::Category* to get the logger, as shown in the constructor above; failure to do this can lead to trouble. You must also explicitly use the OCL::String() syntax when logging. Failing to do so produces compiler errors, since otherwise the system would default to std::string and you would no longer be real-time. See the FAQ below for more detail.

    And the output of the above looks something like this:

    // file orocos.log, from RTT::Logger configured with log4cpp
    20100414T09:50:11.844 [INFO] ControlTask 'HMI' found CORBA Naming Service.
    20100414T09:50:11.845 [WARN] ControlTask 'HMI' already bound to CORBA Naming Service.
    and from a deployer with OCL::logging (note that here the categories are set as components.something)
    20100414T21:41:22.539 [INFO ] components.HMI Started servicing::HMI
    20100414T21:41:23.039 [DEBUG] components.Robot Motoman robot started
    20100414T21:41:42.539 [INFO ] components.ConnectionMonitor Connected
    and if you combine RTT::Logger and your own log4cpp-logging, say in a GUI application
    20100414T21:41:41.982 [INFO ] org.orocos.rtt Thread created with scheduler type '1', priority 0 and period 0.
    20100414T21:41:41.982 [INFO ] org.orocos.rtt Creating Proxy interface for HMI
    20100414T21:41:42.016 [DEBUG] org.me.myapp Connections made successfully
    20100414T21:41:44.595 [DEBUG] org.me.myapp.Robot Request position hold

    The last one is the most interesting. All RTT::Logger calls have been sent to the same appender as the application logs to. This means you can use the exact same logging statements in both your components (when they use OCL::Logging) and in your GUI code (when they use log4cpp directly). Less maintenance, less hassle, only one (more) tool to learn. The configuration file for the last example looks something like

    # root category logs to application (this level is also the default for all 
    # categories whose level is NOT explicitly set in this file)
    log4j.rootCategory=DEBUG, applicationAppender
     
    # orocos setup
    log4j.category.org.orocos.rtt=INFO, applicationAppender
    log4j.additivity.org.orocos.rtt=false   # do not also log to parent categories
     
    # application setup
    log4j.category.org.me=INFO, applicationAppender
    log4j.additivity.org.me=false         # do not also log to parent categories
     
    log4j.category.org.me.gui=WARN
    log4j.category.org.me.gui.Robot=DEBUG
    log4j.category.org.me.gui.MainWindow=INFO
     
    log4j.appender.applicationAppender=org.apache.log4j.FileAppender
    log4j.appender.applicationAppender.fileName=application.log
    log4j.appender.applicationAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.applicationAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %c %m%n

    Technical details

    • We rely on a real-time allocator called TLSF.
    • There is a several kilobyte overhead for TLSF's bookkeeping (~3k on 32-bit Ubuntu, ~6k on 64-bit Snow Leopard). You must take this into account, although the standard OCL TLSF pool size (256k) should cover your needs.
    • Only the OCL::String (in 1.x) and RTT::rt_string (in 2.x) objects in OCL::logging::LoggingEvent objects use the real-time memory pool.
    • When you create a category, all parent categories up to the root are created. For example, "org.me.myapp.cat1" causes creation of five (5) categories: "org.me.myapp.cat1", "org.me.myapp", "org.me", "org", and "" (the root category) (presuming none of these already exist). These all occur on the normal heap (see below).
    • Currently, exhausting the real-time memory pool results in logging events being silently dropped (also, see next item).
    • For real-time performance, ensure that TLSF is built with MMAP and SBRK support OFF in RTT's CMake options (-DOS_RT_MALLOC_MMAP=OFF -DOS_RT_MALLOC_SBRK=OFF).
    • TLSF use with multiple threads is currently supported only on non-macosx platforms. Use on macosx will exhibit (understandable) corruption of the TLSF bookkeeping (causing asserts).
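The parent-creation rule above can be checked with a short sketch (plain Python, not the log4cpp code), listing every category a dotted name implies, child first down to the root:

```python
def ancestors(category):
    # All categories that get created for a dotted name: the name itself,
    # each parent in turn, and finally the root category "".
    chain = [category]
    while "." in category:
        category = category.rsplit(".", 1)[0]
        chain.append(category)
    chain.append("")
    return chain

print(ancestors("org.me.myapp.cat1"))
# ['org.me.myapp.cat1', 'org.me.myapp', 'org.me', 'org', ''], i.e. five categories
```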

    FAQ

    Logging statements are not recorded

    Q: You are logging and everything seems fine, but you get no output to file/socket/stdout (depending on what your appender is).

    A: Make sure you are using an OCL::logging::Category* and not a log4cpp::Category*. The latter will silently compile and run, but it discards all logging statements. This situation can also mask accidental use of std::string instead of OCL::String. For example

    log4cpp::Category* logger = &log4cpp::Category::getInstance(name);
    logger->debug("Hello world");
    When the above is used within the OCL real-time logging framework, no logging statements are recorded, and it does not run in real-time. Changing the above to

    OCL::logging::Category* logger = 
      dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
    logger->debug("Hello world");
    will cause a compile error:
    /path/to/log4cpp/include/log4cpp/Category.hh: In member function ‘virtual bool MyComponent::configureHook()’:
    /path/to/log4cpp/include/log4cpp/Category.hh:310: error: ‘void log4cpp::Category::debug(const char*, ...)’ is inaccessible
    /path/to/my/source/MyComponent.cpp:64: error: within this context
    because the "Hello world" string is being treated as a std::string, which you can not use with OCL::logging::Category. Finally, correct the code to

    OCL::logging::Category* logger = 
      dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
    logger->debug(OCL::String("Hello world"));
    and the code compiles and runs, and logging statements are now recorded.

    omniORBpy - python binding for omniORB

    This page describes a working example of using omniORBpy to interact with an Orocos component. The example is very simple, and is intended for people who do not know where to start developing a CORBA client.

    Your first stop is http://omniorb.sourceforge.net/omnipy3/omniORBpy/ , the omniORBpy version 3 User's Guide. Read chapters 1 and 2, and optionally chapter 6. The example works with and without naming services.

    Once you are comfortable with omniORBpy, do the following (I assume you are kind enough to be a Linux user working on a console):

    1. download the rtt examples, and compile the smallnet orocos component (you might need first to fix the Makefile paths):
      • wget http://www.orocos.org/stable/examples/rtt/rtt-examples-1.10.0.tar.gz
        tar xf rtt-examples-1.10.0.tar.gz 
        cd rtt-examples-1.10.0/corba-example/
        make smallnet
    2. download the corba idls and, for simplicity's sake, copy the IDLs to a new empty directory:
      • svn co http://svn.mech.kuleuven.be/repos/orocos/trunk/rtt/src/corba/ 
        mkdir omniclt 
        cp corba/*idl omniclt/
        cd omniclt
    3. generate the Python stubs, two new directories should appear (namely RTT and RTT__POA)
      • omniidl -bpython *idl 
    4. download the attached python file (orocosclient.py) to your home directory and copy it where your IDLs are (current directory)
      • cp ~/orocosclient.py .
    5. open a new console and run your smallnet application
      • sudo ../smallnet

    If you get something like

    0.011 [ Warning][SmallNetwork] ControlTask 'ComponentA' could not find CORBA Naming Service.
    0.011 [ Warning][SmallNetwork] Writing IOR to 'std::cerr' and file 'ComponentA.ior'
    IOR:0...10100
    it means your omniNames is either not configured or not running. Try:
    sudo ../smallnet -ORBInitRef NameService=corbaname::127.0.0.1
    if this works (you see a line like: 0.011 [ Info ][SmallNetwork] ControlTask 'ComponentA' found CORBA Naming Service.), then you need to modify the InitRef parameter in your omniORB4.cfg (or similar, usually in /etc/) to read:
    InitRef=NameService=corbaname::127.0.0.1
    6. finally, run the python application
      • python orocosclient.py 

    If you cannot make your naming service work, try using the component's IOR. After running your smallnet server, copy the complete IOR printed on screen and paste it as the argument to the python program (including the word "IOR:"):

    python orocosclient.py IOR:0...10100

    Look at the IDLs and the code to understand how things work. I am no python expert, so if the coding style looks weird to you, my apologies. Good luck!

    Attachments:
    orocosclient.py_.txt (1.99 KB)

    Frequently asked questions (FAQ)

    Future home of FAQ

    How to build Debian packages

    Rationale

    You want to build Debian packages once, so that you can install them on multiple machines without building from source on each.

    Assumptions

    1. You are building for gnulinux only.
    2. You have "svn-b", etc., aliases set up (see "man svn-buildpackage").
    3. You are using Synaptic as your package manager.
    4. Example code is for Orocos v1.8, but also applies to later versions, including 2.x
    5. BASE_DIR is whatever directory you want to put everything into.

    To build the Orocos RTT packages (1.x)

    cd BASE_DIR
    svn co ...
    cd rtt
    debchange -v 1.8.0-0
    cd debian
    ./create-control.sh gnulinux    # optionally add "lxrt", "xenomai"
    svn add *1.8*install
    cd ..
    export DEB_BUILD_OPTIONS="parallel=2"    # or 4, 8, depending on your computer
    svn-br     # or svn-b

    Packages are built into BASE_DIR/build-area.

    To build the Orocos RTT packages (2.x)

    cd BASE_DIR
    git clone http://git.gitorious.org/orocos-toolchain/rtt.git
    cd rtt
    debchange -v 2.3.0-1
    cd debian
    ./create-control.sh gnulinux    # optionally add "lxrt", "xenomai"
    git add *2.3*install
    git commit -sm"2.3 release install files"
    cd ..
    export DEB_BUILD_OPTIONS="parallel=2"    # or 4, 8, depending on your computer
    git-buildpackage --git-upstream-branch=origin

    Packages are built into BASE_DIR/build-area.

    Make the packages available to your package manager

    Create your own repository

    cd BASE_DIR
    dpkg-scanpackages build-area /dev/null | gzip -9c > Packages.gz

    Now open /etc/apt/sources.list in your favorite editor and append the following lines to the bottom (substituting the full path to your repository for /path/to/BASE_DIR/).

    # Orocos packages
    deb file:///path/to/BASE_DIR/ ./

    Open Synaptic, reload, search for orocos and install.

    KDL and OCL

    Follow the same basic approach, first for KDL, then for OCL:

    1. build the packages
    2. update the repository by running just the "dpkg-scanpackages" line again
    3. install

    NB KDL and OCL will happily both build into "build-area" alongside RTT.

    Test installed packages

    • 1.x: Build the quicky components. Requires OCL (install at least the orocos-ocl-gnulinux1.8-bin and liborocos-ocl-gnulinux1.8-dev packages).

    # 1.x:
    svn co ...
    cd quicky
    mkdir build && cd build
    cmake ..
    make
     
    # one of the following two exports, depending on your situation
    export LD_LIBRARY_PATH=.
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:.
     
    deployer-gnulinux -s ../quicky.xml
    ls Quicky   # you should see Data_W != 0

    • 2.x: Run 'orocreate-pkg testme' and try to build the testme package.

    orocreate-pkg testme
    cd testme
     
    #non-ROS:
    make install
    #ROS:
    make
     
    deployer-gnulinux 
    > import("testme")
    > displayComponentTypes()

    Test CORBA deployer

    These instructions test inter-process communication on the same machine. See [2] for more details on running CORBA-based deployers between computers.

    In the first shell start the naming service and the deployer

    Naming_Service -m 0 -ORBDottedDecimalAddresses 1 -ORBListenEndpoints  iiop://127.0.0.1:2809 -ORBDaemon
    export NameServiceIOR=corbaloc:iiop:127.0.0.1:2809/NameService
    deployer-corba-gnulinux -s ../quicky.xml -- -ORBDottedDecimalAddresses 1
    ls Quicky   # you should see Data_W != 0

    In the second shell run the taskbrowser and see the Quicky component running in the deployer

    export NameServiceIOR=corbaloc:iiop:127.0.0.1:2809/NameService
    ctaskbrowser-gnulinux Deployer -ORBDottedDecimalAddresses 1
    ls Quicky   # you should see Data_W != 0

    Gotchas

    If the v1.8 files have already been committed to the repository, then you don't need the debchange and svn add commands when building the packages.

    Make repository available to other machines

    See [1] below.

    References

    [1] http://www.debian.org/doc/manuals/repository-howto/repository-howto#setting-up [2] http://orocos.org/wiki/rtt/frequently-asked-questions-faq/using-corba

    How to re-build Debian packages

    This page describes how to re-build Debian packages for a different Debian/Ubuntu release than the one they were prepared for.

    Note1: This only applies if you want to use the same version as the version in the package repository. If you want a newer version, consult How to build Debian packages.

    Note2: The steps below will rebuild Orocos for all targets in the repository, so lxrt, xenomai and gnulinux. If you care only for one of these targets, see also How to build Debian packages.

    First, make sure you added this deb-src line to your sources.list file:

    deb-src http://www.fmtc.be/debian etch main
    Next, type from your 'HOME/src' directory:

    sudo apt-get update
    apt-get source orocos-rtt
    sudo apt-get build-dep orocos-rtt
    sudo apt-get install devscripts build-essential fakeroot dpatch
    cd orocos-rtt-1.6.0
    dpkg-buildpackage -rfakeroot -uc -us
    cd ..
    for i in *.deb; do sudo dpkg -i $i; done

    You can repeat the same process for orocos-ocl.

    Using CORBA

    Outlines how to use CORBA to distribute applications. The details differ by CORBA implementation and by whether you are using DNS names or IP addresses. The examples below cover the ACE/TAO and OmniORB CORBA implementations.

    Sample system:

    • Deploying components in demo.xml with deployer-corba, on machine1.me.home with IP address 192.168.12.132
    • Running a GUI program demogui to connect to deployer components, on machine2.me.home with IP address 192.168.12.133
    • Use a name server without multi-casting[1], on machine1.me.home.
    • Using a bash shell.
    • Both machines are gnulinux (though this has been verified with macosx, and mixing macosx and gnulinux)

    Working DNS

    If you have working forward and reverse DNS entries (i.e. dig machine1.me.home returns 192.168.12.132, and dig -x 192.168.12.132 returns machine1.me.home):

    ACE/TAO

    machine1 $ Naming_Service -m 0 -ORBListenEndpoints iiop://machine1.me.home:2809 \
    -ORBDaemon &
    machine1 $ export NameServiceIOR=corbaloc:iiop:machine1.me.home:2809/NameService
    machine1 $ deployer-corba-gnulinux -s demo.xml

    machine2 $ export NameServiceIOR=corbaloc:iiop:machine1.me.home:2809/NameService
    machine2 $ ./demogui

    OmniORB

    OmniORB does not support the NameServiceIOR environment variable.

    machine1 $ omniNames -start &
    machine1 $ deployer-corba-gnulinux -s demo.xml

    machine2 $ ./demogui -ORBInitRef NameService=corbaloc:iiop:machine1.me.home:2809/NameService

    Note that if you swap which machines run the deployer and demogui, then change the above to

    machine1 $ omniNames -start &

    machine2 $ deployer-corba-gnulinux -s demo.xml -- \
    -ORBInitRef NameService=corbaloc:iiop:machine1.me.home:2809/NameService

    machine1 $ ./demogui

    Non-working DNS or you must use IP addresses

    If you don't have DNS or you must use IP addresses for some reason.

    ACE/TAO

    machine1 $ Naming_Service -m 0 -ORBDottedDecimalAddresses 1 \
    -ORBListenEndpoints iiop://192.168.12.132:2809 -ORBDaemon &
    machine1 $ export NameServiceIOR=corbaloc:iiop:192.168.12.132:2809/NameService
    machine1 $ deployer-corba-gnulinux -s demo.xml -- -ORBDottedDecimalAddresses 1

    machine2 $ export NameServiceIOR=corbaloc:iiop:192.168.12.132:2809/NameService
    machine2 $ ./demogui -ORBDottedDecimalAddresses 1

    For more information on the -ORBListenEndpoints syntax and possibilities, see http://www.dre.vanderbilt.edu/~schmidt/DOC_ROOT/TAO/docs/ORBEndpoint.html

    OmniORB

    machine1 $ omniNames -start &
    machine1 $ deployer-corba-gnulinux -s demo.xml  
     
    machine2 $ ./demogui -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService

    And the reverse

    machine1 $ omniNames -start &
     
    machine2 $ deployer-corba-gnulinux -s demo.xml  -- \
    -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService
     
    machine1 $ ./demogui 

    Localhost

    Certain distros and certain CORBA versions exhibit problems even in localhost-only scenarios (demonstrated with OmniORB under Ubuntu Jaunty Jackalope). If you cannot connect to the name service running on the same machine, substitute the primary network interface's IP address for localhost in any NameService value.

    For example, instead of

    machine1 $ omniNames -start &
     
    machine2 $ deployer-corba-gnulinux -s demo.xml 

    or even

    machine1 $ omniNames -start &
     
    machine2 $ deployer-corba-gnulinux -s demo.xml  -- \
    -ORBInitRef NameService=corbaloc:iiop:localhost:2809/NameService

    use

    machine1 $ omniNames -start &
     
    machine2 $ deployer-corba-gnulinux -s demo.xml  -- \
    -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService
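    The substitution is mechanical; as a sketch (192.168.12.132 is this page's example address, substitute your own):

    ```shell
    # Rewrite a corbaloc URI to use an explicit IP instead of 'localhost'.
    # 192.168.12.132 is the example address used throughout this page.
    NS_URI="corbaloc:iiop:localhost:2809/NameService"
    FIXED_URI=$(printf '%s' "$NS_URI" | sed 's/localhost/192.168.12.132/')
    echo "$FIXED_URI"
    ```
    
    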

    NB as of RTT v1.8.2 and OmniORB v4.1.0, programs like demogui (which use RTT::ControlTaskProxy::InitOrb() to initialize CORBA) do not support -ORBDottedDecimalAddresses (in case you try to use it).

    Multi-homed machines

    Computers that have multiple network interfaces present additional problems. The following is for omniORB (verified with a mix of v4.1.3 on Mac OS X and v4.1.1 on Ubuntu Hardy), for a system running a name server, a deployer, and a GUI. The example system has a 192.168.1.0 wired subnet and a 10.0.10.0 wireless subnet, with a mobile vehicle that must communicate over the wireless subnet but also has a wired interface.

    The problem may appear as one of

    • The vehicle cannot contact the name server when the wired interface is disconnected but is still up (NB on rare occasions, we've seen this even with the wired interface disconnected and down!)
    • Your GUI can connect to the deployer, but then locks up or throws a CORBA exception when trying to connect to certain remote Orocos items (we had this happen specifically for methods with parameters).

    The solution is to forcibly specify the endPoint parameter to the name server. In the omniorb.cfg file on the computer running the name server, add (for the example networks above)

    endPoint = giop:tcp:10.0.10.14:

    where 10.0.10.14 is the IP address of that computer. This forces the name server to publish end points on the wireless network first. It will still publish the wired interface, but after the wireless one. Specifying the endPoint parameter on the command line (instead of in the config file) will not work, as then the name server publishes the wired network first and the wireless network second.

    If the above still does not work, then set the endPoint parameter in all computers' config files (note that each end point is the IP address of that computer, so it will be (say) 10.0.10.14 for the computer running the name server and the deployer, and (say) 10.0.10.21 for the computer running the GUI). This forces everyone onto the wireless network, instead of relying on what the name server is publishing.
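    As a sketch, using this section's example addresses, the two config files would pin each machine to its own wireless address:

    ```
    # omniorb.cfg on the machine running the name server and deployer
    endPoint = giop:tcp:10.0.10.14:

    # omniorb.cfg on the machine running the GUI
    endPoint = giop:tcp:10.0.10.21:
    ```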

    To debug this problem, see the debugging section below. After starting the name server you will see it output its published endpoints (right after the configuration dump). Also, if you get the lockup, then adding the debug settings will cause the GUI or deployer to output each message and the direction/IP it is going on. If messages have strayed onto the wired network, it will be visibly obvious.

    NB we found that the clientTransportRule and serverTransportRule parameters had no effect on this problem.

    NB the above solution works no matter which computer the name server is running on (ie with the deployer, or with the GUI).

    Debugging

    Add the following to the omniorb.cfg file

    dumpConfiguration = 1
    traceLevel = 25

    and get ready for lots of output.

    See also

    [1] http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/current/doc-xml/orocos-components-manual.html#id479277

    ACE/TAO http://www.cs.wustl.edu/~schmidt/TAO.html

    OmniORB http://omniorb.sourceforge.net/

    Installation

    For general installation instructions specific to each software version, see the top level wiki page for each project (eg. RTT, KDL, etc) and look for Installation in the left toolbar.

    See below for specific additional instructions.

    Installing from binaries / package managers

    Installing via Macports on Mac OS X

    How to build Debian packages

    Installing from source

    To install from source on *NIX systems such as Linux and Mac OS X, see the installation page specific to your software version (e.g. v1.8 RTT).

    To install from source on Windows, see the following wiki pages (also check the forums, a lot of good material is in there also).

    Debian Etch installation from public repositories (x86 only!)

    The Orocos Real-Time Toolkit and Component Library have been prepared as Debian packages for Debian Etch. The pages

    How to build Debian packages

    How to re-build Debian packages

    contain instructions for building your own packages on other distributions, like Ubuntu.

    Copy/paste the following commands, and enter your password when asked (only works in Ubuntu Feisty or later and Debian Etch or later):

    wget -q -O - http://www.orocos.org/keys/psoetens.gpg | sudo apt-key add -
    sudo wget -q http://www.fmtc.be/debian/sources.list.d/fmtc.list -O /etc/apt/sources.list.d/fmtc.list

    These commands install the GPG key and the repository location of the Orocos packages.

    Next, for Debian Etch, type:

    sudo apt-get update
    sudo apt-get install liborocos-rtt-corba-gnulinux1.8-dev

    You can install Orocos for additional targets (or versions) by replacing gnulinux1.8 with another target name (or version). All target libraries can be installed at the same time; the -dev header files can only be installed for a single target and version at a time.

    For your application development, you'll most likely use the Orocos Component library as well:

    sudo apt-get install orocos-ocl-gnulinux1.8-dev orocos-ocl-gnulinux1.8-bin

    Again, you may install additional targets and/or versions.

    We recommend using the pkg-config tool to discover the compilation flags required to compile your application with the RTT or OCL. This is described in the installation manual.
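    For illustration, the usual pattern is to splice the pkg-config output into the compile line. This is a sketch with stand-in values: the package name orocos-rtt-gnulinux and the flag values are assumptions, so check pkg-config --list-all on your own system.

    ```shell
    # Real usage would be something like:
    #   g++ myapp.cpp $(pkg-config --cflags --libs orocos-rtt-gnulinux) -o myapp
    # Below, stand-in variables replace the pkg-config calls so the pattern
    # can be shown without the package installed.
    CFLAGS="-I/usr/include/rtt"        # stand-in for: pkg-config --cflags ...
    LIBS="-lorocos-rtt-gnulinux"       # stand-in for: pkg-config --libs ...
    COMPILE_CMD="g++ myapp.cpp $CFLAGS $LIBS -o myapp"
    echo "$COMPILE_CMD"
    ```
    
    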

    Installing via Macports on Mac OS X

    These are instructions to install the latest version of each of RTT, KDL, BFL and OCL, on Mac OS X using Macports.

    Macports does not have official ports for these Orocos projects; however, the approach below is the recommended way to load unofficial ports into Macports. [1]

    Installation

    These instructions use /opt/myports to hold the Orocos port files. You can substitute any other directory for MYPORTDIR (ie /opt/myports). Instructions are for bash shell - change appropriately for your own shell.

    1. Download the Portfile files from this page's Attachments (at bottom of page).

    2. Execute the following commands (substituting /opt/myports for the location you wish to store the Orocos port files, and ~/Downloads for the directory you downloaded the portfiles to)

    export MYPORTDIR=/opt/myports
    export DOWNLOADDIR=~/Downloads
     
    mkdir $MYPORTDIR
    cd $MYPORTDIR
    mkdir devel
    cd devel
    mkdir orocos-rtt orocos-kdl orocos-bfl orocos-ocl
    cp $DOWNLOADDIR/orocos-rtt-Portfile.txt orocos-rtt/Portfile
    cp $DOWNLOADDIR/orocos-kdl-Portfile.txt orocos-kdl/Portfile
    cp $DOWNLOADDIR/orocos-bfl-Portfile.txt orocos-bfl/Portfile
    cp $DOWNLOADDIR/orocos-ocl-Portfile.txt orocos-ocl/Portfile

    And for the RTT patch file:

    cd $MYPORTDIR/devel
    mkdir orocos-rtt/files
    cp $DOWNLOADDIR/rtt-patch-config-check_depend.cmake.diff orocos-rtt/files/patch-config-check_depend.cmake.diff

    You should now have a tree that looks like

    tree /opt/myports/
    /opt/myports/
    `-- devel
        |-- orocos-bfl
        |   `-- Portfile
        |-- orocos-kdl
        |   `-- Portfile
        |-- orocos-ocl
        |   `-- Portfile
        `-- orocos-rtt
           |-- Portfile
           `-- files
               `-- patch-config-check_depend.cmake.diff

    3. Edit /opt/local/etc/macports/sources.conf with superuser privileges (ie via sudo), and add the following line before the rsync://rsync.macports.org/... line.

    # (substitute your ''MYPORTDIR'' value from above) 
    file:///opt/myports

    4. Execute these commands to tell Macports about your new ports.

    cd $MYPORTDIR
    sudo portindex

    5. Now install each port with the following commands (the following commands add the optional CORBA support, via omniORB in Macports, as well as the helloworld and other useful parts of OCL)

    sudo port install orocos-rtt +corba
    sudo port install orocos-kdl +corba
    sudo port install orocos-bfl
    sudo port install orocos-ocl +corba+deployment+motion_control+reporting+taskbrowser+helloworld

    6. Verify installation by downloading test-macports.xml from this page's Attachments, and then using these commands

    deployer-macosx -s /path/to/test-macports.xml

    This should successfully load and start the OCL HelloWorld component within the taskbrowser. You may need to specify the paths to the dynamic libraries for this to work:

    export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib

    Yes, it is DYLD_FALLBACK_LIBRARY_PATH and not DYLD_LIBRARY_PATH. Search the forum if you want to know why ...

    Building your application

    To build against MacPorts-installed Orocos, add the following to your environment before CMake'ing your project

    export CMAKE_PREFIX_PATH=/opt/local

    If you already have CMAKE_PREFIX_PATH set, then append "/opt/local" to your existing entry.
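    A sketch of appending safely in bash (the pre-existing value below is a stand-in for illustration; on Unix, CMake accepts a colon-separated path list in this variable):

    ```shell
    # Append /opt/local to CMAKE_PREFIX_PATH without clobbering an
    # existing value.
    CMAKE_PREFIX_PATH="/usr/local"   # stand-in: pretend an entry already exists
    if [ -z "$CMAKE_PREFIX_PATH" ]; then
        CMAKE_PREFIX_PATH="/opt/local"
    else
        CMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH:/opt/local"
    fi
    export CMAKE_PREFIX_PATH
    echo "$CMAKE_PREFIX_PATH"
    ```
    
    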

    If you use Makefiles or autoconf to build your project, you'll need to tell those build systems to find Orocos headers, libraries and binaries under /opt/local. Instructions are not provided here for that.

    To run using MacPorts-installed OROCOS, add the following to your environment

    export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib:/opt/local/lib/rtt/macosx/plugins

    If you already have DYLD_FALLBACK_LIBRARY_PATH set, then append the above to your existing entry.

    Updating an existing installation

    (Not yet tested)

    1. Download the new portfiles.
    2. Uninstall each of the old ports

    ...
    sudo port uninstall orocos-rtt

    3. Copy the new portfiles on top of the old ones
    4. Regenerate the port index
    5. Reinstall

    Limitations and issues

    Current limitations

    • CORBA support assumes omniorb
    • KDL supports the Python plugin for Python 2.5, but not Python 2.6.
    • BFL defaults to boost
    • OCL could use more variants to provide finer control

    Issues

    • If when running an Orocos executable, or your own executable linked with Orocos, you get the following error

    dyld: Symbol not found: __cg_jpeg_resync_to_restart 
      Referenced from: 
    /System/Library/Frameworks/ApplicationServices.framework/Versions/A/\
    Frameworks/ImageIO.framework/Versions/A/ImageIO 
      Expected in: /opt/local/lib/libJPEG.dylib 

    then ensure you have DYLD_FALLBACK_LIBRARY_PATH set and not DYLD_LIBRARY_PATH. See [2] for further details.

    References

    [1] Macports guide with detailed information on the port system.

    [2] http://www.nabble.com/Incorrect-libjpeg.dylib-after-installing-ImageMagick-td22625866.html

    Original bug report for this and accompanying forum entry

    Attachments:
    • orocos-rtt-Portfile.txt (1.82 KB)
    • orocos-kdl-Portfile.txt (2.13 KB)
    • orocos-bfl-Portfile.txt (1.36 KB)
    • orocos-ocl-Portfile.txt (2.67 KB)
    • rtt-patch-config-check_depend.cmake_.diff (607 bytes)
    • test-macports.xml (472 bytes)

    RTT Dictionary

    This page contains a list of terms used in RTT (and Orocos in general).

    Activity

    • An Activity object executes the ExecutionEngine, which in turn executes programs and state machines, processes incoming commands and events, and finally executes the user code.
    • An activity can be: activity, periodic activity, non-periodic activity, sequential activity, slave activity...

    Attribute

    • Attributes are solely for run-time values.
    • You can alter the attributes of any task, program or state machine. The TaskBrowser will confirm validity of the assignment with 'true' or 'false'

    Command

    • Commands are 'sent' by other components to instruct the receiver to 'reach a goal'
    • When a command is entered, it is sent to the component, which will execute it in its own thread on behalf of the sender. The different stages of its lifetime are displayed by the prompt. Hitting enter will refresh the status line.
    • A Command might be rejected (return false) in case it received invalid arguments.
    • A command has a designated receiver.
    • A command cannot, in general, be completely executed instantaneously, so the caller should not block and wait for its completion.
    • But the Command object offers all functionalities to let the caller know about the progress in the execution of the command.
    • Commands are used for actions taking time and setpoints.

    Component

    • Components are implemented by the TaskContext class.
    • It is useful to speak of a context because it defines the context in which an activity (a program) operates. It defines the interface of the component, its properties, its peer components, and uses its ExecutionEngine to execute its programs and to process commands and events.
    • A task's interface consists of members.

    Data-Flow Ports

    • Data-Flow Ports are a thread-safe data transport mechanism to communicate buffered or un-buffered data between components.
    • When a value is Set(), it is sent to whatever is connected to that port. Use Get() to read the port.
    • The advantage of using ports is that they are completely thread-safe for reading and writing, without requiring user code.

    Events

    • Events are related to commands, but allow broadcasting of data, while a command has a designated receiver.
    • Events allow functions to be executed when a change in the system occurs.
    • eg. alarms, publishing state changes

    Members

    • Members are: Commands, Methods, Ports, Attributes and Properties and Events, which are all public.

    Method

    • Methods are used for algorithms and complex configurations.
    • Methods are callable by other components to 'calculate' a result immediately, just like a 'C' function.

    Peer

    • The peers of a component are the components which are known, and may be used, by this component.

    Property

    • Properties are meant for persistent configuration and can be written to disk.
    • Properties are run-time modifiable parameters, stored in XML files.

    RTT on MS Windows

    This page collects all the documentation users collected for building and using RTT on Windows. Note that Native Windows support is available from RTT 1.10.0 on, and that you might no longer need some of the proposed workarounds (such as using mingw or cygwin).

    The recommended way of compiling the RTT on Windows is by using the Compiling on Windows with Visual Studio instructions.

    Compiling RTT in Windows/MinGW Native API

    Status

    The native API port of RTT can be found in the 1.10.x releases. Only minor issues are left and most unit tests pass.
    • it is based solely on mingw32; no msys or cygwin.
    • it uses native Windows threads; no need for pthreads.
    • only the RTT is ported; CORBA is supported as well.
    • it uses the fixes that Peter made regarding 'weak' symbol handling.
    • one unit test is still problematic.
    • Priorities are supported on thread level, though creating new processes could help get better "realtime" support.

    This document is slightly outdated.

    Compiling RTT

    • CMake is used to generate the project: cmake -G"MinGW Makefiles" .

    Linking and Compiling an application

    To have something sort of working under native Win32/Visual Studio 2003 using the MinGW Compiled RTT (with the patches).

    Using the info here: http://www.mingw.org/old/mingwfaq.shtml#faq-msvcdll

    I managed to create DEF files, and use Microsoft's LIB tool to turn the library into something MSVC likes.

    I'm no CMake expert, and don't have the time to learn **another** build scripting language. I created the CMake files in the usual way, built RTT and ensured it compiled cleanly. I then hacked the generated makefiles: a search of my source tree for "--out-implib" showed that the link.txt which lives in build\src\CMakeFiles\orocos-rtt-dynamic_win32.dir had that string in it. So I added --output-def,..\..\libs\liborocos-rtt-win32.dll.def to create the def file, and rebuilt RTT. This created the DEF file, which I then ran through the Microsoft LIB tool as described.

    I then created an MSVC project, added the library to my linker settings, and made a very simple MSVC console application:

    #include "rtt\os\main.h"
    #include "rtt\rtt-config.h"
     
    int ORO_main(int, char**)
    {
          return 0;
    }

    I also needed to setup my MSVC preprocessor definitions:

    NDEBUG
    _CONSOLE
    __i386__
    __MINGW32__
    OROCOS_TARGET=win32

    Hopefully I am now at a stage when I can actually start to evaluate RTT :-) If anyone has any ideas on how to properly get the CMakeList.txt to generate the DEF files without nasty post-CMake hacks, then I would love to hear it...

    Compiling on Windows with Visual Studio

    This page summarizes how to compile RTT with Microsoft Visual Studio, using the native win32 api. RTT supports Windows out of the box from RTT 1.10.0 and 2.3.0 on. OCL is supported from 1.12.0 and 2.3.0 on.

    This tutorial assumes you extracted the Orocos sources and all its dependencies in c:\orocos

    For new users, RTT/OCL v2.3.x or later is recommended, included in the Orocos Toolchain v2.3.x.

    Rationale

    We only support Visual Studio 2008 and 2005. Support for 2010 is on its way. You're invited to try VS2010 out and suggest patches to the orocos-dev mailing list.

    Orocos does not come with a Visual Studio Solution. You need to generate one using the CMake tool which you can download from http://www.cmake.org. The most important step for CMake is to set the paths to where the dependencies of Orocos are installed. So before you can get to building Orocos, you need to build its dependencies, which don't use CMake, but their own build mechanism.

    Only RTT and OCL of the toolchain are supported on Windows. The ruby based 'orogen' and 'typegen' tools, part of the toolchain, are not supported. Also ROS integration is not supported on Windows.

    Important notice about Release or Debug

    Debug and Release builds can not be mixed in Visual Studio's C++ compiler (you will have crashes when mixing a Debug and Release DLL that has a C++ API). By convention, a Debug .DLL can be recognized because it ends with ....d.dll. We recommend that you do Release builds when evaluating the Orocos toolchain and on production systems. Debug builds are considerably larger than Release builds.

    RTT Dependencies

    There are two major libraries required by RTT: Boost C++ and a CORBA transport library (if you require one).

    CORBA using ACE/TAO (optional)

    In case you require distributed Orocos components, you need to setup ACE/TAO, which does the work under the hood for RTT. Download the latest TAO version, extract it and open the solution (ACE_wrappers/TAO/TAO_ACE_vc8.sln) file with Visual Studio. Build the 'Naming_Service_vc8' project, and make sure that you choose the configuration (Debug/Release) that fits your purpose. The Naming_Service project builds automatically the right components we need to build RTT. Check the TAO build instructions in case you encounter problems.

    You must have this set as system environment variables:

    set ACE_ROOT=c:\orocos\ACE_wrappers
    set TAO_ROOT=%ACE_ROOT%\tao
    set PATH=%PATH%;%ACE_ROOT%\bin;%ACE_ROOT%\lib

    You can also set these using Configuration -> System -> Advanced -> Environment Variables

    CORBA using OMNIORB (optional)

    Untested.

    Boost (required)

    While TAO builds, you might want to check out the pre-built boost libraries from boost-pro consulting on http://www.boostpro.com/download . Alternatively, get boost from http://www.boost.org and follow their build instructions. RTT 1.10.0 requires Boost 1.36.0 at least, since it requires the intrusive containers added in that release. RTT 2.3.0 or later require Boost 1.40.0. Note this bug for Boost 1.44.0: https://svn.boost.org/trac/boost/ticket/4487

    We recommend Boost 1.40.0 for Windows. Also, unzip Boost with 7Zip or similar, but not with the default Windows unzip program, which is extremely slow.

    Make sure to install these components: program_options, thread, unit_test_framework, filesystem, system.

    Also add the lib directory to your PATH system environment variable:

    set PATH=%PATH%;c:\orocos\boost_1_40\lib

    CMake (required)

    Download and install cmake (http://www.cmake.org). We're going to use cmake-gui.exe to configure our build system. Use CMake version 2.6.3 or newer.

    XML Parser

    RTT will use its internal 'tinyxml' parser on Windows. No need to install anything for this.

    Setting up CMake

    First you need to add two 'PATH' entries telling cmake where Boost and TAO are installed. In the top RTT directory, there is a file named orocos-rtt.default.cmake. Copy it to orocos-rtt.cmake (in the same directory) and add these two lines:

    set(CMAKE_INCLUDE_PATH ${CMAKE_INCLUDE_PATH} 
        "c:/orocos/boost_1_40;c:/orocos/ACE_wrappers;c:/orocos/ACE_wrappers/TAO;c:/orocos/ACE_wrappers/TAO/orbsvcs")
    set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} 
        "c:/orocos/boost_1_40/lib;c:/orocos/ACE_wrappers/lib")

    (note the forward slashes, even on Windows!) Edit these lines to use your Boost version and install location. The OROCOS_TARGET should be automatically detected. If not, add:

    set( OROCOS_TARGET win32 CACHE STRING 
         "The Operating System target. One of [lxrt gnulinux xenomai macosx win32]")

    and remove the lines with references to other OROCOS_TARGET settings.

    Start the cmake-gui and set your source and build paths ( For example, c:\orocos\orocos-rtt-1.10.0 and c:\orocos\orocos-rtt-1.10.0\build ). Now click 'Configure' at the bottom. Check that there are no errors. If components are missing, you probably need to fix the above PATHs.

    • For RTT 1.12, turn OS_NO_ASM ON, for RTT 2.3.0, turn OS_NO_ASM OFF.

    You probably need to click Configure again and then click 'Generate', which will generate your Visual Studio solution and project files in the 'build' directory.

    Open the generated solution in MSVS and build the 'ALL_BUILD' target, which will build the RTT (and the unit tests if you enabled them).

    Unit tests (Optional)

    In order to enable unit tests, you need to turn on BUILD_TESTING in the CMake GUI.

    The unit tests will fail if the required DLLs are not in your path. In your system settings, or on the command prompt of Windows, add c:\orocos\boost_1_40\lib and c:\orocos\ACE_wrappers\lib to your PATH environment (reboot if necessary).

    Next, run a 'make install' and add the c:\orocos\bin directory to your PATH (or whatever you used as install path.) In RTT 2.3.0, the default install path is c:\Program Files\orocos (so add c:\Program Files\orocos\bin to PATH). It is recommended to keep this default, since OCL uses that too.

    Now you should be able to run the unit tests. The process could be streamlined a bit more and may be improved in later releases.

    Installing RTT

    Once everything is built, you can use the 'INSTALL' project to copy the files into the correct installation directories. This is necessary for OCL and the other Orocos applications, such that they find headers and libraries in the expected locations.

    Building OCL

    Building OCL on Windows follows a similar path as RTT. Start CMake and point it to your OCL source tree and create a 'build' directory in there.

    OCL Dependencies

    OCL also needs to know where Boost, TAO and other dependencies are installed. There is again an orocos-ocl.default.cmake file which you can copy to orocos-ocl.cmake.

    There is a separate Wiki page for enabling Readline (tab-completion) in the TaskBrowser. See Taskbrowser with readline on Windows.

    Missing Features on Windows

    • The RTT on Windows is not real-time ! It exists for convenience, simulation, and use by Windows GUIs.

    Compiling the RTT on Windows/cygwin

    This page describes all steps you need to take in order to compile the real-time toolkit on a Windows machine. We rely on the Cygwin libraries and tools to accomplish this. Visual Studio or mingw32 are as of this writing not yet supported. Also CORBA is not yet ported.

    Download and install Cygwin

    You can get it from http://www.cygwin.com and use the setup.exe script. Make sure you additionally install:
    • From 'Devel': binutils, boost, boost-devel, cmake, cppunit, gcc4-g++, libncurses-devel, make, readline, subversion

    Download, install and patch RTT

    The official RTT release does not yet include Cygwin support. You need to download the RTT 1.6 sources in your Cygwin environment and apply the patch from the attachment:

     cd orocos-rtt; patch -p1 < orocos-rtt-cygwin.patch

    The patch can be found here: https://www.fmtc.be/bugzilla/orocos/show_bug.cgi?id=605

    Build RTT

    Create a build directory in the orocos-rtt directory (mkdir build) and run:

     cmake .. -DBOOST=/usr/include/boost-1_33_1
     make

    This is a slow process. If you have multiple CPU cores, use 'make -j2' or -jN with N cores. In case you want to change more options, type 'ccmake ..' between the cmake and make commands.

    Set your PATH

    Cygwin relies on the PATH environment variable to locate the Orocos DLL's. From your build directory, type:
     export PATH=$PATH:`pwd`/src

    Test your setup

    Next test your setup with a 'make check' (add -jN). No more than 2 tests should fail, all related to timing.

    Install your setup

    Do 'make install'. The RTT will by default be installed in /usr/local. You don't need to add the /usr/local/lib directory to your PATH.

    The 1st RTT Developers Workshop

    Main Sponsor
    Place: BARCELONA PAL Robotics offices

    Calle Pujades 77-79, 4th floor 4ª 08005 Barcelona, Spain.

    See also PAL Robotics

    Date: Mon 19 July - Fri 23 July
    1st RTT Developers Workshop

    AGENDA

    Mon 19/7
    9h-13h   Big Picture Day: arriving at PAL offices; Peter shows what 2.0 is and where it is heading
    13h-14h  Lunch
    14h-19h  Who're you + plans presentation: you made some Impress/Beamer slides and present your work + ideas for future work in < 10 slides
    20h      Dinner: opening dinner sponsored by The SourceWorks

    Tue 20/7
    9h-13h   Typekit generation: Orogen + message generation; YARP transport 2.0 (only ports, no methods, events, etc); code explosion & extern template solution
    13h-14h  Lunch
    14h-19h  Component generation: introduction; what is its place in RTT?
    20h      Dinner

    Wed 21/7
    9h-13h   Building: fix up RTT/OCL cmake; structure of components, repositories and applications (graveyard/attic)
    13h-14h  Lunch
    14h-16h  Documentation improvement: structure, website, missing parts. Real examples and tutorials: examples, restructuring, success stories, who uses
    16h      Visiting
    20h      Dinner

    Thu 22/7
    9h-13h   Logging: current status; fix architecture; RTT::Logger remove/replace
    13h-14h  Lunch
    14h-16h  Reporting
    16h-19h  Upgrading from v1 to v2: describe the rtt v2 converter; caveat document; try out on user systems
    20h      Dinner: closing dinner sponsored by The SourceWorks

    Fri 23/7
    9h-13h   Wrapping-up Day: finishing loose ends, integration and discussions for future work
    13h-14h  Lunch
    14h-17h  Wrapping-up Day: finishing loose ends, integration and discussions for future work

    If you need or want to provide sponsorship, contact Peter.

    Participants - Please add your days of presence !

    • Peter Soetens (Sun 18/7 - Fri 23/7)
      • 19inch all-in-one Core2 system
      • Ubuntu 9.10
    • Sylvain Joyeux
    • Markus Klotzbuecher (Sun 18/7 - Fri 23/7)
      • TP x61
      • Debian Testing
    • Stephen Roderick (Fri 16/7 - Fri 23/7)
      • 15" Mac Book Pro
      • Mac OS X Snow Leopard and Ubuntu Lynx 10.04
    • Charles Lesire-Cabaniols (Mon 19/7 noon - Thu 22/7 8am)
      • Dell Core2 Laptop
      • Ubuntu Lucid 10.04
    • Carles Lopez
    • Adolfo Rodriguez Tsouroukdissian (I will be around the whole week)

    Ideas:

    • Presentation of participants: how, where and why Orocos is used by others
    • Build system ? Version control ? What web resources ?
      • DFKI has its own build system (http://sites.google.com/site/rubyinmotion). Mainly standardized on CMake and git.
      • where to put the code and do release management ? (trying out gitorious.org ?)
      • use standard approach for end user build-support files across all sub-projects (ie if CMake, then use the same approach to FindXXX and Config files). Provide example use cases to build against Orocos (ie there is no FindOrocos-XXX.cmake provided by any sub-project, which IMHO is Very Bad for a new user).
    • the components: Orocos has the OCL and DFKI has already open-sourced some components. In what form do we want to distribute them ?
    • the toolchain: where are we going, what are we going to use and lay out a schedule for that.
      • idea from Peter to standardize the type description on the ROS datatypes vs. oroGen's C++ parser
      • component specification is more than datatypes. oroGen
    • release strategy: how to release RTT 2.0 and associated tools. Target date ?
    • integrating new transports
      • YARP transport from Charles
      • ROS transport ? (who would do that ?)
    • Additional maintainers (particularly for OCL)
    • Standardized API's to ease language bindings (eg Python, Java, LISP ... yes, LISP :-) )
    • Integration of dependent packages (e.g. TLSF). Currently (for real-time logging), we have a circular build problem in that RTT needs log4cpp, log4cpp needs TLSF, but TLSF is installed as part of RTT. Big Problem! Peter mentioned integrating log4cpp, but I'm not sure that this is the best approach (ie long term consequences, keeping up with releases, scalability).
    • Integration of real-time logging from OCL into RTT
      • Use of real-time logging by RTT itself. Transition plan to accomplish this.
    • Clean shutdown semantics (ie allowing state machines and scripts to cleanly shutdown)
    • Scalability of deployment
      • Deploying subsystems/containers/groups of components
      • Parameterizing deployment files (e.g. variable substitution within XML files)

    A. First day

    Morning

    Peter started by presenting the 2.x functionality, its state and (from his point of view) its shortcomings. The following points were raised during the discussion.

    • TimeService: what is it for, and are there no other solutions ?
    • right now, SlaveActivity is different from Slave execution engine. Why ?

    Properties and attributes

    • RTT::Property[persistent] / RTT::Attribute[non persistent]
    • RTT::Property is NOT thread safe
    • know if a property has been written

    Ports

    • could we not have OldData for buffer ports => how difficult would that be ?
    • discussion on overruns, and default drop policy for buffers. In general, we need to discuss an interface to monitor transports
    • data flow now has N data holder elements, where N is the number of total connections. 1.x needed at most one data holder element per output port.

    Methods / Operations / Services

    • operation execution policy: OwnThread vs. ClientThread. Agreed that OwnThread should be the default policy as it is safer from a thread-safety point of view.
    • better naming for ServiceRequester / ServiceProvider -- Method / Operation
      • rename ServiceRequester to Service to map Operation
      • what about the caller side ?
        • OperationHandle
        • __OperationCaller__ for now, preferred as it says what it does
        • PeerOperation
        • RemoteOperation
        • OperationProxy (-- for Peter and Markus)
        • OperationStub (-- for Peter and Markus)
        • OperationInterface (no: Interface is used a lot in the code for base classes)

    Misc

    • chicken and egg problem with the deployer, especially with basic services like real-time logging

    Plugins

    • possibility to do autoloading based on a PATH environment variable or to do loading per-plugin
    • there is a cost to load a typekit that is not needed as the shared library has to be mapped in memory and typekits are quite big

    Code size

    • instantiating RTT::Operation for void()(void)
      • 60kB for dispatching code
      • 60kB for distributed C++ dispatching code
      • Yuk

    Events

    • 2.0 does not have events right now
    • can (and will) be implemented on top of the ports
    • one limitation: only one argument. Not a big issue if we have an automated wrapping like oroGen
    • we're now getting into the details of event handling ;-)

    end

    B. Second day

    The discussion starts with explaining the improved TypeInfo infrastructure:

    • Normally, everything should be generated by the tools
    • If the tools can't handle your type, you can write a typekit manually:
      • Add a StructTypeInfo<T> instead of TemplateTypeInfo<T> (the latter still exists)
      • Define a boost::serialization function that decomposes your struct

    ROS messages and orogen

    • Can orogen parse a generated ros message class ?
      • It can't, since it does not work when a class has virtual functions, and the ANTLR parser is not 'good enough'.
      • The gccxml tool can help here; it would also remove the ANTLR dependency.

    Sylvain explains how orogen works

    • List dependencies
    • Declare used types (header files to use)
    • Declare task definitions
    • Declare deployments

    Sylvain shows how orogen requires the #ifndef __orogen guard in the headers listed. gccxml is a fix for this too.

    Hosting on gitorious is being discussed. It allows us to group code in 'projects' and collaborate better using git.

    Autoproj is discussed as a tool to bootstrap the orocos packages. It's an alternative to manually downloading and building everything. It may work in parallel with rosbuild, in case application software depends on both ros and orocos. This needs to be tested.

    The work is divided for the rest of the day:

    • Charles + Peter : Yarp transport for 2.0
    • Markus + Peter : Find collect segfault bug
    • Stephen + Sylvain: Mac OS-X testing of autoproj/ruby etc.
    • Sylvain + Peter + Markus : gccxml into orogen

    We decided to rename orogen to typegen

    The day concluded with investigating the code size/compile time issue. The culprits are the operations added to the ports in the typekit code. We investigated several solutions to tackle this, especially in the light of code/typekit generation.

    C. Third day

    The day started with a re-evaluation of the agenda and release timelines. The proposed release date for 2.0 was August 16th.

    This list of topics will be covered this week:

    • Documentation review
    • Website review
    • Real-time Logging
    • Build system review
    • Crash in Collect found by Markus
    • Yarp transport

    This list of issues will be solved before 2.0.0:

    • Code size/compilation time issue
    • Tool + cmake macros to create new component projects
    • typegen tool to generate type kits
    • gitorious migration of all orocos projects, including code generation tools
    • OCL cleanup and migration to new cmake macros

    These issues will be delayed after 2.0.0:

    • Thread-safe property writing (allow a peer/3rd party to change a property)
    • Attribute/Property resolution, ie, maybe it's easier to introduce a 'persistent' flag in properties which flags if it needs to be serialized or not. Attributes can then be removed.
    • Service discovery. Sylvain manages these things in a supervision layer written in Ruby. It's not clear yet how far the C++ DeploymentComponent needs to go in this issue.
    • Diagnostics service that detects thread overruns, buffer overflows etc.
    • Connection browsing: ask a port to which other ports it is connected such that we can visualise that
    • Deployment gui to show or create component networks
    • Full Mac-OS-X and Win32 support. These will mature in the 2.0.x releases as users on these platforms test the release.

    The rest of the day continued as planned on the agenda. In the morning, a new CMake build system for components, plugins and executables was created to maximize maintainability and ease-of-use of creating new Orocos software. OCL too will switch to this system. The interface (CMake macros) and logic behind it was discussed. This tool will be further developed to be ready before the 2.0 release.

    In the afternoon, the documentation and website structure was discussed. We came to the conclusion that no-one downloads only the RTT. For 2.0, they will download RTT, the infrastructure components (TaskBrowser, Deployment, Reporting, Diagnostics etc.) and the tool-chain (typekit generation, component generation etc.). This will require a restructuring of the website and the documentation, to no longer be RTT-centric, but 'Orocos ecosystem' centric.

    The documentation will contain 3 pillars:

    • Getting started
      1. Download toolchain
      2. Build
      3. Run demo
    • Setting up a real system
      1. Your first component
      2. Deploying it
      3. Creating a component network
    • Reference documentation
      1. API
      2. Cheat-Sheet
      3. Manuals

    The reference manuals will be cleaned up too, so that they serve better as reference material and less as a first read for new users.

    During this day, the code size problem, typegen development and Yarp transport were also further polished.

    It ended with a visit to 'Parc Guell' and a walk to the old city centre, where we enjoyed a well deserved tapas meal.

    Compiling RTT in Windows/MinGW + pthreads-32

    This page describes the steps to take in order to compile the real-time toolkit (RTT) on a Windows machine, under MinGW and pthreads-32.

    The following has been tested on Windows XP, running in a virtual machine on Mac OS X Leopard.

    Outstanding issues

    • Not all RTT tests pass
    • TAO does not completely build
    • CORBA support in RTT untested due to the above

    Warning: the default GCC 3.4.5 compiler in MinGW outputs a lot of warnings when compiling RTT. Mostly they are "foo might be used uninitialized in this function" in STL code.

    Install MinGW

    See the following links for the basic approach

    See detailed instructions in the URLs above and below, but basically (unless otherwise noted, all actions are in the MSys Unix shell, and all unix-built items are installed in /mingw (which is c:\msys\1.0\mingw in a DOS prompt) )

    1. Install MinGW - base, C++, make (then add c:\mingw\bin to system PATH)
    2. Install msysxxx.exe
    3. Install msys DTK
    4. Install bash (i386 versions) by untarring in / in msys
    5. Install coreutils-5.97-MSYS-1.0.11-snapshot.tar.bz by untar'ing and then manually copying contents to / (have to mv the bin/cp command)
    6. Download autoconf, automake and libtool: untar, configure with --prefix=/mingw, and build/install
    7. Set env vars in /etc/profile: CFLAGS, PKG_CONFIG_PATH
    8. Install glib2, gettext and pkg-config from gtk URL. Extract into /mingw

    Install dependency packages

    Compile CMake from unix source (in build dir)
     cmake-xxx/bootstrap --prefix=/mingw --no-qt-gui
     make && make install
    Run pthreads32 installer (just untar's)
        - manually copy pre-built/include/* to /c/mingw/include    (C:\mingw/include)
        - manually copy pre-built/lib/*GC2* to /c/mingw/lib        (C:\mingw/lib)
        - to run pthreads tests, need to copy prebuilt .a/.dll into .. dir, and copy queueuserapcex to ../.. 
    Boost (as at 2009-Jan, use v1.35 not v1.37 until we fix RTT for v1.37)
        *** DOS shell ***
        cd boost-jam-xxx
        .\build.bat gcc        ** won't build in unix shell with build.sh **
        *** unix shell ***
        cd boost-jam-xxx
        cp bin.ntx86/bjam.exe /mingw/bin
        cd ~/software/build/boost_1_35
        bjam --toolset=gcc --layout=system --prefix=/mingw --with-date_time --with-graph \ 
            --with-system --with-function_types  --with-program_options  install
    
    Cppunit, get tarball from sourceforge
        untar and configure with --prefix=/mingw
        correct line 7528 in libtool, to be c:/MinGW/bin../lib/dllcrt2.o for first item 
        make && make install
    

    Build RTT

    Get the trunk of RTT, patch it with this file, configure (ensure OROCOS_TARGET=win32 is set), make, and install as usual.
     cd /path/to/rtt; patch -p0 < patch-rtt-mingw-1.patch
     

    Set your PATH

    Ensure your PATH in the MSYS shell contains /mingw/bin and /mingw/lib.

    Test your setup

    Next, test your setup with a 'make check'. Currently 4 of 8 tests fail ... more work to do here.

    Partial ACE/TAO CORBA build

    This gets most of ACE/TAO to build, but not yet all.
        download, follow MinGW build instructions on the website.
            add "#undef ACE_LACKS_USECONDS_T" to ace/config-win32-mingw.h before compiling
        copy ace/libACE.dll to /mingw/lib
        make TAO ** this fails
        You can build all we need by manually doing ''make'' in the following directories. Note that the last couple of TAO dir's have problems.
            ace, ace/protocols, kokyu, tao, tao/TAO_IDL, tao/orbsvcs    
    
    NB Can parallel build ace but not its tests nor tao.

    NB Not all tests pass. At least one of the ACE tests fail.

    More useful URLs

    http://www.mingw.org/wiki/MinGWiki
    http://iua-share.upf.es/wikis/clam/index.php/Devel/Windows_MinGW_build
    http://www.gimp.org/~tml/gimp/win32/
    http://www.gtk.org/download-windows.html
    http://www.cleardefinition.com/page/Build_Boost_for_MinGW/
    http://www.dre.vanderbilt.edu/~schmidt/DOC_ROOT/ACE/ACE-INSTALL.html#mingw
    http://psi-im.org/wiki/Compiling_Qt4_on_Windows
    http://www.qtsoftware.com/downloads/opensource/appdev/windows-cpp

    D. Fourth day

    Logging Day

    Stephen gives an overview of the current log4cpp + Orocos architecture and how he accomplished real-time logging. Log4cpp supports

    • 1 category can have 0..* appenders
    • 1 appender has 0..1 category (0 makes no sense though)

    Orocos supports

    • A RTString type that uses tlsf, but still lives in OCL.
    • A real-time logging category type. Any number of these can be created with their own 'org.orocos.rtt.xyz' scope.
    • A file appender component type. Appending over the network (CORBA) is untested though

    Decisions for v2.0

    • Deprecate RTT::Logger in v2.0
    • Move OCL::String into RTT
    • Move log4cpp itself into RTT (or into orocos-log4cpp gitorious project)

    v2.2 or later

    • Move OCL::Logging into RTT and port to v2.x
    • Make LoggingService support lookup of ports by category (called via operation to do so)
    • Support multiple appenders per category
    • Either logging messages go to stderr if appender not yet connected to category, or they continue to get discarded
    • Deployer by default starts LoggingService and FileAppender (to orocos.log). User can turn this behaviour off with a command line parameter, allowing them to configure the logging system via a site deployment file.
    • Add streaming capability : logger->debug << xyz;
    • Replace RTT::Logger with calls to RTT::Logging framework
    • Complete OCL::String plugin to support use within scripting
    • Add LoggingPlugin
    • support use from scripting to query, modify and use OCL::Category
    • Add additional appenders (eg socket)

    Services discussion

    Peter explains how services made their entry into the design and how they can be used.
    • Services have to have different names from ports (v2)
    • TaskContext has a default service (this->provides())
    • TC is really a service container/executor.
    • Properties and operations must be in a service
    • Ports were _not_ in a service. This will be changed so that ports belong to a Service. A provided Service can have both input and output ports. This is reasonable and meets real-world semantics; however, it does sound slightly contradictory and must be well explained with examples.
    • There is talk of dropping the “Providers” in “ServiceProviders”, and just having “Services” and “ServiceRequesters”

    E. Fifth day

    It's hacking day and implementing/finishing most of what we started this week.

    • Stephen is testing on Mac-OS-X. Found a bug in tlsf where NULL and 0 were mixed, causing it not to handle memory exhaustion cases correctly.
    • Peter makes the API changes that were proposed and fixes bugs others find on the go.
    • Sylvain is setting up the gitorious project

    The Road to RTT 2.0

    This Chapter collects all information about the migration to RTT 2.0. Nothing here is final, it's a scratch book to get us there. There are talk pages to discuss the contents of these pages.

    These are the major work areas:

    • New Data Flow API, proposed by S. Joyeux
    • Streamlined Execution Flow API, proposed by P. Soetens (RTT::Message)
    • Full distribution support and cleanup (Events in CORBA)
    • Alternative Data Flow transport layer (non blocking).
    • Small tools for interacting with Components

    If you want to contribute, you can post your comments in the following wiki pages. It will be (hopefully) more concise and straightforward compared with the developers Forum.

    • Which weakness have you detected in RTT?
    • Which features would you like to have in RTT 2.0?

    These items are worked out on separate Wiki pages.

    RTT and OCL 2.0 have been merged on the master branches of all official git repositories:

    Stable releases are on different branches, for example toolchain-2.0:

    Goals for RTT 2.0

    The sections below formulate the major goals which RTT 2.0 wishes to attain.

    Simplicity

    The Real-Time Toolkit shouldn't get in the way of building complex applications; instead it should make them easier to build. We're improving on different fronts to make the RTT simpler to use for both beginners and experienced power users.

    API: user oriented

    The API is clearly separated into public (RTT user) and private (RTT internal) parts. The number of concepts is reduced, and a sane default is chosen where alternatives are possible. Policies allow users to deviate from the default behavior.

    Tooling: enhancing the experience

    The RTT is a very extensible library. When users require an extension, they don't need to write much or any additional code. Tools assist in generating helper libraries for adding user types (type plugins) or user interfaces (service plugins) to the RTT. The generated code is readable, understandable and documented. If required, it can be overridden by hand-written code, so that tools still in development do not block user development.

    Component model: components are simple

    RTT 2.0 components are simple to understand and explain. In essence they are stateful input/output systems that offer services to supervisors.

    The input/output is offered by means of port based communication between data processing algorithms. An input port receives data, an output port sends data. The algorithms in the component define the transformation from input to output.

    Service based communication offers operations such as configuration or task execution. A component always specifies if a service is provided or requested. This allows run-time dependency and system state checking, but also automatic connection/disconnection management which is important in distributed environments.

    Components are stateful. They don't just start processing data right away. They can validate their preconditions, be queried for their current state and be started and stopped in a controlled manner. Although there is a standard state machine in each component that regulates these transitions, users can extend these without limitations.

    Acceptable Upgrade Path

    The first users of RTT 2.0 will be current users, seeing solutions for problems they have today. The upgrade path will be documented and assistive tools will be provided. Whenever possible, backwards compatibility is maintained.

    Interoperability

    The field knows a number of successful robotics frameworks, languages and operating systems. RTT 2.0 is designed to allow bridges to these components.

    Other frameworks

    RTT 2.0 can easily interoperate with other robotics frameworks that provide the concepts of port based data flow communication and functional services.

    Other languages

    RTT 2.0 offers the 1.x real-time scripting language, but in addition binds to other languages as well. A real-time language binding to Lua is offered. Non-real-time bindings are offered over a language-independent CORBA interface.

    Other operating systems

    RTT 2.0 runs on Linux, RTAI, Xenomai, Mac OS-X and Windows. These are the main operating systems of the current advanced robotics domain.

    Robustness

    Complex systems are hard to start up, shut down, or recover when components become dysfunctional. RTT 2.0 aids the system architect in maintaining a robust machine controller, even in distributed setups.

    Service oriented architectures

    Components are aware of the available services and have a chance to execute fall-back scenarios when these disappear. They are notified in time so that they can take proper action, and can recover and resume when a service becomes available again. Local and global supervisors keep track of these state changes so that such mechanisms do not need to be hard-coded into each component.

    Separation between real-time and not real-time processes

    A real-time component cannot be disturbed by the addition of a lower-priority communicating peer. This allows building systems incrementally around a hard real-time core. The RTT decouples the communication between sender and receiver and allows real-time data transports to assure delivery.

    Contribute! Which weakness have you detected in RTT?

    INTRODUCTION

    You can edit this page to post your contribution to Orocos RTT 2.0. Please keep your comment concise and clear: if you want to launch a long debate, you can still use the Developers Forum! Short examples can help other people understand what you mean.


    A) According to section 4 of the Orocos Component Builder's Manual, the callback of a synchronous event is executed inside the thread of the event's emitter. Imagine that TaskA emits an event, and TaskB, which subscribes synchronously to it, has a handler with an infinite loop: the behavior of TaskA would be jeopardized. Keep in mind that:
    • TaskA has no way of knowing what will happen inside the callback of TaskB.
    • It can't prevent TaskB from connecting synchronously.
    • Once blocked, there is nothing it can do.

    B) What would happen if a TaskContext is attached to a PeriodicActivity, but internally it was designed to run as a NonPeriodicActivity? What would happen if a sensor with a refresh rate of 10 Hz is read from a component deployed at 1000 Hz? Maybe the Activity of the TC should be defined by the TC itself, even if this means hard-coding it in the TC.
    C) Because of single thread serialization, a sleep in one task can affect other tasks which are not aware of it and not responsible for it. See the source code in the sub page.

    Problems with single thread serialization

    Because of single thread serialization, something unexpected for the programmer happens.

    1) You expect TaskA to be independent of TaskB, but it isn't. If you think it is a problem of computer resources, change the activity frequency of one of the two tasks.

    Suggestion: A) let the programmer choose whether single thread serialization is used or not. B) keep the 1 thread per activity policy as the default. It will help less experienced users avoid common errors; experienced users can decide to "unleash" the power of STS if they want to.

    2) after the "block" of 0.5 seconds, the "lost cycles" are executed all at once. In other words, updateHook is called 5 times in a row. This may have very unpredictable results. It could be desirable for some applications (a filter with a data buffer) but catastrophic in others (a motion control loop).

    Suggestion: C) let the user decide whether the "lost cycles" of the PeriodicActivity are executed later or are definitively lost.

    // RTT 1.x headers needed by this example
    #include <rtt/TaskContext.hpp>
    #include <rtt/PeriodicActivity.hpp>
    #include <rtt/TimeService.hpp>
    #include <rtt/os/main.h>
    #include <cstdio>
    #include <unistd.h>
     
    using namespace std;
    using namespace RTT;
     
    TimeService::ticks _timestamp;
    double getTime() { return TimeService::Instance()->getSeconds(_timestamp); }
     
    class TaskA
        : public TaskContext
    {
    protected:
         PeriodicActivity act1;
    public:
     
        TaskA(std::string name)
        : TaskContext(name),
          act1(1, 0.10, this->engine() )
        {
         //Start the component's activity:
        this->start();
        }  
        void updateHook()
        {
        printf("TaskA  [%.2f] Loop\n", getTime());
        }
    };
     
    class TaskB
        : public TaskContext
    {
    protected:
        int num_cycles;
        PeriodicActivity act2;
    public:
         TaskB(std::string name)
        : TaskContext(name),
          act2(2, 0.10, this->engine() )
        {
        num_cycles = 0;            
        //Start the component's activity:
        this->start();
        }   
        void updateHook()
        {
        num_cycles++;
        printf("TaskB  [%.2f] Loop\n", getTime());
     
        // once every 20 cycles (2 seconds), a long calculation is done
        if(num_cycles%20 == 0)
        {
            printf("TaskB  [%.2f] before calling long calculation\n", getTime());
     
            // calculation takes longer than expected (0.5 seconds). 
            // it could be something "unexpected", desired or even a bug... 
            // it would not be relevant for this example.
            for(int i=0; i<500; i++) usleep(1000);
     
            printf("TaskB  [%.2f] after calling long calculation\n", getTime());
        }
        }
    };
     
    int ORO_main(int argc, char** argv)
    {
        TaskA    tA("TaskA");
        TaskB    tB("TaskB");
     
        // notice: the tasks have not been connected; there is no relationship between them.
        // In the mind of the programmer, each task is independent, because each has its own activity.
     
        // if the frequency of one of the two PeriodicActivities is changed, there is no problem, since they run in 2 separate threads.
        getchar();
        return 0;
    }

    Contribute! Suggest a new feature to be included in RTT 2.0.

    INTRODUCTION

    Please be concise and provide a short example and your motivation to include it in RTT. Ask first yourself:

    • "Am I the only beneficiary of this new feature?"
    • "Can this feature be obtained with a simple layer on the top of RTT ?"

    If you answered "no" to both questions and you have already debated the new feature in the Developers forum, please post your suggestion here.

    Create Reference Application Architectures

    In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures, such as kinematic robot control, application configuration from a central database, or topic based data flow topologies.

    1 Central Property Service (ROS like) This tasks sets up components such that they get the system wide configuration from a dedicated property server. The property server loads an XML file with all the values and other components query these values. Advanced components even extend the property server at places. A GUI is not included in this work package.

    2 Universal Robot Controller (Using KDL, OCL, standard components) This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. A GUI is not included in this work package.

    3 Topic based data flow (ROS and CORBA EventService like) A deployer can configure components as such that their ports are connected to 'global' topics for sending and receiving. This is similar to what many existing frameworks do today and may demonstrate how compatibility with these frameworks can be accomplished.

    4 GUI communication with Orocos How a remote GUI could connect to a running application.

    Please add yours

    Detailed Roadmap

    These pages outline the roadmap for RTT-2.0 in 2009. We aim to have a release candidate by December 2009, with the release following in January 2010.

    • A work package is divided in tasks with deliverables.
    • All deliverables are public and are made public without delay.
    • All development is done in git repositories.
    • For each change committed to the local git repository, that change is committed to a public repository hosted at github.com within 24 hours.
    • For each task and at the end of each work package, all unit tests are expected to pass. In case additional unit tests are required for a work package, these are listed explicitly as deliverables.
    • The order of execution of tasks within a work package is indicative and may differ from the actual order.
    • In case a task modifies the RTT API or structure, the task's deliverable implicitly includes adapting the following parts of OCL to those modifications: the CMake build system; the directories taskbrowser, deployment, ocl, hardware, reporting, helloworld, timer, doc and debian.
    • These changes are collected in the ocl-2.0 git repository.
    • When the form of a deliverable is 'Patch set', this is equivalent to one or more commits on the public git repository.

    WP1 RTT Cleanup

    This work package contains structural clean-ups for the RTT source code, such as CMake build system, portability and making the public interface slimmer and explicit. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community. This package is put up front because it allows early adopters to switch only at the beginning to the new code structure and that all subsequent packages are executed in the new structure.

    Links : (various posts on Orocos mailing lists)

    Allocated Work : 15 days

    Tasks:

    1.1 Partition in name spaces and hide internal classes in subdirectories.

    A namespace and directory partitioning will once and for all separate public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This offers also the opportunity to remove classes that are for internal use only but are in fact never used.

    Deliverable | Title | Form
    1.1.1 | Internal headers are in subdirectories | Patch set
    1.1.2 | Internal classes are in nested namespaces of the RTT namespace | Patch set

    1.2 Improve CMake build system

    Numerous suggestions have been done on the mailing list for improving portability and building Orocos on non standard platforms.

    Deliverable | Title | Form
    1.2.1 | Standardized on CMake 2.6 | Patch set
    1.2.2 | Use CMake lists instead of strings | Patch set
    1.2.3 | No more use of Linux specific include paths | Patch set
    1.2.4 | Separate finding from using libraries for all RTT dependencies | Patch set

    1.3 Group user contributed code in rtt/extras.

    This directory offers variants of implementations found in the RTT, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT API, these contributions are organized in a separate directory. Users are warned that these extras might not be of the same quality as native RTT classes.

    Deliverable | Title | Form
    1.3.1 | Orocos rtt-extras directory | Directory in RTT

    1.4 Improve portability

    Some GNU/GCC/Linux specific constructs have entered the source code, which makes maintenance on and portability to other platforms a harder task. To structurally support other platforms, the code will be compiled with another compiler (non-gnu) and a build flag ORO_NO_ATOMICS (or similar) is added to exclude all compiler and assembler specific code and replace it with ISO-C/C++ or RTT-FOSI compliant constructs.

    Deliverable | Title | Form
    1.4.1 | Code compiles on non-gnu compiler | Patch set
    1.4.2 | Code compiles without assembler constructs | Patch set

    1.5 Default to activity with one thread per component

    The idea is to provide each component with a robust default activity object which maps to exactly one thread. This thread can periodically execute or be non periodic. The user can switch between these modes at configuration or run-time.

    Deliverable | Title | Form
    1.5.1 | Generic Activity class which is by default present in every component | Patch set
    1.5.2 | Unit test for this class | Patch set

    1.6 Standardize on Boost Unit Testing Framework

    Before the other work packages are started, the RTT must standardize on a unit test framework. Until now, this has been the CppUnit framework. The more portable and configurable Boost UTF has been chosen for unit testing of RTT 2.0.

    Deliverable | Title | Form
    1.6.1 | CppUnit removed and Boost UTF in place | Patch set

    1.7 Provide CMake macros for applications and components

    When users want to build Orocos components or applications, they require flags and settings from the installed RTT and OCL libraries. A CMake macro which gathers these flags for compiling an Orocos component or application is provided. This is inspired by how ROS components are compiled.

    Deliverable | Title | Form
    1.7.1 | CMake macro | CMake macro file
    1.7.2 | Unit test that tests this macro | Patch set

    1.8 Allow lock-free policies to be configured

    Some RTT classes use hard-coded lock-free algorithms, which may be in the way (due to resource restrictions) on some embedded systems. It should be possible to change the policy so that a lock-free algorithm is not used in that class (cfr. the 'strategy' design pattern). An example is the use of AtomicQueue in the CommandProcessor.

    Deliverable | Title | Form
    1.8.1 | Allow to set/override lock-free algorithm policy | Patch set

    CMake Rework

    This page collects all the data and links used to improve the CMake build system, so that you can find quick links here instead of scrolling through the forum.

    Thread on Orocos-dev : http://www.orocos.org/node/1073 (in case you like to scroll)

    CMake manual on how to use and create Findxyz macros : http://www.vtk.org/Wiki/CMake:How_To_Find_Libraries

    List of many alternative modules : http://zi.fi/cmake/Modules/

    An alternative solution for users of RTT and OCL is installing the Orocos-RTT-target-config.cmake macros, which serve a similar purpose as the pkgconfig .pc files: they accumulate the flags used to build the library. This may be a solution for Windows systems. Also, CMake suggests that .pc files are only 'suggestive' and that still the standard CMake macros must be used to fully capture and store all information of the dependency you're looking at.

    Directories and namespace rework

    The orocos/src directory reflects the /usr/include/rtt directory structure. I'll post it here from the user's point of view, i.e. what she finds in the include dir:

    Abbrevs: (N)BC: (No) Backwards Compatibility guaranteed between 2.x.0 and 2.y.0. Backwards compatibility is always guaranteed between 2.x.y and 2.x.z. In case of NBC, a class might disappear or change, as long as it is not a base class of a BC qualified class.

    Directory Namespace BC/NBC Comments Header File list
    rtt/*.hpp RTT BC Public API: maintains BC, a limited set of classes and interfaces. This is the most important list to get right. A header not listed in here goes into one of the subdirectories. Please add/complete/remove. TaskContext.hpp Activity.hpp SequentialActivity.hpp SlaveActivity.hpp DataPort.hpp BufferPort.hpp Method.hpp Command.hpp Event.hpp Property.hpp PropertyBag.hpp Attribute.hpp Time.hpp Timer.hpp Logger.hpp
    rtt/plugin/*.hpp RTT::plugin BC All plugin creation and loading stuff. Plugin.hpp
    rtt/types/*.hpp RTT::types BC All type system stuff (depends partially on plugin). Everything you (or a tool) need(s) to add your own types to the RTT. Toolkit.hpp ToolkitPlugin.hpp Types.hpp TypeInfo.hpp TypeInfoName.hpp TypeStream.hpp TypeStream-io.hpp VectorComposition.hpp TemplateTypeInfo.hpp Operators.hpp OperatorTypes.hpp BuildType.hpp
    rtt/interface/*.hpp RTT::interface BC Most interfaces/base classes used by classes in the RTT namespace. ActionInterface.hpp, ActivityInterface.hpp, OperationInterface.hpp, PortInterface.hpp, RunnableInterface.hpp, BufferInterface.hpp
    rtt/internal/*.hpp RTT::internal NBC Supportive classes that don't fit another category but are definitely not for users to use directly. ExecutionEngine.hpp CommandProcessor.hpp DataSource*.hpp Command*.hpp Buffer*.hpp Function*.hpp *Factory*.hpp Condition*.hpp Local*.hpp EventC.hpp MethodC.hpp CommandC.hpp
    rtt/scripting/*.hpp RTT::scripting NBC Users should not include these directly.
    rtt/extras/*.hpp RTT::extras BC Alternative implementations of certain interfaces in the RTT namespace. May contain stuff useful for embedded or other specific use cases.
    rtt/dev/*.hpp RTT::dev BC Minimal Device Interface, As-is in RTT 1.x AnalogInInterface.hpp AnalogOutInterface.hpp AxisInterface.hpp DeviceInterface.hpp DigitalInput.hpp DigitalOutput.hpp EncoderInterface.hpp PulseTrainGeneratorInterface.hpp AnalogInput.hpp AnalogOutput.hpp CalibrationInterface.hpp DigitalInInterface.hpp DigitalOutInterface.hpp DriveInterface.hpp HomingInterface.hpp SensorInterface.hpp
    rtt/corba/*.hpp RTT::corba BC CORBA transport files. Users include some headers, some not. Should this also have the separation between rtt/corba and rtt/corba/internal ? I would rename the IDL modules to RTT::corbaidl in order to clear up compiler/doxygen confusion. Also note that the current 1.x namespace is RTT::Corba.
    rtt/property/*.hpp RTT::property BC Formerly 'rtt/marsh'. Marshalling and loading classes for properties. CPFDemarshaller.hpp CPFDTD.hpp CPFMarshaller.hpp
    rtt/dlib/*.hpp RTT::dlib BC As-is static distribution library files. They are actually a form of 'extras'. Maybe they belong in there... DLibCommand.hpp
    rtt/boost/*.hpp boost ? We'll try to get rid of this in 2.x
    rtt/os/*.hpp RTT::OS BC As-is. (Rename to RTT::os ?) Atomic.hpp fosi_internal_interface.hpp MutexLock.hpp rt_list.hpp StartStopManager.hpp threads.hpp CAS.hpp MainThread.hpp oro_allocator.hpp rtconversions.hpp rtstreambufs.hpp Semaphore.hpp Thread.hpp Time.hpp fosi_internal.hpp Mutex.hpp OS.hpp rtctype.hpp rtstreams.hpp ThreadInterface.hpp
    rtt/targets/* - BC We need this for allowing to install multiple -dev versions (-gnulinux+-xenomai for example) in the same directory. rtt-target.h <target>

    Will go: 'rtt/impl' and 'rtt/boost'.

    Open question to be answered: Interfaces like ActivityInterface, PortInterface, RunnableInterface etc. -> Do they go into rtt/, rtt/internal or maybe rtt/interface ?

    !!! PLEASE add a LOG MESSAGE when you edit this wiki to motivate your edit !!!

    WP2 Data Flow API and Implementation Improvement

    Context: Because the current data flow communication primitives in RTT limit the reusability and potential implementations, Sylvain Joyeux proposed a new, but fairly compatible, design. It is intended that this new implementation can almost transparently replace the current code base. Additionally, this package extends the DataFlow transport to support out-of-band real-time communication using Xenomai IPC primitives.

    Link : http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

    Estimated work : 45 days for a demonstrable prototype.

    Tasks:

    2.1 Review and merge proposed code and improve/fix where necessary

    Sylvain's code is clean and of a high standard; however, it has not been unit tested yet and needs a second look.

    Deliverable Title Form
    2.1.1 Code reviewed and imported in RTT-2.0 branch Patch set
    2.1.2 Unit tests for reading, writing, connecting and disconnecting in-process communication Patch set

    2.2 Port CORBA type transport to new code base

    Sylvain's code has initial CORBA support. The plan is to cooperate on the implementation and offer the same or better features as the current CORBA implementation does. Also the DataFlowInterface.idl will be cleaned up to reflect the new semantics.

    Deliverable Title Form
    2.2.1 CORBA enabled data flow between proxies and servers which uses the RTT type system merged on RTT-2.0 branch Patch set
    2.3 Allow Real-Time data port access with CORBA Proxy

    A disadvantage of the current data port is that ports connected over CORBA may cause stalls when reading or writing them. The Proxy or Server implementation should, if possible, do the communication in the background and not let the other component's task block.

    Deliverable Title Form
    2.3.1 Event driven network-thread allocated in Proxy code to receive and send data flow samples Patch set
    2.4 Reduce footprint of data connections

    The current lock-free data connections allocate memory for allowing access by 16 threads, even if only two threads connect. One solution is to let the allocated memory grow with the number of connections, such that no more memory is allocated than necessary.

    Deliverable Title Form
    2.4.1 Let lock-free data object and buffer memory grow proportional to connected ports Patch set
    2.5 Out of band data flow review

    It is often argued that CORBA is excellent for setting up and configuring services, but not for continuous data transmission. There are, for example, CORBA standards that only mediate setup interfaces but leave the data communication connections up to the implementation. This task looks at how ROS and other frameworks set up out-of-band data flow and how such a client-server architecture can be added to RTT/CORBA.

    Deliverable Title Form
    2.5.1 Report on out of band implementations and similarities to RTT. Email on Orocos-dev
    2.6 Create automatic marshalling of user types

    Since the out-of-band communication will require objects to be transformed to a byte stream and back, a marshalling system must be in place. The idea is to let the user specify his data types as IDL structs (or equivalent) and to generate a toolkit from that definition. The toolkit will re-use the generated CORBA marshalling/demarshalling code to provide this service to the out-of-band communication channels.
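
As a rough illustration of what such generated code would do for a user-defined struct (the type and function names below are invented for this example; real generated code would also handle endianness, alignment and variable-length fields):

```cpp
// Hand-written sketch of generated marshalling for a user type (illustrative).
#include <cstdint>
#include <cstring>
#include <vector>

struct LaserScan {            // example user type, as it might be declared in IDL
    std::uint32_t stamp;
    double angle_min;
    double angle_max;
};

// Serialize field by field into a byte vector (host byte order for brevity).
std::vector<std::uint8_t> marshal(const LaserScan& s) {
    std::vector<std::uint8_t> out(sizeof s.stamp + sizeof s.angle_min + sizeof s.angle_max);
    std::uint8_t* p = out.data();
    std::memcpy(p, &s.stamp, sizeof s.stamp);         p += sizeof s.stamp;
    std::memcpy(p, &s.angle_min, sizeof s.angle_min); p += sizeof s.angle_min;
    std::memcpy(p, &s.angle_max, sizeof s.angle_max);
    return out;
}

// Reconstruct the struct from the byte stream, reading fields in the same order.
LaserScan demarshal(const std::vector<std::uint8_t>& in) {
    LaserScan s{};
    const std::uint8_t* p = in.data();
    std::memcpy(&s.stamp, p, sizeof s.stamp);         p += sizeof s.stamp;
    std::memcpy(&s.angle_min, p, sizeof s.angle_min); p += sizeof s.angle_min;
    std::memcpy(&s.angle_max, p, sizeof s.angle_max);
    return s;
}
```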

    Deliverable Title Form
    2.6.1 Marshalling/demarshalling in toolkits Patch set
    2.6.2 Tool to convert data specification into toolkit Executable
    2.7 Create out-of-band data flow communication

    The first communication mechanism to support is data flow. This will be demonstrated with a Xenomai RTPIPE implementation (or equivalent) which is setup between a network of components.

    Deliverable Title Form
    2.7.1 Real-time inter-process communication of data flow values on Xenomai Patch set
    2.7.2 Unit test for setting up, connecting and validating Real-Time properties of data ports in RT IPC setting. Patch set
    2.8 Update documentation and Examples

    In line with modern software practice, the unit tests should always exercise and pass against the implementation. Documentation and examples are provided for the users and complement the unit tests.

    Deliverable Title Form
    2.8.1 Unit tests updated Patch set
    2.8.2 rtt-examples, rtt-exercises updated Patch set
    2.8.3 orocos-corba manual updated Patch set

    2.9 Organize and Port OCL deployment, reporting and taskbrowsing

    RTT 2.0 data ports will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new data ports. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.

    Deliverable Title Form
    2.9.1 Deployment, Reporting and TaskBrowser updated Patch set

    WP3 Method / Message / Event Unified API

    Context: Commands are too complex for both users and framework/transport implementers. However, current day-to-day use confirms the usability of an asynchronous and thread-safe messaging mechanism. It was proposed to reduce the command API to a message API and unify the synchronous / asynchronous relation between methods and messages with synchronous / asynchronous events. This will lead to simpler implementations, simpler usage scenarios and reduced concepts in the RTT.

    The registration and connection API of these primitives also falls under this WP.

    Link: http://www.orocos.org/wiki/rtt/rtt-2.0/executionflow

    Estimated work : 55 days for a demonstrable prototype.

    Tasks:

    3.1 Provide a real-time memory allocator for messages

    In contrast to commands, each message invocation leads to a new message sent to the receiver. This requires heap management from a real-time memory allocator, such as the highly recommended TLSF (Two-Level Segregate Fit) allocator, which must be integrated in the RTT code base. If the RTOS provides one, the native RTOS memory allocator is used instead, as is the case on Xenomai.
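
To illustrate why a dedicated allocator matters here, this toy fixed-block pool in plain C++ allocates and releases in O(1) without touching the system heap after construction. It is not the TLSF algorithm (TLSF generalizes the idea to variable-sized blocks in bounded time); it only shows the principle:

```cpp
// Toy fixed-block pool: O(1) allocate/release, no heap calls after setup.
#include <cstddef>
#include <vector>

class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t count)
        : storage_(block_size * count), block_size_(block_size) {
        free_.reserve(count);
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(storage_.data() + i * block_size);  // pre-link free blocks
    }
    void* allocate() {                       // O(1), never touches the system heap
        if (free_.empty()) return nullptr;   // pool exhausted: fail, don't block
        void* p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(void* p) { free_.push_back(static_cast<char*>(p)); }
private:
    std::vector<char> storage_;   // one contiguous pre-allocated slab
    std::size_t block_size_;
    std::vector<char*> free_;     // stack of free blocks
};
```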

    Deliverable Title Form
    3.1.1 Real-time allocation integrated in RTT-2.0 Patch set

    3.2 Message implementation

    Unit test and implement the new Message API for use in C++ and scripts. This implies a MessageProcessor (replaces CommandProcessor), a 'messages()' interface and using it in scripting.

    Deliverable Title Form
    3.2.1 Message implementation for C++ Patch set
    3.2.2 Message implementation for Scripting Patch set

    3.3 Demote the Command implementation

    Commands (as they are now) become second-class because they no longer appear in the interface, being replaced by messages. Users may still build Command objects at the client side, both in C++ and in scripting. Whether identical functionality to today's Command objects is needed and feasible is yet to be investigated.

    Deliverable Title Form
    3.3.1 Client side C++ Command construction Patch set
    3.3.2 Client side scripting command creation Patch set

    3.4 Unify the C++ Event API with Method/Message semantics

    Events today duplicate much of method/command functionality, because they also allow synchronous / asynchronous communication between components. It is the intention to replace much of the implementation with interfaces to methods and messages and let events cause Methods to be called or Messages to be sent. This change will remove the EventProcessor, which will be replaced by the MessageProcessor. This will greatly simplify the event API and semantics for new users. Another change is that an Event can only be made callable on the component's interface by registering it as a method or message.

    Deliverable Title Form
    3.4.1 Connection of only Method/Message objects to events Patch set
    3.4.2 Adding events as methods or messages to the TaskContext interface. Patch set

    3.5 Allow event delivery policies

    Adding a callback to an event puts a burden on the event emitter. The owner of the event must be allowed to impose a policy on the event such that this burden can be bounded. One such policy can be that all callbacks must be executed outside the thread of the owning component. This task is to extend the RTT such that it contains such a policy.
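
A possible shape for such a policy, sketched with invented names (not the RTT API): the event owner chooses whether callbacks run synchronously in the emitter's thread, or are merely queued so the emitter's cost stays bounded and the owner processes them later in its own thread:

```cpp
// Event delivery policy sketch (illustrative names, single-threaded demo).
#include <functional>
#include <vector>

enum class DeliveryPolicy { CallerThread, OwnerThread };

class Event {
public:
    explicit Event(DeliveryPolicy p) : policy_(p) {}

    void connect(std::function<void(int)> cb) { callbacks_.push_back(std::move(cb)); }

    void emit(int arg) {
        if (policy_ == DeliveryPolicy::CallerThread) {
            for (auto& cb : callbacks_) cb(arg);   // full burden on the emitter
        } else {
            pending_.push_back(arg);               // bounded work for the emitter
        }
    }

    // Called from the owning component's own thread (e.g. its update step):
    // deliver everything queued since the last call.
    void process() {
        for (int arg : pending_)
            for (auto& cb : callbacks_) cb(arg);
        pending_.clear();
    }

private:
    DeliveryPolicy policy_;
    std::vector<std::function<void(int)>> callbacks_;
    std::vector<int> pending_;
};
```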

    Deliverable Title Form
    3.5.1 Allow to set the event delivery policy for each component Patch set

    3.6 Allow to specify requires interfaces

    Today one can connect data ports automatically because both providing and requiring data is presented in the interface. This is not so for methods, messages or events. This task makes it possible to describe which of these primitives a component requires from a peer such that they can be automatically connected during application deployment. The required primitives are grouped in interfaces, such that they can be connected as a group from provider to requirer.

    Deliverable Title Form
    3.6.1 Mechanism to list the requires interface of a component Patch set
    3.6.2 Feature to connect interfaces in deployment component. Patch set

    3.7 Improve and create Method/Message CORBA API

    With the experience of the RTT 1.0 IDL API, the existing API is improved to reduce the danger of memory leaks and allow easier access to Orocos components when using only the CORBA IDL. The idea is to remove the Method and Command interfaces and change the create methods in CommandInterface and MethodInterface to execute functions.

    Deliverable Title Form
    3.7.1 Simplify CORBA API Patch set

    3.8 Port new Event mechanism to CORBA

    Since the new Event mechanism will seamlessly integrate with the Method/Message API, a CORBA port which allows remote components to subscribe to component events should be straightforward to make.

    Deliverable Title Form
    3.8.1 CORBA idl and implementation for using events. Patch set

    3.9 Update documentation, unit tests and Examples

    In line with modern software practice, the unit tests should always exercise and pass against the implementation. Documentation and examples are provided for the users and complement the unit tests.

    Deliverable Title Form
    3.9.1 Unit tests updated Patch set
    3.9.2 rtt-examples, rtt-exercises updated Patch set
    3.9.3 Orocos component builders manual updated Patch set

    3.10 Organize and Port OCL deployment, taskbrowsing

    The new RTT 2.0 execution API will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new primitives. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.

    Deliverable Title Form
    3.10.1 Deployment, Reporting and TaskBrowser updated Patch set

    WP4 Create Reference Application Architecture

    In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control. This work package fleshes out such an example.

    Links : (various posts on Orocos mailing lists)

    Estimated Work : 5 days for the application architecture with documentation

    Tasks:

    4.1 Universal Robot Controller (Using KDL, OCL, standard components)

    This application has a robot component to represent the robot hardware, a controller for joint space and Cartesian space, and a path planner. Users can start from this reference application to control their own robotic platform. Both axes and end effector can be controlled in position and velocity mode. A state machine switches between these modes. A GUI is not included in this work package.
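
The mode-switching part could be sketched like this (illustrative C++ only, not part of the deliverable): a tiny state machine that guards transitions between the control modes, refusing to switch while the axes are moving:

```cpp
// Mode-switching state machine sketch for the reference controller.
#include <stdexcept>

enum class Space { Joint, Cartesian };
enum class Mode  { Position, Velocity };

class ControllerStateMachine {
public:
    Space space() const { return space_; }
    Mode  mode()  const { return mode_; }

    // Only allow switching while the axes are idle, as a simple safety guard.
    void requestMode(Space s, Mode m, bool axes_idle) {
        if (!axes_idle)
            throw std::runtime_error("refusing mode switch while moving");
        space_ = s;
        mode_  = m;
    }

private:
    Space space_ = Space::Joint;     // sensible startup defaults
    Mode  mode_  = Mode::Position;
};
```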

    Deliverable Title Form
    4.1.1 Robot Controller example tar ball

    Full distribution support

    There are two major changes required in the CORBA IDL interface.

    1. A new interface for attaching callbacks to events in the component
    2. A rewrite of the
      1. DataFlowInterface,
      2. MethodInterface,
      3. CommandInterface / MessageInterface.

    The first point will be relatively straightforward, as events attach methods and messages, which will be represented in the CORBA interface as well.

    The DataFlowInterface will be adapted to reflect the rework of the new data flow API. Much will depend on the out-of-band or through-CORBA nature of the data flow.

    The MethodInterface should no longer work with 'session' objects, and all calls are related to the main interface, such that a method object can be freed after invocation.

    The CommandInterface might be removed, in case it can be 'reconstructed' from lower-level primitives. It will be replaced by a MessageInterface which allows sending messages, analogous to the existing MethodInterface.

    The 'ControlTask' interface will remain mostly as is, extended with events() and messages().

    RTT 2.0.0-beta1

    This page is for helping you understand what's in RTT/OCL 2.0.0-beta1 release and what's not.

    Caveats

    First the bad things:
    • Do not use this release on real machines !
    • There are *no* guarantees for real-time operation yet.
    • CORBA transport does not work yet and needs to change drastically
    • The API is 'pretty' stable, but the transport rework might have influences. This release will certainly not be binary compatible with the final 2.0.0 release.
    • OCL has not completely caught up, and also needs to be restructured further into a leaner repository.
    • Do not manually upgrade your code ! Use the rtt2-converter script found on this site first.
    • RTT::Command is gone ! See Replacing Commands
    • RTT::Event is gone ! See Replacing Events
    • Reacting to Operations (former Event) is not yet possible in state machine scripts.
    • RTT::DataPort,BufferPort etc are gone ! See RTT 2.0 Data Flow Ports
    • In case you have patches on the orocos-rtt source tree, all files have moved drastically. First all went into rtt/ instead of src/. Next, all non-API files went into subdirectories.

    For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0

    Missing things

    The final release will have these, but this one has not:
    • A plugin system in RTT to load types (type kits) and plugins (like scripting, marshalling,...)
    • A tool/workflow to create type kits automatically
    • A working CORBA transport
    • RT-Logging framework
    • Service deployment in the DeploymentComponent
    • Misc fixes/minor feature additions and better documentation
    • Repackaged OCL tree. Especially, in OCL, only TaskBrowser, Reporting and DeploymentComponent are fully operational.
    • Debian packages have not been updated yet
    • A couple of unit tests still fail. You should see at the end:

    88% tests passed, 3 tests failed out of 25
     
    The following tests FAILED:
              6 - mqueue-test (Failed)
             19 - types_test (Failed)
             22 - function_test (Failed)

    New Features

    Updated Examples and Documentation

    Most documentation (manuals and online API reference) is up-to-date, but sometimes a bit rough or lacking illustrations. The rtt-exercises have been upgraded to the RTT 2.0 API.

    New style Data Ports

    The data flow ports have been reworked to allow far more flexible component development and system deployment. Details are at RTT 2.0 Data Flow Ports. Motivation can be found at Redesign of the data flow interface

    Improved TaskBrowser

    Allows you to declare new variables, shows what a component requires and provides, and whether these interfaces are connected.

    Improved Deployment

    Specify port connection properties using XML, connect provided to required services.

    Improved Reporting

    Data flow logs are now sample based, such that you can trace the flow and state of connections.

    Method vs Operation

    The RTT 1.x Method, Command and Event APIs have been removed and replaced by Method/Operation. Details are at Methods vs Operations

    Real-Time Allocation

    RTT includes a copy of the TLSF library for supporting places where real-time allocation is beneficial. The RT-Logger infrastructure and the Method/Operation infrastructure take advantage of this. Normal users won't use this feature directly.

    A real-time MQueue transport

    Data flow between processes is now possible in real-time. The real-time MQueue transport allows transporting data between processes using POSIX MQueues, in Xenomai as well.

    For each type to be transported using the MQueue transport, a separate transport typekit must be available (this may change in the final 2.0 release).

    Simplified API

    Creating a component has been greatly simplified and the amount of code to write reduced to the absolute minimum. Documentation of operations or ports is now optional. Attributes and properties can be added by using a plain C++ class variable, the need to specify templates has been removed in some places.

    Services

    Component interfaces are now defined as services and a component can 'provide' or 'require' a service. These tools can be used to connect methods to operations at run-time without writing the lookup code yourself. For example:
     Method<bool(int,int)> setreso;
     setreso = this->getPeer("Camera")->getMethod<bool(int,int)>("setResolution");
     if ( setreso.ready() == false )
        log(Error) << "Could not find setResolution Method." <<endlog();
     else
        setreso(640,480);
    becomes:
     Method<bool(int,int)> setreso("setResolution");
     this->requires("Camera")->addMethod(setreso);
     
     // Deployment component will setup setResolution for us...
     setreso(640,480);

    RTT 2.0.0-beta2

    This page is for helping you understand what's in RTT/OCL 2.0.0-beta2 release and what's not.

    See the RTT 2.0.0-beta1 page for the notes of the previous beta, these will not be repeated here.

    Caveats

    Like in any beta, first the bad things:
    • Do not use this release on real machines !
    • There are *no* guarantees for real-time operation yet.
    • The API is 'pretty' stable, but the type system rework might have influences, especially on RTT 2.0 typekits (aka RTT 1.0 toolkits). This release will certainly not be binary compatible with the final 2.0.0 release.
    • Do not manually upgrade your code ! Use the rtt2-converter script found on this site first.
    • Reacting to Operations (former Event) is not yet possible in state machine scripts.
    • This release requires CMAKE 2.6-patch3 or later

    For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0

    Missing things

    The final release will have these, but this one has not:
    • A plugin system in RTT to load types (type kits) and plugins (like scripting, marshalling,...)
    • A tool/workflow to create type kits automatically
    • RT-Logging framework
    • Service deployment in the DeploymentComponent
    • Misc fixes/minor feature additions and better documentation
    • Repackaged OCL tree. Especially, in OCL, only TaskBrowser, Reporting and DeploymentComponent are fully operational.
    • Debian packages have not been updated yet
    • A couple of unit tests still fail. You should see at the end:

    97% tests passed, 1 tests failed out of 31
     
    The following tests FAILED:
             24 - types_test (Failed)

    If other tests fail, this may be because of overly strict timing checks, but you can report them anyway on the orocos-dev mailing list or the rtt-dev website forum.

    New Features

    See the RTT 2.0.0-beta1 page for the features added in beta1. Most features below relate to the CORBA transport.

    Feature compatibility with RTT 1.x

    This release is able to build the same type of applications as with RTT 1.x. It may be rough around the edges, but no big chunks of functionality (or unit tests) have been left out.

    Updated CORBA IDL

    Want to use an Orocos component from another language or computer ? The simplified CORBA IDL gives quick access to all properties, operations and ports.

    Transparent remote or inter-process communication

    The corba::TaskContextProxy and corba::TaskContextServer allow fully transparent communication between components, providing the same semantics as in-process communication. The full TaskContext C++ API is available in IDL.

    Improved memory usage and reduced bandwidth/callbacks

    Calling an operation, setting a parameter, all these tasks are done with a single call from client to server. No callbacks from server to client are done as in RTT 1.x. This saves a lot of memory on both client and server side and eliminates virtually all memory leaks related to the CORBA transport.

    Adapted OCL components

    TaskBrowser and (Corba)Deployment code is fully operational and feature-equivalent to RTT 1.x. One can deploy Orocos components using a CORBA deployer and connect to them using other deployers or taskbrowsers.

    RTT and OCL Cleanup

    This work package claims all remaining proposed clean-ups for the RTT source code. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community.

    1 Partition in namespaces and hide internal classes in subdirectories. A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are for internal use only but are in fact never used.

    2 Improve CMake build system Numerous suggestions have been done on the mailing list for improving portability and building Orocos on non standard platforms.

    3 Group user contributed code in rtt-extras and ocl-extras packages. These packages offer variants of implementations found in the RTT and OCL, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT and OCL APIs, these contributions are organized in separate packages. Other users are warned that these extras might not be of the same quality as native RTT and OCL classes.

    Real time logging

    Recent ML posts indicate the desire for a real-time (RT) capable logging framework, to supplement/replace the existing non-RT RTT::Logger. See http://www.orocos.org/forum/rtt/rtt-dev/logging-replacement for details.

    NB Work in progress. Feedback welcomed

    See https://www.fmtc.be/bugzilla/orocos/show_bug.cgi?id=708 for progress and patches.

    Initial requirements

    Approximately in order of priority (in my mind at least)

    0) Able to completely disable all logging

    1) Able to log variable-sized string messages

    2) Able to log from non-realtime and realtime code

    3) Minimize (as reasonably practicable) the effect on runtime performance (eg minimize CPU cycles consumed)

    4) Support different log levels

    5) Support different "storage mediums" (ie able to log messages to file, to socket, to stdout)

    Except for 3, and the "realtime" part of 2, the above is the functionality of the existing RTT::Logger

    6) Support different log levels within a deployed system (ie able to log debug in one area, and info in another)

    7) Support multiple storage mediums simultaneously at runtime

    8) Runtime configuration of storage mediums and logging levels

    9) Allow the user to extend the possible storage mediums at deployment-time (ie user can provide new storage class)

    Optional IMHO

    10) Support nested diagnostic contexts [1] [2] (a more advanced version of the Logger::In() that RTT's logger currently supports)
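
A nested diagnostic context is essentially a per-thread stack of context strings that gets prefixed to each log message. A minimal sketch of the concept (invented names, not log4cpp's actual NDC API):

```cpp
// Nested diagnostic context sketch: a per-thread stack of context strings.
#include <string>
#include <vector>

class NDC {
public:
    static void push(const std::string& ctx) { stack().push_back(ctx); }
    static void pop() { if (!stack().empty()) stack().pop_back(); }

    // Join the stack into "outer.inner" form for message prefixes.
    static std::string current() {
        std::string out;
        for (const auto& c : stack()) {
            if (!out.empty()) out += '.';
            out += c;
        }
        return out;
    }

private:
    static std::vector<std::string>& stack() {
        thread_local std::vector<std::string> s;  // one stack per thread
        return s;
    }
};

// RAII helper analogous to RTT's Logger::In(): pops on scope exit.
struct Scoped {
    explicit Scoped(const std::string& ctx) { NDC::push(ctx); }
    ~Scoped() { NDC::pop(); }
};
```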

    Logging framework

    I see 3 basic choices, all of which are log4j ports (none of which support real-time right now)
    1. log4cplus - does not appear to be maintained.
    2. log4cxx - Apache license, well maintained, large, up-to-date functionality, heavy dependencies (APR, etc)
    3. log4cpp - LGPL license, moderately maintained, medium size, fairly up to date (re log4j and logback), no dependencies

    I prefer 3) as it has the basic functionality we need, is license compatible, has a good design, and we've been offered developer access to modify it. I also think modifying a slightly less-well-known framework will be easier than getting some of our mod's in to log4cxx.

    NOTE on the ML I was using the logback term logger, but log4cpp calls it a category. I am switching to category from now on!

    Preliminary design

    Add TLSF to RTT (a separate topic).

    Fundamentally, replace std::string, wrap one class, and override two functions. :-)

    Typedef/template in a real-time string to the logging framework, instead of std::string (also any std::map, etc).

    Create an OCL::Category class derived from log4cpp::Category. Add an (optionally null) association to an RTT::BufferDataPort< log4cpp::LoggingEvent > (which uses rt_strings internally). Override the callAppenders() function to push to the port instead of directly calling appenders.

    Modify the getCategory() function in the hierarchy maintainer to return our OCL::Category instead of log4cpp::Category. Alternatively, leave it producing log4cpp::Category but contain that within the OCL::Category object (a has-a instead of is-a relationship, in OO speak). The alternative is less modification to log4cpp, but worse performance and potentially more wrapping code.
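
The queue-instead-of-appenders idea above could be sketched as follows (all names are simplified stand-ins for the log4cpp/OCL classes discussed, and a plain vector stands in for the BufferDataPort): the real-time side only enqueues events, and a non-real-time thread drains them to the appenders later:

```cpp
// Sketch of a Category that buffers events instead of calling appenders
// directly (illustrative stand-ins for log4cpp/OCL classes).
#include <functional>
#include <string>
#include <vector>

struct LoggingEvent {
    std::string category;
    std::string message;
};

class Category {
public:
    explicit Category(std::string name) : name_(std::move(name)) {}

    // Real-time side: enqueue only, never touch the appenders.
    void log(const std::string& msg) { buffer_.push_back({name_, msg}); }

    // Non-real-time side: drain the buffer and hand each event to an appender.
    void drainTo(const std::function<void(const LoggingEvent&)>& appender) {
        for (const auto& ev : buffer_) appender(ev);
        buffer_.clear();
    }

private:
    std::string name_;
    std::vector<LoggingEvent> buffer_;  // stands in for the BufferDataPort
};
```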

    Deployment

    I have a working prototype of the OCL deployment for this (without the actual logging though), and it is really ugly. As in Really Ugly! To simplify the format and number of files involved, and reduce duplication, I suggest extending the OCL deployer to better support logging.

    Sample system

    Component C1 - uses category org.me.myapp
    Component C2 - uses category org.me.myapp.c2
     
    Appender A - console
    Appender B - file
    Appender C - serial
     
    Logger org.me.myapp has level=info and appender A
    Logger org.me.myapp.C2 has level=debug and appenders B, C

    Configuration file for log4cpp

    log4j.logger.org.me.myapp=info, AppA
    log4j.logger.org.me.myapp.C2=debug, AppB, AppC
     
     
    log4j.appender.AppA=org.apache.log4j.ConsoleAppender
    log4j.appender.AppB=org.apache.log4j.FileAppender
    log4j.appender.AppC=org.apache.log4j.SerialAppender
     
    # AppA uses PatternLayout.
    log4j.appender.AppA.layout=org.apache.log4j.PatternLayout
    log4j.appender.AppA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
    # AppB uses SimpleLayout.
    log4j.appender.AppB.layout=org.apache.log4j.SimpleLayout
    # AppC uses PatternLayout with a different pattern from AppA
    log4j.appender.AppC.layout=org.apache.log4j.PatternLayout
    log4j.appender.AppC.layout.ConversionPattern=%d [%t] %-5p %c %x - %m%n

    One possible Orocos XML deployer configuration

    File: AppDeployer.xml

    <struct name="ComponentC1" 
        ... />
    <struct name="ComponentC2" 
        ... />
     
    <struct name="AppenderA" type="ocl::ConsoleAppender"> 
        <simple name="PropertyFile" ...><value>AppAConfig.cpf</value></simple>
        <struct name="Peers"> <simple>Logger</simple> </struct>
    </struct>
     
    <struct name="AppenderB" type="ocl::FileAppender"> 
        <simple name="PropertyFile" ... />
        <struct name="Peers"> <simple>Logger</simple> </struct>
    </struct>
     
    <struct name="AppenderC" type="ocl::SerialAppender"> 
        <simple name="PropertyFile" ... />
        <struct name="Peers"> <simple>Logger</simple> </struct>
    </struct>
     
    <struct name="Logger" type="ocl::Logger"> 
        <simple name="PropertyFile" ...><value>logger.org.me.myapp.cpf</value></simple>
    </struct>

    File: AppAConfig.cpf

    <properties>
      <simple name="LayoutClass" type="string"><value>ocl.PatternLayout</value></simple>
      <simple name="LayoutConversionPattern" type="string"><value>%-4r [%t] %-5p %c %x - %m%n</value></simple>
    </properties>

    … other appender .cpf files …

    File: logger.org.me.myapp.cpf

    <properties>
        <struct name="Categories" type="PropertyBag">
            <simple name="org.me.myapp" type="string"><value>info</value></simple>
            <simple name="org.me.myapp.C2" type="string"><value>debug</value></simple>
        </struct>
        <struct name="Appenders" type="PropertyBag">
            <simple name="org.me.myapp" type="string"><value>AppenderA</value></simple>
            <simple name="org.me.myapp.C2" type="string"><value>AppenderB</value></simple>
            <simple name="org.me.myapp.C2" type="string"><value>AppenderC</value></simple>
        </struct>
    </properties>

    The logger component is no more than a container for ports. Why special case this? Simply to make life easier for the deployer and to keep the deployer syntax and semantic model similar to what it currently is. A deployer deploys components - the only real special casing here is the connecting of ports (by the logger code) that aren't mentioned in the deployment file. If you use the existing deployment approach, you have to create a component per category, and mention the port in both the appenders and the category. This is what I currently have, and as I said, it is Really Ugly.

    Example logger functionality (error checking elided)

    Logger::configureHook()
     
        // create a port for each category with an appender
        for each appender in property bag
            find existing category
            if category not exist
                create category
                create port
                associate port with category
            find appender component
            connect category port with appender port
     
        // configure categories
        for each category in property bag
            if category not exist
                create category
            set category level

    Important points

    There will probably need to be a restriction that to maintain real-time, categories are found prior to a component being started (e.g. in configureHook() or startHook() ).

    Note that not all OCL::Category objects contain a port. Only those category objects with associated appenders actually have a port. This is how the hierarchy works: if you have category "org.me.myapp.1.2.3" and it has no appenders, but your log level is sufficient, then the logging action gets passed up the hierarchy. Say that category "org.me.myapp" has an appender (and that no logging level in the hierarchy in between stops this logging action); then that appender will actually log this event.

    We should also create toolkit and transport plugins to deal with the log4cpp::LoggingEvent struct. This will allow for remote appenders, as well as viewing within the taskbrowser.

    Port names would perhaps be something like "org.me.myapp.C1" => "log_org_me_myapp_C1".

    Real-Time Strings ?

    It's not so much the string that needs to be real-time, but the stringstream, which converts our data (strings, ints, ...) into a string buffer. Conveniently, the boost::iostreams library lets you create a real-time string stream in two lines of code:

    #include <boost/iostreams/device/array.hpp>
    #include <boost/iostreams/stream.hpp>
    #include <cstring> // for memset
     
    namespace io = boost::iostreams;
     
    int main()
    {
      // prepare static sink
      const int MAX_MSG_LENGTH = 100;
      char sink[MAX_MSG_LENGTH];
      memset( sink, 0, MAX_MSG_LENGTH);
     
      // create 'stringstream' 
      io::stream<io::array_sink>  out(sink);
     
      out << "Hello World! "; // space required to avoid stack smashing abort.
     
      // close and flush stringstream
      out.close();
     
      // re-open from position zero.
      out.open( sink );
     
      // overwrites old data.
      out << "Hello World! ";
    }
    If user code 'only' uses const& to strings or C-strings, there is no need for an rt_string, but there is for an rt_stringstream. The above code realizes that with a statically allocated (and non-expandable!) char buffer. Replacing this buffer with a dynamically growing one will probably need an rt_string after all.
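    For illustration, the same fixed-buffer 'rt_stringstream' idea can also be sketched with only the standard library; this is a stand-in for discussion, not OCL code:

```cpp
#include <cstddef>
#include <ostream>
#include <streambuf>

// A minimal fixed-capacity 'rt_stringstream' backing buffer: all storage
// is reserved up front, so formatting never allocates. When the buffer is
// full, extra characters are silently dropped (the stream goes bad).
class FixedStreambuf : public std::streambuf {
public:
    FixedStreambuf(char* buf, std::size_t n) {
        setp(buf, buf + n - 1);          // keep one byte for '\0'
    }
    const char* c_str() {
        *pptr() = '\0';                  // explicit termination, as the warning below advises
        return pbase();
    }
    void reset() { setp(pbase(), epptr()); } // rewind for reuse
};
```

Usage: construct a std::ostream on top of the FixedStreambuf, stream into it, and call reset() between uses, much like the io::seek() example further down.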

    Unfortunately, the log4cpp::LoggingEvent is passed through RTT buffers, and this has std::string members. So, we need rt_string also, but rt_stringstream will be very useful also.

    Warning For anyone using the boost::iostreams like above, either clear the array to 0's first, or ensure you explicitly write the string termination character ('\0'). The out << "..."; statement does not terminate the string otherwise. Also, I did not need the "space ... to avoid stack smashing abort" bit on Snow Leopard with gcc 4.2.1.

    Using boost::iostream repeatedly ... you need to reset the stream between each use

    #include <boost/iostreams/device/array.hpp>
    #include <boost/iostreams/stream.hpp>
    #include <boost/iostreams/seek.hpp>
     
    namespace io = boost::iostreams;
     
    ...
     
    char            str[500];
    io::stream<io::array_sink>    ss(str);
     
    ss << "cartPose_desi " << vehicleCartPosition_desi << '\0';
    logger->debug(OCL::String(&str[0]));
     
    // reset stream before re-using
    io::seek(ss, 0, BOOST_IOS::beg);        
    ss << "cartPose_meas " << vehicleCartPosition_meas << '\0';
    logger->debug(OCL::String(&str[0]));

    Problems/Questions/Issues

    If a component logs to a category before the Logger is configured (and hence before the buffer ports and appender associations are created), the logging event is lost, since no appenders exist at that time. This means that, by default, logging events from any component that logs prior to configure time are lost. I think that this requires further examination, but a fix would likely involve more change to the OCL deployer.

    The logger configure code presumes that all appenders already exist. Is this an issue?

    Is the port-category association a shared_ptr<port> style, or does the category simply own the port?

    If the logger component has the ports added to it as well as to the category, then you could peruse the ports within the taskbrowser. Is this useful? If this is useful, is it worth making the categories and their levels available somehow for perusal within the taskbrowser?

    References

    [1] http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html

    [2] Patterns for Logging Diagnostic Messages Abstract

    [3] log4j and a short introduction to it.

    [4] logback - log4j successor

    [5] log4cpp

    [6] log4cxx

    [7] log4cplus

    Redesign of the data flow interface

    (Copied from http://github.com/doudou/orocos-rtt/commit/dc1947c8c1bdace90cf0a3aa2047ad248619e76b)

    • write ports are now common to all types of connections, and writing is "send and forget"
    • read ports still specify their type (data or buffer). The management of the connection type is offloaded to the port object (i.e. no more intermediate ConnectionInterface object)
    • the ports maintain a list of "connected" ports. It is therefore possible to do some connection management, i.e. one knows who is listening to what.
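    The three points above can be sketched as follows; the class and function names are invented for illustration and are not the actual RTT 2.x API:

```cpp
#include <deque>
#include <vector>

// Sketch: the writer is connection-type-less and "sends and forgets";
// each read port picks its own policy (data = last sample, buffer = queue),
// and the writer keeps a list of readers, giving real connection management.
template <typename T>
struct InputPort {
    bool connected = false;
    virtual ~InputPort() {}
    virtual void push(const T& v) = 0;
};

template <typename T>
struct DataInputPort : InputPort<T> {   // keeps only the latest sample
    T last{};
    void push(const T& v) override { last = v; }
};

template <typename T>
struct BufferInputPort : InputPort<T> { // keeps every sample
    std::deque<T> queue;
    void push(const T& v) override { queue.push_back(v); }
};

template <typename T>
struct OutputPort {
    std::vector<InputPort<T>*> readers; // we know who is listening
    void connectTo(InputPort<T>& r) { readers.push_back(&r); r.connected = true; }
    void disconnect() {                 // readers learn the writer is gone
        for (auto* r : readers) r->connected = false;
        readers.clear();
    }
    void write(const T& v) {            // send and forget
        for (auto* r : readers) r->push(v);
    }
};
```

In the laser scanner example discussed below, the safety component would own a DataInputPort and the SLAM component a BufferInputPort, both fed by the same OutputPort.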

    Here is the mail that led to this implementation:

    The problems

    • the current implementation is not about data connections (getting data flowing from one port to another). It is about managing shared memory places, where different ports read and write. That is quite obvious for the data ports (i.e. there is a shared data sample that anyone can read or write), and is IMO completely meaningless for buffer ports. Buffer ports are really in need of a data flow model (see below a more specific critique about multi-output buffers)
    • Per se, this does not seem a problem. Data is getting transmitted from one port to the other, isn't it ?
    Well, actually it is a problem because it forbids a clean connection management implementation. Why ? Because there is no way to know who is reading and who is writing ... Thus, the completely useless disconnect() call. Why useless ? Because if you do: (this is pseudo-code of course)

         connect(source, dest)
         source.disconnect()
    
    Then dest.isConnected() returns true, even though dest will not get any data from anywhere (there is no writer anymore on that connection).

    This is more general, as it is for instance very difficult to implement proper connection management in the CORBA case.
    • Because of this connection management issue, it is very difficult to implement a "push" model. It leads to huge problems with the CORBA transport when wireless is bad, because each pop or get needs a few calls.
    • It makes the whole implementation a huge mess. There is at least twice the number of classes normally needed to implement a connection model *and* code is not reused (DataPort is actually *not* a subclass of both ReadDataPort and WriteDataPort, same for buffers).
    • We already had a long thread about multiple-output buffered connections. I'll summarize what for me was the most important points:
      • the current implementation allows distributing workload seamlessly between different task contexts.
      • it does not allow sending the same set of samples to different task contexts. There is a hack allowing buffer connections to be read as if they were data connections, but it is a hack, given that the reader cannot know whether it is really reading a sample or reading a default value because the buffer is empty.
    IMO the first case is actually rare in robotic control (and you can implement a generic workload-sharing component with nicer features, like keeping the ordering between input and output), as in the following example:

                                 => A0 A3 [PROCESSING] => A'0 A'3
     A0 A1 A2 A3 => [WORK SHARING                                  WORK SHARING] => A'0 A'1 A'2 A'3
                                 => A1 A2 [PROCESSING] => A'1 A'2
    
    The second case is much more common. For instance, in my robot, I want to have a safety component that monitors a laser scanner (near-obstacle detection for the purpose of safety) and the same laser scans to go to a SLAM algorithm. I cannot do that for now, because I need a buffered connection to the SLAM algorithm. I cannot use the aforementioned hack either, because for now I plan to put a network connection between the scanner driver and the two targets, and therefore I cannot really guarantee which component will get what.

    Proposal

    What I'm proposing is getting back to a good'ol data flow model, namely:

    • making write ports "send and forget". If the port fails to write, then it is the problem of the reader ! I really don't see what the writer can do about it anyway, given that it does not know what the data will be used for (principle of component separation). The reader can still detect that its input buffer is full and that it did not get some samples and do something about it.

    • making write ports "connection-type-less", i.e. no WRITE data ports and WRITE buffer ports anymore, only write ports. This allows connecting a write port to a read port with any kind of connection. Actually, I don't see a use case where the port designer can decide what kind of connection is best for its OUTPUT ports. Some examples:
      • in the laser scanner example above, the safety component would like a data port and the slam a buffer port
      • in position filtering, some components just want the latest positions and other components all the position stream (for interpolation purposes for instance)
      • in general, GUI vs. X. GUIs want most of the time the latest values.
      • ... I'm sure I can come up with other examples if you want them
    • locating the sample on the read ports (i.e. no ConnectionInterface and subclasses anymore). The bad: one copy of each sample per read port. The good: you implement the point above (write ports do not have a connection type), and you fix buffer connections once and for all.
    • removing (or deprecating) read/write ports. They really have no place in a data flow model.

    Simplified, more robust default activities

    From RTT 1.8 on, an Orocos component is created with a default 'SequentialActivity', which uses ('piggy-backs on') the calling thread to execute its asynchronous functions. It has been argued that this is not a safe default, because a component with a faulty asynchronous function can terminate the thread of a calling component, in case the 'caller' emits an asynchronous event (this is quite technical, you need to be on orocos-dev for a while to understand this).

    Furthermore, in case you do want to assign a thread, you need to select a 'PeriodicActivity' or 'NonPeriodicActivity', which have their quirks as well. For example, PeriodicActivity serialises activities with equal period and periodicity, and NonPeriodicActivity says what it isn't instead of what it is.

    The idea is to create a new activity type which allocates one thread, and which can be periodic or non-periodic. The other activity types remain (and/or are renamed) for specialist users that know what they want.

    Streamlined Execution Flow API

    It started with an idea on FOSDEM. It went on as a long mail (click link for full text and discussion) on the Orocos-dev mailing list.

    Here's the summary:

    • RTT interoperates badly with other software, for example, any external process needs to go through a convoluted CORBA layer. There are also no tools that could ease the job (except the ctaskbrowser), for example some small shell commands that can query/change a component.
    • RTT has remaining usability issues. Sylvain already identified the shortcomings of data/buffer ports and proposed a solution. But any user wrestling with the question 'Should I use an Event (syn/asyn), Method, Command or DataPort?' only got the answer: 'Well, we got Events (syn/asyn), Methods, Commands and DataPorts!'. It's not coherent. There are other frameworks doing a better job. We can do a far better job.
    • RTT has issues with its current distribution implementation: programs can be constructed as such that they cause mem leaks at the remote side, Events never got into the CORBA interface (there is a reason for that), and our data ports over CORBA are equally weak as the C++ implementation.
    • And then there are also the untaken opportunities to reduce RTT & component code size drastically and remove complex features.

    The pages below analyse and propose new solutions. The pages are in chronological order, so later pages represent more recent views.

    First analysis

    I've seen people using the RTT for inter-thread communication in two major ways: implement a function either as a Method or as a Command, where the Command was the thread-safe way to change the state of a component. The adventurous used Events as well, but I can't say they're a huge success (we got like only one 'thank you' email in its whole existence...). But anyway, Commands are complex for newbies, and Events (syn/asyn) aren't better. So for all these people, here it comes: the RTT::Message object.

    Remember, Methods allow a peer component to _call_ a function foo(args) of the component interface. Messages will have the meaning of _sending_ another component a message to execute a function foo(args). Contrary to Methods, Messages are 'send and forget': they return void. The only guarantee you get is that if the receiver was active, it processed the message. For now, forget that Commands exist. We have two inter-component messaging primitives now: Messages and Methods. And each component declares: you can call these methods and send these messages. They are the 'Level 0' primitives of the RTT. Any transport should support these. Note that, conveniently, the transport layer may implement messages with the same primitive as data ports. But we, users, don't care. We still have Data Ports to 'broadcast' our data streams, and now we have Messages as well to send directly to component X.

    Think about it. The RTT would be already usable if each component only had data ports and a Message/Method interface. Ask the AUTOSAR people, it's very close to what they have (and can live with).

    There's one side effect of the Message: we will need a real-time memory allocator to reserve a piece of memory for each message sent, and to free it when the message is processed. Welcome TLSF. In case such a thing is not possible or not wanted by the user, Messages can fall back to using pre-allocated memory, but at the cost of reduced functionality (similar to what Commands can do today). Also, we'll have a MessageProcessor, which replaces and is a slimmed-down version of the CommandProcessor of today.
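    The pre-allocated fallback could look roughly like this fixed message pool; names and sizes are invented for illustration, a real rt_malloc (TLSF) is far more general, and a real pool would additionally need to be thread-safe:

```cpp
#include <cstddef>

// Sketch: a fixed pool of message payload slots, all reserved up front,
// so allocate/free are O(1) and never touch the heap at run time.
// Single-threaded as written; illustration only.
class MessagePool {
public:
    static const int SLOTS = 16;
    static const int SLOT_SIZE = 128;    // max payload per message

    MessagePool() : free_head(0) {
        for (int i = 0; i < SLOTS - 1; ++i) next[i] = i + 1;
        next[SLOTS - 1] = -1;            // end of free list
    }
    void* allocate() {
        if (free_head < 0) return 0;     // pool exhausted: message refused
        int slot = free_head;
        free_head = next[slot];
        return slots[slot];
    }
    void deallocate(void* p) {
        int slot = static_cast<int>((static_cast<char*>(p) - &slots[0][0]) / SLOT_SIZE);
        next[slot] = free_head;          // push slot back on the free list
        free_head = slot;
    }
private:
    char slots[SLOTS][SLOT_SIZE];
    int next[SLOTS];
    int free_head;
};
```

The reduced functionality the text mentions shows up here directly: once the pool is exhausted, a send must fail or block, whereas a TLSF-style allocator degrades far more gracefully.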

    So where does this leave Events? Events are one of the last primitives I explain in courses, because they are so complex. They don't need to be. Today you need to attach a C/C++ function to an event and optionally specify an EventProcessor. Depending on some this-or-thats, the function is executed in this or the other thread. Let's forget about that. In essence, an Event is a local thing that others like to know about: something happened 'here', who wants to know? Events can be changed such that you can say: if event 'e' happens, then call this Method. And you can say: if event 'e' happens, send me this Message. You can subscribe as many callbacks as you want. Because of the lack of this mechanism, the current Event implementation has a huge footprint. There's a lot to win here.
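    The 'call this Method / send me this Message' subscription idea can be sketched like this (hypothetical API, not RTT's; an explicit process() models the subscriber's owner thread draining its message queue):

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Sketch: an event is local; subscribers choose either a Method-style
// callback (runs immediately, in the raiser's thread) or a Message-style
// callback (queued, runs when the owner thread calls process()).
template <typename T>
class Event {
public:
    using Callback = std::function<void(const T&)>;

    void callOnRaise(Callback c)    { methods.push_back(std::move(c)); }
    void messageOnRaise(Callback c) { messages.push_back(std::move(c)); }

    void raise(const T& v) {
        for (auto& m : methods) m(v);           // immediate, raiser's thread
        for (auto& m : messages)
            pending.push([m, v] { m(v); });     // deferred work item (copies v)
    }
    void process() {                            // drained by the owner thread
        while (!pending.empty()) { pending.front()(); pending.pop(); }
    }

private:
    std::vector<Callback> methods, messages;
    std::queue<std::function<void()>> pending;
};
```

Note that the deferred branch copies the event data into the queued closure, which is exactly where the rt_malloc mentioned below would come in for real-time use.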

    Do you want to allow others to raise the event ? Easy: add it to the Message or Method interface, saying: send me this Message and I'll raise the event, or call this Method and you'll raise it, respectively. But if someone can raise it, is your component's choice. That's what the event interface should look like. It's a Level 1. A transport should do no more than allowing to connect Methods and Messages (which it already supports, Level 1) to Events. No more. Even our CORBA layer could do that.

    The implementation of Event can benefit from an rt_malloc as well, indirectly. Each raised Event which causes Messages to be sent out will use the Message's rt_malloc to store the event data, by just sending the Message. In case you don't have/want an rt_malloc, you fall back to roughly what events can do today, but with a lot less code (goodbye RTT::ConnectionC, goodbye RTT::EventProcessor).

    And now comes the climax: Sir Command. How does he fit in the picture? He'll remain in some form, but mainly as a 'Level 2' citizen. He'll be composed of Methods, Messages and Events and will be dressed out to be no more than a wrapper, keeping related classes together or even not that. Replacing a Command with a Message hardly changes anything in the C++ side. For scripts, Commands were damn useful, but we will come up with something satisfactory. I'm sure.

    How does all this interface shuffling allow us to get 'towards a sustainable distributed component model'? Because we're seriously lowering the requirements on the transport layer:

    • It only needs to implement the Level 0 primitives. How proxies and servers are built depends on the transport. You can do so manually (dlib like) or automatically (CORBA like)
    • It allows the transport to control memory better, share it between clients and clean it up at about any time.
    • The data flow changes Sylvain proposes strengthen our data flow model and I'm betting on it that it won't use CORBA as a transport. Who knows.

    And we are at the same time lowering the learning curve for new users:

    • You can easily explain the basic primitives: Properties=>XML, DataPorts=>process data, Methods/Messages=>client/server requests. When they're familiar with these, they can start playing with Events (which build on top of Method/Messages and play a role in DataPorts as well). And finally, if they'll ever need, the Convoluted Command can encompass the most complex scenarios.
    • You can more easily connect with other middleware or external programs. People with other middleware will see the opportunities for 1-to-1 mappings or even implement it as a transport in the RTT.

    Dissecting Command and Method: blocking/non-blocking vs synchronous/asynchronous

    (Please feel free to edit/comment etc. This is a community document, not a personal document)

    Notes on naming

    The word 'service' is used to name the offering of a C/C++ function for others to call. Today, Orocos components offer services in the form of 'RTT::Method' or 'RTT::Command' objects. Both lead to the execution of a function, but in a different way. Also, despite the title, it is advised to refrain from using the terms synchronous/asynchronous, because they are relative terms and may cause confusion if the context is not clear.

    An alternative naming is possible: the offering of a C/C++ function could be named 'operation' and the collection of a given set of operations in an interface could be called a 'service'. This definition would line up better with service oriented architectures like OSGi.

    Purpose

    This page collects the ideas around the new primitives that will replace/enhance Method and/or Command. Although Method is a clearly understood primitive by users, Command isn't, because of its multi-threaded nature. It is too complex to set up and use, and can lead to unsafe applications (segfaults) if used incorrectly. To get these primitives right, we re-examine what users want to do and how to map this onto RTT primitives.

    What users want to do

    Users want to control which thread executes which function, and if they want to wait(block) on the result or not. This all in order to meet deadlines in real-time systems. In practice, this boils down to:

    • When calling services (i.e. functions) of other components, one may opt to wait until the service returns the result, or not, and optionally collect the result later. This is often best decided at the caller side, because the two cases require different client code for sending/receiving the results.
    • When implementing services in a component, the component may decide that the caller's thread executes the function, or that it will execute the function in its own thread. Clearly, this can only be decided at the receiver side, because the two cases require a different implementation of the executed function, especially with respect to thread-safety.

    Dissecting the cases

    When putting the above in a table, you get:

    Calling a service (a function)

    Wait? \ Executing thread? | Caller   | Component
    Yes                       | (Method) | (?)
    No                        | X        | (Command)

    For reference, the current RTT 1.x primitives are shown. There are two remarkable spots: the X and the (?).

    • The X is a practically impossible situation. It would require that the client thread does not wait, yet its thread still executes the function. This could only be resolved if a 'third' thread executes the service on behalf of the caller. It is unclear at which priority this thread should execute, what its lifetime and exclusivity are, and so on.
    • The (?) marks a hole in the current RTT API. Users could only implement this behaviour by busy-waiting on the Command's done() function. However, that is disastrous in real-time systems, because of starvation or priority inversion issues that crop up with such techniques.

    Another thing you should be aware of is that in the current implementation, caller and component must agree on how the service is invoked. If the component defines a Method, the caller must execute it in its own thread and wait for the result. There's no way for the caller to deviate from this. In practice, this means that the component's interface dictates how the caller can use its services. This is consistent with how UML defines operations, but other frameworks, like ICE, allow any function in the interface to be called blocking or non-blocking. Clearly, ICE has some kind of thread pool behind the scenes that does the dispatching and collects the results on behalf of the caller.

    Backwards compatibility - Or how it is now

    Orocos users have written many components, and the primary idea of RTT 2.0 is to solve the issues these components still have due to defects in the current RTT 1.x design. Things that do work satisfactorily should keep working without modification of the user's design.

    Method

    It is very likely that the RTT::Method primitive will remain as it is today. Few problems have been reported and it is easy to understand. The only disadvantage is that it cannot be called 'asynchronously'. For example: if a component defines a Method, but the caller does not have the resources to invoke it (due to a deadline), it needs to set up a separate thread to do the call on its behalf. This is error-prone. Orocos users often solve this by defining a Command and trying to get the result data back somehow (also error-prone).

    Command

    Commands serve multiple purposes in today's programming with Orocos.
    • First, they allow thread-safe execution of a piece of code in a component. Because the component thread executes the function, no locking or synchronization primitives are required.
    • Second, they allow a caller to dispatch work to another component, in case the caller does not have the time or resources to execute a function.
    • Third, they allow tracking the status of the execution. The caller can poll to see whether the function has been queued, executed, what it returned (a boolean), etc.
    • Fourth, they allow tracking the status of the 'effect' of the command, past its execution. This is done by attaching a completion condition, which returns a bool and can indicate whether the effect of the command has been completed or not. For example, if the command is to move to a position, the completion condition would return true when the position is reached, while the command function would only have programmed the interpolator to reach that position. Completion conditions are not much used, and must be polled.

    A simpler form of Command will be provided that does not contain the completion condition. It is too seldom used.

    It is up to the proposals to show how to emulate the old behavior with the new primitives.

    Proposals

    Each proposal should try to solve these issues:

    The ability to let caller and component choose which execution semantics they want when calling or offering a service (or motivate why a certain choice is limited):

    • The ability to wait for a service to be completed
    • The ability to invoke a service and not wait for the result
    • The ability to specify in the component implementation if a function is executed in the component's thread
    • The ability to specify in the component implementation if a function is executed in the caller's thread

    And regarding easy use and backwards compatibility:

    • Show how old-time behavior can be emulated with the new proposal
    • Show which semantics changed
    • How these primitives will be used in the scripting languages and in C++

    And finally:

    • Define proper names for each behavior.

    Proposal 1: Method/Message

    This is one of the earliest proposals. It proposes to keep Method as-is, remove Command, and replace it with a new primitive: RTT::Message. The Message is a stripped-down Command: it has no completion condition and is send-and-forget. One cannot track the status or retrieve arguments. It also uses a memory manager to allow invoking the same Message object multiple times with different data.

    Emulating a completion condition is done by defining the completion condition as a Method in the component interface and requiring that the sender of the Message check that Method to evaluate progress. In scripting this becomes:

    // Old:
      do comp.command("hello"); // waits (polls) here until complete returns true
     
    // New: Makes explicit what above line does:
      do comp.message("hello"); // proceeds immediately
      while ( comp.message_complete("hello") == false ) // polling
         do nothing;

    In C++, the equivalent is slightly different:

    // Old:
      if ( command("hello") ) {
         //... user specific logic that checks command.done() 
      }
     
    // New:
      if ( message("hello") ) { // send and forget, returns immediately
         // user specific logic that checks message_complete("hello")
      }

    Users have indicated that they also wanted to be able to specify in C++:

      message.wait("hello"); // send and block until executed.

    It is not clear yet how the wait case can be implemented efficiently.

    The user visible object names are:

    • RTT::Method to add a 'client thread' C/C++ function to the component interface or call one.
    • RTT::Message to add a 'component thread' C/C++ function to the component interface or call one.

    This proposal solves:

    • A simpler replacement for Command
    • Acceptable emulation capacities of old user code
    • Invoking the same message object multiple times in a row.

    This proposal omits:

    • The choice of caller/component to choose independently
    • Solving case 'X' (see above)
    • How message.wait() can be implemented

    Other notes:

    • It has been mentioned that 'Message' is a confusing and therefore not a good name.

    Proposal 2: Method/Service

    This proposal focuses on separating the definition of a Service (component side) from the calling of a Method (caller side).

    The idea is that components only define services, and assign properties to these services. The main property to toggle is 'executed in my thread, the caller's thread, or even another thread'. But other properties could be added too, for example a 'serialized' property which causes the locking of a (recursive!) mutex during the execution of the service. The user of the service cannot and does not need to know how these properties are set. He only sees a list of services in the interface.

    It is the caller that chooses how to invoke a given service: waiting for the result ('call') or not ('send'). If he doesn't want to wait, he has the option to collect the results later ('collect'). The default is blocking ('call'). Note that waiting or not is completely independent of how the service was defined by the component: the framework will choose a different 'execution' implementation depending on the combination of the properties of service and caller.

    This means that this proposal covers all four quadrants of the table above. It does not yet detail how to implement case (X), though, which requires a third thread to do the actual execution of the service (neither component nor caller wishes to execute the C function).
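    A rough sketch of this proposal from the caller's point of view; the API is invented for illustration (not RTT's), and an explicit step() stands in for the component's own thread executing queued requests:

```cpp
#include <deque>
#include <functional>
#include <string>
#include <utility>

// Sketch: the component defines one service; the caller picks call()
// (wait for the result) or send() + collect() (don't wait).
class Service {
public:
    explicit Service(std::function<double(std::string)> f) : func(std::move(f)) {}

    double call(const std::string& arg) {       // blocking: wait for the result
        return func(arg);
    }
    int send(const std::string& arg) {          // send and forget, returns a handle
        jobs.push_back(arg);
        results.push_back(0.0);
        done.push_back(false);
        return static_cast<int>(jobs.size()) - 1;
    }
    bool collect(int h, double& result) {       // non-blocking collect by handle
        if (!done[h]) return false;
        result = results[h];
        return true;
    }
    void step() {                               // the component's thread working
        for (std::size_t i = 0; i < jobs.size(); ++i)
            if (!done[i]) { results[i] = func(jobs[i]); done[i] = true; }
    }

private:
    std::function<double(std::string)> func;
    std::deque<std::string> jobs;
    std::deque<double> results;
    std::deque<bool> done;
};
```

Note that call() here runs in the caller's thread; in the proposal the framework would pick the execution path based on the service's properties, which this sketch deliberately leaves out.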

    This would result in the following scripting code on caller side:

    //Old:
      do comp.the_method("hello");
     
    //New:
      do comp.the_service.call("hello"); // equivalent to the_method.
     
    //Old:
      do comp.the_command("hello");
     
    //New:
      do comp.the_service.send("hello"); // equivalent to the_command, but without completion condition.

    This example shows two use cases of the same 'the_service' functionality. The first case emulates an RTT 1.x method. It is called and the caller waits until the function has been executed. You cannot see here which thread effectively executes the call. Maybe it's 'comp's thread, in which case the caller's thread blocks until the function is executed. Maybe it's the caller's thread, in which case it effectively executes the function itself. The caller doesn't actually care. The only observable effects are that it takes a certain amount of time to complete the call, *and* that if the call returns, the function has effectively been executed.

    The second case emulates an RTT 1.x command. The send returns immediately and there is no way of knowing when the function has been executed. The only guarantee you have is that the request arrived at the other side and, barring crashes and infinite loops, will complete some time in the future.
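    The call/send/collect pattern described above maps naturally onto the future/promise idiom. As a rough, framework-free analogy (this is not RTT code: std::async merely stands in for the proposed 'send', and the_service is a hypothetical function):

    ```cpp
    #include <future>
    #include <iostream>
    #include <string>
    
    // Hypothetical service function; the name is illustrative only.
    static std::string the_service(const std::string& arg) {
        return "handled:" + arg;
    }
    
    int main() {
        // 'call' semantics: block until the function has executed.
        std::string r1 = the_service("hello");
    
        // 'send' semantics: request execution, keep a handle, continue working.
        std::future<std::string> h = std::async(std::launch::async, the_service, "hello");
    
        // 'collect' semantics: block on the handle when the result is needed.
        std::string r2 = h.get();
    
        std::cout << r1 << " " << r2 << std::endl;
        return 0;
    }
    ```

    The handle 'h' plays the same role as the send handle in the scripting example below: it identifies one particular send request and is the only way to retrieve its result.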

    A third example is shown below where another service is used with a 'send' which returns a result. The service takes two arguments: a string and a double. The double is the answer of the service, but is not yet available when the send is done. So the second argument is just ignored during the send. A handle 'h' is returned which identifies your send request. You can re-use this handle to collect the results. During collection, the first argument is now ignored, and the second argument is filled in with the result of the service. Collection may be blocking or not.

    //New, with collecting results:
      var double ignored_result, result;
     
      set h = comp.other_service.send("hello", ignored_result);
     
      // some time later :
      comp.other_service.collect(h, "ignored", result); // blocking !
     
      // or poll for it:
      if ( comp.other_service.collect_if_done( h, "ignored", result ) == true ) then {
         // use result...
      }

    In C++ the above examples are written as:

    //New calling:
      the_service.call("hello", result); // also allowed: the_service("hello", result);
     
    //New sending:
      the_service.send("hello", ignored_result);
     
    //New sending with collecting results:
      h = other_service.send("hello", ignored_result);
     
      // some time later:
      other_service.collect(h, "ignored", result); // blocking !
     
      // or poll for it:
      if ( other_service.collect_if_done( h, "ignored", result ) == true ) {
         // use result...
      }

    Completion condition emulation is done like in Proposal 1.

    The definition of the service happens at the component's side. The component decides, for each service, whether it is executed in its own thread or the caller's thread:

      // by default creates a service executed by caller, equivalent to defining a RTT 1.x Method  
      RTT::Service the_service("the_service", &foo_service );
     
      // sets the service to be executed by the component's thread, equivalent to Command
      the_service.setExecutor( this );
     
      //above in one line:
      RTT::Service the_service("the_service", &foo_service, this );

    The user visible object names are:

    • RTT::Service to add a C/C++ function to the component interface (replaces use of Method/Command).
    • RTT::CallMethod or similar to call a service, please discuss a good/better name.
    • RTT::SendMethod or similar to send (and collect results from) a service, please discuss a good/better name.

    This proposal solves:

    • Allows specifying threading parameters in the component, independent of call/send semantics.
    • Removes user method/command dilemma.
    • Aligns better with 3rd party frameworks that also offer 'services'.

    This proposal omits:

    • What the exact collection semantics are.
    • How to resolve a 'send' with a 'service executed in the caller's thread' (case X). Should a send indicate which thread must do the send on its behalf? Is the execution deferred to another point in time in the caller's thread?

    Your Proposal here

    ...

    Provides vs Requires interfaces

    Users can express the 'provides' interface of an Orocos Component, but there is no easy way to express which other components a component requires. The notable exception is data flow ports, which have in-ports (requires) and out-ports (provides). There is no way to express such a requires interface for the execution flow interface, i.e. for methods, commands/messages and events. This omission makes the component specification incomplete.

    One of the first questions raised is whether this must be expressed in C++ or during 'modelling'. That is, UML can express the requires dependency, so why should the C++ code also contain it? It should only contain it if you cannot generate code from your UML model. Since code generation is not yet available for Orocos components, there is no choice but to express it in C++.

    A requires interface specification should be optional and only be present for:

    • completing the component specification, allowing better review and understanding
    • automatically connecting component 'execution' interfaces, such that the manual lookup work which you need to write today can be omitted.

    We apply this in code examples to various proposed primitives in the pages below.

    New Command API

    Commands are no longer a part of the TaskContext API. They are helper classes which replicate the old RTT 1.0 behaviour. In order to setup commands more easily, it is allowed to register them as a 'requires()' interface.

    This is all very experimental.

    /**
     * Provider of a Message with command-like semantics
     */
    class TaskA    : public TaskContext
    {
        Message<void(double)>   message;
        Method<bool(double)>    message_is_done;
        Event<void(double)>     done_event;
     
        void mesg(double arg1) {
            return;
        }
     
        bool is_done(double arg1) {
            return true;
        }
     
    public:
     
        TaskA(std::string name)
            : TaskContext(name),
              message("Message",&TaskA::mesg, this),
              message_is_done("MessageIsDone",&TaskA::is_done, this),
              done_event("DoneEvent")
        {
            this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides()->addMethod(&message_is_done, "Is the Message done?", "arg1", "Argument 1");
        this->provides()->addEvent(&done_event, "Emitted when the Message is done.", "arg1", "Argument 1");
        }
     
    };
     
    class TaskB   : public TaskContext
    {
        // RTT 1.0 style command object
        Command<bool(double)>   command1;
        Command<bool(double)>   command2;
     
    public:
     
        TaskB(std::string name)
            : TaskContext(name),
              command1("command1"),
              command2("command2")
        {
            // the commands are now created client side, you
            // can not add them to your 'provides' interface
            command1.useMessage("Message");
            command1.useCondition("MessageIsDone");
            command2.useMessage("Message");
            command2.useEvent("DoneEvent");
     
            // this allows automatic setup of the command.
            this->requires()->addCommand( &command1 );
            this->requires()->addCommand( &command2 );
        }
     
        bool configureHook() {
            // setup is done during deployment.
            return command1.ready() && command2.ready();
        }
     
        void updateHook() {
            // calls TaskA:
            if ( command1.ready() && command2.ready() )
                command1( 4.0 );
            if ( command1.done() && command2.ready() )
                command2( 1.0 );
        }
    };
     
    int ORO_main( int, char** )
    {
        // Create your tasks
        TaskA ta("Provider");
        TaskB tb("Subscriber");
     
        connectPeers(ta, tb);
        // connects interfaces.
        connectInterfaces(ta, tb);
        return 0;
    }

    New Event API

    The idea of the new Event API is that:

    1. Only the owner of the event can emit the event (unless the event is also added as a Method or Message).
    2. Only method or message objects can subscribe to events.

    /**
     * Provider of Event
     */
    class TaskA    : public TaskContext
    {
        Event<void(string)>   event;
     
    public:
     
        TaskA(std::string name)
            : TaskContext(name),
              event("Event")
        {
            this->provides()->addEvent(&event, "The Event", "arg1", "Argument 1");
            // OR:
            this->provides("FooInterface")->addEvent(&event, "The Event", "arg1", "Argument 1");
     
        // If you want to allow users to emit the event themselves:
            this->provides()->addMethod(&event, "Emit The Event", "arg1", "Argument 1");
        }
     
        void updateHook() {
            event("hello world");
        }
    };
     
    /**
     * Subscribes a local Method and a Message to Event
     */
    class TaskB   : public TaskContext
    {
        Message<void(string)>   message;
        Method<void(string)>    method;
     
    // Message callback (signature must match Message<void(string)>)
    void mesg(string arg1) {
        return;
    }
 
    // Method callback (signature must match Method<void(string)>)
    void meth(string arg1) {
        return;
    }
     
    public:
     
        TaskB(std::string name)
            : TaskContext(name),
              message("Message",&TaskB::mesg, this),
              method("Method",&TaskB::meth, this)
        {
            // optional:
            // this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
            // this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");
     
            // subscribe to event:
            this->requires()->addCallback("Event", &message);
            this->requires()->addCallback("Event", &method);
     
            // OR:
            // this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
            // this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
     
            // subscribe to event:
            this->requires("FooInterface")->addCallback("Event", &message);
            this->requires("FooInterface")->addCallback("Event", &method);
        }
     
        bool configureHook() {
            // setup is done during deployment.
            return message.ready() && method.ready();
        }
     
        void updateHook() {
            // we only receive
        }
    };
     
    int ORO_main( int, char** )
    {
        // Create your tasks
        TaskA ta("Provider");
        TaskB tb("Subscriber");
     
        connectPeers(ta, tb);
        // connects interfaces.
        connectInterfaces(ta, tb);
        return 0;
    }

    New Message API

    This use case shows how one can use messages in the new API. The unchanged method is added for comparison. Note that I have also added the provides() and requires() mechanism such that the RTT 1.0 construction:

      method = this->getPeer("PeerX")->getMethod<int(double)>("Method");

    is no longer required. The connection is made in the same way as data flow ports are connected.

    /**
     * Provider
     */
    class TaskA    : public TaskContext
    {
        Message<void(double)>   message;
        Method<int(double)>     method;
     
        void mesg(double arg1) {
            return;
        }
     
        int meth(double arg1) {
            return 0;
        }
     
    public:
     
        TaskA(std::string name)
            : TaskContext(name),
              message("Message",&TaskA::mesg, this),
              method("Method",&TaskA::meth, this)
        {
            this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
            this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");
            // OR:
            this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
            this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
        }
     
    };
     
    class TaskB   : public TaskContext
    {
        Message<void(double)>   message;
        Method<int(double)>     method;
     
    public:
     
        TaskB(std::string name)
            : TaskContext(name),
              message("Message"),
              method("Method")
        {
            this->requires()->addMessage( &message );
            this->requires()->addMethod( &method );
            // OR:
            this->requires("FooInterface")->addMessage( &message );
            this->requires("FooInterface")->addMethod( &method );
        }
     
        bool configureHook() {
            // setup is done during deployment.
            return message.ready() && method.ready();
        }
     
        void updateHook() {
            // calls TaskA:
            method( 4.0 );
            // sends two messages:
            message( 1.0 );
            message( 2.0 );
        }
    };
     
    int ORO_main( int, char** )
    {
        // Create your tasks
        TaskA ta("Provider");
        TaskB tb("Subscriber");
     
        connectPeers(ta, tb);
        // connects interfaces.
        connectInterfaces(ta, tb);
        return 0;
    }

    New Method, Operation, Service API

    This page shows some use cases on how to use the newly proposed services classes in RTT 2.0.

    WARNING: This page assumes the reader has familiarity with the current RTT 1.x API.

    First, we introduce the new classes that would be added to the RTT:

    #include <rtt/TaskContext.hpp>
    #include <string>
     
    using RTT::TaskContext;
    using std::string;
     
    /**************************************
     * PART I: New Orocos Classes
     */
     
    /**
     * An operation is a function a component offers to do.
     */
    template<class T>
    class Operation {};
     
    /**
     * A Service collects a number of operations.
     */
    class ServiceProvider {
    public:
        ServiceProvider(string name, TaskContext* owner);
    };
     
    /**
     * Is the invocation of an Operation.
     * Methods can be executed blocking or non blocking,
     * in the latter case the caller can retrieve the results
     * later on.
     */
    template<class T>
    class Method {};
     
    /**
     * A ServiceRequester collects a number of methods
     */
    class ServiceRequester {
    public:
        ServiceRequester(string name, TaskContext* owner);
     
        bool ready();
    };

    What is important to notice here is the symmetry:

     (Operation, ServiceProvider) <-> (Method, ServiceRequester).
    The left-hand side offers the services, the right-hand side uses them.

    First we define that we provide a service. The user starts from his own C++ class with virtual functions. This class is then implemented in a component. A helper class ties the interface to the RTT framework:

    /**************************************
     * PART II: User code for PROVIDING a service
     */
     
    /**
     * Example Service as abstract C++ interface (non-Orocos).
     */
    class MyServiceInterface {
    public:
        /**
         * Description.
         * @param name Name of thing to do.
         * @param value Value to use.
         */
        virtual int foo_function(std::string name, double value) = 0;
     
        /**
         * Description.
         * @param name Name of thing to do.
         * @param value Value to use.
         */
        virtual int bar_service(std::string name, double value) = 0;
    };
     
    /**
     * MyServiceInterface exported as Orocos interface.
     * This could be auto-generated from reading MyServiceInterface.
     *
     */
    class MyService {
    protected:
        /**
         * These definitions are not required in case of 'addOperation' below.
         */
        Operation<int(const std::string&,double)> operation1;
        Operation<int(const std::string&,double)> operation2;
     
        /**
         * Stores the operations we offer.
         */
        ServiceProvider provider;
    public:
        MyService(TaskContext* owner, MyServiceInterface* service)
        : provider("MyService", owner),
          operation1("foo_function"), operation2("bar_service")
        {
        // operation1 ties to foo_function and is executed in the caller's thread.
        operation1.calls(&MyServiceInterface::foo_function, service, Service::CallerThread);
        operation1.doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
        provider.addOperation( operation1 );
     
            // OR: (does not need operation1 definition above)
            // Operation executed by caller's thread:
            provider.addOperation("foo_function", &MyServiceInterface::foo_function, service, Service::CallerThread)
                    .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
     
            // Operation executed in component's thread:
            provider.addOperation("bar_service", &MyServiceInterface::bar_service, service, Service::OwnThread)
                    .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
        }
    };

    Finally, any component is free to provide the service defined above. Note that it shouldn't be that hard to autogenerate most of the above code.

    /**
     * A component that implements and provides a service.
     */
    class MyComponent : public TaskContext, protected MyServiceInterface
    {
        /**
         * The class defined above.
         */
        MyService serv;
    public:
        /**
         * Just pass on TaskContext and MyServiceInterface pointers:
         */
        MyComponent() : TaskContext("MC"), serv(this,this)
        {
     
        }
     
    protected:
        // Implements MyServiceInterface
        int foo_function(std::string name, double value)
        {
            //...
            return 0;
        }
        // Implements MyServiceInterface
        int bar_service(std::string name, double value)
        {
            //...
            return 0;
        }
    };

    The second part is about using this service. It creates a ServiceRequester object that stores all the methods it wants to be able to call.

    Note that both ServiceRequester below and ServiceProvider above have the same name "MyService". This is how the deployment can link the interfaces together automatically.
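    Such name-based linking could work roughly like the following framework-free sketch (all types here are illustrative stand-ins, not actual RTT classes): a deployer keeps a registry of provided services keyed by name, and connecting a requester is simply a lookup on the same name.

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>
    
    // Illustrative stand-ins for ServiceProvider / ServiceRequester.
    struct Provider { std::string name; };
    
    struct Requester {
        std::string name;
        const Provider* impl = nullptr;      // filled in during deployment
        bool ready() const { return impl != nullptr; }
    };
    
    // The deployer matches requesters to providers purely by service name.
    struct Deployer {
        std::map<std::string, const Provider*> registry;
        void offer(const Provider& p) { registry[p.name] = &p; }
        bool connect(Requester& r) {
            auto it = registry.find(r.name);
            if (it == registry.end()) return false;
            r.impl = it->second;
            return true;
        }
    };
    
    int main() {
        Provider p{"MyService"};
        Requester r{"MyService"};
        Deployer d;
        d.offer(p);
        assert(!r.ready());
        d.connect(r);    // same name on both sides -> linked automatically
        assert(r.ready());
        return 0;
    }
    ```

    This also explains why configureHook() checks ready(): until deployment has performed the lookup, the requester side has nothing to call into.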

    /**************************************
     * PART II: User code for REQUIRING a service
     */
     
    /**
     * We need something like this to define which services
     * our component requires.
     * This class is written explicitly, but it can also be done
     * automatically, as the example below shows.
     *
     * If possible, this class should be generated too.
     */
    class MyServiceUser {
        ServiceRequester rservice;
    public:
        Method<int(const string&, double)> foo_function;
        MyServiceUser( TaskContext* owner )
        : rservice("MyService", owner), foo_function("foo_function")
        {
            rservice.requires(foo_function);
        }
        // forward readiness of the underlying ServiceRequester:
        bool ready() { return rservice.ready(); }
    };
     
    /**
     * Uses the MyServiceUser helper class.
     */
    class UserComponent2 : public TaskContext
    {
        // also possible to (privately) inherit from this class.
        MyServiceUser mserv;
    public:
        UserComponent2() : TaskContext("User2"), mserv(this)
        {
        }
     
    bool configureHook() {
        if ( ! mserv.ready() ) {
            // service not ready
            return false;
        }
        return true;
    }
     
        void updateHook() {
            // blocking:
            mserv.foo_function.call("name", 3.14);
            // etc. see updateHook() below.
        }
    };

    The helper class can again be omitted, but the Method<> definitions must remain in place (in contrast, the Operation<> definitions for providing a service could be omitted).

    The code below also demonstrates the different use cases for the Method object.

    /**
     * A component that uses a service.
     * This component doesn't need MyServiceUser, it uses
     * the factory functions instead:
     */
    class UserComponent : public TaskContext
    {
        // A definition like this must always be present because
        // we need it for calling. We also must provide the function signature.
        Method<int(const string&, double)> foo_function;
    public:
        UserComponent() : TaskContext("User"), foo_function("foo_function")
        {
            // creates this requirement automatically:
            this->requires("MyService")->add(&foo_function);
        }
     
    bool configureHook() {
        if ( !this->requires("MyService")->ready() ) {
            // service not ready
            return false;
        }
        return true;
    }
     
        /**
         * Use the service
         */
        void updateHook() {
            // blocking:
            foo_function.call("name", 3.14);
            // short/equivalent to call:
            foo_function("name", 3.14);
     
            // non blocking:
            foo_function.send("name", 3.14);
     
            // blocking collect of return value of foo_function:
            int ret = foo_function.collect();
     
            // blocking collect of any arguments of foo_function:
            string ret1; double ret2;
        ret = foo_function.collect(ret1, ret2); // re-uses 'ret' declared above
     
            // non blocking collect:
            int returnval;
            if ( foo_function.collectIfDone(ret1,ret2,returnval) ) {
                // foo_function was done. Any argument that needed updating has
                // been updated.
            }
        }
    };

    Finally, we conclude with an example of requiring the same service multiple times, for example, for controlling two stereo-vision cameras.

    /**
     * Multi-service case: use same service multiple times.
     * Example: stereo vision with two cameras.
     */
    class UserComponent3 : public TaskContext
    {
        // also possible to (privately) inherit from this class.
        MyVisionUser vision;
    public:
        UserComponent3() : TaskContext("User2"), vision(this)
        {
            // requires a service exactly two times:
            this->requires(vision)["2"];
            // OR any number of times:
            // this->requires(vision)["*"];
            // OR range:
            // this->requires(vision)["0..2"];
        }
     
    bool configureHook() {
        if ( ! vision.ready() ) {
            // only true if both are ready.
            return false;
        }
        return true;
    }
     
        void updateHook() {
            // blocking:
            vision[0].foo_function.call("name", 3.14);
            vision[1].foo_function.call("name", 3.14);
            // or iterate:
            for(int i=0; i != vision.interfaces(); ++i)
                vision[i].foo_function.call("name",3.14);
            // etc. see updateHook() above.
     
            /* Scripting equivalent:
             * for(int i=0; i != vision.interfaces(); ++i)
             *   do vision[i].foo_function.call("name",3.14);
             */
        }
    };

    Upgrading from RTT 1.x to 2.0

    For upgrading, more details are split into several child pages.

    Methods vs Operations

    RTT 2.0 has unified events, commands and methods in the Operation interface.

    Purpose

    To allow one component to provide a function and other components, located anywhere, to call it. This is often called 'offering a service'. An Orocos component can offer many functions to any number of components.

    Component interface

    In Orocos, a C or C++ function is managed by the 'RTT::Operation' object. So the first task is to create such an operation object for each function you want to provide.

    This is how a function is added to the component interface:

      #include <rtt/Operation.hpp>
      #include <string>
      using namespace RTT;
      using std::string;
     
      class MyTask
        : public RTT::TaskContext
      {
        public:
        string getType() const { return "SpecialTypeB"; }
        // ...
     
        MyTask(std::string name)
          : RTT::TaskContext(name)
        {
           // Add the C++ method to the operation interface:
           addOperation( "getType", &MyTask::getType, this )
                    .doc("Read out the name of the system.");
         }
         // ...
      };
     
      MyTask mytask("ATask");

    The writer of the component has written a function 'getType()' which returns a string that other components may need. In order to add this operation to the Component's interface, you use the TaskContext's addOperation function. This is a short-hand notation for:

           // Add the C++ method to the operation interface:
           provides()->addOperation( "getType", &MyTask::getType, this )
                    .doc("Read out the name of the system.");

    Meaning that we add 'getType()' to the component's main interface (also called the 'this' interface). addOperation takes a number of parameters: the first one is always the name, the second is a pointer to the function, and the third is a pointer to the object on which that function must be called, in our case, MyTask itself. If the function is a C function, the third parameter may be omitted.

    If you don't want to pollute the component's 'this' interface, put the operation in a sub-service:

           // Add the C++ method objects to the operation interface:
           provides("type_interface")
                ->addOperation( "getType", &MyTask::getType, this )
                    .doc("Read out the name of the system.");

    The code above dynamically creates a new service object 'type_interface' to which one operation is added: 'getType()'. This is similar to creating an object-oriented interface with one function in it.

    Calling an Operation in C++

    Now another task wants to call this function. There are two ways to do this: from a script or in C++. This section explains how to do it in C++.

    Your code needs a few things before it can call a component's operation:

    • It needs to be a peer of instance 'ATask' of MyTask.
    • It needs to know the signature of the operation it wishes to call: string (void) (this is the function's declaration without the function's name).
    • It needs to know the name of the operation it wishes to call: "getType"

    Combining these three, we create an OperationCaller object that will manage our call to 'getType':

    #include <rtt/OperationCaller.hpp>
    //...
     
      // In some other component:
      TaskContext* a_task_ptr = getPeer("ATask");
     
      // create an OperationCaller<Signature> object 'getType':
      OperationCaller<string(void)> getType
           = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'
     
      // Call 'getType' of ATask:
      cout << getType() <<endl;

    A lot of work for calling a function, no? The advantages you get are these:

    • ATask may be located on any computer, or in any process.
    • You didn't need to include the header of ATask, so it's very decoupled.
    • If ATask disappears, the OperationCaller object will let you know, instead of crashing your program.
    • The exposed operation is directly available from the scripting interface.

    Calling Operations in scripts

    In scripts, operations are accessed far more easily. The C++ part above reduces to:

    var string result = "";
    set result = ATask.getType();

    Tweaking Operation's Execution

    In real-time applications, it is important to know which thread will execute which code. By default the caller's thread will execute the operation's function, but you can change this when adding the operation by specifying the ExecutionType:

           // Add the C++ method to the operation interface:
           // Execute function in component's thread:
           provides("type_interface")
                ->addOperation( "getType", &MyTask::getType, this, OwnThread )
                    .doc("Read out the name of the system.");

    As a result, when getType() is called, the call is queued for execution in the ATask component, executed by its ExecutionEngine, and, when done, the caller resumes. The caller (i.e. the OperationCaller object) does not notice this change of execution path: it waits for the getType function to complete and returns the results.
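    The queuing behaviour behind 'OwnThread' can be pictured as a worker thread draining a task queue: the caller packages its call, enqueues it, and blocks on the result. This is only a conceptual sketch of the mechanism using the standard library, not the actual ExecutionEngine implementation:

    ```cpp
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <future>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <string>
    #include <thread>
    
    // Minimal stand-in for a component's execution engine: one thread
    // draining a queue of packaged tasks.
    class Engine {
        std::deque<std::function<void()>> queue_;
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread worker_;
    public:
        Engine() : worker_([this]{ run(); }) {}
        ~Engine() {
            { std::lock_guard<std::mutex> l(m_); stop_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        // 'OwnThread' call: queue the function, then wait for its result.
        std::string callOwnThread(std::function<std::string()> f) {
            auto task = std::make_shared<std::packaged_task<std::string()>>(std::move(f));
            std::future<std::string> fut = task->get_future();
            {
                std::lock_guard<std::mutex> l(m_);
                queue_.emplace_back([task]{ (*task)(); });
            }
            cv_.notify_one();
            return fut.get();   // caller blocks until the engine thread ran it
        }
    private:
        void run() {
            std::unique_lock<std::mutex> l(m_);
            while (!stop_) {
                cv_.wait(l, [this]{ return stop_ || !queue_.empty(); });
                while (!queue_.empty()) {
                    auto f = std::move(queue_.front());
                    queue_.pop_front();
                    l.unlock();
                    f();        // executed by the engine's thread
                    l.lock();
                }
            }
        }
    };
    
    int main() {
        Engine engine;
        std::string type = engine.callOwnThread([]{ return std::string("SpecialTypeB"); });
        std::cout << type << std::endl;
        return 0;
    }
    ```

    The key point matches the text above: the caller blocks on the future while a different thread executes the function, so the change of execution path is invisible at the call site.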

    Not blocking when calling operations

    In the examples above, the caller always blocked until the operation returns the result. This is not mandatory. A caller can 'send' an operation execution to a component and collect the returned values later. This is done with the 'send' function:

    // This first part is equal to the example above:
     
    #include <rtt/OperationCaller.hpp>
    //...
     
      // In some other component:
      TaskContext* a_task_ptr = getPeer("ATask");
     
      // create an OperationCaller<Signature> object 'getType':
      OperationCaller<string(void)> getType
           = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'
     
    // Here it is different:
     
      // Send 'getType' to ATask:
      SendHandle<string(void)> sh = getType.send();
     
      // Collect the return value 'some time later':
      sh.collect();             // blocks until getType() completes
      cout << sh.retn() <<endl; // prints the return value of getType().

    Other variations on the use of SendHandle are possible, for example polling for the result or retrieving more than one result if the arguments are passed by reference. See the Component Builder's Manual for more details.

    RTT 2.0 Data Flow Ports

    RTT 2.0 has a more powerful, simpler and more flexible system for exchanging data between components.

    Renames

    Every instance of ReadDataPort and ReadBufferPort must be renamed to 'InputPort' and every instance of WriteDataPort and WriteBufferPort must be renamed to OutputPort. 'DataPort' and 'BufferPort' must be renamed according to their function.

    The rtt2-converter tool will do this renaming for you, or at least, make its best guess.

    Usage

    InputPort and OutputPort have a read() and a write() function respectively:

    using namespace RTT;
    double data;
     
    InputPort<double> in("name");
    FlowStatus fs = in.read( data ); // was: Get( data ) or Pull( data ) in 1.x
     
    OutputPort<double> out("name");
    out.write( data );               // was: Set( data ) or Push( data ) in 1.x

    As you can see, Get() and Pull() are mapped to read(), Set() and Push() to write(). read() returns a FlowStatus object, which can be NoData, OldData, NewData. write() does not return a value (send and forget).

    Writing to an unconnected port is not an error. Reading from an unconnected (or never written to) port returns NoData.

Your component can no longer see whether a connection is buffered or not, and it doesn't need to know: it can always inspect the return value of read() to see whether a new data sample arrived. In case multiple data samples are ready to read in a buffer, read() will fetch each sample in order, returning NewData each time, until the buffer is empty, after which it returns the last read sample with OldData.

Whether data exchange is buffered or not is now determined by 'Connection Policies', i.e. 'RTT::ConnPolicy' objects. This makes connecting components very flexible, since you only specify the policy at deployment time. It is possible to define a default policy for each input port, but it is not recommended to count on a certain default when building serious applications. See the 'RTT::ConnPolicy' API documentation for the available policies and their defaults.

    Deployment

    The DeploymentComponent has been extended such that it can create new-style connections. You only need to add sections to your XML files, you don't need to change existing ones. The sections to add have the form:

      <!-- You can set per data flow connection policies -->
      <struct name="SensorValuesConnection" type="ConnPolicy">
        <!-- Type is 'shared data' or buffered: DATA: 0 , BUFFER: 1 -->
        <simple name="type" type="short"><value>1</value></simple>
        <!-- buffer size is 12 -->
        <simple name="size" type="short"><value>12</value></simple>
      </struct>
      <!-- You can repeat this struct for each connection below ... -->

    Where 'SensorValuesConnection' is a connection between data flow ports, like in the traditional 1.x way.

    Consult the deployment component manual for all allowed ConnPolicy XML options.

    Real-time with Complex data

The data flow implementation tries to pass on your data as real-time-safe as possible. This requires that the operator=() of your data type is hard real-time. In case your operator=() is only real-time if enough storage has been allocated beforehand, you can inform your output port of the amount of storage to pre-allocate. You can do this by using:

      std::vector<double> joints(10, 0.0);
      OutputPort<std::vector<double> > out("out");
     
      out.setDataSample( joints ); // initialises all current and future connections to hold a vector of size 10.
     
      // modify joint values... add connections etc.
     
      out.write( joints );  // always hard real-time if joints.size() <= 10

As the example shows, a single call to setDataSample() is enough. This is not the same as write(): a write() will deliver data to each connected InputPort, while setDataSample() only initializes the connections, without any actual writing. Be warned that setDataSample() may clear all data already in a connection, so it is best to call it before any data is written to the OutputPort.

    In case your data type is always hard real-time copyable, there is no need to call setDataSample. For example:

      KDL::Frame f = ... ; // KDL::Frame never (de-)allocates memory during copy or construction.
     
      OutputPort< KDL::Frame > out("out");
     
      out.write( f );  // always hard real-time

    Further reading

    Please also consult the Component Builder's Manual and the Doxygen documentation for further reference.

    RTT 2.0 Renaming table

    This page lists the renamings/relocations done on the RTT 2.0 branch (available through gitorious on http://www.gitorious.org/orocos-toolchain/rtt/commits/master) and also offers the conversion scripts to do the renaming.

    A note about headers/namespaces: If a header is in rtt/extras, the namespace will be RTT::extras and vice versa. A header in rtt/ has namespace RTT. Note: the OS namespace has been renamed to lowercase os. The Corba namespace has been renamed to lowercase corba.

    Scripts

The script attached at the bottom of this page converts RTT 1.x code according to the renaming table below. It does so in a quasi-intelligent way and catches most cases correctly. Some changes require additional manual intervention because the script cannot guess the missing content. You will also need to download the rtt2-converter program from here.

    Namespace conversions and simple renames

Many other files moved into sub-namespaces. For all these renames and more, a script is attached to this wiki page. You need to download headers.txt, class-dump.txt and to-rtt-2.0.pl.txt, and rename the to-rtt-2.0.pl.txt script to to-rtt-2.0.pl:
    mv to-rtt-2.0.pl.txt to-rtt-2.0.pl
    chmod a+x to-rtt-2.0.pl
    ./to-rtt-2.0.pl $(find . -name "*.cpp" -o -name "*.hpp")
    The script will read headers.txt and class-dump.txt to do its renaming work for every changed RTT header and class on the list of files you give as an argument. Feel free to report problems on the orocos-dev mailing list or RTT-dev forum.

    Minor manual fixes may be expected after running this script. Be sure to have your sources version controlled, such that you can first test what the script does before permanently changing files.

    Flow port and services conversions

A second program, called rtt2-converter, is required for the flow port and method/command -> operation conversions that go beyond a simple rename. It requires boost 1.41.0 or newer, installed with the regex library (it links against and includes the boost regex headers). You can build rtt2-converter from Eclipse or with 'make all' using the Makefile. See the rtt2-converter README.txt for instructions and download the converter sources from the toolchain/upgrading link.

    tar xjf rtt2-converter-1.1.tar.bz2
    cd rtt2-converter-1.1
    make
    ./rtt2-converter Component.hpp Component.cpp

The tool preferably takes both the header and the implementation of your component, but will also accept a single file. It needs both class definition and implementation to make its best guesses on how to convert. If all your code is in a single .hpp or .cpp file, you only need to specify that file. If nothing is to be done, the file remains unchanged, so you may 'accidentally' feed non-Orocos files, or feed a file twice.

    To run this on a large codebase, you can do something similar to:

    # Calls : ./rtt2-converter Component.hpp Component.cpp for each file in orocos-app
    for i in $(find /home/user/src/orocos-app -name "*.cpp"); do ./rtt2-converter $(dirname $i)/$(basename $i cpp)hpp $i; done
     
    # Calls : ./rtt2-converter Component.cpp for each .cpp file in orocos-app
    for i in $(find /home/user/src/orocos-app -name "*.cpp"); do ./rtt2-converter $i; done
     
    # Calls : ./rtt2-converter Component.hpp for each .hpp file in orocos-app
    for i in $(find /home/user/src/orocos-app -name "*.hpp"); do ./rtt2-converter $i; done
This looks up all .cpp files in an orocos-app directory and calls rtt2-converter on each hpp/cpp pair. The dirname/basename construct replaces the .cpp extension with .hpp. If you have a mixed hpp+cpp/cpp-only/hpp-only repository, you'll have to run the three loops shown above. The tool is robust against being called multiple times on the same file.
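
The dirname/basename construct can be checked in isolation. A small shell sketch with a hypothetical path shows what it computes:

```shell
# Hypothetical path, to show how the .cpp extension is swapped for .hpp:
i="src/components/Component.cpp"
# basename strips the trailing 'cpp' suffix, leaving 'Component.',
# to which 'hpp' is appended; dirname restores the directory part.
hpp="$(dirname "$i")/$(basename "$i" cpp)hpp"
echo "$hpp"   # src/components/Component.hpp
```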

    Core API

RTT 1.0 | RTT 2.0 | Comments
RTT::PeriodicActivity | RTT::extras::PeriodicActivity | Use of RTT::Activity is preferred.
RTT::Timer | RTT::os::Timer |
RTT::SlaveActivity, SequentialActivity, SimulationThread, IRQActivity, FileDescriptorActivity, EventDrivenActivity, SimulationActivity, ConfigurationInterface, Configurator, TimerThread | RTT::extras::... | EventDrivenActivity has been removed.
RTT::OS::SingleThread, RTT::OS::PeriodicThread | RTT::os::Thread | Can do periodic and non-periodic and switch at run-time.
RTT::TimeService | RTT::os::TimeService |
RTT::DataPort, BufferPort | RTT::InputPort, RTT::OutputPort | Buffered/unbuffered is decided at connection time. Only input/output is hardcoded.
RTT::types() | RTT::types::Types() | The function name collided with the namespace name.
RTT::Toolkit* | RTT::types::Typekit* | More logical name.
RTT::Command | RTT::Operation | Create an 'OwnThread' operation type.
RTT::Method | RTT::Operation | Create a 'ClientThread' operation type.
RTT::Event | RTT::internal::Signal | Events are replaced by OutputPort or Operation; the Signal class is a synchronous-only callback manager.
commands()->getCommand<T>() | provides()->getOperation() | Get a provided operation; no template argument required.
commands()->addCommand() | provides()->addOperation().doc("Description") | Add a provided operation; document using .doc("doc").doc("a1","a1 doc")...
methods()->getMethod<T>() | provides()->getOperation() | Get a provided operation; no template argument required.
methods()->addMethod() | provides()->addOperation().doc("Description") | Add a provided operation; document using .doc("doc").doc("a1","a1 doc")...
attributes()->getAttribute<T>() | provides()->getAttribute() | Get a provided attribute; no template argument required.
attributes()->addAttribute(&a) | provides()->addAttribute(a) | Add a provided attribute, passed by reference; can now also add a normal member variable.
properties()->getProperty<T>() | provides()->getProperty() | Get a provided property; no template argument required.
properties()->addProperty(&p) | provides()->addProperty(p).doc("Description") | Add a provided property, passed by reference; can now also add a normal member variable.
events()->getEvent<T>() | ports()->getPort() OR provides()->getOperation<T>() | Event<T> was replaced by OutputPort<T> or Operation<T>.
ports()->addPort(&port, "Description") | ports()->addPort( port ).doc("Description") | Takes the argument by reference and documents using .doc("text").

    Scripting

RTT 1.0 | RTT 2.0 | Comments
scripting() | getProvider<Scripting>("scripting") | Returns a RTT::Scripting object. Also add #include <rtt/scripting/Scripting.hpp>

    Marshalling

RTT 1.0 | RTT 2.0 | Comments
marshalling() | getProvider<Marshalling>("marshalling") | Returns a RTT::Marshalling object. Also add #include <rtt/marsh/Marshalling.hpp>
RTT::Marshaller | RTT::marsh::MarshallingInterface | Normally not needed for normal users.
RTT::Demarshaller | RTT::marsh::DemarshallingInterface | Normally not needed for normal users.

    CORBA Transport

RTT 1.0 | RTT 2.0 | Comments
RTT::Corba::* | RTT::corba::C* | Each proxy class or IDL interface starts with a 'C' to avoid confusion with the identically named RTT C++ classes.
RTT::Corba::ControlTaskServer | RTT::corba::TaskContextServer | Renamed for consistency.
RTT::Corba::ControlTaskProxy | RTT::corba::TaskContextProxy | Renamed for consistency.
RTT::Corba::Method,Command | RTT::corba::COperationRepository, CSendHandle | No need to create these helper objects; call COperationRepository directly.
RTT::Corba::AttributeInterface, Expression, AssignableExpression | RTT::corba::CAttributeRepository | No need to create expression objects; query/use CAttributeRepository directly.
Attachments: class-dump.txt (7.89 KB), headers.txt (10.17 KB), to-rtt-2.0.pl.txt (4.78 KB)

    Replacing Commands

    RTT 2.0 has dropped the support for the RTT::Command class. It has been replaced by the more powerful Methods vs Operations construct.

    The rtt2-converter tool will automatically convert your Commands to Method/Operation pairs. Here's what happens:

    // RTT 1.x code:
    class ATask: public TaskContext
    {
      bool prepareForUse();
      bool prepareForUseCompleted() const;
    public:
      ATask(): TaskContext("ATask")
      {
        this->commands()->addCommand(RTT::command("prepareForUse",&ATask::prepareForUse,&ATask::prepareForUseCompleted,this),
                                                 "prepares the robot for use");
      }
    };

    After:

    // After rtt2-converter: RTT 2.x code:
    class ATask: public TaskContext
    {
      bool prepareForUse();
      bool prepareForUseCompleted() const;
    public:
      ATask(): TaskContext("ATask")
      {
        this->addOperation("prepareForUse", &ATask::prepareForUse, this, RTT::OwnThread).doc("prepares the robot for use");
        this->addOperation("prepareForUseDone", &ATask::prepareForUseCompleted, this, RTT::ClientThread).doc("Returns true when prepareForUse is done.");
      }
    };

What has happened is that the RTT 1.0 Command is split into two RTT 2.0 Operations: "prepareForUse" and "prepareForUseDone". The first is executed in the component's thread ('OwnThread'), analogous to the RTT::Command semantics. The second function, prepareForUseDone, is executed in the caller's thread ('ClientThread'), analogous to the behaviour of the RTT::Command's completion condition.

The old behaviour can be simulated at the caller's side by these constructs:

    Calling a 2.0 Operation as a 1.0 Command in C++

    Polling for commands in RTT 1.x was very rudimentary. One way of doing it would have looked like this:
      Command<bool(void)> prepare = atask->commands()->getCommand<bool(void)>("prepareForUse");
      prepare(); // sends the Command object.
      while (prepare.done() == false)
        sleep(1);
    You look up the command with the signature bool(void) and invoke it. Next, you poll done() until it returns true.

    In RTT 2.0, the caller's code looks up the prepareForUse Operation and then 'sends' the request to the ATask Component. Optionally, the completion condition is looked up manually and polled for as well:

      Method<bool(void)> prepare = atask->getOperation("prepareForUse");
      Method<bool(void)> prepareDone = atask->getOperation("prepareForUseDone");
      SendHandle h = prepare.send();
     
      while ( !h.collectIfDone() && prepareDone() == false )
         sleep(1);

The collectIfDone() and prepareDone() checks are now made explicit, while they were called implicitly in RTT 1.x's prepare.done() function. Writing your code like this will cause the exact same behaviour in RTT 2.0 as in RTT 1.x.

    In case you don't care for the 'done' condition, the above code may just be simplified to:

      Method<bool(void)> prepare = atask->getOperation("prepareForUse");
      prepare.send();

In that case, you may ignore the SendHandle; the object will clean itself up at the appropriate time.

    Calling a 2.0 Operation as a 1.0 Command in Scripting

    Scripting was very convenient for using commands. A typical RTT 1.x script would have looked like:

    program foo {
      do atask.prepareForUse();
      // ... rest of the code
    }
    The script would wait at the prepareForUse() line (using polling) until the command's completion.

    To have the same behaviour in RTT 2.x using Operations, you need to make the 'polling' explicit. Furthermore, you need to 'send' the method to indicate that you do not wish to block:

    program foo {
      var SendHandle h;
      set h = atask.prepareForUse.send();
      while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
         yield;
      // ... rest of the code
    }
    Just like in the C++ code, you need to create a SendHandle variable and store the result of the send in it. You can then use h to see whether the operation has finished and, if so, check the status of prepareForUseDone(). It may be convenient to put these in a function in RTT 2.x:

    function prepare_command() {
      var SendHandle h;
      set h = atask.prepareForUse.send();
      while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
         yield;
    }
    program foo {
       call prepare_command(); // note: using 'call'
      // ... rest of the code
    }
    In order to avoid blocking in the 'foo' program, you need to prefix prepare_command with 'call'. This will 'inline' the function such that 'foo' does not block the ExecutionEngine until prepare_command returns. For comparison, if you omit the 'call' prefix, the program needs to poll prepare_command() in turn:

    export function prepare_command()  // note: we must export the function
    {
      var SendHandle h;
      set h = atask.prepareForUse.send();
      while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
         yield;
    }
    program foo {
      var SendHandle h;
      set h = prepare_command(); // note: not using 'call'
      while (h.collectIfDone() == false)
         yield;
      // ... rest of the code
    }
    In RTT 2.x, a program script will only yield when the word 'yield' (equivalent to RTT 1.x 'do nothing') is seen. Both function and program must yield in order to not spin in an endless loop in the ExecutionEngine.

    note

Code without 'yield' can spin forever. This blocking 'trap' in 2.0 can be very inconvenient. It's likely that an alternative system will be provided to allow 'transparent' polling for a given function. For example, this syntax could be introduced:
    program foo {
      prepare_command.call(); // (1) calls and blocks for result.
      prepare_command.send(); // (2) send() and forget.
      prepare_command.poll(); // (3) send() and poll with collectIfDone().
    }
    Syntax (1) and (2) are already present. Syntax (3) would indicate that the script must send and poll (using collectIfDone() behind the scenes) the prepare_command operation. This would always work and save users from writing the bulky SendHandle code.

    Replacing Events

    RTT 2.0 no longer supports the RTT::Event class. This page explains how to adapt your code for this.

    Rationale

RTT::Event was broken in some subtle ways: especially the unreliable asynchronous delivery and the danger of untrusted clients made it fragile. It was therefore replaced by an OutputPort or an Operation, depending on the use case.
    • Replace an Event by an OutputPort if you want to broadcast to many components. Any event sender->receiver connection can set the buffering policy, or encapsulate a transport to another (remote) process.
    • Replace an Event by an Operation if you want to react to an interface call *inside* your component, for example, in a state machine script or in C++.

    Replacing by an OutputPort

Output ports differ from RTT::Event in that they can take only one value as an argument. If your 1.x Event had multiple arguments, they need to be combined into a new struct that you create yourself. Both sender and receiver must know and understand this struct.

    For the simple case, when your Event only had one argument:

    // RTT 1.x
    class MyTask: public TaskContext
    {
       RTT::Event<void(int)> samples_processed;
     
       MyTask() : TaskContext("task"), samples_processed("samples_processed") 
       {
          events()->addEvent( &samples_processed );
       }
       // ... your other code here...
    };  
    Becomes:
    // RTT 2.x
    class MyTask: public TaskContext
    {
       RTT::OutputPort<int> samples_processed;
     
       MyTask() : TaskContext("task"), samples_processed("samples_processed") 
       {
          ports()->addPort( samples_processed ); // note: RTT 2.x dropped the '&'
       }
       // ... your other code here...
    };  

    Note: the rtt2-converter tool does not do this replacement, see the Operation section below.

    Components wishing to receive the number of samples processed, need to define an InputPort<int> and connect their input port to the output port above.

    Reacting to event data in scripts

    When using the RTT scripting service's state machine, you can react to data arriving on the port. You could for example load this script in the above component:
    StateMachine SM {
     
       var int total = 0;
     
       initial state INIT {
         entry {
         }
         // Reads samples_processed and stores the result in 'total'.
     // Only if the port returns 'NewData' will this branch be evaluated.
         transition samples_processed( total ) if (total > 0 ) select PROCESSING;
       }
     
       state PROCESSING {
         entry { /* processing code, use 'total' */
         }
       }
     
   final state FINI {}
}

The transition from state INIT to state PROCESSING will only be taken if samples_processed.read( total ) == NewData and if total > 0. Note: when your TaskContext is executing periodically, the read( total ) statement will be re-tried, and total overwritten, in case of OldData or NewData. Only if the connection of samples_processed is completely empty (never written to or reset) will total not be overwritten.

    Replacing by an Operation

Operations can take the same signature as RTT::Event. The difference is that only the component itself can attach callbacks to an Operation, by means of the signals() function.

    For example:

    // RTT 1.x
    class MyTask: public TaskContext
    {
       RTT::Event<void(int, double)> samples_processed;
     
       MyTask() : TaskContext("task"), samples_processed("samples_processed") 
       {
          events()->addEvent( &samples_processed );
       }
       // ... your other code here...
    };  
    Becomes:
    // RTT 2.x
    class MyTask: public TaskContext
    {
       RTT::Operation<void(int,double)> samples_processed;
     
       MyTask() : TaskContext("task"), samples_processed("samples_processed") 
       {
          provides()->addOperation( samples_processed ); // note: RTT 2.x dropped the '&'
     
          // Attaching a callback handler to the operation object:
          Handle h = samples_processed.signals( &MyTask::react_foo, this );
       }
       // ... your other code here...
     
       void react_foo(int i, double d) {
           cout << i <<", " << d <<endl;
       }
    };  

Note: the rtt2-converter tool only performs this replacement automatically, i.e. it assumes all your Event objects were used only in the local component. See the RTT 2.0 Renaming table for this tool.

Since an Operation object is always local to the component, no other components can attach callbacks. If your Operation returns a value, the callback function needs to return it too, but that value will be ignored and not received by the caller.

    The callback will be executed in the same thread as the operation's function (ie OwnThread vs ClientThread).

    Reacting to operations in scripts

    When using the RTT scripting service's state machine, you can react to calls on the Operation. You could for example load this script in the above component:
    StateMachine SM {
     
       var int total = 0;
     
       initial state INIT {
         entry {
         }
     // Reacts to the samples_processed operation being invoked
     // and stores the argument in 'total'. If the Operation takes multiple
     // arguments, multiple arguments must be given here as well.
         transition samples_processed( total ) if (total > 0 ) select PROCESSING;
       }
     
       state PROCESSING {
         entry { /* processing code, use 'total' */
         }
       }
     
   final state FINI {}
}

The transition from state INIT to state PROCESSING will only be taken if samples_processed( total ) was called by another component (using a Method object, see Methods vs Operations) and if the argument in that call is > 0. Note: when samples_processed returns a value, your script cannot influence that return value, since it is determined by the function tied to the Operation, not by the signal handlers.

    NOTE: RTT 2.0.0-beta1 does not yet support the script syntax.

    Using Eclipse and Orocos

    .. work in progress ..

    This page describes how you can configure Eclipse in order to write Orocos applications.

    Important

    First of all, download and install the latest Eclipse version from http://www.eclipse.org/downloads with the 'CDT' plugin (C/C++ Development Toolkit).

    Don't continue if you have an Eclipse version older than Helios (3.6).

    Getting Started

    Eclipse is a great tool, but some Linux systems are not well prepared to use it. Follow these instructions carefully to get the most out of it.

    Is Java setup correctly (Linux)?

This is a major issue. Do not use the 'gij'/'kaffe'/... Java implementations on Linux. You must use the Sun HotSpot(TM) or a recent OpenJDK Runtime Environment. You can check this by doing:

     java -version
     java version "1.6.0_10"
     Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
     Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
    OR
    java -version
    java version "1.6.0_0"
    OpenJDK Runtime Environment (IcedTea6 1.6.1) (6b16-1.6.1-3ubuntu3)
    OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)

    Note that you should not see any text saying 'gij' or 'kaffe',... Ubuntu/Debian users can install Sun java by doing:

      sudo aptitude install sun-java6-jre
      sudo update-alternatives --config java
     ... select '/usr/lib/jvm/java-6-sun/jre/bin/java'

In case of instability or misbehaving windows/buttons, try the Sun (now Oracle) version. Also google for the 'export GDK_NATIVE_WINDOWS=1' solution in case you use an Eclipse version before Helios and a 2009 or newer Linux distro.

    Extra stuff for your happiness

    After you installed Eclipse and Java, you need to configure additionally:
• Download and unzip the Rinzo XML editor plug-in version 0.6.0 or later and put only the ar.com.tadp.xml.rinzo.core jar file in eclipse/dropins. This plug-in gives you syntax highlighting when editing XML files. Add the .cpf extension to your project's 'File Associations' and let it point to the Rinzo editor. NOTE: Eclipse has its own XML Editor plugin that you can install instead.
    • Add the Eclipse CORBA Plugin in case you need to edit IDL files. The update site is http://eclipsecorba.sourceforge.net/update
    • Add the CMake Editor Plugin in case you need to edit cmake files. The update site is http://cmakeed.sourceforge.net/updates/
    • Add the .ops and .osd Orocos scripting file extensions to the 'File Associations' and let them open in the C++ editor, this adds some basic highlighting and indentation features.
• Add /usr/include, /usr/local/include or /usr/local/orocos/include to your project's include paths such that the Eclipse Indexer can find the Orocos C++ headers and offer coding assistance. (In case you are developing the RTT itself, add orocos-rtt/src, orocos-rtt/build/src and orocos-rtt/src/os to your include paths, assuming you created a build directory 'build'.)
• Make sure the hovering feature is on such that you can see whether the Indexer understands your C++ code.

    If you're changing Orocos code, also download and enable the Eclipse indentation file attached to this post and Import it in the 'Coding Style' tab of your project Preferences.

    Eclipse and Git

    Peter Soetens has a git repository on http://www.github.com/psoetens . You can check it out using Eclipse with the Egit/jgit plugin. Instructions can be found on this github page. Egit runs on all Java platforms (no dependency on Linux git). In order to install Egit, add
     http://download.eclipse.org/egit/updates
    to your update sites of Eclipse (Help -> Software updates...)

If you have an existing clone (checked out with plain old git), you can 'import' it by first importing the git repository directory as a project and then right-clicking the project -> Team -> Share Project. Follow the dialogs. There's some confusion about what to type in the location box. In older versions, you'd need to type

     file:///path/to/repository
    Note the three ///

    Eclipse and SVN

    The official Orocos repository is on http://svn.mech.kuleuven.be/repos/orocos (WebSVN link: http://svn.mech.kuleuven.be/websvn/orocos ) You can use Subclipse (http://subclipse.tigris.org) and add
     http://subclipse.tigris.org/update_1.4.x
    to your updates sites of Eclipse (Help -> Software updates...)

    If you have an existing checkout, you can 'import' it by first importing the checkout directory as a project and then right click the project -> Team -> Share Project

Attachment: orocos-coding-style.xml (15.51 KB)

    Setting up Ubuntu 10.10, Eclipse and Orocos

    Ubuntu or Debian Packages

    You can find Ubuntu packages of the Orocos Toolchain in the ROS package repositories. Look for the package ros-<releasename>-orocos-toolchain-ros, which installs the orocos_toolchain_ros version.

    There are also build instructions for building some of these packages manually here: How to build Debian packages

    The rest of this page mixes installing Java and building Orocos toolchain sources. In case you used the Debian/Ubuntu packages above, only do the Java setup.

    Setup

    • Replace Java.
    • Install the GNU Toolchain
    • Get Eclipse CDT
    • Install other Orocos dependencies

    Java

    I am starting with Ubuntu 10.10. First thing to do is to get rid of the OpenJDK version of Java and install the Oracle (Sun) version of Java using the Synaptic Package Manager.

    Do the following in Synaptic at the same time:

    • Search for openjdk- and mark for removal or complete removal.
    • Search for sun-java- and select sun-java6-jdk. Make sure the following are selected too (some might be selected already):

     * sun-java6-bin
     * sun-java6-jre
     * sun-java6-plugin
     * sun-java6-source
    • Apply the changes and exit Synaptic.

    GNU Toolchain and C++ Development

    Install the following:

    • build-essential
    • automake
    • bison
    • libboost-all-dev
    • cmake

    Other Orocos Dependencies

    • libxerces-c-dev
    • doxygen
    • dia-gnome
    • inkscape
    • docbook-xsl

omniORB

Using Synaptic, get all the omniORB packages that are not marked as transitional or dbg and have the same version number. (Hint: do a search for omniorb, then sort by version.) Include the lib* packages too.

    Build Orocos

I do not like the bootstrap/autoproj procedure of building Orocos. I prefer using the standard build instructions found in the RTT Installation Guide.

    Errata in RTT Installation Guide:

• you have to cd to orocos-toolchain-2.2.1/rtt/ and then mkdir build; cd build and continue with the instructions.

    Make sure to enable CORBA by using this cmake command:

    cmake .. -DOROCOS_TARGET=gnulinux -DENABLE_CORBA=ON -DCORBA_IMPLEMENTATION=OMNIORB

    OCL

    Install:

    • libnetcdf-dev
    • netcdf-bin
    • libncurses5-dev
    • libncursesw5-dev
    • libreadline-dev
    • libedit-dev
    • lua5.1
    • lua5.1-0-dev

cd log4cpp; mkdir build; cd build; ../configure; make; make install

Now: cd ocl; mkdir build; cd build; cmake ..; make; make install

    Running Eclipse

    JDK BUG for JDK 6.0_18 and above FIX on 64bit systems

    Put this in your eclipse.ini file under -vmargs: -XX:-UseCompressedOops

Get the Eclipse IDE for C/C++ Developers. Unzip it somewhere and then do:

    cd eclipse
    ./eclipse

    Creating an Eclipse Project from an Orocos package

    You can use Orocos packages in Eclipse easily. The easiest way is when you're using the ROS build system, since that allows you to generate an Eclipse project, with all the correct settings. If you don't use ROS, you can import it too, but you'll have to add the paths to headers etc manually.

    ROS users

    cd ~/ros
    rosrun ocl orocreate-pkg orocosworld
    cd orocosworld
    make eclipse-project

    Then go to Eclipse -> File -> Import -> Existing Project into Workspace and then follow the wizard.

    When the project is loaded, give it some time to index all header files. All include paths and build settings in Eclipse will be set up for you.

    non-ROS users

You must have sourced env.sh!

    cd ~/src
    orocreate-pkg orocosworld
    cd orocosworld
    make

    Then go to Eclipse -> File > New > Makefile Project with Existing Code and complete the wizard page.

    The next step you need to do is to add the include paths to RTT and/or OCL and any other dependency in the C++ configuration options of your project preferences.

    Using Git and Orocos

    Getting started with git

    For a very good git introduction, see Using git without feeling stupid part 1 and part 2 !

    It's a 10-minute read which really pays off.

    You can use Eclipse (see Using Eclipse And Orocos), plain git (on Linux) or TortoiseGit (on Windows).

    SVN users can use this reference for learning the first commands: http://git.or.cz/course/svn.html

    Cloning the repository

    The git repositories of the Orocos Toolchain (v2.x only) are located at http://github.com/orocos-toolchain .

    Check out the rtt or ocl repositories and submit patches by using

     git clone git://github.com/orocos-toolchain/rtt.git
     cd rtt
    ...hack hack hack on master branch...
     git add <changed files>
     git commit
    ... repeat ...

    Finally:

     git format-patch origin/master
    
    And send out the resulting patch(es).

    If origin/master moved forward, then do

     git fetch origin
     git rebase origin/master
    
    Fetch copies the remote changes to your local repository, but doesn't update your current branch. Rebase first removes your patches, then applies the fetched changes, and finally re-applies your personal patches on top of them. In case of conflicts, see the tutorial at the top of this page or man git-rebase.
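
    As a sketch, this cycle can be replayed in a throwaway repository. The repository names, identities and file contents below are invented for illustration; 'upstream.git' stands in for the real remote:

```shell
# Replay the fetch + rebase cycle in a scratch directory.
set -e
tmp=$(mktemp -d)

# A bare "remote" with one initial commit, pushed from a first clone:
git init -q --bare -b master "$tmp/upstream.git"
git clone -q "$tmp/upstream.git" "$tmp/work"
git -C "$tmp/work" config user.email you@example.com
git -C "$tmp/work" config user.name You
echo base > "$tmp/work/file.txt"
git -C "$tmp/work" add file.txt
git -C "$tmp/work" commit -qm "base"
git -C "$tmp/work" push -q origin master

# The remote moves forward (simulated through a second clone):
git clone -q "$tmp/upstream.git" "$tmp/other"
git -C "$tmp/other" config user.email a@example.com
git -C "$tmp/other" config user.name A
echo remote > "$tmp/other/remote.txt"
git -C "$tmp/other" add remote.txt
git -C "$tmp/other" commit -qm "remote work"
git -C "$tmp/other" push -q origin master

# Meanwhile we committed locally; now fetch, then rebase:
echo local > "$tmp/work/local.txt"
git -C "$tmp/work" add local.txt
git -C "$tmp/work" commit -qm "local work"
git -C "$tmp/work" fetch -q origin          # copies remote changes, branch untouched
git -C "$tmp/work" rebase -q origin/master  # re-applies "local work" on top
git -C "$tmp/work" log --oneline            # "local work" now sits above "remote work"
```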

    Toolchain v2.x

    The Orocos Toolchain v2.X is the merging of the RTT, OCL and other tools that you require to build Orocos applications.

    We are gradually migrating the wiki pages of the RTT/OCL to the Toolchain Wiki. All wiki pages under RTT/OCL are considered to be for RTT/OCL 1.x versions

    What you find below is only for the 2.x releases.

    Component Packages

    Creating a new package

    This is extremely easily done with the orocreate-pkg script, see Getting started.

    Building and Using packages

    This section only applies to packages that can be built, ie that contain a Makefile, CMakeLists.txt, configure or any other file that describes how to build it.

    You can use packages in two ways:

    • built in-place (the ROS way)
    • installed (the traditional way, used by autoproj)

    Using packages in-place [ROS]

    To use this method, you need the ROS tools (roscd, rospack, rosmake, ...) to manage your packages, and your package directories must be underneath the ROS_PACKAGE_PATH. Orocos chooses this method automatically when the 'ROS_ROOT' environment variable exists. The orogen tools do not support in-place packages: orogen needs the 'make install' step after a package has been built (see Using Installed Packages below).
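
    Since the selection only depends on the environment, the rule can be sketched in plain shell (the ROS_ROOT value below is just an example path):

```shell
# Minimal sketch of the selection rule: the in-place method is chosen
# when ROS_ROOT exists in the environment.
export ROS_ROOT=/opt/ros/electric/ros   # example value

if [ -n "$ROS_ROOT" ]; then
    method="in-place"    # ROS tools (rosmake, rospack) manage the package
else
    method="installed"   # the package is built and 'make install'-ed
fi
echo "build method: $method"
```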

    Layout

    When building a package in-place, it needs a top-level Makefile which creates a build directory, builds the libraries and puts them like this

    # User provided files:
    # Package directory:
    .../packagename/manifest.xml, Makefile, CMakeLists.txt,...
    # Sources:
    .../packagename/src/*.cpp
    # Headers:
    .../packagename/include/packagename/*.hpp
     
    # Build results:
    # Built Component libraries for 'packagename':
    .../packagename/lib/orocos/gnulinux/*.so|dll|...
    # Built Plugin libraries for 'packagename':
    .../packagename/lib/orocos/gnulinux/plugins/*.so|dll|...
    # Type libraries for 'packagename':
    .../packagename/lib/orocos/gnulinux/types/*.so|dll|...
    # Build information for 'packagename':
    .../packagename/packagename-gnulinux.pc

    For allowing multi-target builds, the libraries are put in the lib/orocos/targetname/ directory in order to avoid loading a library built for a different target. In the example above, the targetname is gnulinux.

    Linking

    If you want to link against a library of a built package (because you included one of its headers), you can find that information in the packagename/packagename.pc file. The packagename might be suffixed with -<target> in case it is target specific. The packagename.pc file is generated by the build step and contains all build-specific information for this package. Do not modify this file; it will be overwritten. (Note: build systems like ROS get the build information from the manifest.xml file. In Orocos, the build information is generated in a separate file.)
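
    For illustration, such a generated packagename-gnulinux.pc might look like the fragment below. The exact fields written by the build step may differ per package and target, and all values here are invented:

```
# .../packagename/packagename-gnulinux.pc (illustrative content)
prefix=/home/user/src/packagename
libdir=${prefix}/lib/orocos/gnulinux
includedir=${prefix}/include

Name: packagename
Description: Build information for the packagename package
Version: 1.0
Libs: -L${libdir} -lpackagename
Cflags: -I${includedir}
```

    Any pkg-config-aware build system can read the include and link flags from such a file.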

    When you use the UseOrocos.cmake macros (Orocos Toolchain 2.3.0 or later), linking with dependees will be done automatically for you.

    You may add a link instruction using the classical CMake syntax:

    orocos_component( mycomponent ComponentSource.cpp )
    target_link_libraries( mycomponent ${YOUR_LIBRARY} )

    Using

    It is not necessary to define your RTT_COMPONENT_PATH, unless you have installed packages as well.

    The component and plugin loaders of RTT will search your ROS_PACKAGE_PATH, and its target subdirectory for components and plugins.

    You can then import the package in the deployer application by using:

    import("packagename")

    This makes all components, typekits and plugins of that package, and of the packages it depends on, available to the current application.

    Using installed packages [Autoproj]

    This method relies on a 'make install' command in your package.

    Layout

    All packages are installed in the same root directory that contains all built software, for example /opt/orocos. Orocos packages that deliver built libraries are then installed like this:

    # Install dir (the prefix):
    /opt/orocos
     
    # Headers:
    /opt/orocos/include/orocos/gnulinux/packagename/*.hpp
    # Component libraries for 'packagename':
    /opt/orocos/lib/orocos/gnulinux/packagename/*.so|dll|...
    # Plugin libraries for 'packagename':
    /opt/orocos/lib/orocos/gnulinux/packagename/plugins/*.so|dll|...
    # Type libraries for 'packagename':
    /opt/orocos/lib/orocos/gnulinux/packagename/types/*.so|dll|...
    # Build information for 'packagename':
    /opt/orocos/lib/pkgconfig/packagename-gnulinux.pc

    For allowing multi-target installs, the packages will be installed in orocos/targetname/packagename (for example: orocos/xenomai/ocl) in order to avoid loading a library for a different target. In the example above, the targetname is gnulinux.

    Linking

    If you want to link against a library of an installed package (because you included one of its headers), you can find that information in the lib/pkgconfig/packagename.pc file. Packagenames might be suffixed with -<target> in case they are target specific. When you use the UseOrocos.cmake macros (Orocos Toolchain 2.3.0 or later) or orogen, linking with dependees will be done automatically for you.

    You may add a link instruction using the classical CMake syntax:

    orocos_component( mycomponent ComponentSource.cpp )
    target_link_libraries( mycomponent -lfoobar )

    Using

    Point your RTT_COMPONENT_PATH to the lib/orocos directory:
    RTT_COMPONENT_PATH=/opt/orocos/lib/orocos
    export RTT_COMPONENT_PATH

    The component and plugin loaders of RTT will search this directory, and its target subdirectory for components and plugins. So there is no need to encode the target name in the RTT_COMPONENT_PATH (but you may do so if it is required for some case).

    You can then import the package in the deployer application by using:

    import("packagename")

    This makes all components, typekits and plugins of that package available to the current application.

    Getting started

    Installation

    How does the Orocos Toolchain work?

    The toolchain is a set of libraries and programs that you must compile on your computer in order to build Orocos applications. In case you are on a Linux system, you can use the bootstrap.sh script, which does this for you.

    After installation, these libraries are available:

    • orocos-rtt : The component run-time library
    • orocos-ocl-* : The standard Orocos components for setting up applications
    • typelib and utilmm : Helper libraries for the tools.

    These programs are available:

    • autoproj : building and updating the toolchain
    • typegen : generates typekits from classes and structs, which tell Orocos which data you want to communicate. Orocos already knows the basic C/C++ data types, you can use this tool to 'transport' any class or struct between components.
    • orogen : create components that can be deployed either statically (through oroGen itself) or dynamically by using the deployer. Creating components using oroGen requires a very minimal knowledge of the RTT API, and is therefore suited for new users.
    • deployer : creates dynamic deployments (loads components from an XML file) + console
    • taskbrowser : console that connects to running applications
    • rttlua: alternative Lua-scriptable taskbrowser
    • cdeployer: creates dynamic deployments (without console)

    Creating components

    Orocos component libraries live in packages. You need to understand the concept of packages in Orocos in order to be able to create and use components. See more about Component Packages.

    Your primary reading material for creating components is the Orocos Components Manual. A component is compiled into a shared library (.so or .dll).

    Use the orocreate-pkg script to create a new package that contains a ready-to-compile Orocos component, which you can extend or play with. See Using orocreate-pkg for all details. (Script available from Toolchain version 2.1.1 on).

    Alternatively, the oroGen tool allows you to create components with a minimum knowledge of the RTT API.

    Creating applications

    The DeploymentComponent loads XML files or scripts and dynamically creates, configures and starts components in a single process. See the Orocos Deployment Manual

    The TaskBrowser is our primary interface with a running application. See the Orocos TaskBrowser Manual

    Upgrading from RTT 1.x

    Take a look at the Upgrading from RTT 1.x to 2.0 webpage.

    Upgrading from Toolchain 2.x to Toolchain 2.8.x

    Tips and tricks can be found on the Upgrading from Toolchain 2.x to Toolchain 2.8.x page, especially for ROS-integrated installations with ROSBUILD and CATKIN.

    Using orocreate-pkg

    Where to find it

    The orocreate-pkg script is installed in your bin directory where the deployer and other OCL tools are installed, or you can find it in orocos-toolchain/ocl/scripts/pkg/orocreate-pkg. When you source env.sh, orocreate-pkg will be in your PATH.

    How to use it

    The script takes at least one argument: the package name. A second option specifies what to generate in the package, in our case, a component:
    $ cd ~/orocos
    $ orocreate-pkg myrobot component
    Using templates at /home/kaltan/src/git/orocos-toolchain/ocl/scripts/pkg/templates...
    Package myrobot created in directory /home/kaltan/src/git/orocos-toolchain/myproject/myrobot
    $ cd myrobot
    $ ls
    CMakeLists.txt  Makefile  manifest.xml  src
    # Standard build (installs in the same directory as Orocos Toolchain):
      $ mkdir build ; cd build
      $ cmake .. -DCMAKE_INSTALL_PREFIX=orocos
      $ make install
    # OR: ROS build:
      $ make

    You can modify the .cpp/.hpp files and the CMakeLists.txt file to adapt them to your needs. See orocreate-pkg --help for other options which allow you to generate other files.

    What gets generated

    All generated files may be modified by you, except for the files in the typekit directory. That directory is generated during a build and is under the control of the Orocos typegen tool, from the orogen package.

    • Makefile : default makefile to start the CMake configuration process. In case you use ROS, this file is rosmake compatible.
    • CMakeLists.txt : specifies what gets built (documented in the file itself). See also our RTT Cheat Sheet
    • src/myrobot-component.cpp : template for the Myrobot component. Note that this C++ class is capitalized ('M') while the project name is lower-case ('m').
    • src/myrobot-types.hpp : put the structs/classes of communicated data in here. Restrictions apply to what such a class/struct may look like. See the Typegen manual and the Orocos typekits manual.
    • typekit/ : This directory is generated by the cmake process and is under control of typegen. It contains generated code for all data found in the myrobot-types.hpp file or any other header listed in the CMakeLists.txt file.
    • Other files are examples for services and plugins and not for novice users.

    Loading the component

    After the 'make install' step, make sure that your RTT_COMPONENT_PATH includes the installation directory (or that you used -DCMAKE_INSTALL_PREFIX=orocos) and then start the deployer for your platform:

    $ deployer-gnulinux
       Switched to : Deployer
     
      This console reader allows you to browse and manipulate TaskContexts.
      You can type in an operation, expression, create or change variables.
      (type 'help' for instructions and 'ls' for context info)
     
        TAB completion and HISTORY is available ('bash' like)
     
    Deployer [S]> import("myrobot")
     = true
     
    Deployer [S]> displayComponentTypes
    I can create the following component types:
       Myrobot
       OCL::ConsoleReporting
       OCL::FileReporting
       OCL::HMIConsoleOutput
       OCL::HelloWorld
       OCL::TcpReporting
       OCL::TimerComponent
     = (void)
     
    Deployer [S]> loadComponent("TheRobot","Myrobot")
    Myrobot constructed !
     = true
     
    Deployer [S]> cd TheRobot
       Switched to : TheRobot
    TheRobot [S]> ls
     
     Listing TaskContext TheRobot[S] :
     
     Configuration Properties: (none)
     
     Provided Interface:
      Attributes   : (none)
      Operations      : activate cleanup configure error getPeriod inFatalError inRunTimeError 
    isActive isConfigured isRunning setPeriod start stop trigger update 
     
     Data Flow Ports: (none)
     
     Services: 
    (none)
     
     Requires Operations :  (none)
     Requests Services   :  (none)
     
     Peers        : (none)

    Extending the component or its plugins

    You now need to consult the Component Builder's Manual for instructions on how to use and extend your Orocos component. All relevant documentation is available on the Toolchain Reference Manuals page.

    ROS package compatibility

    The generated package contains a manifest.xml file. The CMakeLists.txt file calls rosbuild_init() if ROS_ROOT has been set, and also sets the LIBRARY_OUTPUT_PATH to packagename/lib/orocos such that the ROS tools can find the libraries and the package itself. The ROS integration is mediated in the UseOrocos-RTT.cmake file, which gets included on top of the generated CMakeLists.txt file and is installed as part of the RTT. The Makefile is rosmake compatible.

    The OCL deployer knows about ROS packages and can import Orocos components (and their dependencies) from them once your ROS_PACKAGE_PATH has been correctly set.

    Installing the OROCOS Toolchain (2.8) from source on Mac OSX.

    Tested environments

    1. OSX Mountain Lion (10.8) (Completely clean environment in virtual machine)
    2. OSX Yosemite (10.10)

    Dependency installation using Macports

    Extracted from the instructions on http://www.ros.org/wiki/groovy/Installation/OSX/MacPorts/Repository

    Setup

    • Install Apple's Developer Tools (Xcode 5 or 6).
      • 10.8: After starting Xcode select "Preferences" (⌘,) >> "Downloads". In "Components" you will find Command Line Tools.
      • 10.9 and 10.10: Command Line Tools are automatically installed
    • Install MacPorts
    • Add the MacPorts binary path to PATH:

    echo 'export PATH=/opt/local/bin:/opt/local/sbin:$PATH' >> ~/.bash_profile 
    • Add the MacPorts library path to LIBRARY_PATH:

    echo 'export LIBRARY_PATH=/opt/local/lib:$LIBRARY_PATH' >> ~/.bash_profile 
    • Clone ROS-MacPorts Repository Locally and Update MacPorts Configuration
      • First checkout an initial copy of the repository:

    cd ~
    git clone https://github.com/smits/ros-macports.git
      • Now we need to tell MacPorts where the local repository clone is. Do this by adding a file:///path/to/clone/location line to /opt/local/etc/macports/sources.conf:

    sudo sh -c 'echo file:///Users/user/ros-macports >> /opt/local/etc/macports/sources.conf'
      • Sync the port index

    sudo port sync

    Resolve dependencies

    • Install and select the following ports:

    sudo port install python27
    sudo port select --set python python27
    sudo port install boost libxslt lua51 ncurses pkgconfig readline netcdf netcdf-cxx omniORB p5-xml-xpath ros-hydro-catkin py27-sip ros-hydro-cmake_modules eigen3 dyncall ruby20
    sudo port select --set nosetests nosetests27
    sudo port select --set ruby ruby20
    • Open a new terminal and verify your ruby version (should be 2.0.0) :

    ruby --version 
    • Verify if we use the right gem binary (should be /opt/local/bin/gem):

    which gem 
    sudo gem install facets nokogiri
    • Check that the Lua version is 5.1, not 5.2:

    lua -v
    • If it is 5.2, uninstall the lua port (which provides Lua 5.2):

    sudo port uninstall lua

    • Install gccxml from source:

    git clone https://github.com/gccxml/gccxml
    cd gccxml
    mkdir build
    cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=/opt/local
    make
    sudo make install

    Build and installation of the Orocos Toolchain from source

    Create a catkin workspace and get the sources

    • Create a new ROS workspace for building the orocos_toolchain, chained with our installed ROS Groovy workspace (which, in the case of a MacPorts installation, is /opt/local):

    mkdir -p ~/orocos_ws/src
    cd ~/orocos_ws/src
    sudo port install py27-wstool
    wstool init .
    curl https://gist.githubusercontent.com/smits/9950798/raw | wstool merge -
    wstool update
    cd orocos_toolchain
    git submodule foreach git checkout toolchain-2.8

    Build & install the toolchain

    • Build the new workspace (the RUBY_CONFIG_INCLUDE_DIR ends with darwin12 on OSX 10.8, darwin13 on 10.9, or darwin14 on 10.10; adjust the command accordingly)

    cd ~/orocos_ws
    source /opt/local/setup.bash
    sudo /opt/local/env.sh catkin_make_isolated --install-space /opt/orocos --install --cmake-args -DENABLE_CORBA=TRUE -DCORBA_IMPLEMENTATION=OMNIORB -DRUBY_INCLUDE_DIR=/opt/local/include/ruby-2.0.0 -DRUBY_CONFIG_INCLUDE_DIR=/opt/local/include/ruby-2.0.0/x86_64-darwin13 -DRUBY_LIBRARY=/opt/local/lib/libruby2.0.dylib -DCMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH;/opt/local"

    Use the toolchain

    • Source the setup.bash script from the orocos install space:

    source /opt/orocos/setup.bash
    • To make sure that typegen uses the right compiler for gccxml add the following export to your ~/.bash_profile:

    echo 'export GCCXML_COMPILER=g++-mp-4.3' >> ~/.bash_profile
    • You should now be able to run all the tools, such as the deployer, typegen, rttlua and orocreate-pkg

    Toolchain Tutorials

    RTT 2.0 Tutorial

    A walkthrough of how Orocos components can be built using the RTT. Your main companion is the Orocos TaskBrowser which allows you to interactively setup and interact with an Orocos application.

    These exercises are hosted on GitHub.

    You need to have the Component Builder's Manual (see Toolchain Reference Manuals) at hand to complete these exercises.

    Connecting and Running Components

    Setting up applications using the deployer application

    Also take a look at the Toolchain Reference Manuals for in-depth explanations of the deployment XML format and the different transports (CORBA, MQueue).

    Using the Logging (log4cpp) infrastructure

    Links to Orocos components

    Configuring and Starting Components from an Orocos Script

    Purpose

    To start an Orocos application without writing a single XML file. Note: this syntax is only possible from RTT 2.2.0 on; pre-2.2.0 versions only support scripts in 'program' blocks.

    You'll need to have the Scripting Chapter of the Component Builder's Manual at hand for clarifications on syntax and execution semantics.

    How it works

    We write one or more scripts that locate the components on the filesystem, create them in the application and connect and configure them. We use the DeploymentComponent's scripting API to do all this, instead of using a DeploymentComponent XML file.

    How it's done

    Create a new file 'startup.ops' ('ops' stands for Orocos Program Script) and write this code:

    path("/opt/orocos/lib/orocos") // Path to where components are located   [1]
    import("myproject")            // imports a specific project in the path [2]
    import("ocl")                  // imports ocl from the path
    require("print")               // loads the 'print' service globally.    [3]
     
    loadComponent("HMI1","OCL::HMIComponent") // create a new HMI component [4]
    loadComponent("Controller1","MyProjectController") // create a new controller
    loadComponent("Test1","TaskContext")      // creates an empty test component

    You can test this code by doing:

    deployer-gnulinux -s startup.ops
    OR, at the taskbrowser prompt:
    deployer-gnulinux
    ...
    Deployer [S]> help runScript 
     
     runScript( string const& File ) : bool
       Runs a script.
       File : An Orocos program script.
    Deployer[S]> runScript("startup.ops")

    The first line of startup.ops ([1]) extends the standard search path for components. Every component library directly in a path directory will be discovered by this statement, but the paths are not searched recursively. For loading components in subdirectories of a path directory, use the import statement. In our example, it will look for the myproject and ocl directories in the component path. All libraries and plugins in these directories will be loaded as well.
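
    The difference between the two search rules can be mimicked with plain shell commands (the directory and file names below are invented):

```shell
# path() scans only the top level of each directory it is given;
# import("myproject") additionally loads the libraries inside that subdirectory.
mkdir -p /tmp/cpath-demo/myproject
touch /tmp/cpath-demo/toplevel-component.so
touch /tmp/cpath-demo/myproject/sub-component.so

# What a path("/tmp/cpath-demo") statement would discover (non-recursive):
find /tmp/cpath-demo -maxdepth 1 -name '*.so'

# What an additional import("myproject") would load:
find /tmp/cpath-demo/myproject -maxdepth 1 -name '*.so'
```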

    After importing, we can create components using loadComponent ([4]). The first argument is the name of the component instance, the second argument is the class type of the component. When these lines are executed, 3 new components have been created: HMI1, Controller1 and Test1.

    Finally, the line require("print") loads the printing service globally such that your script can use the 'print.ln("text")' function. See help print in the TaskBrowser after you typed require("print").

    Now extend the script to include the lines below. They create connection policy objects and connect ports between components.

    // See the Doxygen API documentation of RTT for the fields of this struct:
    var ConnPolicy cp_1
    // set the fields of cp_1 to an application-specific value:
    cp_1.type = BUFFER  // Use ''BUFFER'' or ''DATA''
    cp_1.size = 10      // size of the buffer
    cp_1.lock_policy = LOCKED // Use  ''LOCKED'', ''LOCK_FREE'' or ''UNSYNC''
    // other fields exist too...
     
    // Start connecting ports:
    connect("HMI1.positions","Controller1.positions", cp_1)
    cp_1 = ConnPolicy() // reset to defaults (DATA, LOCK_FREE)
    connect("HMI1.commands","Controller1.commands", cp_1)
    // etc...

    Connecting data ports is done using ConnPolicy structs that describe the properties of the connection to be formed. You may re-use the ConnPolicy variable, or create new ones for each connection you form. The Component Builder's Manual has more details on how the ConnPolicy struct influences how connections are configured.

    Finally, we configure and start our components:

    if ( HMI1.configure() == false )
       print.ln("HMI1 configuration failed!")
    else {
       if ( Controller1.configure() == false )
          print.ln("Controller1 configuration failed!")
       else {
          HMI1.start()
          Controller1.start()
       }
    }

    Advanced configuration using a State Machine

    For more complex scenarios, we can put our logic in a state machine that starts/stops/configures components as we go:
    StateMachine SetupShutdown {
        var bool do_cleanup = false, could_config = false;
        initial state setup {
               entry {
                    // Configure components
                    could_config = HMI1.configure() && Controller1.configure();
                    if (could_config) {
                        HMI1.start();
                        Controller1.start();
                    }
               }
               transitions { 
                    if do_cleanup then select shutdown;
                    if could_config == false then select failure;
               }
        }
     
        state failure {
               entry {
                    print.ln("Failed to configure a component!")
               }
        }
     
        final state shutdown {
               entry {
                    // Cleanup B group
                    HMI1.stop() ; Controller1.stop();
                    HMI1.cleanup() ; Controller1.cleanup();
               }
        }
    }
    RootMachine SetupShutdown deployApp;
     
    deployApp.activate()
    deployApp.start()

    State machines are explained in detail in the Scripting Chapter of the Component Builder's Manual.

    Connecting ports of components distributed with CORBA

    Purpose

    Connecting an output port of one component with an input port of another component, where both components are distributed using the CORBA deployer application, deployer-corba.

    How it works

    Connecting data flow ports of components is done by defining connections (see Naming connections ). When components are distributed using the CORBA deployment component, you need to declare a proxy component in one of the deployers and connect to a port of that proxy. You only need to setup a specific connection in one XML file, the other XML files don't need to repeat the same information.

    How it's done

    This is your first XML file, for component A. We state that it runs as a server and that it registers its name with the Naming Service. (See also Using CORBA and the CORBA transport reference manual for setting up naming services.)

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <struct name="ComponentA" type="HMI">
        <simple name="Server" type="boolean"><value>1</value></simple>
        <simple name="UseNamingService" type="boolean"><value>1</value></simple>
      </struct>
    </properties>

    Save this in component-a.xml and start it with: deployer-corba -s component-a.xml

    This is your second XML file, for component B. It has one port, cartesianPosition_desi, which we add to a connection named cartesianPosition_desi_conn. Next, we declare a 'proxy' for the Component A we created above and do the same for its port: we add it to the connection named cartesianPosition_desi_conn.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <struct name="ComponentB" type="Controller">
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>cartesianPosition_desi_conn</value></simple>
        </struct> 
      </struct>
     
      <!-- ComponentA is looked up using the 'CORBA' naming service -->
      <struct name="ComponentA" type="CORBA">
          <!-- We add ports of A to the connection -->
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition" type="string">
            <value>cartesianPosition_desi_conn</value></simple>
        </struct> 
      </struct>
    </properties>

    Save this file as component-b.xml and start it with deployer-corba -s component-b.xml

    When component-b.xml is started, the port connections will be created. When ComponentA exits and restarts, ComponentB will not notice this, and you'll need to re-run the component-b.xml deployment as well. Use a streaming-based protocol (ROS, POSIX MQueue) in case you want to be more robust against such situations.

    Alternative way to do the same

    You can also form the connections in a third xml file, and make both components servers like this:

    Starting ComponentA:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <struct name="ComponentA" type="HMI">
        <simple name="Server" type="boolean"><value>1</value></simple>
        <simple name="UseNamingService" type="boolean"><value>1</value></simple>
      </struct>
    </properties>

    Save this in component-a.xml and start it with: cdeployer -s component-a.xml

    Starting ComponentB:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <struct name="ComponentB" type="Controller">
        <simple name="Server" type="boolean"><value>1</value></simple>
        <simple name="UseNamingService" type="boolean"><value>1</value></simple>
      </struct>
    </properties>

    Save this in component-b.xml and start it with: cdeployer -s component-b.xml

    Creating two proxies, and the connection:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <!-- ComponentA is looked up using the 'CORBA' naming service -->
      <struct name="ComponentA" type="CORBA">
          <!-- We add ports of A to the connection -->
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition" type="string">
            <value>cartesianPosition_desi_conn</value></simple>
        </struct> 
      </struct>
     
      <!-- ComponentB is looked up using the 'CORBA' naming service -->
      <struct name="ComponentB" type="CORBA">
          <!-- We add ports of B to the connection -->
        <struct name="Ports" type="PropertyBag">
          <simple name="cartesianPosition_desi" type="string">
            <value>cartesianPosition_desi_conn</value></simple>
        </struct> 
      </struct>
    </properties>

    Save this in connect-components.xml and start it with: deployer-corba -s connect-components.xml

    Further Reading

    See deployer and CORBA related Toolchain Reference Manuals.

    Setting up the RTT 2.4 exercises on Ubuntu

    RTT Exercises Installation with ROS

    These instructions are meant for the Orocos Toolchain version 2.4.0 or later.

    Installation

    • Install ROS Electric using the Debian packages for Ubuntu Lucid (10.04) or later. In case you don't run Ubuntu, you can use the ROS install scripts. See the ROS installation instructions.
      • Make sure the following debian packages are installed: ros-electric-rtt-ros-integration ros-electric-rtt-ros-comm ros-electric-rtt-geometry ros-electric-rtt-common-msgs ros-electric-pr2-controllers ros-electric-pr2-simulator ros-electric-joystick-drivers ruby
    • Create a directory in which you want to install all the workshops source (for instance training)

    mkdir ~/training

    • Add this directory to your $ROS_PACKAGE_PATH

    export ROS_PACKAGE_PATH=~/training:$ROS_PACKAGE_PATH

    • Get rosinstall

    sudo apt-get install python-setuptools
    sudo easy_install -U rosinstall

    • Get the rosinstall file and save it as orocos_exercises.rosinstall in the training folder.
    • Run rosinstall

    rosinstall ~/training orocos_exercises.rosinstall /opt/ros/electric

    • As rosinstall tells you, source the setup script

    source ~/training/setup.bash

    • Install all dependencies (ignore warnings)

    rosdep install youbot_common
    rosdep install rFSM

    • Compile the dependencies

    rosmake youbot_common rtt_dot_service rttlua_completion

    Setup

    • Add the following functions in your $HOME/.bashrc file:

    useOrocos(){
        source $HOME/training/setup.bash;
        source $HOME/training/setup.sh;
        source /opt/ros/electric/stacks/orocos_toolchain/env.sh;
        setLUA;
    }
     
    setLUA(){
        if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
        if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
        export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
        export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
        export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
    }
     
    useOrocos

    Testing

    You can test your setup after all the above steps by opening a new terminal and doing:
      roscd hello-1-task-execution
      make
      rosrun ocl deployer-gnulinux -s start.ops

    Using the Taskbrowser

    Creating a variable

    Created variables are listed as attributes of the component in which you created them.
    • simple types (int, double...)

    var double a
    a=1.1
    • arrays, e.g. of size 2

    var float64[] b(2)
    b[0]=4.4

    CMake and building

    Your CMakeLists.txt files will be created by the orocreate-pkg tool. However, when you need to tune your build, you'll need more information which you can find below.

    Finding RTT and its plugins

    In order to locate RTT or one of the plugins on your filesystem, you need to modify the find_package(OROCOS-RTT REQUIRED) command to find_package(OROCOS-RTT REQUIRED <pluginlibname>). For example:
    find_package(OROCOS-RTT REQUIRED rtt-marshalling)
    # Defines: ${OROCOS-RTT_RTT-MARSHALLING_LIBRARY} and ${OROCOS-RTT_RTT-MARSHALLING_FOUND}

    pre-2.3.2: You may only call find_package(OROCOS-RTT ... ) once. Next calls to this macro will return immediately, so you need to specify all plugins up-front. RTT versions from 2.3.2 on don't have this limitation.
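For those pre-2.3.2 versions, a single up-front call might look like this sketch (the particular plugin set shown is illustrative):

```cmake
# Pre-2.3.2: request all needed plugins in ONE find_package call,
# since subsequent calls return immediately without effect.
find_package(OROCOS-RTT REQUIRED rtt-scripting rtt-marshalling)
# Defines, among others:
#  ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY}   / ${OROCOS-RTT_RTT-SCRIPTING_FOUND}
#  ${OROCOS-RTT_RTT-MARSHALLING_LIBRARY} / ${OROCOS-RTT_RTT-MARSHALLING_FOUND}
```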

    Using find_package(OROCOS-RTT)

    After find_package has found the RTT and its plugins, you must explicitly use the created CMake variables for them to take effect. This typically looks like:

    # Link all targets AFTER THIS LINE with 'rtt-scripting' COMPONENT:
    if ( OROCOS-RTT_RTT-SCRIPTING_FOUND )
      link_libraries( ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY} )
    else( OROCOS-RTT_RTT-SCRIPTING_FOUND )
      message(SEND_ERROR "'rtt-scripting' not found !")
    endif( OROCOS-RTT_RTT-SCRIPTING_FOUND )
     
    # now define your components, libraries etc...
     
    # ...
     
    # Preferred way to link, instead of the above method:
    target_link_libraries( mycomponent ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY})

    Or for linking with the standard provided CORBA transport:

    # Link all targets AFTER THIS LINE with the CORBA transport (detected by default!) :
    if ( OROCOS-RTT_CORBA_FOUND )
      link_libraries( ${OROCOS-RTT_CORBA_LIBRARIES} )
    else( OROCOS-RTT_CORBA_FOUND )
      message(SEND_ERROR "'CORBA' transport not found !")
    endif( OROCOS-RTT_CORBA_FOUND )
     
    # now define your components, libraries etc...
     
    # ...
     
    # Preferred way to link, instead of the above method:
    target_link_libraries( mycomponent ${OROCOS-RTT_CORBA_LIBRARIES})

    Using other packages for building and linking (orocos_use_package)

    Orocos has a system which lets you specify which packages you want to use for including headers and linking with their libraries. Orocos will always get these flags from a pkg-config .pc file, so in order to use this system, check that the package you want to depend on provides such a .pc file.

    If the package or library you want to use has a .pc file, you can directly use this macro:

    # The CORBA transport provides a .pc file 'orocos-rtt-corba-<target>.pc':
    orocos_use_package( orocos-rtt-corba )
     
    # Link with the OCL Deployment component:
    orocos_use_package( ocl-deployment )
     
    # now define your components, libraries etc...

    This macro has a similar effect as putting the dependency in your manifest.xml file: it sets the include paths and links your libraries, provided OROCOS_NO_AUTO_LINKING is not defined in CMake (the default). Some packages (like OCL) define multiple .pc files, in which case you can put the ocl dependency in the manifest.xml file and use orocos_use_package() to select a specific ocl .pc file.

    If the argument to orocos_use_package() is a real package, it is advised to put the dependency in the manifest.xml file, such that the build system can use that information for dependency tracking. In case it is a library as a part of a package (in this case: CORBA is a sub-library of the 'rtt' package), you should put rtt as a dependency in the manifest.xml file, and orocos-rtt-corba with the orocos_use_package macro as shown above.
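As a sketch of this advice (the package name my_component and its description are hypothetical), the manifest.xml carries only the package-level dependency:

```xml
<!-- manifest.xml: depend on the 'rtt' package, not on its sub-library -->
<package>
  <description brief="my_component">Example Orocos component package</description>
  <depend package="rtt"/>
</package>
```

The CMakeLists.txt then selects the CORBA sub-library with orocos_use_package( orocos-rtt-corba ), as shown above.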

    find_package(OROCOS-RTT) syntax

    The Orocos-RTT find macro has this API (copied from the orocos-rtt.config.cmake file of RTT):
    ##################################################################################
    #
    # CMake package configuration file for the OROCOS-RTT package.
    # This script imports targets and sets up the variables needed to use the package.
    # In case this file is installed in a nonstandard location, its location can be 
    # specified using the OROCOS-RTT_DIR cache
    # entry.
    #
    # find_package COMPONENTS represent OROCOS-RTT plugins such as scripting,
    # marshalling or corba-transport.
    # The default search path for them is:
    #  /path/to/OROCOS-RTTinstallation/lib/orocos/plugins
    #  /path/to/OROCOS-RTTinstallation/lib/orocos/types
    #
    # For this script to find user-defined OROCOS-RTT plugins, the RTT_COMPONENT_PATH 
    # environment variable should be appropriately set. E.g., if the plugin is located 
    # at /path/to/plugins/libfoo-plugin.so, then add /path/to to RTT_COMPONENT_PATH
    #
    # This script sets the following variables:
    #  OROCOS-RTT_FOUND: Boolean that indicates if OROCOS-RTT was found
    #  OROCOS-RTT_INCLUDE_DIRS: Paths to the necessary header files
    #  OROCOS-RTT_LIBRARIES: Libraries to link against to use OROCOS-RTT
    #  OROCOS-RTT_DEFINITIONS: Definitions to use when compiling code that uses OROCOS-RTT
    #
    #  OROCOS-RTT_PATH: Path of the RTT installation directory (its CMAKE_INSTALL_PREFIX).
    #  OROCOS-RTT_COMPONENT_PATH: The component path of the installation 
    #                             <prefix>/lib/orocos + RTT_COMPONENT_PATH
    #  OROCOS-RTT_PLUGIN_PATH: OROCOS-RTT_PLUGINS_PATH + OROCOS-RTT_TYPES_PATH
    #  OROCOS-RTT_PLUGINS_PATH: The plugins path of the installation 
    #                           <prefix>/lib/orocos/plugins + RTT_COMPONENT_PATH * /plugins
    #  OROCOS-RTT_TYPES_PATH: The types path of the installation 
    #                         <prefix>/lib/orocos/types + RTT_COMPONENT_PATH * /types
    #
    #  OROCOS-RTT_CORBA_FOUND: Defined if corba transport support is available
    #  OROCOS-RTT_CORBA_LIBRARIES: Libraries to link against to use the corba transport
    #
    #  OROCOS-RTT_MQUEUE_FOUND: Defined if mqueue transport support is available
    #  OROCOS-RTT_MQUEUE_LIBRARIES: Libraries to link against to use the mqueue transport
    #
    #  OROCOS-RTT_VERSION: Package version
    #  OROCOS-RTT_VERSION_MAJOR: Package major version
    #  OROCOS-RTT_VERSION_MINOR: Package minor version
    #  OROCOS-RTT_VERSION_PATCH: Package patch version
    #
    #  OROCOS-RTT_USE_FILE_PATH: Path to package use file, so it can be included like so
    #                            include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)
    #  OROCOS-RTT_USE_FILE     : Allows you to write: include( ${OROCOS-RTT_USE_FILE} )
    #
    # This script additionally sets variables for each requested 
    # find_package COMPONENTS (OROCOS-RTT plugins).
    # For example, for the ''rtt-scripting'' plugin this would be:
    #  OROCOS-RTT_RTT-SCRIPTING_FOUND: Boolean that indicates if the component was found
    #  OROCOS-RTT_RTT-SCRIPTING_LIBRARY: Libraries to link against to use this component 
    #                                    (Notice singular _LIBRARY suffix !)
    #
    # Note for advanced users: Apart from the OROCOS-RTT_*_LIBRARIES variables, 
    # non-COMPONENTS targets can be accessed by their imported name, e.g., 
    # target_link_libraries(bar @IMPORTED_TARGET_PREFIX@orocos-rtt-gnulinux_dynamic).
    # This of course requires knowing the name of the desired target, which is why using 
    # the OROCOS-RTT_*_LIBRARIES variables is recommended.
    #
    # Example usage:
    #  find_package(OROCOS-RTT 2.0.5 EXACT REQUIRED rtt-scripting foo) 
    #               # Defines OROCOS-RTT_RTT-SCRIPTING_*
    #  find_package(OROCOS-RTT QUIET COMPONENTS rtt-transport-mqueue foo)
    #               # Defines OROCOS-RTT_RTT-TRANSPORT-MQUEUE_* 
    #
    ##################################################################################

    Orocos CMakeLists.txt Example

    This example is identical to the result of running:
    orocreate-pkg example

    You may remove most of the code/statements that you don't use. Only the most common CMake macros are left uncommented, indicating which ones you will almost certainly need when building a component:

    #
    # The find_package macro for Orocos-RTT works best with
    # cmake >= 2.6.3
    #
    cmake_minimum_required(VERSION 2.6.3)
     
    #
    # This creates a standard cmake project. You may extend this file with
    # any cmake macro you see fit.
    #
    project(example)
     
     
    # Set the CMAKE_PREFIX_PATH in case you're not using Orocos through ROS
    # for helping these find commands find RTT.
    find_package(OROCOS-RTT REQUIRED ${RTT_HINTS})
     
    # Defines the orocos_* cmake macros. See that file for additional
    # documentation.
    include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)
     
    #
    # Components, types and plugins.
    #
    # The CMake 'target' names are identical to the first argument of the
    # macros below, except for orocos_typegen_headers, where the target is fully
    # controlled by generated code of 'typegen'.
    #
     
     
    # Creates a component library libexample-<target>.so
    # and installs in the directory lib/orocos/example/
    #
    orocos_component(example example-component.hpp example-component.cpp) # ...you may add multiple source files
    #
    # You may add multiple orocos_component statements.
     
    #
    # Building a typekit (recommended):
    #
    # Creates a typekit library libexample-types-<target>.so
    # and installs in the directory lib/orocos/example/types/
    #
    #orocos_typegen_headers(example-types.hpp) # ...you may add multiple header files
    #
    # You may only have *ONE* orocos_typegen_headers statement !
     
    #
    # Building a normal library (optional):
    #
    # Creates a library libsupport-<target>.so and installs it in
    # lib/
    #
    #orocos_library(support support.cpp) # ...you may add multiple source files
    #
    # You may add multiple orocos_library statements.
     
     
    #
    # Building a Plugin or Service (optional):
    #
    # Creates a plugin library libexample-service-<target>.so or libexample-plugin-<target>.so
    # and installs in the directory lib/orocos/example/plugins/
    #
    # Be aware that a plugin may only have the loadRTTPlugin() function once defined in a .cpp file.
    # This function is defined by the plugin and service CPP macros.
    #
    #orocos_service(example-service example-service.cpp) # ...only one service per library !
    #orocos_plugin(example-plugin example-plugin.cpp) # ...only one plugin function per library !
    #
    # You may add multiple orocos_plugin/orocos_service statements.
     
     
    #
    # Additional headers (not in typekit):
    #
    # Installs in the include/orocos/example/ directory
    #
    orocos_install_headers( example-component.hpp ) # ...you may add multiple header files
    #
    # You may add multiple orocos_install_headers statements.
     
    #
    # Generates and installs our package. Must be the last statement such
    # that it can pick up all above settings.
    #
    orocos_generate_package()

    LuaCookbook

    Table of Contents 
    1. Important!
    2. What is this RTT-Lua stuff anyway?
    3. Getting started
      1. Compiling
      2. Setting up the path to rttlib
      3. Starting rttlua
      4. Loading rttlib.lua
      5. Basic commands (read this!)
      6. Where's my TaskContext?
      7. rttlua tab completion
      8. Getting persistent history with rlwrap
      9. Editing Lua Code
    4. Dataflow
      1. Creating Ports
      2. Adding Ports to the TaskContext Interface
      3. Connecting Ports
    5. RTT Types and Typekits
      1. Which types are available?
      2. Creating RTT types
      3. Accessing global RTT constants
      4. Convenient initialization of multi-field types
      5. Initialization of array/sequence types
    6. Properties
      1. Creating
      2. Adding to TaskContext Interface
      3. Getting a Property from a TaskContext
      4. Properties of basic types: setting the value
      5. Properties of basic types: getting the value
      6. Properties of complex types: accessing
      7. Removing
    7. Operations
      1. Calling Operations
      2. Sending Operations
      3. Can I define new Operations from Lua?
      4. Is it possible to call a connected OperationCaller?
    8. Services
      1. Loading and Using
      2. What Operations and Ports are provided by a Service?
      3. Accessing the Global Service
    9. Activities
    10. Basic usage patterns
      1. How to write a deployment script
      2. How to write a RTT-Lua component
      3. Automatically creating and cleaning up component interfaces
      4. How to write a RTT-Lua Service
      5. How to perform runtime system validation?
    11. Using rFSM Statecharts with RTT
      1. Where to run a statemachine: Component vs. Service?
      2. How to run an rFSM in a Component
      3. Running rFSM in a Service
      4. Replacing states, functions and transitions of an existing FSM model
      5. One-liner to build a table of peers
    12. Miscellaneous
      1. Connecting RTT Ports to ROS topics
      2. Finding the path to a ROS package
      3. How are types converted between RTT and Lua?
      4. How to add a custom pretty printing function for a new type?
      5. How to use classical OCL Deployers ? (like with Corba, or with a Taskbrowser)
      6. How to generate graphical representations of rFSM models
      7. Script to generate default CPF file for a component
      8. Memory management: what is automatically garbage collected?
      9. Where to find further information?
    13. License
    14. Roadmap

    This page documents both basic and advanced use of the RTT Lua bindings by example. More formal API documentation is available here.

    Important!

    As of Orocos toolchain-2.6, the deployment component launched by rttlua has been renamed from deployer to Deployer. This removes the differences between the classical deployer and rttlua and facilitates portable deployment scripts. This page has been updated to use the new, uppercase name. If you are using an Orocos toolchain version prior to 2.6, use "deployer" instead.

    What is this RTT-Lua stuff anyway?

    Lua is a simple, small and efficient scripting language. The Lua RTT bindings provide access to most of the RTT API from the Lua language. Use-cases are:

    • writing deployment scripts in Lua
    • writing Lua components and services
    • using the rFSM Statecharts with RTT

    To this end RTT-Lua consists of:

    • a Lua scriptable taskbrowser (rttlua-gnulinux etc. binaries)
    • a standard RTT Component which can be scripted with Lua
    • an RTT service which can extend existing components with Lua scripting

    Most information here is valid for all three approaches; if not, this is explicitly mentioned. The listings are shown as interactively entered into the rttlua REPL (read-eval-print loop), but could just as well be stored in a script file.

    Getting started

    Compiling

    Currently, RTT-Lua is part of OCL. It is enabled by default but will only be built if the Lua-5.1 dependency (Debian: liblua5.1-0-dev, liblua5.1-0, lua5.1) is found.

    CMake options:

    • BUILD_LUA_RTT: enable this to build the rttlua shell, the Lua component, and the Lua plugin.
    • BUILD_LUA_RTT_DYNAMIC_MODULES: (EXPERIMENTAL) build RTT and deployer as pure Lua plugins. Not recommended unless you know what you are doing.
    • BUILD_LUA_TESTCOMP: build a simple test component used for testing the bindings. Not required for normal operation.
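Assuming an out-of-source OCL build directory, these options could be toggled on the cmake command line as in this sketch (the source path is hypothetical):

```shell
# Enable the rttlua shell, Lua component and Lua plugin;
# leave the experimental and test-only options off.
cmake ~/src/ocl \
  -DBUILD_LUA_RTT=ON \
  -DBUILD_LUA_RTT_DYNAMIC_MODULES=OFF \
  -DBUILD_LUA_TESTCOMP=OFF
```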

    Setting up the path to rttlib

    rttlib.lua is a Lua module that is not strictly required, but loading it is highly recommended, as it adds various syntactic shortcuts and pretty printing (many examples on this page will not work without it!). The easiest way to load it is to set up the LUA_PATH variable:

    export LUA_PATH=";;$HOME/src/git/orocos/ocl/lua/modules/?.lua"

    If you are an orocos_toolchain_ros user and do not want to hardcode the path like this, you can source the following script in your .bashrc:

    #!/bin/bash
    RTTLUA_MODULES=`rospack find ocl`/lua/modules/?.lua
    if [ "x$LUA_PATH" == "x" ]; then
        LUA_PATH=";;"  # ';;' stands for the default Lua search path
    fi
    export LUA_PATH="$LUA_PATH;$RTTLUA_MODULES"

    Starting rttlua

    $ ./rttlua-gnulinux
    OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
    >

    or for orocos_toolchain_ros users:

    $ rosrun ocl rttlua-gnulinux
    OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
    >

    Now we have a Lua REPL that is enhanced with RTT-specific functionality. In the following, RTT-Lua code is indicated by a ">" prompt, while shell commands are shown with the typical "$".

    Loading rttlib.lua

    Before doing anything it is recommended to load rttlib. Like any Lua module this can be done with the require statement. For example:

    $ ./rttlua-gnulinux
    OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
    > require("rttlib")
    >

    As it is annoying to type this each time, the loading can be automated by putting it in the ~/.rttlua dot file. This (Lua) file is executed on startup of rttlua:

    require("rttlib")
    rttlib.color=true

    The (optional) last line enables colors.

    Basic commands (read this!)

    • rttlib.stat() Print information about component instances and their state

    > rttlib.stat()
    Name                State               isActive  Period         
    lua                 PreOperational      true      0              
    Deployer            Stopped             true      0        

    •  rttlib.info() Print information about available components, types and services

    > rttlib.info()
    services:   marshalling scripting print LuaTLSF Lua os
    typekits:   rtt-corba-types rtt-mqueue-transport rtt-types OCLTypekit
    types:      ConnPolicy FlowStatus PropertyBag SendHandle SendStatus TaskContext array bool
                bools char double float int ints rt_string string strings uint void
    comp types: OCL::ConsoleReporting OCL::FileReporting OCL::HMIConsoleOutput OCL::HelloWorld
                OCL::LuaComponent OCL::LuaTLSFComponent OCL::TcpReporting
    ...

    Where's my TaskContext?

    Here:

    > tc = rtt.getTC()

    The above code calls the getTC() function, which returns the current TaskContext and stores it in the variable 'tc'. To show the interface, just write =tc. In the REPL the equals sign is a shortcut for 'return', which in turn causes the variable to be printed. (This works for displaying any variable.)

    > =tc
    TaskContext: lua
    state: PreOperational
    isActive: true
    getPeriod: 0
    peers: Deployer
    ports: 
    properties:
       lua_string (string) =  // string of lua code to be executed during configureHook
       lua_file (string) =  // file with lua program to be executed during configuration
    operations:
       bool exec_file(string const& filename) // load (and run) the given lua script
       bool exec_str(string const& lua-string) // evaluate the given string in the lua environment

    Since rttlua beta5, the above does not print the standard TaskContext operations anymore. To print these, use tc:show().

    rttlua tab completion

    (Yes, you really want this)

    Get it here. Checkout the README for the (simple) compilation and setup.

    Getting persistent history with rlwrap

    rttlua does not offer persistent history as the taskbrowser does. If you want it, you can use rlwrap to wrap rttlua as follows:

    alias rttlua='rlwrap -a -r -H ~/.rttlua-history rttlua-gnulinux'

    If you now run 'rttlua', it will have persistent history.

    Editing Lua Code

    Most modern editors provide basic syntax highlighting for Lua code.

    Dataflow

    The following shows the basic API; see section Automatically creating and cleaning up component interfaces for a more convenient way to add/remove ports and properties.

    Creating Ports

    > pin = rtt.InputPort("string")
    > pout = rtt.OutputPort("string")
    > =pin
    [in, string, unconn, local] //
    > =pout
    [out, string, unconn, local] //

    Both In- and OutputPorts optionally take a second string argument (name) and third argument (description).
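For example, the name and description can be passed at construction time instead of when calling addPort (a sketch, to be typed into rttlua; the port name and description are made up):

```lua
-- create a named, documented input port directly
pin = rtt.InputPort("string", "inport1", "receives status messages")
```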

    Adding Ports to the TaskContext Interface

    > tc:addPort(pin)
    > tc:addPort(pout, "outport1", "string outport that contains latest X")
    > =tc -- print tc interface to confirm it is there.

    Connecting Ports

    Directly

    For this the ports don't have to be added to the TaskContext:

    > =pin:connect(pout)
    true
    > return pin
    [in, string, conn, local] //
    > return pout
    [out, string, conn, local] //
    >

    Using the Deployer

    The rttlua-* REPL automatically creates a deployment component that is a peer of the lua taskcontext:

    > tc = rtt.getTC()
    > depl = tc:getPeer("Deployer")
    > cp=rtt.Variable("ConnPolicy")
    > =cp
    {data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
    > depl:connect("compA.port1","compB.port2", cp)

    RTT Types and Typekits

    Which types are available?

    > rttlib.info()
    services:       marshalling, scripting, print, os, Lua
    typekits:       rtt-types, rtt-mqueue-transport, OCLTypekit
    types:          ConnPolicy, FlowStatus, PropertyBag, SendHandle, SendStatus, TaskContext,
                    array, bool, bools, char, double, float, int, ints, rt_string, string, strings, uint, void
    comp types:     OCL::ConsoleReporting, OCL::FileReporting, OCL::HMIConsoleOutput, 
                    OCL::HelloWorld, OCL::LuaComponent, OCL::TcpReporting, OCL::TimerComponent,
                    OCL::logging::Appender, OCL::logging::FileAppender,
                    OCL::logging::LoggingService, OCL::logging::OstreamAppender, TaskContext

    Creating RTT types

    > cp = rtt.Variable("ConnPolicy")
    > =cp
    {data_size=0,type="DATA",name_id="",init=false,pull=false,transport="default",lock_policy="LOCK_FREE",size=0}
    > cp.data_size = 4711
    > print(cp.data_size)
    4711

    Accessing global RTT constants

    Printing the available constants:

    > =rtt.globals
    {SendNotReady=SendNotReady,LOCK_FREE=2,NewData=NewData,SendFailure=SendFailure,\
    SendSuccess=SendSuccess,NoData=NoData,UNSYNC=0,LOCKED=1,OldData=OldData,BUFFER=1,DATA=0}
    > 

    Accessing constants - just index!

    > =rtt.globals.LOCK_FREE
    2

    Convenient initialization of multi-field types

    It is cumbersome to initialize complex types with many subfields:

    > tc = rtt.getTC()
    > depl = tc:getPeer("Deployer")
    > depl:import("kdl_typekit")
    > t=rtt.Variable("KDL.Frame")
    > =t
    {M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}}
    > t.M.X_x=3
    > t.M.Y_x=2
    > t.M.Z_x=2.3
    ...

    To avoid this, use the fromtab() method:

    > t:fromtab({M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}})

    or even shorter using the table-call syntax of Lua,

    > t:fromtab{M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}}

    Initialization of array/sequence types

    When you create an RTT array type, its initial length is zero. You must set the length of an array before you can assign elements to it (starting from toolchain-2.5, fromtab will do this automatically):

    > ref=rtt.Variable("array")
    > ref:resize(3)
    > ref:fromtab{1,1,10}
    > print(ref) -- prints {1,1,10}
    ...

    Properties

    Creating

    > p1=rtt.Property("double", "p-gain", "Proportional controller gain")

    (Note: the second and third arguments (name and description) are optional and can also be set when adding the property to a TaskContext.)

    Adding to TaskContext Interface

    > tc=rtt.getTC()
    > tc:addProperty(p1)
    > =tc -- check it is there...

    Getting a Property from a TaskContext

    > tc=rtt.getTC()
    > pgain = tc:getProperty("pgain")
    > =pgain -- will print it

    Properties of basic types: setting the value

    > p1:set(3.14)
    > =p1  -- a property can be printed!
    p-gain (double) = 3.14 // Proportional controller gain

    In particular, the following will not work:

    > p1=3.14

    Lua works with references! This would assign the variable p1 the numeric value 3.14, and the reference to the property would be lost.

    Properties of basic types: getting the value

    > print("the value of " .. p1:info().name .. " is: " .. p1:get())
    the value of p-gain is: 3.14

    Properties of complex types: accessing

    Assume a property of type KDL::Frame. Similarly to Variables, the subfields can be accessed using the dot syntax:

    > d = tc:getPeer("Deployer")
    > d:import('kdl_typekit')
    > f=rtt.Property('KDL.Frame')
    > =f
     (KDL.Frame) = {M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}} // 
    > f.M.Y_y=3
    > =f.M.Y_y
    3
    > f.p.Y=1
    > =f
     (KDL.Frame) = {M={Z_y=0,Y_y=3,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=1,X=0,Z=0}} // 
    > 

    Like Variables, Properties feature a fromtab method to initialize a Property from values in a Lua table. See Section RTT Types and Typekits - Convenient initialization of multi-field types for details.

    Removing

    As properties are not automatically garbage collected, property memory must be managed manually:

    > tc:removeProperty("p-gain")
    > =tc         -- p-gain is gone now
    > p1:delete() -- delete property and free memory
    > =p1         -- p1 is 'dead' now.
    userdata: 0x186f8c8

    Operations

    Synchronous calling of operations from Lua:

    Calling Operations

    The short and convenient way

    > d = tc:getPeer("Deployer")
    > =d:getPeriod()
    0

    The significantly faster and real-time safe way (because locally cached)

    > d = tc:getPeer("Deployer")
    > op = d:getOperation("getPeriod")
    > =op -- can be printed!
    double getPeriod() // Get the configured execution period. -1.0: no thread ...
    > =op() -- call it
    0

    Sending Operations

    "Sending" an Operation lets you asynchronously request its execution and collect the results at a later point in time.

    > d = tc:getPeer("Deployer")
    > op = d:getOperation("getPeriod")
    > handle=op:send() -- send it
    > =handle:collect()
    SendSuccess    0

    Note:

    • collect() returns multiple values: first a SendStatus string ('SendSuccess', 'SendFailure'), followed by zero or more output arguments of the operation.
    • collect() blocks until the operation has been executed; collectIfDone() returns immediately (but possibly with 'SendNotReady').
    • If your code makes excessive use of sending operations, something in your application design is probably wrong.
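A non-blocking polling loop with collectIfDone() could look like this sketch (it reuses the handle from the listing above and assumes collectIfDone() returns the same values as collect(); to be run inside rttlua):

```lua
local handle = op:send()
-- poll without blocking; do other work until the result arrives
while true do
   local ss, period = handle:collectIfDone()
   if ss ~= 'SendNotReady' then
      print(ss, period) -- e.g. SendSuccess followed by the operation's result
      break
   end
   -- ... do other (non-blocking) work here ...
end
```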

    Can I define new Operations from Lua?

    Answer: No.

    Workaround: define a new TaskContext that inherits from LuaComponent and add the Operation there. Implement the necessary glue between C++ and Lua by hand (not hard, but some manual work required).

    Is it possible to call a connected OperationCaller?

    Answer: No (but potentially it would be easy to add. Ask on the ML).

    Services

    Loading and Using

    For example, to load the marshalling service in a component and then to use it to write a property (cpf) file:

    > tc=rtt.getTC()
    > depl=tc:getPeer("Deployer")
    > depl:loadService("lua", "marshalling") -- load the marshalling service in the lua component
    true
    > =tc:provides("marshalling"):writeProperties("props.cpf")
    true

    A second (and slightly faster) option is to get the Operation before calling it:

    > -- get the writeProperties operation ...
    > writeProps=tc:provides("marshalling"):getOperation("writeProperties")
    > =writeProps("props.cpf") -- and call it to write the properties to a file.
    true

    What Operations and Ports are provided by a Service?

    > depl:loadService("lua", "marshalling") -- load the marshalling service
    > depl:loadService("lua", "scripting") -- load the scripting service
    > print(tc:provides())
    Service: lua
       Subservices: marshalling, scripting
       Operations:  activate, cleanup, configure, error, exec_file, exec_str, getPeriod,
                    inFatalError, inRunTimeError, isActive, isConfigured, isRunning,
                    setPeriod, start, stop, trigger, update
       Ports:       
        Service: marshalling
           Subservices: 
           Operations:  loadProperties, readProperties, readProperty, storeProperties,
                        updateFile, updateProperties, writeProperties, writeProperty
           Ports:       
        Service: scripting
           Subservices: 
           Operations:  activateStateMachine, deactivateStateMachine, eval, execute, 
                        getProgramLine, getProgramList, getProgramStatus, getProgramStatusStr,
                        getProgramText, getStateMachineLine, getStateMachineList,
                        getStateMachineState, getStateMachineStatus, getStateMachineStatusStr,
                        getStateMachineText, hasProgram, hasStateMachine, inProgramError,
                        inStateMachineError, inStateMachineState, isProgramPaused, isProgramRunning,
                        isStateMachineActive, isStateMachinePaused, isStateMachineRunning,
                        loadProgramText, loadPrograms, loadStateMachineText, loadStateMachines,
                        pauseProgram, pauseStateMachine, requestStateMachineState, resetStateMachine,
                        runScript, startProgram, startStateMachine, stepProgram,
                        stopProgram, stopStateMachine, unloadProgram, unloadStateMachine
           Ports:       
    > 

    Accessing the Global Service

    The RTT Global Service is useful for loading services into your application that don't belong to a specific component. Your C++ code accesses this object by calling

    RTT::internal::GlobalService::Instance();

    The GlobalService object can be accessed in Lua using a call to:

    gs = rtt.provides()

    It allows you to load additional services into the global service:

    gs:require("os") -- or: rtt.provides():require("os")

    You can later access these again using the rtt table:

    rtt.provides("os"):argc() -- returns the number of arguments of this application
    rtt.provides("os"):argv() -- returns a string array of arguments of this application

    Activities

    You can add different types of Activities to your component:
    • periodic activity

    -- create activity for producer: period=1, priority=0,
    -- schedtype=ORO_SCHED_RT.
    depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT)
    • non-periodic activity

    -- create activity for producer: period=0, priority=0,
    -- schedtype=ORO_SCHED_RT.
    depl:setActivity("producer", 0, 0, rtt.globals.ORO_SCHED_RT)
    • master-slave activity
      • Attach a (non-)periodic activity to the master component
      • Indicate that a component is the slave of a master

    depl:setMasterSlaveActivity("name_of_master_component", "name_of_slave_component")

    Basic usage patterns

    How to write a deployment script

    (see also the example in section How to write an RTT-Lua component)

    -- deploy_app.lua
    require("rttlib")
     
    tc = rtt.getTC()
    depl = tc:getPeer("Deployer")
     
    -- import components, requires correctly setup RTT_COMPONENT_PATH
    depl:import("ocl")
    -- depl:import("componentX")
    -- import components, requires correctly setup ROS_PACKAGE_PATH (>=Orocos 2.7)
    depl:import("rtt_ros")
    rtt.provides("ros"):import("my_ros_pkg")
     
     
    -- create component 'hello'
    depl:loadComponent("hello", "OCL::HelloWorld")
     
    -- get reference to new peer
    hello = depl:getPeer("hello")
     
    -- create buffered connection of size 64
    cp = rtt.Variable('ConnPolicy')
    cp.type=1   -- type buffered
    cp.size=64  -- buffer size
    depl:connect("hello.the_results", "hello.the_buffer_port", cp)
    rtt.logl('Info', "Deployment complete!")

    run it:

    $ rttlua-gnulinux -i deploy_app.lua

    or using orocos_toolchain_ros

    $ rosrun ocl rttlua-gnulinux -i deploy_app.lua

    Note: The -i option makes rttlua enter interactive mode (the REPL) after executing the script. Without it, rttlua would exit after the script finishes, which in this case is probably not what you want.

    How to write an RTT-Lua component

    A Lua component is created by loading a Lua script that implements zero or more TaskContext hooks into an OCL::LuaComponent. The following RTT hooks are currently supported:

    •  bool configureHook()
    •  bool activateHook()
    •  bool startHook()
    •  void updateHook()
    •  void stopHook()
    •  void cleanupHook()
    •  void errorHook()

    All hooks are optional, but if implemented they must return the correct value (unless the return type is void). It is also important to declare them as globals (by not using the local keyword); otherwise they would be garbage collected and never called.
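    This can be illustrated in plain Lua, without RTT (the chunk below stands in for a component script; looking hooks up by global name is what OCL::LuaComponent effectively does):

```lua
-- Plain-Lua sketch: hooks are looked up by name in the global
-- environment, so a hook declared 'local' disappears once the
-- script chunk has finished executing.
local loadfn = loadstring or load  -- Lua 5.1 (as in rttlua) vs. 5.2+

local chunk = [=[
   local function startHook() return true end -- WRONG: lost after the chunk
   function updateHook() return "ran" end     -- OK: stored in _G
]=]

assert(loadfn(chunk))()
print(_G.startHook)     -- nil: the local hook is unreachable
print(_G.updateHook())  -- ran
```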

    The following code implements a simple consumer component with an event-triggered input port:

    require("rttlib")
    tc=rtt.getTC();
     
    -- The Lua component starts its life in PreOperational, so
    -- configureHook can be used to set stuff up.
    function configureHook()
       inport = rtt.InputPort("string", "inport")    -- global variable!
       tc:addEventPort(inport)
       cnt = 0
       return true
    end
     
    -- all hooks are optional!
    --function startHook() return true end
     
    function updateHook()
       local fs, data = inport:read()
       rtt.log("data received: " .. tostring(data) .. ", flowstatus: " .. fs)
    end
     
    -- Ports and properties are the only elements which are not
    -- automatically cleaned up. This means this must be done manually for
    -- long living components:
    function cleanupHook()
       tc:removePort("inport")
       inport:delete()
    end

    A matching producer component is shown below:

    require "rttlib"
     
    tc=rtt.getTC();
     
    function configureHook()
       outport = rtt.OutputPort("string", "outport")    -- global variable!
       tc:addPort(outport)
       cnt = 0
       return true
    end
     
    function updateHook()
       outport:write("message number " .. cnt)
       cnt = cnt + 1
    end
     
    function cleanupHook()
       tc:removePort("outport")
       outport:delete()
    end

    A deployment script to deploy these two components:

    require "rttlib"
     
    rtt.setLogLevel("Warning")
    tc=rtt.getTC()
    depl = tc:getPeer("Deployer")
     
    -- create LuaComponents
    depl:loadComponent("producer", "OCL::LuaComponent")
    depl:loadComponent("consumer", "OCL::LuaComponent")
     
    --... and get references to them
    producer = depl:getPeer("producer")
    consumer = depl:getPeer("consumer")
     
    -- load the Lua hooks
    producer:exec_file("producer.lua")
    consumer:exec_file("consumer.lua")
     
    -- configure the components (so ports are created)
    producer:configure()
    consumer:configure()
     
    -- connect ports
    depl:connect("producer.outport", "consumer.inport", rtt.Variable('ConnPolicy'))
     
    -- create activity for producer: period=1, priority=0,
    -- schedtype=ORO_SCHED_RT.
    depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT)
     
    -- raise loglevel
    rtt.setLogLevel("Debug")
     
    -- start components 
    consumer:start()
    producer:start()
     
    -- uncomment to print the component interfaces (for debugging)
    -- print(consumer)
    -- print(producer)
     
    -- sleep for 5 seconds
    os.execute("sleep 5")
     
    -- lower loglevel again
    rtt.setLogLevel("Warning")
     
    producer:stop()
    consumer:stop()

    Automatically creating and cleaning up component interfaces

    (available from toolchain-2.5)

    The function rttlib.create_if can (re-)generate a component interface from a specification, as shown below. Conversely, rttlib.tc_cleanup will remove and destroy all ports and properties again.

    -- stupid example:
    iface_spec = {
       ports={
          { name='inp', datatype='int', type='in+event', desc="incoming event port" },
          { name='msg', datatype='string', type='in', desc="incoming non-event messages" },
          { name='outp', datatype='int', type='out', desc="outgoing data port" },
       },
     
       properties={
          { name='inc', datatype='int', desc="this value is added to the incoming data each step" }
       }
    }
     
    -- this creates the interface
    iface=rttlib.create_if(iface_spec)
     
    function configureHook()
       -- it is safe to run this twice: existing ports
       -- will be ignored. Thus, running cleanup() and configure()
       -- will reconstruct the interface again.
     
       iface=rttlib.create_if(iface_spec)
       inc = iface.props.inc:get()
       return true
    end
     
    function startHook()
       -- ports/props can be indexed as follows:
       iface.ports.outp:write(1)
       return true
    end
     
    function updateHook()
       local fs, val
       fs, val = iface.ports.inp:read()
       if fs=='NewData' then iface.ports.outp:write(val+inc) end
    end
     
    function cleanupHook()
       -- remove all ports and properties
       rttlib.tc_cleanup()
    end

    How to write an RTT-Lua Service

    In contrast to Components (which typically contain standalone functionality), Services are useful for extending the functionality of existing Components. The LuaService permits executing arbitrary Lua programs in the context of a Component.

    Simple example

    The following dummy example loads the LuaService into a HelloWorld component and then runs a script that modifies a property:

    require "rttlib"
    tc=rtt.getTC()
    d = tc:getPeer("Deployer")
     
    -- create a HelloWorld component
    d:loadComponent("hello", "OCL::HelloWorld")
    hello = d:getPeer("hello")
     
    -- load Lua service into the HelloWorld Component
    d:loadService("hello", "Lua")
     
    -- Execute the following Lua script (defined as a multiline string) in
    -- the service. This dummy example simply modifies the property. For
    -- large programs it might be better to store the program in a separate
    -- file and use the exec_file operation instead.
    proggie = [[
          require("rttlib")
          tc=rtt.getTC() -- this is the Hello Component
          prop = tc:getProperty("the_property")
          prop:set("hullo from the lua service!")
    ]]
     
    prop = hello:getProperty("the_property") -- get hello.the_property
    print("the_property before service call:", prop)
    hello:provides("Lua"):exec_str(proggie) -- execute program in the service
    print("the_property after service call: ", prop)

    Executing a LuaService function at the frequency of the host component

    More useful than running a script once is executing a function synchronously with the updateHook of the host component. This can be achieved by registering an ExecutionEngine hook (much easier than it sounds!).

    The following Lua service code implements a simple monitor that tracks the currently active (TaskContext) state of the component in whose context it runs. When the state changes, the new state is written to a port "tc_state", which is added to the host TaskContext.

    This code could be useful for a supervision state machine, which can then easily react to this state change by means of an event-triggered port.

    require "rttlib"
    tc=rtt.getTC()
    d = tc:getPeer("Deployer")
     
    -- create a HelloWorld component
    d:loadComponent("hello", "OCL::HelloWorld")
    hello = d:getPeer("hello")
     
    -- load Lua service into the HelloWorld Component
    d:loadService("hello", "Lua")
     
    mon_state = [[
          -- service-eehook.lua
          require("rttlib")
          tc=rtt.getTC() -- this is the Hello Component
          last_state = "not-running"
          out = rtt.OutputPort("string")
          tc:addPort(out, "tc_state", "currently active state of TaskContext")
     
          function check_state()
             local cur_state = tc:getState()
             if cur_state ~= last_state then
                out:write(cur_state)
                last_state = cur_state
             end
             return true -- returning false will disable EEHook
          end
     
          -- register check_state function to be called periodically and
          -- enable it. Important: variables like eehook below or the
          -- function check_state which shall not be garbage-collected
          -- after the first run must be declared global (by not declaring
          -- them local with the local keyword)
          eehook=rtt.EEHook('check_state')
          eehook:enable()
    ]]
     
    -- execute the mon_state program
    hello:provides("Lua"):exec_str(mon_state)

    Note: the -i option causes rttlua to enter interactive mode after executing the script (instead of exiting afterwards).

    $ rttlua-gnulinux -i service-eehook.lua
    > rttlib.portstats(hello)
    the_results (string)  = 
    the_buffer_port (string)  = NoData
    tc_state (string)  = Running
    > hello:error()
    > rttlib.portstats(hello)
    the_results (string)  = 
    the_buffer_port (string)  = NoData
    tc_state (string)  = RunTimeError
    > 

    How to perform runtime system validation?

    It is often useful to validate a deployed system at runtime; however, you want to avoid cluttering individual components with non-functional validation code. Here's what to do. (Please also see this post on orocos-users, which inspired the following.)

    Use-case: check for unconnected input ports

    1. Write a function to validate a single component

    The following function accepts a TaskContext as an argument and checks whether it has unconnected input ports. If so, it logs an error.

    function check_inport_conn(tc)
       local portnames = tc:getPortNames()
       local ret = true
       for _,pn in ipairs(portnames) do
          local p = tc:getPort(pn)
          local info = p:info()
          if info.porttype == 'in' and info.connected == false then
             rtt.logl('Error', "InputPort " .. tc:getName() .. "." .. info.name .. " is unconnected!")
             ret = false
          end
       end
       return ret
    end

    2. After deployment, execute the validation function on all components:

    This can be done using the mappeers function.

    rttlib.mappeers(check_inport_conn, depl)

    The mappeers function is a special variant of map which calls the function given as its first argument on all peers reachable from a TaskContext (given as the second argument). We pass the Deployer here, which typically knows all components.
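    What such a traversal does can be sketched in plain Lua (hypothetical mock peers stand in for live TaskContexts; the real rttlib.mappeers operates on the actual peer graph):

```lua
-- Sketch of a mappeers-style traversal: apply fn to every peer
-- reachable from tc, visiting each component only once.
local function mappeers_sketch(fn, tc, visited)
   visited = visited or {}
   for _, name in ipairs(tc:getPeerList()) do
      if not visited[name] then
         visited[name] = true
         local peer = tc:getPeer(name)
         fn(peer)
         mappeers_sketch(fn, peer, visited) -- recurse into the peer's peers
      end
   end
end

-- mock "deployer" that knows two peers without peers of their own
local function mkpeer(name)
   return { getName = function() return name end,
            getPeerList = function() return {} end }
end
local hello1, hello2 = mkpeer("hello1"), mkpeer("hello2")
local depl = {
   getPeerList = function() return {"hello1", "hello2"} end,
   getPeer = function(self, n) return n == "hello1" and hello1 or hello2 end,
}

seen = {}
mappeers_sketch(function(p) seen[#seen+1] = p:getName() end, depl)
print(table.concat(seen, ",")) -- hello1,hello2
```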

    Here's a dummy deployment example to illustrate:

    require "rttlib"
    tc=rtt.getTC()
    depl=tc:getPeer("Deployer")
     
    -- define or import check_inport_conn function here
     
    -- dummy deployment, ports are left unconnected.
    depl:loadComponent("hello1", "OCL::HelloWorld")
    depl:loadComponent("hello2", "OCL::HelloWorld")
     
    rttlib.mappeers(check_inport_conn, depl)

    Executing it will print:

    0.155 [ ERROR  ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello1.the_buffer_port is unconnected!
    0.155 [ ERROR  ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello2.the_buffer_port is unconnected!

    Using rFSM Statecharts with RTT

    rFSM is a fast, lightweight Statechart implementation in pure Lua. Using RTT-Lua, rFSM Statecharts can conveniently be used with RTT. The rFSM sources can be found here.

    Where to run a statemachine: Component vs. Service?

    Answer:

    Typically a Component will be preferred when

    • the statemachine has to coordinate/interact with/supervise multiple components
    • it shall run purely event-driven or at a different frequency than the computational components

    A Service is preferred when

    • the Statemachine coordinates/monitors only one component
    • the Statemachine runs synchronously (at the same frequency) with the host component

    There will, undoubtedly, be exceptions!

    How to run an rFSM in a Component

    Summary: create an OCL::LuaComponent; in configureHook, load and initialize the fsm; in updateHook, call rfsm.run(fsm).

    (see the rFSM docs for general information)

    The source code for this example can be found here.

    It is a best practice to split the initialization (setting up required functions, peers or ports used by the fsm) and the fsm model itself into two files. This way the fsm model is kept as platform-independent, and hence as reusable, as possible.

    The following initialization file is executed in the newly created LuaComponent to prepare the environment for the state machine, which is loaded and initialized in configureHook.

    launch_fsm.lua

    require "rttlib"
    require "rfsm"
    require "rfsm_rtt"
    require "rfsmpp"
     
    local tc=rtt.getTC();
    local fsm
    local fqn_out, events_in
     
    function configureHook()
       -- load state machine
       fsm = rfsm.init(rfsm.load("fsm.lua"))
     
       -- enable state entry and exit dbg output
       fsm.dbg=rfsmpp.gen_dbgcolor("rfsm-rtt-example", 
                       { STATE_ENTER=true, STATE_EXIT=true}, 
                       false)
     
       -- redirect rFSM output to rtt log
       fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end
       fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end
       fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end
     
       -- the following creates a string input port and adds it as an
       -- event-driven port to the TaskContext. The third line generates a
       -- getevents function which returns all data on the port as
       -- events. This function is called by the rFSM core to check for
       -- new events.
       events_in = rtt.InputPort("string")
       tc:addEventPort(events_in, "events", "rFSM event input port")
       fsm.getevents = rfsm_rtt.gen_read_str_events(events_in)
     
       -- optional: create a string port to which the currently active
       -- state of the FSM will be written. gen_write_fqn generates a
       -- function suitable to be added to the rFSM step hook to do this.
       fqn_out = rtt.OutputPort("string")
       tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state")
       rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out))
       return true
    end
     
    function updateHook() rfsm.run(fsm) end
     
    function cleanupHook()
       -- cleanup the created ports.
       rttlib.tc_cleanup()
    end

    A dummy statemachine stored in the fsm.lua file:

    return rfsm.state {
       ping = rfsm.state {
          entry=function() print("in ping entry") end,
       },
     
       pong = rfsm.state {
          entry=function() print("in pong entry") end,
       },
     
       rfsm.trans {src="initial", tgt="ping" },
       rfsm.trans {src="ping", tgt="pong", events={"e_pong"}},
       rfsm.trans {src="pong", tgt="ping", events={"e_ping"}},
    }

    Option A: Running the rFSM example with a Lua deployment script

    deploy.lua

    -- alternate lua deploy script
    require "rttlib"
     
    tc=rtt.getTC()
    d=tc:getPeer("Deployer")
     
    d:import("ocl")
    d:loadComponent("Supervisor", "OCL::LuaComponent")
    sup = d:getPeer("Supervisor")
     
    sup:exec_file("launch_fsm.lua")
    sup:configure()
    cmd = rttlib.port_clone_conn(sup:getPort("events"))

    Run it. cmd is an inverse (output) port connected to the incoming (from the FSM's point of view) 'events' port of the fsm, so by writing to it we can send events:

    $ rosrun ocl rttlua-gnulinux -i deploy.lua
    OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
    INFO: created undeclared connector root.initial
    > sup:start()
    > in ping entry
     
    > cmd:write("e_pong")
    > in pong entry
     
    > cmd:write("e_ping")
    > in ping entry
     
    > cmd:write("e_pong")
    > in pong entry

    Option B: Running the rFSM example with an Orocos deployment script

    deploy.ops

    import("ocl")
    loadComponent("Supervisor", "OCL::LuaComponent")
    Supervisor.exec_file("launch_fsm.lua")
    Supervisor.configure

    After starting the supervisor we 'leave' it, so we can write to the 'events' port:

    $ rosrun ocl deployer-gnulinux -s deploy.ops
    INFO: created undeclared connector root.initial
       Switched to : Deployer
     
      This console reader allows you to browse and manipulate TaskContexts.
      You can type in an operation, expression, create or change variables.
      (type 'help' for instructions and 'ls' for context info)
     
        TAB completion and HISTORY is available ('bash' like)
     
    Deployer [S]> cd Supervisor 
     
    TaskBrowser connects to all data ports of Supervisor
       Switched to : Supervisor
    Supervisor [S]> start 
     = true                
     
    Supervisor [R]> in ping entry
     
    Supervisor [R]> leave 
     Watching Supervisor [R]> events.write ("e_pong")
     = (void)              
     
     Watching Supervisor [R]> in pong entry
     
     Watching Supervisor [R]> events.write ("e_ping")
     = (void)              
     
     Watching Supervisor [R]> in ping entry
     
     Watching Supervisor [R]> 

    Running rFSM in a Service

    This is basically the same as executing a function periodically in a service (see the Service example above). The convenience function service_launch_rfsm in rfsm_rtt.lua makes this easier.

    The steps are:

    1. create LuaService in Component in question
    2. prepare the Lua environment, i.e. call exec_str or exec_file to add functions.
    3. launch the fsm with the following call in your deployment script:

    require "rfsm_rtt"
     
    -- get reference to exec_str operation
    fsmfile = "fsm.lua"
    execstr_op = comp:provides("Lua"):getOperation("exec_str")
    rfsm_rtt.service_launch_rfsm(fsmfile, execstr_op, true)

    The last line means the following: launch the fsm in <fsmfile> in the service identified by execstr_op; the third argument true creates an execution engine hook so that rfsm.step is called at the component's frequency. (See the generated rfsm_rtt API docs.)

    Replacing states, functions and transitions of an existing FSM model

    rFSM allows the creation of an FSM by loading a parent FSM into a new .lua file. This way, it is possible to add, delete and override states, transitions and functions. Though powerful, these operations can make the new FSM fairly hard to track. In this regard, a few tricks can make our life easier:
    1. naming states and transitions in a consistent way
    2. making the parent FSM as simple as possible with meaningful transition events
    3. overriding a full state is less confusing than overriding a single entry or exit function

    Generally speaking, the most effective way of creating a new FSM from a parent one is populating the original simple states by overriding them with composite states. In this context, the parent FSM provides “empty” boxes to be filled with application-specific code.

    In the following example, “daughter_fsm.lua” loads “mother_fsm.lua” and overrides a state, two transitions and a function. “daughter_fsm.lua” is launched by a Lua Orocos component named “fsm_launcher.lua”. Deployment is done by “deploy.ops”. Instructions on how to run the example follow.

    mother_fsm.lua

    -- mother_fsm.lua is a basic fsm with 2 simple states
     
    return rfsm.state {
     
       StateA = rfsm.state {
          entry=function() print("in state A") end,
       },
     
       StateB = rfsm.state {
          entry=function() print("in state B") end,
       },
     
    -- consistent transition naming makes overriding easier
       rfsm.trans {src="initial", tgt="StateA" },
       tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_mother_A_to_B"}},
       tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_mother_B_to_A"}},
    }

    daughter_fsm.lua

    -- daughter_fsm.lua loads mother_fsm.lua
    -- implementing extra states, transitions and functions
    -- by adding and overriding the original ones.
     
    require "utils"
    require "rttros"
     
    -- local variables to avoid verbose function calling
    local state, trans, conn = rfsm.state, rfsm.trans, rfsm.conn
     
    -- path to the fsm to load
    local base_fsm_file = "mother_fsm.lua"
     
    -- load the original fsm to override
    local fsm_model=rfsm.load(base_fsm_file)
     
    -- set colored outputs indicating the current state
    dbg = rfsmpp.gen_dbgcolor( {STATE_ENTER=true}, false)
     
    -- Overriding StateA 
    -- In "mother_fsm.lua" StateA is an rfsm.simple_state
    -- Here we make it an rfsm.composite_state
    fsm_model.StateA = rfsm.state {
     
            StateA1= rfsm.state {
                    entry=function() print("in State A1") end,
            },
     
            StateA2 = rfsm.state {
                    entry=function() print("in State A2") end,
            },
     
            rfsm.transition {src="initial", tgt="StateA1"},
            tr_A1_A2 = rfsm.transition {src ="StateA1", tgt="StateA2", events={"e_move_to_A2"}},
            tr_A2_A1 = rfsm.transition {src ="StateA2", tgt="StateA1", events={"e_move_to_A1"}},
    }
     
    -- Overriding single transitions (the names must match those in
    -- mother_fsm.lua, otherwise new transitions are added instead
    -- of overriding the existing ones)
    fsm_model.tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_daughter_A_to_B"}}
    fsm_model.tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_daughter_B_to_A"}}
     
     
    -- Overriding a specific function
    fsm_model.StateB.entry = function()
                    print("I am in State B in the daughter FSM")
            end
    return fsm_model

    fsm_launcher.lua

    require "rttlib"
    require "rfsm"
    require "rfsm_rtt"
    require "rfsmpp"
     
    local tc=rtt.getTC();
    local fsm
    local fqn_out, events_in
     
    function configureHook()
       -- load state machine
       fsm = rfsm.init(rfsm.load("daughter_fsm.lua"))
     
       -- enable state entry and exit dbg output
       fsm.dbg=rfsmpp.gen_dbgcolor("FSM loading example",
                       { STATE_ENTER=true, STATE_EXIT=true},
                       false)
     
       -- redirect rFSM output to rtt log
       fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end
       fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end
       fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end
     
       -- the following creates a string input port and adds it as an
       -- event-driven port to the TaskContext. The third line generates a
       -- getevents function which returns all data on the port as
       -- events. This function is called by the rFSM core to check for
       -- new events.
       events_in = rtt.InputPort("string")
       tc:addEventPort(events_in, "events", "rFSM event input port")
       fsm.getevents = rfsm_rtt.gen_read_str_events(events_in)
     
       -- optional: create a string port to which the currently active
       -- state of the FSM will be written. gen_write_fqn generates a
       -- function suitable to be added to the rFSM step hook to do this.
       fqn_out = rtt.OutputPort("string")
       tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state")
       rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out))
       return true
    end 
     
    function updateHook() rfsm.run(fsm) end
     
    function cleanupHook()
       -- cleanup the created ports.
       rttlib.tc_cleanup()
    end

    deploy.ops

    import("ocl")
    loadComponent("Supervisor", "OCL::LuaComponent")
    Supervisor.exec_file("fsm_launcher.lua")
    Supervisor.configure
    Supervisor.start

    To test this example, run the Deployer:

    rosrun ocl deployer-gnulinux -lerror -s deploy.ops

    Then:

    Deployer [S]> cd Supervisor
     
    TaskBrowser connects to all data ports of Supervisor
       Switched to : Supervisor
    Supervisor [R]> leave
     
    Watching Supervisor [R]> events.write ("e_move_to_A2")
     
    FSM loading example:    STATE_EXIT          root.StateA.StateA1
    in State A2
    FSM loading example:    STATE_ENTER         root.StateA.StateA2

    One-liner to build a table of peers

    A Coordinator often needs to interact with many or all other components in its vicinity. To avoid having to write peer1 = depl:getPeer("peer1") all over, you can use the following function to generate a table of peers which are reachable from a certain component (commonly the deployer):

    peertab = rttlib.mappeers(function (tc) return tc end, depl)

    Assuming the Deployer has two peers "robot" and "controller", they can be accessed as follows:

    print(peertab.robot)
    -- or 
    peertab.controller:configure()

    Miscellaneous

    Connecting RTT Ports to ROS topics

    > cp=rtt.Variable("ConnPolicy")
    > cp.transport=3 -- 3 is ROS
    > cp.name_id="/l_cart_twist/command" -- topic name
    > depl:stream("CompX.portY", cp)

    or with a sweet one-liner (thanks to Ruben!):

    > depl:stream("CompX.portY", rtt.provides("ros"):topic("/l_cart_twist/command"))

    Finding the path to a ROS package

    This is sometimes useful for loading scripts etc. that are located in different packages.

    The rttros.lua module collects some basic but useful functionality for interacting with ROS. This function is "borrowed" from the excellent roslua:

    > require "rttros"
    > =rttros.find_rospack("geometry_msgs")
    /home/mk/src/ros/unstable/common_msgs/geometry_msgs
    > 

    How are types converted between RTT and Lua?

    Lua has to work with two type systems: its own and the RTT typesystem. To make this as smooth as possible, the basic RTT types are automatically converted to their corresponding Lua types, as shown in the table below:

    RTT      Lua
    bool     boolean
    float    number
    double   number
    uint     number
    int      number
    char     string
    string   string
    void     nil

    This conversion is done in both directions: basic values read from ports or basic return values of operations are converted to Lua; vice versa, if an operation is called with basic Lua values, these are automatically converted to the corresponding RTT types.
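    The basic mapping can be written down as a plain Lua lookup table (purely illustrative; the actual conversion happens inside the RTT-Lua bindings):

```lua
-- Illustrative restatement of the conversion table above.
rtt_to_lua = {
   bool   = "boolean",
   float  = "number",
   double = "number",
   uint   = "number",
   int    = "number",
   char   = "string",
   string = "string",
   void   = "nil",
}
print(rtt_to_lua["double"]) -- number
print(rtt_to_lua["char"])   -- string
```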

    How to add a custom pretty printing function for a new type?

    In short: write a function which accepts a Lua table representation of your data type and returns either a table or a string. Assign it to rttlib.var_pp.mytype, where mytype is the value returned by the var:getType() method. That's all!

    Quick example: ConnPolicy type

    (This is just an example. It has been done for this type already).

    The out-of-the-box printing of a ConnPolicy looks as follows:

    ./rttlua-gnulinux
    Orocos RTTLua 1.0-beta3 (gnulinux)
    > return rtt.Variable("ConnPolicy")
    {data_size=0,type=0,name_id="",init=false,pull=false,transport=0,lock_policy=2,size=0}

    This is not too bad, but we would like to display the string representation of the C++ enum values in the type and lock_policy fields. So we must write a function that returns a table:

    function ConnPolicy2tab(cp)
        if cp.type == 0 then cp.type = "DATA"
        elseif cp.type == 1 then cp.type = "BUFFER"
        else cp.type = tostring(cp.type) .. " (invalid!)" end
     
        if cp.lock_policy == 0 then cp.lock_policy = "UNSYNC"
        elseif cp.lock_policy == 1 then cp.lock_policy = "LOCKED"
        elseif cp.lock_policy == 2 then cp.lock_policy = "LOCK_FREE"
        else cp.lock_policy = tostring(cp.lock_policy) .. " (invalid!)" end
        return cp
    end

    and add it to the rttlib.var_pp table of Variable formatters as follows:

    rttlib.var_pp.ConnPolicy = ConnPolicy2tab

    Now printing a ConnPolicy again calls our function and prints the desired fields:

    > return rtt.Variable("ConnPolicy")
    {data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
    >

    How to use classical OCL Deployers? (like with Corba, or with a TaskBrowser)

    If you are used to managing your application with the classic OCL TaskBrowser, or if you want your application to be connected via Corba, you can use Lua only for deployment and continue to use your former deployer. To do so, load the Lua service into your favorite deployer (deployer, cdeployer, deployer-corba, ...) and then call your deployment script.

    Example: launch your preferred deployer:

    cdeployer -s loadLua.ops

    with loadLua.ops:

    //load the lua service
    loadService ("Deployer","Lua")
     
    //execute your deployment file
    Lua.exec_file("yourLuaDeploymentFile.lua")

    and with yourLuaDeploymentFile.lua containing the kind of code described in this Cookbook, like the example in the paragraph "How to write a deployment script".

    How to generate graphical representations of rFSM models

    The rfsm-viz command allows you to generate easy-to-read pictures representing the structure of your FSM model. This tool uses the rfsm2uml and fsm2dbg modules and requires the libgv-lua package. Practically:

    $ <fsm_install_dir>/tools/rfsm-viz -f <your_fsm_file>.lua

    options:

    • -f <fsm-file> : fsm input file
    • -tree : generate tree representation
    • -text : dump to simple textual format
    • -uml : generate uml state machine figure
    • -dot : generate a graphviz dot-file
    • -all : generate all representations
    • -format (svg|png|...): generate different file format
    • -v : be verbose
    • -h : print this

    Script to generate default CPF file for a component

    see here: https://gist.github.com/3957702 (thx to Ruben).

    Memory management: what is automatically garbage collected?

    Answer: everything besides Ports and Properties. So if you have Lua components or Services which are deleted and recreated, it is advisable to clean up properly. This means:

    1. remove Port or Property from (all!) TaskContext interfaces to which it was added
    2. invoke the delete method to release the memory e.g.  portX:delete()

    Update for toolchain-2.5: The utility function rttlib.tc_cleanup() will do this for you.
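    The two manual steps can be sketched with plain-Lua mocks (hypothetical objects; in a real component, tc and the ports are RTT objects, and rttlib.tc_cleanup() performs roughly this loop for you):

```lua
-- Mock TaskContext and ports to illustrate the cleanup pattern.
removed, deleted = {}, {}

local function mkport(name)
   return { name = name,
            delete = function() deleted[#deleted+1] = name end }
end

local ports = { mkport("inport"), mkport("outport") }
local tc = { removePort = function(self, n) removed[#removed+1] = n end }

for _, p in ipairs(ports) do
   tc:removePort(p.name) -- step 1: remove from the TaskContext interface
   p:delete()            -- step 2: release the memory
end
print(#removed, #deleted) -- 2  2
```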

    Where to find further information?

    Please ask questions related to RTT Lua on the orocos-users mailing list.

    Lua specific links

    License

    The RTT Lua bindings are licensed under the same license as the OROCOS RTT.

    Roadmap

    1. the core bindings are stable and no significant changes are planned
    2. release 1.0

    LuaFeatureRequests

    Lua Feature Request page

    • Merge tabcompletion into OCL (Peter)

    Older Versions

    The Orocos 1.x releases are still maintained but no longer recommended for new applications.

    Look here for information on

    Quick Start on Linux

    This page explains how to install the Orocos Toolchain from the public repositories using a script. ROS-users might want to take a look at the orocos_toolchain stack and the rtt_ros_integration stack.

    Installation: the easy way (Linux)

    1. Make sure that the Ruby interpreter (>=1.8.7) is installed on your machine (check with ruby --version)
    2. Create and "cd" into the directory in which you want to install the toolchain
    3. Save the Toolchain-2.6 bootstrap.sh script in the folder you just created
    4. In a console, run sh bootstrap.sh. This installs the toolchain-2.6 branch (latest fixes, stable).
    5. Important: as the build tool tells you, you **must** source the generated env.sh script at the end of the build!
      • source it in your current console (source ./env.sh)
      • also, add it to your .bashrc (append ". /path/to/the/directory/env.sh" -- without the quotes -- at the end of your .bashrc)

    Summarized:

    cd $HOME
    mkdir orocos
    cd orocos
    mkdir orocos-toolchain
    cd orocos-toolchain
    wget -O bootstrap.sh http://gitorious.org/orocos-toolchain/build/raw/toolchain-2.6:bootstrap.sh
    sh bootstrap.sh
    source env.sh

    Tweaking build and install options can be done by modifying autoproj/config.yml. You must read the README and the Autoproj Manual in order to understand how to configure autoproj. See also the very short introduction on Using Autoproj.

    Testing the installation

    When the script finishes, try some Orocos toolchain commands (installed by default in 'install/bin'):

    typegen
    deployer-gnulinux
    ctaskbrowser
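    A quick sanity check can be scripted as well. This is only a sketch: it merely tests whether the three commands are reachable on your PATH after sourcing env.sh.

    ```shell
    # Report which of the toolchain commands are reachable on the PATH.
    for tool in typegen deployer-gnulinux ctaskbrowser; do
      if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
      else
        echo "missing: $tool (did you source env.sh?)"
      fi
    done
    ```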

    Keeping the Toolchain up to date

    After some time, you can get updates by going into the root folder and running:

    # Updates to latest fixes of release branch:
    autoproj update
    # Builds the toolchain
    autoproj build

    You might have to reload the env.sh script after that as well. Simply open a new console. See also Using Autoproj.

    Installation: from zip/bz2/gz files

    Download the archive from the toolchain homepage. Unpack it; it will create an orocos-toolchain-<version> directory. Next do:

    cd $HOME
    mkdir orocos
    cd orocos
    tar -xjvf /path/to/orocos-toolchain-<version>.tar.bz2
    cd orocos-toolchain-<version>
    ./bootstrap_toolchain
    source ./env.sh
    autoproj build

    • The first command, bootstrap_toolchain, is required to set up the autoproj environment.
    • The second command sets the environment for you.
    • The third command builds the complete toolchain and installs it in the install/ directory. Set CMAKE_INSTALL_PREFIX to change this.
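    As an illustration of the last point, the underlying CMake flag looks like this. The prefix path is only an example, and how exactly the flag is passed through autoproj depends on your autoproj configuration (see the Autoproj Manual):

    ```shell
    # Example prefix -- not a required path (assumption).
    PREFIX="$HOME/orocos-install"
    # The standard CMake flag that controls the install location:
    CMAKE_FLAGS="-DCMAKE_INSTALL_PREFIX=$PREFIX"
    echo "$CMAKE_FLAGS"
    ```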

    Using Autoproj

    Note

    autoproj will only work if you completed the bootstrap process and you sourced the env.sh file. See the Quick Start page for setting up autoproj correctly.

    Minor release upgrades

    In order to get the latest bug fixes of the current release, use these commands:

    autoproj update
    autoproj build

    Major release upgrades

    In order to upgrade from one major release to another, use these commands:

    autoproj switch-config branch=toolchain-2.3
    autoproj update
    autoproj build

    You may replace branch=toolchain-2.3 with any release branch, going forward or backward in releases. We have: master, stable, toolchain-2....

    Reconfiguring

    If you'd like to reconfigure some of the package options, you can do so by writing

    autoproj update --reconfigure
    autoproj build

    Warning: this will erase your current configuration (i.e. CMake) in case you had modified it manually!

    A comprehensive autoproj manual can be found here

    Wiki for site admins

    About

    This page is for helping site admins to set up their wiki books. Plain Orocos users who wish to edit or add new pages should find the necessary information on the Main Page.

    This document is written in MediaWiki syntax.

    Useful links

    In the administration panel, you are able to configure the following settings:

    Fixed

    rtt or The RTT

    Migrating to Drupal 6

    Drupal 5 Status

    Modules

    Below is a list of all modules currently installed and their support in releases 6 and 7 of Drupal. We don't necessarily need all these modules after the migration.

    Mark must-haves with an 'M'.

     
     M? 6.x 7.x   Module
     M   x  -     adsense
     M   x  -     advuser
     M   x  x     captcha
     M   x  -     captcha_pack
         x  x     cck
     M   x  -     comment_upload
         x  x     contemplate
     M   x  x     diff
     M  (x) -     drutex
     M   x  -     filterbynodetype
     M   x  -     freelinking
     M   x  x     geshifilter
     M   x  x     image
     M   x  x     imagepicker
     M   x  -     img_assist
         x  -     import_html
     M   x  -     listhandler
     M   x  -     mailhandler
     M   x  -     mailman_manager
     M  (x) -     mailsave
         -  -     mathfilter
     M   x  x     pathauto
     M   x  x     path_redirect
     M  (x) -     pearwiki_filter
     M   x  x     quote
     M   x  -     spam
     M   x  x     spamspan
     M   x  -     tableofcontents
     M   x  -     talk
     M   x  -     taxonomy_breadcrumb
     M   x  c/x   token
     M   x  -     user_mailman_register
     M   c  c     user_status
     M   x  x     views
         -  -     wiki
     M   x  -     wikitools
    
     M :  must-have
     - :  not present
     c :  present as core feature
     x :  module released
    (x) : released but unmaintained

    Newly found:

     x  -     Emailfilter - for listhandler
     x  -     JsMath - latex render in browser instead of on-server

    Drupal 6 Migration Procedure

    Testing

    1. copy 5.x database to 6.x test-database 'drupal_orocos_testing'
    2. copy 5.x website to 6.x test-site 'test'
      1. rename settings.php to point to new database and fill in the url_base !!!
    3. Go to 'test' site and select default themes & disable all modules
    4. Install Drupal 6.x and remove 5.x copy. Keep settings.php for reference.
      1. Symlink drupal5's files to drupal6's files (avoid a copy on the server due to quota)
      2. Install plugin manager and administration menu
      3. Disable pulling forum posts from the orocos mailing list, so that the original 5.x site does not lose content.
    5. Upgrade core
    6. enable & upgrade modules
    7. apply patches from the drupal 5.x branch on the drupal6 branch. See https://github.com/psoetens/orocos-www
    8. Upgrade the Drupal theme, according to http://drupal.org/node/132442#signature
      1. Copy the existing orocos theme into the new drupal installation (sites/all/themes)
    9. Check results and themes. Special attention checklist:
      1. Media wiki layout on wiki pages
      2. Drutex on KDL pages (formulas)
      3. forum posts, quoting of parent post must work 'recursively'
      4. Major content pages : front, toolchain, kdl, rtt,...
      5. table of contents on wiki pages
      6. Syntax highlighting
      7. Mailing list pulling and posting
        1. This may require a test forum setup such that the original 5.x site is not influenced
        2. Do not enable retrieval of the original orocos-dev/orocos-users !

    Migrating

    If this works:
    1. Log in as user 1 and shut down the 5.x site (maintenance mode)
      1. Maybe we should set a redirect in our .htaccess that doesn't rely on any file currently on the server.
    2. copy the 5.x database again to 'drupal_orocos_backup' to have a backup of the latest changes
    3. copy the 5.x website on-server to 'www2.orocos.org' to have a backup of the latest state
      1. Edit settings.php of that site to use the 'drupal_orocos_backup' DB and the url_base of www2.
    4. Log in as user 1 on www.orocos.org
      1. select default themes & disable all modules
    5. Log in as user 1 on 'test' site
    6. Edit Drupal 6.x 'test' site to use 5.x database 'drupal_orocos' (settings.php)
    7. Do upgrade of database & modules as above
    8. When all works, remove 5.x files on 'www' and copy-over 6.x files from 'test'.
    9. Leave maintenance mode

    Plan B

    If the final upgrade fails (it shouldn't, you tested it!), we can:
    • Redirect to 'www2' from 'www' (.htaccess)
    • Disable maintenance mode on www2
    • Keep playing on the 'test' site until 'it works' (test is working on 'drupal_orocos' DB) and copy over any files to 'www'
    • Put 'www2' in maintenance and remove the redirect again.

    Plan C

    We don't want this ever to happen:
    • Remove all files from 'www' and copy all files from 'www2'
    • Drop 'drupal_orocos' DB (or rename it for later reference)
    • Copy 'drupal_orocos_backup' to 'drupal_orocos'
    • Modify settings.php to contain correct DB and url_base

    iTaSC wiki

    iTaSC (instantaneous Task Specification using Constraints) is a framework to generate robot motions by specifying constraints between (parts of) the robots and their environment. iTaSC was born as a specification formalism to generalize and extend existing approaches, such as the Operational Space Approach, the Task Function Approach, the Task Frame Formalism, geometric Cartesian Space control, and Joint Space control.

    The iTaSC concepts apply to specifications in robot, Cartesian and sensor space; to position-, velocity- or torque-controlled robots; to explicit and implicit specifications; and to equality and inequality constraints. The current implementation, however, is still limited to the velocity-control and equality-constraints subset. An example: Human-Robot Comanipulation

    Warning

    The documentation effort lags behind the conceptual and implementation effort; the best documentation can be found in our papers! (see Acknowledging iTaSC and literature)

    It is currently highly recommended to use the devel branch; a formal release is expected soon (iTaSC DSL and stacks).

    Get started here

    Please post remarks, bug reports, suggestions, feature requests, or patches on the orocos users/dev forum/mailinglist.

    What is iTaSC?

    The iTaSC-Skill framework

    iTaSC stands for instantaneous Task Specification using Constraints, which has been developed at KU Leuven over the past years [1,2,5].

    The framework generates motions by specifying constraints in geometric, dynamic or sensor-space between the robots and their environment. These motion specifications constrain the relationships between objects (object frames) and their features (feature frames). Established robot motion specification formalisms such as the Operational Space Approach [3], the Task Function Approach [6], the Task Frame Formalism [4], Cartesian Space control, and Joint Space control are special cases of iTaSC and can be specified using the generic iTaSC methodology.

    The key advantages of iTaSC over traditional motion specification methodologies are:

    1. composability of partial constraints: multiple constraints can be combined, hence the constraints can be partial, they do not have to constrain the full 6D relation between two objects;
    2. reusability of constraint specification: the constraints specify a relation between feature frames, that have a semantic meaning in the context of a task, implying that the same task specification can be reused on different objects;
    3. automatic derivation of the control solution: the iTaSC methodology generates a robot motion that optimizes the constraints by automatically deriving the controllers from that constraint specification;
    4. weights and priorities: different constraints can be weighted or given priorities.

    These advantages imply that the framework can be used for any robotic system, with a wide variety of sensors.
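    The "automatic derivation of the control solution" advantage above can be sketched, for the velocity-resolved case without priorities, as a generic constraint-based control formulation. This is a deliberate simplification, not the exact iTaSC equations: the real framework also involves feature coordinates and estimation (see the papers).

    ```latex
    % Each constraint i is an output function of the joint coordinates q,
    % with J_i its Jacobian:
    y_i = f_i(q), \qquad \dot{y}_i = J_i(q)\,\dot{q}
    % A (proportional) controller prescribes the desired constraint evolution:
    \dot{y}_{d,i} = K_i\,\bigl(y_{d,i} - y_i\bigr)
    % The robot velocity then follows from a weighted least-squares problem
    % over all constraints, with weights W_i:
    \dot{q}^{*} = \arg\min_{\dot{q}} \sum_i \bigl\| J_i\,\dot{q} - \dot{y}_{d,i} \bigr\|_{W_i}^{2}
    ```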

    In order not to be limited to one single instantaneous motion specification, several iTaSC specifications can be glued together via a so-called Skill that coordinates the execution of multiple iTaSCs, and configures their parameters. Consequently, the framework separates the continuous level of motion specification from the discrete level of coordination and configuration. One skill coordinates a limited set of constraints, that together form a functional motion. Finite State Machines implement the skill functionality.

    This framework is implemented in the iTaSC software.

    References

    • [1] J. De Schutter, T. De Laet, J. Rutgeerts, W. Decre, R. Smits,E. Aertbelien, K. Claes, and H. Bruyninckx. Constraint-based task specification and estimation for sensor-based robot systems in the presence of geometric uncertainty. The International Journal of Robotics Research, 26(5):433–455, 2007.
    • [2] W. Decre, R. Smits, H. Bruyninckx, and J. De Schutter. Extending iTaSC to support inequality constraints and non-instantaneous task specification. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, pages 964–971, Kobe, Japan, 2009.
    • [3] O. Khatib. The operational space formulation in robot manipulator control. In Proceedings of the 15th International Symposium on Industrial Robots, pages 165–172, Tokyo, Japan, 1985.
    • [4] M. T. Mason. Compliance and force control for computer controlled manipulators. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(6):418–432, 1981.
    • [5] J. Rutgeerts. Constraint-based task specification and estimation for sensor-based robot tasks in the presence of geometric uncertainty. PhD thesis, Department of Mechanical Engineering, Katholieke Universiteit Leuven, Belgium, 2007.
    • [6] C. Samson, M. Le Borgne, and B. Espiau. Robot Control, the Task Function Approach. Clarendon Press, Oxford, England, 1991.
    • [7] R. Smits and H. Bruyninckx. Composition of complex robot applications via data flow integration. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 5576–5580, Shanghai, China, 2011.

    Acknowledging iTaSC and literature

    Please cite the following papers when using ideas or software based on iTaSC:

    acknowledging iTaSC paradigm/concept

    Original concept

    Extensions

    Bibtex

    @Article{            DeSchutter-ijrr2007,
      author          = {De~Schutter, Joris and De~Laet, Tinne and
                         Rutgeerts, Johan and Decr\'e, Wilm and Smits, Ruben and
                         Aertbeli\"en, Erwin and Claes, Kasper and
                         Bruyninckx, Herman},
      title           = {Constraint-Based Task Specification and Estimation
                         for Sensor-Based Robot Systems in the Presence of
                         Geometric Uncertainty},
      journal         = {The International Journal of Robotics Research},
      volume          = {26},
      number          = {5},
      pages           = {433--455},
      year            = {2007},
      keywords        = {constraint-based programming, task specification,
                         iTaSC, estimation, geometric uncertainty}
    }
     
    @InProceedings{     decre09,
     author           = {Decr\'e, Wilm and Smits, Ruben and Bruyninckx, Herman
                         and De~Schutter, Joris},
     title            = {Extending {iTaSC} to support inequality constraints
                         and non-instantaneous task specification},
     booktitle        = {Proceedings of the 2009 IEEE International Conference on
                         Robotics and Automation},
     year             = {2009},
     address          = {Kobe, Japan},
     pages            = {964--971},
     keywords         = {constraint-based programming, task specification, iTaSC,
                         convex optimization, inequality constraints, laser tracing}
    }
     
    @InProceedings{     DecreBruyninckxDeSchutter2013,
     author           = {Decr\'e, Wilm and Bruyninckx, Herman and De~Schutter, Joris},
     title            = {Extending the {iTaSC} Constraint-Based Robot Task Specification Framework to Time-Independent Trajectories and User-Configurable Task Horizons},
     booktitle        = {Proceedings of the IEEE International Conference on Robotics and Automation},
     year             = {2013},
     address          = {Karlsruhe, Germany},
     pages            = {1933--1940},
     keywords         = {constraint-based programming, task specification, human-robot cooperation}
    }

    acknowledging iTaSC software

    Bibtex

    @inproceedings{     vanthienenIROS2013,
      author          = {Vanthienen, Dominick and Klotzbuecher, Markus and De~Laet, Tinne and De~Schutter, Joris and Bruyninckx, Herman},
      title           = {Rapid application development of constrained-based task modelling and execution using Domain Specific Languages},
      booktitle       = {Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems},
      organization    = {IROS2013},
      year            = {2013},
      address         = {Tokyo, Japan},
      pages           = {1860--1866}
    }
     
    @inproceedings{      vanthienen_syroco2012,
      title           = {Force-Sensorless and Bimanual Human-Robot Comanipulation},
      author          = {Vanthienen, Dominick and De~Laet, Tinne and Decr\'e, Wilm and Bruyninckx, Herman and De~Schutter, Joris},
      booktitle       = {10th IFAC Symposium on Robot Control (SYROCO)},
      year            = {2012},
      month           = {September, 5--7},
      address         = {Dubrovnik, Croatia},
      volume          = {10}
    }
     
    @InProceedings{    SmitsBruyninckxDeSchutter2009,
      author          = {Smits, Ruben and Bruyninckx, Herman and De~Schutter, Joris},
      title           = {Software support for high-level specification, execution and estimation of event-driven, constraint-based multi-sensor robot tasks},
      booktitle       = {Proceedings of the 2009 International Conference on Advanced Robotics},
      year            = {2009},
      address         = {Munich, Germany},
      pages           = {},
      keywords        = {specification, itasc, skills}
    }

    Literature

    Papers on iTaSC applications

    • Smits, R., De Laet, T., Claes, K., Bruyninckx, H., De Schutter, J. (2008). iTASC: a tool for multi-sensor integration in robot manipulation. In : Proceedings of the Multisensor fusion and integration for intelligent systems, Seoul, Korea, Aug 20-22, 2008 (pp. 445-452)
    • De Laet, T., Smits, R., Bruyninckx, H., De Schutter, J. (2012). Constraint-based task specification and control for visual servoing application scenarios. Automatisierungstechnik, 60(5), 260-269
    • Borghesan, G., Willaert, B., De Laet, T., De Schutter, J. (2012). Teleoperation in Presence of Uncertainties: a Constraint-Based Approach. Symposium on robot control. Dubrovnik, Croatia, 5-7 September 2012
    • Vanthienen, D., De Laet, T., Decré, W., Bruyninckx, H., De Schutter, J. (2012). Force-Sensorless and Bimanual Human-Robot Comanipulation. In : 10th IFAC Symposium on Robot Control, 10. IFAC Symposium on Robot Control. Dubrovnik, Croatia, 5-7 September 2012 (pp. 1-8). IFAC
    • Borghesan, G., Willaert, B., De Schutter, J. (2012). A constraint-based programming approach to physical human-robot interaction. In : IEEE International Conference on Robotics and Automation (ICRA), 2012 (pp. 3890-3896)

    related papers

    • Klotzbücher, M., Smits, R., Bruyninckx, H., De Schutter, J. (2011). Reusable Hybrid Force-Velocity controlled Motion Specifications with executable Domain Specific Languages. In : 2011 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, San Francisco, USA., 25-30 September, 2011 IEEE
    • Klotzbücher, M., Bruyninckx, H. (2012). Coordinating Robotic Tasks and Systems with rFSM Statecharts. JOSER: Journal of Software Engineering for Robotics, 3 (1), 28-56.

    PhD theses

    • J. Rutgeerts. Constraint-based task specification and estimation for sensor-based robot tasks in the presence of geometric uncertainty. PhD thesis, Department of Mechanical Engineering, KU Leuven, Belgium, 2007
    • R. Smits, “Robot skills: design of a constraint-based methodology and software support,” Ph.D. dissertation, Dept. Mech. Eng., KU Leuven, Belgium, May 2010

    Workshops

    • Vanthienen, D., De Laet, T., De Schutter, J., Bruyninckx, H. (2013). Software framework for robot application development: a constraint-based task programming approach. IEEE International Conference on Robotics and Automation SDIR workshop. Karlsruhe, 6 May 2013.
    • Vanthienen, D., Robyns, S., Aertbeliën, E., De Schutter, J. (2013). Force-sensorless robot force control within the instantaneous task specification and estimation (iTaSC) framework. Benelux Meeting on Systems and Control. Houffalize, Belgium, 26-28 March 2013.
    • Vanthienen, D., De Laet, T., Decré, W., De Schutter, J. (2013). Acceleration- vs. velocity-resolved constraint-based instantaneous task specification and estimation (iTaSC). Benelux Meeting on Systems and Control 2013. Houffalize, Belgium, 26-28 March 2013.
    • Vanthienen, D., De Laet, T., Decré, W., Smits, R., Klotzbücher, M., Buys, K., Bellens, S., Gherardi, L., Bruyninckx, H., De Schutter, J. (2011). iTaSC as a unified framework for task specification, control, and coordination, demonstrated on the PR2. IEEE/RSJ International Conference on Intelligent Robots and Systems. San Francisco, 25-30 September 2011.
    • Vanthienen, D., De Laet, T., Smits, R., Buys, K., Bellens, S., Klotzbücher, M., Bruyninckx, H., De Schutter, J. (2011). Demonstration of iTaSC as a unified framework for task specification, control, and coordination for mobile manipulation. IEEE/RSJ International Conference on Intelligent Robots and Systems. San Francisco, 25-30 September 2011.

    More detailed list of literature

    iTaSC is not iTaSK

    Note the common mistake in the naming: it is iTaSC (instantaneous Task Specification using Constraints), not iTaSK.

    The iTaSC Software

    The software implements the iTaSC-Skill framework in Orocos, which is integrated in ROS by the Orocos-ROS-integration [1]. The Real-Time Toolkit (RTT) of the Orocos project enables the control of robots on a hard-realtime capable operating system, e.g. Xenomai-Linux or RTAI-Linux. The rFSM subproject of Orocos allows scripted Finite State Machines, hence Skills, to be executed in hard realtime. The figure below shows the software architecture, mentioning the formulas for the resolved velocity case without prioritization for clarification. The key advantages of the software design include:

    1. the modular design, allowing users to implement their own solver, scene graph, motion generators ...,
    2. the modular task specification that allows users to reuse tasks, and enables a future task-web application to down- or upload tasks,
    3. the flexible user interface, allowing users to change the weights and priorities of different constraints, and to add or remove constraints.

    Furthermore, the Bayesian Filtering Library (BFL) and Kinematics and Dynamics Library (KDL) of the Orocos project are used to retrieve stable estimates out of sensor data, and to specify robot and virtual kinematic chains respectively.

    iTaSC framework scheme

    License

    The iTaSC software is licensed under a dual LGPLv2.1/BSD license. You may redistribute this software and/or modify it under either the terms of the GNU Lesser General Public License version 2.1 (LGPLv2.1) or (at your discretion) of the Modified BSD License.

    Acknowledgements

    The developers gratefully acknowledge the financial support by:
    • European FP7 project Rosetta (FP7-230902, Robot control for skilled execution of tasks in natural interaction with humans; based on autonomy, cumulative knowledge and learning)
    • European FP7 project BRICS (FP7-231940, Best practices in robotics)
    • KU Leuven's Concerted Research Action GOA/2010/011 Global real-time optimal control of autonomous robots and mechatronic systems
    • Flemish FWO project G040410N Autonome manipulatietaken met een vliegende robot. (Autonomous manipulation with a flying robot.)

    Roadmap

    We are interested in (contributions to):
    • Include other types of constraints, e.g. inequality constraints
    • Expand the capabilities to include uncertainty constraints (and make a nice example/tutorial)
    • Include resolved acceleration control
    • Make more tutorials and examples (e.g. including MTTD)
    • ...

    (to be expanded)

    References

    • [1] R. Smits and H. Bruyninckx. Composition of complex robot applications via data flow integration. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 5576–5580, Shanghai, China, 2011.

    iTaSC DSL: rapid iTaSC application development

    iTaSC DSL is a Domain Specific Language for constraint-based programming, more specifically iTaSC.

    • The DSL provides a formal model for iTaSC applications, that also serves as a design template and guideline.
    • It provides a 'scripting language' to model an iTaSC application. This reduces the effort of creating an iTaSC application compared to the previous labour-intensive process of editing multiple files.
    • The DSL is not just a scripting language, but a formal model. A model of an iTaSC application (M1 level model) can be checked for conformity to the iTaSC model (M2 level model). These checks occur before running the application, and return meaningful errors instead of (obscure) run-time errors, hence reducing debugging efforts.
    • An iTaSC application model can be 'executed/parsed' to running code.

    For more explanation and examples, please read D. Vanthienen, M. Klotzbuecher, T. De Laet, J. De Schutter, and H. Bruyninckx, Rapid application development of constrained-based task modelling and execution using domain specific languages, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 2013, pp. 1860–1866.

    Videos and links

    http://people.mech.kuleuven.be/~dvanthienen/IROS2013/

    Code

    The code, including examples can be found on: http://bitbucket.org/dvanthienen/itasc_dsl

    It is recommended to use the devel branch for the DSL as well as the iTaSC stacks.

    Running examples

    In a typical use case, you'll interact with the running application through events.

    For the Orocos reference implementation (in a ROS environment): One way is to send events in the Orocos task browser through the event_firer component that is automatically started when parsing and deploying an application.

    Another, more user-friendly way is to send events on the /itasc/ros_common_events_in ROS topic (see the README of the itasc_dsl repository).
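    From a regular shell that could look like the line below. Note the assumptions: the message type std_msgs/String and the event name e_start are illustrative guesses, not confirmed by this page; check the itasc_dsl README for the actual type and event vocabulary. The snippet only composes and prints the command.

    ```shell
    # Topic name from the text; message type and event name are assumptions.
    TOPIC=/itasc/ros_common_events_in
    EVENT=e_start
    # Compose the rostopic invocation; run it in a sourced ROS environment.
    echo "rostopic pub --once $TOPIC std_msgs/String \"data: '$EVENT'\""
    ```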

    A GUI to send events can be found on: https://bitbucket.org/apertuscus/python_gui

    Look for an example at the itasc_erf2012_demo, which contains a run_eventgui.sh that launches this GUI with events for the iTaSC ERF2012 tutorial.

    iTaSC quick start

    Overview

    The framework is organized following an Orocos-ROS approach and consists of one meta-stack:
    • itasc: This ROS unary stack serves as a meta-stack; its purpose is to keep the framework stacks together

    This meta-stack consists of the following stacks:

    • itasc_core: This ROS unary stack contains the core functionality of iTaSC: the scene and template (headers) for all itasc component types (solvers, virtual kinematic chains, controller/constraints, robots, objects)
    • itasc_solvers: Contains a number of solver packages for iTaSC
    • itasc_robots_objects: Contains a number of robot and object packages for iTaSC
    • itasc_tasks: Contains a number of task packages for iTaSC (combination of virtual kinematic chains and constraint/controllers)

    Each package contains the following subdirectories:

    • src/ containing the source code of the components (C++)
    • cpf/ containing the property files for the components and FSMs
    • scripts/ containing the FSMs and components in a Lua implementation (e.g. Supervisors)
    • launch/ containing any ROS launch files
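    For reference, a minimal CPF property file as stored in cpf/ has the following shape. The property name "K" and its value are placeholders, not taken from any actual iTaSC component:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <!-- "K" is a placeholder property name -->
      <simple name="K" type="double"><value>0.5</value></simple>
    </properties>
    ```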

    Code

    The installation instructions further on will cover the installation of the source code and dependencies.

    Dependencies

    Source code

    Source code of iTaSC can be found in the following git repositories:
    • https://gitlab.mech.kuleuven.be/groups/rob-itasc

    and for the iTaSC DSL

    • https://bitbucket.org/dvanthienen/itasc_dsl.git

    Installation instructions for ROS Indigo

    The following explanation uses the ROS workspace and rosinstall tools; it is however easy to follow the same instructions without these tools, as will be hinted further on. IMPORTANT: All packages are catkin-based.

    Pre-requisites

    sudo apt-get install libeigen3-dev sip-dev liburdfdom-dev ros-indigo-angles ros-indigo-tf2-ros ros-indigo-geometric-shapes liblua5.1-0 liblua5.1-0-dev collada-dom-dev libbullet-dev ros-indigo-orocos-kdl ros-indigo-orocos-kinematics-dynamics ros-indigo-orocos-toolchain ros-indigo-geometry ros-indigo-robot-model ros-indigo-rtt-geometry ros-indigo-rtt-ros-integration ros-indigo-rtt-sensor-msgs ros-indigo-rtt-visualization-msgs

    • Afterwards add the following to your .bashrc file:

    source /opt/ros/indigo/setup.bash

    • Make a catkin workspace (for instance ~/ws):

    mkdir -p ~/ws/src
    cd ~/ws/src
    catkin_init_workspace
    cd ~/ws/
    catkin_make

    • Afterwards add the following to your .bashrc file, underneath the /opt/ros/indigo source command:

    source ~/ws/devel/setup.sh

    Installation file

    • Download the ROS installation file here
    • First copy this file into your workspace ws and then merge:

    cd  ~/ws
    wstool init src
    wstool merge -t src itasc_dsl.rosinstall
    wstool update -t src
    • OR manually clone each repository mentioned in the file, switch to the correct branch/tag, and add to your ROS_PACKAGE_PATH (NOT RECOMMENDED)
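    Each entry in a .rosinstall file has this general shape (the URI and branch below are made-up placeholders; take the real entries from the downloaded file):

    ```yaml
    - git:
        local-name: itasc_core                        # checkout directory under src/
        uri: https://example.org/itasc/itasc_core.git # placeholder URI
        version: devel                                # branch or tag to check out
    ```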

    Application dependent install files

    The KUKA youBot and PR2 have not yet been tested in Indigo.

    Setup

    • Download this file, move and rename it to ~/.bash_itasc_dsl
    • Add the following at the end of your ~/.bashrc file, in the order presented here:

    source ~/.bash_itasc_dsl
    useITaSC_deb
    • re-source your .bashrc
      • source ~/.bashrc

    Build the packages

    cd ~/ws/
    catkin_make

    Installation instructions for ROS Fuerte or Groovy

    The following explanation uses the ROS workspace and rosinstall tools; it is however easy to follow the same instructions without these tools, as will be hinted further on.

    Pre-requisites

    • Install following stacks:
      • sudo apt-get install libeigen3-dev ros-groovy-rtt-ros-integration ros-groovy-rtt-geometry ros-groovy-rtt-common-msgs ros-groovy-rtt-ros-comm
    • rosws (ROS workspace, optional but highly recommended):
      • Create a new ros workspace or use an existing one
      • Add the following to your .bashrc file:

    source /opt/ros/groovy/setup.bash
    source ~/path_to_workspace/setup.bash

    Installation file

    • Download the ROS installation file here
    • Merge this file into your workspace:
      • rosws merge itasc_dsl.rosinstall
      • rosws update
    • OR manually clone each repository mentioned in the file, switch to the correct branch/tag, and add to your ROS_PACKAGE_PATH

    Application dependent install files

    • When using the KUKA youBot:
      • Download the ROS installation file here
      • rosws merge itasc_youbot_fuerte.rosinstall
      • rosws update

    Setup

    • Download this file, move and rename it to ~/.bash_itasc_dsl
    • Add the following at the end of your ~/.bashrc file, in the following order:
      • source /path/to/your/rosworkspace/setup.bash
      • source .bash_itasc_dsl
      • useITaSC_deb
    • re-source your .bashrc
      • source ~/.bashrc
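    In other words, the end of your ~/.bashrc should look roughly like this (a sketch; replace the workspace path with your own):

```shell
# ROS Groovy (or Fuerte) environment
source /opt/ros/groovy/setup.bash
# your rosws workspace overlay
source /path/to/your/rosworkspace/setup.bash
# iTaSC helper definitions (the downloaded file, renamed)
source ~/.bash_itasc_dsl
# function defined in ~/.bash_itasc_dsl
useITaSC_deb
```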

    For your convenience, here are some extra instructions for commonly used platforms:

    When using a PR2 robot

    This assumes you have PR2 related packages installed, see PR2 installation.
    • Go to your ros workspace
    • Create the rtt_pr2_controllers_msgs package and add it to your ROS workspace:
      • rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs
      • rosws set rtt_pr2_controllers_msgs
    • Convert the following XACRO scripts:
      • roscd itasc_pr2
      • ./convert_xacro.sh

    When using a KUKA YouBot robot

    This assumes you have KUKA YouBot related packages installed, see for example ERF 2012 laser tracing tutorial.
    • add the location of YouBot related lua files to your LUA_PATH:
      • uncomment the line mentioning 'youbot_driver_rtt' in ~/.bash_itasc_dsl

    Build the packages

    Build the core packages

    • rosmake itasc_core trajectory_generators itasc_tasks rFSM rttlua_completion itasc_solvers fixed_object itasc_robot_object moving_object moving_object_tf

    Build application dependent packages

    • Compile the dedicated itasc components for the PR2 or Youbot, located in the itasc_robots_objects stack
    • To compile for all supported platforms:
      • rosmake itasc

    Test the installation

    • Easy application on a PR2: (TODO move to tutorials/dsl readme)
      • Start a real or simulated PR2. For the simulated environment:
        • roslaunch pr2_gazebo pr2_empty_world.launch
      • Go to the itasc_dsl package and run the bunny pose example
      • roscd itasc_dsl
      • ./run_bunny.sh
      • This should move the arms into a grippers-down 'bunny' pose
      • Play around in the task browser or quit the application manually using CTRL+D
    • The itasc_dsl package contains more example models to try out
    • Interaction with a running iTaSC application happens through events, multiple options exist (see ERF 2012 laser tracing tutorial for an example):
      • sending events using a python Qt GUI: see python_gui
      • sending events on a ROS topic
      • sending events using the Orocos event_firer (automatically loaded when using the DSL parser)
    • Follow one of the tutorials (deprecated, needs update)
    • For known compatibility issues: see below

    Design Workflow

    Multiple files should be created or adapted to create a new application; a good workflow could be:
    1. Make a list of all tasks that have to be executed
    2. Draw the kinematic loops for each task (see also Comanipulation Demo)
    3. Draw the component layout of your application
    4. Draw the behavior of your application (coordination of the behavior at runtime); this will afterwards be implemented in the composite_task_fsm.lua and the running_task_fsm.lua files. Consult 'subFSM's of the running state' of the iTaSC user guide for more information.
    5. Create or download all necessary packages (tasks, robots, objects, solvers...)
      1. Create the components (computational part)
      2. Create the skills (FSM: coordination and configuration part), but leave the sub state machines dictating behavior at runtime for now (e.g. running_taskname_fsm.lua).
    6. Create the FSM on the iTaSC level: itasc_configuration.lua
    7. Create the FSM on the application level
    8. Check, create or adapt the configuration files (.cpf) of your components.
    9. Create a deploy script: run.ops and run.sh (after this step, you can test that your application gets to the running state, without errors)
    10. Create the FSMs that coordinate the behavior at runtime
      1. on itasc level: composite_task_fsm.lua
      2. on task level (for each task): running_taskname_fsm.lua

    Known compatibility issues

    • robot_model/kdl_parser from the ROS Groovy debian packages: downgrade to robot_model-1.8
      • used in itasc_pr2 and itasc_robot_object to convert urdf model to kdl tree
      • this version returns foo-bar tree
      • this prevents the itasc_pr2 and itasc_robot_object components from configuring
    • In case your computer has limited RAM (2GB):
      • In your .bashrc file, add: export ROS_PARALLEL_JOBS=' -j1 -l1'
      • When performing rosmake:  rosmake --threads 1

    iTaSC user guide

    On this page, you'll find information on the iTaSC framework design and functionality. On the following pages you can find more information on how to create the needed software yourself:

    Creating an iTaSC application

    The iTaSC main manual is currently available as a pdf, until the wiki version is finished.

    Computation

    See pdf manual

    Configuration and coordination: Skills

    What is a skill?

    A Skill is a specific combination of the configuration and coordination of Tasks. An iTaSC skill is implemented in the framework using the Lua based rFSM Finite State Machine (FSM) engine. The following design rules are applied:
    • Event driven: Events trigger the FSMs to transition from one state to another.
    • Each FSM should be designed such that it is framework independent (e.g. from OROCOS RTT).
    • Each FSM is loaded in a Supervisor component, that contains the OROCOS (RTT) specific parts of the FSM.

    The 3 FSM levels

    There are 3 levels of FSM for an iTaSC application (hierarchical state machine):
    1. Application: The state machine of this level takes care of the behavior of the whole system: it configures and coordinates components that are not part of iTaSC (e.g. hardware interfaces and trajectory generators) and iTaSC as one composite component. The application developer takes care of the first part and sends (the fixed set of) events to configure and coordinate the iTaSC composite component. The transitions of the application state machine are always triggered by a “done” signal raised by the iTaSC FSM (i.e. the “e_ITASCConfigured” event, see 1.4) and additionally by user defined events raised by non-iTaSC components (e.g. hardware ready).
    2. iTaSC: The state machine of this level configures and coordinates the behavior of the iTaSC components. It has a fixed structure, leaving two parts to be specified by the application developer ("user"). Firstly, the user must specify the description of the scene and the composition of the kinematic loops in the configuration file (“itasc_configuration.lua”). This file DOES NOT describe the actual behavior of the task but the components that are involved (Composition). Secondly, the subFSM part of the running state must be defined. The running state of the state machine is composed of two parts: the first, the coordination part, is fixed and takes care of running all iTaSC components in the right order (effectively making the iTaSC components run as a composite component, from the user's perspective); the second, the subFSM part (highlighted in green in fig. The 3 FSM levels), specifies the high level workflow of the task. This second sub-state machine defines the transitions between the tasks.
    3. Task: The state machine of this level contains a more concrete coordination of the task: while the previous level abstracts from task specifics, here the actual triggers to change the characteristics of the iTaSC components must be implemented (e.g. assignment of property values); see 1.5 for an example.

    These levels are not only present on the configuration/coordination level but also on the computational level (see slides). As hinted before, your application FSM will only 'see' the components 'outside' iTaSC (robot drivers, sensor components...) and iTaSC as one composite component. Similarly, the iTaSC FSM 'sees' a task as one entity. The section 'The sub-FSMs of the running state' gives a good example of the effect of this distinction.

    As a result of the 3 levels, your application is always in 3 states: one for each level.

    The 3 FSM levels

    The structure of a FSM

    In the standard implementation, a FSM of a certain level consists of 3 files, e.g. for a task:
    1. taskname_fsm.lua: This is the actual state machine
    2. running_taskname_coordination.lua: This is the coordination part of the running state ensuring that the iTaSC algorithm is executed in the right order.
    3. running_taskname_fsm.lua: This is the sub-FSM part of the running state. This part should be edited to implement the behavior of the running application.

    The coordination and FSM part of the running state are executed sequentially. The full FSM is loaded in a supervisor component: taskname_supervisor.lua

    On the iTaSC level, composite_task_fsm.lua is used instead of running_itasc_fsm.lua, to highlight its meaning. There is also an additional file: itasc_configuration.lua, which is part of the configuration state of the itasc_fsm.lua.

    FSM structure

    The state machine implemented in name_fsm.lua is a composite state machine, consisting of two states:

    • NONemergency state: a subFSM containing the actual state machine, as shown in the figure above,
    • emergency state: entered when an 'e_emergency' event is fired by one of the FSMs or components in case of a failure. This event is caught by all state machines, causing them to transition to the emergency state, leaving whatever state the NONemergency sub state machine is in.

    This structure can be found in all state machines of all levels (except for the application FSM, where the division of the running state is not (always) necessary).
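    As a minimal sketch, this NONemergency/emergency pattern looks like the following in rFSM (the state contents and the print action are illustrative, not the actual iTaSC sources):

```lua
return rfsm.state {
   NONemergency = rfsm.state {
      -- the actual configuring/starting/running/stopping FSM goes here
   },

   emergency = rfsm.state {
      entry = function() print("entering emergency state") end,
   },

   rfsm.trans { src='initial', tgt='NONemergency' },
   -- 'e_emergency' takes effect regardless of the current NONemergency substate:
   rfsm.trans { src='NONemergency', tgt='emergency', events={ 'e_emergency' } },
}
```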

    The event-transition flow

    In order to get your application running, the application has to be configured and started. After running, you also want it to stop. Moreover, these actions should happen in a coordinated way.

    As explained above, there are three levels: application, iTaSC and task, each of which abstracts away the level below it. As a result, events are propagated down the hierarchy to take effect, and responses are sent back up to acknowledge execution. The design of the Application and Task FSM should comply with the same rationale (i.e. each transition is triggered by the lower level FSM). The standard event-transition flow consists of:

    1. (every component initializes at start-up, without the need for an event)
    2. After initialization, the application level FSM transitions to the configuring state. In this state, all application level components are configured and an event is sent ("e_configITASC"), to which the iTaSC level FSM reacts by transitioning to its configuring state.
    3. The iTaSC level FSM, now in its configuring state, will in turn configure the iTaSC level components and send an event ("e_configTasks") that triggers the tasks to go to the configuring state.
    4. The task level FSMs (also in the configuring state now) will in turn configure the task level components. After successful completion, the task level FSMs transition to the configured state and send out an event to acknowledge the completion of the configuration.
    5. When all tasks and iTaSC level components have indicated a successful configuration, the iTaSC level FSM transitions to the configured state and sends out an "e_ITASCConfigured" event.
    6. When all application level components indicate a successful configuration and this "e_ITASCConfigured" event is received, the application level FSM transitions to the application configured state.
    7. After an event triggering the transition of the application level FSM from the configured state to the starting state (in most examples, just an e_done = completion of the configured state actions), a similar event-transition flow follows for the starting-started states.

    The flow for the stopping-stopped states is also similar. The running states are different in the sense that there is no 'ran state': the state machines will stay in the running state until they are stopped.
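    One level's part of this flow might be sketched in rFSM as follows (the acknowledgement event 'e_TasksConfigured' and the raise_event helper are illustrative; e_configTasks and e_ITASCConfigured are the events named above):

```lua
-- illustrative helper; in practice the supervisor writes the event to its output port
local function raise_event(e) print("raising " .. e) end

return rfsm.state {
   configuring = rfsm.state {
      -- configure this level's components, then trigger the level below
      entry = function() raise_event("e_configTasks") end,
   },
   configured = rfsm.state {
      -- acknowledge completion towards the level above
      entry = function() raise_event("e_ITASCConfigured") end,
   },

   rfsm.trans { src='initial', tgt='configuring' },
   -- wait for the lower level's acknowledgement before moving on:
   rfsm.trans { src='configuring', tgt='configured', events={ 'e_TasksConfigured' } },
}
```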

    The sub-FSMs of the running state

    The actual behavior of the application at runtime is governed by the sub state machines of the running states of each level, which form a hierarchical state machine. The idea is that a high(er) level description is implemented in composite_task_fsm.lua, while the actual behavior of the individual tasks is governed by a separate sub-state machine for each task, running_taskname_fsm.lua. In other words: the composite_task_fsm coordinates the behavior between the different tasks; the running_taskname_fsm coordinates the behavior of a single task.

    The following figure gives an example of the composite task and its tasks for the simultaneous laser tracing on a table and a barrel, used in previous paragraphs. The goal is to (in this order):

    1. Move to a certain starting pose (position and orientation)
    2. Trace a sine on a table and a circle on a barrel, if a barrel is detected
    3. Trace a sine on a table, if no barrel is detected.

    In the figure, a (sub-)FSM is represented by a purple rounded box, a state by a rounded black box and a possible state transition by an arrow. State transitions are triggered by an event or combination of events. The state transitions of the task subFSMs, indicated by a colored arrow and circle, are caused by the event with the corresponding color, fired in the composite_task_fsm.lua.

    To prevent overloading the figure, only a limited number of actions is shown, e.g. only the entry part of the state and not the exit part (which will, e.g., deactivate the trajectory generator and tasks that were activated). SubFSMs of the running state

    The composite state of the example in the figure consists of 4 states.

    1. The initial state is the "moveToStart" state,
      1. which activates the needed trajectory generator (actually a set point generator),
      2. raises an event to cause the needed cartesian_motion task to reach a "moveToPose" state
      3. and then calls the Lua function "CartesianMoveTo()", which is implemented in the itasc_supervisor.lua (no RTT specifics in the statemachine, remember!).
    2. Depending on the presence of a barrel (somehow detected, and notified to the FSMs by an event), there is a state transition to the "traceSine" or "traceSineAndCircle" state after completing the "moveToStart" movement (notified by another event). Either of these states will
      1. activate the right trajectory/set point generator,
      2. send an event that causes the running_table_tracing_fsm.lua subFSM of the table_tracing task (and running_barrel_tracing_fsm.lua subFSM of the barrel_tracing task) to transition to a traceFigure state.
    3. After completion of the tracing task (or another stop-transition causing event), the composite state machine will reach a stop(ped) state.
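    The composite task subFSM of this example might be sketched in rFSM as follows (the event names on the transitions and the raise_event helper are illustrative; the state names follow the figure):

```lua
local function raise_event(e) print("raising " .. e) end  -- illustrative helper

return rfsm.state {
   moveToStart = rfsm.state {
      entry = function() raise_event("e_moveToPose") end,  -- towards the cartesian_motion task
   },
   traceSine = rfsm.state {
      entry = function() raise_event("e_traceFigure_table") end,
   },
   traceSineAndCircle = rfsm.state {
      entry = function() raise_event("e_traceFigure_table_and_barrel") end,
   },
   stopped = rfsm.state {},

   rfsm.trans { src='initial', tgt='moveToStart' },
   rfsm.trans { src='moveToStart', tgt='traceSine',
                events={ 'e_startReached_noBarrel' } },
   rfsm.trans { src='moveToStart', tgt='traceSineAndCircle',
                events={ 'e_startReached_barrel' } },
   rfsm.trans { src='traceSine', tgt='stopped', events={ 'e_stopTracing' } },
   rfsm.trans { src='traceSineAndCircle', tgt='stopped', events={ 'e_stopTracing' } },
}
```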

    As can be seen, the composite task FSM just sends an event to trigger the task subFSMs to reach the appropriate state. The task subFSM will take care of task specific behavior, e.g.

    • select the feature coordinates to constrain
    • activate these constraints
    • change the weight of a specific constraint
    • alter control parameters (gains...)
    • ...

    Doing so, the tasks can easily be adapted, swapped, changed or downloaded.

    Note: The names of the tasks are specific, i.e. they are the names of the components that are used for the tasks. The name of the task package will be more general, e.g. a xyPhiThetaPsiZ_PID task (named after the structure of its VKC and controller type). Cf. the class/object distinction of object oriented programming.

    Event types

    There are currently three types of events that differ in how they are communicated (see also 'Communication') and treated:
    • Common events: events that 'can wait' to be handled, together with other events, in the next update/iteration, e.g. "e_startItasc" event,
    • Priority events: events that can't wait to be handled in the next update, e.g. "e_emergency" (fired in case of a failure),
    • Trigger events: events that trigger other state machines or components, typically used to ensure an algorithm distributed over multiple FSMs is executed during the correct time step and in the right order, e.g. "e_triggerTasks", fired by running_itasc_coordination.lua.

    Composition

    Communication

    All communication of data, including events, is done over Orocos ports. The FSMs communicate their events via the event ports of the components they are loaded in (the supervisors). There are separate ports for each type of event (an input and an output event port for each type):
    • Common events: Communicated over a buffered connection
    • Priority events: Communicated over a buffered connection on event-triggered ports
    • Trigger events: Communicated over a non-buffered connection on event-triggered ports

    Conventions

    To automate the majority of the scripting, the following conventions are taken into account in the examples:
    • the components and scripts have the task name in their names, e.g. for cartesian_motion
      • package name: cartesian_motion
      • component names: VKC_cartesian_motion.hpp
      • script names: cartesian_motion_supervisor.lua
    • For conformity it is advised to use lower case names with underscores to separate words.
    Attachment: iTaSC_Manual.pdf (396.63 KB)

    How to create a new task?

    Please read the iTaSC_Manual first, to get acquainted with the iTaSC terminology and structure. A task is the combination of a virtual_kinematic_chain and a constraint/controller. In the iTaSC software, it is a (ROS-)package that contains:

    • src/ containing the source code of the components (C++)
      • VKC_taskname.hpp/cpp
      • CC_taskname.hpp/cpp
    • cpf/ containing the property files for the components and FSM's
    • scripts/ containing the FSM's and components in a LUA implementation
      • taskname_supervisor.lua: the supervisor orocos-component, containing the Orocos specific code, e.g. ports to receive/send events on...
      • taskname_fsm.lua: the finite-state machine containing the actual Skill
      • running_taskname_coordination.lua: the sub-finite-state machine of the running state of the task, containing the coordination part of the task, it determines what the task part of the iTaSC algorithm should do and in which order
      • running_taskname_fsm.lua: the sub-finite-state machine of the running state of the task, determining the actual actions of the task (when to enable the task, when to change weights...)
    • launch/ containing any ROS launch files

    The running_taskname_coordination.lua and running_taskname_fsm.lua are sub-FSM's of the running state of the task (defined in taskname_fsm.lua). They are executed sequentially: first the coordination part, then the FSM part.
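    Put together, a task package laid out according to the list above looks like this (taskname is a placeholder):

```
taskname/
├── src/
│   ├── VKC_taskname.hpp / VKC_taskname.cpp
│   └── CC_taskname.hpp / CC_taskname.cpp
├── cpf/
├── scripts/
│   ├── taskname_supervisor.lua
│   ├── taskname_fsm.lua
│   ├── running_taskname_coordination.lua
│   └── running_taskname_fsm.lua
└── launch/
```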

    Getting started

    A task consists of a Virtual Kinematic Chain (VKC) (except for constraints on joints only) and a Constraint Controller (CC).

    Virtual Kinematic Chain (VKC)

    A VKC inherits from VirtualKinematicChain.hpp in the itasc_core unary stack, which serves as a basic template.

    Important are the expected reference frames and points for the data on the following ports (o1 = object 1, o2 = object 2):

    • Inputs
      • $T_{o1}^{o2}$ = RelPose(o2|o2,o1|o1,o1) = pose of o2 on body o2 wrt. o1 on body o1 expressed in o1
      • $_{o1}^{o1}t_{o2}^{o1}$ = RelTwist(o1|o1,o1,o2) = twist with ref.point o1 on object o1 expressed in o1
    • Outputs
      • $T_{o1}^{o2}$ = RelPose(o2|o2,o1|o1,o1) = pose of o2 on body o2 wrt. o1 on body o1 expressed in o1
      • $J_{u}\chi_{u}$ =
      • $_{o1}^{o2}J_{f o1}^{o1}$ = Jacobian(o1|o2,o1,o1) = ref.point o1 on object o2 expressed in o1

    The expected references are also mentioned as comments in the files.

    Constraint/Controller (CC)

    A Constraint/Controller inherits from ConstraintController.hpp in the itasc_core unary stack, which serves as a basic template. task_layout

    A full template will be made available soon... At the moment, start from an example... Have a look at the keep_distance task-package (in the itasc_comanipulation_demo stack) as a good example of a task. Special cases are:

    • cartesian_motion: This task defines a virtual kinematic chain that, instead of the feature coordinates x, y, z, roll, pitch, yaw, uses the full pose (KDL::Frame), to enhance the efficiency of the code (and prevent singularity problems). A separate port, the ChifT port, transfers this data from the VKC to the CC; at the moment it has to be connected manually! ChifT = RelPose(o2|o2,o1|o1,o1) = pose of o2 on body o2 wrt. o1 on body o1 expressed in w
    • joint_motion: This task has no virtual kinematic chain, because its output equation is y=q. It constrains only the joint coordinates.

    Conventions

    To automate the majority of the scripting, the following conventions are taken into account in the examples; it is recommended to follow them for new tasks too:
    • the components and scripts have the task name in their names, e.g. for cartesian_motion
      • package name: cartesian_motion
      • component names: VKC_cartesian_motion.hpp
      • script names: cartesian_motion_supervisor.lua
    • For conformity it is advised to use lower case names with underscores to separate words

    How to create a new robot or object?

    A new robot or object component should inherit from SubRobot.hpp, which can be found in the itasc_core. This file is a template for a robot or object component. See the itasc_robots_objects stack for examples.

    As can be seen in the examples, a robot component always contains a KDL::Tree, even if the robot is just a chain. This is to be able to use the KDL::Tree functionality, which is, regrettably, not perfectly similar to the KDL::Chain functionality. E.g. tree.getSegment(string name) has a string as input, while chain.getSegment(number) has a number as input, not a string...

    How to create a new solver?

    Coming soon, have a look at itasc_solvers for examples.

    iTaSC API


    iTaSC tutorials

    List of iTaSC tutorials. Please report any issues on the Orocos users or dev mailing list.

    Youbot lissajous tracing tutorial (Cartesian VKC)

    Summary

    This tutorial explains how to create an application to trace a Lissajous figure with a KUKA youBot, starting from existing packages (itasc_core, robots, tasks, solvers, trajectory generators...). The code was developed by the Leuven team during the BRICS research camp 2011.

    Installation

    Dependencies

    • itasc and its dependencies
    • trajectory_generators
    • youbot drivers
    • ROS Electric

    The easiest way to install all needed dependencies: (How to find the debian packages on ros.org)

    • ROS Electric
    • Orocos toolchain
      • sudo apt-get install ros-electric-rtt-common-msgs
      • sudo apt-get install ros-electric-rtt-ros-comm
      • sudo apt-get install ros-electric-rtt-ros-integration
      • git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
    • Orocos kinematics and dynamics
      • sudo apt-get install ros-electric-orocos-kinematics-dynamics
    • rFSM
      • needs lua:
        • sudo aptitude install liblua5.1-0-dev
        • sudo aptitude install liblua5.1-0
        • sudo aptitude install lua5.1
        • git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
      • git clone https://github.com/kmarkus/rFSM.git
    • Trajectory Generators
      • git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
    • youbot hardware stack
      • git clone http://git.mech.kuleuven.be/robotics/youbot_hardware.git -b devel
        • this depends on: git clone http://git.mech.kuleuven.be/robotics/soem.git
      • git clone git://git.mech.kuleuven.be/robotics/motion_control.git -b devel
    • the youbot_description package of the youbot-ros-pkg (= a stack); no need to compile it! WARNING: there are two repos around, make sure you have this one!
      • git clone https://github.com/smits/youbot-ros-pkg.git
    • iTaSC
      • git clone http://git.mech.kuleuven.be/robotics/itasc.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git (then switch to the devel branch)

    Download the solution of the tutorial

    git clone http://git.mech.kuleuven.be/robotics/itasc_examples.git

    Setup

    • It is strongly recommended that you add the following to a setup script or your .bashrc:
      • Make sure that all packages are added to your ROS_PACKAGE_PATH variable
      • Source env.sh in the orocos_toolchain stack
      • Set the LUA_PATH variable:

    if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
    if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
     
    export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find youbot_master_rtt`/lua/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
     
    export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
     
    export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
    • Create the youbot.urdf file out of the youbot.urdf.xacro file
      • cd `rospack find youbot_description`/robots/ (part of the youbot-ros-pkg)
      • rosrun xacro xacro.py youbot.urdf.xacro -o youbot.urdf

    Make

    rosmake itasc_youbot_lissajous_app

    The tutorial

    Note: The solution doesn't make use of the templates of the application and itasc level FSMs yet. The behavior should be the same, but you'll find more (copied) files in the scripts folder than you will have created in your own folder when following this tutorial. (Don't worry, you'll notice what is copied and what is not.)

    This tutorial explains how to create an iTaSC application, starting from existing packages. The scheme we want to create is depicted in the following figure:

    itasc_youbot_app scheme

    The tutorial will follow the design workflow as explained here.

    List of all tasks/motions to be executed

    1. Let the end effector go to a start position, with respect to a certain point in space
    2. Let the end effector trace a lissajous figure in the air, with respect to a certain point in space

    Draw the kinematic loops for each task

    The two motions constrain the same relationship between the end effector of the robot and a fixed frame in space. Therefore, the same task can be used for both motions; the only difference is a different input from the trajectory generators. The following figure shows the kinematic loop describing the task:
    • frames
      • o_1=f_1=end effector
      • o_2=f_2=fixed object
    • feature coordinates
      • X_fI=(-)
      • X_fII=in this case actually the full pose!
      • X_fIII=(-)
    • outputs
      • y= X_f = T

    itasc_youbot_app kinematic loop

    Draw the behavior of the application at runtime

    This will afterwards be implemented in the composite_task_fsm.lua and the running_cartesian_tracing_fsm.lua files. Consult 'subFSM's of the running state' of the iTaSC user guide for more information. The following figure depicts the behavior of this application.

    In the figures, a (sub-)FSM is represented by a purple rounded box, a state by a rounded black box and a possible state transition by an arrow. State transitions are triggered by an event or combination of events. The state transitions of the task subFSM that are indicated by a colored arrow and circle are caused by the event with the corresponding color, fired in the composite_task_fsm.lua (figure: composite task FSM of the youbot_lissajous_app). To automatically transition from the MoveToStart to the TraceFigure state, an event indicating that the start position is reached must be fired. This event will be generated by the 'cartesian_generator' (figure: running_cartesian_tracing_fsm of the youbot_lissajous_app).

    Create or download all necessary packages

    • general components and scripts (from the itasc_core stack)
      • scene
      • start from the itasc and application level script templates in the script subdir
    • robots and objects (from the itasc_robots_objects stack)
      • youbot: itasc_youbot package
      • fixed_object: fixed_object package
    • task (from the itasc_tasks stack)
      • cartesian_tracing: cartesian_motion package
    • solver (from the itasc_solvers stack)
      • solver: wdls_prior_vel_solver package
    • trajectory generators (from the trajectory_generators stack)
      • cartesian_generator: cartesian_trajectory_generator package
      • lissajous_generator: lissajous_generator package

    Overview of the modifications needed:

    • Computation: No modifications needed
    • Coordination: Modifications needed
      • for the specific behavior at runtime, see 'behavior at runtime'
      • for the other behavior, see following sections
    • Configuration: Modifications needed
      • see 'Check, create or adapt the configuration files of your components'
    • Communication and Composition:
      • see 'Create a deploy script'

    Create an empty ROS-package for your application and create 2 subdirectories:

    • scripts: this subdirectory will contain the scripts adapted for your application
    • cpf: this subdirectory will contain all (non-standard) property files for your application

    Create the FSM of the cartesian_tracing task

    A FSM on the task level consists of 3 parts (see also 'the 3 FSM levels' of the iTaSC manual), for this task:
    • cartesian_tracing_fsm.lua: this is the actual FSM, other files are loaded in certain states of this FSM,
    • running_cartesian_tracing_coordination.lua: this part takes care of the coordination of the algorithm at runtime, it is part of the running state of the actual FSM,
    • running_cartesian_tracing_fsm.lua: this part takes care of the coordination of the behavior at runtime, it is also part of the running state of the actual FSM and is executed after the coordination of the algorithm,

    Templates of these files can be found in the cartesian_motion package, scripts subdirectory (cartesian_tracing is an instance of cartesian_motion).

    • cartesian_tracing_fsm.lua: you can use the template without modifications: no need to change the name, just use cartesian_motion_fsm.lua,
    • running_cartesian_tracing_coordination.lua: you can use the template without modifications: no need to change the name, just use running_cartesian_motion_coordination.lua,
    • running_cartesian_tracing_fsm.lua: this file has to be edited (application dependent); copy it to the scripts subdirectory of the package you have created for this application and leave it for now. The section 'Create the FSMs that coordinate the behavior at runtime' explains what to edit here,

    The actual FSM is loaded in the cartesian_tracing_supervisor component (which is written in the Lua language, hence the .lua file). Since you'll (probably) need to add functions that execute RTT specific code in the running_cartesian_tracing_fsm, copy this file into the scripts subdirectory of the package you have created for this application. Leave it for now.

    The FSM for this application consists of multiple files in different locations; the cartesian_tracing_supervisor has properties that contain the paths to these files. Create a property file (for example cartesian_tracing_supervisor.cpf) in the scripts subdirectory of the package you created for this application and edit these properties.
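    Such a property file is plain Orocos .cpf XML. A minimal sketch, assuming the supervisor exposes one string property per FSM file (the property names below are illustrative; verify the actual names on the deployed component, e.g. with ls in the taskbrowser):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "cpf.dtd">
    <properties>
      <!-- hypothetical property names: check them on the real component -->
      <simple name="task_fsm_file" type="string">
        <value>/path/to/cartesian_motion/scripts/cartesian_motion_fsm.lua</value>
      </simple>
      <simple name="running_fsm_file" type="string">
        <value>/path/to/your_app/scripts/running_cartesian_tracing_fsm.lua</value>
      </simple>
    </properties>
    ```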

    There is no timer_id property for task supervisors, because tasks are triggered by events from the iTaSC level.

    Create the FSM on the iTaSC level

    The FSM on the iTaSC level consists of 4 parts (see also 'the 3 FSM levels' of the iTaSC manual):
    • itasc_fsm.lua: this is the actual FSM; other files are loaded in certain states of this FSM. It is responsible, among other things, for configuring, starting, stopping (and cleaning up) the iTaSC level components,
    • running_itasc_coordination.lua: this part takes care of the coordination of the algorithm at runtime, it is part of the running state of the actual FSM,
    • composite_task_fsm.lua: this part takes care of the coordination of the behavior at runtime, it is also part of the running state of the actual FSM and is executed after the coordination of the algorithm,
    • itasc_configuration: this file contains the description of the scene and the composition of the kinematic loops.

    Templates of these files can be found in the itasc_core package, scripts subdirectory.

    • itasc_fsm.lua: you can use the template without modifications, because this application makes use of the template components of the cartesian_motion package, which are configured/started... by the template FSM,
    • running_itasc_coordination.lua: you can use the template without modifications,
    • composite_task_fsm.lua: this file has to be edited (it is application dependent). Copy it to the scripts subdirectory of the package you created for this application and leave it for now; the section 'Create the FSMs that coordinate the behavior at runtime' explains what to edit,
    • itasc_configuration.lua: this file has to be edited (it is application dependent). Copy it to the scripts subdirectory of the package you created for this application.

    Edit the itasc_configuration.lua file you have just copied: define the scene and the kinematic loops as depicted in the figures of the first steps of this tutorial. See the comments in the template for more information on the syntax.

    The actual FSM is loaded in the itasc_supervisor component (which is written in the lua language, hence the .lua file). Since you'll (probably) need to add functions to execute RTT specific code in the composite_task_fsm, make a copy of this file to your scripts subdirectory of the package you have created for this application. Leave it for now.

    The FSM for this application consists of multiple files in different locations; the itasc_supervisor has properties that contain the paths to these files. Create a property file (.cpf) in the scripts subdirectory of the package you created for this application and edit these properties. The itasc_supervisor and application_supervisor both have a property "application_timer_id": make sure these have the same value, in this case e.g. 1. The timer id ensures that both components are triggered by the same timer.

    Create the FSM on the application level

    This is similar to the creation of the FSMs on the other levels. The FSM on the application level for this application consists of only 1 part (see also 'the 3 FSM levels' of the iTaSC manual):
    • application_fsm.lua: this is the actual FSM.

    A template of this file can be found in the itasc_core package, scripts subdirectory.

    • application_fsm.lua: this file has to be edited (application dependent), copy this file to the scripts subdirectory of the package you have created for this application.

    The application FSM is loaded in the application_supervisor component (which is written in the lua language, hence the .lua file). Since you'll (probably) need to add functions to execute RTT specific code in the application_fsm, make a copy of this file to your scripts subdirectory of the package you have created for this application.

    Edit the application_fsm and application_supervisor files:

    • Check the functions called in the application_fsm and verify that the right RTT specific code is present in the application_supervisor, e.g. configureTrajectoryGenerators(); an example can be found in the template file itself.
    • Add new functions for application specifics: the idea is to put a function in the FSM and the implementation with the RTT specifics in the supervisor.

    Make sure that you configure, start, stop (and cleanup) all application level components in this state machine!

    The files of the application FSM can be in different locations; the application_supervisor has properties that contain the paths to these files. Create a property file (application_supervisor.cpf) in the scripts subdirectory of the package you created for this application and edit these properties. The itasc_supervisor and application_supervisor both have a property "application_timer_id": make sure these have the same value, in this case e.g. 1. The timer id ensures that both components are triggered by the same timer.

    Check, create or adapt the configuration files of your components

    The following subsections explain which property files (.cpf files) to edit. How to create such a .cpf file is explained in the Orocos Component Builder's Manual. An alternative is to deploy your component, write its properties with their current values to a file, and then adapt the values in that file. This alternative lets you create a cpf file without learning the XML syntax.
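    In the deployer console, the alternative can look like the following sketch; it assumes a component named youbot has already been loaded, and uses the standard RTT marshalling service:

    ```
    loadService("youbot", "marshalling")
    youbot.marshalling.writeProperties("youbot.cpf")
    ```

    Afterwards, open the generated youbot.cpf in a text editor and adapt the values.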

    Configuration of the robot and object

    The application has one robot (the youbot) and one object (the fixed_object). Make a copy of the youbot.cpf file in the cpf subdirectory you just created. You can find the original file in the cpf subdirectory of the itasc_youbot package. The fixed_object doesn't need a cpf file. In the youbot.cpf file, set the desired values of the properties of the youbot component:
    • set the urdf_file property to the location of the urdf file of the youbot on your system,
    • leave the other properties as they are (all elements of W = 1; the joint names of the arm count up from 1 to 4; the joint names of the base are baseTransX, baseTransY, baseRotZ, in that order).

    Configuration of the solvers

    No changes needed.

    Configuration of the task

    The application has one task: cartesian_tracing, which is an instance of cartesian_motion. The constraint/controller has to be tuned for the application: create a CC_cartesian_tracing.cpf file in the cpf subdirectory you just created. In this file, set the desired values of the properties:
    • All feature coordinates have the same weight: set all elements of W to 1.
    • Tune the control values: Kp (set all to 2 for now).
    • We want to use velocity feed-forward: set all elements of Kff to 1.
    • Set the rotation type to 0 (RPY) and the rotation reference frame to 1 (= object 1).

    Configuration of the trajectory generators

    The application has two trajectory generators: cartesian_generator and lissajous_generator. Create for both a cpf file in the cpf subdirectory you just created.
    • In the cartesian_generator.cpf file, add:
      • the maximum velocity in m/s (in cartesian space): put for now 0.05,
      • the maximum acceleration in m/s^2 (in cartesian space): put for now 0.02
    • In the lissajous_generator.cpf file, add:
      • the frequency of the sine in the x direction, in Hz: 0.06,
      • the frequency ratio of the sine in the y direction vs. the sine in the x direction: 0.666666 (meaning the second frequency will be 0.666666 × 0.06 Hz = 0.04 Hz),
      • amplitude of the sine in the x direction, in m: 0.25,
      • amplitude ratio of the sine in the y direction vs. the sine in the x direction: 1,
      • phase difference between the sines in radians (phase of x-phase of y): 0,
      • the index of the yd vector (containing the desired positions), that needs a fixed value as constraint, starting from 0. In our case z: 2,
      • the constant desired value that we constrain on the position determined with the previous property, in meters: 0.45.
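    To see what these settings mean together, here is an illustrative Lua sketch (not the generator component's actual code) of the desired signal they produce; z is the constrained index of yd, held at 0.45 m:

    ```lua
    -- sketch only: reproduces the lissajous settings above
    local fx     = 0.06      -- x frequency [Hz]
    local fratio = 0.666666  -- y/x frequency ratio
    local Ax     = 0.25      -- x amplitude [m]
    local Aratio = 1         -- y/x amplitude ratio
    local phase  = 0         -- phase difference x vs. y [rad]

    local function desired(t)
       local x = Ax * math.sin(2 * math.pi * fx * t + phase)
       local y = Aratio * Ax * math.sin(2 * math.pi * fratio * fx * t)
       local z = 0.45        -- fixed constraint on yd index 2
       return x, y, z
    end
    ```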

    Create a deploy script

    The deploy script's primary responsibilities are:
    • the loading of components (making an instance of a certain component type),
    • the composition of the non-iTaSC level components,
    • the connection of the non-iTaSC level components,
    • starting the supervisors and timers to get everything running.

    Start from the following templates, which you can find in the itasc_core package, scripts subdirectory:

    • run.ops
    • run.sh

    Copy these files to the package you have created for this application.

    Edit the run.ops file (see also comments in template):

    • import components (requires a correctly set up RTT_COMPONENT_PATH or ROS_PACKAGE_PATH)
    • load components
    • set activities
      • periodic activities: general and application level components
      • non-periodic activities: all iTaSC and task level components and the application_supervisor
    • connect peers
    • execute lua files (this must happen before loading the property files)
    • configure lua components (already here so they can create the ports and properties before we connect/load them)
    • load property files
    • connect ports
      • create connectionPolicies: buffered/ non-buffered connections
      • timer ports
      • event ports
      • application_supervisor connections
        • add connections between the application supervisor and your tasks, for the priority events only, with a buffered connection policy
      • itasc_supervisor connections
        • add connections between the itasc supervisor and your tasks for all types of events, with the right type of connection
      • add connections between components that fire events and the FSMs that need to react on them
      • task ports
        • add connections between application level components and task level components, e.g. trajectory generators and CC, with the right connection, normally cp
        • add connections between the CC and VKC for non standard ports, e.g. the pose in case of a cartesian_motion task, standard itasc level ports are connected automatically
    • configure timers
    • start timers
    • start the supervisors
    • the order is important! First the tasks, then the itasc_supervisor, then the application_supervisor!
    • Set up the timer
      • first argument: the timer identity number,
      • second argument: the timer period in seconds.
      • Make sure that all application and iTaSC level supervisors that have to be triggered at the same moment have the same timer_id, in this case: application_supervisor and itasc_supervisor. They have a property application_timer_id for this purpose, set to 1 by default.

    Put the following before configuring the timer:

    # we have to configure it first to get the ports connected, maybe better to put all this in the application_fsm.lua
    youbot_driver.configure()                                    
    connect("youbot.qdot_to_arm", "youbot_driver.Arm1.joint_velocity_command", cp)
    connect("youbot.qdot_to_base", "youbot_driver.Base.cmd_twist", cp)       
    connect("youbot_driver.Arm1.jointstate", "youbot.q_from_arm", cp)   
    connect("youbot_driver.Base.odometry", "youbot.q_from_base", cp) 

    The template automatically creates an eventFirer, a component with ports connected to the event ports of the itasc_supervisor and application_supervisor. This lets you easily fire events yourself at runtime by writing an event to one of those ports.

    Create the FSMs that coordinate the behavior at runtime

    This section explains how to create the finite state machines that coordinate the behavior at runtime, which is already drawn in the section 'Draw the behavior of the application at runtime' above and explained in detail in 'subFSM's of the running state' of the iTaSC user guide. In this application it consists of the interaction of two state machines: composite_task_fsm.lua at the iTaSC level and running_cartesian_tracing_fsm.lua at the task level (there is only one task in this application).

    For both levels:

    • The idea is to put a function in the FSM and the implementation with the RTT specifics in the supervisor.
    • Make sure the task FSM reacts to the right events, sent out by the composite_task_fsm.
    • The section 'Create a deploy script' explains how to get the events from other state machines and components in your state machine.

    iTaSC level

    Start from the template composite_task_fsm.lua in the itasc_core package, scripts subdirectory. Copy this file to the scripts subdirectory of the package you created for this application, and implement the composite_task_fsm state machine as drawn above. In the figure, a bullet with a circle represents the initial transition. In this case, the initial transition shown in the figure is preceded by the obligatory 'initialize' and 'initialized' states, which are already present in the template.

    The event needed for the transition from the MoveToStart to the TraceFigure state will be sent out by the cartesian_generator. Look in its code for the event name.
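    As an illustration, the skeleton of such a state machine in rFSM syntax could look as follows. This is a sketch only: the state names follow the figure, and the event name e_finished is a placeholder for whatever the cartesian_generator actually fires.

    ```lua
    -- sketch: placeholder event name, check the cartesian_generator source
    return rfsm.state {
       MoveToStart = rfsm.state {},
       TraceFigure = rfsm.state {},

       rfsm.trans { src = 'initial',     tgt = 'MoveToStart' },
       -- fired by the cartesian_generator when the start pose is reached
       rfsm.trans { src = 'MoveToStart', tgt = 'TraceFigure',
                    events = { 'e_finished' } },
    }
    ```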

    Task level

    Edit the running_cartesian_tracing_fsm.lua that you created in the section 'Create the FSM of the cartesian_tracing task' above, and implement the running_cartesian_tracing_fsm state machine as drawn above.

    Configuration of the solution of the tutorial

    • Set up the binaries to avoid running as root (required to grant the ethercat driver non-root raw socket access)
      •  roscd ocl/bin/
      •  for i in deployer* rttlua*; do sudo setcap cap_net_raw+ep  $i; done
    • Platform configuration
      •  roscd youbot_master_rtt/lua/
      • in youbot_test.lua configure your youbot type and network interface to which the youbot ethercat is connected.
        • FUERTE_YOUBOT=false -- false Malaga, true is FUERTE
        • ETHERCAT_IF='ethcat'

    Execution of the solution of the tutorial

    Connection with ethercat check (optional)

    • roscd soem_core/bin
    • sudo ./slaveinfo eth0 (or ethcat...)

    Calibration

    • Open a terminal window
    • roscd youbot_master_rtt/lua
    • rttlua-gnulinux -i youbot_test.lua
    • Once youbot_test is up and running without errors:
      • start_calib()
      • kill the application when it reached the home position (upright): ctrl+C

    Application

    • Open 2 terminal windows
    • In the first, run a roscore: roscore
    • In the second, go to itasc_youbot_lissajous_app: roscd itasc_youbot_lissajous_app
    • run the application: ./run.sh
    • When your application has gone through the configuration and starting phases, it will reach the running state: you should see a line on your screen saying "===Application up and running!==="
    • To interact with the composite_task_fsm, you can send events to it:
      • Start the full application (go to start pose and start tracing the lissajous figure after that): event_firer.itasc_common_events_in.write("e_start")
      • More event names can be found in scripts/composite_task_fsm.lua => transitions

    FAQ

    • Q: My robot is not moving when I send e.g. the e_start event
    • A:
      1. Check the values sent by the solver to the youbot: go to the scene in the taskbrowser and type: youbot_qdot.last. If this is NaN, see the answer to that question below.
      2. Check whether your arm is in Velocity mode: go to the youbot_driver in the taskbrowser and type: Arm1.control_mode.last. If it responds with 'MotorStop', type: Arm1.setControllerMode(Velocity)
    • Q: The solver is sending NaN as qdot
    • A: Check that your Wq is not a zero matrix (in itasc_youbot package, cpf/youbot.cpf)

    Youbot lissajous tracing tutorial (ERF2012)

    Summary

    This tutorial explains how to create an application to trace a Lissajous figure with a KUKA youBot, starting from existing packages (itasc_core, robots, tasks, solvers, trajectory generators...). The tutorial consists of a laser tracing task with a non-cartesian Virtual Kinematic Chain (a chain including the distance along the laser), a cartesian_motion task for the movement to the initial pose, and joint limit avoidance. The higher level FSM (composite_task_fsm.lua) makes it easy to switch settings, enabling a better understanding of some basic iTaSC principles. The tutorial was given as a hands-on workshop at the European Robotics Forum 2012, accompanied by these slides.

    Installation

    Ubuntu Installation with ROS Electric

    Installation instructions

    Ubuntu 12.04 Installation with ROS Fuerte

    • Install ROS Fuerte using Debian packages for Ubuntu Precise (12.04) or later. In case you don't run Ubuntu, you can use the ROS install scripts. See the ROS installation instructions.
    • Make sure the following debian packages are installed:

    sudo apt-get install ros-fuerte-pr2-controllers
    sudo apt-get install ros-fuerte-pr2-simulator

    • Create a directory in which you want to install all the demo source (for instance erf)

    mkdir ~/erf

    • Add this directory to your $ROS_PACKAGE_PATH

    export ROS_PACKAGE_PATH=~/erf:$ROS_PACKAGE_PATH

    • Get rosinstall

    sudo apt-get install python-setuptools
    sudo easy_install -U rosinstall

    • Get the workshop's rosinstall file and save it as erf_fuerte.rosinstall in the erf folder.
    • Run rosinstall

    rosinstall ~/erf erf_fuerte.rosinstall /opt/ros/fuerte/

    • As rosinstall tells you, source the setup script

    source ~/erf/setup.bash

    • Install all dependencies (ignore warnings)

    rosdep install itasc_examples
    rosdep install rFSM

    Setup

    • Add the following functions in your $HOME/.bashrc file:

    useERF(){
        source $HOME/erf/setup.bash;
        source $HOME/erf/setup.sh;
        source `rosstack find orocos_toolchain`/env.sh;
        setLUA;
    }
     
    setLUA(){
        if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
        if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
        export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find youbot_driver_rtt`/lua/?.lua"
        export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
        export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
        export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
    }
     
    useERF

    Make

    Compile the workshop sources
    rosmake itasc_erf2012_demo

    The tutorial

    see these slides

    List of all tasks/motions to be executed

    Draw the kinematic loops for each task

    Draw the behavior of the application at runtime

    Create or download all necessary packages

    Create the FSM on the iTaSC level

    Create the FSM on the application level

    Check, create or adapt the configuration files of your components

    Create a deploy script

    Create the FSMs that coordinate the behavior at runtime

    Execution of the solution of the tutorial

    Execution

    Gazebo simulation

    • Open a terminal and run roscore

    roscore
    • Open another terminal and launch an empty gazebo world

    roslaunch gazebo_worlds empty_world.launch
    • Open another terminal and go to the itasc_erf_2012 package:

    roscd itasc_erf2012_demo/
    • Run the script that starts the gazebo simulator (and two translator topics to communicate with the itasc code)

    ./run_gazebo.sh

    • Open another terminal and go to the itasc_erf_2012 package:

    roscd itasc_erf2012_demo/
    • Run the script that starts the itasc application

    ./run_simulation.sh
    • Look for the following output:

    itasc fsm: STATE ENTER
    root.NONemergency.RunningITASC.Running
    [cartesian generator] moveTo will start from

    Real youbot

    • Make sure that you are connected to the real youbot.
    • Open another terminal and go to the itasc_erf_2012 package:

    roscd itasc_erf2012_demo/
    • Check the name of the network connection with the robot (for instance eth0) and put this connection name in the youbot_driver cpf file (cpf/youbot_driver.cpf)
    • Run the script that starts the itasc application

    ./run.sh
    • Look for the following output:

    itasc fsm: STATE ENTER
    root.NONemergency.RunningITASC.Running
    [cartesian generator] moveTo will start from

    Command the robot

    Interact with the iTaSC level FSM @ runtime by sending events to it. There are two ways to do so:
    • One way is to send events in the Orocos task browser through the event_firer component.
      • To send an event, e.g. "e_my_event", type in the Task Browser:

    event_firer.itasc_common_events_in.write("e_my_event")
      • Possible events (as indicated on the composite_task_fsm scheme)
        • e_start_tracing: start tracing the figure, default: z off, rot on, penalize base
        • e_toggle_robot_weights: toggle between penalize base and penalize arm
        • e_toggle_z_constraint: toggle between z off and z on
        • e_toggle_rot_constraint: toggle between rot off and rot on
    • (RECOMMENDED) Another, more user-friendly way, is to send events on the /itasc/ ros_common_events_in ROS topic. This can also be used in a graphical way, by using the run_eventgui.sh executable. It launches a QT based GUI that uses the configuration files in the launch directory. It will show the possible events to interact with the application as clickable buttons. You'll need to download the following code in order to use this GUI: https://bitbucket.org/apertuscus/python_gui

    FAQ

    • I get the message "starting from" (a unity matrix) and the simulated robot doesn't move
      • In the new Youbot model, the /odom topic changed to /base_odometry/odom. This is adapted on the master branch; alternatively, change it manually in the run_simulation.ops file. You can check whether this is causing the problem by reading the youbot.q_from_base port and checking if it returns "NoData".
    • I get the error "[PropertyLoader:configure] The type 'KDL.JntArray' did not provide a type composition function, but I need one to compose it from a PropertyBag."

    • A: See commits 54398d0653067580edd5c5ec66bda5eac0aa29e4 and 81e5fab65ee3587056a4d5fda4eb5ce796082eaf

    human-PR2 comanipulation demo

    Summary

    This tutorial explains the human-robot comanipulation demo with the PR2 as demonstrated at IROS 2011, San Francisco, California (incl. video). Detailed information on the kinematic loops can be found in iTaSC_comanipulation_demo.pdf, downloadable at the end of this page. The following paper contains detailed information on the application, including the force nulling control design: Vanthienen, D., De Laet, T., Decré, W., Bruyninckx, H., De Schutter, J. (2012). Force-Sensorless and Bimanual Human-Robot Comanipulation. 10th IFAC Symposium on Robot Control. Dubrovnik, Croatia, 5-7 September 2012 (art.nr. 127).

    Installation

    Dependencies

    • itasc and its dependencies
    • trajectory_generators
    • ROS Electric or Fuerte is required for this tutorial (core and PR2 functionality)
    • the following ROS packages
      • tf
      • tf_conversions
      • geometry_msgs
      • pr2_controllers
      • pr2_kinematics
      • pr2_robot

    Instructions for ROS Electric

    The easiest way to install all needed dependencies:
    • ROS Electric and how to find the debian packages on ros.org
    • PR2 related code look at
    • Orocos toolchain (use version/branch toolchain-2.5)
      • get the Orocos toolchain, if you don't have it yet, it makes sense for this application to get it the ROS way
      • sudo apt-get install ros-electric-rtt-common-msgs
      • sudo apt-get install ros-electric-rtt-ros-comm
      • sudo apt-get install ros-electric-rtt-ros-integration
      • git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
    • Orocos kinematics and dynamics
      • sudo apt-get install ros-electric-orocos-kinematics-dynamics
    • rFSM
      • needs lua:
        • sudo aptitude install liblua5.1-0-dev
        • sudo aptitude install liblua5.1-0
        • sudo aptitude install lua5.1
      • rtt-lua tab completion: git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
      • git clone https://github.com/kmarkus/rFSM.git
    • opencv_additions (dependencies of findFace)
      • git clone http://git.mech.kuleuven.be/robotics/opencv_additions.git
    • Trajectory Generators
      • git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
    • iTaSC
      • git clone http://git.mech.kuleuven.be/robotics/itasc.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
    • rtt-ros integration messages (more info)
      • rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs

    Instructions for ROS Fuerte

    The easiest way to install all needed dependencies: (How to find the debian packages on ros.org)
    • ROS Fuerte
    • PR2 related code look at
    • Orocos toolchain (use version/branch toolchain-2.5)
      • get the Orocos toolchain, if you don't have it yet, it makes sense for this application to get it the ROS way
      • git clone http://git.mech.kuleuven.be/robotics/rtt_common_msgs.git
      • git clone http://git.mech.kuleuven.be/robotics/rtt_ros_comm.git
      • git clone http://git.mech.kuleuven.be/robotics/rtt_ros_integration.git
      • git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
    • Orocos kinematics and dynamics
      • sudo apt-get install ros-fuerte-orocos-kinematics-dynamics
    • rFSM
      • needs lua:
        • sudo aptitude install liblua5.1-0-dev
        • sudo aptitude install liblua5.1-0
        • sudo aptitude install lua5.1
      • rtt-lua tab completion: git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
      • git clone https://github.com/kmarkus/rFSM.git
    • opencv_additions (dependencies of findFace)
      • git clone http://git.mech.kuleuven.be/robotics/opencv_additions.git
    • Trajectory Generators
      • git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
    • iTaSC
      • git clone http://git.mech.kuleuven.be/robotics/itasc.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
      • git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
    • rtt-ros integration messages (more info)
      • rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs

    Installation of the tutorial

    git clone http://git.mech.kuleuven.be/robotics/itasc_comanipulation_demo.git

    Setup

    It is strongly recommended that you add the following to a setup script or your .bashrc:
    • Make sure that all packages are added to your ROS_PACKAGE_PATH variable
    • Source env.sh in the orocos_toolchain stack
    • Set the LUA_PATH variable:

    if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
    if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
     
    export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
     
    export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
     
    export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"

    Make

    rosmake itasc_comanipulation_demo_app

    Don't forget to...

    • Run the convert_xacro.sh script in the itasc_pr2 package:

    roscd itasc_pr2
    ./convert_xacro.sh

    The example

    The following figure shows the component layout of the comanipulation demo (click on it to get a pdf version): comanipulation layout. An overview of all components involved can be found here

    Execution

    • Open two terminals
    • go in both to itasc_comanipulation_demo_app: roscd itasc_comanipulation_demo_app
    • (on a simulated PR2: open an extra terminal and start the PR2 in simulation)
    • run the PR2 low level controllers in the first (a P controller with reduced gain): ./runControllers
    • run the application in the second terminal: ./run.sh
    • When your application has gone through the configuration and starting phases, it will reach the running state: you should see a line on your screen saying "=>iTaSCFSM=>Running=>CompositeTaskFSM->Initialized State"
    • To interact with the CompositeTaskFSM, you can send events to it (don't forget to go in and afterwards out!), e.g.:
      • event_firer.itasc_common_events_in.write("e_parallelIn")
      • event_firer.itasc_common_events_in.write("e_parallelOut")
      • event_firer.itasc_common_events_in.write("e_obstacleForceParallelLimitsIn")
      • event_firer.itasc_common_events_in.write("e_obstacleForceParallelLimitsOut")
      • more event names can be found in scripts/composite_task_fsm.lua => transitions

    FAQ

    Joint and segment mapping

    A KDL::Tree has no order when you ask for its segments (getSegments), which makes sense since a tree has branches. In practice, getSegments returns the segments in alphabetical order, which is the default order in which you'll get the joint segments of the PR2 and the columns of the Jacobian matrices. The itasc_pr2 component maps the inputs and outputs from the robot side to this "general" order. For each chain between the base and the object frame that you request from the component, the order is internally stored in the logical order from root to leaf (a chain has an order of segments). In this case too, the output towards the iTaSC side is mapped to the "general" (alphabetical) order.

    Compilation problems

    • Q: When I compile itasc_solvers, I get a linking error: it can't find choleski_semidfinite...
    • A: You probably forgot to source the env.sh in the orocos_toolchain stack
    Attachment: iTaSC_comanipulation_demo.pdf (535.79 KB)

    iTaSC FAQ

    • Warnings at start-up: the following warnings can be ignored
      • Lowering priority: if you're running on a non-real-time system, the priority will be reduced from RT to OTHER
      • Conversion from float64[] to eigen_vector
      • Port 'x' of 'y' and port 'z' of 'a' are already connected but (probably) not to each other: multiple ports connected to one port gives a warning, while this is common functionality
      • 'addPort' Pose_ee_base: name already in use. Disconnecting and replacing previous port with new one: it won't ;)
    • I get [EMERGENCY]
      • This means that an error is caught by the state machine
      • Have a look at the first [EMERGENCY] message and the line before that to figure out what went wrong
    • Where do I define my Virtual Kinematic Chain?
      • in your VKC_taskname component, eg. VKC_cartesian_motion (in the cartesian_motion/src package)
    • Where do I put the feature coordinates in the code?
      • They are implicit in your VKC definition: by defining feature coordinates and the degrees of freedom between them, you define the order in which the degrees of freedom are traversed from one object frame to the other (cf. going from one point to another: you can only go in each direction once)

    iTaSC videos

    Videos of iTaSC examples and demonstrations. Click on the images below to see the video.

    Human-robot co-manipulation with the PR2

    An example: Human-Robot Comanipulation

    Laser tracing a lissajous figure on a vertical plane with the PR2

    This application uses the tasks from the laser tracing demo with the KUKA youBot (ERF2012), together with the joint limit avoidance task of the co-manipulation demo (IROS 2011/SYROCO 2012), executed on the PR2.

    iTaSC laser tracing a lissajous figure with the PR2