This is the Main Orocos.org Wiki page.
From here you can find links to all Orocos-related wikis.
Orocos Wiki pages are organised in 'books'. In each book you can create child pages, edit them and move them around. The Wiki itself creates an overview of the child pages of each book.
To create a new page, click 'Add Child page' below. To edit a page, click on the Edit tab of that page. You can also write a link to a yet-to-be-written page using the Example Page syntax. When that link is clicked and the page does not exist, you are offered the chance to create and write it.
Currently, the Orocos wiki pages are written in MediaWiki style. You should create your pages in this style as well.
Feel free to click on the 'Edit' tab above to see how this page was written (and to improve it!).
This section covers all development related pages.
How to get involved, contribute, and participate in the Orocos project.
The Orocos toolchain uses git, with the official repositories hosted at gitorious.
The master branch gets updated when new branches are merged into it by its maintainer. This can be a merge from a bugfix branch (i.e. a merge from toolchain-2.x) or a merge from a development branch.
The stable branch should always point to the latest toolchain-2.x tip. This isn't automated, so it lags behind (probably something for a Hudson job or a git commit hook).
The rtt-2.0-... branches are no longer updated. The rtt-2.0-mainline branch has been merged into master, which means that if you have an rtt-2.0-mainline branch, you can simply run git pull origin master, and it will fast-forward your tree to the master branch; alternatively, check out the local master.
You may contribute a software package to the community. It must respect the rules set out in the Component Packages section. Packages that are general enough can be adopted by the Orocos Toolchain Gitorious project. Make sure that your package name contains only letters, digits, and underscores; a dash ('-') is not acceptable in a package name.
Small contributions should go to the mailing lists as patches. Larger features are best communicated using topic branches in a git repository cloned from the official repositories; send pull requests to the mailing lists. These topic branches should be hosted on a publicly available git server (e.g. github, gitorious).
NB: for Orocos v1, git branches will not be merged (due to SVN); use individual patches instead. For v2, git branches can be merged without problems.
The easiest way to make suggestions is to use the mailing list (register here). This allows discussion about what you are suggesting (which, after all, someone else may already be working on), as well as informing others of what you are interested in (or are willing to do).
Before reporting a bug, please check the Bug Tracker, the mailing list, and the forum to see whether this is a known issue. If it is a new issue, then (TBD) email the mailing lists or enter an issue in the bug tracker.
Goals of the meeting: discuss the future of the Orocos toolchain w.r.t. Rock and ROS.
Identified major goals
The main issue is the ability to compile the toolchain and components only once, e.g. in a Rock installation, and use them in ROS or vice-versa.
Using rock components on plain Orocos should just work [needs testing and documentation].
The use of orogen or typegen would be required
Where things are going, and how we plan to get there.
See also Roadmap ideas 3x for some really long-term ideas ...
TODO Autoproj, RTT, OCL, etc.
The goal is to provide a real-time-safe, low-overhead, flexible logging system usable throughout an entire system (i.e. within both components and any user applications, such as GUIs).
We chose to base this on log4cpp, one of the C++ derivatives of log4j, the respected Java logging system. With only minor customizations, log4cpp is now usable in user component code (but not in RTT code itself; see below). It provides real-time-safe, hierarchical logging with multiple logging levels (e.g. INFO vs. DEBUG).
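To illustrate the kind of hierarchical, level-filtered logging this style of system provides, here is a minimal self-contained sketch (our own toy code, not the log4cpp or RTT logging API): categories form a dotted hierarchy, each category can carry a level threshold, and a message is emitted only if its level meets the threshold of the nearest configured ancestor category.

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy sketch of hierarchical, leveled logging (not the real API).
enum Level { DEBUG = 0, INFO = 1, WARN = 2, ERROR = 3 };

struct Logger {
    std::map<std::string, Level> thresholds;

    // Walk up the dotted hierarchy ("app.motor" -> "app") until a
    // configured threshold is found; the root default is INFO.
    Level thresholdFor(std::string category) const {
        while (true) {
            std::map<std::string, Level>::const_iterator it = thresholds.find(category);
            if (it != thresholds.end()) return it->second;
            std::string::size_type dot = category.rfind('.');
            if (dot == std::string::npos) return INFO;
            category = category.substr(0, dot);
        }
    }

    // Emit the message only if its level meets the category's threshold.
    bool log(const std::string& category, Level level, const std::string& msg) const {
        if (level < thresholdFor(category)) return false; // filtered out
        std::cout << category << ": " << msg << "\n";
        return true;
    }
};
```

For example, setting the "app" category to WARN silences INFO messages from "app.motor" while a sibling category "gui" keeps the INFO default.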
These changes are for Toolchain >= 2.7.0 + ROS >= Hydro
Support building in these workflows:
While the project is still in the (heavy?) turmoil of the 1.x-to-2.x transition, it might be useful to start thinking about the next version, 3.x. Below are a number of developments and policies that could eventually become 3.x; please use the project's (user and developer) mailing lists to give your opinions, using a '3.x Roadmap' message tag.
Disclaimer: there is nothing official yet about any of the below-mentioned suggestions; on the contrary, they are currently just the reflections of one single person, Herman Bruyninckx. (Please, update this disclaimer if you add your own suggestions.)
General policies to be followed in this Roadmap:
Contributors to this part of the Roadmap need not be RTT developers, but motivated users!
Contributors to this part of the Roadmap must be RTT developers!
All three libraries share a common fundamental design property: they can all be considered special cases of executable graphs. Common support will therefore be developed for the flexible, configurable scheduling of all computations (codels) in complex networks (Bayesian networks, kinematic/dynamic networks, control diagrams).
Contributors to this part of the Roadmap need not be RTT developers, but domain experts that have become power users of the RTT infrastructure!
As a first step, the instantaneous version of a constraint-based optimization approach to task-level control will be provided. Following steps will extend the instantaneous idea towards non-instantaneous tasks. This extension must focus on tasks that require real-time performance, since non-real-time solutions are provided by other projects, such as ROS.
Contributors to this part of the Roadmap need not be RTT developers, but domain experts that also happen to be average users of the RTT infrastructure! They will open up the functionalities of Orocos to the normal end-user.
The first efforts in this direction have started, in the context of the European project BRICS.
Contributors to this part of the Roadmap need not be RTT developers, but programmers familiar with the advanced Eclipse features, such as Ecore models, EMF, etc.
At the European Robotics Forum 2011, Intermodalics, Locomotec, and K.U.Leuven are organizing a two-part seminar, appealing to both industry and research institutes, titled:
The session will be on April 7, 9h00-10h30 + 11h00-12h30
Remaining seats: 0 out of 20 (last update: 06/04/2011)
In this presentation, Peter Soetens and Ruben Smits introduce the audience to today's open-source robotics ecosystem. What are the strong and weak points of existing software? Which packages work seamlessly together, and on which operating systems (Windows, Linux, VxWorks, ...)? We will prove our statements with practical examples from both academic and industrial use cases. This presentation is the result of the presenters' long-standing experience with open-source technologies in robotics applications and will offer the audience leads and insights to further explore this realm.
In this hands-on session, the participants are invited to bring their own laptop with Orocos and ROS (optionally) installed. We will support Linux, Mac OS-X and Windows users and will provide instructions on how they can prepare to participate.
We will let the participants experience that the Orocos toolchain:
If you'll be using the bootable USB-sticks, prepared by the organisers, you can skip all installation instructions and directly start the assignment at https://github.com/bellenss/euRobotics_orocos_ws/wiki
If you are attending the hands-on session you can bring your own computer. Depending on your operating system, you should install the necessary software using the following installation instructions:
The workshop will start with making you familiar with the Orocos Toolchain, which does not require the YouBot. The hands-on will continue then on a robot in simulation and on the real hardware. We will use the ROS communication protocol to send instructions to the simulator (Gazebo) or the YouBot. Installing Gazebo is not required, since this simulation will run on a dedicated machine. Documentation on the workshop application and the assignment can be found at https://github.com/bellenss/euRobotics_orocos_ws/wiki.
NOTE: ROS is required to participate in the YouBot demo.
Install Diamondback ROS using Debian packages for Ubuntu Lucid (10.04) and Maverick (10.10) or the ROS install scripts, in case you don't run Ubuntu.
apt-get install ros-diamondback-orocos-toolchain-ros
After this step, proceed to Section 2: Workshop Sources below. Instructions after ROS is installed:
source /opt/ros/diamondback/setup.bash
mkdir ~/ros
cd ~/ros
export ROS_PACKAGE_PATH=$HOME/ros:$ROS_PACKAGE_PATH
git clone http://git.mech.kuleuven.be/robotics/orocos_toolchain_ros.git
cd orocos_toolchain_ros
git checkout -b diamondback origin/diamondback
git submodule init
git submodule update --recursive
rosmake --rosdep-install orocos_toolchain_ros
NOTE: setting the ROS_PACKAGE_PATH is mandatory for each shell that will be used. It's a good idea to add the export ROS_PACKAGE_PATH line above to your .bashrc file (or equivalent).
Due to a dynamic-library issue in the current 2.3 release series, Mac OS X cannot be supported during the workshop. We will make available a bootable USB stick containing a pre-installed Ubuntu environment with all necessary packages.
Requirements:
See the Compiling on Windows with Visual Studio wiki page for instructions. The TAO/Corba part is not required to participate in the workshop.
You need to follow the instructions for RTT/OCL v2.3.1 or newer, which you can download from the Orocos Toolchain page. We recommend building the Release configuration.
In case you have neither the time nor the experience to set this up, we provide bootable USB sticks that contain Ubuntu Linux with all workshop files.
Windows users may also want to install Kst, a KDE plotting program that also runs on Linux. We provide a .kst file for plotting the workshop data. See the Kst download page.
set PATH=%PATH%;c:\orocos\bin;c:\orocos\lib;c:\orocos\lib\orocos\win32;c:\orocos\lib\orocos\win32\plugins
Repeat the classical CMake steps with this package, generate the Solution file, then build and install it. Next, start up the deployer with the deployer-win32.exe program and type 'ls'. It should start and show meaningful information. If you see strange characters in the output, you need to turn off the colors with the '.nocolors' command at the Deployer's prompt.
The euRobotics Forum workshop on Orocos has been a great success. About 30 people attended and participated in the hands-on workshop. The Real-Time & Open Source in Robotics track drew more than 60 people. Both tracks were overbooked.
You can find all presentation material in PDF form below
There are two ways you can get the sources for the workshop:
Since the sources are still evolving, it might be necessary to update your version before the workshop.
You can either check it out with
mkdir ~/ros
cd ~/ros
git clone git://gitorious.org/orocos-toolchain/rtt_examples
cd rtt_examples/rtt-exercises
Or you can download the examples from here. You need at least version 2.3.1 of the exercises.
If you're not using ROS, you can download/unzip it in another directory than ~/ros.
The hands-on session involves working on a demo application with a YouBot robot. The application allows you to
The Youbot demo application is available on https://github.com/bellenss/euRobotics_orocos_ws (this is still work in progress and will be updated regularly)
You can either check it out with
mkdir ~/ros
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$HOME/ros
cd ~/ros
git clone http://robotics.ccny.cuny.edu/git/ccny-ros-pkg/scan_tools.git
git clone http://git.mech.kuleuven.be/robotics/orocos_bayesian_filtering.git
git clone http://git.mech.kuleuven.be/robotics/orocos_kinematics_dynamics.git
git clone git://github.com/bellenss/euRobotics_orocos_ws.git
roscd youbot_supervisor
rosmake --rosdep-install
Check that ~/ros is in your ROS_PACKAGE_PATH environment variable at all times, by also adding the export line above to your .bashrc file.
cd orocos-toolchain
source env.sh
Next, cd to the rtt-exercises directory that you unpacked, enter hello-1-task-execution, and type make:
cd rtt-exercises-2.3.0/hello-1-task-execution
make all
cd build
./HelloWorld-gnulinux
cd rtt-exercises-2.3.0
source /opt/ros/diamondback/setup.bash
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$(pwd)
Next, go into an example directory and type make:
cd hello-1-task-execution
make all
./HelloWorld-gnulinux
After you have built the youbot_supervisor, you can test the demo by opening two consoles and running in them:
First console:
roscd youbot_supervisor
./simulation.sh
Second console:
roscd youbot_supervisor
./changePathKst   # (you only have to do this once after installation)
kst plotSimulation.kst
If you do not have 'kst', install it with:

sudo apt-get install kst kst-plugins
At the European Robotics Forum 2012 KU Leuven and Intermodalics are organizing a three-part seminar, appealing to both industry and research institutes, titled:
The sessions will be on March 6, 8h30-10h30 + 11h00-12h30 + 13h30-15h00 (Track four). For more details, consult the European Robotics Forum program.
Remaining seats: (last update: March 2, 2012)
We're fully booked, but don't be shy to come and peek or sit in, although we can't guarantee you a table or a chair!
(Information on last year's workshop can be found here.)
Attachment | Size
---|---
RTT-Overview.pdf | 1.67 MB
erf_itasc_theory_opt.pdf | 526.82 KB
mkdir ~/erf
export ROS_PACKAGE_PATH=~/erf:$ROS_PACKAGE_PATH
sudo apt-get install python-setuptools
sudo easy_install -U rosinstall
rosinstall ~/erf erf.rosinstall /opt/ros/electric
source ~/erf/setup.bash
rosdep install itasc_examples
rosdep install rFSM
rosmake itasc_examples
useERF(){
  source $HOME/erf/setup.bash
  source $HOME/erf/setup.sh
  source /opt/ros/electric/stacks/orocos_toolchain/env.sh
  setLUA
}
setLUA(){
  if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
  if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
  export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
  export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
  export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
  export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
  export LUA_PATH="$LUA_PATH;`rospack find youbot_master_rtt`/lua/?.lua"
  export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
  export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
  export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
}
useERF
roscd itasc_erf2012_demo/
./run_gazebo.sh
roscd itasc_erf2012_demo/
./run_simulation.sh
roscd itasc_erf2012_demo/
./run.sh
Some small examples for usage.
Do not hesitate to add your own small examples.
Attachment | Size
---|---
pres.pdf | 378.19 KB
Note that this wiki contains a summary of the theoretical article and the software article, both published as a tutorial for IEEE Robotics and Automation Magazine:
The geometric relations semantics software (C++) implements the geometric relations semantics theory, thereby offering support for semantic checks of your rigid-body relation calculations. This avoids commonly made errors and hence considerably reduces application and, especially, system-integration development time. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.
The screenshot below shows the output of the semantic checks of the (wrong) composition of two positions and two orientations.
The goal of the software is to provide semantic checking for calculations with geometric relations between rigid bodies on top of existing geometric libraries, which operate only on specific coordinate representations. Since many libraries with good support for geometric calculations on specific coordinate representations already exist (the Orocos Kinematics and Dynamics Library, the ROS geometry library, Boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to implementing about six function template specializations.
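To make the idea concrete, here is a minimal self-contained sketch (the names and the 2-D "coordinates" are our own toy invention, not the geometric_semantics API) of what such a semantic layer does: each pose carries the names of its target and reference frames, and composition is only allowed when those frames chain correctly.

```cpp
#include <stdexcept>
#include <string>

// Toy semantic pose: coordinates plus the frames they relate.
struct SemanticPose {
    std::string target;     // frame whose pose is described
    std::string reference;  // frame it is described with respect to
    double x, y;            // toy 2-D coordinates standing in for SE(3) data
};

// Composing the pose of B w.r.t. A with the pose of C w.r.t. B yields
// the pose of C w.r.t. A; anything else is a semantic error.
SemanticPose compose(const SemanticPose& ab, const SemanticPose& bc) {
    if (bc.reference != ab.target)
        throw std::runtime_error("semantic error: cannot compose pose of " +
                                 ab.target + " w.r.t. " + ab.reference +
                                 " with pose of " + bc.target + " w.r.t. " + bc.reference);
    SemanticPose r;
    r.target = bc.target;
    r.reference = ab.reference;
    r.x = ab.x + bc.x;  // placeholder coordinate math (pure translations only)
    r.y = ab.y + bc.y;
    return r;
}
```

The real library performs the same kind of check for positions, orientations, velocities, forces, and wrenches, on top of full coordinate representations.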
This wiki contains a summary of the article accepted as a tutorial for IEEE Robotics and Automation Magazine on the 4th June 2012.
Rigid bodies are essential primitives in the modelling of robotic devices, tasks, and perception, starting with the basic geometric relations such as relative position, orientation, pose, translational velocity, rotational velocity, and twist. This wiki elaborates on the background and the software for the semantics underlying rigid-body relationships. It is based on the research of the KU Leuven robotics group, in this case mainly conducted by Tinne De Laet, which explains the semantics of all coordinate-invariant properties and operations and, more importantly, documents all the choices that are made in coordinate representations of these geometric relations. This resulted in a set of concrete suggestions for standardizing terminology and notation, and in software with a fully unambiguous interface, including automatic checks for the semantic correctness of all geometric operations on rigid-body coordinate representations.
The geometric relations semantics software prevents commonly made errors in geometric rigid-body relations calculations like:
This wiki contains a summary of the article accepted as a tutorial for IEEE Robotics and Automation Magazine on the 4th June 2012.
A rigid body is an idealization of a solid body of finite or infinite size in which deformation is neglected. We often abbreviate "rigid body" to "body", and denote it by the symbol $\mathcal{A}$. A body in three-dimensional space has six degrees of freedom: three in translation and three in rotation. The subspace of all body motions that involve only changes in orientation is denoted by SO(3) (the Special Orthogonal group in three-dimensional space); it forms a group under the operation of composition of relative motion. The space of all body motions, including translations, is denoted by SE(3) (the Special Euclidean group in three-dimensional space).
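As a brief reminder (the notation here is ours, not necessarily that of the article), one common coordinate representation of an SE(3) element is the homogeneous transformation matrix, which also makes the composition rule for relative poses explicit:

```latex
T^A_B \;=\;
\begin{bmatrix}
R^A_B & p^A_B \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad R^A_B \in SO(3),\quad p^A_B \in \mathbb{R}^3,
\qquad
T^A_C \;=\; T^A_B \, T^B_C .
```

Here $R^A_B$ is the rotation and $p^A_B$ the translation of body $\mathcal{B}$ relative to body $\mathcal{A}$; the composition $T^A_B \, T^B_C$ only makes sense when the intermediate frame $B$ matches, which is exactly the kind of bookkeeping the semantic checks automate.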
A general six-dimensional displacement between two bodies is called a (relative) pose: it contains both the position and the orientation. Note that the position, orientation, and pose of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative position, orientation, and pose between two bodies are relevant geometric relations.
A general six-dimensional velocity between two bodies is called a (relative) twist: it contains both the rotational and the translational velocity. Similar to the position, orientation, and pose, the translational velocity, rotational velocity, and twist of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative translational velocity, rotational velocity, and twist between two bodies are relevant geometric relations.
When doing actual calculations with the geometric relations between rigid bodies, one has to use the coordinate representation of the geometric relations, and therefore has to choose a coordinate frame in which the coordinates are expressed in order to obtain numerical values for the geometric relations.
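For example (a toy 2-D illustration of our own, not from the article): the same physical point has different numerical coordinates depending on the coordinate frame chosen to express it.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Express coordinates given in frame [A] in a frame [B] that is rotated
// by 'angle' radians relative to [A] (about the out-of-plane axis).
// This applies the transpose of the rotation matrix of [B] w.r.t. [A].
Vec2 expressIn(const Vec2& inA, double angle) {
    Vec2 inB;
    inB.x =  std::cos(angle) * inA.x + std::sin(angle) * inA.y;
    inB.y = -std::sin(angle) * inA.x + std::cos(angle) * inA.y;
    return inB;
}
```

With [B] rotated 90 degrees relative to [A], the point with coordinates (1, 0) in [A] has coordinates (0, -1) in [B]: the geometric relation is unchanged, only its numbers differ.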
Each of these geometric primitives can be fixed to a body, which means that the geometric primitive coincides with the body not only instantaneously, but also over time. For the point $a$ and the body $\mathcal{C}$ for instance, this is written as $a|\mathcal{C}$. The figure below presents the geometric primitives body, point, vector, orientation frame, and frame graphically.
The table below summarizes the semantics for the following geometric relations between rigid bodies: position, orientation, pose, translational velocity, rotational velocity, and twist.
The software implements the geometric relations semantics, thereby offering support for semantic checks of your rigid-body relations. This avoids commonly made errors and hence considerably reduces application (and, especially, system-integration) development time. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.
For the semantic checking, we created the (templated) geometric_semantics core library, providing all the necessary semantic support for geometric relations (relative positions, orientations, poses, translational velocities, rotational velocities, twists, forces, torques, and wrenches) and the operations on these geometric relations (composition, integration, inversion, ...).
If you want to perform actual geometric relation calculations, you will need particular coordinate representations (for instance a homogeneous transformation matrix for a pose) and a geometric library offering support for calculations on these coordinate representations (for instance multiplication of homogeneous transformation matrices). To this end, you can build your own library depending on the geometric_semantics core library in which you implement a limited number of functions, which make the connection between semantic operations (for instance composition) and actual coordinate representation calculations (for instance multiplication of homogeneous transformation matrices). We already provide support for two geometric libraries: the Orocos Kinematics and Dynamics library and the ROS geometry library, in the geometric_semantics_kdl and geometric_semantics_tf libraries, respectively.
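This design can be sketched as follows (a self-contained toy with names of our own invention; the real library's interfaces differ): the semantic layer is written once as a generic template, and each external coordinate type plugs in through a small set of template specializations that supply the actual coordinate math.

```cpp
// Traits struct to be specialized once per external coordinate type
// (the role played by e.g. KDL::Frame in geometric_semantics_kdl).
template <typename Coord>
struct CoordOps;  // intentionally undefined for unsupported types

// A toy stand-in for an external geometry library's pose type.
struct ToyFrame { double x; };

// The handful of specializations connecting semantics to coordinates.
template <>
struct CoordOps<ToyFrame> {
    static ToyFrame compose(const ToyFrame& a, const ToyFrame& b) {
        ToyFrame r; r.x = a.x + b.x; return r;  // placeholder math
    }
    static ToyFrame inverse(const ToyFrame& a) {
        ToyFrame r; r.x = -a.x; return r;
    }
};

// Generic semantic layer: written once, usable with any specialized type.
template <typename Coord>
struct PoseWithSemantics {
    Coord coords;
    PoseWithSemantics<Coord> inverse() const {
        PoseWithSemantics<Coord> r;
        // (the semantic frame bookkeeping would be updated here)
        r.coords = CoordOps<Coord>::inverse(coords);
        return r;
    }
};
```

Supporting a new geometry library then amounts to writing one CoordOps-style specialization per operation, which is the "about six function template specializations" mentioned above.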
Again, the template parameter is the actual geometry type (of an external library) you will use as a coordinate representation, for instance a KDL::Vector.
The above described design is illustrated by the figure below.
Note that all four of the above 'levels' are of actual use:
The API is available at: http://people.mech.kuleuven.be/~tdelaet/geometric_relations_semantics/doc/.
This stack consists of the following packages:
Each package contains the following subdirectories:
git clone https://gitlab.mech.kuleuven.be/rob-dsl/geometric-relations-semantics.git
cd geometric_relations_semantics
export ROS_PACKAGE_PATH=$PWD:$ROS_PACKAGE_PATH
rosdep install geometric_relations_semantics
rosmake geometric_relations_semantics
roscd geometric_semantics
make test
rosmake geometric_relations_semantics
rosmake PACKAGE_NAME
roscd geometric_semantics
make test
If you are looking for installation instructions you should read the quick start.
Here we will explain how you can use the geometric relations semantics in your application, in particular using the Orocos Kinematics and Dynamics library as a geometry library, supplemented with the semantic support.
roscreate-pkg myApplication geometric_semantics_kdl
This will automatically create a directory named myApplication with a basic build infrastructure (see the roscreate-pkg documentation).
cd myApplication
export ROS_PACKAGE_PATH=$PWD:$ROS_PACKAGE_PATH
roscd myApplication
touch myApplication.cpp
#include <Pose/Pose.h>
#include <Pose/PoseCoordinatesKDL.h>
using namespace geometric_semantics;
using namespace KDL;
Rotation coordinatesRotB2_B1 = Rotation::EulerZYX(M_PI,0,0);
Vector coordinatesPosB2_B1(2.2,0,0);
KDL::Frame coordinatesFrameB2_B1(coordinatesRotB2_B1,coordinatesPosB2_B1);
Then use these KDL coordinates to create a PoseCoordinates object:
PoseCoordinates<KDL::Frame> poseCoordB2_B1(coordinatesFrameB2_B1);
Then create a Pose object using both the semantic information and the PoseCoordinates:
Pose<KDL::Frame> poseB2_B1("b2","b2","B2","b1","b1","B1","b1",poseCoordB2_B1);
Pose<KDL::Frame> poseB1_B2 = poseB2_B1.inverse();
rosbuild_add_executable(myApplication myApplication.cpp)
rosmake myApplication
and the executable will be created in the bin directory.
bin/myApplication
You will get the semantic output on your screen.
Imagine you have your own geometry library with support for geometric-relation coordinate representations and calculations on these coordinate representations. However, you would like semantic support on top of this geometry library. Probably the best approach is to mimic our support for the Orocos Kinematics and Dynamics Library. To have a look at it, do:
roscd geometric_semantics_kdl/
The possible semantic constraints are listed in the *Coordinates.h files of the geometric_semantics core library. For instance, for OrientationCoordinates we find there an enumeration of the different semantic constraints imposed by orientation coordinate representations:
/**
 * \brief Constraints imposed by the orientation coordinate representation to the semantics
 */
enum Constraints{
    noConstraints = 0x00,
    coordinateFrame_equals_referenceOrientationFrame = 0x01 // constraint that the orientation frame on the reference body has to be equal to the coordinate frame
};
You should specify the constraint when writing the template specialization of the OrientationCoordinates<KDL::Rotation>:
// template specialization for KDL::Rotation
template <>
OrientationCoordinates<KDL::Rotation>::OrientationCoordinates(const KDL::Rotation& coordinates):
    data(coordinates),
    constraints(coordinateFrame_equals_referenceOrientationFrame)
{
};
The other function template specializations specify the actual coordinate calculations to be performed for semantic operations such as inverting, changing the coordinate frame, changing the orientation frame, etc. For instance, to specialize the inverse for KDL::Rotation coordinate representations:
template <>
OrientationCoordinates<KDL::Rotation> OrientationCoordinates<KDL::Rotation>::inverse2Impl() const
{
    return OrientationCoordinates<KDL::Rotation>(this->data.Inverse());
}
This tutorial explains one possibility for setting up a build system for your application using geometric_relations_semantics. The possibility we explain uses the ROS package and build infrastructure, and therefore assumes you have ROS installed and set up on your computer.
roscreate-pkg myApplication geometric_semantics geometric_semantics_kdl
This will automatically create a directory named myApplication with a basic build infrastructure (see the roscreate-pkg documentation).
export ROS_PACKAGE_PATH=myApplication:$ROS_PACKAGE_PATH
This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.
In this tutorial we first explain how you can create basic semantic objects (without coordinates and coordinate checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.
Note that the file resulting from following this tutorial is attached to this wiki page for completeness.
roscd myApplication
touch myFirstApplication.cpp
vim myFirstApplication.cpp
#include <Position/PositionSemantics.h>
#include <Orientation/OrientationSemantics.h>
#include <Pose/PoseSemantics.h>
#include <LinearVelocity/LinearVelocitySemantics.h>
#include <AngularVelocity/AngularVelocitySemantics.h>
#include <Twist/TwistSemantics.h>
#include <Force/ForceSemantics.h>
#include <Torque/TorqueSemantics.h>
#include <Wrench/WrenchSemantics.h>
using namespace geometric_semantics;
int main(int argc, const char* argv[])
{
    // Here comes the code of our first application
}
vim CMakeLists.txt
rosbuild_add_executable(myFirstApplication myFirstApplication.cpp)
rosmake myApplication
and the executable will be created in the bin directory.
bin/myFirstApplication
You will get the semantic output on your screen.
// Creating the geometric relations semantics
PositionSemantics position("a","C","b","D");
OrientationSemantics orientation("e","C","f","D");
PoseSemantics pose("a","e","C","b","f","D");
LinearVelocitySemantics linearVelocity("a","C","D");
AngularVelocitySemantics angularVelocity("C","D");
TwistSemantics twist("a","C","D");
TorqueSemantics torque("a","C","D");
ForceSemantics force("C","D");
WrenchSemantics wrench("a","C","D");
// Doing semantic operations with the geometric relations
// Inverting
PositionSemantics positionInv = position.inverse();
OrientationSemantics orientationInv = orientation.inverse();
PoseSemantics poseInv = pose.inverse();
LinearVelocitySemantics linearVelocityInv = linearVelocity.inverse();
AngularVelocitySemantics angularVelocityInv = angularVelocity.inverse();
TwistSemantics twistInv = twist.inverse();
TorqueSemantics torqueInv = torque.inverse();
ForceSemantics forceInv = force.inverse();
WrenchSemantics wrenchInv = wrench.inverse();
std::cout << "-----------------------------------------" << std::endl;
std::cout << "Inverses: " << std::endl;
std::cout << " " << positionInv << " is the inverse of " << position << std::endl;
std::cout << " " << orientationInv << " is the inverse of " << orientation << std::endl;
std::cout << " " << poseInv << " is the inverse of " << pose << std::endl;
std::cout << " " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
std::cout << " " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
std::cout << " " << twistInv << " is the inverse of " << twist << std::endl;
std::cout << " " << torqueInv << " is the inverse of " << torque << std::endl;
std::cout << " " << forceInv << " is the inverse of " << force << std::endl;
std::cout << " " << wrenchInv << " is the inverse of " << wrench << std::endl;
// Composing
PositionSemantics positionComp = compose(position,positionInv);
OrientationSemantics orientationComp = compose(orientation,orientationInv);
PoseSemantics poseComp = compose(pose,poseInv);
LinearVelocitySemantics linearVelocityComp = compose(linearVelocity,linearVelocityInv);
AngularVelocitySemantics angularVelocityComp = compose(angularVelocity,angularVelocityInv);
TwistSemantics twistComp = compose(twist,twistInv);
TorqueSemantics torqueComp = compose(torque,torqueInv);
ForceSemantics forceComp = compose(force,forceInv);
WrenchSemantics wrenchComp = compose(wrench,wrenchInv);
std::cout << "-----------------------------------------" << std::endl;
std::cout << "Composed objects: " << std::endl;
std::cout << " " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
std::cout << " " << orientationComp << " is the composition of " << orientation << " and " << orientationInv << std::endl;
std::cout << " " << poseComp << " is the composition of " << pose << " and " << poseInv << std::endl;
std::cout << " " << linearVelocityComp << " is the composition of " << linearVelocity << " and " << linearVelocityInv << std::endl;
std::cout << " " << angularVelocityComp << " is the composition of " << angularVelocity << " and " << angularVelocityInv << std::endl;
std::cout << " " << twistComp << " is the composition of " << twist << " and " << twistInv << std::endl;
std::cout << " " << torqueComp << " is the composition of " << torque << " and " << torqueInv << std::endl;
std::cout << " " << forceComp << " is the composition of " << force << " and " << forceInv << std::endl;
std::cout << " " << wrenchComp << " is the composition of " << wrench << " and " << wrenchInv << std::endl;
Attachment | Size |
---|---|
myFirstApplication.cpp | 4.28 KB |
This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.
In this tutorial we first explain how you can create basic semantic objects (without coordinates but with coordinate checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.
Note that the complete file resulting from this tutorial is attached to this wiki page.
vim mySecondApplication.cpp
#include <Position/PositionCoordinatesSemantics.h>
#include <Orientation/OrientationCoordinatesSemantics.h>
#include <Pose/PoseCoordinatesSemantics.h>
#include <LinearVelocity/LinearVelocityCoordinatesSemantics.h>
#include <AngularVelocity/AngularVelocityCoordinatesSemantics.h>
#include <Twist/TwistCoordinatesSemantics.h>
#include <Force/ForceCoordinatesSemantics.h>
#include <Torque/TorqueCoordinatesSemantics.h>
#include <Wrench/WrenchCoordinatesSemantics.h>

using namespace geometric_semantics;

int main (int argc, const char* argv[])
{
    // Here comes the code of our second application
}
rosbuild_add_executable(mySecondApplication mySecondApplication.cpp)
rosmake myApplication
and the executable will be created in the bin directory.
bin/mySecondApplication
You will get the semantic output on your screen.
// Creating the geometric relations coordinates semantics
PositionCoordinatesSemantics position("a","C","b","D","r");
OrientationCoordinatesSemantics orientation("e","C","f","D","r");
PoseCoordinatesSemantics pose("a","e","C","b","f","D","r");
LinearVelocityCoordinatesSemantics linearVelocity("a","C","D","r");
AngularVelocityCoordinatesSemantics angularVelocity("C","D","r");
TwistCoordinatesSemantics twist("a","C","D","r");
TorqueCoordinatesSemantics torque("a","C","D","r");
ForceCoordinatesSemantics force("C","D","r");
WrenchCoordinatesSemantics wrench("a","C","D","r");

//Doing semantic operations with the geometric relations
// inverting
PositionCoordinatesSemantics positionInv = position.inverse();
OrientationCoordinatesSemantics orientationInv = orientation.inverse();
PoseCoordinatesSemantics poseInv = pose.inverse();
LinearVelocityCoordinatesSemantics linearVelocityInv = linearVelocity.inverse();
AngularVelocityCoordinatesSemantics angularVelocityInv = angularVelocity.inverse();
TwistCoordinatesSemantics twistInv = twist.inverse();
TorqueCoordinatesSemantics torqueInv = torque.inverse();
ForceCoordinatesSemantics forceInv = force.inverse();
WrenchCoordinatesSemantics wrenchInv = wrench.inverse();

std::cout << "-----------------------------------------" << std::endl;
std::cout << "Inverses: " << std::endl;
std::cout << " " << positionInv << " is the inverse of " << position << std::endl;
std::cout << " " << orientationInv << " is the inverse of " << orientation << std::endl;
std::cout << " " << poseInv << " is the inverse of " << pose << std::endl;
std::cout << " " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
std::cout << " " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
std::cout << " " << twistInv << " is the inverse of " << twist << std::endl;
std::cout << " " << torqueInv << " is the inverse of " << torque << std::endl;
std::cout << " " << forceInv << " is the inverse of " << force << std::endl;
std::cout << " " << wrenchInv << " is the inverse of " << wrench << std::endl;

//Composing
PositionCoordinatesSemantics positionComp = compose(position,positionInv);
OrientationCoordinatesSemantics orientationComp = compose(orientation,orientationInv);
PoseCoordinatesSemantics poseComp = compose(pose,poseInv);
LinearVelocityCoordinatesSemantics linearVelocityComp = compose(linearVelocity,linearVelocityInv);
AngularVelocityCoordinatesSemantics angularVelocityComp = compose(angularVelocity,angularVelocityInv);
TwistCoordinatesSemantics twistComp = compose(twist,twistInv);
TorqueCoordinatesSemantics torqueComp = compose(torque,torqueInv);
ForceCoordinatesSemantics forceComp = compose(force,forceInv);
WrenchCoordinatesSemantics wrenchComp = compose(wrench,wrenchInv);

std::cout << "-----------------------------------------" << std::endl;
std::cout << "Composed objects: " << std::endl;
std::cout << " " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
std::cout << " " << orientationComp << " is the composition of " << orientation << " and " << orientationInv << std::endl;
std::cout << " " << poseComp << " is the composition of " << pose << " and " << poseInv << std::endl;
std::cout << " " << linearVelocityComp << " is the composition of " << linearVelocity << " and " << linearVelocityInv << std::endl;
std::cout << " " << angularVelocityComp << " is the composition of " << angularVelocity << " and " << angularVelocityInv << std::endl;
std::cout << " " << twistComp << " is the composition of " << twist << " and " << twistInv << std::endl;
std::cout << " " << torqueComp << " is the composition of " << torque << " and " << torqueInv << std::endl;
std::cout << " " << forceComp << " is the composition of " << force << " and " << forceInv << std::endl;
std::cout << " " << wrenchComp << " is the composition of " << wrench << " and " << wrenchInv << std::endl;
Attachment | Size |
---|---|
mySecondApplication.cpp | 4.72 KB |
This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.
In this tutorial we first explain how you can create full geometric relation objects (with semantics and an actual coordinate representation) and perform operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench. To this end we will use the coordinate representations of the Orocos Kinematics and Dynamics Library. The semantic support on top of this geometry library is already provided by the geometric_semantics_kdl package.
Note that the complete file resulting from this tutorial is attached to this wiki page.
vim myThirdApplication.cpp
#include <Position/Position.h>
#include <Orientation/Orientation.h>
#include <Pose/Pose.h>
#include <LinearVelocity/LinearVelocity.h>
#include <AngularVelocity/AngularVelocity.h>
#include <Twist/Twist.h>
#include <Force/Force.h>
#include <Torque/Torque.h>
#include <Wrench/Wrench.h>
#include <Position/PositionCoordinatesKDL.h>
#include <Orientation/OrientationCoordinatesKDL.h>
#include <Pose/PoseCoordinatesKDL.h>
#include <LinearVelocity/LinearVelocityCoordinatesKDL.h>
#include <AngularVelocity/AngularVelocityCoordinatesKDL.h>
#include <Twist/TwistCoordinatesKDL.h>
#include <Force/ForceCoordinatesKDL.h>
#include <Torque/TorqueCoordinatesKDL.h>
#include <Wrench/WrenchCoordinatesKDL.h>
#include <kdl/frames.hpp>
#include <kdl/frames_io.hpp>

using namespace geometric_semantics;
using namespace KDL;

int main (int argc, const char* argv[])
{
    // Here goes the code of our third application
}
rosbuild_add_executable(myThirdApplication myThirdApplication.cpp)
rosmake myApplication
and the executable will be created in the bin directory.
bin/myThirdApplication
You will get the semantic output on your screen.
// Creating the geometric relations
// a Position with a KDL::Vector
Vector coordinatesPosition(1,2,3);
Position<Vector> position("a","C","b","D","r",coordinatesPosition);
// an Orientation with a KDL::Rotation
Rotation coordinatesOrientation = Rotation::EulerZYX(M_PI/4,0,0);
Orientation<Rotation> orientation("e","C","f","D","f",coordinatesOrientation);
// a Pose with a KDL::Frame
KDL::Frame coordinatesPose(coordinatesOrientation,coordinatesPosition);
Pose<KDL::Frame> pose1("a","e","C","b","f","D","f",coordinatesPose);
// a Pose as the aggregation of a Position and an Orientation
Pose<Vector,Rotation> pose2(position,orientation);
// a LinearVelocity with a KDL::Vector
Vector coordinatesLinearVelocity(1,2,3);
LinearVelocity<Vector> linearVelocity("a","C","D","r",coordinatesLinearVelocity);
// an AngularVelocity with a KDL::Vector
Vector coordinatesAngularVelocity(1,2,3);
AngularVelocity<Vector> angularVelocity("C","D","r",coordinatesAngularVelocity);
// a Twist with a KDL::Twist
KDL::Twist coordinatesTwist(coordinatesLinearVelocity,coordinatesAngularVelocity);
geometric_semantics::Twist<KDL::Twist> twist1("a","C","D","r",coordinatesTwist);
// a Twist as the aggregation of a LinearVelocity and an AngularVelocity
geometric_semantics::Twist<Vector,Vector> twist2(linearVelocity,angularVelocity);
// a Torque with a KDL::Vector
Vector coordinatesTorque(1,2,3);
Torque<Vector> torque("a","C","D","r",coordinatesTorque);
// a Force with a KDL::Vector
Vector coordinatesForce(1,2,3);
Force<Vector> force("C","D","r",coordinatesForce);
// a Wrench with a KDL::Wrench
KDL::Wrench coordinatesWrench(coordinatesForce,coordinatesTorque);
geometric_semantics::Wrench<KDL::Wrench> wrench1("a","C","D","r",coordinatesWrench);
// a Wrench as the aggregation of a Force and a Torque
geometric_semantics::Wrench<KDL::Vector,KDL::Vector> wrench2(torque,force);

//Doing operations with the geometric relations
// inverting
Position<Vector> positionInv = position.inverse();
Orientation<Rotation> orientationInv = orientation.inverse();
Pose<KDL::Frame> pose1Inv = pose1.inverse();
Pose<Vector,Rotation> pose2Inv = pose2.inverse();
LinearVelocity<Vector> linearVelocityInv = linearVelocity.inverse();
AngularVelocity<Vector> angularVelocityInv = angularVelocity.inverse();
geometric_semantics::Twist<KDL::Twist> twist1Inv = twist1.inverse();
geometric_semantics::Twist<Vector,Vector> twist2Inv = twist2.inverse();
Torque<Vector> torqueInv = torque.inverse();
Force<Vector> forceInv = force.inverse();
geometric_semantics::Wrench<KDL::Wrench> wrench1Inv = wrench1.inverse();
geometric_semantics::Wrench<Vector,Vector> wrench2Inv = wrench2.inverse();

// print the inverses
std::cout << "-----------------------------------------" << std::endl;
std::cout << "Inverses: " << std::endl;
std::cout << " " << positionInv << " is the inverse of " << position << std::endl;
std::cout << " " << orientationInv << " is the inverse of " << orientation << std::endl;
std::cout << " " << pose1Inv << " is the inverse of " << pose1 << std::endl;
std::cout << " " << pose2Inv << " is the inverse of " << pose2 << std::endl;
std::cout << " " << linearVelocityInv << " is the inverse of " << linearVelocity << std::endl;
std::cout << " " << angularVelocityInv << " is the inverse of " << angularVelocity << std::endl;
std::cout << " " << twist1Inv << " is the inverse of " << twist1 << std::endl;
std::cout << " " << twist2Inv << " is the inverse of " << twist2 << std::endl;
std::cout << " " << torqueInv << " is the inverse of " << torque << std::endl;
std::cout << " " << forceInv << " is the inverse of " << force << std::endl;
std::cout << " " << wrench1Inv << " is the inverse of " << wrench1 << std::endl;
std::cout << " " << wrench2Inv << " is the inverse of " << wrench2 << std::endl;

//Composing
Position<Vector> positionComp = compose(position,positionInv);
Orientation<Rotation> orientationComp = compose(orientation,orientationInv);
Pose<KDL::Frame> pose1Comp = compose(pose1,pose1Inv);
Pose<Vector,Rotation> pose2Comp = compose(pose2,pose2Inv);
LinearVelocity<Vector> linearVelocityComp = compose(linearVelocity,linearVelocityInv);
AngularVelocity<Vector> angularVelocityComp = compose(angularVelocity,angularVelocityInv);
geometric_semantics::Twist<KDL::Twist> twist1Comp = compose(twist1,twist1Inv);
geometric_semantics::Twist<Vector,Vector> twist2Comp = compose(twist2,twist2Inv);
Torque<Vector> torqueComp = compose(torque,torqueInv);
Force<Vector> forceComp = compose(force,forceInv);
geometric_semantics::Wrench<KDL::Wrench> wrench1Comp = compose(wrench1,wrench1Inv);
geometric_semantics::Wrench<Vector,Vector> wrench2Comp = compose(wrench2,wrench2Inv);
If you execute the program, you will get screen output reporting the semantic correctness (and, in this case, also incorrectness) of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:
// print the composed objects
std::cout << "-----------------------------------------" << std::endl;
std::cout << "Composed objects: " << std::endl;
std::cout << " " << positionComp << " is the composition of " << position << " and " << positionInv << std::endl;
std::cout << " " << orientationComp << " is the composition of " << orientation << " and " << orientationInv << std::endl;
std::cout << " " << pose1Comp << " is the composition of " << pose1 << " and " << pose1Inv << std::endl;
std::cout << " " << pose2Comp << " is the composition of " << pose2 << " and " << pose2Inv << std::endl;
std::cout << " " << linearVelocityComp << " is the composition of " << linearVelocity << " and " << linearVelocityInv << std::endl;
std::cout << " " << angularVelocityComp << " is the composition of " << angularVelocity << " and " << angularVelocityInv << std::endl;
std::cout << " " << twist1Comp << " is the composition of " << twist1 << " and " << twist1Inv << std::endl;
std::cout << " " << twist2Comp << " is the composition of " << twist2 << " and " << twist2Inv << std::endl;
std::cout << " " << torqueComp << " is the composition of " << torque << " and " << torqueInv << std::endl;
std::cout << " " << forceComp << " is the composition of " << force << " and " << forceInv << std::endl;
std::cout << " " << wrench1Comp << " is the composition of " << wrench1 << " and " << wrench1Inv << std::endl;
std::cout << " " << wrench2Comp << " is the composition of " << wrench2 << " and " << wrench2Inv << std::endl;
Attachment | Size |
---|---|
myThirdApplication.cpp | 7.63 KB |
In case you are looking for some extra examples, have a look at the geometric_semantics_examples package. It already contains examples showing the advantage of using semantics when integrating twists, and when programming two position-controlled robots.
Skeleton of a serial robot arm with six revolute joints. This is one example of a kinematic structure, reducing the motion modelling and specification to a geometric problem of relative motion of reference frames. The Kinematics and Dynamics Library (KDL) provides an application-independent framework for modelling and computation of kinematic chains, such as robots, biomechanical human models, computer-animated figures, machine tools, etc. It provides class libraries for geometrical objects (point, frame, line, ...), kinematic chains of various families (serial, humanoid, parallel, mobile, ...), and their motion specification and interpolation.
This document is not ready yet, but it's a wiki page so feel free to contribute
There are different ways to get the software.
Orocos KDL is part of the geometry stack in the ROS distributions pre-Electric.
Since ROS Electric it is available stand-alone as the orocos-kinematics-dynamics stack.
git clone https://github.com/orocos/orocos_kinematics_dynamics.git
mkdir <kdl-dir>/build ; cd <kdl-dir>/build
ccmake ..
make;make check;make install
Check out http://github.com/orocos-toolchain/rtt_geometry and build & install it using the provided Makefile (uses defaults) or CMakeLists.txt (if you want to modify paths).
Import the kdl_typekit in Orocos by using the 'import' Deployment command in the TaskBrowser or the 'Import' Deployment property in your deployment xml file:
import("kdl_typekit")
.types
var KDL.Frame z
z.p.X=1
or z.M.X_x=2
z.p = KDL.Vector(1,2,3)
z.M=KDL.Rotation(0,1.57,0)
(roll, pitch and yaw angles?). Typing z prints the variable's current value.
A Vector is a 3x1 matrix containing X-Y-Z coordinate values. It is used to represent the 3D position of a point wrt a reference frame, or the rotational/translational part of a 6D motion or force entity: <equation id="vector">$\textrm{KDL::Vector} = \left[ \begin{array}{c} x \\ y \\ z \end{array}\right]$</equation>
Vector v1; //The default constructor, X-Y-Z are initialized to zero
Vector v2(x,y,z); //X-Y-Z are initialized with the given values
Vector v3(v2); //The copy constructor
Vector v4 = Vector::Zero(); //All values are set to zero
The operators [ ] and ( ) use indices from 0..2, index checking is enabled/disabled by the DEBUG/NDEBUG definitions:
v1[0] = v2[1]; //copy the y value of v2 to the x value of v1
v2(1) = v3(2); //copy the z value of v3 to the y value of v2
v3.x( v4.y() ); //copy the y value of v4 to the x value of v3
You can multiply or divide a Vector with a double using the operator * and /:
v2 = 2*v1;
v3 = v1/2;

v2 += v1;
v3 -= v1;
v4 = v1 + v2;
v5 = v2 - v3;

v3 = v1*v2; //Cross product
double a = dot(v1,v2); //Scalar product
You can reset the values of a vector to zero:
SetToZero(v1);
v1 == v2;
v2 != v3;
Equal(v3,v4,eps); //with accuracy eps
A Rotation is the 3x3 matrix that represents the 3D rotation of an object wrt the reference frame.
<equation id="rotation">$ \textrm{KDL::Rotation} = \left[\begin{array}{ccc}Xx&Yx&Zx\\Xy&Yy&Zy\\Xz&Yz&Zz\end{array}\right] $</equation>
The following always result in consistent Rotations. This means the rows/columns are always normalized and orthogonal:
Rotation r1; //The default constructor, initializes to a 3x3 identity matrix
r1 = Rotation::Identity(); //Identity Rotation = zero rotation
Rotation r2 = Rotation::RPY(roll,pitch,yaw); //Rotation built from Roll-Pitch-Yaw angles
Rotation r3 = Rotation::EulerZYZ(alpha,beta,gamma); //Rotation built from Euler Z-Y-Z angles
Rotation r4 = Rotation::EulerZYX(alpha,beta,gamma); //Rotation built from Euler Z-Y-X angles
Rotation r5 = Rotation::Rot(vector,angle); //Rotation built from an equivalent axis (vector) and an angle
The following should be used with care; they can result in inconsistent rotation matrices, since there is no checking that the columns/rows are normalized or orthogonal:

Rotation r6(Xx,Yx,Zx,Xy,Yy,Zy,Xz,Yz,Zz); //Give each individual element (Column-Major)
Rotation r7(vectorX,vectorY,vectorZ); //Give each individual column
Individual values, the indices go from 0..2:
double Zx = r1(0,2);
r1.GetEulerZYZ(alpha,beta,gamma);
r1.GetEulerZYX(alpha,beta,gamma);
r1.GetRPY(roll,pitch,yaw);
axis = r1.GetRot(); //gives only the rotation axis
angle = r1.GetRotAngle(axis); //gives both the angle and the rotation axis
vecX = r1.UnitX(); //or r1.UnitX(vecX);
vecY = r1.UnitY(); //or r1.UnitY(vecY);
vecZ = r1.UnitZ(); //or r1.UnitZ(vecZ);
Replacing a rotation by its inverse:
r1.SetInverse();//r1 is inverted and overwritten
r2=r1.Inverse();//r2 is the inverse rotation of r1
Compose two rotations to a new rotation, the order of the rotations is important:
r3=r1*r2;
r1.DoRotX(angle);
r2.DoRotY(angle);
r3.DoRotZ(angle);

r1 = r1*Rotation::RotX(angle);
v2=r1*v1;
r1 == r2;
r1 != r2;
Equal(r1,r2,eps);
A Frame is the 4x4 matrix that represents the pose of an object/frame wrt a reference frame. It contains:
<equation id="frame">$ \textrm{KDL::Frame} = \left[\begin{array}{cc}\mathbf{M}(3 \times 3) &p(3 \times 1)\\ 0(1 \times 3)&1 \end{array}\right] $</equation>
Frame f1; //The default constructor, creates an identity frame
f1 = Frame::Identity(); //Identity frame: Rotation::Identity() and Vector::Zero()
Frame f2(your_rotation); //Create a frame with your_rotation and a zero vector
Frame f3(your_vector); //Create a frame with your_vector and an identity rotation
Frame f4(your_rotation,your_vector); //Create a frame with your_rotation and your_vector
Frame f5(your_vector,your_rotation); //idem
Frame f6(f5); //The copy constructor
double x = f1(0,3);
double Yy = f1(1,1);

Vector p = f1.p;
Rotation M = f1.M;
Frame F_A_C = F_A_B * F_B_C;
Replacing a frame by its inverse:
//not yet implemented
f2=f1.Inverse();//f2 is the inverse of f1
f1 == f2;
f1 != f2;
Equal(f1,f2,eps);
A Twist is the 6x1 matrix that represents the velocity of a Frame using a 3D translational velocity Vector vel and a 3D angular velocity Vector rot:
<equation id="twist">$\textrm{KDL::Twist} = \left[\begin{array}{c} v_x\\v_y\\v_z\\ \hline \omega_x \\ \omega_y \\ \omega_z \end{array} \right] = \left[\begin{array}{c} \textrm{vel} \\ \hline \textrm{rot}\end{array} \right] $</equation>
Twist t1; //Default constructor, initializes both vel and rot to zero
Twist t2(vel,rot); //Vector vel and Vector rot
Twist t3 = Twist::Zero(); //Zero twist
Using the operators [ ] and ( ), the indices from 0..2 return the elements of vel, the indices from 3..5 return the elements of rot:
double vx = t1(0);
double omega_y = t1[4];
t1(1) = vy;
t1[5] = omega_z;

double vx = t1.vel.x(); //or vx = t1.vel(0);
double omega_y = t1.rot.y(); //or omega_y = t1.rot(1);
t1.vel.y(v_y); //or t1.vel(1) = v_y;
//etc.
t2 = 2*t1;
t2 = t1*2;
t2 = t1/2;

t1 += t2;
t1 -= t2;
t3 = t1 + t2;
t3 = t1 - t2;

t1 == t2;
t1 != t2;
Equal(t1,t2,eps);
A Wrench is the 6x1 matrix that represents a generalized force on a Frame, using a 3D force Vector force and a 3D moment Vector torque:
<equation id="wrench">$\textrm{KDL::Wrench} = \left[\begin{array}{c} f_x\\f_y\\f_z\\ \hline t_x \\ t_y \\ t_z \end{array} \right] = \left[\begin{array}{c} \textrm{force} \\ \hline \textrm{torque}\end{array} \right] $</equation>
Wrench w1; //Default constructor, initializes force and torque to zero
Wrench w2(force,torque); //Vector force and Vector torque
Wrench w3 = Wrench::Zero(); //Zero wrench
Using the operators [ ] and ( ), the indices from 0..2 return the elements of force, the indices from 3..5 return the elements of torque:
double fx = w1(0);
double ty = w1[4];
w1(1) = fy;
w1[5] = tz;

double fx = w1.force.x(); //or fx = w1.force(0);
double ty = w1.torque.y(); //or ty = w1.torque(1);
w1.force.y(fy); //or w1.force(1) = fy;
//etc.
w2 = 2*w1;
w2 = w1*2;
w2 = w1/2;

w1 += w2;
w1 -= w2;
w3 = w1 + w2;
w3 = w1 - w2;

w1 == w2;
w1 != w2;
Equal(w1,w2,eps);
The values of a Wrench or Twist change if the reference frame or reference point is changed.
t2 = t1.RefPoint(v_old_new); w2 = w1.RefPoint(v_old_new);
ta = R_AB*tb; wa = R_AB*wb;
Note: This operation seems to multiply a 3x3 matrix R_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from R_AB.
ta = F_AB*tb; wa = F_AB*wb;
Note: This operation seems to multiply a 4x4 matrix F_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from F_AB.
t = diff(F_w_A,F_w_B,timestep); //differentiation
F_w_B = F_w_A.addDelta(t,timestep); //integration
A KDL::Chain or KDL::Tree consists of a concatenation of KDL::Segments. A KDL::Segment combines a KDL::Joint and a KDL::RigidBodyInertia, and defines a reference and a tip frame on the segment. The following figures show a KDL::Segment, a KDL::Chain, and a KDL::Tree, respectively. At the bottom of this page you will find links to a more detailed description.
Select your revision (1.0.x is the released version; 1.1.x is under discussion, see the kinfam_refactored git branch):
A Joint allows a translation or rotation in one degree of freedom between two Segments.

Joint rx = Joint(Joint::RotX); //Rotational Joint about X
Joint ry = Joint(Joint::RotY); //Rotational Joint about Y
Joint rz = Joint(Joint::RotZ); //Rotational Joint about Z
Joint tx = Joint(Joint::TransX); //Translational Joint along X
Joint ty = Joint(Joint::TransY); //Translational Joint along Y
Joint tz = Joint(Joint::TransZ); //Translational Joint along Z
Joint fixed = Joint(Joint::None); //Rigid connection

Joint rx = Joint(Joint::RotX);
double q = M_PI/4; //Joint position
Frame f = rx.pose(q);
double qdot = 0.1; //Joint velocity
Twist t = rx.twist(qdot);
A Segment is an ideal rigid body to which one single Joint and one single tip frame are attached. It contains:

Segment s = Segment(Joint(Joint::RotX),
                    Frame(Rotation::RPY(0.0,M_PI/4,0.0),Vector(0.1,0.2,0.3)));

double q = M_PI/2; //joint position
Frame f = s.pose(q); //s constructed as in the previous example
double qdot = 0.1; //joint velocity
Twist t = s.twist(q,qdot);
A KDL::Chain is a serial concatenation of Segments.
A Chain has default and copy constructors, and supports assignment:
Chain chain1;
Chain chain2(chain3);
Chain chain4 = chain5;
Chains are constructed by adding segments or existing chains to the end of the chain. These functions add copies of the arguments, not the arguments themselves!
chain1.addSegment(segment1); chain1.addChain(chain2);
You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):
unsigned int nj = chain1.getNrOfJoints(); unsigned int js = chain1.getNrOfSegments();
You can iterate over the segments of a chain by getting a reference to each successive segment:
Segment& segment3 = chain1.getSegment(3);
A KDL::Tree is a tree-shaped concatenation of Segments, starting from a root segment.
A Tree has default and copy constructors, and supports assignment:
Tree tree1;
Tree tree2("RootName");
Tree tree3(tree4);
Tree tree5 = tree6;
Trees are constructed by adding segments, existing chains or existing trees to a given hook name. The methods will return false if the given hook name is not in the tree. These functions add copies of the arguments, not the arguments themselves!
bool exit_value;
exit_value = tree1.addSegment(segment1,"root");
exit_value = tree1.addChain(chain1,"Segment 1");
exit_value = tree1.addTree(tree2,"root");
You can get the number of joints and number of segments (this is not always the same since a segment can have a fixed joint (Joint::None), which is not included in the number of joints):
unsigned int nj = tree1.getNrOfJoints(); unsigned int js = tree1.getNrOfSegments();
You can retrieve the root segment:
std::map<std::string,TreeElement>::const_iterator root = tree1.getRootSegment();
You can also retrieve a specific segment in a tree by its name:
std::map<std::string,TreeElement>::const_iterator segment3 = tree1.getSegment("Segment 3");
You can retrieve the segments in the tree:
std::map<std::string,TreeElement>& segments = tree1.getSegments();
It is possible to request the chain in a tree between a certain root and a tip:
bool exit_value;
Chain chain;
exit_value = tree1.getChain("Segment 1","Segment 3",chain);
//Segment 1 and segment 3 are included, but segment 1 is renamed.
Chain chain2;
exit_value = tree1.getChain("Segment 3","Segment 1",chain2);
//Segment 1 and segment 3 are included, but segment 3 is renamed.
A Joint allows a translation or rotation in one degree of freedom between two Segments.

Joint rx = Joint(Joint::RotX); //Rotational Joint about X
Joint ry = Joint(Joint::RotY); //Rotational Joint about Y
Joint rz = Joint(Joint::RotZ); //Rotational Joint about Z
Joint tx = Joint(Joint::TransX); //Translational Joint along X
Joint ty = Joint(Joint::TransY); //Translational Joint along Y
Joint tz = Joint(Joint::TransZ); //Translational Joint along Z
Joint fixed = Joint(Joint::None); //Rigid connection

Joint rx = Joint(Joint::RotX);
double q = M_PI/4; //Joint position
Frame f = rx.pose(q);
double qdot = 0.1; //Joint velocity
Twist t = rx.twist(qdot);
A Segment is an ideal rigid body to which one single Joint and one single tip frame are attached. It contains:

Segment s = Segment(Joint(Joint::RotX),
                    Frame(Rotation::RPY(0.0,M_PI/4,0.0),Vector(0.1,0.2,0.3)));

double q = M_PI/2; //joint position
Frame f = s.pose(q); //s constructed as in the previous example
double qdot = 0.1; //joint velocity
Twist t = s.twist(q,qdot);
A KDL::Chain is a serial concatenation of Segments.
A Chain has default and copy constructors, and supports assignment:
Chain chain1;
Chain chain2(chain3);
Chain chain4 = chain5;
Chains are constructed by adding segments or existing chains to the end of the chain. All segments must have a different name (or "NoName"), otherwise the methods will return false and the segments will not be added. The functions add copies of the arguments, not the arguments themselves!
bool exit_value = chain1.addSegment(segment1);
exit_value = chain1.addChain(chain2);
You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):
unsigned int nj = chain1.getNrOfJoints(); unsigned int js = chain1.getNrOfSegments();
You can iterate over the segments of a chain by getting a reference to each successive segment. The method will return false if the index is out of bounds.
Segment segment3; bool exit_value = chain1.getSegment(3, segment3);
You can also request a segment by name:
Segment segment3; bool exit_value = chain1.getSegment("Segment 3", segment3);
The root and leaf segment can be requested, as well as all segments in the chain.
bool exit_value;
Segment root_segment;
Segment leaf_segment;
std::vector<Segment> segments;
exit_value = chain1.getRootSegment(root_segment);
exit_value = chain1.getLeafSegment(leaf_segment);
exit_value = chain1.getSegments(segments);
You can request a part of the chain between a certain root and a tip:
bool exit_value;
Chain part_chain;
exit_value = chain1.getChain_Including(1,3, part_chain);
exit_value = chain1.getChain_Including("Segment 1","Segment 3", part_chain);
//Segment 1 and Segment 3 are included in the new chain!
exit_value = chain1.getChain_Excluding(1,3, part_chain);
exit_value = chain1.getChain_Excluding("Segment 1","Segment 3", part_chain);
//Segment 1 is not included in the chain. Segment 3 is included in the chain.
There is a function to copy the chain up to a given segment number or segment name:
bool exit_value;
Chain chain_copy;
exit_value = chain1.copy(3, chain_copy);
exit_value = chain1.copy("Segment 3", chain_copy);
//Segment 3, 4, ... are not included in the copy!
A KDL::Tree is a tree-shaped concatenation of Segments, starting from a root segment.
A Tree has default and copy constructors, and supports assignment:
Tree tree1;
Tree tree2("RootName");
Tree tree3(tree4);
Tree tree5 = tree6;
Trees are constructed by adding segments, existing chains or existing trees to a given hook name. The methods will return false if the given hook name is not in the tree. These functions add copies of the arguments, not the arguments themselves!
bool exit_value;
exit_value = tree1.addSegment(segment1,"root");
exit_value = tree1.addChain(chain1,"Segment 1");
exit_value = tree1.addTree(tree2,"root");
You can get the number of joints and number of segments (this is not always the same since a segment can have a Joint::None, which is not included in the number of joints):
unsigned int nj = tree1.getNrOfJoints(); unsigned int js = tree1.getNrOfSegments();
You can retrieve the root segment and the leaf segments:
bool exit_value;
std::map<std::string,TreeElement>::const_iterator root;
std::map<std::string,TreeElement> leafs;
exit_value = tree1.getRootSegment(root);
exit_value = tree1.getLeafSegments(leafs);
You can also retrieve a specific segment in a tree by its name:
std::map<std::string,TreeElement>::const_iterator segment3; bool exit_value = tree1.getSegment("Segment 3",segment3);
You can retrieve the segments in the tree:
std::map<std::string,TreeElement> segments; bool exit_value = tree1.getSegments(segments);
It is possible to request the chain in a tree between a certain root and a tip:
bool exit_value; Chain chain; exit_value = tree1.getChain("Segment 1","Segment 3",chain); //Segment 1 and segment 3 are included but segment 1 is renamed. Chain chain2; exit_value = tree1.getChain("Segment 3","Segment 1",chain2); //Segment 1 and segment 3 are included but segment 3 is renamed.
This chain can also be requested in a tree structure with the given root name ("root" if no name is given).
bool exit_value;
Tree tree;
exit_value = tree1.getChain("Segment 1","Segment 3",tree,"RootName");
Tree tree2;
exit_value = tree1.getChain("Segment 3","Segment 1",tree2,"RootName");
There is a function to copy a tree, excluding some segments and all of their descendants.

bool exit_value;
Tree tree_copy;
exit_value = tree1.copy("Segment 3", tree_copy);
//tree1 is copied up to segment 3 (excluding segment 3).
std::vector<std::string> vect;
vect.push_back("Segment 1");
vect.push_back("Segment 7");
exit_value = tree1.copy(vect,tree_copy);
For the moment, KDL contains only generic solvers for kinematic chains. They can be used (with care) for every KDL::Chain.
The idea behind the generic solvers is to have a uniform API. This is achieved by inheriting from an abstract class for each type of solver:
A separate solver has to be created for each chain. At construction time, it allocates all necessary resources.
A specific type of solver can add solver-specific functions/parameters to the interface, but it still has to use the generic interface for its main solving purpose.
The forward kinematics use the function JntToCart(...) to calculate the Cartesian space values from the Joint space values. The inverse kinematics use the function CartToJnt(...) to calculate the Joint space values from the Cartesian space values.
The recursive solver adds up the poses/velocities of the successive segments, going from the first to the last segment. You can also get intermediate results by giving a segment number:
ChainFkSolverPos_recursive fksolver(chain1);
JntArray q(chain1.getNrOfJoints());
// q = ... (fill in the joint positions)
Frame F_result;
fksolver.JntToCart(q, F_result, segment_nr);
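The snippet above needs a KDL build to run. To make the joint-space to Cartesian-space mapping that JntToCart computes concrete, here is a self-contained sketch of the same idea for a planar 2-link chain in plain C++ (the function name `planarFk` and the link lengths are illustrative, not KDL API):

```cpp
#include <cmath>
#include <utility>

// Forward position kinematics of a planar 2-link chain: compose the pose of
// each successive segment, which is exactly the recursion JntToCart performs.
// l1, l2 are the link lengths; q1, q2 are the joint angles in radians.
std::pair<double, double> planarFk(double l1, double l2, double q1, double q2)
{
    const double x = l1 * std::cos(q1) + l2 * std::cos(q1 + q2);
    const double y = l1 * std::sin(q1) + l2 * std::sin(q1 + q2);
    return std::make_pair(x, y);
}
```

With both joints at zero the chain is stretched along the x-axis, so `planarFk(1.0, 0.5, 0.0, 0.0)` yields (1.5, 0.0).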
This page collects all useful information for the User Group for the KUKA Light-Weight-Robot.
The following institutes are currently involved:
[We can add your details here!]
At K.U.Leuven we released Orocos Components for communicating with the LBR using RSI and FRI interfaces. The RSI component should be usable for all KUKA Robots that offer RSI.
The FRI interface software can be found at: https://github.com/wdecre/kuka-robot-hardware (replaces http://git.mech.kuleuven.be/robotics/kuka_robot_hardware.git)
The RSI interface software can be found at: http://svn.mech.kuleuven.be/repos/orocos/orocos-apps/public_release/Kuka_RSI
At KU Leuven, RSI is currently not actively used.
The FRI and RSI interfaces provide you with an Orocos component that you can add to your robot application to handle the communication with the robot controller.
A readme file with the main installation steps is provided with the code (git or svn checkout). All comments, discussions, questions and suggestions are very welcome at the mailing list: see http://lists.mech.kuleuven.be/mailman/listinfo/kuka-lwr for info on how to subscribe.
A collection of links to Orocos components
Konrad Banachowicz: https://github.com/konradb3/orocos-components
This wiki has only information for the OCL 1.x releases. For OCL 2.x, look at the 'Toolchain' wiki.
In order to have readline tab-completion in the taskbrowser, you'll need OCL 1.12.0 or 2.1.0 or later.
homepage:
download:
It is advised to keep copies/backups of these files on your own site, since they are not official readline releases, but patched to work on Windows.
and then open the solution in the directory:
The build will place a static readline.lib in the ../lib directory.
set(CMAKE_INCLUDE_PATH ${CMAKE_INCLUDE_PATH} "C:/Documents and Settings/virtual/My documents/readline5.2/include")
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "C:/Documents and Settings/virtual/My documents/readline5.2/lib")
Here 'C:/Documents and Settings/virtual/My documents/' is the directory where you unpacked the downloads.
Continue to configure OCL in the CMake GUI by turning off the NO_GPL flag (on by default on Windows). CMake will then try to link the taskbrowser with the readline.lib file, which should succeed. After installing OCL, readline should work as on Linux, but only in the standard cygwin or cmd.exe prompts, not in rxvt.
Furthermore, some significant parts of the paper "Really Reusable Robot Code and the Player/Stage Project" have been copied. The purpose is to present a possible philosophy to drive the development of OCL 2.0 (it is recommended to read the entire paper). Feel free to discuss these concepts in the forum.
This wiki has only information for the RTT 1.x releases. For RTT 2.x, look at the Toolchain wiki.
From recent discussions on the mailing list; simply a place to put down ideas before we forget them ...
Orocos to run in real-time
<quote> Actually it's an option of the omniidl compiler... the command to use is
omniidl -bcxx -Wba myIdlFile.idl
This will definitely become a FAQ item :-) </quote>
<quote> When your text is not appearing on your wiki page, it's because you ended your wiki page with an indented line. So if your last line is:
this is my last line
the wiki code clears the whole page. It's clearly a Drupal/wiki module thing/bug. </quote>
Check out OCL's HmiConsoleOutput component.
You can take a look at the CODING_STYLE.txt file. Also, we worked out the indentation rules for Eclipse and Emacs.
The tutorials and example code are split in two parts, one for new users and one for experienced users of the RTT.
There are several sources where you can find code and tutorials. Some code is listed in wiki pages, some is downloadable in a separate package, and finally you can find code snippets in the manuals too.
RTT Examples Get started with simple, ready-to-compile examples of how to create a component
Naming connections, not ports: Orocos' best kept secret
Using omniORBpy to interact with a component from Python
These advanced examples are mainly about extending and configuring the RTT for your specific needs.
Using plugins and toolkits to support custom data types
Using non-periodic components to implement a simple TCP client
Using XML substitution to manage complex deployments
This is a work in progress and only for RTT 1.x !
Problem: You want to pass custom types between distributed components, be able to see the value(s) of your custom type within a deployer, and be able to read/write the custom type to/from XML files.
Solution: Develop two plugins that tell Orocos about your custom types.
An RTT transport plugin provides methods to transport your custom types across CORBA, and hence between distributed Orocos components.
This is a multi-part example demonstrating plugins for two boost::posix_time types: ptime and time_duration.
For additional information on plugins and their development, see [1].
Also, the KDL toolkit and transport plugins are good examples. See src/bindings/rtt in the KDL source.
.
|-- BoostToolkit.cpp
|-- BoostToolkit.hpp
|-- CMakeLists.txt
|-- config
|   |-- FindACE.cmake
|   |-- FindCorba.cmake
|   |-- FindOmniORB.cmake
|   |-- FindOrocos-OCL.cmake
|   |-- FindOrocos-RTT.cmake
|   |-- FindTAO.cmake
|   |-- UseCorba.cmake
|   `-- UseOrocos.cmake
|-- corba
|   |-- BoostCorbaConversion.hpp
|   |-- BoostCorbaToolkit.cpp
|   |-- BoostCorbaToolkit.hpp
|   |-- BoostTypes.idl
|   |-- CMakeLists.txt
|   `-- tests
|       |-- CMakeLists.txt
|       |-- corba-combined.cpp
|       |-- corba-recv.cpp
|       `-- corba-send.cpp
`-- tests
    |-- CMakeLists.txt
    |-- combined.cpp
    |-- no-toolkit.cpp
    |-- recv.cpp
    |-- recv.hpp
    |-- send.cpp
    `-- send.hpp
The toolkit plugin is in the root directory, with supporting test files in the tests directory.
CMake support files are in the config directory.
The transport plugin is in the corba directory, with supporting test files in the corba/tests directory.
Currently, this example does not yet
NB I could not find a method to get at the underlying raw 64-bit or 96-bit boost representation of ptime. Hence, the transport plugin inefficiently transports a ptime type using two separate data values. If you know of a method to get at the raw representation, I would love to know. Good luck in template land ...
Attachment | Size |
---|---|
BoostToolkit.hpp | 2.64 KB |
BoostToolkit.cpp | 3.58 KB |
CMakeLists.txt | 1.83 KB |
corba/BoostCorbaToolkit.hpp | 934 bytes |
corba/BoostCorbaToolkit.cpp | 1.34 KB |
corba/BoostCorbaConversion.hpp | 5.18 KB |
corba/CMakeLists.txt | 738 bytes |
plugins.tar_.bz2 | 14.24 KB |
This is a work in progress
This part creates components that use your custom type, and demonstrates that Orocos does not know anything about these types.
cd /path/to/plugins
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=OFF
make
For other operating systems, substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").
Tested in Mac OS X Leopard 10.5.7.
In a shell
cd /path/to/plugins/build
./no-toolkit
This starts a test case that uses an OCL taskbrowser to show two components: send and recv. If you issue an "ls" or "ls Send" command, you will get output similar to the following:
Data Flow Ports:
 RW(C) unknown_t ptime        = (unknown_t)
 RW(C) unknown_t timeDuration = (unknown_t)
Each component has two ports, named ptime and timeDuration. Notice that both ports are connected "(C)", but that Orocos considers each an unknown type with an unknown value.
Part 2 Toolkit plugin will build a toolkit plugin that allows Orocos to understand these types.
This is a work in progress
This part creates a toolkit plugin making our types known to Orocos.
Everything needed for this part was built in Part 1.
In a shell
cd /path/to/plugins/build
./combined
The combined test uses an OCL taskbrowser to show two components: send and recv. Typing an "ls" or "ls Send" command, as in Part 1, you will get something like the following:
 RW(C) boost_ptime ptime               = 2009-Aug-09 16:14:19.724622
 RW(C) boost_timeduration timeDuration = 00:00:00.200005
Note that Orocos now knows the correct types (e.g. boost_ptime) and can display each port's value. Issue multiple ls commands and you will see the values change. The ptime is simply the date and time at which the send component set the port value, and the duration is the time between port values being set on each iteration (i.e. it should be approximately the period of the send component).
namespace Examples
{
    /// \remark these do not need to be in the same namespace as the plugin

    /// put the time onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t);
    /// put the time duration onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d);
    /// get a time from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t);
    /// get a time duration from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d);
    class BoostPlugin : public RTT::ToolkitPlugin
    {
    public:
        virtual std::string getName();

        virtual bool loadTypes();
        virtual bool loadConstructors();
        virtual bool loadOperators();
    };

    /// The singleton for the Toolkit.
    extern BoostPlugin BoostToolkit;
    /// provide ptime type to RTT type system
    /// \remark the 'true' argument indicates that we supply stream operators
    struct BoostPtimeTypeInfo :
        public RTT::TemplateTypeInfo<boost::posix_time::ptime,true>
    {
        BoostPtimeTypeInfo(std::string name) :
            RTT::TemplateTypeInfo<boost::posix_time::ptime,true>(name)
        {};
        bool decomposeTypeImpl(const boost::posix_time::ptime& img, RTT::PropertyBag& targetbag);
        bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::ptime& img);
    };

    /// provide time duration type to RTT type system
    /// \remark the 'true' argument indicates that we supply stream operators
    struct BoostTimeDurationTypeInfo :
        public RTT::TemplateTypeInfo<boost::posix_time::time_duration,true>
    {
        BoostTimeDurationTypeInfo(std::string name) :
            RTT::TemplateTypeInfo<boost::posix_time::time_duration,true>(name)
        {};
        bool decomposeTypeImpl(const boost::posix_time::time_duration& img, RTT::PropertyBag& targetbag);
        bool composeTypeImpl(const RTT::PropertyBag& bag, boost::posix_time::time_duration& img);
    };

} // namespace Examples
The toolkit plugin implementation is in the BoostToolkit.cpp file.
namespace Examples
{
    using namespace RTT;
    using namespace RTT::detail;
    using namespace std;

    std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t)
    {
        os << boost::posix_time::to_simple_string(t);
        return os;
    }

    std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d)
    {
        os << boost::posix_time::to_simple_string(d);
        return os;
    }

    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t)
    {
        // note: an unqualified "is >> t" here would recurse back into this very
        // operator; delegate explicitly to Boost's stream extraction instead
        // (requires <boost/date_time/posix_time/posix_time_io.hpp>)
        return boost::posix_time::operator>>(is, t);
    }

    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d)
    {
        // same remark as above: delegate explicitly to avoid self-recursion
        return boost::posix_time::operator>>(is, d);
    }
    BoostPlugin BoostToolkit;

    std::string BoostPlugin::getName()
    {
        return "Boost";
    }

    bool BoostPlugin::loadTypes()
    {
        TypeInfoRepository::shared_ptr ti = TypeInfoRepository::Instance();
        /* each quoted name here (eg "boost_ptime") must _EXACTLY_ match that in
           the associated TypeInfo::composeTypeImpl() and TypeInfo::decomposeTypeImpl()
           functions (in this file), as well as the name registered in the associated
           Corba plugin's registerTransport() function (see corba/BoostCorbaToolkit.cpp) */
        ti->addType( new BoostPtimeTypeInfo("boost_ptime") );
        ti->addType( new BoostTimeDurationTypeInfo("boost_timeduration") );
        return true;
    }

    bool BoostPlugin::loadConstructors()
    {
        // no constructors for these particular types
        return true;
    }

    bool BoostPlugin::loadOperators()
    {
        // no operators for these particular types
        return true;
    }
    bool BoostPtimeTypeInfo::decomposeTypeImpl(const boost::posix_time::ptime& source,
                                               PropertyBag& targetbag)
    {
        targetbag.setType("boost_ptime");
        assert(0);
        return true;
    }

    bool BoostPtimeTypeInfo::composeTypeImpl(const PropertyBag& bag,
                                             boost::posix_time::ptime& result)
    {
        if ( "boost_ptime" == bag.getType() )  // ensure is correct type
        {
            // \todo
            assert(0);
        }
        return false;
    }
ORO_TOOLKIT_PLUGIN(Examples::BoostToolkit)
cmake_minimum_required(VERSION 2.6)

# pick up additional cmake package files (eg FindXXX.cmake) from this directory
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/config")
find_package(Orocos-RTT 1.6.0 REQUIRED corba)
find_package(Orocos-OCL 1.6.0 REQUIRED taskbrowser)
include(${CMAKE_SOURCE_DIR}/config/UseOrocos.cmake)
create_component(BoostToolkit-${OROCOS_TARGET} VERSION 1.0.0 BoostToolkit.cpp)
TARGET_LINK_LIBRARIES(BoostToolkit-${OROCOS_TARGET} boost_date_time)
SUBDIRS(tests)
The send component regularly updates the current time on its ptime port, and the duration between ptime port updates on its timeDuration port.
class Send : public RTT::TaskContext
{
public:
    RTT::DataPort<boost::posix_time::ptime>         ptime_port;
    RTT::DataPort<boost::posix_time::time_duration> timeDuration_port;

public:
    Send(std::string name);
    virtual ~Send();

    virtual bool startHook();
    virtual void updateHook();

protected:
    boost::posix_time::ptime lastNow;
};
The implementation is very simple, and will not be discussed in detail here.
#include "send.hpp"

Send::Send(std::string name) :
    RTT::TaskContext(name),
    ptime_port("ptime"),
    timeDuration_port("timeDuration")
{
    ports()->addPort(&ptime_port);
    ports()->addPort(&timeDuration_port);
}

Send::~Send()
{
}

bool Send::startHook()
{
    // just set last to now
    lastNow = boost::posix_time::microsec_clock::local_time();
    return true;
}

void Send::updateHook()
{
    boost::posix_time::ptime         now;
    boost::posix_time::time_duration delta;

    // send the current time, and the duration since the last updateHook()
    now   = boost::posix_time::microsec_clock::local_time();
    delta = now - lastNow;
    ptime_port.Set(now);
    timeDuration_port.Set(delta);
    lastNow = now;
}
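The timestamp-plus-delta bookkeeping in updateHook() can be sketched without RTT or Boost at all. This hypothetical `TickSource` class (not part of the example code) does the same thing with std::chrono:

```cpp
#include <chrono>

// Self-contained sketch of the pattern Send::updateHook() uses: remember the
// previous timestamp and report the elapsed duration on every iteration.
class TickSource
{
    std::chrono::steady_clock::time_point lastNow_;

public:
    TickSource() : lastNow_(std::chrono::steady_clock::now()) {}

    // returns the duration since the previous call: the equivalent of the
    // value Send publishes on its timeDuration port each period
    std::chrono::steady_clock::duration tick()
    {
        const auto now   = std::chrono::steady_clock::now();
        const auto delta = now - lastNow_;
        lastNow_ = now;
        return delta;
    }
};
```

If tick() is called once per period, the returned delta approximates the component's period, which is exactly what the taskbrowser displays on the timeDuration port.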
The recv component has the same ports but does nothing. It is simply an empty receiver component that allows us to view its ports within the deployer.
class Recv : public RTT::TaskContext
{
public:
    RTT::DataPort<boost::posix_time::ptime>         ptime_port;
    RTT::DataPort<boost::posix_time::time_duration> timeDuration_port;

public:
    Recv(std::string name);
    virtual ~Recv();
};
And the recv implementation.
#include "recv.hpp"

Recv::Recv(std::string name) :
    RTT::TaskContext(name),
    ptime_port("ptime"),
    timeDuration_port("timeDuration")
{
    ports()->addPort(&ptime_port);
    ports()->addPort(&timeDuration_port);
}

Recv::~Recv()
{
}
Now the combined test program just combines one of each test component directly within the same executable.
#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
#include <ocl/TaskBrowser.hpp>

#include "send.hpp"
#include "recv.hpp"
#include "../BoostToolkit.hpp"

using namespace std;
using namespace Orocos;

int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import(Examples::BoostToolkit);

    Recv recv("Recv");
    PeriodicActivity recv_activity(ORO_SCHED_OTHER, 0, 0.1, recv.engine());
    Send send("Send");
    PeriodicActivity send_activity(ORO_SCHED_OTHER, 0, 0.2, send.engine());

    if ( connectPeers( &send, &recv ) == false )
    {
        log(Error) << "Could not connect peers !" << endlog();
        return -1;
    }
    if ( connectPorts( &send, &recv) == false )
    {
        log(Error) << "Could not connect ports !" << endlog();
        return -1;
    }

    send.configure();
    recv.configure();
    send_activity.start();
    recv_activity.start();

    TaskBrowser browser( &recv );
    browser.setColorTheme( TaskBrowser::whitebg );
    browser.loop();

    send_activity.stop();
    recv_activity.stop();
    return 0;
}
The differences between the combined and no-toolkit test programs will be covered in Part 2, but they essentially amount to not loading the toolkit.
Part 3 Transport plugin will build a transport plugin allowing Orocos to communicate these types across CORBA.
This is a work in progress
This part builds a transport plugin allowing Orocos to communicate these types across CORBA.
In a shell
cd /path/to/plugins
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx -DENABLE_CORBA=ON
make
The only difference from building in Part 1 is to turn CORBA ON.
For other operating systems, substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").
Tested in Mac OS X Leopard 10.5.7.
In a shell
cd /path/to/plugins/build/corba/tests
./corba-recv
In a second shell
cd /path/to/plugins/build/corba/tests
./corba-send
Now the exact same two test components of Parts 1 and 2 are in separate processes. Typing ls in either process will present the same values (subject to network latency, which typically is not humanly perceptible) - the data and types are now being communicated between deployers.
Now, the transport plugin is responsible for communicating the types between deployers, while the toolkit plugin is responsible for knowing each type and being able to display it. Separate responsibilities. Separate plugins.
NB: for the example components, send must be started after recv. Starting only corba-recv and issuing ls will display the default values for each type. Also, quitting the send component and then attempting to use the recv component will lock up the recv deployer. These limitations are not due to the plugins - they are simply due to the limited functionality of these test cases.
Running the same two CORBA test programs without loading the transport plugin is instructive: it shows what happens when certain names in the toolkit sources do not match up. This is very important!
In a shell
cd /path/to/plugins/build/corba/tests
./corba-recv-no-toolkit
Data Flow Ports:
 RW(U) boost_ptime ptime               = not-a-date-time
 RW(U) boost_timeduration timeDuration = 00:00:00
In a second shell
cd /path/to/plugins/build/corba/tests
./corba-send-no-toolkit
The send component without the transport plugin fails to start, with:
$ ./build/corba/tests/corba-send-no-toolkit
0.008 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Forcing priority (0) of thread to 0.
0.008 [ Warning][PeriodicThread] Forcing priority (0) of thread to 0.
0.027 [ Warning][SingleThread] Forcing priority (0) of thread to 0.
5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] ControlTask 'Send' already bound to CORBA Naming Service.
5.078 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Trying to rebind... done. New ControlTask bound to Naming Service.
5.130 [ Warning][./build/corba/tests/corba-send-no-toolkit::main()] Can not create a proxy for data connection.
5.130 [ ERROR  ][./build/corba/tests/corba-send-no-toolkit::main()] Dynamic cast failed for 'PN3RTT14DataSourceBaseE', 'unknown_t', 'unknown_t'. Do your typenames not match?
Assertion failed: (doi && "Dynamic cast failed! See log file for details."), function createConnection, file /opt/install/include/rtt/DataPort.hpp, line 462.
Abort trap
*** corba/tests/corba-recv.cpp	2009-07-29 22:08:32.000000000 -0400
--- corba/tests/corba-recv-no-toolkit.cpp	2009-08-09 16:32:03.000000000 -0400
***************
*** 11,17 ****
  #include <rtt/os/main.h>
  #include <rtt/Ports.hpp>
- #include "../BoostCorbaToolkit.hpp"
  #include "../../BoostToolkit.hpp"
  // use Boost RTT Toolkit test components
--- 11,16 ----
***************
*** 27,33 ****
  int ORO_main(int argc, char* argv[])
  {
      RTT::Toolkit::Import( Examples::BoostToolkit );
-     RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin );
      Recv recv("Recv");
      PeriodicActivity recv_activity(
--- 26,31 ----
We define the CORBA types in corba/BoostTypes.idl. This is a file in CORBA's Interface Definition Language (IDL). There are plenty of references on the web, for instance [1].
// must be in RTT namespace to match some rtt/corba code
module RTT {
    module Corba {

        struct time_duration
        {
            short hours;
            short minutes;
            short seconds;
            long  nanoseconds;
        };

        // can't get at underlying type, so send this way (yes, more overhead)
        // see BoostCorbaConversion.hpp::struct AnyConversion<boost::posix_time::ptime>
        // for further details.
        struct ptime
        {
            // julian day
            long          date;
            time_duration time_of_day;
        };

    };
};
Note that CORBA IDL knows about certain types already, e.g. short and long, and that we can use our time_duration structure in later structures.
We will come back to this IDL file during the build process.
The actual plugin is defined in corba/BoostCorbaToolkit.hpp. This is the equivalent of the BoostToolkit.hpp file, but for a transport plugin.
namespace Examples {
namespace Corba {

    class CorbaBoostPlugin : public RTT::TransportPlugin
    {
    public:
        /// register this transport into the RTT type system
        bool registerTransport(std::string name, RTT::TypeInfo* ti);

        /// return the name of this transport type (ie "CORBA")
        std::string getTransportName() const;

        /// return the name of this transport
        std::string getName() const;
    };

    // the global instance
    extern CorbaBoostPlugin corbaBoostPlugin;

// namespace
} }
The implementation of the plugin is in corba/BoostCorbaToolkit.cpp, and is very straightforward.
namespace Examples {
namespace Corba {

    bool CorbaBoostPlugin::registerTransport(std::string name, TypeInfo* ti)
    {
        // name must match that in plugin::loadTypes() and
        // typeInfo::composeTypeInfo(), etc
        assert( name == ti->getTypeName() );
        if ( name == "boost_ptime" )
            return ti->addProtocol(ORO_CORBA_PROTOCOL_ID,
                                   new CorbaTemplateProtocol< boost::posix_time::ptime >() );
        if ( name == "boost_timeduration" )
            return ti->addProtocol(ORO_CORBA_PROTOCOL_ID,
                                   new CorbaTemplateProtocol< boost::posix_time::time_duration >() );
        return false;
    }

    std::string CorbaBoostPlugin::getTransportName() const
    {
        return "CORBA";
    }

    std::string CorbaBoostPlugin::getName() const
    {
        return "CorbaBoost";
    }
For a CORBA transport plugin, the name returned by getTransportName() should be CORBA.
    CorbaBoostPlugin corbaBoostPlugin;

// namespace
} }

ORO_TOOLKIT_PLUGIN(Examples::Corba::corbaBoostPlugin);
I will only cover the code for converting one of the types. The other is very similar - you can examine it yourself in the source file.
#include "BoostTypesC.h"
#include <rtt/corba/CorbaConversion.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp> // no I/O

// must be in RTT namespace to match some rtt/corba code
namespace RTT {

    template<>
    struct AnyConversion< boost::posix_time::time_duration >
    {
        // define the Corba and standard (ie non-Corba) types we are using
        typedef Corba::time_duration             CorbaType;
        typedef boost::posix_time::time_duration StdType;
The last four of the following six functions are required by the CORBA library, to enable conversion between the CORBA and non-CORBA types. The two convert functions are there for convenience, and to save replicating code.
        // convert CorbaType to StdType
        static void convert(const CorbaType& orig, StdType& ret)
        {
            ret = boost::posix_time::time_duration(orig.hours,
                                                   orig.minutes,
                                                   orig.seconds,
                                                   orig.nanoseconds);
        }

        // convert StdType to CorbaType
        static void convert(const StdType& orig, CorbaType& ret)
        {
            ret.hours       = orig.hours();
            ret.minutes     = orig.minutes();
            ret.seconds     = orig.seconds();
            ret.nanoseconds = orig.fractional_seconds();
        }

        static CorbaType* toAny(const StdType& orig)
        {
            CorbaType* ret = new CorbaType();
            convert(orig, *ret);
            return ret;
        }

        static StdType get(const CorbaType* orig)
        {
            StdType ret;
            convert(*orig, ret);
            return ret;
        }

        static bool update(const CORBA::Any& any, StdType& ret)
        {
            CorbaType* orig;
            if ( any >>= orig )
            {
                convert(*orig, ret);
                return true;
            }
            return false;
        }

        static CORBA::Any_ptr createAny( const StdType& t )
        {
            CORBA::Any_ptr ret = new CORBA::Any();
            *ret <<= toAny( t );
            return ret;
        }
    };
The same six functions then follow for our boost::ptime type. They are not covered in detail here.
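The field-wise split that the convert() pair performs can be checked in isolation. The sketch below uses a plain `WireDuration` struct (a stand-in mirroring the IDL time_duration; the names `WireDuration`, `toWire`, and `fromWire` are illustrative, not part of the example code) with a total-nanosecond count standing in for boost::posix_time::time_duration:

```cpp
#include <cstdint>

// Stand-in for the IDL time_duration struct: the duration is shipped as
// separate hour/minute/second fields plus a fractional-second remainder.
struct WireDuration {
    short hours;
    short minutes;
    short seconds;
    long  nanoseconds;   // fractional part, always < 1'000'000'000
};

// split a total-nanosecond count into the wire fields
WireDuration toWire(std::int64_t total_ns)
{
    WireDuration w;
    w.nanoseconds = static_cast<long>(total_ns % 1000000000LL);
    const std::int64_t total_s = total_ns / 1000000000LL;
    w.seconds = static_cast<short>(total_s % 60);
    w.minutes = static_cast<short>((total_s / 60) % 60);
    w.hours   = static_cast<short>(total_s / 3600);
    return w;
}

// reassemble the total-nanosecond count from the wire fields
std::int64_t fromWire(const WireDuration& w)
{
    return ((static_cast<std::int64_t>(w.hours) * 60 + w.minutes) * 60 + w.seconds)
               * 1000000000LL + w.nanoseconds;
}
```

Round-tripping through the wire struct must be lossless; that is exactly the property the toAny()/get() pair relies on when a port value crosses CORBA.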
IF (ENABLE_CORBA)

    INCLUDE(${CMAKE_SOURCE_DIR}/config/UseCorba.cmake)

    FILE( GLOB IDLS [^.]*.idl )
    FILE( GLOB CPPS [^.]*.cpp )
    ORO_ADD_CORBA_SERVERS(CPPS HPPS ${IDLS} )

    INCLUDE_DIRECTORIES( ${CMAKE_CURRENT_BINARY_DIR}/. )

    CREATE_COMPONENT(BoostToolkit-corba-${OROCOS_TARGET} VERSION 1.0.0 ${CPPS})
    TARGET_LINK_LIBRARIES(BoostToolkit-corba-${OROCOS_TARGET}
                          ${OROCOS-RTT_CORBA_LIBRARIES}
                          ${CORBA_LIBRARIES})

    SUBDIRS(tests)

ENDIF (ENABLE_CORBA)
The corba-send test program instantiates a send component, and uses an RTT ControlTaskProxy to represent the remote receive component.
#include <rtt/corba/ControlTaskServer.hpp>
#include <rtt/corba/ControlTaskProxy.hpp>
#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
#include <ocl/TaskBrowser.hpp>

#include "../BoostCorbaToolkit.hpp"
#include "../../BoostToolkit.hpp"
#include "../../tests/send.hpp"

using namespace std;
using namespace Orocos;
using namespace RTT::Corba;

int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import( Examples::BoostToolkit );
    RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin );

    Send send("Send");
    PeriodicActivity send_activity(
        ORO_SCHED_OTHER, 0, 1.0 / 10, send.engine());   // 10 Hz

    // start Corba and find the remote task
    ControlTaskProxy::InitOrb(argc, argv);
    ControlTaskServer::ThreadOrb();

    TaskContext* recv = ControlTaskProxy::Create( "Recv" );
    assert(NULL != recv);

    if ( connectPeers( recv, &send ) == false )
    {
        log(Error) << "Could not connect peers !" << endlog();
    }
    // create data object at recv's side
    if ( connectPorts( recv, &send) == false )
    {
        log(Error) << "Could not connect ports !" << endlog();
    }

    send.configure();
    send_activity.start();

    log(Info) << "Starting task browser" << endlog();
    OCL::TaskBrowser tb( recv );
    tb.loop();

    send_activity.stop();

    ControlTaskProxy::DestroyOrb();
    return 0;
}
The receive test program has a similar structure to the send test program.
#include <rtt/corba/ControlTaskServer.hpp>
#include <rtt/corba/ControlTaskProxy.hpp>
#include <rtt/RTT.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TaskContext.hpp>
#include <rtt/os/main.h>
#include <rtt/Ports.hpp>
#include <ocl/TaskBrowser.hpp>

#include "../BoostCorbaToolkit.hpp"
#include "../../BoostToolkit.hpp"
#include "../../tests/recv.hpp"

using namespace std;
using namespace Orocos;
using namespace RTT::Corba;

int ORO_main(int argc, char* argv[])
{
    RTT::Toolkit::Import( Examples::BoostToolkit );
    RTT::Toolkit::Import( Examples::Corba::corbaBoostPlugin );

    Recv recv("Recv");
    PeriodicActivity recv_activity(
        ORO_SCHED_OTHER, 0, 1.0 / 5, recv.engine());    // 5 Hz

    // Setup Corba and Export:
    ControlTaskServer::InitOrb(argc, argv);
    ControlTaskServer::Create( &recv );
    ControlTaskServer::ThreadOrb();

    // Wait for requests:
    recv.configure();
    recv_activity.start();

    OCL::TaskBrowser tb( &recv );
    tb.loop();

    recv_activity.stop();

    // Cleanup Corba:
    ControlTaskServer::ShutdownOrb();
    ControlTaskServer::DestroyOrb();
    return 0;
}
The no-toolkit versions of the test programs are identical, except they simply do not load the transport plugin, making it impossible to transport the boost types over CORBA.
Now located at http://orocos.org/wiki/rtt/examples-and-tutorials
Problem: How to reuse a component when you need the ports to have different names?
Solution: Name the connection between ports in the deployer. This essentially allows you to rename ports. Unfortunately, this extremely useful feature is not documented anywhere (as of July, 2009).
This example occurs in three parts
class HMI : public RTT::TaskContext
{
protected:
    // *** OUTPUTS ***

    /// desired cartesian position
    RTT::WriteDataPort<KDL::Frame> cartesianPosition_desi_port;

public:
    HMI(std::string name);
    virtual ~HMI();

protected:
    /// set the desired cartesian position to an initial value
    /// \return true
    virtual bool startHook();
};
class Robot : public RTT::TaskContext
{
protected:
    // *** INPUTS ***

    /// desired cartesian position
    RTT::ReadDataPort<KDL::Frame> cartesianPosition_desi_port;

public:
    Robot(std::string name);
    virtual ~Robot();
};
class OneAxisFilter : public RTT::TaskContext
{
protected:
    // *** INPUTS ***

    /// desired cartesian position
    RTT::ReadDataPort<KDL::Frame> inputPosition_port;

    // *** OUTPUTS ***

    /// desired cartesian position
    RTT::WriteDataPort<KDL::Frame> outputPosition_port;

    // *** CONFIGURATION ***

    /// specify which axis to filter (should be one of "x", "y", or "z")
    RTT::Property<std::string> axis_prop;

public:
    OneAxisFilter(std::string name);
    virtual ~OneAxisFilter();

protected:
    /// validate axis_prop value
    /// \return true if axis_prop value is valid, otherwise false
    virtual bool configureHook();

    /// filter one translational axis (as specified by axis_prop)
    virtual void updateHook();
};
The component implementations are not given in this example, as they are not the interesting part of the solution, but are available in the Files section above.
The interesting part is in the deployment files ...
This part simply connects the HMI and robot together (see deployment file Connect-1.xml).
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>

  <simple name="Import" type="string"><value>liborocos-rtt</value></simple>
  <simple name="Import" type="string"><value>liborocos-kdl</value></simple>
  <simple name="Import" type="string"><value>liborocos-kdltk</value></simple>
  <simple name="Import" type="string"><value>libConnectionNaming</value></simple>
  <struct name="HMI" type="HMI">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>

    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi</value></simple>
    </struct>
  </struct>
<simple name="portName" type="string"> <value>connectionName</value> </simple>
  <struct name="Robot" type="Robot">
    <struct name="Activity" type="PeriodicActivity">
      <simple name="Period" type="double"><value>0.5</value></simple>
      <simple name="Priority" type="short"><value>0</value></simple>
      <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
    </struct>
    <simple name="AutoConf" type="boolean"><value>1</value></simple>
    <simple name="AutoStart" type="boolean"><value>1</value></simple>

    <struct name="Peers" type="PropertyBag">
      <simple type="string"><value>HMI</value></simple>
    </struct>

    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi</value></simple>
    </struct>
  </struct>

</properties>
Now, the deployer uses connection names when connecting components between peers, not port names. So it attempts to connect the Robot.cartesianPosition_desi connection to the HMI.cartesianPosition_desi connection (which, in this part, matches the port names).
Build the library, and then run this part with
cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-1.xml
Examine the HMI and Robot components, and note that each has a connected port, and the port values match.
This part adds a filter component between the HMI and the robot (see Connect-2.xml)
As with Part 1, the first part of the file loads the appropriate libraries (left out here, as it is identical to Part 1).
<struct name="HMI" type="HMI"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Ports" type="PropertyBag"> <simple name="cartesianPosition_desi" type="string"> <value>unfiltered_cartesianPosition_desi</value></simple> </struct> </struct>
<struct name="Filter" type="OneAxisFilter"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>HMI</value></simple> </struct> <struct name="Ports" type="PropertyBag"> <simple name="inputPosition" type="string"> <value>unfiltered_cartesianPosition_desi</value></simple> <simple name="outputPosition" type="string"> <value>filtered_cartesianPosition_desi</value></simple> </struct> <simple name="PropertyFile" type="string"> <value>../Filter1.cpf</value></simple> </struct>
<struct name="Robot" type="Robot"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>Filter</value></simple> </struct> <struct name="Ports" type="PropertyBag"> <simple name="cartesianPosition_desi" type="string"> <value>filtered_cartesianPosition_desi</value></simple> </struct> </struct>
Run this part with
cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-2.xml
Examine all three components, and note that all ports are connected; in particular, the HMI and Filter.inputPosition port values match, while the Filter.outputPosition and Robot port values match (i.e. they have the 'x' axis filtered out).
Using connection naming allows us to connect ports of different names. This is particularly useful with a generic component like this filter, as in one deployment it may connect to a component with ports named cartesianPosition_desi, while in another deployment it may connect to ports named CartDesiPos, or any other names. The filter component is now decoupled from the actual port names used to deploy it.
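For instance, a hypothetical second deployment (the names here are illustrative, not from the attached files) could wire the very same filter to differently named ports simply by changing the connection names in its Ports section:

    <struct name="Ports" type="PropertyBag">
        <simple name="inputPosition" type="string"><value>CartDesiPos</value></simple>
        <simple name="outputPosition" type="string"><value>filtered_CartDesiPos</value></simple>
    </struct>

The filter's C++ code is untouched; only the deployment XML changes.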
This part adds a second filter between the first filter and the robot.
As with Parts 1 and 2, the libraries are loaded first.
<struct name="HMI" type="HMI"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Ports" type="PropertyBag"> <simple name="cartesianPosition_desi" type="string"> <value>unfiltered_cartesianPosition_desi</value></simple> </struct> </struct>
<struct name="Filter1" type="OneAxisFilter"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>HMI</value></simple> </struct> <struct name="Ports" type="PropertyBag"> <simple name="inputPosition" type="string"> <value>unfiltered_cartesianPosition_desi</value></simple> <simple name="outputPosition" type="string"> <value>filtered_cartesianPosition_desi</value></simple> </struct> <simple name="PropertyFile" type="string"> <value>../Filter1.cpf</value></simple> </struct>
<struct name="Filter2" type="OneAxisFilter"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>HMI</value></simple> </struct> <struct name="Ports" type="PropertyBag"> <simple name="inputPosition" type="string"> <value>filtered_cartesianPosition_desi</value></simple> <simple name="outputPosition" type="string"> <value>double_filtered_cartesianPosition_desi</value></simple> </struct> <simple name="PropertyFile" type="string"> <value>../Filter2.cpf</value></simple> </struct>
<struct name="Robot" type="Robot"> <struct name="Activity" type="PeriodicActivity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>Filter2</value></simple> </struct> <struct name="Ports" type="PropertyBag"> <simple name="cartesianPosition_desi" type="string"> <value>double_filtered_cartesianPosition_desi</value></simple> </struct> </struct>
Run this part with
cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-3.xml
Examine all components, and note which ports are connected and what their values are. Note that the robot has two axes (x and y) filtered out.
In a shell
cd /path/to/ConnectionNaming
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx
make
For other operating systems, substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").
Tested in Mac OS X Leopard 10.5.7.
Attachment | Size |
---|---|
HMI.hpp | 2.16 KB |
Robot.hpp | 1.99 KB |
OneAxisFilter.hpp | 2.52 KB |
HMI.cpp | 2.04 KB |
Robot.cpp | 1.94 KB |
OneAxisFilter.cpp | 2.92 KB |
Connect-1.xml | 1.96 KB |
Connect-2.xml | 2.91 KB |
Connect-3.xml | 3.85 KB |
connectionNaming.tar_.bz2 | 7.01 KB |
Now located at http://orocos.org/wiki/rtt/examples-and-tutorials
Solution: Use a non-periodic component. This example outlines one method of structuring the component to deal with reads that may block, while still being responsive to other components, able to run a state machine, etc.
The .cpf file has a .txt extension simply to keep the wiki happy. To use the file, rename it to SimpleNonPeriodicClient.cpf.
This is the class definition
class SimpleNonPeriodicClient : public RTT::TaskContext
{
protected:
    // DATA INTERFACE
    // *** OUTPUTS ***
    /// the last read data
    RTT::WriteDataPort<std::string>  lastRead_port;
    /// the number of items successfully read
    RTT::Attribute<int>              countRead_attr;

    // *** CONFIGURATION ***
    /// name to listen for incoming connections on, either FQDN or IPv4 address
    RTT::Property<std::string>       hostName_prop;
    /// port to listen on
    RTT::Property<int>               hostPort_prop;
    /// timeout in seconds, when waiting for connection
    RTT::Property<int>               connectionTimeout_prop;
    /// timeout in seconds, when waiting to read
    RTT::Property<int>               readTimeout_prop;

public:
    SimpleNonPeriodicClient(std::string name);
    virtual ~SimpleNonPeriodicClient();

protected:
    /// reset count and lastRead, attempt to connect to remote
    virtual bool startHook();
    /// attempt to read and process one packet
    virtual void updateHook();
    /// close the socket and cleanup
    virtual void stopHook();
    /// cause updateHook() to return
    virtual bool breakUpdateHook();

    /// Socket used to connect to remote host
    QTcpSocket* socket;
    /// Flag indicating to updateHook() that we want to quit
    bool quit;
};
The component has a series of properties specifying the remote host and port to connect to, as well as timeout parameters. It also uses an RTT attribute to count the number of successful reads, and stores the last read data as a string in an RTT data port.
#include "SimpleNonPeriodicClient.hpp" #include <rtt/Logger.hpp> #include <ocl/ComponentLoader.hpp> #include <QTcpSocket>
The class definition is included, as well as the RTT logger and, importantly, the OCL component loader that turns this class into a deployable component in a shared library.
Most importantly, all Qt-related headers come after all Orocos headers. This is required because Qt redefines certain words (e.g. "slot", "emit") which cause compilation errors when used in our own or Orocos code. (Defining QT_NO_KEYWORDS when building also avoids this, but ordering the includes is simpler here.)
SimpleNonPeriodicClient::SimpleNonPeriodicClient(std::string name) :
    RTT::TaskContext(name),
    lastRead_port("lastRead", ""),
    countRead_attr("countRead", 0),
    hostName_prop("HostName",
                  "Name to listen for incoming connections on (FQDN or IPv4)", ""),
    hostPort_prop("HostPort",
                  "Port to listen on (1024-65535 inclusive)", 0),
    connectionTimeout_prop("ConnectionTimeout",
                           "Timeout in seconds, when waiting for connection", 0),
    readTimeout_prop("ReadTimeout",
                     "Timeout in seconds, when waiting for read", 0),
    socket(new QTcpSocket),
    quit(false)
{
    ports()->addPort(&lastRead_port);
    attributes()->addAttribute(&countRead_attr);
    properties()->addProperty(&hostName_prop);
    properties()->addProperty(&hostPort_prop);
    properties()->addProperty(&connectionTimeout_prop);
    properties()->addProperty(&readTimeout_prop);
}
The constructor simply sets up the data interface elements (i.e. the port, attribute and properties) and gives them appropriate initial values. Note that some of these initial values are deliberately illegal, which would aid validation code in a configureHook() (not implemented in this example).
SimpleNonPeriodicClient::~SimpleNonPeriodicClient()
{
    delete socket;
}
The destructor cleans up by deleting the socket we allocated in the constructor.
Now to the meat of it
bool SimpleNonPeriodicClient::startHook()
{
    bool rc = false;    // prove otherwise
    std::string hostName  = hostName_prop.rvalue();
    int hostPort          = hostPort_prop.rvalue();
    int connectionTimeout = connectionTimeout_prop.rvalue();
    quit = false;

    // attempt to connect to remote host/port
    log(Info) << "Connecting to " << hostName << ":" << hostPort << endlog();
    socket->connectToHost(hostName.c_str(), hostPort);
    if (socket->waitForConnected(1000 * connectionTimeout))    // to milliseconds
    {
        log(Info) << "Connected" << endlog();
        rc = true;
    }
    else
    {
        log(Error) << "Error connecting: " << socket->error() << ", "
                   << socket->errorString().toStdString() << endlog();
        // as we now return false, this component will fail to start
    }
    return rc;
}
If the connection fails, startHook() returns false, which prevents the component from actually being started. No reconnection is attempted (see Assumptions above).
void SimpleNonPeriodicClient::updateHook()
{
    // wait for some data to arrive, timing out if necessary
    int readTimeout = readTimeout_prop.rvalue();
    log(Debug) << "Waiting for data with timeout=" << readTimeout
               << " seconds" << endlog();
    if (!socket->waitForReadyRead(1000 * readTimeout))
    {
        log(Error) << "Error waiting for data: " << socket->error() << ", "
                   << socket->errorString().toStdString()
                   << ". Num bytes = " << socket->bytesAvailable() << endlog();
        log(Error) << "Disconnecting" << endlog();
        // disconnect socket, and do NOT call this function again
        // ie no engine()->getActivity()->trigger()
        socket->disconnectFromHost();
        return;
    }

    // read and print whatever data is available, but stop if instructed to quit
    while (!quit && (0 < socket->bytesAvailable()))
    {
#define BUFSIZE 10
        char str[BUFSIZE + 1];    // +1 for terminator
        qint64 numRead = socket->read(&str[0],
                                      std::min(qint64(BUFSIZE), socket->bytesAvailable()));
        if (0 < numRead)
        {
            str[numRead] = '\0';  // terminate at the number of bytes actually read
            log(Info) << "Got " << numRead << " bytes : '" << &str[0] << "'" << endlog();
            countRead_attr.set(countRead_attr.get() + 1);
            lastRead_port.Set(&str[0]);
        }
    }

    // if not quitting then trigger another immediate call to this function,
    // to get the next batch of data
    if (!quit)
    {
        engine()->getActivity()->trigger();
    }
}
The updateHook() function waits until data is available, and then reads the data BUFSIZE characters at a time. If it times out waiting for data, it logs an error and disconnects the socket. This is not a robust approach; a real application would deal with this differently.
As data may be continually arriving, and/or more than BUFSIZE characters may be available at once, the while loop may iterate several times. The quit flag indicates that the user wants to stop the component and that we should stop reading characters.
Of particular note is the last line, which asks the activity to schedule another immediate call to updateHook(), so the component keeps fetching the next batch of data without a periodic trigger:
engine()->getActivity()->trigger();
void SimpleNonPeriodicClient::stopHook()
{
    if (socket->isValid() &&
        (QAbstractSocket::ConnectedState == socket->state()))
    {
        log(Info) << "Disconnecting" << endlog();
        socket->disconnectFromHost();
    }
}
bool SimpleNonPeriodicClient::breakUpdateHook()
{
    quit = true;
    return true;
}
We could have also done something like socket->abort() to forcibly terminate any blocked socket->waitForReadyRead() calls.
When using system calls (e.g. read()) instead of Qt classes, you could attempt to send a signal to interrupt the system call; however, this might not have the desired effect when the component is deployed, so the reader is advised to be careful here.
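A more portable alternative when working with raw file descriptors (a POSIX sketch, not part of the attached example code) is to bound the wait with select(), which returns on data or timeout without needing signals at all:

```cpp
#include <cassert>
#include <sys/select.h>
#include <unistd.h>

// Sketch only: wait until fd is readable, or timeoutSec seconds elapse.
// Returns true if data is available, false on timeout (or error).
bool waitForReadable(int fd, int timeoutSec)
{
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(fd, &readable);
    struct timeval tv;
    tv.tv_sec  = timeoutSec;
    tv.tv_usec = 0;
    // select() returns the number of ready descriptors, 0 on timeout,
    // and -1 on error (e.g. when interrupted by a signal)
    return 0 < select(fd + 1, &readable, NULL, NULL, &tv);
}
```

An updateHook() could then call waitForReadable() and issue read() only when data is known to be present, mirroring the waitForReadyRead()/read() pattern above.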
ORO_CREATE_COMPONENT(SimpleNonPeriodicClient)
In a shell
cd /path/to/SimpleNonPeriodicClient
mkdir build
cd build
cmake .. -DOROCOS_TARGET=macosx
make
For other operating systems, substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").
Tested in Mac OS X Leopard 10.5.7, and Ubuntu Jaunty Linux.
Start one shell and run netcat to act as the server (NB 50001 is the HostPort value from your SimpleNonPeriodicClient.cpf file)
nc -l 50001
Start a second shell and deploy the SimpleNonPeriodicClient component
cd /path/to/SimpleNonPeriodicClient/build
deployer-macosx -s ../SimpleNonPeriodicClient.xml
Now type in the first shell; when you hit enter, netcat sends the data and the SimpleNonPeriodicClient component prints it in chunks of N characters (where N is BUFSIZE, the size of the buffer in updateHook()).
Points to note:
Attachment | Size |
---|---|
SimpleNonPeriodicClient.cpp | 7.42 KB |
SimpleNonPeriodicClient.hpp | 3.11 KB |
SimpleNonPeriodicClient.xml | 1 KB |
SimpleNonPeriodicClient-cpf.txt | 748 bytes |
SimpleNonPeriodicClient.tar_.bz2 | 7.72 KB |
The netcat shell, with the text the user typed in.
nc -l 50001
The quick brown fox jumps over the lazy dog.
The deployer shell, showing the text read in chunks, as well as the updated port and attribute within the component.
deployer-macosx -s ../SimpleNonPeriodicClient.xml -linfo 0.009 [ Info ][deployer-macosx::main()] No plugins present in /usr/lib/rtt/macosx/plugins 0.009 [ Info ][DeploymentComponent::loadComponents] Loading '../SimpleNonPeriodicClient.xml'. 0.010 [ Info ][DeploymentComponent::loadComponents] Validating new configuration... 0.011 [ Info ][DeploymentComponent::loadLibrary] Storing orocos-rtt 0.011 [ Info ][DeploymentComponent::loadLibrary] Loaded shared library 'liborocos-rtt-macosx.dylib' 0.054 [ Info ][DeploymentComponent::loadLibrary] Loaded multi component library 'libSimpleNonPeriodicClient.dylib' 0.054 [ Warning][DeploymentComponent::loadLibrary] Component type name SimpleNonPeriodicClient already used: overriding. 0.054 [ Info ][DeploymentComponent::loadLibrary] Loaded component type 'SimpleNonPeriodicClient' 0.055 [ Info ][DeploymentComponent::loadLibrary] Storing SimpleNonPeriodicClient 0.058 [ Info ][DeploymentComponent::loadComponent] Adding SimpleNonPeriodicClient as new peer: OK. 0.058 [ Warning][SingleThread] Forcing priority (0) of thread to 0. 0.058 [ Info ][NonPeriodicActivity] SingleThread created with priority 0 and period 0. 0.058 [ Info ][NonPeriodicActivity] Scheduler type was set to `4'. 0.059 [ Info ][PropertyLoader:configure] Configuring TaskContext 'SimpleNonPeriodicClient' with '../SimpleNonPeriodicClient.cpf'. 0.059 [ Info ][DeploymentComponent::configureComponents] Configured Properties of SimpleNonPeriodicClient from ../SimpleNonPeriodicClient.cpf 0.059 [ Info ][DeploymentComponent::configureComponents] Re-setting activity of SimpleNonPeriodicClient 0.059 [ Info ][DeploymentComponent::configureComponents] Configuration successful. 0.060 [ Info ][DeploymentComponent::startComponents] Connecting to 127.0.0.1:50001 0.064 [ Info ][DeploymentComponent::startComponents] Connected 0.065 [ Info ][DeploymentComponent::startComponents] Startup successful. 
0.065 [ Info ][deployer-macosx::main()] Successfully loaded, configured and started components from ../SimpleNonPeriodicClient.xml Switched to : Deployer 0.066 [ Info ][SimpleNonPeriodicClient] Entering Task Deployer This console reader allows you to browse and manipulate TaskContexts. You can type in a command, event, method, expression or change variables. (type 'help' for instructions) TAB completion and HISTORY is available ('bash' like) In Task Deployer[S]. (Status of last Command : none ) (type 'ls' for context info) :4.816 [ Info ][SimpleNonPeriodicClient] Got 10 bytes : 'The quick ' 4.816 [ Info ][SimpleNonPeriodicClient] Got 10 bytes : 'brown fox ' 7.448 [ Info ][SimpleNonPeriodicClient] Got 10 bytes : 'jumps over' 7.448 [ Info ][SimpleNonPeriodicClient] Got 10 bytes : ' the lazy ' 12.448 [ ERROR ][SimpleNonPeriodicClient] Error waiting for data: 5, Network operation timed out. Num bytes = 5 12.448 [ ERROR ][SimpleNonPeriodicClient] Disconnecting In Task Deployer[S]. (Status of last Command : none ) (type 'ls' for context info) :ls SimpleNonPeriodicClient Listing TaskContext SimpleNonPeriodicClient : Configuration Properties: string HostName = 127.0.0.1 (Name to listen for incoming connections on (FQDN or IPv4)) int HostPort = 50001 (Port to listen on (1024-65535 inclusive)) int ConnectionTimeout = 5 (Timeout in seconds, when waiting for connection) int ReadTimeout = 5 (Timeout in seconds, when waiting for read) Execution Interface: Attributes : int countRead = 4 Methods : activate cleanup configure error getErrorCount getPeriod getWarningCount inFatalError inRunTimeError inRunTimeWarning isActive isConfigured isRunning resetError start stop trigger update warning Commands : (none) Events : (none) Data Flow Ports: W(U) string lastRead = the lazy Task Objects: this ( The interface of this TaskContext. ) scripting ( Access to the Scripting interface. Use this object in order to load or query programs or state machines. 
) engine ( Access to the Execution Engine. Use this object in order to address programs or state machines which may or may not be loaded. ) marshalling ( Read and write Properties to a file. ) lastRead ( (No description set for this Port) ) Peers : (none) In Task Deployer[S]. (Status of last Command : none ) (type 'ls' for context info) :quit 18.089 [ Info ][DeploymentComponent::stopComponents] Stopped SimpleNonPeriodicClient 18.089 [ Info ][DeploymentComponent::cleanupComponents] Cleaned up SimpleNonPeriodicClient 18.090 [ Info ][DeploymentComponent::startComponents] Disconnected and destroyed SimpleNonPeriodicClient 18.090 [ Info ][DeploymentComponent::startComponents] Kick-out successful. 18.091 [ Info ][Logger] Orocos Logging Deactivated.
Problem: You deploy multiple configurations of your system, perhaps choosing between a real and a simulated robot, some real and simulated devices, etc. You want to parameterize the deployments to reduce the number of files you have to write for the varying configuration combinations.
Solution: Use the XML ENTITY element.
There is a top-level file per configuration, which specifies all the parameters. Each top-level file then includes a child file which instantiates components, etc.
One top level file
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE properties SYSTEM "cpf.dtd" [ <!-- internal entities for substituion --> <!ENTITY name "Console"> <!ENTITY lib "liborocos-rtt"> <!-- external entity for file substitution --> <!ENTITY FILE_NAME SYSTEM "test-entity-child.xml"> ] > <properties> &FILE_NAME; </properties>
The internal entity values substitute component names and other basic parameters. The external entity value (&FILE_NAME;) includes the child file, so that the entity values defined in the top-level file are available within the child file. Using Orocos' built-in include statement does not make the top-level entity values available within the child file.
The child file simply substitutes the two internal entities for a library name, and a component name.
<properties>
    <simple name="Import" type="string"><value>&lib;</value></simple>
    <simple name="Import" type="string"><value>liborocos-ocl-common</value></simple>
    <struct name="&name;" type="OCL::HMIConsoleOutput">
    </struct>
</properties>
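With the first top-level file's entity definitions, the XML parser expands &lib; and &name; before the deployer ever sees the content, so the child file is effectively equivalent to:

    <properties>
        <simple name="Import" type="string"><value>liborocos-rtt</value></simple>
        <simple name="Import" type="string"><value>liborocos-ocl-common</value></simple>
        <struct name="Console" type="OCL::HMIConsoleOutput">
        </struct>
    </properties>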
The other top level file differs from the first top level file only in the name of the component.
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE properties SYSTEM "cpf.dtd" [ <!ENTITY name "Console2"> <!ENTITY lib "liborocos-rtt"> <!ENTITY file SYSTEM "test-entity-child.xml"> ] > <properties> &file; </properties>
You can use relative paths within the external entity filename, though I have had inconsistent success with this: sometimes the relative path is needed, and other times it isn't. It appears the path needs to be relative to the file doing the including, so if that file was itself loaded via a relative path, you specify the child file relative to the parent file, not relative to the working directory in which you started the deployment.
Attachment | Size |
---|---|
test-entity.xml | 1.56 KB |
test-entity2.xml | 278 bytes |
test-entity-child.xml | 307 bytes |
This page collects notes and issues on the use of real-time logging. Its contents will eventually become the documentation for this feature.
This feature has been integrated in the Orocos 1.x and 2.x branches, but is still considered under development. If you need a real-time logging infrastructure (i.e. text messages to users), this is exactly where you need to be. If you need real-time logging of data streams from ports, use OCL's Reporting or NetCDFReporting components instead.
It is noted in the text where Orocos 1.x and 2.x differ.
Categories cannot be created in real-time: they live on the normal heap via new/delete. Create all categories in your component's constructor or during configureHook() or similar.
NDC's are not supported. They involve std::string and std::vector which we currently can't replace.
Works only with OCL's deployers: If you use a non-deployer mechanism to bring up your system, you will need to add code to ensure that the log4cpp framework creates our OCL::Category objects, and not the default (non-real-time) log4cpp::Category objects. This should be done early in your application, prior to any components and categories being created.
log4cpp::HierarchyMaintainer::set_category_factory( OCL::logging::Category::createOCLCategory);
This is not currently dealt with, but could be in future implementations.
In RTT/OCL 1.x, multiple appenders connected to the same category will each receive only some of the incoming logging events, because each appender pops different elements from the category's buffer. This issue has been solved in 2.x.
The size of the buffer between a category and its appenders is currently fixed (see ocl/logging/Category.cpp). This will be fixed later on the 2.x branch. Note that this fixed size, combined with the default consumption rate of the FileAppender, means you can exhaust the default TLSF memory pool in very short order. For a complex application (~40 components, 400 Hz cycle rate) we increased the default buffer size to 200, increased the memory pool to tens of kilobytes (or megabytes), and increased the FileAppender consumption rate to 500 messages per second.
We can use standard log viewers for Log4j in two ways:
These log viewers are compatible:
As at October 2010, this assumes you are using, for RTT 1.x:
And for RTT 2.x, the Orocos Toolchain 2.2 or later from:
then build in the following order, with these options ON:
The deployer now defaults to a 20k real-time memory pool (see the OCL CMake option ORO_DEFAULT_RTALLOC_SIZE), all Orocos RTT::Logger calls end up inside log4cpp, and RTT::Logger logging events still default to logging to a file "orocos.log". Same as always, but now you can configure all logging in one place!
IMPORTANT Be aware that there are two logging hierarchies at work here:
In time, hopefully these two will evolve into just the latter.
We're assuming here that you used 'orocreate-pkg' to setup a new application. So you're using the UseOrocos CMake macros.
Both steps will make sure that your libraries link with the Orocos logging libraries and that include files are found.
The deployers have command-line options for this
deployer-macosx --rtalloc-mem-size 10k
deployer-corba-macosx --rtalloc-mem-size 30m
deployer-corba-macosx --rtalloc 10240    # understands shortened, but unique, options
NOTE: this feature is not available on the official release. Skip to the next section (Configuring OCL::logging) if you're not using the log4cpp branch of the RTT
You can use any of log4cpp's configurator approaches to configure, but the deployers already know about PropertyConfigurators. You can pass a log4cpp property file to the deployer, and it will be used to configure the first of the hierarchies above: the non-real-time logging used by RTT::Logger. For example
deployer-macosx --rtt-log4cpp-config-file /z/l/log4cpp.conf
# root category logs to application (this level is also the default for all
# categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender

# orocos setup
log4j.category.org.orocos.rtt=INFO, orocosAppender
log4j.additivity.org.orocos.rtt=false    # do not also log to parent categories

log4j.appender.orocosAppender=org.apache.log4j.FileAppender
log4j.appender.orocosAppender.fileName=orocos-log4cpp.log
log4j.appender.orocosAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.orocosAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %m%n
IMPORTANT Note the direction of the category name, from org down to rtt. This is specific to log4cpp and other log4j-style frameworks: using a category "rtt.orocos.org" and sub-category "scripting.rtt.orocos.org" won't do what you, or log4cpp, expect.
This is how you would set up logging from a Deployer XML file. If you prefer to use a script, see the next section.
See ocl/logging/tests/xxx.xml for complete examples and more detail, but in short
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE properties SYSTEM "cpf.dtd"> <properties> <simple name="Import" type="string"> <value>liborocos-logging</value> </simple> <simple name="Import" type="string"> <value>libTestComponent</value> </simple> <struct name="TestComponent" type="OCL::logging::test::Component"> <struct name="Activity" type="Activity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> </struct> <struct name="AppenderA" type="OCL::logging::FileAppender"> <struct name="Activity" type="Activity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Properties" type="PropertyBag"> <simple name="Filename" type="string"><value>appendera.log</value></simple> <simple name="LayoutName" type="string"><value>pattern</value></simple> <simple name="LayoutPattern" type="string"><value>%d [%t] %-5p %c %x - %m%n</value></simple> </struct> </struct> <struct name="LoggingService" type="OCL::logging::LoggingService"> <struct name="Activity" type="Activity"> <simple name="Period" type="double"><value>0.5</value></simple> <simple name="Priority" type="short"><value>0</value></simple> <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple> </struct> <simple name="AutoConf" type="boolean"><value>1</value></simple> <simple name="AutoStart" type="boolean"><value>1</value></simple> <struct name="Properties" type="PropertyBag"> <struct name="Levels" 
type="PropertyBag"> <simple name="org.orocos.ocl.logging.tests.TestComponent" type="string"><value>info</value></simple> </struct> <struct name="Appenders" type="PropertyBag"> <simple name="org.orocos.ocl.logging.tests.TestComponent" type="string"><value>AppenderA</value></simple> </struct> </struct> <struct name="Peers" type="PropertyBag"> <simple type="string"><value>AppenderA</value></simple> </struct> </struct> </properties>
To run this XML file, save it as 'setup_logging.xml' and use:
deployer-gnulinux -s setup_logging.xml
This is how you would set up logging from a Lua script file. If you prefer to use XML, see the previous section.
require("rttlib") -- Set this to true to write the property files the first time. write_props=false tc = rtt.getTC() depl = tc:getPeer("deployer") -- Create components. Enable BUILD_LOGGING and BUILD_TESTS for this to -- work. depl:loadComponent("TestComponent","OCL::logging::test::Component") depl:setActivity("TestComponent", 0.5, 0, 0) depl:loadComponent("AppenderA", "OCL::logging::FileAppender") depl:setActivity("AppenderA", 0.5, 0, 0) depl:loadComponent("LoggingService", "OCL::logging::LoggingService") depl:setActivity("LoggingService", 0.5, 0, 0) test = depl:getPeer("TestComponent") aa = depl:getPeer("AppenderA") ls = depl:getPeer("LoggingService") depl:addPeer("AppenderA","LoggingService") -- Load marshalling service to read/write components depl:loadService("LoggingService","marshalling") depl:loadService("AppenderA","marshalling") if write_props then ls:provides("marshalling"):writeProperties("logging_properties.cpf") aa:provides("marshalling"):writeProperties("appender_properties.cpf") print("Wrote property files. Edit them and set write_props=false") os.exit(0) else ls:provides("marshalling"):loadProperties("logging_properties.cpf") aa:provides("marshalling"):loadProperties("appender_properties.cpf") end test:configure() aa:configure() ls:configure() test:start() aa:start() ls:start()
To run this script, save it in 'setup_logging.lua' and do:
rttlua-gnulinux -i setup_logging.lua
// TestComponent.hpp
#include <ocl/LoggingService.hpp>
#include <ocl/Category.hpp>

class Component : public RTT::TaskContext
{
    ...
    /// Our logging category
    OCL::logging::Category* logger;
};
// TestComponent.cpp
#include <rtt/rt_string.hpp>

Component::Component(std::string name) :
    RTT::TaskContext(name),
    logger(dynamic_cast<OCL::logging::Category*>(
        &log4cpp::Category::getInstance("org.orocos.ocl.logging.tests.TestComponent")))
{
}

bool Component::startHook()
{
    bool ok = (0 != logger);
    if (!ok)
    {
        log(Error) << "Unable to find existing OCL category "
                   << "'org.orocos.ocl.logging.tests.TestComponent'" << endlog();
    }
    return ok;
}

void Component::updateHook()
{
    // RTT 1.X
    logger->error(OCL::String("Had an error here"));
    logger->debug(OCL::String("Some debug data ..."));

    // RTT 2.X
    logger->error(RTT::rt_string("Had an error here"));
    logger->debug(RTT::rt_string("Some debug data ..."));
    logger->getRTStream(log4cpp::Priority::DEBUG)
        << "Some debug data and a double value " << i;
}
IMPORTANT You must dynamic_cast to an OCL::logging::Category* to get the logger, as shown in the constructor above. Failure to do so can lead to trouble. You must also explicitly use the OCL::String() syntax when logging. Failure to do this produces compiler errors by design: otherwise the system would silently default to std::string and you would no longer be real-time. See the FAQ below for more details.
And the output of the above looks something like this:
// file orocos.log, from RTT::Logger configured with log4cpp
20100414T09:50:11.844 [INFO] ControlTask 'HMI' found CORBA Naming Service.
20100414T09:50:11.845 [WARN] ControlTask 'HMI' already bound to CORBA Naming Service.
20100414T21:41:22.539 [INFO ] components.HMI Started servicing::HMI
20100414T21:41:23.039 [DEBUG] components.Robot Motoman robot started
20100414T21:41:42.539 [INFO ] components.ConnectionMonitor Connected
20100414T21:41:41.982 [INFO ] org.orocos.rtt Thread created with scheduler type '1', priority 0 and period 0.
20100414T21:41:41.982 [INFO ] org.orocos.rtt Creating Proxy interface for HMI
20100414T21:41:42.016 [DEBUG] org.me.myapp Connections made successfully
20100414T21:41:44.595 [DEBUG] org.me.myapp.Robot Request position hold
The last one is the most interesting. All RTT::Logger calls have been sent to the same appender as the application logs to. This means you can use the exact same logging statements in both your components (when they use OCL::Logging) and in your GUI code (when they use log4cpp directly). Less maintenance, less hassle, only one (more) tool to learn. The configuration file for the last example looks something like
# root category logs to applicationAppender (this level is also the default
# for all categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender

# orocos setup
log4j.category.org.orocos.rtt=INFO, applicationAppender
log4j.additivity.org.orocos.rtt=false   # do not also log to parent categories

# application setup
log4j.category.org.me=INFO, applicationAppender
log4j.additivity.org.me=false           # do not also log to parent categories
log4j.category.org.me.gui=WARN
log4j.category.org.me.gui.Robot=DEBUG
log4j.category.org.me.gui.MainWindow=INFO

log4j.appender.applicationAppender=org.apache.log4j.FileAppender
log4j.appender.applicationAppender.fileName=application.log
log4j.appender.applicationAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.applicationAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l} [%-5p] %c %m%n
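The category hierarchy and additivity rules used above are not specific to log4j: Python's standard logging module uses the same parent/child model. The following sketch is only an analogy in Python (not Orocos or log4cpp code; the category names mirror the example config) showing how a child category inherits behaviour unless its level is set explicitly, and how additivity stops propagation at a parent:

```python
import logging, io

# Collect output in a string so we can inspect it.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(levelname)s] %(name)s %(message)s"))

me = logging.getLogger("org.me")
me.setLevel(logging.INFO)
me.addHandler(handler)
me.propagate = False            # like log4j.additivity.org.me=false

gui = logging.getLogger("org.me.gui")
gui.setLevel(logging.WARNING)   # like log4j.category.org.me.gui=WARN

robot = logging.getLogger("org.me.gui.Robot")
robot.setLevel(logging.DEBUG)   # like log4j.category.org.me.gui.Robot=DEBUG

gui.info("dropped: below WARN for org.me.gui")
robot.debug("kept: Robot is explicitly DEBUG")

print(stream.getvalue().strip())
```

The INFO message on `org.me.gui` is filtered by that category's WARN level, while the DEBUG message on `org.me.gui.Robot` passes its own explicit level and propagates up to the handler attached at `org.me`.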
A: Make sure you are using an OCL::logging::Category* and not a log4cpp::Category. The latter will silently compile and run, but it will discard all logging statements. This situation can also mask the fact that you are accidentally using std::string and not OCL::String. For example
log4cpp::Category* logger = &log4cpp::Category::getInstance(name);
logger->debug("Hello world");
OCL::logging::Category* logger =
    dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
logger->debug("Hello world");
/path/to/log4cpp/include/log4cpp/Category.hh: In member function ‘virtual bool MyComponent::configureHook()’:
/path/to/log4cpp/include/log4cpp/Category.hh:310: error: ‘void log4cpp::Category::debug(const char*, ...)’ is inaccessible
/path/to/my/source/MyComponent.cpp:64: error: within this context
OCL::logging::Category* logger =
    dynamic_cast<OCL::logging::Category*>(&log4cpp::Category::getInstance(name));
logger->debug(OCL::String("Hello world"));
This page describes a working example of using omniORBpy to interact with an Orocos component. The example is very simple, and is intended for people who do not know where to start developing a CORBA client.
Your first stop is: http://omniorb.sourceforge.net/omnipy3/omniORBpy/ The omniORBpy version 3 User’s Guide. Read chapters 1 and 2. Optionally read chapter 6. The example works with and without naming services.
Once you are comfortable with omniORBpy, do the following (I assume you are kind enough to be a Linux user working on a console):
wget http://www.orocos.org/stable/examples/rtt/rtt-examples-1.10.0.tar.gz
tar xf rtt-examples-1.10.0.tar.gz
cd rtt-examples-1.10.0/corba-example/
make smallnet
svn co http://svn.mech.kuleuven.be/repos/orocos/trunk/rtt/src/corba/
mkdir omniclt
cp corba/*idl omniclt/
cd omniclt
omniidl -bpython *idl
cp ~/orocosclient.py .
sudo ../smallnet
If you get something like
0.011 [ Warning][SmallNetwork] ControlTask 'ComponentA' could not find CORBA Naming Service.
0.011 [ Warning][SmallNetwork] Writing IOR to 'std::cerr' and file 'ComponentA.ior'
IOR:0...10100
sudo ../smallnet -ORBInitRef NameService=corbaname::127.0.0.1
InitRef=NameService=corbaname::127.0.0.1
python orocosclient.py
If you are not able to make your naming service work, try using the component's IOR. After running your smallnet server, copy the complete IOR printed on screen and paste it as the argument to the Python program (including the word "IOR:"):
python orocosclient.py IOR:0...10100
Look at the IDLs and the code to understand how things work. I am no python expert, so if the coding style looks weird to you, my apologies. Good luck!
Attachment | Size |
---|---|
orocosclient.py_.txt | 1.99 KB |
Future home of FAQ
cd BASE_DIR
svn co ...
cd rtt
debchange -v 1.8.0-0
cd debian
./create-control.sh gnulinux   # optionally add "lxrt", "xenomai"
svn add *1.8*install
cd ..
export DEB_BUILD_OPTIONS="parallel=2"   # or 4, 8, depending on your computer
svn-br   # or svn-b
Packages are built into BASE_DIR/build-area.
cd BASE_DIR
git clone http://git.gitorious.org/orocos-toolchain/rtt.git
cd rtt
debchange -v 2.3.0-1
cd debian
./create-control.sh gnulinux   # optionally add "lxrt", "xenomai"
git add *2.3*install
git commit -sm"2.3 release install files"
cd ..
export DEB_BUILD_OPTIONS="parallel=2"   # or 4, 8, depending on your computer
git-buildpackage --git-upstream-branch=origin
Packages are built into BASE_DIR/build-area.
cd BASE_DIR
dpkg-scanpackages build-area /dev/null | gzip -9c > Packages.gz
Now open /etc/apt/sources.list in your favorite editor and append the following lines to the bottom (substituting the full path to your repository for /path/to/BASE_DIR/).
# Orocos packages
deb file:///path/to/BASE_DIR/ ./
Open Synaptic, reload, search for orocos and install.
Follow the same basic approach first for KDL, then for OCL.
NB KDL and OCL will happily both build into "build-area" alongside RTT (producing, among others, the orocos-ocl-gnulinux1.8-bin and liborocos-ocl-gnulinux1.8-dev packages).
# 1.x:
svn co ...
cd quicky
mkdir build && cd build
cmake ..
make
# one of the following two exports, depending on your situation
export LD_LIBRARY_PATH=.
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:.
deployer-gnulinux -s ../quicky.xml
ls Quicky   # you should see Data_W != 0
orocreate-pkg testme
cd testme
# non-ROS:
make install
# ROS:
make
deployer-gnulinux
> import("testme")
> displayComponentTypes()
In the first shell start the naming service and the deployer
Naming_Service -m 0 -ORBDottedDecimalAddresses 1 -ORBListenEndpoints iiop://127.0.0.1:2809 -ORBDaemon
export NameServiceIOR=corbaloc:iiop:127.0.0.1:2809/NameService
deployer-corba-gnulinux -s ../quicky.xml -- -ORBDottedDecimalAddresses 1
ls Quicky   # you should see Data_W != 0
In the second shell run the taskbrowser and see the Quicky component running in the deployer
export NameServiceIOR=corbaloc:iiop:127.0.0.1:2809/NameService
ctaskbrowser-gnulinux Deployer -ORBDottedDecimalAddresses 1
ls Quicky   # you should see Data_W != 0
If the v1.8 files have already been committed to the repository, then you don't need the debchange and svn add commands when building the packages.
[1] http://www.debian.org/doc/manuals/repository-howto/repository-howto#setting-up
[2] http://orocos.org/wiki/rtt/frequently-asked-questions-faq/using-corba
This page describes how to re-build debian packages for another Debian/Ubuntu release than they were prepared for.
Note1: This only applies if you want to use the same version as the version in the package repository. If you want a newer version, consult How to build Debian packages.
Note2: The steps below will rebuild Orocos for all targets in the repository, i.e. lxrt, xenomai and gnulinux. If you only care about one of these targets, see also How to build Debian packages.
First, make sure you added this deb-src line to your sources.list file:
deb-src http://www.fmtc.be/debian etch main
sudo apt-get update
apt-get source orocos-rtt
sudo apt-get build-dep orocos-rtt
sudo apt-get install devscripts build-essential fakeroot dpatch
cd orocos-rtt-1.6.0
dpkg-buildpackage -rfakeroot -uc -us
cd ..
for i in *.deb; do sudo dpkg -i $i; done
You can repeat the same process for orocos-ocl.
Outlines how to use CORBA to distribute applications. The procedure differs by CORBA implementation and by whether you are using DNS names or IP addresses. The examples below cover the ACE/TAO and OmniORB CORBA implementations.
Sample system:
If you have working forward and reverse DNS entries (i.e. dig machine1.me.home returns 192.168.12.132, and dig -x 192.168.12.132 returns machine1.me.home):
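The same forward/reverse check can be scripted from Python's standard library. This is a convenience sketch, not part of Orocos; the hostname used below is a placeholder, and 'localhost' is used only so the example runs anywhere:

```python
import socket

def dns_roundtrip(hostname):
    """Resolve hostname to an IP, then reverse-resolve the IP back.
    Raises socket.error if either lookup fails (i.e. a broken entry)."""
    ip = socket.gethostbyname(hostname)            # forward lookup, like `dig`
    reverse_name, _aliases, _ips = socket.gethostbyaddr(ip)  # like `dig -x`
    return ip, reverse_name

# Replace 'localhost' with e.g. 'machine1.me.home' to verify your own entries.
ip, name = dns_roundtrip("localhost")
print(ip, name)
```

If this raises an exception for one of your machine names, fix the DNS (or /etc/hosts) entries before trying the CORBA examples below.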
machine1 $ Naming_Service -m 0 -ORBListenEndpoints iiop://machine1.me.home:2809 \
    -ORBDaemon &
machine1 $ export NameServiceIOR=corbaloc:iiop:machine1.me.home:2809/NameService
machine1 $ deployer-corba-gnulinux -s demo.xml

machine2 $ export NameServiceIOR=corbaloc:iiop:machine1.me.home:2809/NameService
machine2 $ ./demogui
OmniORB does not support the NameServiceIOR environment variable
machine1 $ omniNames -start &
machine1 $ deployer-corba-gnulinux -s demo.xml

machine2 $ ./demogui -ORBInitRef NameService=corbaloc:iiop:machine1.me.home:2809/NameService
Note that if you swap which machines run the deployer and demogui, then change the above to
machine1 $ omniNames -start &

machine2 $ deployer-corba-gnulinux -s demo.xml -- \
    -ORBInitRef NameService=corbaloc:iiop:machine1.me.home:2809/NameService

machine1 $ ./demogui
If you don't have DNS or you must use IP addresses for some reason.
machine1 $ Naming_Service -m 0 -ORBDottedDecimalAddresses 1 \
    -ORBListenEndpoints iiop://192.168.12.132:2809 -ORBDaemon &
machine1 $ export NameServiceIOR=corbaloc:iiop:192.168.12.132:2809/NameService
machine1 $ deployer-corba-gnulinux -s demo.xml -- -ORBDottedDecimalAddresses 1

machine2 $ export NameServiceIOR=corbaloc:iiop:192.168.12.132:2809/NameService
machine2 $ ./demogui -ORBDottedDecimalAddresses 1
For more information on the -ORBListenEndpoints syntax and possibilities, see http://www.dre.vanderbilt.edu/~schmidt/DOC_ROOT/TAO/docs/ORBEndpoint.html
machine1 $ omniNames -start &
machine1 $ deployer-corba-gnulinux -s demo.xml

machine2 $ ./demogui -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService
And the reverse
machine1 $ omniNames -start &

machine2 $ deployer-corba-gnulinux -s demo.xml -- \
    -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService

machine1 $ ./demogui
Certain distros and certain CORBA versions exhibit problems even in localhost-only scenarios (demonstrated with OmniORB under Ubuntu Jaunty Jackalope). If you cannot connect to the name service running on the same machine, substitute the primary network interface's IP address for localhost in any NameService value.
For example, instead of
machine1 $ omniNames -start &
machine2 $ deployer-corba-gnulinux -s demo.xml
or even
machine1 $ omniNames -start &
machine2 $ deployer-corba-gnulinux -s demo.xml -- \
    -ORBInitRef NameService=corbaloc:iiop:localhost:2809/NameService
use
machine1 $ omniNames -start &
machine2 $ deployer-corba-gnulinux -s demo.xml -- \
    -ORBInitRef NameService=corbaloc:iiop:192.168.12.132:2809/NameService
NB as of RTT v1.8.2 and OmniORB v4.1.0, programs like demogui (which use RTT::ControlTaskProxy::InitOrb() to initialize CORBA) do not support -ORBDottedDecimalAddresses (in case you try to use it).
Computers that have multiple network interfaces present additional problems. The following is for omniORB (verified with a mix of v4.1.3 on Mac OS X and v4.1.1 on Ubuntu Hardy), for a system running a name server, a deployer, and a GUI. The example system has a 192.168.1.0 wired subnet and a 10.0.10.0 wireless subnet; a mobile vehicle has to communicate over the wireless subnet, but it also has a wired interface.
The problem may appear as one of
The solution is to forcibly specify the endPoint parameter to the name server. In the omniorb.cfg file on the computer running the name server, add (for the example networks above)
endPoint = giop:tcp:10.0.10.14:
If the above still does not work, then set the endPoint parameter in all computers' config files (note that the end point is the IP address of each computer, so it will be (say) 10.0.10.14 for the computer running the name server and the deployer, and (say) 10.0.10.21 for the computer running the GUI). This will force everyone onto the wireless network, instead of relying on what the name server is publishing.
To debug this problem, see the debugging section below; after starting the name server you will see it output its published endpoints (right after the configuration dump). Also, if you get the lockup, adding the debug settings will cause the GUI or deployer to output each message and which direction/IP it goes on. If messages have strayed onto the wired network it will be visibly obvious.
NB we found that the clientTransportRule and serverTransportRule parameters had no effect on this problem.
NB the above solution works no matter which computer the name server is running on (ie with the deployer, or with the GUI).
Add the following to the omniorb.cfg file
dumpConfiguration = 1
traceLevel = 25
ACE/TAO http://www.cs.wustl.edu/~schmidt/TAO.html
OmniORB http://omniorb.sourceforge.net/
For general installation instructions specific to each software version, see the top level wiki page for each project (eg. RTT, KDL, etc) and look for Installation in the left toolbar.
See below for specific additional instructions.
Installing via Macports on Mac OS X
To install from source on *NIX systems such as Linux and Mac OS X, see the installation page specific to your software version (e.g. v1.8 RTT).
To install from source on Windows, see the following wiki pages (also check the forums, a lot of good material is in there also).
The Orocos Real-Time Toolkit and Component Library have been prepared as Debian packages for Debian Etch. The pages
How to re-build Debian packages
contain instructions for building your own packages on other distributions, like Ubuntu.
Copy/paste the following commands, and enter your password when asked (only works in Ubuntu Feisty or later and Debian Etch or later):
wget -q -O - http://www.orocos.org/keys/psoetens.gpg | sudo apt-key add -
sudo wget -q http://www.fmtc.be/debian/sources.list.d/fmtc.list -O /etc/apt/sources.list.d/fmtc.list
Next, for Debian Etch, type:
sudo apt-get update
sudo apt-get install liborocos-rtt-corba-gnulinux1.8-dev
For your application development, you'll most likely use the Orocos Component library as well:
sudo apt-get install orocos-ocl-gnulinux1.8-dev orocos-ocl-gnulinux1.8-bin
We recommend using the pkg-config tool to discover the compilation flags required to compile your application with the RTT or OCL. This is described in the installation manual.
These are instructions to install the latest version of each of RTT, KDL, BFL and OCL, on Mac OS X using Macports.
Macports does not have official ports for these Orocos projects; however, the approach below is the recommended way to load unofficial ports into Macports. [1]
These instructions use /opt/myports to hold the Orocos port files; you can substitute any other directory for MYPORTDIR (i.e. /opt/myports). Instructions are for the bash shell; change appropriately for your own shell.
1. Download the Portfile files from this page's Attachments (at bottom of page).
2. Execute the following commands (substituting /opt/myports for the location you wish to store the Orocos port files, and ~/Downloads for the directory you downloaded the portfiles to)
export MYPORTDIR=/opt/myports
export DOWNLOADDIR=~/Downloads
mkdir $MYPORTDIR
cd $MYPORTDIR
mkdir devel
cd devel
mkdir orocos-rtt orocos-kdl orocos-bfl orocos-ocl
cp $DOWNLOADDIR/orocos-rtt-Portfile.txt orocos-rtt/Portfile
cp $DOWNLOADDIR/orocos-kdl-Portfile.txt orocos-kdl/Portfile
cp $DOWNLOADDIR/orocos-bfl-Portfile.txt orocos-bfl/Portfile
cp $DOWNLOADDIR/orocos-ocl-Portfile.txt orocos-ocl/Portfile
cd $MYPORTDIR/devel
mkdir orocos-rtt/files
cp $DOWNLOADDIR/rtt-patch-config-check_depend.cmake.diff orocos-rtt/files/patch-config-check_depend.cmake.diff
You should now have a tree that looks like
tree /opt/myports/
/opt/myports/
`-- devel
    |-- orocos-bfl
    |   `-- Portfile
    |-- orocos-kdl
    |   `-- Portfile
    |-- orocos-ocl
    |   `-- Portfile
    `-- orocos-rtt
        |-- Portfile
        `-- files
            `-- patch-config-check_depend.cmake.diff
3. Edit /opt/local/etc/macports/sources.conf with superuser privileges (i.e. via sudo), and add the following line before the rsync://rsync.macports.org/... line.
# (substitute your MYPORTDIR value from above)
file:///opt/myports
4. Execute these commands to tell Macports about your new ports.
cd $MYPORTDIR
sudo portindex
5. Now install each port with the following commands (the following commands add the optional CORBA support, via omniORB in Macports, as well as the helloworld and other useful parts of OCL)
sudo port install orocos-rtt +corba
sudo port install orocos-kdl +corba
sudo port install orocos-bfl
sudo port install orocos-ocl +corba+deployment+motion_control+reporting+taskbrowser+helloworld
6. Verify installation by downloading test-macports.xml from this page's Attachments, and then using these commands
deployer-macosx -s /path/to/test-macports.xml
export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib
To build against MacPorts-installed Orocos, add the following to your environment before CMake'ing your project
export CMAKE_PREFIX_PATH=/opt/local
If you use Makefiles or autoconf to build your project, you'll need to tell those build systems to find Orocos headers, libraries and binaries under /opt/local. Instructions are not provided here for that.
To run using MacPorts-installed OROCOS, add the following to your environment
export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib:/opt/local/lib/rtt/macosx/plugins
(Not yet tested)
... sudo port uninstall orocos-rtt
Current limitations
dyld: Symbol not found: __cg_jpeg_resync_to_restart
  Referenced from: /System/Library/Frameworks/ApplicationServices.framework/Versions/A/\
Frameworks/ImageIO.framework/Versions/A/ImageIO
  Expected in: /opt/local/lib/libJPEG.dylib
[1] Macports guide with detailed information on the port system.
[2] http://www.nabble.com/Incorrect-libjpeg.dylib-after-installing-ImageMagick-td22625866.html
Original bug report for this and accompanying forum entry
Attachment | Size |
---|---|
orocos-rtt-Portfile.txt | 1.82 KB |
orocos-kdl-Portfile.txt | 2.13 KB |
orocos-bfl-Portfile.txt | 1.36 KB |
orocos-ocl-Portfile.txt | 2.67 KB |
rtt-patch-config-check_depend.cmake_.diff | 607 bytes |
test-macports.xml | 472 bytes |
This page collects all the documentation users collected for building and using RTT on Windows. Note that Native Windows support is available from RTT 1.10.0 on, and that you might no longer need some of the proposed workarounds (such as using mingw or cygwin).
The recommended way of compiling the RTT on Windows is by using the Compiling on Windows with Visual Studio instructions.
This document is slightly outdated.
Using the info here: http://www.mingw.org/old/mingwfaq.shtml#faq-msvcdll
I managed to create DEF files and use Microsoft's LIB tool to turn the library into something MSVC likes.
I'm no CMake expert and don't have the time to learn **another** build scripting language. I created the CMake files in the usual way, built RTT and ensured it compiled cleanly. Then I hacked the generated makefiles: a search of my source tree for "--out-implib" showed that the link.txt living in build\src\CMakeFiles\orocos-rtt-dynamic_win32.dir contained that string. I added --output-def,..\..\libs\liborocos-rtt-win32.dll.def to create the DEF file and rebuilt RTT; this produced the DEF file, which I then ran through the Microsoft LIB tool as described.
I then created an MSVC project, added the library to my linker settings, and made a very simple MSVC console application:
#include "rtt\os\main.h"
#include "rtt\rtt-config.h"

int ORO_main(int, char**)
{
    return 0;
}
I also needed to setup my MSVC preprocessor definitions:
NDEBUG
_CONSOLE
__i386__
__MINGW32__
OROCOS_TARGET=win32
Hopefully I am now at a stage when I can actually start to evaluate RTT :-) If anyone has any ideas on how to properly get the CMakeList.txt to generate the DEF files without nasty post-CMake hacks, then I would love to hear it...
This page summarizes how to compile RTT with Microsoft Visual Studio, using the native win32 api. RTT supports Windows out of the box from RTT 1.10.0 and 2.3.0 on. OCL is supported from 1.12.0 and 2.3.0 on.
This tutorial assumes you extracted the Orocos sources and all its dependencies in c:\orocos
For new users, RTT/OCL v2.3.x or later is recommended, included in the Orocos Toolchain v2.3.x.
We only support Visual Studio 2008 and 2005. Support for 2010 is on its way. You're invited to try VS2010 out and suggest patches to the orocos-dev mailing list.
Orocos does not come with a Visual Studio Solution. You need to generate one using the CMake tool which you can download from http://www.cmake.org. The most important step for CMake is to set the paths to where the dependencies of Orocos are installed. So before you can get to building Orocos, you need to build its dependencies, which don't use CMake, but their own build mechanism.
Only RTT and OCL of the toolchain are supported on Windows. The ruby based 'orogen' and 'typegen' tools, part of the toolchain, are not supported. Also ROS integration is not supported on Windows.
You must have these set as system environment variables:
set ACE_ROOT=c:\orocos\ACE_wrappers
set TAO_ROOT=%ACE_ROOT%\tao
set PATH=%PATH%;%ACE_ROOT%\bin;%ACE_ROOT%\lib
You can also set these using Configuration -> System -> Advanced -> Environment Variables
We recommend Boost 1.40.0 for Windows. Also, unzip Boost with 7Zip or similar, but not with the default Windows unzip program, which is extremely slow.
Make sure to install these components: program_options, thread, unit_test_framework, filesystem, system.
Also add the lib directory to your PATH system environment variable:
set PATH=%PATH%;c:\orocos\boost_1_40\lib
set(CMAKE_INCLUDE_PATH ${CMAKE_INCLUDE_PATH}
    "c:/orocos/boost_1_40;c:/orocos/ACE_wrappers;c:/orocos/ACE_wrappers/TAO;c:/orocos/ACE_wrappers/TAO/orbsvcs")
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH}
    "c:/orocos/boost_1_40/lib;c:/orocos/ACE_wrappers/lib")
set( OROCOS_TARGET win32 CACHE STRING "The Operating System target. One of [lxrt gnulinux xenomai macosx win32]")
Start the cmake-gui and set your source and build paths ( For example, c:\orocos\orocos-rtt-1.10.0 and c:\orocos\orocos-rtt-1.10.0\build ). Now click 'Configure' at the bottom. Check that there are no errors. If components are missing, you probably need to fix the above PATHs.
You probably need to click Configure again and then click 'Generate', which will generate your Visual Studio solution and project files in the 'build' directory.
Open the generated solution in MSVS and build the 'ALL_BUILD' target, which will build the RTT (and the unit tests if you enabled them).
The unit tests will fail if the required DLLs are not in your path. In your system settings, or on the command prompt of Windows, add c:\orocos\boost_1_40\lib and c:\orocos\ACE_wrappers\lib to your PATH environment (reboot if necessary).
Next, run a 'make install' and add the c:\orocos\bin directory to your PATH (or whatever you used as install path.) In RTT 2.3.0, the default install path is c:\Program Files\orocos (so add c:\Program Files\orocos\bin to PATH). It is recommended to keep this default, since OCL uses that too.
Now you should be able to run the unit tests. The process could be a bit streamlined more and may be improved in later releases.
There is a separate Wiki page for enabling Readline (tab-completion) in the TaskBrowser. See Taskbrowser with readline on Windows.
This page describes all steps you need to take in order to compile the real-time toolkit on a Windows machine. We rely on the Cygwin libraries and tools to accomplish this. Visual Studio or mingw32 are as of this writing not yet supported. Also CORBA is not yet ported.
cd orocos-rtt
patch -p1 < orocos-rtt-cygwin.patch

The patch can be found here: https://www.fmtc.be/bugzilla/orocos/show_bug.cgi?id=605
cmake .. -DBOOST=/usr/include/boost-1_33_1
make

This is a slow process. If you have multiple CPU cores, use 'make -j2' or '-jN' with N cores. In case you want to change more options, type 'ccmake ..' between the cmake and make commands.
export PATH=$PATH:`pwd`/src
Date | Time | Topic/Title & Mediator | Description
---|---|---|---
Mon 19/7 | 9h-13h | Big Picture Day | Arriving at PAL offices. Peter shows what 2.0 is and where it is heading.
 | 13h-14h | Lunch |
 | 14h-19h | Who're you + plans presentation | You made some Impress/Beamer slides and present your work + ideas for future work in < 10 slides.
 | 20h | Dinner | Opening dinner sponsored by The SourceWorks
Tue 20/7 | 9h-13h | Typekit generation | Orogen + message generation. YARP transport 2.0 (only ports, no methods, events, etc). Code explosion & extern template solution.
 | 13h-14h | Lunch |
 | 14h-19h | Component generation | Introduction. What is its place in RTT?
 | 20h | Dinner |
Wed 21/7 | 9h-13h | Building | Fix up RTT/OCL cmake. Structure of components, repositories and applications (graveyard/attic).
 | 13h-14h | Lunch |
 | 14h-16h | Documentation improvement | Structure. Website. Missing parts. Real examples and tutorials. Restructure. Success stories, who uses Orocos.
 | 16h | Visiting |
 | 20h | Dinner |
Thu 22/7 | 9h-13h | Logging | Current status. Fix architecture. RTT::Logger remove/replace.
 | 13h-14h | Lunch |
 | 14h-16h | Reporting |
 | 16h-19h | Upgrading from v1 to v2 | Describe rtt v2 converter. Caveat document. Try out on user systems.
 | 20h | Dinner | Closing dinner sponsored by The SourceWorks
Fri 23/7 | 9h-13h | Wrapping-up Day | Finishing loose ends, integration and discussions for future work
 | 13h-14h | Lunch |
 | 14h-17h | Wrapping-up Day | Finishing loose ends, integration and discussions for future work
If you need or want to provide sponsorship, contact Peter.
Peter started by presenting the 2.x functionality, state and (from his point of view) shortcomings. The following are the points that were raised during the discussion.
Properties and attributes
Ports
Methods / Operations / Services
Misc
Plugins
Code size
Events
The discussions starts with explaining the improved TypeInfo infrastructure:
ROS messages and orogen
Sylvain explains how orogen works
Sylvain shows how orogen requires the #ifndef __orogen in the headers listed. gccxml is a fix for this too.
Hosting on gitorious is being discussed. It allows us to group code in 'projects' and collaborate better using git.
Autoproj is discussed as a tool to bootstrap the orocos packages. It's an alternative to manually download and build everything. It may work in parallel with rosbuild, in case application software depends on both ros and orocos. This needs to be tested.
The work is divided for the rest of the day:
We decided to rename orogen to typegen
The day concluded with investigating the code size/compile time issue. The culprits are the operations added to the ports in the typekit code. We investigated several solutions to tackle this, especially in the light of code/typekit generation.
The day started with a re-evaluation of the agenda and release timelines. The proposed release date for 2.0 was august 16th.
This list of topics will be covered this week:
This list of issues will be solved before 2.0.0:
These issues will be delayed after 2.0.0:
The rest of the day continued as planned on the agenda. In the morning, a new CMake build system for components, plugins and executables was created to maximize maintainability and ease-of-use of creating new Orocos software. OCL too will switch to this system. The interface (CMake macros) and logic behind it was discussed. This tool will be further developed to be ready before the 2.0 release.
In the afternoon, the documentation and website structure was discussed. We came to the conclusion that no-one downloads only the RTT. For 2.0, they will download RTT, the infrastructure components (TaskBrowser, Deployment, Reporting, Diagnostics, etc.) and the tool-chain (typekit generation, component generation, etc.). This will require a restructuring of the website and the documentation, to no longer be RTT-centric, but 'Orocos ecosystem' centric.
The documentation will contain 3 pillars:
The reference manuals will be cleaned up too, so that they serve better 'for reference' and less as a 'first read for new users'.
During this day, the code size problem, typegen development and Yarp transport were also further polished.
It ended with a visit to 'Parc Guell' and a walk to the old city centre, where we enjoyed a well deserved tapas meal.
This page describes the steps to take in order to compile the real-time toolkit (RTT) on a Windows machine, under MinGW and pthreads-32.
The following has been tested on Windows XP, running in a virtual machine on Mac OS X Leopard.
Warning: the default GCC 3.4.5 compiler in MinGW outputs a lot of warnings when compiling RTT. Mostly they are "foo might be used uninitialized in this function" in STL code.
See the detailed instructions in the URLs above and below, but basically: unless otherwise noted, all actions are in the MSys Unix shell, and all Unix-built items are installed in /mingw (which is c:\msys\1.0\mingw in a DOS prompt).
cmake-xxx/bootstrap --prefix=/mingw --no-qt-gui
make && make install

Run the pthreads32 installer (it just untars):
- manually copy pre-built/include/* to /c/mingw/include (C:\mingw\include)
- manually copy pre-built/lib/*GC2* to /c/mingw/lib (C:\mingw\lib)
- to run the pthreads tests, copy the prebuilt .a/.dll into the .. dir, and copy queueuserapcex to ../..

Boost (as at 2009-Jan, use v1.35, not v1.37, until we fix RTT for v1.37):
*** DOS shell ***
cd boost-jam-xxx
.\build.bat gcc
** won't build in unix shell with build.sh **

*** unix shell ***
cd boost-jam-xxx
cp binntx86/bjam.exe /mingw/bin
cd ~/software/build/boost_1_35
bjam --toolset=gcc --layout=system --prefix=/mingw --with-date_time --with-graph \
    --with-system --with-function_types --with-program_options install

Cppunit: get the tarball from SourceForge.
untar and configure with --prefix=/mingw
correct line 7528 in libtool to be c:/MinGW/bin../lib/dllcrt2.o for the first item
make && make install
cd /path/to/rtt; patch -p0 < patch-rtt-mingw-1.patch
download, and follow the MinGW build instructions on the website
add "#undef ACE_LACKS_USECONDS_T" to ace/config-win32-mingw.h before compiling
copy ace/libACE.dll to /mingw/lib
make TAO   ** this fails **

You can build all we need by manually doing 'make' in the following directories (note that the last couple of TAO dirs have problems):

ace, ace/protocols, kokyu, tao, tao/TAO_IDL, tao/orbsvcs

NB You can parallel-build ace, but not its tests nor tao.
NB Not all tests pass. At least one of the ACE tests fail.
Stephen gives an overview of the current log4cpp + Orocos architecture and how he accomplished real-time logging. Log4cpp supports
Orocos supports
Decisions for v2.0
v2.2 or later
It's hacking day and implementing/finishing most of what we started this week.
This Chapter collects all information about the migration to RTT 2.0. Nothing here is final, it's a scratch book to get us there. There are talk pages to discuss the contents of these pages.
These are the major work areas:
If you want to contribute, you can post your comments in the following wiki pages. This will (hopefully) be more concise and straightforward than the developers' forum.
These items are worked out on separate Wiki pages.
RTT and OCL 2.0 have been merged on the master branches of all official git repositories:
Stable releases are on different branches, for example toolchain-2.0:
The sections below formulate the major goals which RTT 2.0 wishes to attain.
The input/output is offered by means of port based communication between data processing algorithms. An input port receives data, an output port sends data. The algorithms in the component define the transformation from input to output.
Service based communication offers operations such as configuration or task execution. A component always specifies if a service is provided or requested. This allows run-time dependency and system state checking, but also automatic connection/disconnection management which is important in distributed environments.
Components are stateful. They don't just start processing data right away. They can validate their preconditions, be queried for their current state and be started and stopped in a controlled manner. Although there is a standard state machine in each component that regulates these transitions, users can extend these without limitations.
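The lifecycle described above (precondition validation, queryable state, controlled start/stop, user-extensible transitions) can be sketched in plain C++. This is an illustrative model only, not the real RTT TaskContext API: the class, state and hook names below are hypothetical stand-ins for the standard component state machine.

```cpp
#include <cstddef>

// Hypothetical sketch of a stateful component lifecycle, modeled after
// the standard state machine described above. Not the actual RTT API.
class LifecycleComponent {
public:
    enum State { PreOperational, Stopped, Running };

    LifecycleComponent() : state_(PreOperational) {}
    virtual ~LifecycleComponent() {}

    // configure(): validate preconditions before the component may run.
    bool configure() {
        if (state_ != PreOperational) return false;
        if (!configureHook()) return false;   // user-extensible check
        state_ = Stopped;
        return true;
    }

    // start()/stop(): controlled transitions in and out of Running.
    bool start() {
        if (state_ != Stopped) return false;
        state_ = Running;
        return true;
    }
    bool stop() {
        if (state_ != Running) return false;
        state_ = Stopped;
        return true;
    }

    // Components can be queried for their current state.
    State state() const { return state_; }

protected:
    // Users extend the standard transitions by overriding hooks.
    virtual bool configureHook() { return true; }

private:
    State state_;
};
```

Note how a component refuses to start before it was configured: it does not "just start processing data right away".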
INTRODUCTION
You can edit this page to post your contribution to OrocosRTT 2.0. Please, keep your comment concise and clear: if you want to launch a long debate, you can still use the Developers Forum! Short examples can help other people understanding what you mean.
Because of single thread serialization, something unexpected for the programmer happens.
1) You expect TaskA to be independent of TaskB, but it isn't. If you think it is a problem of computer resources, change the activity frequency of one of the two tasks.
Suggestion: A) let the programmer choose whether single thread serialization is used or not. B) keep the 1 thread for 1 activity policy as the default. It will help less experienced users avoid common errors. Experienced users can decide to "unleash" the power of STS if they want to.
2) after the "block" for 0.5 seconds, the "lost cycles" are executed all at once. In other words, updateHook is called 5 times in a row. This may have very unpredictable results. It could be desirable for some applications (filter with data buffer) or catastrophic in other applications (motion control loop).
Suggestion: C) let the user decide if the "lost cycles" of the PeriodicActivity need to be executed later or are definitively lost.
using namespace std;
using namespace RTT;
using namespace Orocos;

TimeService::ticks _timestamp;

double getTime() { return TimeService::Instance()->getSeconds(_timestamp); }

class TaskA : public TaskContext {
protected:
    PeriodicActivity act1;
public:
    TaskA(std::string name) : TaskContext(name), act1(1, 0.10, this->engine() )
    {
        //Start the component's activity:
        this->start();
    }

    void updateHook()
    {
        printf("TaskA [%.2f] Loop\n", getTime());
    }
};

class TaskB : public TaskContext {
protected:
    int num_cycles;
    PeriodicActivity act2;
public:
    TaskB(std::string name) : TaskContext(name), act2(2, 0.10, this->engine() )
    {
        num_cycles = 0;
        //Start the component's activity:
        this->start();
    }

    void updateHook()
    {
        num_cycles++;
        printf("TaskB [%.2f] Loop\n", getTime());
        // once every 20 cycles (2 seconds), a long calculation is done
        if(num_cycles%20 == 0)
        {
            printf("TaskB [%.2f] before calling long calculation\n", getTime());
            // calculation takes longer than expected (0.5 seconds).
            // it could be something "unexpected", desired or even a bug...
            // it would not be relevant for this example.
            for(int i=0; i<500; i++) usleep(1000);
            printf("TaskB [%.2f] after calling long calculation\n", getTime());
        }
    }
};

int ORO_main(int argc, char** argv)
{
    TaskA tA("TaskA");
    TaskB tB("TaskB");
    // notice: the tasks have not been connected. there isn't any relationship between them.
    // In the mind of the programmer, each of them is independent, because they have their own activity.
    // if one of the two frequencies of the PeriodicActivities is changed, there isn't any problem,
    // since they run on 2 separate threads.
    getchar();
    return 0;
}
INTRODUCTION
Please be concise and provide a short example and your motivation to include it in RTT. First ask yourself:
If you answered "no" to both questions and you have already debated the new feature in the Developers forum, please post your suggestion here.
In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control, application configuration from a central database or topic based data flow topologies.
1 Central Property Service (ROS like) This task sets up components such that they get the system-wide configuration from a dedicated property server. The property server loads an XML file with all the values and other components query these values. Advanced components even extend the property server at places. A GUI is not included in this work package.
2 Universal Robot Controller (Using KDL, OCL, standard components) This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. A GUI is not included in this work package.
3 Topic based data flow (ROS and CORBA EventService like) A deployer can configure components as such that their ports are connected to 'global' topics for sending and receiving. This is similar to what many existing frameworks do today and may demonstrate how compatibility with these frameworks can be accomplished.
4 GUI communication with Orocos How a remote GUI could connect to a running application.
Please add yours
These pages outline the roadmap for RTT-2.0 in 2009. We aim to have a release candidate by December 2009, with the release following in January 2010.
This work package contains structural clean-ups for the RTT source code, such as the CMake build system, portability and making the public interface slimmer and explicit. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community. This package is put up front so that early adopters need to switch to the new code structure only once, at the beginning, and so that all subsequent packages are executed in the new structure.
Links : (various posts on Orocos mailing lists)
Allocated Work : 15 days
Tasks:
1.1 Partition in name spaces and hide internal classes in subdirectories.
A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are for internal use only but are in fact never used.
Deliverable | Title | Form |
1.1.1 | Internal headers are in subdirectories | Patch set |
1.1.2 | Internal classes are in nested namespaces of the RTT namespace | Patch set |
1.2 Improve CMake build system
Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.
Deliverable | Title | Form |
1.2.1 | Standardized on CMake 2.6 | Patch set |
1.2.2 | Use CMake lists instead of strings | Patch set |
1.2.3 | No more use of Linux specific include paths | Patch set |
1.2.4 | Separate finding from using libraries for all RTT dependencies | Patch set |
1.3 Group user contributed code in rtt/extras.
This directory offers variants of implementations found in the RTT, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT API, these contributions are organized in a separate directory. Users are warned that these extras might not be of the same quality as native RTT classes.
Deliverable | Title | Form |
1.3.1 | Orocos rtt-extras directory | Directory in RTT |
1.4 Improve portability
Some GNU/GCC/Linux specific constructs have entered the source code, which makes maintenance on and portability to other platforms a harder task. To structurally support other platforms, the code will be compiled with another compiler (non-gnu) and a build flag ORO_NO_ATOMICS (or similar) is added to exclude all compiler and assembler specific code and replace it with ISO-C/C++ or RTT-FOSI compliant constructs.
Deliverable | Title | Form |
1.4.1 | Code compiles on non-gnu compiler | Patch set |
1.4.2 | Code compiles without assembler constructs | Patch set |
1.5 Default to activity with one thread per component
The idea is to provide each component with a robust default activity object which maps to exactly one thread. This thread can periodically execute or be non periodic. The user can switch between these modes at configuration or run-time.
Deliverable | Title | Form |
1.5.1 | Generic Activity class which is by default present in every component. | Patch set |
1.5.2 | Unit test for this class | Patch set |
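The one-thread-per-component idea from task 1.5 can be sketched in plain C++ rather than the RTT API: one thread serves the component and can be switched between periodic and non-periodic execution. All names here are illustrative; treating a period of 0.0 as "non-periodic" is an assumption borrowed from RTT's convention, not a confirmed detail of the planned Activity class.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Illustrative sketch: a generic activity owning exactly one thread,
// executing a hook once (non-periodic) or repeatedly (periodic).
class OneThreadActivity {
public:
    // period_s == 0.0 means non-periodic (assumed convention).
    explicit OneThreadActivity(std::function<void()> hook, double period_s = 0.0)
        : hook_(hook), period_s_(period_s), running_(false) {}

    bool isPeriodic() const { return period_s_ > 0.0; }

    // The user can switch between modes at configuration or run time.
    void setPeriod(double period_s) { period_s_ = period_s; }

    void start() {
        running_ = true;
        thread_ = std::thread([this] {
            do {
                hook_();  // execute the component's work
                if (isPeriodic())
                    std::this_thread::sleep_for(
                        std::chrono::duration<double>(period_s_));
            } while (running_ && isPeriodic());
        });
    }

    void stop() {
        running_ = false;
        if (thread_.joinable()) thread_.join();
    }

    ~OneThreadActivity() { stop(); }

private:
    std::function<void()> hook_;
    std::atomic<double> period_s_;
    std::atomic<bool> running_;
    std::thread thread_;
};
```

In non-periodic mode the hook runs exactly once per start(); in periodic mode it repeats at the configured rate until stop().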
1.6 Standardize on Boost Unit Testing Framework
Before the other work packages are started, the RTT must standardize on a unit test framework. Until now, this has been the CppUnit framework. The more portable and configurable Boost UTF has been chosen for unit testing of RTT 2.0.
Deliverable | Title | Form |
1.6.1 | CppUnit removed and Boost UTF in place | Patch set |
1.7 Provide CMake macros for applications and components
When users want to build Orocos components or applications, they require flags and settings from the installed RTT and OCL libraries. A CMake macro which gathers these flags for compiling an Orocos component or application is provided. This is inspired by how ROS components are compiled.
Deliverable | Title | Form |
1.7.1 | CMake macro | CMake macro file |
1.7.2 | Unit test that tests this macro | Patch set |
1.8 Allow lock-free policies to be configured
Some RTT classes use hard-coded lock-free algorithms, which may be in the way (due to resource restrictions) for some embedded systems. It should be possible to change the policy such that a lock-free algorithm is not used in that class (cf. the 'strategy' design pattern). An example is the use of AtomicQueue in the CommandProcessor.
Deliverable | Title | Form |
1.8.1 | Allow to set/override lock-free algorithm policy | patch |
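The 'strategy' idea from task 1.8 can be illustrated with a plain C++ sketch in which the queue implementation is a template policy, so a mutex-based queue can replace a lock-free one on constrained targets. The class names are hypothetical, not the RTT AtomicQueue or CommandProcessor interfaces.

```cpp
#include <mutex>
#include <queue>

// One possible queue policy: a plain locked queue for systems where a
// lock-free algorithm is too resource-hungry.
template <typename T>
class LockedQueuePolicy {
public:
    bool enqueue(const T& v) {
        std::lock_guard<std::mutex> guard(m_);
        q_.push(v);
        return true;
    }
    bool dequeue(T& v) {
        std::lock_guard<std::mutex> guard(m_);
        if (q_.empty()) return false;
        v = q_.front();
        q_.pop();
        return true;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
};

// A processor parameterized by its queue policy: an RTT-style
// lock-free queue could be plugged in here instead, without changing
// the processor code (the 'strategy' pattern).
template <typename T, template <typename> class QueuePolicy = LockedQueuePolicy>
class Processor {
public:
    bool post(const T& v) { return queue_.enqueue(v); }
    bool process(T& v) { return queue_.dequeue(v); }
private:
    QueuePolicy<T> queue_;
};
```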
This page collects all the data and links used to improve the CMake build system, such that you can find quick links in here instead of scrolling through the forum.
Thread on Orocos-dev : http://www.orocos.org/node/1073 (in case you like to scroll)
CMake manual on how to use and create Findxyz macros : http://www.vtk.org/Wiki/CMake:How_To_Find_Libraries
List of many alternative modules : http://zi.fi/cmake/Modules/
An alternative solution for users of RTT and OCL is installing the Orocos-RTT-target-config.cmake macros, which serve a similar purpose as the pkgconfig .pc files: they accumulate the flags used to build the library. This may be a solution for Windows systems. Also, CMake suggests that .pc files are only 'suggestive' and that the standard CMake macros must still be used to fully capture and store all information of the dependency you're looking at.
The orocos/src directory reflects the /usr/include/rtt directory structure. I'll post it here from the user's point of view, i.e. what one finds in the include dir:
Abbrevs: (N)BC: (No) Backwards Compatibility guaranteed between 2.x.0 and 2.y.0. Backwards compatibility is always guaranteed between 2.x.y and 2.x.z. In case of NBC, a class might disappear or change, as long as it is not a base class of a BC qualified class.
Directory | Namespace | BC/NBC | Comments | Header File list |
---|---|---|---|---|
rtt/*.hpp | RTT | BC | Public API: maintains BC, a limited set of classes and interfaces. This is the most important list to get right. A header not listed in here goes into one of the subdirectories. Please add/complete/remove. | TaskContext.hpp Activity.hpp SequentialActivity.hpp SlaveActivity.hpp DataPort.hpp BufferPort.hpp Method.hpp Command.hpp Event.hpp Property.hpp PropertyBag.hpp Attribute.hpp Time.hpp Timer.hpp Logger.hpp |
rtt/plugin/*.hpp | RTT::plugin | BC | All plugin creation and loading stuff. | Plugin.hpp |
rtt/types/*.hpp | RTT::types | BC | All type system stuff (depends partially on plugin). Everything you (or a tool) need(s) to add your own types to the RTT. | Toolkit.hpp ToolkitPlugin.hpp Types.hpp TypeInfo.hpp TypeInfoName.hpp TypeStream.hpp TypeStream-io.hpp VectorComposition.hpp TemplateTypeInfo.hpp Operators.hpp OperatorTypes.hpp BuildType.hpp |
rtt/interface/*.hpp | RTT::interface | BC | Most interfaces/base classes used by classes in the RTT namespace. | ActionInterface.hpp, ActivityInterface.hpp, OperationInterface.hpp, PortInterface.hpp, RunnableInterface.hpp, BufferInterface.hpp |
rtt/internal/*.hpp | RTT::internal | NBC | Supportive classes that don't fit another category but are definitely not for users to use directly. | ExecutionEngine.hpp CommandProcessor.hpp DataSource*.hpp Command*.hpp Buffer*.hpp Function*.hpp *Factory*.hpp Condition*.hpp Local*.hpp EventC.hpp MethodC.hpp CommandC.hpp |
rtt/scripting/*.hpp | RTT::scripting | NBC | Users should not include these directly. | |
rtt/extras/*.hpp | RTT::extras | BC | Alternative implementations of certain interfaces in the RTT namespace. May contain stuff useful for embedded or other specific use cases. | |
rtt/dev/*.hpp | RTT::dev | BC | Minimal Device Interface, As-is in RTT 1.x | AnalogInInterface.hpp AnalogOutInterface.hpp AxisInterface.hpp DeviceInterface.hpp DigitalInput.hpp DigitalOutput.hpp EncoderInterface.hpp PulseTrainGeneratorInterface.hpp AnalogInput.hpp AnalogOutput.hpp CalibrationInterface.hpp DigitalInInterface.hpp DigitalOutInterface.hpp DriveInterface.hpp HomingInterface.hpp SensorInterface.hpp |
rtt/corba/*.hpp | RTT::corba | BC | CORBA transport files. Users include some headers, some not. Should this also have the separation between rtt/corba and rtt/corba/internal ? I would rename the IDL modules to RTT::corbaidl in order to clear out compiler/doxygen confusion. Also note that current 1.x namespace is RTT::Corba. | |
rtt/property/*.hpp | RTT::property | BC | Formerly 'rtt/marsh'. Marshalling and loading classes for properties. | CPFDemarshaller.hpp CPFDTD.hpp CPFMarshaller.hpp |
rtt/dlib/*.hpp | RTT::dlib | BC | As-is static distribution library files. They are actually a form of 'extras'. Maybe they belong in there... | DLibCommand.hpp |
rtt/boost/*.hpp | boost | ? | We'll try to get rid of this in 2.x | |
rtt/os/*.hpp | RTT::OS | BC | As-is. (Rename to RTT::os ?) | Atomic.hpp fosi_internal_interface.hpp MutexLock.hpp rt_list.hpp StartStopManager.hpp threads.hpp CAS.hpp MainThread.hpp oro_allocator.hpp rtconversions.hpp rtstreambufs.hpp Semaphore.hpp Thread.hpp Time.hpp fosi_internal.hpp Mutex.hpp OS.hpp rtctype.hpp rtstreams.hpp ThreadInterface.hpp |
rtt/targets/* | - | BC | We need this for allowing to install multiple -dev versions (-gnulinux+-xenomai for example) in the same directory. | rtt-target.h <target> |
Will go: 'rtt/impl' and 'rtt/boost'.
Open question to be answered: Interfaces like ActivityInterface, PortInterface, RunnableInterface etc. -> Do they go into rtt/, rtt/internal or maybe rtt/interface ?
!!! PLEASE add a LOG MESSAGE when you edit this wiki to motivate your edit !!!
Context: Because the current data flow communication primitives in RTT limit the reusability and potential implementations, Sylvain Joyeux proposed a new, but fairly compatible, design. It is intended that this new implementation can almost transparently replace the current code base. Additionally, this package extends the DataFlow transport to support out-of-band real-time communication using Xenomai IPC primitives.
Link : http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow
Estimated work : 45 days for a demonstrable prototype.
Tasks:
2.1 Review and merge proposed code and improve/fix where necessary
Sylvain's code is clean and of a high standard; however, it has not been unit tested yet and needs a second look.
Deliverable | Title | Form |
2.1.1 | Code reviewed and imported in RTT-2.0 branch | Patch set |
2.1.2 | Unit tests for reading, writing, connecting and disconnecting in-process communication | Patch set |
2.2 Port CORBA type transport to new code base
Sylvain's code has initial CORBA support. The plan is to cooperate on the implementation and offer the same or better features as the current CORBA implementation does. Also the DataFlowInterface.idl will be cleaned up to reflect the new semantics.
Deliverable | Title | Form |
2.2.1 | CORBA enabled data flow between proxies and servers which uses the RTT type system merged on RTT-2.0 branch | Patch set |
A disadvantage of the current data port is that ports connected over CORBA may cause stalls when reading or writing them. The Proxy or Server implementation should, if possible, do the communication in the background and not let the other component's task block.
Deliverable | Title | Form |
2.3.1 | Event driven network-thread allocated in Proxy code to receive and send data flow samples | Patch set |
The current lock-free data connections allocate memory for allowing access by 16 threads, even if only two threads connect. One solution is to let the allocated memory grow with the number of connections, such that no more memory is allocated than necessary.
Deliverable | Title | Form |
2.4.1 | Let lock-free data object and buffer memory grow proportional to connected ports | Patch set |
It is often argued that CORBA is excellent for setting up and configuring services, but not for continuous data transmission. There are for example CORBA standards that only mediate setup interfaces but leave the data communication connections up to the implementation. This task looks at how ROS and other frameworks set up out-of-band data flow and how such a client-server architecture can be added to RTT/CORBA.
Deliverable | Title | Form |
2.5.1 | Report on out of band implementations and similarities to RTT. | Email on Orocos-dev |
Since the out-of-band communication will require objects to be transformed to a byte stream and back, a marshalling system must be in place. The idea is to let the user specify his data types as IDL structs (or equivalent) and to generate a toolkit from that definition. The toolkit will re-use the generated CORBA marshalling/demarshalling code to provide this service to the out-of-band communication channels.
Deliverable | Title | Form |
2.6.1 | Marshalling/demarshalling in toolkits | Patch set |
2.6.2 | Tool to convert data specification into toolkit | Executable |
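The marshalling step such a generated toolkit would provide can be sketched in plain C++: a user data type flattened to a byte stream and reconstructed on the other side of an out-of-band channel. The JointState type and function names are hypothetical; real generated code would also have to handle endianness, alignment and variable-sized fields.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical user data type, as it might be specified in IDL.
struct JointState {
    int32_t joint;
    double  position;
};

// Flatten the struct to a byte stream (the "marshal" direction).
std::vector<uint8_t> marshal(const JointState& s) {
    std::vector<uint8_t> buf(sizeof(s.joint) + sizeof(s.position));
    std::memcpy(buf.data(), &s.joint, sizeof(s.joint));
    std::memcpy(buf.data() + sizeof(s.joint), &s.position, sizeof(s.position));
    return buf;
}

// Rebuild the struct from the byte stream (the "demarshal" direction).
JointState demarshal(const std::vector<uint8_t>& buf) {
    JointState s;
    std::memcpy(&s.joint, buf.data(), sizeof(s.joint));
    std::memcpy(&s.position, buf.data() + sizeof(s.joint), sizeof(s.position));
    return s;
}
```

A generated toolkit would emit one such marshal/demarshal pair per IDL struct and register it with the type system.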
The first communication mechanism to support is data flow. This will be demonstrated with a Xenomai RTPIPE implementation (or equivalent) which is setup between a network of components.
Deliverable | Title | Form |
2.7.1 | Real-time inter-process communication of data flow values on Xenomai | Patch set |
2.7.2 | Unit test for setting up, connecting and validating Real-Time properties of data ports in RT IPC setting. | Patch set |
In compliance with modern programming art, the unit tests should always test and pass the implementation. Documentation and Examples are provided for the users and complement the unit tests.
Deliverable | Title | Form |
2.8.1 | Unit tests updated | Patch set |
2.8.2 | rtt-examples, rtt-exercises updated | Patch set |
2.8.3 | orocos-corba manual updated | Patch set |
2.9 Organize and Port OCL deployment, reporting and taskbrowsing
RTT 2.0 data ports will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new data ports. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.
Deliverable | Title | Form |
2.9.1 | Deployment, Reporting and TaskBrowser updated | Patch set |
Context: Commands are too complex for both users and framework/transport implementers. However, current day-to-day use confirms the usability of an asynchronous and thread-safe messaging mechanism. It was proposed to reduce the command API to a message API and unify the synchronous / asynchronous relation between methods and messages with synchronous / asynchronous events. This will lead to simpler implementations, simpler usage scenarios and reduced concepts in the RTT.
The registration and connection API of these primitives also falls under this WP.
Link: http://www.orocos.org/wiki/rtt/rtt-2.0/executionflow
Estimated work : 55 days for a demonstrable prototype.
Tasks:
3.1 Provide a real-time memory allocator for messages
In contrast to commands, each message invocation leads to a new message sent to the receiver. This requires heap management from a real-time memory allocator, such as the highly recommended TLSF (Two-Level Segregate Fit) allocator, which must be integrated in the RTT code base. If the RTOS provides one, the native RTOS memory allocator is used instead, as in Xenomai.
Deliverable | Title | Form |
3.1.1 | Real-time allocation integrated in RTT-2.0 | Patch set |
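The benefit of a real-time allocator can be illustrated with a simple pre-allocated pool sketch. This is not TLSF (which uses two-level segregated free lists and handles variable sizes); it only shows why reserving all memory up front gives bounded, heap-free allocation cost at run time.

```cpp
#include <cstddef>
#include <vector>

// Illustrative fixed-size block pool: all memory is reserved in the
// constructor, so allocate()/deallocate() never call the system heap
// and run in O(1), as a real-time path requires.
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t count)
        : storage_(block_size * count) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(storage_.data() + i * block_size);
    }

    void* allocate() {
        if (free_.empty()) return nullptr;  // pool exhausted: fail, don't block
        void* p = free_.back();
        free_.pop_back();
        return p;
    }

    void deallocate(void* p) {
        free_.push_back(static_cast<unsigned char*>(p));
    }

private:
    std::vector<unsigned char> storage_;   // reserved once, up front
    std::vector<unsigned char*> free_;     // O(1) free list
};
```

Each message sent would draw its storage from such a pool instead of the system heap.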
3.2 Message implementation
Unit test and implement the new Message API for use in C++ and scripts. This implies a MessageProcessor (replaces CommandProcessor), a 'messages()' interface and using it in scripting.
Deliverable | Title | Form |
3.2.1 | Message implementation for C++ | Patch set |
3.2.2 | Message implementation for Scripting | Patch set |
3.3 Demote the Command implementation
Commands (as they are now) become second rank because they don't appear in the interface anymore, being replaced by messages. Users may still build Command objects at the client side, both in C++ and in scripting. The need for and the plausibility of identical functionality with today's Command objects is yet to be investigated.
Deliverable | Title | Form |
3.3.1 | Client side C++ Command construction | Patch set |
3.3.2 | Client side scripting command creation | Patch set |
3.4 Unify the C++ Event API with Method/Message semantics
Events today duplicate much of the method/command functionality, because they also allow synchronous / asynchronous communication between components. The intention is to replace much of the implementation with interfaces to methods and messages and let events cause Methods to be called or Messages to be sent. This change will remove the EventProcessor, which will be replaced by the MessageProcessor. This will greatly simplify the event API and semantics for new users. Another change is that calling Events on the component's interface can only be allowed by registering them as a method or message.
Deliverable | Title | Form |
3.4.1 | Connection of only Method/Message objects to events | Patch set |
3.4.2 | Adding events as methods or messages to the TaskContext interface. | Patch set |
3.5 Allow event delivery policies
Adding a callback to an event puts a burden on the event emitter. The owner of the event must be allowed to impose a policy on the event such that this burden can be bounded. One such policy can be that all callbacks must be executed outside the thread of the owning component. This task is to extend the RTT such that it contains such a policy.
Deliverable | Title | Form |
3.5.1 | Allow to set the event delivery policy for each component | Patch set |
3.6 Allow to specify requires interfaces
Today one can connect data ports automatically because both providing and requiring data are represented in the interface. This is not so for methods, messages or events. This task makes it possible to describe which of these primitives a component requires from a peer, such that they can be automatically connected during application deployment. The required primitives are grouped in interfaces, such that they can be connected as a group from provider to requirer.
Deliverable | Title | Form |
3.6.1 | Mechanism to list the requires interface of a component | Patch set |
3.6.2 | Feature to connect interfaces in deployment component. | Patch set |
3.7 Improve and create Method/Message CORBA API
With the experience of the RTT 1.0 IDL API, the existing API is improved to reduce the danger of memory leaks and allow easier access to Orocos components when using only the CORBA IDL. The idea is to remove the Method and Command interfaces and change the create methods in CommandInterface and MethodInterface to execute functions.
Deliverable | Title | Form |
3.7.1 | Simplify CORBA API | Patch set |
3.8 Port new Event mechanism to CORBA
Since the new Event mechanism will seamlessly integrate with the Method/Message API, a CORBA port, which allows remote components to subscribe to component events must be straightforward to make.
Deliverable | Title | Form |
3.8.1 | CORBA idl and implementation for using events. | Patch set |
3.9 Update documentation, unit tests and Examples
In compliance with modern programming art, the unit tests should always test and pass the implementation. Documentation and Examples are provided for the users and complement the unit tests.
Deliverable | Title | Form |
3.9.1 | Unit tests updated | Patch set |
3.9.2 | rtt-examples, rtt-exercises updated | Patch set |
3.9.3 | Orocos component builders manual updated | Patch set |
3.10 Organize and Port OCL deployment, taskbrowsing
The new RTT 2.0 execution API will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new primitives. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.
Deliverable | Title | Form |
3.10.1 | Deployment, Reporting and TaskBrowser updated | Patch set |
In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control. This work package fleshes out that example.
Links : (various posts on Orocos mailing lists)
Estimated Work : 5 days for the application architecture with documentation
Tasks:
4.1 Universal Robot Controller (Using KDL, OCL, standard components)
This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. Both axes and end effector can be controlled in position and velocity mode. A state machine switches between these modes. A GUI is not included in this work package.
Deliverable | Title | Form |
4.1.1 | Robot Controller example | tar ball |
There are two major changes required in the CORBA IDL interface.
The first point will be relatively straightforward, as events attach methods and messages, which will be represented in the CORBA interface as well.
The DataFlowInterface will be adapted to reflect the rework on the new Data flow api. Much will depend on the out-of-band or through-CORBA nature of the data flow.
The MethodInterface should no longer work with 'session' objects, and all calls are related to the main interface, such that a method object can be freed after invocation.
The CommandInterface might be removed, in case it can be 'reconstructed' from lower level primitives. A MessageInterface will replace it, which allows sending messages, analogous to the existing MethodInterface.
The 'ControlTask' interface will remain mostly as is, extended with events() and messages().
This page is for helping you understand what's in RTT/OCL 2.0.0-beta1 release and what's not.
For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0
88% tests passed, 3 tests failed out of 25

The following tests FAILED:
      6 - mqueue-test (Failed)
     19 - types_test (Failed)
     22 - function_test (Failed)
For each type to be transported using the MQueue transport, a separate transport typekit must be available (this may change in the final 2.0 release).
Method<bool(int,int)> setreso;
setreso = this->getPeer("Camera")->getMethod<bool(int,int)>("setResolution");
if ( setreso.ready() == false )
    log(Error) << "Could not find setResolution Method." << endlog();
else
    setreso(640, 480);
Method<bool(int,int)> setreso("setResolution");
this->requires("Camera")->addMethod(setreso);
// Deployment component will setup setResolution for us...
setreso(640, 480);
This page is for helping you understand what's in RTT/OCL 2.0.0-beta2 release and what's not.
See the RTT 2.0.0-beta1 page for the notes of the previous beta, these will not be repeated here.
For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0
97% tests passed, 1 tests failed out of 31

The following tests FAILED:
     24 - types_test (Failed)
This work package claims all remaining proposed clean-ups for the RTT source code. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community.
1 Partition in name spaces and hide internal classes in subdirectories. A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are for internal use only but are in fact never used.
2 Improve CMake build system Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.
3 Group user contributed code in rtt-extras and ocl-extras packages. These packages offer variants of implementations found in the RTT and OCL, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT and OCL APIs, these contributions are organized in separate packages. Other users are warned that these extras might not be of the same quality as native RTT and OCL classes.
Recent ML posts indicate the desire for a real-time (RT) capable logging framework, to supplement/replace the existing non-RT RTT::Logger. See http://www.orocos.org/forum/rtt/rtt-dev/logging-replacement for details.
NB Work in progress. Feedback welcomed
See https://www.fmtc.be/bugzilla/orocos/show_bug.cgi?id=708 for progress and patches.
0) Completely disable all logging
1) Able to log variable sized string messages
2) Able to log from non-realtime and realtime code
3) Minimize (as reasonably practicable) the effect on runtime performance (eg minimize CPU cycles consumed)
4) Support different log levels
5) Support different "storage mediums" (ie able to log messages to file, to socket, to stdout)
Except for 3, and the "realtime" part of 2, the above is the functionality of the existing RTT::Logger
6) Support different log levels within a deployed system (ie able to log debug in one area, and info in another)
7) Support multiple storage mediums simultaneously at runtime
8) Runtime configuration of storage mediums and logging levels
9) Allow the user to extend the possible storage mediums at deployment-time (ie user can provide new storage class)
Optional IMHO
10) Support nested diagnostic contexts [1] [2] (a more advanced version of the Logger::In() that RTT's logger currently supports)
I prefer 3) as it has the basic functionality we need, is license compatible, has a good design, and we've been offered developer access to modify it. I also think modifying a slightly less-well-known framework will be easier than getting some of our mod's in to log4cxx.
NOTE on the ML I was using the logback term logger, but log4cpp calls it a category. I am switching to category from now on!
Add TLSF to RTT (a separate topic).
Fundamentally, replace std::string, wrap one class, and override two functions. :-)
Typedef/template in a real-time string to the logging framework, instead of std::string (also any std::map, etc).
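The idea behind such a real-time string can be sketched as a fixed-capacity template: a char buffer of compile-time size, so assignment never touches the heap. This is illustrative only, with hypothetical names, not the rt_string the logging framework would actually use (which would likely build on a real-time allocator instead).

```cpp
#include <cstddef>
#include <cstring>

// Illustrative fixed-capacity string: no heap allocation on assignment,
// which is what a realtime logging path needs. Overlong input is
// truncated instead of growing the buffer.
template <std::size_t N>
class rt_string {
public:
    rt_string() : len_(0) { buf_[0] = '\0'; }

    rt_string& operator=(const char* s) {
        len_ = std::strlen(s);
        if (len_ >= N) len_ = N - 1;      // truncate instead of allocating
        std::memcpy(buf_, s, len_);
        buf_[len_] = '\0';
        return *this;
    }

    const char* c_str() const { return buf_; }
    std::size_t size() const { return len_; }

private:
    char buf_[N];        // fixed, compile-time capacity
    std::size_t len_;
};
```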
Create an OCL::Category class derived from log4cpp::Category. Add an (optionally null) association to an RTT::BufferDataPort< log4cpp::LoggingEvent > (which uses rt_strings internally). Override the callAppenders() function to push to the port instead of directly calling appenders.
Modify the getCategory() function in the hierarchy maintainer to return our OCL::Category instead of log4cpp::Category. Alternatively, leave it producing log4cpp::Category but contain that within the OCL::Category object (a has-a instead of an is-a relationship, in OO speak). The alternative requires less modification to log4cpp, but gives worse performance and potentially more wrapping code.
I have a working prototype of the OCL deployment for this (without the actual logging though), and it is really ugly. As in Really Ugly! To simplify the format and number of files involved, and reduce duplication, I suggest extending the OCL deployer to better support logging.
Sample system
Component C1 - uses category org.me.myapp
Component C2 - uses category org.me.myapp.C2
Appender A - console
Appender B - file
Appender C - serial
Category org.me.myapp has level=info and appender A
Category org.me.myapp.C2 has level=debug and appenders B, C
Configuration file for log4cpp
log4j.logger.org.me.myapp=info, AppA
log4j.logger.org.me.myapp.C2=debug, AppB, AppC

log4j.appender.AppA=org.apache.log4j.ConsoleAppender
log4j.appender.AppB=org.apache.log4j.FileAppender
log4j.appender.AppC=org.apache.log4j.SerialAppender

# AppA uses PatternLayout.
log4j.appender.AppA.layout=org.apache.log4j.PatternLayout
log4j.appender.AppA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

# AppB uses SimpleLayout.
log4j.appender.AppB.layout=org.apache.log4j.SimpleLayout

# AppC uses PatternLayout with a different pattern from AppA
log4j.appender.AppC.layout=org.apache.log4j.PatternLayout
log4j.appender.AppC.layout.ConversionPattern=%d [%t] %-5p %c %x - %m%n
File: AppDeployer.xml
<struct name="ComponentC1" ... />
<struct name="ComponentC2" ... />

<struct name="AppenderA" type="ocl::ConsoleAppender">
  <simple name="PropertyFile" ...><value>AppAConfig.cpf</value></simple>
  <struct name="Peers">
    <simple>Logger</simple>
  </struct>
</struct>

<struct name="AppenderB" type="ocl::FileAppender">
  <simple name="PropertyFile" ... />
  <struct name="Peers">
    <simple>Logger</simple>
  </struct>
</struct>

<struct name="AppenderC" type="ocl::SerialAppender">
  <simple name="PropertyFile" ... />
  <struct name="Peers">
    <simple>Logger</simple>
  </struct>
</struct>

<struct name="Logger" type="ocl::Logger">
  <simple name="PropertyFile" ...><value>logger.org.me.myapp.cpf</value></simple>
</struct>
File: AppAConfig.cpf
<properties>
  <simple name="LayoutClass" type="string"><value>ocl.PatternLayout</value></simple>
  <simple name="LayoutConversionPattern" type="string"><value>%-4r [%t] %-5p %c %x - %m%n</value></simple>
</properties>
… other appender .cpf files …
File: logger.org.me.myapp.cpf
<properties>
  <struct name="Categories" type="PropertyBag">
    <simple name="org.me.myapp" type="string"><value>info</value></simple>
    <simple name="org.me.myapp.C2" type="string"><value>debug</value></simple>
  </struct>
  <struct name="Appenders" type="PropertyBag">
    <simple name="org.me.myapp" type="string"><value>AppenderA</value></simple>
    <simple name="org.me.myapp.C2" type="string"><value>AppenderB</value></simple>
    <simple name="org.me.myapp.C2" type="string"><value>AppenderC</value></simple>
  </struct>
</properties>
The logger component is no more than a container for ports. Why special case this? Simply to make life easier for the deployer and to keep the deployer syntax and semantic model similar to what it currently is. A deployer deploys components - the only real special casing here is the connecting of ports (by the logger code) that aren't mentioned in the deployment file. If you use the existing deployment approach, you have to create a component per category, and mention the port in both the appenders and the category. This is what I currently have, and as I said, it is Really Ugly.
Example logger functionality (error checking elided)
Logger::configureHook()
  // create a port for each category with an appender
  for each appender in property bag
    find existing category
    if category not exist
      create category
    create port
    associate port with category
    find appender component
    connect category port with appender port
  // configure categories
  for each category in property bag
    if category not exist
      create category
    set category level
There will probably need to be a restriction that to maintain real-time, categories are found prior to a component being started (e.g. in configureHook() or startHook() ).
Note that not all OCL::Category objects contain a port. Only those category objects with associated appenders actually have a port. This is how the hierarchy works. If you have category "org.me.myapp.1.2.3" and it has no appenders but your log level is sufficient, then the logging action gets passed up the hierarchy. Say that category "org.me.myapp" has an appender (and that no logging level stops this logging action in the hierarchy in between), then that appender will actually log this event.
We should also create toolkit and transport plugins to deal with the log4cpp::LoggingEvent struct. This will allow for remote appenders, as well as viewing within the taskbrowser.
Port names would perhaps be something like "org.me.myapp.C1" => "log_org_me_myapp_C1".
It's not so much the string that needs to be real-time, but the stringstream, which converts our data (strings, ints, ...) into a string buffer. Conveniently, the boost::iostreams library allows creating a real-time string stream with two lines of code:
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <cstring> // for memset

namespace io = boost::iostreams;

int main()
{
    // prepare static sink
    const int MAX_MSG_LENGTH = 100;
    char sink[MAX_MSG_LENGTH];
    memset( sink, 0, MAX_MSG_LENGTH);

    // create 'stringstream'
    io::stream<io::array_sink> out(sink);
    out << "Hello World! "; // space required to avoid stack smashing abort.

    // close and flush stringstream
    out.close();

    // re-open from position zero.
    out.open( sink ); // overwrites old data.
    out << "Hello World! ";
}
Unfortunately, the log4cpp::LoggingEvent is passed through RTT buffers, and it has std::string members. So we need an rt_string as well, though an rt_stringstream will also be very useful.
Warning: for anyone using boost::iostreams as above, either clear the array to 0's first, or ensure you explicitly write the string termination character ('\0'). The out << "..."; statement does not terminate the string otherwise. Also, I did not need the "space ... to avoid stack smashing abort" part on Snow Leopard with gcc 4.2.1.
When using boost::iostreams repeatedly, you need to reset the stream between each use:
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/iostreams/seek.hpp>

namespace io = boost::iostreams;
...
char str[500];
io::stream<io::array_sink> ss(str);

ss << "cartPose_desi " << vehicleCartPosition_desi << '\0';
logger->debug(OCL::String(&str[0]));

// reset stream before re-using
io::seek(ss, 0, BOOST_IOS::beg);

ss << "cartPose_meas " << vehicleCartPosition_meas << '\0';
logger->debug(OCL::String(&str[0]));
If a component logs to a category before the Logger is configured (and hence, before the buffer ports and appender associations are created), the logging event is lost, since no appenders exist at that time. This means that, by default, logging events from any component that logs prior to configuration time are lost. I think this requires further examination, but a solution would likely involve more changes to the OCL deployer.
The logger configure code presumes that all appenders already exist. Is this an issue?
Is the port-category association a shared_ptr<port> style, or does the category simply own the port?
If the logger component has the ports added to it as well as to the category, then you could peruse the ports within the taskbrowser. Is this useful? If this is useful, is it worth making the categories and their levels available somehow for perusal within the taskbrowser?
[1] http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html
[2] Patterns for Logging Diagnostic Messages (abstract)
[3] log4j and a short introduction to it.
[5] log4cpp
[6] log4cxx
[7] log4cplus
(Copied from http://github.com/doudou/orocos-rtt/commit/dc1947c8c1bdace90cf0a3aa2047ad248619e76b)
Here is the mail that led to this implementation:
connect(source, dest)
source.disconnect()
A0 A3       => [PROCESSING]   => A'0 A'3
A0 A1 A2 A3 => [WORK SHARING] => A'0 A'1 A'2 A'3
A1 A2       => [PROCESSING]   => A'1 A'2
What I'm proposing is getting back to a good'ol data flow model, namely:
From RTT 1.8 on, an Orocos component is created with a default 'SequentialActivity', which uses ('piggy-backs on') the calling thread to execute its asynchronous functions. It has been argued that this is not a safe default, because a component with a faulty asynchronous function can terminate the thread of a calling component, in case the 'caller' emits an asynchronous event (this is quite technical, you need to be on orocos-dev for a while to understand this).
Furthermore, in case you do want to assign a thread, you need to select a 'PeriodicActivity' or 'NonPeriodicActivity', which have their quirks as well. For example, PeriodicActivity serialises activities with equal period and periodicity, and NonPeriodicActivity says what it isn't instead of what it is.
The idea is to create a new activity type which allocates one thread, and which can be periodic or non-periodic. The other activity types remain (and/or are renamed) for specialist users that know what they want.
It started with an idea on FOSDEM. It went on as a long mail (click link for full text and discussion) on the Orocos-dev mailing list.
Here's the summary:
The pages below analyse and propose new solutions. The pages are in chronological order, so later pages represent more recent views.
I've seen people using the RTT for inter-thread communication in two major ways: either implement a function as a Method, or as a Command, where the Command was the thread-safe way to change the state of a component. The adventurous used Events as well, but I can't say they're a huge success (we got like only one 'thank you' email in their whole existence...). But anyway, Commands are complex for newbies, and Events (syn/asyn) aren't better. So for all these people, here it comes: the RTT::Message object.

Remember, Methods allow a peer component to _call_ a function foo(args) of the component interface. Messages will have the meaning of _sending_ another component a message to execute a function foo(args). Contrary to Methods, Messages are 'send and forget': they return void. The only guarantee you get is that if the receiver was active, it processed the message. For now, forget that Commands exist. We have two inter-component messaging primitives now: Messages and Methods. And each component declares: you can call these methods and send these messages. They are the 'Level 0' primitives of the RTT. Any transport should support these. Note that, conveniently, the transport layer may implement messages with the same primitive as data ports. But we, users, don't care. We still have Data Ports to 'broadcast' our data streams, and now we have Messages as well to send directly to component X.
Think about it. The RTT would be already usable if each component only had data ports and a Message/Method interface. Ask the AUTOSAR people, it's very close to what they have (and can live with).
There's one side effect of the Message: we will need a real-time memory allocator to reserve a piece of memory for each message sent, and free it when the message is processed. Welcome, TLSF. In case such a thing is not possible or not wanted by the user, Messages can fall back to using pre-allocated memory, but at the cost of reduced functionality (similar to what Commands can do today). Also, we'll have a MessageProcessor, which replaces, and is a slimmed-down version of, the CommandProcessor of today.
So where does this leave Events? Events are among the last primitives I explain in courses because they are so complex. They don't need to be. Today you need to attach a C/C++ function to an event and optionally specify an EventProcessor; depending on some this-or-thats, the function is executed in this or the other thread. Let's forget about that. In essence, an Event is a local thing that others like to know about: something happened 'here', who wants to know? Events can be changed such that you can say: if event 'e' happens, call this Method. And you can say: if event 'e' happens, send me this Message. You can subscribe as many callbacks as you want. Because of the lack of this mechanism, the current Event implementation has a huge footprint. There's a lot to win here.
Do you want to allow others to raise the event? Easy: add it to the Message or Method interface, saying: send me this Message and I'll raise the event, or: call this Method and you'll raise it. But whether someone can raise it is your component's choice. That's what the event interface should look like. It's Level 1. A transport should do no more than allow connecting Methods and Messages (which it already supports, Level 1) to Events. No more. Even our CORBA layer could do that.
The implementation of Event can benefit from an rt_malloc as well, indirectly. Each raised Event which causes Messages to be sent out will use the Message's rt_malloc to store the event data, by just sending the Message. In case you don't have/want an rt_malloc, you fall back to what events can roughly do today, but with a lot less code (goodbye RTT::ConnectionC, goodbye RTT::EventProcessor).
And now comes the climax: Sir Command. How does he fit in the picture? He'll remain in some form, but mainly as a 'Level 2' citizen. He'll be composed of Methods, Messages and Events and will be dressed out to be no more than a wrapper, keeping related classes together or even not that. Replacing a Command with a Message hardly changes anything in the C++ side. For scripts, Commands were damn useful, but we will come up with something satisfactory. I'm sure.
How does all this interface shuffling allow us to get 'towards a sustainable distributed component model'? That's because we're seriously lowering the requirements on the transport layer:
And we are at the same time lowering the learning curve for new users:
(Please feel free to edit/comment etc. This is a community document, not a personal document)
An alternative naming is possible: the offering of a C/C++ function could be named 'operation' and the collection of a given set of operations in an interface could be called a 'service'. This definition would line up better with service oriented architectures like OSGi.
Users want to control which thread executes which function, and if they want to wait(block) on the result or not. This all in order to meet deadlines in real-time systems. In practice, this boils down to:
Wait? \ Thread? | Caller   | Component |
----------------|----------|-----------|
Yes             | (Method) | (?)       |
No              | X        | (Command) |
For reference, the current RTT 1.x primitives are shown. There are two remarkable spots: the X and the (?).
Another thing you should be aware of is that in the current implementation, caller and component must agree on how the service is invoked. If the component defines a Method, the caller must execute it in its own thread and wait for the result; there is no way for the caller to deviate from this. In practice, this means that the component's interface dictates how the caller can use its services. This is consistent with how UML defines operations, but other frameworks, like ICE, allow any function of the interface to be called blocking or non-blocking. Clearly, ICE has some kind of thread pool behind the scenes that does the dispatching and collects the results on behalf of the caller.
A simpler form of Command will be provided that does not contain the completion condition. It is too seldom used.
It is up to the proposals to show how to emulate the old behaviour with the new primitives.
Each proposal should try to solve these issues:
The ability to let caller and component choose which execution semantics they want when calling or offering a service (or motivate why a certain choice is limited):
And regarding easy use and backwards compatibility:
And finally:
This is one of the earliest proposals. It proposes to keep Method as-is, remove Command, and replace it with a new primitive: RTT::Message. The Message is a stripped Command. It has no completion condition and is send-and-forget: one cannot track the status or retrieve arguments. It also uses a memory manager to allow invoking the same Message object multiple times with different data.
Emulating a completion condition is done by defining the completion condition as a Method in the component interface and requiring that the sender of the Message checks that Method to evaluate progress. In scripting this becomes:
// Old:
do comp.command("hello"); // waits (polls) here until complete returns true

// New: makes explicit what the above line does:
do comp.message("hello"); // proceeds immediately
while ( comp.message_complete("hello") == false ) // polling
   do nothing;
In C++, the equivalent is slightly different:
// Old:
if ( command("hello") ) {
    //... user specific logic that checks command.done()
}

// New:
if ( message("hello") ) { // send and forget, returns immediately
    // user specific logic that checks message_complete("hello")
}
Users have indicated that they also wanted to be able to specify in C++:
message.wait("hello"); // send and block until executed.
It is not clear yet how the wait case can be implemented efficiently.
The user visible object names are:
This proposal solves:
This proposal omits:
Other notes:
The idea is that components only define services, and assign properties to these services. The main property to toggle is 'executed in my thread, the caller's thread, or even another thread', but other properties could be added too. For example: a 'serialized' property which causes the locking of a (recursive!) mutex during the execution of the service. The user of the service cannot, and does not need to, know how these properties are set. He only sees a list of services in the interface.
It is the caller that chooses how to invoke a given service: waiting for the result ('call') or not ('send'). If he doesn't want to wait, he has the option to collect the results later ('collect'). The default is blocking ('call'). Note that this waiting or not is completely independent of how the service was defined by the component, the framework will choose a different 'execution' implementation depending on the combination of the properties of service and caller.
This means that this proposal allows all four quadrants of the table above. This proposal does not yet detail how to implement case (X), though, which requires a third thread to do the actual execution of the service (neither component nor caller wishes to execute the function).
This would result in the following scripting code on caller side:
//Old:
do comp.the_method("hello");
//New:
do comp.the_service.call("hello"); // equivalent to the_method.

//Old:
do comp.the_command("hello");
//New:
do comp.the_service.send("hello"); // equivalent to the_command, but without completion condition.
This example shows two use cases for the same 'the_service' functionality. The first case emulates an RTT 1.x method. It is called and the caller waits until the function has been executed. You can not see here which thread effectively executes the call. Maybe it's 'comp's thread, in which case the caller's thread blocks until the function is executed. Maybe it's the caller's thread, in which case it is effectively executing the function itself. The caller doesn't actually care. The only thing that has an effect is that it takes a certain amount of time to complete the call, *and* that if the call returns, the function has been effectively executed.
The second case emulates an RTT 1.x command. The send returns immediately and there is no way of knowing when the function has been executed. The only guarantee you have is that the request arrived at the other side and, barring crashes and infinite loops, will complete some time in the future.
A third example is shown below where another service is used with a 'send' which returns a result. The service takes two arguments: a string and a double. The double is the answer of the service, but is not yet available when the send is done. So the second argument is just ignored during the send. A handle 'h' is returned which identifies your send request. You can re-use this handle to collect the results. During collection, the first argument is now ignored, and the second argument is filled in with the result of the service. Collection may be blocking or not.
//New, with collecting results:
var double ignored_result, result;
set h = comp.other_service.send("hello", ignored_result);
// some time later:
comp.other_service.collect(h, "ignored", result); // blocking !
// or poll for it:
if ( comp.other_service.collect_if_done( h, "ignored", result ) == true ) then {
    // use result...
}
In C++ the above examples are written as:
//New calling:
the_service.call("hello", result);
// also allowed:
the_service("hello", result);

//New sending:
the_service.send("hello", ignored_result);

//New sending with collecting results:
h = other_service.send("hello", ignored_result);
// some time later:
other_service.collect(h, "ignored", result); // blocking !
// or poll for it:
if ( other_service.collect_if_done( h, "ignored", result ) == true ) {
    // use result...
}
Completion condition emulation is done like in Proposal 1.
The definition of the service happens at the component's side. The component decides for each service whether it is executed in its own thread or the caller's thread:
// by default creates a service executed by the caller, equivalent to defining an RTT 1.x Method
RTT::Service the_service("the_service", &foo_service );

// sets the service to be executed by the component's thread, equivalent to a Command
the_service.setExecutor( this );

// the above in one line:
RTT::Service the_service("the_service", &foo_service, this );
The user visible object names are:
This proposal solves:
This proposal omits:
Users can express the 'provides' interface of an Orocos Component. However, there is no easy way to express which other components a component requires. The notable exception is data flow ports, which have in-ports (requires) and out-ports (provides). It is however not possible to express this requires interface for the execution flow interface, thus for methods, commands/messages and events. This omission makes the component specification incomplete.
One of the first questions raised is whether this must be expressed in C++ or during 'modelling'. That is, UML can express the requires dependency, so why should the C++ code also contain it? It should only contain it if you can't generate code from your UML model. Since code generation is not yet available for Orocos components, there is no other choice than expressing it in C++.
A requires interface specification should be optional and only be present for:
We apply this in code examples to various proposed primitives in the pages below.
Commands are no longer a part of the TaskContext API. They are helper classes which replicate the old RTT 1.0 behaviour. In order to set up commands more easily, it is allowed to register them as a 'requires()' interface.
This is all very experimental.
/**
 * Provider of a Message with command-like semantics
 */
class TaskA : public TaskContext
{
    Message<void(double)> message;
    Method<bool(double)>  message_is_done;
    Event<void(double)>   done_event;

    void mesg(double arg1) { return; }
    bool is_done(double arg1) { return true; }
public:
    TaskA(std::string name)
        : TaskContext(name),
          message("Message", &TaskA::mesg, this),
          message_is_done("MessageIsDone", &TaskA::is_done, this),
          done_event("DoneEvent")
    {
        this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides()->addMethod(&message_is_done, "Is the Message done?", "arg1", "Argument 1");
        this->provides()->addEvent(&done_event, "Emitted when the Message is done.", "arg1", "Argument 1");
    }
};

class TaskB : public TaskContext
{
    // RTT 1.0 style command objects
    Command<bool(double)> command1;
    Command<bool(double)> command2;
public:
    TaskB(std::string name)
        : TaskContext(name),
          command1("command1"),
          command2("command2")
    {
        // the commands are now created client side, you
        // can not add them to your 'provides' interface
        command1.useMessage("Message");
        command1.useCondition("MessageIsDone");
        command2.useMessage("Message");
        command2.useEvent("DoneEvent");

        // this allows automatic setup of the command.
        this->requires()->addCommand( &command1 );
        this->requires()->addCommand( &command2 );
    }

    bool configureHook() {
        // setup is done during deployment.
        return command1.ready() && command2.ready();
    }

    void updateHook() {
        // calls TaskA:
        if ( command1.ready() && command2.ready() )
            command1( 4.0 );
        if ( command1.done() && command2.ready() )
            command2( 1.0 );
    }
};

int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");

    connectPeers(ta, tb);
    // connects interfaces:
    connectInterfaces(ta, tb);
    return 0;
}
The idea of the new Event API is that: 1. only the owner of the event can emit the event (unless the event is also added as a Method or Message) 2. Only methods or message objects can subscribe to events.
/**
 * Provider of Event
 */
class TaskA : public TaskContext
{
    Event<void(string)> event;
public:
    TaskA(std::string name)
        : TaskContext(name),
          event("Event")
    {
        this->provides()->addEvent(&event, "The Event", "arg1", "Argument 1");
        // OR:
        this->provides("FooInterface")->addEvent(&event, "The Event", "arg1", "Argument 1");

        // If you want to let the user emit the event:
        this->provides()->addMethod(&event, "Emit The Event", "arg1", "Argument 1");
    }

    void updateHook() {
        event("hello world");
    }
};

/**
 * Subscribes a local Method and a Message to Event
 */
class TaskB : public TaskContext
{
    Message<void(string)> message;
    Method<void(string)>  method;

    // Message callback
    void mesg(string arg1) { return; }
    // Method callback
    void meth(string arg1) { return; }
public:
    TaskB(std::string name)
        : TaskContext(name),
          message("Message", &TaskB::mesg, this),
          method("Method", &TaskB::meth, this)
    {
        // optional:
        // this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        // this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");

        // subscribe to event:
        this->requires()->addCallback("Event", &message);
        this->requires()->addCallback("Event", &method);

        // OR:
        // this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
        // this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
        // subscribe to event:
        this->requires("FooInterface")->addCallback("Event", &message);
        this->requires("FooInterface")->addCallback("Event", &method);
    }

    bool configureHook() {
        // setup is done during deployment.
        return message.ready() && method.ready();
    }

    void updateHook() {
        // we only receive
    }
};

int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");

    connectPeers(ta, tb);
    // connects interfaces:
    connectInterfaces(ta, tb);
    return 0;
}
This use case shows how one can use messages in the new API. The unchanged method is added for comparison. Note that I have also added the provides() and requires() mechanism such that the RTT 1.0 construction:
method = this->getPeer("PeerX")->getMethod<int(double)>("Method");
is no longer required. The connection is made similar as data flow ports are connected.
/**
 * Provider
 */
class TaskA : public TaskContext
{
    Message<void(double)> message;
    Method<int(double)>   method;

    void mesg(double arg1) { return; }
    int  meth(double arg1) { return 0; }
public:
    TaskA(std::string name)
        : TaskContext(name),
          message("Message", &TaskA::mesg, this),
          method("Method", &TaskA::meth, this)
    {
        this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");
        // OR:
        this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
    }
};

class TaskB : public TaskContext
{
    Message<void(double)> message;
    Method<int(double)>   method;
public:
    TaskB(std::string name)
        : TaskContext(name),
          message("Message"),
          method("Method")
    {
        this->requires()->addMessage( &message );
        this->requires()->addMethod( &method );
        // OR:
        this->requires("FooInterface")->addMessage( &message );
        this->requires("FooInterface")->addMethod( &method );
    }

    bool configureHook() {
        // setup is done during deployment.
        return message.ready() && method.ready();
    }

    void updateHook() {
        // calls TaskA:
        method( 4.0 );
        // sends two messages:
        message( 1.0 );
        message( 2.0 );
    }
};

int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");

    connectPeers(ta, tb);
    // connects interfaces:
    connectInterfaces(ta, tb);
    return 0;
}
This page shows some use cases on how to use the newly proposed services classes in RTT 2.0.
WARNING: This page assumes the reader has familiarity with the current RTT 1.x API.
First, we introduce the new classes that would be added to the RTT:
#include <rtt/TaskContext.hpp>
#include <string>

using RTT::TaskContext;
using std::string;

/**************************************
 * PART I: New Orocos Classes
 */

/**
 * An operation is a function a component offers to do.
 */
template<class T>
class Operation {};

/**
 * A Service collects a number of operations.
 */
class ServiceProvider
{
public:
    ServiceProvider(string name, TaskContext* owner);
};

/**
 * Is the invocation of an Operation.
 * Methods can be executed blocking or non blocking;
 * in the latter case the caller can retrieve the results
 * later on.
 */
template<class T>
class Method {};

/**
 * A ServiceRequester collects a number of methods.
 */
class ServiceRequester
{
public:
    ServiceRequester(string name, TaskContext* owner);
    bool ready();
};
What is important to notice here is the symmetry:
(Operation, ServiceProvider) <-> (Method, ServiceRequester). The left-hand side offers services, the right-hand side uses them.
First we define that we provide a service. The user starts from his own C++ class with virtual functions. This class is then implemented in a component. A helper class ties the interface to the RTT framework:
/**************************************
 * PART II: User code for PROVIDING a service
 */

/**
 * Example Service as abstract C++ interface (non-Orocos).
 */
class MyServiceInterface
{
public:
    /**
     * Description.
     * @param name Name of thing to do.
     * @param value Value to use.
     */
    virtual int foo_function(std::string name, double value) = 0;

    /**
     * Description.
     * @param name Name of thing to do.
     * @param value Value to use.
     */
    virtual int bar_service(std::string name, double value) = 0;
};

/**
 * MyServiceInterface exported as Orocos interface.
 * This could be auto-generated from reading MyServiceInterface.
 */
class MyService
{
protected:
    /**
     * These definitions are not required in case of 'addOperation' below.
     */
    Operation<int(const std::string&,double)> operation1;
    Operation<int(const std::string&,double)> operation2;

    /**
     * Stores the operations we offer.
     */
    ServiceProvider provider;
public:
    MyService(TaskContext* owner, MyServiceInterface* service)
        : provider("MyService", owner),
          operation1("foo_function"),
          operation2("bar_service")
    {
        // operation1 ties to foo_function and is executed in the caller's thread.
        operation1.calls(&MyServiceInterface::foo_function, service, Service::CallerThread);
        operation1.doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
        provider.addOperation( operation1 );

        // OR: (does not need the operation1 definition above)
        // Operation executed by the caller's thread:
        provider.addOperation("foo_function", &MyServiceInterface::foo_function, service, Service::CallerThread)
            .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
        // Operation executed in the component's thread:
        provider.addOperation("bar_service", &MyServiceInterface::bar_service, service, Service::OwnThread)
            .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
    }
};
Finally, any component is free to provide the service defined above. Note that it shouldn't be that hard to autogenerate most of the above code.
/**
 * A component that implements and provides a service.
 */
class MyComponent : public TaskContext, protected MyServiceInterface
{
    /**
     * The class defined above.
     */
    MyService serv;
public:
    /**
     * Just pass on TaskContext and MyServiceInterface pointers:
     */
    MyComponent() : TaskContext("MC"), serv(this,this) {}

protected:
    // Implements MyServiceInterface
    int foo_function(std::string name, double value) {
        //...
        return 0;
    }
    // Implements MyServiceInterface
    int bar_service(std::string name, double value) {
        //...
        return 0;
    }
};
The second part is about using this service. It creates a ServiceRequester object that stores all the methods it wants to be able to call.
Note that both ServiceRequester below and ServiceProvider above have the same name "MyService". This is how the deployment can link the interfaces together automatically.
/**************************************
 * PART II: User code for REQUIRING a service
 */

/**
 * We need something like this to define which services
 * our component requires.
 * This class is written explicitly here, but it can also be done
 * automatically, as the example below shows.
 *
 * If possible, this class should be generated too.
 */
class MyServiceUser
{
    ServiceRequester rservice;
public:
    Method<int(const string&, double)> foo_function;

    MyServiceUser( TaskContext* owner )
        : rservice("MyService", owner), foo_function("foo_function")
    {
        rservice.requires(foo_function);
    }
};

/**
 * Uses the MyServiceUser helper class.
 */
class UserComponent2 : public TaskContext
{
    // It is also possible to (privately) inherit from this class.
    MyServiceUser mserv;
public:
    UserComponent2() : TaskContext("User2"), mserv(this) {}

    bool configureHook()
    {
        if ( ! mserv.ready() ) {
            // service not ready
            return false;
        }
        return true;
    }

    void updateHook()
    {
        // blocking:
        mserv.foo_function.call("name", 3.14);
        // etc. see updateHook() below.
    }
};
The helper class can again be omitted, but the Method<> definitions must remain in place (in contrast, the Operation<> definitions for providing a service could be omitted).
The code below also demonstrates the different use cases for the Method object.
/**
 * A component that uses a service.
 * This component doesn't need MyServiceUser; it uses
 * the factory functions instead:
 */
class UserComponent : public TaskContext
{
    // A definition like this must always be present because
    // we need it for calling. We also must provide the function signature.
    Method<int(const string&, double)> foo_function;
public:
    UserComponent() : TaskContext("User"), foo_function("foo_function")
    {
        // creates this requirement automatically:
        this->requires("MyService")->add(&foo_function);
    }

    bool configureHook()
    {
        if ( !this->requires("MyService")->ready() ) {
            // service not ready
            return false;
        }
        return true;
    }

    /**
     * Use the service.
     */
    void updateHook()
    {
        // blocking:
        foo_function.call("name", 3.14);
        // short for / equivalent to call:
        foo_function("name", 3.14);
        // non-blocking:
        foo_function.send("name", 3.14);
        // blocking collect of the return value of foo_function:
        int ret = foo_function.collect();
        // blocking collect of any (reference) arguments of foo_function:
        string ret1; double ret2;
        ret = foo_function.collect(ret1, ret2);
        // non-blocking collect:
        int returnval;
        if ( foo_function.collectIfDone(ret1, ret2, returnval) ) {
            // foo_function was done. Any argument that needed updating has
            // been updated.
        }
    }
};
Finally, we conclude with an example of requiring the same service multiple times, for example, for controlling two stereo-vision cameras.
/**
 * Multi-service case: use the same service multiple times.
 * Example: stereo vision with two cameras.
 */
class UserComponent3 : public TaskContext
{
    // It is also possible to (privately) inherit from this class.
    MyVisionUser vision;
public:
    UserComponent3() : TaskContext("User3"), vision(this)
    {
        // requires the service exactly two times:
        this->requires(vision)["2"];
        // OR any number of times:
        // this->requires(vision)["*"];
        // OR a range:
        // this->requires(vision)["0..2"];
    }

    bool configureHook()
    {
        if ( ! vision.ready() ) {
            // only true if both are ready.
            return false;
        }
        return true;
    }

    void updateHook()
    {
        // blocking:
        vision[0].foo_function.call("name", 3.14);
        vision[1].foo_function.call("name", 3.14);
        // or iterate:
        for (int i = 0; i != vision.interfaces(); ++i)
            vision[i].foo_function.call("name", 3.14);
        // etc. see updateHook() above.

        /* Scripting equivalent:
         * for(int i=0; i != vision.interfaces(); ++i)
         *    do vision[i].foo_function.call("name",3.14);
         */
    }
};
For upgrading, we have:
More details are split into several child pages.
RTT 2.0 has unified events, commands and methods in the Operation interface.
This is how a function is added to the component interface:
#include <rtt/Operation.hpp>
using namespace RTT;

class MyTask : public RTT::TaskContext
{
public:
    string getType() const { return "SpecialTypeB"; }
    // ...
    MyTask(std::string name) : RTT::TaskContext(name)
    {
        // Add the C++ method to the operation interface:
        addOperation( "getType", &MyTask::getType, this )
            .doc("Read out the name of the system.");
    }
    // ...
};

MyTask mytask("ATask");
The writer of the component has written a function 'getType()' which returns a string that other components may need. In order to add this operation to the Component's interface, you use the TaskContext's addOperation function. This is a short-hand notation for:
// Add the C++ method to the operation interface:
provides()->addOperation( "getType", &MyTask::getType, this )
    .doc("Read out the name of the system.");
Meaning that we add 'getType()' to the component's main interface (also called 'this' interface). addOperation takes a number of parameters: the first one is always the name, the second one a pointer to the function and the third one is the pointer to the object of that function, in our case, MyTask itself. In case the function is a C function, the third parameter may be omitted.
If you don't want to pollute the component's this interface, put the operation in a sub-service:
// Add the C++ method objects to the operation interface:
provides("type_interface")
    ->addOperation( "getType", &MyTask::getType, this )
    .doc("Read out the name of the system.");
The code above dynamically creates a new service object 'type_interface' to which one operation is added: 'getType()'. This is similar to creating an object-oriented interface with one function in it.
Your code needs a few things before it can call a component's operation:
* the location of the component providing the operation (for example, a peer name to pass to getPeer()),
* the name of the operation,
* the signature (function type) of the operation.
Combining these three givens, we must create an OperationCaller object that will manage our call to 'getType':
#include <rtt/OperationCaller.hpp>
//...
// In some other component:
TaskContext* a_task_ptr = getPeer("ATask");

// create an OperationCaller<Signature> object 'getType':
OperationCaller<string(void)> getType
    = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'

// Call 'getType' of ATask:
cout << getType() << endl;
A lot of work for calling a function, no? The advantages you get are these:
var string result = "";
set result = ATask.getType();
// Add the C++ method to the operation interface.
// Execute the function in the component's thread:
provides("type_interface")
    ->addOperation( "getType", &MyTask::getType, this, OwnThread )
    .doc("Read out the name of the system.");
As a result, when getType() is called, the call is queued for execution in the ATask component, executed by its ExecutionEngine, and when done, the caller resumes. The caller (i.e. the OperationCaller object) does not notice this change of execution path: it waits for the getType function to complete and returns the result.
// This first part is equal to the example above:
#include <rtt/OperationCaller.hpp>
//...
// In some other component:
TaskContext* a_task_ptr = getPeer("ATask");

// create an OperationCaller<Signature> object 'getType':
OperationCaller<string(void)> getType
    = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'

// Here it is different:
// Send 'getType' to ATask:
SendHandle<string(void)> sh = getType.send();

// Collect the return value 'some time later':
sh.collect();              // blocks until getType() completes
cout << sh.retn() << endl; // prints the return value of getType().
Other variations on the use of SendHandle are possible, for example polling for the result or retrieving more than one result if the arguments are passed by reference. See the Component Builder's Manual for more details.
RTT 2.0 has a more powerful, simple and flexible system to exchange data between components.
Every instance of ReadDataPort and ReadBufferPort must be renamed to 'InputPort' and every instance of WriteDataPort and WriteBufferPort must be renamed to OutputPort. 'DataPort' and 'BufferPort' must be renamed according to their function.
The rtt2-converter tool will do this renaming for you, or at least, make its best guess.
InputPort and OutputPort have a read() and a write() function respectively:
using namespace RTT;
double data;

InputPort<double> in("name");
FlowStatus fs = in.read( data ); // was: Get( data ) or Pull( data ) in 1.x

OutputPort<double> out("name");
out.write( data );               // was: Set( data ) or Push( data ) in 1.x
As you can see, Get() and Pull() are mapped to read(), Set() and Push() to write(). read() returns a FlowStatus object, which can be NoData, OldData, NewData. write() does not return a value (send and forget).
Writing to an unconnected port is not an error. Reading from an unconnected (or never-written-to) port returns NoData.
Your component can no longer see if a connection is buffered or not. It doesn't need to know. It can always inspect the return value of read() to see if a new data sample arrived or not. In case multiple data samples are ready to read in a buffer, read() will fetch each sample in order and each time return NewData, until the buffer is empty, in which case it returns the last data sample read with 'OldData'.
Whether data exchange is buffered or not is now determined by 'Connection Policies', that is, RTT::ConnPolicy objects. This makes connecting components very flexible, since you only need to specify the policy at deployment time. It is possible to define a default policy for each input port, but counting on a certain default is not recommended when building serious applications. See the RTT::ConnPolicy API documentation for which policies are available and what the defaults are.
The DeploymentComponent has been extended such that it can create new-style connections. You only need to add sections to your XML files, you don't need to change existing ones. The sections to add have the form:
<!-- You can set per data flow connection policies -->
<struct name="SensorValuesConnection" type="ConnPolicy">
  <!-- Type is 'shared data' or buffered: DATA: 0 , BUFFER: 1 -->
  <simple name="type" type="short"><value>1</value></simple>
  <!-- buffer size is 12 -->
  <simple name="size" type="short"><value>12</value></simple>
</struct>
<!-- You can repeat this struct for each connection below ... -->
Where 'SensorValuesConnection' is a connection between data flow ports, like in the traditional 1.x way.
Consult the deployment component manual for all allowed ConnPolicy XML options.
std::vector<double> joints(10, 0.0);
OutputPort<std::vector<double> > out("out");

// initialises all current and future connections to hold a vector of size 10:
out.setDataSample( joints );

// modify joint values... add connections etc.

out.write( joints ); // always hard real-time if joints.size() <= 10
As the example shows, a single call to setDataSample() is enough. This is not the same as write() ! A write() will deliver data to each connected InputPort, a setDataSample() will only initialize the connections, but no actual writing is done. Be warned that setDataSample() may clear all data already in a connection, so it is better to call it before any data is written to the OutputPort.
In case your data type is always hard real-time copyable, there is no need to call setDataSample. For example:
// KDL::Frame never (de-)allocates memory during copy or construction:
KDL::Frame f = ... ;
OutputPort< KDL::Frame > out("out");
out.write( f ); // always hard real-time
This page lists the renamings/relocations done on the RTT 2.0 branch (available through gitorious on http://www.gitorious.org/orocos-toolchain/rtt/commits/master) and also offers the conversion scripts to do the renaming.
A note about headers/namespaces: If a header is in rtt/extras, the namespace will be RTT::extras and vice versa. A header in rtt/ has namespace RTT. Note: the OS namespace has been renamed to lowercase os. The Corba namespace has been renamed to lowercase corba.
mv to-rtt-2.0.pl.txt to-rtt-2.0.pl
chmod a+x to-rtt-2.0.pl
./to-rtt-2.0.pl $(find . -name "*.cpp" -o -name "*.hpp")
Minor manual fixes may be expected after running this script. Be sure to have your sources version controlled, such that you can first test what the script does before permanently changing files.
tar xjf rtt2-converter-1.1.tar.bz2
cd rtt2-converter-1.1
make
./rtt2-converter Component.hpp Component.cpp
The script preferably takes both the header and the implementation of your component, but it also accepts a single file. It needs both the class definition and the implementation to make its best guesses on how to convert. If all your code is in a single .hpp or .cpp file, you only need to specify that file. If nothing is to be done, the file remains unchanged, so you may 'accidentally' feed it non-Orocos files, or the same file twice.
To run this on a large codebase, you can do something similar to:
# Calls: ./rtt2-converter Component.hpp Component.cpp for each file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.cpp"); do
    ./rtt2-converter $(dirname $i)/$(basename $i cpp)hpp $i
done

# Calls: ./rtt2-converter Component.cpp for each .cpp file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.cpp"); do
    ./rtt2-converter $i
done

# Calls: ./rtt2-converter Component.hpp for each .hpp file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.hpp"); do
    ./rtt2-converter $i
done
RTT 1.0 | RTT 2.0 | Comments |
RTT::PeriodicActivity | RTT::extras::PeriodicActivity | Use of RTT::Activity is preferred |
RTT::Timer | RTT::os::Timer | |
RTT::SlaveActivity, SequentialActivity, SimulationThread, IRQActivity, FileDescriptorActivity, EventDrivenActivity, SimulationActivity, ConfigurationInterface, Configurator, TimerThread | RTT::extras::... | EventDrivenActivity has been removed. |
RTT::OS::SingleThread, RTT::OS::PeriodicThread | RTT::os::Thread | Can do periodic and non-periodic and switch at run-time. |
RTT::TimeService | RTT::os::TimeService | |
RTT::DataPort,BufferPort | RTT::InputPort,RTT::OutputPort | Buffered/unbuffered is decided upon connection time. Only input/output is hardcoded. |
RTT::types() | RTT::types::Types() | The function name collided with the namespace name |
RTT::Toolkit* | RTT::types::Typekit* | More logical name |
RTT::Command | RTT::Operation | Create an 'OwnThread' operation type |
RTT::Method | RTT::Operation | Create a 'ClientThread' operation type |
RTT::Event | RTT::internal::Signal | Events are replaced by OutputPort or Operation; the Signal class is a synchronous-only callback manager. |
commands()->getCommand<T>() | provides()->getOperation() | get a provided operation, no template argument required |
commands()->addCommand() | provides()->addOperation().doc("Description") | add a provided operation, document using .doc("doc").doc("a1","a1 doc")... |
methods()->getMethod<T>() | provides()->getOperation() | get a provided operation, no template argument required |
methods()->addMethod() | provides()->addOperation().doc("Description") | add a provided operation, document using .doc("doc").doc("a1","a1 doc")... |
attributes()->getAttribute<T>() | provides()->getAttribute() | get a provided attribute, no template argument required |
attributes()->addAttribute(&a) | provides()->addAttribute(a) | add a provided attribute, passed by reference, can now also add a normal member variable. |
properties()->getProperty<T>() | provides()->getProperty() | get a provided property, no template argument required |
properties()->addProperty(&p) | provides()->addProperty(p).doc("Description") | add a provided property, passed by reference, can now also add a normal member variable. |
events()->getEvent<T>() | ports()->getPort() OR provides()->getOperation<T>() | Event<T> was replaced by OutputPort<T> or Operation<T> |
ports()->addPort(&port, "Description") | ports()->addPort( port ).doc("Description") | Takes argument by reference and documents using .doc("text"). |
RTT 1.0 | RTT 2.0 | Comments |
scripting() | getProvider<Scripting>("scripting") | Returns a RTT::Scripting object. Also add #include <rtt/scripting/Scripting.hpp> |
RTT 1.0 | RTT 2.0 | Comments |
marshalling() | getProvider<Marshalling>("marshalling") | Returns a RTT::Marshalling object. Also add #include <rtt/marsh/Marshalling.hpp> |
RTT::Marshaller | RTT::marsh::MarshallingInterface | Normally not needed for normal users. |
RTT::Demarshaller | RTT::marsh::DemarshallingInterface | Normally not needed for normal users. |
RTT 1.0 | RTT 2.0 | Comments |
RTT::Corba::* | RTT::corba::C* | Each proxy class or idl interface starts with a 'C' to avoid confusion with the same named RTT C++ classes |
RTT::Corba::ControlTaskServer | RTT::corba::TaskContextServer | renamed for consistency. |
RTT::Corba::ControlTaskProxy | RTT::corba::TaskContextProxy | renamed for consistency. |
RTT::Corba::Method,Command | RTT::corba::COperationRepository,CSendHandle | No need to create these helper objects, call COperationRepository directly |
RTT::Corba::AttributeInterface,Expression,AssignableExpression | RTT::corba::CAttributeRepository | No need to create expression objects, query/use CAttributeRepository directly. |
Attachment | Size |
---|---|
class-dump.txt | 7.89 KB |
headers.txt | 10.17 KB |
to-rtt-2.0.pl.txt | 4.78 KB |
RTT 2.0 has dropped the support for the RTT::Command class. It has been replaced by the more powerful Methods vs Operations construct.
The rtt2-converter tool will automatically convert your Commands to Method/Operation pairs. Here's what happens:
// RTT 1.x code:
class ATask: public TaskContext
{
    bool prepareForUse();
    bool prepareForUseCompleted() const;
public:
    ATask(): TaskContext("ATask")
    {
        this->commands()->addCommand(
            RTT::command("prepareForUse",
                         &ATask::prepareForUse,
                         &ATask::prepareForUseCompleted, this),
            "prepares the robot for use");
    }
};
After:
// After rtt2-converter: RTT 2.x code:
class ATask: public TaskContext
{
    bool prepareForUse();
    bool prepareForUseCompleted() const;
public:
    ATask(): TaskContext("ATask")
    {
        this->addOperation("prepareForUse", &ATask::prepareForUse, this, RTT::OwnThread)
            .doc("prepares the robot for use");
        this->addOperation("prepareForUseDone", &ATask::prepareForUseCompleted, this, RTT::ClientThread)
            .doc("Returns true when prepareForUse is done.");
    }
};
What has happened is that the RTT 1.0 Command is split into two RTT 2.0 Operations: "prepareForUse" and "prepareForUseDone". The first is executed in the component's thread ('OwnThread'), analogous to the RTT::Command semantics. The second function, prepareForUseDone, is executed in the caller's thread ('ClientThread'), also analogous to the behaviour of the RTT::Command's completion condition.
The old behaviour can be simulated at the caller's side by these constructs:
Command<bool(void)> prepare
    = atask->commands()->getCommand<bool(void)>("prepareForUse");
prepare(); // sends the Command object.
while (prepare.done() == false)
    sleep(1);
In RTT 2.0, the caller's code looks up the prepareForUse Operation and then 'sends' the request to the ATask Component. Optionally, the completion condition is looked up manually and polled for as well:
Method<bool(void)> prepare     = atask->getOperation("prepareForUse");
Method<bool(void)> prepareDone = atask->getOperation("prepareForUseDone");

SendHandle h = prepare.send();
while ( !h.collectIfDone() && prepareDone() == false )
    sleep(1);
The collectIfDone() and prepareDone() checks are now made explicit, while they were called implicitly in RTT 1.x's prepare.done() function. Writing your code like this will cause the exact same behaviour in RTT 2.0 as in RTT 1.x.
In case you don't care for the 'done' condition, the above code may just be simplified to:
Method<bool(void)> prepare = atask->getOperation("prepareForUse");
prepare.send();
In that case, you may ignore the SendHandle, and the object will cleanup itself at the appropriate time.
Scripting was very convenient for using commands. A typical RTT 1.x script would have looked like:
program foo {
    do atask.prepareForUse();
    // ... rest of the code
}
To have the same behaviour in RTT 2.x using Operations, you need to make the 'polling' explicit. Furthermore, you need to 'send' the method to indicate that you do not wish to block:
program foo {
    var SendHandle h;
    set h = atask.prepareForUse.send();
    while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
        yield;
    // ... rest of the code
}
function prepare_command() {
    var SendHandle h;
    set h = atask.prepareForUse.send();
    while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
        yield;
}

program foo {
    call prepare_command(); // note: using 'call'
    // ... rest of the code
}
export function prepare_command() // note: we must export the function
{
    var SendHandle h;
    set h = atask.prepareForUse.send();
    while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
        yield;
}

program foo {
    var SendHandle h;
    set h = prepare_command(); // note: not using 'call'
    while (h.collectIfDone() == false)
        yield;
    // ... rest of the code
}
program foo {
    prepare_command.call(); // (1) calls and blocks for the result.
    prepare_command.send(); // (2) send() and forget.
    prepare_command.poll(); // (3) send() and poll with collectIfDone().
}
RTT 2.0 no longer supports the RTT::Event class. This page explains how to adapt your code for this.
Output ports differ from RTT::Event in that they take only one value as an argument. If your 1.x Event had multiple arguments, they need to be combined into a new struct that you create yourself. Both sender and receiver must know and understand this struct.
For the simple case, when your Event only had one argument:
// RTT 1.x
class MyTask: public TaskContext
{
    RTT::Event<void(int)> samples_processed;

    MyTask() : TaskContext("task"),
               samples_processed("samples_processed")
    {
        events()->addEvent( &samples_processed );
    }
    // ... your other code here...
};
// RTT 2.x
class MyTask: public TaskContext
{
    RTT::OutputPort<int> samples_processed;

    MyTask() : TaskContext("task"),
               samples_processed("samples_processed")
    {
        ports()->addPort( samples_processed ); // note: RTT 2.x dropped the '&'
    }
    // ... your other code here...
};
Note: the rtt2-converter tool does not do this replacement, see the Operation section below.
Components wishing to receive the number of samples processed, need to define an InputPort<int> and connect their input port to the output port above.
StateMachine SM {
    var int total = 0;

    initial state INIT {
        entry { }
        // Reads samples_processed and stores the result in 'total'.
        // Only if the port returns 'NewData' will this branch be evaluated.
        transition samples_processed( total ) if (total > 0 ) select PROCESSING;
    }

    state PROCESSING {
        entry { /* processing code, use 'total' */ }
    }

    final state FINI {}
}
The transition from state INIT to state PROCESSING will only be taken if samples_processed.read( total ) == NewData and if total > 0. Note: when your TaskContext is executing periodically, the read( total ) statement is re-tried each cycle, and 'total' is overwritten in case of OldData or NewData. Only if the connection of samples_processed is completely empty (never written to, or reset) will 'total' not be overwritten.
Operations can take the same signature as RTT::Event. The difference is that only the component itself can attach callbacks to an Operation, by means of the signals() function.
For example:
// RTT 1.x
class MyTask: public TaskContext
{
    RTT::Event<void(int, double)> samples_processed;

    MyTask() : TaskContext("task"),
               samples_processed("samples_processed")
    {
        events()->addEvent( &samples_processed );
    }
    // ... your other code here...
};
// RTT 2.x
class MyTask: public TaskContext
{
    RTT::Operation<void(int,double)> samples_processed;

    MyTask() : TaskContext("task"),
               samples_processed("samples_processed")
    {
        provides()->addOperation( samples_processed ); // note: RTT 2.x dropped the '&'

        // Attach a callback handler to the operation object:
        Handle h = samples_processed.signals( &MyTask::react_foo, this );
    }
    // ... your other code here...

    void react_foo(int i, double d)
    {
        cout << i << ", " << d << endl;
    }
};
Note: the rtt2-converter tool performs only this replacement automatically, i.e. it assumes that all your Event objects were only used in the local component. See the RTT 2.0 Renaming table for this tool.
Since an Operation object is always local to the component, no other components can attach callbacks. If your Operation returns a value, the callback function needs to return one too, but that value is ignored and not received by the caller.
The callback will be executed in the same thread as the operation's function (ie OwnThread vs ClientThread).
StateMachine SM {
    var int total = 0;

    initial state INIT {
        entry { }
        // Reacts to the samples_processed operation being invoked
        // and stores the argument in 'total'. If the Operation takes multiple
        // arguments, multiple arguments must be given here as well.
        transition samples_processed( total ) if (total > 0 ) select PROCESSING;
    }

    state PROCESSING {
        entry { /* processing code, use 'total' */ }
    }

    final state FINI {}
}
The transition from state INIT to state PROCESSING will only be taken if samples_processed( total ) was called by another component (using a Method object, see Methods vs Operations) and if the argument in that call is > 0. Note: when samples_processed returns a value, your script cannot influence that return value, since it is determined by the function tied to the Operation, not by the signal handlers.
NOTE: RTT 2.0.0-beta1 does not yet support the script syntax.
.. work in progress ..
This page describes how you can configure Eclipse in order to write Orocos applications.
Don't continue if you have an Eclipse version older than Helios (3.6).
Eclipse is a great tool, but some Linux systems are not well prepared to use it. Follow these instructions carefully to get the most out of it.
java -version
java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
java -version
java version "1.6.0_0"
OpenJDK Runtime Environment (IcedTea6 1.6.1) (6b16-1.6.1-3ubuntu3)
OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)
Note that you should not see any text mentioning 'gij' or 'kaffe'. Ubuntu/Debian users can install Sun Java by doing:
sudo aptitude install sun-java6-jre
sudo update-alternatives --config java
... select '/usr/lib/jvm/java-6-sun/jre/bin/java'
In case of instability or misbehaving windows/buttons, try using the Sun (now Oracle) version. Also google for the 'export GDK_NATIVE_WINDOWS=1' workaround in case you use an Eclipse version before Helios on a 2009-or-newer Linux distribution.
If you're changing Orocos code, also download the Eclipse indentation file attached to this post and import it in the 'Coding Style' tab of your project Preferences.
Add http://download.eclipse.org/egit/updates to your update sites of Eclipse (Help -> Software updates...).
If you have an existing clone (checked out with plain old git), you can 'import' it by first importing the git repository directory as a project, and then right-clicking the project -> Team -> Share Project. Follow the dialogs. There is some confusion about what to type in the location box. In older versions, you would need to type:
file:///path/to/repository
Note the three slashes (///).
Add http://subclipse.tigris.org/update_1.4.x to your update sites of Eclipse (Help -> Software updates...).
If you have an existing checkout, you can 'import' it by first importing the checkout directory as a project and then right click the project -> Team -> Share Project
Attachment | Size |
---|---|
orocos-coding-style.xml | 15.51 KB |
There are also build instructions for building some of these packages manually here: How to build Debian packages
The rest of this page mixes installing Java and building Orocos toolchain sources. In case you used the Debian/Ubuntu packages above, only do the Java setup.
Do the following in Synaptic at the same time:
* sun-java6-bin
* sun-java6-jre
* sun-java6-plugin
* sun-java6-source
Install the following:
Using Synaptic, get all the omniORB packages that are not marked as transitional or dbg and have the same version number. (Hint: search for omniorb, then sort by version.) Include the lib* packages too.
I do not like the bootstrap/autoproj procedure of building Orocos. I prefer using the standard build instructions found in the RTT Installation Guide.
Errata in RTT Installation Guide:
Make sure to enable CORBA by using this cmake command:
cmake .. -DOROCOS_TARGET=gnulinux -DENABLE_CORBA=ON -DCORBA_IMPLEMENTATION=OMNIORB
OCL
Install:
cd log4cpp
mkdir build
cd build
../configure
make
make install
Now:
cd ocl
mkdir build
cd build
cmake ..
make
make install
JDK bug fix for JDK 6.0_18 and above on 64-bit systems:
Put this in your eclipse.ini file under -vmargs:
-XX:-UseCompressedOops
Get the Eclipse IDE for C/C++ Developers. Unzip it somewhere and then do:
cd eclipse
./eclipse
You can use Orocos packages in Eclipse easily. The easiest way is when you're using the ROS build system, since that allows you to generate an Eclipse project, with all the correct settings. If you don't use ROS, you can import it too, but you'll have to add the paths to headers etc manually.
cd ~/ros
rosrun ocl orocreate-pkg orocosworld
cd orocosworld
make eclipse-project
Then go to Eclipse -> File -> Import -> Existing Project into Workspace and then follow the wizard.
When the project is loaded, give it some time to index all header files. All include paths and build settings in Eclipse will be set up for you.
You must have sourced env.sh !
cd ~/src
orocreate-pkg orocosworld
cd orocosworld
make
Then go to Eclipse -> File -> New -> Makefile Project with Existing Code and complete the wizard page.
The next step you need to do is to add the include paths to RTT and/or OCL and any other dependency in the C++ configuration options of your project preferences.
===Getting started with git===
For a very good git introduction, see Using git without feeling stupid part 1 and part 2!
It's a ten-minute read which really pays off.
You can use Eclipse (see Using Eclipse And Orocos), plain git (on Linux), or TortoiseGit (on Windows).
SVN users can use this reference for learning the first commands: http://git.or.cz/course/svn.html
The git repositories of the Orocos Toolchain (v2.x only) are located at http://github.com/orocos-toolchain .
Check out the rtt or ocl repositories and submit patches by using
git clone git://github.com/orocos-toolchain/rtt.git
cd rtt
...hack hack hack on the master branch...
git add <changed files>
git commit
... repeat ...
Finally:
git format-patch origin/master
And send out the resulting patch(es).
If origin/master moved forward, then do
git fetch origin
git rebase origin/master
Fetch copies the remote changes to your local repository, but doesn't update your current branch. Rebase first removes your patches, then applies the fetched patches, and then re-applies your personal patches on top of the fetched changes. In case of conflicts, see the tutorial at the top of this page or man git-rebase.
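The fetch-then-rebase workflow can be tried safely on throwaway repositories. In this sketch, the repository names, paths, and commit messages are all made up for illustration; it builds a fake 'upstream', clones it, lets upstream move forward, and then replays a local patch on top:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# 1. A stand-in for the remote repository:
git -c init.defaultBranch=master init -q upstream
git -C upstream config user.email you@example.com
git -C upstream config user.name You
echo a > upstream/file
git -C upstream add file
git -C upstream commit -qm "upstream: initial"

# 2. Your local clone, with one local patch on master:
git clone -q upstream work
git -C work config user.email you@example.com
git -C work config user.name You
echo local > work/mine
git -C work add mine
git -C work commit -qm "local: my patch"

# 3. Meanwhile, origin/master moves forward:
echo b >> upstream/file
git -C upstream commit -qam "upstream: moved forward"

# 4. fetch copies the remote changes; rebase replays your patch on top:
git -C work fetch -q origin
git -C work rebase -q origin/master
git -C work log --oneline
```

After the rebase, the log shows your patch sitting on top of the new upstream commit, which is exactly the history you want before running git format-patch.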
The Orocos Toolchain v2.X is the merging of the RTT, OCL and other tools that you require to build Orocos applications.
We are gradually migrating the wiki pages of the RTT/OCL to the Toolchain Wiki. All wiki pages under RTT/OCL are considered to be for RTT/OCL 1.x versions.
What you find below is only for the 2.x releases.
This is extremely easy with the orocreate-pkg script; see Getting started.
You can use packages in two ways:
# User provided files:
## Package directory: .../packagename/manifest.xml, Makefile, CMakeLists.txt,...
## Sources: .../packagename/src/*.cpp
## Headers: .../packagename/include/packagename/*.hpp
# Build results:
## Built Component libraries for 'packagename': .../packagename/lib/orocos/gnulinux/*.so|dll|...
## Built Plugin libraries for 'packagename': .../packagename/lib/orocos/gnulinux/plugins/*.so|dll|...
## Type libraries for 'packagename': .../packagename/lib/orocos/gnulinux/types/*.so|dll|...
## Build information for 'packagename': .../packagename/packagename-gnulinux.pc
For allowing multi-target builds, the libraries are put in the lib/orocos/targetname/ directory in order to avoid loading a library for a different target. In the example above, the targetname is gnulinux.
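As a sketch, the per-target layout above can be reproduced with plain shell commands (the package name myrobot and the library names are made up for illustration):

```shell
# Create the multi-target layout for a hypothetical package 'myrobot'.
# Each target (gnulinux, xenomai, ...) gets its own subdirectory, so a
# deployer built for one target never loads a library built for another.
pkg=myrobot
for target in gnulinux xenomai; do
  mkdir -p "$pkg/lib/orocos/$target/plugins" "$pkg/lib/orocos/$target/types"
  touch "$pkg/lib/orocos/$target/lib$pkg-$target.so"
done

# A gnulinux loader only scans the gnulinux subdirectory:
find "$pkg/lib/orocos/gnulinux" -name '*.so'
```

The find output lists only the gnulinux library, which is exactly the isolation the target subdirectories provide.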
When you use the UseOrocos.cmake macros (Orocos Toolchain 2.3.0 or later), linking with dependees will be done automatically for you.
You may add a link instruction using the classical CMake syntax:
orocos_component( mycomponent ComponentSource.cpp )
target_link_libraries( mycomponent ${YOUR_LIBRARY} )
The component and plugin loaders of RTT will search your ROS_PACKAGE_PATH, and its target subdirectory for components and plugins.
You can then import the package in the deployer application by using:
import("packagename")
# Install dir (the prefix): /opt/orocos
# Headers: /opt/orocos/include/orocos/gnulinux/packagename/*.hpp
# Component libraries for 'packagename': /opt/orocos/lib/orocos/gnulinux/packagename/*.so|dll|...
# Plugin libraries for 'packagename': /opt/orocos/lib/orocos/gnulinux/packagename/plugins/*.so|dll|...
# Type libraries for 'packagename': /opt/orocos/lib/orocos/gnulinux/packagename/types/*.so|dll|...
# Build information for 'packagename': /opt/orocos/lib/pkgconfig/packagename-gnulinux.pc
For allowing multi-target installs, the packages will be installed in orocos/targetname/packagename (for example: orocos/xenomai/ocl) in order to avoid loading a library for a different target. In the example above, the targetname is gnulinux.
You may add a link instruction using the classical CMake syntax:
orocos_component( mycomponent ComponentSource.cpp )
target_link_libraries( mycomponent -lfoobar )
RTT_COMPONENT_PATH=/opt/orocos/lib/orocos
export RTT_COMPONENT_PATH
The component and plugin loaders of RTT will search this directory, and its target subdirectory for components and plugins. So there is no need to encode the target name in the RTT_COMPONENT_PATH (but you may do so if it is required for some case).
You can then import the package in the deployer application by using:
import("packagename")
The toolchain is a set of libraries and programs that you must compile on your computer in order to build Orocos applications. In case you are on a Linux system, you can use the bootstrap.sh script, which does this for you.
After installation, these libraries are available:
These programs are available:
Orocos component libraries live in packages. You need to understand the concept of packages in Orocos in order to be able to create and use components. See more about Component Packages.
Your primary reading material for creating components is the Orocos Components Manual. A component is compiled into a shared library (.so or .dll).
Use the orocreate-pkg script to create a new package that contains a ready-to-compile Orocos component, which you can extend or play with. See Using orocreate-pkg for all details. (Script available from Toolchain version 2.1.1 on).
Alternatively, the oroGen tool allows you to create components with a minimum knowledge of the RTT API.
The DeploymentComponent loads XML files or scripts and dynamically creates, configures and starts components in a single process. See the Orocos Deployment Manual
The TaskBrowser is our primary interface with a running application. See the Orocos TaskBrowser Manual
$ cd ~/orocos
$ orocreate-pkg myrobot component
Using templates at /home/kaltan/src/git/orocos-toolchain/ocl/scripts/pkg/templates...
Package myrobot created in directory /home/kaltan/src/git/orocos-toolchain/myproject/myrobot
$ cd myrobot
$ ls
CMakeLists.txt  Makefile  manifest.xml  src

# Standard build (installs in the same directory as Orocos Toolchain):
$ mkdir build ; cd build
$ cmake .. -DCMAKE_INSTALL_PREFIX=orocos
$ make install

# OR: ROS build:
$ make
You can modify the .cpp/.hpp files and the CMakeLists.txt file to adapt them to your needs. See orocreate-pkg --help for other options which allow you to generate other files.
All files that are generated may be modified by you, except for all files in the typekit directory. That directory is generated during a build and under the control of the Orocos typegen tool, from the orogen package.
After the 'make install' step, make sure that your RTT_COMPONENT_PATH includes the installation directory (or that you used -DCMAKE_INSTALL_PREFIX=orocos) and then start the deployer for your platform:
$ deployer-gnulinux
   Switched to : Deployer
 This console reader allows you to browse and manipulate TaskContexts.
 You can type in an operation, expression, create or change variables.
 (type 'help' for instructions and 'ls' for context info)
   TAB completion and HISTORY is available ('bash' like)
Deployer [S]> import("myrobot")
 = true
Deployer [S]> displayComponentTypes
I can create the following component types:
   Myrobot
   OCL::ConsoleReporting
   OCL::FileReporting
   OCL::HMIConsoleOutput
   OCL::HelloWorld
   OCL::TcpReporting
   OCL::TimerComponent
 = (void)
Deployer [S]> loadComponent("TheRobot","Myrobot")
Myrobot constructed !
 = true
Deployer [S]> cd TheRobot
   Switched to : TheRobot
TheRobot [S]> ls
 Listing TaskContext TheRobot[S] :
 Configuration Properties: (none)
 Provided Interface:
  Attributes   : (none)
  Operations   : activate cleanup configure error getPeriod inFatalError inRunTimeError isActive isConfigured isRunning setPeriod start stop trigger update
 Data Flow Ports: (none)
 Services: (none)
 Requires Operations : (none)
 Requests Services   : (none)
 Peers        : (none)
You now need to consult the Component Builder's Manual for instructions on how to use and extend your Orocos component. All relevant documentation is available on the Toolchain Reference Manuals page.
The generated package contains a manifest.xml file, and the CMakeLists.txt file will call rosbuild_init() if ROS_ROOT has been set; it also sets LIBRARY_OUTPUT_PATH to packagename/lib/orocos such that the ROS tools can find the libraries and the package itself. The ROS integration is mediated by the UseOrocos-RTT.cmake file, which gets included at the top of the generated CMakeLists.txt file and is installed as part of the RTT. The Makefile is rosmake compatible.
The OCL deployer knows about ROS packages and can import Orocos components (and their dependencies) from them once your ROS_PACKAGE_PATH has been correctly set.
Extracted from the instructions on http://www.ros.org/wiki/groovy/Installation/OSX/MacPorts/Repository
echo 'export PATH=/opt/local/bin:/opt/local/sbin:$PATH' >> ~/.bash_profile
echo 'export LIBRARY_PATH=/opt/local/lib:$LIBRARY_PATH' >> ~/.bash_profile
cd ~
git clone https://github.com/smits/ros-macports.git
sudo sh -c 'echo file:///Users/user/ros-macports >> /opt/local/etc/macports/sources.conf'
sudo port sync
sudo port install python27
sudo port select --set python python27
sudo port install boost libxslt lua51 ncurses pkgconfig readline netcdf netcdf-cxx omniORB p5-xml-xpath ros-hydro-catkin py27-sip ros-hydro-cmake_modules eigen3 dyncall ruby20
sudo port select --set nosetests nosetests27
sudo port select --set ruby ruby20
ruby --version
which gem
sudo gem install facets nokogiri
lua -v
sudo port uninstall lua
git clone https://github.com/gccxml/gccxml
cd gccxml
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/local
make
sudo make install
mkdir -p ~/orocos_ws/src
cd ~/orocos_ws/src
sudo port install py27-wstool
wstool init .
curl https://gist.githubusercontent.com/smits/9950798/raw | wstool merge -
wstool update
cd orocos_toolchain
git submodule foreach git checkout toolchain-2.8
cd ~/orocos_ws
source /opt/local/setup.bash
sudo /opt/local/env.sh catkin_make_isolated --install-space /opt/orocos --install --cmake-args -DENABLE_CORBA=TRUE -DCORBA_IMPLEMENTATION=OMNIORB -DRUBY_INCLUDE_DIR=/opt/local/include/ruby-2.0.0 -DRUBY_CONFIG_INCLUDE_DIR=/opt/local/include/ruby-2.0.0/x86_64-darwin13 -DRUBY_LIBRARY=/opt/local/lib/libruby2.0.dylib -DCMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH;/opt/local"
source /opt/orocos/setup.bash
echo 'export GCCXML_COMPILER=g++-mp-4.3' >> ~/.bash_profile
These exercises are hosted on GitHub.
You need to have the Component Builder's Manual (see Toolchain Reference Manuals) at hand to complete these exercises.
Also take a look at the Toolchain Reference Manuals for in-depth explanations of the deployment XML format and the different transports (CORBA, MQueue).
You'll need to have the Scripting Chapter of the Component Builder's Manual at hand for clarifications on syntax and execution semantics.
path("/opt/orocos/lib/orocos")    // Path to where components are located [1]
import("myproject")               // imports a specific project in the path [2]
import("ocl")                     // imports ocl from the path
require("print")                  // loads the 'print' service globally. [3]

loadComponent("HMI1","OCL::HMIComponent")           // create a new HMI component [4]
loadComponent("Controller1","MyProjectController")  // create a new controller
loadComponent("Test1","TaskContext")                // creates an empty test component
You can test this code by doing:
deployer-gnulinux -s startup.ops
deployer-gnulinux
...
Deployer [S]> help runScript

 runScript( string const& File ) : bool
   Runs a script.
   File : An Orocos program script.

Deployer [S]> runScript("startup.ops")
The first line of startup.ops ([1]) extends the standard search path for components. Every component library that resides directly in one of the path directories will be discovered by this statement, but the paths are not searched recursively. To load components from subdirectories of a path directory, use the import statement ([2]). In our example, it looks for the myproject and ocl directories in the component path. All libraries and plugins in these directories will be loaded as well.
After importing, we can create components using loadComponent ([4]). The first argument is the name of the component instance, the second argument is the class type of the component. When these lines are executed, 3 new components have been created: HMI1, Controller1 and Test1.
Finally, the line require("print") loads the printing service globally such that your script can use the 'print.ln("text")' function. See help print in the TaskBrowser after you typed require("print").
Now extend the script with the lines below. They create connection policy objects and connect ports between components.
// See the Doxygen API documentation of RTT for the fields of this struct:
var ConnPolicy cp_1

// set the fields of cp_1 to an application-specific value:
cp_1.type = BUFFER         // Use ''BUFFER'' or ''DATA''
cp_1.size = 10             // size of the buffer
cp_1.lock_policy = LOCKED  // Use ''LOCKED'', ''LOCK_FREE'' or ''UNSYNC''
// other fields exist too...

// Start connecting ports:
connect("HMI1.positions","Controller1.positions", cp_1)
cp_1 = ConnPolicy()        // reset to defaults (DATA, LOCK_FREE)
connect("HMI1.commands","Controller1.commands", cp_1)
// etc...
Connecting data ports is done using ConnPolicy structs that describe the properties of the connection to be formed. You may re-use the ConnPolicy variable, or create new ones for each connection you form. The Component Builder's Manual has more details on how the ConnPolicy struct influences how connections are configured.
Finally, we configure and start our components:
if ( HMI1.configure() == false )
   print.ln("HMI1 configuration failed!")
else {
   if ( Controller1.configure() == false )
      print.ln("Controller1 configuration failed!")
   else {
      HMI1.start()
      Controller1.start()
   }
}
StateMachine SetupShutdown {
   var bool do_cleanup = false, could_config = false;

   initial state setup {
      entry {
         // Configure components
         could_config = HMI1.configure() && Controller1.configure();
         if (could_config) {
            HMI1.start();
            Controller1.start();
         }
      }
      transitions {
         if do_cleanup then select shutdown;
         if could_config == false then select failure;
      }
   }

   state failure {
      entry {
         print.ln("Failed to configure a component!")
      }
   }

   final state shutdown {
      entry {
         // Cleanup B group
         HMI1.stop() ; Controller1.stop();
         HMI1.cleanup() ; Controller1.cleanup();
      }
   }
}

RootMachine SetupShutdown deployApp;

deployApp.activate()
deployApp.start()
State machines are explained in detail in the Scripting Chapter of the Component Builder's Manual.
This example connects an output port of one component with an input port of another component, where both components are distributed using the CORBA deployer application, deployer-corba.
This is your first XML file, for component A. We state that it runs as a Server and that it registers its name in the Naming Service. (See also Using CORBA and the CORBA transport reference manual for setting up naming services.)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentA" type="HMI">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-a.xml and start it with: deployer-corba -s component-a.xml
This is your second XML file, for component B. It has one port, cartesianPosition_desi, which we add to a connection named cartesianPosition_desi_conn. Next, we declare a 'proxy' to the Component A we created above and do the same for its port: we add it to the connection named cartesianPosition_desi_conn.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentB" type="Controller">
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>

  <!-- ComponentA is looked up using the 'CORBA' naming service -->
  <struct name="ComponentA" type="CORBA">
    <!-- We add ports of A to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>
</properties>
Save this file as component-b.xml and start it with deployer-corba -s component-b.xml
When component-b.xml is started, the port connections will be created. When ComponentA exits and re-starts, ComponentB will not notice this, and you'll need to restart the component-b xml file as well. Use a streaming based protocol (ROS, POSIX MQueue) in case you want to be more robust against such situations.
You can also form the connections in a third xml file, and make both components servers like this:
Starting ComponentA:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentA" type="HMI">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-a.xml and start it with: cdeployer -s component-a.xml
Starting ComponentB:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <struct name="ComponentB" type="Controller">
    <simple name="Server" type="boolean"><value>1</value></simple>
    <simple name="UseNamingService" type="boolean"><value>1</value></simple>
  </struct>
</properties>
Save this in component-b.xml and start it with: cdeployer -s component-b.xml
Creating two proxies, and the connection:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <!-- ComponentA is looked up using the 'CORBA' naming service -->
  <struct name="ComponentA" type="CORBA">
    <!-- We add ports of A to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>

  <!-- ComponentB is looked up using the 'CORBA' naming service -->
  <struct name="ComponentB" type="CORBA">
    <!-- We add ports of B to the connection -->
    <struct name="Ports" type="PropertyBag">
      <simple name="cartesianPosition_desi" type="string">
        <value>cartesianPosition_desi_conn</value></simple>
    </struct>
  </struct>
</properties>
Save this in connect-components.xml and start it with: deployer-corba -s connect-components.xml
See deployer and CORBA related Toolchain Reference Manuals.
These instructions are meant for the Orocos Toolchain version 2.4.0 or later.
mkdir ~/training
export ROS_PACKAGE_PATH=~/training:$ROS_PACKAGE_PATH
sudo apt-get install python-setuptools
sudo easy_install -U rosinstall
rosinstall ~/training orocos_exercises.rosinstall /opt/ros/electric
source ~/training/setup.bash
rosdep install youbot_common
rosdep install rFSM
rosmake youbot_common rtt_dot_service rttlua_completion
useOrocos(){
   source $HOME/training/setup.bash
   source $HOME/training/setup.sh
   source /opt/ros/electric/stacks/orocos_toolchain/env.sh
   setLUA
}
setLUA(){
   if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
   if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
   export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
   export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
   export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
   export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
   export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
}
useOrocos
roscd hello-1-task-execution
make
rosrun ocl deployer-gnulinux -s start.ops
var double a
a=1.1
var float64[] b(2)
b[0]=4.4
find_package(OROCOS-RTT REQUIRED rtt-marshalling)
# Defines: ${OROCOS-RTT_RTT-MARSHALLING_LIBRARY} and ${OROCOS-RTT_RTT-MARSHALLING_FOUND}
pre-2.3.2: You may only call find_package(OROCOS-RTT ... ) once. Subsequent calls to this macro return immediately, so you need to specify all plugins up-front. RTT versions from 2.3.2 on don't have this limitation.
After find_package found the RTT and its plugins, you must explicitly use the created CMake variables in order to have them in effect. This looks typically like:
# Link all targets AFTER THIS LINE with 'rtt-scripting' COMPONENT:
if ( OROCOS-RTT_RTT-SCRIPTING_FOUND )
  link_libraries( ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY} )
else( OROCOS-RTT_RTT-SCRIPTING_FOUND )
  message(SEND_ERROR "'rtt-scripting' not found !")
endif( OROCOS-RTT_RTT-SCRIPTING_FOUND )

# now define your components, libraries etc...
# ...

# Preferred way to link instead of the above method:
target_link_libraries( mycomponent ${OROCOS-RTT_RTT-SCRIPTING_LIBRARY} )
Or for linking with the standard provided CORBA transport:
# Link all targets AFTER THIS LINE with the CORBA transport (detected by default!):
if ( OROCOS-RTT_CORBA_FOUND )
  link_libraries( ${OROCOS-RTT_CORBA_LIBRARIES} )
else( OROCOS-RTT_CORBA_FOUND )
  message(SEND_ERROR "'CORBA' transport not found !")
endif( OROCOS-RTT_CORBA_FOUND )

# now define your components, libraries etc...
# ...

# Preferred way to link instead of the above method:
target_link_libraries( mycomponent ${OROCOS-RTT_RTT_CORBA_LIBRARIES} )
Orocos has a system which lets you specify which packages you want to use for including headers and linking with their libraries. Orocos will always get these flags from a pkg-config .pc file, so in order to use this system, check that the package you want to depend on provides such a .pc file.
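As a sketch, this is roughly what such a .pc file contains (the package name mypkg and all paths are hypothetical); the flags are extracted with sed here so the example does not depend on pkg-config being installed:

```shell
# Write a minimal pkg-config file for a hypothetical package 'mypkg'
# built for the gnulinux target.
cat > mypkg-gnulinux.pc <<'EOF'
prefix=/opt/orocos
libdir=${prefix}/lib
includedir=${prefix}/include/orocos

Name: mypkg-gnulinux
Description: mypkg package for the gnulinux target
Version: 1.0
Libs: -L${libdir} -lmypkg-gnulinux
Cflags: -I${includedir}
EOF

# The build system reads the include and link flags from these fields
# (pkg-config --cflags/--libs would expand the ${...} variables):
sed -n 's/^Cflags: //p' mypkg-gnulinux.pc
sed -n 's/^Libs: //p' mypkg-gnulinux.pc
```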
If the package or library you want to use has a .pc file, you can directly use this macro:
# The CORBA transport provides a .pc file 'orocos-rtt-corba-<target>.pc':
orocos_use_package( orocos-rtt-corba )

# Link with the OCL Deployment component:
orocos_use_package( ocl-deployment )

# now define your components, libraries etc...
This macro has a similar effect as putting the dependency in your manifest.xml file: it sets the include paths and links your libraries, provided OROCOS_NO_AUTO_LINKING is not defined in CMake (the default). Some packages (like OCL) define multiple .pc files, in which case you can put the ocl dependency in the manifest.xml file and use orocos_use_package() to pick a specific ocl .pc file.
If the argument to orocos_use_package() is a real package, it is advised to put the dependency in the manifest.xml file, such that the build system can use that information for dependency tracking. In case it is a library that is part of a package (in this case, CORBA is a sub-library of the 'rtt' package), you should put rtt as a dependency in the manifest.xml file and load orocos-rtt-corba with the orocos_use_package macro as shown above.
##################################################################################
#
# CMake package configuration file for the OROCOS-RTT package.
# This script imports targets and sets up the variables needed to use the package.
# In case this file is installed in a nonstandard location, its location can be
# specified using the OROCOS-RTT_DIR cache entry.
#
# find_package COMPONENTS represent OROCOS-RTT plugins such as scripting,
# marshalling or corba-transport.
# The default search path for them is:
#   /path/to/OROCOS-RTT installation/lib/orocos/plugins
#   /path/to/OROCOS-RTT installation/lib/orocos/types
#
# For this script to find user-defined OROCOS-RTT plugins, the RTT_COMPONENT_PATH
# environment variable should be appropriately set. E.g., if the plugin is located
# at /path/to/plugins/libfoo-plugin.so, then add /path/to to RTT_COMPONENT_PATH
#
# This script sets the following variables:
#   OROCOS-RTT_FOUND: Boolean that indicates if OROCOS-RTT was found
#   OROCOS-RTT_INCLUDE_DIRS: Paths to the necessary header files
#   OROCOS-RTT_LIBRARIES: Libraries to link against to use OROCOS-RTT
#   OROCOS-RTT_DEFINITIONS: Definitions to use when compiling code that uses OROCOS-RTT
#
#   OROCOS-RTT_PATH: Path of the RTT installation directory (its CMAKE_INSTALL_PREFIX).
#   OROCOS-RTT_COMPONENT_PATH: The component path of the installation
#      <prefix>/lib/orocos + RTT_COMPONENT_PATH
#   OROCOS-RTT_PLUGIN_PATH: OROCOS-RTT_PLUGINS_PATH + OROCOS-RTT_TYPES_PATH
#   OROCOS-RTT_PLUGINS_PATH: The plugins path of the installation
#      <prefix>/lib/orocos/plugins + RTT_COMPONENT_PATH * /plugins
#   OROCOS-RTT_TYPES_PATH: The types path of the installation
#      <prefix>/lib/orocos/types + RTT_COMPONENT_PATH * /types
#
#   OROCOS-RTT_CORBA_FOUND: Defined if corba transport support is available
#   OROCOS-RTT_CORBA_LIBRARIES: Libraries to link against to use the corba transport
#
#   OROCOS-RTT_MQUEUE_FOUND: Defined if mqueue transport support is available
#   OROCOS-RTT_MQUEUE_LIBRARIES: Libraries to link against to use the mqueue transport
#
#   OROCOS-RTT_VERSION: Package version
#   OROCOS-RTT_VERSION_MAJOR: Package major version
#   OROCOS-RTT_VERSION_MINOR: Package minor version
#   OROCOS-RTT_VERSION_PATCH: Package patch version
#
#   OROCOS-RTT_USE_FILE_PATH: Path to package use file, so it can be included like so
#      include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)
#   OROCOS-RTT_USE_FILE: Allows you to write: include( ${OROCOS-RTT_USE_FILE} )
#
# This script additionally sets variables for each requested
# find_package COMPONENTS (OROCOS-RTT plugins).
# For example, for the ''rtt-scripting'' plugin this would be:
#   OROCOS-RTT_RTT-SCRIPTING_FOUND: Boolean that indicates if the component was found
#   OROCOS-RTT_RTT-SCRIPTING_LIBRARY: Libraries to link against to use this component
#      (Notice singular _LIBRARY suffix !)
#
# Note for advanced users: Apart from the OROCOS-RTT_*_LIBRARIES variables,
# non-COMPONENTS targets can be accessed by their imported name, e.g.,
# target_link_libraries(bar @IMPORTED_TARGET_PREFIX@orocos-rtt-gnulinux_dynamic).
# This of course requires knowing the name of the desired target, which is why using
# the OROCOS-RTT_*_LIBRARIES variables is recommended.
#
# Example usage:
#   find_package(OROCOS-RTT 2.0.5 EXACT REQUIRED rtt-scripting foo)
#     # Defines OROCOS-RTT_RTT-SCRIPTING_*
#   find_package(OROCOS-RTT QUIET COMPONENTS rtt-transport-mqueue foo)
#     # Defines OROCOS-RTT_RTT-TRANSPORT-MQUEUE_*
#
##################################################################################
orocreate-pkg example
You may remove most of the code/statements that you don't use. We left only the most common CMake macros uncommented; these are the ones you will almost certainly need when building a component:
#
# The find_package macro for Orocos-RTT works best with
# cmake >= 2.6.3
#
cmake_minimum_required(VERSION 2.6.3)

#
# This creates a standard cmake project. You may extend this file with
# any cmake macro you see fit.
#
project(example)

# Set the CMAKE_PREFIX_PATH in case you're not using Orocos through ROS
# for helping these find commands find RTT.
find_package(OROCOS-RTT REQUIRED ${RTT_HINTS})

# Defines the orocos_* cmake macros. See that file for additional
# documentation.
include(${OROCOS-RTT_USE_FILE_PATH}/UseOROCOS-RTT.cmake)

#
# Components, types and plugins.
#
# The CMake 'target' names are identical to the first argument of the
# macros below, except for orocos_typegen_headers, where the target is fully
# controlled by generated code of 'typegen'.
#

# Creates a component library libexample-<target>.so
# and installs in the directory lib/orocos/example/
#
orocos_component(example example-component.hpp example-component.cpp) # ...you may add multiple source files
#
# You may add multiple orocos_component statements.

#
# Building a typekit (recommended):
#
# Creates a typekit library libexample-types-<target>.so
# and installs in the directory lib/orocos/example/types/
#
#orocos_typegen_headers(example-types.hpp) # ...you may add multiple header files
#
# You may only have *ONE* orocos_typegen_headers statement !

#
# Building a normal library (optional):
#
# Creates a library libsupport-<target>.so and installs it in
# lib/
#
#orocos_library(support support.cpp) # ...you may add multiple source files
#
# You may add multiple orocos_library statements.

#
# Building a Plugin or Service (optional):
#
# Creates a plugin library libexample-service-<target>.so or libexample-plugin-<target>.so
# and installs in the directory lib/orocos/example/plugins/
#
# Be aware that a plugin may only have the loadRTTPlugin() function once defined in a .cpp file.
# This function is defined by the plugin and service CPP macros.
#
#orocos_service(example-service example-service.cpp) # ...only one service per library !
#orocos_plugin(example-plugin example-plugin.cpp) # ...only one plugin function per library !
#
# You may add multiple orocos_plugin/orocos_service statements.

#
# Additional headers (not in typekit):
#
# Installs in the include/orocos/example/ directory
#
orocos_install_headers( example-component.hpp ) # ...you may add multiple header files
#
# You may add multiple orocos_install_headers statements.

#
# Generates and installs our package. Must be the last statement such
# that it can pick up all above settings.
#
orocos_generate_package()
This page documents both basic and advanced use of the RTT Lua bindings by example. More formal API documentation is available here.
As of orocos toolchain-2.6 the deployment component launched by rttlua has been renamed from deployer to Deployer. This removes the differences between the classical deployer and rttlua and facilitates portable deployment scripts. This page has been updated to use the new, uppercase name. If you are using an orocos toolchain version prior to 2.6, use "deployer" instead.
Lua is a simple, small and efficient scripting language. The Lua RTT bindings provide access to most of the RTT API from the Lua language. Use-cases are:
To this end RTT-Lua consists of:
Most information here is valid for all three approaches. If not, this is explicitly mentioned. The listings are shown as interactively entered into the rttlua REPL (read-eval-print loop), but could just as well be stored in a script file.
Currently RTT-Lua is part of OCL. It is enabled by default but will only be built if the Lua 5.1 dependency (Debian: liblua5.1-0-dev, liblua5.1-0, lua5.1) is found.
CMake options:
BUILD_LUA_RTT: enable this to build the rttlua shell, the Lua component, and the Lua plugin.

BUILD_LUA_RTT_DYNAMIC_MODULES: (EXPERIMENTAL) build RTT and deployer as pure Lua plugins. Not recommended unless you know what you are doing.

BUILD_LUA_TESTCOMP: build a simple test component that is used for testing the bindings. Not required for normal operation.

rttlib.lua is a Lua module which is not strictly necessary, but highly recommended to load, as it adds various syntactic shortcuts and pretty printing (many examples on this page will not work without it!). The easiest way to load it is to set up the LUA_PATH variable:
export LUA_PATH=";;$HOME/src/git/orocos/ocl/lua/modules/?.lua"
If you are an orocos_toolchain_ros user and do not want to hardcode the path like this, you can source the following script in your .bashrc:
#!/bin/bash
RTTLUA_MODULES=`rospack find ocl`/lua/modules/?.lua
if [ "x$LUA_PATH" == "x" ]; then
    LUA_PATH=";;"
fi
export LUA_PATH="$LUA_PATH;$RTTLUA_MODULES"
$ ./rttlua-gnulinux OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux) >
or for orocos_toolchain_ros users:
$ rosrun ocl rttlua-gnulinux OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux) >
Now we have a Lua REPL that is enhanced with RTT-specific functionality. In the following, RTT-Lua code is indicated by a ">" prompt, while shell commands are shown with the typical "$".
Before doing anything it is recommended to load rttlib. Like any Lua module, this can be done with the require statement. For example:
$ ./rttlua-gnulinux OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux) > require("rttlib") >
As it is annoying to type this each time, the loading can be automated by putting it in the ~/.rttlua dot file. This (Lua) file is executed on startup of rttlua:
require("rttlib") rttlib.color=true
The (optional) last line enables colors.
rttlib.stat(): print information about component instances and their state.

> rttlib.stat()
Name                State          isActive  Period
lua                 PreOperational true      0
Deployer            Stopped        true      0

rttlib.info(): print information about available components, types and services.

> rttlib.info()
services:   marshalling scripting print LuaTLSF Lua os
typekits:   rtt-corba-types rtt-mqueue-transport rtt-types OCLTypekit
types:      ConnPolicy FlowStatus PropertyBag SendHandle SendStatus TaskContext array bool bools char double float int ints rt_string string strings uint void
comp types: OCL::ConsoleReporting OCL::FileReporting OCL::HMIConsoleOutput OCL::HelloWorld OCL::LuaComponent OCL::LuaTLSFComponent OCL::TcpReporting ...
Here:
> tc = rtt.getTC()
The code above calls the getTC() function, which returns the current TaskContext and stores it in the variable 'tc'. To show the interface, just write =tc. In the REPL the equals sign is a shortcut for 'return', which in turn causes the variable to be printed. (BTW: this works for displaying any variable.)
> =tc
TaskContext: lua
 state: PreOperational
 isActive: true
 getPeriod: 0
 peers: Deployer
 ports:
 properties:
   lua_string (string) =  // string of lua code to be executed during configureHook
   lua_file (string) =  // file with lua program to be executed during configuration
 operations:
   bool exec_file(string const& filename) // load (and run) the given lua script
   bool exec_str(string const& lua-string) // evaluate the given string in the lua environment
Since rttlua beta5 the above does not print the standard TaskContext operations anymore. To print these, use tc:show().
(Yes, you really want this)
Get it here. Check out the README for the (simple) compilation and setup.
rttlua does not offer persistent history like the taskbrowser does. If you want it, you can use rlwrap to wrap rttlua as follows:
alias rttlua='rlwrap -a -r -H ~/.rttlua-history rttlua-gnulinux'
If you run 'rttlua' it should have persistent history.
Most modern editors provide basic syntax highlighting for Lua code.
The following shows the basic API; see section Automatically creating and cleaning up component interfaces for a more convenient way to add/remove ports and properties.
> pin = rtt.InputPort("string")
> pout = rtt.OutputPort("string")
> =pin
 [in, string, unconn, local] //
> =pout
 [out, string, unconn, local] //
Both In- and OutputPorts optionally take a second string argument (name) and third argument (description).
> tc:addPort(pin)
> tc:addPort(pout, "outport1", "string outport that contains latest X")
> =tc -- print tc interface to confirm it is there.
For this the ports don't have to be added to the TaskContext:
> =pin:connect(pout)
true
> return pin
 [in, string, conn, local] //
> return pout
 [out, string, conn, local] //
>
The rttlua-* REPL automatically creates a deployment component that is a peer of the Lua TaskContext:
> tc = rtt.getTC()
> depl = tc:getPeer("Deployer")
> cp=rtt.Variable("ConnPolicy")
> =cp
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
> depl:connect("compA.port1","compB.port2", cp)
> rttlib.info()
services:   marshalling, scripting, print, os, Lua
typekits:   rtt-types, rtt-mqueue-transport, OCLTypekit
types:      ConnPolicy, FlowStatus, PropertyBag, SendHandle, SendStatus, TaskContext, array, bool, bools, char, double, float, int, ints, rt_string, string, strings, uint, void
comp types: OCL::ConsoleReporting, OCL::FileReporting, OCL::HMIConsoleOutput, OCL::HelloWorld, OCL::LuaComponent, OCL::TcpReporting, OCL::TimerComponent, OCL::logging::Appender, OCL::logging::FileAppender, OCL::logging::LoggingService, OCL::logging::OstreamAppender, TaskContext
> cp = rtt.Variable("ConnPolicy")
> =cp
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport="default",lock_policy="LOCK_FREE",size=0}
> cp.data_size = 4711
> print(cp.data_size)
4711
Printing the available constants:
> =rtt.globals
{SendNotReady=SendNotReady,LOCK_FREE=2,NewData=NewData,SendFailure=SendFailure,SendSuccess=SendSuccess,NoData=NoData,UNSYNC=0,LOCKED=1,OldData=OldData,BUFFER=1,DATA=0}
>
Accessing constants - just index!
> =rtt.globals.LOCK_FREE
2
It is cumbersome to initialize complex types with many subfields:
> tc = rtt.getTC()
> depl = tc:getPeer("Deployer")
> depl:import("kdl_typekit")
> t=rtt.Variable("KDL.Frame")
> =t
{M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}}
> t.M.X_x=3
> t.M.Y_x=2
> t.M.Z_x=2.3
...
To avoid this, use the fromtab() method:
> t:fromtab({M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}})
or, even shorter, using Lua's table-call syntax:
> t:fromtab{M={Z_y=1,Y_y=2,X_y=3,Y_z=4,Z_z=5,Y_x=6,Z_x=7,X_x=8,X_z=9},p={Y=3,X=3,Z=3}}
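As a side note, the table-call form is plain Lua syntactic sugar, not something rttlua-specific: for any function f, writing f{...} is equivalent to f({...}). A minimal pure-Lua sketch (the function name is made up for illustration):

```lua
-- pure Lua: f{...} is sugar for f({...})
local function sum_fields(t) return t.x + t.y end

print(sum_fields({x=1, y=2})) -- parenthesized call: 3
print(sum_fields{x=1, y=2})   -- table-call sugar, same result: 3
```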
When you create an RTT array type, its initial length will be zero. You must set the length of the array before you can assign elements to it (starting from toolchain-2.5, fromtab will do this automatically):
> ref=rtt.Variable("array")
> ref:resize(3)
> ref:fromtab{1,1,10}
> print(ref) -- prints {1,1,10}
...
> p1=rtt.Property("double", "p-gain", "Proportional controller gain")
(Note: the second and third arguments (name and description) are optional and can also be set when adding the property to a TaskContext.)
> tc=rtt.getTC()
> tc:addProperty(p1)
> =tc -- check it is there...
> tc=rtt.getTC()
> pgain = tc:getProperty("pgain")
> =pgain -- will print it
> p1:set(3.14)
> =p1 -- a property can be printed!
p-gain (double) = 3.14 // Proportional controller gain
In particular, the following will not work:
> p1=3.14
Lua works with references! This would assign the variable p1 the numeric value 3.14, and the reference to the property would be lost.
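The difference can be seen in plain Lua, with no RTT involved: mutating the object a variable refers to is not the same as rebinding the variable. This hedged sketch uses a plain table standing in for an RTT Property:

```lua
-- plain-Lua sketch of the pitfall above; 'prop' stands in for an RTT Property
local prop = { value = 0 }
local p1 = prop          -- p1 refers to the same object

p1.value = 3.14          -- like p1:set(3.14): mutates the shared object
print(prop.value)        -- 3.14, the change is visible through 'prop'

p1 = 3.14                -- rebinds p1 to a plain number...
print(type(p1))          -- "number": the reference to the object is lost
print(prop.value)        -- 3.14, the object itself is untouched
```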
> print("the value of " .. p1:info().name .. " is: " .. p1:get())
the value of p-gain is: 3.14
Assume a property of type KDL::Frame. Similarly to Variables, the subfields can be accessed using the dot syntax:
> d = tc:getPeer("Deployer")
> d:import('kdl_typekit')
> f=rtt.Property('KDL.Frame')
> =f
 (KDL.Frame) = {M={Z_y=0,Y_y=1,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=0,X=0,Z=0}} //
> f.M.Y_y=3
> =f.M.Y_y
3
> f.p.Y=1
> =f
 (KDL.Frame) = {M={Z_y=0,Y_y=3,X_y=0,Y_z=0,Z_z=1,Y_x=0,Z_x=0,X_x=1,X_z=0},p={Y=1,X=0,Z=0}} //
>
Like Variables, Properties feature a fromtab method to initialize a Property from values in a Lua table. See section RTT Types and Typekits - Convenient initialization of multi-field types for details.
As properties are not automatically garbage collected, property memory must be managed manually:
> tc:removeProperty("p-gain")
> =tc -- p-gain is gone now
> p1:delete() -- delete property and free memory
> =p1 -- p1 is 'dead' now.
userdata: 0x186f8c8
Synchronous calling of operations from Lua:
> d = tc:getPeer("Deployer")
> =d:getPeriod()
0
> d = tc:getPeer("Deployer")
> op = d:getOperation("getPeriod")
> =op -- can be printed!
double getPeriod() // Get the configured execution period. -1.0: no thread ...
> =op() -- call it
0
"Sending" an Operation permits asynchronously requesting an operation to be executed and collecting the results at a later point in time.
> d = tc:getPeer("Deployer")
> op = d:getOperation("getPeriod")
> handle=op:send() -- send it
> =handle:collect()
SendSuccess 0
Note: collect() returns multiple values: first a SendStatus string ('SendSuccess', 'SendFailure'), followed by zero or more output arguments of the operation. collect() blocks until the operation has been executed; collectIfDone() returns immediately (but possibly with 'SendNotReady').

Answer: No.
Workaround: define a new TaskContext that inherits from LuaComponent and add the Operation there. Implement the necessary glue between C++ and Lua by hand (not hard, but some manual work required).
Answer: No (but potentially it would be easy to add. Ask on the ML).
For example, to load the marshalling service in a component and then to use it to write a property (cpf) file:
> tc=rtt.getTC()
> depl=tc:getPeer("Deployer")
> depl:loadService("lua", "marshalling") -- load the marshalling service in the lua component
true
> =tc:provides("marshalling"):writeProperties("props.cpf")
true
A second (and slightly faster) option is to get the Operation before calling it:
> -- get the writeProperties operation ...
> writeProps=tc:provides("marshalling"):getOperation("writeProperties")
> =writeProps("props.cpf") -- and call it to write the properties to a file.
true
> depl:loadService("lua", "marshalling") -- load the marshalling service
> depl:loadService("lua", "scripting") -- load the scripting service
> print(tc:provides())
Service: lua
  Subservices: marshalling, scripting
  Operations: activate, cleanup, configure, error, exec_file, exec_str, getPeriod, inFatalError, inRunTimeError, isActive, isConfigured, isRunning, setPeriod, start, stop, trigger, update
  Ports:
  Service: marshalling
    Subservices:
    Operations: loadProperties, readProperties, readProperty, storeProperties, updateFile, updateProperties, writeProperties, writeProperty
    Ports:
  Service: scripting
    Subservices:
    Operations: activateStateMachine, deactivateStateMachine, eval, execute, getProgramLine, getProgramList, getProgramStatus, getProgramStatusStr, getProgramText, getStateMachineLine, getStateMachineList, getStateMachineState, getStateMachineStatus, getStateMachineStatusStr, getStateMachineText, hasProgram, hasStateMachine, inProgramError, inStateMachineError, inStateMachineState, isProgramPaused, isProgramRunning, isStateMachineActive, isStateMachinePaused, isStateMachineRunning, loadProgramText, loadPrograms, loadStateMachineText, loadStateMachines, pauseProgram, pauseStateMachine, requestStateMachineState, resetStateMachine, runScript, startProgram, startStateMachine, stepProgram, stopProgram, stopStateMachine, unloadProgram, unloadStateMachine
    Ports:
>
The RTT Global Service is useful for loading services into your application that don't belong to a specific component. Your C++ code accesses this object by calling
RTT::internal::GlobalService::Instance();
The GlobalService object can be accessed in Lua using a call to:
gs = rtt.provides()
And allows you to load additional services into the global service:
gs:require("os") -- or: rtt.provides():require("os")
which you can later access again via the rtt table:
rtt.provides("os"):argc() -- returns the number of arguments of this application
rtt.provides("os"):argv() -- returns a string array of arguments of this application
-- create activity for producer: period=1, priority=0,
-- schedtype=ORO_SCHED_RT
depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT)
-- create activity for producer: period=0, priority=0,
-- schedtype=ORO_SCHED_RT
depl:setActivity("producer", 0, 0, rtt.globals.ORO_SCHED_RT)
depl:setMasterSlaveActivity("name_of_master_component", "name_of_slave_component")
(see also the example in section How to write a RTT-Lua component)
-- deploy-app.lua
require("rttlib")

tc = rtt.getTC()
depl = tc:getPeer("Deployer")

-- import components, requires correctly setup RTT_COMPONENT_PATH
depl:import("ocl")
-- depl:import("componentX")

-- import components, requires correctly setup ROS_PACKAGE_PATH (>=Orocos 2.7)
depl:import("rtt_ros")
rtt.provides("ros"):import("my_ros_pkg")

-- create component 'hello'
depl:loadComponent("hello", "OCL::HelloWorld")

-- get reference to new peer
hello = depl:getPeer("hello")

-- create buffered connection of size 64
cp = rtt.Variable('ConnPolicy')
cp.type=1   -- type buffered
cp.size=64  -- buffer size
depl:connect("hello.the_results", "hello.the_buffer_port", cp)

rtt.logl('Info', "Deployment complete!")
run it:
$ rttlua-gnulinux -i deploy-app.lua
or using orocos_toolchain_ros
$ rosrun ocl rttlua-gnulinux -i deploy-app.lua
Note: the -i option makes rttlua enter interactive mode (the REPL) after executing the script. Without it, rttlua would exit after finishing the script, which in this case is probably not what you want.
A Lua component is created by loading a Lua script implementing zero or more TaskContext hooks into an OCL::LuaComponent. The following RTT hooks are currently supported:
bool configureHook()
bool activateHook()
bool startHook()
void updateHook()
void stopHook()
void cleanupHook()
void errorHook()
All hooks are optional, but if implemented they must return the correct value (unless the return type is void, of course). It is also important to declare them global (by not adding the local keyword); otherwise they would be garbage collected and never called.
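The reason is that the LuaComponent looks up the hook functions by name in the global environment. A hedged pure-Lua sketch of why a local hook is invisible to such a lookup (find_hook is a made-up stand-in for the component's internal lookup):

```lua
-- sketch: how a component might look up a hook by name in _G
local function find_hook(name) return _G[name] end

function configureHook() return true end          -- global: visible
local startHook = function() return true end      -- local: not in _G

print(find_hook("configureHook") ~= nil) -- true
print(find_hook("startHook") ~= nil)     -- false
```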
The following code implements a simple consumer component with an event-triggered input port:
require("rttlib")

tc=rtt.getTC();

-- The Lua component starts its life in PreOperational, so
-- configureHook can be used to set stuff up.
function configureHook()
   inport = rtt.InputPort("string", "inport") -- global variable!
   tc:addEventPort(inport)
   cnt = 0
   return true
end

-- all hooks are optional!
--function startHook() return true end

function updateHook()
   local fs, data = inport:read()
   rtt.log("data received: " .. tostring(data) .. ", flowstatus: " .. fs)
end

-- Ports and properties are the only elements which are not
-- automatically cleaned up. This means this must be done manually for
-- long living components:
function cleanupHook()
   tc:removePort("inport")
   inport:delete()
end
A matching producer component is shown below:
require "rttlib"

tc=rtt.getTC();

function configureHook()
   outport = rtt.OutputPort("string", "outport") -- global variable!
   tc:addPort(outport)
   cnt = 0
   return true
end

function updateHook()
   outport:write("message number " .. cnt)
   cnt = cnt + 1
end

function cleanupHook()
   tc:removePort("outport")
   outport:delete()
end
A deployment script to deploy these two components:
require "rttlib"

rtt.setLogLevel("Warning")

tc=rtt.getTC()
depl = tc:getPeer("Deployer")

-- create LuaComponents
depl:loadComponent("producer", "OCL::LuaComponent")
depl:loadComponent("consumer", "OCL::LuaComponent")

-- ... and get references to them
producer = depl:getPeer("producer")
consumer = depl:getPeer("consumer")

-- load the Lua hooks
producer:exec_file("producer.lua")
consumer:exec_file("consumer.lua")

-- configure the components (so ports are created)
producer:configure()
consumer:configure()

-- connect ports
depl:connect("producer.outport", "consumer.inport", rtt.Variable('ConnPolicy'))

-- create activity for producer: period=1, priority=0,
-- schedtype=ORO_SCHED_RT
depl:setActivity("producer", 1, 0, rtt.globals.ORO_SCHED_RT)

-- raise loglevel
rtt.setLogLevel("Debug")

-- start components
consumer:start()
producer:start()

-- uncomment to print interfaces (for debugging)
-- print(consumer)
-- print(producer)

-- sleep for 5 seconds
os.execute("sleep 5")

-- lower loglevel again
rtt.setLogLevel("Warning")

producer:stop()
consumer:stop()
(available from toolchain-2.5)
The function rttlib.create_if can (re-)generate a component interface from a specification as shown below. Conversely, rttlib.tc_cleanup will remove and destroy all ports and properties again.
-- stupid example:
iface_spec = {
   ports={
      { name='inp', datatype='int', type='in+event', desc="incoming event port" },
      { name='msg', datatype='string', type='in', desc="incoming non-event messages" },
      { name='outp', datatype='int', type='out', desc="outgoing data port" },
   },

   properties={
      { name='inc', datatype='int', desc="this value is added to the incoming data each step" }
   }
}

-- this creates the interface
iface=rttlib.create_if(iface_spec)

function configureHook()
   -- it is safe to run this twice; existing ports
   -- will be ignored. Thus, running cleanup() and configure()
   -- will reconstruct the interface again.
   iface=rttlib.create_if(iface_spec)
   inc = iface.props.inc:get()
   return true
end

function startHook()
   -- ports/props can be indexed as follows:
   iface.ports.outp:write(1)
   return true
end

function updateHook()
   local fs, val
   fs, val = iface.ports.inp:read()
   if fs=='NewData' then iface.ports.outp:write(val+inc) end
end

function cleanupHook()
   -- remove all ports and properties
   rttlib.tc_cleanup()
end
In contrast to Components (which typically contain standalone functionality), Services are useful for extending the functionality of existing Components. The LuaService permits executing arbitrary Lua programs in the context of a Component.
The following dummy example loads the LuaService into a HelloWorld component and then runs a script that modifies a property:
require "rttlib"

tc=rtt.getTC()
d = tc:getPeer("Deployer")

-- create a HelloWorld component
d:loadComponent("hello", "OCL::HelloWorld")
hello = d:getPeer("hello")

-- load Lua service into the HelloWorld component
d:loadService("hello", "Lua")

-- Execute the following Lua script (defined as a multiline string) in
-- the service. This dummy example simply modifies the Property. For
-- large programs it might be better to store the program in a separate
-- file and use the exec_file operation instead.
proggie = [[
require("rttlib")
tc=rtt.getTC() -- this is the Hello component
prop = tc:getProperty("the_property")
prop:set("hullo from the lua service!")
]]

prop = hello:getProperty("the_property") -- get hello.the_property
print("the_property before service call:", prop)
hello:provides("Lua"):exec_str(proggie) -- execute program in the service
print("the_property after service call: ", prop)
More useful than running a script just once is being able to execute a function synchronously with the updateHook of the host component. This can be achieved by registering an ExecutionEngine hook (much easier than it sounds!).
The following Lua service code implements a simple monitor that tracks the currently active (TaskContext) state of the component in whose context it is running. When the state changes the new state is written to a port "tc_state", which is added to the context TC.
This code could be useful for a supervision statemachine that can then easily react to this state change by means of an event triggered port.
require "rttlib"

tc=rtt.getTC()
d = tc:getPeer("Deployer")

-- create a HelloWorld component
d:loadComponent("hello", "OCL::HelloWorld")
hello = d:getPeer("hello")

-- load Lua service into the HelloWorld component
d:loadService("hello", "Lua")

mon_state = [[
-- service-eehook.lua
require("rttlib")

tc=rtt.getTC() -- this is the Hello component
last_state = "not-running"

out = rtt.OutputPort("string")
tc:addPort(out, "tc_state", "currently active state of TaskContext")

function check_state()
   local cur_state = tc:getState()
   if cur_state ~= last_state then
      out:write(cur_state)
      last_state = cur_state
   end
   return true -- returning false will disable EEHook
end

-- register check_state function to be called periodically and
-- enable it. Important: variables like eehook below or the
-- function check_state which shall not be garbage-collected
-- after the first run must be declared global (by not declaring
-- them local with the local keyword)
eehook=rtt.EEHook('check_state')
eehook:enable()
]]

-- execute the mon_state program
hello:provides("Lua"):exec_str(mon_state)
Note: the -i option causes rttlua to go to interactive mode after executing the script (and not exiting afterwards).
$ rttlua-gnulinux -i service-eehook.lua
> rttlib.portstats(hello)
the_results (string) =
the_buffer_port (string) = NoData
tc_state (string) = Running
> hello:error()
> rttlib.portstats(hello)
the_results (string) =
the_buffer_port (string) = NoData
tc_state (string) = RunTimeError
>
It is often useful to validate a deployed system at runtime; however, you want to avoid cluttering individual components with non-functional validation code. Here's what to do (please also see this post on orocos-users, which inspired the following).
Use-case: check for unconnected input ports
1. Write a function to validate a single component
The following function accepts a TaskContext as an argument and checks whether it has unconnected input ports. If so, it logs an error.
function check_inport_conn(tc)
   local portnames = tc:getPortNames()
   local ret = true
   for _,pn in ipairs(portnames) do
      local p = tc:getPort(pn)
      local info = p:info()
      if info.porttype == 'in' and info.connected == false then
         rtt.logl('Error', "InputPort " .. tc:getName() .. "." .. info.name .. " is unconnected!")
         ret = false
      end
   end
   return ret
end
2. After deployment, execute the validation function on all components:
This can be done using the mappeers function.
rttlib.mappeers(check_inport_conn, depl)
The mappeers function is a special variant of map which calls the function given as first argument on all peers reachable from a TaskContext (given as second argument). We pass the Deployer here, which typically knows all components.
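Conceptually, mappeers performs a traversal of the peer graph, applying the function to each reachable TaskContext once. The following is a hypothetical pure-Lua re-implementation over mock objects with getName/getPeerList/getPeer methods, just to illustrate the idea (the real rttlib function may differ in details):

```lua
-- hypothetical sketch of a mappeers-style traversal (mock objects, not RTT)
local function mappeers(fun, start)
  local seen, res = {}, {}
  local function visit(tc)
    local name = tc:getName()
    if seen[name] then return end  -- visit each TaskContext only once
    seen[name] = true
    res[name] = fun(tc)
    for _, pn in ipairs(tc:getPeerList()) do visit(tc:getPeer(pn)) end
  end
  visit(start)
  return res
end

-- mock TaskContexts: the Deployer knows 'robot' and 'controller'
local function mock(name, peers)
  return { getName     = function() return name end,
           getPeerList = function() return peers and peers.names or {} end,
           getPeer     = function(_, pn) return peers.map[pn] end }
end
local robot = mock("robot")
local ctrl  = mock("controller")
local depl  = mock("Deployer", {names={"robot","controller"}, map={robot=robot, controller=ctrl}})

local t = mappeers(function(tc) return tc end, depl)
print(t.robot:getName())       -- robot
print(t.controller:getName())  -- controller
```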
Here's a dummy deployment example to illustrate:
require "rttlib"

tc=rtt.getTC()
depl=tc:getPeer("Deployer")

-- define or import check_inport_conn function here

-- dummy deployment, ports are left unconnected.
depl:loadComponent("hello1", "OCL::HelloWorld")
depl:loadComponent("hello2", "OCL::HelloWorld")

rttlib.mappeers(check_inport_conn, depl)
Executing it will print:
0.155 [ ERROR ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello1.the_buffer_port is unconnected!
0.155 [ ERROR ][/home/mk/bin//rttlua-gnulinux::main()] InputPort hello2.the_buffer_port is unconnected!
rFSM is a fast, lightweight Statechart implementation in pure Lua. Using RTT-Lua, rFSM Statecharts can conveniently be used with RTT. The rFSM sources can be found here.
Answer:
Typically a Component will be preferred when
A Service is preferred when
There will, undoubtedly, be exceptions!
Summary: create an OCL::LuaComponent. In configureHook, load and initialize the fsm; in updateHook, call rfsm.run(fsm).
(see the rFSM docs for general information)
The source code for this example can be found here.
It is a best practice to split the initialization (setting up required functions, peers or ports used by the fsm) and the fsm model itself into two files. This way the fsm model is kept as platform-independent, and hence as reusable, as possible.
The following initialization file is executed in the newly created LuaComponent to prepare the environment for the state machine, which is loaded and initialized in configureHook.
launch_fsm.lua
require "rttlib"
require "rfsm"
require "rfsm_rtt"
require "rfsmpp"

local tc=rtt.getTC();
local fsm
local fqn_out, events_in

function configureHook()
   -- load state machine
   fsm = rfsm.init(rfsm.load("fsm.lua"))

   -- enable state entry and exit dbg output
   fsm.dbg=rfsmpp.gen_dbgcolor("rfsm-rtt-example",
                               { STATE_ENTER=true, STATE_EXIT=true},
                               false)

   -- redirect rFSM output to rtt log
   fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end
   fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end
   fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end

   -- the following creates a string input port, adds it as an event
   -- driven port to the TaskContext. The third line generates a
   -- getevents function which returns all data on the current port as
   -- events. This function is called by the rFSM core to check for
   -- new events.
   events_in = rtt.InputPort("string")
   tc:addEventPort(events_in, "events", "rFSM event input port")
   fsm.getevents = rfsm_rtt.gen_read_str_events(events_in)

   -- optional: create a string port to which the currently active
   -- state of the FSM will be written. gen_write_fqn generates a
   -- function suitable to be added to the rFSM step hook to do this.
   fqn_out = rtt.OutputPort("string")
   tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state")
   rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out))
   return true
end

function updateHook() rfsm.run(fsm) end

function cleanupHook()
   -- cleanup the created ports.
   rttlib.tc_cleanup()
end
A dummy statemachine stored in the fsm.lua file:
return rfsm.state {
   ping = rfsm.state {
      entry=function() print("in ping entry") end,
   },

   pong = rfsm.state {
      entry=function() print("in pong entry") end,
   },

   rfsm.trans {src="initial", tgt="ping" },
   rfsm.trans {src="ping", tgt="pong", events={"e_pong"}},
   rfsm.trans {src="pong", tgt="ping", events={"e_ping"}},
}
Option A: Running the rFSM example with a Lua deployment script
deploy.lua
-- alternate lua deploy script
require "rttlib"

tc=rtt.getTC()
d=tc:getPeer("Deployer")

d:import("ocl")
d:loadComponent("Supervisor", "OCL::LuaComponent")
sup = d:getPeer("Supervisor")
sup:exec_file("launch_fsm.lua")
sup:configure()
cmd = rttlib.port_clone_conn(sup:getPort("events"))
Run it. cmd is an inverse (output) port connected to the incoming (from the POV of the fsm) 'events' port, so by writing to it we can send events:
$ rosrun ocl rttlua-gnulinux -i deploy.lua
OROCOS RTTLua 1.0-beta3 / Lua 5.1.4 (gnulinux)
INFO: created undeclared connector root.initial
> sup:start()
> in ping entry
> cmd:write("e_pong")
> in pong entry
> cmd:write("e_ping")
> in ping entry
> cmd:write("e_pong")
> in pong entry
Option B: Running the rFSM example with an Orocos deployment script
deploy.ops
import("ocl")
loadComponent("Supervisor", "OCL::LuaComponent")
Supervisor.exec_file("launch_fsm.lua")
Supervisor.configure
After starting the supervisor we 'leave' it, so we can write to the 'events' port:
$ rosrun ocl deployer-gnulinux -s deploy.ops
INFO: created undeclared connector root.initial
Switched to : Deployer
This console reader allows you to browse and manipulate TaskContexts.
You can type in an operation, expression, create or change variables.
(type 'help' for instructions and 'ls' for context info)
TAB completion and HISTORY is available ('bash' like)
Deployer [S]> cd Supervisor
TaskBrowser connects to all data ports of Supervisor
Switched to : Supervisor
Supervisor [S]> start = true
Supervisor [R]> in ping entry
Supervisor [R]> leave
Watching Supervisor [R]> events.write("e_pong")
= (void)
Watching Supervisor [R]> in pong entry
Watching Supervisor [R]> events.write("e_ping")
= (void)
Watching Supervisor [R]> in ping entry
Watching Supervisor [R]>
This is basically the same as executing a function periodically in a service (see the Service example above). There is a convenience function service_launch_rfsm in rfsm_rtt.lua to make this easier.
The steps are:
require "rfsm_rtt"

fsmfile = "fsm.lua"
-- get reference to exec_str operation
execstr_op = comp:provides("Lua"):getOperation("exec_str")
rfsm_rtt.service_launch_rfsm(fsmfile, execstr_op, true)
The last line means: launch the fsm in <fsmfile> in the service identified by execstr_op; the true argument creates an execution engine hook so that rfsm.step is called at the component frequency. (See the generated rfsm_rtt API docs.)
Generally speaking, the most effective way of creating a new FSM from a parent one is populating the original simple states by overriding them with composite states. In this context, the parent FSM provides “empty” boxes to be filled with application-specific code.
In the following example, “daughter_fsm.lua” loads “mother_fsm.lua” and overrides a state, two transitions and a function. “daughter_fsm.lua” is launched by a Lua Orocos component named “fsm_launcher.lua” . Deployment is done by “deploy.ops” . Instructions on how to run the example follow.
mother_fsm.lua
-- mother_fsm.lua is a basic fsm with 2 simple states
return rfsm.state {
   StateA = rfsm.state {
      entry=function() print("in state A") end,
   },

   StateB = rfsm.state {
      entry=function() print("in state B") end,
   },

   -- consistent transition naming makes overriding easier
   rfsm.trans {src="initial", tgt="StateA" },
   tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_mother_A_to_B"}},
   tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_mother_B_to_A"}},
}
daughter_fsm.lua
-- daughter_fsm.lua loads mother_fsm.lua,
-- implementing extra states, transitions and functions
-- by adding to and overriding the original ones.
require "utils"
require "rttros"

-- local variables to avoid verbose function calling
local state, trans, conn = rfsm.state, rfsm.trans, rfsm.conn

-- path to the fsm to load
local base_fsm_file = "mother_fsm.lua"

-- load the original fsm to override
local fsm_model=rfsm.load(base_fsm_file)

-- set colored outputs indicating the current state
dbg = rfsmpp.gen_dbgcolor( {STATE_ENTER=true}, false)

-- Overriding StateA
-- In "mother_fsm.lua" StateA is an rfsm.simple_state
-- Here we make it an rfsm.composite_state
fsm_model.StateA = rfsm.state {
   StateA1 = rfsm.state {
      entry=function() print("in State A1") end,
   },

   StateA2 = rfsm.state {
      entry=function() print("in State A2") end,
   },

   rfsm.transition {src="initial", tgt="StateA1"},
   tr_A1_A2 = rfsm.transition {src="StateA1", tgt="StateA2", events={"e_move_to_A2"}},
   tr_A2_A1 = rfsm.transition {src="StateA2", tgt="StateA1", events={"e_move_to_A1"}},
}

-- Overriding single transitions (the names must match those used in
-- mother_fsm.lua, i.e. tr_A_B and tr_B_A)
fsm_model.tr_A_B = rfsm.trans {src="StateA", tgt="StateB", events={"e_daughter_A_to_B"}}
fsm_model.tr_B_A = rfsm.trans {src="StateB", tgt="StateA", events={"e_daughter_B_to_A"}}

-- Overriding a specific function
fsm_model.StateB.entry = function() print("I am in State B in the daughter FSM") end

return fsm_model
fsm_launcher.lua
require "rttlib"
require "rfsm"
require "rfsm_rtt"
require "rfsmpp"

local tc=rtt.getTC();
local fsm
local fqn_out, events_in

function configureHook()
   -- load state machine
   fsm = rfsm.init(rfsm.load("daughter_fsm.lua"))

   -- enable state entry and exit dbg output
   fsm.dbg=rfsmpp.gen_dbgcolor("FSM loading example",
                               { STATE_ENTER=true, STATE_EXIT=true},
                               false)

   -- redirect rFSM output to rtt log
   fsm.info=function(...) rtt.logl('Info', table.concat({...}, ' ')) end
   fsm.warn=function(...) rtt.logl('Warning', table.concat({...}, ' ')) end
   fsm.err=function(...) rtt.logl('Error', table.concat({...}, ' ')) end

   -- the following creates a string input port, adds it as an event
   -- driven port to the TaskContext. The third line generates a
   -- getevents function which returns all data on the current port as
   -- events. This function is called by the rFSM core to check for
   -- new events.
   events_in = rtt.InputPort("string")
   tc:addEventPort(events_in, "events", "rFSM event input port")
   fsm.getevents = rfsm_rtt.gen_read_str_events(events_in)

   -- optional: create a string port to which the currently active
   -- state of the FSM will be written. gen_write_fqn generates a
   -- function suitable to be added to the rFSM step hook to do this.
   fqn_out = rtt.OutputPort("string")
   tc:addPort(fqn_out, "rFSM_cur_fqn", "current active rFSM state")
   rfsm.post_step_hook_add(fsm, rfsm_rtt.gen_write_fqn(fqn_out))
   return true
end

function updateHook() rfsm.run(fsm) end

function cleanupHook()
   -- cleanup the created ports.
   rttlib.tc_cleanup()
end
deploy.ops
import("ocl")
loadComponent("Supervisor", "OCL::LuaComponent")
Supervisor.exec_file("fsm_launcher.lua")
Supervisor.configure
Supervisor.start
To test this example, run the Deployer:
rosrun ocl deployer-gnulinux -lerror -s deploy.ops
Then:
Deployer [S]> cd Supervisor
TaskBrowser connects to all data ports of Supervisor
Switched to : Supervisor
Supervisor [R]> leave
Watching Supervisor [R]> events.write("e_move_to_A2")
FSM loading example: STATE_EXIT root.StateA.StateA1
in State A2
FSM loading example: STATE_ENTER root.StateA.StateA2
A Coordinator often needs to interact with many or all other components in its vicinity. To avoid having to write peer1 = depl:getPeer("peer1") all over, you can use the following function to generate a table of peers which are reachable from a certain component (commonly the deployer):
peertab = rttlib.mappeers(function (tc) return tc end, depl)
Assume the Deployer has two peers "robot" and "controller", they can be accessed as follows:
print(peertab.robot)
-- or
peertab.controller:configure()
> cp=rtt.Variable("ConnPolicy")
> cp.transport=3 -- 3 is ROS
> cp.name_id="/l_cart_twist/command" -- topic name
> depl:stream("CompX.portY", cp)
or with a sweet one-liner (thx to Ruben!):
> depl:stream("CompX.portY", rtt.provides("ros"):topic("/l_cart_twist/command"))
This is sometimes useful for loading scripts etc. that are located in different packages.
The rttros.lua module collects some basic but useful functions for interacting with ROS. This one is "borrowed" from the excellent roslua:
> require "rttros"
> =rttros.find_rospack("geometry_msgs")
/home/mk/src/ros/unstable/common_msgs/geometry_msgs
>
Lua has to work with two type systems: its own and the RTT typesystem. To make this as smooth as possible, the basic RTT types are automatically converted to their corresponding Lua types, as shown in the table below:
RTT | Lua |
---|---|
bool | boolean |
float | number |
double | number |
uint | number |
int | number |
char | string |
string | string |
void | nil |
This conversion is done in both directions: basic values read from ports or basic return values of operations are converted to Lua; vice versa, if an operation is called with basic Lua values, these are automatically converted to the corresponding RTT types.
In short: write a function which accepts a Lua table representation of your data type and returns either a table or a string. Assign it to rttlib.var_pp.mytype, where mytype is the value returned by the var:getType() method. That's all!
Quick example: the ConnPolicy type (this is just an example; it has already been done for this type).

The out-of-the-box printing of a ConnPolicy looks as follows:
$ ./rttlua-gnulinux
Orocos RTTLua 1.0-beta3 (gnulinux)
> return rtt.Variable("ConnPolicy")
{data_size=0,type=0,name_id="",init=false,pull=false,transport=0,lock_policy=2,size=0}
This is not too bad, but we would like to display the string representation of the C++ enums type and lock_policy. So we must write a function that returns a table...
function ConnPolicy2tab(cp)
   if cp.type == 0 then cp.type = "DATA"
   elseif cp.type == 1 then cp.type = "BUFFER"
   else cp.type = tostring(cp.type) .. " (invalid!)" end

   if cp.lock_policy == 0 then cp.lock_policy = "UNSYNC"
   elseif cp.lock_policy == 1 then cp.lock_policy = "LOCKED"
   elseif cp.lock_policy == 2 then cp.lock_policy = "LOCK_FREE"
   else cp.lock_policy = tostring(cp.lock_policy) .. " (invalid!)" end
   return cp
end
and add it to the rttlib.var_pp table of Variable formatters as follows:
rttlib.var_pp.ConnPolicy = ConnPolicy2tab
Now printing a ConnPolicy again calls our function and prints the desired fields:
> return rtt.Variable("ConnPolicy")
{data_size=0,type="DATA",name_id="",init=false,pull=false,transport=0,lock_policy="LOCK_FREE",size=0}
>
If you are used to managing your application with the classic OCL TaskBrowser, or if you want your application to be connected via Corba, you may want to use Lua only for deployment and continue to use your former deployer. To do so, load the Lua service into your favorite deployer (deployer, cdeployer, deployer-corba, ...) and then call your deployment script.
Example: launch your preferred deployer:
cdeployer -s loadLua.ops
with loadLua.ops:
// load the lua service
loadService("Deployer","Lua")
// execute your deployment file
Lua.exec_file("yourLuaDeploymentFile.lua")
and with yourLuaDeploymentFile.lua containing the kind of code described in this Cookbook, like the one in the paragraph "How to write a deployment script".
$ <fsm_install_dir>/tools/rfsm-viz -f <your_fsm_file>.lua
options:
see here: https://gist.github.com/3957702 (thx to Ruben).
Answer: everything besides Ports and Properties. So if you have Lua components/services which are deleted and recreated, it is advisable to clean up properly. This means:
portX:delete()
Update for toolchain-2.5: The utility function rttlib.tc_cleanup()
will do this for you.
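For older toolchains, here is a hedged sketch of such a manual cleanup. It runs inside rttlua; the port name and the idea of keeping created ports in a table are illustrative assumptions, not a prescribed API:

```lua
-- Illustrative sketch for pre-2.5 toolchains; from toolchain-2.5 on,
-- rttlib.tc_cleanup() does this for you.
require("rttlib")
local tc = rtt.getTC()

-- assumption: the component kept track of the ports it created
local created_ports = { cmd_in = rtt.InputPort("string", "cmd_in") }
for _, p in pairs(created_ports) do
   tc:addPort(p)
end

-- ... component lifetime ...

-- manual cleanup: unregister each port from the TaskContext and delete
-- the underlying RTT object before dropping the Lua reference
for name, p in pairs(created_ports) do
   tc:removePort(name)
   p:delete()
end
```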
Please ask questions related to RTT Lua on the orocos-users mailing list.
Lua specific links
The RTT Lua bindings are licensed under the same license as the OROCOS RTT.
The Orocos 1.x releases are still maintained but no longer recommended for new applications.
Look here for information on
This page explains how to install the Orocos Toolchain from the public repositories using a script. ROS-users might want to take a look at the orocos_toolchain stack and the rtt_ros_integration stack.
ruby --version
sh bootstrap.sh
. This installs the toolchain-2.6 branch (latest fixes, stable). Summarized:
cd $HOME
mkdir orocos
cd orocos
mkdir orocos-toolchain
cd orocos-toolchain
wget -O bootstrap-2.6.sh http://gitorious.org/orocos-toolchain/build/raw/toolchain-2.6:bootstrap.sh
sh bootstrap-2.6.sh
source env.sh
Tweaking build and install options can be done by modifying autoproj/config.yml. You must read the README and the Autoproj Manual in order to understand how to configure autoproj. See also the very short introduction on Using Autoproj.
When the script finishes, try some Orocos toolchain commands (installed by default in 'install/bin'):
typegen
deployer-gnulinux
ctaskbrowser
After some time, you can get updates by going into the root folder and do
# Updates to latest fixes of release branch:
autoproj update
# Builds the toolchain:
autoproj build
You might have to reload the env.sh script after that as well. Simply open a new console. See also Using Autoproj.
Download the archive from the toolchain homepage. Unpack it, it will create an orocos-toolchain-<version> directory. Next do:
cd $HOME
mkdir orocos
cd orocos
tar -xjvf /path/to/orocos-toolchain-<version>.tar.bz2
cd orocos-toolchain-<version>
./bootstrap_toolchain
source ./env.sh
autoproj build
Take a look at the Getting Started page for the most important documents.
Important changes
Online API resources
Cheat sheets
All manuals
autoproj update
autoproj build
autoproj switch-config branch=toolchain-2.3
autoproj update
autoproj build
You may replace branch=toolchain-2.3 with any branch name, going forward or backward in releases. We have: master, stable, toolchain-2....
If you'd like to reconfigure some of the package options, you can do so by writing
autoproj update --reconfigure
autoproj build
Warning: this will erase your current configuration (i.e. CMake) in case you had modified it manually!
A comprehensive autoproj manual can be found here
This document is written in MediaWiki syntax.
this->foo( new Bar("Zort") );
or: this->foo( new Bar("Zort") );
. The default lang is 'cpp'. I needed to patch the PEAR wiki filter module to get it working with the geshi filter. See http://drupal.org/node/244520

Mark must-haves with an 'M'.
M? | 6.x | 7.x | Module
---|---|---|---
M | x | - | adsense
M | x | - | advuser
M | x | x | captcha
M | x | - | captcha_pack
  | x | x | cck
M | x | - | comment_upload
  | x | x | contemplate
M | x | x | diff
M | (x) | - | drutex
M | x | - | filterbynodetype
M | x | - | freelinking
M | x | x | geshifilter
M | x | x | image
M | x | x | imagepicker
M | x | - | img_assist
  | x | - | import_html
M | x | - | listhandler
M | x | - | mailhandler
M | x | - | mailman_manager
M | (x) | - | mailsave
  | - | - | mathfilter
M | x | x | pathauto
M | x | x | path_redirect
M | (x) | - | pearwiki_filter
M | x | x | quote
M | x | - | spam
M | x | x | spamspan
M | x | - | tableofcontents
M | x | - | talk
M | x | - | taxonomy_breadcrumb
M | x | c/x | token
M | x | - | user_mailman_register
M | c | c | user_status
M | x | x | views
  | - | - | wiki
M | x | - | wikitools

M : must-have
- : not present
c : present as core feature
x : module released
(x) : released but unmaintained

Newly found:
x | - | Emailfilter (for listhandler)
x | - | JsMath (LaTeX rendering in the browser instead of on-server)
iTaSC (instantaneous Task Specification using Constraints) is a framework to generate robot motions by specifying constraints between (parts of) the robots and their environment. iTaSC was born as a specification formalism to generalize and extend existing approaches, such as the Operational Space Approach, the Task Function Approach, the Task Frame Formalism, geometric Cartesian Space control, and Joint Space control.
The iTaSC concepts apply to specifications in robot, Cartesian and sensor space, to position, velocity or torque-controlled robots, to explicit and implicit specifications, and to equality and inequality constraints. The current implementation, however, is still limited to the velocity-control and equality-constraint subset.
Since the documentation effort lags behind the conceptual and implementation effort, the best documentation can be found in our papers! (see Acknowledging iTaSC and literature)
It is currently highly recommended to use the devel branch; a formal release is expected soon (iTaSC DSL and stacks).
Please post remarks, bug reports, suggestions, feature requests, or patches on the orocos users/dev forum/mailinglist.
The framework generates motions by specifying constraints in geometric, dynamic or sensor-space between the robots and their environment. These motion specifications constrain the relationships between objects (object frames) and their features (feature frames). Established robot motion specification formalisms such as the Operational Space Approach [3], the Task Function Approach [6], the Task Frame Formalism [4], Cartesian Space control, and Joint Space control are special cases of iTaSC and can be specified using the generic iTaSC methodology.
The key advantages of iTaSC over traditional motion specification methodologies are:
These advantages imply that the framework can be used for any robotic system, with a wide variety of sensors.
In order not to be limited to one single instantaneous motion specification, several iTaSC specifications can be glued together via a so-called Skill that coordinates the execution of multiple iTaSCs, and configures their parameters. Consequently, the framework separates the continuous level of motion specification from the discrete level of coordination and configuration. One skill coordinates a limited set of constraints, that together form a functional motion. Finite State Machines implement the skill functionality.
This framework is implemented in the iTaSC software.
Please cite the following papers when using ideas or software based on iTaSC:
@Article{ DeSchutter-ijrr2007,
  author   = {De~Schutter, Joris and De~Laet, Tinne and Rutgeerts, Johan and Decr\'e, Wilm and Smits, Ruben and Aertbeli\"en, Erwin and Claes, Kasper and Bruyninckx, Herman},
  title    = {Constraint-Based Task Specification and Estimation for Sensor-Based Robot Systems in the Presence of Geometric Uncertainty},
  journal  = {The International Journal of Robotics Research},
  volume   = {26},
  number   = {5},
  pages    = {433--455},
  year     = {2007},
  keywords = {constraint-based programming, task specification, iTaSC, estimation, geometric uncertainty}
}

@InProceedings{ decre09,
  author    = {Decr\'e, Wilm and Smits, Ruben and Bruyninckx, Herman and De~Schutter, Joris},
  title     = {Extending {iTaSC} to support inequality constraints and non-instantaneous task specification},
  booktitle = {Proceedings of the 2009 IEEE International Conference on Robotics and Automation},
  year      = {2009},
  address   = {Kobe, Japan},
  pages     = {964--971},
  keywords  = {constraint-based programming, task specification, iTaSC, convex optimization, inequality constraints, laser tracing}
}

@InProceedings{ DecreBruyninckxDeSchutter2013,
  author    = {Decr\'e, Wilm and Bruyninckx, Herman and De~Schutter, Joris},
  title     = {Extending the {iTaSC} Constraint-Based Robot Task Specification Framework to Time-Independent Trajectories and User-Configurable Task Horizons},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation},
  year      = {2013},
  address   = {Karlsruhe, Germany},
  pages     = {1933--1940},
  keywords  = {constraint-based programming, task specification, human-robot cooperation}
}
@InProceedings{ vanthienenIROS2013,
  author       = {Vanthienen, Dominick and Klotzbuecher, Markus and De~Laet, Tinne and De~Schutter, Joris and Bruyninckx, Herman},
  title        = {Rapid application development of constrained-based task modelling and execution using Domain Specific Languages},
  booktitle    = {Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  organization = {IROS2013},
  year         = {2013},
  address      = {Tokyo, Japan},
  pages        = {1860--1866}
}

@InProceedings{ vanthienen_syroco2012,
  title     = {Force-Sensorless and Bimanual Human-Robot Comanipulation},
  author    = {Vanthienen, Dominick and De~Laet, Tinne and Decr\'e, Wilm and Bruyninckx, Herman and De~Schutter, Joris},
  booktitle = {10th IFAC Symposium on Robot Control (SYROCO)},
  year      = {2012},
  month     = {September, 5--7},
  address   = {Dubrovnik, Croatia},
  volume    = {10}
}

@InProceedings{ SmitsBruyninckxDeSchutter2009,
  author    = {Smits, Ruben and Bruyninckx, Herman and De~Schutter, Joris},
  title     = {Software support for high-level specification, execution and estimation of event-driven, constraint-based multi-sensor robot tasks},
  booktitle = {Proceedings of the 2009 International Conference on Advanced Robotics},
  year      = {2009},
  address   = {Munich, Germany},
  pages     = {},
  keywords  = {specification, itasc, skills}
}
The software implements the iTaSC-Skill framework in Orocos, which is integrated in ROS by the Orocos-ROS-integration [1]. The Real-Time Toolkit (RTT) of the Orocos project enables the control of robots on a hard-realtime capable operating system, e.g. Xenomai-Linux or RTAI-Linux. The rFSM subproject of Orocos allows scripted Finite State Machines, hence Skills, to be executed in hard realtime. The figure below shows the software architecture, mentioning the formulas for the resolved velocity case without prioritization for clarification. The key advantages of the software design include:
Furthermore, the Bayesian Filtering Library (BFL) and Kinematics and Dynamics Library (KDL) of the Orocos project are used to retrieve stable estimates out of sensor data, and to specify robot and virtual kinematic chains respectively.
(to be expanded)
iTaSC DSL is a Domain Specific Language for constraint-based programming, more specifically iTaSC.
For more explanation and examples, please read D. Vanthienen, M. Klotzbuecher, T. De Laet, J. De Schutter, and H. Bruyninckx, Rapid application development of constrained-based task modelling and execution using domain specific languages, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 2013, pp. 1860–1866.
It is recommended to use the devel branch for the DSL as well as the iTaSC stacks.
For the Orocos reference implementation (in a ROS environment): One way is to send events in the Orocos task browser through the event_firer component that is automatically started when parsing and deploying an application.
Another, more user-friendly way is to send events on the /itasc/ros_common_events_in ROS topic (see the README of the itasc_dsl repository).
A GUI to send events can be found on: https://bitbucket.org/apertuscus/python_gui
Look at the itasc_erf2012_demo for an example; it contains a run_eventgui.sh that launches this GUI with events for the iTaSC ERF2012 tutorial.
This meta-stack consists of the following stacks:
Each package contains the following subdirectories:
https://gitlab.mech.kuleuven.be/groups/rob-itasc
and for the iTaSC DSL
https://bitbucket.org/dvanthienen/itasc_dsl.git
The following explanation uses the ROS workspace and rosinstall tools; it is, however, easy to follow the same instructions without these tools, as will be hinted further on. IMPORTANT: All packages are catkin-based.
sudo apt-get install libeigen3-dev sip-dev liburdfdom-dev ros-indigo-angles ros-indigo-tf2-ros ros-indigo-geometric-shapes liblua5.1-0 liblua5.1-0-dev collada-dom-dev libbullet-dev ros-indigo-orocos-kdl ros-indigo-orocos-kinematics-dynamics ros-indigo-orocos-toolchain ros-indigo-geometry ros-indigo-robot-model ros-indigo-rtt-geometry ros-indigo-rtt-ros-integration ros-indigo-rtt-sensor-msgs ros-indigo-rtt-visualization-msgs
source /opt/ros/indigo/setup.bash
mkdir -p ~/ws/src
cd ~/ws/src
catkin_init_workspace
cd ~/ws/
catkin_make
source ~/ws/devel/setup.sh
cd ~/ws
wstool init src
wstool merge -t src itasc_dsl.rosinstall
wstool update -t src
The KUKA youBot and PR2 have not yet been tested in Indigo.
source ~/.bash_itasc_dsl
useITaSC_deb
source ~/.bashrc
cd ~/ws/
catkin_make
sudo apt-get install ros-groovy-orocos-toolchain
sudo apt-get install libeigen3-dev ros-groovy-rtt-ros-integration ros-groovy-rtt-geometry ros-groovy-rtt-common-msgs ros-groovy-rtt-ros-comm
source /opt/ros/groovy/setup.bash
source ~/path_to_workspace/setup.bash
rosws merge itasc_dsl.rosinstall
rosws update
rosws merge itasc_youbot_fuerte.rosinstall
rosws update
source /path/to/your/rosworkspace/setup.bash
source .bash_itasc_dsl
useITaSC_deb
source ~/.bashrc
For your convenience we put here some extra instructions for commonly used platforms:
rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs
rosws set rtt_pr2_controllers_msgs
roscd itasc_pr2
./convert_xacro.sh
rosmake itasc_core trajectory_generators itasc_tasks rFSM rttlua_completion itasc_solvers fixed_object itasc_robot_object moving_object moving_object_tf
rosmake itasc
roslaunch pr2_gazebo pr2_empty_world.launch
roscd itasc_dsl
./run_bunny.sh
export ROS_PARALLEL_JOBS=' -j1 -l1'
rosmake --threads 1
These levels are present not only on the configuration/coordination level but also on the computational level (see slides). As hinted before, your application FSM will only 'see' the components 'outside' iTaSC (robot drivers, sensor components...) and iTaSC as one composite component. Similarly, the iTaSC FSM 'sees' a task as one entity. The section 'The sub-FSMs of the running state' gives a good example of the effect of this distinction.
As a result of the 3 levels, your application is always in 3 states: one for each level.
The coordination and FSM part of the running state are executed sequentially. The full FSM is loaded in a supervisor component: taskname_supervisor.lua
On the iTaSC level, composite_task_fsm.lua is used instead of running_itasc_fsm.lua, to highlight its meaning. There is also an additional file: itasc_configuration.lua, which is part of the configuration state of the itasc_fsm.lua.
The state machine implemented in name_fsm.lua is a composite state machine, consisting of two states:
This structure can be found in all statemachines of all levels (except for the application FSM, where the division of the running state is not (always) necessary).
As explained above, there are three levels: application, iTaSC and task, each of which abstracts the level below it. As a result, events are propagated down the hierarchy to take effect, and responses are sent back up to acknowledge execution. The design of the application and task FSMs should comply with the same rationale (i.e. each transition is triggered by the lower-level FSM). The standard event-transition flow consists of:
The flow for the stopping-stopped states is similar. The running states are different in the sense that there is no 'ran' state: the state machines stay in the running state until they are stopped.
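The event flow described above can be sketched with rFSM, which implements the skill FSMs. This is a minimal hedged sketch; the state and event names are illustrative and do not come from the iTaSC sources, and it needs an rFSM-capable Lua environment to run:

```lua
-- Minimal rFSM sketch of the configure/start event flow.
-- State and event names are illustrative only.
local rfsm = require("rfsm")

return rfsm.state {
   nonconfigured = rfsm.state {},
   configured = rfsm.state {
      -- acknowledge upwards once configuration is done
      entry = function(fsm) rfsm.send_events(fsm, "e_configured") end,
   },
   running = rfsm.state {},

   rfsm.trans { src = "initial",       tgt = "nonconfigured" },
   rfsm.trans { src = "nonconfigured", tgt = "configured",
                events = { "e_configure" } },
   rfsm.trans { src = "configured",    tgt = "running",
                events = { "e_start" } },
   rfsm.trans { src = "running",       tgt = "nonconfigured",
                events = { "e_stop" } },
}
```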
The following figure gives an example of the composite task and tasks in case of the simultaneous laser tracing on a table and a barrel example, used in previous paragraphs. The goal is to (in this order):
In the figure, a (sub-)FSM is represented by a purple rounded box, a state by a rounded black box and a possible state transition by an arrow. State transitions are triggered by an event or combination of events. The state transitions of the task subFSMs, indicated by a colored arrow and circle, are caused by the event with the corresponding color, fired in the composite_task_fsm.lua.
To prevent overloading the figure, only a limited number of actions is shown, e.g. only the entry part of the state and not the exit part (which will, e.g. deactivate the trajectory generator and tasks which were activated).
The composite state of the example in the figure consists of 4 states.
As can be seen, the composite task FSM just sends an event to trigger the task subFSMs to reach the appropriate state. The task subFSM will take care of task specific behavior, e.g.
Doing so, the tasks can be easily adapted, swapped, changed or downloaded.
Note: The names of the tasks are specific, i.e. they are the names of the components that are used for the tasks. The name of the task package will be more general, e.g. xyPhiThetaPsiZ_PID task (named after the structure of its VKC and controller type). Cf. the class-object distinction in object-oriented programming.
Attachment | Size |
---|---|
iTaSC_Manual.pdf | 396.63 KB |
Please read the iTaSC_Manual first, to get acquainted with the iTaSC terminology and structure. A task is the combination of a virtual_kinematic_chain and a constraint/controller. In the iTaSC software, it is a (ROS-)package that contains:
The running_taskname_coordination.lua and running_taskname_fsm.lua are sub-FSMs of the running state of the task (defined in taskname_fsm.lua). They are executed sequentially: first the coordination part, then the FSM part.
Important are the expected reference frames and points for the data on following ports. (o1=object 1, o2= object 2)
The expected references are also mentioned as comments in the files
A full template will be made available soon... At the moment, start from an example... Have a look at the keep_distance task-package (in the itasc_comanipulation_demo stack) as a good example of a task. Special cases are:
It should inherit from SubRobot.hpp, which can be found in itasc_core. This file is a template for a robot or object component. See the itasc_robots_objects stack for examples.
As can be seen in the examples, a robot component always contains a KDL::Tree, even if the robot is just a chain. This is to be able to use the KDL::Tree functionality, which is, regrettably, not perfectly similar to the KDL::Chain functionality. E.g. tree.getSegment(string name) has a string as input, while chain.getSegment(number) has a number as input, but not a string...
Coming soon, have a look at itasc_solvers for examples.
List of iTaSC tutorials. Please report any issues on the orocos users or dev mailing list.
The easiest way to install all needed dependencies: (How to find the debian packages on ros.org)
sudo apt-get install ros-electric-rtt-common-msgs
sudo apt-get install ros-electric-rtt-ros-comm
sudo apt-get install ros-electric-rtt-ros-integration
git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
sudo apt-get install ros-electric-orocos-kinematics-dynamics
sudo aptitude install liblua5.1-0-dev
sudo aptitude install liblua5.1-0
sudo aptitude install lua5.1
git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
git clone https://github.com/kmarkus/rFSM.git
git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
git clone http://git.mech.kuleuven.be/robotics/youbot_hardware.git -b devel
git clone http://git.mech.kuleuven.be/robotics/soem.git
git clone git://git.mech.kuleuven.be/robotics/motion_control.git -b devel
git clone https://github.com/smits/youbot-ros-pkg.git
git clone http://git.mech.kuleuven.be/robotics/itasc.git
git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git
git clone http://git.mech.kuleuven.be/robotics/itasc_examples.git
(+ switch to the devel branch)
if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find youbot_master_rtt`/lua/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
cd `rospack find youbot_description`/robots/
(part of the youbot-ros-pkg)

rosrun xacro xacro.py youbot.urdf.xacro -o youbot.urdf
rosmake itasc_youbot_lissajous_app
This tutorial explains how to create an iTaSC application, starting from existing packages. The scheme we want to create is depicted in following figure:
The tutorial will follow the design workflow as explained here.
In the figures, a (sub-)FSM is represented by a purple rounded box, a state by a rounded black box and a possible state transition by an arrow. State transitions are triggered by an event or combination of events. The state transitions of the task subFSM that are indicated by a colored arrow and circle, are caused by the event with the corresponding color, fired in the composite_task_fsm.lua. To automatically transition from the MoveToStart to the TraceFigure state, an event indicating that the start position is reached must be fired. This event will be generated by the 'cartesian_generator'.
Overview of the modifications needed:
Create an empty ROS-package for your application and create 2 subdirectories:
Templates of these files can be found in the cartesian_motion package, scripts subdirectory (cartesian_tracing is an instance of cartesian_motion).
The actual FSM is loaded in the cartesian_tracing_supervisor component (which is written in the lua language, hence the .lua file). Since you'll (probably) need to add functions to execute RTT specific code in the running_cartesian_tracing_fsm, make a copy of this file to your scripts subdirectory of the package you have created for this application. Leave it for now.
The FSM for this application consists of multiple files on different locations, the cartesian_tracing_supervisor has properties that contain the path to these files. Create a property file (cartesian_tracing_supervisor.cpf for example) in your scripts subdirectory of the package you have created for this application and edit these properties.
There is no timer_id_property for task supervisors, because tasks are triggered by the iTaSC level by events.
Templates of these files can be found in the itasc_core package, scripts subdirectory.
Edit the itasc_configuration.lua file you just have copied: define the scene and kinematic loops as depicted in the figures of the first steps of this tutorial. Look at the comments in the template for more information on the syntax.
The actual FSM is loaded in the itasc_supervisor component (which is written in the lua language, hence the .lua file). Since you'll (probably) need to add functions to execute RTT specific code in the composite_task_fsm, make a copy of this file to your scripts subdirectory of the package you have created for this application. Leave it for now.
The FSM for this application consists of multiple files in different locations; the itasc_supervisor has properties that contain the paths to these files. Create a property file (.cpf) in the scripts subdirectory of the package you have created for this application and edit these properties. The itasc_supervisor and application_supervisor have a property "application_timer_id": make sure these have the same value, in this case e.g. 1. The timer id makes sure that both components are triggered by the same timer.
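Such a property file is a standard RTT .cpf XML file. A minimal hedged sketch is shown below; the path property name "itasc_fsm_file" is hypothetical (use the property names your supervisor actually declares), and the file path is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd">
<properties>
  <!-- hypothetical path property; check the supervisor for the real names -->
  <simple name="itasc_fsm_file" type="string">
    <value>/path/to/your/app/scripts/itasc_fsm.lua</value>
  </simple>
  <!-- must match the application_supervisor's application_timer_id -->
  <simple name="application_timer_id" type="long">
    <value>1</value>
  </simple>
</properties>
```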
A template of this file can be found in the itasc_core package, scripts subdirectory.
The application FSM is loaded in the application_supervisor component (which is written in the lua language, hence the .lua file). Since you'll (probably) need to add functions to execute RTT specific code in the application_fsm, make a copy of this file to your scripts subdirectory of the package you have created for this application.
Edit the application_fsm and application_supervisor files:
Make sure that you configure, start, stop (and cleanup) all application level components in this state machine!
The FSM for this application can be in different locations; the application_supervisor has properties that contain the path to these files. Create a property file (application_supervisor.cpf) in the scripts subdirectory of the package you have created for this application and edit these properties. The itasc_supervisor and application_supervisor have a property "application_timer_id": make sure these have the same value, in this case e.g. 1. The timer id makes sure that both components are triggered by the same timer.
Start from the following templates, which you can find in the itasc_core package, scripts subdirectory:
Copy these files to the package you have created for this application.
Edit the run.ops file (see also comments in template):
put before configuring the timer:
# we have to configure it first to get the ports connected,
# maybe better to put all this in the application_fsm.lua
youbot_driver.configure()
connect("youbot.qdot_to_arm", "youbot_driver.Arm1.joint_velocity_command", cp)
connect("youbot.qdot_to_base", "youbot_driver.Base.cmd_twist", cp)
connect("youbot_driver.Arm1.jointstate", "youbot.q_from_arm", cp)
connect("youbot_driver.Base.odometry", "youbot.q_from_base", cp)
The template automatically creates an eventFirer, which is a component with ports connected to the event ports of the itasc- and application_supervisor. This allows you to easily fire events yourself at runtime, by writing an event on one of the ports.
For both levels:
The event needed for the transition from the MoveToStart to the TraceFigure state will be sent out by the 'cartesian_generator'. Look in its code for the event name.
roscd ocl/bin/
for i in deployer* rttlua*; do sudo setcap cap_net_raw+ep $i; done
roscd youbot_master_rtt/lua/
FUERTE_YOUBOT=false -- false is Malaga, true is FUERTE
ETHERCAT_IF='ethcat'
roscd soem_core/bin
sudo ./slaveinfo eth0
(or ethcat...)

roscd youbot_master_rtt/lua
rttlua-gnulinux -i youbot_test.lua
roscore
roscd itasc_youbot_lissajous_app
./run.sh
event_firer.itasc_common_events_in.write("e_start")
sudo apt-get install ros-fuerte-pr2-controllers
sudo apt-get install ros-fuerte-pr2-simulator
mkdir ~/erf
export ROS_PACKAGE_PATH=~/erf:$ROS_PACKAGE_PATH
sudo apt-get install python-setuptools
sudo easy_install -U rosinstall
rosinstall ~/erf erf_fuerte.rosinstall /opt/ros/fuerte/
source ~/erf/setup.bash
rosdep install itasc_examples
rosdep install rFSM
useERF(){
    source $HOME/erf/setup.bash
    source $HOME/erf/setup.sh
    source `rosstack find orocos_toolchain`/env.sh
    setLUA
}
setLUA(){
    if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
    if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
    export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find youbot_driver_rtt`/lua/?.lua"
    export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
    export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
    export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
}
useERF
rosmake itasc_erf2012_demo
roscore
roslaunch gazebo_worlds empty_world.launch
roscd itasc_erf2012_demo/
./run_gazebo.sh
roscd itasc_erf2012_demo/
./run_simulation.sh
itasc fsm: STATE ENTER root.NONemergency.RunningITASC.Running
[cartesian generator] moveTo will start from
roscd itasc_erf2012_demo/
./run.sh
itasc fsm: STATE ENTER root.NONemergency.RunningITASC.Running
[cartesian generator] moveTo will start from
event_firer.itasc_common_events_in.write("e_my_event")
54398d0653067580edd5c5ec66bda5eac0aa29e4 and 81e5fab65ee3587056a4d5fda4eb5ce796082eaf
sudo apt-get install ros-electric-rtt-common-msgs
sudo apt-get install ros-electric-rtt-ros-comm
sudo apt-get install ros-electric-rtt-ros-integration
git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
sudo apt-get install ros-electric-orocos-kinematics-dynamics
sudo aptitude install liblua5.1-0-dev
sudo aptitude install liblua5.1-0
sudo aptitude install lua5.1
git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
git clone https://github.com/kmarkus/rFSM.git
git clone http://git.mech.kuleuven.be/robotics/opencv_additions.git
git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
git clone http://git.mech.kuleuven.be/robotics/itasc.git
git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git
git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs
git clone http://git.mech.kuleuven.be/robotics/rtt_common_msgs.git
git clone http://git.mech.kuleuven.be/robotics/rtt_ros_comm.git
git clone http://git.mech.kuleuven.be/robotics/rtt_ros_integration.git
git clone http://git.mech.kuleuven.be/robotics/rtt_geometry.git
sudo apt-get install ros-fuerte-orocos-kinematics-dynamics
sudo aptitude install liblua5.1-0-dev
sudo aptitude install liblua5.1-0
sudo aptitude install lua5.1
git clone git://gitorious.org/orocos-toolchain/rttlua_completion.git
git clone https://github.com/kmarkus/rFSM.git
git clone http://git.mech.kuleuven.be/robotics/opencv_additions.git
git clone http://git.mech.kuleuven.be/robotics/trajectory_generators.git
git clone http://git.mech.kuleuven.be/robotics/itasc.git
git clone http://git.mech.kuleuven.be/robotics/itasc_core.git
git clone http://git.mech.kuleuven.be/robotics/itasc_robots_objects.git
git clone http://git.mech.kuleuven.be/robotics/itasc_solvers.git
git clone http://git.mech.kuleuven.be/robotics/itasc_tasks.git
rosrun rtt_rosnode create_rtt_msgs pr2_controllers_msgs
git clone http://git.mech.kuleuven.be/robotics/itasc_comanipulation_demo.git
if [ "x$LUA_PATH" == "x" ]; then LUA_PATH=";;"; fi
if [ "x$LUA_CPATH" == "x" ]; then LUA_CPATH=";;"; fi
export LUA_PATH="$LUA_PATH;`rospack find ocl`/lua/modules/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find kdl`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find rFSM`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find rttlua_completion`/?.lua"
export LUA_PATH="$LUA_PATH;`rospack find kdl_lua`/lua/?.lua"
export LUA_CPATH="$LUA_CPATH;`rospack find rttlua_completion`/?.so"
export PATH="$PATH:`rosstack find orocos_toolchain`/install/bin"
rosmake itasc_comanipulation_demo_app
roscd itasc_pr2
./convert_xacro.sh
roscd itasc_comanipulation_demo_app
./runControllers
./run.sh
event_firer.itasc_common_events_in.write("e_parallelIn")
event_firer.itasc_common_events_in.write("e_parallelOut")
event_firer.itasc_common_events_in.write("e_obstacleForceParallelLimitsIn")
event_firer.itasc_common_events_in.write("e_obstacleForceParallelLimitsOut")
Attachment | Size |
---|---|
iTaSC_comanipulation_demo.pdf | 535.79 KB |
Videos of iTaSC examples and demonstrations. Click on the images below to see the video.