The 1st RTT Developers Workshop

Main Sponsor
Place: PAL Robotics offices, Barcelona

Calle Pujades 77-79, 4th floor, 08005 Barcelona, Spain.

See also PAL Robotics

Date: Monday 19 July - Friday 23 July

AGENDA

Date Time Topic/Title & Mediator Description
Mon 19/7 9h-13h Big Picture Day

Arriving at PAL offices

Peter shows what 2.0 is and where it is heading
13h-14h Lunch
14h-19h Who are you + plans presentation You present your work and ideas for future work in fewer than 10 Impress/Beamer slides.
20h Dinner Opening dinner sponsored by The SourceWorks
Tue 20/7 9h-13h Typekit generation Orogen + message generation.

YARP transport 2.0 (only ports, no methods, events, etc).

Code explosion & extern template solution.

13h-14h Lunch
14h-19h Component generation Introduction.

What is its place in RTT?

20h Dinner
Wed 21/7 9h-13h Building Fix up the RTT/OCL CMake setup.

Structure of components, repositories and applications (graveyard/attic)

13h-14h Lunch
14h-16h Documentation improvement Structure. Website.

Missing parts.

Real examples and tutorials: restructure the existing examples.

Success stories: who uses Orocos.

16h Visiting
20h Dinner
Thu 22/7 9h-13h Logging Current status.

Fix architecture.

RTT::Logger remove/replace

13h-14h Lunch
14h-16h Reporting
16h-19h Upgrading from v1 to v2 Describe the RTT v1-to-v2 converter. Caveats document. Try it out on user systems.
20h Dinner Closing dinner sponsored by The SourceWorks
Fri 23/7 9h-13h Wrapping-up Day Finishing loose ends, integration and discussions for future work
13h-14h Lunch
14h-17h Wrapping-up Day Finishing loose ends, integration and discussions for future work

If you need or want to provide sponsorship, contact Peter.

Participants - please add your days of presence!

  • Peter Soetens (Sun 18/7 - Fri 23/7)
    • 19-inch all-in-one Core2 system
    • Ubuntu 9.10
  • Sylvain Joyeux
  • Markus Klotzbuecher (Sun 18/7 - Fri 23/7)
    • TP x61
    • Debian Testing
  • Stephen Roderick (Fri 16/7 - Fri 23/7)
    • 15" Mac Book Pro
    • Mac OS X Snow Leopard and Ubuntu 10.04 (Lucid Lynx)
  • Charles Lesire-Cabaniols (Mon 19/7 noon - Thu 22/7 8am)
    • Dell Core2 Laptop
    • Ubuntu 10.04 (Lucid)
  • Carles Lopez
  • Adolfo Rodriguez Tsouroukdissian (I will be around the whole week)

Ideas:

  • Presentation of participants: how, where and why Orocos is used by others
  • Build system? Version control? What web resources?
    • DFKI has its own build system (http://sites.google.com/site/rubyinmotion), mainly standardized on CMake and git.
    • Where to put the code and how to do release management? (Try out gitorious.org?)
    • Use a standard approach for end-user build-support files across all sub-projects (i.e. if CMake, then use the same approach to FindXXX and Config files). Provide example use cases for building against Orocos (i.e. no sub-project provides a FindOrocos-XXX.cmake, which IMHO is Very Bad for a new user).
  • the components: Orocos has the OCL, and DFKI has already open-sourced some components. In what form do we want to distribute them?
  • the toolchain: where are we going, what are we going to use, and lay out a schedule for that.
    • idea from Peter to standardize the type description on the ROS datatypes vs. oroGen's C++ parser
    • component specification is more than datatypes (cf. oroGen)
  • release strategy: how to release RTT 2.0 and associated tools. Target date?
  • integrating new transports
    • YARP transport from Charles
    • ROS transport? (Who would do that?)
  • Additional maintainers (particularly for OCL)
  • Standardized APIs to ease language bindings (e.g. Python, Java, LISP ... yes, LISP :-) )
  • Integration of dependent packages (e.g. TLSF). Currently (for real-time logging), we have a circular build problem: RTT needs log4cpp, log4cpp needs TLSF, but TLSF is installed as part of RTT. Big Problem! Peter mentioned integrating log4cpp, but I'm not sure that this is the best approach (i.e. long-term consequences, keeping up with releases, scalability).
  • Integration of real-time logging from OCL into RTT
    • Use of real-time logging by RTT itself. Transition plan to accomplish this.
  • Clean shutdown semantics (i.e. allowing state machines and scripts to shut down cleanly)
  • Scalability of deployment
    • Deploying subsystems/containers/groups of components
    • Parameterizing deployment files (e.g. variable substitution within XML files)

A. First day

Morning

Peter started by presenting the 2.x functionality, its state and (from his point of view) its shortcomings. The following points were raised during the discussion.

  • TimeService: what is it for, and are there no other solutions?
  • Right now, SlaveActivity is different from a slave execution engine. Why?

Properties and attributes

  • RTT::Property[persistent] / RTT::Attribute[non persistent]
  • RTT::Property is NOT thread safe
  • a way to know whether a property has been written
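
For reference, a minimal sketch of how the two appear in a 2.x-style component (API names as in the RTT 2.x headers; an illustration, not a spec):

 #include <rtt/TaskContext.hpp>

 // Sketch: properties are meant to be persisted (e.g. to XML),
 // attributes are runtime-only state.
 class ExampleTask : public RTT::TaskContext
 {
     double gain;       // persistent configuration value
     int    iterations; // non-persistent runtime state
 public:
     ExampleTask() : RTT::TaskContext("example"), gain(1.0), iterations(0)
     {
         this->addProperty("gain", gain).doc("Controller gain (persisted)");
         this->addAttribute("iterations", iterations); // not serialized
     }
 };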

Ports

  • could we avoid returning OldData for buffer ports => how difficult would that be? (see the sketch after this list)
  • discussion on overruns and the default drop policy for buffers. In general, we need to discuss an interface to monitor transports.
  • data flow now has N data holder elements, where N is the number of total connections. 1.x needed at most one data holder element per output port.
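
To ground the OldData discussion, a minimal sketch of the 2.x read semantics (FlowStatus values as in the 2.x API):

 #include <rtt/InputPort.hpp>

 // read() reports whether the sample is fresh, a repeat of the last
 // one (data objects), or absent (no writer ever wrote).
 void poll(RTT::InputPort<double>& in)
 {
     double sample;
     switch (in.read(sample)) {
     case RTT::NewData: /* fresh sample, process it */ break;
     case RTT::OldData: /* already-seen sample      */ break;
     case RTT::NoData:  /* nothing written yet      */ break;
     }
 }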

Methods / Operations / Services

  • operation execution policy: OwnThread vs. ClientThread. Agreed that OwnThread should be the default policy as it is safer from a thread-safety point of view (see the sketch after this list).
  • better naming for ServiceRequester / ServiceProvider -- Method / Operation
    • rename ServiceRequester to Service to map Operation
    • what about the caller side ?
      • OperationHandle
      • __OperationCaller__ for now, preferred as it says what it does
      • PeerOperation
      • RemoteOperation
      • OperationProxy (-- for Peter and Markus)
      • OperationStub (-- for Peter and Markus)
      • OperationInterface (no: Interface is used a lot in the code for base classes)
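
A minimal sketch of the execution-policy choice and the preferred caller-side naming (2.x-style addOperation API; an illustration under those assumptions):

 #include <rtt/TaskContext.hpp>
 #include <rtt/OperationCaller.hpp>

 class Server : public RTT::TaskContext
 {
 public:
     Server() : RTT::TaskContext("server")
     {
         // OwnThread: executed by this component's own engine, hence
         // safe with respect to the component's internal state.
         this->addOperation("reset", &Server::reset, this, RTT::OwnThread);
     }
     bool reset() { return true; }
 };

 // Caller side, using the 'OperationCaller' name preferred above:
 //   RTT::OperationCaller<bool(void)> reset = server->getOperation("reset");
 //   reset(); // dispatched to the server's thread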

Misc

  • chicken-and-egg problem with the deployer, especially with basic services like real-time logging

Plugins

  • possibility to do autoloading based on a PATH environment variable or to do loading per-plugin
  • there is a cost to load a typekit that is not needed as the shared library has to be mapped in memory and typekits are quite big

Code size

  • instantiating RTT::Operation for a void(void) signature
    • 60kB for dispatching code
    • 60kB for distributed C++ dispatching code
    • Yuk
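
A sketch of the 'extern template' idea from the agenda (a GCC extension at the time, later standardized in C++11): declare the instantiation extern in headers and emit it once in a single translation unit, so every typekit user stops re-generating the dispatching code.

 #include <rtt/Operation.hpp>

 // In a shared header: promise that the instantiation exists elsewhere.
 extern template class RTT::Operation<void(void)>;

 // In exactly one .cpp file: emit the instantiation once.
 // template class RTT::Operation<void(void)>;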

Events

  • 2.0 does not have events right now
  • can (and will) be implemented on top of the ports
  • one limitation: only one argument. Not a big issue if we have an automated wrapping like oroGen
  • we're now getting into the details of event handling ;-)
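
One way the 'events on top of ports' idea maps onto the existing 2.x API is the event port, which wakes a component up when a sample arrives; a sketch, not the final event design:

 #include <rtt/TaskContext.hpp>
 #include <rtt/InputPort.hpp>

 class Reactor : public RTT::TaskContext
 {
     RTT::InputPort<int> trigger;
 public:
     Reactor() : RTT::TaskContext("reactor")
     {
         // New data on this port triggers updateHook().
         this->ports()->addEventPort("trigger", trigger);
     }
     void updateHook()
     {
         int payload;
         while (trigger.read(payload) == RTT::NewData)
             ; // handle the 'event' payload here
     }
 };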


B. Second day

The discussion starts with an explanation of the improved TypeInfo infrastructure:

  • Normally, everything should be generated by the tools.
  • If the tools don't cover your case, you can write a typekit manually:
    • Add a StructTypeInfo<T> instead of a TemplateTypeInfo<T> (the latter still exists)
    • Define a boost::serialization function that decomposes your struct (see the sketch below)
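
A minimal sketch of the manual route, for a hypothetical Pose struct (header path and registration call as in the 2.x type system):

 #include <rtt/types/StructTypeInfo.hpp>
 #include <boost/serialization/nvp.hpp>

 struct Pose { double x, y, theta; };

 namespace boost { namespace serialization {
     // Decomposition: tells the RTT type system which members Pose has.
     template<class Archive>
     void serialize(Archive& a, Pose& p, unsigned int /*version*/)
     {
         a & make_nvp("x", p.x);
         a & make_nvp("y", p.y);
         a & make_nvp("theta", p.theta);
     }
 }}

 // Registration, e.g. in a typekit plugin:
 //   RTT::types::Types()->addType(new RTT::types::StructTypeInfo<Pose>("Pose"));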

ROS messages and orogen

  • Can orogen parse a generated ROS message class?
    • It can't: it does not work when a class has virtual functions, and the ANTLR parser is not 'good enough'.
    • The gccxml tool can help here; it would also remove the ANTLR dependency.

Sylvain explains how orogen works

  • List dependencies
  • Declare used types (header files to use)
  • Declare task definitions
  • Declare deployments

Sylvain shows how orogen requires #ifndef __orogen guards in the headers listed; gccxml is a fix for this too.
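
For illustration, a hypothetical header showing the guard: anything the orogen parser cannot digest (constructors, methods, ...) is hidden from it.

 #ifndef EXAMPLE_TYPES_HPP
 #define EXAMPLE_TYPES_HPP

 struct Sample
 {
     double value;

 #ifndef __orogen
     // Seen only by the C++ compiler, not by the orogen parser.
     Sample() : value(0.0) {}
 #endif
 };

 #endif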

Hosting on gitorious is being discussed. It allows us to group code in 'projects' and collaborate better using git.

Autoproj is discussed as a tool to bootstrap the Orocos packages. It's an alternative to manually downloading and building everything. It may work in parallel with rosbuild, in case application software depends on both ROS and Orocos. This needs to be tested.

The work is divided for the rest of the day:

  • Charles + Peter: YARP transport for 2.0
  • Markus + Peter: find the collect() segfault bug
  • Stephen + Sylvain: Mac OS X testing of autoproj/ruby etc.
  • Sylvain + Peter + Markus: gccxml into orogen

We decided to rename orogen to typegen.

The day concluded with investigating the code size/compile time issue. The culprits are the operations added to the ports in the typekit code. We investigated several solutions to tackle this, especially in the light of code/typekit generation.

C. Third day

The day started with a re-evaluation of the agenda and the release timelines. The proposed release date for 2.0 was August 16th.

The following topics will be covered this week:

  • Documentation review
  • Website review
  • Real-time Logging
  • Build system review
  • Crash in Collect found by Markus
  • Yarp transport

The following issues will be solved before 2.0.0:

  • Code size/compilation time issue
  • Tool + cmake macros to create new component projects
  • typegen tool to generate type kits
  • gitorious migration of all orocos projects, including code generation tools
  • OCL cleanup and migration to new cmake macros

These issues will be postponed until after 2.0.0:

  • Thread-safe property writing (allow a peer/3rd party to change a property)
  • Attribute/Property resolution, i.e. maybe it's easier to introduce a 'persistent' flag in properties that indicates whether a property needs to be serialized or not. Attributes can then be removed.
  • Service discovery. Sylvain manages these things in a supervision layer written in Ruby. It's not clear yet how far the C++ DeploymentComponent needs to go in this issue.
  • Diagnostics service that detects thread overruns, buffer overflows etc.
  • Connection browsing: ask a port to which other ports it is connected such that we can visualise that
  • Deployment GUI to show or create component networks
  • Full Mac OS X and Win32 support. These will mature in the 2.0.x releases as users on these platforms test the release.

The rest of the day continued as planned on the agenda. In the morning, a new CMake build system for components, plugins and executables was created to maximize maintainability and ease of use when creating new Orocos software. OCL too will switch to this system. The interface (CMake macros) and the logic behind it were discussed. This tool will be developed further so that it is ready before the 2.0 release.

In the afternoon, the documentation and website structure were discussed. We came to the conclusion that no-one downloads only the RTT. For 2.0, users will download RTT, the infrastructure components (TaskBrowser, Deployment, Reporting, Diagnostics, etc.) and the tool-chain (typekit generation, component generation, etc.). This will require restructuring the website and the documentation to no longer be RTT-centric, but 'Orocos ecosystem'-centric.

The documentation will contain 3 pillars:

  • Getting started
    1. Download toolchain
    2. Build
    3. Run demo
  • Setting up a real system
    1. Your first component
    2. Deploying it
    3. Creating a component network
  • Reference documentation
    1. API
    2. Cheat-Sheet
    3. Manuals

The reference manuals will be cleaned up too, so that they serve better as reference material and less as a first read for new users.

During this day, the code size problem, typegen development and Yarp transport were also further polished.

It ended with a visit to Parc Güell and a walk to the old city centre, where we enjoyed a well-deserved tapas meal.

Compiling RTT in Windows/MinGW + pthreads-w32

This page describes the steps to take in order to compile the Real-Time Toolkit (RTT) on a Windows machine, under MinGW with pthreads-w32.

The following has been tested on Windows XP, running in a virtual machine on Mac OS X Leopard.

Outstanding issues

  • Not all RTT tests pass
  • TAO does not completely build
  • CORBA support in RTT untested due to the above

Warning: the default GCC 3.4.5 compiler in MinGW outputs a lot of warnings when compiling RTT. Mostly they are "foo might be used uninitialized in this function" in STL code.

Install MinGW

See the links under 'More useful URLs' below for the basic approach.

The detailed instructions are in those URLs; basically, unless otherwise noted, all actions are performed in the MSYS Unix shell, and all Unix-built items are installed in /mingw (which is c:\msys\1.0\mingw in a DOS prompt).

  1. Install MinGW - base, C++, make (then add c:\mingw\bin to the system PATH)
  2. Install msysxxx.exe
  3. Install msys DTK
  4. Install bash (i386 versions) by untarring in / in msys
  5. Install coreutils-5.97-MSYS-1.0.11-snapshot.tar.bz by untarring it and then manually copying the contents to / (you have to mv the bin/cp command)
  6. Download autoconf, automake and libtool: untar, configure with --prefix=/mingw, and build/install
  7. Set env vars in /etc/profile: CFLAGS, PKG_CONFIG_PATH
  8. Install glib2, gettext and pkg-config from gtk URL. Extract into /mingw

Install dependency packages

Compile CMake from Unix source (in a build dir)
 cmake-xxx/bootstrap --prefix=/mingw --no-qt-gui
 make && make install
Run the pthreads-w32 installer (it just untars)
    - manually copy pre-built/include/* to /c/mingw/include    (C:\mingw/include)
    - manually copy pre-built/lib/*GC2* to /c/mingw/lib        (C:\mingw/lib)
    - to run the pthreads tests, copy the prebuilt .a/.dll into the .. dir, and copy queueuserapcex to ../..
Boost (as of Jan 2009, use v1.35 not v1.37 until we fix RTT for v1.37)
    *** DOS shell ***
    cd boost-jam-xxx
    .\build.bat gcc        ** won't build in the unix shell with build.sh **
    *** unix shell ***
    cd boost-jam-xxx
    cp bin.ntx86/bjam.exe /mingw/bin
    cd ~/software/build/boost_1_35
    bjam --toolset=gcc --layout=system --prefix=/mingw --with-date_time --with-graph \
        --with-system --with-function_types --with-program_options install
CppUnit: get the tarball from SourceForge
    untar and configure with --prefix=/mingw
    correct line 7528 in libtool so that the first item is c:/MinGW/bin/../lib/dllcrt2.o
    make && make install

Build RTT

Get the trunk of RTT, patch it with this file, configure (make sure OROCOS_TARGET=win32 is set), make, and install as usual.
 cd /path/to/rtt; patch -p0 < patch-rtt-mingw-1.patch
 

Set your PATH

Ensure your PATH in the MSYS shell contains /mingw/bin and /mingw/lib.

Test your setup

Next test your setup with a 'make check'. Currently 4 of 8 tests fail ... more work to do here.

Partial ACE/TAO CORBA build

This gets most of ACE/TAO to build, but not yet all.
    download, follow the MinGW build instructions on the website.
        add "#undef ACE_LACKS_USECONDS_T" to ace/config-win32-mingw.h before compiling
    copy ace/libACE.dll to /mingw/lib
    make TAO ** this fails
    You can build all we need by manually running 'make' in the following directories. Note that the last couple of TAO directories have problems.
        ace, ace/protocols, kokyu, tao, tao/TAO_IDL, tao/orbsvcs
NB: ACE can be built in parallel, but not its tests nor TAO.

NB: Not all tests pass. At least one of the ACE tests fails.

More useful URLs

http://www.mingw.org/wiki/MinGWiki
http://iua-share.upf.es/wikis/clam/index.php/Devel/Windows_MinGW_build
http://www.gimp.org/~tml/gimp/win32/
http://www.gtk.org/download-windows.html
http://www.cleardefinition.com/page/Build_Boost_for_MinGW/
http://www.dre.vanderbilt.edu/~schmidt/DOC_ROOT/ACE/ACE-INSTALL.html#mingw
http://psi-im.org/wiki/Compiling_Qt4_on_Windows
http://www.qtsoftware.com/downloads/opensource/appdev/windows-cpp

D. Fourth day

Logging Day

Stephen gives an overview of the current log4cpp + Orocos architecture and how he accomplished real-time logging. Log4cpp supports:

  • 1 category can have 0..* appenders
  • 1 appender has 0..1 category (0 makes no sense though)
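
In plain log4cpp terms (standard log4cpp API, not the real-time OCL variant), the category/appender relation looks roughly like this:

 #include <log4cpp/Category.hh>
 #include <log4cpp/FileAppender.hh>

 int main()
 {
     // One category...
     log4cpp::Category& cat =
         log4cpp::Category::getInstance("org.orocos.rtt.example");
     // ...with one (of possibly several) appenders attached to it.
     cat.addAppender(new log4cpp::FileAppender("file", "orocos.log"));
     cat.info("category wired to a file appender");
     return 0;
 }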

Orocos supports

  • An RTString type that uses TLSF, but still lives in OCL.
  • A real-time logging category type. Any number of these can be created, each with its own 'org.orocos.rtt.xyz' scope.
  • A file appender component type. Appending over the network (CORBA) is untested, though.

Decisions for v2.0

  • Deprecate RTT::Logger in v2.0
  • Move OCL::String into RTT
  • Move log4cpp itself into RTT (or into orocos-log4cpp gitorious project)

v2.2 or later

  • Move OCL::Logging into RTT and port to v2.x
  • Make LoggingService support lookup of ports by category (looked up via an operation)
  • Support multiple appenders per category
  • Either logging messages go to stderr when no appender is connected to a category yet, or they continue to be discarded
  • The deployer by default starts LoggingService and a FileAppender (writing to orocos.log). Users can turn this behaviour off with a command-line parameter, allowing them to configure the logging system via a site deployment file.
  • Add streaming capability: logger->debug << xyz;
  • Replace RTT::Logger with calls to RTT::Logging framework
  • Complete OCL::String plugin to support use within scripting
  • Add LoggingPlugin
  • Support use from scripting to query, modify and use OCL::Category
  • Add additional appenders (eg socket)

Services discussion

Peter explains how services made their entry into the design and how they can be used.

  • Services have to have different names from ports (v2)
  • TaskContext has a default service (this->provides())
  • TC is really a service container/executor.
  • Properties and operations must be in a service
  • Ports were _not_ in a service. This will be changed so that ports belong to a Service. A provided Service can have both input and output ports. This is reasonable and matches real-world semantics; however, it does sound slightly contradictory, so it must be well explained with examples.
  • There is talk of dropping the “Provider” in “ServiceProvider” and just having “Services” and “ServiceRequesters”.
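
A minimal sketch of the 'TaskContext as service container' view (2.x-style API; the 'motion' sub-service is a made-up example):

 #include <rtt/TaskContext.hpp>

 class Robot : public RTT::TaskContext
 {
     double speed;
 public:
     Robot() : RTT::TaskContext("robot"), speed(0.0)
     {
         // The TaskContext's own interface is its default service,
         // this->provides(); operations live in it or in sub-services.
         RTT::Service::shared_ptr motion = this->provides("motion");
         motion->addOperation("setSpeed", &Robot::setSpeed, this);
     }
     bool setSpeed(double s) { speed = s; return true; }
 };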

E. Fifth day

It's hacking day: implementing and finishing most of what we started this week.

  • Stephen is testing on Mac OS X. He found a bug in TLSF where NULL and 0 were mixed up, causing it not to handle memory-exhaustion cases correctly.
  • Peter makes the API changes that were proposed and fixes bugs others find on the go.
  • Sylvain is setting up the gitorious project.