Hello,
I would just like to share a few thoughts about the reporter. In
particular, I think that the current serialization method for std::vector
in the typekits is inefficient for large arrays. I have a typekit where I
have:
1) 10-20 scalars
2) and a few std::vector arrays. Two are big (> 400 elements), one has
roughly 100 elements, and a few more have roughly 10 elements each. Each
element is a float (see the sketch below).
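For concreteness, the type I am reporting looks roughly like the sketch
below (the field names are made up for illustration; the real typekit
differs in detail):

  // Hypothetical illustration of the reported type described above.
  #include <vector>

  struct MeasurementSample
  {
      double timestamp;                 // one of the 10-20 scalars
      double scalar1, scalar2;          // ...more scalars...
      std::vector<float> bigVectorA;    // > 400 elements
      std::vector<float> bigVectorB;    // > 400 elements
      std::vector<float> mediumVector;  // ~100 elements
      std::vector<float> smallVector;   // ~10 elements
  };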
My computation component produces data at 25 Hz. What I observed, together
with Ruben, is that the corresponding reporter consumes 100% of a single
CPU core. In our app we use the netCDF reporter, and after some analysis I
found out that I am missing quite a few samples -- most of the time every
second sample.
Looking at the deployer log it is pretty clear that serialization
decomposes each array into a sequence of scalars. I can see that this is
partly legacy and that it is "OK" for small arrays. For big arrays it is
overkill: every time one sample (a std::vector) has to be logged, each
element of the vector is appended to a separate per-element array, instead
of the whole vector being written to consecutive memory locations.
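To make the difference concrete, here is a minimal sketch of the two write
patterns (my own illustration, not the reporter's actual code), assuming a
hypothetical Column type that owns one growing array per reported scalar:

  // Illustration only -- not the reporter's real data structures.
  #include <cstddef>
  #include <vector>

  struct Column { std::vector<float> samples; };  // one array per scalar

  // Decomposed logging: every element of the sample goes to its own column.
  void logDecomposed(std::vector<Column>& columns,
                     const std::vector<float>& sample)
  {
      for (std::size_t i = 0; i < sample.size(); ++i)
          columns[i].samples.push_back(sample[i]);  // 400+ scattered appends
  }

  // Non-decomposed logging: the whole sample is copied in one block.
  void logWhole(std::vector<float>& column, const std::vector<float>& sample)
  {
      column.insert(column.end(), sample.begin(), sample.end());  // one bulk copy
  }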
To be honest, I don't have a clue how netCDF works nor how reporting works
under the hood; I am just wondering why the reporter is slow in my case.
BTW, I am using an Intel SSD (1-2 years old), so write times should be
fast. According to iotop, the write speed is < 500 kB/s.
Moreover, reading of the generated .nc files is quite slow. When I convert
them to .mat (either from Python or MATLAB), reading times are much better
-- a 10 MB file loads instantly. I tend to believe that if the number of
columns were smaller (std::vector -> one single entry in the netCDF file),
the read time would be shorter -- just my intuition.
Finally, a question for the devs: how hard would it be to change the
serialization of std::vector-like arrays?
Possible inefficiency of the serialization for reporting
Hi Milan,
On Thu, Apr 17, 2014 at 12:46 PM, Milan Vukov
<milan [dot] vukov [..] ...> wrote:
> Hello,
>
> I would just like to share a few thoughts about the reporter. In particular,
> I think that the current deserialization method for std::vector in the
> typekits is inefficient for large arrays. I have a typekit where I have:
>
> 1) 10-20 scalars
> 2) and a few std::vector arrays. Two are big (> 400 elements), one which has
> cca 100 elements, and a few more with cca 10 samples. Each element is a
> float.
>
> My computation component spits data at 25 Hz. What I observed, together with
> Ruben, is that corresponding reporter consumes 100% of CPU (single core). In
> our app, we use netCDF reporter, and after some analysis I found out that I
> miss quite some samples -- most of the time every 2nd sample.
>
> Looking at a deployer log it is pretty much clear that serialization
> decomposes each array into a sequence of scalars. I can see that this is
> some legacy plus that it is "OK" for small arrays. For big arrays this is
> overkill, simply because each time one sample (std::vector) has to be
> logged, instead of being written to consecutive mem. locations each element
> of the vector is appended to a separate array.
>
> To be honest, I don't have a clue how netCDF works nor how reporting works
> under the hood, I am just thinking why is reporter slow in my case.
It is. The reason we're serializing a vector into separate elements is
that kst couldn't read arrays. You can try it for yourself by setting the
"Decompose" property to false in the netcdf reporter.
Since the reporter is generic, this will also disable decomposition for all
the other types (structs), so for netcdf it means that only ports using
std::vector<double> and primitive types (float, double, int, ...) can still
be reported. Serializing vector<float> at once (i.e. decomposition == false)
is not implemented for netcdf...
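For reference, setting that property from C++ could look roughly like the
sketch below (assumptions: 'reporter' is the reporting TaskContext and the
"Decompose" property is reachable through its property bag; in a deployment
script the equivalent would simply be assigning false to the reporter's
Decompose property):

  // Sketch: turning decomposition off on the netcdf reporter.
  #include <rtt/TaskContext.hpp>
  #include <rtt/Property.hpp>

  void disableDecomposition(RTT::TaskContext* reporter)
  {
      RTT::base::PropertyBase* pb =
          reporter->properties()->getProperty("Decompose");
      RTT::Property<bool>* decompose =
          dynamic_cast<RTT::Property<bool>*>(pb);
      if (decompose)
          decompose->set(false);  // report whole vectors, not per-element columns
  }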
>
> BTW, I am using an Intel SSD (1-2 years old), so write times should be fast.
> Looking at iotop, it says that write speed is < 500 kB/s.
>
> Moreover, reading of the generated .nc files is quite slow. When I convert
> this to .mat (either from python or MATLAB) reading times are much better --
> 10MB file loads instantly. I kinda believe if number of columns would be
> smaller (std::vector -> one single entry in the netCDF file), read time
> would be shorter -- just my intuition.
>
> Finally, a question for devs: how hard would be to change serialization of
> std::vector-like arrays?
Take a look at reporting/Netcdf[Header]Marshaller.hpp -- that's all the
code there is. The ReportingComponent does the creation, updating and
decomposition of port data into property bags.
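If someone wants to experiment with writing a whole vector sample at once,
the idea could be sketched with the plain netCDF C API roughly as follows
(an illustration only, not the existing marshaller code; it assumes the
variable was defined with an unlimited time dimension and a fixed element
dimension):

  // Sketch: one 2-D netCDF variable (time x element) per vector port,
  // written one whole sample at a time instead of element by element.
  #include <netcdf.h>
  #include <vector>
  #include <cstddef>

  int writeSample(int ncid, int varid, std::size_t sampleIndex,
                  const std::vector<float>& sample)
  {
      // Hyperslab covering one row: [sampleIndex][0 .. size-1].
      size_t start[2] = { sampleIndex, 0 };
      size_t count[2] = { 1, sample.size() };
      return nc_put_vara_float(ncid, varid, start, count, sample.data());
  }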
Peter
Possible inefficiency of the serialization for reporting
Hi Peter,
Thanks a lot for the answers! If I understood you correctly, one solution
would be to use multiple ports and set Decompose to off. At the moment I
think I will stick to my struct-of-arrays approach, since I would like to
keep timestamps together with the data (every typekit in my app has a
timestamp