compare with iceoryx #108

Open
flyonaie opened this issue Apr 19, 2024 · 1 comment
Labels
documentation Improvements or additions to documentation

Comments

@flyonaie
flyonaie commented Apr 19, 2024

Original description by issue filer: your performance is higher than iceoryx?

Then I (@ygoldfeld) edited in the following:

It would be good to add a "Comparison with other approaches and products" section to the documentation (the guided Manual). It would cover iceoryx potentially, but also other things like local gRPC and capnp-RPC -- and the fact that Flow-IPC can be used together with those to make them faster and possibly easier to use.

I shall also pre-answer the iceoryx question in a comment below.

ygoldfeld added the documentation label on Apr 22, 2024
@ygoldfeld
Contributor

To pre-answer the original question: "your performance is higher than iceoryx?"

Firstly, there's quite a bit of discussion between me and the iceoryx folks in the Hacker News announcement thread: https://news.ycombinator.com/item?id=40028118

I'd like to summarize my impressions, compared to iceoryx 1 (the C++ one -- not iceoryx 2, the Rust one, with C++ bindings to come according to the authors). I personally would need to play with it more to give a more authoritative answer, but I think the below isn't bad.

iceoryx is also fully open-source, and it looks like some commercial consulting is available as well.

At least on the surface it has a similar tag-line: true zero-copy IPC.

Currently, judging from the available documentation and the conversation above, iceoryx has a raw-performance advantage in the low-level IPC transport it provides: its published benchmarks show truly impressive latencies in the microseconds. Flow-IPC lacks a like-for-like comparison for our three available low-level transports (Native_socket_stream -- Unix domain socket; Blob_stream_mq_sender/receiver<Bipc_mq_handle> -- Boost-SHM-based MQ; Blob_stream_mq_sender/receiver<Posix_mq_handle> -- POSIX MQ). However, the README.md example (the perf_demo test/example program in the source) provides an upper bound, since it performs the low-level transport plus Cap'n Proto encoding/decoding (which adds some latency). It was also run on different hardware, so it's not a direct comparison either, but roughly speaking the result isn't bad: ~100 microseconds RTT.
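For a sense of what a like-for-like number would measure, here is a standalone RTT microbenchmark over a raw Unix domain socket -- the primitive underlying Native_socket_stream. To be clear, this is a hypothetical sketch using plain POSIX calls, not Flow-IPC API; and because it ping-pongs within one process, it omits cross-process scheduler wake-ups and thus understates a real IPC RTT.

```cpp
// Hypothetical sketch: RTT over a raw Unix-domain-socket pair via plain
// POSIX calls (not Flow-IPC API).  Single-process ping-pong, so it omits
// cross-process scheduling cost and understates a real IPC round trip.
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>

int main()
{
  int sv[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) { return 1; }

  constexpr int N_ITERS = 10000;
  char buf[64] = {};

  const auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i != N_ITERS; ++i)
  {
    write(sv[0], buf, sizeof buf); read(sv[1], buf, sizeof buf); // "request"
    write(sv[1], buf, sizeof buf); read(sv[0], buf, sizeof buf); // "response"
  }
  const auto t1 = std::chrono::steady_clock::now();

  const double ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
  std::printf("mean RTT: %.2f usec\n", ns / 1000.0 / N_ITERS);

  close(sv[0]); close(sv[1]);
}
```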

Since iceoryx has implemented a custom lock-free queue and optimized it for applications such as automotive internal systems, it can squeeze in every last bit of performance, shaving off those additional microseconds. That said, once one is in the 100-usec range, many (though not all) applications do not care further.

In terms of zero-copy, the two are equal, in that payload size and transmission latency are independent: transmitting a gigabyte is as quick as transmitting a few bytes. Plus, Flow-IPC is modular, so it would be a fairly simple matter to (e.g.) plug in an iceoryx pipe and get all the benefits of capnp serialization and/or SHM-jemalloc goodies, while also obtaining the custom lock-free awesomeness of iceoryx.
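To illustrate the modularity point: conceptually, the layers above the low-level transport only need a way to move opaque blobs. The following is a purely hypothetical adapter interface -- neither Flow-IPC's actual concept API nor iceoryx's -- sketching how an iceoryx-backed pipe could slot in underneath the capnp/SHM layers.

```cpp
// Purely hypothetical sketch (not real Flow-IPC or iceoryx API): the upper
// layers (capnp serialization, SHM-handle exchange) see only a blob pipe.
#include <cstddef>
#include <cstdint>

// Minimal pipe concept: move opaque byte blobs between processes.
class Blob_pipe
{
public:
  virtual ~Blob_pipe() = default;
  virtual bool send_blob(const std::uint8_t* data, std::size_t size) = 0;
  virtual std::size_t receive_blob(std::uint8_t* target, std::size_t capacity) = 0;
};

// An iceoryx-backed pipe would wrap an iceoryx publisher/subscriber pair
// behind the same interface; nothing above this layer would need to change.
class Iceoryx_pipe : public Blob_pipe
{
public:
  bool send_blob(const std::uint8_t* /*data*/, std::size_t /*size*/) override
  {
    // ...loan an iceoryx chunk, fill it, publish...
    return true;
  }
  std::size_t receive_blob(std::uint8_t* /*target*/, std::size_t /*capacity*/) override
  {
    // ...take a chunk from the subscriber, copy out or hand off the pointer...
    return 0;
  }
};
```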

Beyond that, the performance of the two is not directly comparable, as they have different feature sets. E.g., Flow-IPC provides direct SHM-allocation capability and STL-transmission-over-IPC; iceoryx has no equivalent to compare against, so the question is moot there.
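For readers unfamiliar with the STL-transmission idea: the underlying technique is to back standard containers with a SHM allocator, so another process can open the same structure in place with no serialization or copying. This sketch shows that general idea using plain Boost.Interprocess (which Flow-IPC builds on), rather than Flow-IPC's own, more automated API.

```cpp
// General idea only, shown via plain Boost.Interprocess -- Flow-IPC's own
// STL-over-IPC API is higher-level than this.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>

namespace bip = boost::interprocess;
using Shm_alloc = bip::allocator<int, bip::managed_shared_memory::segment_manager>;
using Shm_vector = bip::vector<int, Shm_alloc>;

int main()
{
  // Process A: create a SHM segment and construct a vector directly in it.
  bip::shared_memory_object::remove("demo_shm");
  bip::managed_shared_memory segment(bip::create_only, "demo_shm", 64 * 1024);
  auto* vec = segment.construct<Shm_vector>("the_vec")(segment.get_segment_manager());
  vec->assign({1, 2, 3});

  // Process B (shown in-process for brevity) opens the same structure in
  // place: the payload is never copied or serialized.
  bip::managed_shared_memory peer(bip::open_only, "demo_shm");
  Shm_vector* found_vec = peer.find<Shm_vector>("the_vec").first;
  std::printf("size = %zu\n", found_vec->size()); // prints: size = 3

  bip::shared_memory_object::remove("demo_shm");
}
```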
