Introduce Valkey Over RDMA transport #477
base: unstable
Conversation
Codecov report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##           unstable     #477      +/-   ##
============================================
+ Coverage     70.07%   70.23%   +0.15%
============================================
  Files           109      109
  Lines         59956    59957       +1
============================================
+ Hits          42016    42110      +94
+ Misses        17940    17847      -93
This PR can be tested from the client side. To build a client with RDMA support:

To test with commands:
Many cloud providers offer RDMA acceleration on their platforms, so I think there is a solid foundation for applying Valkey over RDMA. We ran some performance tests of this PR on 8th-generation ECS instances (g8ae.4xlarge, 16 vCPUs, 64 GB DDR) provided by Alibaba Cloud. The results indicate that, compared to TCP sockets, RDMA can significantly enhance performance. Test command on the server side:

Test command on the client side:

The performance test results are shown in the following table. Apart from LRANGE_100 (which improved, but not substantially), in the other scenarios (PING, SET, GET) throughput increased by at least 76%, and the average (AVG) and P99 latencies dropped by at least 40%.
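The setup described above can be sketched as follows (a sketch only: the `port=`/`bind=` module arguments and the benchmark's `--rdma` flag are taken from the commands quoted later in this thread; the addresses are placeholders):

```shell
# Server: TCP on 6379 plus the RDMA module listening on RDMA port 6379
# (the RDMA port namespace is separate from TCP, so there is no conflict).
./src/valkey-server --port 6379 \
    --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx \
    --protected-mode no

# Client: a valkey-benchmark built with RDMA support.
./src/valkey-benchmark -h xx.xx.xx.xx -p 6379 -c 30 -n 10000000 \
    --threads 4 -d 1024 -t ping,get,set --rdma
```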
Hi @hz-cheng, I notice that you are the author of the Alibaba Cloud eRDMA driver for both the Linux kernel and rdma-core. Very cool!
Also, if necessary, I could try reaching out to the relevant colleagues to see whether we can offer some Alibaba Cloud ECS instances to the community for free, so that the community can use and test Valkey over RDMA, and for future CI/CD purposes.
Is there a corresponding client that enables RDMA?
Please see this comment.
Hi @madolson , |
Almost doubled throughput is impressive. I don't know much about RDMA. It's many lines of code, but all of it is in the module. That's great, but what are the risks of breaking it if we change something in the connection abstractions? We need to be aware that once we merge this, we will have to keep maintaining it. Is it possible to use TLS with RDMA?
@pizhenwei The numbers do look great. I haven't gotten a chance to look at it yet, hopefully some time this week.
Hi,

Because valkey-rdma.so (if built as a module) uses the

To avoid the risks from the mismatched
Once the core connection abstraction changes, all connection types must do the compat work; this rule also applies to RDMA. I volunteer to maintain this RDMA support. PS: I have experience in open source communities such as the Linux kernel, QEMU, Redis, SPDK, libiscsi, tgt, atop, util-linux and procps-ng.
As far as I can see, we can't use TLS with RDMA currently. I read the OpenSSL Abstract Record Layer document; TLS over RDMA is workable in theory, but it would be a large amount of work.
@pizhenwei Thanks for your contribution, and @hz-cheng thanks for your excellent numbers.
Let the core team members discuss this important feature; we'll send you feedback ASAP. Thanks!
There are two parts to this PR:
I have no experience with Windows RDMA. I read the documentation and found that Windows does support RDMA, but not the Linux-style Verbs API. This means we would need a Windows version in the future (I imagine an rdma-windows.c would be needed).
It's quite easy to build RDMA support into Valkey with a few lines of change, but then valkey-server has to link libibverbs.so and librdmacm.so. Let's look at the dynamic shared libraries of the module version:
If a user starts valkey-server with the RDMA module, valkey-server loads the additional shared libraries on demand. If building RDMA into Valkey is necessary, please let me know.
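One way to check this dependency split (a sketch; the paths assume an in-tree build, and `libibverbs`/`librdmacm` are the RDMA userspace libraries mentioned above):

```shell
# The main binary should not require the RDMA userspace libraries...
ldd ./src/valkey-server | grep -E 'libibverbs|librdmacm'   # expect no matches

# ...while the loadable module links them, so they are only
# pulled in when the module is actually loaded.
ldd ./src/valkey-rdma.so | grep -E 'libibverbs|librdmacm'
```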
Currently, valkey-server supports 3 connection types:
Run valkey-server with this command:

Once RDMA is loaded:

RDMA performs better in a good network environment, as in @hz-cheng's and my test reports. However, I tested mlx5 with a packet drop rate of 0.001: TCP performance was affected only slightly, while RDMA performance dropped a lot. I imagine a topology like:
It's possible to use RDMA within a short distance, or TCP over a long distance.
I'd like to get this merged. It's a good contribution. I like that it's a module. When it's merged, we can let clients implement it and test it.
The RDMA port is neither a TCP port nor a UDP port, right? We've been talking about the possibility of adding QUIC in the future (optional dependency, maybe as a module too), and that could be on the same port too, right, since it's UDP?
Actually, the client side (for C only) is ready (as you can see, several people and I have already produced test reports). Once the server side is merged, I'll create a PR for the client as soon as possible.
Right.
Right.
@pizhenwei Actually, the latest RDMA technology, such as Alibaba Cloud Elastic RDMA, doesn't encounter this performance drop at a packet drop rate of 0.001, because the latest RDMA technology widely supports SACK lossy optimization.
Hi, let's focus on "why does Valkey need to enable both TCP/IP and RDMA together", or "is enabling both TCP/IP and RDMA useful in real scenarios", rather than extending the topic to "the latest RDMA technology" here.
Hi @zuiderkwast, I created a new PR for the documentation part and force-pushed a new version here.
Looks good now. Just some minor comments.
* Copyright (C) 2021-2024 zhenwei pi <[email protected]>
*
* This work is licensed under License 1 of the COPYING file in
* the top-level directory.
@madolson Any suggestion on what we should write at the top of the file?
RDMA is the abbreviation of remote direct memory access. It is a technology that enables computers in a network to exchange data in main memory without involving the processor, cache, or operating system of either computer. This means RDMA has better performance than TCP; the test results show Valkey Over RDMA reaches ~2.5x QPS with lower latency. In recent years, RDMA has become popular in the data center; in particular, the RoCE (RDMA over Converged Ethernet) architecture is widely used. Cloud vendors have also started to offer RDMA instances to accelerate networking performance, so end users can enjoy the improvement easily.

Introduce the Valkey Over RDMA protocol as a new transport for Valkey. For now, we define 4 commands:
- GetServerFeature & SetClientFeature: used to negotiate features for further extension. There is no feature definition in this version. Flow control and multi-buffer may be supported in the future; this needs feature negotiation.
- Keepalive
- RegisterXferMemory: the heart of transferring the real payload. The "TX buffer" and "RX buffer" are built on RDMA remote memory with RDMA write / write-with-imm. This is similar to (but not the same as) mechanisms introduced in several papers:
  - Socksdirect: datacenter sockets can be fast and compatible <https://dl.acm.org/doi/10.1145/3341302.3342071>
  - LITE Kernel RDMA Support for Datacenter Applications <https://dl.acm.org/doi/abs/10.1145/3132747.3132762>
  - FaRM: Fast Remote Memory <https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-dragojevic.pdf>

Link: valkey-io/valkey#477
Co-authored-by: Xinhao Kong <[email protected]>
Co-authored-by: Huaping Zhou <[email protected]>
Co-authored-by: zhuo jiang <[email protected]>
Co-authored-by: Yiming Zhang <[email protected]>
Co-authored-by: Jianxi Ye <[email protected]>
Signed-off-by: zhenwei pi <[email protected]>
Main changes in this patch:
* implement the server side of the connection module only; this means we can *NOT* compile RDMA support as built-in.
* add necessary information in README.md
* support 'CONFIG SET/GET', for example 'CONFIG SET rdma.port 6380', then check this by 'rdma res show cm_id' and valkey-cli (with RDMA support, but not implemented in this patch)
* the full listeners show like:
  listener0:name=tcp,bind=*,bind=-::*,port=6379
  listener1:name=unix,bind=/var/run/valkey.sock
  listener2:name=rdma,bind=xx.xx.xx.xx,bind=yy.yy.yy.yy,port=6379
  listener3:name=tls,bind=*,bind=-::*,port=16379

valgrind test works fine:
  valgrind --track-origins=yes --suppressions=./src/valgrind.sup --show-reachable=no --show-possibly-lost=no --leak-check=full --log-file=err.txt ./src/valkey-server --port 6379 --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx --loglevel verbose --protected-mode no --server_cpulist 2 --bio_cpulist 3 --aof_rewrite_cpulist 3 --bgsave_cpulist 3 --appendonly no

performance test, server side (TCP port 6379 has no conflict with RDMA port 6379):
  ./src/valkey-server --port 6379 --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx bind=yy.yy.yy.yy --loglevel verbose --protected-mode no --server_cpulist 2 --bio_cpulist 3 --aof_rewrite_cpulist 3 --bgsave_cpulist 3 --appendonly no

client side: build a valkey-benchmark with RDMA support (not implemented in this patch), run on x86 (Intel Platinum 8260) with a RoCEv2 interface (Mellanox ConnectX-5):
  ./src/valkey-benchmark -h xx.xx.xx.xx -p 6379 -c 30 -n 10000000 --threads 4 -d 1024 -t ping,get,set --rdma

  ====== PING_INLINE ====== 480561.28 requests per second, 0.060 msec avg latency.
  ====== PING_MBULK ====== 540482.06 requests per second, 0.053 msec avg latency.
  ====== SET ====== 399952.00 requests per second, 0.073 msec avg latency.
  ====== GET ====== 443498.31 requests per second, 0.065 msec avg latency.

Signed-off-by: zhenwei pi <[email protected]>
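The runtime reconfiguration mentioned in that patch can be sketched as a short session (a sketch: the `rdma.port` config name and the `rdma res show cm_id` check come from the commit message above; the host address is a placeholder):

```shell
# Move the RDMA listener to port 6380 at runtime, then verify.
valkey-cli -h xx.xx.xx.xx CONFIG SET rdma.port 6380
valkey-cli -h xx.xx.xx.xx CONFIG GET rdma.port

# Kernel-side check: list RDMA communication identifiers (cm_id),
# which should show the listener bound to the new port.
rdma res show cm_id
```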
Hi,
In June 2021, I created a PR for a Redis Over RDMA proposal. I then did some work to fully abstract the connection layer and make TLS dynamically loadable; since Redis 7.2.0, a new connection type can be built into Redis statically or as a separate shared library (loaded by Redis on startup).
Based on the new connection framework, I created a new PR; some people (@xiezhq-hermann @zhangyiming1201 @JSpewock @uvletter @FujiZ) noticed, played with, and tested it. However, due to lack of time and knowledge from the maintainers, that PR was pending for about 2 years.
Related doc: Introduce Valkey Over RDMA specification. (This is the same as the Redis one, and the two should remain the same.)
Changes in this PR:
Finally, if this feature is considered for merging, I volunteer to maintain it.