
Introduce Valkey Over RDMA transport #477

Open · wants to merge 1 commit into base: unstable

Conversation

pizhenwei

@pizhenwei pizhenwei commented May 9, 2024

Hi,

In June 2021, I created a PR for the Redis Over RDMA proposal. I then did some work to fully abstract the connection layer and make TLS dynamically loadable; since Redis 7.2.0, a new connection type can be built into Redis statically or as a separate shared library loaded by Redis on startup.

Based on the new connection framework, I created a new PR. Several people (@xiezhq-hermann @zhangyiming1201 @JSpewock @uvletter @FujiZ) noticed, tried, and tested it. However, due to the maintainers' lack of time and domain knowledge, that PR has been pending for about two years.

Related doc: Introduce Valkey Over RDMA specification. (the same as the Redis one, and the two should remain the same)

Changes in this PR:

  • implement Valkey Over RDMA (conforming to the Valkey code style)

Finally, if this feature is considered for merging, I volunteer to maintain it.


codecov bot commented May 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.23%. Comparing base (6bab2d7) to head (a9e96f4).
Report is 3 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable     #477      +/-   ##
============================================
+ Coverage     70.07%   70.23%   +0.15%     
============================================
  Files           109      109              
  Lines         59956    59957       +1     
============================================
+ Hits          42016    42110      +94     
+ Misses        17940    17847      -93     

see 10 files with indirect coverage changes

@pizhenwei
Author

This PR can be tested using the client.

To build the client with RDMA support:

make BUILD_RDMA=yes -j16

To test by commands:

Config of the server: appendonly no, port 6379, rdma-port 6379,
                 server_cpulist 12, bgsave_cpulist 16.
For RDMA: ./redis-benchmark -h HOST -c 30 -n 10000000 -r 1000000000 \
          --threads 8 -d 512 -t ping,set,get,lrange_100 --rdma \
	  --server_cpulist 2 --bio_cpulist 3 --aof_rewrite_cpulist 3 --bgsave_cpulist 3
For TCP: ./redis-benchmark -h HOST -c 30 -n 10000000 -r 1000000000 \
          --threads 8 -d 512 -t ping,set,get,lrange_100

@madolson madolson added the major-decision-pending Major decision pending by TSC team label May 12, 2024
@hz-cheng

hz-cheng commented May 20, 2024

Many cloud providers offer RDMA acceleration on their cloud platforms, and I think that there is a foundational basis for the application of Valkey over RDMA. We performed some performance tests on this PR on the 8th generation ECS instances (g8ae.4xlarge, 16 vCPUs, 64G DDR) provided by Alibaba Cloud. Test results indicate that, compared to TCP sockets, the use of RDMA can significantly enhance performance.

Test command of server side:

./src/valkey-server --port 6379 \
  --loadmodule src/valkey-rdma.so port=6380 bind=11.0.0.114 \
  --loglevel verbose --protected-mode no \
  --server_cpulist 12 --bgsave_cpulist 16 --appendonly no 

Test command of client side:

# Test for RDMA
./src/redis-benchmark -h 11.0.0.114 -p 6380 -c 30 -n 10000000 -r 1000000000 \
          --threads 8 -d 512 -t ping,set,get,lrange_100 --rdma

# Test for TCP socket
./src/redis-benchmark -h 11.0.0.114 -p 6379 -c 30 -n 10000000 -r 1000000000 \
          --threads 8 -d 512 -t ping,set,get,lrange_100

The performance test results are shown in the following table. Apart from LRANGE_100 (which improves, though not substantially), in the other scenarios (PING, SET, GET) throughput increases by at least 76%, and the average (AVG) and P99 latencies drop by at least 40%.

| Metric | RDMA | TCP | RDMA/TCP |
| --- | --- | --- | --- |
| **PING_INLINE** | | | |
| Throughput | 666577.81 | 366394.31 | 181.93% |
| Latency-AVG | 0.044 | 0.08 | 55.00% |
| Latency-P99 | 0.063 | 0.127 | 49.61% |
| **PING_MBULK** | | | |
| Throughput | 688657.81 | 395397.56 | 174.17% |
| Latency-AVG | 0.042 | 0.073 | 57.53% |
| Latency-P99 | 0.063 | 0.119 | 52.94% |
| **SET** | | | |
| Throughput | 434744.78 | 157726.22 | 275.63% |
| Latency-AVG | 0.068 | 0.188 | 36.17% |
| Latency-P99 | 0.111 | 0.183 | 60.66% |
| **GET** | | | |
| Throughput | 562587.94 | 319478.59 | 176.10% |
| Latency-AVG | 0.052 | 0.091 | 57.14% |
| Latency-P99 | 0.079 | 0.151 | 52.32% |
| **LRANGE** | | | |
| Throughput | 526260.38 | 211434.36 | 248.90% |
| Latency-AVG | 0.056 | 0.14 | 40.00% |
| Latency-P99 | 0.079 | 0.159 | 49.69% |
| **LRANGE_100** | | | |
| Throughput | 57106.96 | 49498.34 | 115.37% |
| Latency-AVG | 0.427 | 0.499 | 85.57% |
| Latency-P99 | 4.207 | 13.367 | 31.47% |

@pizhenwei
Author

Many cloud providers offer RDMA acceleration on their cloud platforms, and I think that there is a foundational basis for the application of Redis over RDMA. [...] Test results indicate that, compared to TCP sockets, the use of RDMA can significantly enhance performance.

Hi, @hz-cheng

I noticed that you are the author of the Alibaba Cloud erdma driver in both the Linux kernel and rdma-core. Cool!

@hz-cheng

hz-cheng commented May 21, 2024

Furthermore, if necessary, I could try reaching out to relevant colleagues to see whether we can offer some Alibaba Cloud ECS instances to the community for free, so that the community can use and test Valkey over RDMA, and also use them for future CI/CD purposes.

@baronwangr

Is there a corresponding client that enables RDMA?

@pizhenwei
Author

Is there a corresponding client that enables RDMA?

Please see this comment.

@pizhenwei
Author

Many cloud providers offer RDMA acceleration on their cloud platforms [...] the use of RDMA can significantly enhance performance.

Hi @madolson ,
The feedback from the cloud-vendor (Alibaba Cloud) side shows the improvement, which means many end users will be able to benefit from it easily. Please let me know if you have any concerns about this feature.

@zuiderkwast
Contributor

Almost doubled throughput is impressive. I don't know much about RDMA. It's many lines of code, but all of it is in the module. That's great, but what are the risks of breaking it if we change something in the connection abstractions? We need to be aware that once we merge this, we will have to keep maintaining it.

Is it possible to use TLS with RDMA?

@madolson
Member

@pizhenwei The numbers do look great. I haven't gotten a chance to look at it yet, hopefully some time this week.

@pizhenwei
Author

Almost doubled throughput is impressive. I don't know much about RDMA. It's many lines of code, but all of it is the module. That's great, but what are the risks of breaking it if we change something in the connection abstractions? We need to be aware that when we merge this, we will have to keep maintaining this.

Hi,

Because valkey-rdma.so (if built as a module) uses struct ConnectionType as an ABI, the RDMA support must change together with the core connection abstractions.

To avoid the risks of a mismatched struct ConnectionType, the module side checks the version strictly (as does valkey-tls.so), like:

    /* Connection modules MUST be part of the same build as valkey. */
    if (strcmp(REDIS_BUILD_ID_RAW, serverBuildIdRaw())) {
        serverLog(LL_NOTICE, "Connection type %s was not built together with the valkey-server used.", CONN_TYPE_RDMA);
        return VALKEYMODULE_ERR;
    }

Once the core connection abstraction changes, all the connection types must do the compatibility work; this rule also applies to RDMA. I volunteer to maintain the RDMA support.

PS: I have experience in open source communities such as the Linux kernel, QEMU, Redis, SPDK, libiscsi, tgt, atop, util-linux and procps-ng.

Is it possible to use TLS with RDMA?

As far as I can see, we can't use TLS with RDMA currently. Having read the OpenSSL document on the Abstract Record Layer, TLS over RDMA is workable in theory, but it would be a significant amount of work.

@hwware
Member

hwware commented May 28, 2024

@pizhenwei Thanks for your contribution, and @hz-cheng thanks for your impressive numbers.
First, I need to say I like this feature, but I have 3 questions:

  1. In RDMA.md, you mention that Valkey Over RDMA is only supported on Linux. Do you mean it is not supported on CentOS
    and macOS, or that it is not supported on Windows?

  2. Is it possible to integrate this into the core code directly instead of working as a module in Valkey? Are there any risks or difficulties?

  3. You mention that both RDMA and TCP can be enabled at the same time. Is there any benefit to that? Does it mean some specific clients or replica nodes connect to the RDMA port? Could you please describe this a little more? Thanks.

Let the core team members discuss this important feature and send you feedback ASAP. Thanks.

@pizhenwei
Copy link
Author

@pizhenwei Thanks for your contribution, and @hz-cheng thanks for your impressive numbers. First, I need to say I like this feature, but I have 3 questions:

  1. In RDMA.md, you mention that Valkey Over RDMA is only supported on Linux. Do you mean it is not supported on CentOS
    and macOS, or that it is not supported on Windows?

There are two parts of this PR:

  • The Valkey Over RDMA protocol. This defines the transmission type RC (reliable connection, like TCP), the communication commands, and the payload exchange mechanism. It depends only on the RDMA (aka InfiniBand) specification and is OS- and hardware-independent.
  • The Linux implementation. I developed and tested this feature on Ubuntu 20.04/22.04 and Debian 9/10. I guess @hz-cheng tested it on a newer Linux distribution because erdma support landed only recently. I haven't tested it on CentOS/RHEL/SUSE, but I believe it should work fine if the hardware driver is ready.

I have no experience with Windows RDMA. I read the documentation and found that Windows does support RDMA, but not the Linux-style Verbs API. This means we would need a Windows version in the future (I imagine an rdma-windows.c would be needed).

  2. Is it possible to integrate this into the core code directly instead of working as a module in Valkey? Are there any risks or difficulties?

It's quite easy to build RDMA support into Valkey with a few lines of change. If so, valkey-server has to link libibverbs.so and librdmacm.so.

Let's look at the dynamic shared library dependencies of the module version:

# ldd valkey-server 
	linux-vdso.so.1 (0x00007ffdbc546000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f41f0fa1000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f41f0800000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f41f10a8000)
# ldd valkey-rdma.so 
	linux-vdso.so.1 (0x00007fff83bbb000)
	librdmacm.so.1 => /lib/x86_64-linux-gnu/librdmacm.so.1 (0x00007f7e7c29e000)
	libibverbs.so.1 => /lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007f7e7c27b000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7e7c000000)
	libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007f7e7c258000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f7e7c2f5000)
	libnl-route-3.so.200 => /lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007f7e7bf7d000)

If a user starts valkey-server with the RDMA module, valkey-server loads the additional shared libraries on demand.

If building RDMA into valkey is necessary, please let me know.

  3. You mention that both RDMA and TCP can be enabled at the same time. Is there any benefit to that? Does it mean some specific clients or replica nodes connect to the RDMA port? Could you please describe this a little more? Thanks.

Let the core team members discuss this important feature and send you feedback ASAP. Thanks.

Currently, valkey-server supports 3 connection types:

  • unix domain socket
  • TCP/IP
  • TLS over TCP/IP (can't use the same port as TCP/IP)

Run valkey-server with: ./valkey-server --unixsocket /run/valkey.sock --port 6379 --tls-port 6380 ...
then valkey-server listens on /run/valkey.sock and TCP/IP ports 6379 & 6380 together. The 3 transports can be used together in theory. However, in a production environment, we usually prefer TCP/IP in a trusted environment for better performance, or TLS in an untrusted one for security.

Once the RDMA module is loaded: ./valkey-server --unixsocket /run/valkey.sock --port 6379 --tls-port 6380 ... --loadmodule valkey-rdma.so port=6379 ...
then valkey-server listens on /run/valkey.sock, TCP/IP ports 6379 & 6380, and RDMA port 6379 together. The 4 transports can be used together in theory.

RDMA has better performance in a good network environment, as shown in @hz-cheng's and my test reports. However, when I tested mlx5 with a packet drop rate of 0.001, TCP performance was only slightly affected while RDMA performance dropped a lot. I imagine a topology like:

DC: data center

valkey-client   valkey-client
         |            |
        TCP          RDMA
         |            |
         +----- valkey-server  -----TCP-----  valkey-server(replica)

  DC A            DC B                              DC C

It's possible to use RDMA within a short distance, and TCP over a long distance.

Contributor

@zuiderkwast zuiderkwast left a comment


I'd like to get this merged. It's a good contribution. I like that it's a module. When it's merged, we can let clients implement it and test it.

The RDMA port is neither a TCP port nor a UDP port? We've been talking about the possibility of adding QUIC in the future (optional dependency, maybe as a module too), and that can be on the same port too, right, since it's UDP?

@pizhenwei
Author

I'd like to get this merged. It's a good contribution. I like that it's a module. When it's merged, we can let clients implement it and test it.

Actually, the client side (for C only) is ready (as you can see, several people and I have already produced test reports). Once the server side gets merged, I'll create a PR for the client as soon as possible.

The RDMA port is neither a TCP port nor an UDP port?

Right.

We've been talking about the possibility of adding QUIC in the future (optional dependency, maybe as a module too) and that can be on the same port too, right, since it's UDP?

Right.

@coderyanghang

RDMA has better performance in a good network environment, as shown in @hz-cheng's and my test reports. However, when I tested mlx5 with a packet drop rate of 0.001, TCP performance was only slightly affected while RDMA performance dropped a lot. I imagine a topology like:

@pizhenwei Actually, the latest RDMA technology, such as Alibaba Cloud Elastic RDMA, doesn't encounter this performance drop at a packet drop rate of 0.001, because the latest RDMA technology widely supports SACK lossy optimization.

@pizhenwei
Author

RDMA has better performance in a good network environment, as shown in @hz-cheng's and my test reports. However, when I tested mlx5 with a packet drop rate of 0.001, TCP performance was only slightly affected while RDMA performance dropped a lot. I imagine a topology like:

@pizhenwei Actually, the latest RDMA technology, such as Alibaba Cloud Elastic RDMA, doesn't encounter this performance drop at a packet drop rate of 0.001, because the latest RDMA technology widely supports SACK lossy optimization.

Hi,
As far as I know, SACK is not part of the IB specification, and most existing hardware doesn't support SACK. I don't think valkey-server should be limited to deployment on only the latest hardware.

Let's focus on 'why does Valkey need to enable both TCP/IP and RDMA together', or 'is enabling both TCP/IP and RDMA useful in real scenarios', rather than extending the topic to 'the latest RDMA technology' here.

@pizhenwei
Author

Hi @zuiderkwast ,

I created a new PR for the documentation part, and force-pushed a new version here.

Contributor

@zuiderkwast zuiderkwast left a comment


Looks good now. Just some minor comments.

Comment on lines +4 to +7
* Copyright (C) 2021-2024 zhenwei pi <[email protected]>
*
* This work is licensed under License 1 of the COPYING file in
* the top-level directory.
Contributor


@madolson Suggestion what we should write in the top of the file?

pizhenwei added a commit to pizhenwei/valkey-doc that referenced this pull request May 31, 2024
RDMA is the abbreviation of remote direct memory access. It is a
technology that enables computers in a network to exchange data in
main memory without involving the processor, cache, or operating
system of either computer. This means RDMA has better performance
than TCP; the test results show Valkey Over RDMA achieves ~2.5X the
QPS with lower latency.

In recent years, RDMA has become popular in data centers; in
particular, the RoCE (RDMA over Converged Ethernet) architecture has
been widely deployed. Cloud vendors have also started to offer
RDMA-capable instances to accelerate networking performance, so end
users can enjoy the improvement easily.

Introduce the Valkey Over RDMA protocol as a new transport for Valkey.
For now, we define 4 commands:
- GetServerFeature & SetClientFeature: these two commands are used to
  negotiate features for further extension. There is no feature
  definition in this version. Flow control and multi-buffer may be
  supported in the future; that will need feature negotiation.
- Keepalive
- RegisterXferMemory: the heart of transferring the real payload.

The 'TX buffer' and 'RX buffer' are built on RDMA remote memory with
RDMA write / write-with-imm; this is similar to (but not the same as)
mechanisms introduced by several papers:
- Socksdirect: datacenter sockets can be fast and compatible
  <https://dl.acm.org/doi/10.1145/3341302.3342071>
- LITE Kernel RDMA Support for Datacenter Applications
  <https://dl.acm.org/doi/abs/10.1145/3132747.3132762>
- FaRM: Fast Remote Memory
  <https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-dragojevic.pdf>

Link: valkey-io/valkey#477
Co-authored-by: Xinhao Kong <[email protected]>
Co-authored-by: Huaping Zhou <[email protected]>
Co-authored-by: zhuo jiang <[email protected]>
Co-authored-by: Yiming Zhang <[email protected]>
Co-authored-by: Jianxi Ye <[email protected]>
Signed-off-by: zhenwei pi <[email protected]>
Main changes in this patch:
* Implement the server side as a connection module only; this means RDMA
  support can *NOT* yet be compiled in as a built-in.
* Add the necessary information to README.md.
* Support 'CONFIG SET/GET', for example 'CONFIG SET rdma.port 6380';
  verify with 'rdma res show cm_id' and valkey-cli (with RDMA support,
  not implemented in this patch).
* The full listener list looks like:
    listener0:name=tcp,bind=*,bind=-::*,port=6379
    listener1:name=unix,bind=/var/run/valkey.sock
    listener2:name=rdma,bind=xx.xx.xx.xx,bind=yy.yy.yy.yy,port=6379
    listener3:name=tls,bind=*,bind=-::*,port=16379

The valgrind test passes:
valgrind --track-origins=yes --suppressions=./src/valgrind.sup
         --show-reachable=no --show-possibly-lost=no --leak-check=full
         --log-file=err.txt ./src/valkey-server --port 6379
         --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx
         --loglevel verbose --protected-mode no --server_cpulist 2
         --bio_cpulist 3 --aof_rewrite_cpulist 3 --bgsave_cpulist 3
         --appendonly no

performance test:
server side: ./src/valkey-server --port 6379 # TCP port 6379 has no conflict with RDMA port 6379
             --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx bind=yy.yy.yy.yy
             --loglevel verbose --protected-mode no --server_cpulist 2 --bio_cpulist 3
             --aof_rewrite_cpulist 3 --bgsave_cpulist 3 --appendonly no

Build a valkey-benchmark with RDMA support (not implemented in this
patch) and run it on an x86 machine (Intel Platinum 8260) with a RoCEv2
interface (Mellanox ConnectX-5):
client side: ./src/valkey-benchmark -h xx.xx.xx.xx -p 6379 -c 30 -n 10000000 --threads 4
             -d 1024 -t ping,get,set --rdma
====== PING_INLINE ======
480561.28 requests per second, 0.060 msec avg latency.

====== PING_MBULK ======
540482.06 requests per second, 0.053 msec avg latency.

====== SET ======
399952.00 requests per second, 0.073 msec avg latency.

====== GET ======
443498.31 requests per second, 0.065 msec avg latency.

Signed-off-by: zhenwei pi <[email protected]>
Labels
major-decision-pending Major decision pending by TSC team

7 participants