-
I am evaluating MinIO by measuring the read performance of blob data. I perform multiple read iterations against the same bucket, and I don't want caching to interfere with the runs and skew the results. The code to read the data is given below (truncated as posted; see the runnable sketch after the dd commands):

```python
from minio import Minio

def read_blobs(bucket_name):
    client = Minio(
```

I have 1000 objects, each ~4 MB, in the bucket. The first measurement typically takes 50 seconds to read all of the data (roughly 50 ms per object). If I run the script again, it is very fast: about 5 ms per object, and the whole read of all objects in the bucket finishes within 5 seconds. The server runs SLES 15.1, with MinIO in its own container and the application code in another container, both using the host network. Disk read performance measured on the host with 4 MB blocks is close to 250 MB/s, so I don't think it is reading from disk from the second run onwards. I am not sure whether the data is being served from some cache.

```sh
# dd prints its throughput report on stderr, hence 2> rather than >
dd if=/dev/zero of=/mnt/tstdrv/testfile bs=4096k count=1000 oflag=direct conv=fdatasync 2> dd-write-drive1.txt
dd if=/mnt/tstdrv/testfile of=/dev/null bs=4096k iflag=direct 2> dd-read-drive1.txt
```
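Since the snippet above is cut off, here is a minimal sketch of what such a read benchmark might look like. The endpoint, credentials, and bucket name are placeholder assumptions, not values from the original post, and the timing and byte counting are added for illustration:

```python
import time

from minio import Minio

# Placeholder endpoint and credentials -- not values from the original post.
client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

def read_blobs(bucket_name):
    """Read every object in the bucket once and report the elapsed time."""
    start = time.perf_counter()
    count = total_bytes = 0
    for obj in client.list_objects(bucket_name, recursive=True):
        response = client.get_object(bucket_name, obj.object_name)
        try:
            total_bytes += len(response.read())  # drain the full object body
        finally:
            response.close()
            response.release_conn()
        count += 1
    elapsed = time.perf_counter() - start
    if count:
        print(f"{count} objects, {total_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
              f"({elapsed / count * 1e3:.1f} ms/object)")

read_blobs("test-bucket")  # hypothetical bucket name
```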
Replies: 2 comments
-
You may be seeing fallback from O_DIRECT. We do not have any current plans for diving further into this. You can add a script that flushes the read cache on your server between runs. If you are looking at a professional PoC, reach out at [email protected] and we can assist you with your benchmarks.
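For reference, such a flush can be scripted by writing to the kernel's drop_caches knob. A minimal sketch, assuming it runs as root on the host that serves the MinIO disks (both containers share the host kernel, so the page cache lives there):

```python
import os

def drop_page_cache():
    """Ask the kernel to drop its clean page cache between benchmark runs.

    Must run as root. Writing "3" drops the page cache plus dentries and
    inodes; dirty pages are not dropped, hence the sync() beforehand.
    """
    os.sync()  # flush dirty pages first so they become droppable
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
```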
-
Turning off the page cache is wrong for reads: you want to save on IOPS for reads, and free memory is just lying around, so use it. Avoiding the page cache is neither realistic nor helpful.
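If warm-cache reads are representative of the production workload, an alternative to fighting the cache is to report cold and warm numbers side by side. A sketch, reusing the hypothetical `read_blobs` and `drop_page_cache` helpers from above:

```python
# Cold pass: empty the host page cache first, so reads hit the disks.
drop_page_cache()
read_blobs("test-bucket")

# Warm pass: the same objects are now served largely from the page cache.
read_blobs("test-bucket")
```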