Hello,
Recently I have been migrating all of my company's existing GlusterFS clusters to MooseFS due to usability issues with very slow directory listings in both GlusterFS's native FUSE client and in mounts exported as Samba shares. Samba is a crucial component for my user base, which is split roughly 50/50 between Windows and Linux clients, so relying strictly on the MooseFS native FUSE client is not an option for all of my users (pesky Windows folks). While I am well aware of the historically underwhelming performance characteristics associated with Linux's Samba implementation, the testing below points to something more indicative of an interoperability issue between MooseFS and Samba, or a potential misconfiguration on my part, as other distributed/local file systems do not exhibit any of these problems.
MooseFS has been excellent at mitigating the issues associated with GlusterFS thanks to its dedicated metadata servers, as opposed to the distributed peer-to-peer design of GlusterFS. The bottom line: after migrating data with rsync from the old GlusterFS volumes to the newly created MooseFS exports, I have been seeing terrible (borderline horrendous) read speeds for nearly all Samba clients accessing data on re-exported MooseFS FUSE mounts. Essentially, I am getting 50-100 MB/s read speeds regardless of processing power, storage backend, or network interface controller. This phenomenon occurs on three different MooseFS clusters (HDD, SSD, and NVMe) and is reproducible across all of them. These machines have run Ceph, GlusterFS, and BeeGFS over their lifetimes and never experienced substantial performance drop-offs when acting as Samba gateway/proxy servers. For comparison, other distributed and local XFS shares hosted on my SMB proxy/gateway servers do not experience slow read speeds.
Has anyone else experienced performance issues when exporting MooseFS mounts via CIFS/Samba? Are there any specific configuration/mount options required when re-exporting MooseFS exports over SMB? Nothing in the installation or administration guides mentions any specific requirements or caveats about using MooseFS with Samba. None of the Linux clients seem to be affected by these performance issues; they pretty much hit line rate when reading data over the network from the storage cluster.
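For reference, here is a minimal sketch of the mount-side readahead knobs I plan to experiment with next. The option names are taken from the mfsmount man page, but the values are guesses rather than a verified tuning, and the mount point/master address stand in for my redacted setup:
# mfscachemode controls data caching (NONE/YES/AUTO); the readahead
# options control how aggressively mfsmount prefetches sequential reads.
mfsmount /mnt/net_shares/moosefs \
    -o mfsmaster=redacted \
    -o mfscachemode=AUTO \
    -o mfsreadaheadsize=2048 \
    -o mfsreadaheadleng=4194304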
Below is a copy of my Samba server's configuration file:
/etc/samba/smb.conf
[global]
bind interfaces only = yes
interfaces = enp216s0f1
netbios name = redacted
server string = Samba Server Version %v
server multi channel support = yes
server role = member server
log file = /var/log/samba/log.%m
max log size = 50
security = ads
passdb backend = tdbsam
load printers = no
cups options = raw
kerberos method = secrets and keytab
idmap config : range = redacted-redacted
idmap config : backend = rid
idmap config * : range = redacted-redacted
idmap config * : backend = autorid
winbind use default domain = yes
winbind refresh tickets = yes
winbind offline logon = yes
winbind enum groups = yes
winbind enum users = yes
nt acl support = yes
workgroup = redacted
realm = redacted
hosts allow = redacted
hosts deny = ALL
durable handles = yes
ea support = no
strict locking = no
max xmit = 65535
socket options = TCP_NODELAY IPTOS_LOWDELAY
getcwd cache = yes
log level = 1
vfs objects = acl_xattr
[mfs-test]
acl_xattr:ignore system acls = no
acl_xattr:default acl style = windows
nt acl support = yes
create mask = 6660
directory mask = 6750
map acls inherit = yes
path = /mnt/net_shares/moosefs/mfs-test
guest ok = yes
read only = no
available = yes
writable = yes
kernel share modes = no
kernel oplocks = no
map archive = no
map hidden = no
map readonly = no
map system = no
store dos attributes = no
hosts allow = redacted
hosts deny = ALL
posix locking = no
case sensitive = true
default case = lower
preserve case = true
short case preserve = true
oplocks = yes
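For completeness, this is the share-level tuning I intend to test next. These are all standard smbd parameters, but whether any of them actually helps on a FUSE-backed path like this one is an open question on my end:
# sendfile() over a FUSE mount is a known slow path in some setups,
# and handing reads to the async I/O pool sometimes helps there.
use sendfile = no
aio read size = 1
aio write size = 1
min receivefile size = 0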
Client/Server Info:
MooseFS Version: 3.0.117 (RPM distribution)
OS: Red Hat Enterprise Linux 8.9 (Ootpa)
Kernel: 4.18.0-513.9.1.el8_9.x86_64
Samba: 4.18.6
Examples of Sequential Read Tests:
Cluster-A (Average of 10 Runs, 8 GB Test File, MooseFS FUSE):
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/net_shares/mfs-fuse/test-file of=/dev/null bs=1M count=8192 status=progress
8589934592 bytes (8.6 GiB, 8.0 GB) copied, 12.2585 s, 701 MB/s
Cluster-A (Average of 10 Runs, 8 GB Test File, MooseFS FUSE SMB Re-export):
mount -t cifs //smb-server/mfs-test /mnt/net_shares/cifs/ -o domain=redacted,username=redacted
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/net_shares/cifs/test-file of=/dev/null bs=1M count=8192 status=progress
8589934592 bytes (8.6 GiB, 8.0 GB) copied, 120.2585 s, 70.1 MB/s
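To take the kernel CIFS client (and its mount options) out of the equation, I also plan to re-read the same file with smbclient, which prints an average transfer rate when the copy completes. Host, share, and credentials below are redacted as above:
smbclient //smb-server/mfs-test -W redacted -U redacted -c 'get test-file /dev/null'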
Note: All testing was done from a client with a Ryzen 5950X processor, 128 GB of DDR4 memory, and an Intel X550-T2 10GbE NIC running RHEL 8.9, with the same kernel, MooseFS, and Samba versions as the chunk/master servers within each cluster. Testing has also been completed on additional Windows and Linux SMB clients, yielding similarly poor performance results.