Improving the Performance of NAS Clients
水滴呼叫地球 · 2024-08-05 15:49 · published in China

Reprinted from the Andy730 WeChat public account.

 

Network-attached storage (NAS) is undoubtedly a core file-level component for sharing resources on any enterprise network. SMB is the usual choice on Windows, while NFS is mainstream on Linux. Of course, SMB implementations such as Samba also exist in non-Windows environments, and NFS implementations exist on Windows as well.

As both network file sharing protocols have iterated through versions, today's clients (SMB 3.x and NFS v4.x, with NFS v3 still supported by the Linux kernel) have made significant progress, gaining in both stability and performance.

This article takes a closer look at the client-side architecture of these two protocols.

Single TCP connection

NAS uses a client/server architecture: over a TCP/IP network, the NAS client (SMB or NFS) accesses the corresponding NAS server (SMB or NFS server).

[Figure: NAS client/server architecture]

An important point is that only a single TCP connection is used between each NAS client and NAS server. Whether SMB or NFS, even if the client has multiple SMB mapped shares or NFS mount points, there is only one TCP connection between that client and the server.

In the past, a single TCP connection was sufficient for NAS traffic. As network file access has grown, however, that connection has become both a single point of failure and a performance bottleneck.
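On a Linux client, this single connection is easy to observe with `ss`. A minimal sketch, assuming a hypothetical NFS server at 192.168.1.100 (substitute your NAS server's address):

```shell
# List established TCP connections from this client to the NFS server.
# 192.168.1.100 is a placeholder address; 2049 is the standard NFS port.
# Even with several NFS mount points from the same server, without
# nconnect only a single connection to port 2049 will appear.
ss -tn state established '( dport = :2049 )' dst 192.168.1.100
```

The same check against port 445 shows the single SMB connection on a client that does not negotiate multichannel.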

Multiple TCP connections

The developers of both protocols recognized this problem early on. Support for multiple TCP connections was introduced as far back as Linux kernel 5.3 (released in 2019) and SMB 3.0 in Windows Server 2012/Windows 8, enabling a single SMB or NFS session to run across multiple TCP connections.

These features are called SMB Multichannel (for SMB) and nconnect (for NFS). The following diagrams illustrate the SMB Multichannel and NFS nconnect architectures.

 

[Figure: SMB Multichannel with RSS (Receive Side Scaling) NICs]

[Figure: NFS nconnect architecture]

SMB and NFS session traffic is multiplexed across multiple TCP connections between the NAS client and the NAS server. This lets the client load-balance NAS packets over all available TCP connections, delivering higher performance for SMB and NFS alike. If one TCP connection becomes unavailable, the NAS session continues to run over the remaining connections, improving the resiliency of network file sharing.
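As a sketch of how these features are enabled on a Linux client (server names, share paths, and channel counts below are placeholders; option availability depends on the kernel version and the NAS server):

```shell
# NFS: ask the client to open 8 TCP connections for this session
# (nconnect requires Linux kernel 5.3+; server:/export and /mnt/nfs
# are placeholder names)
sudo mount -t nfs -o vers=4.1,nconnect=8 server:/export /mnt/nfs

# SMB (Linux cifs client on recent kernels): opt in to multichannel;
# //server/share, the username, and max_channels=4 are placeholders
sudo mount -t cifs -o multichannel,max_channels=4,username=user //server/share /mnt/smb
```

On Windows, no mount option is needed: SMB 3.x clients and servers negotiate Multichannel automatically when both sides support it.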

Considerations

So far, we have briefly described the basic client-side implementations of SMB and NFS. What they have in common is the use of multiple TCP connections to carry one or more SMB or NFS sessions. There are, however, a few additional points to keep in mind:

SMB Multichannel supports NICs with RSS (Receive Side Scaling), which allows TCP processing to be distributed across multiple cores of the client (and server) processor. This removes CPU-level bottlenecks and further improves SMB Multichannel performance. Note, however, that RSS NICs require support in the corresponding network driver. For more information about RSS on Windows, see the relevant Microsoft documentation.
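On a Linux client, the NIC's receive-queue (RSS) configuration can be inspected and tuned with `ethtool`. A minimal sketch, where eth0 is a placeholder interface name and the usable queue count depends on the NIC and its driver:

```shell
# Show how many combined receive/transmit queues the NIC supports
# ("Pre-set maximums") and how many are currently in use
ethtool -l eth0

# Spread packet processing across 8 queues/cores, if the hardware allows
sudo ethtool -L eth0 combined 8
```

On Windows, the equivalent information is exposed through PowerShell (for example, the `Get-NetAdapterRss` cmdlet).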

The number of TCP connections per client/server relationship is limited: SMB Multichannel allows up to 32 connections, while NFS nconnect is capped at 16.

In addition, there may be requirements specific to the client operating system and the NAS server implementation. As an example, the table below lists the requirements for Azure NetApp Files.

[Table: nconnect client requirements for Azure NetApp Files]

It is therefore critical during deployment to verify the SMB Multichannel and NFS nconnect requirements on both the client and the server; the specifics vary from environment to environment.
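Part of that verification is confirming the options actually took effect after mounting. One way to check on a Linux client (mount points and the NFS port are as in the earlier examples):

```shell
# Confirm the nconnect option was accepted for the NFS mount;
# the effective option appears in the mount table
grep nconnect /proc/mounts

# Count the TCP connections actually open to NFS servers (port 2049);
# with nconnect=8 negotiated, this should report 8
ss -tn state established '( dport = :2049 )' | tail -n +2 | wc -l
```

On Windows, `Get-SmbMultichannelConnection` in PowerShell lists the active channels of an SMB session.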

Performance

Naturally, the performance gains from these two features are of particular interest. The chart in NetApp's technical report TR-4740 (SMB 3.0 Multichannel) clearly shows the significant difference in SMB session performance with SMB Multichannel enabled versus disabled.

[Figure: SMB performance with and without SMB Multichannel]

Similarly, a performance comparison published in an early Pure Storage blog post shows the power of NFS nconnect: throughput soared from roughly 1 Gb/s to nearly 7 Gb/s once nconnect was enabled. That is a remarkable improvement.

[Figure: NFS performance with nconnect enabled vs. disabled]

Future outlook

This article has aimed to provide some basic background on these client-side NAS features. The performance and stability of SMB and NFS clients continue to be optimized and improved.

On the NAS server side, although not covered in detail here, Parallel NFS (pNFS) provides distributed, high-performance NFS service and is a field worth exploring further. NFS-Ganesha, a high-performance user-space NFS server implementation, has also attracted much attention, and Huawei has released high-performance NFS client and server technology under its openEuler project. In the SMB world, Microsoft has offered scale-out SMB clustering since Windows Server 2012.

NAS has many more performance and stability features than can be covered here, and I cannot list every feature or every vendor offering high-performance NAS solutions on the server and client side. What is certain is that both NAS protocols keep evolving, with their stability and performance improving in step.

-----

Source:cfheoh; Enhancing NAS client resiliency and performance with SMB Multichannel and NFS nconnect; May 13, 2024

 

--- End of article ---

 

 

 
