
    Configuring an NFS server

    When working with multiple VPSs, it can be desirable to make files on one VPS available on another. One way to achieve this is by using a Network File System (NFS).

    With an NFS server, you export local file systems over the network and share them with NFS clients. This allows you to share files and directories on your VPS or Block Storage with other VPSs. You do this by mounting directories shared on an NFS server on an NFS client.

    In this guide, we will show you how to install an NFS server and NFS client on a server with CentOS, AlmaLinux, Rocky Linux, Ubuntu, or Debian. The combination of operating systems does not matter.

    • For the steps in this guide you will need:
       
      • Two or more VPSs with CentOS, AlmaLinux, Rocky Linux, Ubuntu, or Debian.
      • A private network that includes the VPSs. In our guide 'setting up an internal IP address', we show how to give your VPSs an IP address on your private network.
         
    • Perform the steps in this article as root, or as a user with root rights.
       
    • In extensive tests, we have seen that NFS performance on Debian 11 degrades over time; AlmaLinux, for example, performs considerably faster with NFS. This is likely a kernel issue and may have been resolved in later Debian versions.


    Installing an NFS server

     

    Step 1

    Depending on your operating system, install an NFS server with the command:


    CentOS Stream / AlmaLinux / Rocky Linux:

    dnf -y install nfs-utils

    Ubuntu / Debian:

    apt -y install nfs-kernel-server

     

    Step 2

    After installation, enable the NFS server:

    CentOS Stream / AlmaLinux / Rocky Linux:

    systemctl start nfs-server
    systemctl enable nfs-server

    Ubuntu / Debian:

    systemctl start nfs-kernel-server
    systemctl enable nfs-kernel-server
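
    If you want to verify that the service is running, you can check its status, for example:

    CentOS Stream / AlmaLinux / Rocky Linux:

    systemctl status nfs-server

    Ubuntu / Debian:

    systemctl status nfs-kernel-server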

     

    Step 3

    Create a directory you want to share with the NFS clients. You are free to adjust the name and location in the example, but remember to do so in all subsequent steps:

    mkdir /mnt/nfs_share

    Optional:

    The ins and outs of file and directory ownership are beyond the scope of this guide, but they are important to consider. For example, to ensure all clients can access the shared directory, you can remove all permission restrictions:

    CentOS Stream / AlmaLinux / Rocky Linux:

    chown -R nobody:nobody /mnt/nfs_share/

    Ubuntu/Debian:

    chown -R nobody:nogroup /mnt/nfs_share/

    If desired, also adjust the file permissions on the created directory. This is especially important if you use the root_squash option in step 4.

    chmod 777 /mnt/nfs_share/

    The 777 option gives all users and groups rights to write, read, and execute files. On this page you will find a cheat sheet for available chmod options.
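
    If you prefer not to give everyone write access, a more restrictive example is:

    chmod 755 /mnt/nfs_share/

    With 755, everyone can read and list files, but only the owner of the directory can create them. Whether an NFS client can then write depends on which user its requests are mapped to (see the squash options in step 4).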


     

    Step 4

    Now open the /etc/exports file (create it if it does not exist yet). This file manages the permissions and configuration for access to the NFS server.

    nano /etc/exports

    The contents of the file depend on your use case (see the explanation below the code). A simple example might be:

    /mnt/nfs_share 192.168.2.0/24(rw,async,root_squash)

    You first specify the directory you want to share, then the IP address or range you want to grant access to the NFS share (the IP addresses of your private network), and then additional options. The options go directly after the IP address or range, between parentheses and separated by commas, without any spaces. You can choose from:

    • ro: read-only access to the NFS share
    • rw: read-write access to the NFS share
    • sync: each write action is committed to disk and confirmed before a new write action from an NFS client is accepted. This results in slower performance, but no data is lost if the NFS server or the network connection fails.
    • async: as soon as an IO request has been handed off to the local file system, the next action can be processed; the server does not wait for the data to actually be written to disk before responding to a new action from an NFS client. If the server or the network connection fails during write actions, data loss can occur. The performance of async is significantly faster than sync.
    • no_subtree_check: disables subtree checking (the default behavior in current NFS versions). If a shared directory is a subdirectory of a larger file system, NFS otherwise also checks the directories above it to verify permissions and file locations. Disabling this check can give a small performance gain.
    • root_squash: root users on the NFS client are treated as anonymous users without root privileges by the NFS server.
    • no_root_squash: root users on the NFS client remain root users on the NFS share.
    • no_all_squash: users on the NFS client keep their own user identity on the NFS server (the default).
    • all_squash: all users on the NFS client are treated as anonymous users by the NFS server.

    For each new directory you want to share, add a new line to the file. Save your changes and close the file (ctrl + x > y > enter).
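
    As an illustration, an /etc/exports file that shares two directories (the second directory and the single client address are hypothetical examples) could look like this:

    /mnt/nfs_share 192.168.2.0/24(rw,async,root_squash)
    /mnt/backups 192.168.2.5(ro,sync,no_subtree_check)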

    Note: After each adjustment of these options, it is important to restart your NFS server and re-execute the exportfs command from the step below.


     

    Step 5

    Then export the file system as configured in step 4 with the exportfs command:

    exportfs -arv
    • -a exports all directories.
    • -r re-exports all directories and synchronizes /var/lib/nfs/etab with /etc/exports and files under /etc/exports.d
    • -v enables verbose output.
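
    If you want to check which directories are currently exported and with which options, you can for example run:

    exportfs -v

    or view the export list as an NFS client would see it:

    showmount -e localhost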

     

    Step 6

    Finally, allow access to the NFS share in your firewall:

    CentOS Stream / AlmaLinux / Rocky Linux:

    firewall-cmd --permanent --add-service=nfs
    firewall-cmd --permanent --add-service=rpc-bind
    firewall-cmd --permanent --add-service=mountd
    firewall-cmd --reload

    In principle, you have already specified in /etc/exports that only IP addresses on your private network have access. If you also want to enforce this in the firewall, you can use rich rules that limit a service to your private network instead of the --add-service rules above, for example (repeat this for the rpc-bind and mountd services):

    firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='192.168.2.0/24' service name='nfs' accept"
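
    After a reload (firewall-cmd --reload), you can check the active firewall configuration with, for example:

    firewall-cmd --list-all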

    Ubuntu / Debian:

    ufw allow from 192.168.2.0/24 to any port nfs
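
    You can then verify that the rule has been added with, for example:

    ufw status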


    Configuring the NFS client

     

    Step 1

    Install the necessary packages on the servers you are using as NFS clients:

    CentOS Stream / AlmaLinux / Rocky Linux:

    dnf -y install nfs-utils nfs4-acl-tools

    Ubuntu / Debian:

    apt -y install nfs-common nfs4-acl-tools

     

    Step 2

    Create a directory where you want to mount the NFS share, for example:

    mkdir /mnt/nfs

     

    Step 3

    Then mount the NFS share on your NFS clients:

    mount 192.168.2.1:/mnt/nfs_share /mnt/nfs
    
    • Replace the address 192.168.2.1 with the internal IP address of the NFS server
    • Replace /mnt/nfs_share with the directory you shared on the server (step 3 of the NFS server installation)
    • Replace /mnt/nfs with the directory created in the previous step
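
    To confirm that the share is mounted, you can for example check:

    df -h /mnt/nfs

    or create a test file (the file name is just an example; note that with root_squash this write is performed as the anonymous user, so the permissions from step 3 of the NFS server installation apply):

    touch /mnt/nfs/testfile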

     

    Step 4 - optional

    The command in step 3 is for a temporary mount that will be lost after a reboot. To automatically mount the NFS share after a client reboot, open the /etc/fstab file:

    nano /etc/fstab
    

    Add the following content:

    192.168.2.1:/mnt/nfs_share /mnt/nfs  nfs defaults 0 0

    Save the changes and close the file with ctrl + x > y > enter.
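
    Before relying on a reboot, you can test the /etc/fstab entry by unmounting the share and then mounting everything listed in fstab, for example:

    umount /mnt/nfs
    mount -a
    df -h /mnt/nfs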



    Removing the mount

     

    Unmount the mounted NFS share on your NFS client with the command:

    umount /mnt/nfs
    

    Getting an error message that the share is still busy, even though you are sure no files are being written? Then you can perform a 'lazy' unmount, which detaches the share and releases it as soon as it is no longer in use:

    umount -l /mnt/nfs
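
    If you first want to see which processes are still using the share, you can for example use fuser (part of the psmisc package):

    fuser -vm /mnt/nfs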


    Performance tuning

     

    Within the same availability zone, the performance of an NFS server is very fast. If you are writing many small files to another availability zone (e.g., from Delft to Amsterdam), there will be about a factor 5 difference in speed.

    In all scenarios, there are several options to consider to squeeze more performance out of your NFS server. Here are the most important ones:

    • sync vs async: Whether async (step 4 of the NFS server installation) is an option for your setup depends on your risk tolerance. In principle, there is only a risk if your NFS server becomes unavailable during a write (not read) action. If you have a local backup of your files, or can quickly reproduce a lost file, you can safely use async. On average, async performance is about twice as fast as sync.
       
    • RPCNFSDCOUNT: NFS servers use 8 processes/threads by default to handle connections from NFS clients. For larger numbers of NFS clients, increasing the number of processes/threads can improve performance. A good starting point is 16 threads per CPU core of your VPS. Depending on your OS, adjust this as follows:

      CentOS Stream / AlmaLinux / Rocky Linux:
      Open the NFS configuration:
      nano /etc/nfs.conf
      Scroll down to the [nfsd] section, remove the # in front of threads, and set the value to 16 per core, for example:
      [nfsd]
      threads = 32
      Save the changes and close the file (ctrl + x > y > enter). Then restart your NFS server (it doesn't hurt to also unmount and remount the NFS clients):
      systemctl restart nfs-server

      Ubuntu / Debian:
      Open the NFS configuration:
      nano /etc/default/nfs-kernel-server
      Adjust the RPCNFSDCOUNT value to 16 per core, for example:
      RPCNFSDCOUNT=32
      Save the changes and close the file (ctrl + x > y > enter). Then restart your NFS server (it doesn't hurt to also unmount and remount the NFS clients):
      systemctl restart nfs-kernel-server
       
    • MTU: The MTU is a network adapter setting that determines the maximum size of the data packets sent, in bytes. The default MTU is 1500. If you are writing many files of between roughly 1000 and 6000 bytes, it may be worthwhile to set a higher MTU; in other cases, adjusting the MTU will have little impact.
      In that case, the MTU should be set identically on the NFS server and all NFS clients. A good option is, for example, a value of 8000. We recommend unmounting all NFS shares (umount /dir) on the NFS clients first. You can set the MTU as follows:
      ip link set dev eth1 mtu 8000
      
      Note that you should replace eth1 with the name of your private network adapter. The command above is not retained after a reboot, but you can make it persistent by adjusting the configuration of the private network adapter itself (see the sketch below).
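
      How you make the MTU persistent depends on how your network adapter is configured; the following is a sketch, in which eth1 and the connection name are examples that you should adjust to your own setup. On CentOS Stream / AlmaLinux / Rocky Linux with NetworkManager:

      nmcli connection modify eth1 802-3-ethernet.mtu 8000
      nmcli connection up eth1

      On Ubuntu with netplan, you would add mtu: 8000 under the eth1 interface in your /etc/netplan configuration and apply it with netplan apply; on Debian, you would typically add an mtu 8000 line to the interface stanza in /etc/network/interfaces.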

    There are more options available, such as adjusting the rsize and wsize mount options, but in our own tests, we could not establish any noticeable difference with these.


     

    This concludes our guide on configuring and using an NFS server.

    Should you have any questions based on this article, do not hesitate to contact our support department. You can reach them via the 'Contact us' button at the bottom of this page.
