NFS access to QRIScloud collections

Introduction

NFS (Network File System) is a protocol that allows one Linux or Unix system to "mount" a file system consisting of files and directories that are stored on a different system.

The NFS protocol is designed to allow fast access to files over a local area network.  It supports the full range of Linux / Unix file system functionality, and allows different systems to simultaneously access and update shared files and directories.

An NFS file system is "exported" by a server to one or more client machines.  The clients gain access by "mounting" the NFS file system within the namespace of the client's local file system.

In the QRIScloud context, it is possible to NFS mount a QRIScloud collection on a NeCTAR (or other) VM running in the QRIScloud availability zone.  The steps are as follows:

  • The collection manager requests QRIScloud support to make the collection NFS-only.  This has significant implications for the way that the collection can be accessed, and we will only do this once we are sure that you and the collection manager understand them; see "What does NFS-only access mean?" below.
  • The collection manager requests QRIScloud support to enable NFS export of the collection to the relevant NeCTAR project or projects.
  • You install NFS client software on the NeCTAR instance in the project.
  • You configure the instance's second ethernet interface ("eth1") on the storage network.
  • You create the configuration files to mount the NFS file system, and check that the file system mounts correctly.

Once the NFS mounts have been set up, your applications can access the files and directories in exactly the same way that you would access local files and directories.

What does NFS-only access mean?

QRIScloud collections support two access modes:

  • Standard Access means that a collection can be accessed:
    • as /QRISdata on the HPC systems,
    • as /data on the SSH data access systems, or
    • via the Medici data fabric.
  • NFS-only means that a collection can only be accessed via an NFS mount on a NeCTAR instance (virtual machine) in the QRIScloud AZ.

The access modes are mutually exclusive: a collection may be Standard Access or NFS-only, but not both.

It is possible to change a collection between Standard Access mode and NFS-only mode, but you need to lodge a QRIScloud support request, and it takes time to implement.

Requesting NFS access

QRIScloud collections are enabled for NFS access on request.  The collection custodian or technical contact needs to make the request, either directly as a QRIScloud support ticket or by contacting an eRA.  The request needs to state which NeCTAR project or projects require the access.

The constraints on providing NFS access are as follows:

  • We only allow access to NeCTAR virtual machines; i.e. instances launched from NeCTAR projects.
  • We only allow access within the Polaris data centre (i.e. the QRIScloud availability zone).
  • We do not allow access for NeCTAR "PT" projects.
  • We discourage NFS access to HSM collections.

The NFS servers and mount details

You will need the following:

  • The NFS server IP address (<NFS-IP>) which you can look up in the table above.
  • The NFS mount path (<NFS-PATH>), which is built from the collection ID.  If the collection is "Q0xxx", then the NFS mount path is:
       /tierxyz/Q0xxx/Q0xxx
  • The NFS mount options (<NFS-OPTS>) are:
       rw,nfsvers=3,hard,intr,nosuid,nodev,timeo=100,retrans=5

    for Ubuntu & Debian, or:

       rw,nfsvers=3,hard,intr,nosuid,nodev,timeo=100,retrans=5,nolock

    for RHEL, CentOS, Scientific Linux & Fedora.  (Change "rw" to "ro" for a read-only mount.)
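
Putting these three pieces together, the general form of a manual mount command is shown below; substitute the placeholders with your collection's actual server address, path and options (the "Checking NFS access" section below walks through this in full):

  $ sudo mount -t nfs -o <NFS-OPTS> <NFS-IP>:<NFS-PATH> /path/to/mountpoint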

Scripted NFS setup

The simplest way to set up NFS access is to download and run the "q-storage-setup.sh" script.  You simply need to provide the collection ID and the storage pool name, and the script works out the rest.

The script can be downloaded from Github as follows:

  $ curl -O -L https://github.com/qcif/cloud-utils/raw/master/q-storage-setup.sh
  $ chmod a+x q-storage-setup.sh

(The "-L" option tells curl to follow redirects like a web browser would do. The "chmod ..." command makes the script that you downloaded executable.)

You can then run it as follows to configure and NFS mount a QRIScloud collection, and examine its contents:

  $ sudo ./q-storage-setup.sh Q0xxx
  $ sudo ls /data/Q0xxx

If you want to set up a read-only mount (or mount a collection that is exported read-only), include the "--read-only" option when running the script.  The full, current documentation for the "q-storage-setup.sh" script is available in the qcif/cloud-utils repository on GitHub.

Note that the script is updated from time to time, so it is advisable to download a fresh copy from GitHub each time you need to use it, rather than keeping your own copy.

Note that the script sets up the collection as an NFS automount. If you need a hard NFS mount, you will need to use the manual NFS setup procedure.

Manual NFS setup

This section outlines the procedure for setting up NFS access to a collection for a NeCTAR virtual machine.  It is lengthy, and some of the steps require care.  You will require administrator (i.e. "sudo") access on the NeCTAR virtual machine.  (We have written some more general documentation on NFS mounting which you can find on the NeCTAR support site.)

The following instructions assume that you have basic Linux administration skills, and the confidence to do the tasks required. (If you are nervous about breaking something, we suggest that you launch a small NeCTAR VM just for testing.)

Also note that some of the details depend on your Linux distribution.  For example, package installation details differ, as do the mechanisms for starting and enabling system services.

Installing NFS client software

You can install the necessary NFS client software on a Debian, Ubuntu or similar system by running the following commands:

  $ sudo apt-get update
  $ sudo apt-get install nfs-common autofs

On Red Hat, CentOS, or Fedora, run the following:

  $ sudo yum install nfs-utils autofs

For more information on the YUM and APT package managers, please refer to the respective online manual entries.

(Newer versions of Fedora replace YUM with the DNF package manager, though the basics are the same; e.g. "sudo dnf install nfs-utils autofs".)

Configuring the Storage Network interface

The NFS servers are on a private storage network in the 10.255.xxx.xxx range that is connected to each (QRIScloud) NeCTAR VM's second network interface (eth1).

We recommend that you run

  $ ip link show

or

  $ ifconfig -a

and look at the output.  If the "eth1" interface is listed as active with an IP address in the 10.255.xxx.xxx range, then the second interface is configured.
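
If "eth1" is not listed, or has no address, it typically needs to be brought up using DHCP.  The following is a minimal sketch for a Debian-style system that is configured via "/etc/network/interfaces"; other distributions (and systems using netplan or NetworkManager) use different mechanisms:

  # Append to /etc/network/interfaces:
  auto eth1
  iface eth1 inet dhcp

Then bring the interface up and re-check its address:

  $ sudo ifup eth1
  $ ip addr show eth1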

Checking NFS access

Before setting up NFS to automount your collection, it is a good idea to check that you can mount it by hand.  The procedure is simple:

  $ mkdir /tmp/mnt
  $ sudo mount -t nfs -o <NFS-OPTS> <NFS-IP>:<NFS-PATH> /tmp/mnt
  $ cd /tmp/mnt
  $ ls -l

If you have configured eth1 correctly, and formed the NFS options, server IP address and path correctly, the "ls -l" command should list the contents of your collection's root directory.

Finally, unmount your collection as follows:

  $ cd /
  $ sudo umount /tmp/mnt

Configuring NFS auto-mounting

It is possible to mount your collection by hand each time you want to use it, using "sudo mount" as above. However, it is likely to be more convenient if your virtual machine mounts the collection automatically. You can configure it to mount the collection when your VM starts, but it is better to use an automounter.

Here are the instructions for configuring automounting using "autofs".

  1. Install the "autofs" package (as above).
  2. Edit the "/etc/auto.master" file, and add the following line at the end of the file:
    /- file:/etc/auto.qriscloud
  3. Create a "/etc/auto.qriscloud" file with the following content (note that the autofs map format requires a "-" prefix on the mount options):
    /data/Q0xxx -<NFS-OPTS> <NFS-IP>:<NFS-PATH>
  4. Start the "autofs" service:
    $ sudo service autofs start
    (Some Linux distributions handle starting of system services in other ways.  If the "service" command does not exist, use "systemctl start" or the "/etc/init.d/autofs" script.)

  5. Check that the collection mounts:
    $ cd /data/Q0xxx
    $ ls -l
  6. Configure the "autofs" service to start automatically on system boot:
    $ sudo chkconfig --add autofs
    (In some Linux distributions, you can use "systemctl enable" instead of "chkconfig".)
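
For example, for a hypothetical collection "Q0123" served from a hypothetical NFS server at 10.255.120.200 (substitute the real details for your own collection), the "/etc/auto.qriscloud" file on an Ubuntu VM would contain the single line:

  /data/Q0123 -rw,nfsvers=3,hard,intr,nosuid,nodev,timeo=100,retrans=5 10.255.120.200:/tierxyz/Q0123/Q0123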

Creating the user accounts and groups

The final step in manual setup is to create local accounts and groups to access your collection.

  • The "q0xxx" account needs to have uid and gid of 54xxx.  This corresponds to the "q0xxx-rw" account on your colllection VM

For example:

  $ sudo groupadd --gid 54xxx q0xxx
  $ sudo useradd --uid 54xxx --gid 54xxx q0xxx

Run "man useradd" or "man adduser" for details of the command for adding a user account.  The details can differ for different Linux distributions.

Recommendations:

  • There is a good case for disabling login for "q0xxx" as well, and using "sudo -u q0xxx bash" to assume the "q0xxx" user identity as required (see the sketch below).
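
As a sketch, one way to disable interactive login is to set the account's shell to "nologin" (assumed here to be at "/usr/sbin/nologin", as on Debian and Ubuntu; RHEL-family systems typically use "/sbin/nologin"):

  $ sudo usermod --shell /usr/sbin/nologin q0xxx

You can still assume the identity when needed, because "sudo -u q0xxx bash" runs bash directly rather than the account's login shell:

  $ sudo -u q0xxx bash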

NFS security and access control

When you request us to NFS export a collection to a NeCTAR tenant, any instance running in the tenant is able to mount the collection as a local file system.  This means:

  • Any person who is a tenant manager or a member of the tenant can launch a new instance, and use that instance to gain full access to the collection.
  • Any person who can log in to a tenant instance's root account, or to an admin account with sudo rights, can gain full access to the collection.  If it is not mounted, they can mount it.
  • If someone has a non-privileged local account, they could potentially exploit an unpatched security flaw to gain root access, and then proceed as above.
  • If someone is able to hack into the system, they can then proceed as above.

Anyone who has full access to the collection will be able to override any Linux access controls implemented using permission bits or access control lists. They will be able to add, modify and delete files and directories at will.

In short, when your collection is NFS exported to a NeCTAR tenant, you need to pay close attention to the following:

  • Ensure that you don't grant tenant access to the wrong people.
  • Ensure that you don't provide instance accounts with "sudo" rights to the wrong people.
  • Ensure that you keep your instances patched with the latest security patches to address possible external vulnerabilities AND local privilege escalation vulnerabilities.
  • Ensure that you take all necessary steps to minimize the "attack profile" for your instance:
    • SSH should not allow password authentication (see the sketch after this list).
    • Shut down (or do not configure) unnecessary public services; e.g. Web servers, FTP servers, NFS servers, and so on.
    • If feasible, use NeCTAR security groups and / or instance-internal firewalling to restrict access to only a few IP addresses or networks.
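
As a sketch of disabling SSH password authentication: set the following in "/etc/ssh/sshd_config" and reload the SSH service (the service name and reload command vary between distributions):

  # In /etc/ssh/sshd_config:
  PasswordAuthentication no

  # Then reload the SSH service, e.g.:
  $ sudo service ssh reload       # Debian / Ubuntu
  $ sudo service sshd reload      # RHEL / CentOS / Fedora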

NFS trouble-shooting and tuning

Checking network connectivity

One cause of problems is that your NeCTAR VM does not have, or has lost, network-level connectivity to the NFS servers.  This will manifest as a failure to mount or remount a collection.
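
Two quick ways to check connectivity to the NFS server are "ping" and "showmount" ("showmount" is included in the NFS client packages installed earlier).  Note that firewalls may block either of these even when NFS itself is working, so treat the results as hints rather than proof:

  $ ping -c 3 <NFS-IP>
  $ showmount -e <NFS-IP>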

Incorrect hard mounts can "brick" an instance

If you use incorrect settings for an NFS hard mount (i.e. in the "/etc/fstab" file), then you can easily get the instance into a state where it gets stuck during reboot.  The problem arises when the "fstab" settings tell the initialization procedure that an NFS file system needs to be mounted early in the startup.  If the NFS server is offline, or if its IP address changes, then the initialization will block.  If this happens before the instance has gone into multi-user mode, it will be difficult to repair the "/etc/fstab" file to remove the offending line.

With some versions of Ubuntu, there is a "nobootwait" mount option that allows a mount attempt to time out.  Unfortunately, support for "nobootwait" was removed as of Ubuntu 16.04, so this is no longer an option.

The best way to deal with this is to use the "noauto" mount option, and then add something to do a "mount -a" when the system has reached multi-user mode.  Better still, use the NFS automount approach if you can.
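
As a sketch of the "noauto" approach (assuming your distribution still runs "/etc/rc.local" late in the boot sequence; on a systemd-based system you may need a unit file instead):

  # /etc/fstab entry; "noauto" stops the mount being attempted during early boot:
  <NFS-IP>:<NFS-PATH>  /data/Q0xxx  nfs  <NFS-OPTS>,noauto  0  0

  # Command added to /etc/rc.local, run once the system is up:
  mount /data/Q0xxx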
