This document is written specifically for people using UQ RDM collections that are stored on QRIScloud infrastructure.
Many UQ RDM collections are stored as QRIScloud GPFS collections on RCC-managed storage infrastructure in the Polaris data centre. These collections are distinguished from collections hosted by UQ ITS because they have a Qnnnn collection identifier. The primary criteria for allocating collections on QRIScloud infrastructure are:
- the collection is likely to grow to more than 1TB, OR
- the data in the collection needs to be accessible from other QRIScloud related systems (typically HPC).
QRIScloud GPFS collections have the following properties:
- The data is accessible via the MeDiCI data fabric.
- The data is stored in a GPFS-based hierarchical storage management (HSM) system with:
- a front-end disk cache,
- a middle tier implemented using HPE DMF technology, and
- back-end tape storage systems (in Springfield and St Lucia).
- The tape copies are primarily intended for recovery from media failure and major disasters. However, it is possible to recover deleted or overwritten files from a daily snapshot, provided that a request is received within a window of 90 days.
- The default QRIScloud collection sizing is as follows:
- The default data quota is 1TB per collection.
- The default file count limit is 1,000,000, encompassing all files, directories and symlinks, per collection.
RDM collections are accessible to users and applications in a number of ways:
- Polaris-based HPC systems (Bunya) have the collections mounted; see below.
- The QRIScloud SSH access servers have the collections mounted; see below.
- The UQ SMB file servers make collections accessible via the "R:" drive (on the UQ networks).
- The UQ Nextcloud service has collections mounted.
- Other UQ-based MeDiCI caches (by arrangement with the respective cache managers) can have the collections mounted.
In order to use a QRIScloud collection via a QRIScloud service, it is generally necessary to know the collection's Q number. Most of our services don't use the collection "short names".
Unless otherwise advised, please submit all support requests that relate to UQ RDM collections, including RDM collections with Q numbers, through the UQ ITS Help Desk. They will triage and dispatch the support requests to the most appropriate people.
This also applies to requests for changes to data quotas, file count limits, and cache sizes. Note that you will be required to provide additional information about the data that you are storing in the collection, as well as usage patterns and anticipated growth rates. Requests will be assessed on their merits, and approval will depend on availability of storage.
UQ Staff versus Student accounts
The University of Queensland provides people with staff ("uqxxxxxx") and student ("snnnnnnn") accounts:
- Staff accounts are required for anyone who is on the UQ payroll.
- Student accounts are provided for anyone who is registered for a degree.
UQ HDR (and other) students often have both staff and student accounts.
Unfortunately, the UQ identity systems do not provide enough information to allow QRIScloud to match staff and student identities. Problems arise if a user with two identities is granted access to a collection via one identity, and then attempts to use the collection after logging in with the other one. Since our access control systems cannot tell that the two identities represent the same person, the user is denied access to the files.
There is currently no good solution for this problem. The best we can do is to offer some simple advice:
- Wherever possible, use the same identity for your HPC account and your RDM collection access.
- If you need to, grant yourself access to your collection (using the RDM portal) under both of your identities. Or ask the person who granted you access to do this.
- Avoid requesting HPC access under both your staff and student identities. Such requests should be refused (for various reasons), but if a request does slip past our manual checks, fixing the resulting problems can be awkward.
We can't advise whether it is better to use your staff or student identity, because different UQ departments (and other units) have different policies on this.
Limitations of RDM GPFS collections
While QRIScloud GPFS collections appear to be normal file systems when you use them, there are certain situations where they behave differently. It is important to understand these differences to avoid problems.
Cache propagation within the MeDiCI fabric
The MeDiCI data fabric comprises multiple layers of file caching that are intended to make the same data available in all places without users having to copy the files around. So, for example, if you write a file to the R: drive, you should be able to read the same file on one of the HPC systems in Polaris.
In practice, it may take some time for data written at one place in the MeDiCI data fabric to be visible in another place. Under the hood, data is being copied between caches across network links that may be congested, and the cache servers themselves could be heavily loaded.
- Be aware that data may not appear instantly.
- Don't design your computational workflows to depend on instant propagation.
- If you experience untoward delays, raise a support ticket with UQ ITS. They will triage the ticket as appropriate.
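Because propagation is not instant, a workflow step that consumes a file written elsewhere in the fabric should wait for it defensively rather than assuming it is already visible. The following is a minimal sketch; the function name, default timeout, and the collection path in the usage comment are illustrative, not prescribed:

```shell
# Wait (up to a limit) for a file written elsewhere in the MeDiCI fabric
# to become visible locally, instead of assuming instant propagation.
#   usage: wait_for_file <path> [timeout_seconds] [poll_interval_seconds]
wait_for_file() {
    file="$1"
    timeout="${2:-1800}"    # default: give up after 30 minutes
    interval="${3:-30}"     # default: re-check every 30 seconds
    elapsed=0
    while [ ! -f "$file" ]; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "Timed out waiting for $file" >&2
            return 1
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 0
}

# Example (hypothetical collection path):
# wait_for_file /QRISdata/Q0101/inputs/run42.dat && start_processing
```

A pattern like this keeps a pipeline robust against cache propagation delays while still failing loudly if the file never arrives.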
Cache propagation and Nextcloud
There is an additional complication for Nextcloud. To avoid placing excessive load on the data fabric, the UQ Nextcloud service maintains a local database of the files and directories in the collections that it presents. The problem is that new files are added to this database by a file scanner, and it can take a long time for the scanner to pick up a new file. (There are hundreds of millions of files to be scanned, and that takes time.)
What this means is that if you write a new file to your collection via (for example) the R: drive and then try to view it via the UQ Nextcloud service, you are liable to find that it is not there (yet). However, if you upload a new file to Nextcloud, it will appear in the "local" cache instantly, and relatively quickly in the rest of the MeDiCI data fabric.
- Be aware that data may take longer to appear in Nextcloud when it was written elsewhere.
Tape retrieval delays and I/O errors
The "back end" of the MeDiCI data fabric is a HPE DMF server that stores data in a tape archive in Polaris. While small collections typically have all of their data cached on disk somewhere in the fabric, we do not have enough disk space to keep all data online at all times.
If your collection has more data than can be held in the cache (in Polaris) then some files will not be held online. If you attempt to access a file that is not in the cache, then the cache server will attempt to retrieve it from the DMF server. Depending on how busy the server is, that retrieval could take a number of minutes. If the delay is too long, the retrieval will time out, and your application will experience an I/O error.
(This problem should be resolved in the next 6 to 12 months, but it depends on purchasing and deployment of a low-power disk storage system.)
- If you experience a lot of delays or I/O errors accessing data, raise a support ticket with UQ ITS. They will be able to advise you on what to do.
- The HPC team have provided a "recall_medici" script that can be used to retrieve data into the disk cache. Used wisely, this avoids I/O errors. However, beware that if your collection's cache is already full, recalling files will cause other files to be evicted from the cache.
- Operations staff can do a couple of things if the situation warrants it:
- We can temporarily increase the cache size for a collection.
- We can perform a bulk pre-fetch of data into a collection's cache. (This can be faster than using "recall_medici", but it is labour intensive.)
- Requests for these actions must come via UQ ITS.
RDM Collection access on HPC systems
QRIScloud collections (with the exception of "NFS only" collections) are available on the RCC and QRIScloud HPC clusters to people in the collection's respective access groups.
Collections are accessible via the "/QRISdata" directory, on the HPC login nodes and the compute nodes. For example, if your collection's Q number is Q0101, then it will be accessible as "/QRISdata/Q0101". The "/RDS" path is a symlink to "/QRISdata", so that will work too.
There are some important caveats about using collections on HPC.
- If it is likely that collection files you want to access on the HPC won't be resident in the GPFS cache, use "recall_medici" to fetch them. It is advisable to do this on the login node before submitting the job that needs the data. (The "recall_medici" script may take a long time; while it runs inside a job, the job is not using its compute node effectively.)
- If there is enough local disk space available on the compute node, it is advisable for jobs to copy the files from your collection to local disk.
- You should never use a collection for "scratch" or short term storage space.
- Don't untar / unzip archives into your collection.
- Avoid writing application output files directly into your collection.
- Write the files first to local storage.
- If the output comprises small files, always organize them and tar or zip them up into bundles before writing them to collections. If you need more short term file space to do this, submit an HPC support request to UQ RCC.
- If you do not understand the above, seek help from someone who does understand. It is easy to cause yourself and other people significant problems by using collections incorrectly on HPC systems.
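The bundling advice above can be sketched as follows. One compressed archive is far kinder to the HSM and to your file count limit than thousands of tiny files; the function name and all paths in the example are illustrative:

```shell
# Bundle many small job output files into one compressed tar archive
# before writing the result to a collection, rather than copying the
# small files into the collection individually.
#   usage: bundle_outputs <output_dir> <bundle_file>
bundle_outputs() {
    tar -czf "$2" -C "$1" .
}

# Example inside a job (hypothetical paths):
# bundle_outputs "$TMPDIR/results" "$TMPDIR/results.tar.gz"
# cp "$TMPDIR/results.tar.gz" /QRISdata/Q0101/runs/
```

Writing the archive to local storage first (as in the example) and copying it to the collection in one step follows the "write locally first" advice above.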
Finally, people with HPC accounts are permitted to use the login nodes for performing file transfers, though we prefer that they use the SSH access servers for this. See the next section for details.
RDM Collection access via the SSH access servers
QRIScloud runs SSH data access servers for performing file transfers. These servers are intended to be used for two purposes:
- As end-points for file transfers initiated from other systems; e.g. your laptop / workstation, a NeCTAR virtual machine or a remote HPC cluster (for example the NCI Gadi cluster)
- As a place for initiating file transfers.
All UQ users who have access to any QRIScloud collection are authorised to connect to these servers and use them for the intended purposes. You should be able to use your UQ account name and the associated password to access them. If you have multiple accounts, log in using the account under which you have been granted collection access.
The SSH access servers support the SCP, SFTP and RSYNC file transfer protocols for transfers that were initiated from elsewhere. The recommended tools are:
- Windows: "putty" for SSH access, and WinSCP, Filezilla or Cyberduck for file transfer.
- MacOSX: the "ssh" command line tool for SSH access, and Filezilla or Cyberduck for GUI based file transfer, and the "scp", "sftp" or "rsync" tools for file transfer.
- Linux: as for MacOSX
For transfers initiated from the SSH servers, we recommend the "scp", "sftp" or "rsync" command-line tools, and also "curl" or "wget".
There are a couple of issues related to file transfers:
- If you use "rsync" to copy files into your collection, AND stale copies of the files already exist in the collection, AND the copies are not in the cache, then "rsync" can trigger an unnecessary recall from tape, resulting in very slow effective transfer rates. To avoid this, 1) use "-W" to disable delta-xfer mode, and 2) avoid using the "-c" (checksum) option.
- If you get your account details incorrect, you are liable to be locked out. If this happens, wait 10 minutes and try again.
- It is possible to set up your account on the SSH access servers to use public key (i.e. password-less) authentication. Refer to the Linux "ssh" documentation for details.
If you intend to perform large-scale transfers, we can advise on the best way to do this. For this, and problems with the access servers, please raise a QRIScloud support ticket by emailing email@example.com