File Systems
- General
- CTE-POWER
- CTE-ARM
It is your responsibility as a user of our facilities to back up all your critical data. We only guarantee a daily backup of user data under /gpfs/home. Any other backup is performed only exceptionally, at the request of the interested user.
Each user has several areas of disk space for storing files. These areas may have size or time limits; please read this section carefully to learn the usage policy of each of these filesystems. There are 3 different types of storage available inside a node:
GPFS filesystems
: GPFS is a distributed networked filesystem which can be accessed from all the nodes and the Data Transfer Machine
Local hard drive
: Every node has an internal hard drive
Root filesystem
: The filesystem where the operating system resides
GPFS Filesystem
The IBM General Parallel File System (GPFS) is a high-performance shared-disk file system providing fast, reliable data access from all nodes of the cluster to a global filesystem. GPFS allows parallel applications simultaneous access to a set of files (even a single file) from any node that has the GPFS file system mounted while providing a high level of control over all file system operations. In addition, GPFS can read or write large blocks of data in a single I/O operation, thereby minimizing overhead.
An incremental backup will be performed daily only for /gpfs/home.
These are the GPFS filesystems available in the machine from all nodes:
/apps
: This filesystem holds the applications and libraries that have already been installed on the machine. Take a look at the directories to see which applications are available for general use.
/gpfs/home
: This filesystem contains the home directories of all the users, and when you log in you start in your home directory by default. Every user has their own home directory to store their own developed sources and personal data. A default quota is enforced on all users to limit the amount of data stored there. Running jobs from this filesystem is highly discouraged; please run your jobs from your group's /gpfs/projects or /gpfs/scratch instead.
/gpfs/projects
: In addition to the home directory, there is a directory in /gpfs/projects for each group of users. For instance, the group bsc01 will have a /gpfs/projects/bsc01 directory ready to use. This space is intended to store data that needs to be shared between users of the same group or project. A quota per group is enforced depending on the space assigned by the Access Committee. It is the project manager's responsibility to determine and coordinate the best use of this space, and how it is distributed or shared among its users.
/gpfs/scratch
: Each user has a directory under /gpfs/scratch. Its intended use is to store temporary files of your jobs during their execution (see the example below). A quota per group is enforced depending on the space assigned.
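For example, a typical pattern is to stage your run in your scratch directory and submit the job from there. This is a minimal sketch: the group bsc01, the user bsc01234, the job script job.sh and the input file are hypothetical, and sbatch assumes a Slurm-based batch system.
$ cd /gpfs/scratch/bsc01/bsc01234      # your user directory under the group's scratch
$ cp ~/job.sh ~/input.dat .            # stage the job script and input data
$ sbatch job.sh                        # submit the job from scratch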
Active Archive - HSM (Tape Layer)
Active Archive (AA) is a mid- to long-term storage filesystem that provides 15 PB of total space. You can access AA from the Data Transfer Machine under /gpfs/archive/hpc/your_group.
There is no backup of this filesystem. The user is responsible for adequately managing the data stored in it.
Hierarchical Storage Management (HSM) is a data storage technique that automatically moves data between high-cost and low-cost storage media. At BSC, the filesystem using HSM is the one mounted at /gpfs/archive/hpc, and the two types of storage are GPFS (high-cost, low latency) and Tapes (low-cost, high latency).
HSM System Overview
Hardware
- IBM TS4500 with 10 Frames
- 6000 Tapes 12TB LTO8
- 64 Drives
- 8 LC9 Power9 Servers
Software
- IBM Spectrum Archive 1.3.1
- Spectrum Protect Policies
Functioning policy and expected behaviour
In general, this automatic process is transparent to the user; you will only notice it when you need to access or modify a file that has been migrated. In that case, any access to the file is delayed until its content is retrieved from tape.
- Which files are migrated to tapes and which are not?
Only files with a size between 1GB and 12TB are moved (migrated) to tape from the GPFS disk when they have not been accessed or modified for a period of 30 days.
- Working directory (under which copies are made)
/gpfs/archive/hpc
- What happens if I try to modify/delete an already migrated file?
From the user's point of view, deletion is transparent and behaves the same as for a regular file. On the other hand, it is not possible to modify a migrated file directly; in that case, you will have to wait for the system to retrieve the file and put it back on disk.
- What happens if I'm close to my quota limit?
If there is not enough space to recover a given file from tape, the retrieval will fail and everything will remain in the same state as before; that is, you will continue to see the file on tape (in the "migrated" state).
- How can I check the status of a file?
You can use the hsmFileState script to check whether the file is resident on disk or has been migrated to tape.
Examples of use cases
$ hsmFileState file_1MB.dat
resident -rw-rw-r-- 1 user group 1048576 mar 12 13:45 file_1MB.dat
$ hsmFileState file_10GB.dat
migrated -rw-rw-r-- 1 user group 10737418240 feb 12 11:37 file_10GB.dat
Local Hard Drive
Every node has a local solid state drive (SSD) that can be used as local scratch space to store temporary files during the execution of your jobs. This space is mounted on the /scratch/tmp/$JOBID directory and referenced by the $TMPDIR environment variable. The amount of space within the /scratch filesystem can be checked in each machine's System Overview. Data stored on these local drives at the compute nodes is not accessible from the login nodes.
You should use the directory referred to by $TMPDIR to save your temporary files during job executions. This directory will automatically be cleaned after the job finishes.
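As an illustration, a job script can redirect an application's temporary files to this node-local space. This is a minimal sketch: the program my_app, its --tmpdir option and the results file are hypothetical.
# Inside a job script: use the node-local scratch space for temporary files;
# $TMPDIR points to /scratch/tmp/$JOBID and is cleaned when the job finishes
cd $TMPDIR
my_app --tmpdir $TMPDIR > results.dat          # hypothetical program and option, assumed to be in $PATH
cp results.dat /gpfs/scratch/bsc01/bsc01234/   # copy the results to GPFS before the cleanup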
Root Filesystem
The root filesystem, where the operating system resides, is not stored on the node itself; it is an NFS filesystem mounted from one of the servers.
Since this is a remote filesystem, only operating system data should reside there. The use of /tmp for temporary user data is NOT permitted. The local hard drive can be used for this purpose instead, as described in Local Hard Drive.
Quotas
Quotas define the amount of storage available to a user or to a group's users. You can picture it as a small disk readily available to you. A default value is applied to all users and groups and cannot be exceeded.
You can inspect your quota anytime you want using the following command from inside each filesystem:
% bsc_quota
The command provides a readable output for the quota. Check BSC Commands for more information.
If you need more disk space in this filesystem or in any other GPFS filesystem, the person responsible for your project has to request the extra space needed, specifying the amount of space requested and the reasons why it is needed. For more information or requests you can Contact Us.
It is your responsibility as a user of our facilities to back up all your critical data. We only guarantee a daily backup of user data under /gpfs/home. Any other backup is performed only exceptionally, at the request of the interested user.
Each user has several areas of disk space for storing files. These areas may have size or time limits; please read this section carefully to learn the usage policy of each of these filesystems. There are 4 different types of storage available inside a node:
GPFS filesystems
: GPFS is a distributed networked filesystem which can be accessed from all the nodes and the Data Transfer Machine
Local hard drive
: Every node has an internal hard drive
Root filesystem
: The filesystem where the operating system resides
NVMe
: Non-Volatile Memory Express local partition in every node
GPFS Filesystem
The IBM General Parallel File System (GPFS) is a high-performance shared-disk file system providing fast, reliable data access from all nodes of the cluster to a global filesystem. GPFS allows parallel applications simultaneous access to a set of files (even a single file) from any node that has the GPFS file system mounted while providing a high level of control over all file system operations. In addition, GPFS can read or write large blocks of data in a single I/O operation, thereby minimizing overhead.
An incremental backup will be performed daily only for /gpfs/home.
These are the GPFS filesystems available in the machine from all nodes:
/apps
: This filesystem holds the applications and libraries that have already been installed on the machine. Take a look at the directories to see which applications are available for general use.
/gpfs/home
: This filesystem contains the home directories of all the users, and when you log in you start in your home directory by default. Every user has their own home directory to store their own developed sources and personal data. A default quota is enforced on all users to limit the amount of data stored there. Running jobs from this filesystem is highly discouraged; please run your jobs from your group's /gpfs/projects or /gpfs/scratch instead.
/gpfs/projects
: In addition to the home directory, there is a directory in /gpfs/projects for each group of users. For instance, the group bsc01 will have a /gpfs/projects/bsc01 directory ready to use. This space is intended to store data that needs to be shared between users of the same group or project. A quota per group is enforced depending on the space assigned by the Access Committee. It is the project manager's responsibility to determine and coordinate the best use of this space, and how it is distributed or shared among its users.
/gpfs/scratch
: Each user has a directory under /gpfs/scratch. Its intended use is to store temporary files of your jobs during their execution. A quota per group is enforced depending on the space assigned.
Active Archive - HSM (Tape Layer)
Active Archive (AA) is a mid- to long-term storage filesystem that provides 15 PB of total space. You can access AA from the Data Transfer Machine (dt01.bsc.es and dt02.bsc.es) under /gpfs/archive/hpc/your_group.
There is no backup of this filesystem. The user is responsible for adequately managing the data stored in it.
Hierarchical Storage Management (HSM) is a data storage technique that automatically moves data between high-cost and low-cost storage media. At BSC, the filesystem using HSM is the one mounted at /gpfs/archive/hpc, and the two types of storage are GPFS (high-cost, low latency) and Tapes (low-cost, high latency).
HSM System Overview
Hardware
- IBM TS4500 with 10 Frames
- 6000 Tapes 12TB LTO8
- 64 Drives
- 8 LC9 Power9 Servers
Software
- IBM Spectrum Archive 1.3.1
- Spectrum Protect Policies
Functioning policy and expected behaviour
In general, this automatic process is transparent to the user; you will only notice it when you need to access or modify a file that has been migrated. In that case, any access to the file is delayed until its content is retrieved from tape.
- Which files are migrated to tapes and which are not?
Only files with a size between 1GB and 12TB are moved (migrated) to tape from the GPFS disk when they have not been accessed or modified for a period of 30 days.
- Working directory (under which copies are made)
/gpfs/archive/hpc
- What happens if I try to modify/delete an already migrated file?
From the user's point of view, deletion is transparent and behaves the same as for a regular file. On the other hand, it is not possible to modify a migrated file directly; in that case, you will have to wait for the system to retrieve the file and put it back on disk.
- What happens if I'm close to my quota limit?
If there is not enough space to recover a given file from tape, the retrieval will fail and everything will remain in the same state as before; that is, you will continue to see the file on tape (in the "migrated" state).
- How can I check the status of a file?
You can use the hsmFileState script to check whether the file is resident on disk or has been migrated to tape.
Examples of use cases
$ hsmFileState file_1MB.dat
resident -rw-rw-r-- 1 user group 1048576 mar 12 13:45 file_1MB.dat
$ hsmFileState file_10GB.dat
migrated -rw-rw-r-- 1 user group 10737418240 feb 12 11:37 file_10GB.dat
Local Hard Drive
Every node has a local solid-state drive that can be used as local scratch space to store temporary files during the execution of your jobs. This space is mounted on the /scratch/tmp/$JOBID directory and referenced by the $TMPDIR environment variable. The amount of space within the /scratch filesystem is about 1.6 TB. Data stored on these local drives at the compute nodes is not accessible from the login nodes.
You should use the directory referred to by $TMPDIR to save your temporary files during job executions. This directory will automatically be cleaned after the job finishes.
Root Filesystem
The root file system, where the operating system is stored, has its own partition.
There is a separate partition of the local hard drive mounted on /tmp that can be used for storing user data, as described in Local Hard Drive.
NVMe
Every node has two non-volatile memory express (NVMe) drives (3 TB each) that can be used as local working directories to speed up your code by notably reducing disk access time.
You can use them with the same methodology described for the local hard drive. The environment variables $NVME1DIR and $NVME2DIR point to the directories mounted on each drive (see the example below).
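For instance, a job could stage its data on one NVMe drive and write scratch data to the other. This is a minimal sketch: the group bsc01, the user bsc01234, the input file and the program my_app with its --workdir option are hypothetical.
# Inside a job script: use both NVMe drives as fast local working space
cp /gpfs/projects/bsc01/big_input.dat $NVME1DIR/        # stage the input on the first drive
cd $NVME1DIR
my_app big_input.dat --workdir $NVME2DIR > output.log   # hypothetical program writing scratch data to the second drive
cp output.log /gpfs/scratch/bsc01/bsc01234/             # copy the results back to GPFS before the job ends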
Quotas
Quotas define the amount of storage available to a user or to a group's users. You can picture it as a small disk readily available to you. A default value is applied to all users and groups and cannot be exceeded.
You can inspect your quota anytime you want using the following command from inside each filesystem:
% bsc_quota
The command provides a readable output for the quota.
If you need more disk space in this filesystem or in any other GPFS filesystem, the person responsible for your project has to request the extra space needed, specifying the amount of space requested and the reasons why it is needed. For more information or requests you can Contact Us.
It is your responsibility as a user of our facilities to back up all your critical data. We only guarantee a daily backup of user data under /gpfs/home. Any other backup is performed only exceptionally, at the request of the interested user.
Each user has several areas of disk space for storing files. These areas may have size or time limits; please read this section carefully to learn the usage policy of each of these filesystems. There are 3 different types of storage available in the cluster:
Root filesystem:
The filesystem where the operating system resides.
FEFS:
FEFS is a distributed networked filesystem which can be accessed from all the nodes.
GPFS filesystem:
GPFS is a distributed networked filesystem which has two partitions in this machine, gpfs_home and gpfs_projects. Both can be accessed from the login nodes and the Data Transfer Machine, but only gpfs_home is also mounted on the compute nodes.
Root Filesystem
The root file system, where the operating system is stored, has its own partition.
There is a separate partition of the local hard drive mounted on /tmp that can be used for storing user data, as described in Local Hard Drive.
Parallel Filesystems
The Fujitsu Exabyte File System (FEFS) is a scalable cluster file system based on Lustre, with high reliability and high availability for all nodes of the cluster. In addition, the IBM General Parallel File System (GPFS) is a high-performance shared-disk file system providing fast, reliable data access from all nodes of the cluster to a global filesystem.
These filesystems allow parallel applications simultaneous access to a set of files (even a single file) from any node that has the file system mounted while providing a high level of control over all file system operations.
The following filesystems are used in the cluster:
/apps
(softlink to /fefs/apps): This filesystem holds the applications and libraries that have already been installed on the machine. Take a look at the directories to see which applications are available for general use.
/home
(softlink to /gpfs/home): This filesystem contains the home directories of all the users, and when you log in you start in your home directory by default. Every user has their own home directory to store their own developed sources and personal data. A default quota is enforced on all users to limit the amount of data stored there. Running jobs from this filesystem is highly discouraged; please run your jobs from your group's /scratch instead.
/gpfs/projects
: In addition to the home directory, there is a directory in /gpfs/projects for each group of users. For instance, the group bsc01 will have a /gpfs/projects/bsc01 directory ready to use. This space is intended to store data that needs to be shared between users of the same group or project. A quota per group is enforced depending on the space assigned by the Access Committee. It is the project manager's responsibility to determine and coordinate the best use of this space, and how it is distributed or shared among its users. This filesystem is not mounted on the compute nodes; you have to transfer your data and launch your jobs from /scratch/<unixgroup>/<username>.
/scratch
(softlink to /fefs/scratch): This is the only filesystem intended for executions. There is a directory for every group inside it and a subdirectory for every user in the group directory. It is mounted on every node, both login and compute nodes. You must transfer all your necessary scripts, input files and any other kind of data to this filesystem before launching a job (see the example below).
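For example, a typical workflow is to copy everything the job needs to your scratch directory and submit from there. This is a minimal sketch: the group bsc01, the user bsc01234, the job script job.sh and the input file are hypothetical, and pjsub assumes a PJM-based batch system.
$ cp ~/job.sh ~/input.dat /scratch/bsc01/bsc01234/   # stage the job script and input data
$ cd /scratch/bsc01/bsc01234
$ pjsub job.sh                                       # submit the job from /scratch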