
MareNostrum 5

NEW essential changes!

info

This information is provisional and applies only during the pre-production period.

HPC user accounts management

  • Users will now have a unique username associated with their (institutional) email address:

    • Your username can now have resource assignments for multiple projects (BSC, RES, EuroHPC, etc.).
    • Your username belongs to a primary Unix group (typically corresponding to your institution but without any resource allocation) and will have an associated secondary group per project with resource allocation.
    • You must therefore use newgrp (Linux) and the account option (Slurm) with the secondary group to manage each project's data and jobs (see the example after this list).
  • A slight modification is applied to existing BSC staff usernames:

    • bscXXYYY → bsc0XXYYY
  • A new BSC command, bsc_project, has been developed to let you easily switch between your projects.
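
A minimal sketch of switching the active group with standard Linux tools (the project group name is a placeholder; the exact syntax of bsc_project is not shown here):

    # Sketch only: group names are placeholders for your own secondary (project) groups.
    id -gn                    # shows your primary Unix group (institution, no allocation)
    newgrp <project_group>    # starts a subshell with the project's secondary group active
    id -gn                    # now reports <project_group>; new files inherit this group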

Submitting jobs

  • When submitting a job, it is now mandatory to specify both the account (which will be the same as the secondary group associated with your project) and the Slurm queue.

  • By specifying the queue, you can submit jobs from any login node to any partition (see the example below).
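
As a sketch, a job script header could look like the following; the account and queue names are placeholders, and this assumes the queue is selected with --qos (use whichever option the system documentation specifies):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --account=<project_group>   # secondary group of the project to charge
    #SBATCH --qos=<queue_name>          # the Slurm queue (placeholder name)
    #SBATCH -n 1
    #SBATCH -t 00:10:00

    srun ./my_binary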

Available storage spaces

  • New (empty) filesystems:
    /gpfs/home       # one per user account (username)
    /gpfs/projects   # one per project (secondary group)
    /gpfs/scratch    # one per project (secondary group)
    caution
    • There will be no backup for any filesystem for now! (In the future, /gpfs/home and /gpfs/projects will be protected with backup, similar to the current setup in MN4.)
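
As a quick sanity check (a sketch, not an official recipe), you can list the new spaces; <GROUP> is a placeholder for one of your secondary project groups:

    ls -ld "$HOME"                  # your per-user home under /gpfs/home
    ls -ld /gpfs/projects/<GROUP>   # per-project space
    ls -ld /gpfs/scratch/<GROUP>    # per-project scratch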

MareNostrum 4 and (old) Storage filesystems

  • The final location for your old MN4-Storage data is as follows:
    /gpfs/home/<PRIMARY_GROUP>/$USER/MN4/<MN4_USER>
    /gpfs/projects/<GROUP>/MN4/<GROUP>
    /gpfs/scratch/<GROUP>/MN4/<GROUP>/<MN4_USER>

Modules and software environments

  • Each partition will have dedicated directories for module files and installed applications (see the usage sketch below):
    • Modulefiles directories:

      /apps/GPP/modulefiles
      /apps/ACC/modulefiles
    • Application directories:

      /apps/GPP
      /apps/ACC
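
For instance, with the module tooling you can make one partition's tree visible and browse it (which tree applies depends on the partition you are targeting):

    module use /apps/GPP/modulefiles    # or /apps/ACC/modulefiles for the accelerated partition
    module avail                        # lists the applications installed for that partition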

Slurm changes with performance implications

  • Due to modifications in the current Slurm version, srun no longer considers SLURM_CPUS_PER_TASK and does not inherit the --cpus-per-task option from sbatch.

  • Therefore, you must explicitly specify --cpus-per-task in your srun commands, or set the environment variable SRUN_CPUS_PER_TASK instead. For example:

    • Example 1:

      [...]
      #SBATCH -n 1
      #SBATCH -c 2

      srun --cpus-per-task=2 ./openmp_binary

    • Example 2:

      [...]
      #SBATCH -n 1
      #SBATCH -c 2

      export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK}

      srun ./openmp_binary

WARNING
  • This becomes crucial when running more than one thread per process.
  • If --cpus-per-task (or SRUN_CPUS_PER_TASK) is omitted, thread pinning (thread affinity) breaks, and threads end up overlapping on the same cores (hardware threads).
  • This directly degrades the application's performance.
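
A quick, unofficial way to verify the binding: ask srun to report the CPU mask it applies, and, if your OpenMP runtime supports the OpenMP 5.0 variable OMP_DISPLAY_AFFINITY, have each thread print its affinity:

    export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK}
    export OMP_DISPLAY_AFFINITY=true        # thread affinity report (OpenMP 5.0 runtimes)
    srun --cpu-bind=verbose ./openmp_binary # srun prints the CPU binding applied to each task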

Hyper-Threading

All nodes in MareNostrum 5 have Hyper-Threading capability. Unless you explicitly request to run with SMT, no action is needed: you can continue configuring your jobs just as you did in MN4.

info

We'll soon provide guidance on effective utilization for those interested in leveraging this new functionality.