Software Environment
All software for the general purpose compute nodes of the cluster can be found under /apps/. If you need something that is not there, please contact us to get it installed.
For FPGA nodes, the software can be found under /nfs/apps.
C Compilers
The following C/C++ compilers are available on the cluster:
gcc / g++ -> GNU Compilers for C/C++
man gcc
man g++
All invocations of the C or C++ compilers follow these suffix conventions for input files:
.C, .cc, .cpp, or .cxx -> C++ source file
.c -> C source file
.i -> preprocessed C source file
.so -> shared object file
.o -> object file for ld command
.s -> assembler source file
By default, the preprocessor is run on both C and C++ source files.
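For example, assuming a hypothetical source file hello.c, you can walk through the individual compilation stages and see each suffix from the list above being produced (these are standard GCC flags):
gcc -E hello.c -o hello.i   # preprocess only -> .i
gcc -S hello.i -o hello.s   # compile to assembly -> .s
gcc -c hello.s -o hello.o   # assemble -> .o
gcc hello.o -o hello        # link the object file into an executable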
These are the default sizes of the standard C/C++ data types on the machine:
Type | Length (bytes) |
---|---|
bool (C++ only) | 1 |
char | 1 |
wchar_t | 4 |
short | 2 |
int | 4 |
long | 8 |
float | 4 |
double | 8 |
long double | 16 |
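You can verify these sizes yourself with a small C program (a minimal sketch; the file name sizes.c is just an example):

```c
/* sizes.c - print the size of each standard C data type.
 * Compile and run with: gcc sizes.c -o sizes && ./sizes
 * (bool is omitted because it is C++ only; see the table above.) */
#include <stdio.h>
#include <wchar.h>

int main(void) {
    printf("char:        %zu\n", sizeof(char));
    printf("wchar_t:     %zu\n", sizeof(wchar_t));
    printf("short:       %zu\n", sizeof(short));
    printf("int:         %zu\n", sizeof(int));
    printf("long:        %zu\n", sizeof(long));
    printf("float:       %zu\n", sizeof(float));
    printf("double:      %zu\n", sizeof(double));
    printf("long double: %zu\n", sizeof(long double));
    return 0;
}
```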
Xilinx suite and Vivado
For FPGA nodes, the Xilinx suite (with Vivado) is available through the NFS filesystem. In the future there will probably be dedicated modules for it, but currently you have to source its environment variables directly. If you intend to use it, allocate a compute node (general purpose or FPGA) and then source the required files. Here is an example:
# To request a general purpose node
salloc -N 1 -t 04:00:00 -p gpp --x11
# ----- OR -----
# To request a FPGA node
salloc -N 1 -t 04:00:00 --constraint=dmaxdma --x11
# Source one of the following Xilinx suite versions
module load xilinx
# ----- OR -----
# Previous versions
# module avail
# module load xilinx/<VERSION>
# Run the Vivado GUI
vivado
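Vivado can also run without the GUI, which is handy for scripted builds. For example (build.tcl is a hypothetical Tcl script; -mode batch is a standard Vivado option):
vivado -mode batch -source build.tcl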
Distributed Memory Parallelism
The MEEP FPGA cluster does not have an InfiniBand interconnect, so inter-node communication goes over Ethernet. This means the overhead of using MPI between nodes is higher than usual, and as such we recommend limiting MPI jobs to a single node.
To compile MPI programs, we recommend the handy wrappers mpicc and mpicxx for C and C++ source code respectively. You need to load the parallel environment first: module load openmpi. These wrappers include all the necessary libraries and flags to build MPI applications without having to specify all the details by hand.
mpicc a.c -o a.exe
mpicxx a.C -o a.exe
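As a quick sanity check, here is a minimal MPI hello-world in C (a sketch; the file and executable names are examples, and the exact launcher depends on the site configuration):

```c
/* hello_mpi.c - every rank reports its rank and the total rank count.
 * Compile: mpicc hello_mpi.c -o hello_mpi
 * Run inside a Slurm allocation, e.g.: srun -n 4 ./hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```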
Shared Memory Parallelism
OpenMP directives are supported by the GNU and PGI C/C++ compilers. To enable them, add the flag -fopenmp (GNU) or -mp (PGI) to the compile line.
gcc -fopenmp -o exename filename.c
g++ -fopenmp -o exename filename.C
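A minimal OpenMP example in C (a sketch; the file name is just an example):

```c
/* hello_omp.c - every thread reports its id and the team size.
 * Compile: gcc -fopenmp hello_omp.c -o hello_omp
 * Set the thread count at run time, e.g.: OMP_NUM_THREADS=4 ./hello_omp */
#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```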
You can also mix MPI and OpenMP code by passing -fopenmp/-mp to the MPI wrappers mentioned above, as shown in the sketch below.
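A minimal hybrid sketch (hypothetical file name hybrid.c), where each MPI rank spawns a team of OpenMP threads:

```c
/* hybrid.c - each MPI rank spawns OpenMP threads; every thread reports both ids.
 * Compile: mpicc -fopenmp hybrid.c -o hybrid
 * No MPI calls are made inside the parallel region, so plain MPI_Init suffices. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    #pragma omp parallel
    printf("Rank %d, thread %d\n", rank, omp_get_thread_num());
    MPI_Finalize();
    return 0;
}
```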
Default Modules
To improve your experience, a set of default modules is automatically loaded when you log into a cluster via SSH or connect using SCP or rsync. These modules provide a solid starting point for compiling your applications.
If you prefer to prevent these modules from being loaded during login or when connecting via SCP or rsync, you can disable this behavior by running the following command on any of the login nodes of the machine:
touch ${HOME}/.avoid_load_def_modules.${BSC_MACHINE}
This command creates a file in your home directory named .avoid_load_def_modules.fpga (${BSC_MACHINE} expands to fpga on this machine). When you connect to the cluster, if this file is detected in your home directory, the default modules will not be loaded automatically.
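If you later want the default modules to be loaded again, simply remove that file (the automatic load is skipped only while the file exists):
rm ${HOME}/.avoid_load_def_modules.${BSC_MACHINE}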