This COMPSs release includes an extension that improves the programmability of user-defined reductions. With this addition, a reduction can be easily described with a task annotation. The runtime decomposes the reduction task into multiple tasks that can run in parallel and that take into account the node-level locality of the data, reducing inter-node communications.
A new @container task annotation has been added that enables the orchestration of PyCOMPSs applications that include tasks deployed in containers. The Docker and Singularity engines are supported.
The new dislib release includes Daura, the clustering algorithm most widely used by the molecular dynamics community; a PCA based on a parallel SVD; and new basic methods such as matrix multiplication and the Kronecker product.
The Workflows and Distributed Computing team at the Barcelona Supercomputing Center (BSC) is proud to announce a new release, version 2.8 (codename Iris), of the programming environment COMPSs.
This version of COMPSs consolidates the team's work over the last years on providing a set of tools that help developers program and execute their applications efficiently on distributed computational infrastructures such as clusters, clouds and container-managed platforms. COMPSs is a task-based programming model known for notably improving the performance of large-scale applications by automatically parallelizing their execution.
COMPSs has been available for the last years to MareNostrum supercomputer and Spanish Supercomputing Network users, and it has been adopted in several research projects such as EUBra-BIGSEA, MUG, EGI, ASCETIC, TANGO, NEXTGenIO, and mF2C. In these projects, COMPSs has been applied to implement use cases provided by communities across diverse disciplines such as biomedicine, engineering, biodiversity, chemistry, astrophysics and earth sciences. It is currently being extended and adopted in applications of the projects ExaQUte, LANDSUPPORT, the BioExcel CoE, CLASS, ELASTIC, and the EXPERTISE ETN, as well as in a research contract with FUJITSU and in the Edge Twins HPC FET Innovation Launchpad project. It has also been applied in sample use cases of the ChEESE CoE.
The new release includes an extension to improve the programmability of user-defined reductions. A new @reduction directive has been defined to annotate user-defined reductions. The runtime decomposes a reduction task into multiple tasks that can run in parallel, following an inverted-tree pattern. The internal runtime algorithm takes the locality of the data into account: it first reduces, within each node, the tasks that access data stored on that node, and only then reduces the partial results across nodes. This drastically reduces the number of inter-node communications compared to a locality-unaware solution.
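As an illustration, a reduction task in PyCOMPSs might be annotated as in the following minimal sketch; the exact decorator module path and parameter names (such as chunk_size) are assumptions for illustration and may differ from the released API.

    from pycompss.api.reduction import reduction   # assumed module path
    from pycompss.api.task import task
    from pycompss.api.parameter import COLLECTION_IN

    # The runtime splits the reduction over the input collection into
    # partial reductions of 'chunk_size' elements, arranged as an
    # inverted tree; node-local chunks are combined before any
    # inter-node merge takes place.
    @reduction(chunk_size="2")
    @task(returns=int, values=COLLECTION_IN)
    def sum_reduction(values):
        return sum(values)

    # Usage: results produced by earlier tasks are reduced in parallel.
    # total = sum_reduction([r1, r2, r3, r4])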
The COMPSs runtime has supported deployment in containers since version 1.4. However, that support was limited to packaging the whole application in a single container image. From this image, the runtime was able to deploy the application in multiple containers and start its execution without user interaction with the container engine. Docker, Singularity and Mesos were supported. In version 2.8, the container support has been extended with a new @container directive used to annotate tasks that are included in a container image. The COMPSs runtime thus becomes an orchestrator of workflows composed of native tasks and tasks included in container images, interacting with the container engine without user intervention. Tasks annotated with @container can be invocations of external binaries or user code provided in the application. In the latter case, the user code is previously embedded in the container image.
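The following sketch shows how a containerized binary task might be declared in PyCOMPSs; the engine and image argument names, the decorator stacking, and the image name are assumptions for illustration rather than a verbatim reproduction of the API.

    from pycompss.api.container import container   # assumed module path
    from pycompss.api.binary import binary
    from pycompss.api.task import task
    from pycompss.api.parameter import FILE_IN

    # A task whose executable lives inside a container image: the
    # runtime pulls the image and launches the binary through the
    # container engine, with no manual engine interaction.
    @container(engine="DOCKER", image="compss/example:latest")  # hypothetical image
    @binary(binary="grep")
    @task(in_file=FILE_IN)
    def grep_in_container(keyword, in_file):
        pass  # empty body: the annotated binary is what actually runs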
COMPSs 2.8 also supports: a new parameter type, IN_DELETE, that specifies one-use parameters which are deleted after task execution; a new data layout for collections in MPI tasks; and the possibility of passing collections of Python dictionaries as parameters.
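A one-use parameter might be declared as in this minimal sketch, assuming IN_DELETE is exposed through pycompss.api.parameter; the task and helper names are illustrative.

    from pycompss.api.task import task
    from pycompss.api.parameter import IN_DELETE

    # 'staging' is consumed exactly once: after the task finishes, the
    # runtime deletes the data instead of keeping it for later tasks.
    @task(staging=IN_DELETE)
    def consume_once(staging):
        process(staging)  # 'process' is a placeholder for user code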
COMPSs 2.8 comes with other minor new features, extensions and bug fixes.
COMPSs had around 1000 downloads last year and is used by around 20 groups in real applications. COMPSs has recently attracted interest from areas such as engineering, image recognition, genomics and seismology, where specific courses and dissemination actions have been performed.
The packages and the complete list of features are available in the Downloads page. A virtual appliance is also available to test the functionalities of COMPSs through a step-by-step tutorial that guides the user to develop and execute a set of example applications.
Additionally, a user guide and papers published in relevant conferences and journals are available.
For more information on COMPSs please visit our webpage: http://www.bsc.es/compss
New release of dislib
The group is also proud to announce the new release of dislib, version 0.6.0. The Distributed Computing Library (dislib) provides distributed algorithms ready to use as a library. So far, dislib focuses on machine learning algorithms, with an interface inspired by scikit-learn. The main objective of dislib is to facilitate the execution of big data analytics algorithms on distributed platforms, such as clusters, clouds, and supercomputers. Dislib is implemented on top of the PyCOMPSs programming model, the Python binding of COMPSs.
Dislib is based on a distributed data structure, the ds-array, that enables the parallel and distributed execution of the machine learning methods. The dislib library code is implemented as a PyCOMPSs application, where the different methods are annotated as PyCOMPSs tasks. At execution time, PyCOMPSs takes care of all the parallelization and data distribution aspects. The final dislib user code, in contrast, is unaware of these aspects and is written as a simple Python script, with an interface very similar to the scikit-learn one. Dislib includes methods for clustering, classification, regression, decomposition, model selection and data management. A research contract with FUJITSU has partially funded the dislib library, which is currently being used to evaluate the A64FX processor.
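A typical user script looks like the sketch below, which distributes a random ds-array in blocks and fits a clustering model; the concrete constructor arguments are illustrative and should be checked against the dislib documentation.

    import dislib as ds
    from dislib.cluster import KMeans

    # Create a ds-array partitioned in 1000x50 blocks; each block can
    # live on a different node and is processed by independent tasks.
    x = ds.random_array((10000, 50), block_size=(1000, 50))

    # scikit-learn-like interface: fit() runs as parallel PyCOMPSs tasks.
    km = KMeans(n_clusters=5)
    km.fit(x)
    labels = km.predict(x)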
Since its recent creation, dislib has been applied in astrophysics use cases (DBSCAN, GAIA) and in molecular dynamics workflows (Daura and PCA, BioExcel CoE).
The new release 0.6.0 includes a new multivariate linear regression, a parallel implementation of the Singular Value Decomposition (SVD), a new version of the Principal Component Analysis (PCA) using this parallel SVD, the ADMM Lasso algorithm and the Daura clustering algorithm.
Two of these codes, the PCA with SVD and the Daura clustering, have been developed considering the needs of the BioExcel community, which has to analyse the large trajectories resulting from molecular dynamics simulations. Daura is the clustering algorithm most widely used by the biomolecular dynamics community, and it is included, for example, in the set of GROMACS tools. However, previous implementations of this algorithm are sequential and limited to the memory of a single node. The new dislib implementation overcomes this limitation, enabling the clustering of very large simulation trajectories. Similarly, the previous dislib version of the PCA used a sequential method to find the eigenvalues and eigenvectors of the covariance matrix, which limited both the PCA's speed-up and the size of the problems that could be solved. The new PCA comes with a parallel SVD that overcomes this limitation and can use the memory of multiple nodes.
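For instance, the new PCA might be invoked as in the sketch below; the method="svd" argument used to select the parallel SVD backend is an assumption about the interface.

    import dislib as ds
    from dislib.decomposition import PCA

    # A tall, trajectory-like matrix distributed in blocks across nodes.
    x = ds.random_array((100000, 300), block_size=(10000, 300))

    # method="svd" is assumed to select the new parallel SVD backend,
    # allowing the decomposition to use the memory of multiple nodes.
    pca = PCA(method="svd")
    transformed = pca.fit_transform(x)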
Dislib 0.6.0 comes with other extensions and with a new user guide. The code is open source and available for download.
The Workflows and Distributed Computing team at the Barcelona Supercomputing Center aims to offer tools and mechanisms that enable the sharing, selection, and aggregation of a wide variety of geographically distributed computational resources in a transparent way. The research done by this team builds on the group's previous expertise, extending it towards the aspects of distributed computing that can benefit from it. The team at BSC has a strong focus on programming models and on resource management and scheduling in distributed computing environments.