What’s new at CESGA?

April 10, 2023: Checkpoint client for VPN connection

Forticlient will no longer be provided as the VPN client for connecting to CESGA resources. You will need to switch to Checkpoint, as Forticlient will be disabled in the coming weeks. In the How to connect section you can find a tutorial on how to download and install Checkpoint.

We recommend downloading and installing Checkpoint as soon as possible.

March 15, 2023: Shutdown of FinisTerrae II

On the 1st of this month, the FinisTerrae II server was completely shut down. Users who were still using it and can no longer access it must start using FinisTerrae III.

Furthermore, connections to the hostname @ft.cesga.es have been redirected to @ft3.cesga.es. If you were connecting this way, you will receive an alert message indicating that the address has changed. This is normal; simply accept the change.
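
For illustration only, with a standard OpenSSH client the connection and the optional cleanup of the cached host key might look like the sketch below; the username is a placeholder and the ssh-keygen step is an assumption about your local client, not a CESGA requirement:

$ ssh username@ft.cesga.es       # now redirected to ft3.cesga.es
$ ssh-keygen -R ft.cesga.es      # only if your client keeps rejecting the new host key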

For more information about the use of FinisTerrae III, please refer to its User Guide.

March 10, 2023: Announcement of the 2023 CESGA User Workshop

Warning

DUE TO SCHEDULING CONFLICTS, THE USERS’ SESSION HAS BEEN POSTPONED UNTIL FURTHER NOTICE.

The objectives of the workshop are to:

  • Introduce the current resources available at CESGA and the latest developments in its infrastructure to the user community.

  • Present results, use cases and relevant services from research groups that have benefited from the use of HPC, Big Data, and AI resources.

  • Facilitate meeting and exchange of experiences and knowledge among CESGA user community members.

  • Create a space to promote collaboration between user groups.

You can find the full article on our website.

March 9, 2023: cesga/system environment

A new environment has been created to load miniconda3 and prevent compatibility issues with the Gentoo prefix in cesga/2020. It is strongly recommended to load the cesga/system environment instead of cesga/2020 to ensure optimal performance.
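
As a minimal sketch, assuming the standard module command and the module names mentioned above, loading miniconda3 under the new environment could look like this:

$ module load cesga/system
$ module load miniconda3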

For more information, please refer to: Environment modules.

March 6, 2023: Job requeue

We have uploaded a guide on how to split a long-running job, such as a Revbayes analysis, into smaller batches, which makes longer runs feasible and improves performance. This guide is intended for users with very long job executions. The method uses system signaling, the SLURM requeue capability, and application checkpointing to work around this limitation while preserving balance in the system.

Moreover, this approach can also be used for shorter jobs, as it includes a checkpointing procedure that could be useful for all users.
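
As a minimal sketch of the general technique (not the CESGA guide itself), a batch script can ask SLURM for a warning signal shortly before the time limit and requeue itself; my_app and its --resume-from-checkpoint flag are hypothetical placeholders for a checkpointable application:

#!/bin/bash
#SBATCH -t 04:00:00            # short wall time for each batch
#SBATCH --signal=B:USR1@300    # send SIGUSR1 to the batch shell 300 s before the limit
#SBATCH --requeue              # allow SLURM to requeue this job
#SBATCH --open-mode=append     # keep appending to the same output file across requeues

# When the warning signal arrives, requeue the job; the application is expected
# to have written checkpoint files it can resume from on the next run.
trap 'scontrol requeue ${SLURM_JOB_ID}; exit 0' USR1

srun my_app --resume-from-checkpoint &   # hypothetical checkpointable application
wait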

For more information, please refer to: Job requeue.

March 1, 2023: AMD EPYC Nodes

We have added 18 nodes with 2x AMD EPYC 7452 @ 2.35GHz with 32 cores each (64 cores per node), 256GB of RAM and 2TB HDD of local storage.

To use these nodes, add the option -C epyc when you submit a job with the sbatch command. Example:

$ sbatch -C epyc -t 24:00:00 --mem=4GB script.sh

Warning

If your jobs use Intel libraries, they may fail on these AMD nodes, since those libraries do not officially support AMD processors. Some libraries work on the AMD nodes and others do not, which can cause your jobs to fail.

December 2, 2022: Intel Cascade Lake nodes

There are 94 nodes with 2x Intel Xeon Gold 6240R (Cascade Lake) with 24 cores each (48 cores per node), 180GB of RAM and 2x480GB SSD of local storage. They are also known as clk nodes. 20 of these nodes have special priority, so they are not always available for general use.

To use these nodes, add the option -C clk when you submit a job with the sbatch command. Example:

$ sbatch -C clk -t 24:00:00 --mem=4GB script.sh

Since these nodes are not connected via the high-performance Mellanox Infiniband interconnect network, access to LUSTRE directories has lower performance. If your jobs are I/O intensive on LUSTRE, they may be affected on these nodes.