.. _ft3_other_nodes:

High Throughput nodes
=====================

Intel Cascade Lake nodes
------------------------------------

There are 94 nodes, each with 2x Intel Xeon Gold 6240R (Cascade Lake) processors with 24 cores each (48 cores per node), 192GB of RAM (180GB usable) and 2x 480GB SSDs of local storage. They are also known as clk nodes.

**20 of these nodes have special priority, so they are not always available for general use.**

The main characteristics of these nodes are:

- 2x Intel Xeon Gold 6240R (Cascade Lake) with 24 cores each (48 cores per node)
- 192GB of RAM (180GB usable)
- 2x 480GB SSDs of local storage
- 10 Gigabit Ethernet connection

To use these nodes, add the option ``-C clk`` when you submit a job with the ``sbatch`` command. Example::

    $ sbatch -C clk -t 24:00:00 --mem=4GB script.sh

Since these nodes are not connected to the high-performance Mellanox InfiniBand interconnect, their access to the LUSTRE directories has lower performance. If your jobs are I/O intensive on LUSTRE, they may run slower on these nodes.

.. warning:: MPI jobs using multiple nodes are not allowed in this partition.

AMD EPYC nodes
---------------------------

There are also 18 nodes, each with 2x AMD EPYC 7452 @ 2.35GHz processors with 32 cores each (64 cores per node), 256GB of RAM and a 2TB HDD of local storage.

The main characteristics of these nodes are:

- 2x AMD EPYC 7452 @ 2.35GHz with 32 cores each (64 cores per node)
- 256GB of RAM
- 2TB HDD of local storage

To use these nodes, add the option ``-C epyc`` when you submit a job with the ``sbatch`` command. Example::

    $ sbatch -C epyc -t 24:00:00 --mem=4GB script.sh

.. warning:: If your jobs use Intel libraries, they may fail on these AMD nodes, since Intel does not support AMD processors. Some Intel libraries work on AMD nodes and others do not, which can cause job failures.
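
As an alternative to passing ``-C`` on the command line, the constraint can be written inside the batch script itself as ``#SBATCH`` directives, which Slurm reads at submission time. Below is a minimal sketch of such a script for the clk nodes; ``my_app`` and ``./my_program`` are placeholders for your own module and executable::

    #!/bin/bash
    #SBATCH -C clk               # request a Cascade Lake (clk) node
    #SBATCH -t 24:00:00          # walltime limit
    #SBATCH --mem=4GB            # memory for the job
    #SBATCH -c 48                # cores per task (a full clk node)

    module load my_app           # placeholder: load your application's environment
    srun ./my_program            # placeholder: run your executable

The same script works for the AMD EPYC nodes by changing ``-C clk`` to ``-C epyc`` (and ``-c 48`` to ``-c 64`` to use a whole EPYC node).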
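
To verify which nodes carry a given feature before submitting, the standard Slurm ``sinfo`` format specifiers can list each node together with its features (``%N`` prints the node name, ``%f`` its feature list)::

    $ sinfo -N -o "%N %f" | grep -E "clk|epyc"

This is a generic Slurm query rather than anything specific to this cluster; the feature names it reports are the ones accepted by ``-C``.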