Parallelization
Sequential jobs consist of one or more steps that must be run in a specific order. Parallel jobs, on the other hand, are made of steps consisting of one or more parallel tasks that can run on a single node or even across different nodes. Parallel computing requires specific programming techniques or scripts; without them, the same computation would simply be performed multiple times, with no actual gain in performance or time. There are two main approaches to parallelism, with different features:
Single-node parallelism (OpenMP)
Running a program starts a process, which is basically a copy of the program loaded into the main memory of the node. That process can then spawn multiple threads that share the same memory space (which is known as shared-memory programming) and perform the program's computations in parallel. It can also spawn or fork other processes whose memory spaces are independent and which communicate within the node. Most multithreaded software is written using OpenMP directives, but it can also be written using pthreads.
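As an illustration of this model, here is a minimal sketch in C (the file name and compile command are only illustrative, not one of the scripts from the directory mentioned below): a single process forks a team of threads that share the same memory and split the loop iterations among themselves.

    /* omp_sum.c - minimal OpenMP sketch (illustrative, not an official example).
     * All threads belong to one process and share its memory.
     * Compile with, e.g.: gcc -fopenmp omp_sum.c -o omp_sum */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* Fork a team of threads; each thread handles a chunk of the
         * iterations and the partial sums are combined (reduction). */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += (double)i;

        printf("max threads: %d, sum = %.0f\n", omp_get_max_threads(), sum);
        return 0;
    }

The number of threads is taken at run time from the OMP_NUM_THREADS environment variable, which in a batch job is normally set to the number of cores requested for the task, so all threads stay on a single node.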
There are some script examples here, and you can also find them in the FinisTerrae III directory /opt/cesga/job-scripts-examples-ft3
Multi-node parallelism (MPI)
This approach assumes distributed-memory programming. The typical case in high-performance computing is message passing over a network, which requires high bandwidth and low latency. Message-passing software usually uses MPI, a library that takes care of creating multiple instances of the same program on different nodes and allows them to send and receive messages through the network.
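As a minimal sketch of this model (file name and compile command are illustrative), the following C program is launched as several processes, typically one per task and possibly on different nodes; each process has its own memory, and the non-zero ranks send a message to rank 0 through the network.

    /* mpi_hello.c - minimal MPI sketch (illustrative).
     * Each process has its own memory space and communicates by messages.
     * Compile with, e.g.: mpicc mpi_hello.c -o mpi_hello */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            /* Every non-zero rank sends its id to rank 0. */
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            printf("running %d MPI processes\n", size);
            for (int src = 1; src < size; src++) {
                int msg;
                MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("received message from rank %d\n", msg);
            }
        }

        MPI_Finalize();
        return 0;
    }

The program is typically launched with one process per task, for example with srun or mpirun, and those processes may be placed on different nodes.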
There are some script examples here, and you can also find them in the FinisTerrae III directory /opt/cesga/job-scripts-examples-ft3
Hybrid MPI/OpenMP
You can also mix MPI and OpenMP to exploit the advantages of both, as sketched below. There are some script examples here, and you can also find them in the FinisTerrae III directory /opt/cesga/job-scripts-examples-ft3
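As a rough sketch of the hybrid model (file name and compile command are illustrative), each MPI process below opens its own OpenMP parallel region: threads share memory inside a process, while processes, usually placed on different nodes, communicate through MPI.

    /* hybrid.c - minimal hybrid MPI/OpenMP sketch (illustrative).
     * MPI handles communication between processes/nodes; OpenMP threads
     * share memory inside each process.
     * Compile with, e.g.: mpicc -fopenmp hybrid.c -o hybrid */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each MPI process forks its own team of OpenMP threads. */
        #pragma omp parallel
        {
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

A code that calls MPI routines from inside threaded regions should initialize MPI with MPI_Init_thread and request the appropriate thread-support level; the sketch above only calls MPI outside the parallel region, so plain MPI_Init is sufficient.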