First steps

Creating an account

The first step is registering as a user. We highly recommend following the steps outlined on the website and reviewing the user registration diagram to avoid potential issues with your application.

For CSIC members who intend to create a group, a group registration must be requested through group registration, following the corresponding steps outlined on the website.

Check these websites to apply for short-term or medium/long-term projects: qcalls and Galicia Quantum Technologies Hub.

Connecting with CESGA

To ensure security, access to our servers is strictly limited to authorized centers, which include Galician universities, CSIC centers, and centers with special agreements. Therefore, only registered users from these centers are allowed to access the servers. Access from outside these authorized centers requires a VPN.

Secure access to our servers requires an SSH (Secure Shell) client that enables encrypted information transfer. SSH allows remote connection between computers over a network, facilitating the execution of commands on remote machines and the transfer of files between them. It provides strong authentication and ensures secure communication over non-secure channels. All communications, including passwords, are automatically and transparently encrypted, which eliminates the possibility of password capture, a common way computer system security is compromised. Most versions of SSH also offer remote copy (SCP) functionality, and many provide a secure FTP (SFTP) client. Additionally, SSH allows secure X-Windows connections.

If you are a Windows and VS Code user and run into any problems, you can check the workaround here.

For additional information regarding the VPN and its configuration, please refer to the following steps at how to connect.

**NOTE**
This `authorization request <https://altausuarios.cesga.es/solic/conex>`_ is intended for new center associations or for users who must register their public IP to guarantee access due to technical problems.

First QPU Job

If you’ve applied for a project, been accepted, and gained access to Qmio, the next step is to run your first quantum computing job! That is what we will do in this section of the guide. The only thing you need to know is your access credentials!

Submission script

The Qmio installation has several partitions to submit jobs to; for quantum jobs we will target the qpu partition. We need to build a bash script to send to the queueing system. If you prefer to go straight to the full code, see the examples section; otherwise, we will build it from the ground up in this section.

Like any other bash script submitted to Slurm, it will need a shebang and some Slurm directives. These directives can be passed on the command line or placed in the header of the script. Inline options override in-script directives.

Inline

$ sbatch -p qpu --mem=2G --time=00:10:00 submit.sh

In-script

submit.sh

#!/usr/bin/env bash
#SBATCH -p qpu
#SBATCH --mem=2G
#SBATCH --time=00:10:00

Both approaches behave the same way, and as you can see there are a few options:

  • -p qpu: tells Slurm to reserve resources in the qpu partition

  • --mem=2G: requests 2G of total memory

  • --time=00:10:00: establishes a time limit of 10 minutes

The latter two directives, --mem and --time, are mandatory on this system.

Software availability

We are almost there, but you are still missing some software to be able to interact with the QPU itself. The integration between the node in the qpu partition and the QPU is handled by several software layers, but the most important one for the user is the message-passing method. This is done in the Python script with a Python library crafted by us. To be able to use it in your Python script you’ll need to load a Linux module. There are other ways to handle this, which you can check in the System Use > Software Management section. For now, just load the module inside the submission script.

module load qmio-run
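Putting the pieces so far together, a minimal complete submit.sh could look like the sketch below (qpu_job.py is a hypothetical name for the Python script we will write next; use your own filename):

```shell
#!/usr/bin/env bash
#SBATCH -p qpu
#SBATCH --mem=2G
#SBATCH --time=00:10:00

# Load the module that provides the qmio Python library
module load qmio-run

# Run the Python script that talks to the QPU
python qpu_job.py
```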

Python module qmio

All the ingredients are here, so now we can cook our first Python script to interact with the QPU. The Python process will spawn in the qpu partition, so it actually runs on an x86 HPC node. From there it will pass circuits with options to the QPU control node, which handles the quantum part of the job. You’ll then get results back, so you can process them and perhaps submit more quantum work. This enables hybrid workloads.

Imports

You can mix and match this with other libraries, but for now we will keep it basic.

from qmio import QmioRuntimeService

Service instantiation

Now we need to call the Runtime Service. This will allow us to use its internal methods.

service = QmioRuntimeService()

Let’s define a circuit

Of course, we need a circuit to run on the QPU.
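As an illustration, here is a minimal Bell-state circuit written as an OpenQASM 2 string (an assumption for this sketch; check the examples section for the circuit formats the backend actually accepts):

```python
# A minimal Bell-state circuit as an OpenQASM 2 string.
# (Assumption: the backend accepts OpenQASM source; see the
# examples section for the formats actually supported.)
circuit = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[2];
h q[0];
cx q[0], q[1];
measure q -> c;
"""
```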

Backend

We can ask the service to give us a backend by its name; name="qpu" corresponds to the QPU.

backend = service.backend(name="qpu")

QPU Communication

The recommended way to manage this is to enter a context. This allows us to establish the connection once, perform any number of communications, and close it automatically when exiting the context.

with backend as bk:
    result = bk.run(circuit=circuit, shots=1000)

.run method arguments

We need to pass the circuit and the number of shots to the run method. For the rest of the arguments, let’s go with the defaults. For extended information, check .run.__doc__ or the API reference section.

Post processing results

We can do whatever we want with the results afterwards. In this example we will just print them.

print(result)
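As an illustration of further post-processing, suppose the result exposes measurement counts as a mapping from bitstrings to occurrences (a hypothetical format; check the actual result structure in the API reference). Converting raw counts into estimated probabilities might then look like this:

```python
# Hypothetical stand-in for counts extracted from `result`; the real
# result format is documented in the API reference.
counts = {"00": 489, "11": 511}

# Convert raw counts into estimated probabilities.
shots = sum(counts.values())
probabilities = {bitstring: n / shots for bitstring, n in counts.items()}
print(probabilities)
```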

Checking for the result

We can use squeue to check our job status. Once the job has finished, we should look for a new file called slurm-<Job_Id>.out in the directory from which we submitted the job (with the sbatch command). We can change this default behaviour with Slurm’s -o <PATH/TO/OUTPUT/FILE> directive.