MySQL Server

MySQL Server within Singularity

We use a singularity instance (a feature introduced in version 2.4) to run the MySQL server.

0. Load a singularity module:

$ module load singularity/2.6.0-xpi4udb

 

1. Get the image:

$ singularity pull --name mysql.simg shub://ISU-HPC/mysql


 

2. Download the MySQL configuration files and set the root password to be used. Note that a default password will be used if you skip this step. In each file, change the root password from ‘my-secret-pw’ to one of your own choosing.

$ curl https://raw.githubusercontent.com/ISU-HPC/mysql/master/my.cnf > \
    ${HOME}/.my.cnf

$ curl https://raw.githubusercontent.com/ISU-HPC/mysql/master/mysqlrootpw > \
    ${HOME}/.mysqlrootpw
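
For convenience you can make this change in both files with a single sed command (editing the files by hand works just as well). NEWPASSWORD is a placeholder for a password of your own choosing; avoid characters such as / or & that sed treats specially:

$ NEWPASSWORD='replace-with-your-own-password'
$ sed -i "s/my-secret-pw/${NEWPASSWORD}/" ${HOME}/.my.cnf ${HOME}/.mysqlrootpw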


 

3. Create local directories for MySQL. These will be bind-mounted into the container, allowing other containers to connect to the database via a local socket and allowing the database files to live on the host filesystem so that they persist between container instances.

$ mkdir -p ${PWD}/mysql/var/lib/mysql ${PWD}/mysql/run/mysqld


 

4. Start the singularity instance for the MySQL server:

$ singularity instance.start --bind ${HOME} \
    --bind ${PWD}/mysql/var/lib/mysql/:/var/lib/mysql \
    --bind ${PWD}/mysql/run/mysqld:/run/mysqld \
    ./mysql.simg mysql


 

5. Run the container startscript to initialize and start the MySQL server. Initialization is only done the first time; the script will automatically skip it if it is not needed. This command must be run each time the MySQL server is needed (e.g., each time the container instance is started to provide the MySQL server).

$ singularity run instance://mysql
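
To check that the server came up, you can list your running instances and, assuming a mysql client is available on the host (for example via the mariadb module used later in this document), query the server through its socket:

$ singularity instance.list

$ mysql -S ${PWD}/mysql/run/mysqld/mysqld.sock -e 'SELECT VERSION();'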


 

6. If the server must be accessed from multiple compute nodes, a new user must be created within MySQL. To create this user, run ‘create_remote_admin_user.sh’ within the mysql instance. You can do this by starting a shell within the running container, or by using singularity exec to run the script directly:

$ singularity exec instance://mysql create_remote_admin_user.sh

At the end of the output you’ll see that a user called ‘remote_usr’ has been created and a random password has been set. You’ll need to provide this username/password combination when connecting to the MySQL server remotely. This script should only be run once. The created user will persist between instances of the MySQL server.

If more security is required (e.g., a random username or more tightly constrained permissions), that is left to the user.

At this point, the MySQL server is running on the local machine and is accessible via a local socket as well as via the network (firewall configuration notwithstanding).


 

7. When done, stop the MySQL container instance.

$ singularity instance.stop mysql


 

Accessing the MySQL Server

The ${HOME}/.my.cnf file configured in step 2 defines login credentials for the MySQL root user (note that this is different from the host system’s root user). This allows the following connection examples to work without specifying a username or password.
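
For illustration only, the credentials live in a standard [client] section along these lines (the exact contents come from the file you downloaded in step 2, with ‘my-secret-pw’ replaced by your own password):

[client]
user=root
password=your-chosen-password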

 

Via Local Socket

By default, the mysql client expects to find the local socket in /run/mysqld. You can point it at a different socket with the -S option on the command line. For example, on the local host you can run

$ module load mariadb

$ mysql -S ${PWD}/mysql/run/mysqld/mysqld.sock

to connect to the containerized MySQL server. Note that this example assumes you are still in the directory into which you pulled the singularity image and created the mysql/ directory (steps 1-3).

When you want to access the MySQL server via the local socket from another container, you must bind-mount ${PWD}/mysql/run/mysqld into that container.
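
For example, to run the mysql client from a second container (client.simg below is just a placeholder for an image that contains a mysql client), bind the socket directory to /run/mysqld inside that container:

$ singularity exec --bind ${PWD}/mysql/run/mysqld:/run/mysqld \
    client.simg mysql -S /run/mysqld/mysqld.sock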
 

 

Via Network

Assuming the host machine’s firewall is configured to allow port 3306 (the standard MySQL port), you can access the containerized MySQL server by using the -h option on the command line. For example, on the local host you can run

$ mysql -h 127.0.0.1

and you will connect to the MySQL server. If you are on a second machine (e.g., a different compute node in a multi-node job), replace the 127.0.0.1 address with the IP address (or hostname) of the machine running the server container.
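
To connect from another node using the remote_usr account created in step 6 (you will be prompted for the random password that script generated), a command along these lines should work; the hostname below is a placeholder:

$ mysql -h <hostname-of-server-node> -u remote_usr -p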

 

Determining hostnames in a multi-node HPC job

This assumes you’re using the Slurm scheduler.

In a multi-node job you can get the list of hostnames for your job from the $SLURM_NODELIST environment variable. You can therefore set up the MySQL server on the first node in the list, and point OrthoMCL at that node, by extracting the first entry from the comma-separated $SLURM_NODELIST.

FIRST_NODE=$(echo $SLURM_NODELIST | cut -d ',' -f 1)
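
Note that Slurm often reports the node list in a compressed form (e.g., node[01-04]), in which case the simple cut above will not yield a usable hostname. A more robust sketch expands the list with scontrol first:

FIRST_NODE=$(scontrol show hostnames "$SLURM_NODELIST" | head -n 1)

You can then point clients at the server with, for example, mysql -h ${FIRST_NODE} -u remote_usr -p.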
 

 

Starting Over

If you need or want to start over from scratch, simply delete (and then re-create) the mysql/ directory created in step 3 of the MySQL server instructions. The container image itself is read-only and does not need to be deleted or re-pulled.
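
For example (a sketch, assuming you are still in the directory used in steps 3 and 4 and that the instance is currently running):

$ singularity instance.stop mysql

$ rm -rf ${PWD}/mysql

$ mkdir -p ${PWD}/mysql/var/lib/mysql ${PWD}/mysql/run/mysqld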