Hyperion Cantos
Yggdrasill
Use the script yggdrasil, which
- starts the VPN connection, prompting for authentication through the browser,
- asks for user confirmation that authentication succeeded before proceeding,
- sets up SSH tunnels, such that Hyperion and the office computer can be accessed, and
- waits for user confirmation to stop tunnels and VPN.
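A minimal sketch of such a flow is given below. The VPN client (OpenConnect is assumed), the gateway address, and the internal host names are placeholders; only the local ports match the rest of these notes.
#!/bin/bash
# Hypothetical sketch of the yggdrasil flow; gateway and host names are placeholders.
sudo openconnect vpn.example.ac.uk &       # start the VPN; authentication happens in the browser
read -rp "Press Enter once VPN authentication has succeeded: "
# Forward the local ports used in the rest of these notes to the remote services
ssh -fN -L 2000:localhost:22 <user>@hyperion.example.ac.uk
ssh -fN -L 2001:localhost:22 -L 2002:localhost:3389 <user>@office-pc.example.ac.uk
read -rp "Press Enter to stop the tunnels and the VPN: "
pkill -f "ssh -fN -L 200"                  # stop the tunnels
sudo pkill openconnect                     # stop the VPN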
The Consul
SSH access is exposed through port 2001 on localhost. Passwordless login is configured.
ssh -p 2001 <user>@localhost
This includes SCP / SFTP, e.g.
scp -P 2001 <file> <user>@localhost:
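For convenience, an entry in ~/.ssh/config can carry the port and user; a sketch, with a made-up consul alias:
Host consul
    HostName localhost
    Port 2001
    User <user>
ssh consul and scp <file> consul: should then work directly.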
The computer runs xrdp on port 2002, such that a desktop login can be accessed with
remmina -c rdp://<user>@localhost:2002
There should be no other desktop session already running for the user.
WakeOnLAN is configured in the BIOS, but doesn’t seem to work through the VPN.
Hyperion
SSH access to the scheduling node is exposed through port 2000 on localhost. Passwordless login is configured.
ssh -p 2000 <user>@localhost
This includes SCP / SFTP, e.g.
scp -P 2000 <file> <user>@localhost:
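Other SSH-based tools can use the same tunnel, for example rsync (an illustration, not part of the original setup):
rsync -av -e "ssh -p 2000" <dir>/ <user>@localhost:<dir>/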
There is a Dolphin bookmark for file browsing and transfer,
sftp://<user>@localhost:2000/users/<user>/
Software
Conda
The conda command is made available by
flight env activate conda
Conda has been configured by a ~/.condarc file with the contents
channels:
  - conda-forge
  - nodefaults
channel_priority: strict
proxy_servers:
  http: http://hpc-proxy00.city.ac.uk:3128
  https: http://hpc-proxy00.city.ac.uk:3128
Direct internet access is blocked from Hyperion; the proxy servers are therefore used to access the package repositories.
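Other tools that honour the standard proxy environment variables can presumably be pointed at the same proxy, e.g.
export http_proxy=http://hpc-proxy00.city.ac.uk:3128
export https_proxy=http://hpc-proxy00.city.ac.uk:3128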
A Conda environment r providing R has been created using
conda create -n r r-essentials r-base
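To use the environment interactively, the same activation steps as in the SLURM script below should work, e.g.
flight env activate conda
source /opt/apps/flight/env/conda+default/bin/activate r
R --version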
Quarto
Quarto 1.5.46 has been installed from the Linux x86 Tarball following the post-download instructions.
After installation, deno did not work. The workaround was to install deno via conda into the r environment and then to copy the binary:
cp ~/.conda/envs/r/bin/deno ~/opt/quarto-1.5.46/bin/tools/x86_64/
Alternatively, the QUARTO_DENO environment variable could be used.
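For example (an untested sketch, pointing at the conda-installed binary from above):
export QUARTO_DENO=~/.conda/envs/r/bin/deno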
SLURM
Temporary interactive access to compute nodes for testing purposes is available via
srun --time=10 --pty /bin/bash
Jobs are submitted using
sbatch slurm_run.sh
slurm_run.sh is a standard shell script, but with comments providing options to sbatch. One actual example:
#!/bin/bash
#SBATCH --job-name eci_pda
#SBATCH --partition=nodes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=48
#SBATCH --mem=24GB
#SBATCH --time=24:00:00
if [ -z "$SLURM_JOB_ID" ]; then
    echo "This script must be run using sbatch"
    exit 1
fi
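# Make the flight tools available and activate the r Conda environment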
source /opt/flight/etc/setup.sh
flight env activate conda
source /opt/apps/flight/env/conda+default/bin/activate r
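# Render the analysis, passing the worker count as a Quarto parameter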
quarto render analysis.qmd -P n_workers:48
This example asks for a single task on a single node for 24 hours, where the node needs to have at least 24 GB free and 48 CPUs available. With this setup, the analysis code itself needs to utilize the CPUs by spawning parallel processes.
Here the code is within a Quarto document using R, and the value passed as the parameter n_workers is used by the R code to create a fork cluster with that many workers.
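Submitted jobs can be monitored and cancelled with the usual Slurm commands, e.g.
squeue -u $USER
scancel <jobid>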
More information at HPC - Introduction.