Software
A number of software packages are installed on the cluster. Most of them are ready to use and do not require any installation or configuration on the user's side.
If you would like some software to be added to the cluster, let us know.
AAMKS
AAMKS is a web-based framework for fire risk analyses. It is set up on the FireUni cluster and you can access it via the webGUI.
Further information can be found on the project's GitHub page. There you will find a wiki full of important information and tips, along with an issue tracker that awaits your bug reports and feedback.
Fire Dynamics Simulator
There are several versions of FDS available on the cluster:
- FDS 6.7.1
- FDS 6.7.5
- FDS 6.9.1
For further information on the software, check the official documentation website.
To launch FDS properly on the cluster, please use the FDS Template provided by us. Remember to edit the run.sh script according to your needs.
#!/bin/bash -x
# [!] >>> set those variables >>>
#SBATCH --ntasks=4 # no. of meshes
#SBATCH --partition=common # change to six-core or four-core
#SBATCH --time=1-00:00:00 # D-HH:MM:SS - time reserved for the job
# [!] <<<
# >>> leave those variables as they are >>>
#SBATCH -e stderr.%j
#SBATCH -o stdout.%j
#SBATCH --exclusive
#SBATCH --cpus-per-task=1
# <<<
# [!] >>> set those variables >>>
FDS_VER=6.9.1 # 6.7.1 | 6.7.5 | 6.9.1 | ask admin for more
N_CORES=4 # as in SLURM partition: four-core (4) | six-core (6)
# [!] <<<
# >>> leave the code below as it is >>>
source /opt/FDS/FDS$FDS_VER/bin/FDS6VARS.sh
export OMP_NUM_THREADS=$N_CORES
mpiexec fds *.fds
# <<<
All settings requiring adjustments are listed below.
ntasks
The number of tasks for SLURM. In the case of FDS, this is the number of meshes. To utilize cluster resources effectively, this should be a multiple of 4 or 6.
partition
The partition (group of nodes) that will be assigned to your job. A detailed description of the partitions on our cluster can be found here.
Remember that to utilize cluster resources effectively, ntasks should be a multiple of the number of cores per node in the partition.
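For example, on the six-core partition a multiple of 6 keeps whole nodes busy. A sketch of a matching header (the exact partition names and node counts are assumptions; check the partition description for your cluster):

```shell
# sketch: ntasks matched to the partition's core count (assumed six-core nodes)
#SBATCH --partition=six-core
#SBATCH --ntasks=12   # 12 meshes fill exactly two six-core nodes
```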
time
The wall time for the job. This is the maximum time for the job to finish. If it does not finish within this timeframe, it will be killed by SLURM. However, do not overestimate this value - the shorter the wall time, the sooner your job may be launched.
The format is D-HH:MM:SS. So, e.g., 1-2:3:4 means that your job will be killed (if not yet finished) after twenty-six hours, three minutes and four seconds from the moment it was launched.
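As a quick sanity check, the 1-2:3:4 example can be converted to seconds with shell arithmetic:

```shell
# convert the example wall time 1-2:3:4 (D-HH:MM:SS) to seconds
SECS=$(( (1*24 + 2)*3600 + 3*60 + 4 ))
echo "$SECS seconds"   # 93784 s = 26 h 3 min 4 s
```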
FDS_VER
The version of the FDS software. You can choose from the following versions: 6.7.1, 6.7.5, 6.9.1.
If you need any other version, please let us know.
N_CORES
The number of cores per node on the partition. Unfortunately, it is impossible for bash to access SLURM variables before the job is submitted. This is why you need to tell it explicitly how many cores the partition you submit to has.
It is 4 for the four* partitions and 6 for the six-core partition.
Provide FDS input file
You should already know how to prepare an FDS input file (*.fds). If not, please ask your tutor or refer to one of the online courses.
The file can be uploaded from the Job Composer by clicking Open Dir. This will redirect you to the project (job) directory in the file explorer. Upload the file by dragging it or with the Upload button.
From within the file explorer you can edit your files, including the FDS input file.
Submit the job
Click the Submit button when everything is ready; your job will be added to the queue.
OpenFOAM
OpenFOAM v2406 from openfoam.com is available on the cluster.
For further information on the software, check the official website.
To launch OpenFOAM properly on the cluster, please use the OpenFOAM Template provided by us. Remember to edit the run.sh script according to your needs.
#!/bin/bash
### SLURM JOB HEADERS HERE
#SBATCH --time=1-0:00:0 # D-HH:MM:SS - time reserved for the job
#SBATCH --partition=four-core # four-core | six-core | common
#SBATCH --ntasks=4 # set if parallel job
### ENABLE OPENFOAM COMMANDS
source /opt/OpenFOAM/OpenFOAM-v2406/etc/bashrc
### OPENFOAM COMMANDS HERE
### use "mpiexec -n [ntasks] ..." when running parallel job
decomposePar
mpiexec -n 4 fireFoam -parallel
time
The wall time for the job. This is the maximum time for the job to finish. If it does not finish within this timeframe, it will be killed by SLURM. However, do not overestimate this value - the shorter the wall time, the sooner your job may be launched.
The format is D-HH:MM:SS. So, e.g., 1-2:3:4 means that your job will be killed (if not yet finished) after twenty-six hours, three minutes and four seconds from the moment it was launched.
partition
The partition (group of nodes) that will be assigned to your job. A detailed description of the partitions on our cluster can be found here.
Remember that to utilize cluster resources effectively, ntasks should be a multiple of the number of cores per node in the partition.
ntasks
The number of tasks for SLURM. To utilize cluster resources effectively, this should be a multiple of 4 or 6.
ENABLE OPENFOAM COMMANDS
This line sources the OpenFOAM environment variables. Afterwards, you can use all OpenFOAM commands directly.
OPENFOAM COMMANDS
This section consists of general OpenFOAM commands. Feel free to edit them or add additional commands specific to your case.
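A typical parallel fireFoam workflow might look like the sketch below. The utilities are standard OpenFOAM tools, but whether your case needs each step (e.g. blockMesh, reconstructPar) depends on your setup:

```shell
### sketch of a typical parallel workflow (adjust to your case)
blockMesh                                        # build the mesh, if the case provides a blockMeshDict
decomposePar                                     # split the case into one subdomain per task
mpiexec -n "$SLURM_NTASKS" fireFoam -parallel    # solve in parallel on all allocated tasks
reconstructPar                                   # merge subdomain results for post-processing
```

Note that the number of subdomains set in system/decomposeParDict must match the --ntasks value in the SLURM header.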
Provide OpenFOAM input files
You should already know how to prepare OpenFOAM input files. If not, please ask your tutor or refer to one of the online courses.
The files can be uploaded and organized from the Job Composer by clicking Open Dir. This will redirect you to the project (job) directory in the file explorer. Upload the files by dragging them or with the Upload button.
From within the file explorer you can also edit your files.
Submit the job
Click the Submit button when everything is ready; your job will be added to the queue.
SAFIR
SAFIR is a program for Finite Element Analysis (FEA) of structures in fire. It is developed at the University of Liege (Belgium) and Johns Hopkins University (USA). More information on the software can be found on their website.
The version of SAFIR installed on the cluster is v2022.d.7 (academic license).
SAFIR is compiled for Windows. We run its EXE binaries on Linux with wine. This package is not available by default on the cluster, because it takes a lot of disk space (and, in the case of stateless nodes, also memory). Let us know if you would like to run some SAFIR simulations and we will enable it for you.
To launch SAFIR properly on the cluster, please use our SAFIR Template.
#!/bin/bash -x
# [!] >>> set those variables >>>
#SBATCH --time=00-1:00:0
#SBATCH --partition=my_partition
SAFIR_FILENAME=filename # filename only, with no .IN extension
# [!] <<<
# >>> leave those variables and code as they are >>>
#SBATCH --ntasks=1
#SBATCH -e stderr.%j
#SBATCH -o stdout.%j
#SBATCH --cpus-per-task=1
export SAFIR_CHID=$SAFIR_FILENAME
wine cmd /c safir.bat
# <<<
SAFIR jobs are single-thread jobs. If you would like to run embarrassingly parallel tasks, take advantage of SLURM job arrays or Python scripting.
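A minimal sketch of such a job array, assuming input files named case_0.IN, case_1.IN, and so on (the naming scheme is a hypothetical example; adapt it to your own files):

```shell
#!/bin/bash -x
#SBATCH --array=0-3          # four independent SAFIR runs, one per array element
#SBATCH --ntasks=1
#SBATCH --time=0-1:00:00
# pick this element's input file via the array index (assumed case_N.IN naming)
SAFIR_FILENAME=case_${SLURM_ARRAY_TASK_ID}
export SAFIR_CHID=$SAFIR_FILENAME
wine cmd /c safir.bat
```

Each array element is queued as a separate single-thread job, so the runs can be scheduled independently.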
time
The wall time for the job. This is the maximum time for the job to finish. If it does not finish within this timeframe, it will be killed by SLURM. However, do not overestimate this value - the shorter the wall time, the sooner your job may be launched.
The format is D-HH:MM:SS. So, e.g., 1-2:3:4 means that your job will be killed (if not yet finished) after twenty-six hours, three minutes and four seconds from the moment it was launched.
SAFIR_FILENAME
Filename of the SAFIR input file (.IN). Remember to provide only the filename, without the .IN extension. If you need to launch e.g. test.IN, provide test as the filename.
Provide SAFIR input file
You should already know how to prepare a SAFIR input file (*.IN). If not, please ask your tutor or refer to one of the online courses.
The files can be uploaded and organized from the Job Composer by clicking Open Dir. This will redirect you to the project (job) directory in the file explorer. Upload the file by dragging it or with the Upload button.
From within the file explorer you can also edit your files.
Submit the job
Click the Submit button when everything is ready; your job will be added to the queue.
OpenSees for Fire
Not yet configured, stay tuned for updates!
Python
The version of Python installed is 3.9.18.
To launch your Python script, add a new job from the Python Template in the Job Composer and edit the run.sh script according to your needs.
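A minimal run.sh sketch for a serial Python job (my_script.py is a placeholder for your own script; the partition and wall time are example values):

```shell
#!/bin/bash
#SBATCH --time=0-1:00:00     # D-HH:MM:SS - time reserved for the job
#SBATCH --partition=common
#SBATCH --ntasks=1
python3 my_script.py         # replace with your own script
```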
Packages
The following packages are installed. Let us know if you need anything else.
Package | Version |
---|---|
absl-py | 2.3.1 |
astunparse | 1.6.3 |
certifi | 2025.7.9 |
charset-normalizer | 3.4.2 |
contourpy | 1.3.0 |
cycler | 0.12.1 |
flatbuffers | 25.2.10 |
fonttools | 4.58.5 |
gast | 0.6.0 |
google-pasta | 0.2.0 |
grpcio | 1.73.1 |
h5py | 3.14.0 |
idna | 3.10 |
importlib_metadata | 8.7.0 |
importlib_resources | 6.5.2 |
joblib | 1.5.1 |
keras | 3.10.0 |
kiwisolver | 1.4.7 |
libclang | 18.1.1 |
Markdown | 3.8.2 |
markdown-it-py | 3.0.0 |
MarkupSafe | 3.0.2 |
matplotlib | 3.9.4 |
mdurl | 0.1.2 |
ml_dtypes | 0.5.1 |
namex | 0.1.0 |
numpy | 2.0.2 |
opt_einsum | 3.4.0 |
optree | 0.16.0 |
packaging | 25.0 |
pandas | 2.3.0 |
pillow | 11.3.0 |
protobuf | 5.29.5 |
Pygments | 2.19.2 |
pyparsing | 3.2.3 |
python-dateutil | 2.9.0.post0 |
pytz | 2025.2 |
requests | 2.32.4 |
rich | 14.0.0 |
scikit-learn | 1.6.1 |
scipy | 1.13.1 |
setuptools | 80.9.0 |
six | 1.17.0 |
sklearn | 0.0 |
tensorboard | 2.19.0 |
tensorboard-data-server | 0.7.2 |
tensorflow | 2.19.0 |
tensorflow-io-gcs-filesystem | 0.37.1 |
termcolor | 3.1.0 |
threadpoolctl | 3.6.0 |
typing_extensions | 4.14.1 |
tzdata | 2025.2 |
urllib3 | 2.5.0 |
Werkzeug | 3.1.3 |
wheel | 0.45.1 |
wrapt | 1.17.2 |
zipp | 3.23.0 |
Jupyter Notebook
Not yet configured, stay tuned for updates!