Research Computing offers two methods to access Koko, our production research computing cluster. Click here to learn more.
How to view Slurm job records on Koko. Slurm job records show memory usage in addition to start and end times. Click here to learn more.
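As a sketch, a completed job's records can be pulled with Slurm's sacct tool (the job ID below is a placeholder; substitute your own):

```shell
$ sacct -j 123456 --format=JobID,JobName,Start,End,Elapsed,MaxRSS,State
```

MaxRSS reports the peak resident memory the job used, alongside its start and end timestamps.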
A summary of how Research Computing backs up data for the General Purpose Computing Infrastructure and the Health Infrastructure. Researchers should note that BHRIC resources follow a standard backup schedule, but backups for the General Purpose Computing Infrastructure are best effort. We recommend that users archive their work products to Microsoft OneDrive and/or Microsoft Teams.
SSH keys allow for improved access to Research Computing clusters by bypassing the need for two-factor authentication. Adding your public key to the cluster allows tools such as ssh, PuTTY, and others to connect more quickly and easily.
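As a minimal sketch (the /tmp file path is only for illustration — normally you would accept the default ~/.ssh/id_ed25519), generating a key pair locally looks like:

```shell
# Generate an Ed25519 key pair with no passphrase; the /tmp path is a demo path.
rm -f /tmp/koko_demo_key /tmp/koko_demo_key.pub
ssh-keygen -t ed25519 -f /tmp/koko_demo_key -N "" -q

# The public half is what gets appended to ~/.ssh/authorized_keys on the cluster.
cat /tmp/koko_demo_key.pub
```

The private key stays on your machine; ssh and PuTTY then authenticate with it instead of prompting for two-factor codes.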
SSH allows users to log in to Koko while bypassing two-factor authentication. Koko is our high-performance computing cluster. Click here to read more.
SSH allows users to log in to Koko. Koko is our high-performance computing cluster. Click here to read more.
Fluent is fluid-simulation software. There are multiple ways to use Fluent interactively. Click here to read more.
Koko Slurm queues are the partitions available to our job scheduler (Slurm). This article describes the queues offered on the HPC cluster. Click here to learn more.
Nodes and features available in Koko 3.0. Koko is a high performance computing cluster. Click here to read more.
Open OnDemand helps computational researchers and students efficiently use remote computing resources by making them accessible from any device. It simplifies the user interface and experience.
Most Research Computing cluster resources mount storage at /mnt/beegfs, and users are provided a 300GB quota by default. The cluster emails you daily while you are over quota, so keep your usage below the limit to avoid daily warning emails and notices about cost overruns.
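A quick way to check how close you are to the limit is du; whether your home directory lives under the BeeGFS mount is an assumption about your site's layout:

```shell
# Report total usage of your home directory in human-readable form;
# compare the figure against the default 300GB quota.
du -sh "$HOME"
```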
Scikit-learn is an open-source data analysis library and the gold standard for machine learning (ML) in the Python ecosystem. It provides algorithmic decision-making methods, including classification, regression, and clustering.
An SBATCH script is a way to submit jobs on Koko. Jobs can be launched interactively with srun or submitted in batch with sbatch scripts. Click here to read more.
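A minimal batch script sketch — the partition name shortq7 is a placeholder; list your site's real partitions with sinfo:

```shell
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=shortq7   # placeholder partition name
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --mem=4G

# The body below runs on the allocated compute node.
echo "Batch job running on $(hostname)"
```

Submit it from a login node with sbatch job.sh; an equivalent interactive session can be requested with srun using the same resource flags.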
Instructions on setting up Globus. Globus is a file transfer service that can move data to and from Koko. Click here to read more.
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Click here to learn more.
Uploading files to compute resources can be accomplished using command-line tools such as SCP, or graphical transfer tools such as Globus Online, Windows File Explorer, and Open OnDemand.
Rclone is an open-source command-line program to sync files and directories between local storage and cloud-based storage, such as Google Workspace. Rclone is installed on FAU Research Computing Services and available via module load rclone-1.59.1-gcc-9.4.0-k53kthf (the module name may change on different hardware, so use module avail to search). Click here to read more.
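As a sketch, after loading the module a copy to a configured remote looks like this (the remote name mygdrive is hypothetical; remotes are created with rclone config):

```shell
$ module load rclone-1.59.1-gcc-9.4.0-k53kthf
$ rclone listremotes                            # show remotes you have configured
$ rclone copy ~/results mygdrive:koko-results   # "mygdrive" is a hypothetical remote
```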
Provides instructions for accessing Koko via SSH and transferring directories and files with the SCP command; the transfer prompts for your research computing password.
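The transfer itself is a single command of this shape (the host name and destination path are placeholders, not your site's actual values):

```shell
$ scp -r ./myproject your_username@koko-login.example.edu:/home/your_username/
```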
Koko is our high performance computing cluster. SharePoint is a service that can receive files from Koko. Click here to read more.
Determining which Slurm queue (partition) has which GPU type.
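As a sketch, Slurm itself can report this: sinfo's generic-resources (GRES) column lists each partition's GPUs:

```shell
$ sinfo -o "%P %G"   # partition name and its GRES, e.g. gpu:<type>:<count>
```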