# nf-core/configs: Purdue RCAC Negishi
Purdue RCAC Negishi cluster profile (CPU-only nf-core pipelines).
The `purdue_negishi` profile configures nf-core pipelines to run on the Negishi cluster operated by the Rosen Center for Advanced Computing (RCAC) at Purdue University.
Negishi is an AMD EPYC 7763 (Milan) cluster with 128 cores and 256 GB RAM per standard node, plus 1 TB highmem nodes. See the RCAC Negishi user guide for hardware and policy details.
## Prerequisites
```bash
module purge
module load nextflow
module load apptainer
```

Apptainer is the actual container runtime on Negishi (`/usr/bin/singularity` is a symlink to `apptainer`). The profile uses an `apptainer {}` block accordingly.
## Required parameter: `--cluster_account`
RCAC Slurm jobs must specify an account. List yours with `slist`, then pass it to Nextflow:

```bash
slist

nextflow run nf-core/<pipeline> \
    -profile purdue_negishi \
    --cluster_account <your_account> \
    --input samplesheet.csv \
    --outdir results
```

The profile will refuse to submit jobs if `--cluster_account` is unset.
## Partition routing
The profile routes each task dynamically based on its memory request:
| Memory request | Partition | Walltime cap | Notes |
|---|---|---|---|
| <= 256 GB | `cpu` | 14 d | Default for most pipeline steps |
| > 256 GB | `highmem` | 24 h | Slurm requires >= 65 cores per job on this partition |
If a pipeline step requests more than 256 GB RAM but fewer than 65 cores, Slurm will reject the submission. Raise the step’s CPU request in a pipeline-level config, or lower its memory request if the real need is below 256 GB.
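One way to resolve the mismatch is a small pipeline-level config passed with `-c`. A minimal sketch, assuming a hypothetical process name `BIG_ASSEMBLY` (substitute the real process name from the pipeline's trace or log):

```groovy
// custom.config -- hypothetical example; BIG_ASSEMBLY is a placeholder
// process name. Raising cpus to 65 satisfies the highmem partition's
// >= 65-cores-per-job Slurm policy while keeping the memory request.
process {
    withName: 'BIG_ASSEMBLY' {
        cpus   = 65
        memory = 400.GB
    }
}
```

Pass it with `nextflow run ... -c custom.config`; `withName` selectors override the pipeline's own resource requests for the matching process.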
GPU partitions on Negishi are AMD MI210 (ROCm) and are not exposed by this profile because nf-core GPU pipelines are CUDA-only.
## Standby queue (optional)
Negishi offers a 4 h standby QoS with higher throughput for short jobs:
```bash
nextflow run ... -profile purdue_negishi --use_standby true ...
```

Jobs are routed through standby only when they fit within the QoS limits (<= 4 h walltime, <= 256 GB memory). Longer or larger steps automatically fall back to the normal QoS.
## Reference data
A shared iGenomes mirror is mounted at `/depot/itap/datasets/igenomes` and the profile sets `params.igenomes_base` accordingly. Use the standard nf-core `--genome` keys (e.g. `--genome GRCh38`) in supported pipelines.

To use your own reference instead, pass the relevant pipeline parameters explicitly (`--fasta`, `--gtf`, etc.).
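Custom references can also be collected in a small config passed with `-c`. A sketch with hypothetical paths (replace them with your own files on `/depot` or scratch):

```groovy
// refs.config -- hypothetical example; both paths are placeholders
// for your own reference files.
params {
    fasta = '/depot/mylab/refs/GRCh38.fa'
    gtf   = '/depot/mylab/refs/GRCh38.gtf'
}
```

This is equivalent to passing `--fasta`/`--gtf` on the command line; command-line parameters take precedence over values set in config files.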
## Container cache and work directory
```bash
export NXF_SINGULARITY_CACHEDIR=$RCAC_SCRATCH/.apptainer/cache

nextflow run ... -w $RCAC_SCRATCH/nextflow-work ...
```

## Contact
- Arun Seetharam, @aseetharam, aseethar@purdue.edu
- RCAC support
## Config file
```groovy
// nf-core/configs: Purdue RCAC Negishi cluster profile
// Negishi: AMD EPYC 7763 (Milan), 128 cores / 256 GB per cpu node; 1 TB highmem nodes
// https://www.rcac.purdue.edu/knowledge/negishi/gateway

params {
    config_profile_description = 'Purdue RCAC Negishi cluster profile (CPU-only nf-core pipelines).'
    config_profile_contact     = 'Arun Seetharam (@aseetharam)'
    config_profile_url         = 'https://www.rcac.purdue.edu/knowledge/negishi/gateway'

    // Shared iGenomes mirror (identical path on Bell, Negishi, Gautschi)
    igenomes_base = '/depot/itap/datasets/igenomes'

    // REQUIRED. Run `slist` on Negishi to list your accounts.
    cluster_account = null

    // Opt-in: route jobs that fit within standby limits (<= 4 h, <= 256 GB)
    // through the 4 h standby QoS. Long or high-memory jobs stay on normal QoS.
    use_standby = false
}

// Tell nf-core schema validation to ignore our custom params
validation {
    ignoreParams = ['cluster_account', 'use_standby']
}

process {
    executor = 'slurm'

    // Global ceiling across partitions: largest node (highmem: 1 TB, 128 cores)
    // and longest walltime (cpu partition: 14 d). Per-task routing below picks
    // the right partition.
    resourceLimits = [
        cpus  : 128,
        memory: 1000.GB,
        time  : 336.h
    ]

    // Dynamic partition routing:
    //   highmem (1 TB, 24 h, >= 65 cores required by Slurm policy) when task.memory > 256 GB
    //   cpu     (256 GB, 14 d) otherwise
    queue = { task.memory > 256.GB ? 'highmem' : 'cpu' }

    clusterOptions = {
        if (!params.cluster_account) {
            System.err.println("ERROR: purdue_negishi profile requires --cluster_account=<slurm_account>.")
            System.err.println("       Run 'slist' on a Negishi login node to list your accounts.")
            System.exit(1)
        }
        // standby QoS has a 4 h walltime cap and does not apply to highmem.
        def standby = params.use_standby && task.memory <= 256.GB && task.time <= 4.h
        "--account=${params.cluster_account}" + (standby ? ' --qos=standby' : '')
    }
}

executor {
    queueSize         = 50
    pollInterval      = '30 sec'
    queueStatInterval = '5 min'
    submitRateLimit   = '10 sec'
}

apptainer {
    enabled    = true
    autoMounts = true
    cacheDir   = "${System.getenv('RCAC_SCRATCH') ?: System.getenv('SCRATCH') ?: System.getProperty('user.home')}/.apptainer/cache"
}
```