Configuration options
This page lists all of the available settings in the Nextflow configuration.
Unscoped options
The following settings are available:

bucketDir: The remote work directory used by hybrid workflows. Equivalent to the -bucket-dir option of the run command.
cleanup: Delete all files associated with a run in the work directory when the run completes successfully (default: false). This option prevents the use of the resume feature on subsequent executions of that pipeline run. It is not supported for remote work directories, such as Amazon S3, Google Cloud Storage, and Azure Blob Storage.
outputDir: The pipeline output directory. Equivalent to the -output-dir option of the run command.
resume: Enable the use of previously cached task executions. Equivalent to the -resume option of the run command.
workDir: The pipeline work directory. Equivalent to the -work-dir option of the run command.
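A minimal sketch of how these unscoped options might appear in nextflow.config; the paths below are placeholders, not defaults:

// placeholder paths for illustration only
workDir = '/scratch/my-user/work'
outputDir = 'results'
resume = true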
apptainer
The apptainer scope controls how Apptainer containers are executed by Nextflow.
The following settings are available:

apptainer.autoMounts: Automatically mount host paths in the executed container (default: true). It requires the user bind control feature to be enabled in your Apptainer installation.
apptainer.cacheDir: The directory where remote Apptainer images are stored. When using a computing cluster it must be a shared folder accessible to all compute nodes.
apptainer.enabled: Execute tasks with Apptainer containers (default: false).
apptainer.engineOptions: Specify additional options supported by the Apptainer engine i.e. apptainer [OPTIONS].
apptainer.envWhitelist: Comma separated list of environment variable names to be included in the container environment.
apptainer.libraryDir: Directory where remote Apptainer images are retrieved. When using a computing cluster it must be a shared folder accessible to all compute nodes.
apptainer.noHttps: Pull the Apptainer image with http protocol (default: false).
apptainer.ociAutoPull: When enabled, OCI (and Docker) container images are pulled and converted to the SIF format by the Apptainer run command, instead of Nextflow (default: false). Leave ociAutoPull disabled if you are willing to build a Singularity/Apptainer native image with Wave (see the Build Singularity native images section).
apptainer.pullTimeout: The amount of time the Apptainer pull can last, exceeding which the process is terminated (default: 20 min).
apptainer.registry: The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
apptainer.runOptions: Specify extra command line options supported by apptainer exec.
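A minimal sketch of an Apptainer configuration; the cache path is a placeholder for a directory shared across compute nodes:

apptainer {
    enabled = true
    autoMounts = true
    cacheDir = '/shared/apptainer-cache'   // placeholder shared path
    pullTimeout = '30 min'
}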
aws
The aws scope controls the interactions with AWS, including AWS Batch and S3.
The following settings are available:

aws.accessKey: AWS account access key.
aws.profile: AWS profile from ~/.aws/credentials.
aws.region: AWS region (e.g. us-east-1).
aws.secretKey: AWS account secret key.
aws.batch.cliPath: The path where the AWS command line tool is installed in the host AMI.
aws.batch.delayBetweenAttempts: Delay between download attempts from S3 (default: 10 sec).
aws.batch.executionRole: The AWS Batch Execution Role ARN that needs to be used to execute the Batch Job. It is mandatory when using AWS Fargate.
aws.batch.forceGlacierTransfer: When true, add the --force-glacier-transfer flag to AWS CLI S3 download commands (default: false). This option is needed when staging directories that have been restored from S3 Glacier. It does not restore objects from Glacier.
aws.batch.jobRole: The AWS Batch Job Role ARN that needs to be used to execute the Batch Job.
aws.batch.logsGroup: The name of the logs group used by Batch Jobs (default: /aws/batch/job).
aws.batch.maxParallelTransfers: Max parallel upload/download transfer operations per job (default: 4).
aws.batch.maxSpotAttempts: Max number of execution attempts of a job interrupted by an EC2 Spot reclaim event (default: 0). The default value was changed from 5 to 0.
aws.batch.maxTransferAttempts: Max number of download attempts from S3 (default: 1).
aws.batch.platformType: The compute platform type used by AWS Batch. Can be either ec2 or fargate. Set to fargate to use AWS Fargate.
aws.batch.retryMode: The retry mode used to handle rate-limiting by AWS APIs. Can be one of standard, legacy, adaptive, or built-in (default: standard).
aws.batch.schedulingPriority: The scheduling priority for all tasks when using fair-share scheduling (default: 0).
aws.batch.shareIdentifier: The share identifier for all tasks when using fair-share scheduling.
aws.batch.terminateUnschedulableJobs: When true, jobs that cannot be scheduled due to lack of resources or misconfiguration are terminated and handled as task failures (default: false).
aws.batch.volumes: List of container mounts. Mounts can be specified as simple e.g. /some/path or canonical format e.g. /host/path:/mount/path[:ro|rw].
aws.client.anonymous: Allow the access of public S3 buckets without providing AWS credentials (default: false). Any service that does not accept unsigned requests will return a service access error.
aws.client.connectionTimeout: The amount of time to wait (in milliseconds) when initially establishing a connection before timing out (default: 10000).
aws.client.endpoint: The AWS S3 API entry point e.g. https://s3-us-west-1.amazonaws.com. The endpoint must include the protocol prefix e.g. https://.
aws.client.maxConnections: The maximum number of open HTTP connections used by the S3 client (default: 50).
aws.client.maxDownloadHeapMemory: The maximum size for the heap memory buffer used by concurrent downloads. It must be at least 10 times the minimumPartSize (default: 400 MB).
aws.client.maxErrorRetry: The maximum number of retry attempts for failed retryable requests (default: -1).
aws.client.minimumPartSize: The minimum part size used for multipart S3 transfers (default: 8 MB).
aws.client.multipartThreshold: The object size threshold used for multipart S3 transfers (default: same as aws.client.minimumPartSize).
aws.client.protocol: This option is no longer supported. The protocol to use when connecting to AWS. Can be http or https (default: 'https').
aws.client.proxyHost: The proxy host to connect through.
aws.client.proxyPassword: The password to use when connecting through a proxy.
aws.client.proxyPort: The port to use when connecting through a proxy.
aws.client.proxyScheme: The protocol scheme to use when connecting through a proxy. Can be http or https (default: 'http').
aws.client.proxyUsername: The user name to use when connecting through a proxy.
aws.client.requesterPays: Use Requester Pays for S3 buckets (default: false).
aws.client.s3Acl: Specify predefined bucket permissions, also known as canned ACL. Can be one of Private, PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead, BucketOwnerFullControl, or AwsExecRead.
aws.client.s3PathStyleAccess: Use the path-based access model to access objects in S3-compatible storage systems (default: false).
aws.client.signerOverride: This option is no longer supported. The name of the signature algorithm to use for signing requests made by the client.
aws.client.socketSendBufferSizeHint: This option is no longer supported. The size hint (in bytes) for the low level TCP send buffer (default: 0).
aws.client.socketRecvBufferSizeHint: This option is no longer supported. The size hint (in bytes) for the low level TCP receive buffer (default: 0).
aws.client.socketTimeout: The amount of time to wait (in milliseconds) for data to be transferred over an established, open connection before the connection is timed out (default: 50000).
aws.client.storageClass: The S3 storage class applied to stored objects, one of STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING (default: STANDARD).
aws.client.storageEncryption: The S3 server side encryption to be used when saving objects on S3. Can be AES256 or aws:kms (default: none).
aws.client.storageKmsKeyId: The AWS KMS key Id to be used to encrypt files stored in the target S3 bucket.
aws.client.targetThroughputInGbps: The target network throughput (in Gbps) used for S3 uploads and downloads (default: 10).
aws.client.userAgent: This option is no longer supported. The HTTP user agent header passed with all HTTP requests.
aws.client.uploadChunkSize: This option is no longer supported. The size of a single part in a multipart upload (default: 100 MB).
aws.client.uploadMaxAttempts: This option is no longer supported. The maximum number of upload attempts after which a multipart upload returns an error (default: 5).
aws.client.uploadMaxThreads: This option is no longer supported. The maximum number of threads used for multipart upload (default: 10).
aws.client.uploadRetrySleep: This option is no longer supported. The time to wait after a failed upload attempt to retry the part upload (default: 500ms).
aws.client.uploadStorageClass: The S3 storage class applied to stored objects. Can be STANDARD, STANDARD_IA, ONEZONE_IA, or INTELLIGENT_TIERING (default: STANDARD).
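A minimal sketch of an AWS Batch configuration; the region and CLI path are placeholders that depend on your account and AMI:

aws {
    region = 'eu-west-1'                                  // placeholder region
    batch {
        cliPath = '/home/ec2-user/miniconda/bin/aws'      // placeholder path to the AWS CLI in the host AMI
        maxParallelTransfers = 8
    }
    client {
        maxConnections = 100
        storageClass = 'INTELLIGENT_TIERING'
    }
}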
azure
The azure scope allows you to configure the interactions with Azure, including Azure Batch and Azure Blob Storage.
The following settings are available:

azure.activeDirectory.servicePrincipalId: The service principal client ID. Defaults to environment variable AZURE_CLIENT_ID.
azure.activeDirectory.servicePrincipalSecret: The service principal client secret. Defaults to environment variable AZURE_CLIENT_SECRET.
azure.activeDirectory.tenantId: The Azure tenant ID. Defaults to environment variable AZURE_TENANT_ID.
azure.azcopy.blobTier: The blob access tier used by azcopy to upload files to Azure Blob Storage. Valid options are None, Hot, or Cool (default: None).
azure.azcopy.blockSize: The block size (in MB) used by azcopy to transfer files between Azure Blob Storage and compute nodes (default: 4).
azure.batch.accountKey: The batch service account key. Defaults to environment variable AZURE_BATCH_ACCOUNT_KEY.
azure.batch.accountName: The batch service account name. Defaults to environment variable AZURE_BATCH_ACCOUNT_NAME.
azure.batch.allowPoolCreation: Enable the automatic creation of batch pools specified in the Nextflow configuration file (default: false).
azure.batch.autoPoolMode: Enable the automatic creation of batch pools depending on the pipeline resources demand (default: true).
azure.batch.copyToolInstallMode: The mode in which the azcopy tool is installed by Nextflow (default: 'node'). Available options: 'node' (the azcopy tool is installed once during the pool creation), 'task' (the azcopy tool is installed for each task execution), 'off' (the azcopy tool is not installed).
azure.batch.deleteJobsOnCompletion: Delete all jobs when the workflow completes (default: false). Default value was changed from true to false.
azure.batch.deletePoolsOnCompletion: Delete all compute node pools when the workflow completes (default: false).
azure.batch.deleteTasksOnCompletion: Delete each task when it completes (default: true). Although this setting is enabled by default, failed tasks will not be deleted unless it is explicitly enabled. This way, the default behavior is that successful tasks are deleted while failed tasks are preserved for debugging purposes.
azure.batch.endpoint: The batch service endpoint e.g. https://nfbatch1.westeurope.batch.azure.com.
azure.batch.jobMaxWallClockTime: The maximum elapsed time that jobs may run, measured from the time they are created (default: 30d). If jobs do not complete within this time limit, the Batch service terminates them and any tasks still running.
azure.batch.location: The name of the batch service region, e.g. westeurope or eastus2. Not needed when the endpoint is specified.
azure.batch.poolIdentityClientId: The client ID for an Azure managed identity that is available on all Azure Batch node pools. This identity is used by Fusion to authenticate to Azure storage. If set to 'auto', Fusion will use the first available managed identity.
azure.batch.pools.<name>.autoScale: Enable autoscaling feature for the pool identified with <name>.
azure.batch.pools.<name>.fileShareRootPath: The internal root mount point when mounting File Shares. Must be /mnt/resource/batch/tasks/fsmounts for CentOS nodes or /mnt/batch/tasks/fsmounts for Ubuntu nodes (default: CentOS).
azure.batch.pools.<name>.lowPriority: Enable the use of low-priority VMs (default: false). As of September 30, 2025, Low Priority VMs will no longer be supported in Azure Batch accounts that use Batch Managed mode for pool allocation. You may continue to use this setting to configure Spot VMs in Batch accounts configured with User Subscription mode.
azure.batch.pools.<name>.maxVmCount: The max number of virtual machines when using auto scaling.
azure.batch.pools.<name>.mountOptions: The mount options for mounting the file shares (default: -o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp).
azure.batch.pools.<name>.offer: The offer type of the virtual machine type used by the pool identified with <name> (default: centos-container).
azure.batch.pools.<name>.privileged: Enable the task to run with elevated access. Ignored if runAs is set (default: false).
azure.batch.pools.<name>.publisher: The publisher of virtual machine type used by the pool identified with <name> (default: microsoft-azure-batch).
azure.batch.pools.<name>.runAs: The username under which the task is run. The user must already exist on each node of the pool.
azure.batch.pools.<name>.scaleFormula: The scale formula for the pool identified with <name>.
azure.batch.pools.<name>.scaleInterval: The interval at which to automatically adjust the Pool size according to the autoscale formula. Must be at least 5 minutes and at most 168 hours (default: 10 mins).
azure.batch.pools.<name>.schedulePolicy: The scheduling policy for the pool identified with <name>. Can be either spread or pack (default: spread).
azure.batch.pools.<name>.sku: The ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8).
azure.batch.pools.<name>.startTask.privileged: Enable the startTask to run with elevated access (default: false).
azure.batch.pools.<name>.startTask.script: The startTask that is executed as the node joins the Azure Batch node pool.
azure.batch.pools.<name>.virtualNetwork: The subnet ID of a virtual network in which to create the pool.
azure.batch.pools.<name>.vmCount: The number of virtual machines provisioned by the pool identified with <name>.
azure.batch.pools.<name>.vmType: The virtual machine type used by the pool identified with <name>.
azure.batch.terminateJobsOnCompletion: When the workflow completes, set all jobs to terminate on task completion (default: true).
azure.managedIdentity.clientId: The client ID for an Azure managed identity. Defaults to environment variable AZURE_MANAGED_IDENTITY_USER.
azure.managedIdentity.system: When true, use the system-assigned managed identity to authenticate Azure resources. Defaults to environment variable AZURE_MANAGED_IDENTITY_SYSTEM.
azure.registry.password: The password to connect to a private container registry.
azure.registry.server: The container registry from which to pull the Docker images (default: docker.io).
azure.registry.userName: The username to connect to a private container registry.
azure.retryPolicy.delay: Delay when retrying failed API requests (default: 250ms).
azure.retryPolicy.jitter: Jitter value when retrying failed API requests (default: 0.25).
azure.retryPolicy.maxAttempts: Max attempts when retrying failed API requests (default: 10).
azure.retryPolicy.maxDelay: Max delay when retrying failed API requests (default: 90s).
azure.storage.accountKey: The blob storage account key. Defaults to environment variable AZURE_STORAGE_ACCOUNT_KEY.
azure.storage.accountName: The blob storage account name. Defaults to environment variable AZURE_STORAGE_ACCOUNT_NAME.
azure.storage.fileShares.<name>.mountOptions: The file share mount options.
azure.storage.fileShares.<name>.mountPath: The file share mount path.
azure.storage.sasToken: The blob storage shared access signature (SAS) token, which can be provided instead of an account key. Defaults to environment variable AZURE_STORAGE_SAS_TOKEN.
azure.storage.tokenDuration: The duration of the SAS token generated by Nextflow when the sasToken option is not specified (default: 48h).
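A minimal sketch of an Azure Batch configuration; the account names and region are placeholders, and the account keys are assumed to come from the corresponding environment variables:

azure {
    storage {
        accountName = 'mystorageaccount'   // placeholder storage account
    }
    batch {
        location = 'westeurope'            // placeholder region
        accountName = 'mybatchaccount'     // placeholder batch account
        autoPoolMode = true
        deletePoolsOnCompletion = true
    }
}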
charliecloud
The charliecloud scope controls how Charliecloud containers are executed by Nextflow.
The following settings are available:

charliecloud.cacheDir: The directory where remote Charliecloud images are stored. When using a computing cluster it must be a shared folder accessible to all compute nodes.
charliecloud.enabled: Execute tasks with Charliecloud containers (default: false).
charliecloud.envWhitelist: Comma separated list of environment variable names to be included in the container environment.
charliecloud.pullTimeout: The amount of time the Charliecloud pull can last, exceeding which the process is terminated (default: 20 min).
charliecloud.registry: The registry from where images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
charliecloud.runOptions: Specify extra command line options supported by the ch-run command.
charliecloud.temp: Mounts a path of your choice as the /tmp directory in the container. Use the special value 'auto' to create a temporary directory each time a container is created.
charliecloud.writableInputMounts: When false, mount input directories as read-only (default: true).
charliecloud.writeFake: Run containers from storage in writeable mode using overlayfs (default: true). This option requires unprivileged overlayfs (Linux kernel >= 5.11). For full support, tempfs with xattrs in the user namespace (Linux kernel >= 6.6) is required. See the Charliecloud documentation for details.
conda
The conda scope controls the creation of Conda environments by the Conda package manager.
The following settings are available:

conda.cacheDir: The path where Conda environments are stored. It should be accessible from all compute nodes when using a shared file system.
conda.channels: The list of Conda channels that can be used to resolve Conda packages (default: 'conda-forge,bioconda'). Channel priority decreases from left to right. The default was changed to 'conda-forge,bioconda'.
conda.createOptions: Extra command line options for the conda create command. See the Conda documentation for more information.
conda.createTimeout: The amount of time to wait for the Conda environment to be created before failing (default: 20 min).
conda.enabled: Execute tasks with Conda environments (default: false).
conda.useMamba: Use Mamba instead of conda to create Conda environments (default: false).
conda.useMicromamba: Use Micromamba instead of conda to create Conda environments (default: false).
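A minimal sketch of a Conda configuration; the cache path is a placeholder for a directory shared across compute nodes:

conda {
    enabled = true
    useMicromamba = true
    cacheDir = '/shared/conda-envs'   // placeholder shared path
    createTimeout = '1 h'
}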
dag
The dag scope controls the generation of the workflow diagram.
The following settings are available:

dag.depth: Supported by the HTML and Mermaid renderers. Controls the maximum depth at which to render sub-workflows (default: no limit).
dag.direction: Supported by the Graphviz, DOT, HTML and Mermaid renderers. Controls the direction of the DAG, can be 'LR' (left-to-right) or 'TB' (top-to-bottom) (default: 'TB').
dag.enabled: When true enables the generation of the DAG file (default: false).
dag.file: Graph file name (default: 'dag-<timestamp>.html'). The output format is inferred from the file extension. The following formats are supported:
  dot: Graphviz DOT file.
  gexf: Graph Exchange XML Format (GEXF).
  html: HTML file with Mermaid diagram.
  mmd: Mermaid diagram.
  pdf: Graphviz PDF file. Requires Graphviz.
  png: Graphviz PNG file. Requires Graphviz.
  svg: Graphviz SVG file. Requires Graphviz.
dag.overwrite: When true overwrites any existing DAG file with the same name (default: false).
dag.verbose: Only supported by the HTML and Mermaid renderers. When false, channel names are omitted, operators are collapsed, and empty workflow inputs are removed (default: false).
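A minimal sketch of a DAG configuration; the file name is a placeholder, and the Mermaid format is inferred from the .mmd extension:

dag {
    enabled = true
    file = 'reports/flowchart.mmd'   // placeholder output path
    overwrite = true
    verbose = true
}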
docker
The docker scope controls how Docker containers are executed by Nextflow.
The following settings are available:

docker.enabled: Enable Docker execution (default: false).
docker.engineOptions: Specify additional options supported by the Docker engine i.e. docker [OPTIONS].
docker.envWhitelist: Comma separated list of environment variable names to be included in the container environment.
docker.fixOwnership: Fix ownership of files created by the Docker container (default: false).
docker.legacy: Use command line options removed since Docker 1.10.0 (default: false).
docker.mountFlags: Add the specified flags to the volume mounts e.g. 'ro,Z'.
docker.registry: The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
docker.registryOverride: When true, forces the override of the registry name in fully qualified container image names with the registry specified by docker.registry (default: false). This setting allows you to redirect container image pulls from their original registry to a different registry, such as a private mirror or proxy.
docker.remove: Clean up the container after the execution (default: true). See the Docker documentation for details.
docker.runOptions: Specify extra command line options supported by the docker run command. See the Docker documentation for details.
docker.sudo: Executes Docker run command as sudo (default: false).
docker.temp: Mounts a path of your choice as the /tmp directory in the container. Use the special value 'auto' to create a temporary directory each time a container is created.
docker.tty: Allocates a pseudo-tty (default: false).
docker.writableInputMounts: When false, mount input directories as read-only (default: true).
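A minimal sketch of a Docker configuration; the registry is a placeholder private server, and running as the current user is a common but optional choice:

docker {
    enabled = true
    registry = 'registry.example.com'     // placeholder private registry (no protocol prefix)
    runOptions = '-u $(id -u):$(id -g)'   // run containers as the current user
    temp = 'auto'
}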
env
The env scope allows the definition of one or more variables that will be exported into the environment where workflow tasks are executed.
Simply prefix your variable names with the env scope or surround them by curly brackets, as shown below:
env.ALPHA = 'some value'
env.BETA = "$HOME/some/path"
env {
DELTA = 'one more'
GAMMA = "/my/path:$PATH"
}
In the above example, variables like $HOME and $PATH are evaluated when the workflow is launched. If you want these variables to be evaluated during task execution, escape them with \$. This difference is important for variables like $PATH, which may be different in the workflow environment versus the task environment.
The env scope provides environment variables to tasks, not Nextflow itself. Nextflow environment variables such as NXF_VER should be set in the environment in which Nextflow is launched.
executor
The executor scope controls various executor behaviors.
The following settings are available:

executor.account: Used only by the SLURM, LSF, PBS/Torque and PBS Pro executors. The project or organization account that should be charged for running the pipeline jobs.
executor.cpus: Used only by the local executor. The maximum number of CPUs made available by the underlying system.
executor.dumpInterval: Determines how often to log the executor status (default: 5 min).
executor.exitReadTimeout: Used only by grid executors. Determines how long to wait for the .exitcode file to be created after the task has completed, before returning an error status (default: 270 sec).
executor.jobName: Used only by grid executors and Google Batch. Determines the name of jobs submitted to the underlying cluster executor, e.g. executor.jobName = { "$task.name - $task.hash" }. The job name should satisfy the validation constraints of the underlying scheduler.
executor.killBatchSize: Determines the number of jobs that can be killed in a single command execution (default: 100).
executor.memory: Used only by the local executor. The maximum amount of memory made available by the underlying system.
executor.name: The name of the executor to be used (default: local).
executor.perCpuMemAllocation: Used only by the SLURM executor. When true, memory allocations for SLURM jobs are specified as --mem-per-cpu <task.memory / task.cpus> instead of --mem <task.memory>.
executor.onlyJobState: Used only by the SLURM executor. Requires SLURM 24.05 or later. When true, job status queries use squeue --only-job-state without partition (-p) or user (-u) filters. This can reduce the load on the SLURM controller, especially if your SLURM administrator has enabled SchedulerParameters=enable_job_state_cache in your SLURM configuration. See --only-job-state for more information (default: false).
executor.perJobMemLimit: Used only by the LSF executor. Enables the per-job memory limit mode for LSF jobs.
executor.perTaskReserve: Used only by the LSF executor. Enables the per-task memory reserve mode for LSF jobs.
executor.pollInterval: Determines how often to check for process termination. Default varies for each executor.
executor.queueGlobalStatus: Determines how job status is retrieved. When false only the queue associated with the job execution is queried. When true the job status is queried globally, i.e. irrespective of the submission queue (default: false).
executor.queueSize: The number of tasks the executor will handle in a parallel manner. A queue size of zero corresponds to no limit. Default varies for each executor.
executor.queueStatInterval: Used only by grid executors. Determines how often to fetch the queue status from the scheduler (default: 1 min).
executor.submitRateLimit: Determines the max rate of job submission per time unit, for example '10sec' (10 jobs per second) or '50/2min' (50 jobs every 2 minutes) (default: unlimited).
executor.retry.delay: Used only by grid executors. Delay when retrying failed job submissions (default: 500ms).
executor.retry.jitter: Used only by grid executors. Jitter value when retrying failed job submissions (default: 0.25).
executor.retry.maxAttempts: Used only by grid executors. Max attempts when retrying failed job submissions (default: 3).
executor.retry.maxDelay: Used only by grid executors. Max delay when retrying failed job submissions (default: 30s).
executor.retry.reason: Used only by grid executors. Regex pattern that when verified causes a failed submit operation to be re-tried (default: Socket timed out). This option was renamed from executor.submit.retry.reason to executor.retry.reason.
Some executor settings have different default values depending on the executor.
Executor-specific defaults
Executor         queueSize   pollInterval
AWS Batch        1000        10s
Azure Batch      1000        10s
Google Batch     1000        10s
Grid Executors   100         5s
Kubernetes       100         5s
Local            N/A         100ms
Executor-specific configuration
Executor config settings can be applied to specific executors by prefixing the executor name with the symbol $ and using it as a special scope. For example:
// block syntax
executor {
$sge {
queueSize = 100
pollInterval = '30sec'
}
$local {
cpus = 8
memory = '32 GB'
}
}
// dot syntax
executor.$sge.queueSize = 100
executor.$sge.pollInterval = '30sec'
executor.$local.cpus = 8
executor.$local.memory = '32 GB'
fusion
The fusion scope provides advanced configuration for the use of the Fusion file system.
The following settings are available:

fusion.cacheSize: The maximum size of the local cache used by the Fusion client.
fusion.containerConfigUrl: The URL of the container layer that provides the Fusion client.
fusion.enabled: Enable the Fusion file system (default: false).
fusion.exportStorageCredentials: Export access credentials required by the underlying object storage as environment variables (e.g., AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN for AWS S3) to task execution environments (default: false). This configuration does not mount or provide access to credential files. For example, AWS credentials like ~/.aws/credentials, ~/.aws/config, and SSO cache files are not mounted. AWS SSO users must export credentials to environment variables: eval "$(aws configure export-credentials --format env)". This option leaks credentials in the task launcher script. It should only be used for testing and development purposes.
fusion.logLevel: The log level of the Fusion client.
fusion.logOutput: The output location of the Fusion log.
fusion.privileged: Enable privileged containers for Fusion (default: true). Non-privileged use is supported only on Kubernetes with the k8s-fuse-plugin or a similar FUSE device plugin.
fusion.snapshots: Currently only supported for AWS Batch. Enable Fusion snapshotting (preview, default: false). This feature allows Fusion to automatically restore a job when it is interrupted by a spot reclamation.
fusion.tags: Currently only supported for S3. The pattern that determines how tags are applied to files created via the Fusion client (default: [.command.*|.exitcode|.fusion.*](nextflow.io/metadata=true),[*](nextflow.io/temporary=true)). Set to false to disable tags.
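A minimal sketch of a Fusion configuration; enabling Wave alongside Fusion is shown here as a common pairing, not a requirement imposed by this scope:

wave.enabled = true                       // Fusion is typically used together with Wave
fusion.enabled = true
fusion.exportStorageCredentials = false   // keep credentials out of the task launcher script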
google
The google scope allows you to configure the interactions with Google Cloud, including Google Cloud Batch and Google Cloud Storage.
The following settings are available:

google.enableRequesterPaysBuckets: Use the given Google Cloud project ID as the billing project for storage access (default: false). Required when accessing data from requester pays buckets.
google.httpConnectTimeout: The HTTP connection timeout for Cloud Storage API requests (default: '60s').
google.httpReadTimeout: The HTTP read timeout for Cloud Storage API requests (default: '60s').
google.location: The Google Cloud location where jobs are executed (default: us-central1).
google.project: The Google Cloud project ID to use for pipeline execution.
google.batch.allowedLocations: The set of allowed locations for VMs to be provisioned (default: no restriction).
google.batch.autoRetryExitCodes: The list of exit codes that should be automatically retried by Google Batch when google.batch.maxSpotAttempts is greater than 0 (default: [50001]). See the Google Batch documentation for the complete list of retryable exit codes.
google.batch.bootDiskImage: The image URI of the virtual machine boot disk, e.g. batch-debian (default: none). See the Google documentation for details.
google.batch.bootDiskSize: The size of the virtual machine boot disk, e.g. 50.GB (default: none).
google.batch.cpuPlatform: The minimum CPU Platform, e.g. 'Intel Skylake' (default: none).
google.batch.gcsfuseOptions: List of custom mount options for gcsfuse (default: ['-o rw', '-implicit-dirs']).
google.batch.logsPath: The Google Cloud Storage path where job logs should be stored, e.g. gs://my-logs-bucket/logs. When specified, Google Batch will write job logs to this location instead of Cloud Logging. The bucket must be accessible and writable by the service account.
google.batch.maxSpotAttempts: Max number of execution attempts of a job interrupted by a Compute Engine Spot reclaim event (default: 0). The default value was changed from 5 to 0. See also google.batch.autoRetryExitCodes.
google.batch.network: The URL of an existing network resource to which the VM will be attached. You can specify the network as a full or partial URL. For example, the following are all valid URLs: https://www.googleapis.com/compute/v1/projects/{project}/global/networks/{network}, projects/{project}/global/networks/{network}, global/networks/{network}.
google.batch.networkTags: The network tags to be applied to the instances created by Google Batch jobs (e.g., ['allow-ssh', 'allow-http']). Network tags are ignored when using instance templates.
google.batch.serviceAccountEmail: The Google service account email to use for the pipeline execution. If not specified, the default Compute Engine service account for the project will be used. This service account will only be used for tasks submitted by Nextflow, not for Nextflow itself. See Credentials for more information on Google Cloud credentials.
google.batch.spot: Enable the use of spot virtual machines (default: false).
google.batch.subnetwork: The URL of an existing subnetwork resource in the network to which the VM will be attached. You can specify the subnetwork as a full or partial URL. For example, the following are all valid URLs: https://www.googleapis.com/compute/v1/projects/{project}/regions/{region}/subnetworks/{subnetwork}, projects/{project}/regions/{region}/subnetworks/{subnetwork}, regions/{region}/subnetworks/{subnetwork}.
google.batch.usePrivateAddress: Do not provision public IP addresses for VMs, such that they only have an internal IP address (default: false). When this option is enabled, jobs can only load Docker images from Google Container Registry, and cannot use external services other than Google APIs.
google.storage.retryPolicy.maxAttempts: Max attempts when retrying failed API requests to Cloud Storage (default: 10).
google.storage.retryPolicy.maxDelay: Max delay when retrying failed API requests to Cloud Storage (default: '90s').
google.storage.retryPolicy.multiplier: Delay multiplier when retrying failed API requests to Cloud Storage (default: 2.0).
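A minimal sketch of a Google Batch configuration; the project ID and location are placeholders:

google {
    project = 'my-gcp-project'      // placeholder project ID
    location = 'europe-west4'       // placeholder location
    batch {
        spot = true
        maxSpotAttempts = 3
        bootDiskSize = 50.GB
    }
}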
k8s
The k8s scope controls the deployment and execution of workflow applications in a Kubernetes cluster.
The following settings are available:

k8s.autoMountHostPaths: Automatically mount host paths into the task pods (default: false). Only intended for development purposes when using a single node.
k8s.cleanup: When true, successful pods are automatically deleted (default: true).
k8s.client: Map of options for the Kubernetes HTTP client. If this option is specified, it will be used instead of .kube/config. Available options: server, token, tokenFile, verifySsl, sslCert, sslCertFile, clientCert, clientCertFile, clientKey, clientKeyFile.
k8s.clientRefreshInterval: The interval after which the Kubernetes client configuration is refreshed (default: 50m). This setting is useful when the Kubernetes authentication token has a limited lifespan and needs to be periodically refreshed. The client configuration will be automatically reloaded after the specified interval, allowing Nextflow to obtain fresh credentials from the Kubernetes configuration.
k8s.computeResourceType: Whether to use the Kubernetes Pod or Job resource type to carry out Nextflow tasks (default: Pod).
k8s.context: The Kubernetes configuration context to use.
k8s.cpuLimits: When true, set both the pod CPUs request and limit to the value specified by the cpus directive, otherwise set only the request (default: false). This setting is useful when a K8s cluster requires a CPU limit to be defined through a LimitRange.
k8s.fetchNodeName: Include the hostname of each task in the execution trace (default: false).
k8s.fuseDevicePlugin: The FUSE device plugin to be used when enabling Fusion in unprivileged mode (default: ['nextflow.io/fuse': 1]).
k8s.httpConnectTimeout: The Kubernetes HTTP client request connection timeout e.g. '60s'.
k8s.httpReadTimeout: The Kubernetes HTTP client request connection read timeout e.g. '60s'.
k8s.imagePullPolicy: The strategy for pulling container images. Can be IfNotPresent, Always, Never.
k8s.launchDir: The path where the workflow is launched and the user data is stored (default: <volume-claim-mount-path>/<user-name>). Must be a path in a shared K8s persistent volume.
k8s.namespace: The Kubernetes namespace to use (default: default).
k8s.pod: Additional pod configuration options such as environment variables, config maps, secrets, etc. Allows the same settings as the pod process directive. When using the kuberun command, this setting also applies to the submitter pod.
k8s.projectDir: The path where Nextflow projects are downloaded (default: <volume-claim-mount-path>/projects). Must be a path in a shared K8s persistent volume.
k8s.runAsUser: The user ID to be used to run the containers. Shortcut for the securityContext option.
k8s.securityContext: The security context to use for all pods.
k8s.serviceAccount: The Kubernetes service account name to use.
k8s.storageClaimName: The name of the persistent volume claim where the shared work directory is stored.
k8s.storageMountPath: The mount path for the persistent volume claim (default: /workspace).
k8s.storageSubPath: The path in the persistent volume to be mounted (default: /).
k8s.workDir: The path of the shared work directory (default: <user-dir>/work). Must be a path in a shared K8s persistent volume.
k8s.debug.yaml: Save the pod spec for each task to .command.yaml in the task directory (default: false).
k8s.retryPolicy.delay: Delay when retrying failed API requests (default: 250ms).
k8s.retryPolicy.jitter: Jitter value when retrying failed API requests (default: 0.25).
k8s.retryPolicy.maxAttempts: Max attempts when retrying failed API requests (default: 4).
k8s.retryPolicy.maxDelay: Max delay when retrying failed API requests (default: 90s).
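A minimal sketch of a Kubernetes configuration; the namespace, service account, and claim name are placeholders for resources that must already exist in the cluster:

k8s {
    namespace = 'nextflow'              // placeholder namespace
    serviceAccount = 'nextflow-sa'      // placeholder service account
    storageClaimName = 'nextflow-pvc'   // placeholder persistent volume claim
    storageMountPath = '/workspace'
}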
lineage
The lineage scope controls the generation of lineage metadata.
The following settings are available:

lineage.enabled: Enable generation of lineage metadata (default: false).
lineage.store.location: The location of the lineage metadata store (default: ./.lineage).
mail
The mail scope controls the mail server used to send email notifications.
The following settings are available:

mail.debug: Enable Java Mail logging for debugging purposes (default: false).
mail.from: Default email sender address.
mail.smtp.host: Host name of the mail server.
mail.smtp.password: User password to connect to the mail server.
mail.smtp.port: Port number of the mail server.
mail.smtp.user: User name to connect to the mail server.
mail.smtp.proxy.host: Host name of an HTTP web proxy server that will be used for connections to the mail server.
mail.smtp.proxy.port: Port number for the HTTP web proxy server.
mail.smtp.*: Any SMTP configuration property supported by the Java Mail API, which Nextflow uses to send emails. See the table of available properties here.
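A minimal sketch of a mail configuration; the sender address, server, and credentials are placeholders:

mail {
    from = 'pipelines@example.com'     // placeholder sender address
    smtp {
        host = 'smtp.example.com'      // placeholder mail server
        port = 587
        user = 'pipelines@example.com'
        password = 'changeme'          // placeholder; avoid committing real credentials
    }
}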
manifest
The manifest scope allows you to define some metadata that is useful when publishing or running your pipeline.
The following settings are available:

manifest.author: Use manifest.contributors instead. Project author name (use a comma to separate multiple names).
manifest.contributors: List of project contributors. Should be a list of maps. The following fields are supported in the contributor map:
  name: The contributor name.
  affiliation: The contributor affiliated organization.
  email: The contributor email address.
  github: The contributor GitHub URL.
  contribution: List of contribution types, each element can be one of 'author', 'maintainer', or 'contributor'.
  orcid: The contributor ORCID URL.
manifest.defaultBranch: Git repository default branch (default: master).
manifest.description: Free text describing the workflow project.
manifest.docsUrl: Project documentation URL.
manifest.doi: Project related publication DOI identifier.
manifest.gitmodules: Controls whether git sub-modules should be cloned with the main repository. Can be either a boolean value, a list of submodule names, or a comma-separated string of submodule names.
manifest.homePage: Project home page URL.
manifest.icon: Project related icon location (relative path or URL).
manifest.license: Project license.
manifest.mainScript: Project main script (default: main.nf).
manifest.name: Project short name.
manifest.nextflowVersion: Minimum required Nextflow version. This setting may be useful to ensure that a specific version is used:
manifest.nextflowVersion = '1.2.3'        // exact match
manifest.nextflowVersion = '1.2+'         // 1.2 or later (excluding 2 and later)
manifest.nextflowVersion = '>=1.2'        // 1.2 or later
manifest.nextflowVersion = '>=1.2, <=1.5' // any version in the 1.2 .. 1.5 range
manifest.nextflowVersion = '!>=1.2'       // with ! prefix, stop execution if current version does not match required version
See VersionNumber for details.
manifest.organization: Project organization.
manifest.recurseSubmodules: Pull submodules recursively from the Git repository.
manifest.version: Project version number.
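A minimal sketch of a manifest block; all names, URLs, and version values are placeholders:

manifest {
    name = 'my-org/my-pipeline'            // placeholder project name
    description = 'Example analysis pipeline'
    version = '1.0.0'
    defaultBranch = 'main'
    nextflowVersion = '!>=24.04.0'         // placeholder minimum version
    contributors = [
        [name: 'Jane Doe', contribution: ['author'], orcid: 'https://orcid.org/0000-0000-0000-0000']
    ]
}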
nextflow
The nextflow.publish.retryPolicy settings were moved to workflow.output.retryPolicy.
The workflow.output.retryPolicy settings were subsequently moved to nextflow.retryPolicy.
nextflow.retryPolicy.delay: Delay used for retryable operations (default: 350ms).
nextflow.retryPolicy.jitter: Jitter value used for retryable operations (default: 0.25).
nextflow.retryPolicy.maxAttempts: Max attempts used for retryable operations (default: 5).
nextflow.retryPolicy.maxDelay: Max delay used for retryable operations (default: 90s).
notification
The notification scope controls the automatic sending of an email notification on workflow completion.
The following settings are available:

notification.attributes: Map of variables that can be used in the template file.
notification.enabled: Send an email notification when the workflow execution completes (default: false).
notification.from: Sender address for the email notification.
notification.template: Path of a template file containing the contents of the email notification.
notification.to: Recipient address for the email notification. Multiple addresses can be specified as a comma-separated list.
podman
The podman scope controls how Podman containers are executed by Nextflow.
The following settings are available:

podman.enabled: Execute tasks with Podman containers (default: false).
podman.engineOptions: Specify additional options supported by the Podman engine i.e. podman [OPTIONS].
podman.envWhitelist: Comma separated list of environment variable names to be included in the container environment.
podman.mountFlags: Add the specified flags to the volume mounts e.g. 'ro,Z'.
podman.registry: The registry from where container images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
podman.remove: Clean-up the container after the execution (default: true).
podman.runOptions: Specify extra command line options supported by the podman run command.
podman.temp: Mounts a path of your choice as the /tmp directory in the container. Use the special value 'auto' to create a temporary directory each time a container is created.
report
The report scope controls the generation of the Execution report.
The following settings are available:

report.enabled: Create the execution report on workflow completion (default: false).
report.file: The path of the created execution report file (default: 'report-<timestamp>.html').
report.overwrite: Overwrite any existing report file with the same name (default: false).
sarus
The sarus scope controls how Sarus containers are executed by Nextflow.
The following settings are available:

sarus.enabled: Execute tasks with Sarus containers (default: false).
sarus.envWhitelist: Comma-separated list of environment variable names to be included in the container environment.
sarus.runOptions: Specify extra command line options supported by the sarus run command. See the Sarus user guide for details.
sarus.tty: Allocates a pseudo-tty (default: false).
shifter
The shifter scope controls how Shifter containers are executed by Nextflow.
The following settings are available:

shifter.enabled: Execute tasks with Shifter containers (default: false).
shifter.envWhitelist: Comma-separated list of environment variable names to be included in the container environment.
singularity
The singularity scope controls how Singularity containers are executed by Nextflow.
The following settings are available:

singularity.autoMounts: Automatically mount host paths in the executed container (default: true). It requires the user bind control feature to be enabled in your Singularity installation. Default value was changed from false to true.
singularity.cacheDir: The directory where remote Singularity images are stored. When using a compute cluster, it must be a shared folder accessible to all compute nodes.
singularity.enabled: Execute tasks with Singularity containers (default: false).
singularity.engineOptions: Specify additional options supported by the Singularity engine i.e. singularity [OPTIONS].
singularity.envWhitelist: Comma separated list of environment variable names to be included in the container environment.
singularity.libraryDir: Directory where remote Singularity images are retrieved. When using a computing cluster it must be a shared folder accessible to all compute nodes.
singularity.noHttps: Pull the Singularity image with http protocol (default: false).
singularity.ociAutoPull: Requires Singularity 3.11 or later. When enabled, OCI (and Docker) container images are pulled and converted to a SIF image file format implicitly by the Singularity run command, instead of Nextflow (default: false). Leave ociAutoPull disabled if willing to build a Singularity native image with Wave (see Build Singularity native images).
singularity.ociMode: Requires Singularity 4 or later. Enable OCI-mode, which allows running native OCI compliant container images with Singularity using crun or runc as low-level runtime (default: false). See the --oci flag in the Singularity documentation for more details and requirements. Leave ociMode disabled if you are willing to build a Singularity native image with Wave (see Build Singularity native images).
singularity.pullTimeout: The amount of time the Singularity pull can last, after which the process is terminated (default: 20 min).
singularity.registry: The registry from where Docker images are pulled. It should be only used to specify a private registry server. It should NOT include the protocol prefix i.e. http://.
singularity.runOptions: Specify extra command line options supported by singularity exec.
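A minimal sketch of a Singularity configuration; the cache path is a placeholder for a directory shared across compute nodes:

singularity {
    enabled = true
    autoMounts = true
    cacheDir = '/shared/singularity-cache'   // placeholder shared path
    ociAutoPull = true                       // requires Singularity 3.11 or later
}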
spack
The spack scope controls the creation of a Spack environment by the Spack package manager.
The following settings are available:

spack.cacheDir: The path where Spack environments are stored. It should be accessible from all compute nodes when using a shared file system.
spack.checksum: Enable checksum verification of source tarballs (default: true). Only disable when requesting a package version not yet encoded in the corresponding Spack recipe.
spack.createTimeout: The amount of time to wait for the Spack environment to be created before failing (default: 60 min).
spack.enabled: Execute tasks with Spack environments (default: false).
spack.parallelBuilds: The maximum number of parallel package builds (default: the number of available CPUs).
timeline
The timeline scope controls the generation of the Execution timeline.
The following settings are available:

timeline.enabled: Create the execution timeline on workflow completion (default: false).
timeline.file: Timeline file name (default: 'timeline-<timestamp>.html').
timeline.overwrite: Overwrite any existing timeline file with the same name (default: false).
tower
The tower scope controls the settings for Seqera Platform (formerly Tower Cloud).
The following settings are available:

tower.accessToken: The unique access token for your Seqera Platform account. Your accessToken can be obtained from your Seqera Platform instance in the Tokens page.
tower.computeEnvId: The compute environment ID in your Seqera Platform account used to launch pipelines (default: the primary compute environment in the selected workspace).
tower.enabled: Enable workflow monitoring with Seqera Platform (default: false).
tower.endpoint: The endpoint of your Seqera Platform instance (default: https://api.cloud.seqera.io).
tower.workspaceId: The workspace ID in Seqera Platform in which to save the run (default: the launching user's personal workspace). The workspace ID can also be specified using the environment variable TOWER_WORKSPACE_ID (config file has priority over the environment variable).
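A minimal sketch of a Seqera Platform configuration; the token and workspace ID are placeholders (the token is commonly supplied via the TOWER_ACCESS_TOKEN environment variable instead):

tower {
    enabled = true
    endpoint = 'https://api.cloud.seqera.io'
    accessToken = '<your access token>'   // placeholder
    workspaceId = '123456789'             // placeholder workspace ID
}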
trace
The trace scope controls the generation of the Trace file.
The following settings are available:

trace.enabled: Create the trace file on workflow completion (default: false).
trace.fields: Comma-separated list of fields to include in the trace file. Available fields:
  task_id: Task ID.
  hash: Task hash code.
  native_id: Task ID given by the underlying execution system e.g. POSIX process PID when executed locally, job ID when executed by a grid engine, etc.
  process: Nextflow process name.
  tag: User provided identifier associated with this task.
  name: Task name.
  status: Task status. Options: NEW, SUBMITTED, RUNNING, COMPLETED, FAILED, and ABORTED.
  exit: POSIX process exit status.
  module: Environment module used to run the task.
  container: Docker image name used to execute the task.
  cpus: The CPUs number request for the task execution.
  time: The time request for the task execution.
  disk: The disk space request for the task execution.
  memory: The memory request for the task execution.
  attempt: Attempt at which the task completed.
  submit: Timestamp when the task has been submitted.
  start: Timestamp when the task execution has started.
  complete: Timestamp when task execution has completed.
  duration: Time elapsed to complete since the submission.
  realtime: Task execution time i.e. delta between completion and start timestamp.
  queue: The queue that the executor attempted to run the process on.
  %cpu: Percentage of CPU used by the process.
  %mem: Percentage of memory used by the process.
  rss: Real memory (resident set) size of the process. Equivalent to ps -o rss.
  vmem: Virtual memory size of the process. Equivalent to ps -o vsize.
  peak_rss: Peak of real memory. Data is read from field VmHWM in the /proc/$pid/status file.
  peak_vmem: Peak of virtual memory. Data is read from field VmPeak in the /proc/$pid/status file.
  rchar: Number of bytes the process read, using any read-like system call from files, pipes, tty, etc. Data is read from /proc/$pid/io.
  wchar: Number of bytes the process wrote, using any write-like system call. Data is read from /proc/$pid/io.
  syscr: Number of read-like system call invocations that the process performed. Data is read from /proc/$pid/io.
  syscw: Number of write-like system call invocations that the process performed. Data is read from /proc/$pid/io.
  read_bytes: Number of bytes the process directly read from disk. Data is read from /proc/$pid/io.
  write_bytes: Number of bytes the process originally dirtied in the page-cache (assuming they will go to disk later). Data is read from /proc/$pid/io.
  vol_ctxt: Number of voluntary context switches. Data is read from field voluntary_ctxt_switches in the /proc/$pid/status file.
  inv_ctxt: Number of involuntary context switches. Data is read from field nonvoluntary_ctxt_switches in the /proc/$pid/status file.
  env: The variables defined in task execution environment.
  workdir: The directory path where the task was executed.
  script: The task command script.
  scratch: The value of the process scratch directive.
  error_action: The action applied on error for task failure.
  hostname: The host on which the task was executed. Supported only for the Kubernetes executor yet. Activate with k8s.fetchNodeName = true in the Nextflow config file.
  cpu_model: The name of the CPU model used to execute the task. This data is read from /proc/cpuinfo.
trace.file: Trace file name (default: 'trace-<timestamp>.txt').
trace.overwrite: Overwrite any existing trace file with the same name (default: false).
trace.raw: Report trace metrics as raw numbers where applicable, i.e. report duration values in milliseconds and memory values in bytes (default: false).
trace.sep: Character used to separate values in each row (default: \t).
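A minimal sketch of a trace configuration; the file path is a placeholder and the field list is an arbitrary subset of the fields described above:

trace {
    enabled = true
    file = 'reports/trace.txt'   // placeholder output path
    overwrite = true
    fields = 'task_id,name,status,exit,realtime,peak_rss'
}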
wave
The wave scope provides advanced configuration for the use of Wave containers.
The following settings are available:

wave.enabled: Enable the use of Wave containers (default: false).
wave.endpoint: The Wave service endpoint (default: https://wave.seqera.io).
wave.freeze: Enable Wave container freezing (default: false). Wave will provision a non-ephemeral container image that will be pushed to a container repository of your choice. The target registry must be specified using the wave.build.repository setting. It is also recommended to specify a custom cache repository using wave.build.cacheRepository. The container registry authentication must be managed by the underlying infrastructure.
wave.mirror: Enable Wave container mirroring (default: false). Wave will mirror (i.e. copy) the containers in your pipeline to a container registry of your choice, so that pipeline tasks can pull the containers from this registry instead of the original one. The mirrored containers will have the same name, digest, and metadata. The target registry must be specified using the wave.build.repository setting. This option is only compatible with wave.strategy = 'container'. It cannot be used with wave.freeze. The container registry authentication must be managed by the underlying infrastructure.
wave.strategy: The strategy to be used when resolving multiple Wave container requirements (default: 'container,dockerfile,conda').
wave.build.cacheRepository: The container repository used to cache image layers built by the Wave service. The corresponding credentials must be provided in your Seqera Platform account.
wave.build.compression.mode: The compression algorithm that should be used when building the container. Allowed values are gzip, estargz and zstd (default: gzip).
wave.build.compression.level: Level of compression used when building a container, depending on the chosen algorithm: gzip, estargz (0-9) and zstd (0-22).
wave.build.compression.force: Forcefully apply compression option to all layers, including already existing layers (default: false).
wave.build.conda.baseImage: The base image for the final stage in multi-stage Conda container builds (default: ubuntu:24.04). This option only applies when using wave.build.template set to conda/micromamba:v2 or conda/pixi:v1.
wave.build.conda.basePackages: One or more Conda packages to be always added in the resulting container (default: conda-forge::procps-ng).
wave.build.conda.commands: One or more commands to be added to the Dockerfile used to build a Conda based image.
wave.build.conda.mambaImage: The Mamba container image used to build Conda based containers. This is expected to be a micromamba-docker image.
wave.build.repository: The container repository where images built by Wave are uploaded. The corresponding credentials must be provided in your Seqera Platform account.
wave.build.template: The build template to use for container builds (default: conda/micromamba:v1). Supported values:
  conda/micromamba:v1: Standard Micromamba 1.x single-stage build. Default when unspecified.
  conda/micromamba:v2: Micromamba 2.x with multi-stage builds.
  conda/pixi:v1: Pixi package manager with multi-stage builds for optimized image sizes.
  cran/installr:v1: R/CRAN packages using installr.
Multi-stage templates produce smaller images by excluding build tools from the final image.
wave.httpClient.connectTimeout: The connection timeout for the Wave HTTP client (default: 30s).
wave.httpClient.maxRate: The maximum request rate for the Wave HTTP client (default: 1/sec).
wave.retryPolicy.delay: The initial delay when a failing HTTP request is retried (default: 450ms).
wave.retryPolicy.jitter: The jitter factor used to randomly vary retry delays (default: 0.25).
wave.retryPolicy.maxAttempts: The max number of attempts a failing HTTP request is retried (default: 5).
wave.retryPolicy.maxDelay: The max delay when a failing HTTP request is retried (default: 90s).
wave.scan.allowedLevels: Comma-separated list of allowed vulnerability levels when scanning containers for security vulnerabilities in required mode. Allowed values are low, medium, high, critical. This option requires wave.scan.mode = 'required'.
wave.scan.mode: Enable Wave container security scanning. Wave will scan the containers in your pipeline for security vulnerabilities. The following options can be specified:
  'none': No security scanning is performed.
  'async': The containers used by your pipeline are scanned for security vulnerabilities. The task execution is carried out regardless of the security scan result.
  'required': The containers used by your pipeline are scanned for security vulnerabilities. The task is only executed if the corresponding container is free of vulnerabilities.
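A minimal sketch of a Wave configuration with container freezing; the registry paths are placeholders and require credentials configured in your Seqera Platform account:

wave {
    enabled = true
    freeze = true
    build {
        repository = 'registry.example.com/wave/build'        // placeholder target registry
        cacheRepository = 'registry.example.com/wave/cache'   // placeholder cache registry
    }
    scan {
        mode = 'async'
    }
}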
workflow
The workflow scope provides workflow execution options.
The following settings are available:

workflow.failOnIgnore: When true, the pipeline will exit with a non-zero exit code if any failed tasks are ignored using the ignore error strategy (default: false).
workflow.onComplete: Specify a closure that will be invoked at the end of a workflow run (including failed runs). See Workflow handlers for more information.
workflow.onError: Specify a closure that will be invoked if a workflow run is terminated. See Workflow handlers for more information.
workflow.output.contentType: Currently only supported for S3. Specify the media type, also known as MIME type, of published files (default: false). Can be a string (e.g. 'text/html'), or true to infer the content type from the file extension.
workflow.output.copyAttributes: Currently only supported for local and shared filesystems. Copy file attributes (such as the last modified timestamp) to the published file (default: false).
workflow.output.enabled: Enable or disable publishing (default: true).
workflow.output.ignoreErrors: When true, the workflow will not fail if a file can't be published for some reason (default: false).
workflow.output.mode: The file publishing method (default: 'symlink'). Available options:
  'copy': Copy each file into the output directory.
  'copyNoFollow': Copy each file into the output directory without following symlinks, i.e. only the link is copied.
  'link': Create a hard link in the output directory for each file.
  'move': Move each file into the output directory. Should only be used for files which are not used by downstream processes in the workflow.
  'rellink': Create a relative symbolic link in the output directory for each file.
  'symlink': Create an absolute symbolic link in the output directory for each output file.
workflow.output.overwrite: When true any existing file in the specified folder will be overwritten (default: 'standard'). Available options:
  false: Never overwrite existing files.
  true: Always overwrite existing files.
  'deep': Overwrite existing files when the file content is different.
  'lenient': Overwrite existing files when the file size is different.
  'standard': Overwrite existing files when the file size or last modified timestamp is different.
workflow.output.storageClass: Currently only supported for S3. Specify the storage class for published files.
workflow.output.tags: Currently only supported for S3. Specify arbitrary tags for published files. For example: workflow.output.tags = [FOO: 'hello', BAR: 'world']