MetaSnk
Description
MetaSnk is a reproducible, scalable, and modular Snakemake workflow for the analysis of metagenomic datasets from human microbiomes.
MetaSnk wraps system and software dependencies within Singularity containers.
Modules:
- rawQC: It runs FastQC on a random sample of R1 reads from the paired fastq-format files.
- preQC: FastQC only performs a quality check; no QC processing is done. The preQC module runs a multi-step pre-processing of the paired fastq files, which includes:
  - trim_adapters: adapter trimming with fastp. fastp also performs a quality check, and both paired fastq files are processed as follows:
    - removal of adapters (the Nextera XT adapters are provided)
    - base correction in overlapped regions
    - trimming of the last base of read 1
    - discarding of reads shorter than a minimum length after trimming
    - a quality-check report, before and after processing
  - filter_human: removal of reads derived from human DNA with BBTools' bbsplit
  - dedupe: removal of duplicated reads with BBTools' clumpify
  - trim_3end: 3'-end quality trimming with fastp
  - concatenate_fastqs: merges fastq files corresponding to the same sample into a single pair of fastq files
  - summarize_preQC: creates summary tables and plots
- PhlAnProf: It performs taxonomic and strain-level profiling using MetaPhlAn2 and StrainPhlAn. If pre-processing (preQC) was not performed, PhlAnProf will trigger its execution.
- HUMAnN2Prof: It performs gene- and pathway-level functional profiling using HUMAnN2. If pre-processing (preQC) and taxonomic profiling with MetaPhlAn2 have not been performed, it will trigger their execution.
Authors
- Monica R. Ticlla (@mticllacc)
Requirements
Dependencies
- Snakemake >= 5.5.0
- Singularity >= 2.6
- python >= 3.6.8
- conda >= 4.6
Datasets
- Paired-end Illumina sequences in fastq files named as follows:
  sampleID-RUN_LANE-R1.fastq.gz
  sampleID-RUN_LANE-R2.fastq.gz
- MetaSnk expects to find the raw fastq files in a directory (set in the configuration file, see below) where they are grouped into one or more datasets. Each dataset directory (named at the user's discretion) must contain a directory named 'fastq', where the fastq files are placed, accompanied by a sample_metadata.tsv file:
$RAW_DIR
├── dataset_test_1
│   ├── fastq
│   │   ├── sampleID-RUN_LANE-R1.fastq.gz
│   │   └── sampleID-RUN_LANE-R2.fastq.gz
│   └── sample_metadata.tsv
Notice that you can have multiple paired fastq files per sample, but each sampleID-RUN_LANE combination must be unique, as in the example below.
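For example, a single sample sequenced on two lanes of the same run (the sample and run names here are hypothetical) would contribute four files:
fastq/
├── sample01-run1_L001-R1.fastq.gz
├── sample01-run1_L001-R2.fastq.gz
├── sample01-run1_L002-R1.fastq.gz
└── sample01-run1_L002-R2.fastq.gz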
- sample_metadata.tsv: a tab-delimited table with at least two column fields:
  sampleID	SubjectID
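A minimal example with hypothetical sample and subject IDs (columns are tab-separated; additional columns are allowed):
sampleID	SubjectID
sample01	subjectA
sample02	subjectA
sample03	subjectB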
Usage
Simple
Step 1: Install workflow
If you simply want to use this workflow, download and extract the latest release, or clone the repository:
git clone https://git.scicore.unibas.ch/TBRU/MetagenomicSnake.git <path/to/MetaSnk>
cd <path/to/MetaSnk>
echo -e "#MetaSnk directory\nmetasnk=$(pwd)\nexport metasnk">>$HOME/.bashrc
export METASNK_DBS=$HOME/MetaSnk_dbs
mkdir $METASNK_DBS
echo -e "#MetaSnk DBs directory\nMETASNK_DBS=$HOME/MetaSnk_dbs\nexport METASNK_DBS">>$HOME/.bashrc
source $HOME/.bashrc
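A quick check that both variables are available in the current shell:
echo "metasnk=$metasnk"
echo "METASNK_DBS=$METASNK_DBS"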
If you intend to modify and further extend this workflow, or want to work under version control, fork this repository as outlined in Advanced; this is the recommended approach.
In any case, if you use this workflow in a paper, don't forget to give credits to the authors by citing the URL of this repository and, if available, its DOI (see above).
Create minimal environment
Some rules will use this environment.
conda env create -f ./envs/MetaSnk.yaml
conda activate MetaSnk
If this step fails, make sure the dependencies listed above are already installed!
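You can verify the dependency versions directly (output formats differ slightly between tools):
snakemake --version    # expect >= 5.5.0
singularity --version  # expect >= 2.6
python --version       # expect >= 3.6.8
conda --version        # expect >= 4.6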
Download singularity containers and reference databases
MetaSnk wraps system requirements and software dependencies within Singularity containers. Download these containers by running the rule 'pullSIFS':
snakemake --profile ./profiles/local pullSIFS
The singularity image files (.sif) will be stored in $METASNK_DBS/singularity.
MetaSnk uses reference databases that need to be downloaded to the $METASNK_DBS directory:
snakemake --profile ./profiles/local buildDBS
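Both rules populate $METASNK_DBS; a simple listing confirms the downloads finished (exact file and directory names depend on the MetaSnk release):
ls $METASNK_DBS/singularity    # the downloaded .sif container images
ls $METASNK_DBS                # the reference databases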
Step 2: Configure workflow
Configure the workflow according to your needs by editing the file config.yaml.
Basic configuration
- Make a copy of the config.yaml (recommended) and place it in the working directory to be used by MetaSnk:
cp ./config.yaml <path_to/my_working_directory/config.yaml>
- Open the copied config.yaml and set RAW_DIR and OUT_DIR. You must provide absolute paths; a minimal sketch is shown after this list.
- The RAW_DIR should point to a directory where MetaSnk expects to find raw fastq data. This directory must have the following structure:
$RAW_DIR
├── dataset_test_1
│   ├── fastq
│   └── sample_metadata.tsv
└── dataset_test_2
    ├── fastq
    └── sample_metadata.tsv
- The OUT_DIR is the directory where MetaSnk will save the outputs of the workflow under the following structure:
$OUT_DIR
├── dataset_test_1
│   ├── PhlAnProf
│   ├── preQC
│   └── rawQC
├── dataset_test_2
│   ├── PhlAnProf
│   ├── preQC
│   └── rawQC
├── logs
│   ├── preQC_make_report.log
│   ├── rawQC_make_report.log
│   └── ref_indexing.log
├── preQC_report.html
└── rawQC_report.html
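A minimal sketch of the two required entries in config.yaml (the paths below are placeholders; keep any other settings from the copied file unchanged):
# config.yaml -- absolute paths only
RAW_DIR: /absolute/path/to/raw_data    # one sub-directory per dataset
OUT_DIR: /absolute/path/to/output_dir  # created and populated by MetaSnk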
Step 3: Execute workflow
Activate the environment via
conda activate MetaSnk
Test your configuration by performing a dry-run via
snakemake -s $metasnk/Snakefile -n
Execute the workflow locally via
snakemake \
--profile $metasnk/profiles/local \
--cores $N \
--directory <path_to/my_working_directory> \
-s $metasnk/Snakefile <METASNK_MODULE>
using $N cores and specifying a working directory. Keep in mind that the working directory is where MetaSnk will look for your configuration file, and also where Snakemake will store files to track the status of a running MetaSnk workflow.
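For example, a local run of the rawQC module with 4 cores, assuming your working directory ~/my_metasnk_run already contains a config.yaml:
snakemake \
    --profile $metasnk/profiles/local \
    --cores 4 \
    --directory ~/my_metasnk_run \
    -s $metasnk/Snakefile rawQC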
Alternatively, in a cluster environment controlled by the SLURM workload manager, run it via
snakemake \
--profile $metasnk/profiles/slurm \
--cores $N \
--cluster-config $metasnk/slurm_cluster.json \
--directory <path_to/my_working_directory> \
-s $metasnk/Snakefile <METASNK_MODULE>
See the Snakemake documentation for further details.
Step 4: Investigate results
After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake \
--directory <path_to/my_working_directory> \
-s $metasnk/Snakefile rawQC_make_report
or
snakemake \
--directory <path_to/my_working_directory> \
-s $metasnk/Snakefile preQC_make_report
These reports can, e.g., be forwarded to your collaborators.
Advanced
The following recipe provides established best practices for running and extending this workflow in a reproducible way.
- Fork the repo to a personal or lab account.
- Clone the fork to the desired working directory for the concrete project/run on your machine.
- Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).
- Modify the config, and any necessary sheets (and probably the workflow) as needed.
- Commit any changes and push the project-branch to your fork on GitHub.
- Run the analysis.
- Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.
- Optional: Push results (plots/tables) to the remote branch on your fork.
- Optional: Create a self-contained workflow archive for publication along with the paper (snakemake --archive).
- Optional: Delete the local clone/workdir to free space.
Testing
Test cases are in the subfolder .test. They are automatically executed via continuous integration with Travis CI.