BioWDL: gatk-preprocess

A BioWDL workflow for preprocessing BAM files for variant calling. Based on the GATK Best Practices.

This workflow performs the preprocessing steps required for variant calling, based on the GATK Best Practices. It can be used for both DNA and RNA-seq data.

This workflow is part of BioWDL developed by the SASC team at Leiden University Medical Center.

Usage

This workflow can be run using Cromwell:

java -jar cromwell-<version>.jar run -i inputs.json gatk-preprocess.wdl

Inputs

Inputs are provided through a JSON file. The minimally required inputs are described below; a template containing all possible inputs can be generated with Womtool, as described in the Womtool documentation.
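For example, assuming a local copy of the Womtool jar, a full inputs template can be generated with:

java -jar womtool-<version>.jar inputs gatk-preprocess.wdl > inputs.json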

{
  "GatkPreprocess.reference": {
    "fasta": "A path to a reference fasta",
    "fai": "The path to the index associated with the reference fasta",
    "dict": "The path to the dict file associated with the reference fasta"
  },
  "GatkPreprocess.bamName": "The name for the output bam. The final output will be <bamName>.bam or <bamName>.bqsr",
  "GatkPreprocess.dbsnpVCF": {
    "file": "A path to a dbSNP VCF file",
    "index": "The path to the index (.tbi) file associated with the dbSNP VCF"
  },
  "GatkPreprocess.bamFile": {
    "file": "The path to an input BAM file",
    "index":"The path to the index for the input BAM file"
  }
}

Some additional inputs that may be of interest are:

{
  "GatkPreprocess.scatterSize": "The size of scatter regions (see explanation of scattering below), defaults to 10,000,000",
  "GatkPreprocess.outputRecalibratedBam": "Whether or not a recalibrated BAM file should be outputted, defaults to false",
  "GatkPreprocess.splitSplicedReads": "Whether or not SplitNCigarReads should be executed (recommended for RNA-seq data), defaults to false",
  "GatkPreprocess.scatterList.regions": "A bed file for which preprocessing will be performed"
}

An output directory can be set using an options.json file. See the Cromwell documentation for more information.

Example options.json file:

{
  "final_workflow_outputs_dir": "my-analysis-output",
  "use_relative_output_paths": true,
  "default_runtime_attributes": {
    "docker_user": "$EUID"
  }
}
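The options file can then be passed to Cromwell with the -o flag, for example:

java -jar cromwell-<version>.jar run -i inputs.json -o options.json gatk-preprocess.wdl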

Alternatively, an output directory can be set with the GatkPreprocess.outputDir input. GatkPreprocess.outputDir must be mounted in the Docker container; Cromwell will need a custom configuration to allow this.
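For instance, the outputDir input could be added to the inputs JSON as follows (the path shown is only an illustration):

{
  "GatkPreprocess.outputDir": "/home/user/analysis/preprocessing"
}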

Example

{
  "GatkPreprocess.reference": {
    "fasta": "/home/user/genomes/human/GRCh38.fasta",
    "fai": "/home/user/genomes/human/GRCh38.fasta.fai",
    "dict": "/home/user/genomes/human/GRCh38.dict"
  },
  "GatkPreprocess.bamName": "s1_preprocessed",
  "GatkPreprocess.dbsnpVCF": {
    "file": "/home/user/genomes/human/dbsnp/dbsnp-151.vcf.gz",
    "index": "/home/user/genomes/human/dbsnp/dbsnp-151.vcf.gz.tbi"
  },
  "GatkPreprocess.bamFile": {
    "file": "/home/user/mapping/results/s1.bam",
    "index":"/home/user/mapping/results/s1.bai"
  },
  "GatkPreprocess.splitSplicedReads": true,
  "GatkPreprocess.outputRecalibratedBam": true
}

Dependency requirements and tool versions

BioWDL pipelines use Docker images to ensure reproducibility. This means they will run on any system that has Docker installed. Alternatively, they can be run with Singularity.

For more advanced configuration of Docker or Singularity, please check the Cromwell documentation on containers.
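For example, a custom Cromwell configuration file (here called backend.conf, the name is arbitrary) can be supplied when running the workflow:

java -Dconfig.file=backend.conf -jar cromwell-<version>.jar run -i inputs.json gatk-preprocess.wdl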

Images from BioContainers are preferred for BioWDL pipelines. The list of default images for this pipeline can be found in the default for the dockerImages input.
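Individual images can presumably be overridden through the dockerImages input in the inputs JSON; the key and image tag below are only illustrative and should be checked against the workflow's defaults:

{
  "GatkPreprocess.dockerImages": {
    "gatk4": "quay.io/biocontainers/gatk4:<tag>"
  }
}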

Output

This workflow will produce a BQSR report named according to the bamName input (<bamName>.bqsr). If either the splitSplicedReads or outputRecalibratedBam input is set to true, a new BAM file (<bamName>.bam) will be produced as well.

Scattering

This pipeline performs scattering to speed up analysis on grid computing clusters. This is done by splitting the reference genome into regions of roughly equal size (see the scatterSize input). Each of these regions will be analyzed in separate jobs, allowing them to be processed in parallel.
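For example, to use smaller scatter regions than the default, the scatterSize input can be overridden in the inputs JSON (the value below is only illustrative):

{
  "GatkPreprocess.scatterSize": 5000000
}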

Contact

For any questions about running this workflow, or for feature requests, please use the GitHub issue tracker or contact the SASC team directly at sasc@lumc.nl.