nf-core/variantbenchmarking


Introduction

nf-core/variantbenchmarking is designed to evaluate and validate the accuracy of variant calling methods in genomic research, especially for human data. The pipeline is primarily tuned for available gold-standard truth sets (for example, Genome in a Bottle and SEQC2 samples), but it can be used to compare any two sets of variant calls. The workflow provides benchmarking tools for small variants (SNVs and INDELs), structural variants (SVs) and copy number variations (CNVs), for both germline and somatic analyses.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

The workflow involves several key processes to ensure reliable and reproducible results as follows:

Standardization and normalization of variants:

This initial step ensures consistent formatting and alignment of variants in test and truth VCF files for accurate comparison.

  1. Subsampling if the input test VCF is multisample (bcftools view)
  2. Homogenization of multi-allelic variants, MNPs and SVs, including imprecise paired breakends and single breakends (variant-extractor)
  3. Reformatting test VCF files from different SV callers (svync)
  4. Renaming samples in test and truth VCF files (bcftools reheader)
  5. Splitting multi-allelic variants in test and truth VCF files (bcftools norm)
  6. Deduplication of variants in test and truth VCF files (bcftools norm)
  7. Normalization of test files with prepy; this option is only applicable to hap.py benchmarking of germline analyses (prepy)
  8. Splitting SNVs and INDELs if the given test VCF contains both; this is only applicable to somatic analysis (bcftools view)
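As a rough illustration of the bcftools steps above (not the pipeline's exact module invocations; sample names, file names and the reference FASTA are placeholders):

# subset a single sample from a multisample test VCF
bcftools view -s SAMPLE1 test.vcf.gz -Oz -o test.SAMPLE1.vcf.gz

# split multi-allelic records and left-align/normalize against the reference
bcftools norm -m -any -f reference.fa test.SAMPLE1.vcf.gz -Oz -o test.norm.vcf.gz

# remove exact duplicate records
bcftools norm --rm-dup exact test.norm.vcf.gz -Oz -o test.dedup.vcf.gz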

Filtering options:

Applying filters inside the benchmarking step itself can make it impossible to compare different benchmarking strategies. Therefore, this subworkflow provides upstream filtering options for users who want to compare benchmarking methods on identically filtered inputs.

  1. Filtration of contigs (bcftools view)
  2. Inclusion or exclusion of SNVs and INDELs (bcftools filter)
  3. Size and quality filtering for SVs (SURVIVOR filter)
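For example, the bcftools-based filters above could look roughly like this (the contig list and filter expression are placeholders; the SURVIVOR size and quality filter is applied analogously through its own command-line arguments):

# keep only the contigs of interest
bcftools view -t chr1,chr2,chr3 test.vcf.gz -Oz -o test.contigs.vcf.gz

# keep SNVs only (use -e with the same expression to exclude them instead)
bcftools filter -i 'TYPE="snp"' test.contigs.vcf.gz -Oz -o test.snv.vcf.gz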

Liftover of truth sets:

This subworkflow provides the option to convert the genome coordinates of the truth VCF and the high-confidence BED file to a new assembly. Gold-standard truth files are built on specific reference genomes, so lifting over may be necessary depending on the assembly used for the test VCF.

  1. Create sequence dictionary for the reference (picard CreateSequenceDictionary). This file can be saved and reused.
  2. Lifting over truth variants (picard LiftoverVcf)
  3. Lifting over high confidence coordinates (UCSC liftover)
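A sketch of what these steps look like on the command line (the chain file, assemblies and file names are placeholders):

# build a sequence dictionary for the target reference (reusable)
picard CreateSequenceDictionary R=GRCh38.fa O=GRCh38.dict

# lift the truth VCF to the new assembly
picard LiftoverVcf I=truth.GRCh37.vcf.gz O=truth.GRCh38.vcf.gz \
   CHAIN=hg19ToHg38.over.chain.gz REJECT=rejected.vcf.gz R=GRCh38.fa

# lift the high-confidence BED regions
liftOver highconf.GRCh37.bed hg19ToHg38.over.chain.gz highconf.GRCh38.bed unmapped.bed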

Statistical inference of input test and truth variants:

This step provides insights into the distribution of variants before benchmarking.

  1. Get statistics of SNVs, INDELs and complex variants (bcftools stats)
  2. Get statistics of SVs by type (SURVIVOR stats)
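For example (output names are placeholders, and the exact SURVIVOR arguments used by the pipeline may differ):

# per-type counts, Ts/Tv ratio and other summary statistics for small variants
bcftools stats test.vcf.gz > test.stats.txt

# SV counts by type and size bin (-1 disables the size and support filters)
SURVIVOR stats test.sv.vcf -1 -1 -1 test.sv.stats.txt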

Benchmarking of variants:

The actual benchmarking of variants is split between SVs and small variants:

Available methods for SVs:

  1. Germline and somatic variant benchmarking using Truvari (truvari bench)
  2. Germline and somatic variant benchmarking using SVanalyzer (svanalyzer benchmark)
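A minimal sketch of how these SV comparisons are typically invoked (the file names, reference and high-confidence BED are placeholders; the pipeline passes additional, configurable parameters):

# Truvari: compare a test SV VCF against the truth set
truvari bench -b truth.sv.vcf.gz -c test.sv.vcf.gz \
   --includebed highconf.bed -o truvari_out

# SVanalyzer: sequence-resolved SV benchmarking
svanalyzer benchmark --ref reference.fa --test test.sv.vcf.gz \
   --truth truth.sv.vcf.gz --prefix svanalyzer_out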

Available methods for CNVs:

  1. Germline and somatic variant benchmarking using Wittyer (witty.er)

Available methods for SNVs and INDELs:

  1. Germline variant benchmarking using RTG-tools (rtg vcfeval)
  2. Germline variant benchmarking using Happy tools (hap.py)
  3. Somatic variant benchmarking using Sompy (som.py)
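For germline small variants, the RTG and hap.py comparisons roughly correspond to the following (the SDF directory, reference and BED file are placeholders):

# RTG vcfeval requires the reference in SDF format
rtg format -o GRCh38_sdf reference.fa
rtg vcfeval -b truth.vcf.gz -c test.vcf.gz -t GRCh38_sdf \
   -e highconf.bed -o rtg_out

# hap.py: truth first, then the query; -f supplies the confident-call regions
hap.py truth.vcf.gz test.vcf.gz -r reference.fa -f highconf.bed -o happy_out/prefix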

Comparison of benchmarking results per TP, FP and FN files

Comparing benchmarking results makes it possible to identify TPs, FPs and FNs that are unique to, or shared between, callers.

  1. Merging TP, FP and FN results for happy, rtgtools and sompy (bcftools merge)
  2. Merging TP, FP and FN results for Truvari and SVanalyzer (SURVIVOR merge)
  3. Conversion of VCF files to CSV to infer common and unique variants per caller (python script)
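As a sketch, the merging steps above might look like this (the SURVIVOR merge parameters shown here, such as the maximum breakpoint distance, minimum supporting callers, type/strand agreement and minimum SV size, are placeholders rather than the pipeline's defaults):

# merge small-variant TP/FP/FN VCFs from different callers
bcftools merge --force-samples tp_caller1.vcf.gz tp_caller2.vcf.gz \
   -Oz -o tp_merged.vcf.gz

# merge SV results listed in a text file (one VCF path per line)
SURVIVOR merge sv_vcf_list.txt 1000 1 1 1 0 30 sv_merged.vcf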

Reporting of benchmark results

Generation of a comprehensive report that consolidates all benchmarking results.

  1. Merging summary statistics per benchmarking tool (python script)
  2. Plotting benchmark metrics per benchmarking tool (R script)
  3. Creation of a visual HTML report for NCBENCH integration (datavzrd)

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv:

id,test_vcf,caller,vartype
test1,test1.vcf.gz,delly,sv
test2,test2.vcf,gatk,small
test3,test3.vcf.gz,cnvkit,cnv

Each row represents a test (query) VCF file. For each VCF file, the variant calling method (caller) and variant type (vartype) must be defined.

The user has to define or provide the truth VCF in config files. VCF files from the Genome in a Bottle and SEQC2 studies are readily available and can be used for benchmarking.

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Now, you can run the pipeline using:

nextflow run nf-core/variantbenchmarking \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR> \
   --genome GRCh37 \
   --sample HG002 \
   --analysis germline

Warning

Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.
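For example, the flags from the command above could be moved into a params file (the file name params.yaml and the values are placeholders for your own setup):

cat > params.yaml << 'EOF'
input: "samplesheet.csv"
outdir: "results"
genome: "GRCh37"
sample: "HG002"
analysis: "germline"
EOF

nextflow run nf-core/variantbenchmarking -profile docker -params-file params.yaml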

Pipeline output

To see the results of an example test run with a full size dataset refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

In addition to the inferred and compared statistics, this pipeline outputs benchmarking results per method.

Credits

nf-core/variantbenchmarking was originally written by Kübra Narcı (@kubranarci) as part of benchmarking studies in the German Human Genome-Phenome Archive (GHGA) project.

We thank the following people for their extensive assistance in the development of this pipeline:

Acknowledgements

GHGA

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #variantbenchmarking channel (you can join with this invite).

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.