This page is a central reference of information and notes for computational biologists and software engineers in the CGA group as we transition from the TCGA era of GDAC to the GDC/GDAN era. As of July 2016, the Genomics Data Commons has replaced the TCGA Data Coordination Center as the repository not only for TCGA data but also for other existing genomics projects (such as TARGET) and for future genomics projects.

Reference Data

  1. For GDAN pipelines we will store on-premises reference data in /xchip/cga/reference/GDAN. 

  2. This is analogous to the TCGA reference directory /xchip/cga/reference/tcga but is not TCGA-centric. In addition to holding hg38 reference data, the GDAN reference tree will gather the bits and pieces of "hidden data" that, for expedience, have been squirreled away in less-than-ideal locations.

  3. The first entry in the GDAN reference tree is ./GDAN/miR/miRSeqpreprocess/mature.21.fa.gz, which is used in the miRSeq preprocessor.

  4. Ideally the reference directory will be migrated to a cloud bucket and referenced in cloud-based analysis pipelines, but that will take time.
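To keep pipelines from hard-coding the on-premises root, path resolution can be centralized in one helper. The sketch below is illustrative only: the function name, the GDAN_REFERENCE_ROOT environment variable, and the override mechanism are assumptions, not an agreed-upon API. The miRSeq path is the one listed above.

```python
import os

# Default on-premises root for GDAN reference data (from this page).
DEFAULT_GDAN_REFERENCE_ROOT = "/xchip/cga/reference/GDAN"

def gdan_reference_path(relative_path, root=None):
    """Resolve a reference file under the GDAN tree.

    relative_path -- path relative to the reference root,
                     e.g. "miR/miRSeqpreprocess/mature.21.fa.gz"
    root          -- optional override; otherwise the (hypothetical)
                     GDAN_REFERENCE_ROOT environment variable is consulted,
                     falling back to the on-premises default.
    """
    root = root or os.environ.get("GDAN_REFERENCE_ROOT",
                                  DEFAULT_GDAN_REFERENCE_ROOT)
    return os.path.join(root, relative_path)

# Example: the miRSeq preprocessor's mature miRNA reference.
mirbase_fasta = gdan_reference_path("miR/miRSeqpreprocess/mature.21.fa.gz")
```

An override hook like this would let the same pipeline code run against a cloud-bucket mount later without edits, which is why the eventual migration mentioned above argues for indirection now.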

Pipeline Construction and Algorithm Coding Guidelines

We have a window of opportunity between the end of TCGA and the beginning of GDAN to return to best practices for pipeline and software development.

  1. Review this paper, which was the basis for the Software Carpentry workshop series of best coding practices for scientists.
  2. BIG Takeaway: by adopting 10% of the daily habits of experienced SWEs, scientists and their collaborators can become MUCH more productive and confident in their results.
  3. To that end, we are going to resume SWE/CB pair-programming efforts.
  4. After a pipeline is installed to gdc_devworkspace, configured, and successfully tested:
    1. Schedule a review of the algorithm code (e.g. R, Python, Matlab) with a SWE on staff
  5. More discussion is needed, but other ideas include:
    1. Create templates for R/Python/Matlab, with pre-defined sections (e.g. description of algorithm, description of inputs, description of outputs)
    2. With an eye towards being able to EXTRACT those descriptions programmatically INTO the output reports that are later generated
    3. Every pipeline should write a provenance.txt file describing inputs and outputs:
      1. Input_1_<param_name> = ...
      2. Input_2_<param_name> = ...
      3. ...
      4. Output_1_ = ...
      5. Output_2_ = ...
  6. More to come ...
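One way the template idea in 5.1-5.2 could work: give each script a docstring with labeled sections, and let the report generator pull those sections out with a small parser. This is only a sketch; the section headers ("Algorithm", "Inputs", "Outputs") and function name are hypothetical, not an agreed format.

```python
import re

# Hypothetical module-level docstring following the proposed template.
EXAMPLE_DOCSTRING = """\
Algorithm: Computes per-sample mutation rates from a MAF file.
Inputs: maf_path - tab-delimited MAF; coverage_path - per-sample coverage.
Outputs: rates.txt - one mutation rate per sample.
"""

def extract_sections(docstring, headers=("Algorithm", "Inputs", "Outputs")):
    """Return a dict mapping each labeled template header to its text.

    A report generator could call this on a pipeline module's __doc__
    to pull the descriptions into the output report programmatically.
    """
    sections = {}
    for header in headers:
        match = re.search(r"^%s:\s*(.+)$" % header, docstring, re.MULTILINE)
        if match:
            sections[header] = match.group(1).strip()
    return sections
```

Because the same headers would appear in the R and Matlab templates as comments, one extractor per language (or a single regex-based one) could feed a common report format.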
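The provenance.txt scheme in 5.3 can be instantiated directly. The writer below follows the numbered Input_N_<param_name> / Output_N_ key pattern shown above; the function name and signature are illustrative, not a settled interface.

```python
def write_provenance(path, inputs, outputs):
    """Write a provenance.txt file in the numbered key = value scheme above.

    inputs  -- list of (param_name, value) pairs, in order
    outputs -- list of output values (file paths, etc.), in order
    """
    with open(path, "w") as f:
        for i, (name, value) in enumerate(inputs, start=1):
            f.write("Input_%d_%s = %s\n" % (i, name, value))
        for i, value in enumerate(outputs, start=1):
            f.write("Output_%d_ = %s\n" % (i, value))
```

For example, `write_provenance("provenance.txt", [("maf_path", "/tmp/in.maf")], ["/tmp/out.txt"])` produces `Input_1_maf_path = /tmp/in.maf` and `Output_1_ = /tmp/out.txt`. A flat key = value layout keeps the file trivially parseable by downstream report tooling.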