The problem
While PacBio’s native SMRTLink tool allows looking at metrics for one specific run at a time, there is no easy way to query metrics over time, across multiple runs, and consume them as a simple tabular dataset. The latter is very important, since it would enable powerful analytics. Moreover, other systems and tools (the datareview page, secondary re-analyses, etc.) can also benefit: once the data is properly structured in a “datamart”, it can be used by anybody.
Challenges
We know all these metrics are scattered across XML/JSON files in a complicated folder structure on our on-prem Linux system. The so-called “raw” metrics are relatively easy to link to a (run, cellWell) pair.
The “cromwell” metrics, however, are particularly painful, since the only way to link them back to a (run, cellWell) is to track down the symbolic link in the relevant “inputs” folder, riddled with random UUIDs all along the way. This requires a fair amount of Linux voodoo magic, which significantly slows down new development.
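The symlink-chasing step can be sketched roughly as follows. This is a hypothetical illustration, not the ETL's actual code, and the assumed target layout (.../<run_name>/<cell_well>/metadata/<file>) is an assumption:

```python
from pathlib import Path

def resolve_run_and_cell(symlink: Path) -> tuple[str, str]:
    """Follow a Cromwell 'inputs' symlink to its real target and pull
    (run, cellWell) out of the target path.

    Hypothetically assumes the real path ends in:
      .../<run_name>/<cell_well>/metadata/<file>
    """
    real = symlink.resolve()      # follows the whole symlink chain
    parts = real.parts
    run_name, cell_well = parts[-4], parts[-3]
    return run_name, cell_well
```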
Mission statement
It would be great if all teams (Analytics, Lab, DSP, Mercury, etc.) could query the metrics from our PACBIO datamart in a streamlined way. Software engineers would merely use SQL/JSON to extract the fields they need in a very declarative way.
What will it take - the “mapping” process
For all this to work, we need to go through the “mapping” process - figuring out where all the interesting SMRTLink fields are stored in the file system. It usually goes like this: the Lab (our domain experts) says “hey, we are interested in SMRTLink field XYZ, screenshot attached, and we believe it’s stored in file …XYZ.json”. Then we (the software engineers) implement a tiny piece of SQL/JSON extraction code, and all teams can use it. So, for example, fields which DSP has introduced become available to the other teams, and vice versa.
It is very important that files digested by the “metrics-flattener ETL” can easily be compared to the SMRTLink screens side by side. That’s why the “Flattened metrics viewer“ was created.
PacBio metrics acceptable ranges - this is the document driving the mapping effort.
What is this “domain” field all about?
“domain” is a synthetic field derived from the location of the original file being captured. It is basically the file path with the random UUIDs masked out. As a result, all records for a given metrics type can easily be filtered/grouped in SQL.
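The masking idea can be sketched like this. A hypothetical illustration: the root prefix and the exact regex are assumptions, not the ETL's actual implementation:

```python
import re
from pathlib import PurePosixPath

# Standard 8-4-4-4-12 hex UUIDs, as they appear in Cromwell paths.
UUID_RE = re.compile(
    r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}',
    re.IGNORECASE,
)

def derive_domain(path: str, root: str) -> str:
    """Derive the synthetic 'domain': strip the (hypothetical) root
    prefix and mask every UUID path component with '*'."""
    rel = str(PurePosixPath(path).relative_to(root))
    return UUID_RE.sub('*', rel)
```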
For example:
SELECT a.run_name, a.cell_well, c.*
FROM pacbio a,
     json_table(DATA, '$[*]' COLUMNS(
       "DNABarcode" PATH '$.DNABarcode',
       "BioSample" PATH '$.BioSample',
       "HiFi Reads" PATH '$.attributes[*]?(@.id=="ccs2.number_of_ccs_reads").value',
       "HiFi Yield (bp)" NUMBER PATH '$.attributes[*]?(@.id=="ccs2.total_number_of_ccs_bases").value',
       "HiFi Read Length (mean, bp)" NUMBER PATH '$.attributes[*]?(@.id=="ccs2.mean_ccs_readlength").value',
       "HiFi Read Quality (median) accuracy" PATH '$.attributes[*]?(@.id=="ccs2.median_accuracy").value',
       "HiFi Read Quality (median)" NUMBER PATH '$.attributes[*]?(@.id=="ccs2.median_qv").value'
     )) AS c
WHERE site_id=3
  AND a.domain='CROMWELL/sl_dataset_reports/*/call-import_dataset_reports/execution/ccs.report.json*'
  AND a.run_name='r64386e_20220523_180557'
  AND a.cell_well='4_D01'
Metrics stored in “JSON-tables”
A bunch of interesting metrics (for example ccs2.hifi_length_summary.read_length) are stored in JSON “tables”. Unfortunately, they are organized in a “column-based” fashion, making it nearly impossible to extract the metrics from the DB later on. Therefore, new synthetic twin tables are created in which the metrics are organized in a “row-based” fashion (in other words, the tables are “transposed”).
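The transposition can be sketched like this. A hypothetical illustration: the input shape ({'columns': [{'id': ..., 'values': [...]}, ...]}) is an assumption about the report-table JSON, and the 'rowid' field mimics the synthetic twin tables:

```python
def transpose_table(table: dict) -> list[dict]:
    """Turn a column-based report table into row-based records.

    Assumed (hypothetical) input shape:
      {'columns': [{'id': <metric id>, 'values': [v0, v1, ...]}, ...]}
    Output: one dict per row, keyed by column id, plus a 'rowid'.
    """
    cols = table["columns"]
    n_rows = len(cols[0]["values"]) if cols else 0
    rows = []
    for i in range(n_rows):
        row = {"rowid": i}
        for col in cols:
            row[col["id"]] = col["values"][i]
        rows.append(row)
    return rows
```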
As a result, straightforward JSON extraction from the DB becomes possible:
SELECT a.run_name, a.cell_well, "etl.dataset", c."rowid",
       REPLACE(c."Read Length (bp)", CHR(191), '>=') "Read Length (bp)", -- '>=' UTF8 e2 89 a5
       "Reads", "Reads (%)", "Yield (bp)", "Yield (%)"
FROM pacbio a,
     json_table(DATA, '$[*]' COLUMNS(
       "etl.dataset" PATH '$."etl.dataset"',
       NESTED PATH '$."etl.ccs2.hifi_length_summary"[*]' COLUMNS(
         "rowid" PATH '$.rowid',
         "Read Length (bp)" PATH '$."ccs2.hifi_length_summary.read_length"',
         "Reads" NUMBER PATH '$."ccs2.hifi_length_summary.n_reads"',
         "Reads (%)" NUMBER PATH '$."ccs2.hifi_length_summary.reads_pct"',
         "Yield (bp)" NUMBER PATH '$."ccs2.hifi_length_summary.yield"',
         "Yield (%)" NUMBER PATH '$."ccs2.hifi_length_summary.yield_pct"'
       )
     )) AS c
WHERE site_id=3
  AND a.domain='CROMWELL/sl_dataset_reports/*/call-import_dataset_reports/execution/ccs.report.json*'
  AND a.run_name='r64020e_20220519_191246'
  AND a.cell_well='1_B01'
Metrics stored in “attributes“ JSON-array
Other metrics are stored in an “attributes” JSON-array (on the left side). A new synthetic “etl.attributes“ JSON-object is added to allow more natural JSON extraction from the DB.
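The re-keying can be sketched like this. A hypothetical illustration: the exact attribute shape ({'id': ..., 'value': ...}) is an assumption, and the real ETL may store things differently:

```python
def add_etl_attributes(report: dict) -> dict:
    """Index the 'attributes' JSON-array by each attribute's 'id',
    producing a synthetic 'etl.attributes' object so that fields can be
    addressed by name instead of scanned positionally."""
    report["etl.attributes"] = {
        attr["id"]: attr for attr in report.get("attributes", [])
    }
    return report
```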
SELECT a.run_name, a.cell_well, "etl.dataset", "HiFi Reads", "HiFi Yield (bp)", "HiFi Read Length (mean, bp)"
FROM pacbio a,
     json_table(DATA, '$[*]' COLUMNS(
       "HiFi Reads" NUMBER PATH '$."etl.attributes"."ccs2.number_of_ccs_reads".value',
       "HiFi Yield (bp)" NUMBER PATH '$."etl.attributes"."ccs2.total_number_of_ccs_bases".value',
       "HiFi Read Length (mean, bp)" NUMBER PATH '$."etl.attributes"."ccs2.mean_ccs_readlength".value',
       "etl.dataset" PATH '$."etl.dataset"'
     )) AS c
WHERE site_id=3
  AND a.domain='CROMWELL/sl_dataset_reports/*/call-import_dataset_reports/execution/ccs.report.json*'
  AND a.run_name='r64020e_20220519_191246'
  AND a.cell_well='1_B01'
The “superJSON” tool
Imagine you have a SMRTLink screen in front of you saying “Longest Subread N50: 21250” for a given run/cell. How can you find out which metrics file this number comes from?
Open the “superJSON” tool (all files are merged in there), expand all nodes, and search for this exact number: https://analytics.broadinstitute.org/pacbioMetrics/3/r64386e_20220523_180557/4_D01/superjson
SELECT a.run_name, a.cell_well, a.movie, c."raw_data_report.insert_n50"
FROM pacbio a,
     json_table(DATA, '$[*]' COLUMNS(
       "raw_data_report.insert_n50" NUMBER PATH '$."etl.attributes"."raw_data_report.insert_n50".value'
     )) AS c
WHERE site_id=3
  AND a.domain='CROMWELL/sl_dataset_reports/*/call-import_dataset_reports/execution/raw_data.report.json'
  AND a.run_name='r64386e_20220523_180557'
  AND a.cell_well='4_D01'
“per-barcode” support
“per-barcode” metrics are supported by converting multiple “consensusreadset.xml“ files into JSONs and then merging these into a single “synthetic JSON-array“. Such records can be recognized by a trailing “*” at the end of the “domain” field.
For a given cell and domain, if the ETL comes across multiple files, it will naturally merge them into a JSON-array.
However, this logic is not sufficient when only one barcode is registered per cell - therefore a list of exemption file types (ccs.report.json) is kept, instructing the ETL to always merge these into a JSON-array regardless of the number of files.
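The merge rule can be sketched like this. A hypothetical illustration: the function name and the exemption-list contents are assumptions based on the description above:

```python
import json
from pathlib import Path

# File types that must ALWAYS become a JSON-array, even when only one
# barcode (hence one file) exists for the cell. Hypothetical contents.
ALWAYS_ARRAY = {"ccs.report.json"}

def merge_metric_files(paths: list[Path], file_type: str):
    """Merge all files captured for one (cell, domain) into one JSON
    value: a plain object for a lone non-exempt file, otherwise a
    JSON-array (multiple files, or the type is on the exemption list)."""
    docs = [json.loads(p.read_text()) for p in paths]
    if len(docs) == 1 and file_type not in ALWAYS_ARRAY:
        return docs[0]
    return docs
```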
Technical caveats
This framework is (unfortunately and inevitably) tightly coupled to PacBio’s internal file structure. So, the next time PacBio changes their SMRTLink version, this solution may have to be fixed accordingly.
All metrics stored in the PACBIO datamart are in JSON format; metrics in XML files are converted into JSON.
For each digested metrics file, a special “domain” field is generated - it allows similar metrics to be grouped and queried via SQL later on.
The examples shown are for the v11 installation on “sodium”. Once “skywalker” is operational, the switch-over should be relatively easy.
The ANALYTICS.PACBIO datamart (along with the relevant views) is located in this Oracle instance:
db.analytics.url="jdbc:oracle:thin:@//seqprod.broadinstitute.org:1521/seqprod.broadinstitute.org"
username: REPORTING
The "ANALYTICS.PACBIO_STAR" view demonstrates how to merge multiple files (ccs_report, loading, etc.) into a flat, per-(run, cell_well) datasource. It is based on SMRTLink v10, hydrogen data (site_id=1), but the techniques used are 100% legit.
Surgically extract fields from metrics-JSON via Oracle JSON
The progress of the Sodium PacBio flattened-metrics ETL can be checked on the ETL dashboard.
Rollback protection is implemented, so an ETL run is cancelled if previously-seen files have been removed.