The Dazzler DB: Organizing an Assembly Pipeline

Assembling DNA sequence data is not a simple problem. Conceptually, it can be thought of as proceeding in a number of sequential steps, performed by what we call phases arranged in a pipeline, each of which accomplishes one aspect of the overall process. For example, in the “OLC” assembly paradigm, the first phase (O) finds all the overlaps between reads, the second phase (L) uses the overlaps to construct a layout of the reads, and the final phase (C) computes a consensus sequence over the arrangement of reads. One can easily imagine additional phases, such as a “scrubber” that detects chimeric reads, a “cleaner” that corrects read sequence, and so on. Indeed, the header banner for this blog shows a 7-phase pipeline that roughly realizes our current dazzler prototype. To facilitate the multiple and evolving phases of the dazzler assembler, we have chosen to organize all the read data into what is effectively a “database” of the reads and their meta-information. The design goals for this database are as follows:

  1. The database stores the source Pacbio read information in such a way that it can recreate the original input data, thus permitting a user to remove the (effectively redundant) source files. This avoids duplicating the same data, once in the source file and once in the database.
  2. The database can be built up incrementally, that is, new sequence data can be added to the database over time.
  3. The database flexibly allows one to store any meta-data desired for reads. This is accomplished with the concept of tracks that implementers can add as they need them.
  4. The data is held in a compressed form equivalent to the .dexta and .dexqv files of the data extraction module. Both the .fasta and .quiva information for each read is held in the database and can be recreated from it. The .quiva information can be added separately and later if desired.
  5. To facilitate job-parallel, cluster operation of the phases of our assembler, the database has a concept of a current partitioning in which all the reads that are over a given length, and optionally unique to a well, are divided into blocks containing roughly a given number of bases, except possibly the last block, which may be short. Many programs can be run on blocks or pairs of blocks, and each such job is reasonably well balanced because the blocks are all the same size. One must be careful about changing the partition during an assembly, as doing so can void the structural validity of any interim block-based results.

The Dazzler DB module source code is available on Github here. Grabbing the code and uttering “make” should produce 13 programs: fasta2DB, quiva2DB, DB2fasta, DB2quiva, fasta2DAM, DAM2fasta, DBsplit, DBdust, Catrack, DBshow, DBstats, DBrm, and simulator.

One can add the information from a collection of .fasta files to a given DB with fasta2DB, and one can subsequently add the quality streams in the associated .quiva files with quiva2DB. After adding these files to a database one can then remove them, as property 1 above guarantees that they can be recreated from the DB, and the routines DB2fasta and DB2quiva do exactly that. Indeed, here in Dresden we typically extract .fasta and .quiva files with dextract from the Pacbio .bax.h5 source files, immediately add the .fasta and .quiva files to the relevant DB, remove them, and send the .bax.h5 files to our tape backup library. We can do this because we know that we can recreate the .fasta and .quiva files exactly as they were when imported into the database.
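
In a sketch, that routine looks like the following, where the file names are illustrative and dextract’s exact flags are elided (see the extraction module post for its usage):

    dextract ... G.bax.h5        #  produce G.fasta and G.quiva from the Pacbio source
    fasta2DB FOO G.fasta         #  add the reads to the database FOO
    quiva2DB FOO G.quiva         #  add the matching quality streams
    rm G.fasta G.quiva           #  safe: DB2fasta FOO and DB2quiva FOO recreate them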

A Dazzler DB consists of one named, visible file, e.g. FOO.db, and several invisible secondary files encoding various elements of the DB. The secondary files are “invisible” to the UNIX OS in the sense that they begin with a “.” and hence are not listed by “ls” unless one specifies the -a flag. We chose to do this so that when a user lists the contents of a directory they see just a single name, e.g. FOO.db, which is the one used to refer to the DB in commands. The files associated with a database named, say, FOO are as follows:

(a) FOO.db
a text file containing (i) the list of input files added to the database so far, and (ii) how to partition the database into blocks (if the partition parameters have been set).

(b) .FOO.idx
a binary “index” of all the meta-data about each read allowing, for example, one to randomly access a read’s sequence (in the store .FOO.bps). It is 28N + 88 bytes in size where N is the number of reads in the database.

(c) .FOO.bps
a binary compressed “store” of all the DNA sequences. It is M/4 bytes in size where M is the total number of base pairs in the database.

(d) .FOO.qvs
a binary compressed “store” of the 5 Pacbio quality value streams for the reads. Its size is roughly 5M/3 bytes, depending on the compression achieved. This file only exists if .quiva files have been added to the database.

(e) .FOO.<track>.anno, .FOO.<track>.data
a track called <track> containing customized meta-data for each read. For example, the DBdust command annotates low-complexity intervals of reads and records the intervals for each read in the two files .FOO.dust.anno and .FOO.dust.data that together are called the “dust” track. Any kind of information about a read can be recorded, such as micro-satellite intervals, repeat intervals, corrected sequence, etc. Specific tracks will be described as the phases that produce them are created and released.
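
Putting (a) through (e) together: after .quiva files have been added and DBdust has been run, listing the directory of a database FOO with “ls -a” would reveal something like the following (the exact set of track files depends on which phases have been run):

    FOO.db  .FOO.idx  .FOO.bps  .FOO.qvs  .FOO.dust.anno  .FOO.dust.data

As a worked example of the size formulas above, a database of N = 1 million reads totalling M = 10Gbp occupies about 28MB of index (28N + 88 bytes), 2.5GB of sequence store (M/4 bytes), and on the order of 17GB of quality store (roughly 5M/3 bytes).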

If one does not like the convention of the secondary files being invisible, then un-defining the constant HIDE_FILES in DB.h before compiling the library creates commands that do not place a prefixing “.” before secondary file names, e.g. FOO.idx instead of .FOO.idx. One then sees all the files realizing a DB when listing the contents of a directory with ls.
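
For example, one might do the following (a sketch; one can of course simply edit DB.h by hand instead):

    sed -i.bak 's|^#define HIDE_FILES|// &|' DB.h   #  comment out HIDE_FILES
    make                                            #  rebuild the 13 commands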

While a Dazzler DB holds a collection of Pacbio reads, a Dazzler map DB or DAM holds a collection of contigs from a reference genome assembly. This special type of DB has been introduced in order to facilitate the mapping of reads to an assembly and has been given the suffix .dam to distinguish it from an ordinary DB. It is structurally identical to a .db except:

(a) there is no concept of quality values, and hence no .FOO.qvs file.

(b) every .fasta scaffold (a sequence with runs of N’s between contigs, estimating the lengths of the gaps) is broken into its separate contig sequences in the DB, and the header for each scaffold is retained in a new .FOO.hdr file.

(c) the origin and the first- and last-pulse fields in the meta-data records held in .FOO.idx hold instead the contig number and the interval of the contig within its original scaffold sequence.

A map DB can equally well be the argument of any of the commands below that operate on normal DBs. In general, a .dam can be an argument anywhere a .db can, with the exception of routines, or optioned calls to routines, that involve quality values, and of the special routines fasta2DAM and DAM2fasta, which create a DAM and reverse the process, just as the pair fasta2DB and DB2fasta do for a DB. So in general, when we refer to a database we mean either a DB or a DAM.
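
For example, one might build a map DB from a reference assembly and later recover the original scaffolded .fasta file (file names illustrative):

    fasta2DAM REF ref.fasta      #  build REF.dam (and its hidden secondary files)
    DAM2fasta REF                #  recreate ref.fasta, scaffolds and headers intact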

The command DBsplit sets or resets the current partition for a database, which is determined by 3 parameters: (i) the total number of basepairs to place in each block, (ii) the minimum length of reads to include within a block, and (iii) whether to include only the longest read from a given well or all reads from a well (NB: several reads of the same insert in a given well can be produced by the Pacbio instrument). Note that the length and uniqueness parameters effectively select a subset of the reads that contribute to the size of a block. We call this subset the trimmed database. Some commands operate on the entire database, others on the trimmed database, and yet others have an option flag that permits them to operate on either at the user’s discretion. Therefore, one should note carefully to which version of the database a command refers. This is especially important for any command that identifies reads by their index (ordinal position) in the database.
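
As a sketch, assuming the -s (block size in megabases), -x (minimum read length), and -a (keep all reads from a well, not just the longest) options as documented in the command reference:

    DBsplit -x1000 -s400 FOO     #  400Mbp blocks of reads >= 1000bp, best read per well
    DBsplit -a -x1000 -s400 FOO  #  as above, but keeping all the reads from each well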

Once the database has been split into blocks, the commands DBshow, DBstats, and DBdust below, and commands yet to come, such as the local alignment finder daligner, can take a block or blocks as arguments. On the command line this is indicated by supplying the name of the DB followed by a period and then a block number, e.g. FOO.3.db, or simply FOO.3, refers to the 3rd block of DB FOO (assuming of course it has a current partition and said partition has a 3rd block). One should note carefully that a block is a contiguous range of reads such that, once trimmed, it has a given size in base pairs (as set by DBsplit). Thus, like an entire database, a block can be either untrimmed or trimmed, and one again needs to be careful when giving a read index to a command such as DBshow.

The command DBdust runs the symmetric DUST low-complexity filter on every read in a given untrimmed DB or DB block, and it is the first example of a phase that produces a track. It is incremental, in that one can call it repeatedly on a given DB and it will extend the track information to include any new reads added since the last time it was run. Reads are analyzed for low-complexity intervals, and these intervals constitute the information stored in the dust track. These intervals are subsequently “soft”-masked by the local alignment command daligner, i.e. the low-complexity intervals are ignored for the purposes of finding read pairs that might have a significant local alignment between them, but any local alignment found will include the low-complexity intervals.

When DBdust is run on a block, say FOO.3, it is run on the untrimmed version of the block and produces a block track in the files .FOO.3.dust.anno and .FOO.3.dust.data, whereas running DBdust on the entire DB FOO produces .FOO.dust.anno and .FOO.dust.data. Thus one can run an independent job on each block of a DB, and afterwards stitch all the block tracks together with a call to Catrack, e.g. “Catrack FOO dust”. Catrack is a general command that merges any sequence of block tracks into a track for the specified DB. For example, a 50X human dataset would partition into roughly 375 400Mbp blocks, for each of which DBdust takes about 20 seconds on a typical processor. One can either call DBdust on the entire DB and wait 375·20 seconds, roughly 2 hours, or run 375 jobs, one per block, on a compute cluster, waiting 1 or 2 minutes depending on how loaded the cluster is, and then call Catrack on the block tracks, which takes at most another half a minute.
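
Concretely, the block-parallel recipe is as follows (the job-submission command, here qsub, will vary with your cluster):

    #  one DBdust job per block of FOO (blocks FOO.1 .. FOO.375)
    for i in $(seq 1 375)
    do
      echo "DBdust FOO.$i" | qsub
    done
    #  once all the jobs have finished, stitch the block tracks together
    Catrack FOO dust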

Tracks that encode intervals of the underlying reads in the database, of which the “dust” track is an example, are specifically designated interval tracks, and commands like DBshow, DBstats, and daligner can accept and utilize one or more such tracks, typically specified through a -m option on the command line.

DBshow and DBstats are routines that allow one to examine the current contents of a DB or DB block. DBstats gives a summary of the contents of an untrimmed or trimmed DB or DB block, including a histogram of read lengths and histograms of any desired interval tracks. DBshow allows you to display the sequence and/or quality streams of any specific set of reads or read ranges in the DB or DB block, where the output can be in Pacbio fasta or quiva format depending on which option flags are set. There is a flag that controls whether the command applies to the entire DB or the trimmed DB, and one must be careful, as, say, the 5th read of the DB may not be the 5th read of the trimmed DB. A nice trick here is that since the output is in the Pacbio formats, one can create a “mini-DB” of the set of reads output by DBshow by calling fasta2DB and/or quiva2DB on its output. This conveniently allows a developer to test/debug a phase on a small, troublesome subset of reads in the DB.
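
For instance, one might proceed along the following lines (the option mnemonics are illustrative; the command reference gives the exact flags):

    DBstats -mdust FOO           #  summary of FOO with a histogram of the dust track
    DBshow FOO 5-10 >sub.fasta   #  fasta output of reads 5 through 10
    fasta2DB SUB sub.fasta       #  a mini-DB on which to test/debug a phase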

DBrm removes all the files, including all tracks, realizing a given DB. It is safer to use this command than to attempt to remove a DB by calling rm on each hidden file. Lastly, this release includes a simple simulator, simulator, that creates a synthetic Pacbio data set useful for testing purposes.

A precise, detailed, and up-to-date command reference can be found here.

 
