Detecting and Masking Repeats

The daligner was specifically designed to find local alignments (LAs) between reads, as opposed to overlaps, wherein each end of the alignment reaches an end of one or the other read.  This is a necessity because raw PacBio reads can have very low quality or even random segments at any position, and one cannot therefore expect an alignment through these regions.  Finding LAs instead of overlaps allows the Dazzler suite to then analyze pile-ograms in order to detect and address these regions (recall that a pile for a read x is the set of all LAs where x is the A-read, and a pile-ogram is a depiction of all the A-intervals covering x).  Computing LAs also implies that if a read x is from a region of DNA that has a relatively conserved repetitive element in it, then every copy of the element in any read of the shotgun data set will align with the relevant interval of x.  For example, given a 50X (=c) data set and a repeat that occurs 20 (=r) times in the genome, one should expect every occurrence of the repeat in a read to be covered on average by 1,000 (=rc) LA's in its pile.  The pile-ogram below depicts such a repeat stack:

[Figure: a small repeat stack]

What's good about this is that the excessive depth of "stacked" LA's clearly signals the repetitive element, its approximate copy number, and its approximate boundaries.  What's bad about this is that the daligner can spend 95% or more of its time finding all the local alignments in these repeat stacks.  This observation raises two questions: (a) is it worth finding them all, and (b) if not, how can we avoid doing so?

The answer to the first question is "no" (i.e., it is not worth finding them all) and our argument is as follows.  First, recall that daligner can accept any number of interval tracks that it uses to soft-mask the reads.  That is, it will only seed/seek alignments in regions that are not masked, but will extend an alignment through a masked region if the two sequences involved continue to be similar.  Next, observe that if a repetitive element is flanked by unique sequence and is short enough that with high probability one or more spanning reads cover the repeat and at least, say, 1Kbp of the unique flank on either side, then there is no consequence to soft-masking this repeat, as the spanning reads will provide a coherent assembly across it, shown as the left (small) scenario in the figure below.  For Pacbio reads this is generally true of any interspersed repeat of 10Kb or less.  If a repeat is compound or so large that there are no spanning reads for it (as shown in the right (large) scenario), then all current assemblers frankly fail to assemble the region involving it, and so in some sense no harm is done by soft-masking the long repeats too.  We therefore think that soft-masking the repetitive parts of reads is a good initial strategy for the downstream assembly problem: most of the genome will come together with long reads, the "overlap"/daligner stage of assembly will be 10 to 20 times faster, and the big repeats can still be resolved in a subsequent post-process by analyzing the much smaller number of reads that were completely masked as repetitive.

[Figure: effect of soft-masking repeats]

So on to the second question: how does one efficiently build the needed repeat masks?  Consider taking the hypothetical 50X data set above and comparing every 1X block of this data set against itself.  First, observe that doing so is 2% of all the comparisons that would normally be performed by daligner.  Second, observe that unique parts of a read will have on average about 1 LA covering them, whereas a segment from a repeat that occurs, say, 10 times in the genome will already have 10 LA's covering it and will appear as a repeat stack in a read's pile-ogram.  The probability of having 10 or more LA's covering a unique segment of a read is exceedingly low, so simply specifying a threshold, such as 10, above which a region is labelled repetitive is an effective strategy for identifying high-copy-number repetitive elements.  This mask can then be used by the daligner when performing the remaining 98% of its comparisons.  The resulting suppression of purely repeat-induced LAs will speed things up 10-20 times at almost no degradation in the performance of a downstream assembler like Falcon or the Celera Assembler, as argued earlier.
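Before committing to a threshold, one can sanity-check how deep the piles produced by the 1X-versus-1X comparison actually get, simply by counting the LAs in each read's pile.  The sketch below assumes the one-line-per-alignment overview output of LAshow (a DALIGNER command described later in this compilation) in which the first column is the A-read index; the exact columns may differ, so treat it purely as an illustration:

# compare block 1 of D against itself and tabulate the largest piles
daligner D.1 D.1
LAsort D.1.D.1.*.las && LAmerge D1.self D.1.D.1.*.S.las
LAshow D D1.self.las | grep '\[' |
    awk '{ gsub(",","",$1); depth[$1]++ } END { for (a in depth) print depth[a], a }' |
    sort -nr | head    # reads whose piles are deepest, i.e. likely repetitive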

For a very large data set one may want to repeat this process at different coverage levels and with different thresholds in order to achieve greater efficiency.  For example, we have a 30X data set of a 30Gbp genome that requires dividing the Dazzler DB into 3,600 250Mbp blocks.  We first run each block against itself (.008X vs .008X) and create a mask .rep1 with a threshold of 20 that detects only super-repetitive elements.  Next we compare every group of 10 consecutive blocks (.08X) against themselves, soft-masking with .rep1, and create a mask .rep10 with a threshold of 15 to detect fairly abundant repetitive elements.  Then we compare every consecutive group of 100 blocks (.8X) against themselves, soft-masking with .rep1 and .rep10, and create a mask .rep100 with threshold 10 that is used to further soft-mask the final all-against-all of the 3,600 blocks in the entire data set.  The figures accompanying this paragraph illustrate a 1×1 and a 5×5 schema for a hypothetical set of blocks.

To implement this strategy we have created the programs HPC.REPmask and REPmask as part of a new DAMASKER module.  “HPC.REPmask -g#1 -c#2” produces a job-based UNIX script suitable for an HPC cluster that:

  1. compares each consecutive #1 blocks against themselves,
  2. sorts and merges these into .las files for each block, and then
  3. calls “REPmask -c#2 -mrep#1” on each block and associated .las file to produce a block interval track or mask, with the name .rep#1, of any read region covered #2 or more times.

As a running example consider a hypothetical Dazzler database D that is divided into 150 blocks and contains an estimated 50X coverage of the target genome.  Then 3 blocks of data contain 1X of the genome, and to realize a 1X versus 1X repeat mask, we need to compare every consecutive group of 3 blocks against each other and then produce a mask of all read regions covered, say, 10 or more deep.  Calling "HPC.REPmask -g3 -c10 D" will produce a shell script to do this, where the script has the following form:

# Daligner jobs (150)
daligner D.1 D.1
daligner D.2 D.1 D.2
daligner D.3 D.1 D.2 D.3
daligner D.4 D.4
daligner D.5 D.4 D.5
daligner D.6 D.4 D.5 D.6

daligner D.148 D.148
daligner D.149 D.148 D.149
daligner D.150 D.148 D.149 D.150
# Initial sort & merge jobs (450)
LAsort D.1.D.1.*.las && LAmerge L1.1.1 D.1.D.1.*.S.las
LAsort D.1.D.2.*.las && LAmerge L1.1.2 D.1.D.2.*.S.las
LAsort D.1.D.3.*.las && LAmerge L1.1.3 D.1.D.3.*.S.las
LAsort D.2.D.1.*.las && LAmerge L1.2.1 D.2.D.1.*.S.las
LAsort D.2.D.2.*.las && LAmerge L1.2.2 D.2.D.2.*.S.las
LAsort D.2.D.3.*.las && LAmerge L1.2.3 D.2.D.3.*.S.las

LAsort D.150.D.150.*.las && LAmerge L1.150.150 D.150.D.150.*.S.las
# Level 1 merge jobs (150)
LAmerge D.R3.1 L1.1.1 L1.1.2 L1.1.3
LAmerge D.R3.2 L1.2.1 L1.2.2 L1.2.3
LAmerge D.R3.3 L1.3.1 L1.3.2 L1.3.3
LAmerge D.R3.4 L1.4.1 L1.4.2 L1.4.3
LAmerge D.R3.5 L1.5.1 L1.5.2 L1.5.3
LAmerge D.R3.6 L1.6.1 L1.6.2 L1.6.3

# REPmask jobs (50)
REPmask -c10 -mrep3 D D.R3.1 D.R3.2 D.R3.3
REPmask -c10 -mrep3 D D.R3.4 D.R3.5 D.R3.6
REPmask -c10 -mrep3 D D.R3.7 D.R3.8 D.R3.9

REPmask -c10 -mrep3 D D.R3.148 D.R3.149 D.R3.150
# Produce final mask for entirety of D
Catrack D rep3
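
Once the rep3 track has been stitched together by Catrack, it can be passed to daligner like any other interval track.  A minimal sketch of kicking off the remaining all-against-all with this mask (in practice one would also supply the dust and tandem masks discussed elsewhere in this post):

# the concatenated track lives in two hidden files alongside the DB
ls .D.rep3.anno .D.rep3.data
# generate the soft-masked all-against-all script for all 150 blocks
HPC.daligner -mrep3 D >overlap.script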

So the above would pretty much be the end of this post were it not for one other kind of repetitive element that haunts single-molecule data sets sampling the entire genome, including the centromeres and telomeres: tandem satellites.  Back in the days of Sanger sequencing, such repeats were rarely seen as they were unclonable, but with single-molecule reads we see and sample all of the genome, including these regions.  Tandem repeats are even nastier than interspersed repeats at creating deep stacks in a read's pile-ogram because they align with themselves at every offset of the tandem element as well as with each other!  So our masking strategy actually starts by detecting and building a mask for tandem repeats in the data set.

A tandem repeat can be detected in a read by comparing the read against itself, with the slight augmentation of not allowing the underlying alignment algorithm to use the main diagonal of the edit matrix/graph.  When a tandem is present, it induces off-diagonal alignments at intervals equal to the tandem element length.  That is, if the tandem is x^n (n copies of an element x), then for every k < n the first k copies of x align with the last k copies, creating a ladder of n-1 alignments as seen in the dot plot accompanying this paragraph.  Comparing a read against itself is already possible in our framework with the -I option to daligner.  To detect tandems, we built a special version of daligner, called datander, that, given a DB block, compares each read in the block only against itself and outputs these alignments.  We further developed the program TANmask to use these self-alignments to detect tandem regions to mask: namely, when the two aligned segments of a self-LA overlap, the union of the two segments marks a tandem region.  Lastly, the program HPC.TANmask is a script generator that calls datander, sorts and merges the resulting self-LAs, and finally calls TANmask on the results.  For example, "HPC.TANmask D" for the 50X database D introduced earlier produces a script of the form:

# Datander jobs (38)
datander D.1 D.2 D.3 D.4
datander D.5 D.6 D.7 D.8

datander D.149 D.150
# Sort & merge jobs (150)
LAsort D.1.T*.las && LAmerge D.T.1 D.1.T*.S.las
LAsort D.2.T*.las && LAmerge D.T.2 D.2.T*.S.las

LAsort D.150.T*.las && LAmerge D.T.150 D.150.T*.S.las
# TANmask jobs (38)
TANmask D D.T.1 D.T.2 D.T.3 D.T.4
TANmask D D.T.5 D.T.6 D.T.7 D.T.8

TANmask D D.T.149 D.T.150
# Produce final mask for entirety of D
Catrack D tan
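
Before building the interspersed-repeat masks it can be reassuring to see what the tandem mask actually covers.  A small sketch, assuming the -m option described in the Dazzler DB post below also lets DBstats and DBshow report track intervals (the read range 100-110 is just an illustration):

DBstats -mtan D        # summary including a histogram of the tan intervals
DBshow -mtan D 100-110 # display a few reads together with their masked intervals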

One should note carefully that tandem repeat masking should be performed before the interspersed repeat masking discussed at the beginning.  To conclude, we close with the example of the super-large 30X database of a 30Gbp genome consisting of 3,600 blocks.  Assuming the DB is called, say, Big, all the scripts for performing the repeat-masked "overlap" phase of assembly would be produced by invoking the commands:

HPC.TANmask Big
HPC.REPmask -g1 -c20 -mtan Big
HPC.REPmask -g10 -c15 -mtan -mrep1 Big
HPC.REPmask -g100 -c10 -mtan -mrep1 -mrep10 Big
HPC.daligner -mtan -mrep1 -mrep10 -mrep100 Big
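
Each of these commands emits a script on standard output, and the jobs of each script must be finished before the jobs of the next script are started, since every later stage soft-masks with the tracks produced by the earlier ones.  For a DB of this scale the batches are submitted to a cluster, but as a sketch of the overall flow on a small test database (hypothetically named Test, and assuming it has enough blocks for the -g levels chosen), one could simply run:

HPC.TANmask Test | csh
HPC.REPmask -g1 -c20 -mtan Test | csh
HPC.REPmask -g10 -c15 -mtan -mrep1 Test | csh
HPC.REPmask -g100 -c10 -mtan -mrep1 -mrep10 Test | csh
HPC.daligner -mtan -mrep1 -mrep10 -mrep100 Test | csh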

The scripts produced are quite huge for a task of this scale and include a variety of integrity checks and cleanup commands not mentioned here.  A newcomer would best be served by gaining experience on a small data set with small blocks, and only proceeding to large production runs with these “HPC.<x>” scripts when confident of being able to manage such a pipeline 😄  In a subsequent post on performance/results I will present some timing studies of the impact of repeat masking on the overlap phase of assembly.


DALIGNER: Fast and sensitive detection of all pairwise local alignments

The first phase of the Dazzler assembler compares every read in the trimmed Dazzler database (DB) against every other read in the trimmed DB, seeking significant local alignments between them. In an old-fashioned OLC assembler, one would look, more restrictively, for overlaps between reads. But one can use local alignments to detect artifacts such as chimeric reads and unclipped adapter sequence by looking for consistent alignment terminations within a read, and one can also detect and annotate repetitive elements of the genome covered within a read by looking for “pile-ups” of alignments. Removing the artifacts and knowing the repetitive regions covered by a read is a huge advantage prior to assembling the reads. Indeed, the Dazzler assembler employs several downstream phases specifically to remove artifacts, correct reads, and annotate repetitive elements prior to building a string graph, which is the modern analog of the layout phase of an OLC architecture.

You can get the source code for the DALIGNER module on Github here. Grabbing the code and uttering “make” should produce 9 programs: daligner, LAsort, LAmerge, LAshow, LAcat, LAsplit, LAcheck, HPCdaligner, and HPCmapper.

Ideally one would like to call the alignment finder, daligner, with a DB, e.g. “daligner MyDB“, and after some computing it would output a file encoding all the found local alignments. Unfortunately, for big projects the amount of memory and CPU time required makes this impossible. For example, on the Pacbio 54X human data set, daligner took 15,600 CPU hours on our modest 480-core HPC cluster, and as a monolithic run would have required at least 12Tb of memory! I could start apologizing, but consider that the only other program that can compare reads at a 12-15% error rate, BLASR, took 404,000 CPU hours, and only delivered overlaps. Indeed, a primary reason for releasing daligner now, and not waiting to release a complete end-to-end assembler, is precisely to provide the bioinformatics community with a feasible way to compute alignments for Pacbio data at this scale. While we’ve embedded daligner within our database framework, it is easy to grab daligner‘s output in an ASCII format and then use that in an HGAP pipeline (which is the only end-to-end assembler currently publicly available).

So given the scale of the required computation we have to “bite the bullet” and take a job-parallel, HPC approach to effect an all-against-all comparison of the trimmed DB. As a running example, let’s suppose you’ve set up a DB that is partitioned and optionally has a “dust” track (see this blog post) with a command sequence like:

fasta2DB DB MyData1.fasta MyData2.fasta ...
DBdust DB
DBsplit -x1000 DB
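
Before generating thousands of comparison jobs it is worth a quick sanity check that the partition is what you expect.  A small sketch using only DBstats (described in the Dazzler DB post below); the block number is just an example:

DBstats DB     # read-length histogram and totals for the whole DB
DBstats DB.1   # the same summary restricted to block 1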

Suppose for simplicity that DB splits into 3 blocks. Then calling HPCdaligner on the DB will produce a UNIX shell script that contains all the commands needed to effect the all against all comparison. For example, “HPCdaligner DB” produces the script:

# Daligner jobs (3)
daligner DB.1 DB.1
daligner DB.2 DB.1 DB.2
daligner DB.3 DB.1 DB.2 DB.3
# Initial sort jobs (9)
LAsort DB.1.DB.1.*.las && LAmerge L1.1.1 DB.1.DB.1.*.S.las && rm DB.1.DB.1.*.las
LAsort DB.1.DB.2.*.las && LAmerge L1.1.2 DB.1.DB.2.*.S.las && rm DB.1.DB.2.*.las
LAsort DB.1.DB.3.*.las && LAmerge L1.1.3 DB.1.DB.3.*.S.las && rm DB.1.DB.3.*.las
LAsort DB.2.DB.1.*.las && LAmerge L1.2.1 DB.2.DB.1.*.S.las && rm DB.2.DB.1.*.las

# Level 1 merge jobs (3)
LAmerge DB.1 L1.1.*.las && rm L1.1.*.las
LAmerge DB.2 L1.2.*.las && rm L1.2.*.las
LAmerge DB.3 L1.3.*.las && rm L1.3.*.las

Each non-comment line of the script should be submitted to your cluster as a job, and all the jobs following one of the 3 comment lines can be run in parallel, but these 3 batches of job groups need to be run in sequence. That is, the 3 “Daligner” jobs can be run in parallel as a group, and once they have all completed, one can then run the 9 “Initial sort” jobs in parallel as a group, and finally the “Level 1 merge” jobs as a group. And of course, for a small script like this one, you could just invoke the shell on the script and let the jobs run sequentially on your desktop, e.g. “HPCdaligner DB | csh“.
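If you don’t have a job scheduler at hand, the batch structure of the script is easy to exploit with ordinary UNIX tools: split the script at its comment lines and run the jobs of each batch in parallel before moving on to the next. The sketch below is my own scaffolding, not part of the Dazzler suite, and assumes GNU csplit and xargs:

HPCdaligner DB >daligner.script
# one file per job batch; every batch begins at a '#' comment line
csplit -z -f batch daligner.script '/^#/' '{*}'
for b in batch*
do
    # run this batch's jobs 8 at a time and wait for all of them to finish
    grep -v '^#' "$b" | xargs -d '\n' -P 8 -I JOB sh -c 'JOB'
done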

When all the jobs of our sample script have completed there will be 3 sorted alignment files DB.1.las, DB.2.las, DB.3.las that encode the alignments found for the reads in each of the 3 trimmed DB blocks, respectively. To describe the contents of these 3 files precisely, one must first know that for each alignment, a .las-file records:

a[ab,ae] x b^comp[bb,be]

where a and b are the indices (in the trimmed DB) of the reads that overlap, comp indicates whether the b-read is from the same or opposite strand, and [ab,ae] and [bb,be] are the intervals of a and b^comp, respectively, that align. It is further the case that an alignment between reads a and b results in two alignment records, one for a versus b and another for b versus a. In addition, a series of trace points is retained with each alignment record to facilitate the rapid delivery of an optimal alignment between the relevant intervals of the reads. We can now explain that the output file DB.n.las contains all the overlap records for which the first read a is in the trimmed block DB.n, and these records further occur in sorted order of a, then b, then comp, and finally ab. What this means is that all the alignments involving a given a-read occur in a contiguous interval of the output files, permitting easy and job-parallel computation in the downstream phases for chimer detection, read correction, repeat analysis, etc.

The HPCdaligner UNIX shell script employs the three commands daligner, LAsort, and LAmerge to produce the sorted and partitioned .las files for each of the N blocks of a hypothetical database D as follows:

  • “Daligner” step: Every block is compared against every other block by calling daligner on every pair D.x and D.y such that y≤x. A call such as “daligner D.x D.y1 .. D.yk” performs k comparisons of block D.x versus each block D.yi and does so with a compiled number of threads T (default 4). Each of these pairwise block comparisons produces 2T unsorted .las files D.x.D.y.[C|N]t.las and, if x > y, an additional 2T unsorted .las files D.y.D.x.[C|N]t.las. The file D.x.D.y.Ot.las contains every alignment produced by thread t where the a-read is in D.x, the b-read is in D.y, and the b-read is in orientation O. In total (N+1)N/2 block pairs are compared and 2N²T .las files result.
  • “Initial sort” step: The 2T .las files for each block pair x,y are sorted and merged into a single sorted .las file. The 2T files are first sorted with LAsort, where for each unsorted file U.las the command produces the sorted overlap file U.S.las. These sorted .S.las files are then merged into a single sorted file, L1.x.y.las, with LAmerge and afterwards removed. At the end of this step there are exactly N² sorted .las files, one for each block pair.
  • “Level k merge” steps: In a sequence of O(log N) levels the N files L1.x.*.las for a given block x are merged by LAmerge into a single .las file, D.x.las. At level 1, the N sorted files for a block are merged in groups of up to 25, resulting in ⌈N/25⌉ files, which at level 2 are merged in groups of up to 25, resulting in ⌈N/625⌉ files, and so on until only one file D.x.las remains. Even big projects involve fewer than 625 blocks, so typically only one or at most two merge levels are required.

For extraordinarily large projects the computation can be performed in stages as new data is added to the database, thus amortizing the cost of finding alignments over the interval of data production. For example, suppose you have a really big project going and that thus far the DB has 78 blocks in its current partition. Executing the script produced by “HPCdaligner DB 1-77” will compare blocks 1 to 77 versus each other, producing files DB.1.las through DB.77.las. Note carefully that the last block, 78, was not included, because it is a partial block. Later on, suppose you’ve added more data and now have 144 blocks. Then executing the script produced by “HPCdaligner DB 78-143” will compare blocks 78 to 143 against each other and against the earlier blocks 1 through 77, updating DB.1.las through DB.77.las and creating DB.78.las through DB.143.las. And so on. The only restrictions are that (a) one must add the blocks in sequential order once only, and (b) one should never add the last, partial block until the very last increment. By submitting each new complete block as it appears, I figure one could keep up with data production for, say, a 100X shotgun of D. melanogaster on a decent laptop.

The command LAshow produces an ASCII display of all or a selected subset of the alignments encoded in a given .las file, sorted or unsorted. One can view just the read indices and their aligned intervals, one per line, or a line rendering of the relationship between the two reads, or a complete BLAST-style alignment of each local alignment, where one has several options for customizing the display. In addition to permitting one to browse local alignments, the ASCII output can also be fed into a Python or Perl script that converts the information to a form that can be input to an external program of your choosing, such as the read correction step of the HGAP assembler.
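
As a concrete sketch, dumping the alignments of block 1 in the one-line-per-LA overview form and piping them into a (hypothetical) converter script might look like:

LAshow DB DB.1.las >DB.1.txt             # read indices and aligned intervals, one per line
LAshow DB DB.1.las | python convert.py   # convert.py is a placeholder for your own format converter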

The LAcat and LAsplit commands allow one to effectively change the partition of the .las files for a project should it ever be necessary. Suppose the .las files D.1.las, D.2.las, …, D.N.las have been computed for a database D.db that has been partitioned into N blocks. “LAcat D.# >E.las” will find all the .las files of the form D.#.las and stream the conceptual concatenation of all of them in numeric order to the standard output, in this case to the file E.las. In the other direction, “LAsplit F.# N <E.las” will split E.las as evenly as possible into N block files, F.1.las, F.2.las, …, F.N.las, subject to the restriction that the alignments involving a given a-read are all in the same file. Note that F.x.las does not necessarily equal the original D.x.las, as the a-reads in F.x.las may not be the same as the reads in a given block, D.x.db, because the splitting criteria for LAsplit and DBsplit are not the same. To get exactly the same files back, one should instead utter “LAsplit F.#.las D.db <E.las”, or more tersely “LAsplit F.# D <E.las”, which splits the piped input according to the partitioning of database D!  Finally, observe that if you want to repartition the DB D and also all the D.#.las files, first call DBsplit on D, and afterwards call “LAcat D.# | LAsplit D.# D” to redistribute the alignment records into .las files according to the new partitioning.
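
For example, if one later decided that only reads of 2Kbp or more should contribute to the blocks of D, a sketch of the repartitioning (the -x value is only an illustration) would be:

DBsplit -x2000 D            # reset the partition of the database itself
LAcat D.# | LAsplit D.# D   # redistribute the existing alignment records to the new blocks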

Lastly, the command LAcheck reads through the .las files given to it and checks them for structural integrity, i.e. do they reasonably encode a properly formatted .las file? We find this command useful in ensuring that every stage of a large HPCdaligner script has been properly executed. In processes involving thousands of jobs, it is not unusual for one or two to go “haywire”. So while we didn’t build it into the HPCdaligner scripts, we strongly recommend that you run LAcheck on .las files throughout the process to make sure all is well. Doing so is standard operating procedure for our assembly group.
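
A sketch of such a check for the 3-block example above, assuming LAcheck takes the database followed by the .las files to verify (see the command reference for the exact options):

LAcheck DB L1.*.las                    # after the initial sort & merge jobs
LAcheck DB DB.1.las DB.2.las DB.3.las  # after the final merge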

A precise, detailed, and up-to-date command reference can be found here.

 

The Dazzler DB: Organizing an Assembly Pipeline

Assembling DNA sequence data is not a simple problem. Conceptually, it can generally be thought of as proceeding in a number of sequential steps, performed by what we call phases, arranged in a pipeline, where each phase performs a certain aspect of the overall process. For example, in the “OLC” assembly paradigm, the first phase (O) is to find all the overlaps between reads, the second phase (L) is to then use the overlaps to construct a layout of the reads, and the final phase (C) is to compute a consensus sequence over the arrangement of reads. One can easily imagine additional phases, such as a “scrubber” that detects chimeric reads, a “cleaner” that corrects read sequence, and so on. Indeed, the header banner for this blog shows a 7-phase pipeline that roughly realizes our current Dazzler prototype. To facilitate the multiple and evolving phases of the Dazzler assembler, we have chosen to organize all the read data into what is effectively a “database” of the reads and their meta-information. The design goals for this database are as follows:

  1. The database stores the source Pacbio read information in such a way that it can recreate the original input data, thus permitting a user to remove the (effectively redundant) source files. This avoids duplicating the same data, once in the source file and once in the database.
  2. The data base can be built up incrementally, that is new sequence data can be added to the data base over time.
  3. The data base flexibly allows one to store any meta-data desired for reads. This is accomplished with the concept of tracks that implementers can add as they need them.
  4. The data is held in a compressed form equivalent to the .dexta and .dexqv files of the data extraction module. Both the .fasta and .quiva information for each read is held in the data base and can be recreated from it. The .quiva information can be added separately and later on if desired.
  5. To facilitate job parallel, cluster operation of the phases of our assembler, the data base has a concept of a current partitioning in which all the reads that are over a given length and optionally unique to a well, are divided up into blocks containing roughly a given number of bases, except possibly the last block which may have a short count. Often programs can be run on blocks or pairs of blocks and each such job is reasonably well balanced as the blocks are all the same size. One must be careful about changing the partition during an assembly as doing so can void the structural validity of any interim block-based results.

The Dazzler DB module source code is available on Github here. Grabbing the code and uttering “make” should produce 13 programs: fasta2DB, quiva2DB, DB2fasta, DB2quiva, fasta2DAM, DAM2fasta, DBsplit, DBdust, Catrack, DBshow, DBstats, DBrm, and simulator.

One can add the information from a collection of .fasta files to a given DB with fasta2DB, and one can subsequently add the quality streams in the associated .quiva files with quiva2DB. After adding these files to a data base one can then remove them as property 1 above guarantees that these files can be recreated from the DB, and the routines DB2fasta and DB2quiva do exactly that. Indeed, here in Dresden we typically extract .fasta and .quiva files with dextract from the Pacbio .bax.h5 source files and then immediately put the .fasta and .quiva files into the relevant DB, remove the .fasta and .quiva files, and send the .bax.h5 files to our tape backup library. We can do this because we know that we can recreate the .fasta and .quiva files exactly as they were when imported to the database.
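
Concretely, the round trip described above might look like the following sketch, where the file names are placeholders and the dextract step that produced the .fasta/.quiva pair is omitted:

fasta2DB FOO MyMovie.fasta     # add the extracted reads to DB FOO
quiva2DB FOO MyMovie.quiva     # add the matching quality streams
rm MyMovie.fasta MyMovie.quiva # safe: DB2fasta and DB2quiva can recreate them exactly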

A Dazzler DB consists of one named, visible file, e.g. FOO.db, and several invisible secondary files encoding various elements of the DB. The secondary files are “invisible” to the UNIX OS in the sense that they begin with a “.” and hence are not listed by “ls” unless one specifies the -a flag. We chose to do this so that when a user lists the contents of a directory they just see a single name, e.g. FOO.db, that is the one used to refer to the DB in commands. The files associated with a database named, say FOO, are as follows:

(a) FOO.db
a text file containing (i) the list of input files added to the database so far, and (ii) how to partition the database into blocks (if the partition parameters have been set).

(b) .FOO.idx
a binary “index” of all the meta-data about each read allowing, for example, one to randomly access a read’s sequence (in the store .FOO.bps). It is 28N + 88 bytes in size where N is the number of reads in the database.

(c) .FOO.bps
a binary compressed “store” of all the DNA sequences. It is M/4 bytes in size where M is the total number of base pairs in the database.

(d) .FOO.qvs
a binary compressed “store” of the 5 Pacbio quality value streams for the reads. Its size is roughly (5/3)M bytes depending on the compression achieved. This file only exists if .quiva files have been added to the database.

(e) .FOO.<track>.anno, .FOO.<track>.data
a track called <track> containing customized meta-data for each read. For example, the DBdust command annotates low complexity intervals of reads and records the intervals for each read in the two files .FOO.dust.anno and .FOO.dust.data that are together called the “dust” track. Any kind of information about a read can be recorded, such as micro-sat intervals, repeat intervals, corrected sequence, etc. Specific tracks will be described as phases that produce them are created and released.

If one does not like the convention of the secondary files being invisible, then un-defining the constant HIDE_FILES in DB.h before compiling the library creates commands that do not place a prefixing “.” before secondary file names, e.g. FOO.idx instead of .FOO.idx. One then sees all the files realizing a DB when listing the contents of a directory with ls.

While a Dazzler DB holds a collection of Pacbio reads, a Dazzler map DB or DAM holds a collection of contigs from a reference genome assembly. This special type of DB has been introduced in order to facilitate the mapping of reads to an assembly and has been given the suffix .dam to distinguish it from an ordinary DB. It is structurally identical to a .db except:

(a) there is no concept of quality values, and hence no .FOO.qvs file.

(b) every .fasta scaffold (a sequence with runs of N’s between contigs estimating the length of the gap) is broken into a separate contig sequence in the DB and the header for each scaffold is retained in a new .FOO.hdr file.

(c) the original and first and last pulse fields in the meta-data records held in .FOO.idx, hold instead the contig number and the interval of the contig within its original scaffold sequence.

A map DB can equally well be the argument of any of the commands below that operate on normal DBs. In general, a .dam can be an argument anywhere a .db can, with the exception of routines or optioned calls to routines that involve quality values, or the special routines fasta2DAM and DAM2fasta that create a DAM and reverse said, just like the pair fasta2DB and DB2fasta. So in general when we refer to a database we are referring to either a DB or a DAM.
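
For instance, to build a map DB from an assembly and prepare it for block-based jobs, a sketch (the file name and parameters are placeholders) would be:

fasta2DAM REF assembly.fasta   # scaffolds are split into contigs; headers kept in .REF.hdr
DBsplit -x1000 REF             # a DAM can be split (and dusted, etc.) like any DB
DBdust REF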

The command DBsplit sets or resets the current partition for a database, which is determined by 3 parameters: (i) the total number of basepairs to place in each block, (ii) the minimum read length of reads to include within a block, and (iii) whether to include only the longest read from a given well or all reads from a well (NB: several reads of the same insert in a given well can be produced by the Pacbio instrument). Note that the length and uniqueness parameters effectively select a subset of the reads that contribute to the size of a block. We call this subset the trimmed data base. Some commands operate on the entire database, others on the trimmed database, and yet others have an option flag that permits them to operate on either at the user's discretion. Therefore, one should note carefully to which version of the database a command refers. This is especially important for any command that identifies reads by their index (ordinal position) in the database.

Once the database has been split into blocks, the commands DBshow, DBstats, and DBdust below, and commands yet to come, such as the local alignment finder dalign, can take a block or blocks as arguments. On the command line this is indicated by supplying the name of the DB followed by a period and then a block number, e.g. FOO.3.db or simply FOO.3 refers to the 3rd block of DB FOO (assuming of course it has a current partition and said partition has a 3rd block). One should note carefully that a block is a contiguous range of reads such that, once trimmed, it has a given size in base pairs (as set by DBsplit). Thus, like an entire database, a block can be either untrimmed or trimmed and one needs to again be careful when giving a read index to a command such as DBshow.

The command DBdust runs the symmetric DUST low complexity filter on every read in a given untrimmed DB or DB block and it is the first example of a phase that produces a track. It is incremental in that you can call it repeatedly on a given DB and it will extend the track information to include any new reads added since the last time it was run on the DB. Reads are analyzed for low-complexity intervals and these intervals constitute the information stored in the .dust-track. These intervals are subsequently “soft”-masked by the local alignment command dalign, i.e. the low-complexity intervals are ignored for the purposes of finding read pairs that might have a significant local alignment between them, but any local alignment found will include the low-complexity intervals.

When DBdust is run on a block, say FOO.3, it is run on the untrimmed version of the block and it produces a block track in the files .FOO.3.dust.anno and .FOO.3.dust.data, whereas running DBdust on the entire DB FOO produces .FOO.dust.anno and .FOO.dust.data. Thus one can run an independent job on each block of a DB, and then after the fact stitch all the block tracks together with a call to Catrack, e.g. “Catrack FOO dust“. Catrack is a general command that merges any sequence of block tracks together into a track for the specified DB. For example, a 50X human dataset would partition into roughly 375 400Mb blocks, for which DBdust takes about 20 seconds per block on a typical processor. One can either call DBdust on the entire DB and wait 375*20 seconds ≈ 2 hours, or run 375 jobs, one per block, on your local compute cluster, waiting 1 or 2 minutes depending on how loaded your cluster is, and then call Catrack on the block tracks, which takes another half a minute at most.
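
In script form, the block-parallel variant of that computation is simply the following, where each DBdust line is an independent cluster job and 375 is the hypothetical human block count used above:

DBdust FOO.1
DBdust FOO.2
...
DBdust FOO.375
# once all block jobs have finished, stitch the block tracks into one dust track
Catrack FOO dust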

Tracks that encode intervals of the underlying reads in the data base, of which the “dust” track is an example, are specifically designated interval tracks, and commands like DBshow, DBstats, and daligner can accept and utilize one or more such tracks, typically specified through a -m option on the command line.

DBshow and DBstats are routines that allow one to examine the current contents of a DB or DB block. DBstats gives one a summary of the contents of an untrimmed or trimmed DB or DB block, including a histogram of read lengths and, if desired, histograms of the specified interval tracks. DBshow allows you to display the sequence and/or quality streams of any specific set of reads or read ranges in the DB or DB block, where the output can be in Pacbio fasta or quiva format depending on which option flags are set. There is a flag that controls whether the command applies to the entire DB or the trimmed DB, and one must be careful, as the, say, 5th read of the DB may not be the 5th read of the trimmed DB. A nice trick here is that since the output is in the Pacbio formats, one can then create a “mini-DB” of the set of reads output by DBshow, by calling fasta2DB and/or quiva2DB on its output. This conveniently allows a developer to test/debug a phase on a small, troublesome subset of reads in the DB.
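
A sketch of that trick, assuming DBshow accepts read indices or ranges after the DB name (the indices here are invented for illustration):

DBshow FOO 5 27 103-110 >trouble.fasta   # fasta output for a handful of suspect reads
fasta2DB TROUBLE trouble.fasta           # a mini-DB one can iterate on quickly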

DBrm removes all the files, including all tracks, realizing a given DB. It is safer to use this command than to attempt to remove a DB by calling rm on each hidden file. Lastly, this release includes a simple simulator, simulator, that creates a synthetic Pacbio data set useful for testing purposes.

A precise, detailed, and up-to-date command reference can be found here.