Estimating the number of true cell barcodes in single cell RNA-Seq (part 1)

Single Cell RNA-Seq (scRNA-Seq) took a major leap forward in 2015 when two Harvard groups published oil droplet-based methods for sequencing >10,000 cells in a single experiment for << $1 / cell (Klein et al; Macosko et al). Then in early 2016, 10X Genomics launched their commercial version of droplet scRNA-Seq, followed more recently by competitor technology from Illumina and Bio-Rad. From my limited exposure, the 10X product seems to be the most popular right now, but this is a very new field so there are likely improved methods on the horizon which could shake things up. These techniques all label cells in fundamentally the same manner: using microfluidic devices, individual cells are encapsulated in droplets together with a random cell barcode. The reverse transcription of the RNA and the subsequent PCR amplification are then performed inside each droplet separately, such that all cDNA from a single cell is labelled with the same cell barcode.

The techniques themselves are really exciting but I want to focus here on a technical question that arises during the analysis. A crucial aspect of droplet scRNA-Seq is that the number of possible cell barcodes is many orders of magnitude greater than the number of cells sequenced. Hence, if each sequenced cell gets one cell barcode at random, the probability of two cells having the same barcode is very low (if you're interested, you can work out the probability using the solution to the famous 'birthday problem'). However, when you get your sequencing data back and extract the cell barcodes detected, not all of these will be 'true' cell barcodes: some barcodes will contain sequencing errors or PCR errors. Furthermore, some cell barcodes may have been incorporated into cDNA that does not originate from your cells (more on this later). Hence, one of the first challenges in droplet scRNA-Seq is working out how many cells you've actually sequenced! In the supplementary material of the original drop-seq paper (Macosko et al), the authors suggest plotting the cumulative frequency and looking for an inflection point, or 'knee'. They show this in supplementary Figure 3A (reproduced below in Figure 1 with permission). The 10X analysis pipeline also appears to take a similar approach, from what I can gather from their website.

figs3_lrg-e1495127255597.jpg

Figure 1. Cumulative frequency for drop-seq data. Plot taken from Macosko et al supplementary figure 3 with permission.  Dashed line shows inferred position of point of inflection.
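To get a feel for the numbers, here's a back-of-envelope version of that 'birthday problem' calculation (my own illustration, not from either paper; the 12 bp barcode length is just an assumed example):

# Expected number of cell pairs sharing a barcode, assuming barcodes are
# assigned uniformly at random from all possible sequences.
def expected_collisions(n_cells, barcode_length):
    n_barcodes = 4 ** barcode_length        # possible barcode sequences
    n_pairs = n_cells * (n_cells - 1) / 2   # pairs of sequenced cells
    return n_pairs / n_barcodes

# e.g. 10,000 cells with a hypothetical 12 bp cell barcode
print(expected_collisions(10000, 12))       # ~3 expected collisions

So even with 10,000 cells, only a handful of cell pairs would be expected to share a barcode.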

Now, if you've been on this blog before, you'll probably have seen the posts about UMI-tools. This is a tool suite I've co-developed with Ian Sudbery (University of Sheffield) to accurately identify PCR duplicates in techniques utilising unique molecular identifiers (UMIs). UMIs are similar to cell barcodes in that they are just a sequence of bases used to barcode a read, though in this case the barcode relates to the unique molecule prior to PCR amplification. The novel aspect of our tools is that we used networks to identify PCR duplicates in an error-aware manner. For background on the network methods, please see our publication or my previous blog post. For the next major UMI-tools release (version 0.5), we're planning to expand our support for single cell RNA-Seq analyses, as this seems to be the major application of the tool to date according to our GitHub issues. As part of this, we wanted to expand the extract command. Currently this command extracts the UMI sequence from the reads and appends it to the read name so that the UMI can be accessed in the downstream BAM. We wanted to extend extract so that it can:

  • extract cell barcodes
  • filter cell barcodes against a whitelist of expected cell barcodes
  • or, where no whitelist exists (as in the case of droplet scRNA-Seq), create a whitelist of ‘true’ cell barcodes from the data and filter against this

For the final point, we needed to implement an automated version of the ‘knee’ method as described in the drop-seq paper. We also wondered whether our network based methods might be suitable for this purpose as they are designed to handle the situation where you have an unknown number of ‘true’ barcodes plus additional barcodes which have been generated via errors from the ‘true’ barcodes. I therefore set out to implement a ‘knee’ method and compare it with our network methods.

To start with, I took the SCRB-Seq data from Soumillon et al since I had it to hand and it includes 384 expected cell barcodes which I could use as a ground truth. All other barcodes observed should therefore have originated as errors from the 384 true barcodes. To be clear, this is not droplet scRNA-Seq, but it gives us a definitive ground truth for the purposes of initial performance testing. Plotting the cumulative frequency, we can clearly see an inflection near the 384th barcode as expected (Figure 2; blue line).

no_methods_cumsum

Figure 2. Cumulative frequency for SCRB-Seq cell barcodes. Blue line represents the position of the 384th cell which is close to the inflection point in the curve.

I initially attempted to identify the inflection point by fitting a spline function to the cumulative frequency but quickly realised it was much easier to work from the distribution of counts (log10) per cell barcode (Figure 3). Here the inflection point is represented by the local minimum between the distributions of counts for 'true' and 'false' barcodes. We can see that the local minimum is very close to the 384th cell barcode, so this automated knee method seems to work well for this data. The obvious downside of hard thresholding on read counts with the knee method is that there will likely be some overlap between the read counts for real and false barcodes (as we can see in Figure 3). This is why we believed the network-based methods might be more accurate.

scrb_seq_one_min_dist

Figure 3. Density of reads per SCRB-Seq cell barcode. Blue line represents the position of the 384th cell, which is close to the inflection point in the curve in Figure 2. Orange line represents the local minimum used in my implementation of the 'knee' method.
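For those interested, here's a minimal sketch of how such a local minimum can be located (my own illustration using a Gaussian kernel density estimate; the actual UMI-tools implementation may differ, and taking the highest-count minimum is an assumption on my part):

# Sketch: threshold cell barcodes at the local minimum in the density of
# log10(reads per barcode). Illustrative only; not the UMI-tools code.
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

def knee_threshold(barcode_counts):
    """barcode_counts: dict of {cell_barcode: read_count}."""
    log_counts = np.log10(np.fromiter(barcode_counts.values(), dtype=float))
    density = gaussian_kde(log_counts)
    xs = np.linspace(log_counts.min(), log_counts.max(), 1000)
    minima = argrelextrema(density(xs), np.less)[0]
    # take the local minimum closest to the high-count ('true' barcode) peak
    return 10 ** xs[minima[-1]]

# barcodes above the threshold would then be accepted as 'true' cells:
# whitelist = {bc for bc, n in counts.items() if n > knee_threshold(counts)}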

To compare with our network-based methods, I took increasingly large samples of reads from the SCRB-Seq sample, estimated the true cell barcodes using the knee method and our 'adjacency' and 'directional' methods, and calculated the sensitivity and false positive rate (FPR) of each approach. As you can see in Figure 4, the 'directional' method has a slightly higher sensitivity and lower FPR than the 'knee' method, which suggests that this network-based method might be more suitable for identifying the true cell barcodes. The knee method slightly outperforms simply selecting the 384 most abundant barcodes ('top 384') in terms of sensitivity, although this comes at the cost of an increased FPR. The 'adjacency' method has a high sensitivity but tends towards overestimating the number of true cell barcodes as the number of sampled reads is increased. This is because adjacency only groups barcodes together if they are each separated from the same hub barcode by a single edge. Hence, any barcode with 2 or more errors will always be in a separate group from the true barcode from which it originates. In contrast, the directional method is not limited to single-edge paths.

pSens_zoom.png
pFPR
pFPR_zoom

Figure 4. Sensitivity (Top) and False Positive Rate (FPR; Middle, Bottom) for 3 methods to detect the number of true cells sequenced. “Top 384” = select the top 384 barcodes. Shown for comparison with knee method. Middle = all samples. Bottom = adjacency excluded to show difference between knee and directional.

Next, I took a single sample from the drop-seq paper where the authors had inferred that there were ~570 cells using a visual assessment of the cumulative frequency plot (this is the sample shown in Figure 1). They also backed up this assertion with an assessment of the species-specificity of the reads for each cell barcode, which suggested that most of the "cells" below this inflection point were actually droplets which did not contain a cell and simply contained 'ambient' RNA. So, 570 seemed a reasonable truth to test the method against. My 'knee' method worked great: using the first 50M reads, it estimated that there were 573 cell barcodes (Figure 5).

indrop_cumsum

Figure 5. Cumulative frequency plot for cell barcodes in the first 50M drop-seq reads. The ‘knee’ method estimated 573 true cell barcodes and hence is plotted over the ‘top 570’ inflection point prediction. The X-axis has been limited to the 10 000 most abundant cell barcodes.

In contrast, the directional method failed miserably! For starters, the sheer number of cell barcodes precluded the inclusion of all cell barcodes detected, as the network building requires an all-vs-all comparison between the barcodes to calculate the edit distances. With 50M reads, the sensitivity was OK (95%), but the specificity was just 11%, and the specificity unexpectedly increased as the number of reads parsed was decreased! Then came the realisation…

[Homer Simpson "D'oh!" image]

As mentioned above, the network-based methods were designed to deal with a pool of barcodes where some barcodes are true and the remaining barcodes are the result of errors from the true barcodes. However, here we have a situation where the 'false' barcodes could be due to either errors (I'll call these 'error barcodes'), or droplets without a cell which have amplified ambient RNA ('ambient barcodes'). We can identify these two separate types of false barcodes by looking at the minimum edit distance between each barcode and the true barcodes. For error barcodes, we would expect this minimum edit distance to be 1 in most cases. For ambient barcodes, we expect a distribution of minimum edit distances relative to the true barcodes similar to that of barcodes selected at random. Figure 6 below shows the distribution of counts per barcode, split into 4 bins. Figure 7 shows the distribution of edit distances for each bin and a null distribution from a randomly generated set of cell barcodes. Bin 4 contains the true barcodes and shows a null-like distribution of edit distances, as expected. We can clearly see that bin 3, which has slightly lower counts per cell barcode compared to the true barcodes, is enriched for single edit distances relative to the null. This strongly suggests many of these cell barcodes are error barcodes with a single error relative to the true cell barcode from which they originate. However, bin 2, and bin 1 especially, show a more null-like distribution, suggesting most of these cell barcodes are instead ambient barcodes.

indrop_bins

Figure 6. Distribution of reads per cell barcode (log 10) from first 50M reads, separated into 4 bins. Bin 4 represents the ‘true’ barcodes.

edit_distances_drop_seq

Figure 7. Distribution of minimum edit distance compared to true barcodes (bin 4) for each bin shown in figure 6. Random = 10 000 randomly generated cell barcodes.
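For reference, this diagnostic can be computed roughly as follows (a simplified sketch; I use Hamming distance as a stand-in for edit distance, which ignores indels):

# Sketch: minimum distance from each candidate barcode to the set of
# accepted 'true' barcodes, using Hamming distance for simplicity.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance_to_truth(barcode, true_barcodes):
    return min(hamming(barcode, t) for t in true_barcodes)

# e.g. tabulate distances per abundance bin:
# distances = {bc: min_distance_to_truth(bc, true_set) for bc in candidate_barcodes}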

 

In summary, the knee method as implemented here works very well with the samples tested so far, so we can proceed with automatic detection of the number of true cell barcodes in UMI-tools extract. We will also enable the use of the directional method, since this is more accurate when you know that all false cell barcodes are the result of errors. However, I'd recommend sticking with the knee method unless you are absolutely certain this is the case.

Finally, if you're already using UMI-tools for scRNA-Seq, or thinking about doing so, keep a look out for version 0.5. Along with the added functionality described here, we're adding a count command to yield counts per cell per gene directly from featureCounts/HTSeq output, and we hope to include a scRNA-Seq-specific tutorial for this release too.

EDIT: Following the comments below I’ve continued this analysis into a second post to look at INDELs in cell barcodes and comparing UMI-tools with sircel.

Improving kallisto quantification accuracy by filtering the gene set

In my previous post Road testing kallisto, I presented the results of a simulation study to test the accuracy of kallisto. Specifically, I hypothesised that kallisto would perform poorly for transcripts with little or no unique sequence and I wanted to use the simulation to flag these transcripts in my downstream analysis. The simulation was therefore intentionally naive and ignored biases from library preparation and sequencing so I could focus on the accuracy of quantification with “near perfect” sequence reads. The simulation results broadly fitted my expectations but also threw up a few more questions in the comments section. I’m going to post an update for each of the following questions:

  1. What’s the impact of including low-support transcripts on the quantification of well-supported transcripts?
  2. The correlation between truth and estimated counts drops off as a transcript has fewer unique k-mers. Is this also reflected in the bootstrap variances?
  3. How does kallisto compare to sailfish, salmon and cufflinks?
    • This comparison will come with caveats because of the way the reads were simulated

For this post I’ll focus on the first question.

The impact of including low-support transcripts on the quantification of well-supported transcripts

Previously I showed that the 9 DAZ2 transcripts (Ensembl v82) are very poorly quantified by kallisto. DAZ genes contain variable numbers of a repeated 72-bp exon (DAZ repeats) and I noted that transcript models for DAZ2 overlapped one another perfectly with regards to the splice sites, so some isoforms differ only in the number of DAZ repeats, hence 8/9 of the isoforms had no unique kmers whatsoever. I further suggested that the inclusion of low-support transcript models in the gene set may be reducing the accuracy of quantification for the high-support transcripts. I then suggested this was likely to be a problem at many other loci and filtering out low-support transcripts may be beneficial.

To test this, I created two gene sets, "Filtered" and "Unfiltered", both containing transcripts from hg38 Ensembl v82 protein-coding genes. For the "Filtered" gene set I removed all transcripts with a low support level (TSL > 3; no supporting non-suspect EST or mRNA). For the "Unfiltered" gene set, I retained all transcripts. I then compared the accuracy of quantification over all transcripts with a support level of 1-3.
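As an illustration of the filtering step, something along these lines would do the job (a sketch assuming an Ensembl GTF with a transcript_support_level attribute; this is not the exact code used for the analysis, and note that it also drops transcripts with no assigned TSL):

# Sketch: write a 'Filtered' GTF keeping only transcripts with
# transcript_support_level 1-3.
import re

def well_supported(attributes):
    match = re.search(r'transcript_support_level "(\d+)', attributes)
    return match is not None and int(match.group(1)) <= 3

with open("ensembl_v82.gtf") as gtf, open("filtered.gtf", "w") as out:
    for line in gtf:
        if line.startswith("#"):
            out.write(line)               # keep header lines
            continue
        fields = line.rstrip("\n").split("\t")
        # gene lines carry no TSL; filter transcript/exon/CDS lines on it
        if fields[2] == "gene" or well_supported(fields[8]):
            out.write(line)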

We would expect this filtering to increase the fraction of unique 31-mers in the remaining transcripts. Figure 1 demonstrates that this is indeed the case.

kmers filtered vs unfiltered

Figure 1. Distribution of fraction of unique 31-mers per transcript in the filtered and unfiltered gene sets
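For what it's worth, the fraction of unique 31-mers per transcript can be computed along these lines (my own sketch, assuming a dict of transcript cDNA sequences; not the original analysis code):

# Sketch: fraction of each transcript's 31-mers that occur in no other
# transcript in the gene set.
from collections import defaultdict

K = 31

def kmers(seq):
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

def unique_kmer_fraction(transcripts):
    """transcripts: dict of {transcript_id: cDNA sequence}."""
    tx_kmers = {tx: kmers(seq) for tx, seq in transcripts.items()}
    owners = defaultdict(set)
    for tx, kset in tx_kmers.items():
        for km in kset:
            owners[km].add(tx)
    return {tx: sum(len(owners[km]) == 1 for km in kset) / max(len(kset), 1)
            for tx, kset in tx_kmers.items()}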

Figure 2 is a re-representation of the previous figure, showing the correlation between the sum of ground truths across the 30 simulations and the sum of estimated counts, with separate panels for the two gene sets. The two panels only show quantification over the high-support transcripts. There's a clear but slight decrease in the accuracy of quantification when using the unfiltered gene set. With the arbitrary threshold of 1.5-fold that I used previously, 278 / 97375 (0.28%) of transcripts are flagged as "poor" accuracy, compared to 448 / 97375 (0.46%) when we include the low-support transcripts. Honestly, I was expecting a bigger change, but this hard thresholding somewhat masks the overall decrease in accuracy.

kallisto Filtering geneset improves accuracy

Figure 2. Filtering low-support transcripts improves the quantification accuracy for high-support transcripts with kallisto

Figure 3 shows the same plot but focusing on the DAZ genes. All the transcript models for DAZ4 are support level 5 so this gene is not included. When the gene set is not filtered, one of the DAZ1 (ENSG00000188120) transcripts is accurately quantified, with the other transcripts having zero or close-to-zero estimated counts across the 30 simulations. The filtering leads to a much improved quantification accuracy for the other DAZ1 transcript and one of the DAZ2 (ENSG00000205944) transcripts. However, there is no real improvement for the remaining transcripts, with two now considerably over-estimated and the other 4 showing no improvement.

Kallisto filtering improves DAZ gene accuracy

Figure 3. Improvement in quantification of DAZ gene transcripts when low-support transcripts are filtered out

The other important metric when assessing the accuracy of quantification is the Pearson's correlation coefficient (r) between the ground truth and estimated counts across the simulations. For the majority of transcripts r > 0.9 even when using the unfiltered gene set; however, there is a noticeable improvement when the gene set is filtered (Figure 4).

kallisto_geneset_correlation_dist_zoom

Figure 4. Distribution of Pearson’s correlation coefficients between ground truths and kallisto estimated counts across 30 simulations. Transcripts were quantified using a filtered and unfiltered gene set
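For completeness, the per-transcript correlations can be computed very simply (a sketch, assuming per-transcript arrays of counts, one value per simulation):

# Sketch: Pearson's r between ground-truth and estimated counts across the
# 30 simulations, computed per transcript.
import numpy as np

def per_transcript_correlation(truth, estimates):
    """truth, estimates: dicts of {transcript_id: array of 30 counts}."""
    return {tx: np.corrcoef(truth[tx], estimates[tx])[0, 1] for tx in truth}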

We would expect the greatest improvement where the correlation is low with the unfiltered gene set and the filtering leads to a large gain in the fraction of unique 31-mers. To show this, Figure 5 presents the gain in fraction of unique 31-mers (x-axis) against the improvement in correlation with the filtered gene set (y-axis), split by the correlation when using the unfiltered gene set. The transcripts in the top left panel showed very poor correlation with the unfiltered gene set (<0.5), so improvements here may be partly affected by regression to the mean. Looking at the other panels, which show the transcripts with correlation >0.5 when using the unfiltered gene set, we can see that as the gain in fraction of unique 31-mers approaches 1, the improvement in the correlation tends towards the maximum possible improvement. Even those transcripts which only show a very slight gain in the fraction of unique 31-mers show a considerable improvement in the correlation. This is in line with my observation in the previous post that kallisto shows a much better performance for transcripts with a fraction of unique k-mers > 0.05. 11047 / 97375 (11.3%) of high-support transcripts show at least a 0.05 increase in the fraction of unique k-mers.

Transcripts unique kmers improved accuracy

Figure 5. Change in correlation when using the filtered gene set relative to the unfiltered gene set, split by the gain in fraction of unique 31-mers and the correlation in the unfiltered gene set

Summary

The figures I’ve shown here fit perfectly with the expectation that filtering out the low-support transcripts increases the fraction of unique kmers in a subset of the remaining transcripts, in turn improving the accuracy of the estimated counts from kallisto. Removing the 44023 low-support transcripts improves the accuracy of the sum of estimates for the high-support transcripts, but the improvement is only slight. Whether this gain is worth removing 44023 isoforms is really down to how much you believe these low-support isoforms. Personally, I worry that a lot of these low-support transcripts are spurious models from poor transcript assembly.

The DAZ genes provide an insight into a very hard-to-quantify set of transcripts, indicating that there are gains to be made from filtering out the low-support transcripts but in some instances these will be minimal. Indeed, in some cases, even transcripts we know are real are just going to be near-impossible to quantify with short-read RNA-Seq due to their sequence similarity with other transcripts from the same locus and/or homologous loci.

The improvement to the correlation across the 30 simulations is somewhat clearer than the improvement in the sum of estimates, and most prominent where the gain in unique k-mers is greatest, as expected. 11.3% of transcripts gain at least 0.05 in the fraction of unique k-mers, which seems to me a worthwhile return for the loss of the low-support transcripts from the gene set. The next step will be to explore the effect of the filtering on differential expression analysis, since this is what I want to use kallisto for, and I expect this will be its major use case.

UPDATE (22/10/15):

In response to Ian's question below, I had a look at whether removing a random selection of transcript models has a similar impact on the accuracy of the remaining models. To do this, I identified the number of transcripts which would be removed by the transcript support level (TSL) filtering for each gene_id and removed a random set (of the same size) of transcript models for that gene_id. This should ensure the comparison is "fair" with regards to the complexity of the loci at which transcript models are removed and the total number of transcript models filtered out.
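In case it's useful, the matched random filtering can be sketched as follows (assuming per-gene transcript lists and the per-gene counts of transcripts removed by the TSL filtering; illustrative only):

# Sketch: per gene, remove a random set of transcripts of the same size as
# the set that TSL filtering removes.
import random

def random_matched_filter(transcripts_by_gene, n_removed_by_tsl):
    """transcripts_by_gene: {gene_id: [transcript_ids]}
    n_removed_by_tsl: {gene_id: number of transcripts removed by TSL filtering}"""
    kept = {}
    for gene_id, tx_ids in transcripts_by_gene.items():
        n_remove = n_removed_by_tsl.get(gene_id, 0)
        dropped = set(random.sample(tx_ids, n_remove))
        kept[gene_id] = [tx for tx in tx_ids if tx not in dropped]
    return kept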

My expectation was that low-support transcripts would be contaminated by transcript models from incomplete transcript assembly and would therefore be shorter and show a greater overlap with other transcript models from the same locus. I therefore expected that the removal of random transcripts would also increase the fraction of unique k-mers, as suggested by Ian, but that the effect would be greater when the low-support transcripts were removed. It turns out I was wrong! Figure 6 shows the distribution of the fraction of unique 31-mers with the unfiltered, TSL-filtered and randomly filtered annotations. The random filtering actually leads to a slightly greater increase in the fraction of unique 31-mers for the remaining transcripts. This also leads to a slightly greater correlation for the remaining transcripts (Figure 7).

So what about the length and overlap of the low-support transcripts? Well, it turns out the transcript models do decrease in length from TSL1-4, but oddly transcripts with TSL 5 ("no single transcript supports the model structure") are very similar in length to TSL1 transcripts (Figure 8). To compare the overlap between the transcript models, I took a shortcut and just used the fraction of unique k-mers as a proxy: transcripts which have more overlap with other transcripts from the same locus will have a lower fraction of unique k-mers. This is complicated by homology between gene loci, but it's good enough for a quick check I think. Figure 9 shows the fraction of unique k-mers at each TSL. Again TSL1-4 show a trend, with TSL5 looking more similar to TSL1 & 2. Interestingly, the trend is actually in the opposite direction to what I had expected, although this comes with the caveat that I'm not controlling for any differences in the genome-wide distribution of the different TSLs, so I'd take this with a pinch of salt.

Why TSL5 transcripts are more similar to TSL1-2 transcripts I'm not sure; perhaps this scale shouldn't be considered ordinal after all. I think I'd better get in contact with Ensembl for some more information about the TSL. If anyone has any thoughts, please do comment below.

Of course, this doesn't take away from the central premise that the better-supported transcript models (lower TSL) are more likely to be "real", and that accurate quantification over these transcripts should be a priority over quantifying as many transcript models as possible, given that some of those models will be wrong.

Thanks to Ian for the suggestion; it turned up an unexpected result!

kallisto_geneset_random_filter_kmers

Figure 6. Impact of filtering on fraction unique 31-mers

kallisto_geneset_random_filter_cor

Figure 7. Impact of filtering on correlation coefficient. X-axis trimmed to [0.7-1]

transcript_support_length

Figure 8. Transcript length (log10) split by Transcript Support Level (TSL)

transcript_support_kmers

Figure 9. Fraction unique 31-mers for transcripts split by Transcript Support Level (TSL)

Unique Molecular Identifiers – the problem, the solution and the proof

[EDIT: UMI-Tools open access publication is now out]

I’ve been working with Dr. Ian Sudbery (former CGAT fellow, currently Lecturer in Bioinformatics at the University of Sheffield) on a script called dedup_umi.py to correctly remove PCR duplicates from alignment files when using Unique Molecular Identifiers (UMIs). The script is available from GitHub as part of the UMI-tools repository. As part of the development I’ve been running some simulations to compare different methods which have proved quite illuminating. Before I go into the details though, some background.

PCR duplication artifacts

Many sequencing-based techniques rely upon an amplification step to increase the concentration of the library generated from the fragmented DNA prior to sequencing. Following alignment to the genome, it's usually preferable to identify and remove PCR duplicates, as there are inherent biases in the amplification step which cause some sequences to become overrepresented in the final library compared to their true abundance in the fragmented DNA. The brilliant Picard tool is commonly used to identify and remove these PCR duplicates using their genomic coordinates. Of course, it's also possible that two sequence reads with the same genomic coordinates will have originated from two independent fragments. However, where the alignment density is essentially random (as in DNA-Seq) this is unlikely, even more so if paired-end sequencing has been performed. The human genome has ~3 x 10^9 bases and the insert size will vary, so the number of possible genomic coordinate combinations for the paired ends is so vast that we don't expect to sequence two fragments with the same genomic coordinates.

Conversely, in RNA-Seq, some cDNA sequences will be highly abundant and we are therefore much more likely to sequence multiple fragments with the exact same genomic coordinates. This problem is most acute where a high number of PCR cycles has been used to generate the sequencing library, for example where the concentration of starting material is low, such as in single cell RNA-Seq, which necessitates many PCR cycles.

Using unique molecular identifiers to correctly identify PCR duplicates

Unique molecular identifiers (UMIs; also called Random Molecular Tags (RMTs)) are random sequences of bases used to tag each molecule (fragment) prior to library amplification (see figure below and Kivioja et al (2012)), thereby aiding in the identification of PCR duplicates. If two reads align to the same location and have the same UMI, it is highly likely that they are PCR duplicates originating from the same fragment prior to amplification. We can therefore collapse all the reads with the same genomic coordinates and UMI into a single representative read and obtain an accurate estimate of the relative concentration of the fragments in the original sample.

nmeth.2772-F1

This figure is adapted from Islam et al (2014).
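The basic collapsing step itself is trivial (a naive sketch that trusts the UMIs completely; the problem with doing exactly this, and what dedup_umi.py does about it, is described next):

# Naive 'unique' de-duplication: one representative read per distinct
# (chromosome, position, strand, UMI) combination.
def count_unique_fragments(reads):
    """reads: iterable of (chrom, pos, strand, umi) tuples."""
    return len(set(reads))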

The problem

The use of UMIs as described above would work perfectly if it were not for base-calling errors, which create reads with the same genomic coordinates but UMIs that differ only at the base where the error occurred. Base-calling errors therefore inflate the apparent number of unique fragments sequenced (see below). An additional source of errors is the amplification itself, which can introduce errors during the DNA duplication step.

schematic_1

The solution

Ian initially identified this issue in his own data and wrote a python script utilising pysam to parse an alignment BAM file and inspect the UMIs of reads with the same genomic coordinates. He reasoned that sequencing errors would produce UMIs which are a single edit distance away from another UMI. Therefore, by forming an adjacency matrix it should be possible to identify the sequencing errors and collapse the reads to the true number of unique UMIs present (see schematic below for a simple example). The first step is to identify connected components, where UMIs are represented by nodes and edges represent links between nodes which are a single edit distance apart. The counts of each node are also retained. Within each component, the most abundant node is selected and all nodes within a single edit distance are removed. This process is repeated using the second most abundant node and so on until all edges are removed and the number of nodes remaining is inferred to be the number of unique fragments sequenced. We refer to this method as the “adjacency” method.

schematic_2
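A stripped-down version of this idea, for the UMIs at a single genomic position, might look like the following (my own sketch; the actual dedup_umi.py implementation works on the full network per position and differs in detail):

# Simplified 'adjacency' counting: repeatedly keep the most abundant
# remaining UMI and remove its single-edit-distance neighbours; the number
# of UMIs kept is the inferred number of unique fragments.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def adjacency_estimate(umi_counts):
    """umi_counts: dict of {umi: read_count} at one genomic position."""
    remaining = set(umi_counts)
    kept = 0
    while remaining:
        hub = max(remaining, key=lambda u: umi_counts[u])
        kept += 1
        remaining.remove(hub)
        remaining -= {u for u in remaining if hamming(u, hub) == 1}
    return kept

# e.g. adjacency_estimate({'ATAT': 100, 'ATAA': 3, 'CCGG': 45}) -> 2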

It turns out this works pretty well. However, one can envisage situations where this may not correctly identify the true number of unique fragments. For example, an error during the PCR amplification may create an apparently distinct UMI, from which sequencing errors could create further UMIs, all of which actually derive from a single unique fragment (see schematic below). A similar situation could also arise if one particular position in the reads was more likely to contain an error (this is quite common in Illumina sequencing). In this case, one would see lots of UMIs a single edit distance apart due to an error at the common position, and further errors may then accumulate on top of these UMIs.

schematic_3

In order to deal with these more complicated networks of connected UMIs, I developed the "directional_adjacency" method. I reasoned that we could resolve these networks by forming edges between nodes where the count of node A is >= 2 x the count of node B. Now, each connected component is inferred to originate from a single unique fragment.
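Again as a rough sketch (simplified relative to the real implementation), the directional idea boils down to only drawing an edge where the count ratio supports an error relationship, and then counting connected components:

# Simplified 'directional adjacency' counting: link UMIs one edit apart when
# the more abundant one has at least twice the count of the other, then each
# connected component is counted as one original fragment.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def directional_estimate(umi_counts):
    """umi_counts: dict of {umi: read_count} at one genomic position."""
    umis = list(umi_counts)
    neighbours = {u: set() for u in umis}
    for i, a in enumerate(umis):
        for b in umis[i + 1:]:
            if hamming(a, b) == 1 and (umi_counts[a] >= 2 * umi_counts[b] or
                                       umi_counts[b] >= 2 * umi_counts[a]):
                neighbours[a].add(b)
                neighbours[b].add(a)
    seen, components = set(), 0
    for start in umis:                 # count connected components
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(neighbours[node] - seen)
    return components

# e.g. directional_estimate({'ATAT': 100, 'ATAA': 50, 'ATGA': 3}) -> 1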

In addition to these methods, we also wanted to try a modification of the “adjacency” method in which the whole connected component is inferred to originate from a single unique fragment (“cluster”).

This issue has been previously observed by Islam et al (2014), who proposed that UMIs with less than 1% of the average counts should be removed "to account for UMIs that stem from PCR-induced mutations or sequencing errors". We implemented this method within our tool for comparison ("percentile").
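As described, this amounts to a simple count threshold. A sketch (assuming 'average' means the mean count over the UMIs observed at that position, which is my reading rather than a stated detail):

# Sketch of the 'percentile' idea: discard UMIs with counts below 1% of
# the mean UMI count at this position, then count what remains.
def percentile_estimate(umi_counts):
    cutoff = 0.01 * sum(umi_counts.values()) / len(umi_counts)
    return sum(count >= cutoff for count in umi_counts.values())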

Finally, we implemented a “unique” method, where all unique UMIs were retained. Note, this appears to be the method which is most commonly used. The schematic below shows a comparison of the various methods (I’ve obviously rigged it to make directional adjacency look the best).

schematic_2

The proof

We’re taking two approaches to prove the value of improved detection of PCR duplicates using the adjacency methods implemented in dedup_umi.py. Firstly, we’ve conducted simulations to show they consistently estimate the true number of unique fragments more accurately. Secondly, we’ve applied dedup_umi.py to multiple single-cell RNA-Seq studies and show an improvement, with respect to the distribution of UMIs, the quantification of spike-in RNAs and the clustering of cells. I’ll focus on the simulation for now and leave the single-cell RNA-Seq to another post when we’ve submitted the manuscript.

In order to compare the various methods, we wanted to know how the methods fared across a range of UMI lengths and depths of sequencing.  For this I turned to python to replicate the process of amplification and sequencing in silico and model the impact of the various parameters on the efficacy of the methods. The whole process is contained in a standalone ipython notebook which you can view here. If you would like to download and run the notebook yourself you can do so from here.

In brief, the simulation starts with a set number of unique molecules (fragments), our "ground truth". A UMI is then randomly assigned to each molecule and the pool of molecules is amplified by duplicating each molecule at every in silico PCR cycle. The final pool of molecules is then "sequenced" to produce the reads. Within this process, base-calling errors are incorporated during the sequencing and substitutions are incorporated during the PCR cycles. Furthermore, the amplification efficiency of each UMI varies, to naively model the impact of amplification biases. The various methods are then applied to the generated reads to simulate the de-duplication process within dedup_umi.py, and an estimate of the number of unique molecules is obtained (a stripped-down sketch of this process is shown after the list below). To compare the methods I used 2 metrics:

  • The difference between this estimate and the ground truth
  • The variability of the estimate for which I used the coefficient of variation (CV; sd/mean)
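Here is a stripped-down version of the simulation idea (my own minimal sketch; the notebook linked above is the real thing and additionally models per-UMI amplification efficiency, which is omitted here):

# Minimal in silico amplification + sequencing of UMIs (illustrative only).
import random

BASES = "ACGT"

def mutate(umi, error_rate):
    """Substitute each base independently with probability error_rate."""
    return "".join(random.choice(BASES.replace(b, "")) if random.random() < error_rate else b
                   for b in umi)

def simulate(n_molecules=100, umi_length=8, pcr_cycles=6,
             pcr_error_rate=1e-5, seq_error_rate=1e-3, n_reads=10000):
    # ground truth: one random UMI per starting molecule
    pool = ["".join(random.choice(BASES) for _ in range(umi_length))
            for _ in range(n_molecules)]
    # PCR: each cycle duplicates every molecule, occasionally with substitutions
    for _ in range(pcr_cycles):
        pool += [mutate(umi, pcr_error_rate) for umi in pool]
    # sequencing: sample reads from the pool and add base-calling errors
    return [mutate(random.choice(pool), seq_error_rate) for _ in range(n_reads)]

reads = simulate()
# the 'unique' method simply counts distinct observed UMIs
print("unique UMI estimate:", len(set(reads)), "(ground truth: 100)")

Even this toy version reproduces the headline behaviour: the 'unique' estimate overshoots the ground truth because every error mints an apparently new UMI.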

Each of the plots below shows the impact of changing one of the simulation parameters on these two metrics. In each plot, the red dashed line shows the parameter value that was used in the other simulations.

We can see that under the assumptions of the simulations, the directional adjacency method is the least sensitive to variations in the input parameters.

The unique method (the method used in most publications) severely overestimates the number of unique molecules, since each sequencing error or PCR error erroneously increases the estimate. As the number of starting molecules increases, all methods eventually struggle to identify the true number of unique molecules, with the adjacency, directional adjacency and percentile methods showing the best performance. As we would expect, the percentile method tends to overestimate the number of unique molecules, since it fails to remove sequencing errors unless they are very rare events. The adjacency method tends towards an overestimation too, as it can only identify erroneous UMIs within a single edit distance. Interestingly, the directional adjacency method tends towards an underestimation, as it can link together a string of UMIs each separated by a single edit distance. The directional aspect of the network is designed to avoid this happening between truly unique UMIs, but occasionally these will be collapsed together. The cluster method tends towards a more severe underestimation, as it does not take counts into account at all and collapses whole networks into a single UMI.

Overall, the directional adjacency method shows very little sensitivity to the number of starting molecules and outperforms all other methods. As expected, the percentile method becomes more consistent as the number of molecules increases, since the variability between the counts in each one of the individual iterations is reduced. In contrast, the cluster and adjacency methods become less consistent as the number of starting molecules increases, as the variability in the topology of the networks formed also increases.

simulation_number_of_umis_xintercept_difference_cv_deduped

As the UMI length increases (from an initial, very short 3bp), the methods which utilise a network quickly improve in terms of accuracy and variability. However, the percentile and unique methods actually become less accurate, as a longer UMI obviously means a higher probability of errors accumulating. Perhaps counter-intuitively then, if you're not going to account for errors when using UMIs, you're better off not using a very long UMI.

simulation_umi_length_xintercept_difference_cv_deduped

As the PCR cycles are increased, all methods except directional adjacency show a decrease in accuracy and an increase in variability, with adjacency being the second most robust to high PCR cycle numbers. We see the same as we increase the DNA polymerase error rate. This confirms that the directional adjacency method is better able to handle errors originating from the amplification stage.

simulation_pcr_cycles_xintercept_difference_cv_deduped

simulation_dna_pol_error_rate_xintercept_difference_cv_deduped

Is a non-heuristic solution possible?

The methods we propose are definitely an improvement on “unique” deduplication using UMIs within these simulations.  We’ve also shown that the application of our proposed methods can have a real impact on biological interpretation of single cell RNA-Seq data (more to come on that soon). However, all implemented methods are heuristic approaches to the problem. My initial expectation was that we would be able to apply some probabilistic approach to determine the most likely set of UMIs which would account for the UMIs observed at a given genomic location. So far, we’ve been unable to develop such a method. If anyone would like to discuss alternative probabilistic approaches, or the script in general, please get in touch.