Estimating the number of true cell barcodes in single cell RNA-Seq (part 1)

Single Cell RNA-Seq (scRNA-Seq) took a major leap forward in 2015 when two Harvard groups published oil droplet-based methods for sequencing >10,000 cells in a single experiment for much less than $1 per cell (Klein et al., Macosko et al.). Then in early 2016, 10X Genomics launched their commercial version of droplet scRNA-Seq, followed more recently by competitor technologies from Illumina and Bio-Rad. From my limited exposure, the 10X product seems to be the most popular right now, but this is a very new field so there are likely improved methods on the horizon which could shake things up. These techniques all label cells in fundamentally the same manner: using microfluidic devices, individual cells are encapsulated together with a random cell barcode in a droplet. The reverse transcription of the RNA and subsequent PCR amplification are then performed inside each droplet separately, such that all cDNA from a single cell is labelled with the same cell barcode.

The techniques themselves are really exciting but I want to focus here on a technical question that arises during the analysis. A crucial aspect of droplet scRNA-Seq is that the number of possible cell barcodes is many orders of magnitude greater than the number of cells sequenced. Hence, if each sequenced cell gets one cell barcode at random, the probability of two cells having the same barcode is very low (if you’re interested, you can work out the probability using the solution to the famous ‘birthday problem’; there’s a quick sketch below). However, when you get your sequencing data back and extract the cell barcodes detected, not all of these will be ‘true’ cell barcodes: some will contain sequencing errors or PCR errors. Furthermore, some cell barcodes may have been incorporated into cDNA that does not originate from your cells (more on this later). Hence, one of the first challenges in droplet scRNA-Seq is working out how many cells you’ve actually sequenced! In the supplementary material of the original drop-seq paper (Macosko et al.), the authors suggest plotting the cumulative frequency and looking for an inflection point, or ‘knee’. They show this in supplementary Figure 3A (reproduced below in Figure 1 with permission). The 10X analysis pipeline also appears to take a similar approach, from what I can gather from their website.

[Figure 1]

Figure 1. Cumulative frequency for drop-seq data. Plot taken from Macosko et al. supplementary Figure 3 with permission. Dashed line shows the inferred position of the point of inflection.
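
As a quick aside, that birthday-problem collision probability is easy to approximate. Here's a minimal sketch; the numbers are purely illustrative, since the barcode length (and hence the barcode space) varies between platforms:

```python
import math

def collision_probability(n_barcodes, n_cells):
    """Standard birthday-problem approximation: probability that at
    least two cells receive the same barcode, for k cells drawing at
    random from N possible barcodes: P ~= 1 - exp(-k(k-1) / 2N)."""
    pairs = n_cells * (n_cells - 1) / 2
    return 1 - math.exp(-pairs / n_barcodes)

# Illustrative numbers only: a 16 bp barcode gives 4**16 (~4.3 billion)
# possible sequences, so even with 10,000 cells a clash is unlikely.
print(collision_probability(4**16, 10000))  # ~0.012
```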

Now, if you’ve been on this blog before, you’ll probably have seen the posts about UMI-tools. This is a tool suite I’ve co-developed with Ian Sudbery (University of Sheffield) to accurately identify PCR duplicates in techniques utilising unique molecular identifiers (UMIs). UMIs are similar to cell barcodes in that they are just a sequence of bases used to barcode a read, though in this case the barcode relates to the unique molecule prior to PCR amplification. The novel aspect of our tools is that we use networks to identify PCR duplicates in an error-aware manner. For background on the network methods, please see our publication or my previous blog post. For the next major UMI-tools release (version 0.5), we’re planning to expand our support for single cell RNA-Seq analyses, as this seems to be the major application of the tool to date according to our GitHub issues. As part of this, we wanted to expand the extract command. Currently this command extracts the UMI sequence from the reads and appends it to the read name so that the UMI can be accessed in the downstream BAM. We wanted to extend the extract command so that it can:

  • extract cell barcodes
  • filter cell barcodes against a whitelist of expected cell barcodes (a rough sketch of the first two steps follows this list)
  • or, where no whitelist exists (as in the case of droplet scRNA-Seq), create a whitelist of ‘true’ cell barcodes from the data and filter against this
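
To give a flavour of the first two points, here's a sketch of the underlying logic. The barcode positions and the read-name convention are illustrative only; the actual command-line interface is still being worked out:

```python
def extract_and_filter(name, sequence, quality, whitelist,
                       cb_len=12, umi_len=8):
    """Sketch: take the cell barcode and UMI from the 5' end of a read
    (hypothetical layout: cell barcode = first 12 bases, UMI = next 8),
    discard the read if the cell barcode is not in the whitelist, and
    otherwise append both barcodes to the read name so that they
    survive into the downstream BAM."""
    cell_bc = sequence[:cb_len]
    umi = sequence[cb_len:cb_len + umi_len]
    if whitelist is not None and cell_bc not in whitelist:
        return None  # unexpected cell barcode: drop the read
    trimmed_seq = sequence[cb_len + umi_len:]
    trimmed_qual = quality[cb_len + umi_len:]
    return f"{name}_{cell_bc}_{umi}", trimmed_seq, trimmed_qual
```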

For the final point, we needed to implement an automated version of the ‘knee’ method as described in the drop-seq paper. We also wondered whether our network based methods might be suitable for this purpose as they are designed to handle the situation where you have an unknown number of ‘true’ barcodes plus additional barcodes which have been generated via errors from the ‘true’ barcodes. I therefore set out to implement a ‘knee’ method and compare it with our network methods.

To start with, I took the SCRB-Seq data from Soumillon et al since I had it to hand and it includes 384 expected cell barcodes which I could use as a ground truth. All other barcodes observed should therefore have originated as errors from the 384 true barcodes. To be clear, this is not droplet scRNA-Seq, but it gives us a definitive ground truth for the purposes of initial performance testing. Plotting the cumulative frequency, we can clearly see an inflection near the 384th barcode as expected (Figure 2; blue line).

[Figure 2]

Figure 2. Cumulative frequency for SCRB-Seq cell barcodes. The blue line marks the position of the 384th barcode, which is close to the inflection point of the curve.

I initially attempted to identify the inflection point by fitting a spline function to the cumulative frequency, but quickly realised it was much easier to work from the distribution of counts (log10) per cell barcode (Figure 3). Here the inflection point is represented by the local minimum between the distributions of counts for ‘true’ and ‘false’ barcodes. We can see that the local minimum is very close to the 384th cell barcode, so this automated knee method seems to work well for this data. The obvious downside of this hard thresholding on read counts with the knee method is that there will likely be some overlap between the read counts for real and false barcodes (as we can see in Figure 3). This is why we believed the network-based methods might be more accurate.

[Figure 3]

Figure 3. Density of reads per SCRB-Seq cell barcode. The blue line marks the position of the 384th barcode, which is close to the inflection point of the curve in Figure 2. The orange line marks the local minimum used in my implementation of the ‘knee’ method.
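
For the curious, here's a minimal sketch of how such a local minimum can be detected automatically, using a Gaussian kernel density estimate. The bandwidth and the choice of which minimum to use are details I'm glossing over:

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

def knee_threshold(counts_per_barcode):
    """Find the local minimum in the density of log10(reads per cell
    barcode); barcodes with counts above the returned threshold are
    taken to be 'true' cell barcodes."""
    log_counts = np.log10(np.fromiter(counts_per_barcode.values(), float))
    xs = np.linspace(log_counts.min(), log_counts.max(), 1000)
    density = gaussian_kde(log_counts)(xs)
    minima = argrelextrema(density, np.less)[0]
    # Assume the last local minimum is the one separating 'false' from
    # 'true' barcodes - this held for the data here, but check your own.
    return 10 ** xs[minima[-1]]

# counts = {barcode: n_reads, ...}
# whitelist = {bc for bc, n in counts.items() if n > knee_threshold(counts)}
```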

To compare with our network-based methods, I took increasingly large samples of reads from the SCRB-Seq sample, estimated the true cell barcodes using the knee method and our ‘adjacency’ and ‘directional’ methods, and calculated the sensitivity and false positive rate (FPR) of the different approaches. As you can see in Figure 4, the ‘directional’ method has a slightly higher sensitivity and lower FPR than the ‘knee’ method, which suggests that this network-based method might be more suitable for identifying the true cell barcodes. The knee method slightly outperforms simply selecting the top 384 most abundant barcodes (‘top 384’) in terms of sensitivity, although this comes at the cost of an increased FPR. The ‘adjacency’ method has a high sensitivity but tends towards overestimating the number of true cell barcodes as the number of sampled reads is increased. This is because adjacency only groups barcodes together if they are all connected to the same hub barcode, each by a single edge. Hence, any barcode with 2 or more errors will always be in a separate group from the true barcode from which it originates. In contrast, the directional method is not limited to single-edge paths (see the sketch below).

[Figure 4: sensitivity and FPR panels]

Figure 4. Sensitivity (Top) and False Positive Rate (FPR; Middle, Bottom) for 3 methods to detect the number of true cells sequenced. “Top 384” = select the top 384 barcodes. Shown for comparison with knee method. Middle = all samples. Bottom = adjacency excluded to show difference between knee and directional.
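
For a flavour of how the directional method can be applied to cell barcodes, here's a deliberately naive sketch. The edge criterion (counts[A] >= 2 * counts[B] - 1 at edit distance 1) follows our publication, but the real implementation resolves whole networks starting from the most abundant node rather than using this simple all-vs-all loop:

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between two equal-length barcodes."""
    return sum(x != y for x, y in zip(a, b))

def directional_true_barcodes(counts):
    """Naive sketch of the 'directional' method: draw an edge A -> B
    when the barcodes differ at a single base and
    counts[A] >= 2 * counts[B] - 1. A barcode with an incoming edge is
    treated as an error of its parent, so the 'true' barcodes are the
    nodes left without a parent. Because errors can chain through
    intermediate nodes, barcodes two or more errors away can still be
    absorbed, unlike with the 'adjacency' method."""
    has_parent = set()
    for a, b in combinations(counts, 2):
        if hamming(a, b) == 1:
            if counts[a] >= 2 * counts[b] - 1:
                has_parent.add(b)
            elif counts[b] >= 2 * counts[a] - 1:
                has_parent.add(a)
    return [bc for bc in counts if bc not in has_parent]
```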

Next, I took a single sample from the drop-seq paper where the authors had inferred that there were ~570 cells using a visual assessment of the cumulative frequency plot (this is the sample shown in Figure 1). They also backed up this assertion with an assessment of the species-specificity of the reads for each cell barcode, which suggested that most of the “cells” below this inflection point were actually droplets which did not contain a cell and simply contained ‘ambient’ RNA. So, 570 seemed a reasonable truth to test the method against. My ‘knee’ method worked well here: using the first 50M reads, it estimated that there were 573 cell barcodes (Figure 5).

[Figure 5]

Figure 5. Cumulative frequency plot for cell barcodes in the first 50M drop-seq reads. The ‘knee’ method estimated 573 true cell barcodes, so its threshold is plotted almost exactly over the ‘top 570’ inflection point prediction. The x-axis has been limited to the 10 000 most abundant cell barcodes.

In contrast, the directional method failed miserably! For starters, the sheer number of cell barcodes precluded the inclusion of all cell barcodes detected, as the network building requires an all-vs-all comparison between the barcodes to calculate the edit distances. With 50M reads, the sensitivity was OK (95%), but the specificity was just 11%, and the specificity unexpectedly increased as the number of reads parsed was decreased! Then came the realisation…

[Image: Homer Simpson, “D’oh!”]

As mentioned above, the network-based methods were designed to deal with a pool of barcodes where some barcodes are true and the remaining barcodes are the result of errors from the true barcodes. However, here we have a situation where the ‘false’ barcodes could be due to either errors (I’ll call these ‘error barcodes’) or droplets without a cell which have amplified ambient RNA (‘ambient barcodes’). We can identify these two separate types of false barcodes by looking at the minimum edit distance between each barcode and the true barcodes (a sketch of this comparison follows Figure 7). For error barcodes, we would expect this minimum edit distance to be 1 in most cases. For ambient barcodes, we would expect a distribution of minimum edit distances similar to that of barcodes selected at random. Figure 6 below shows the distribution of counts per barcode, split into 4 bins. Figure 7 shows the distribution of edit distances for each bin, along with a null distribution from a randomly generated set of cell barcodes. Bin 4 contains the true barcodes and shows a null-like distribution of edit distances, as expected. We can clearly see that bin 3, which has slightly lower counts per cell barcode than the true barcodes, is enriched for single edit distances relative to the null. This strongly suggests that many of these cell barcodes are error barcodes with a single error relative to the true cell barcode from which they originate. However, bin 2, and bin 1 especially, show a more null-like distribution, suggesting most of these cell barcodes are instead ambient barcodes.

[Figure 6]

Figure 6. Distribution of reads per cell barcode (log10) from the first 50M reads, separated into 4 bins. Bin 4 represents the ‘true’ barcodes.

[Figure 7]

Figure 7. Distribution of minimum edit distance compared to true barcodes (bin 4) for each bin shown in figure 6. Random = 10 000 randomly generated cell barcodes.
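
Here's a sketch of the comparison behind Figure 7, using Hamming distance as a stand-in for edit distance (so INDELs are ignored) and assuming 12 bp barcodes for the random null:

```python
import random

def min_hamming_to_truth(barcode, true_barcodes):
    """Minimum Hamming distance from one barcode to any true barcode."""
    return min(sum(x != y for x, y in zip(barcode, t)) for t in true_barcodes)

def null_min_distances(true_barcodes, n=10000, bc_length=12, seed=42):
    """Null distribution: the same minimum distances computed for
    randomly generated barcodes (bc_length=12 assumes drop-seq-style
    cell barcodes; adjust for other platforms)."""
    rng = random.Random(seed)
    random_barcodes = ("".join(rng.choice("ACGT") for _ in range(bc_length))
                       for _ in range(n))
    return [min_hamming_to_truth(bc, true_barcodes) for bc in random_barcodes]

# observed = [min_hamming_to_truth(bc, true_barcodes) for bc in bin_barcodes]
# then compare the histogram of `observed` against the null distribution
```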


In summary, the knee method as implemented here works very well on the samples tested so far, so we can proceed with automatic detection of the number of true cell barcodes in UMI-tools extract. We will also enable the use of the directional method, since this is more accurate when you know that all false cell barcodes are the result of errors. However, I’d recommend sticking with the knee method unless you are absolutely certain this is the case.

Finally, if you’re already using UMI-tools for scRNA-Seq, or thinking about doing so, keep a look out for version 0.5. Along with the added functionality described here, we’re adding a count command to yield counts per cell per gene directly from featureCounts/HTSeq output, and we hope to include a scRNA-Seq-specific tutorial for this release too.

EDIT: Following the comments below I’ve continued this analysis into a second post to look at INDELs in cell barcodes and comparing UMI-tools with sircel.

An introduction to the CGAT group

Introducing CGAT

This blog follows the journey of a group of postdoc biologists into the world of computational genomics.  In this initial post we’ll introduce who we are and why we’re undertaking this adventure together.

It took more than ten years to sequence a single human genome using capillary sequencing technology. We’ve come even further since the dark days of painstakingly running a ruler down a sequencing gel (tales abound of even John Sulston taking his turn on the gels in the early stages of the human genome project). Capillary sequencing allowed this process to be automated, but it was still slow and arduous, with relatively low throughput per sequencer. The advent of sequencing by synthesis and massively parallel sequencing has hugely increased output whilst reducing the cost per nucleotide to fractions of a penny. Initially reads were short, mapping was hard and error rates were high. Tools were developed to align these short reads back to the genomes and transcriptomes they were derived from. As the chemistry improved, read lengths increased, error rates fell, biases were minimised and more sophisticated aligners were developed to map short sequencing reads to genomes across the kingdoms of life on Earth.

With these advances came accessibility to large data sets, as small labs started to generate gigabytes of sequencing data at relatively low cost. However, analysis of these data sets requires large computational infrastructure and expert knowledge, resources that are not available to all biology labs and institutes. Additionally, analysis of NGS data needs to be driven by the biology; after all, the sequencing data were generated to answer a biological or medical question. This has led to a bottleneck in the analysis of these data and a need for more skilled individuals who can bring together biological knowledge, statistical understanding and computational skills in handling and integrating large data sets.

The Computational Genomics Analysis and Training (CGAT) programme was set up by the UK Medical Research Council to train researchers in these skills and capitalise on the wealth of NGS data being generated. CGAT fellows embark on a three-year programme which combines training and research. Fellows train in statistics and computational biology whilst collaborating with UK scientists to perform data analysis and interpretation in wide-ranging research projects.

This blog will follow the fellows as we develop through this training programme, recording our experiences as well as more technical content, as befits the nature of our work. Expect paper recommendations, brief highlights of analytical tools we’re using, lessons we’ve learned in programming and some guest posts from colleagues and collaborators.

As with all modern biological research, this blog is the result of a collaboration of individuals with a shared motivation.  We’ve all come from biological “wet lab” backgrounds to learn how to leverage computational tools to discover some cool biology.

If you’re interested in who the current fellows are, follow the links in the sidebar to our Gravatar profiles or check out the profiles and testimonials on the CGAT website.

That’s all for now folks.