Estimating the number of true cell barcodes in single cell RNA-Seq (part 2)

This post starts where we left off in the first part. If you haven’t already done so, I’d recommend reading the first part now.

I hadn't planned on writing another post but no sooner had I finished the first part of this blog post than I discovered that the Pachter group have developed a tool to perform the same task (sircel, pronounced 'circle'). This appears to be in the early stages of development but there's a bioRxiv manuscript out which happens to cover the same input data as I discussed in the previous post (thanks to Avi Srivastava for bringing this to my attention).

In the discussion with Avi in the comments of the previous post I also realised that I had failed to clarify that only base substitutions will be detectable when using our network methods. As it happens, one of the motivations for the development of sircel appears to be handling INDELs, which they claim can be present in >25% of drop-seq barcodes. I can't see where they get this number from (in the manuscript they reference the original Macosko et al paper but I can't find this 25% figure anywhere), so I'm assuming for now that this is based on additional analyses they've performed. This seems like an alarmingly high figure to me given that the rate of INDELs is ~1 order of magnitude lower than that of substitutions (see Fig. 2/Table 2 here) and, even at the start of the read where the cell barcode resides, the per-base rate of INDELs does not usually exceed 0.001. I would also assume that cell barcodes with INDELs would have far fewer reads and so drop below the threshold using the 'knee' method. Hence, while some cell barcodes may be discarded due to INDELs, I would be surprised if an INDEL-containing barcode was erroneously included in the accepted cell barcodes.

How common are INDELs?

Before I tested out sircel, I wanted to independently assess what proportion of cell barcodes might have a substitution or INDEL. To do this, I returned to the drop-seq data from the last post (GSE66693). Previously, I showed the distribution of reads per cell barcode separated into 4 bins, where bin 4 represents the accepted barcodes above the knee threshold and bins 1-3 are largely populated by different types of erroneous barcodes (reproduced below as Figure 1). I compared each barcode to all barcodes in bin 4 and looked for a match allowing for one error (substitution, insertion or deletion). The results are tabulated below. These figures are an upper estimate of the number of barcodes/reads harbouring an error relative to the true cell barcodes, since a single barcode may contribute to all three error types if it matches a bin 4 barcode with each error type. We can see that ~1.5% of cell barcodes may contain an INDEL, far less than the 25% suggested in the sircel manuscript. Furthermore, when we present the figures according to the number of reads, it appears around 3% of reads have a cell barcode with an INDEL, whilst 7% have a substitution. Thus, we can be quite confident that we're not losing lots of data if we don't error-correct cell barcodes, or only correct substitution errors.

indrop_bins

Figure 1. Distribution of reads per cell barcode (log 10) from first 50M reads, separated into 4 bins. Bin 4 represents the ‘true’ barcodes.


Table 1: Columns 1 & 2: The number and fraction of cell barcodes which match a bin 4 barcode allowing one error. Columns 3 & 4: The number and fraction of reads with a cell barcode which matches a bin 4 barcode allowing one error.
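For anyone wanting to reproduce this kind of comparison, the one-error matching is easy to sketch with the third-party Python regex module, which supports fuzzy matching. This is just a minimal illustration of the approach, not the exact code used to generate Table 1.

import regex

def error_types(candidate, accepted_barcodes):
    """Return the set of single-error types by which `candidate` matches
    at least one accepted (bin 4) barcode."""
    found = set()
    for accepted in accepted_barcodes:
        if candidate == accepted:
            continue  # skip exact self-matches when comparing bin 4 against itself
        # {s<=1} allows one substitution, {i<=1} one insertion, {d<=1} one deletion
        if regex.fullmatch("(?:%s){s<=1}" % accepted, candidate):
            found.add("substitution")
        if regex.fullmatch("(?:%s){i<=1}" % accepted, candidate):
            found.add("insertion")
        if regex.fullmatch("(?:%s){d<=1}" % accepted, candidate):
            found.add("deletion")
    return found

Counting, for each bin, the barcodes (and their reads) for which each error type is found gives the kind of breakdown shown in Table 1 and Figure 2.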

Are INDEL-containing barcodes passing the knee threshold?

Next, I wanted to investigate whether the INDEL barcodes pass the knee threshold and become 'accepted'. I therefore split the above data according to bin and focused on bin 4. As you can see in Figure 2 below, there are indeed a small number of bin 4 barcodes which are a single error away from another bin 4 barcode. Our null expectations (from 100 000 randomly generated barcodes) for the percentage of substitution, deletion and insertion matches by chance are 0.14, 0.11 and 0.13%, respectively. In contrast, amongst the 575 bin 4 barcodes, there are 6 barcodes (1.0%) which match at least one other bin 4 barcode with one error. 4/6 of these barcodes have a match with an INDEL and all have relatively low counts, suggesting these are indeed errors. It seems I may have been wrong to assume an INDEL-containing barcode will not pass the knee threshold. The results for bins 1-3 mirror my previous analysis using edit-distance and suggest that bin 3 is mainly made up of error-containing barcodes whilst bins 1 & 2 appear to originate from amplification of ambient RNA in empty droplets.

Figure 2. The frequency of cell barcodes in each bin which are one error away from at least one of the bin 4 barcodes. Random = 100 000 randomly generated barcodes.

How does sircel compare to the knee method?

So, INDELs are observed in this drop-seq data, although perhaps not in 25% of all cell barcodes, and INDEL-containing barcodes can pass the hard knee threshold for acceptance. Now on to sircel. As you would expect from the Pachter group, sircel introduces a novel, elegant solution to detect substitutions and INDELs in barcodes. Rather than using edit-distance or fuzzy matching, they 'circularise' the barcodes and form a de Bruijn graph using the kmers from the circularised barcodes. The circularisation ensures that even with a large kmer size, barcodes containing an error will share at least a portion of their path through the de Bruijn graph with the cognate error-free barcodes. The edges are weighted according to the number of times each edge is observed and each circular path is given a score according to the minimal weight of all its edges. This score is used to filter the paths to select the high-weight, correct paths, and incorrect paths are merged to the correct paths. Notably, the thresholding is done using a method analogous to the knee method, whereby the threshold is set as the local maxima of the smoothed first derivative of the cumulative distribution of path scores. The neat thing about the sircel method is that it enables simultaneous identification of this inflection point to identify the threshold for true barcodes, and error correction to the true barcodes.

To compare the results for sircel and my knee method, I ran sircel on the GSE66693 drop-seq sample where the authors had suggested there were ~570 true cell barcodes. Previously, using the first 50M reads, I had estimated that there were 573 true barcodes using the knee method. As per the sircel manuscript recommendation, I used only 1M reads to run sircel, which yielded 567 true barcodes. This is slightly lower than the number quoted in the manuscript for 1M reads (582) but perhaps this is due to more recent updates? The methods agree on 563 cell barcodes, but what about the cell barcodes where they disagree? 4 of the 10 knee method-specific cell barcodes are a single error away from another barcode above the threshold, suggesting these may in fact be errors. The remaining 6 barcodes all have very low counts. Meanwhile, the 4 sircel-specific cell barcodes fall just below the knee threshold (6170-9576 reads per barcode; threshold = 9960). For these 10 barcodes, the difference appears to be due to differences in the number of reads sampled. I wanted to check this by using 50M reads with sircel but the run-time becomes excessively long. Unfortunately, going the other way and using just 1M reads with the knee method yields unstable estimates, and since it takes <6 minutes to run the knee method with 50M reads, we would never recommend using the knee method with just 1M reads.

So in essence, the major difference between the knee method and sircel is that sircel effectively corrects the small number of error barcodes above the threshold.
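Going back to the circularisation trick for a moment, here is a toy illustration (in no way sircel's actual implementation): kmers are taken from the barcode joined end-to-end, so every position starts a kmer and a single error only disrupts the kmers that overlap it.

def circular_kmers(barcode, k):
    """Return the kmers of the barcode read as a circle (one per position)."""
    doubled = barcode + barcode
    return [doubled[i:i + k] for i in range(len(barcode))]

true_bc = "ACGGTCATGACT"
error_bc = "ACGGTCTTGACT"   # single substitution relative to true_bc
shared = set(circular_kmers(true_bc, 8)) & set(circular_kmers(error_bc, 8))
# the kmers that do not overlap the error are still shared with the true barcode,
# so the erroneous barcode traces part of the same path through the de Bruijn graph
print(len(shared))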

Can we improve the knee method?

It does look like sircel does a good job of removing error barcodes, but we could easily replicate this by adding an INDEL-aware error detection step after the knee threshold. During the development of UMI-tools we considered detecting INDELs as well as substitutions but decided against it because: a) as mentioned at the top, the rate of INDELs is usually ~1 order of magnitude lower than substitutions (though this doesn't appear to hold true for cell barcodes), and b) UMI-tools groups reads together by their alignment position so the current framework does not allow the detection of INDELs. However, when we're dealing with cell barcodes, we consider all barcodes at once and hence it's much easier to identify INDELs. For our network methods, we can simply replace the edit-distance threshold with a fuzzy regex, allowing one error (substitution, insertion or deletion). The fuzzy regex matching is ~30-fold slower than our cythonised edit-distance function but this isn't an issue here since we only need to build the network from the cell barcodes which pass the knee threshold. From the network we can identify the error barcodes and either correct these to the correct barcode or else discard them. This will now be the default method for UMI-tools extract, with the option to discard, correct or ignore potential barcode errors above the knee threshold. We will also include the option to correct barcodes which fall below the threshold.
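As a rough sketch of what such a post-knee step might look like (the names below are made up and this is not the UMI-tools implementation): any accepted barcode within one error of a more abundant accepted barcode gets flagged and mapped to that barcode.

import regex

def one_error_away(bc1, bc2):
    """True if bc1 matches bc2 allowing at most one error of any kind."""
    return regex.fullmatch("(?:%s){e<=1}" % bc2, bc1) is not None

def correct_accepted_barcodes(counts):
    """counts: dict of barcode -> read count for barcodes above the knee threshold.
    Returns a dict mapping each putative error barcode to its proposed correction."""
    corrections = {}
    by_abundance = sorted(counts, key=counts.get, reverse=True)
    for low in reversed(by_abundance):            # least abundant first
        for high in by_abundance:                 # most abundant first
            if counts[high] <= counts[low]:
                break
            if one_error_away(low, high):
                corrections[low] = high           # correct (or discard) `low`
                break
    return corrections

Whether the flagged barcodes are then corrected or simply discarded is exactly the discard/correct/ignore choice described above.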

A final thought: To correct or not?

The cost of incorrectly merging two cell barcodes through cell barcode error correction could be drastic, e.g. appearing to create cells 'transitioning' between discrete states. On the other hand, discarding reads from error barcodes rather than correcting them reduces the sequencing depth and hence increases the noise. In the drop-seq data above, 55.7% of reads are retained without error correction, with a further 9.2% of reads potentially correctable. If all barcodes above the threshold and within one error of another barcode above the threshold were removed, this would only mean the loss of 0.3% of reads. Given the potential dangers of incorrect error correction, it may be prudent in some cases to discard all cell barcodes with errors and work with 55.4% of the input reads, rather than correcting errors in order to retain 64.9% of reads.

Estimating the number of true cell barcodes in single cell RNA-Seq (part 1)

Single Cell RNA-Seq (scRNA-Seq) took a major leap forward in 2015 when two Harvard groups published oil droplet-based methods for sequencing >10,000 cells in a single experiment for <<$1/cell (Klein et al, Macosko et al). Then in early 2016, 10X Genomics launched their commercial version of droplet scRNA-Seq, followed more recently by the Illumina and Bio-Rad competitor technology. From my limited exposure, the 10X product seems to be the most popular right now but this is a very new field so there are likely more improved methods on the horizon which could shake things up. These techniques all label cells in fundamentally the same manner. Using microfluidic devices, individual cells are encapsulated together with a random cell barcode in a droplet. The reverse transcription of the RNA and subsequent PCR amplification are then performed inside each droplet separately, such that all cDNA from a single cell is labelled with the same cell barcode.

The techniques themselves are really exciting but I want to focus here on a technical question during the analysis. A crucial aspect of droplet scRNA-Seq is that the number of possible cell barcodes is many orders of magnitude greater than the number of cells sequenced. Hence, if each sequenced cell gets one cell barcode at random, the probability of two cells having the same barcode is very low (if you're interested, you can work out the probability using the solution to the famous 'birthday problem'). However, when you get your sequencing data back and extract the cell barcodes detected, not all of these will be 'true' cell barcodes and some barcodes will contain sequencing errors or PCR errors. Furthermore, some cell barcodes may have been incorporated into cDNA that does not originate from your cells (more on this later). Hence, one of the first challenges in droplet scRNA-Seq is working out how many cells you've actually sequenced! In the supplementary material of the original drop-seq paper (Macosko et al) they suggest plotting the cumulative frequency and looking for an inflection point, or 'knee'. They show this in supplementary Figure 3A (reproduced below in Figure 1 with permission). The 10X analysis pipeline also appears to take a similar approach, from what I can gather from their website.

figs3_lrg-e1495127255597.jpg

Figure 1. Cumulative frequency for drop-seq data. Plot taken from Macosko et al supplementary figure 3 with permission.  Dashed line shows inferred position of point of inflection.
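As an aside, the birthday-problem calculation mentioned above is easy to do. Assuming a 12 bp cell barcode (so 4^12, roughly 16.8M, possible barcodes; the exact length depends on the protocol), the standard approximation gives:

from math import exp

def p_any_collision(n_cells, barcode_length=12):
    """Probability that at least two cells share a barcode (birthday-problem approximation)."""
    n_barcodes = 4 ** barcode_length
    return 1 - exp(-n_cells * (n_cells - 1) / (2 * n_barcodes))

print(p_any_collision(570))             # ~0.01 for a ~570-cell sample
print(10000 * 9999 / (2 * 4 ** 12))     # expected barcode-sharing pairs at 10,000 cells: ~3

So even at 10,000 cells, only a handful of cell pairs are expected to share a barcode, affecting a negligible fraction of cells.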

Now, if you've been on this blog before, you'll probably have seen the posts about UMI-tools. This is a tool suite I've co-developed with Ian Sudbery (University of Sheffield) to accurately identify PCR duplicates in techniques utilising unique molecular identifiers (UMIs). UMIs are similar to cell barcodes in that they are just a sequence of bases used to barcode a read, though in this case, the barcode relates to the unique molecule prior to PCR amplification. The novel aspect of our tools is that we used networks to identify PCR duplicates in an error-aware manner. For background on the network methods, please see our publication or my previous blog post. For the next major UMI-tools release (version 0.5), we're planning to expand our support for single cell RNA-Seq analyses as this seems to be the major application of the tool to date according to our GitHub issues. As part of this, we wanted to expand the extract command. Currently this command extracts the UMI sequence from the reads and appends it to the read name so that the UMI can be accessed in the downstream BAM. We wanted to extend the extract command so that it can:

  • extract cell barcodes
  • filter cell barcodes against a whitelist of expected cell barcodes
  • or, where no whitelist exists (as in the case of droplet scRNA-Seq), create a whitelist of ‘true’ cell barcodes from the data and filter against this

For the final point, we needed to implement an automated version of the ‘knee’ method as described in the drop-seq paper. We also wondered whether our network based methods might be suitable for this purpose as they are designed to handle the situation where you have an unknown number of ‘true’ barcodes plus additional barcodes which have been generated via errors from the ‘true’ barcodes. I therefore set out to implement a ‘knee’ method and compare it with our network methods.

To start with, I took the SCRB-Seq data from Soumillon et al since I had it to hand and it includes 384 expected cell barcodes which I could use as a ground truth. All other barcodes observed should therefore have originated as errors from the 384 true barcodes. To be clear, this is not droplet scRNA-Seq, but it gives us a definitive ground truth for the purposes of initial performance testing. Plotting the cumulative frequency, we can clearly see an inflection near the 384th barcode as expected (Figure 2; blue line).

no_methods_cumsum

Figure 2. Cumulative frequency for SCRB-Seq cell barcodes. Blue line represents the position of the 384th cell which is close to the inflection point in the curve.

I initially attempted to identify the inflection point by fitting a spline function to the cumulative frequency but quickly realised it was much easier to work from the distribution of counts (log10) per cell barcode (Figure 3). Here the inflection point is represented by the local minimum between the distributions of counts for 'true' and 'false' barcodes. We can see that the local minimum is very close to the 384th cell barcode, so this automated knee method seems to work well for this data. The obvious downside of this hard thresholding on read counts with the knee method is that there will likely be some overlap between the read counts for real and false barcodes (as we can see in Figure 3). This is why we believed the network-based methods may be more accurate.

scrb_seq_one_min_dist

Figure 3. Density of reads per SCRB-Seq cell barcode. Blue line represents the position of the 384th cell which is close to the inflection point in the curve in figure 2. Orange line represents the local minimum used in my implementation of the 'knee' method.
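For illustration, here is a minimal sketch of this kind of automated knee detection (assuming a Gaussian KDE is an adequate smoother and that the distribution is bimodal; this is the idea rather than the exact UMI-tools implementation):

import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

def knee_threshold(reads_per_barcode):
    """Return a read-count threshold from the local minimum of the
    density of log10(reads per barcode)."""
    log_counts = np.log10(np.asarray(reads_per_barcode, dtype=float))
    density = gaussian_kde(log_counts)
    xs = np.linspace(log_counts.min(), log_counts.max(), 1000)
    ys = density(xs)
    minima = argrelextrema(ys, np.less)[0]
    # naively take the right-most local minimum as the boundary between
    # false and true barcodes; a robust implementation needs more care here
    return 10 ** xs[minima[-1]]

Barcodes with more reads than this threshold are then accepted as 'true'.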

To compare with our network-based methods, I took increasing samples of reads from the SCRB-Seq sample, estimated the true cell barcodes using the knee method and our 'adjacency' and 'directional' methods, and calculated the sensitivity and false positive rate (FPR) of the different approaches. As you can see in Figure 4, the 'directional' method has a slightly higher sensitivity and lower FPR than the 'knee' method, which suggests that this network-based method might be more suitable for identifying the true cell barcodes. The knee method slightly outperforms just selecting the top 384 most abundant barcodes ('top 384') in terms of higher sensitivity, although this comes at the cost of increased FPR. The 'adjacency' method has a high sensitivity but tends towards overestimating the number of true cell barcodes as the number of sampled reads is increased. This is because adjacency only groups barcodes together if they are each a single edit away from the same hub barcode. Hence, any barcode with 2 or more errors will always be in a separate group from the true barcode from which it originates. In contrast, the directional method is not limited to single-edge paths.


Figure 4. Sensitivity (Top) and False Positive Rate (FPR; Middle, Bottom) for 3 methods to detect the number of true cells sequenced. “Top 384” = select the top 384 barcodes. Shown for comparison with knee method. Middle = all samples. Bottom = adjacency excluded to show difference between knee and directional.
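For readers unfamiliar with the network methods, the core of the directional idea can be sketched in a few lines (a simplification of the approach in the UMI-tools paper, not the actual implementation): a barcode is absorbed into the group of a more abundant barcode if they differ by a single substitution and the more abundant barcode has at least 2n-1 counts.

def hamming(a, b):
    """Number of mismatching positions (substitutions only; the network methods only detect substitutions)."""
    return sum(x != y for x, y in zip(a, b))

def directional_groups(counts):
    """counts: dict of barcode -> read count. Returns dict of barcode -> group representative."""
    parent = {}
    by_abundance = sorted(counts, key=counts.get, reverse=True)
    for i, bc in enumerate(by_abundance):
        parent[bc] = bc                              # start as its own representative
        for rep in by_abundance[:i]:                 # only more abundant barcodes
            if counts[rep] >= 2 * counts[bc] - 1 and hamming(rep, bc) == 1:
                parent[bc] = parent[rep]             # absorb into that barcode's group
                break
    return parent

The number of distinct representatives is then the estimated number of true barcodes; the adjacency method differs in that a barcode can only be merged directly into the hub barcode, not via intermediate error barcodes.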

Next, I took a single sample from the drop-seq paper where the authors had inferred that there were ~570 cells using a visual assessment of the cumulative frequency plot (this is the sample shown in Figure 1). They also backed up this assertion with an assessment of the species-specificity of the reads for each cell barcode, which suggested that most of the "cells" below this inflection point were actually droplets which did not contain a cell and simply contained 'ambient' RNA. So, 570 seemed a reasonable truth to test the method against. My 'knee' method worked great: using the first 50M reads, it estimated that there were 573 cell barcodes (Figure 5).

indrop_cumsum

Figure 5. Cumulative frequency plot for cell barcodes in the first 50M drop-seq reads. The ‘knee’ method estimated 573 true cell barcodes and hence is plotted over the ‘top 570’ inflection point prediction. The X-axis has been limited to the 10 000 most abundant cell barcodes.

In contrast, the directional method failed miserably! For starters, the sheer number of cell barcodes precluded the inclusion of all cell barcodes detected, as the network building requires an all-vs-all comparison between the barcodes to calculate the edit distances. With 50M reads, the sensitivity was OK (95%), but the specificity was just 11%, and the specificity unexpectedly increased as the number of reads parsed was decreased! Then came the realisation…

[Image: Homer Simpson "D'oh!"]

As mentioned above, the network-based methods were designed to deal with a pool of barcodes where some barcodes are true and the remaining barcodes are the result of errors from the true barcodes. However, here we have a situation where the 'false' barcodes could be due to either errors (I'll call these 'error barcodes'), or droplets without a cell which have amplified ambient RNA ('ambient barcodes'). We can identify these two separate types of false barcodes by looking at the minimum edit-distance between each barcode and the true barcodes. For error barcodes, we would expect this minimum edit-distance to be 1 in most cases. For ambient barcodes, we expect a similar distribution of minimum edit distances relative to the true barcodes as barcodes selected at random. Figure 6 below shows the distribution of counts per barcode, split into 4 bins. Figure 7 shows the distribution of edit-distances for each bin and a null distribution from a randomly generated set of cell barcodes. Bin 4 contains the true barcodes and shows a null-like distribution of edit-distances, as expected. We can clearly see that bin 3, which has slightly lower counts per cell barcode compared to the true barcodes, is enriched for single edit-distances relative to the null. This strongly suggests many of these cell barcodes are error barcodes with a single error relative to the true cell barcode from which they originate. However, bin 2, and bin 1 especially, show a more null-like distribution, suggesting most of these cell barcodes are instead ambient barcodes.

indrop_bins

Figure 6. Distribution of reads per cell barcode (log 10) from first 50M reads, separated into 4 bins. Bin 4 represents the ‘true’ barcodes.

edit_distances_drop_seq

Figure 7. Distribution of minimum edit distance compared to true barcodes (bin 4) for each bin shown in figure 6. Random = 10 000 randomly generated cell barcodes.
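The diagnostic in Figure 7 is simple to reproduce: compute each barcode's minimum edit distance to the accepted (bin 4) barcodes and compare against randomly generated barcodes. A minimal sketch follows (using a full Levenshtein distance here; a substitution-only distance would behave similarly for this purpose):

import random

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def min_distance_to_true(barcode, true_barcodes):
    return min(levenshtein(barcode, t) for t in true_barcodes)

def random_barcode(length=12):
    """Null barcodes for the 'Random' distribution in Figure 7."""
    return "".join(random.choice("ACGT") for _ in range(length))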

 

In summary, the knee method as implemented here works very well with the samples tested so far, so we can proceed with automatic detection of the number of true cell barcodes with UMI-tools extract. We will also enable the use of the directional method since this is more accurate when you know that all false cell barcodes are the result of errors. However, I'd recommend sticking with the knee method unless you are absolutely certain this is the case.

Finally, if you're already using UMI-tools for scRNA-Seq, or thinking about doing so, keep a look out for version 0.5. Along with the added functionality described here, we're adding a count command to yield counts per cell per gene directly from featureCounts/HTSeq output and we hope to include a scRNA-Seq-specific tutorial for this release too.

EDIT: Following the comments below I’ve continued this analysis into a second post to look at INDELs in cell barcodes and comparing UMI-tools with sircel.

The dependency hell in software development

Introduction

Most software projects depend on other software packages to provide their functionality. Actually, the bigger your software project, the more likely it is that you end up having a large number of dependencies, either internally between different parts of your code or externally with third-party libraries.

The goal of adding a new dependency to your code is to avoid reinventing the wheel. If there is code already available that does what you want, normally you would prefer to re-use it (there are exceptions, though) rather than investing your time in re-writing new code to solve the same task. On the other hand, once your software project has matured over time and is ready for production it may end up being a dependency for other software projects as well.

Grouping off-the-shelf functions into software packages and defining dependencies between them has traditionally been at the core of software development. It is especially important if you try to follow the Unix Design Philosophy, where the aim is to build small programs that do one thing well and can be easily re-used.

When you decide to write new software to solve a problem that is not tackled by other software out there, you will also have to choose the operating system, the programming language, system libraries, and a collection of other software packages available on the system. These choices will create dependencies between your new piece of code and a specific software environment, which will need to be reproduced for the code to run properly on a different system.

Initially, re-using software packages as much as possible looks like the best way forward to save time and effort in the long run; and it is indeed true. However, creating dependencies on other software packages also brings its own disadvantages, especially when the number of dependencies becomes larger and larger. Here is a nice example illustrating what I am talking about.

gentoo-deps

To explain the picture, here is the original text:

This graph shows the dependencies between software packages in the Gentoo Linux operating system. In total there are 14,319 packages with 63,988 dependencies between them!

Packages are drawn on the circumference of the circle and a dependency is indicated by a line drawn from one package to another. The colour of a package is randomly generated, while the colour of a line is determined by the package that is required by the other packages.

Every package has a line starting on the rim of the circle drawn radially outwards. The length of the line is determined by the amount of other packages that depend on the package.

As you can imagine, the larger the number of dependencies between packages, the more likely it is to create conflicts between them (among other issues). This problem is known as dependency hell and is the driver for the rise of new technologies, like containers, that are trying to propose a feasible solution.

The example about Gentoo Linux might seem an extreme one. However, the codebase in CGAT currently depends on 58 R packages and 45 Python packages, plus third party tools available in our compute cluster, where we have over 200 packages installed. Click here if you dare to check out the graph of internal dependencies for the CGAT scripts!

In the rest of this post I will quickly revisit how the dependency hell problem has been addressed to date, from no dependency resolution at all to the full isolation provided by containers. Then, I present how I have approached the dependency hell problem for the CGAT scripts. Finally, I provide my conclusions and some advice that will hopefully be useful for someone out there.

A brief history of open-source software distribution

Before talking a bit more about containers, let us now have a quick look at the evolution of how open-source software has been distributed since the Internet was available.

Tarballs

Back in the old days the only way of sharing software was to create a tarball containing the code, configuration files and scripts to compile/install the code. As mentioned, unless the software package was completely self-contained, the dependencies were not shipped with it, and if you did not have them installed on your box it would rapidly complain either at compilation or at run time. The amount of time spent making the new software package run properly varied with the number of dependencies and the conflicts with the packages already installed. Finding the right versions for all the dependencies manually was a big limitation on the distribution of open-source software, and in some cases made it impossible.

Another problem was to build the software package on a CPU architecture that was compatible with the target system. Nowadays, the number of different CPU architectures available in the market has shrunk quite a lot, but that has not always been the case.

Having said that, it is also true that if your software only depends on a couple of packages, and your reference CPU architecture is the widely used x86, tarballs are still today a feasible way of sharing code (although you should consider better options as explained below!)

Package managers

In order to ease the distribution of open-source software, package managers were created. Instead of sharing just the code, the software was pre-compiled on a reference CPU architecture and packaged into a special file format that included a way of specifying where the code should be located in the destination file system, along with a list of the required software dependencies and version numbers. RPM and APT are probably the most popular package managers for Red Hat and Debian based Linux distributions, respectively.

Depending on the complexity of a software project, it is not trivial to create a package for a package manager. However, the benefits are evident. The package just needs to be uploaded to a proper repository and it can be easily installed by end users with no effort via the package manager, which will download the package and all its dependencies if they are not installed on the target system.

Although dependency resolution is a key feature of a package manager, it is worth mentioning that advanced package managers also play other important roles like applying security updates in a timely manner or checking that the software to be installed is not malicious.

Package managers started as a core part of Linux distributions but the list has grown to other operating systems as well. The concept has been extended outside operating systems to manage packages for programming languages like Perl, Python and R to name a few. Even more remarkable are initiatives like Conda, a package manager that was initially intended for Python packages only but has been generalised to manage packages in different programming languages (Python, R, Ruby, Lua, Scala, Java, Javascript, C/ C++ and FORTRAN) and can be installed in all the major operating systems in the market (Linux, OS X and Windows). More about Conda later.

Virtual machines

When the number of dependencies, either on other software packages or on the underlying operating system, is quite specific and/or large, another approach is to create a virtual machine where the complete software stack is under control. Virtual machines can be shared via their virtual disk only or as virtual appliances. The former gives the end user freedom to select the amount of virtual resources assigned to the virtual machine (e.g. number of CPU cores and size of main memory). The latter not only comes with the software pre-installed but also with the virtual resources pre-selected for you, so your hypervisor of choice just runs it.

Containers

Since Docker burst onto the scene in 2014, there has been a lot of hype around containers in software development. Here is a good definition of containers:

Using containers, everything required to make a piece of software run is packaged into isolated containers. Unlike virtual machines, containers do not bundle a full operating system – only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.

Docker is available for all major operating systems (Linux, OS X and Windows) and the idea is that you create a container for your application once, and it runs everywhere. That is quite appealing and is the reason why Docker in particular, and containers in general, are getting a huge attention these days.

Conda package for CGAT scripts

CGAT maintains a collection of scripts and pipelines for the analysis of high-throughput sequencing data. As stated above, the code has a large number of dependencies and it has been traditionally a challenge to deploy outside the CGAT cluster. Basically, we are facing the dependency hell ourselves and here I describe our approach to solve it.

In the past, I have experimented with several of the options presented in this post to make our code portable. Right now, I consider Conda the most suitable option going forward, but time will tell whether I am wrong or not.

Conda is easy to learn and install, plus there are already a couple of community-led efforts that make this option attractive: Bioconda and conda-forge. These teams are making great contributions to enable reproducible research and solve the dependency hell problem. Special thanks to Johannes Köster for setting up the Bioconda community and asking me to become a contributor. I am also grateful to Bioconda's core team for their hard work and dedication to keep this project moving on.

Recently, I have added a Conda package to the Bioconda channel for the CGAT scripts. Next, I will describe the installation steps so you can give it a try and confirm whether the Conda approach is really a solution to our dependency hell!

The installation instructions below are for Linux. OS X and Windows users must install Docker and use the container option. I will use a temporary folder, but feel free to choose a different location on your system.

# create an installation folder
mkdir /tmp/conda
cd /tmp/conda

# download and install Conda
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p /tmp/conda/install
export PATH=/tmp/conda/install/bin:$PATH

# configure Conda
conda config --set allow_softlinks False
conda config --add channels conda-forge
conda config --add channels defaults
conda config --add channels r
conda config --add channels bioconda

# install CGAT scripts into a Conda environment
conda create -n cgat-scripts cgat-scripts -y

Since the number of dependencies is high, the installation will take up to 10 minutes depending on your system’s configuration. According to my test, you’ll need 3GB of free disk space for the complete installation as well.

Now, you are ready to go:

# on the same or a different terminal, activate the Conda environment
source /tmp/conda/install/bin/activate cgat-scripts

# and now you should be able to use the CGAT scripts
cgat --help
cgat --help Genomics
cgat --help Conversion
cgat gtf2table -h
cgat bam2geneprofile -h

So all you need to decide is the path on your file system to install Conda and all the dependencies will be located under a single folder that you can then enable on demand. Please note, we have made use of Conda environments, which gives you flexibility to have a specific set of dependencies without conflicting with other environments (e.g. different versions of Python or R) on the same Conda installation. Or, if you are really paranoid about breaking things (like myself), you can create a separate Conda installation on a different location on your file system, to ensure full isolation.

If that worked for you, and you are interested in more details about how the Conda package was created, please feel free to comment on this post and I will try to help.

Conclusions

In summary, I explained the dependency hell problem and how it has been tackled to date, and then presented the Conda package for CGAT scripts as a way of solving our own dependency hell, which will hopefully work on your systems too!

Finally, based on my experience solving the dependency hell for the CGAT scripts, here is my advice on how to address the problem properly if you are a software developer. Firstly, I recommend being really picky when choosing new dependencies for your code. Make sure the software you depend on is in good health and maintained. Secondly, have a way of tracking the dependencies of your software. For example, a simple, up-to-date list of software packages and the required versions will do the job.

matplotlib Tutorial

Hi Everyone,

I have made a quick tutorial on the basics of plotting with matplotlib.

It covers:

  • Matplotlib vocabulary – figures and axes
  • The different ways to use matplotlib: the interactive interface, the object-oriented interface and the object-oriented class library
  • Basic plotting commands
  • Customising plots using containers and artists

It is available to view here.

Or to download as an interactive IPython notebook here.

Good luck!

Katy

Resources to help you on your way to learning Python for biology

Having been a wet lab biologist for 5 years with very little programming knowledge (zero python, a little C++), my first task when joining the Computational Biology and Training Department (CGAT) was to develop my Python programming skills. However, knowing where to start was more problematic.

My first port of call was to buy the 'Python for Biologists' books, which are amazing introductions to the basic use of python in biology. However, I quickly realised that even these simple-to-understand books were far too advanced for me at the time, as I hadn't even grasped how to use a for loop yet!

My lack of knowledge on the simple basics of python led me to the Coursera python course, where basic principles are introduced and then the course explores some of the more advanced aspects of python, which I felt at the time were far too complicated for what I needed. However, I persisted and completed the course and it allowed me to begin my new life as a computational biologist.

However, it was only after completing the Coursera series that I discovered Codecademy. If I had discovered this first I think that my road to becoming a python programmer would have been simpler and less complicated, as the interactive sessions used to teach python are really intuitive. Moreover, it covers the basic principles clearly and concisely.

I think the most significant issue when embarking on learning a programming language wasn’t actually getting access to material; it was trying to decide where to start first. Therefore, for anyone embarking on learning python for biology related purposes I would go through these sources in order:

  1. Codecademy – this is a great free resource and introduces the principles of python perfectly.
  2. 'Python for Biologists' – this is an excellent introduction to building python code and then applying it to simple biological problems. However, don't expect too much from this book; it won't give you solutions to complicated research questions.
  3. Practical Computing for Biologists – again, another great resource for beginners on how to use python to answer simple scientific questions.
  4. Coursera (Python programming) – This was a great course to begin with but goes into some more advanced topics that at the time I didn’t need. I would consider doing this course and then stopping when you either get bored or find that it isn’t really helping you anymore. After a year of using python I re-enrolled in this course and found the more advanced aspects of the tutorials so much more informative.

All in all, it took me a month to have a good grasp of python (I have no idea whether this is quick or slow) and about another month to start using the language to a sufficiently advanced level to be useful for my work.

Adam Cribbs (www.acribbs.co.uk)

Why you should use alignment-independent quantification for RNA-Seq

[Edit] I’ve changed the title to better reflect the conclusions drawn herein.

This post follows on previous posts about the wonderful new world of alignment-free quantification (Road-testing Kallisto, Improving kallisto quantification accuracy by filtering the gene set). Previously I focused on the quantification accuracy of kallisto and how to improve this by removing poorly supported transcripts. Two questions came up frequently in the comments and discussions with colleagues.

  1. Which “alignment-independent” tool (kallisto, salmon and sailfish) is better?
  2. How do the "alignment-independent" tools compare to the more commonplace "alignment-dependent" tools for read counting (featureCounts/HTSeq) or isoform abundance estimation (cufflinks2)?

To some extent these questions have been addressed in the respective publications for the alignment-independent tools (kallisto, salmon, sailfish). However, the comparisons presented don't go into any real detail and, unsurprisingly, the tool presented in the publication always shows the highest accuracy. In addition, all these tools have undergone fairly considerable updates since their initial release. I felt there was plenty of scope for a more detailed analysis, especially since I've not yet come across a detailed comparison between the alignment-independent tools and read counting methods. Before I go into the results, first some thoughts (and confessions) on RNA-Seq expression quantification.

Gene-level vs. transcript-level quantification

When thinking about how to quantify expression from RNA-Seq data, a crucial consideration is whether to quantify at the transcript-level or gene-level. Transcript-level quantification is much less accurate than gene-level quantification since transcripts contain much less unique sequence, largely because different transcripts from the same gene may contain some of the same exons and untranslated regions (see Figure 1).

kmer_hist

Figure 1. Distribution of unique sequence for genes and transcripts. The fraction of unique kmers (31mers) represents the "uniqueness" of the transcript. For transcripts, each kmer is classified as unique to that transcript or non-unique. For genes, a kmer is considered unique if it is only observed in transcripts from a single gene. Genes are more unique than transcripts.

The biological question in hand will obviously largely dictate whether transcript-level quantification is required, but other factors are also important, including the accuracy of the resultant quantification and the availability of tools for downstream analyses. For my part, I've tended to focus on gene-level quantification unless there is a specific reason to consider individual transcripts, such as a splicing factor mutation which will specifically impact the expression of particular isoforms rather than genes as a whole. I think this is probably a common view in the field. In addition, every RNA-Seq dataset I've worked with has been generated for the purposes of discovering differentially expressed genes/transcripts between particular conditions or cohorts. A considerable amount of effort has been made to decide how best to model read count gene expression data and, as such, differential expression analysis with read count data is a mature field with well-supported R packages such as DESeq2 and EdgeR. In contrast, differential expression using isoform abundance quantification is somewhat of a work in progress.

Of course, it's always possible to derive gene-level quantification by simply aggregating the expression from all transcripts for each gene. However, until recently, transcript-level quantification tools did not return estimated transcript counts but rather some form of normalised expression such as fragments per kilobase per million reads (FPKM). Thus, to obtain gene-level counts for DESeq2/edgeR analysis, one needed to either count reads per gene, or count reads per transcript and then aggregate. Transcript-level read counting is not a simple task: when a read counting tool encounters a read which could derive from multiple transcripts, it has to either disregard the read entirely, assign it to a transcript at random, or assign it to all transcripts. In contrast, when performing read counting at the gene level, the tool only needs to ensure that all the transcripts a given read may originate from are from the same gene in order to assign the read to the gene.

As such, my standard method for basic RNA-Seq differential expression analysis has been to first align to the reference genome and then count reads aligning to genes using featureCounts. Since the alignment-independent tools all return counts per transcript, it's now possible to count reads per transcript and aggregate to gene-level counts for DESeq2/edgeR analysis. Note: this requires rounding the gene-level counts to the nearest integer, which results in an insignificant loss of accuracy.
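For anyone wanting to try this, the aggregation step is a few lines of pandas. This sketch assumes salmon output (quant.sf, which has Name and NumReads columns) and a headerless two-column transcript-to-gene mapping file; the file names here are just placeholders.

import pandas as pd

quant = pd.read_csv("quant.sf", sep="\t")
tx2gene = pd.read_csv("tx2gene.tsv", sep="\t", names=["transcript_id", "gene_id"])

merged = quant.merge(tx2gene, left_on="Name", right_on="transcript_id")
gene_counts = (merged.groupby("gene_id")["NumReads"]
                     .sum()
                     .round()            # DESeq2/edgeR expect integer counts
                     .astype(int))
gene_counts.to_csv("gene_counts.tsv", sep="\t")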

Simulation results

Given my current RNA-Seq analysis method and my desire to perform more accurate gene-level and transcript-level analysis, the primary questions I was interested in were:

  1. How do the three alignment-free quantification tools compare to each other for transcript-level and gene-level quantification?
  2. For transcript-level quantification, how does alignment-independent quantification compare to alignment-dependent (cufflinks2)?
  3. For gene-level quantification, how does alignment-independent quantification compare to alignment-dependent (featureCounts)?

To test this, I simulated a random number of RNA-Seq reads from all annotated transcripts of human protein coding genes (hg38, Ensembl 82; see methods below for more detail) and repeated this 100 times to give me 100 simulated paired end RNA-Seq samples.

Quantification was performed using the alignment-independent tools directly from the fastq files, using the same set of annotated transcripts as the reference transcriptome. For the alignment-dependent methods, the alignment was performed with Hisat2 before quantification using Cufflinks2 (transcripts), featureCounts (genes) and an in-house read counting script, “gtf2table” (transcripts & genes). The major difference between featureCounts and gtf2table is how they deal with reads which could be assigned to multiple features (genes or transcripts). By default featureCounts ignores these reads whereas gtf2table counts the read for each feature.

In all cases, default or near-default settings were used (again, more detail in the methods). Transcript counts and TPM values from the alignment-independent tools were aggregated to gene counts. To maintain uniform units of expression, cufflinks2 transcript FPKMs and gtf2table transcript read counts were converted to transcript TPMs.
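The unit conversions mentioned here are straightforward; a quick sketch of the formulas used (TPM is just FPKM rescaled to sum to one million, and counts are first normalised by (effective) transcript length):

import numpy as np

def fpkm_to_tpm(fpkm):
    fpkm = np.asarray(fpkm, dtype=float)
    return fpkm / fpkm.sum() * 1e6

def counts_to_tpm(counts, effective_lengths):
    rate = np.asarray(counts, dtype=float) / np.asarray(effective_lengths, dtype=float)
    return rate / rate.sum() * 1e6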

Gene-level results

A quick look at a single simulation indicated that the featureCounts method underestimates the abundance of genes with less than 90% unique sequence, which is exactly what we'd expect as reads which could be assigned to multiple genes will be ignored. See Figures 2-5 for a comparison with salmon.


Figure 2. Correlation between ground truth and featureCounts estimates of read counts per gene for a single simulation. Each point represents a single gene. Point colour represents the fraction of unique kmers (see Figure 1).


Figure 3. Correlation between ground truth and salmon estimates of read counts per gene for a single simulation. Each point represents a single gene. Point colour represents the fraction of unique kmers (see Figure 1).

single_simulation_fraction_unique_vs_diff_feature

Figure 4. Impact of fraction unique sequence on featureCounts gene-level read count estimates. X-axis shows the upper end of the bin. Y-axis shows the difference between the log2 ground truth counts and the log2 featureCounts estimate. A clear underestimation of read counts is observed for genes with less unique sequence.


Figure 5. Impact of fraction unique sequence on salmon gene-level read count estimates. X-axis shows the upper end of the bin. Y-axis shows the difference between the log2 ground truth counts and the log2 salmon estimate. Genes with less unique sequence are more likely to show a greater difference to the ground truth but, in contrast to featureCounts (Figure 4), there is no bias towards underestimation for genes with low unique sequence. The overall overestimation of counts is due to the additional pre-mRNA reads which were included in the simulation (see methods).

Since differential expression analyses are relative, the underestimation by featureCounts need not be a problem so long as it is consistent between samples, as we might expect given that the fraction of unique sequence for a given gene is obviously invariant. With this in mind, to assess the accuracy of quantification, I calculated Spearman's rank correlation coefficient (Spearman's rho) for each transcript or gene over the 100 simulations. Figure 6 shows the distribution of Spearman's rho values across bins of genes by fraction unique sequence. As expected, the accuracy of all methods is highest for genes with a greater proportion of unique sequence. Strikingly, the alignment-independent methods outperform the alignment-dependent methods for genes with <80% unique sequence (11% of genes). At the extreme end of the scale, for genes with 1-2% unique sequence, median Spearman's rho values for the alignment-independent methods are 0.93-0.94, compared to 0.7-0.78 for the alignment-dependent methods. There was no consistent difference between featureCounts and gtf2table, although gtf2table tended to show a slightly higher correlation for more unique genes. There was no detectable difference between the three alignment-free methods.


Figure 6. Spearman’s correlation Rho for genes binned by sequence uniqueness as represented by fraction unique kmers (31mers). Boxes show inter-quartile range (IQR), with median as black line. Notches represent confidence interval around median (+/- 1.57 x IQR/sqrt(n)). Whiskers represent +/- 1.5 * IQR from box. Alignment-independent tools are indistinguishable and more accurate than alignment-dependent workflows. featureCounts and gtf2table do not show consistent differences. All methods show higher accuracy with greater fraction unique sequence
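For completeness, the accuracy metric itself is simple to compute; a sketch assuming two DataFrames indexed by gene with one column per simulated sample (the names here are placeholders):

from scipy.stats import spearmanr

def per_gene_spearman(truth_df, estimate_df):
    """Spearman's rho between ground-truth and estimated counts for each gene
    across the simulated samples."""
    rhos = {}
    for gene in truth_df.index:
        rho, _ = spearmanr(truth_df.loc[gene], estimate_df.loc[gene])
        rhos[gene] = rho
    return rhos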

 

Transcript-level results

Having established that alignment-independent methods are clearly superior for gene-level quantification, I repeated the analysis with the transcript-level quantification estimates. First of all, I confirmed that the correlation between ground truth and expression estimates is much worse at the transcript-level than the gene-level (Figure 7), as we would expect due to the aforementioned reduction in unique sequence when considering transcripts.


Figure 7. Correlation coefficient histogram for transcript-level and gene-level salmon quantification

Figure 8 shows the same boxplot representation as Figure 6 but for transcript-level quantification. This time only salmon is shown for the alignment-independent methods as kallisto and sailfish were again near identical to salmon. For the alignment-dependent methods, cufflinks2 and gtf2table are shown. Again, the alignment-independent methods are clearly more accurate for transcripts with <80% unique sequence (96% of transcripts), and more unique transcripts were more accurately quantified with alignment-independent tools or gtf2table. Oddly, cufflinks2 does not show a monotonic relationship between transcript uniqueness and quantification accuracy, although the most accurately quantified transcripts were those with 90-100% unique sequence. gtf2table is less accurate than cufflinks2 for transcripts with <15% unique sequence but, as transcript uniqueness increases beyond 20%, gtf2table quickly begins to outperform cufflinks2, with median Spearman's rho >0.98 for transcripts with 70-80% unique sequence compared to 0.61 for cufflinks2.

transcript_correlation_full

Figure 8. Spearman’s correlation Rho for transcripts binned by sequence uniqueness as represented by fraction unique kmers (31mers). Boxes show inter-quartile range (IQR), with median as black line. Notches represent confidence interval around median (+/- 1.57 x IQR/sqrt(n)). Whiskers represent +/- 1.5 * IQR from box. Alignment-independent tools are indistinguishable (only salmon shown) and more accurate than alignment-dependent workflows. gtf2table is more accurate than cufflinks for transcripts with >20% unique sequence. Alignment-independent methods and gtf2table show higher accuracy with greater fraction unique sequence. Cufflinks does not show a monotonic relationship between fraction unique kmers bin and median correlation coefficient.

The high accuracy of gtf2table for highly unique transcripts indicates that read counting is accurate for these transcripts. However, this method is far too simplistic to achieve accurate estimates for transcripts with less than 50% unique sequence (>86% of transcripts) for the reasons described above, and thus transcript read counting is not a sensible option for a complex transcriptome. The poor performance of cufflinks2 relative to the alignment-independent methods is not a surprise, not least because the alignment step introduces additional noise into the quantification from mis-aligned reads etc. However, the poor performance relative to simple read-counting suggests the inaccuracy is largely due to the underlying algorithm for estimating abundance, which is less accurate than read-counting for transcripts with >15% unique sequence. This is really quite alarming if true (I'd be very happy if someone had any ideas where I might have gone wrong with the cufflinks2 quantification?).

Overall conclusions and thoughts

As expected, accuracy was much higher where the transcript or gene contains a high proportion of unique sequence, and hence gene-level quantification is overall much more accurate. Unless one specifically requires transcript-level quantification, I would therefore always recommend using gene-level quantification.

From the simulations presented here, I'd go as far as saying there is no reason whatsoever to use read counting if you want gene counts, and although I've not tested HTSeq, I have no reason to believe it should perform significantly better. From these simulations, it appears to be far better to use alignment-independent counting and aggregate to the gene level. Indeed, a recent paper by Soneson et al recommends exactly this.

Even more definitively, I see no reason to use alignment-dependent methods to obtain transcript-level quantification estimates, since the alignment-independent tools are considerably more accurate in the simulations here. Of course, these tools rely upon having a reference transcriptome to work with, which may require prior alignment to a reference genome where annotation is poor. However, once a suitable reference transcriptome has been obtained using cufflinks2, bayesassembler, trinity etc, it makes much more sense to switch to an alignment-independent method for the final quantification.

In addition to the higher accuracy for the point expression estimates, the alignment-independent tools also allow the user to bootstrap the expression estimates to get an estimate of the technical variability associated with the point estimate. As demonstrated in the sleuth package, the bootstraps can be integrated into the differential expression to partition variance into technical and biological variance.

The alignment-independent methods are very simple to integrate into existing workflows and run rapidly with a single thread and minimal memory requirements so there really is no barrier to their usage in place of read counting tools. I didn’t observe any significant difference between kallisto, salmon and sailfish so if you’re using just the one, you can feel confident sticking with it! Personally, I’ve completely stopped using featureCounts and use salmon for all my RNA-Seq analyses now.

[EDIT] : Following the comment from Ian Sudbery I decided to have a look at the accuracy of expression for transcripts which aren’t expressed(!)

Quantification accuracy for unexpressed transcripts

Ian Sudbery comments below that he's observed abundant non-zero expression estimates with kallisto when quantifying against novel transcripts which he didn't believe were real and should therefore have zero expression. The thrust of Ian's comment was that this might indicate that alignment-dependent quantification could be more accurate when using de-novo transcript assemblies. This also led me to wonder whether kallisto is just worse at correctly quantifying the expression of unexpressed transcripts, which would cause problems when using incorrectly assembled transcript models. So I thought I would use the simulations to assess the "bleed-through" of expression from truly expressed transcripts to unexpressed transcripts from the same gene – something that CGAT's technical director Andreas Heger has also mentioned before.

To test this I re-ran the RNA-Seq sample simulations, but this time reads were only generated for 5592/19947 (28%) of the transcripts included in the reference transcriptome; 72% of the transcripts were truly unexpressed. The figure below shows the average expression estimate over the 100 simulations for the 14354 transcripts from which zero reads were simulated ("Absent") and the 5592 "Present" transcripts. Ideally, all the absent transcripts should have expression values of zero or close to zero. Where the expression is above zero, the implication is that the isoform deconvolution/expression estimation step has essentially mis-assigned expression from an expressed transcript to an unexpressed transcript.

There are a number of observations from the figure. The first is that some absent transcripts can show expression values of the same order of magnitude as the present transcripts, regardless of the quantification tool used, although the vast majority do show zero or near-zero expression. Within the three alignment-independent tools, kallisto and sailfish are notably more likely to assign truly unexpressed transcripts non-zero expression estimates, compared to salmon, even when the transcript sequence contains a high proportion of unique kmers. On the other hand, cufflinks2 performs very poorly for transcripts with little unique sequence but similarly to salmon as the uniqueness of the transcript increases. This clearly deserves some further investigation in a separate post…

Average expression estimates for transcripts which are known to be absent or present in the simulation.

Average expression estimates for transcripts which are known to be absent or present in the simulation. The fraction of unique kmers (31-mers) is used to classify the “uniqueness” of the transcript sequence. Note that cufflinks2 systematically underestimates the expression of many transcripts, partly due to overestimation of the expression of “absent” transcripts.

Caveats

The simulated RNA-Seq reads included random errors to simulate sequencing errors, and reads were simulated from pre-mRNA at 1% of the depth of the respective mRNA to simulate the presence of immature transcripts. No attempt was made to simulate biases such as library preparation fragmentation bias, random hexamer priming bias, three-prime bias or biases from sequencing. The purpose of this simulation was to compare the best possible accuracy achievable for each workflow under near-perfect conditions, without going down the rabbit hole of correctly simulating these biases.

[EDIT: The original text sounded like a get-out for my bolder statements] Some comparisons may not be considered “fair”, given that the alignment-free methods only quantify against the reference transcriptome, whereas the genome alignment-based methods of course depend on alignment to the reference genome and hence can also be used to quantify novel transcripts. However, the intention of these comparisons was to provide myself and other potential users of these tools with a comparison between common workflows for obtaining expression estimates for differential expression analysis, rather than a direct test of e.g. kallisto vs. cufflinks2.

The intention of this simulation is to provide myself and other interested parties with a comparison of alignment-independent and alignment-dependent methods for gene- and transcript-level quantification. It is not intended as a direct comparison between featureCounts and cufflinks2 on the one hand and salmon, sailfish and kallisto on the other – a direct comparison is not possible due to the intermediate alignment step. Rather, it is a comparison of the alignment-dependent and alignment-independent workflows for quantification prior to differential expression analysis when a reference transcriptome is available.


If you really want some more detail…

Methods

Genome annotations

Genome build: hg38. Gene annotations: Ensembl 82. The “gene_biotype” field of the ensembl gtf was used to filter transcripts to retain those deriving from protein-coding genes. Note that some transcripts from protein-coding genes will not themselves be protein coding. In total, 1433548 transcripts from 19766 genes were retained.

Unique kmer counting

To count unique kmers per transcript/gene, I wrote a script, fasta2unique_kmers, available through the CGAT code collection. This script first parses all transcript sequences and classifies each kmer as “unique” or “non-unique” depending on whether it is observed in just one transcript or in more than one. The transcript sequences are then re-parsed to count the unique and non-unique kmers per transcript. To perform the kmer counting at the gene level, kmers are instead classified as “unique” or “non-unique” depending on whether they are observed in just one gene (but possibly multiple transcripts of that gene) or in more than one gene. The kmer size was set to 31.
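Purely for illustration, here’s a minimal python sketch of the two-pass approach described above (the real fasta2unique_kmers script is more involved; the function and variable names here are my own):

from collections import defaultdict

def count_unique_kmers(transcripts, k=31):
    '''transcripts: dict mapping transcript_id -> sequence.
    Returns a dict mapping transcript_id -> (unique, non_unique) kmer counts.'''

    # first pass: record which transcript(s) each kmer is observed in
    kmer2transcripts = defaultdict(set)
    for tid, seq in transcripts.items():
        for i in range(len(seq) - k + 1):
            kmer2transcripts[seq[i:i + k]].add(tid)

    # second pass: a kmer is "unique" if it is observed in exactly one transcript
    counts = {}
    for tid, seq in transcripts.items():
        unique = non_unique = 0
        for i in range(len(seq) - k + 1):
            if len(kmer2transcripts[seq[i:i + k]]) == 1:
                unique += 1
            else:
                non_unique += 1
        counts[tid] = (unique, non_unique)
    return counts

For the gene-level counts, the same logic applies except that each kmer is mapped to the set of genes it occurs in rather than the set of transcripts.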

Simulations

I’m not aware of a gold-standard tool for simulating RNA-Seq data. I’ve therefore been using my own script from the CGAT code collection (fasta2fastq.py). For each transcript, sufficient reads were generated to make 0-20 copies of the transcript. Paired reads were simulated at random from across the transcript with a mean insert size of 100 bp and a standard deviation of 25 bp. Naive sequencing errors were then simulated by randomly changing 1 in 1000 bases. In addition, reads were simulated from the immature pre-mRNA at a uniform depth of 1% of the mature mRNA. No attempt was made to simulate biases such as library preparation fragmentation bias, random hexamer priming bias or biases from sequencing, since the purpose of this simulation was to compare the best possible accuracy achievable for each workflow under near-perfect conditions.
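To make the scheme concrete, here’s a simplified sketch of how a single read pair might be generated (this is not the actual fasta2fastq.py code; the defaults and helper names are illustrative only):

import random

BASES = "ACGT"

def revcomp(seq):
    # assumes an uppercase ACGT sequence
    complement = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(complement[b] for b in reversed(seq))

def add_errors(read, error_rate):
    # substitute each base with probability error_rate
    return "".join(random.choice(BASES.replace(b, "")) if random.random() < error_rate else b
                   for b in read)

def simulate_read_pair(transcript, read_length=50,
                       mean_insert=100, sd_insert=25, error_rate=0.001):
    # draw a fragment length and a random start position on the transcript
    insert = max(read_length, int(random.gauss(mean_insert, sd_insert)))
    insert = min(insert, len(transcript))
    start = random.randint(0, len(transcript) - insert)
    fragment = transcript[start:start + insert]

    # read 1 from the 5' end of the fragment, read 2 reverse-complemented from the 3' end
    read1 = fragment[:read_length]
    read2 = revcomp(fragment[-read_length:])
    return add_errors(read1, error_rate), add_errors(read2, error_rate)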

Alignment-independent quantification

Kallisto (v0.43.0), Salmon (v0.6.0) and Sailfish (v0.9.0) were used with default settings, except that the strandedness was specified as --fr-stranded, ISF and ISF respectively. The Salmon index type was fmd. The kmer size was set to 31.

Simulations and alignment-independent quantification were performed with the CGAT pipeline pipeline_transcriptdiffexpression, using the target “simulation”.

Read alignment

Reads were aligned to the reference genome (hg38) using hisat2 (v2.0.4) with default settings except --rna-strandness=FR, and the filtered reference junctions were supplied with --known-splicesite-infile.

Alignment-dependent quantification

featureCounts (v1.4.6) was run with default settings except -Q 10 (MAPQ >= 10) and strandedness specified using -s 2. Cufflinks2 was run with default settings plus the following additional options: --compatible-hits-norm --no-effective-length-correction. Removing these cufflinks2 options had no impact on the final results.


Speeding up python

I was recently contacted by Ilgar Abdullayev from the Ludwig Institute for Cancer Research, Karolinska Institutet, with a query about why the dedup_umi.py script in our UMI-Tools repository was taking so long to run on some of his BAMs. Finding the bottleneck was relatively easy and the immediate solution was straightforward thanks to cython (and our technical director Andreas!). Thanks to Ilgar for sharing some of his data to let me optimise the code. Before getting into the details though, I thought it might be a good time to take stock of why python is quick to write but slow to run. Skip to “Reducing the interpretation overhead” if you’re already a whiz at python.

Quick development

One of the things I really love about python is how quick it is to write functional code. Python is dynamically typed, which means we don’t have to worry about the type of a variable until we want to do something with it. The following is perfectly valid.

variable = "hello world"

variable = 6

It appears to the user that the type of variable has changed from string to int. Actually, we first created a python string object and bound the name “variable” to it, then we created a new int object and bound the name “variable” to that instead. The first object no longer has any references bound to it, so it’s garbage collected (i.e. deleted).
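You can see the rebinding for yourself with a trivial snippet (illustrative only), using id() to report the identity of the object a name is currently bound to:

variable = "hello world"
print id(variable)  # the id of the str object (the value will differ between runs)

variable = 6
print id(variable)  # a different id: "variable" is now bound to a new int object;
                    # the str object has no remaining references and can be garbage collected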

Another great feature of python is duck typing, by which we can perform the same operation on different python types to different ends. The classic example is the use of “+” both for concatenating strings and for adding numeric types.

variable = "hello world"
print variable + ", I'm python"
hello world, I'm python

variable = 6
print variable + 2
8

Along with a comprehensive set of well-maintained modules for more complex classes, data manipulations and statistics, dynamic typing and duck typing makes writing a complex python script something that can be achieved in minutes rather than days.

Slower to run

The penalty you pay for dynamic typing is that the python code must be interpreted, and a section of code is re-interpreted each time its run. In the, admittedly contrived, example below we can clearly see why the code in the loop needs to be interpreted each time anew.

dict_1 = {1:"bar", 0:"foo"}
list_1 = ["bar", "foo"]

objects = (dict_1, list_1)
for obj in (objects):
    print obj[0]

foo
bar

In the first iteration, obj is a dictionary. The “[0]” index on a dictionary returns the value for the key 0, which in this case is “foo”. In the second iteration, obj is a list. The “[0]” operation on a list returns the element at position 0 (0-indexed), which in this case is the first element, “bar”.

Reducing the interpretation overhead

The reinterpretation of code takes a lot of time, and python is therefore considerably slower than languages like C. This is where cython (see what they did there) comes in. Cython allows you to statically declare the type of an object, reducing the work the python interpreter has to do.

Now to return to the original problem with our script. For background on our UMI-tools, see the earlier post (CGAT blog – UMIs). The dedup_umi.py script in question takes an alignment file (SAM/BAM format) and returns an alignment file with the duplicated reads removed. In my hands this was taking about 10 seconds per 1 million reads, which seems perfectly reasonable. For Ilgar, however, it was taking around 12 minutes per 1 million reads, which is less than ideal when alignment files may contain >100M reads.

To identify the bottleneck, I first used a bit of trial and error to find a single read start position which was particularly troublesome. In hindsight it was hardly surprising that this was a region of extreme coverage. Using the python cProfile module, we can identify the part of the script which was taking the longest time.

python -m cProfile -s tottime dedup_umi.py -I subset.bam -S out.bam --method=adjacency > profile.log

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 26352342   47.813    0.000   76.867    0.000 dedup_umi.py:196(edit_dist)
 26352397   20.992    0.000   20.992    0.000 {zip}
 26352353    8.062    0.000    8.062    0.000 {sum}


So the majority of the time was taken by the edit_dist function, which was run 26352342 times! Looking back at the script, we can see this is because our adjacency-based methods take the list of UMIs observed and create a dictionary mapping each UMI to the UMIs which are a single edit distance apart (hamming distance = 1). This is an all-vs-all operation, so the time taken increases quadratically with the number of UMIs observed. The obvious solution was therefore to improve the speed of building the adjacency dictionary. Let’s look at the relevant portion of the original code.

def edit_dist(first, second):
    ''' returns the edit distance/hamming distance between
    its two arguments '''

    dist = sum([not a == b for a, b in zip(first, second)])
    return dist

def _get_adj_list_adjacency(self, umis, counts, threshold):
        ''' identify all umis within hamming distance threshold'''

        return {umi: [umi2 for umi2 in umis if
                      edit_dist(umi, umi2) <= threshold]
                for umi in umis}

The _get_adj_list_adjacency method uses a dictionary comprehension, with the values generated by a list comprehension, so the edit_dist function is called on every pairwise combination of UMIs. For a quick improvement we could move the edit distance calculation inside the list comprehension to avoid the function call overhead, but the gains are minimal. Alternatively, as you may have noticed, we’re only interested in UMIs where the distance is <= threshold, so we can optimise the edit_dist function by returning as soon as the distance exceeds the threshold:

def edit_dist(first, second, threshold):
    ''' returns 1 if the hamming distance between its first two
    arguments is at most threshold, otherwise 0 '''

    dist = 0
    for a, b in zip(first, second):
        if a != b:
            dist += 1
        if dist > threshold:
            return 0
    return 1

def _get_adj_list_adjacency(self, umis, counts, threshold):
        '''identify all umis within hamming distance threshold'''

        return {umi: [umi2 for umi2 in umis if
                      edit_dist(umi, umi2, threshold)]
                for umi in umis}

This improves the speed ~2-fold which is nice but not really enough!

Speaking to Andreas about this problem, he quickly identified cython as a simple solution, since the hamming distance calculation is very simple. Here’s the cython code snippet and the pertinent code in dedup_umi.py.

_dedup_umi.pyx:

cpdef int edit_distance(a, b):
    cdef char * aa = a
    cdef char * bb = b

    cdef int k, l, c
    c = 0
    l = len(a)
    for k from 0 <= k < l:
        if aa[k] != bb[k]:
            c += 1
    return c

dedup_umi.py:

...
import pyximport
pyximport.install(build_in_temp=False)
from _dedup_umi import edit_distance
...
def _get_adj_list_adjacency(self, umis, counts, threshold):
        ''' identify all umis within hamming distance threshold '''

        return {umi: [umi2 for umi2 in umis if
                      edit_distance(umi, umi2) <= threshold]
                for umi in umis}

In the cython function we statically define the types of the two C strings (aa and bb) and the ints k, l & c. Then we count the positions at which the two strings differ. In dedup_umi.py we just need to import the pyximport module and use it to compile _dedup_umi.pyx. Then we import the edit_distance function from _dedup_umi and use it like any other python function. This simple modification reduces the run time on the troublesome genomic coordinate from 91 secs to 11 secs – the most time-consuming function for this particular region is now remove_umis.

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     6432    5.902    0.001    5.902    0.001 dedup_umi.py:217(remove_umis)
       11    3.626    0.330    5.544    0.504 dedup_umi.py:296(<dictcomp>)
 26359658    1.918    0.000    1.918    0.000 {_dedup_umi.edit_distance}

The cython function could be further improved along the lines of the optimised python function above, by returning as soon as the threshold is exceeded. I’m sure there are other improvements we could make to the run time of the script. For example, transforming the UMI strings into 2-bit encodings would make the hamming distance calculation a very cheap bitwise XOR operation. However, I’ve not yet tested whether the cost of the transformation outweighs the gains in the edit distance calculation.
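For what it’s worth, here’s a quick sketch of what the 2-bit idea might look like in pure python (illustrative only; whether it would actually be faster than the cython string comparison would need testing):

ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode_umi(umi):
    '''pack a UMI string into an integer, 2 bits per base'''
    value = 0
    for base in umi:
        value = (value << 2) | ENCODE[base]
    return value

def hamming_2bit(a, b):
    '''hamming distance between two equal-length, 2-bit-encoded UMIs'''
    diff = a ^ b
    dist = 0
    while diff:
        # each mismatched base sets one or both bits in its 2-bit slot,
        # so count the slots containing any set bit
        if diff & 0b11:
            dist += 1
        diff >>= 2
    return dist

# e.g. hamming_2bit(encode_umi("ATTG"), encode_umi("ATTA")) == 1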

This is the first time I’ve used cython – something that Ian Sudbery (UMI-tools co-author) had suggested might be worth considering a while ago. To be honest, the solution was so straightforward that Andreas ended up implementing it himself whilst he demonstrated it. I think it might be time for me to read the cython documentation fully now!