
In previous blogs, I described how mathematical modelling can help understand the spread of the COVID-19 epidemic and provide privacy-preserving contact tracing. Looking ahead to how the world will have to deal with COVID-19 in the coming months, it is likely that a significant percentage of the population will need to be tested multiple times. In a recent BBC science podcast, Neil Turok, Leon Mutesa and Wilfred Ndifon describe their highly efficient method of implementing large-scale testing that takes advantage of pooling samples. This is helping African governments save millions on the cost of testing. I offer an outline of their innovative approach, which is described in more detail in a paper published on arxiv.org.
The need for large-scale testing
The roll-out of antigen testing in some countries, like the US and the UK, has been painfully slow. Some suggest that the US may need to carry out between 400,000 and 900,000 tests a day in order to get a grip on the epidemic. At 30-50 US dollars (24-40 UK pounds) per test, this could be very expensive. However, as long as a relatively small percentage of the population is infected, running a separate test for everyone would be extremely inefficient compared with approaches that pool samples.
Pooling offers a huge advantage: a negative test for a pooled sample of 100 swabs would clear 100 people with a single test. The optimal size of the pools depends on the incidence of the disease: the lower the incidence, the larger the pools can be.
The concept of pooling dates back to the work of Dorfman in 1943. His method was to choose an optimal pool size and perform a test on each pooled sample. A negative result for a pool clears all the samples contained in it; the infected individuals are then found by testing every sample in the positive pools. Mutesa and Ndifon's hypercube method is more efficient because, rather than testing everyone in an infected pool, you test carefully selected sub-pools.
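As a baseline, a few lines of Python (my own illustration, not from the paper) make Dorfman's trade-off concrete: the expected number of tests per person is 1/n for the pooled tests plus the probability that a pool of size n turns out positive and has to be retested individually.

```python
# Expected tests per person under Dorfman's two-stage pooling,
# for prevalence p and pool size n: one pooled test per n people,
# plus n individual retests whenever the pool is positive.
def dorfman_tests_per_person(p, n):
    prob_pool_positive = 1 - (1 - p) ** n
    return 1 / n + prob_pool_positive

def optimal_pool_size(p, max_n=100):
    return min(range(2, max_n + 1), key=lambda n: dorfman_tests_per_person(p, n))

for p in (0.01, 0.05, 0.10):
    n = optimal_pool_size(p)
    print(f"prevalence {p:.0%}: optimal pool size {n}, "
          f"{dorfman_tests_per_person(p, n):.3f} tests per person")
```

At 1% prevalence the optimal pool size comes out around 11, needing roughly 0.2 tests per person; at 10% it shrinks to 4, needing about 0.6, which is the pool-size-versus-incidence effect described above.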
The idea is to imagine that all the samples in a pool lie on a multidimensional lattice in the form of a hypercube. It turns out that the optimal number of points in each direction is 3. Obviously it is hard to visualise high dimensions, but in 3-D, you have 27 samples arranged on a 3x3x3 grid forming a cube. The trick to identifying individual infected samples is to create sub-pools by taking slices through the lattice. In the diagram above, there are 3 red slices, 3 green and 3 blue, each containing 9 samples.
Consider, for simplicity, only one infected person out of the 27. Testing the 9 pools represented by the coloured slices will produce exactly 3 positive results, corresponding to the three planes whose intersection is the infected sample. This uniquely identifies the positive individual with just 9 tests, whereas Dorfman's method would set out to test all 27 individually, finding the positive on average after about half of these.
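A quick way to see this is to simulate the 3-D case (a small sketch of my own, assuming exactly one infected sample among the 27):

```python
import numpy as np

rng = np.random.default_rng(0)

# 27 samples on a 3x3x3 grid, exactly one of them infected.
cube = np.zeros((3, 3, 3), dtype=bool)
secret = tuple(int(i) for i in rng.integers(0, 3, size=3))
cube[secret] = True

# 9 pooled tests: for each axis, test the 3 slices along it.
# A pooled test is positive iff the slice contains the infected sample.
found = tuple(
    next(i for i in range(3) if cube.take(i, axis=axis).any())
    for axis in range(3)
)

print(f"hidden: {secret}, recovered: {found}")  # identical, using 9 tests
assert found == secret
```

The coordinate recovered on each axis is just the index of that axis's positive slice, so the three positive pools pin down the infected sample exactly.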
Slicing a hypercube
Although you can optimise the pool size to ensure that the expected number of positives in any pool is manageable, in practice you won’t know how many infected samples are contained in any particular pool. The hypercube method deals with this by noting that a slice through a D-dimensional hypercube is itself a hypercube of dimension D-1, so the method can be applied recursively.
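To illustrate that recursive structure, here is a simplified sketch of my own (not the paper's optimised scheme): whenever the slice tests are ambiguous because some axis has more than one positive slice, recurse into each positive slice, which is itself a smaller hypercube.

```python
import numpy as np

def find_infected(cube):
    """Locate all infected samples in a boolean hypercube of shape (3,)*D.

    Each `.any()` call on a slice stands in for one pooled PCR test.
    Returns a list of index tuples of the infected samples.
    """
    if cube.ndim == 0:                      # a single sample: test it directly
        return [()] if bool(cube) else []
    # For each axis, find which of the 3 slices test positive.
    positive = [
        [i for i in range(3) if cube.take(i, axis=a).any()]
        for a in range(cube.ndim)
    ]
    if all(len(p) == 0 for p in positive):  # every pool negative: no infections
        return []
    if all(len(p) == 1 for p in positive):  # one positive slice per axis:
        # the intersection pins down a single infected sample
        return [tuple(p[0] for p in positive)]
    # Ambiguous: recurse into each positive slice along the most ambiguous
    # axis; each slice is a hypercube of dimension D-1.
    a = max(range(cube.ndim), key=lambda ax: len(positive[ax]))
    hits = []
    for i in positive[a]:
        for idx in find_infected(cube.take(i, axis=a)):
            hits.append(idx[:a] + (i,) + idx[a:])
    return hits
```

This naive recursion finds every positive correctly but does not minimise the number of tests; the paper's scheme chooses its sub-pools more cleverly.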
The other big advantage is that the approach is massively parallel, allowing positives to be identified quickly relative to the speed at which the pandemic spreads. About 3 rounds of PCR tests can be completed in a day. Algorithms that further reduce the total number of tests towards the information-theoretic limit, such as binary search, require tests to be performed sequentially, which takes longer than doing more tests in parallel.
In order to make sure I really understood what is going on, I wrote some Python code to implement and validate the hypercube algorithm. In principle it was extremely simple, but dealing with low-probability edge cases, where multiple positive samples happen to fall into the same slice, turned out to be a bit messy. However, in simulations, all infected samples were identified with no false positives or false negatives, and the number of tests was very much in line with the theoretical value.
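A validation harness along those lines (my own sketch, reusing the find_infected function from the previous snippet) simply checks the recovered set against the ground truth over many random trials:

```python
import numpy as np

# Assumes find_infected() from the sketch above is in scope.
rng = np.random.default_rng(42)

D, trials, prevalence = 3, 1000, 0.02
for _ in range(trials):
    cube = rng.random((3,) * D) < prevalence   # random infection pattern
    truth = {tuple(int(i) for i in idx) for idx in np.argwhere(cube)}
    assert set(find_infected(cube)) == truth   # no false positives or negatives
print("all trials recovered the infected set exactly")
```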
Huge cost savings
My Python program estimates the cost savings of implementing the hypercube algorithm versus testing every sample individually. The bottom line is that if the US government needed to test 900,000 people and the background level of infection were 1%, the algorithm would find all infected individuals with around 110,000 tests, or 12% of the total samples. At $40 a test, this would be a cost saving of over $30 million per day versus testing everyone individually. An equivalent calculation for the UK government to test 200,000 people would offer savings of around £5 million a day.
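The arithmetic behind the US figure is straightforward, as a back-of-envelope check using the numbers quoted above:

```python
# Back-of-envelope check of the US figures quoted above.
people = 900_000
cost_per_test = 40            # US dollars
hypercube_tests = 110_000     # ~12% of samples at 1% prevalence

individual_cost = people * cost_per_test           # $36.0 million
hypercube_cost = hypercube_tests * cost_per_test   # $4.4 million
print(f"daily saving: ${(individual_cost - hypercube_cost) / 1e6:.1f} million")
# -> daily saving: $31.6 million
```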
It is great to see leading-edge science being developed in Africa. Cost-conscious governments, for example in Rwanda, are implementing the strategy. Western governments lag behind, delayed by anecdotal comments from UK officials who worry that the approach is “too mathematical”, as if this were somehow a vice rather than a virtue.