Which team is that?

Screen Shot 2018-04-11 at 11.18.09

My last blog explored the effectiveness of deep learning in spotting the difference between Vincenzo Nibali and Alejandro Valverde. Since the faces of the riders were obscured in many of the photos, it is likely that the neural network was basing its evaluations largely on the colours of their team kit. A natural next challenge is to identify a rider’s team from a photograph. This task parallels the approach to the Kaggle dog breed competition used in lesson 2 of the fast.ai course on deep learning.

Eighteen World Tour teams are competing this year. So the first step was to trawl the Internet for images, ideally of riders in this year’s kit. As before, I used an automated downloader, but this posed a number of problems. For example, searching for “Astana” brings up photographs of the capital of Kazakhstan. So I narrowed things down by searching for “Astana 2018 cycling team”. After eliminating very small images, I ended up with a total of about 9,700 images, but these still included a certain amount of junk that I did not have time to weed out, such as photos of footballers or motorcycles in the “Sky Racing Team”.

The following small sample of training images is generally OK, though it includes images of Scott bikes rather than Mitchelton-Scott riders and a picture of Sunweb’s Wilco Kelderman labelled as FDJ. However, with around 500-700 images of each team, I pressed on, noting that, for some reason, there were only 166 of Movistar and these included the old-style kit.

Screen Shot 2018-04-11 at 10.18.54.png
Small sample of training images

For training on this multiple classification problem, I adopted a slightly more sophisticated approach than before. Taking a pre-trained Resnet50 model, I performed some initial fine-tuning on images rescaled to 224×224. I settled on an optimal learning rate of 1e-3 for the final layer, while allowing some training of lower layers at much lower rates. With a view to improving generalisation, I opted to augment the training set with random changes, such as small shifts in four directions, zooming in up to 10%, adjusting lighting and left-right flips. After initial training, accuracy was 52.6% on the validation set. This was encouraging, given that random guesses would have achieved a rate of 1 in 18 or 5.6%.

Following a pro tip from fast.ai, training then proceeded with the images at a higher resolution of 299×299. The idea is to prevent overfitting during the early stages, but to improve the model later on by providing more data for each image. This raised the accuracy to 58.3% on the validation set. This figure was obtained using a trick called “test time augmentation”, where each final prediction is based on the average prediction of five different “augmented” versions of the image in question.
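Test time augmentation is simple to express in code. Here is a minimal PyTorch sketch (the post used fast.ai’s built-in version): average the softmax predictions over several randomly augmented copies of a single image.

```python
import torch

def tta_predict(model, image, augment, n_aug=5):
    """image: a (C, H, W) tensor; augment: a random transform on such tensors."""
    model.eval()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(augment(image).unsqueeze(0)), dim=1)
            for _ in range(n_aug)
        ])
    # Average the class probabilities over the augmented copies
    return probs.mean(dim=0)  # shape (1, n_classes)
```

Averaging over augmented views tends to smooth out predictions that hinge on an incidental crop or flip of the image.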

Given the noisy nature of some of the images used for training, I was pleased with this result, but the acid test was to evaluate performance on unseen images. So I created a test set of two images of a lead rider from each squad and asked the model to identify the team. These are the results.

75 percent right.png
75% accuracy on the test set

The trained Resnet50 correctly identified the teams of 27 out of 36 images. Interestingly, there were no predictions of Movistar or Sky. This could be partly due to the underrepresentation of Movistar in the training set. Froome was mistaken for AG2R and Astana, in column 7, rows 2 and 3. In the first image, his 2018 Sky kit was quite similar to Bardet’s to the left and in the second image the sky did appear to be Astana blue! It is not entirely obvious why Nibali was mistaken for Sunweb and Astana, in the top and bottom rows. However, the vast majority of predictions were correct. An overall success rate of 75% based on an afternoon’s work was pretty amazing.

The results could certainly be improved by cleaning up the training data, but this raises an intriguing question about the efficacy of artificial intelligence. Taking a step back, I used Bing’s algorithms to find images of cycling teams in order to train an algorithm to identify cycling teams. In effect, I was training my network to reverse-engineer Bing’s search algorithm, rather than my actual objective of identifying cycling teams. If an Internet search for FDJ pulls up an image of Wilco Kelderman, my network would be inclined to suggest that he rides for the French team.

In conclusion, for this particular approach to reach or exceed human performance, expert human input is required to provide a reliable training set. This is why this experiment achieved 75%, whereas the top submissions on the dog breeds leaderboard show near perfect performance.

Ranking Top Pro Cyclists for 2017

peter-sagan.jpg

Following Il Lombardia last weekend, the World Tour has only two more events this year. It is time to ask: who were the best sprinters of 2017? Who was the best climber or puncheur? The simplest approach is to count up the number of wins, but this ignores the achievement of finishing consistently among the top riders on different types of parcours. This article explores ways of creating rankings for different types of riders.

The current UCI points system, introduced in 2016, is fiendishly complicated, with points awarded for winning races and bonuses given to those wearing certain jerseys in stage races. The approach applies different scales according to the type of event, but each of these scales puts a premium on winning the race, with points awarded for first place being just over double the reward of the fifth-placed rider. In fact, taking the top 20 places in the four main world tour categories of event, the curve of best fit is exponential with a coefficient of approximately -1/6. In other words, there’s a linear relationship between a rider’s finishing position and the logarithm of the UCI points awarded.
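The -1/6 exponential relationship is easy to sketch in code. The scale factor below is arbitrary; only the ratios between finishing positions matter.

```python
import math

def derived_points(position, scale=100.0):
    # log(points) is linear in finishing position, with slope -1/6
    return scale * math.exp(-position / 6)

# The winner earns roughly double the fifth-placed rider's points:
ratio = derived_points(1) / derived_points(5)  # exp(4/6), about 1.95
```

So moving up six places anywhere in the standings multiplies a rider’s points by the same factor of e.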

UCI Points

This observation is really useful, because it provides a straightforward way of assessing the performance of riders in different types of races, based on their finishing positions. The PCS website is a great source of professional cycling statistics. One nice feature is that most of the races/stages have an associated profile indicated by a little logo (see Tour de France). These classify races into the following categories:

  • Flat e.g. TdF stage 2 from Düsseldorf to Liège
  • Hills with a flat finish e.g. Milan San Remo
  • Hills with an uphill finish e.g. Fleche Wallonne
  • Mountains with a flat finish e.g. TdF stage 8 Station des Rousses
  • Mountains with an uphill finish e.g. TdF stage 5 La Planche des Belles Filles
  • Time trials: it is also reasonable to assume that any stage of less than 80km was a TT

We would expect outright sprinters to top the rankings in flat races, whereas the puncheurs come to the fore when it becomes hilly, with certain riders doing particularly well on steep uphill finishes. The climbers come into their own in the mountains, with some being especially strong on summit finishes.

Taking the results of all the World Tour races in 2017 completed up to Il Lombardia and applying the simple -1/6 exponential formula equally to all categories of event, we obtain the following “derived ranking”, arranged by the profile of event.
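The derived ranking reduces to a small aggregation: sum exp(-position/6) for each rider within each profile category, then sort. Here is a toy sketch with made-up riders and results, purely to illustrate the calculation.

```python
import math
from collections import defaultdict

# (rider, profile, finishing position) — illustrative data only
results = [
    ("Rider A", "flat", 1),
    ("Rider B", "flat", 2),
    ("Rider A", "flat", 3),
    ("Rider C", "uphill finish", 1),
]

# Accumulate each rider's exponential score within each profile
scores = defaultdict(float)
for rider, profile, pos in results:
    scores[(profile, rider)] += math.exp(-pos / 6)

# Rank riders within each profile by total score, best first
rankings = defaultdict(list)
for (profile, rider), s in scores.items():
    rankings[profile].append((rider, s))
for profile in rankings:
    rankings[profile].sort(key=lambda t: -t[1])
```

Summing rather than taking the best result is what rewards consistency: a string of podium places can outscore a single win.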

Derived ranking for 2017 World Tour events, according to parcours

Screen Shot 2017-10-10 at 20.02.24

Marcel Kittel rightly tops the sprinters on flat courses (while Cavendish was 11th), but the Katusha Alpecin rider and several others have tended to be dropped on hilly courses, where Sagan, Ewan and Kristoff were joined by Trentin, Gaviria and some classic puncheurs. Sagan managed to win some notable uphill finishes, such as Tirreno-Adriatico and Grand Prix Cycliste de Quebec, alongside riders noted for being strong in the hills. The aggression of Valverde and Contador put them ahead of Froome on mountain stages that finished on the flat, but the TdF winner, Zakarin and Bardet topped the rankings of pure climbers for consistency on summit finishes. Finally we see the usual suspects topping the TT rankings.

It should be noted that ranking performances based simply on positions, without some form of scaling, gave very unintuitive results. While simpler than the UCI points system, this analysis supports the idea of awarding points in a way that scales exponentially with the finishing position of a rider.