Creating artistic images from Strava rides

[Image: Four laps of Richmond Park]

When you upload a ride, Strava draws a map using the longitude and latitude coordinates recorded by your GPS device. This article explores ways in which these numbers, along with other metrics, can be used to create interesting images that might have some artistic merit.

The idea was motivated by the huge advances made in the field of Deep Learning, particularly applications for image recognition. However, since datasets come in all shapes and forms, researchers have explored ways of converting different types of data into images. In a paper published in 2015, the authors successfully classified standard time series by converting them into images.

GPS bike computers typically record snapshots of information every second. What kind of images could these time series generate? It turns out that there are several ways to convert a time series into an image.

Spectrogram

Creating a spectrogram is a standard approach from signal processing that is particularly useful for analysing acoustic files. The spectrogram is a heat map that shows how the underlying frequencies contributing to the signal change over time. Technically, it is derived by calculating the discrete Fourier transform of a window that slides across the time series. I applied this to my regular Saturday morning club ride of four laps around Richmond Park. The image changes a bit once the ride gets going after about 1200 seconds (20 minutes), but, frankly, the result was not particularly illuminating. There is no obvious reason to consider cycling power data as a superposition of frequencies.
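As a rough sketch of the calculation (the window length and plotting choices here are mine, not from the original analysis), SciPy can produce the spectrogram of a 1 Hz power series directly:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# power: numpy array of watts sampled at 1 Hz by the bike computer
f, t, Sxx = signal.spectrogram(power, fs=1.0, nperseg=256, noverlap=128)

# heat map of how the frequency content changes over the ride
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))  # decibel scale
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.show()
```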

[Image: Spectrogram]

Ah! Now we are getting somewhere

The authors of the referenced paper took a different approach, producing things called the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF) and Markov Transition Field (MTF). Read the paper if you want to know the details. I created these along with something called a Recurrence Plot. All of these methods generate a matrix by combining every element in the time series with every other element. The underlying observations occurring at times t_{1} and t_{2} determine the colour of the pixel at position (t_{1}, t_{2}). The images are symmetric along the lower-left to upper-right diagonal, apart from the GADF, which is antisymmetric.
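Libraries such as pyts implement all of these transformations, but the core idea is compact enough to sketch in NumPy. Here is a minimal GASF and Recurrence Plot for a regularly sampled series (the GADF replaces the cosine of the angle sum with the sine of the angle difference):

```python
import numpy as np

def gasf(x):
    # rescale to [-1, 1], map each value to an angle, then take the
    # cosine of every pairwise sum of angles
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])

def recurrence_plot(x, eps=0.1):
    # pixel (t1, t2) lights up when the observations at times t1 and t2
    # are within eps of each other
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
```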

Let’s see how they look for four laps of Richmond Park. We have six time series, with corresponding sets of images below. The segmentation of the images is due to the periodicity of the data. This is particularly clear in the geographic data (longitude, latitude and altitude). The higher intensity of the main part of the ride is most obvious in the heart rate data. The MTF plots are quite interesting. Scroll down through the images to the next section.

[Image: Raw time series of power, heart rate, cadence, longitude, latitude and altitude]
[Image: Gramian Angular Summation Field]
[Image: Gramian Angular Difference Field]
[Image: Markov Transition Field]
[Image: Recurrence Plot]

From cycle ride to art

It is one thing to create an image of each metric, but how can we combine these to summarise a ride in a single image? I considered two methods of combining time series into a single image: a) create a new image in which the vertical and horizontal axes represent two different series, and b) create a new image by simply adding the corresponding pixel values of two underlying images.

One problem is that some cyclists don’t have gadgets like heart rate monitors and power meters, so I initially restricted myself to just the longitude, latitude and altitude data. Nevertheless, as noted in an earlier blog, it is possible to work out speed, because the time interval between readings is one second. Furthermore, one can estimate power from the speed and changes in elevation.

Another problem is that rides differ in length. To deal with this, I split the ride into, say, 128 intervals and took the last observation in each interval. So for a three-hour ride, I’d be sampling about once every 84 seconds.
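This sampling step is a one-liner in NumPy (my sketch, not necessarily the notebook’s exact code):

```python
import numpy as np

def resample_last(x, n=128):
    # split the series into n equal intervals and keep the last
    # observation in each, so rides of any length map to n points
    idx = (np.arange(1, n + 1) * len(x)) // n - 1
    return np.asarray(x)[idx]
```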

The chart at the top of this blog was created by first normalising each series to a standard range (-1, +1). Method a) was used to create two images: longitude was added to latitude, and altitude was multiplied by speed. These two images were then combined using method b). Using these measures will produce pretty much the same chart each time the ride is done. In contrast, an image that is totally unique to the ride can be produced using data relating to the individual rider. The image below uses the same recipe to combine speed, heart rate, power and cadence. If this had been a particularly special ride, the image would make a nice personal memento.

[Image: A different take on four laps of Richmond Park]
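As a minimal sketch of the recipe, assuming the resampled series are held in NumPy arrays named longitude, latitude, altitude and speed (names and normalisation details are my assumptions):

```python
import numpy as np

def normalise(x):
    # rescale a series to the standard range (-1, +1)
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

lon, lat, alt, spd = (normalise(s) for s in (longitude, latitude, altitude, speed))

# method a): pixel (i, j) combines two different series at times i and j
img1 = lon[:, None] + lat[None, :]   # longitude added to latitude
img2 = alt[:, None] * spd[None, :]   # altitude multiplied by speed

# method b): add the corresponding pixel values of the two images
combined = img1 + img2
```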

For anyone interested in the underlying code, I have posted a Jupyter notebook here.

References

Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks, Wang Z, Oates T, AAAI Workshops (2015), https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10179/10251

 

Fuel for the work required: periodisation of carbohydrate intake

[Image: Fuel for the work required, Impey et al, Sports Med (2018) 48:1031–1048]

Last week I attended an event announcing the forthcoming launch of a new fitness app called Pillar. It offers combined training and nutrition advice to help athletes achieve their goals. Pillar is backed by a strong scientific team including Professor James Morton, Team Sky Head of Performance Nutrition, and Professor Graeme Close, England Rugby Head of Performance Nutrition.

James Morton gave a fascinating presentation about the periodisation of carbohydrate (CHO) fuelling, including a detailed description of the nutrition strategy he created to support Chris Froome’s famous 80km attack on stage 19 of the 2018 Giro d’Italia. His recent paper explains the underlying science. These are some of the key points.

  • Always go into competition fully fuelled with carbohydrate
    • Well-fuelled athletes perform for longer at higher intensities than those with depleted reserves
    • Basic biochemistry: fat burning is too slow and supplies of phosphocreatine are too small to sustain intensities over 85% of VO2max
    • Theory is backed up by experiment
  • There are pros and cons to training with low levels of carbohydrate
    • Positive effects: Improved fat burning, changes in cell signalling, gene expression and enzyme/protein activity, potential to save precious glycogen stores for crucial attacks later in a race
    • Negative effects: Inconsistent evidence of improved performance, ability to complete training session may be compromised, reduced immunity, risks to bone health, loss of top end for those on high fat/low carb (ketogenic) diet
  • Different ways to train with low carbohydrate
    • doing two sessions in one day with minimal refuelling
    • low carb evening meal and breakfast: sleep low, train low the next morning
    • fasted rides
    • high fat/low carb diet

Is there a structured method of training that provides the benefits without the negatives?

  • The authors propose a glycogen threshold hypothesis
    • Positive effects seem to be dependent on commencing with muscle glycogen levels within a specific range
    • Levels have to be low enough to promote positive effects
    • But when too low, protein synthesis may be impaired and the ability to complete sessions is compromised
  • This leads to the idea of periodising carbohydrate consumption, meal by meal, around planned training sessions
  • “Fuelling for the work required”
    • low carbs before and during lighter training sessions
    • high carbs in preparation for and during rides with greater intensities
    • always refuel after training
  • The diagram above provides an example for an elite endurance cyclist
    • The red, amber, green colour coding indicates low, medium or high carbohydrate consumption
    • On day 1, the athlete aims to “train high” for a hard session
    • A lighter evening meal on day 1 prepares to “sleep low, train low” ahead of a lower intensity session on day 2
    • Carbohydrate intake rises after exercise on day 2 in anticipation of a high intensity session on day 3
    • Fuelling is moderated on the evening of day 3 as day 4 is assigned as a recovery day
    • Carbohydrate rises later on day 4 to prepare for the next block of training
  • The Pillar app aims to bring these leading-edge scientific principles to amateur cyclists and other athletes

In order to put this into action, you need to know how much carbohydrate you are consuming. My assumption has been that my diet is reasonably healthy, but I have never actually measured it. So I have been experimenting with MyFitnessPal, a free app that can be downloaded onto your phone. It provides a simple and convenient way to track the nutritional composition of your diet, including a barcode scanner that recognises most foods. You can link it to other apps, such as Training Peaks, to take account of energy expended. However, neither of these tools plans nutrition ahead of training sessions. Pillar aims to fill this gap. It will be interesting to see whether it turns out to be successful.

References

Fuel for the Work Required: A Theoretical Framework for Carbohydrate Periodization and the Glycogen Threshold Hypothesis, SG Impey, MA Hearris, KM Hammond, JD Bartlett, J Louis, G Close, JP Morton, Sports Med (2018) 48:1031–1048, https://doi.org/10.1007/s40279-018-0867-7

Fuel for the work required: a practical approach to amalgamating train-low paradigms for endurance athletes, Impey SG, Hammond KM, Shepherd SO, Sharples AP, Stewart C, Limb M, Smith K, Philp A, Jeromson S, Hamilton DL, Close GL, Morton JP, Physiol Rep. 2016 May;4(10). pii: e12803. doi: 10.14814/phy2.12803

Low carbohydrate, high fat diet impairs exercise economy and negates the performance benefit from intensified training in elite race walkers, Burke LM, Ross ML, Garvican-Lewis LA, Welvaert M, Heikura IA, Forbes SG, Mirtschin JG, Cato LE, Strobel N, Sharma AP, Hawley JA, J Physiol. 2017;595:2785–807

Low energy availability assessed by a sport-specific questionnaire and clinical interview indicative of bone health, endocrine profile and cycling performance in competitive male cyclists, BMJ Open Sport & Exercise Medicine, https://doi.org/10.1136/bmjsem-2018-000424

Machine learning for a medical study of cyclists


This blog provides a technical explanation of the analysis underlying the medical paper about male cyclists described previously. Part of the skill of a data scientist is to choose, from the arsenal of machine learning techniques, the tools appropriate for the problem at hand. In the study of male cyclists, I was asked to identify significant features of a medical data set. This article describes how the problem was tackled.

Data

Fifty road racing cyclists, riding at the equivalent of British Cycling 2nd category or above, were asked to complete a questionnaire, provide a blood sample and undergo a DXA scan – a low intensity X-ray used to measure bone density and body composition. I used Python to load and clean up the data, so that all the information could be represented in Pandas DataFrames. As expected, this time-consuming but essential step required careful attention and cross-checking, combined with the perseverance that is always necessary to be sure of working with a clean data set.

The questionnaire included numerical data and text relating to cycling performance, training, nutrition and medical history. As a result of interviewing each cyclist, a specialist sports endocrinologist identified a number of individuals who were at risk of low energy availability (EA), due to a mismatch between nutrition and training load.

Bone density was measured throughout the body, but the key site of interest was the lumbar spine (L1-L4). Since bone density varies with age and between males and females, it was logical to use the male, age-adjusted Z-score, expressing values in standard deviations above or below the comparable population mean.

The measured blood markers were provided in the relevant units, alongside the normal range. Since the normal range is defined to cover 95% of the population, I assumed that the population could be modelled by a Gaussian distribution in order to convert each blood result into a Z-score. This aligned the scale of the blood results with the bone density measures.
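Under that assumption, the conversion is only a few lines: the normal range spans the mean plus or minus 1.96 standard deviations. A minimal sketch:

```python
def blood_z_score(value, range_low, range_high):
    # normal range assumed to cover the central 95% of a Gaussian,
    # i.e. mean ± 1.96 standard deviations
    mu = 0.5 * (range_low + range_high)
    sigma = (range_high - range_low) / (2 * 1.96)
    return (value - mu) / sigma
```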

Analysis

I decided to use the Orange machine learning and data visualisation toolkit for this project. It was straightforward to load the data set of 46 features for each of the 50 cyclists. The two target variables were lumbar spine Z-score (bone health) and 60 minute FTP watts per kilo (performance). The statistics confirmed the researchers’ suspicion that the lumbar spine bone density of the cyclists would be below average, partly due to the non-weight-bearing nature of the sport. Some of the readings were extremely low (verging on osteoporosis) and the question was why.

Given the relatively small size of the data set (a sample of 50), the most straightforward approach for identifying the key explanatory variables was to search for an optimal Decision Tree. Interestingly, low EA turned out to be the most important variable in explaining lumbar spine bone density, followed by prior participation in a weight-bearing sport and levels of vitamin D (which were, in most cases, below the ideal level for athletes). Since I had used all the data to generate the tree, I made use of Orange’s data sampler to confirm that these results were highly robust. This had some similarities with the Random Forest approach. Although Orange produces some simple graphical tools, such as decision tree diagrams, I used Python to generate my own versions for the final publication.
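The analysis itself was done in Orange, but the equivalent tree search is easy to sketch in scikit-learn; the column names below are hypothetical stand-ins for the study’s features:

```python
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor, plot_tree

# df: the cleaned Pandas DataFrame; column names are illustrative
features = ["low_ea", "weight_bearing_sport", "vitamin_d_z"]
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=5)
tree.fit(df[features], df["lumbar_spine_z"])

plot_tree(tree, feature_names=features, filled=True)
plt.show()
```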

 

Finding a robust decision tree is one thing, but it was essential to verify that the decision variables were statistically significant. For this, Orange provides box plots for discrete variables. For my own peace of mind, I recalculated all of the Student’s t-statistics to confirm that they were correct and significant. The charts below show an example of an Orange box plot and the final graphic used in the publication.
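The significance check amounts to a two-sample t-test between groups, which is one call in SciPy (column names hypothetical again):

```python
from scipy import stats

low_ea = df.loc[df["low_ea"] == 1, "lumbar_spine_z"]
adequate = df.loc[df["low_ea"] == 0, "lumbar_spine_z"]

# Welch's t-test avoids assuming equal variances in the two groups
t_stat, p_value = stats.ttest_ind(low_ea, adequate, equal_var=False)
```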

The Orange toolkit includes other nice data visualisation tools. I particularly liked the flexibility available to make scatter plots. This inspired the third figure in the publication, which showed the most important variable explaining performance. This chart highlights a cluster of three cyclists with low EA, whose FTP watts/kg were lower than expected, based on their high training load. I independently checked the T-statistics of the regression coefficients to identify relationships that were significant, like training load, or insignificant, like percentage body fat.
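One way to reproduce those checks is an ordinary least squares fit, whose summary reports a t-statistic and p-value per coefficient. A sketch with assumed column names:

```python
import statsmodels.api as sm

# regress performance on candidate explanatory variables
X = sm.add_constant(df[["weekly_training_hours", "body_fat_pct"]])
fit = sm.OLS(df["ftp60_per_kg"], X).fit()
print(fit.summary())  # t-statistics and p-values for each coefficient
```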

Conclusions

The Orange toolkit turned out to be extremely helpful in identifying relationships that fed directly into the conclusions of an important medical paper highlighting potential health risks and performance drivers for high-level cyclists. Restricting nutrition through diet or fasted rides can lead to low energy availability, which can cause endocrine responses in the body that reduce lumbar spine bone density, resulting in vulnerability to fracture and slow recovery. This is known as Relative Energy Deficiency in Sport (RED-S). Despite the obsession of many cyclists with reducing body fat, the key variable explaining functional threshold power watts/kg was weekly training load.

References

Low energy availability assessed by a sport-specific questionnaire and clinical interview indicative of bone health, endocrine profile and cycling performance in competitive male cyclists, BMJ Open Sport & Exercise Medicine, https://doi.org/10.1136/bmjsem-2018-000424

Relative Energy Deficiency in Sport, British Association of Sports and Exercise Medicine

Synergistic interactions of steroid hormones, British Journal of Sports Medicine

Cyclists: Make No Bones About It, British Journal of Sports Medicine

Male Cyclists: bones, body composition, nutrition, performance, British Journal of Sports Medicine

 

Fuelling for Cycling Performance

[Image: Chris Froome (LaPresse)]

Some commentators were skeptical of Team Sky’s explanation for Chris Froome’s race-winning 80km attack on stage 19 of the Giro. His success was put down to the detailed planning of nutrition throughout the ride, with staff positioned at strategic refuelling points along the entire route. If you consider how skeletal the riders look after two and a half weeks of relentless competition, along with the limits on what can be physically absorbed between stages, the nutrition story makes a lot of sense. Did Yates, Pinot and Aru dramatically fall by the wayside simply because they ran out of energy?

The best performing cyclists have excellent balancing skills. This includes the ability to match energy intake with energy demand. The pros benefit from teams of support staff monitoring every aspect of their nutrition and performance. However, many serious club-level cyclists pick up fads and snippets of information from social media or the cycling press that lead them to try out all kinds of ideas, in an unscientific manner, in the hope of achieving an improvement in performance. Some of these practices have potentially harmful effects on the body.

Competitive riders can become obsessed with losing weight and sticking to extremely tough training schedules, leading to both short-term and long-term energy deficits that are detrimental to both health and performance. One of the physiological consequences can be a reduction in bone density, which is particularly significant for cyclists, who do not benefit from gravitational stress on bones, due to the non-weight-bearing nature of the sport. In a recent paper, colleagues at Durham University and I describe an approach for identifying male cyclists at risk of Relative Energy Deficiency in Sport (RED-S).

You need a certain amount of energy simply to maintain normal life processes, but an athlete can force the body into a deficit in two ways: by intentionally or unintentionally restricting energy intake below the level required to meet demand or by increasing training load without a corresponding increase in fuelling.
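Energy availability has a standard definition in the sports science literature, compact enough to express as a one-liner; the threshold in the comment is the commonly quoted guide value, not a figure from this post:

```python
def energy_availability(intake_kcal, exercise_kcal, fat_free_mass_kg):
    # energy left to support normal physiological function,
    # per kilogram of fat-free mass per day
    return (intake_kcal - exercise_kcal) / fat_free_mass_kg

# values below roughly 30 kcal per kg FFM per day are commonly
# flagged as low energy availability
```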

[Image: Energy balance]

Our bodies have a range of ways to deal with an energy deficit. For the average, slightly overweight casual cyclist, burning some fat is not a bad thing. However, most competitive cyclists are already very lean, making the physiological consequences of an energy deficit more serious. Changes arise in the endocrine system that controls the body’s hormones. Certain processes can shut down, such as female menstruation, and males can experience a reduction in testosterone. Sex steroids are important for maintaining healthy bones. In our study of 50 male competitive cyclists, the average bone density in the lumbar spine, measured by DXA scan, was significantly below normal. Some relatively young cyclists had the bones of a 70-year-old man!

The key variable associated with poor bone health was low energy availability, i.e. male cyclists exhibiting RED-S. These riders were identified using a questionnaire followed by an interview with a sports endocrinologist. The purpose of the interview was to go through the responses in more detail, as most people have a tendency to put a positive spin on their answers. There were two important warning signs.

  • Long-term energy deficit: a prolonged significant weight reduction to achieve “race weight”
  • Short-term energy deficit: one or more fasted rides per week

Among riders with low energy availability, bone density was not so bad for those who had previously engaged in a weight-bearing sport, such as running. For cyclists with adequate energy availability, those with very low levels of vitamin D had weaker bones. Across the 50 cyclists, most had vitamin D levels below the 90 nmol/L recommended for athletes, including some who were taking vitamin D supplements – but clearly not enough. Studies have shown that the advantages for athletes of taking vitamin D supplements include better bone health, improved immunity and stronger muscles, so why wouldn’t you?

In terms of performance, British Cycling race category was positively related to a rider’s power-to-weight ratio, evaluated by 60 minute FTP per kg (FTP60/kg). Out of all the measured variables, including questionnaire responses, blood tests, bone density and body composition, the strongest association with FTP60/kg was the number of weekly training hours. There was no significant relationship between percentage body fat and FTP60/kg. So if you want to improve performance, rather than starving yourself in the hope of losing body fat, you are better off getting on your bike and training with adequate fuelling.

Cyclists using power meters have the advantage of knowing exactly how many calories they have used on every ride. In addition to taking on fuel during the ride, especially when racing, the greatest benefits accrue from having a recovery drink and some food immediately after completing rides of more than one hour.
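The reason the kilojoules reported by a power meter map almost one-to-one onto food calories is that human gross efficiency (roughly 20-25%) nearly cancels the kJ-to-kcal conversion factor. A quick illustration (the efficiency figure is an assumption, not from the study):

```python
def ride_kcal(avg_watts, duration_s, efficiency=0.24):
    # mechanical work done, in kJ
    work_kj = avg_watts * duration_s / 1000.0
    # food energy burned: divide by efficiency, convert kJ to kcal
    return work_kj / efficiency / 4.184

print(ride_kcal(250, 2 * 3600))  # 250 W for 2 h: 1800 kJ of work, ~1790 kcal
```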

For those wishing to know more about RED-S, the British Association of Sports and Exercise Medicine has provided a web resource.

A related blog will explore the machine learning and statistical techniques used to analyse the data for this study.

References

Low energy availability assessed by a sport-specific questionnaire and clinical interview indicative of bone health, endocrine profile and cycling performance in competitive male cyclists, BMJ Open Sport & Exercise Medicine, https://doi.org/10.1136/bmjsem-2018-000424

Relative Energy Deficiency in Sport, British Association of Sports and Exercise Medicine

Synergistic interactions of steroid hormones, British Journal of Sports Medicine

Cyclists: Make No Bones About It, British Journal of Sports Medicine

Male Cyclists: bones, body composition, nutrition, performance, British Journal of Sports Medicine

 

Cycling Through Artistic Styles


My earlier post on cycling art provided an engaging way to consider the creative potential of deep learning. I have found myself frequently gravitating back to the idea, using the latest code available over at fast.ai. The method uses a neural network to combine the content of a photograph with the style of an artist, but it takes a few trials to find the right balance of content versus style. This led to the idea of generating a range of images and then running them together as a movie that gradually shifts from the base image to a raw interpretation of the artist’s style.

Artistic styles

Working with a range of artistic styles, from impressionist to abstract, I found that the weights producing the most interesting images varied according to both the photograph and the artistic style.

My selected best images are shown below, next to snippets of the corresponding artworks. It turned out that the impressionist artists (Monet, Van Gogh, Cézanne and Braque) maintained the content of the image, in spite of being more heavily weighted to artistic style. In contrast, the more monochromatic styles (O’Keeffe, Polygons, Abstract as well as Dali) needed to be more strongly weighted towards content, in order to preserve the cyclist in the image. The selections for Picasso and Pollock were evenly balanced.

Every image is unique, and sometimes real surprises pop up. For example, using Picasso’s style, the mountains are interpreted as rooftops, complete with windows and doors. Strange eyes peer out of the background of finger-shapes in the Dali image, and the mountains have become Monet’s water lilies. The Pollock image came out very nicely.

Deep learning

The approach was based on the method described in the paper referenced below. Running the code on a cloud-based GPU, it took about 30 seconds for a neural network to learn to generate an image with the desired characteristics. The learning process was achieved by minimising a loss function using gradient descent. The clever part lay in defining an appropriate loss function. In this instance, the sample image was passed through a separate pre-trained neural network (VGG16), where the activations at various layers in the network were compared with those generated by the photograph and the artwork. The loss function combined the difference in photographic content with the difference in artistic style, where the critical parameter was the content weighting factor.
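A condensed sketch of that loss in PyTorch, following the structure from Gatys et al. rather than the fast.ai code verbatim (layer selection and the VGG16 plumbing are omitted):

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (channels, height, width) activations from one VGG16 layer
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_transfer_loss(gen_content, photo_content,
                        gen_styles, art_styles, content_weight):
    # content term: match activations of the photograph at one layer
    content_loss = F.mse_loss(gen_content, photo_content)
    # style term: match Gram matrices of the artwork at several layers
    style_loss = sum(F.mse_loss(gram_matrix(g), gram_matrix(a))
                     for g, a in zip(gen_styles, art_styles))
    return content_weight * content_loss + style_loss
```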

I decided to vary the content weighting factor logarithmically between around 0.1 and 100, to obtain a full range of content-to-style combinations. A movie was then produced simply by packing the images together, one after the other.
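Something like the following, where generate_image stands in for the optimisation loop sketched above and is hypothetical:

```python
import numpy as np
import imageio

# sweep the content weight logarithmically from 0.1 to 100
weights = np.logspace(-1, 2, num=30)
frames = [generate_image(w) for w in weights]  # each frame: a uint8 array

# pack the frames into an animation, one after the other
imageio.mimsave("style_sweep.gif", frames, duration=0.2)
```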

References

A Neural Algorithm of Artistic Style, Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

 

 

Strava – Automatic Lap Detection

[Image: Opening Laps of Hillingdon Race]

As you upload your data, you accumulate a growing history of rides. It is helpful to find ways of classifying different types of activities. Races and training sessions often include laps that are repeated during the ride. Many GPS units can automatically record laps as you pass the point where you began your ride or last pressed the lap button. However, if the laps were not recorded on the device, it is tricky to recover them. This article investigates how to detect laps automatically.

First consider the simple example of a 24 lap race around the Hillingdon cycle circuit. Plotting the GPS longitude and latitude against time displays repeating patterns. It is even possible to see the “omega curve” in the longitude trace. So it should be possible to design an algorithm that uses this periodicity to calculate the number of laps.

[Image]

This is a common problem in signal processing, where the Fourier Transform offers a neat solution. This effectively compares the signal against all possible frequencies and returns values with the best fit in the form of a power spectrum. In this case, the frequencies correspond to the number of laps completed during the race. In the bar chart below, the power spectrum for latitude shows a peak around 24. The high value at 25 probably shows up because I stopped my Garmin slightly after the finish line. A “harmonic” also shows up at 49 “half laps”. Focussing on the peak value, it is possible to reconstruct the signal using a frequency of 24, with all others filtered out.
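A minimal version of the idea (my sketch, assuming the latitude samples sit in a NumPy array called lat): for a 1 Hz recording, the index of the dominant non-zero frequency in the power spectrum is the estimated lap count.

```python
import numpy as np

# latitude at 1 Hz, detrended by removing the mean
spectrum = np.abs(np.fft.rfft(lat - lat.mean())) ** 2

# index k corresponds to k complete cycles over the recording;
# skip k = 0 (the DC component)
laps = int(np.argmax(spectrum[1:]) + 1)
```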

[Images: power spectrum and reconstructed signal]

So we’re done – we can use a Fourier Transform to count the laps! Well, not quite. The problem is that races and training sessions do not necessarily start and end at exactly the starting point of a lap. As a second example, consider my regular Saturday morning club run, where I ride from home to the meeting point at the centre of Richmond Park, complete four laps and then return home. As shown in the chart below, a simple Fourier Transform approach suggests that the ride covered 5 laps because, by chance, the combined time for me to ride south to the park and north back home almost exactly matches the time to complete a lap of the park. Visually, it is clear that the repeating pattern only holds for four laps.

[Image]

Although it seems obvious where the repeating pattern begins and ends, the challenge is to improve the algorithm to find this automatically. A brute force method would compare every GPS location with every other location on the ride, which would involve about 17 million comparisons for this ride; you would then also need to exclude points recorded shortly before or after each other, depending on the speed of the rider. Furthermore, the distance between two GPS points involves a complex formula, the haversine formula, which accounts for the curvature of the Earth.
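For reference, the haversine formula itself is only a few lines:

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, radius=6371000.0):
    # great-circle distance in metres between two points given in degrees
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * radius * np.arcsin(np.sqrt(a))
```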

Fortunately, two tricks make the calculation more tractable. Firstly, the peak in the power spectrum indicates roughly how far ahead of the current time point to look for a location potentially close to the current position. Given a generous margin of, say, 15% variation in lap times, this reduces the number of comparisons by a whole order of magnitude. Secondly, since we are looking for points that are very close together, we only need to multiply the longitudes by the cosine of the latitude (because lines of longitude meet at the poles); then a simple Euclidean sum of the squares of the differences locates points within a desired proximity of, say, 10 metres. This provides a quicker way to determine the points where the rider was “lapping”. These are shaded in yellow in the upper chart and shown in red on the longitude/latitude plot below. The orange line on the upper chart shows, on the right hand scale, the rolling lap time, i.e. the number of seconds to return to each point on the lap, from which the average speed can be derived.
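In code, the flat-earth shortcut reduces each proximity test to two subtractions and two multiplications (a sketch, assuming lon and lat arrays in degrees):

```python
import numpy as np

R = 6371000.0  # Earth's radius in metres

# project to a local flat approximation: longitude shrinks with latitude
x = np.radians(lon) * np.cos(np.radians(lat).mean())
y = np.radians(lat)

def within(i, j, metres=10.0):
    # True when points i and j are within the desired proximity
    return (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 < (metres / R) ** 2
```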

[Image]

Two further refinements were required to make the algorithm more robust. One might ask whether it makes a difference using latitude or longitude. If the lap involved riding back and forth along a road that runs due East-West, the laps would show up on longitude but not latitude. This can be solved by using a 2-dimensional Fourier Transform and checking both dimensions. This, in turn, leads to the second refinement, exemplified by the final example of doing 12 ascents of the Nightingale Lane climb. The longitude plot includes the ride out to the West, 12 reps and the Easterly ride back home.

[Image]

The problem here was that the variation in longitude/latitude on the climb was tiny compared with the overall ride. Once again, the repeating section is obvious to the human eye, but more difficult to unpick from its relatively low peak in the power spectrum. A final trick was required: to consider the amplitude of each frequency in decreasing order of power and look out for any higher frequency peaks that appear early on the list. This successfully identified the relevant part of the ride, while avoiding spurious observations for rides that did not include laps.

The ability of an algorithm to tag rides that include laps is helpful for classifying different types of sessions. Automatically marking the laps would allow riders and coaches to compare laps against each other over a training session or a race. A potential AI-powered robo-coach could say, “Ah, I see you did 12 repeats in your session today… and apart from laps 9 and 10, you were getting progressively slower….”

 

Strava Power Curve

[Image: Comparing Historic Power Curves]

If you use a power meter with Strava Premium, your Power Curve provides an extremely useful way to analyse your rides. In the past, it was necessary to perform all-out efforts, in laboratory conditions, to obtain one or two data points and then try to estimate a curve. But now your power meter records every second of every ride. If you have sustained a number of all-out efforts over different time intervals, your Power Curve can tell you a lot about what kind of rider you are and how your strengths and weaknesses are changing over time.

Strava provides two ways to view your Power Curve: a historical comparison or an analysis of a particular ride. Using the Training drop-down menu, as shown above, you can compare two historic periods. The curves display the maximum power sustained over time intervals from 1 second to the length of your longest ride. The times are plotted on a log scale, so that you can see more detail for the steeper part of the curve. You can select desired time periods and choose between watts or watts/kg.
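The underlying calculation is a maximal moving average for each duration, which is easy to sketch with a cumulative sum (my version, not Strava’s actual code; watts is assumed to be a 1 Hz array):

```python
import numpy as np

def power_curve(watts, durations):
    # best mean power over each duration (in seconds), for 1 Hz samples;
    # the cumulative sum makes each duration a single vectorised pass
    c = np.insert(np.cumsum(watts, dtype=float), 0, 0.0)
    return {d: float((c[d:] - c[:-d]).max() / d)
            for d in durations if d <= len(watts)}

curve = power_curve(watts, [1, 5, 60, 300, 1200, 3600])
```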

The example above compares this last six weeks against the year to date. It is satisfying to see that the six week curve is at, or very close to, the year to date high, indicating that I have been hitting new power PBs (personal bests) as the racing season picks up. The deficit in the 20-30 minute range indicates where I should be focussing my training, as this would be typical of a breakaway effort. The steps on the right hand side result from having relatively few very long rides in the sample.

Note how the Power Curve levels off over longer time periods: there was a relatively small drop from my best hour effort of 262 watts to 243 watts for more than two hours. This is consistent with the concept of a Critical Power that can be sustained over a long period. You can make a rough estimate of your Functional Threshold Power by taking 95% of your best 20 minute effort or by using your best 60 minute effort, though the latter is likely to be lower, because your power would tend to vary quite a bit due to hills, wind, drafting etc., unless you did a flat time trial. Your 60 minute normalised power would be better, but Strava does not provide a weighted average/normalised power curve. An accurate current FTP is essential for a correct assessment of your Fitness and Freshness.
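As arithmetic, the two rules of thumb look like this (the 20-minute figure is illustrative; the hour figure is the 262 watts quoted above):

```python
best_20min_watts = 290.0   # hypothetical best 20-minute effort
best_60min_watts = 262.0   # best hour effort from the curve above

ftp_from_20min = 0.95 * best_20min_watts   # ~276 W via the 20-minute rule
ftp_from_60min = best_60min_watts          # 262 W, typically the lower estimate
```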

Switching the chart to watts/kg gives a profile of what kind of rider you are, as explained in this Training Peaks article. Sprinters can sustain very high power for short intervals, whereas time trial specialists can pump out the watts for long periods. Comparing myself against the performance table, my strengths lie in the 5 minutes to one hour range, with a lousy sprint.

[Image: Single Ride Power Curve versus Historic]

The other way to view your Power Curve comes under the analysis of a particular ride. This can be helpful in understanding the character of the ride or for checking that training objectives have been met. The target for the session above was to do 12 reps on a short steep hill. The flat part of the curve out to about 50 seconds represents my best efforts. Ideally, each repetition would have been close to this. Strava has the nice feature of highlighting the part of the course where the performance was achieved, as well as the power and date of the historic best. The hump on the 6-week curve at 1:20 occurred when I raced some club mates up a slightly longer steep hill.

If you want to analyse your Power Curve in more detail, you should try Golden Cheetah. See other blogs on Strava Fitness and Freshness, Strava Ride Statistics or going for a Strava KOM.

 

Suddenly Summer in Richmond Park

[Image: Tour de Richmond Park Leaderboard – year to date 2018]

This week’s dramatic change in the weather has seen a string of quick laps recorded for the Tour de Richmond Park. Twelve out of the fastest fifteen efforts were completed on 18/19 April. Apart from the sheer pleasure of finally being able to ride in short sleeves, two meteorological factors came into play: higher temperatures and a favourable wind direction.

As noted in an earlier blog, changes in temperature have a far greater impact on air density than variations in atmospheric pressure and humidity. When I completed a lap last week, the temperature was 6°C, but on 19 April it was closer to 26°C. The warmer weather had the effect of reducing air density by more than 7%. Theoretically, this should allow you to ride about 2% faster for the same effort. Using a physics model I built last year to analyse Strava segments, it is possible to estimate the effect of variations in the factors that determine your position on the leaderboard. Based on an average power of 300W and some reasonable estimates of other variables, this rise in temperature would reduce your time from 16:25 to 16:04 (as expected, 2% quicker).
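The effect can be checked with the ideal gas law: at constant power on the flat, where aerodynamic drag dominates, speed scales roughly with the inverse cube root of air density. A sketch (this is not the full physics model, just the density term):

```python
def air_density(temp_c, pressure_pa=101325.0):
    # ideal gas law for dry air; R is the specific gas constant
    R = 287.05
    return pressure_pa / (R * (temp_c + 273.15))

rho_cold, rho_warm = air_density(6.0), air_density(26.0)
reduction = 1 - rho_warm / rho_cold                  # ~7% lower density

# drag force scales with rho, so speed scales as rho^(-1/3)
speed_gain = (rho_cold / rho_warm) ** (1 / 3) - 1    # ~2% faster
```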

The other key factor is the wind. On 18/19 April, it was blowing from the south or southeast. This was not the mythical easterly that provides a tailwind up Sawyers Hill, but according to the analysis in another earlier blog, it is generally beneficial for doing a quick lap around the park.

I clocked up a decent time this morning, to reach 15th place on the year-to-date leaderboard, but I failed to take my own advice on the best time of day. The traffic tends to be lighter first thing in the morning or in the evening, when the park closes. After waiting until mid-morning for the temperature to rise, I ended up being blocked by slow-moving vehicles on two occasions.

Although it was frustrating having to brake for traffic, the really puzzling thing was an average power reading of 254W. This is much lower than the other riders on the leaderboard. Last week, I did a lap in 16:44 at an average power of 313W, which seems much more reasonable. Admittedly, I was wearing a skin suit today, but that would not have saved 50W. It is possible that I had some drafting benefit from the numerous cars in the park and some favourable gusts of wind. However, my suspicion is that my Garmin Vector pedals had not calibrated correctly, after I switched them from my road bike, before today’s ride.

The concluding message is get on your bike and enjoy the sunshine. And why not try to beat your best time for the Tour de Richmond Park?

 

Which team is that?


My last blog explored the effectiveness of deep learning in spotting the difference between Vincenzo Nibali and Alejandro Valverde. Since the faces of the riders were obscured in many of the photos, it is likely that the neural network was basing its evaluations largely on the colours of their team kit. A natural next challenge is to identify a rider’s team from a photograph. This task parallels the approach to the Kaggle dog breed competition used in lesson 2 of the fast.ai course on deep learning.

Eighteen World Tour teams are competing this year. So the first step was to trawl the Internet for images, ideally of riders in this year’s kit. As before, I used an automated downloader, but this posed a number of problems. For example, searching for “Astana” brings up photographs of the capital of Kazakhstan, so I narrowed things down by searching for “Astana 2018 cycling team”. After eliminating very small images, I ended up with a total of about 9,700 images, but these still included a certain amount of junk that I did not have time to weed out, such as photos of footballers or motorcycles for the “Sky Racing Team”.

The following small sample of training images is generally OK, though it includes images of Scott bikes rather than Mitchelton-Scott riders and a picture of Sunweb’s Wilco Kelderman labelled as FDJ. However, with around 500-700 images of each team, I pressed on, noting that, for some reason, there were only 166 of Movistar, and these included the old style kit.

[Image: Small sample of training images]

For training on this multiple classification problem, I adopted a slightly more sophisticated approach than before. Taking a pre-trained Resnet50 model, I performed some initial fine-tuning, on images rescaled to 224×224. I settled on an optimal learning rate of 1e-3 for the final layer, while allowing some training of lower layers at much lower rates. With a view to improving generalisation, I opted to augment the training set with random changes, such as small shifts in four directions, zooming in up to 10%, adjusting lighting and left-right flips. After initial training, accuracy was 52.6% on the validation set. This was encouraging, given that random guesses would have achieved a rate of 1 in 18 or 5.6%.
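The training itself used the fast.ai library; an equivalent sketch of the same ideas in plain torchvision (the hyperparameters come from the description above, the rest is my assumption) looks like this:

```python
import torch
from torchvision import models, transforms

# augmentations approximating those described: shifts/zoom, lighting, flips
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ColorJitter(brightness=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

model = models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 18)  # 18 World Tour teams

# discriminative learning rates: 1e-3 on the new final layer,
# much lower on the deepest pre-trained block
optimizer = torch.optim.Adam([
    {"params": model.fc.parameters(), "lr": 1e-3},
    {"params": model.layer4.parameters(), "lr": 1e-4},
])
```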

Taking a pro tip from fast.ai, training proceeded with the images at a higher resolution of 299×299. The idea is to prevent overfitting during the early stages, but to improve the model later on by providing more data for each image. This raised the accuracy to 58.3% on the validation set. This figure was obtained using a trick called “test time augmentation”, where each final prediction is based on the average prediction of five different “augmented” versions of the image in question.
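Test time augmentation is simple to express; fast.ai provides it built in, but a sketch of the idea is:

```python
import torch

def tta_predict(model, image, augment, n=5):
    # average softmax outputs over n randomly augmented copies of the image
    model.eval()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(augment(image).unsqueeze(0)), dim=1)
            for _ in range(n)
        ])
    return probs.mean(dim=0)
```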

Given the noisy nature of some of the images used for training, I was pleased with this result, but the acid test was to evaluate performance on unseen images. So I created a test set of two images of a lead rider from each squad and asked the model to identify the team. These are the results.

[Image: 75% accuracy on the test set]

The trained Resnet50 correctly identified the teams in 27 out of 36 images. Interestingly, there were no predictions of Movistar or Sky. This could be partly due to the underrepresentation of Movistar in the training set. Froome was mistaken for AG2R and Astana, in column 7, rows 2 and 3. In the first image, his 2018 Sky kit was quite similar to Bardet’s to the left, and in the second image the sky did appear to be Astana blue! It is not entirely obvious why Nibali was mistaken for Sunweb and Astana, in the top and bottom rows. However, the huge majority of predictions were correct. An overall success rate of 75%, based on an afternoon’s work, was pretty amazing.

The results could certainly be improved by cleaning up the training data, but this raises an intriguing question about the efficacy of artificial intelligence. Taking a step back, I used Bing’s algorithms to find images of cycling teams in order to train an algorithm to identify cycling teams. In effect, I was training my network to reverse-engineer Bing’s search algorithm, rather than my actual objective of identifying cycling teams. If an Internet search for FDJ pulls up an image of Wilco Kelderman, my network would be inclined to suggest that he rides for the French team.

In conclusion, for this particular approach to reach or exceed human performance, expert human input is required to provide a reliable training set. This is why this experiment achieved 75%, whereas the top submissions on the dog breeds leaderboard show near perfect performance.

Valverde or Nibali?

Alejandro Valverde has kicked off the 2018 season with an impressive series of wins. Meanwhile, Vincenzo Nibali delighted the tifosi with his victory in Milan-San Remo. It is pretty easy to tell these two riders apart in the pictures above, but could a computer distinguish between them?

Following up on my earlier blogs about neural networks, I have been taking a look at the updated version of fast.ai’s course on deep learning. With the field advancing at a rapid pace, this provides a good way of staying up to date with the state of the art. For example, there are now a couple of cheaper alternatives to AWS for accessing high-powered GPUs, offered by Paperspace and Crestle. The latest fast.ai libraries include many new tools that work extremely well in practice.

There’s a view that deep learning requires hours of training on high-powered supercomputers, using thousands (or millions) of labelled examples, in order to learn to perform computer vision tasks. However, newer architectures, such as ResNet, are able to run on much smaller data sets. In order to test this, I used an image downloader to grab photos of Nibali and Valverde and manually selected about 55 decent pictures of each one.

I divided the images into a training set with about 40 images of each rider, a validation set with 10 of each and a test set containing the rest. Nibali appears in a range of different coloured jerseys, though the Astana blue is often present. Valverde is mainly wearing the old dark blue Movistar kit with a green M. There were more close-up shots of Nibali’s face than of Valverde’s.


I was able to fine-tune a pre-trained ResNet neural network for this task, using some of the techniques from the fast.ai toolbox, each designed to improve generalisation. The first trick was to augment the training set by performing minor transformations of the images at random, such as taking a mirror image, shifting left or right and zooming in a bit. The second set of tricks varied the rate of learning as the algorithm iterated repeatedly through the training set. A final useful technique created a set of variants of each test image and took the average of the predictions. Everything ran at lightning speed on a Paperspace GPU. After a run time of just a few minutes, the ResNet was able to score 17 out of 20 on the following validation set.

[Image]

The confusion matrix shows that the model correctly identified all the Nibali images, but it was wrong on three pictures of Valverde. The first incorrect image (below) shows Valverde in the red leader’s jersey of the Tour of Murcia, which is not dissimilar to Nibali’s new Bahrain Merida kit, though he was wearing red in two of his training images. In the second instance, the network was fooled by the change in colour of Movistar’s kit, which had become rather similar to Astana’s light blue. The figure of 0.41 above the close-up image indicates that the model assigned only a 41% probability to the image being Valverde. It probably fell below the critical 50% level, in spite of the blue/green colours, because there were far more close-up shots of Nibali than of Valverde in the training set.

A score of 17 out of 20 on the validation set is impressive. However, the network had access to the validation set during training, so this result is “in sample”. A proper “out of sample” evaluation of the model’s ability made use of the following ten images, comprising the test set that was kept aside.

[Image]

Amazingly, the model correctly identified 9 out of the 10 pictures it had not seen before. The only error was the Valverde selfie shown in the final image. In order to work better in practice, the training set would need to include more examples of the riders’ 2018 kit. A variant of the problem would be to identify the team rather than the rider. The same network can be trained for multiple classes rather than just two.

This experiment shows that it is pretty straightforward to run state-of-the-art image recognition tools remotely on a GPU somewhere in the cloud and come up with impressive results, even with a small data set.

The next blog describes how to identify a rider’s team.