Deep Learning – Faking It

Thumbnails of real bikes (Bianchi, Giant, Cube…)
Fake thumbnails generated randomly by a Wasserstein Generative Adversarial Network

My last blog showed the results of using a deep convolutional neural network to apply different artistic styles to a photograph of a cyclist. This article looks at the trendy topic of Generative Adversarial Networks (GANs). Specifically, I investigate the application of a Wasserstein GAN to generate thumbnail images of bicycles.

In the field of machine learning, a generative model is a model designed to produce examples from a particular target distribution. In statistics, the output might be samples from a Gaussian distribution, but we can extend the idea to create a model that produces examples of sonnets in the style of Shakespeare or pictures of cats… or bicycles.

The adversarial framework introduces an attractive idea from game theory: a competitive form of learning. While a generator learns from a corpus of real examples how to create realistic “fakes”, a discriminator (or critic) learns to distinguish between fakes and authentic examples. In fact, the generator is given the objective of trying to fool the discriminator. As the discriminator improves, the generator is driven to enhance the authenticity of its output, creating a virtuous cycle.

When originally proposed in 2014, Generative Adversarial Networks stimulated much interest, but it proved hard to make them work reliably in practice. One problem was “mode collapse”, where the generator becomes stuck, producing the same output all the time. However, this changed with the publication of a recent paper explaining how the earlier problems could be overcome by using a so-called Wasserstein loss function.

As an experiment, I downloaded a batch of images of bicycles from the Internet. After manually removing pictures with riders and close-ups of components, I had about 1,200 side views of road bikes (mostly with handlebars to the right, so you can see the chainset). After a few experiments, I reduced the dataset to 862 images by automatically selecting bikes against a white background.

Sample of real bike images

As a participant in part 2 of the excellent fast.ai deep learning course, I made use of WGAN code that runs on PyTorch. I loaded the bike images at a thumbnail size of 64×64 (training with larger images exceeded the memory constraints of the p2.xlarge GPU instance I’m running on AWS). It was initially disappointing to experience the mode collapse problem, especially because the authors of the WGAN paper claimed never to have encountered it. However, speeding up the learning rate of the generator seemed to solve the problem.

Although each fake was created from a completely random starting point, the generator learned to produce images against a white background, with two circles joined by lines. After a couple of hundred iterations the WGAN began to generate some recognisably bicycle-like images. Notice the huge variety. Some of the best ones are shown at the top of this post.
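For those curious about the mechanics, here is a minimal sketch of the WGAN training loop in PyTorch. It is not the actual fast.ai code: the network shapes, learning rates and clipping threshold are illustrative assumptions, but the structure (several critic updates per generator update, a difference-of-means loss, and weight clipping) follows the WGAN paper, and the generator is given a faster learning rate, reflecting the fix for mode collapse described above.

```python
import torch
import torch.nn as nn

nz = 100  # length of the random vector fed to the generator

# Toy fully connected stand-ins for the convolutional generator and critic.
netG = nn.Sequential(nn.Linear(nz, 256), nn.ReLU(),
                     nn.Linear(256, 64 * 64 * 3), nn.Tanh())
netD = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.ReLU(),
                     nn.Linear(256, 1))

# The WGAN paper recommends RMSprop; the learning rates here are assumptions.
optD = torch.optim.RMSprop(netD.parameters(), lr=5e-5)
optG = torch.optim.RMSprop(netG.parameters(), lr=2e-4)

def train_step(real_batch, n_critic=5, clip=0.01):
    # Train the critic several times for every generator update.
    for _ in range(n_critic):
        optD.zero_grad()
        fake = netG(torch.randn(real_batch.size(0), nz)).detach()
        # Wasserstein loss: mean critic score on fakes minus score on real bikes.
        lossD = netD(fake).mean() - netD(real_batch).mean()
        lossD.backward()
        optD.step()
        # Weight clipping enforces the Lipschitz constraint on the critic.
        for p in netD.parameters():
            p.data.clamp_(-clip, clip)
    # The generator tries to raise the critic's score on its fakes.
    optG.zero_grad()
    lossG = -netD(netG(torch.randn(real_batch.size(0), nz))).mean()
    lossG.backward()
    optG.step()
```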

Sample of images generated by WGAN

I tried to improve the WGAN’s images, using another deep learning tool: super resolution. This amazing technique is used to solve the seemingly impossible task of converting images from low resolution to high resolution. It is achieved by taking downgraded versions of a large dataset of high resolution images, then training a neural network to reproduce a high-res version from the corresponding low-res input. A super resolution network is able to learn about certain properties of the world, for example, it converts jagged curves into smooth ones – a feature I’d hoped might be useful for making wheels look rounder.

Example of a super resolution network on real photographs

Unfortunately, my super resolution experiments did not lead to the improvement I’d hoped for. Two possible explanations are that a) the fake images were not low-res photos and b) the network had been trained on many types of images other than bicycles with white backgrounds.

Example of super resolution network on a fake bicycle image

In the end I was pretty happy with the best of the 64×64 images shown above. They are at least as good as something I could draw by hand. This is an impressive example of unsupervised learning. The trained network is able to use some learned notion of what a bicycle looks like in order to produce new images that possess similar properties. With more time and training, I’m sure the WGAN could be improved, perhaps to the point where the images might provide creative inspiration for new bike designs.

References

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … Bengio, Y. (2014). Generative Adversarial Networks. 

Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. 

Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual Losses for Real-Time Style Transfer and Super-Resolution. 


Deep Learning – Cycling Art

I’ve always been fascinated by the field of artificial intelligence, but it is only recently that significant and rapid advances have been made, particularly in the area of deep learning, where artificial neural networks are able to learn complex relationships. Back in the early 1990s, I experimented with forecasting share prices using neural networks. Performance was not much better than that of the linear models we were using at the time, so we never managed money this way, though I did publish a paper on the topic.

I am currently following an amazing course offered by fast.ai that explains how to program and implement state-of-the-art techniques in deep learning. Image recognition is one of the most interesting applications. Convolutional neural networks are able to recognise the content and style of images. It is possible to explore what the network has “learnt” by examining the content of the intermediate layers, between the input and the output.

Over the last week I have been playing around with some Python code, provided for the course, that uses a package called Keras to build and run networks on a GPU using Google’s TensorFlow infrastructure. Starting with a modified version of the publicly available network called VGG16, which has been trained to recognise images, the idea is to combine the content of a photograph with the style of an artist.

An image is presented to the network as an array of pixel values. These are passed through successive layers, where a series of transformations is performed. These allow the network to recognise increasingly complex features of the original image. The content of the image is captured by refining an initially random set of pixels, until it generates similar higher level features.

The style of an artist is represented in a slightly different way. This time an initially random set of pixels is modified until it matches the overall mixture of colours and textures, in the absence of positional information.

Finally, a new image is created, again initially from random, but this time matching both the content of the photograph and the style of the artist. The whole process takes about half an hour on my MacBook Pro, though I also have access to a high-spec GPU on Amazon Web Services to run things faster.
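The heart of the method is the pair of loss functions, minimised with respect to the pixels of the generated image. Here is a sketch of how they can be written with the Keras backend, following the Gatys et al. formulation; the particular layer choices and loss weights used in the course are not reproduced here.

```python
from keras import backend as K

def content_loss(content_features, generated_features):
    # Penalise differences in high-level VGG16 feature maps, so the
    # generated image keeps the content of the photograph.
    return K.sum(K.square(generated_features - content_features))

def gram_matrix(features):
    # Correlations between channels capture colours and textures
    # while discarding positional information.
    flat = K.batch_flatten(K.permute_dimensions(features, (2, 0, 1)))
    return K.dot(flat, K.transpose(flat))

def style_loss(style_features, generated_features, height, width):
    # Penalise differences between the Gram matrices of the artist's
    # image and the generated image.
    S = gram_matrix(style_features)
    G = gram_matrix(generated_features)
    channels = 3
    size = height * width
    return K.sum(K.square(S - G)) / (4.0 * (channels ** 2) * (size ** 2))
```

Adjusting the weight on the content loss relative to the style loss shifts the balance between photograph and artist.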

Here are some examples of a cyclist in the styles of Cézanne, Braque, Monet and Dali. The Cézanne image worked pretty well. I scaled up the content versus style for Braque. The Monet picture confuses the sky and trees. And the Dali result is just weird.

[Images: cyclist rendered in the styles of Cézanne, Braque, Monet and Dali]

References

Trained to Forecast – Risk Magazine, January 1993

Deep Learning for Coders

Gatys, L. A., Ecker, A. S., & Bethge, M. (2015). A Neural Algorithm of Artistic Style.


Chain reactions

At this year’s Royal Society Summer Exhibition, scientists and engineers from Bristol University presented some interesting work on improvements to the drive chains used by Team GB in the Rio Olympics. They reached clear conclusions about the design of the chain and sprockets, which were taken up by Renold. Current research is exploring the problem of chain resonance.

Bicycle chains and sprockets tend to receive less attention than aerodynamics, for several reasons. As noted in previous blogs, the power required to overcome aerodynamic drag scales with the cube of velocity, whereas frictional effects scale simply in proportion to velocity. Furthermore, a well-lubricated drive chain typically has an efficiency of around 95% or more, so it is hard to make further improvements. Note that a dirty chain has significantly lower efficiency, so you should certainly keep your bike clean.

The loss of power comes from the friction between links as they bend around the chainring and the rear sprocket. Using a high precision rig, the researchers demonstrated that larger sprockets are more efficient than smaller ones. For example, with a gear ratio of 4:1, it is more efficient to use a 64/16 than a more conventional 52/13.

In fact, one of the experts told me that the efficiency of the drive chain falls off sharply as the sprocket size is reduced from 13 to 12 to 11 teeth. This is because the chain has to bend through a much sharper angle around a smaller sprocket. The chain effectively wraps around a regular polygon with one side per tooth, so the bend depends on the number of teeth. Recalling some school maths about the interior angles of polygons, for 16 teeth the interior angle is 157.5º, whereas for 11 teeth it is 147.3º. For the larger sprocket, each pair of links overcomes less friction, bending through 22.5º and back, compared with a more dramatic 32.7º and back for the smaller one.
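As a quick check of the geometry, the bend is just the exterior angle of a regular polygon with one vertex per tooth:

```python
def bend_angle(teeth):
    # Exterior angle of a regular polygon with one vertex per tooth:
    # the interior angle is 180*(n-2)/n degrees, so the bend is 360/n.
    return 360.0 / teeth

for teeth in (16, 13, 12, 11):
    print(teeth, bend_angle(teeth))  # 16 -> 22.5, 13 -> 27.7, 12 -> 30.0, 11 -> 32.7
```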

Note that this analysis of the rear sprocket applies to single-speed track bikes. On a road bike the chain also has to pass over the two derailleur cogs, which typically have 13 teeth, whatever gear you choose. However, the argument still applies to the chainring at the front, where the gains from going larger were shown to exceed the additional aerodynamic drag.

The Bristol team also explored the effect of a number of other factors on performance. Using different length links obviously requires customised sprockets and chainrings. This would be a major upheaval for the industry, but it is possible for purpose-built track bikes. Certain molybdenum-based lubricating powders used in the space industry may be better than traditional oils. Other materials could replace traditional steel.

A different kind of power loss can occur when the chain resonates vertically. A specially designed test rig showed that this can happen at particular resonant frequencies, which could be triggered at certain pedalling cadences. Current research is investigating how the tension of the chain and its design can help mitigate this problem (which is also an issue for motorcycles).

In conclusion, when we see Tony Martin pushing a 58+ tooth chainring, it may not simply be an act of machismo – he may actually be benefitting from efficiency gains.


Update on cycling aerodynamics

A recently published paper provides a useful review of competition cycling aerodynamics. It looks at the results of a wide range of academic studies, highlighting the significant advances made in the last 5 to 10 years.

The power required to overcome aerodynamic drag rises with the cube of velocity, so riding at 50km/h takes almost twice as much power as riding at 40km/h (since (50/40)³ ≈ 1.95). At racing speed, around 80% of a cyclist’s power goes into overcoming aerodynamic drag. This is largely because a bike and rider are not very streamlined, resulting in a turbulent wake.

The authors quote drag coefficients, Cd, of 0.8 for upright and 0.6 for TT positions. These compare with 0.07 for a recumbent bike with fairing, indicating that there is huge room for improvement.

Wind tunnels, originally used in the aerospace and automotive industries, are now being designed specifically for cycling, though no specific standards have been adopted. They provide a simplification of environmental conditions, but they can be used to study air flow for different body positions and equipment. Mannequins are often used in research, because one of the difficulties for riders is repeating and maintaining exactly the same position; some tunnels employ cameras to track movements. Usually a drag area, CdA, is reported rather than Cd, thereby avoiding the uncertainty in measuring frontal area, though this can be estimated by counting pixels in an image.

One thing that makes cycling particularly complex is the action of pedalling. This creates asymmetric high drag forces as one leg goes up and the other goes down, resulting in variations of up to 20% relative to a horizontal crank position.

Cycling has also been studied using computational fluid dynamics, which helps to save on wind tunnel costs. These models use a fine mesh to calculate the details of flow separation and pressure variations across the cyclist’s body. The better models are in good agreement with wind tunnel experiments.

Practical advice

Cycling speed is a maximum optimisation problem between aerodynamic and biomechanical efficiency

Ultimately, scientists need to do field tests. The extensive use of power meters allows cyclists to experiment for themselves. The authors provide two practical ways to separate the coefficient of rolling resistance, Crr, from CdA: one based on rolling to a halt, the other on a series of short rides at constant speed.
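To give a flavour of the constant-speed method: on a flat road in still air, power is approximately P = Crr·m·g·v + ½·ρ·CdA·v³, so a least-squares fit of measured power against v and v³ recovers both coefficients. The sketch below uses made-up numbers; the mass, air density and recorded powers are purely illustrative.

```python
import numpy as np

m, g, rho = 85.0, 9.81, 1.225              # rider + bike mass (kg), gravity, air density
v = np.array([5.0, 7.0, 9.0, 11.0])        # steady speeds in m/s
P = np.array([44.0, 92.0, 172.0, 290.0])   # measured powers in watts (illustrative)

# Fit P = a*v + b*v^3, where a = Crr*m*g and b = 0.5*rho*CdA.
X = np.column_stack([v, v ** 3])
(a, b), *_ = np.linalg.lstsq(X, P, rcond=None)

print("Crr =", a / (m * g))   # ~0.005
print("CdA =", 2 * b / rho)   # ~0.30 m^2
```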

Minimising aerodynamic resistance through rider position is one of the most effective ways to improve performance among well-trained athletes

Compared with riding upright on the hoods, moving to the drops saves 15% to 20% while adopting a TT position saves 30% to 35%. Studies show quite a lot of variance in these figures, as the results depend on whether the rider is pedalling, as well as body size. The following quote suggests that when freewheeling downhill in an aero tuck, your crank should be horizontal (unless you are cornering).

Current research suggests that the drag coefficient of a pedalling cyclist is ≈6% higher than that of a static cyclist holding a horizontal crank position

The authors quote CdA figures of 0.30 to 0.50 for an upright position, 0.25 to 0.30 on the drops and 0.20 to 0.25 for a TT position. The variation is largely, but not only, due to changes in frontal area, A. Unfortunately, relatively minor changes in position can have large effects on drag, but the following effects were noted.

Broker and Kyle note that rider positions that result in a flat back, a low tucked head and forearms positioned parallel to the bicycle frame generally have low aerodynamic drag. Wind tunnel investigations into a wide range of modifications to standard road cycling positions by Barry et al. showed that lowering the head and torso and bringing the arms inside the silhouette of the hips reduced the aerodynamic drag.

Bike frames, wheels, helmets and skin suits are all designed with aerodynamics in mind, while remaining compliant with UCI rules. Skin suits are important, due to their large surface areas. By delaying airflow separation, textured fabrics reduce wake turbulence, resulting in as much as a 4% reduction in drag.

In race situations, drafting skills are beneficial, particularly behind a larger rider. While following riders gain a significant benefit, it has been shown that the lead rider also accrues a small advantage of around 3%. It is best to overtake very closely in order to take maximal advantage of lateral drafting effects.

For a trailing cyclist positioned immediately behind the leader, drag reduction has been reported in the range of 15–50 % and reduces to 10–30 % as the gap extends to approximately a bike length… The drafting effect is greater for the third rider than the second rider in a pace-line, but often remains nearly constant for subsequent riders

For those interested in greater detail, it is well worth looking at the full text of the paper, which is freely available.

Reference

Crouch, T. N., Burton, D., LaBry, Z. A., & Blair, K. B. (2017). Riding against the wind: a review of competition cycling aerodynamics. Sports Engineering, 20(2), 81–110.

The fractal nature of GPS routes

The mathematician, Benoît Mandelbrot, once asked “How long is the coast of Britain?“. Paradoxically, the answer depends on the length of your measuring stick. Using a shorter ruler results in a longer total distance, because you take account of more minor details of the shape of the coastline. Extrapolating this idea, reducing the measurement scale down to take account of every grain of sand, the total length of the coast increases without limit.

This has an unexpected connection with the data recorded on a GPS unit. Cycle computers typically record position every second. When riding at 36km/h, a record is stored every 10 metres, but at a speed of 18km/h, a recording is made every 5 metres. So riding at a lower speed equates to measuring distance with a shorter ruler. When distance is calculated by triangulating between GPS locations, your riding speed affects the result, particularly when you are going around a sharp corner.

Consider two cyclists riding round a sharp 90-degree bend with a radius of 13m. The arc has a length of 20m, so the GPS has time to make four recordings for a rider doing 18km/h, but only two recordings for a rider doing 36km/h. The diagram below shows that the faster rider will have a record of position at each red dot, while the slower rider also has a reading for each green dot. Although the red and green distances match on the straight section, when it comes to the corner the total length of the red line segments is less than the total of the green segments. You can see this jagged effect if you zoom into a corner on the Strava map of your course. Both triangulated distances are shorter than the actual arc ridden.

[Diagram: GPS readings around a corner at two speeds]

It is relatively straightforward to show that the triangulation method will underestimate both distance and speed by a factor of (2r/s)·sin(s/(2r)), where r is the radius of the corner in metres and s is the speed in m/s (which, with one recording per second, is also the arc length between readings). So the estimated length of the 20m arc for the fast rider is 19.5m, ridden at a speed of 35.1km/h (a 2.5% underestimate), while the corresponding figures for the slower rider would be 19.9m at 17.9km/h (a 0.6% underestimate).
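The factor is easy to verify numerically:

```python
import math

def gps_distance_factor(radius_m, speed_ms):
    # With one fix per second, each segment is a chord of the arc ridden:
    # chord = 2*r*sin(s/(2*r)) versus an arc of length s.
    s = speed_ms
    return 2 * radius_m * math.sin(s / (2 * radius_m)) / s

print(gps_distance_factor(13, 10))  # 36km/h -> ~0.975, a 2.5% underestimate
print(gps_distance_factor(13, 5))   # 18km/h -> ~0.994, a 0.6% underestimate
```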

We might ask whether these underestimates are significant, given the error in locating real-time positions using GPS. Over the length of a ride, we should expect GPS errors to average out to approximately zero in all directions. However, triangulation underestimates distance on every corner, so these negative errors accumulate over the ride. Note that when the bike is stationary, any noise in the GPS position adds spuriously to the total distance calculated by triangulation. But guess what? This can only happen when you are not moving fast, so the case remains that slower riders will show a longer total distance than faster riders.

The simple triangulation method described above does not take account of changes in elevation. This has a relatively small effect, except on the steepest gradients: a 10% climb increases the distance by only 0.5%. In fact, the only reliable way to measure distance that accounts for corners and changes in altitude is to use a correctly calibrated wheel-based device. Garmin’s GSC-10 speed and cadence sensor tracks the passage of magnets on the wheel and cranks, transmitting to the head unit via ANT+. This gives an accurate measure of ground speed, as long as the correct wheel size is used (and, of course, that changes with the type of tyre, air pressure, rider weight etc.).

According to Strava Support, Garmin uses a hierarchy for determining distance. If you have a PowerTap hub, its distance calculation takes precedence. Next, if you have a GSC-10, its figure is used. Otherwise the GPS positions are used for triangulation. This means that, if you don’t have a PowerTap or a GSC-10 speed/cadence meter, your distance (and speed) measurements will be subject to the distortions described above.

But does this really matter? Well, it depends on how “wiggly” a route you are riding. This can be estimated using Richardson’s method: measure the route using different sized rulers and see how much the total distance changes. The rate of change determines the fractal dimension, which we can take as the “wiggliness” of the route.

One way of approximating this method from your GPS data is, firstly, to add up all the distances between consecutive GPS positions, triangulating latitude and longitude. Then do the same using every other position, then every fourth position, doubling the gap each time. If you happened to be riding at a constant 36km/h, this equates to measuring distance using a 10m ruler, then a 20m ruler, then a 40m ruler, and so on.
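Here is a sketch of that calculation, assuming the GPS fixes have already been projected into metres (a real script would first convert latitude and longitude):

```python
import numpy as np

def total_distance(points, step):
    # points: N x 2 array of (x, y) positions in metres; use every step-th fix.
    p = points[::step]
    return np.sum(np.hypot(np.diff(p[:, 0]), np.diff(p[:, 1])))

def fractal_dimension(points, doublings=5):
    steps = [2 ** k for k in range(doublings)]
    dists = [total_distance(points, s) for s in steps]
    # Richardson: measured length ~ ruler^(1 - D), so the log-log slope is 1 - D.
    slope = np.polyfit(np.log(steps), np.log(dists), 1)[0]
    return 1 - slope
```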

Using this approach, the fractal dimension of a simple loop around the Surrey countryside is about 1.01, which is not much higher than a straight line of dimension 1. So, with just a few corners, the GPS triangulation error will be low. The Sella Ronda has a fractal dimension of 1.11, reflecting the fact that alpine roads have to follow the naturally fractal-like mountain landscape. Totally contrived routes can be higher, such as this one, with a fractal dimension of 1.34, making GPS triangulation likely to be pretty inaccurate – if you zoom in, lots of corners are cut.

In conclusion, if you ride fast around a wiggly course, your Garmin will experience non-relativistic length contraction. Having GPS does not make your wheel-based speed/cadence monitor redundant.

If you are interested in the code used for this blog, you can find it here.

Strava Fitness and Freshness

The last blog explored the statistics that Strava calculates for each ride. These feed through into the Fitness & Freshness chart provided for premium users. The aim is to show the accumulated effect of training through time, based on the Training-Impulse model originally proposed by Eric Banister and others in a rather technical paper published in 1976.

Strava gives a pretty good explanation of Fitness and Freshness. A similar approach is used on Training Peaks in its Performance Management Chart. On Strava, each ride is evaluated in terms of its Training Load, if you have a power meter, or a figure derived from your Suffer Score, if you just use a heart rate monitor. A training session has a positive impact on your long-term fitness, but it also has a more immediate negative effect in terms of fatigue. The positive impact decays slowly over time, so if you don’t keep up your training, you lose fitness. But your body is able to recover from fatigue more quickly.

The best time to race is when your fitness is high, but you are also sufficiently recovered from fatigue. Fitness minus fatigue provides an estimate of your form. The 1976 paper demonstrated a correlation between form and an elite swimmer’s performance over 100m.

The Fitness and Freshness chart is particularly useful if you are following a periodised training schedule. This approach is recommended by many coaches, such as Joe Friel. Training follows a series of cycles, building up fitness towards the season’s goals. A typical block of training includes a three week build-up, followed by a recovery week. This is reflected in a wave-like pattern in Fitness and Freshness chart. Fitness rises over the three weeks of training impulses, but fatigue accumulates faster, resulting in a deterioration of form. However, fatigue drops quickly, while fitness is largely maintained during the recovery week, allowing form to peak.
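Strava does not publish its exact constants, but the underlying impulse-response arithmetic looks something like the sketch below. The 42-day fitness and 7-day fatigue time constants are the values popularised by the analogous Training Peaks model, and are my assumption here.

```python
import math

def fitness_freshness(daily_loads, tau_fitness=42.0, tau_fatigue=7.0):
    # daily_loads: one Training Load (or Suffer Score) value per day.
    fitness = fatigue = 0.0
    history = []
    for load in daily_loads:
        # Each day both scores decay exponentially, then absorb the day's load.
        fitness += (load - fitness) * (1 - math.exp(-1 / tau_fitness))
        fatigue += (load - fatigue) * (1 - math.exp(-1 / tau_fatigue))
        history.append((fitness, fatigue, fitness - fatigue))  # form = fitness - fatigue
    return history
```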

The example chart above shows how my season has panned out so far. After taking a two week break before Christmas, I started a solid block of training in January. My recovery week was actually spent skiing (pretty hard), though this did not register on Strava because I did not use a heart rate monitor. So the sharp drop in fatigue at the end of January is exaggerated. Nevertheless, my form was positive for my first race on 4 February. Unfortunately, I was knocked off and smashed a few ribs, forcing me to take an unplanned two week break. By the time I was able to start riding tentatively, rather than starting from an elevated level, my fitness had deteriorated to December’s trough.

After a solid, but still painful, block of low-intensity training in March, I took another “recovery week” on the slopes of St Anton. I subsequently picked up a cold that delayed the start of the next block of training, but I have incorporated some crit races into my plan as higher-intensity sessions. If you edit an activity and set its “ride type” to “race”, it shows up as a red dot on the chart. Barring accident and illness, the hope is to stick more closely to a planned four-week cycle going forward.

This demonstrates how Strava’s tools reveal the real-life difficulties of putting the theoretical benefits of periodisation into practice.

Strava Ride Statistics

If you ride with a power meter and a heart rate monitor, Strava’s premium subscription will display a number of summary statistics about your ride. These differ from the numbers provided by other software, such as Training Peaks. How do all these numbers relate to each other?

A tale of two scales

Over the years, coaches and academics have developed statistics to summarise the amount of physiological stress induced by different types of endurance exercise. Two similar approaches have gained prominence. Dr Andrew Coggan has registered the names of several measures used by Training Peaks. Dr Phil Skiba has developed a set of metrics used in the literature and by PhysFarm Training Systems. These and other calculations are available in Golden Cheetah‘s excellent free software.

Although it is possible to line up metrics that roughly correspond to each other, the calculations are different and the proponents of each scale emphasise particular nuances that distinguish them. This makes it hard to match up the figures.

Here is an example for a recent hill session. The power trace is highly variable, because the ride involved 12 short sharp climbs.

| Metric | Coggan / TrainingPeaks | Skiba / Literature | Strava |
|---|---|---|---|
| Power equivalent physiological cost of ride | Normalized Power 282 | xPower 252 | Weighted Avg Power 252 |
| Power variability of ride | Variability Index 1.57 | Variability Index 1.41 | |
| Rider’s sustainable power | Functional Threshold Power 312 | Critical Power 300 | FTP 300 |
| Power cost / sustainable power | Intensity Factor 0.90 | Relative Intensity 0.84 | Intensity 0.84 |
| Assessment of intensity and duration of ride | Training Stress Score 117 | BikeScore 101 | Training Load 100 |
| Training impulse based on heart rate | | | Suffer Score 56 |

Weighted Average Power

According to Strava, Weighted Average Power takes account of the variability of your power reading during a ride. “It is our best guess at your average power if you rode at the exact same wattage the entire ride.” That sounds an awful lot like Normalized Power, which is described on Training Peaks as “an estimate of the power that you could have maintained for the same physiological “cost” if your power output had been perfectly constant (e.g., as on a stationary cycle ergometer), rather than variable”. But it is apparent from the table above that Strava is calculating Skiba’s xPower.

The calculations of Normalized Power and xPower both smooth the raw power data, raise these observations to the fourth power, take the average over the whole ride and obtain the fourth root to give the answer.

Normalized Power or xPower = (average(P_smoothed^4))^(1/4)

The only difference between the calculations is the way that smoothing accounts for the body’s physiological delay in reacting to rapid changes in pedalling power. Normalized Power uses a 30 second moving average, whereas xPower uses a “25 second exponential average”. According to Skiba, exponential decay is better than Coggan’s linear decay in representing the way the body reacts to changes in effort.
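Expressed in code, the two calculations differ only in the smoothing step. This is a sketch on one-second power data; the exponential average is implemented with a 25-second time constant, which is my reading of Skiba’s definition.

```python
import numpy as np

def normalized_power(watts):
    # Coggan: 30-second moving average, mean of 4th powers, then 4th root.
    smoothed = np.convolve(watts, np.ones(30) / 30, mode="valid")
    return np.mean(smoothed ** 4) ** 0.25

def xpower(watts):
    # Skiba: 25-second exponentially weighted average instead.
    alpha = 1 - np.exp(-1 / 25)
    s, smoothed = float(watts[0]), []
    for w in watts:
        s += alpha * (w - s)
        smoothed.append(s)
    return np.mean(np.array(smoothed) ** 4) ** 0.25
```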

The following chart zooms into part of the hill reps session, showing the raw power output (in blue), moving average smoothing for Normalized Power (in green) and exponential smoothing for xPower (in red), with heart rate shown in the background (in grey). Two important observations can be made. Firstly, xPower’s exponential smoothing is more highly correlated with heart rate, so it could be argued that it does indeed correspond more closely to the underlying physiological processes. Secondly, the smoothing used for xPower is less volatile, so xPower will generally be lower than Normalized Power (because the fourth-power scaling is dominated by the highest observations).

[Chart: raw and smoothed power with heart rate]

Why do both metrics take the watts and raise them to the fourth power? Coggan states that many of the body’s responses are “curvilinear”. The following chart is a good example, showing the rapid accumulation of blood lactate concentration at high levels of effort.

[Chart: blood lactate concentration versus power output]

Plotting the actual data from a recent test on a log-log scale, I obtained a coefficient of between 3.5 and 4.7 for the relation between lactate level and watts. This suggests that taking the average of smoothed watts raised to the power 4 gives an indication of the average level of lactate in circulation during the ride.

The hill reps ride included multiple bouts of high power, causing repeated accumulation of lactate and other stress-related factors. Both the Normalized Power of 282W and the xPower of 252W were significantly higher than the straight average power of 179W. The variability index compares each adjusted power against the average power, giving variability indices of 1.57 and 1.41 respectively. These are very high figures, due to the hilly nature of the session. For a well-paced time trial, the variability index should be close to 1.00.

Sustainable Power

It is important for a serious cyclist to have a good idea of the power that he or she can sustain for a prolonged period. Functional Threshold Power and Critical Power measure slightly different things. The emphasis of FTP is on the maximum power sustainable for one hour, whereas CP is the power theoretically sustainable indefinitely. So CP should be lower than FTP.

Strava allows you to set your Functional Threshold Power under your personal performance settings. The problem is that, since Strava’s Weighted Average Power is based on Skiba’s xPower, it would be more consistent to use Critical Power, as I did in the table above. This is important because the figure is used to calculate Intensity and Training Load. If you follow Strava’s suggestion of using FTP, subsequent calculations will underestimate your Training Load, which, in turn, impacts your Fitness & Freshness curves.

Intensity

The idea of intensity is to measure the severity of a ride, taking account of the rider’s individual capabilities. Intensity is defined as the ratio of the power-equivalent physiological cost of the ride to the rider’s sustainable power. For Coggan, the Intensity Factor is NP/FTP; for Skiba, the Relative Intensity is xPower/CP; and for Strava, the Intensity is Weighted Average Power/FTP.

Training Load

An overall assessment of a ride needs to take account of the intensity and the duration of a ride. It is helpful to standardise this for an individual rider, by comparing it against a benchmark, such as an all-out one hour effort.

Coggan proposes the Training Stress Score, which takes the ratio of the work done at Normalized Power, scaled by the Intensity Factor, to one hour’s work at FTP. Skiba defines the BikeScore as the ratio of the work done at xPower, scaled by the Relative Intensity, to one hour’s work at CP. And finally, Strava’s Training Load takes the ratio of the work done at Weighted Average Power, scaled by Intensity, to one hour’s work at FTP.

Note that for my hill reps ride, the BikeScore of 101 was considerably lower than the TSS of 117. Although my estimated CP is only 12W lower than my FTP, xPower was 30W lower than NP. Because I use my CP as my Strava FTP, Strava’s Training Load comes out the same as Skiba’s BikeScore (otherwise I’d get 93).
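These definitions are easy to check against the table; the ride duration below is my estimate, backed out from the published TSS.

```python
def training_load(duration_s, adjusted_power, threshold_power):
    # Work done at the adjusted power, scaled by intensity, relative to
    # one hour's work at threshold power, expressed as a percentage.
    intensity = adjusted_power / threshold_power
    return 100 * duration_s * adjusted_power * intensity / (threshold_power * 3600)

t = 5160  # ride duration in seconds (~86 minutes, an assumption)
print(training_load(t, 282, 312))  # TSS: NP against FTP -> ~117
print(training_load(t, 252, 300))  # BikeScore: xPower against CP -> ~101
print(training_load(t, 252, 312))  # Strava: xPower against FTP -> ~93
```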

Suffer Score

Strava’s Suffer Score was inspired by Eric Banister’s training-impulse (TRIMP) concept. It is derived from the amount of time spent in each heart rate zone, so it can be calculated for multiple sports. You can set your Strava heart rate zones in your personal settings, or just leave them on the defaults, based on your maximum heart rate.

A non-linear relationship is assumed between effort and heart rate zone. Each minute in Zone 1, Endurance, is worth 12 seconds; Moderate Zone 2 minutes are worth 24 seconds; Zone 3 Tempo minutes are worth 45 seconds; Zone 4 Threshold minutes are worth 100 seconds; and Anaerobic Zone 5 minutes are worth 120 seconds. The Suffer Score is the weighted sum of minutes in each zone.
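Putting those weights into code (a sketch; Strava’s exact implementation is not public):

```python
# Seconds of Suffer Score credited per minute spent in each heart rate zone.
WEIGHTS = {1: 12, 2: 24, 3: 45, 4: 100, 5: 120}

def suffer_score(minutes_in_zone):
    # minutes_in_zone: dict mapping zone number to minutes spent there.
    return sum(WEIGHTS[z] * m for z, m in minutes_in_zone.items()) / 60

print(suffer_score({1: 20, 2: 30, 3: 25, 4: 10, 5: 2}))  # ~55 for this made-up ride
```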

The next blog will comment on the Fitness & Freshness charts available on Strava Premium.