Large Language Models for Cycling

Amid the fanfare about the extraordinary AI capabilities of large language models (LLMs), the race is on to produce LLM applications that actually do something useful. Several hurdles obstruct the way forward, including confabulation, privacy and the limitations of training data. This article explains how Retrieval-Augmented Generation (RAG) can address some of these issues and help uncover useful information about cycling.

Confabulation

Since they are trained only to predict the next word in a sentence, LLMs have no understanding of the text they generate. They tend to intersperse facts with inaccurate or untrue statements in a manner that seems entirely plausible and convincing. You should not rely on an LLM for advice on important decisions without validating its response. Note that if you ask an LLM for its sources, the citations may be imaginary, or, where they do exist, the references may not support the assertions.

One way to reduce the risk of confabulation is to ask the LLM to base its answer on selected relevant documents.

Privacy

If you use an online chatbot, everything you type or upload is stored on the provider’s server. This can be used as further training data or for snooping on users. There is a significant risk that a lot of personal and proprietary company information has already been hoovered up in this way. Companies need ways to sandbox internally developed LLM tools that contain sensitive data and prevent leakage of confidential information.

Privacy can be addressed by running LLMs on local networks. This is possible because inference is a simple task relative to training, which requires banks of data servers. Many free open-source models are available. Running an open-source model locally also avoids paying a subscription to an online provider.

Limitations of training data

LLMs are trained on vast generic data sets. This is a very expensive task, so they are typically frozen at some point in time. If you ask an LLM about current news events, it will typically tell you that it was trained on data up to a certain date in the past and has no knowledge of the event. The models may also lack specialised data in narrow domains.

RAG provides a way to supplement the original training set with selected additional information relevant to the field of interest.

RAG

Retrieval-Augmented Generation is an approach that provides an LLM with additional information to help it answer a query. For example, when an LLM is asked about current affairs, it is very helpful to retrieve a copy of today’s newspaper. Companies may wish to develop LLM-based applications with specialist knowledge of internal documentation. RAG makes this possible.

When you interact with an LLM, your conversation is presented to the model in the form of a long vector of numbers called an embedding. LLMs are trained so that sentences with similar words and meanings have similar embeddings. The idea of RAG is to take one or more relevant supplementary documents, split them into chunks and store the embeddings of these chunks in a vector database. When a user presents a query, the database retrieves the embeddings of the chunks that most closely match the query. In this way, the query can be augmented with relevant chunks of text from the source material. Note that this avoids the need for additional expensive training or fine-tuning of the LLM on specialist topics.
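LM Studio performs this retrieval step internally, but the mechanics are easy to sketch. Below is a minimal illustration of the idea in Python, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model as the embedding function; the fixed-length chunking and the choice of model are illustrative assumptions, not what LM Studio actually uses.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def build_index(text, chunk_size=500):
    # naive fixed-length chunking; real systems split on sentence boundaries
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks, model.encode(chunks, normalize_embeddings=True)

def retrieve(query, chunks, embeddings, k=3):
    # on normalised vectors, cosine similarity is just a dot product
    q = model.encode([query], normalize_embeddings=True)[0]
    best = np.argsort(embeddings @ q)[::-1][:k]
    return [chunks[i] for i in best]

The k best-matching chunks are then pasted into the prompt alongside the user’s question, so the model answers from the supplied text rather than from its frozen training data.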

One weakness of RAG is that most LLMs have not been trained specifically on the types of prompts used in RAG, though some companies, like Cohere, are working on this.

Cycling Through the Pandemic

I decided to try this out using a freely available book called Cycling Through the Pandemic, which has the slightly cryptic subtitle, Tactical Urbanism and the Implementation of Pop-Up Bike Lanes in the Time of COVID-19. This 250-page book was published in 2024, so it could not have been part of the training set for the LLM that I was using.

It is unbelievably easy to download and then run an open-source LLM on your local machine. All you need to do is download LM Studio. I decided to use an open-source LLM model called Meta-Llama-3.1-8B-Instruct-GGUF. I dragged and dropped the PDF of the book into the chat-box and started asking questions.

A transcript of my conversation is recorded below. At each step, LM Studio retrieved three chunks of text from the book it considered most relevant to the query, allowing me to check the reliability of the model’s interpretation.

The model answered in good English, basing its responses on the content of the book. It corrected itself when challenged on the claim that Bogotá is in France. It even responded in French to a question posed in French. In my view, RAG can be used to produce genuinely useful applications for LLMs.

RAG dialogue based on a book about the impact of COVID-19 on cycling

You
CyclingCOVID.pdf

What is tactical urbanism

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF

Based on the provided information, tactical urbanism refers to a type of urban planning that involves temporary and low-cost interventions aimed at introducing rapid changes to urban spaces with a broader purpose in mind (Lydon and Garcia 2015). It can be seen as a practical approach to urban change where many small actions implemented at the hyper-local level can achieve longer-term goals of creating a liveable, walkable, sustainable community.

Tactical urbanism often involves shortcuts or rapid and unbureaucratic actions taken by municipalities in response to crises (Citation 1). It can be associated with grassroots initiatives but can also be used by authorities. This approach has been implemented in various contexts, including during the COVID-19 pandemic.

Tactical urbanism is characterized by its focus on temporary and flexible material approaches to urban change, which can be both a practical tool for citizens and enterprising governments alike (Citation 3).

It’s worth noting that there are different perspectives on tactical urbanism, with some viewing it as a means of achieving substantive change without state action, while others see the increasing professionalization of these activities as borrowing their material design language (Citation 2).

You
Why was cycling boosted by COVID 19?

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 3 relevant citations for user query
Done predicting
Cycling saw a boost during the COVID-19 pandemic for several reasons (Citation 2). Firstly, people had less need to travel due to home working and distance learning. Secondly, the fear of infection led to a strong decline in ridership on public transport, which benefited cycling. Additionally, active mobilities such as cycling were promoted as a means of staying fit and getting exercise when other recreational facilities were closed.

The pandemic also led to an increase in cycling for leisure activities (Buehler and Pucher 2021b), with the strongest increase seen on weekends and in the afternoon. This shift towards cycling as a leisure activity is likely due to the rise of “undirected travel,” or trips without a specific destination, which was encouraged by social distancing measures.

In France, for instance, the first COVID-19 related lockdown led to a rapid increase in cycling use (Buehler and Pucher 2022) and implemented temporary cycle paths. Governments saw cycling as a solution to develop health-friendly mobility while responding to the possible desertion of public transport due to fear of catching the virus (Citation 3).

You
Has the COVID related boost in cycling persisted

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 3 relevant citations for user query
Done predicting
The survey conducted in Geneva and Lausanne (Rérat et al. 2022; Schmassman and Rérat 2022) found that the COVID-related boost in cycling has largely persisted. The majority of respondents reported cycling as much or more often than before the pandemic, with a significant number even starting to cycle for the first time.

In Geneva, 42% of respondents cycled as much as before, while 44% cycled more often. In Lausanne, 58% cycled as much as before, and 27% cycled more often. This suggests that many people have adopted cycling as a mode of transport during the pandemic and are likely to continue using it.

The fact that 81% of respondents in Geneva (73% in Lausanne) agree or strongly agree that COVID cycle lanes make them feel safer also suggests that these temporary infrastructure projects had a lasting impact on cyclist safety and behavior.

However, as mentioned by Citation 2, the long-term sustainability of this boost is still uncertain. The study notes that most cycling infrastructure implemented during the pandemic involved existing projects, and it remains to be seen whether this “leap forward” will make a lasting difference in the trajectory of local public action or simply represent a spike on an upward graph.

Overall, while there are indications that the COVID-related boost in cycling has largely persisted, further analysis is needed to determine its long-term impact.

You
What has been the impact specifically in London?

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 3 relevant citations for user query
Done predicting
The impact of temporary cycling infrastructure and “Tactical Urbanist” approaches in London is a complex issue with both positive and negative effects.

Positive impacts include:

The introduction of School Streets, which have shown to promote mode shift towards active modes of travel (Hopkinson et al. 2021) and improve air quality at schools (Air Quality Consultants 2021).
The implementation of temporary cycle lanes and other small-scale projects aimed at improving the smaller residential or local streets in London.
These efforts are part of the Healthy Streets approach, which aims to promote a more sustainable transportation system in London.
However, there have also been negative impacts, including:

Variable outcomes between sites, with some schemes showing little effect (Citation 2).
Criticisms that some projects were implemented too quickly or without adequate consultation, leading to concerns about the speed and process of decision-making.
Vandalism of planters, bollards, and cameras used to prevent through-traffic in some areas (Citation 3).
Removal of Low Traffic Neighbourhoods, especially in Outer London, due to opposition from local residents.
The Temporary City report by Asa Thomas highlights the importance of balancing big change with small impact, emphasizing the need for careful planning and consultation when implementing new transportation projects.

You
What research was done in French cities?

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 3 relevant citations for user query
Done predicting
The research on French cities during the COVID-19 pandemic focused on understanding how urban planners and policymakers implemented measures to promote cycling and improve public transportation.

A study examined five French cities: Grenoble, Lyon, Montpellier, Rennes, and Bogotá (Citation 1). The study found that these cities had varying levels of population size, density, and surface area. For example, Bogotá was the most populous city with over 7 million inhabitants, while Grenoble had a smaller population of around 160,000.

Another study looked at the implementation of measures in French cities during the pandemic (Citation 2). The research found that some cities focused on improving cycling infrastructure within the city center, such as Lyon and Rennes. In contrast, other cities prioritized links between periurban municipalities or outer suburbs.

A book titled “Changing Bikeability: Everyday Cycling in Developed Territories” explores changing bikeability in French cities during the pandemic (Citation 3). The book examines six French cities: Grenoble, Lyon, Montpellier, Paris, Rennes, and Saint-Étienne. The research found that each city had its unique characteristics and ways of implementing measures to promote cycling.

These studies provide insights into how different types of cities implemented measures to improve cycling infrastructure during the pandemic, and whether these measures were made permanent.

You
Is Bogotá in France?

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 3 relevant citations for user query
Done predicting
No, Bogotá is not in France. According to Citation 1 and Citation 2, Bogotá is located in Colombia, and it has been compared with four French cities: Grenoble, Lyon, Montpellier, and Rennes.

In fact, the study mentioned in Citation 3 compares the cycling infrastructure in these four French cities (Grenoble, Lyon, Montpellier, and Rennes) with that of Bogotá, a city located in Colombia.

You
Quels ont été les résultats à Grenoble ? [What were the results in Grenoble?]

Assistant
Meta-Llama-3.1-8B-Instruct-GGUF
Retrieved 1 relevant citations for user query
Done predicting
Les résultats obtenus à Grenoble sont mentionnés dans la citation :

“Nous avons contacté le Syndicat Mixte des Mobilités de l’Aire Grenobloise et réussi à obtenir la base de données du réseau cyclable. De plus, un stagiaire a effectué une interview avec un technicien de cette organisation, et sur plusieurs occasions, il est allé visiter le site pour voir si les voies temporaires (pop-up tracks) existaient toujours.”

Cela indique que les auteurs ont réussi à collecter des données sur le réseau cyclable de Grenoble en contactant la Syndicat Mixte des Mobilités de l’Aire Grenobloise et en effectuant une interview avec un technicien. Cependant, il n’y a pas d’information sur les résultats concrets de ces efforts.

Cependant, on peut trouver plus d’informations dans le graphique (Figure 8.2) qui montre différents types de voies temporaires installés dans les cinq villes étudiées, dont Grenoble.

[Translation: The results obtained in Grenoble are mentioned in the citation: “We contacted the Syndicat Mixte des Mobilités de l’Aire Grenobloise and managed to obtain the database of the cycle network. In addition, an intern carried out an interview with a technician from this organisation and, on several occasions, visited the site to see whether the temporary lanes (pop-up tracks) still existed.” This indicates that the authors succeeded in collecting data on the Grenoble cycle network by contacting the Syndicat Mixte des Mobilités de l’Aire Grenobloise and interviewing a technician. However, there is no information on the concrete results of these efforts. More information can, however, be found in the chart (Figure 8.2), which shows the different types of temporary lanes installed in the five cities studied, including Grenoble.]


Dreaming of the Giro

fast.ai’s latest version of Practical Deep Learning for Coders Part 2 kicks off with a review of Stable Diffusion. This is a deep neural network architecture developed by Stability AI that is able to convert text into images. With a bit of tweaking it can do all sorts of other things. Inspired by the amazing videos created by Softology, I set out to generate a dreamlike video based on the idea of riding my bicycle around a stage of the Giro d’Italia.

Text to image

As mentioned in a previous post, Hugging Face is a fantastic resource for open-source models. I worked with one of fast.ai’s notebooks using a free GPU on Google Colab. In the first step I set up a text-to-image pipeline using a pre-trained version of stable-diffusion-v1-4. The prompt “a treelined avenue, poplars, summer day, france” generated the following images, where the model was more strongly guided by the prompt in each row. I liked the first image in the second row, so I decided to make this the first frame in an initial test video.
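For anyone who wants to reproduce this, the whole pipeline amounts to a few lines with the Hugging Face diffusers library. This is a minimal sketch, assuming a CUDA GPU is available; guidance_scale is the parameter that controls how strongly the model is guided by the prompt.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a treelined avenue, poplars, summer day, france",
    guidance_scale=7.5,  # higher values follow the prompt more strongly
).images[0]
image.save("frame0.png")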

Stable diffusion is trained in a multimodal fashion, by aligning text embeddings with encoded versions of the corresponding images. Starting with random noise, the pixels are progressively modified in order to move the encoding of the noisy image closer to something that matches the embedding of the text prompt.

Zooming in

The next step was to simulate the idea of moving forward along the road. I did this by writing a simple two-line function, using fast.ai tools, that cropped a small border off the edge of the image and then scaled it back up to the original size. In order to generate my movie, rather than starting with random noise, I wanted to use my zoomed-in image as the starting point for generating the next image. For this I needed to load up an image-to-image pipeline.
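My original function used fast.ai tools; an equivalent sketch using plain PIL, together with loading the image-to-image pipeline from diffusers, might look like this.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

def zoom(img, border=2):
    # crop a small border and scale back up to the original size
    w, h = img.size
    return img.crop((border, border, w - border, h - border)).resize((w, h))

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")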

I spent about an hour experimenting with four parameters. Zooming in by trimming only a couple of pixels around the edge created smoother transitions. Reducing the strength of the additional noise enhanced the sense of continuity by ensuring that subsequent images did not change too dramatically. A guidance scale of 7 forced the model to keep following the prompt and not simply zoom into the middle of the image. The number of inference steps provided a trade-off between image quality and run time.

When I was happy with the settings, I generated a sequence of 256 images, which took about 20 minutes, and saved them as a GIF. This produced a pleasing, constantly changing effect with an impressionist style.
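Putting those pieces together, the generation loop might look like the sketch below. The exact values I settled on are not recorded here, so treat the numbers as illustrative; strength is the diffusers name for the amount of added noise.

prompt = "a treelined avenue, poplars, summer day, france"
frames = [image]  # the text-to-image frame generated earlier
for _ in range(255):
    frames.append(img2img(
        prompt=prompt,
        image=zoom(frames[-1]),   # start from the zoomed previous frame
        strength=0.4,             # low added noise preserves continuity
        guidance_scale=7,         # keep following the prompt
        num_inference_steps=50,   # image quality vs run time
    ).images[0])

frames[0].save("zoom.gif", save_all=True, append_images=frames[1:], loop=0)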

Back to where you started

In order to make the GIF loop smoothly, it was desirable to find a way to return to the starting image as part of the continuous zooming in process. At first it seemed that this might be possible by reversing the existing sequence of images and then generating a new sequence of images using each image in the reversed list as the next starting point. However, this did not work, because it gave the impression of moving backwards, rather than progressing forward along the road.

After thinking about the way stable diffusion works, it became apparent that I could return to the initial image by mixing it with the current image before taking the next step. By progressively increasing the mixing weight of the initial image, the generated images became closer to the target over a desired number of steps, as shown below.
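The mixing step itself is a simple weighted average of pixel values; a sketch using PIL, where alpha rises from 0 to 1 over the closing frames:

from PIL import Image

def blend_towards_start(current, initial, step, n_steps):
    # progressively increase the weight of the initial image
    alpha = step / n_steps  # 0 at the start of the return leg, 1 at the end
    return Image.blend(current, initial, alpha)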

Putting it all together produced the following video, which successfully loops back to its starting point. It is not a perfect animation, because it zooms into the centre, whereas the vanishing point is below the centre of the image. This means we end up looking up at the trees at some points. But overall it had the effect I was after.

A stage of the Giro

Once all this was working, it was relatively straightforward to create a video that tells a story. I made a list of prompts describing the changing countryside of an imaginary stage of the Giro d’Italia, specifying the number of frames for each sequence. I chose the following pairs, which drive the loop sketched after the list.

['a wide street in a rural town in Tuscany, springtime', 25],
['a road in the countryside, in Tuscany, springtime', 25],
['a road by the sea, trees on the right, sunny day, Italy', 50],
['a road going up a mountain, Dolomites, sunny day', 50],
['a road descending a mountain, Dolomites, Italy', 25],
['a road in the countryside, cypress trees, Tuscany', 50],
['a narrow road through a medieval town in Tuscany, sunny day', 50]
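Feeding these pairs into the zooming loop is then a matter of two nested loops, as in this sketch, which reuses the zoom function and img2img pipeline from earlier, with the frame list seeded by the text-to-image pipeline.

story = [
    ("a wide street in a rural town in Tuscany, springtime", 25),
    # ... the remaining (prompt, frames) pairs listed above
]

for prompt, n_frames in story:
    for _ in range(n_frames):
        frames.append(img2img(
            prompt=prompt,
            image=zoom(frames[-1]),
            strength=0.4,
            guidance_scale=7,
        ).images[0])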

These prompts produced the video shown at the top of this post. The springtime blossom in the starting town was very effective and the endless climb up into the sunlit Dolomites looked great. For some reason the seaside prompt did not work, so the sequence became temporarily stuck with red blobs. Running it again would produce something different. Changing the prompts offered endless possibilities.

The code to run this appears on my GitHub page. If you have a Google account, you can open it directly in Colab and set the runtime to GPU. You also need a free Hugging Face account to load the stable diffusion pipelines.

Hugging Face

I have been blown away exploring Hugging Face. It’s a community on a mission “to democratize good machine learning”. It provides access to a huge library of state-of-the-art models. So far I have only scratched the surface of what is available, but this blog gives a sample of things I have tried.

At the time of writing, there were 128,463 pre-trained models covering a huge range of capabilities, including computer vision, natural language processing, audio, tabular, multimodal and reinforcement models. The site is set up to make it incredibly easy to experiment with a demo, download a model, run it in a Jupyter notebook, fine-tune it for a specific task and then add it to the space of machine learning apps created by the community. For example, an earlier blog describes my FilmStars app.

Computer vision with text

This is an example from an app that uses the facebook/detr-resnet-50 model to identify objects in an image. It successfully located eight objects with high confidence (indicated by the numbers), but it was fooled into thinking part of the curved lamppost in front of the brickwork pattern was a tennis racket (you can see why).
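Running this model yourself takes only a few lines with the transformers pipeline API; the image file name here is a hypothetical placeholder. Each detection comes back with a label, a confidence score and a bounding box.

from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
for obj in detector("street_scene.jpg"):  # hypothetical local image
    print(obj["label"], round(obj["score"], 2), obj["box"])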

Image-to-text models go further by creating captions describing what is in the image. I used an interactive demo to obtain suggested captions from a range of state-of-the-art models. The best result was produced by the GIT-large model, whereas a couple of models perceived a clocktower.

These models can also answer questions about images. Although all of the answers were reasonable, GIT-large produced the best response when I asked “Where is the cyclist?”

The next image is an example of text-based inpainting with CLIPSeg x Stable Diffusion, where I requested that the wall should be replaced with an apartment block. The model successfully generated a new image while preserving the cyclist, flowers, arch, background and even the birds on the roof. I had great fun with this app, imagining what my friend’s house will look like, when it eventually emerges from a building site.

Continuing with the theme of image generation, I reversed the image to caption problem, by asking a stable-diffusion-v1-5 model to generate an image from the caption “a cyclist rides away through an old brick archway in a city”. It came up with an image remarkably similar to what we started with, even including a female cyclist.

Do it yourself

Hugging Face provides various ways for you to download any of the models from its library. The easiest way to do this is to set up a free account on Kaggle, which offers a Jupyter notebook environment with access to a GPU.

Using a Hugging Face pipeline, you can run a model with three lines of Python code! Pipelines can be set up for the image models above, but this is an example of the code required to run a text-based natural language processing task. It creates and runs a pipeline that summarises text, using a model specifically trained to generate output in the style of SparkNotes.

from transformers import pipeline

# create a summarisation pipeline; the model is fine-tuned for book-style summaries
summarizer = pipeline("summarization", model="pszemraj/long-t5-tglobal-base-16384-book-summary")
summarizer("""Sample text from a book...""")

This rather morbid sample text produced the output from Python that follows.

The fact that Henry Armstrong was buried did not seem to him to prove that he was dead: he had always been a hard man to convince. That he really was buried, the testimony of his senses compelled him to admit. His posture — flat upon his back, with his hands crossed upon his stomach and tied with something that he easily broke without profitably altering the situation — the strict confinement of his entire person, the black darkness and profound silence, made a body of evidence impossible to controvert and he accepted it without cavil.

But dead — no; he was only very, very ill. He had, withal, the invalid’s apathy and did not greatly concern himself about the uncommon fate that had been allotted to him. No philosopher was he — just a plain, commonplace person gifted, for the time being, with a pathological indifference: the organ that he feared consequences with was torpid. So, with no particular apprehension for his immediate future, he fell asleep and all was peace with Henry Armstrong.

But something was going on overhead. It was a dark summer night, shot through with infrequent shimmers of lightning silently firing a cloud lying low in the west and portending a storm. These brief, stammering illuminations brought out with ghastly distinctness the monuments and headstones of the cemetery and seemed to set them dancing. It was not a night in which any credible witness was likely to be straying about a cemetery, so the three men who were there, digging into the grave of Henry Armstrong, felt reasonably secure.

From One Summer Night by Ambrose Bierce
[{'summary_text': "Henry's body is buried in the cemetery, but it does not seem to make him any more certain that he is dead. Instead, he seems to be completely ill."}]

Having come this far, it takes only a few steps to fine-tune the model to match your desired task, put it into a GitHub repository and launch your own app as a fully fledged member of the Hugging Face community. A nice explanation is available at fast.ai lesson 4.

Can self-driving cars detect cyclists?


Self-driving cars employ sophisticated software to interpret the world around them. How do these systems work? And how good are they at detecting cyclists? Can cyclists feel safe sharing roads with an increasing number of vehicles that make use of these systems?

How hard is it to spot a cyclist?

Vehicles can use a range of detection systems, including cameras, radar and lidar. Deep learning techniques have become very good at identifying objects in photographic images. So one important question is: how hard is it to spot a cyclist in a photo taken from a moving vehicle?

Researchers at Tsinghua University, working in collaboration with Daimler, created a publicly available collection of dashboard camera photos, where humans have painstakingly drawn boxes around other road users. The data set is used by academics to benchmark the performance of their image recognition algorithms. The images are rather grey and murky, reflecting the cloudy and polluted atmosphere of the Chinese city location. It is striking that, in the majority of cases, the cyclists are very small, representing around 900 pixels out of the 2048 x 1024 images, i.e. less than 0.05% of the total area. For example, the cyclist in the middle of the image above is pretty hard to make out, even for a human.

Object-detecting neural networks are typically trained to identify the subject of a photo, which normally takes up a significant portion of the image. Finding a tall, thin segment containing a cyclist is significantly more difficult.

If you think about it, the cyclist taking up the largest percentage of a dash cam image will be riding across the direction of travel, directly in front of the vehicle, at which point it may be too late to take action. So a crucial aspect of any successful algorithm is to find more distant cyclists, before they are too close.

Setting up the problem

Taking advantage of skills acquired on the fast.ai course on deep learning, I decided to have a go at training a neural network to detect cyclists. Many of the images in the Tsinghua Daimler data set include multiple cyclists. In order to make the problem more manageable, I set out to find the single largest cyclist in each image.

If you are not interested in the technical bit, just scroll down to the results.

The technical bit

In order to save space on my drive, I downloaded about a third of the training set. The 3209 images were split 80:20 to create training and validation sets. I also downloaded 641 unseen images that were excluded from training and used only for testing the final model.

I used transfer learning to fine-tune a neural network using a pre-trained ResNet34 backbone, with a customised head designed to generate four numbers representing the coordinates of a bounding box around the largest object in each image. All images were scaled down to 224 pixel squares, without cropping. Data augmentation added variation to the training images, including small rotations, horizontal flips and adjustments to lighting.
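The model itself was built with fast.ai, but the architecture is easy to sketch in plain PyTorch; the optimiser and learning rate below are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

# pre-trained ResNet34 backbone with a custom regression head
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(512, 4)  # (left, top, right, bottom) of the largest cyclist

criterion = nn.L1Loss()  # mean absolute error in pixels per coordinate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)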

It took a couple of hours to train the network on my MacBook Pro, without needing to resort to a cloud-based GPU, to produce bounding boxes with an average error of just 12 pixels on each coordinate. The network had learned to do a pretty good job at detecting cyclists in the training set.

Results

The key step was to test my neural network on the set of 641 unseen images. The results were impressive: the average error on the bounding box coordinates was just 14 pixels. The network was surprisingly good at detecting cyclists.


The 16 photos above were taken at random from the test set. The cyan box shows the predicted position of the largest cyclist in the image, while the white box shows the human annotation. There is a high degree of overlap for eleven cyclists: 2, 3, 4, 5, 6, 8, 11, 12, 14, 15 and 16. Box 9 was close, falling between two similar-sized riders, but 7 was a miss. The algorithm failed on the very distant cyclists in 1, 10 and 13. Ranking the photos by the size of the cyclist shows that the network had a high success rate for all but the smallest of cyclists.

In conclusion, as long as the cyclists were not too far away, it was surprisingly easy to detect riders pretty reliably, using a neural network trained over an afternoon.  With all the resources available to Google, Uber and the big car manufacturers, we can be sure that much more sophisticated systems have been developed. I did not consider, for example, using a sequence of images to detect motion or combining them with data about the motion of the camera vehicle. Nor did I attempt to distinguish cyclists from other road users, such as pedestrians or motorbikes.

After completing this project, I feel reassured that cyclists of the future will be spotted by self-driving cars. The riders in the data set generally did not wear reflective clothing and did not have rear lights. These basic safety measures make cyclists, particularly commuters, more obvious to all road users, whether human or AI.

Car manufacturers could potentially develop significant goodwill and credibility in their commitment to road safety by offering cyclists lightweight and efficient beacons that would make them more obvious to automated driving systems.

References

“A new benchmark for vision-based cyclist detection”, X. Li, F. Flohr, Y. Yang, H. Xiong, M. Braun, S. Pan, K. Li and D. M. Gavrila, in proceedings of IEEE Intelligent Vehicles Symposium (IV), pages 1028-1033, June 2016

Link to Jupyter notebook

Learning the language of the Giro

Fausto Coppi

Computers are becoming ever better at natural language processing (NLP). I set up a neural network as a language model and loaded a set of pre-trained weights on my MacBook Pro, to see what it could do. After priming it with some text about the Giro d’Italia, this is what the model produced as a continuation.

Priming text: The Giro d’ Italia (Italian pronunciation: [ˈdʒiːro diˈtaːlja]; English: Tour of Italy; also known as the Giro) is an annual multiple-stage bicycle race primarily held in Italy, while also occasionally passing through nearby countries. The first race was organized in 1909 to increase sales of the newspaper La Gazzetta dello Sport; however it is currently run by RCS Sport. The race has been held annually since its first edition in 1909, except when it was stopped for the two world wars. As the Giro gained prominence and popularity the race was lengthened, and the peloton expanded from primarily Italian participation to riders from all over the world.

Computer generated continuation:  the race was won by the italian rider , giovanni di u_n , who won the race in the first leg of the race . the race was won by italian rider giovanni u_n , who won the race by a margin of two lengths .
= = = world tour = = =
the tour de france was the first of the tour de france . the tour de france was won by the reigning world champion , the reigning world champion , who had won the tour de france in the previous year ‘s race …

The output may not make a lot of sense, but the point is that it looks like English (in lower case). The grammar is reasonable, with commas, full stops and a header inserted in a logical way. Furthermore, the model has demonstrated some understanding of the context by suggesting that the Giro could be won by an Italian rider called Giovanni. The word “u_n” stands for unknown, which is consistent with the idea that an Italian surname may not be a familiar English word. It turns out that a certain Giovanni Di Santi raced against Fausto Coppi (pictured above) in the 1940 Giro, though he did not win the first stage. In addition to this, the model somehow knew that the Giro, in common with the Tour de France, is a World Tour event that could be won by the reigning world champion.

I found this totally amazing. And it was not a one-off: further examples on random topics are included below. This neural network is just an architecture, defining a collection of matrix multiplications and transformations, along with a set of connection weights. Admittedly there are a lot of connection weights: 115.6 million of them, but they are just numbers. It was not explicitly provided with any rules about English grammar or any domain knowledge.

How could this possibly work?

In machine learning, language models are assessed on a simple metric: accuracy in predicting the next word of a sentence. The neural network approach has proved to be remarkably successful. Given enough data and a suitable architecture, deep learning now far outstrips traditional methods that relied on linguistic expertise to parse sentences and apply grammatical rules that differ across languages.

I was experimenting with an AWD-LSTM model originally created by Stephen Merity. This is a recurrent neural network (RNN) with three LSTM layers that include dropout. The pre-trained weights for the wt103 model were generated by Jeremy Howard of fast.ai, using a large corpus of text from Wikipedia.

Jeremy Howard converted the Wikipedia text into tokens. A tokeniser, such as spaCy, breaks text into words and punctuation, resulting in a vocabulary of tokens that are indexed as integers. This allows blocks of text to be fed into the neural network as lists of numbers. The outputs are numbers that can be converted back into the predicted words.
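As a toy illustration of this step (the real pipeline caps the vocabulary size and reserves special tokens such as “u_n” for unknown words):

import spacy

nlp = spacy.blank("en")  # spaCy tokeniser; no trained pipeline required
tokens = [t.text.lower() for t in nlp("The Giro was first run in 1909.")]
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]  # the integers fed to the network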

The wt103 model includes a linear encoder that creates embeddings of word tokens. These are passed through three LSTM layers whose states are able to retain a memory of previous words or context. The result is passed through a decoder, employing the same weights as the encoder, to produce a softmax output that can be treated as a set of probabilities, across the vocabulary, to predict the next word token. Special forms of dropout were employed, as described in the paper, to limit overfitting and make the model more robust.
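Stripped of the specialised dropout, the architecture reduces to something like the following PyTorch sketch. The layer sizes are illustrative, and the last line of the constructor shows the weight tying between decoder and encoder.

import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_size=400, n_layers=3, dropout=0.4):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, emb_size, num_layers=n_layers,
                            dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(emb_size, vocab_size)
        self.decoder.weight = self.encoder.weight  # tied weights

    def forward(self, x, state=None):
        out, state = self.lstm(self.encoder(x), state)
        # logits over the vocabulary; softmax is applied inside the loss
        return self.decoder(out), state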

The network was trained by minimising cross-entropy loss using stochastic gradient descent. A technique called back propagation through time (BPTT) introduced some random variation into the length of the blocks of text employed in each training batch. The connection weights were iteratively improved towards optimal values.

OK, but so what?

Impressive as it may be for a neural network to spout highly plausible but nonsensical English, what possible use could this have? Well, suppose you want to automate the separation of texts into different categories: for example, to distinguish different authors, determine the sentiment of tweets, predict the reaction of the stock market to news items or classify movie reviews. Fine-tuning a neural network that already has a good knowledge of English provides a huge advantage over starting from scratch. In lesson 10 of the fast.ai course, Jeremy Howard demonstrates how this approach can be used to achieve state-of-the-art results on the widely used IMDB movie review data set.

Further examples

In the first example, the network finishes the sentence about a hurricane in perfect English. Next, it comes up with the name of a critic (actually a rock music critic rather than a book reviewer, but hey!) and it makes use of its LSTM to open and close quotation marks. In the bean example, it amazingly starts describing flowers and correctly converts their dimensions from centimetres to inches!

My code can be found here.

Priming text: Hurricane Ivan was a large long Cape Verde hurricane that caused widespread damage in the Caribbean and United States The cyclone was

Computer generated continuation:  the first hurricane to strike the united states since hurricane katrina in 2005 .

Priming text: The book was widely acclaimed by the critics.

Computer generated continuation: the book was published in the united states by the american library association on june 1 , 2006 .

= = = critical response = = =

the book received mixed reviews from critics . the new york times reviewer robert christgau called it ” a great book ” and ” a great book ” . he praised the book ‘s ” strong , strong , and well – written ” prose , and the ” excellent ” prose .

Priming text: Most varieties of bean grow either as an erect bush or as a climbing plant, but a few important kinds are of intermediate form. Dwarf and semiclimbers are grown extensively. When the climbing type is grown for its immature pods, artificial supports are necessary to facilitate harvesting. Varieties differ greatly in size, shape, colour, and fibrousness or tenderness of the immature pods. In general, varieties grown for dry mature seeds produce pods that are too fibrous to be eaten at any state of development. Most edible-podded beans produce relatively low yields of mature seeds, or seeds that are of low eating quality. Seed colours range from white through green, yellow, tan, pink, red, brown, and purple to black in solid colours and countless contrasting patterns. Seed shapes range from nearly spherical to flattened, elongated, and kidney-shaped. Pods are of various shades of green, yellow, red, and purple and splashed with red or purple; pod shapes range from flat to round, smooth to irregular, and straight to sharply curved; length ranges from 75 to 200 millimetres (3 to 8 inches) or more.

Computer generated continuation:  the most common form of the flower is the flower spike , which is a characteristic of the flower spike . the flower is made up of a series of small , cylindrical , cylindrical , cylindrical , cylindrical stems that are up to 10 cm ( 3.9 in ) long and 2.5 cm ( 0.98 in ) wide . the flower spikes are arranged in a series of three or four pairs of flowers 


Creating artistic images from Strava rides

Four laps of Richmond Park

When you upload a ride, Strava draws a map using the longitude and latitude coordinates recorded by your GPS device. This article explores ways in which these numbers, along with other metrics, can be used to create interesting images that might have some artistic merit.

The idea was motivated by the huge advances made in the field of Deep Learning, particularly applications for image recognition. However, since datasets come in all shapes and forms, researchers have explored ways of converting different types of data into images.  In a paper published in 2015, the authors achieved success in identifying standard time series by converting them into images.

GPS bike computers typically record snapshots of information every second. What kind of images could these time series generate? It turns out that there are several ways to convert a time series into an image.

Spectrogram

Creating a spectrogram is a standard approach from signal processing that is particularly useful for analysing acoustic files. The spectrogram is a heat map that shows how the underlying frequencies contributing to the signal change over time. Technically, it is derived by calculating the discrete Fourier transform of a window that slides across the time series. I applied this to my regular Saturday morning club ride of four laps around Richmond Park. The image changes a bit once the ride gets going after about 1200 seconds (20 minutes), but, frankly, the result was not particularly illuminating. There is no obvious reason to consider cycling power data as a superposition of frequencies.
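For reference, the calculation is a one-liner in SciPy; this sketch assumes the power readings sit in a NumPy array called power, sampled once per second.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

f, t, Sxx = spectrogram(power, fs=1.0, nperseg=256)  # sliding-window DFT
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))     # heat map in decibels
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")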


Ah! Now we are getting somewhere

The authors of the referenced paper took a different approach to produce things called the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF) and Markov Transition Field (MTF). Read the paper if you want to know the details. I created these and something called a Recurrence Plot. All of these methods generate a matrix by combining every element in the time series with every other element. The underlying observations occurring at times t₁ and t₂ determine the colour of the pixel at position (t₁, t₂). Images are symmetric along the lower-left to upper-right diagonal, apart from GADF, which is antisymmetric.
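All four encodings are available off the shelf in the pyts library; a sketch, again assuming one of the series is held in a NumPy array called power:

from pyts.image import GramianAngularField, MarkovTransitionField, RecurrencePlot

X = power.reshape(1, -1)  # pyts expects (n_samples, n_timestamps)
gasf = GramianAngularField(method="summation").fit_transform(X)[0]
gadf = GramianAngularField(method="difference").fit_transform(X)[0]
mtf = MarkovTransitionField(n_bins=8).fit_transform(X)[0]
rp = RecurrencePlot().fit_transform(X)[0]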

Let’s see how they look for four laps of Richmond Park. We have six time series, with corresponding sets of images below. The segmentation of the images is due to the periodicity of the data. This is particularly clear in the geographic data (longitude, latitude and altitude). The higher intensity of the main part of the ride is most obvious in the heart rate data. The MTF plots are quite interesting. Scroll down through the images to the next section.

Raw time series of power, heart rate, cadence, longitude, latitude and altitude

Gramian Angular Summation Field

Gramian Angular Difference Field

Markov Transition Field

Recurrence Plot

From cycle ride to art

It is one thing to create an image of each item, but how can we combine these to summarise a ride in a single image? I considered two methods of combining time series into a single image: a) create a new image where the vertical and horizontal axes represent different series, and b) create a new image by simply adding the corresponding values from two underlying images.

One problem is that some cyclists don’t have gadgets like heart rate monitors and power meters, so I initially restricted myself to just the longitude, latitude and altitude data. Nevertheless, as noted in an earlier blog, it is possible to work out speed, because the time interval is one second between each reading. Furthermore, one can estimate power, from the speed and changes in elevation.

Another problem is that rides differ in length. For this I split the ride into, say, 128 intervals and took the last observation in each interval. So for a 3 hour ride, I’d be sampling about once every 84 seconds.
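A sketch of that resampling step:

import numpy as np

def resample(series, n=128):
    # take the last observation in each of n equal intervals
    edges = np.linspace(0, len(series), n + 1).astype(int)
    return np.asarray([series[e - 1] for e in edges[1:]])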

The chart at the top of this blog was created by first normalising each series to a standard range (-1, +1). Method a) was used to create two images: longitude was added to latitude and altitude was multiplied by speed. These were added using method b). Using these measures will produce pretty much the same chart each time the ride is done. In contrast, an image that is totally unique to the ride can be produced using data relating to the individual rider. The image below uses the same recipe to combine speed, heart rate, power and cadence. If this had been a particularly special ride, the image would be a nice personal memento.
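In code, the recipe might look like the sketch below, where np.add.outer and np.multiply.outer implement method a) on the resampled series and the final addition is method b).

import numpy as np

def normalize(x):
    # scale a series to the range (-1, +1)
    x = np.asarray(x, dtype=float)
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

# method a): one series per axis
img1 = np.add.outer(normalize(lon), normalize(lat))
img2 = np.multiply.outer(normalize(alt), normalize(speed))
# method b): add the corresponding pixel values
combined = img1 + img2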

A different take on four laps of Richmond Park

For anyone interested in the underlying code, I have posted a Jupyter notebook here.

References

“Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks”, Z. Wang and T. Oates, https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10179/10251


Cycling Through Artistic Styles


My earlier post on cycling art provided an engaging way to consider the creative potential of deep learning. I have found myself frequently gravitating back to the idea, using the latest code available over at fast.ai. The method uses a neural network to combine the content of a photograph with the style of an artist, but I have found that it takes a few trials to find the right combination of content versus style. This led to the idea of generating a range of images and then running them together as a movie that gradually shifts from the base image to a raw interpretation of the artist’s style.

Artistic styles

Using a range of artistic styles from impressionist to abstract, the weights that produced the most interesting images varied according to the photograph and artistic style.

My selected best images are shown below, next to snippets of the corresponding artworks. It turned out that the impressionist artists (Monet, Van Gogh, Cézanne and Braque) maintained the content of the image, in spite of being more heavily weighted to artistic style. In contrast, the more monochromatic styles (O’Keeffe, Polygons, Abstract as well as Dali) needed to be more strongly weighted towards content, in order to preserve the cyclist in the image. The selections for Picasso and Pollock were evenly balanced.

Every image is unique and sometimes some real surprises pop up. For example, using Picasso’s style, the mountains are interpreted as rooftops, complete with windows and doors. Strange eyes peer out of the background of finger-shapes in the Dali image and the mountains have become Monet’s water lilies. The Pollock image came out very nicely.

Deep learning

The approach was based on the method described in the paper referenced below. Running the code on a cloud-based GPU, it took about 30 seconds for a neural network to learn to generate an image with the desired characteristics. The learning process was achieved by minimising a loss function, using gradient descent. The clever part lay in defining an appropriate loss function. In this instance, the sample image was passed through a separate pre-trained neural network (VGG16), where the activations, at various layers in the network, were compared to those generated by the photograph and the artwork. The loss function combined the difference in photographic content with the difference in artistic style, where the critical parameter was the content weighting factor.
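A condensed sketch of the loss in PyTorch, where gen_feats, photo_feats and art_feats stand for lists of VGG16 activations at the chosen layers (the layer choices and relative weightings in the paper are more involved):

import torch.nn.functional as F

def gram(f):
    # Gram matrix of a (channels, height, width) feature map: style statistics
    c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer_loss(gen_feats, photo_feats, art_feats, content_weight):
    content = F.mse_loss(gen_feats[-1], photo_feats[-1])
    style = sum(F.mse_loss(gram(g), gram(a))
                for g, a in zip(gen_feats, art_feats))
    return content_weight * content + style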

I decided to vary the content weighting factor logarithmically between around 0.1 and 100, to obtain a full range of content-to-style combinations. A movie was produced simply by packing together the images one after the other.

References

A Neural Algorithm of Artistic Style, Leon A. Gatys, Alexander S. Ecker, Matthias Bethge


Strava – Automatic Lap Detection

Opening Laps of Hillingdon Race

As you upload your data, you accumulate a growing history of rides. It is helpful to find ways of classifying different types of activities. Races and training sessions often include laps that are repeated during the ride. Many GPS units can automatically record laps as you pass the point where you began your ride or last pressed the lap button. However, if the laps were not recorded on the device, it is tricky to recover them. This article investigates how to detect laps automatically.

First consider the simple example of a 24 lap race around the Hillingdon cycle circuit. Plotting the GPS longitude and latitude against time displays repeating patterns. It is even possible to see the “omega curve” in the longitude trace. So it should be possible to design an algorithm that uses this periodicity to calculate the number of laps.

This is a common problem in signal processing, where the Fourier Transform offers a neat solution. This effectively compares the signal against all possible frequencies and returns values with the best fit in the form of a power spectrum. In this case, the frequencies correspond to the number of laps completed during the race. In the bar chart below, the power spectrum for latitude shows a peak around 24. The high value at 25 probably shows up because I stopped my Garmin slightly after the finish line. A “harmonic” also shows up at 49 “half laps”. Focussing on the peak value, it is possible to reconstruct the signal using a frequency of 24, with all others filtered out.
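The lap count then drops out of a few lines of NumPy, assuming the latitude series is held in an array called lat, sampled once per second:

import numpy as np

signal = lat - lat.mean()  # remove the constant (zero-frequency) component
power_spectrum = np.abs(np.fft.rfft(signal)) ** 2
laps = np.argmax(power_spectrum[1:]) + 1  # peak frequency = laps per ride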


So we’re done – we can use a Fourier Transform to count the laps! Well, not quite. The problem is that races and training sessions do not necessarily start and end at exactly the starting point of a lap. As a second example, consider my regular Saturday morning club run, where I ride from home to the meeting point at the centre of Richmond Park, then complete four laps before returning home. As shown in the chart below, a simple Fourier Transform approach suggests that the ride covered 5 laps, because, by chance, the combined time for me to ride south to the park and north back home almost exactly matches the time to complete a lap of the park. Visually it is clear that the repeating pattern only holds for four laps.


Although it seems obvious where the repeating pattern begins and ends, the challenge is to improve the algorithm to find this automatically. A brute-force method would compare every GPS location with every other location on the ride, which would involve about 17 million comparisons for this ride, and then you would need to exclude the points recorded closely before or after each point, depending on the speed of the rider. Furthermore, the distance between two GPS points involves a complex formula called the haversine rule that accounts for the curvature of the Earth.
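For reference, the haversine rule gives the great-circle distance between two points expressed in radians:

import numpy as np

def haversine(lat1, lon1, lat2, lon2, R=6371000):
    # great-circle distance in metres; all angles in radians
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))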

Fortunately, two tricks can make the calculation more tractable. Firstly, the peak in the power spectrum indicates roughly how far ahead of the current time point to look for a location potentially close to the current position. Given a generous margin of, say, 15% variation in lap times, this reduces the number of comparisons by a whole order of magnitude. Secondly, since we are looking for points that are very close together, we only need to multiply the longitudes by the cosine of the latitude (because lines of longitude meet at the poles) and then a simple Euclidean sum of the squares of the differences locates points within a desired proximity of, say, 10 metres. This provides a quicker way to determine the points where the rider was “lapping”. These are shaded in yellow in the upper chart and shown in red on a longitude/latitude plot below. The orange line on the upper chart shows, on the right-hand scale, the rolling lap time, i.e. the number of seconds to return to each point on the lap, from which the average speed can be derived.
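Combining the two tricks gives a sketch like the following, where lag is the approximate lap length in samples read off the power spectrum:

import numpy as np

def lapping_points(lat, lon, lag, tol=0.15, radius=10.0, R=6371000):
    lat_r, lon_r = np.radians(lat), np.radians(lon)
    x = lon_r * np.cos(lat_r.mean()) * R  # metres east (flat-earth approximation)
    y = lat_r * R                         # metres north
    lo, hi = int(lag * (1 - tol)), int(lag * (1 + tol))
    hits = []
    for i in range(len(x) - lo):
        j = np.arange(i + lo, min(i + hi, len(x)))
        d2 = (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2
        if d2.size and d2.min() < radius ** 2:
            hits.append(i)  # the rider passed this point again about a lap later
    return hits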


Two further refinements were required to make the algorithm more robust. One might ask whether it makes a difference using latitude or longitude. If the lap involved riding back and forth along a road that runs due East-West, the laps would show up on longitude but not latitude. This can be solved by using a 2-dimensional Fourier Transform and checking both dimensions. This, in turn, leads to the second refinement, exemplified by the final example of doing 12 ascents of the Nightingale Lane climb. The longitude plot includes the ride out to the West, 12 reps and the Easterly ride back home.


The problem here was that the variation in longitude/latitude on the climb was tiny compared with the overall ride. Once again, the repeating section is obvious to the human eye, but more difficult to unpick from its relatively low peak in the power spectrum. A final trick was required: to consider the amplitude of each frequency in decreasing order of power and look out for any higher-frequency peaks that appear early in the list. This successfully identified the relevant part of the ride, while avoiding spurious observations for rides that did not include laps.

The ability of an algorithm to tag rides that include laps is helpful for classifying different types of sessions. Automatically marking the laps would allow riders and coaches to compare laps against each other over a training session or a race. A potential AI-powered robo-coach could say, “Ah, I see you did 12 repeats in your session today… and apart from laps 9 and 10, you were getting progressively slower….”