[Image: phone showing data screen resting on Heaton Harriers screen]

Running with data

7th July 2018. 14:05:13 hours. I can still remember the elated feeling, glancing at my watch on the way out of Carey Burn and realising I was almost bang on target. After over three and a half hours of racing in the sweltering sun I was within thirteen seconds of my pre-race goal.

It wasn’t the first time I’d been on target in a big race, but it was the first time it owed as much to a spreadsheet as to training. It was last year’s Chevy Chase, and my first attempt at making data work for me in a race.

Most fell races follow a simple format: you run to one or more checkpoints, in order, before the cut-off times. If you ignore the fact that the checkpoints are often on hill or mountain summits, you could even consider it fun.

What racing provides you with, as well as dull and persistent pain in the quads, is data. We track our runs with watches that record hundreds of data points every minute. We can access detailed information on our splits during races.

It’s hard to know where to start tapping into it all, but last year I decided to try some basic analysis to help me in the race.


Given I’d done the race before, I had some data to start with. I set up a simple spreadsheet and entered the course information, my 2017 race data and the winner’s data.

Combining the data with my knowledge of the route, I was able to look for areas where I needed to improve. How did I handle different sections? Was I consistently slower than the winner, or was I particularly slower on ascent or descent? Were any of my splits outliers? Where could I expect to go faster?
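To give a flavour of what that comparison looked like, here’s a minimal Python sketch of the same idea: lining my splits up against the winner’s, leg by leg, and seeing where the relative gap is biggest. The leg names, distances and times below are made-up placeholders, not the actual race data.

```python
# Per-leg comparison of my splits against the winner's.
# Leg names, distances and times are illustrative placeholders,
# not the real Chevy Chase data.

LEGS = [
    # (leg, distance in km, my 2017 split in min, winner's split in min)
    ("Start to CP1",      6.0,  48.0, 36.0),
    ("CP1 to Cheviot",    7.5,  72.0, 55.0),
    ("Cheviot to CP3",    5.0,  41.0, 28.0),   # mostly descent
    ("CP3 to Carey Burn", 6.5,  55.0, 40.0),
    ("Carey Burn to end", 4.4,  41.0, 29.0),
]

print(f"{'Leg':<20} {'Me':>6} {'Winner':>7} {'Gap':>6}")
for leg, dist, mine, winner in LEGS:
    my_pace = mine / dist                     # min per km
    winner_pace = winner / dist
    gap = 100 * (mine - winner) / winner      # how much slower, relatively
    print(f"{leg:<20} {my_pace:>6.1f} {winner_pace:>7.1f} {gap:>5.0f}%")

# Legs with a noticeably bigger relative gap (often the descents) are the
# obvious places to target in training or to set more ambitious splits.
```

A spreadsheet does exactly the same job with a couple of formulas; the point is simply to make the gap on each leg visible at a glance.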

I was able to tailor my training around improving my downhill pace. I set targets based on a combination of what I thought was possible and my target time. I’d run the 2017 race in 4:37:12 and was keen to get below four hours in 2018.

Once I was done, I copied the key data points onto a phone wallpaper so I could glance at it on the go. It showed checkpoints, cut-offs and distances alongside my last time, my target time and the pace I’d need to get there.
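For anyone building something similar, the wallpaper numbers come down to one small calculation. One simple approach (not necessarily how I weighted mine) is to scale last year’s cumulative splits so that the final one lands on the new target; in the sketch below the checkpoints, distances, cut-offs and intermediate splits are placeholders, with only the 4:37:12 finish taken from the real result.

```python
# Turn an overall goal into per-checkpoint targets by scaling last year's
# cumulative splits so the final one lands on the new target time.
# Checkpoints, distances, cut-offs and intermediate splits are placeholders.

TARGET_MIN = 4 * 60.0                    # aiming to go under four hours
LEGS = [
    # (checkpoint, cumulative km, cut-off after start, 2017 cumulative split in min)
    ("CP1",         6.0, "1:15",  48.0),
    ("Cheviot",    13.5, "2:45", 145.0),
    ("CP3",        18.5, "3:30", 190.0),
    ("Carey Burn", 25.0, "4:45", 245.0),
    ("Finish",     29.4, "6:00", 277.2),  # 2017 finish time of 4:37:12
]

scale = TARGET_MIN / LEGS[-1][3]         # shrink every split by the same factor

print(f"{'Checkpoint':<11} {'Km':>5} {'Cut-off':>8} {'2017':>6} {'Target':>7} {'Pace':>5}")
prev_km, prev_target = 0.0, 0.0
for cp, km, cutoff, split_2017 in LEGS:
    target = split_2017 * scale
    pace = (target - prev_target) / (km - prev_km)   # min/km needed on this leg
    print(f"{cp:<11} {km:>5.1f} {cutoff:>8} {split_2017:>6.0f} {target:>7.0f} {pace:>5.1f}")
    prev_km, prev_target = km, target
```

In practice you’d nudge individual targets for terrain and conditions, but scaling last year’s splits gives a sensible starting point and a spreadsheet handles it with a single formula.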

Despite the scorching sun, it went surprisingly well. For most of the race I hit checkpoints within two minutes of my target. At the final checkpoint I was just thirteen seconds slower than my overall target. It was on.

And then the inevitable happened: I turned right out of Carey Burn and saw the climb they call Hell’s Path. I’d thought there was a simple, flat-ish 4.4k to the finish, only to find another fierce hill. After running as close to my best race as I could, my head dropped and I struggled up that final climb.

In the end I was three minutes behind my target, with a thirty-four-minute personal best. Given how close my times were at the checkpoints, I’ll take that as a win for using data to inform your race strategy.


As we approach this year’s race, my spreadsheet has surfaced again. I’ve got both more data and more experience. I know that I can climb Cheviot a little quicker than I thought, but also that Hell’s Path is brutal on tired legs.

Because I now work in data and analytics, I can’t help but imagine what could be possible if you were to utilise all of the race data available. Few sports lend themselves to data-driven performance improvement as well as fell running.

If you analysed every split from every runner, you could find fascinating insights into the best pacing strategy. You could work out a runner’s individual strengths and plan a pace and route to match them perfectly. Comparing a runner to similar runners could help predict performance and identify areas for extra training.
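As a rough sketch of that kind of analysis (with entirely synthetic data), even comparing one runner’s leg times against the field median would flag relative strengths and weaknesses:

```python
# Compare one runner's leg splits against the field median for each leg.
# All runners and times are synthetic; a real version would use every
# finisher's splits from the published results.
from statistics import median

# leg -> every finisher's split on that leg, in minutes (synthetic)
field_splits = {
    "Start to CP1":      [44, 48, 51, 55, 58, 62, 66],
    "CP1 to Cheviot":    [70, 76, 81, 85, 90, 96, 103],
    "Cheviot to CP3":    [30, 34, 37, 40, 43, 47, 52],
    "CP3 to Carey Burn": [42, 47, 51, 55, 58, 63, 68],
    "Carey Burn to end": [29, 33, 36, 39, 42, 46, 50],
}

# the runner being analysed (synthetic)
my_splits = {
    "Start to CP1": 50, "CP1 to Cheviot": 84, "Cheviot to CP3": 45,
    "CP3 to Carey Burn": 54, "Carey Burn to end": 41,
}

for leg, times in field_splits.items():
    mid = median(times)
    diff = 100 * (my_splits[leg] - mid) / mid    # % slower (+) or faster (-) than median
    verdict = "weakness" if diff > 5 else "strength" if diff < -5 else "typical"
    print(f"{leg:<20} me {my_splits[leg]:>3}  median {mid:>5.1f}  {diff:+5.1f}%  {verdict}")
```

From there it is a small step to grouping runners with similar profiles and using their results to predict times or suggest pacing.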

In the absence of all that, however, I’m happy with my spreadsheet. It may not seem like running, but it sure seems to help.