BikeTechReview.com

Human SuperComputers


“Landing on Mars is very difficult…”
-   Ed Weiler, NASA

When projects like landing on Mars are taken on by the big-wigs at NASA (National Aeronautics and Space Administration) and JPL (Jet Propulsion Laboratory), the monetary and time investments are mind-boggling. Hardly an expense is spared in researching and developing a system capable of hurling itself off Earth, careening through space, and eventually coming to rest on the Red Planet – all in one piece, of course. If it all works out like the geeks at the computer terminals planned, and with a little luck, the colossal task at hand is made to look very easy…

...kind of like when Jan Ullrich made whoopin’ up on LANCE in a super important Tour time-trial in 2003 look easy... 

It’s difficult to believe that Jan Ullrich, or any other bike racer who has been doing this kind of thing for a significant amount of time, spends countless hours simulating his performance in the virtual-reality land of cyberspace.  More than likely, guys like Jan are out riding their bikes.  However, one recent case study seems to indicate that an experienced cyclist is capable of displaying nearly supercomputer-optimized pacing strategies!  This raises the question:  Are bike racers really human supercomputers?
 
The Course

The climb up Couser Canyon Road lies in the foothills of Palomar Mountain in northern San Diego County, California.  It is just over six kilometers long and rises around 275 meters for an average grade of approximately 4.5% (according to the downloaded data from my Polar x-trainer plus altimeter).  It’s not a particularly grueling climb, but it is my spot for doing 20MP-based workouts and testing my fitness.

This course is particularly interesting from a performance-modeling perspective, since there are four distinct sections to the route.  The first 1.5 kilometers is flat with one small 200-meter, sprinter-type hill.  The second section is a 1.5-kilometer climb of increasing gradient that then levels off a bit for the flat-to-gradually-uphill, 2-kilometer-long third section.  The final section is the steepest kilometer of the route.

In the past ten weeks I have done this course nine times.  I have employed some different power pacing strategies along the way, thanks to a “new-to-me” SRM power meter.  During a test in late February, though, I just rode my bike up the hill and didn’t worry about the data until afterwards.  The results of riding this course by “feel” and then comparing them to the results of an optimized model/analysis were a bit surprising – depending on which side of the technology fence one stands...
 
The Model

I won’t bore anyone with the background details on the fundamental equation of motion of a cyclist, since it is beyond the scope of this blurb.  The model used was derived from first principles – and Newton has been known to get a thing or two correct in his day when it comes to physics:
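The equation itself appeared as an image in the original post and did not survive extraction; a commonly used first-principles form is reproduced below (a reconstruction on my part – still air is assumed, and the original image may have used slightly different notation):

```latex
% Power demand for a cyclist of total mass M on a road of slope angle theta:
% aerodynamic drag + rolling resistance + gravity + inertia, all times speed.
P = v\left[\tfrac{1}{2}\rho C_d A\, v^{2}
         + M g\, C_{rr}\cos\theta
         + M g\sin\theta
         + M\frac{dv}{dt}\right]
```

Here P is power at the wheel, v is ground speed, ρ is air density, CdA is the drag area, Crr is the rolling resistance coefficient, M is the combined rider-plus-bike mass, and g is gravitational acceleration.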

The equation of motion above adequately describes the demand side of riding a bike.  Cyclists must overcome all sorts of forces due to the environment – aerodynamic drag, tire rolling resistance, inertia, and of course, gravity.  The more difficult (for me, anyways) side of the equation to model is the supply/power side:  What is the human body physiologically capable of doing while riding a bike?  Or, more simplistically – how many “fun tickets” do we get to spend over a given course?

As discussed previously, one only has so many “fun tickets” (which can be defined by an average power constraint) to spend over the course of the event.  The win/lose question then becomes “What is the most effective way to use this average power constraint to determine optimal performance?”  
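To make the “fun ticket” budget concrete, here is a toy Python sketch (the `scale_to_average` helper and the segment wattages are made up for illustration) that rescales a variable-power plan over equal-duration segments so its average still hits a fixed budget:

```python
# Toy illustration: enforce an average power constraint on a variable plan.
# The plan is assumed to be sampled over equal-duration segments.
def scale_to_average(powers, target_avg):
    """Uniformly rescale a power profile so its mean equals target_avg."""
    current = sum(powers) / len(powers)
    return [p * target_avg / current for p in powers]

plan = [250, 250, 320, 350, 300]        # hypothetical watts per segment
scaled = scale_to_average(plan, 289.0)  # 289 W budget, as in the article
print(sum(scaled) / len(scaled))        # mean is now 289.0
```

The point is only that “variable power” and “fixed average power” are compatible: any shape of effort can be normalized back onto the same budget before comparing strategies.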

Optimal Pacing Strategy

Lots of good TT’ers will describe their performances with statements similar to “I just went as hard as I could.”  While this description is completely valid, it’s not entirely helpful for novices learning the sport, nor is it helpful for aspiring equation of motion modelers.  Does going “as hard as I could” imply that a constant power pacing strategy was used and thus is “best”; or, might a “go harder on the uphills and easier on the downhills/variable power” type of strategy be faster (even though both strategies must equal the same average power constraint)?

Mathematical modeling makes the exploration of these types of questions very appealing.  It takes relatively little time to run through hundreds of possible combinations of % power increases at different % grade thresholds (e.g. 20% higher power on sections of road that are 4% or steeper).  The following contour plot shows the relationship between road grade, power variability, and time to complete the Couser Canyon course, assuming an average power constraint of 289 watts (the post-hoc average power of the pace-by-feel trial completed in late February):


For example, with a 20% power variation (+/- the average power target) applied whenever the road tilts up or down by 7%, a little more than 10 seconds would be lost relative to the theoretical minimum time to cover the course.  Similarly, a 0% power variation at a 0% grade threshold (a constant-power effort) would result in slightly more than a 20 second time loss.  The theoretical best way to go up Couser Canyon Road would be to vary power by 60-80% above the average power constraint level whenever the road tilted up/down by 6% (as indicated by the 0 second contour on the plot).
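The sweep behind a contour plot like this can be sketched in a few lines of Python. Everything below is an illustrative assumption – the segment list, rider constants, and the crude budget fix-up stand in for the article’s actual model and numbers:

```python
# Toy sketch of the (power variation, grade threshold) sweep.
RHO, CDA, CRR, MASS, G = 1.2, 0.32, 0.005, 75.0, 9.81  # assumed constants

def speed_at_power(power, grade):
    """Solve power = v * (aero drag + rolling + gravity) for v by bisection."""
    def demand(v):
        return v * (0.5 * RHO * CDA * v ** 2 + MASS * G * (CRR + grade))
    lo, hi = 0.1, 30.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if demand(mid) < power else (lo, mid)
    return (lo + hi) / 2

def course_time(segments, avg_power, variation, grade_threshold):
    """Go (1+variation)*avg on steep segments, (1-variation)*avg elsewhere."""
    powers = [avg_power * (1 + variation) if g >= grade_threshold
              else avg_power * (1 - variation) for _, g in segments]
    scale = avg_power / (sum(powers) / len(powers))  # crude budget fix-up
    return sum(length / speed_at_power(p * scale, g)
               for (length, g), p in zip(segments, powers))

# Rough stand-in for the four Couser Canyon sections described above.
segments = [(1500, 0.00), (1500, 0.045), (2000, 0.02), (1000, 0.08)]
t_constant = course_time(segments, 289, 0.00, 0.04)
t_variable = course_time(segments, 289, 0.20, 0.04)
print(t_variable < t_constant)  # variable pacing wins on variable terrain
```

Looping `course_time` over a grid of `variation` and `grade_threshold` values, and subtracting the minimum, is all a contour plot like the one above requires. A stricter version would enforce the time-weighted (rather than segment-weighted) average power iteratively.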
 
Several case results have been tabulated below:

An example of the instantaneous power versus distance plot for some of the modeled cases looks similar to this:

The above profile is not very useful, other than showing that the initial section was done below the average power target of 289 watts and that power should go up on the steep sections and down on the flat/downhill sections.  This plot becomes slightly more interesting when a 30 second rolling average is computed and then compared to the “real” data acquired during the pace-by-feel effort.
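The smoothing step is just a rolling mean over the power samples (one sample per second, so a 30-sample window is 30 seconds); a minimal sketch, with the sample values invented for illustration:

```python
# Rolling average of a power trace; early points use whatever samples
# exist so the output has the same length as the input.
def rolling_average(samples, window=30):
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

power = [300] * 10 + [200] * 10          # hypothetical 1 Hz power samples
print(rolling_average(power, window=5)[-1])  # 200.0
```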

The above plot (red is actual data, black is simulated data) shows my best guess at the variable power strategy that I employed during the effort (50% power variation at a +/- 6% grade threshold).  This variable power effort would have resulted in being a mere two seconds off the computed, optimal time.  The plot shows that up until 500 seconds the actual and simulated data track exceptionally well.  The subsequent 500 seconds show some significant deviation – trends and magnitudes are still pretty good on an overall level, though.

The following plot is an overlay of several different cases that were modeled:

Probably the most interesting thing about this plot is to notice the 289 watt constant power line (green) and the minimum time power line (light blue) – not very similar, huh?  Section 1 of the course is done primarily below the average power target; section 2 has a gradually increasing slope to its power.  Section 3 is again below the target power, allowing the body to rest up for the above-average-power period during the last 5-6 minutes of the effort.

It should be noted that all the simulated trials and the actual power generated over the course exhibited the same general patterns – one should go slightly harder when the road tilts up, and slightly easier when the road is level/tilts slightly down for optimum performance.  More importantly, the results show that a variable power pacing strategy can make a significant difference in the total time to complete a course.

While instantaneous speed may not be particularly valuable for identifying general trends in performance, it can demonstrate the accuracy of the mathematical model.  The first plot below is an overlay of actual speed, speed determined without an inertial term in the equation of motion (power consumption due to speed changes is not accounted for), and speed determined while considering inertia effects:

It should be clear that the red line is the no-inertia result.  If the no-inertia red trace is removed and the data is re-plotted:

we can see that mathematical modeling does a pretty darn good job of matching “real-world” performance.  Isn’t math cool?
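The inertia effect is easy to demonstrate numerically: run the same power profile through the model once with the mass-times-acceleration term and once quasi-statically. The constants and the 5-second power spike below are illustrative assumptions, not the article’s data:

```python
# Same power profile, with and without the inertia term.
RHO, CDA, CRR, MASS, G = 1.2, 0.32, 0.005, 75.0, 9.81  # assumed constants

def resist(v, grade):
    """Resistive force (N): aero drag + rolling + gravity component."""
    return 0.5 * RHO * CDA * v ** 2 + MASS * G * (CRR + grade)

def speed_with_inertia(powers, grade=0.0, dt=1.0, v0=8.0):
    """Time-step M*dv/dt = P/v - F_resist(v): speed responds gradually."""
    v, out = v0, []
    for p in powers:
        v += dt * (p / v - resist(v, grade)) / MASS
        out.append(v)
    return out

def speed_quasi_static(powers, grade=0.0):
    """No inertia: at each instant v jumps straight to P = v*F_resist(v)."""
    out = []
    for p in powers:
        lo, hi = 0.1, 30.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if mid * resist(mid, grade) < p else (lo, mid)
        out.append((lo + hi) / 2)
    return out

surge = [289] * 30 + [450] * 5 + [289] * 30  # a 5-second power spike
v_inertia = speed_with_inertia(surge)
v_static = speed_quasi_static(surge)
# The no-inertia trace over-reacts to the spike; the inertial trace is smoother.
print(max(v_static) > max(v_inertia))
```

This is exactly the behavior visible in the overlay: dropping the inertial term makes the modeled speed jump around with every power change, while the full model follows the measured trace much more closely.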

Conclusion

While some may describe doing time trials/training tests as “I was just going as hard as I could”, this should not be construed as “I used a constant power pacing strategy during the effort”.  The results of this obscure analysis suggest that if the terrain is variable, a variable power pacing strategy will result in a significantly faster time to complete the course.  The best way to determine this variable power pacing strategy is to practice on the course and see what works – a power meter should help to accelerate this whole process of pacing optimization.

During the Mars project, JPL engineers used a lot of elbow grease, smarts, and the power of supercomputers to iterate on design concepts.  As a result, the evolution of the Mars landing system was sped up until it had reached a near “optimal” state.  On “race” day, the hugely challenging task of landing on Mars was made to look easy.

It’s also kind of the same with racing bikes – if one rides enough, sooner or later, that supercomputer between one’s ears will figure it all out and tell your legs, heart, and lungs just the right time to step on the gas and just the right time to ease off, making the complicated task of pacing by feel look easy.  So get out there and ride your bike - “optimal” performance is just around the corner…

Last Updated on Tuesday, 23 February 2010 05:09  
