Tuesday 15 November 2011

Sport Science - good, bad and bogus

The sport sciences have garnered a huge body of evidence regarding the measurement and improvement of performance. Television programmes reporting on the preparation of elite players for events like the 2012 Olympic Games show them in laboratories, where scientists in lab coats connect them to tubes and cables and the ubiquitous machines that go 'ping'.

The scientific foundations of modern training and measurement are simply taken for granted by those involved in sport. Personally, I am not so sure.

Dave Collins (formerly Performance Director at UK Athletics and now Professor at the University of Central Lancashire) and I have recently written a paper for the International Journal of Sport Policy and Politics called "'Scienciness' and the allure of second-hand strategy in talent identification and development". We argue that senior sports coaches and administrators are too quickly impressed by apparently 'sciency'-looking strategies and practices which, on closer inspection, are really not very good science at all.

Of course, not all sport science is dodgy; the great majority is perfectly respectable. A problem, though, is that the people who actually use the research are rarely able to distinguish between good and bad science.

Without an understanding of science, most of us fall back on good old-fashioned common sense. That is probably a sound bet most of the time, but science is occasionally paradoxical and surprising, and quite the opposite of common sense. And there are countless examples of bogus ideas in sport that seem commonsensical; consider talent identification: the importance of spotting talent young; the need to identify children with the correct body shape for specific sports; the necessity of focusing on one sport early. All of these ideas are widely believed to be true because they seem like common sense. Yet they are all, more or less, nonsense.

One of the most fascinating examples of the battle between science and non-science is reported in Michael Lewis' Moneyball, which has just been released as a film starring ... [drum roll] ... Brad Pitt. The book/film focuses on Major League Baseball, possibly the most evidence-laden sport in the world, and its reliance on a combination of the wisdom of aged insiders and common-sense-based, sciency statistics. According to Lewis, the statistics used by even the most elite teams to evaluate player success and, vitally, to identify the most talented recruits were simply not relevant to the modern game.

Moneyball also tells of the remarkable success of the Oakland A's, a relatively poor team that somehow managed to compete successfully against much wealthier and more established teams. Their secret? They used better-informed statistics to identify the measures that really predicted playing success. So, while their opponents continued to assess players' potential and performance using common sense measures like batting averages, the A's turned to on-base percentages and slugging percentages. These scores were not only more valid predictors of playing success, but they also allowed the team to buy players much more economically.
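To make the contrast concrete, here is a minimal sketch (in Python) of the three statistics mentioned above. The formulas for batting average, on-base percentage and slugging percentage are the standard ones; the two player records are invented purely for illustration and are not taken from Moneyball or from any real roster.

```python
# Standard baseball rate statistics; the player figures below are
# hypothetical, used only to illustrate the point in the text.

def batting_average(s):
    # Hits per at-bat: the traditional 'common sense' measure.
    return s["H"] / s["AB"]

def on_base_percentage(s):
    # Credits walks and hit-by-pitch as well as hits, so it rewards
    # players who reach base by any means.
    reached = s["H"] + s["BB"] + s["HBP"]
    chances = s["AB"] + s["BB"] + s["HBP"] + s["SF"]
    return reached / chances

def slugging_percentage(s):
    # Total bases per at-bat, so extra-base hits count for more.
    singles = s["H"] - s["2B"] - s["3B"] - s["HR"]
    total_bases = singles + 2 * s["2B"] + 3 * s["3B"] + 4 * s["HR"]
    return total_bases / s["AB"]

# Two invented players with identical batting averages but very
# different on-base and slugging numbers.
player_a = {"AB": 500, "H": 140, "2B": 20, "3B": 2, "HR": 5,
            "BB": 20, "HBP": 2, "SF": 3}
player_b = {"AB": 500, "H": 140, "2B": 35, "3B": 3, "HR": 20,
            "BB": 70, "HBP": 5, "SF": 3}

for name, stats in (("Player A", player_a), ("Player B", player_b)):
    print(name,
          round(batting_average(stats), 3),
          round(on_base_percentage(stats), 3),
          round(slugging_percentage(stats), 3))
```

Both invented players bat .280, so on batting average alone they look interchangeable; once walks and extra-base hits are counted, Player B reaches base and hits for power far more often, which is exactly the kind of difference the A's were exploiting.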

The events described in Moneyball took place some years ago, and the other Major League teams have since been forced to change, albeit reluctantly. It may be that the new film will stimulate interest in the issue of good, bad and bogus sport science, and - who knows - perhaps it will contribute to the sport science education of coaches and administrators.

[A recent issue of the Financial Times has a fascinating discussion between Michael Lewis and Billy Beane, the man behind the A's statistical revolution]
