Genetic study on longevity may be flawed
When scientists announced last week that they had identified 150 genetic variants that could be used to predict whether a person will live past 100, the public was intrigued (I reported on it myself), but fellow scientists were skeptical.
A few aspects of the study raised red flags for geneticists.
First, the 77% prediction accuracy was far higher than anything reported in similar studies, and it stood out all the more given the relatively small number of subjects. The study featured more than 1,000 centenarians -- an impressive figure given how rare it is for people to live that long. But most genetic studies of this type (genome-wide association studies, which scan the entire genome for links to particular traits or diseases) need DNA data from tens of thousands to hundreds of thousands of people to reach meaningful conclusions.
There were also methodological issues. The researchers did not use the same DNA-analyzing technology throughout the study, reportedly because the platform they started with was taken off the market midway through, forcing them to switch to a comparable but not identical product.
Furthermore, genetic data from the centenarians and the younger control subjects were collected differently, which can introduce systematic errors that mimic real genetic differences.
The blogosphere quickly picked up the story after Newsweek magazine broke it July 7. Daniel MacArthur at Genetic Future provides a technical perspective, including a graph showing one of the troubling ways this study deviates from typical reports of its kind.
The personal genomics company 23andMe also published a blog post on the topic. The firm used its extensive database of customers' genetic data (including 134 customers who were 95 and older and 27 who were 100 and older) to test the predictive power of the reported genetic markers -- and found it to be no better than chance.
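For readers curious about what such a check involves, here is a minimal sketch (in Python, using simulated data and a simple additive marker score; this is an illustration of the general idea, not 23andMe's actual analysis) of asking whether a set of genetic markers separates centenarians from controls any better than chance, via a label-permutation test.

```python
# Hypothetical illustration (simulated data), not 23andMe's actual analysis:
# does a marker-based score predict "centenarian vs. control" better than chance?
import numpy as np

rng = np.random.default_rng(0)

n_people, n_markers = 400, 150
# Simulated genotypes: 0, 1, or 2 copies of each marker allele per person.
genotypes = rng.integers(0, 3, size=(n_people, n_markers))
# Simulated outcome: 1 = centenarian, 0 = younger control.
is_centenarian = rng.integers(0, 2, size=n_people)

# A simple additive risk score: total count of "longevity" alleles per person.
score = genotypes.sum(axis=1)

def auc(scores, labels):
    """Area under the ROC curve: chance that a random case outscores a random control."""
    cases, controls = scores[labels == 1], scores[labels == 0]
    wins = (cases[:, None] > controls[None, :]).mean()
    ties = (cases[:, None] == controls[None, :]).mean()
    return wins + 0.5 * ties

observed = auc(score, is_centenarian)

# Permutation test: shuffle the labels to see what "chance" performance looks like.
null = np.array([auc(score, rng.permutation(is_centenarian)) for _ in range(1000)])
p_value = (null >= observed).mean()

print(f"Observed AUC: {observed:.3f}  (chance is ~0.5)")
print(f"Permutation p-value: {p_value:.3f}")
```

With random data like this, the observed AUC hovers near 0.5 and the p-value stays large; a genuinely predictive set of markers would push the AUC well above the permuted baseline.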
It’s too early to toss the study in the trash; straightforward follow-up experiments using a single, standardized genotyping platform could quickly show whether the findings hold up. Regardless, these concerns raise significant questions about how to ensure that good journals publish good science.
-- Rachel Bernstein