Knowing A Little Physics Could Save Your Life

Henry Miller, M.S., M.D.

As Hurricane Dorian closes in inexorably on the U.S. mainland, even the local news here in California is covering it intensely. One meteorologist made an odd remark about the storm: “Let’s not focus too much on what category it is.” (Hurricanes are rated Category 1 through 5, depending on wind velocity.) I think what he meant was that a storm’s damage can be caused by factors other than wind velocity, such as torrential rains and flooding, especially if the storm stalls.
But, as physics tells us, the wind velocity is hugely important. Here’s why: The destructive force of a storm increases as the square of the wind velocity, because kinetic energy = ½mv², where m is mass and v is velocity.
Thus, if Dorian is a Category 4 hurricane with 155 mph winds, it has nearly double the energy of a Category 3 storm with 111 mph winds, since (155/111)² ≈ 1.95. Knowing that — and taking appropriate precautions — could save your life. Storms with 155 mph winds are the kind that wipe out entire communities.
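As a quick sanity check of that arithmetic, here is a minimal sketch (the function name and the speeds plugged in are just illustrative):

```python
# Destructive energy of wind scales with the square of its speed, since
# kinetic energy = 1/2 * m * v^2 and the mass of moving air is held constant.

def relative_energy(v1_mph: float, v2_mph: float) -> float:
    """Ratio of kinetic energy at speed v1 to kinetic energy at speed v2."""
    return (v1_mph / v2_mph) ** 2

print(relative_energy(155, 111))  # ~1.95: the 155 mph storm packs ~2x the energy
```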
Science has other practical payoffs, such as knowing that salt lowers the temperature at which water freezes (which is why we spread it on icy highways), and that you’re in trouble when the temperature drops below the point at which salt is effective. Accidents often occur on heavily salted roads in extremely cold weather because motorists don’t realize that they’re driving on solid ice.
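The underlying chemistry is freezing-point depression, which for dilute solutions follows a simple linear law. A rough sketch (the constants are standard textbook values; the function itself is illustrative):

```python
# Freezing-point depression: delta_T = i * Kf * m, where i is the van 't Hoff
# factor (~2 for NaCl, which dissociates into two ions), Kf is water's
# cryoscopic constant (1.86 degC*kg/mol), and m is molality (mol salt / kg water).

def brine_freezing_point_c(molality: float, i: float = 2.0, kf: float = 1.86) -> float:
    """Approximate freezing point (degC) of salt water; accurate only when dilute."""
    return -(i * kf * molality)

print(brine_freezing_point_c(1.0))  # ~ -3.7 degC for a 1 molal NaCl solution
# The linear law breaks down for concentrated brine, which bottoms out near the
# NaCl-water eutectic, about -21 degC; below that, no amount of salt melts the ice.
```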
But science is also critically important to society at large, for making decisions about everything from where to locate nuclear power plants to how to build the substrate of knowledge that will form the basis of new pesticides and treatments for Alzheimer’s disease and cancer.
So let’s talk about what science is. It has several aspects, including, of course, a vast collection of facts, such as how DNA replicates, and explanations of why the planets revolve around the sun. Scientists also perform experiments to gain a deeper understanding of the biological and physical worlds, such as by examining the post-mortem brains of athletes with chronic traumatic encephalopathy and exploring the nature of chemical reactions.
But most important of all, science is a method for ensuring that experiments and the data derived from them are reproducible and valid.
The Society of Environmental Toxicology and Chemistry published a short primer called, simply, “Sound Science,” which offers some good insights for non-scientists, including politicians, the media and the public. It begins by defining sound science as “organized investigations and observations by qualified personnel using documented methods and leading to verifiable results and conclusions.” If any of those characteristics is missing, the investigations — from lab experiments to clinical and environmental studies — are not likely to be reliable or reproducible.
The phrase “organized investigations and observations” means that there needs to be a readily testable hypothesis — for example, that two headache treatments, A and B (say, aspirin versus a sugar pill), are equally effective.
The results of the two treatments are then compared, and appropriate statistical methods are applied to ascertain whether we can reject the hypothesis that the effects of the treatments are the same, in favor of the alternative hypothesis that A and B differ. That’s the essence of the process for testing a new drug and accumulating evidence to be submitted to regulators.
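As a minimal sketch of such a comparison, using simulated data and a standard two-sample t-test (the scores and sample sizes are made up; real trials use pre-registered endpoints and far more careful designs):

```python
# Simulated headache-severity scores for two treatments; lower = more relief.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment_a = rng.normal(loc=3.0, scale=1.0, size=100)  # e.g., aspirin
treatment_b = rng.normal(loc=3.5, scale=1.0, size=100)  # e.g., sugar pill

# Null hypothesis: the two treatments have the same mean effect.
t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value lets us reject the null in favor of "A differs from B";
# it does not by itself tell us the difference is large or clinically meaningful.
```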
That might seem straightforward: the scientific method is in theory well understood, and one would think that experts in a given field could evaluate the methods, results and conclusions of research performed by “qualified personnel” — defined loosely as people who understand the theory and practice of the scientific method. In practice, however, science is anything but straightforward, especially when politics and other special interests intrude.
Separating Science From Schlock
Astonishingly, “more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments,” according to a 2016 survey of 1,576 researchers conducted by the journal Nature. That is alarming enough, but the problem is likely only to get worse with the proliferation of “predatory publishers” of open-access journals (which anyone can read online without a subscription fee).
According to an article in the New York Times by science writer Gina Kolata, the journals published by some of the worst offenders are nothing more than cash-generating machines that eagerly, uncritically accept virtually any submitted paper as long as the authors pay a hefty publication fee. (I receive email solicitations from them several times a week.)
Another worrisome trend is the increasingly frequent publication of flawed “advocacy research,” designed to produce a false result that supports a certain cause or position and that can be cited by activists long after the findings have been discredited. The articles reporting the results of such “experiments” are often found in the predatory open-access journals.
Adding to the confusion, many non-scientists, including journalists and editors, frequently confuse association and causation. When a study finds an association, it means only that two events or findings tend to occur together; causation means that one event actually causes the other.
The classic formulation is that the cock crows and then the sun rises; the two events are associated, but there is no causation. A more subtle example would be a claim that organic foods cause autism, simply because organic food sales and the incidence of autism have increased in tandem.
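A toy simulation shows why two unrelated quantities that both trend upward will correlate strongly (all of the numbers below are invented, purely for illustration):

```python
# Two independent, upward-trending series correlate almost perfectly,
# even though neither has any causal connection to the other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(2000, 2020)
organic_sales = 1.0 + 0.5 * (years - 2000) + rng.normal(scale=0.5, size=years.size)
autism_incidence = 2.0 + 0.3 * (years - 2000) + rng.normal(scale=0.5, size=years.size)

r, p = stats.pearsonr(organic_sales, autism_incidence)
print(f"r = {r:.2f}, p = {p:.2g}")  # r near 1.0: the series share only a time trend
```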
A related phenomenon that gives rise to erroneous conclusions is “data dredging” (or “data mining”), in which an investigator combs a large number of variables for statistically significant associations and formulates a hypothesis only after the analysis is done. That’s how we end up with spurious headlines such as this one in USA Today: “Drinking four cups of coffee daily lowers risk of death.”
Such conclusions can arise when researchers ask their subjects numerous questions about what they eat and drink and about their activities (exercise, smoking, occupation, hobbies, etc.) and then try to correlate the answers with health outcomes. If the numbers of questions and outcomes are large enough, spurious statistical associations are inevitable, even though there is no causation. Unfortunately, many people now believe that drinking lots of coffee will actually boost their longevity, when there is no real evidence that it does.
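Here is a minimal simulation of the problem, using purely random “survey answers” and a random “health outcome” (everything is synthetic; the point is the false-positive count):

```python
# With enough comparisons, some associations look "significant" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_questions = 500, 200
answers = rng.normal(size=(n_subjects, n_questions))  # no real signal anywhere
outcome = rng.normal(size=n_subjects)                 # unrelated to the answers

p_values = [stats.pearsonr(answers[:, j], outcome)[1] for j in range(n_questions)]
false_hits = sum(p < 0.05 for p in p_values)
print(f"{false_hits} of {n_questions} questions 'significantly' predict the outcome")
# Expect roughly 5% false positives (about 10 here) despite zero real relationships.
```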
Not only are these phenomena an affront to sound science, but they also confound policymakers.
The bottom line is that science can show the way when researchers follow the rules and when policymakers, the media, and the general public are able to distinguish valid scientific findings from schlock. It’s not always easy.
Henry I. Miller, a physician and molecular biologist, is a senior fellow at the Pacific Research Institute. He learned to revere science as an undergraduate at the Massachusetts Institute of Technology.