Scientific & Mathematical Illiteracy in the Media

(On the internet, legacy media, scientific advocacy, and pop science)

(Please note — I wrote this in 2013 and just today copied it over into Squarespace’s tool for managing blogs. Doing so messed up scientific notation stuff in the section where I’m ragging on Bill McKibben and Rolling Stone for their messed up math. I am aware that this is a little ironic. I think I caught my errors; I may have missed one — but what I’m frustrated at McKibben/RS for is NOT their typo, but rather the underlying mathematical mistake and what I assume was the glazed expression every reader had when looking at that piece when they ought to have been applying a critical perspective — ed.)

One objective of scientific education is to reduce mathematical and scientific illiteracy. We would probably expect this goal to be congruent with the goals and values of journalism, since both scientists and journalists are professionally committed to communicating the truth. Reasonable people would agree that one of the qualities of math and science is their resistance to manipulation – pace Disraeli’s quip about lies and statistics, numbers ought to be more immune to the exploitation of the subtleties of language that allow much public speech to be dishonest while not provably false. We might expect, therefore, that journalists would rely heavily upon math and science to support their work.

 

But this summer, amidst rising popular anger at far-reaching government surveillance programs, author and blogger Michele Catalano published an article (“Pressure cookers, backpacks, and quinoa, oh my!” Medium, Aug. 1, 2013) claiming that several ominous three-letter agencies had been monitoring her internet use. She claimed that her Google search history prompted investigators to search her Suffolk County house; in passing, Catalano repeated the investigators’ remark that they conducted about a hundred such searches per week.

 

This is an extraordinary claim. Catalano’s article essentially argues that we live in a ubiquitous surveillance state – and extraordinary claims require extraordinary proof (this axiom is due to Marcello Truzzi, though often attributed to Sagan). I expected that any journalist reading her account would immediately sit down with a pencil and the back of an envelope to estimate the number of visits this taskforce was purportedly making, in order to understand the scope of this surveillance. The mental calculation I made while jogging along the Lake Michigan shore was that her story implied that about 5,200 households were searched every year. I had to stop at the 47th Street pedestrian bridge to catch my breath, where I Googled “Suffolk County, New York” and read that the 2010 US Census found 499,922 households in what looks like a rather pleasant suburban bit of Long Island. If the G-men get two weeks paid vacation a year, we can approximate the relevant numbers as 5,000 searches per year in a county with nearly 500,000 households – so about 1% of Suffolk County, NY households are visited every year by the goon squad… yet no one else had spoken out about this serious breach of their constitutional rights?
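
For the curious, the whole back-of-the-envelope check fits in a few lines of Python. This is my own reconstruction of the arithmetic, using the hundred-visits-per-week figure Catalano repeated and the Census household count:

```python
# Back-of-envelope check of the scale implied by Catalano's account.
# Figures: ~100 household visits per week (the investigators' comment,
# per Catalano) and 499,922 households in Suffolk County, NY (2010 Census).
visits_per_week = 100
working_weeks_per_year = 50        # assume the G-men take two weeks of vacation
households_in_suffolk = 499922

visits_per_year = visits_per_week * working_weeks_per_year    # 5,000
share_visited = visits_per_year / households_in_suffolk       # ~0.01

print(f"{visits_per_year} visits per year")
print(f"{share_visited:.1%} of households visited annually")  # ~1.0%
```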

 

Basic mathematical literacy would have immediately shown that there was a rather suspicious hole in Catalano’s account. The story was soon debunked, but not before a number of high-profile news outlets reported Catalano’s version of it. Apparently no one writing for any of these widely respected newspapers, magazines, and television networks suffered from the nagging doubt that surfaces in a numerically literate person’s mind when he or she reads a number that seems more than a couple of orders of magnitude greater or smaller than a broadly educated person ought to expect. There was some skepticism – the Washington Post stood out, proving itself less gullible than other, more hyperbolic news organizations – but no one commenting on this fiasco pointed out the very obvious mathematical holes in Catalano’s article. The numbers were a seemingly minor aspect of the story, but they ought to have immediately raised suspicion about other parts of her account.

 

Scientific and mathematical illiteracy are not problems exclusive to online journalism – the national press struggles with high school physics as well. A copy of the March 1, 2009 edition of The Atlantic lived in a tree behind my apartment (I threw it there while reading its cover story) for a length of time that I’m certain my neighbors didn’t appreciate. Mark Bowden’s article about the replacement of the USAF mainstay F-15 fighter with fifth-generation F-22s considers the changes in military aviation that have diminished the role of traditional air combat – dogfighting. Describing an advanced fighter’s combat information systems, Bowden states that radar works by emitting electrons and detecting those that are reflected back.

 

Which means that an author writing in a highly respected national periodical – The Atlantic has been published since before the Civil War, and endorsed Abraham Lincoln in 1860 – about technology’s influence on military aviation is scientifically illiterate at something like a high school level – and so is every editor and fact checker who proofed this article. The online version of that article now bears the embarrassing coda:

 

Correction: The print version of this piece incorrectly referred to the particles emitted by radar as electrons. Radar's signals are electromagnetic waves made up of photons.

 

But what is a reader who knows that the photon is the quantum of electromagnetic radiation to conclude from this article? Or the non-specialist reader who nonetheless remembers a high school physics explanation of light – and radar – as a propagating disturbance of the electromagnetic field, and who strongly suspects that an electron is a whole other thing? Perhaps both ought to conclude that any other information this article presents as fact may be nonsense. This error reveals not just the author’s forgivable if embarrassing fallibility, but also the lack of real diligence by The Atlantic’s fact checkers. Setting aside the importance to our civil discourse, shouldn’t we simply want to get stuff this basic right?

 

Mistakes like this speak to a kind of It Probably Isn’t Important school of thought. Projects like the replacement of an air force fighter aircraft or the deployment of an invasive security apparatus are enormously important and affect many people. Billions of dollars will be spent, and we want to think we understand these matters well enough to offer a properly informed opinion on them. But if we rely on this quality of reporting, we won’t understand even the most basic aspects of what we’re talking about.

 

Some may counter that ignorance of this sort of detail is forgivable because these journalists understand the system as a whole. Perhaps they have synthesized an understanding of the science and technology that informs these issues without troubling to understand the details. But to me it feels like a scam. We cannot sit at a remove and make political decisions based on the opinions of experts we trust to understand the details for us, because clearly those representing themselves as experts aren’t close enough to the metal to understand what they’re writing about. It’s masquerade journalism, depending on the broader perception that scientific and mathematical details salted generously throughout an article lend credibility to one’s writing, and that one’s ignorance of those details’ meaning can be concealed. These writers treat scientific language mystically, hoping it can be manipulated on the page, and that as long as certain words stay in a certain order they’ll still have meaning. We see the sham for what it is when the cargo cultist confuses his incantation – when he writes electron where he ought to have written photon. But how frequently does such a writer keep his leptons and his bosons straight and pull off the deception – how many times do we fail to catch it because the writer gets his spells nearly right through dumb luck?

 

The situation isn’t getting better, and the consequences of journalists’ scientific and mathematical illiteracy grow graver. Bill McKibben is considered one of the foremost environmental reporters working in the United States, but in the very first paragraph of his August 2, 2012 Rolling Stone article “Global Warming’s Terrifying New Math,” McKibben writes that the odds of a 327th consecutive month with global temperatures exceeding the 20th century average “occurring by simple chance were 3.7 x 10-99, a number considerably larger than the number of stars in the universe.” The “10-99” error is presumably a typo, introduced when the superscript exponent -99 was set in a regular font without a caret to indicate exponentiation. This is an unfortunate way to start an article about math; almost certainly McKibben means 3.7 x 10^-99.

 

But there is a much more serious error: 3.7 x 10^-99 is compared to a very large number (the number of stars in the universe) and said to be greater than that number. An article by a highly respected environmentalist, writing about the math of climate change – a topic subject to wild, unfounded attacks masquerading as skepticism, one that has always had to rely on its scrupulous scientific record to defend itself – asserts that the number 3.7 x 10^-99, or 0.000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 0037, is larger than the number of stars in the universe. The generally accepted count of stars is between 10^22 and 10^24, or somewhere between 10,000,000,000,000,000,000,000 and 1,000,000,000,000,000,000,000,000 stars. I feel quite confident stating that the number of stars is somewhat greater than 3.7 x 10^-99.

 

The source of this mistake is relatively easy to guess at. Presumably McKibben had previously estimated the likelihood of 327 consecutive above-average months and written that probability in the X-to-1 odds format, where X was large. If he assumed that a given month’s global temperature was independent of the previous and succeeding months, and gave each month a fair-coin-toss, 50-50 chance of being hotter than average, he would guess that a one-month hot streak had a probability of 1/2; a two-month hot streak, 1/2^2; and an n-month hot streak, 1/2^n. 2 to the 327th power, or 2^327, is about 2.73 x 10^98. Expressed as a fractional probability, where the likelihood of an event is represented by a number from zero (no chance of occurring) to 1 (certain to occur), the likelihood that this 327-month hot streak would occur by chance is 1/2^327 = 1/(2.73 x 10^98), or about 3.66 x 10^-99 – essentially the very small number that McKibben cites in his article.
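
If you’d rather check the arithmetic than take my word for it, the whole estimate fits in a few lines of Python – a sketch under the same assumption of independent, fair-coin months:

```python
# Probability of a 327-month hot streak, assuming each month is an
# independent fair coin flip (hotter or cooler than the 20th century average).
streak = 327
outcomes = 2 ** streak        # about 2.73 x 10^98 equally likely sequences
probability = 1 / outcomes    # about 3.66 x 10^-99

print(f"2^327   = {outcomes:.2e}")      # ~2.73e+98
print(f"1/2^327 = {probability:.2e}")   # ~3.66e-99
```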

 

With some very simple math, we can guess at his meaning and deduce a reasonable statement about probability. But that’s not what McKibben wrote, it’s not what his editor at Rolling Stone saw, and it’s not what any fact checkers responsible for scrutinizing this article would have seen. This mistake is, however, the very first thing that anyone with mathematical literacy will see. No one involved in bringing this article to press balked at this elementary mistake in communicating mathematical information, which suggests that none of them has the (rather basic) mathematical fluency necessary to understand and verify any subsequent mathematical statement in the article. It is a mistake that damages the cause of hardworking scientists who need to communicate the dire risks of climate change, and who have depended upon their scientific rigor to bring the unpleasant truth into the reluctant public eye. Unlike the Catalano and Bowden articles, this one bears no correction in its online version as of October 20, 2013.

 

I don’t know if it makes it slightly better or slightly worse to realize that the comparison McKibben makes is itself meaningless – what has the number of stars in the known universe to do with the very real problems of climate change? If there were only one star in the sky, it would not change the probability of such a streak occurring by chance. That information is already communicated in the fractional probability, and the fact that the universe is a very large place is not particularly germane to what is currently happening in our part of it. Has McKibben considered whether the odds against such a trend are greater than the total number of oranges ever grown in Spain? How do they compare to the number of atoms in my dog? Please let him know that she’s been gaining weight, and ask him to revise upwards any previous estimates.

 

I have attributed these errors to Bill McKibben, and to Mark Bowden and Michele Catalano, because their names appear as authors on these pieces, but that is perhaps unfair. It’s not troubling that an author makes a mistake; I’ve written thousands of pages of scientific and technical documents, and I know I’ve made errors. Rather, it’s the nearly unavoidable conclusion that the publishers of such articles lack the diligence to catch these mistakes that is a source of concern. In a final example from journalist and author Malcolm Gladwell, it becomes obvious that honest and accurate communication isn’t even the goal of this kind of numerically illiterate writing.

 

Perhaps no popular writer better embodies the problem of false scientific scholarship than Gladwell. His extraordinarily popular essay collection What the Dog Saw: And Other Adventures contains a passage recounting Nassim Taleb explaining a linear algebra calculation at a chalkboard, which entails finding the “igon value” of a matrix. Unfortunately for Gladwell, any undergraduate math major could tell you that Taleb is interested in eigenvalues, the characteristic values of a matrix, not “igon values,” which are absolutely nothing. When the piece originally ran in The New Yorker, that magazine’s fact checkers found and used the correct spelling, as University of Pennsylvania professor Mark Liberman points out at the Language Log blog (http://languagelog.ldc.upenn.edu/nll/?p=1897, retrieved October 20, 2013). The error’s appearance in Gladwell’s essay collection reveals an interesting counterpoint to our other cargo cult journalists: The New Yorker’s fact checking process worked, but the book shows that the author was simply throwing the mathy stuff around to appear more clever than he really is.
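
For anyone whose linear algebra is rusty, here is a minimal sketch of what an eigenvalue actually is (my own illustrative example, not anything from Taleb’s chalkboard): for a square matrix A, the eigenvalues are the scalars λ for which Av = λv has a nonzero solution v.

```python
# A minimal illustration of eigenvalues: the scalars lambda for which
# A @ v = lambda * v has a nonzero solution v.  (Illustrative example only;
# it has nothing to do with Taleb's actual chalkboard calculation.)
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [3. 1.] -- eigenvalues, not "igon values"
```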

 

An author who has built a career upon counterintuitive results cherry-picked from the social sciences and psychology, and whose books repeat the theme “everything you’ve been taught is wrong – but trust ME,” must think that by smearing something vaguely mathematical or scientific onto his writing he will inherit a certain amount of credibility from those subjects. The fact checkers and editors who turned Gladwell’s drafts into a published book looked at the words “igon value,” presumably thought something like “I don’t know what that means” (because it means nothing), and failed to reach for their dictionaries, because once again It Probably Isn’t Important.

 

A growing number of critics argue that Gladwell’s books are misleading and that they rely on oversimplifications – reviewers for The New York Times and The New Republic, for example, have criticized the accuracy of his writing (Steven Pinker, “Malcolm Gladwell, Eclectic Detective,” New York Times, Nov. 7, 2009; Isaac Chotiner, “Mister Lucky,” The New Republic, Jan. 29, 2009) – but it’s not clear that these critical voices are being heard. Gladwell’s latest book, David and Goliath, is number two on the New York Times Best Seller list (http://www.nytimes.com/best-sellers-books/overview.html, list for October 27, 2013, retrieved October 20, 2013), where it appears in the hardcover nonfiction category – a category that is at least half accurate as a description of his writing.