
Tuesday, October 28, 2014

The Forced Perspective

When I was younger, I loved to swing; when I needed to think about something, I’d head to the swing set in the yard, grab a seat, and pump my legs, gaining momentum, reaching ever higher—soaring. Or at least, pretending to soar. I remember trying to imagine myself doing the same things as characters from books I’d just read: traveling through time on a bike, hitching a ride on the back of a goose, escaping the bonds of an enemy tribe, or solving crimes with my photographic memory. Sometimes I would face away from the house and look down the hill, across my dad’s fields, to the tiny ribbon of highway seemingly hugging the base of the valley foothills and the sporadic stream of dots driving along that ribbon—and I’d wonder about the “story” behind each of those cars, the reason why and the place where someone would be driving at 4:30 pm on a summer afternoon. From my college-student, analytical mindset, I could say that during those times on the swing set, I was running through different perspectives, considering life through glasses with all sorts of different prescriptions.

How do people’s perspectives change? Well, typically not through swinging. A quick perusal of online sources (not scholarly, mind you) revealed several ideas: asking questions of others who don’t think like you; turning yourself physically upside down; changing up your daily routine. All these people offered ideas about how to change perspective, but they didn’t offer insight into the actual transition from one perspective to another. For example, ask yourself if any of the readings this week changed your perspective, even if only temporarily. How did they do so—or maybe the better question, what did it take to do so? What “clicked” in order to make that transition?

Basically, I’m wondering if the actual, initial transition between perspectives is one of “force” or of “choice.” Take a look at the photographs below.

[Photographs captioned “forced perspective”]
Every one of these included “forced perspective” in its caption. Did you first see the intended illusion or the actual “reality”? At least for me, I saw the illusion first and the set-up almost immediately afterwards. But the illusion—the changed perspective—came first, without my choosing it. Therefore, apply this same concept to the readings: were they “forced” perspectives? (And I’m not saying that the different perspective is an illusion, as in the photos.) If they were “forced,” I’m curious to know why. Peter Atkins (Creation Revisited) used definitions and “logical leading” to make his point about change as the result of controlled chaos; J.B.S. Haldane (“On Being the Right Size”) and Martin Rees (Just Six Numbers) used “what-if” scenarios and comparisons to emphasize that every organism and the universe (respectively) is exactly the “right size.” (Of course, they all used other rhetorical devices as well, but these are patterns I noticed from a broad overview.) People may choose to believe and keep a particular, changed perspective, but I’m not so sure they have much choice in that first glimpse of a changed perspective. And I think perhaps this “forced perspective” is the hardest part of the battle for science writers; knowing the how and why behind forced perspective could be the key difference between a compelling, paradigm-shifting article and one that flops.

Tuesday, October 21, 2014

Caught Unawares

“We are most revealed in what we do not scrutinize,” claims Gould (qtd. in Mishra 141). I think this quote could be used to talk about all three texts by Mishra (“The Role of Abstraction in Scientific Illustration: Implications for Pedagogy”), Lakoff and Johnson (Metaphors We Live By—I’m guessing this is the book), and Gross (“Rhetorical Analysis”). In other words, we are most revealed by what we assume.

Assumptions mask values—and beyond that, belief systems. Lakoff and Johnson describe metaphorical “concepts we live by,” which are basically “assumptions” we make at a linguistic level. For example, they point out our metaphorical concept that “argument is war” (4-6). Though they explain how we conceive of argument as war, they don’t explain why we have arrived at this subtle conceptual metaphor (in contrast, note their explanation for the “time is money” metaphor on pages 8-9). Perhaps we “assume” that argument is war because we (at least Americans) value the idea of “winning” and the superiority brought about by hard work and individualistic effort. Of course, even the values underlying this metaphorical concept of “argument is war” are metaphorical in nature, which leads us to question whether our values shape metaphor or metaphor shapes our values (a classic “chicken vs. egg” debate)…

Nonetheless, we might still pursue the idea that values underlie assumptions. Gross points out the common faulty assumption that science is emotionless and passionless: “the general freedom of scientific prose from emotional appeal must be understood not as neutrality but as a deliberate abstinence: the assertion of a value” (574). That value is “objectivity,” an emphasis on reason versus “unreasonable” emotion (an assumption in itself). Mishra’s discussion of a diagram of a heart compared to an actual heart (146-147) on one hand supports the assertion that diagrams are inherently symbolic for the express purpose of better understanding the reality; on the other hand, the “neat arrangements” of the diagrammed heart reveal our value for orderliness (we can control order, not chaos; ultimately, underneath the value of orderliness is the value of power).

We function by making assumptions—otherwise, we’d never be able to do anything. Just think about all the assumptions we make even waking up in the morning: 1) there’s a reality outside of me, and therefore the alarm I’m hearing is an actual stimulus and not part of my imagination; 2) the alarm is ringing, so it must be 6 am; 3) I have to wake up because it’s a school day; 4) I can’t miss classes because my grades will worsen…etc., etc. That’s a silly example (with a ton of assumptions built into each “assumption” itself), but I hope that it gets my point across: essentially, we are assumption-making beings. Assumptions, after all, aren’t always bad things; they are what make language and communication feasible.

However, assumptions also limit the scope of what we can “see”: “The very systematicity that allows us to comprehend one aspect of a concept in terms of another…will necessarily hide other aspects of the concept” (Lakoff and Johnson 10). Following the same lines, Root-Bernstein says, “Pictures, tables, graphs can be dangerous things. Revealing one point, they hide assumptions, eliminate possibilities, prevent comparisons—silently, unobviously. Thus, a pattern makes sense of data but also limits what sense it can make up” (qtd. in Mishra 154). Throughout many of these blog posts, we’ve examined the uncertain nature of language, knowledge, and truth functioning in science; some of us have even come to the conclusion that “truth” and “knowledge” are both unstable constructs invented by humans. According to certain kinds of theoretical frameworks, these conclusions appear to make sense. However, the frameworks themselves are a sort of assumption which allows us to see certain things while at the same time obscuring others.

In the particular framework we’ve been using, we’ve discounted the existence of absolute, certain truth or knowledge because we assume that we can’t be given them from an outside source; in other words, we write off the possession of absolute truth or knowledge because we disbelieve in supernatural revelation. Underlying this assumption is probably a multitude of different values: the value of independence, freedom from judgment, and control over one’s life, to list a few. And yet, tied to the assumption that a higher power doesn’t exist are other assumptions as well: if a good, all-powerful God were real, pain wouldn’t exist in the world; nothing can exist without a cause; physical bodies couldn’t reside in “heaven”—assumptions and questions Quinn brings up in her article, “Sign Here If You Exist.”

I think it’s interesting that in his syllabus, Doug decided to label the two readings for Thursday as “Choosing truths.” The nuance of the word “choosing” makes me regard it as something done on an arbitrary whim, just as I would happen to “choose” chocolate ice cream over vanilla. But perhaps this impression is exactly the intent of the phrase. Earlier in the semester, I wrote a blog post about the “Faith of Science,” and both Liam and Doug (in a follow-up email) made excellent counterpoints about the nature of evidence and how we qualify good evidence. Doug wrote, “From the very earliest ages, we find ourselves able to have faith in something because we already have faith in something else.” Basically, we already have faith in the very evidence we use as the basis for faith in something else. It all goes back to assumptions. What if someone asserts, “Reasoning is futile nonsense”? Most of us would disagree with this statement, but we can’t prove it wrong because the only way to argue against it would be to reason about it—which is a circular argument. We “know” that reasoning isn’t “futile nonsense” because we take it on faith (J. Budziszewski).

If most of our knowledge—and reason itself—is taken on faith (which could be thought of as assumptions), then how do we “choose”?


“But which side will you choose? Reason cannot decide for us one way or the other; we are separated by an infinite gulf. A game is on, at the other side of this infinite distance, where either heads or tails may turn up. Which will you wager on?” ~Blaise Pascal

Tuesday, October 14, 2014

Science Writing and Sensationalism

Now don’t get me wrong: I very much respect doctors and those in the medical field, and I think their life-saving work is noble indeed. However, I long ago gave up the illusion that doctors “know it all” because medical studies supposedly provide all the solutions we seek to our health problems. One of my siblings has very severe food and environmental allergies; her food allergies alone include dairy products, eggs, gluten, soy, peanuts, tree nuts, fish, beef, pork, mustard, yellow corn, kiwi, and asparagus (as far as I know). And out of the many, many doctors she has seen, very rarely will two agree even on the type of allergy treatment or dietary supplements she should take. And yet, although I don’t believe in the omniscience of doctors, David Freedman’s “Lies, Damned Lies, and Medical Science,” about the credibility—or lack thereof—of medical research, still threw me for a loop.

According to Freedman, meta-researcher John Ioannidis has shown statistically that “much of what biomedical researchers conclude in published studies…is misleading, exaggerated, and often flat-out wrong” (114). On one hand, especially after reading his entire article, Freedman’s statement seems to make sense, particularly when read alongside an article like Malcolm Gladwell’s “The Treatment.” Freedman points out that false publications can result from a researcher’s desire for more funding; thus, we may have reason for suspicion after reading Gladwell’s account of the “miracle-drug” elesclomol discovery, which saved Synta Pharmaceuticals in 2006 and prompted further research (159, 175). Gladwell’s quote from cell biologist and drug researcher Lan Bo Chen, who basically admits that they “are totally shooting in the dark” (173), also does little to instill confidence in the reliability of one of medicine’s supposedly top-notch lines of research: “in some cases you’d have done about as well by throwing darts at a chart of the genome” (Freedman 121).

On the other hand, we could be skeptical even of Freedman’s article itself. After all, Jeanne Fahnestock explains in her paper, “Accommodating Science: The Rhetorical Life of Scientific Facts,” that something changes in the translation from original scientific findings to the science news articles that laypeople can understand. That “something” is the shift from uncertainty (or at least heavily hedged claims) to a thinly supported certainty about specific results or hypotheses. To roughly summarize her paper, this shift in apparent confidence occurs because of the journalist’s need to sensationalize the news to capture reader attention. Although The Atlantic, the magazine Freedman writes for, is deemed a high-quality intellectual read, it nonetheless is not specifically a science journal. The very flaws Fahnestock associates with “scientific accommodation” may very well be present in Freedman’s representation of the work of John Ioannidis.


I realize that sensationalism basically refers to something used to excite or thrill the senses. However, I think sensationalism in science writing results from using Fahnestock’s “the wonder” and “the application” appeals (279)—essentially, “how is this interesting” and “why does it matter”—in conjunction with each other. We see this especially in the readings from The Best American Science and Nature Writing this week: for example, in “The Deadliest Virus,” Michael Specter writes, “One of the world’s most persistent horror fantasies, expressed everywhere from Mary Shelley’s Frankenstein to Jurassic Park, had suddenly come to pass: a dangerous form of life, manipulated and enhanced by man, had become lethal” (136). “The wonder” appeal: one of our worst nightmares has been scientifically produced. “The application” appeal: we’re all screwed. Though the problem of a possible pandemic is no doubt real, if we do a little digging, we will most likely discover that Specter’s writing sounds a little over-the-top compared to the scientific reports and statements written by the scientists themselves (not the quotes given over phone conversations). Sensationalism in science news writing loses some of the accuracy (though Freedman would dispute that term) found in the original scientific reports, but it also effectively captures the public’s attention and stimulates funding for further research. Which is the higher priority?