A study making headlines today purports to conclude that Oreo cookies are “just as addictive as cocaine.” If a scientific study showed that a popular snack food had the addictive properties of a narcotic substance, popular press headlines would be appropriate. The study in question plainly does not support that conclusion, however.
The researchers conducted the study as follows:
On one side of a maze, they would give hungry rats Oreos and on the other, they would give them a control – in this case, rice cakes. . . . Then, they would give the rats the option of spending time on either side of the maze and measure how long they would spend on the side where they were typically fed Oreos.
. . .
They compared the results of the Oreo and rice cake test with results from rats that were given an injection of cocaine or morphine, known addictive substances, on one side of the maze and a shot of saline on the other. Professor Schroeder is licensed by the U.S. Drug Enforcement Administration to purchase and use controlled substances for research.
The research showed the rats conditioned with Oreos spent as much time on the “drug” side of the maze as the rats conditioned with cocaine or morphine.
From these two independent tests, it seems possible to draw only two independent conclusions: 1) rats like Oreos more than rice cakes, and 2) rats like cocaine or morphine more than saline. Plainly, because the testing did not directly compare Oreos and cocaine, it would be inappropriate to draw a conclusion that directly compares Oreos and cocaine.
From these two independent tests, we do not know whether rats prefer Oreos in equal measure, for example, to cocaine. One seemingly easy way to find out would have been to ask them directly to choose between Oreos and cocaine, and it is strange that the researchers did not conduct such a test.
The testing conducted also appears to conflate preferentiality with addictiveness. Establishing that “hungry rats” consistently prefer one type of food over another does not necessarily mean that they are addicted to the preferred food option. The addictive force in a person would seem to be stronger than and fundamentally different from a mere preferential force; indeed, the power of addiction is that it can compel a being to act against its preferences in order to serve the addiction.
All we know from this research is that hungry rats would rather eat Oreos than rice cakes, not that the Oreos were “addicting” in a non-colloquial sense. A behavioral test for Oreos’ addictive properties might be whether rats choose Oreos over other, equally or more desirable food, or whether they eat Oreos even when they are not hungry, or otherwise consume Oreos to their detriment.
Addiction surely has a neurological component as well, but again, the difference between preference (or pleasure) and addiction (or need) would seem to be important. In follow-up research, one of the student-researchers conducted some neurological testing:
They used immunohistochemistry to measure the expression of a protein called c-Fos, a marker of neuronal activation, in the nucleus accumbens, or the brain’s “pleasure center.”
“It basically tells us how many cells were turned on in a specific region of the brain in response to the drugs or Oreos,” said Schroeder.
They found that the Oreos activated significantly more neurons than cocaine or morphine.
“This correlated well with our behavioral results and lends support to the hypothesis that high-fat/ high-sugar foods are addictive,” said Schroeder.
That we derive more pleasure from consuming Oreos than from consuming cocaine or morphine is interesting, but it does not necessarily mean that consuming Oreos creates the pervasive neurological shift that constitutes addiction. (This is probably why the researchers only describe a “correlat[ion]” on this point.)
As someone without formal neuroscience training, I certainly may be incorrect in my assessment of this study and the conclusions drawn from it, but my criticism seems obvious, appropriate, and easily addressed (and remedied, if necessary). I do not mean to suggest that this Connecticut College group is the only scientific research team susceptible to this critique, as the popular science news contains plenty of examples. Maybe something that seems obvious to a lay reader like me– Why not compare Oreos and cocaine directly?– would never occur to the trained researchers because it is not a scientifically relevant inquiry. If the scientific community wants to present its work to a popular audience, however, it should shed the thin veneer of social justice concerns, which the Connecticut College group attempted to apply, and focus on addressing that audience’s natural curiosities, which are particularly likely to arise in response to sensational headlines like “Oreos as ‘addictive as cocaine.’”
The growth of media and communication technology has provided us with greater volumes of utterances from more people than ever before. It is easy to capture the unfiltered, unvarnished thoughts of a broader portion of society. With emphases on access and immediacy, people are publishing more of their regrettable opinions, jests, thoughts, and other statements that upset members of their audience.
Setting aside an evaluation of the person-by-person authenticity of the widespread responses to off-color jokes, for example, the speakers’ apologetic responses to the reaction to these increasingly frequent statements have settled into a pattern that merits brief examination.
A recent instance of this now-reflexive call and response came earlier this month. MMA fighter and media personality Chael Sonnen was on Fox Sports Live, new sports network Fox Sports 1’s version of ESPN’s SportsCenter, to discuss boxer Floyd Mayweather’s match against Canelo Alvarez. Criticizing the perceived quality of Mayweather’s recent opponents, Sonnen said:
I’ve never seen anybody in the history of America get so rich and so famous off of having complete wimps throwing punches at their faces. I know what you’re saying. You’re saying, “Well, it’s happened before, what about Rihanna?”
Video of the segment is available here.
Sonnen’s inartful, imperfect analogy between Mayweather, who happened to have served jail time for domestic abuse, and Rihanna, a pop singer and a domestic abuse victim, triggered the issuance of an apology from the network before Sonnen’s remarks could blossom into a larger controversy:
FOX Sports regrets the comments Chael Sonnen made during last night’s edition of FOX Sports Live. They were an inappropriate attempt at humor that Sonnen acknowledges shouldn’t have been made and he apologizes to anyone who may have been offended by his remarks.
This cycle– statement, reaction, apology– has become both rote and swift in American media culture, to the point where a) the reaction phase no longer is a necessary way station before the apology, and b) the apology itself has become formulaic, always addressed to “anyone who may have been offended.”
The ubiquitous and seemingly harmless addendum about “anyone who may have been offended” is, at best, counterproductive. First, while the phrase usually comes at the end of the “apology,” blunting and qualifying what otherwise might simply be, “I’m sorry,” it actually indicates a limited, defined audience for the “apology.” Rather than allowing for a statement that could be simultaneously broader and more direct, this phrase shifts the attention and onus from the person who made the original statement to those people upset by the remark and whose sensibilities ostensibly necessitated the apologetic charade. This linguistic shift then draws negative attention to these supposedly overly sensitive people, who, it then will be said, must be members of the “P.C. police,” seeking nothing more than the suppression of free speech and the enforcement of antiquated moral values.
Second, and perhaps more fundamentally, the phrase undermines the apologetic nature of the statement, because it refuses to acknowledge that even one person actually upset by the statement exists; at best, it is a conditional apology. A conditional apology is no apology at all, particularly where the apology’s recipients are not equally able to engage in dialogue with the apology’s issuer.
To remedy these deficiencies, in reverse order: 1) change “anyone” to “those” and “may have been” to “were,” so that the apology is addressed to “those who were offended,” and 2) remove the phrase altogether, so that the focus remains on the person apologizing. “I’m sorry for saying what I said” works just fine on its own.
Space– what the late Carl Sagan often referred to as “the cosmos”– probably is one of my longest-held interests. Whether due to my age or another reason, I did not watch Sagan and his “Cosmos” program growing up, although as I came to learn about him, I wish I had.
It was with some excitement, then, that I discovered Neil deGrasse Tyson, the apparent heir to Sagan’s throne as an astrophysicist with a desire to share his passion for cosmology with the general public. Tyson has appeared on programs like The Daily Show, is active on Twitter, and generally has made himself a presence in popular culture.
Whether it reflects Tyson’s own personality or is illustrative of the tone of our general, popular conversation, Tyson’s message began to take on a more aggressive stance in defense and furtherance of “science.” I imagine he, like many, believes that “science” is “under attack” from people such as climate change skeptics and those who want Intelligent Design integrated into school curricula. While there is nothing wrong with this general effort, and the following is not a defense of climate change skepticism or the corporate contrivance that is Intelligent Design, Tyson’s approach sometimes leads him to make neat statements that play well in popular media (and not inconceivably are designed for that purpose), but that merit further examination.
Perhaps the most popular example:
By engaging in a modern political debate, Tyson has misstated the fundamental nature of science. In short, “science” is only “true” to the extent it accurately describes the observed world.
Science is not a collection of unassailable “true facts,” but a set of methods for the processing and categorizing of observations. Science is something that is done, not something that is true. At its base, science is an overtly and expressly technical and communal way of telling a story. Mythology is engaged in the same storytelling endeavor. It simply uses different methods.
There is commonality in the limits of science and mythology as well, and, returning to Tyson’s remark, pictured above, what science tells us about unobserved events in the past is no more “true” than mythology addressing the same topic. Both are telling stories, even if, for many, the story science tells may be more convincing for a number of reasons. Persuasiveness and truth are not the same thing, however.
Yesterday, the Supreme Court heard arguments in Hollingsworth v. Perry, a challenge to Proposition 8, a California ballot proposition that amended the state’s constitution to restrict the recognition of marriages to those between heterosexual couples.
During oral arguments, Justice Antonin Scalia and Ted Olson, the lawyer representing the Proposition 8 challengers, had the following exchange:
JUSTICE SCALIA: I’m curious, when - when did — when did it become unconstitutional to exclude homosexual couples from marriage? 1791? 1868, when the Fourteenth Amendment was adopted? Sometimes — some time after Baker, where we said it didn’t even raise a substantial Federal question? When — when — when did the law become this?
MR. OLSON: When — may I answer this in the form of a rhetorical question? When did it become unconstitutional to prohibit interracial marriages? When did it become unconstitutional to assign children to separate schools?
JUSTICE SCALIA: It’s an easy question, I think, for that one. At — at the time that the Equal Protection Clause was adopted. That’s absolutely true. But don’t give me a question to my question. When do you think it became unconstitutional? Has it always been unconstitutional? . . .
MR. OLSON: It was constitutional when we — as a culture — determined that sexual orientation is a characteristic of individuals that they cannot control, and that that —
JUSTICE SCALIA: I see. When did that happen? When did that happen?
MR. OLSON: There’s no specific date in time. This is an evolutionary cycle.
(Emphasis added.) The full transcript from yesterday’s oral arguments is available here.
Scalia’s question is deceptively fundamental in nature, and it (surely unintentionally) raises a practical question about his own approach to civil rights. In summary, his approach is to recognize as protected only those rights clearly shown to be protected within the Constitution’s text or, in some cases, in (very) long-established tradition. For him, unless a claimed right finds clear, preexisting contemplation and protection in the Constitution, the claimed right does not exist.
One practical benefit, at least to Scalia, of this approach is that it is fairly convenient to operate on the back end– that is, the time when a judge is adjudicating a claim of a right violated. Following the alleged violation, the judge simply needs to look to the Constitution to see whether the claimed right is mentioned or clearly contemplated. If not, the claimant does not have a case. If so, the judge proceeds to determine whether there was an infringement of the established right in that particular instance.
Where Scalia’s approach is problematic, though, is on the front end. While principles of democracy and separation of powers properly keep the judiciary out of the legislature’s policy-making business, the historical fallacy of approaches like Scalia’s is that there was a time in the past when policy makers purposely set forth all the rights of the citizenry. Such an exhaustive effort has never been undertaken at the federal level, yet it would appear to be a necessary precondition for Scalia’s approach to make logical sense. If policy makers never set out an exhaustive enumeration of rights, Scalia would have no such source to which to point and state authoritatively that if the claimed right was not included, it did not exist. (Scalia’s inclusion of longstanding history as, along with the Constitution, the other source of rights, conceptually undermines his position, I think, and is a topic best left for another day.)
As I explained at length here, the Constitution’s Bill of Rights is not such a document. Neither its terms nor the intent of its drafters make any claim to exhaustiveness, and the same is true of subsequent constitutional amendments.
Returning to yesterday’s oral arguments, Scalia’s question– “When did it become unconstitutional to exclude homosexual couples from marriage?”– both deeply illustrates his view of civil rights and exposes the flaw in that view. That an asserted right does not appear on a list of rights that neither is nor claims to be an exhaustive list of rights is not a fully sufficient support for the consequential position that the asserted right does not exist. See generally here.
Another moment of interest during yesterday’s argument, if of lesser importance, came during an exchange between Justice Elena Kagan and the attorney for the Proposition 8 defenders, Charles Cooper.
Earlier this month, the Supreme Court heard oral arguments in Boyer v. Louisiana, a case that presented questions about the rights of criminal defendants, including the rights to counsel and a speedy trial. See generally here. Whether the case will be of great lasting significance remains to be seen, as the Court will not issue its decision for some weeks. It already has drawn significant attention from Court-watchers, though, for reasons entirely collateral to the merits of the case.
As most people know, Justice Clarence Thomas is not frequently a vocal participant in oral arguments. In fact, that’s probably an understatement: before the Boyer argument, it had been nearly seven years since he last spoke in open court. Back in 2010, I wrote:
This week marks the fourth anniversary of Justice Clarence Thomas’ silence during Supreme Court oral arguments. The last time he questioned an attorney during oral arguments was in Holmes v. South Carolina, 547 U.S. 319 (2006), on February 22, 2006. Thomas had a solid reputation for sparse participation prior to the Holmes argument, and the four silent years since then have only served to solidify it. Observers, close and casual, are mixed on the significance of that silence, however.
Most people I encounter in casual conversation are immediately disparaging when it comes to Thomas, and particularly so regarding his silence. Some consider him a waste of space on the bench, and others suggest it is evidence that he is unqualified to serve on the Court, a charge that sometimes carries implications about his intelligence. Still others believe he simply is close-minded.
Perhaps I limited my survey of reactions to Thomas’ recent remark because of how I had seen him regarded in the past, or perhaps I’m just less attuned to Court-watchers today than I was three years ago (and I am), but I did not detect the same degree of disparagement I did before. More than anything, people seemed to see the happening as a sort of political novelty. Some actually called it “brilliant,” but that seems ridiculous in light of what Thomas actually “said.”
When Thomas’ name made its appearance in the transcript, the discussion at hand was about the qualifications of the criminal defendant’s counsel. Justice Antonin Scalia asked whether the defendant’s lead counsel was a Yale Law School graduate. After Scalia received an answer in the affirmative, the transcript records the following:
JUSTICE THOMAS: Well, there — see, he did not provide good counsel.
Everybody but Justice Sonia Sotomayor and possibly the arguing attorney seemed to be laughing at this point, and those in attendance agreed the remark was a joke by Thomas. Among them was Tom Goldstein, who wrote:
Most of the Justices were in a lighthearted mood today. There was a lot of banter between them. At one point, the questioning turned to whether the petitioner – a capital defendant – had “competent” counsel. Justice Scalia made the rhetorical point that his lawyer was impressive because she had gone to Yale. Chuckling, Justice Thomas interjected (as I heard it, imperfectly) that fact might make the lawyer “incompetent.”
Everyone who heard what he said recognized it was a joke. All the Justices laughed to one degree or another. So did the bar and gallery.
The most interesting part is that it isn’t even clear whether Thomas intended to speak into the microphone; some had noticed him passing a note to his neighbor, Scalia, and thought the remark may have been intended to be a private one.
In any event, the context for this remark is simple and should have been immediately apparent to anyone with even a general familiarity with Thomas. He attended Yale Law School himself, so at the very least, the joke was a self-deprecating one. That’s assuming he’s softened his views toward Yale. In the past, at least, he has not been especially proud of his time in New Haven because he believed he only was accepted there due to the school’s affirmative action policy, and he somewhat famously stuck a fifteen-cent price tag on his diploma as a signal of the value he placed on his Yale degree. Some commentators noted that Thomas in fact has been warming up to Yale more recently, but any deep analysis beyond this would not appear to yield anything of great significance.
Instead of moving straight along with things, though, I think this occasion does offer a good opportunity to remember that there were good reasons for Thomas to keep his silence. Beyond the personal ones, which he has clearly set forth in his autobiography and elsewhere, it is helpful to remember that the written briefs, as Thomas has said, are “far more important” than oral arguments, which, nine times out of ten, do not change his position. Naturally, there is reason to believe that he is not the only justice who takes this view, even if he is the only one who will say it out loud.
Silent Justice – My full remarks on the fourth anniversary of Justice Thomas’ silence at oral arguments
Professor Randy Barnett is a right-libertarian constitutional scholar who unsuccessfully argued Gonzales v. Raich, 545 U.S. 1 (2005) on behalf of medical marijuana users and unsuccessfully argued Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. ___ (2012) on behalf of the healthcare law challengers, and who has appeared in these pages before. See here; see also here. Akhil Reed Amar is a leading progressive constitutional scholar who recently published an extensive book entitled America’s Unwritten Constitution: The Precedents and Principles We Live By. Earlier this month, Barnett published a review of Amar’s book in the Wall Street Journal. A few days later, Amar responded at length to Barnett’s review.
As illuminated in the review and the review of the review, the difference between these two hinges on what Barnett sees as Amar’s particular conception of the “living Constitution.” Barnett writes:
Now, it makes some sense to call the meaning that is implicit in the text the “unwritten Constitution.” After all, the implicit meaning is conveyed by what the text expressly says. But by including the judicially created implementing rules under this rubric, Mr. Amar suggests this doctrine is in some way the equivalent of the original, written one, and that this law of the judges can equal if not trump the law of the Founders. This is what living constitutionalism has always been about.
Mr. Amar acknowledges the problem. “Those who venture beyond the written Constitution must understand not only where to start, but also when to stop, and why,” he warns. “The unwritten Constitution should never contradict the plain meaning and central purpose . . . of an express and basic element of the written Constitution.” He adds: “The written Constitution deserves judicial fidelity, both because it is law and because, for all its flaws, it has usually been more just than the justices.” For the same reasons, he agrees that judicial precedent should not be allowed to trump or supersede the original meaning of the text. Where courts have gotten it wrong about the meaning of the text, the meaning—not the precedent—should govern. “A prior erroneous Court ruling does not properly amend the Constitution.” No matter how entrenched Jim Crow laws became after the Supreme Court upheld “separate but equal” in Plessy v. Ferguson, it was right to reverse that decision in Brown v. Board of Education.
This is all good and welcome. But Mr. Amar goes on to advocate an exception that is big enough to drive a living constitution through. “An erroneous precedent that improperly deviates from the written Constitution may in some circumstances stand,” he tells us, “if the precedent is later championed not merely by the court, but also by the people.” “When the citizenry has widely and enthusiastically embraced an erroneous precedent,” the courts may “view this precedent as sufficiently ratified by the American people so as to insulate it from judicial overruling.” When this happens, according to Mr. Amar, the erroneous precedent becomes part of America’s unwritten Constitution.
In other words, if what the judiciary is doing is popular enough, the unwritten Constitution promulgated by judges takes precedence over the written one. Despite the concession made to the written Constitution, this is really no more than a variation of living constitutionalism, one taken even further in the parts of the book where Mr. Amar contends that the unwritten Constitution also consists of numerous historical documents—like the Northwest Ordinance and the Gettysburg Address—along with institutional practices of Congress and the White House.
Amar sets out to refute this charge:
You wrongly suggest that this is my view: “If what the judiciary is doing is popular enough, the unwritten Constitution promulgated by judges takes precedence [according to Amar] over the written one.” I actually say something quite different, and far more nuanced: In the domain of unenumerated rights, popularity counts. Here is one key passage: “While a wave of new legislation would not ordinarily suffice to trump a precise and inflexible textual right, we must keep in mind that in this chapter we have been dealing with various rights that have not been specified in this way in the written Constitution. If the original judicial reason for deeming these rights to be full-fledged constitutional entitlements derived from the fact that American lawmakers generally respected these rights in practice, then such rights should lose their constitutional status if the legislative pattern changes dramatically. In this particular pocket of unwritten constitutionalism [my emphasis] what should ideally emerge is a genuine dialogue among judges, legislators, and ordinary citizens.” And here is another passage: “Thus, if the Court at time T1 gets the Constitution’s text and original understanding wrong and proclaims a right that does not in fact properly exist at time T1, and if the vast majority of Americans come to rejoice in this right, the Court at time T2 should affirm the originally erroneous precedent. The case, though wrong when decided, has become right thanks to an intervening change of fact — broad and deep popular endorsement — that the Constitution’s own text, via the Ninth and Fourteenth Amendments, endows with special significance. Note one key asymmetry: A case that construes a textual constitutional right too narrowly is different from one that construes the right too broadly. 
Even if both cases come to be widely embraced by the citizenry, only the rights-expanding case interacts with the text of the Ninth and Fourteenth Amendments so as to specially immunize it from subsequent reversal.”
Intelligent, thoughtful scholars like Amar and Barnett bring out the best in each other, or close to it, because they are willing to engage with each other and have an exchange that both sharpens the distinctions between the two and draws each to develop and defend his views. In this case, Amar has advanced an intriguing and creative constitutional notion. Barnett challenged it, and Amar’s response further defined the concept.
Perhaps it ultimately is too simplistic, but even high-minded conservative constitutional defenders like Barnett seem to forget a basic, mechanical objection to expansive constitutional approaches like Amar’s: they are undemocratic. Functionally, what the host of progressive, “living Constitution,” dynamic, “unwritten Constitution,” etc. approaches seek is a shortcut to or a circumvention of the constitutionally prescribed amendment process, the dangers of which should be self-evident. There probably is a reason that scholars in Barnett’s position do not rely on this fundamental objection– to which Amar’s vague appeal to the Ninth and Fourteenth Amendments looks like a grasping rejoinder– but it escapes me, especially because there does not seem to be an equally compelling response available to those in Amar’s position. (Note also that Amar’s qualification, that only those extra-Constitutional interpretations that expand rights are authoritative, is irrelevant in the face of a Federalist approach to liberty under the Constitution, in addition to being non-responsive to the fundamental, mechanical objection mentioned in this paragraph.)
It has been a long time since I have read fiction. Nonfiction has comprised effectively the entirety of my pleasure reading for years, and spending the past year developing ALDLAND has meant that sports news (i.e., more nonfiction, with the exception of hockey teams’ playoff injury reports) has dominated my online reading as well. Once I set aside Justice Breyer’s book earlier this year, I began to contemplate a return to fiction. I’m not quite ready yet, though, opting first to tackle Michael Sandel’s latest, which I’ve nearly finished. I also have contemplated reading Hampton Sides’ Hellhound on His Trail: The Stalking of Martin Luther King, Jr. and the International Hunt for His Assassin next just as an excuse to remain in nonfiction’s friendly waters.
My inexplicable resistance to fiction nevertheless is slipping. Although I had no intention of reading or buying Jay Caspian Kang’s debut novel, The Dead Do Not Improve, I had been hearing about its release for a year, so it was easy enough to decide to take a peek when the Grantland blogger offered the first thirty-five pages of his book for free perusal online. My reactions to the experience of reading the opening of Kang’s novel were not complex or groundbreaking. My first thought was that it felt not so bad to be reading fiction again. My second was that the text seemed awfully autobiographical, and I couldn’t decide whether that irritated me. My third thought was confirmatory of my preconceived notion that there was no need for me to buy or read (now, the entirety of) this book. My fourth thought, upon completing the excerpt, was that maybe I would get the book, as a flippant way to ease back into fiction. I suppose that’s marketing at work, but my idea was that a flippant pick would spare me from holding out on fiction– not because I didn’t want to be reading it, but because I felt I had to reengage in a particular way, and the choice of which fictional work would be my first was too fraught.
I was not expecting to see any more of Kang’s text anywhere outside of the book’s covers when I came upon his recent Gawker post. Apparently a lot of other people thought The Dead Do Not Improve seemed pretty autobiographical too. For some reason, this (again, apparent) sentiment put Kang on the defensive, so he took to Gawker to try to tamp down the issue by presenting yet another, albeit much shorter, segment (italicized below by me for clarity) of the novel, this time with new annotations included:
To try to shove that top-down question of “how much of your life is in your character” and all of its political implications a bit further out to pasture, I’ve annotated an excerpt from The Dead Do Not Improve to tell you exactly what parts came from my life and what parts did not. My hope is that you will find these details to be about as unimportant as they ultimately are.
The true parts I have tagged IRL. The fictional parts are tagged FICTION.
Those mornings in the parking lot with my three friends, the Ronizm mornings: Seth Bloomberg (IRL: name altered) picked me up at seven-twenty on the dot.
In precal, I sat between Heba Salaama and Paul Offen. Years later, Heba Salaama, better known to the greater student public as Heavy Salami, won a hundred thousand dollars on some network TV weight loss show (IRL), but back before her dreams came true, in those pre-9/11 days when the last name Salaama was simply a curiosity, Heba was the terrifying, ethnically ambiguous girl who sat next to me in math, who kept telling me that I smelled like weed (IRL), who threatened to tell Ms. Butler if I didn’t let her copy last night’s homework (FICTION).
The entire exercise is available here. Upon reading all of it, my immediate reaction is that whether “these details” are “unimportant”– to the reader’s experience of Kang’s novel, presumably– is beside the point.
I chose the two excerpts of the excerpt that I did because they demonstrate a) Kang’s ability to use a particular, basic literary technique, and b) his decision not to employ that technique in a particular instance. Explicitly, Kang’s annotation reveals that he knows how to write about a person he’s met while disguising that person’s identity by using a different name, an elementary and widely accepted technique. There is nothing objectionable about writing about real people in the fiction context; indeed, it seems like it would be difficult to write convincing fiction about human beings without having met and being influenced by one or two. Still, as a matter of common courtesy and because there’s little to be gained by using real names, authors usually use a different name for their character. Like any author, Kang is familiar with this technique, and he demonstrates it with the character he calls “Seth Bloomberg.”
In the second excerpt, however, Kang declines to use this technique and goes out of his way to let us know that he’s chosen not to. “Heba Salaama,” the protagonist’s classmate, is a real person, and her name really is Heba Salaama. Kang not only tells us this expressly but goes further, linking to a video of her so that readers know Salaama is a real person. Within one sentence, Kang makes pointed reference to Salaama’s weight and ethnic background and mixes in a fictional detail about academic cheating (recall that the actual book does not contain the annotations discussed here) before moving on to an extended discussion of his actual high school’s “lone autistic kid,” whose real name Kang also uses.
The issue here is not that Kang’s protagonist, named for another of Kang’s actual classmates, dwells on the physical characteristics, ethnic background, or mental capacity of other characters. Writers should be honest in this way, and protagonists, however autobiographical, do not have to be morally good people. Instead, the issue is why Kang felt the need to use the real names of real people like Salaama. Even if it isn’t a requirement for their protagonists, writers ought to be morally good people, and even though morality isn’t necessarily about balancing, two initial questions come to mind: 1) What does Kang gain by using the real names of people like Salaama?, and 2) What do people like Salaama lose when Kang incorporates them into his story and publicly highlights likely unflattering episodes of their lives? For his part, Kang appears oblivious to these questions, which borders on the literally unbelievable.
Positive analysis has to do with descriptive, objective, fact-based observations; in essence, it asks, “what has happened?” Normative analysis, on the other hand, is subjective and value-based; it asks, “what should happen?”
Different people use the “should” of normative analysis in slightly different ways, usually without taking care to precisely contextualize what they mean when they say that something “should” happen in a particular way. While public policy analysts and scientists, for example, usually seem to be mindful (or at least appropriately transparent) with their shoulds, economists seem to have some trouble in this area and may at times engage in overreaching normative analysis.
Economists ultimately are studying human behavior. When they make predictions about “what the market should do,” they really are predicting how people will act and react with respect to various signals. At a first level, there’s a simple feedback loop here. Unlike doctors stating the way in which a virus “should” mutate, for example, the real subjects of the economists’ normative statement can hear and react to the economists, and they often do. A second level recognizes that economists often have their own (implicit, unstated) preferences built into their normative assessments. A hydrologist doesn’t say that water and sediment ought to interact in a particular way because she personally wants them to. By contrast, it does not seem uncommon for an economist to say that the market ought to ignore a particular signal because she personally believes that the market is better off ignoring signals of that type.
The previous paragraph hints at the different uses of the normative “should.” One is predictive, based on collected past observations, data, and other indicia that lead an economist to render a conclusion about what “should happen in the future (based on what I have observed happen in prior similar circumstances).” The second is a value statement, based on personal preferences that lead an economist to render a conclusion about what “should happen in the future (based on how I prefer people and systems to act and behave).”
Failing to distinguish between these is problematic because the economist’s audience is a) unlikely to detect or make the distinction and b) likely to assume the statement is of the first, scientific, predictive type, thus endowing it with a level of authority to which it may not be entitled.
There also is a certain arrogance on the part of economists when they dress their personal value-based “should” statements up as the more detached, scientific ones. This might be most apparent in the context of valuing human life, a topic that could itself fill numerous posts. Rather than phrasing the inquiry as determining the value of a human life, which many people find objectionable, economists refer to the value of a statistical life, apparently in an attempt to quell these lay fears by encouraging people to think about the question in a more detached, lifeless manner. Asking people how much they would pay not to be in a stadium of 100,000 people, knowing that a certain, small number (perhaps one) of those people would die, for example, is thin cover for the essential question of how much, in a monetary amount, we value a particular human life. I’m not saying we shouldn’t confront such questions – things like risk-risk analysis are important – but when economists tell us we should value our own lives or the lives of others at a specified dollar value, that we should be willing to subject ourselves or others to a particular increased risk of death, or even that we should or should not make a particular investment, their lay audience is right to bristle at them. They are right to bristle because the economist has made a value judgment for his audience, and the basis or framework for that value judgment is likely to exclude elements present in his audience’s value framework. Moreover, these sorts of presentations frequently seem to seek to justify and excuse business decisions to the detriment of broader, human interests. If economists are scientists of some variety, then the second sort of normative statements, the personal value-based ones, can quickly morph into pseudoscience. See, e.g., here.
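The arithmetic behind the stadium thought experiment is simple, and laying it bare shows how much of the “detachment” is presentational. As an illustration only – the dollar figures below are invented, not drawn from any actual study – if each of 100,000 attendees would pay some amount to avoid a risk expected to kill one of them, the implied value of a statistical life is just the group’s total payment divided by the expected deaths avoided:

```python
# Hypothetical sketch of the "value of a statistical life" (VSL) arithmetic
# implied by the stadium thought experiment. All figures are invented for
# demonstration purposes.

def implied_vsl(willingness_to_pay, population, expected_deaths):
    """Total amount the group would pay to avoid the risk, divided by the
    expected number of deaths that payment avoids."""
    return willingness_to_pay * population / expected_deaths

# 100,000 people each pay $50 to avoid a risk expected to kill one of them.
vsl = implied_vsl(willingness_to_pay=50, population=100_000, expected_deaths=1)
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $5,000,000
```

The framing never asks anyone to price a particular life directly; it derives that price from small, aggregated payments, which is precisely why the question feels more palatable than the one it actually answers.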
This all comes down to a matter of language. When economists say that the market should behave in a certain way, they really are saying something about the behavior of people. When they use such a statement to predict how they anticipate a group of people will act or react, the economists are acting in a beneficial way, and their audiences properly rely on them because they are speaking within their authority as experts on how people tend to behave in similar situations. When they use such a statement to tell people what to do with their own resources or lives because the approach fits the economists’ vision of how people and markets best function, the economists may be acting in an arrogant or deceitful way, and their audiences improperly rely on them because they are extending beyond their authority as experts. Cf. the difference between teleological and deontological approaches.
Briefly: I have tried to come up with ideas, conduct research, and write legal material fit for publication in the past, see, e.g., here and here, but I was not successful until I collaborated with a senior colleague beginning last year, and I found myself in print last month, see here. The Michigan Real Property Review published our article on the effects of certain state constitutional amendments and legislation passed in the wake of the United States Supreme Court’s decision in Kelo v. New London, 545 U.S. 469 (2005). In short, our conclusion is that Michigan law currently treats private landowners very favorably when it comes to compensation for the taking of real property.
The full article is available here.
Reader JJM sent along a New York Times editorial entitled “Embarrassed by Bad Laws,” which argues that Florida’s now-infamous “‘Stand Your Ground’ self-defense law” is a) a “bad law”; b) the result of a nationwide, state-level lobbying effort by the American Legislative Exchange Council (“ALEC”) and the National Rifle Association; and c) the real reason why many of ALEC’s corporate supporters are distancing themselves from the policy group.
“Bad facts make bad law” is a common utterance of dissenting judges who believe that the majority has reached the wrong legal conclusion because the case before the court involves unusual or extreme facts atypical of the situations to which the law or legal conclusion is most likely to apply.
Bad facts may also make “bad law”; in other words, bad facts like the tragic circumstances surrounding the death of Trayvon Martin may lead the court of public opinion’s multitude of judges to declare a law bad. In the Martin case, Florida’s “stand your ground” law is “bad” because it created an incentive for George Zimmerman to kill Martin (assuming Zimmerman knew of the law, which isn’t an unreasonable assumption given Zimmerman’s status as a neighborhood watch person, whatever that means) or it created a legal situation in which Zimmerman is unlikely to be punished for his actions. (The judges of the court of public opinion rarely are as precise as we might like them to be, but these seem to be the two main reasons why someone might decide the Florida law is bad.)
It’s easy to imagine a factual situation in which an aggressive self-defense law would not be “bad,” and might even be considered praiseworthy. For example, if such a law saved from prosecution an older woman who defended herself from a home invader by striking him with a hammer she happened to grab, and the invader later died as a result of the strike, the law likely would be considered “good,” or at least “just” or “fair,” if it received any attention at all.
What these examples highlight is that our popular opinion of a law is likely to be a mere reflection of our opinion about the actors in the underlying fact situation to which the law is applied, and for the most part in Martin’s case, the good and bad roles in the underlying fact situation set up pretty starkly and uncontroversially.
This isn’t to say that there aren’t actually bad laws, but it is interesting that no one seems to have examined the text of the law in question. It is available here, and the applicable provision appears to be subsection (3), which reads:
A person who is not engaged in an unlawful activity and who is attacked in any other place where he or she has a right to be has no duty to retreat and has the right to stand his or her ground and meet force with force, including deadly force if he or she reasonably believes it is necessary to do so to prevent death or great bodily harm to himself or herself or another or to prevent the commission of a forcible felony.
Exploring all of the different and competing policy factors internal and external to this penal statute to decide whether it is a good law or a bad one is beyond the scope of this post. For now, I think it’s enough to note that the provision doesn’t appear to be a bad one on its face and recognize that our popular opinion of the law as a “bad law” likely has more to do with a narrow consideration of its application to one set of facts (and indeed, one telling of those facts). Had the stronger, armed Zimmerman attacked the weaker, unarmed Martin unprovoked, as many assume, but then suffered a fatal injury at the hands of Martin in a scuffle following the initial attack, it seems unlikely that this law would have come under such sudden public scrutiny. Moreover, if the popular telling of the actual encounter between Zimmerman and Martin is accurate, this provision probably doesn’t apply. According to that narrative, Martin never attacked Zimmerman, and without a predicate attack, Zimmerman’s right to stand his ground never arises, and his reasonable belief as to the necessity of his use of defensive force is irrelevant.