Could the Wording of a Ballot Question Cost DeKalb County Homeowners?
When they enter voting booths tomorrow, some Atlanta-area residents will see this question on their ballots:
This is why you study for tests. Good luck, everybody.
Extreme Hatred
As we conclude the leftover days of 2015, the nemontemi, preferably with a hopeful eye toward genuine positivity in the new year, let us pause briefly to remember the muse that is true hatred, a personal distaste that comes from a passionate place of deep emotion. Words inspired therefrom can be so effective, smoldering slow burns punctuated by efficiently biting lashes, true rants, that they are– in something apart from and rising above schadenfreude– enjoyable and even beneficial.
One of the great personal feuds of the twentieth century to produce such literary output belonged to Hunter S. Thompson, vis-à-vis Richard M. Nixon. Thompson’s hatred of Nixon generated an entire book, one of Thompson’s best. When the thirty-seventh American president died, Thompson’s obituary, entitled “He Was a Crook,” evidenced a true sense of personal loss. It begins:
MEMO FROM THE NATIONAL AFFAIRS DESK DATE: MAY 1, 1994 FROM: DR. HUNTER S. THOMPSON SUBJECT: THE DEATH OF RICHARD NIXON: NOTES ON THE PASSING OF AN AMERICAN MONSTER…. HE WAS A LIAR AND A QUITTER, AND HE SHOULD HAVE BEEN BURIED AT SEA…. BUT HE WAS, AFTER ALL, THE PRESIDENT.
“And he cried mightily with a strong voice, saying, Babylon the great is fallen, is fallen, and is become the habitation of devils, and the hold of every foul spirit and a cage of every unclean and hateful bird.”
—Revelation 18:2
Richard Nixon is gone now, and I am poorer for it.
The full program is available here.
Best wishes for safe passage into 2016.
It is Time to Believe Again
Atticus told me to delete the adjectives and I’d have the facts.
Harper Lee, To Kill a Mockingbird (1960).
To write many words here would defeat the purpose of this post, which is to highlight the expanding use in our popular discourse of extreme descriptors and their increasing application to mundane subjects. I am not a brilliant sociologist, so I am not sure exactly why everything is so incredibly incredible these days, although I suspect some of the concepts surrounding the notion of the attention economy (e.g., our increasingly-difficult-to-satisfy need for other people to pay attention to us) may help answer that question.
Whether this is happening, though, is a more readily answerable question, I think. While the NSA still isn’t releasing searchable transcripts for all of our written and verbal conversations, we do have some proxies. One is the Google Ngram viewer, which allows a variety of queries from the text of all of the books Google has scanned into its system. Another is Chronicle, which allows similar searches of the text of the New York Times. Some results from both sources:
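For readers who want to run their own comparisons, a query to the Ngram viewer can be assembled programmatically. The sketch below builds a shareable viewer URL from a list of phrases; the parameter names mirror the query string the viewer displays in the browser, and the corpus identifier and defaults are assumptions rather than documented guarantees.

```python
from urllib.parse import urlencode

def ngram_url(phrases, year_start=1900, year_end=2015,
              corpus="en-2012", smoothing=3):
    """Build a Google Ngram Viewer query URL for a list of phrases.

    The parameter names mirror the viewer's visible query string;
    treat the exact values (e.g., the corpus identifier) as assumptions.
    """
    params = {
        "content": ",".join(phrases),   # comma-separated search terms
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,
        "smoothing": smoothing,         # moving-average window in years
    }
    return "https://books.google.com/ngrams/graph?" + urlencode(params)

# Example: chart a few extreme descriptors over the postwar period.
url = ngram_url(["incredible", "amazing", "awesome"],
                year_start=1950, year_end=2008)
print(url)
```

Pasting the printed URL into a browser reproduces the chart interactively; the same parameters could be pointed at any other set of descriptors you want to test.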
These are incredible times indeed.
Please feel free to share the results of your own queries and suggest your own hypotheses or explanations in the comment section below.
Review of a Review of a Review: On Barnett and Amar on Amar and “America’s Unwritten Constitution”
Professor Randy Barnett is a right-libertarian constitutional scholar who unsuccessfully argued Gonzales v. Raich, 545 U.S. 1 (2005) on behalf of medical marijuana users and unsuccessfully argued Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. ___ (2012) on behalf of the healthcare law challengers, and who has appeared in these pages before. See here; see also here. Akhil Reed Amar is a leading progressive constitutional scholar who recently published an extensive book entitled America’s Unwritten Constitution: The Precedents and Principles We Live By. Earlier this month, Barnett published a review of Amar’s book in the Wall Street Journal. A few days later, Amar responded at length to Barnett’s review.
As illuminated in the review and the review of the review, the difference between these two hinges on what Barnett sees as Amar’s particular conception of the “living Constitution.” Barnett writes:
Now, it makes some sense to call the meaning that is implicit in the text the “unwritten Constitution.” After all, the implicit meaning is conveyed by what the text expressly says. But by including the judicially created implementing rules under this rubric, Mr. Amar suggests this doctrine is in some way the equivalent of the original, written one, and that this law of the judges can equal if not trump the law of the Founders. This is what living constitutionalism has always been about.
Mr. Amar acknowledges the problem. “Those who venture beyond the written Constitution must understand not only where to start, but also when to stop, and why,” he warns. “The unwritten Constitution should never contradict the plain meaning and central purpose . . . of an express and basic element of the written Constitution.” He adds: “The written Constitution deserves judicial fidelity, both because it is law and because, for all its flaws, it has usually been more just than the justices.” For the same reasons, he agrees that judicial precedent should not be allowed to trump or supersede the original meaning of the text. Where courts have gotten it wrong about the meaning of the text, the meaning—not the precedent—should govern. “A prior erroneous Court ruling does not properly amend the Constitution.” No matter how entrenched Jim Crow laws became after the Supreme Court upheld “separate but equal” in Plessy v. Ferguson, it was right to reverse that decision in Brown v. Board of Education.
This is all good and welcome. But Mr. Amar goes on to advocate an exception that is big enough to drive a living constitution through. “An erroneous precedent that improperly deviates from the written Constitution may in some circumstances stand,” he tells us, “if the precedent is later championed not merely by the court, but also by the people.” “When the citizenry has widely and enthusiastically embraced an erroneous precedent,” the courts may “view this precedent as sufficiently ratified by the American people so as to insulate it from judicial overruling.” When this happens, according to Mr. Amar, the erroneous precedent becomes part of America’s unwritten Constitution.
In other words, if what the judiciary is doing is popular enough, the unwritten Constitution promulgated by judges takes precedence over the written one. Despite the concession made to the written Constitution, this is really no more than a variation of living constitutionalism, one taken even further in the parts of the book where Mr. Amar contends that the unwritten Constitution also consists of numerous historical documents—like the Northwest Ordinance and the Gettysburg Address—along with institutional practices of Congress and the White House.
Amar sets out to refute this charge:
You wrongly suggest that this is my view: “If what the judiciary is doing is popular enough, the unwritten Constitution promulgated by judges takes precedence [according to Amar] over the written one.” I actually say something quite different, and far more nuanced: In the domain of unenumerated rights, popularity counts. Here is one key passage: “While a wave of new legislation would not ordinarily suffice to trump a precise and inflexible textual right, we must keep in mind that in this chapter we have been dealing with various rights that have not been specified in this way in the written Constitution. If the original judicial reason for deeming these rights to be full-fledged constitutional entitlements derived from the fact that American lawmakers generally respected these rights in practice, then such rights should lose their constitutional status if the legislative pattern changes dramatically. In this particular pocket of unwritten constitutionalism [my emphasis] what should ideally emerge is a genuine dialogue among judges, legislators, and ordinary citizens.” And here is another passage: “Thus, if the Court at time T1 gets the Constitution’s text and original understanding wrong and proclaims a right that does not in fact properly exist at time T1, and if the vast majority of Americans come to rejoice in this right, the Court at time T2 should affirm the originally erroneous precedent. The case, though wrong when decided, has become right thanks to an intervening change of fact — broad and deep popular endorsement — that the Constitution’s own text, via the Ninth and Fourteenth Amendments, endows with special significance. Note one key asymmetry: A case that construes a textual constitutional right too narrowly is different from one that construes the right too broadly. 
Even if both cases come to be widely embraced by the citizenry, only the rights-expanding case interacts with the text of the Ninth and Fourteenth Amendments so as to specially immunize it from subsequent reversal.”
Intelligent, thoughtful scholars like Amar and Barnett bring out the best in each other, or close to it, because they are willing to engage with each other and have an exchange that both sharpens the distinctions between the two and draws each to develop and defend his views. In this case, Amar has advanced an intriguing and creative constitutional notion. Barnett challenged it, and Amar’s response further defined the concept.
Perhaps it ultimately is too simplistic, but even high-minded conservative constitutional defenders like Barnett seem to forget a basic, mechanical objection to expansive constitutional approaches like Amar’s: they are undemocratic. Functionally, what the host of progressive, “living Constitution,” dynamic, “unwritten Constitution,” etc. approaches seek is a shortcut to or a circumvention of the constitutionally prescribed amendment process, the dangers of which should be self-evident. There probably is a reason that scholars in Barnett’s position do not rely on this fundamental objection– to which Amar’s vague appeal to the Ninth and Fourteenth Amendments looks like a grasping rejoinder– but it escapes me, especially because there does not seem to be an equally compelling response available to those in Amar’s position. (Note also that Amar’s qualification, that only those extra-Constitutional interpretations that expand rights are authoritative, is irrelevant in the face of a Federalist approach to liberty under the Constitution, in addition to being non-responsive to the fundamental, mechanical objection mentioned in this paragraph.)
Normative Economics
Positive analysis has to do with descriptive, objective, fact-based observations; in essence, it asks, “what has happened?” Normative analysis, on the other hand, is subjective, and value-based; it asks, “what should happen?”
Different people use the “should” of normative analysis in slightly different ways, usually without taking care to precisely contextualize what they mean when they say that something “should” happen in a particular way. While public policy analysts and scientists, for example, usually seem to be mindful (or at least appropriately transparent) with their shoulds, economists seem to have some trouble in this area and may at times engage in overreaching normative analysis.
Economists ultimately are studying human behavior. When they make predictions about “what the market should do,” they really are predicting how people will act and react with respect to various signals. At a first level, there’s a simple feedback loop here. Unlike doctors stating the way in which a virus “should” mutate, for example, the real subjects of the economists’ normative statements can hear and react to the economists, and they often do. A second level recognizes that economists often have their own implicit, unstated preferences built into their normative assessments. A hydrologist doesn’t say that water and sediment ought to interact in a particular way because she personally wants them to. By contrast, it is not uncommon for an economist to say that the market ought to ignore a particular signal because she personally believes that the market is better off ignoring signals of that type.
The previous paragraph hints at the different uses of the normative “should.” One is predictive, based on collected past observations, data, and other indicia that lead an economist to render a conclusion about what “should happen in the future (based on what I have observed happen in prior similar circumstances).” The second is a value statement, based on personal preferences that lead an economist to render a conclusion about what “should happen in the future (based on how I prefer people and systems to act and behave).”
Failing to distinguish between these is problematic because the economist’s audience (a) is unlikely to detect or make the distinction and (b) will assume the statement is of the first, scientific, predictive type, thus endowing it with a level of authority to which it may not be entitled.
There also is a certain arrogance on the part of economists when they dress their personal, value-based “should” statements up as the more detached, scientific ones. This might be most apparent in the context of valuing human life, a topic that could itself fill numerous posts. Rather than phrasing the inquiry as determining the value of a human life, which many people find objectionable, economists refer to the value of a statistical life, apparently in an attempt to quell these lay fears by encouraging people to think about the question in a more detached, lifeless manner. Asking people how much they would pay not to be in a stadium of 100,000 people, knowing that a certain, small number (perhaps one) of those people would die, for example, is thin cover for the essential question of how much, in a monetary amount, we value a particular human life. I’m not saying we shouldn’t confront such questions– things like risk-risk analysis are important– but when economists tell us we should value our own lives or the lives of others at a specified dollar value, that we should be willing to subject ourselves or others to a particular increased risk of death, or even that we should or should not make a particular investment, their lay audience is right to bristle at them. They are right to bristle because the economist has made a value judgment for his audience, and the basis or framework for that value judgment is likely to exclude elements present in his audience’s value framework. Moreover, these sorts of presentations frequently seem to seek to justify and excuse business decisions to the detriment of broader, human interests. If economists are scientists of some variety, then the second sort of normative statements, the personal, value-based ones, can quickly morph into pseudoscience. See, e.g., here.
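The arithmetic behind the stadium hypothetical is simple enough to make explicit: the economist divides what each person would pay by the reduction in death probability that payment buys. The numbers below are hypothetical, chosen only to illustrate the mechanics of the calculation, not any actual estimate.

```python
def implied_vsl(willingness_to_pay, risk_reduction):
    """Implied value of a statistical life: willingness to pay
    divided by the reduction in death probability it purchases."""
    return willingness_to_pay / risk_reduction

# Hypothetical: each of 100,000 stadium-goers would pay $50 to
# eliminate a 1-in-100,000 chance of being the one who dies.
vsl = implied_vsl(50.0, 1 / 100_000)
print(vsl)  # 5000000.0 — an implied $5 million per statistical life
```

The calculation’s tidiness is exactly the point of the post’s objection: the single dollar figure it produces hides every value-laden assumption about whose preferences, and which risks, were allowed into the denominator.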
This all comes down to a matter of language. When economists say that the market should behave in a certain way, they really are saying something about the behavior of people. When they use such a statement to predict how they anticipate a group of people will act or react, the economists are acting in a beneficial way, and their audiences properly rely on them because they are speaking within their authority as experts on how people tend to behave in similar situations. When they use such a statement to tell people what to do with their own resources or lives because the approach fits the economists’ vision of how people and markets best function, the economists may be acting in an arrogant or deceitful way, and their audiences improperly rely on them because they are extending beyond their authority as experts. Cf. the difference between ontological and deontological approaches.
Hail “Hitler,” the Most Powerful Word in the English Language
Just hours before this week’s meeting between the Indianapolis Colts and Tampa Bay Buccaneers on Monday Night Football, Hank Williams, Jr., the face of the program for twenty-two years– more than half its existence– was a guest on the Fox News program Fox & Friends, talking politics with the show’s hosts. Early in the interview, Williams referenced President Barack Obama, Vice President Joe Biden, House Speaker John Boehner, and Ohio Governor John Kasich’s golf outing this summer, calling it “one of the biggest political mistakes ever.” Why? “It turned a lot of people off. . . . That’d be like Hitler playing golf with Netanyahu.” Williams went on to clarify that Obama and Biden are “the enemy” and endorse Republican presidential candidate Herman Cain. At the end of the segment, Williams confirmed that he used “the name of one of the most hated people in all of the world to describe the President.” The discussion apparently transitioned to sports after that. The first portion of the segment:
Williams later apologized, saying his comment was “misunderstood”:
My analogy was extreme – but it was to make a point. I was simply trying to explain how stupid it seemed to me – how ludicrous that pairing was. . . . Working class people are hurting – and it doesn’t seem like anybody cares. When both sides are high-fiving it on the ninth hole when everybody else is without a job – it makes a whole lot of us angry. Something has to change. The policies have to change.
I have always been very passionate about Politics and Sports and this time it got the Best or Worst of me. The thought of the Leaders of both Parties Jukin and High Fiven on a Golf course, while so many Families are Struggling to get by simply made me Boil over and make a Dumb statement and I am very Sorry if it Offended anyone. I would like to Thank all my supporters. This was Not written by some Publicist.
(HT: @jwg31)
After Williams made the analogy on the program, a lot of his rowdy friends (but not all of them) started to back away from him. ESPN, the network that currently airs Monday Night Football, announced that it would not run his opening segment before that night’s game. It is not clear when or whether broadcast of the segments will resume.
Adolf Hitler, the German leader who rose to power in the 1930s, presided over the Holocaust, and directed Germany’s efforts in World War II, is, for many, the human embodiment of evil, and his last name is perhaps the most common, universally understood shorthand reference to evil. A comparative study in vileness probably is unhelpful, at least here, and Hitler undoubtedly ranks near the worst of humanity’s worst, although there unfortunately are a number of options. It seems clear, though, that he is the most infamous terrible person of the terrible lot.
This may be due to our temporal proximity to his life– there are living veterans of WWII and living Holocaust survivors– but I don’t think so, and not just because we have seen evil leaders since Hitler’s death who failed to garner the same cachet for evilness. Almost all historical figures eventually become caricatures because it is too difficult to compress lives, often long and complicated, for later, disembodied understanding. What’s happened with Hitler seems to be different and rarer, though, and additionally notable because of the short time in which it has occurred. More (less?) than a label for a caricature, his name has become a word unto itself, or nearly so. “Hitler” has become a synonym for “evil,” not merely synonymous with it. Upon hearing the word, one does not think of the man or retrieve a mental image of his face; rather, one only thinks of the concept of evil, as if one had heard the word “evil.” Overuse has not cheapened or diminished the awfulness of the historical Hitler, as Jon Stewart argued. As the reaction to Williams’ analogy demonstrates, the strength of the reference persists, and the ease with which people can use “Hitler” illustrates the linguistic distance between the word and the man.
This separation is notable for its rarity (though not its exclusivity, cf. Ponzi), and for its extremity. With the probable exception of one other word, uttering “Hitler” is more likely to get you into hot water than anything else, particularly when used in a descriptive way, as discussed above.
The purpose of this post is not to humanize Hitler or justify uses of “Hitler” but to observe that his name has become a word, and indeed one of the most negative words in the English language. (Even writing publicly on the topic I am filled with the feeling that I need to include a sort of disclaimer like the preceding sentence.) Nor is the purpose to defend Williams– he just happens to be a recent, visible example– although his defenders, including the hosts of The View, certainly have viable arguments. See supra. All that’s left, then, is to invite you to share your thoughts, particularly on the word-usage issue, in the comment section below and look back at the genesis of the now-troubled marriage between Bocephus and Monday Night Football:
Judicially Speaking, Cut the #$*@%!
On Tuesday, the U.S. Court of Appeals for the Second Circuit rejected the Federal Communications Commission’s strict broadcast indecency rule as unconstitutionally vague. The Los Angeles Times first reported the news here. The Second Circuit already heard the case once, Fox v. FCC, 489 F.3d 444 (2d Cir. 2007), and when its initial ruling was appealed, the Supreme Court upheld the FCC’s rule in a limited holding, FCC v. Fox, 129 S.Ct. 1800 (2009). The Court decided only that the rule was not arbitrary and capricious under the Administrative Procedure Act; it reserved judgment on the constitutional question, which the Second Circuit answered Tuesday.

Fox news: For Justice Thomas, the 21st Century Fox means closing the doors on 20th century precedent.
Communication between judges can be different from communication between other types of public officials. Court-watchers find Supreme Court oral arguments to be important, in part, because the justices can try to influence each other or tip their hands to counsel and the public through their questioning. Judges in different courts try to communicate with and influence each other too. Most obviously and frequently, this happens when a higher court sends an appealed case back to the lower court (called “remanding”) for further proceedings in light of the instructions and guidance in the higher court’s written opinion. Intra-court communication can happen in less obvious ways too. Judge Diarmuid O’Scannlain’s concurrence in Ceballos v. Garcetti, 361 F.3d 1168, 1185 (9th Cir. 2004) is one example. In that case, dealing with the First Amendment speech rights of public employees, O’Scannlain voted in favor of the court’s opinion because it followed Ninth Circuit precedent, but he disagreed with that precedent and the accordant outcome of the case before him and wrote as much in a separate concurring opinion. This separate opinion by a noted conservative judge on a court with a liberal reputation served as a message to the justices of the Supreme Court, which trended conservative. The Supreme Court took the case and reversed the Ninth Circuit’s decision. Garcetti v. Ceballos, 547 U.S. 410 (2006). O’Scannlain’s concurrence served as a blueprint for Justice Anthony Kennedy’s majority opinion, which Chief Justice John Roberts and Justices Antonin Scalia, Clarence Thomas, and Samuel Alito joined, and Kennedy even mentioned O’Scannlain by name and cited his concurring opinion. See id. at 416-17.
Something similar is going on in the FCC case, the latest chapter of which the Second Circuit wrote Tuesday. When the Supreme Court passed on the case after the Second Circuit’s first ruling, Justice Thomas concurred in the majority decision that upheld the ruling. FCC v. Fox, 129 S.Ct. at 1819 (Thomas, J., concurring). In his separate opinion, Thomas explained that the majority correctly upheld the FCC’s rule as a matter of administrative law, but he expressed a willingness to strike down the rule on First Amendment grounds. As Mike Sacks reports, Tuesday’s Second Circuit opinion echoes Thomas’ concurrence. The Second Circuit reached the end Thomas prescribed, rejecting the rule on constitutional grounds, but had to use different reasoning to reach that result because Thomas’ approach required overturning Supreme Court precedent, means only available to the Supreme Court. See id. at 1819-20 (“I write separately, however, to note the questionable viability of the two precedents that support the FCC’s assertion of constitutional authority to regulate the programming at issue in this case.”) (citing Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969) and FCC v. Pacifica Foundation, 438 U.S. 726 (1978)). Unlike the Garcetti example mentioned above, which featured a court of appeals judge sending a message to the Supreme Court justices, it was a Supreme Court justice signaling the court of appeals judges in Fox. As such, there may be one more audience for Thomas’ concurrence: his fellow justices. If the government appeals this most recent ruling, Thomas’ view will be before his colleagues, who, unlike the court of appeals judges, have the authority to affirm his result (striking down the rule) with his reasoning (overruling Red Lion and Pacifica and holding that the rule is unconstitutional under the First Amendment). 
If nothing else, a second appeal to the Supreme Court will allow Thomas to achieve a judicial communication hat trick— 1) his original concurrence; 2) sent as a message to the Second Circuit; 3) a version of which would return to the Supreme Court in the form of the appealed decision, should the government appeal the latest ruling.
Memories, Borne on the Fourth of July
Most holidays are observed for the purpose of commemorating the past– its happenings and people– rather than occasioning a purely in-the-moment or prospective practice or event. (And even this latter sort of holiday, by its regular and repeated practice, likely would develop a retrospective aspect as the observation became ritualized and a tradition of active observance built up. Sporting championships like Super Bowl Sunday or the World Series might be an example of this type of holiday.) “Commemoration,” what we do on most of our holidays, suggests an act of collective remembrance.* So stated, the practice of holiday observation has two aspects: remembering and acting. We remember the historical happenings that or people who merited designation of a special day, and we do so through various acts of remembrance.
As applied to American Independence Day, observed on the fourth day of July due to the adoption of the Declaration of Independence by the Continental Congress on that date in 1776, the holiday directs our collective remembrance to the distant and recent past in consideration of those responsible for American independence. First, we should actively recall those fifty-six men who, along with their families, acted boldly with conviction and suffered because of it:
Massachusetts
John Hancock
Samuel Adams
John Adams
Robert Treat Paine
Elbridge Gerry
New Hampshire
Josiah Bartlett
William Whipple
Matthew Thornton
Rhode Island
Stephen Hopkins
William Ellery
Connecticut
Roger Sherman
Samuel Huntington
William Williams
Oliver Wolcott
New York
William Floyd
Philip Livingston
Francis Lewis
Lewis Morris
New Jersey
Richard Stockton
John Witherspoon
Francis Hopkinson
John Hart
Abraham Clark
Pennsylvania
Robert Morris
Benjamin Rush
Benjamin Franklin
John Morton
George Clymer
James Smith
George Taylor
James Wilson
George Ross
Delaware
Caesar Rodney
George Read
Thomas M’Kean
Maryland
Samuel Chase
William Paca
Thomas Stone
Charles Carroll
Virginia
George Wythe
Richard Henry Lee
Thomas Jefferson
Benjamin Harrison
Thomas Nelson, Jr.
Francis Lightfoot Lee
Carter Braxton
North Carolina
William Hooper
Joseph Hewes
John Penn
South Carolina
Edward Rutledge
Thomas Heyward
Thomas Lynch
Arthur Middleton
Georgia
Button Gwinnett
Lyman Hall
George Walton
Next, we should remember those throughout the nation’s history, including its very recent history, who served the country in a wide variety of capacities, especially those who, in service, gave the ultimate sacrifice.
As for the action component, I think fireworks, parades, and a celebratory mood with family and friends, perhaps along with a reading of names, should do just fine. Happy 234th, America!
* I am away from my books and dictionaries at the moment, so this etymological breakdown is a matter of uninformed impression. While I’m at it, I might hazard a guess on “holiday,” which could be from “holy day,” suggesting a day of special reverence. A noted aspect of most of our holidays today is that they are governmentally recognized. The “holy day” reading could be a throwback to times when the only state-recognized holidays would have been those in accordance with the official state religion. As a final note, I am recalling the possibly British use of “holiday” simply to mean “vacation” without any special significance.
Phonetically Speaking: Dead Language Edition
Iceland is fascinating for many reasons– the geographically isolated country is geologically, ecologically, and culturally unique– and I am fortunate to have spent time exploring the land of the ice. One of Iceland’s many notable features is its language. Icelandic is a living language, meaning that Icelanders create new words for new things, rather than acquiesce in the name the new thing bears. Put another way, there is an Icelandic word for everything; the language adopts no foreign words.
English is not a living language and so provides an illustrative counterexample. Americans enjoy foreign cuisine, and when they refer to one of these non-domestic delights, they do so by using the food’s original name. They happily call a Mexican favorite “taco,” rather than some newly created American word meaning “folded flat bread sandwich.” By contrast, Icelandic roughly mirrors the latter approach.*
As the comparison with Icelandic demonstrates, English speakers adopt foreign words into their vocabularies as often as they learn about new things that originated in cultures of a different tongue. Sometimes these words come from languages that do not use the English alphabet. These words require a new spelling using English alphabet letters. For example, the word “photography” comes from Greek roots (photos (φωτός), light, and graphein (γράφειν), writing). Another example is the word “giraffe,” which etymologists trace through French to Arabic, and possibly to an African dialect.
My question: why do English speakers denote the “f” sound with a written “f” for words from some languages and a written “ph” for words from others? Because phonetics likely is the primary guide in the described written translation process, a difference in pronunciation probably explains most spelling decisions. In America, at least, “f” and “ph” have the same pronunciation, however, so if there is a reason for the difference, it must be something else. Is there an explanation for this particular spelling decision?
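The spelling convention at issue can be seen mechanically in how Greek letters are conventionally romanized: phi (φ) becomes the digraph “ph” rather than “f,” even though both denote the same sound. The table below is a toy sketch covering only the letters needed for these two examples, not a complete romanization scheme.

```python
# Toy Greek-to-Latin transliteration table. Note that phi maps to the
# digraph "ph" rather than "f" -- the convention in question.
GREEK_TO_LATIN = {
    "φ": "ph", "ω": "o", "τ": "t", "ο": "o", "γ": "g",
    "ρ": "r", "α": "a", "ί": "i", "ή": "e", "σ": "s", "ς": "s",
}

def transliterate(word):
    """Map each Greek letter to its conventional Latin rendering,
    leaving unmapped characters unchanged."""
    return "".join(GREEK_TO_LATIN.get(ch, ch) for ch in word)

print(transliterate("φωτογραφία"))  # -> "photographia"
print(transliterate("γραφή"))       # -> "graphe"
```

A transliteration from Arabic, by contrast, has no phi to preserve, which is consistent with “giraffe” arriving with a plain “f”; whether that convention fully answers the question is exactly what the post asks.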
* This may not be exactly correct, but it is my basic understanding of Icelandic linguistics. If there are any Icelanders reading this, they should feel free to correct me in the comment section.