The decline of traditional journalistic media is well documented. In recent years, newspapers like the Ann Arbor News and Denver’s Rocky Mountain News have shuttered their doors. In order to survive, some papers, such as Detroit’s Free Press and News, have merged to varying degrees. Other regional papers, like the Tennessean, are shells of journalistic operations, mere AP repeaters after laying off batches of reporters. Some national papers– including the Wall Street Journal and New York Times– have gone to paid online platforms.
We have been told that the internet would replace traditional print media, but experience suggests we have yet to realize that future. Web-based writers largely are concerned with reactions and opinions, and actual reportage appears to have decreased across the board, with foreign and local beats particularly suffering.
One outlet, Grand Rapids’ The Rapidian, is carrying the news media banner into the twenty-first century, and it is doing so through a hyperlocal, citizen-driven approach. The idea is to have a community’s members conduct actual reporting and create original content tied to the issues affecting that community and the happenings within it. The online-only newspaper seeks to capture the diversity of happenings and perspectives across the community’s varying neighborhoods– indeed, The Rapidian organizes content both by subject area and by place-rooted bureaus– in a rigorous manner by providing training from experienced journalists and writers. This journalistic training, in turn, deepens residents’ connections to their community by giving them the tools to become more engaged community members.
It is easy to see why Grand Rapidians would want to support The Rapidian, a deeply engaged news source that is growing and developing along with a revitalizing city, particularly when the city’s familiar news source, The Grand Rapids Press, is doing the opposite. (To its credit, MLive, the media group that now operates what remains of The Press and a number of other, formerly independent Michigan newspapers, has been a public, financial supporter of The Rapidian.)
It also should be easy to see why those who do not live in Grand Rapids nevertheless should want to support The Rapidian. Few communities currently have a dynamic, locally focused outlet like The Rapidian, but many, I suspect, would welcome and benefit from one of their own. The long-term solution, of course, is for members of these communities to create their own hyperlocal news sources. (Anyone involved* with The Rapidian certainly would emphasize the “long-term” nature of that solution, I suspect.) The short-term solution is for members of these communities to support The Rapidian. The Rapidian is a national leader in this concept and a possible model for hyperlocal news media in other communities, and as such, its continued success makes it more likely that communities outside Grand Rapids will be able to follow its lead and develop their own versions. Thus, no matter where you live, if you have an interest in participatory, locally focused news and media that work in the twenty-first century, you have an interest in supporting The Rapidian. You can do so here today.
* Disclosure: I am a former member of the board of directors of the Grand Rapids Community Media Center, the umbrella media organization that serves the city through numerous channels, and of which The Rapidian is a part.
The Constitution does not mandate America’s de facto two-party system; it does not mention political parties at all. Yet while the identities of the parties– in both name and platform– have changed over time, the United States has been a two-party country since before day one.
There is much to be commended about the two-party system as it exists in the U.S. today. The conglomerate, dynamic nature of the parties means that they evolve by competing with each other to absorb new movements and the votes that come with them. (Cf. Democrats and Greens with Republicans and Tea Partiers. The question of what happens once that absorption– the assimilation– takes place is a subject for another post.) The arrangement really is not so dissimilar from multiparty, parliamentary-style democracies; the difference is that those systems wait until after an election to form a coalition government, while the American system forms would-be governing coalitions before the election.
The third parties that persist in a two-party system like America’s without absorption generally are of two kinds: 1) the very unpopular and 2) the fundamentally opposed to both major parties. An unpopular faction will not be absorbed because it is unpopular either in the numerical sense (it carries too few votes to be worth courting) or in the ideological sense (embracing it would repel more voters than it attracts). An unpopular faction also is unlikely to coalesce into a functional political party of its own for a variety of practical reasons.
The second variety of third parties mentioned is all that really remains for third parties under today’s two-party system. Because the major parties cover virtually the entire spectrum of substantive interests, the only thing left for a third party is to oppose both parties at some fundamental level, and that’s what America’s two most viable third parties– the Green and Libertarian Parties– are doing. Dissecting why the Green Party persists is a subject for another post. This post, unsurprisingly, will focus on the Libertarian Party.
I’ve already written at length here about libertarianism and some of its challenges. See, e.g., here, here, and here. With next week’s presidential election looming, the immediate question is whether it makes sense to vote for a third-party candidate at all. Most Americans profess a desire that their vote “count.” People most concerned that their vote doesn’t count tend to be those in states with large populations and in states that heavily favor the major party other than the one they support. This year, with the broadening popularity of Libertarian Party presidential candidate Gary Johnson, some are wondering whether a vote for him would be a wasted vote. The unstated basis for that view is the logical assumption that Johnson will not win the election.
That is a self-fulfilling prophecy, of course. There’s no way Johnson can win if nobody votes for him, whatever their reasoning; conversely, if enough people ignored that assumption and voted for him, he would win. Still, that is unlikely to happen, because there don’t appear to be enough people who would even consider supporting Johnson, regardless of their expectation of his success.
I think the real underlying sentiment among voters is that they want to pick a winner. In other words, they want their votes to “count” in the sense that they want their votes to achieve something. If there’s no reasonably likely way the candidate will win or even come close, people will see a vote for that candidate as a vote that was “wasted.” The vote had no hope of achieving anything.
Johnson has embraced the “wasted vote” concept:
“Wasting your vote is voting for somebody that you don’t believe in,” an impassioned Johnson said. “That’s wasting your vote. I’m asking everybody here, I’m asking everybody watching this nationwide to waste your vote on me.”
His statement includes an important response to the “wasted vote” critique, one that seeks to redefine the concept: “Wasting your vote is voting for somebody you don’t believe in.” He realized he needed to add a practical goal, though, to help people see their votes as votes that would “count” in that second sense of achieving something, even if the achievement wasn’t an outright victory for their candidate. He has done that by setting a goal of securing five percent of the popular vote nationwide, a result that would entitle the Libertarian Party to public campaign funding (something the major parties now have rejected, with President Barack Obama setting a record by raising over $1 billion) and a spot on the ballot in every state in the 2016 election. Johnson believes his potential supporters will see that goal as achievable, and thus will see a vote for him as one that will “count.”
Everybody likes to pick a winner, and everyone wants to be on the right side of history. Letting the perfect become the enemy of the good isn’t always practical. But maybe it’s worth reexamining our approach to voting if we find ourselves voting for a candidate other than the one we want to win the election.
Johnson may not win this election. He may not even make it to five percent of the national popular vote. (After all, Ralph Nader’s high-profile 2000 campaign for the Green Party secured only 2.74% of the popular vote. Right now, Johnson is polling at six percent nationwide.) What he already has done, though, is initiate a compelling discussion about reconceptualizing how Americans vote. All he needs now is five out of every one hundred voters to agree that that is a conversation worth continuing.
“Come waste your [vote] with me”
The Constitution’s Commerce Clause, Article I, § 8, has been in the news this week, but it’s the Clause’s negative implication– known as the Dormant Commerce Clause– that provides the conceptual starting point for this post and its ultimate conclusion about the full meaning of First Amendment speech rights. If the Commerce Clause is an express grant of authority to Congress “to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes,” the Dormant Commerce Clause is an implied restriction on state authority over a regulatory area– interstate commerce– that belongs to Congress. State regulation that affects interstate commerce must bear a rational relationship to a legitimate state concern, and the benefit the regulation affords to the state’s interest must outweigh the burden it places on interstate commerce. This (implied) proscription applies even in the absence of affirmative federal regulation of the precise subject matter the state seeks to regulate. It is enough that Congress could regulate that aspect of interstate commerce; it need not actually have done so.
A related concept is that of implied preemption. In general, implied preemption is a decision to resolve conflicts between federal and state law by choosing the federal law in almost every instance. One application of implied preemption comes where Congress so occupies a regulatory field– immigration might be an example, Arizona and Alabama notwithstanding– that any state regulation in that area is preempted, even if Congress hasn’t passed a statute addressing the particular issue.
There is a concept at work both with the Dormant Commerce Clause and implied field preemption that has to do with the virtue and authority of silence. Both doctrines place silence on authoritative par with sound, inaction equal to action. They recognize and protect the full scope of the grant of authority, even if the authorized body never exercises the authority to the fullest extent.
Calvin College is one of the nation’s leading Christian Reformed colleges, and while it has a reputation for social conservatism, it also has a reputation for hosting progressive, secular music concerts. About a year and a half ago, these two interests clashed when the school cancelled a scheduled performance by indie act The New Pornographers on the sole basis of the band’s name, even while acknowledging that the band does not “endorse pornography.” There’s no legal question that the private college may host or not host whatever entertainment it chooses, but the story nevertheless sparked a community discussion that proceeded along free-expression lines.
We usually talk about First Amendment speech in terms of things actually said, and the legal and political questions usually have to do with whether the First Amendment protects words actually spoken or actions actually taken. But maybe the First Amendment is about more than fostering a broad cacophony of speaking and a mess of expressive acting. Maybe there’s a negative implication of the First Amendment and its protected rights, a Dormant First Amendment.
The Dormant First Amendment might recognize that, just as someone has a right to say something, he also has a right to (or at least a strong interest in) not hearing something. For example, we might see Calvin College not as restricting someone else’s speech in cancelling the concert but as preserving its own interest in not hearing something it found distasteful. The former formulation carries a negative connotation, but the latter should carry a positive one. Rather than the First Amendment (conceptually, not mechanically– although I do appreciate that that statement may impair the impending metaphor) being a one-way ratchet that directs only more and more speech-volume, why not a multifaceted approach that values discernment, distillation, refinement, taste?
It may be true that the First Amendment was meant to create a marketplace of ideas, as courts have said. Marketplaces are loud, noisy places, and the merchant who hawks her wares the loudest may be more likely to survive there, but not everyone survives in a market because customers don’t do everything sellers’ advertisements tell them to do. Perhaps people would make better decisions if they patiently heard every pitch from every market participant, but at the very least, the First Amendment is about a right to speak, not a right to be heard. Moreover, if the First Amendment is about everybody being able to say whatever they want, is it really so offensive to that principle to say that people ought to be able to use their discretion to decide when to step to the side of the spray of the verbal fire hose?
As for how the idea of the Dormant First Amendment would work practically, I’m far from sure, and if there are any readers who aren’t practically dormant at this point, comments, as always, are welcome below. The real thrust of this post is to suggest the possibility that, just as the Dormant Commerce Clause and implied preemption doctrines place Congress’ inaction on authoritative par with its action, the First Amendment might have a negative implication that places an individual’s desire to avoid speech on protective par with his or her desire to engage in speech.
I have written before about compassion, see, e.g., here and here. While a simple example sometimes can illuminate a concept with little additional commentary, a second-level example almost always can. Such an example was buried in USA Today’s coverage this week of a story that began when someone went to a Kmart store in Grand Rapids to anonymously pay off shoppers’ layaway accounts and others around the country began to follow suit. The following part of the story appeared in the final paragraph of the print edition’s version of the article and is tucked into the middle of the online version:
Lori Stearnes thought it was a joke when a Kmart in Omaha called to tell her that someone had paid the $58 balance on her account, which included toys for her youngest grandchildren. “It was a shock, of course, and then it just made me feel warm and fuzzy,” she says. Stearnes went back to Kmart and used the money she had set aside for the gifts to pay off two other layaway accounts.
Judy Keen, “Mystery donors paying off layaway accounts for needy,” USA Today (Dec. 21, 2011). Compassion in action requires little elaboration; repetition is welcome, however, and it doesn’t take an angel to repeat it.
In the summer of 2008, Jon Bellona and a crew of runners set out on a transcontinental run– the Run for the Fallen– from Fort Irwin, CA to Arlington National Cemetery in Virginia, one mile for each member of the American military killed in Operation Iraqi Freedom. The group had a powerfully uncomplicated mission statement:
Run for the Fallen is a collective of runners whose mission is clear and simple: To run one mile for every American service member killed in Operation Iraqi Freedom.
On June 14, 2008, we run across America to raise awareness about the lives of those who fought, to activate their memories and keep their spirits alive, to support organizations that help wounded veterans and the families of those killed (Wounded Warrior Project, Yellow Ribbon Fund, HUGSS (Helping Unite Gold Star Survivors), and the 1st Lt. Michael J. Cleary Memorial Fund), and to aid the healing process for those Americans whose lives have been affected by the war.
We refuse any political affiliation or agenda, but simply honor those who have fought, and those who have fallen under the American flag.
Each of the more than 4,000 miles was dedicated to an individual soldier, and an American flag and a card personalized for that soldier were affixed along the route as mile markers.
While Bellona had a core of runners who joined him for the entirety of the journey, many other runners joined the group for various stretches. In addition, a film crew followed the group from pre-run preparation through the run’s conclusion, documenting it in its entirety.
The film, entitled To Them That’s Gone: A Film for the Fallen, is an integral component of the run because it advances the mission of the run– to honor the individual, specific, identified lives of those American servicemen and women who died in the Iraq war– by preserving all of those stories and broadcasting them to a wider audience.
I was working in Kentucky in the summer of 2008, and after my job ended in early August, I joined the run in northeastern Tennessee for a scant twenty-four hours, running just two miles on the route between New Tazewell and Van Hill. Though but a blip in the life of the run and in my own life’s timescale, that time registers among the most emotionally powerful moments of my life, and I am extremely grateful for it.
As Bellona consistently emphasized, though, the run wasn’t about the runners. It was about those being honored and the telling of their stories. The run, as a happening, told those stories, and the film serves the same purpose in a larger and more permanent fashion. The film crew is in a final fundraising push to finish its work, and it has given itself thirty days (twenty-seven now remain) to raise the money it needs. If you’re interested in backing the effort, the Kickstarter.com page is here. While on that page, view the trailer:
More video clips are available here, and still photos are available here. Finally and again, more information about this fundraising effort, including how to donate, is available at http://www.kickstarter.com/projects/1670101310/to-them-thats-gone-a-film-for-the-fallen.
Doubtful power does not exist.
In re Procedure and Format for Filing Tariffs Under the Michigan Telecommunications Act, 210 Mich App 533, 539 (1995).
An article on the front page of yesterday’s USA Today described the new budget plan Republicans introduced earlier this week that would “dramatically revamp the twin health care pillars of the Great Society, taking a huge political risk that could reverberate all the way to November 2012 and beyond.” Richard Wolf and Kelly Kennedy, “GOP seeking dramatic changes in Medicare and Medicaid,” USA Today (April 6, 2011). Behind the economic leadership of Rep. Paul Ryan, House Budget Committee Chairman, the Republicans are proposing fundamental changes to the federal Medicare and Medicaid programs. “‘Our goal here is to leave our children and our grandchildren with a debt-free nation,’ said Ryan, 41, of Wisconsin. ‘At stake is America.’” Id.
For those who have tracked the recent rise of fiscal conservatism among Republicans at the national level, news that they are targeting large government programs for reductions is unsurprising. What might be surprising, however, are some of the effects of the GOP plan to privatize Medicare and shift Medicaid to state-level administrators:
Medicare, the government-run health insurance program covering about 47 million seniors and people with disabilities, would be run by private insurers and would cost beneficiaries more, or offer them less. Medicaid, the federal-state program covering more than 50 million low-income Americans, would be turned over to the states and cut by $750 billion over 10 years, forcing lesser benefits or higher co-payments. Social Security eventually would be cut, too.
Id. If these projected outcomes are accurate, they raise questions about the Republicans’ application of conservative fiscal theory.
During George W. Bush’s presidency, Republicans remembered well enough that they favored low taxes, but they appeared to forget why they took that position. In doing so, they created a deficit by continuing to spend at high levels rather than reduce spending to match the reduction in tax receipts.
Now, Ryan and his colleagues appear to have reconnected low taxes with low spending but forgotten why they favor low spending. The idea behind a push for lower taxes and spending, of course, is that it forces government to shrink and permits the private sector to expand. This is desirable because, from the proponents’ perspective, the private sector can provide goods and services more efficiently (more cheaply and more effectively) than the public sector (i.e., government) can. The economic calculus of privatization can be complicated, but the results presented in the above article– higher costs and reduced services– do not sound like efficiency gains.
Under conservative economic theory, small government is desirable, not as an end in itself, but because it reduces regulatory roadblocks that inhibit the private sector. The stated results of the Republicans’ plan for Medicare and Medicaid imply that they have lost track of the practical goal the application of their theories is supposed to achieve. If the predicted results are accurate, it seems that Republicans either have applied their theories unsuccessfully or have reframed small government as an end in itself. Remedying the former may simply require more careful work on the part of policy makers and their economists and other advisors. The latter, however, requires a new theoretical justification.
One view, perhaps of an anarchist variety, is that the government is but another (albeit large and special) player in a market that does not distinguish between a public sector and a private one. Under that view, it is unremarkable that a traditionally “private” entity or group of entities can provide some goods and services most efficiently, that the entity known as “government” can provide others most efficiently, and that some combination of the two can provide still others most efficiently. See, e.g., public-private partnerships. Looking at things this way, the possibility that government, with its special access to virtually all individuals in the market, could provide the most cost-effective insurance program through its economies of scale may not be so surprising. That may not be the actual case here, but the stated results of the Republican plan to privatize services and shift them to the states– increased costs and decreased services– suggest it is a possibility.
If, for the new group of House Republicans, small government is a goal in itself, detached from private-sector efficiency gains, their “pro-business” stance appears much less principled.
Last week, Dr. Aneel Karnani, a University of Michigan business school professor, presented “The Case Against Corporate Responsibility” in the pages of the Wall Street Journal. The core of Karnani’s argument, that “the idea that companies have a responsibility to act in the public interest and will profit from doing so is fundamentally flawed,” comes in two early paragraphs:
Very simply, in cases where private profits and public interests are aligned, the idea of corporate social responsibility is irrelevant: Companies that simply do everything they can to boost profits will end up increasing social welfare. In circumstances in which profits and social welfare are in direct opposition, an appeal to corporate social responsibility will almost always be ineffective, because executives are unlikely to act voluntarily in the public interest and against shareholder interests.
Irrelevant or ineffective, take your pick. But it’s worse than that. The danger is that a focus on social responsibility will delay or discourage more-effective measures to enhance social welfare in those cases where profits and the public good are at odds. As society looks to companies to address these problems, the real solutions may be ignored.
Karnani’s thesis strikes me as unremarkable. More interesting are two related topics, one of which he does not explore and the other of which he underexplores. The first is the relationship between a corporation and an individual in the responsibility context, and the second is Karnani’s concern that “the real solutions” may be lost amid a focus on corporate responsibility.
A corporation is an organization of humans engaged in a business enterprise. People incorporate as a way to order their affairs and be treated under the law in a way that is tied to the nature of their activity. While the law commands that corporate governors act to maximize profits, the incentives for individual actors are not so different. For people, as for corporations, there often is a cost to acting in a socially responsible manner, and individuals who commit themselves to environmentally friendly behavior, for example, may find themselves in a difficult financial position. Substituting “people” for “corporations” in Karnani’s article would have challenged us to look more critically at our own life choices.
Additionally, those who pursue environmental, health, and other responsibility goals should consider the fact that the behavior they aim to encourage usually is more expensive for people than their current behavior. Healthier food and hybrid cars are more expensive than the less responsible alternatives. Leaders need to decide whether to try to align incentives– through government regulation or mobilization of collective action, for example– to achieve health and environmental goals or, more radically, to break out of a cost-benefit analysis altogether.
The second point, that a focus on corporate responsibility will distract from “the real solutions” to social problems, is an important contribution that could come out of a discussion like the one Karnani initiated, but Karnani doesn’t do much with it. Expecting corporations to act responsibly with respect to social problems “will delay or discourage more-effective measures to enhance social welfare in those cases where profits and the public good are at odds,” he writes. This is the obvious conclusion that flows from the premise that corporations only act to maximize profits. More interesting is whether, when profits and the public good are not at odds, the corporate-provided solution is suboptimal or incomplete, for example. This might happen because a corporate solution arises in response to consumer demand, but because the mass of consumers are not experts and may manifest their demand imprecisely, the “solution” will not respond to the actual underlying problem, but rather the consumers’ reaction to their understanding of that problem. It might also happen because corporations, ever profit-maximizers, choose cheap solutions to problems that require more thorough treatment or choose to make a big deal out of resolving a relatively minor issue while distracting consumers from more serious– and more expensive– issues that go untreated. See also here (discussing the danger of distorting discourse through a disproportionate focus on relatively minor issues). On the other hand, perhaps it’s the case that consumers can gain a sufficient understanding of problematic issues like health and environmental degradation and can communicate their desires for more responsible corporate behavior in a way that forces an appropriate response. Karnani did not pursue this avenue in his article, so the thoughts in this post are merely speculation.
Karnani’s article, as interpreted and expanded upon here, is a helpful reminder that people, not disembodied concepts like “corporations” and “the government,” must be the ones to act in response to social problems.
In writing the Declaration of Independence, Thomas Jefferson borrowed much from English philosopher John Locke. Although it appears without citation, Locke’s influence shows in both the overall approach and the specific wording of the Declaration. See also John Locke, The Second Treatise of Civil Government, 1690. The core of Locke’s Natural Rights philosophy was the natural right of each individual to life, liberty, and property. Early in the Declaration, Jefferson stated that “Life, Liberty and the pursuit of Happiness” are among the “certain unalienable Rights” with which the “Creator” endowed people. Reading the Declaration today, with the Lockean influence well recognized, it is easy to assume that Jefferson merely was covering his tracks a bit, adding a touch of rhetorical flair and whimsy, or at least avoiding direct and obvious plagiarism by swapping “property” for “the pursuit of Happiness.” (Perhaps my own Hamiltonian sensibilities also made it easier for me to arrive at this baser conclusion, but I don’t think so.)

Looking at the two writings together and applying ordinary tools of textual interpretation suggests a different result, however. The alternative view is that “the pursuit of Happiness” is not a tuft of throwaway fluff but rather a conscious and meaningful modification of Locke’s original triplet. Looking backwards through time– my first, longstanding approach– we might conclude that “the pursuit of Happiness” means “property,” and thus that it is through property (and a natural or unalienable right to it) that we might and should pursue happiness. Looking forward through time– the new view I’m suggesting here– we might conclude that “the pursuit of Happiness” means something different from, or more than, “property.” Property could still be a part of it, and one’s home and land could be a great place to develop one’s good life, but it is more than the enforcement of property rights.
In some instances, “the pursuit of Happiness” may even be contrary to certain property rights regimes.
Furthermore, Jefferson did not substitute “Happiness” in place of “property,” but rather “the pursuit of Happiness,” which places the focus not on the capitalized word (“Happiness”) but on the central action of the phrase: “the pursuit.” On this view, what Jefferson wanted to emphasize was something ongoing and never completed. It is the journey, the climb, the quest, the striving that must continue, rather than something static like property. In two lectures this week at the Chautauqua Institution, documentary filmmaker Ken Burns (Brooklyn Bridge, The Civil War, Baseball, Jazz, The National Parks: America’s Best Idea) illuminated this Jeffersonian vision with passing mention of his documentary on our third president and with application to the Chautauquan week’s substantive theme, sacred space.
Burns’ work has centered on American people, places, and events, and even if he finds the internal contradictions in Jefferson’s life glaring and impossible to ignore, he seems to have taken to heart Jefferson’s aspirational, ongoing directive to be always in pursuit of happiness. Jefferson had a revolutionary outlook (“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”), but even those with more conservative attitudes can live always “in pursuit.” This need not be the sort of opportunity cost-driven lifestyle I described earlier, see supra here, in which we can never sit still because each moment is a moment we could be spending making money, developing professional contacts, or inventing Velcro II. Instead, we can maintain a posture in which we are never completely satisfied with the status quo because we know we can do better, as individuals and as a community. The view that we can always improve– in how we treat other people (at home and abroad), other living things, our (natural and built) environment, and our sacred spaces– strikes me as appropriately American: American in that the notion of living always “in pursuit” has Jeffersonian roots in the Declaration and that it speaks to a people always striving to be the best at whatever they do; appropriate, in that the approach is not imperial or domineering, suggesting self and internal improvement rather than external subjugation of others. Jefferson was an idealist (a Hamiltonian knows this if anyone does), but by building into the Declaration a kind of practical idealism, he instituted something to be done, not merely to be pondered under a shady tree.
To continue the pursuit is to continue to validate the Declaration, to continue to declare and protect independence. In other words, the pursuit really is not optional. First, the language of the Declaration places the pursuit of happiness among the core unalienable rights endowed by the Creator, as mentioned above. Surely there is no independence where unalienable rights are violated. Second, there is an American sense of collective individual responsibility. Our country is a project, and we all bear some modicum of responsibility for its furtherance and sustenance. President Abraham Lincoln knew both America’s greatness and its capacity to be its own worst enemy. In a statement Burns quoted more than once this week, Lincoln said:
From whence shall we expect the approach of danger? Shall some trans-Atlantic military giant step the earth and crush us at a blow? Never. All the armies of Europe and Asia…could not by force take a drink from the Ohio River or make a track on the Blue Ridge in the trial of a thousand years. No, if destruction be our lot we must ourselves be its author and finisher. As a nation of free men we will live forever or die by suicide.
Burns has studied human, American struggles, achievements, lives, and events from many perspectives. His films– and, of equal validity, our own experiences and those we share with others– provide windows onto opportunities for all of us to continue our pursuit.
The settling of conflicts is one lens through which to view the progression of civilization. Thus far, conflict seems to be endemic to human society, and we appear to have an inclination to resolve our conflicts– an inclination that extends to more general arenas like argument and problem solving.
A rough history suggests that, early on, the way to resolve conflict was through physical violence and, if the scale and magnitude necessitated, war. As nomadic tribes coalesced into agrarian communities, humanity saw the rise of civil society, politics and diplomacy, and the eventual proliferation of verbal discourse. See also The Beer Theory of Civilization; I’ll Take My Stand: The South and the Agrarian Tradition. This notion of a shift from violence and war to civil discourse, peaceable assemblies, and political engagement isn’t a perfect historical description, of course. More than a few early cultures had their philosophers and deliberative political structures, and more than a few modern states continue to rely on militaristic means of conflict resolution. (Indeed, physical violence probably remains the real trump card in the vast majority of aggregate and individual dispute resolution.) Still, there has been a marked development of peaceful, deliberative means of problem solving in recent centuries.
The United States is no stranger to violent means of conflict resolution, even setting aside foreign policy. The Civil War stands out in this respect, as do the dropping of the atomic bomb and the violent and deadly acts of those dubbed “domestic terrorists.” Even the dueling that claimed the life of one of America’s greatest Founding Fathers was a recognized part of the European culture Americans imported and sustained for some time. By the time of the civil rights movement, violence remained on the table and indeed was a viable option for both sides. There nevertheless was a perceptible shift in which violence was a last resort and peaceable means were preferred in the first instance.
The peak of this time of civic unrest, the late 1960s, has become an archetypal reference point for much of the subsequent civic and political action. The question now is whether this model has been stretched too thin, overused, and, in a certain way, too peaceful, in the age of the internet. Is web-based “social networking” the sort of engagement and participation that would impress Tocqueville, Kennedy, King, Putnam, or Armstrong? Are 140 characters enough for a meaningful treatise? Can a Facebook.com group change the world? Or should we just grab a Groupon and plan our revolutions face-to-face at the newest eatery (that checks out on Yelp, of course)? In short, global electronic connectivity has fostered the rise of a sort of wide-sweeping, possibly disparate civic engagement, but is it of significant consequence? Have we walked too far away from the days of settling our differences and sorting things out on the battlefield?
During that high period of American public participation more than forty years ago, a British group already was recognizing the potential of burgeoning civic engagement to run away from meaningfulness. The melodic title track of an album otherwise described as “sentimental” and “nostalgic” has a more satirical ring in my ears. (That, or it’s the most conservative song ever written by a group banned from performing in the U.S.) Lyrics are available here. A live performance from 1973 gives the feel:
The familiar album version is available below, and there is space for your responsive comments below that. Whether you have some ideas about the role of violence in modern dispute resolution, the future of web-based civic engagement, or a new verse to add to the song, I welcome your thoughts.