Removal of a political leader prior to the end of his or her term rarely serves as a positive indicator for the then-current state of affairs. Impeachment and removal of a president, for example, may be preferable in light of the president’s offending conduct, but it carries its own disruptive costs. The framers made it difficult to remove a president, one logically assumes, because they anticipated that the destabilizing effects would be worth bearing only to eradicate the most seriously damaging offenses against the country.
President Donald Trump is a historically unpopular president right now, so it is not surprising that many Americans (and other people) want him out of office as soon as possible. It also is easy to see how the perceived expediency and efficacy of removal through the impeachment process would offer the President’s opponents an attractive means to that end. In short, it looks to them like a get-out-of-jail-free card, a silver bullet that, in rapid fashion, will erase what they believe to be the source of their problems in one clean shot.
Like most governmental processes, impeachment and removal does not happen that quickly, though, as this timeline from President Bill Clinton’s administration reminds us. It also is far from clear that sufficient congressional will exists to impeach the President at this time.
These practical hurdles do not seem to be of great concern to the impeachment proponents, who have seized on the President’s alleged connections with Russia as their best hope for his removal prior to the end of his first term, even if, as some have observed, the asserted nature of the improper Russian involvement has evolved over time.
The foregoing notwithstanding, it often seems as though the President’s critics have coalesced around impeachment as the singular aspirational focus of their opposition. In doing so, they ignore the day-to-day policy activities of the administration and Congress. While not necessarily evidencing the smooth operation expected of a more experienced administration, Trump and the Republican-controlled Congress are advancing certain agenda items likely to have lasting effect while the vocal opposition remains distracted with Russia. At some point, putting all of their eggs in one shoot-the-moon basket is likely to backfire on the President’s adversaries, even if they ultimately succeed. Upon review of the new tax bill, reauthorization of NSA surveillance laws, environmental regulation rollbacks, potential reenergization of federal marijuana prosecution, and appointment of a dozen new federal judges, among other accomplishments, they might conclude that point is in the past.
It is possible to use the internet to commit a crime. For example, one could use a Silk Road-like website to acquire a controlled substance it is illegal to possess in one’s geographical jurisdiction, or simply use the web’s myriad means of communication to coordinate a financial fraud.
It also is possible to commit an internet crime. The social and commercial interactions that occur within the internet itself are subject to a sort of moral code, and, for all of the flexibility and fluidity the web as a virtual space would seem to offer, one of the highest internet crimes arises out of inconsistency. For many in this realm, there is no greater offense than to be indicted for hypocrisy. And, indeed, indictment and conviction are nearly simultaneous in this medium, with sentencing following quite swiftly thereafter.
The internet remembers all, or sufficiently all, anyway, to retain record of those off-color tweets you sent years before you took a public stand against others who said similar things, and when someone else finds those old tweets, man are you going to look silly. One of the things at which human brains excel is detecting patterns, and when the alleged hypocrite expresses something apparently inconsistent with his or her prior positions, a little nugget of pleasure releases inside those brains upon the presentation of the irrefutable evidence from the historical record. Guilty on the spot.
The web-seductiveness of exposing apparent hypocrites is so alluring that it makes it easy to forget that hypocrisy, for all its attendant failings, is a sort of derivative or second-level offense, and our obsession with rooting it out can obscure or overwhelm what often is a serious substantive problem underlying the procedural default. In that way, for example, when we find that an entertainment personality who has publicly opposed the mistreatment of women previously engaged in similar (or maybe even not that similar, but, hey, close enough) mistreatment, we frequently focus on evaluating the authenticity of his or her expressed opposition rather than on the mistreatment itself. (This also touches on why otherwise uninvolved people “coming out as” anti-rape, anti-Nazi, etc., contributes very little to the general good.) Sexual harassment in the workplace and racism in public policy are two very real and significant issues that require real, meaningful effort to address, and yet we are so easily distracted from this work by the thrill of hypocrite hunting.
In the context of last month’s Senate election in Alabama, in which Doug Jones ultimately defeated Roy Moore by a margin narrow enough to make many uncomfortable, Jonah Goldberg wrote for the Los Angeles Times about the dangers of our national distraction:
This obsession with hypocrisy leads to a repugnant immorality. In an effort to defend members of their team, partisans end up defending the underlying behavior itself. After all, you can only be a hypocrite if you violate some principle you preach. If you ditch the principle, you can dodge the hypocrisy charge. We’re seeing this happen in real time with some of Moore’s defenders, just as we saw it with Clinton’s in the 1990s.
Jonah Goldberg, Taking harassment seriously also requires making serious distinctions, Los Angeles Times, Nov. 21, 2017.
Or, as another thoughtful observer put it in sometimes cruder terms:
Wishing everyone a safe and happy new year filled with a renewed focus and energy for addressing some of our real problems in 2018.
Seven years ago, I wondered here whether America might be in need of a more meaningful, perhaps even physical, flavor of civic engagement. Stretching a bit, perhaps, I wrote:
The peak of this time of civic unrest, the late 1960s, has become an archetypical reference point for much of the subsequent civic and political action. The question now is whether this model has been stretched too thin, overused, and, in a certain way, too peaceful, in the age of the internet. Is web-based “social networking” the sort of engagement and participation that would impress Tocqueville, Kennedy, King, Putnam, or Armstrong? Are 140 characters enough for a meaningful treatise? Can a Facebook.com group change the world? Or should we just grab a groupon and plan our revolutions face-to-face at the newest eatery (that checks out on Yelp, of course)? In short, global electronic connectivity has fostered the rise of a sort of wide-sweeping, possibly disparate civic engagement, but is it of significant consequence? Have we walked too far away from the days of settling our differences and sorting things out on the battlefield?
Since then, the reelection of President Barack Obama and the subsequent election of President Donald Trump have preceded and likely fueled an increased attention to and participation in civic engagement and public discourse, at least of a certain variety. It remains to be seen whether this allegedly newfound brand of political participation is politically effective or merely serves to enhance the (digital and corporeal) egos of the participants.
Yesterday, the New York Times published an interview with Ed Cunningham, a person whose name probably is known to few but whose voice may be more recognizable. The interview memorializes a sort of retirement signing statement from the now-former ABC and ESPN college football broadcaster, who apparently resigned this spring but did not disclose the real reason for his resignation, he says, until now: he believes football is a dangerous sport.
In its current state, there are some real dangers: broken limbs, wear and tear. But the real crux of this is that I just don’t think the game is safe for the brain. To me, it’s unacceptable.
Cunningham feels his job placed him in “alignment with the sport. I can just no longer be in that cheerleader’s spot.”
First, the timing of Cunningham’s explanation is curious not because it is coordinated with the beginning of the current college football season in order, one assumes, to deliver maximum media impact, but because it did not come years ago. The sport’s governing bodies at the scholastic and professional levels may, like tobacco companies before them, continue to distance themselves from reports and research on the relationship between football and brain damage, but whatever popular science concepts informed Cunningham’s decision are not new.
Second, to the extent Cunningham is fashioning his resignation as a protest designed to effect change in the sport, his approach seems shortsighted. In surrendering his national media platform, Cunningham has traded the opportunity to discuss the issues he claims are so important to him with a relevant audience on a regular basis for the chance to fire a single bullet– yesterday’s article– before disappearing from public sight. If his goal was to make football safer, surrendering an important resource does not seem like the best way to accomplish it. Even if he was worried that his superiors would not permit him to make the sort of (pedestrian, frankly) comments he provided to the Times during game broadcasts, it would have made for a more broadly significant departure from his position had a network (i.e., a league “broadcast partner”) terminated him for making those or similar statements. As it stands, someone else simply will replace him, and everyone will move on. The article quotes Al Michaels, a much more prominent football broadcaster:
I don’t feel that my being part of covering the National Football League is perpetuating danger. If it’s not me, somebody else is going to do this. There are too many good things about football, too many things I enjoy about it. I can understand maybe somebody feeling that way, but I’d be hard-pressed to find somebody else in my business who would make that decision.
In an effort to be fair to Cunningham, it is not completely clear from his actual statements quoted in the article whether he made his decision for the purpose of making football safer or for the personal purpose merely of extracting himself from an endeavor he now believes is too dangerous for its participants. Given the probable effect of telling his story the way he did, the distinction may be of no significance. If he wanted to be a source of meaningful change, though, Cunningham should have made like his contemporary version of Alexander Hamilton and stayed on his microphone as long as possible. Instead, he’s done just enough to satisfy his own guilt through effortless moral posturing. In that, he is not a unique player in today’s civic arena, but worthy causes deserve more.
There are such things in the world as human rights. They rest upon no conventional foundation, but are external, universal, and indestructible. Among these, is the right of locomotion; the right of migration; the right which belongs to no particular race, but belongs alike to all and to all alike. It is the right you assert by staying here, and your fathers asserted by coming here. It is this great right that I assert for the Chinese and Japanese, and for all other varieties of men equally with yourselves, now and forever. I know of no rights of race superior to the rights of humanity, and when there is a supposed conflict between human and national rights, it is safe to go to the side of humanity.
. . .
The apprehension that we shall be swamped or swallowed up by Mongolian civilization; that the Caucasian race may not be able to hold their own against that vast incoming population, does not seem entitled to much respect. Though they come as the waves come, we shall be stronger if we receive them as friends and give them a reason for loving our country and our institutions.
Last month, in response to indications that the Democratic National Convention drafting committee’s party platform policy decisions appeared to portend movement by the Democratic Party closer to the Republican Party, if not any meaningful sense of “the center,” I wrote that a functioning democracy needs a national political environment that features at least two political parties, and that, right now, it looks like America’s may be collapsing down to one.
The Democrats, of course, are not the only group shying away from their ostensible policy goals. In the lead up to the Republican National Convention in Cleveland, reports surfaced that, despite Ohio’s status as an open-carry state, “No guns will be allowed in the convention center where Trump will speak — nor in the tightest security zone immediately around it.”
The Republican Party’s presidential nominee, Donald Trump, has stated a general opposition to gun-free zones, however, and it is a position many in his Party share. One of the principles underlying this belief is that, had they lawfully been permitted to bear arms, “good” people would be able to fight back, by use of firearms, against “bad” people committing violent acts in places like schools and movie theaters.
The statistical rarity of this occurrence– ten times in the last nineteen years or so, by one careful count– only reinforces the pro-gun position. After all, had more honest citizens been allowed to carry firearms in public, they could have arrested even more violent criminal behavior.
Yet, at a gathering of, presumably, the most ardent adherents to this thesis, taking place against a legal background that otherwise would have authorized such armament, guns were not allowed. If not at the RNC, then where?
The best thing for America might not be a powerful political faction that favors the proliferation of deadly weapons; nevertheless, the nation needs policy makers who adhere in practice to the policy positions they espouse. For a policy maker to do otherwise is to be dishonest with his or her constituents, an act likely both more inimical to this republic and, unfortunately, more prevalent across it than the policy itself.
Election day was last month, and along with it came the increasingly open conversation about voter nonparticipation. We once simply lamented low voter turnout rates in post-election headlines, for example, or sported bumper stickers proclaiming support for a losing candidate. Today, however, the discourse has become somewhat more robust. Interestingly, the emphasis seems to have shifted from concern over low turnout to a justification of nonparticipation. While those who perceive themselves as disenfranchised or marginalized have endorsed a nonparticipatory stance for some time, support for the non-voting view, more and more, is coming from academic elites. In short, despite their ostensibly increased and careful attention to political matters, at least some academics do not vote at any greater rate than the general public. Being academics, they naturally feel the urge to express their non-expressive approach in writing. Their rationales for not voting do not always rise to a greater level of sophistication or explication than those of their lay counterparts, however.
First, I agree that there is nothing inherently wrong with abstention in a democratic system. While I prefer civic action to inaction, taking a stand rather than reserving judgment until some (likely imaginary) future date, I have always supported abstaining from ignorant voting. I think voters have a duty to become reasonably informed regarding the candidates and issues before them, and where voters have not done so, abstention is appropriate.
For some reason, though, attention recently turned to a critique of the colloquial phrase, “if you don’t vote, you have no right to complain.” Professor Jason Brennan, here highlighted and endorsed by Professor Ilya Somin, contends that the no-voting, no-complaining position, in Somin’s words, “fails to consider the extremely low likelihood that your vote will actually have an effect on government policy.” Brennan continues:
The most obvious explanation is that if you don’t vote, you didn’t do something that could influence government in the way you want it to go. You didn’t put in even minimal effort into making a change. . . .
But voting isn’t like that! The problem is that individual votes don’t make any difference. On the most optimistic assessment of the efficacy of individual votes, votes in, say, the US presidential election can have as high as a 1 in 10 million chance of breaking a tie, but only if you vote in a swing state and vote for one of the two major candidates. Otherwise, the chances of breaking a tie or having any impact are vanishingly small.
Brennan’s position strikes me as a mathematical fallacy. As I see it, Brennan could be writing for two audiences, and his argument does not work for either one. If Brennan simply is writing for himself in order to justify his personal decision not to vote, his explanation– that he does not vote because his individual vote is extremely unlikely to “break[] a tie or hav[e] any impact”– relies on unknowable facts. Even granting the substantive premise that a vote only matters if it breaks a tie or has “any impact,” general election votes almost certainly are not counted in such a sequential manner that any one vote could be said to have been the tie-breaking vote. By setting up criteria that, as a practical matter, never could be satisfied, Brennan guarantees his apparently desired outcome but does little else. (An examination of whether and when a vote ever matters is a subject for another post.)
More likely, Brennan is addressing his remarks to a broader audience comprised of the entire electorate. In that case, however, his argument makes even less sense. In essence, his position amounts to an assertion that, because any one person’s vote is unlikely to break a tie or “hav[e] any impact,” no one should vote. Obviously, if no one voted, elections would be meaningless voids of civic disengagement.
One has to assume that, if Brennan was setting out to attack a foundational pillar of democracy generally (i.e., elections), he would do so directly. I am not familiar with Brennan’s work outside of the context of this immediate discussion. It is possible that he believes that the United States should exchange its democratic system for another one that does not rely upon citizens expressing their public will through elections. If Brennan supports a government system that incorporates elections to some meaningful degree, however, his argument against voting makes little sense, whether applied to himself individually or the electorate at large.
Last year, I noted that President Barack Obama seemed to be selectively leveraging his executive muscle in favor of certain constituencies and not doing so to benefit others. Following the passage and effective date of the Affordable Care Act (“ACA”), the President acted, probably in illegal fashion, to help the business community by delaying application of the new law’s burdens on employers. (As Jon Stewart noted at the time, the administration did not afford other constituencies, like young people, the same benefit.) What the President was willing to do– circumvent Congress to achieve a desired policy outcome– last July for businesses under the ACA he was not willing to do for immigrant families being split up under deportation laws last November, suddenly bemoaning that Congress was standing in his way (“When it comes to immigration reform, we have to have the confidence to believe we can get this done, and we should get it done. The only thing standing in our way right now is the unwillingness of certain Republicans in Congress to catch up with the rest of the country.”).
This seesaw pattern has continued in 2014, and others are catching on. Earlier this month, Glenn Greenwald noticed another executive power incongruity emanating from the White House, this time in the foreign policy context. Like his selective enforcement of the ACA, the President likely illegally circumvented Congress and released five Guantanamo Bay prisoners in exchange for the return of an American prisoner. The exchange provided a public reminder of many things, one of the most basic of which was that the U.S. prison at Guantanamo Bay remains open and operative, contrary to the President’s longstanding promise to close it. As Greenwald points out, “the sole excuse now offered . . . for this failure [to close Guantanamo] has been that Congress prevented [the President] from closing the camp.” He concludes: “either the president broke the law in releasing these five detainees, or Congress cannot bind the commander-in-chief’s power to transfer detainees when he wants, thus leaving Obama free to make those decisions himself. Which is it?”
If the President’s actions do not contradict his words, they at least illuminate his priorities. The President may truly desire all of the policy outcomes he professes to seek. By leveraging his executive might in pursuit of some of those outcomes and not others, though, he reveals which goals really matter to him. The above examples show that, for President Obama, helping businesses and securing the return of an American POW were high-priority goals, while helping immigrant families and closing the Guantanamo Bay prison were lesser priorities.
If there is a lesson here, it is not a new one: when evaluating a politician’s performance, we cannot merely rely on her own words. It is appropriate to measure a politician’s record against the rubric she makes for herself through campaign promises and other goal-setting pronouncements. In conducting that measuring, however, we must look to the politician’s actions, and we must look at them in context, not in isolation. When an elected leader shows that he is willing to exceed the legal confines of his office in order to achieve a goal, we should accord little weight to his complaint that the same legal obstacle, elsewhere ignored, precludes his achievement of another ostensibly desired goal. We may not reasonably be able to expect forthright honesty in our leaders’ self-critical evaluations, but we ought to demand that degree of thoroughness of our own critical evaluations of our leaders.
The growth of media and communication technology has provided us with greater volumes of utterances from more people than ever before. It is easy to capture the unfiltered, unvarnished thoughts of a broader portion of society. With emphases on access and immediacy, people are publishing more of their regrettable opinions, jests, thoughts, and other statements that upset members of their audience.
Setting aside a person-by-person evaluation of the authenticity of the widespread responses to off-color jokes, for example, the speakers’ apologies for these increasingly frequent statements have settled into a pattern that merits brief examination.
A recent instance of this now-reflexive call and response came earlier this month. MMA fighter and media personality Chael Sonnen was on Fox Sports Live, new sports network Fox Sports 1’s version of ESPN’s SportsCenter, to discuss boxer Floyd Mayweather’s match against Canelo Alvarez. Criticizing the perceived quality of Mayweather’s recent opponents, Sonnen said:
I’ve never seen anybody in the history of America get so rich and so famous off of having complete wimps throwing punches at their faces. I know what you’re saying. You’re saying, “Well, it’s happened before, what about Rihanna?”
FOX Sports regrets the comments Chael Sonnen made during last night’s edition of FOX Sports Live. They were an inappropriate attempt at humor that Sonnen acknowledges shouldn’t have been made and he apologizes to anyone who may have been offended by his remarks.
This cycle– statement, reaction, apology– has become both rote and swift in American media culture, to the point where a) the reaction phase no longer is a necessary way station before the apology, and b) the apology itself has become formulaic, always addressed to “anyone who may have been offended.”
The ubiquitous and seemingly harmless addendum about “anyone who may have been offended” is, at best, counterproductive. First, while the phrase usually comes at the end of the “apology,” blunting and qualifying what otherwise might simply be, “I’m sorry,” it actually indicates a limited, defined audience for the “apology.” Rather than allowing for a statement that could be simultaneously broader and more direct, this phrase shifts the attention and onus from the person who made the original statement to those people upset by the remark and whose sensibilities ostensibly necessitated the apologetic charade. This linguistic shift then draws negative attention to these supposedly overly sensitive people, who, it then will be said, must be members of the “P.C. police,” seeking nothing more than the suppression of free speech and the enforcement of antiquated moral values.
Second, and perhaps more fundamentally, the phrase undermines the apologetic nature of the statement, because it refuses to acknowledge that even one person actually upset by the statement exists; at best, it is a conditional apology. A conditional apology is no apology at all, particularly where the apology’s recipients are not equally able to engage in dialogue with the apology’s issuer.
To remedy these deficiencies, in reverse order: 1) change “anyone” to “those” and “may have been” to “were,” so that the apology is addressed to “those who were offended” and the focus remains on the person apologizing, or, better still, 2) remove the phrase altogether. “I’m sorry for saying what I said” works just fine on its own.
The decline of traditional journalistic media is well documented. In recent years, newspapers like the Ann Arbor News and Denver’s Rocky Mountain News have shuttered their doors. In order to survive, some papers, such as Detroit’s Free Press and News, have merged to varying degrees. Other regional papers, like the Tennessean, are shells of journalistic operations, mere AP repeaters after laying off batches of reporters. Some national papers– including the Wall Street Journal and New York Times– have gone to paid online platforms.
We have been told that the internet would be able to replace traditional print media, but experience suggests we have yet to realize that future. Web-based writers largely are concerned with reactions and opinions, and actual reportage appears to have decreased across the board, with foreign and local beats particularly suffering.
One outlet, Grand Rapids’ The Rapidian, is advancing the news media banner in the twenty-first century, though, and it is doing so through a hyperlocal, citizen-driven approach. The idea is to have a community’s members conduct actual reporting and create original content tied to the issues affecting that community and the happenings within it. The online-only newspaper seeks to capture the diversity of happenings and perspectives across the community’s varying neighborhoods– indeed, The Rapidian organizes content both by subject area and place-rooted bureaus– in a rigorous manner by providing training from experienced journalists and writers. This journalistic training, in turn, deepens residents’ connections to their community by providing them with tools to become more engaged community members.
It is easy to see why Grand Rapidians would want to support The Rapidian, a deeply engaged news source that is growing and developing along with a revitalized city that is doing the same thing, particularly when the city’s familiar news source, The Grand Rapids Press, is doing the opposite. (To its credit, MLive, the media group that now operates what remains of The Press and a number of other, formerly independent Michigan newspapers, has been a public, financial supporter of The Rapidian.)
It also should be easy to see why those who do not live in Grand Rapids nevertheless should want to support The Rapidian. Few communities currently have a dynamic, locally focused outlet like The Rapidian, but many, I suspect, would welcome and benefit from one of their own. The long-term solution, of course, is for members of these communities to create their own hyperlocal news source. (Anyone involved* with The Rapidian certainly would emphasize the “long-term” nature of that solution, I suspect.) The short-term solution is for members of these communities to support The Rapidian. The Rapidian is a national leader in this concept and a possible model for hyperlocal news media in other communities, and as such, its continued success makes it more likely that communities outside Grand Rapids will be able to follow its lead and develop their own versions. Thus, no matter where you live, if you have an interest in participatory, locally focused news and media that works in the twenty-first century, you have an interest in supporting The Rapidian. You can do so here today.
* Disclosure: I am a former member of the board of directors of the Grand Rapids Community Media Center, the umbrella media organization that serves the city through numerous channels, and of which The Rapidian is a part.
The Constitution does not mandate America’s de facto two-party system; it does not mention political parties at all. Yet while the identities of the parties– in both name and platform– have changed over time, the United States has been a two-party country really since before day one.
There is much to be commended about the two-party system as it exists in the U.S. today. The conglomerate, dynamic nature of the parties means that they evolve by competing with each other to attempt to absorb new movements and the votes that come with them. (Cf. Democrats and Greens with Republicans and Tea Partiers. The question of what happens once that absorption takes place– the assimilation– is a subject for another post.) It really is not so dissimilar from multiparty, parliamentary-style democracies, the difference being that those systems wait until after an election to form a coalition government, while the American system forms would-be governing coalitions before the election.
The third parties that persist in a two-party system like America’s without absorption generally are of two kinds: 1) the very unpopular, or 2) those fundamentally opposed to both major parties. An unpopular faction will not be absorbed because it is unpopular either in the numerical sense or in the ideological sense, and it is unlikely to coalesce into a functional political party for a variety of practical reasons.
The second variety is all that really remains for third parties under today’s two-party system. Because the major parties cover virtually the entire spectrum of substantive interests, the only thing left for a third party is to oppose both parties at some fundamental level, and that’s what America’s two most viable third parties– the Green and Libertarian Parties– are doing. Dissecting why the Green Party persists is a subject for another post. This post, unsurprisingly, will focus on the Libertarian Party.
I’ve already written at length here about libertarianism and some of its challenges. See, e.g., here, here, and here. With next week’s presidential election looming, the immediate question is whether it makes sense to actually vote for a third-party candidate. Most Americans profess concern that their vote “count.” People most concerned that their vote doesn’t count tend to be those in states with large populations and states that heavily favor the major party other than the one they support. This year, with the broadening popularity of Libertarian Party presidential candidate Gary Johnson, some are wondering whether a vote for Johnson would be a wasted vote. The unstated basis for that view is the logical assumption that Johnson will not win the election.
That is a self-fulfilling prophecy, of course. There’s no way Johnson can win if nobody votes for him, whatever their reasoning; conversely, if enough people ignored that assumption and voted for him, he would win. Still, though, that is unlikely to happen either, because there don’t appear to be enough people who would even consider supporting Johnson regardless of their expectation of his success.
I think the real underlying sentiment among voters is that they want to pick a winner. In other words, they want their votes to “count” in the sense that they want their votes to achieve something. If there’s no reasonably likely way the candidate will win or even come close, people will see a vote for that candidate as a vote that was “wasted.” The vote had no hope of achieving anything.
“Wasting your vote is voting for somebody that you don’t believe in,” an impassioned Johnson said. “That’s wasting your vote. I’m asking everybody here, I’m asking everybody watching this nationwide to waste your vote on me.”
His statement includes an important response to the “wasted vote” critique that seeks to redefine the concept: “Wasting your vote is voting for somebody you don’t believe in.” He realized he needed to add a practical goal, though, to help people see their votes as votes that would “count” in that second sense of achieving something, even if it wasn’t an outright victory for their candidate. He has done that by setting a goal of securing five percent of the popular vote nationwide, an achievement that would entitle the Libertarian Party to public campaign funding (something the major parties now have rejected, with President Barack Obama setting a record by raising over $1 billion) and a spot on the ballot in every state in the 2016 election. This is a goal whose achievement, Johnson believes, his potential supporters will see as sufficient to make a vote for him one that will “count.”
Everybody likes to pick a winner, and everyone wants to be on the right side of history. Letting the perfect become the enemy of the good isn’t always practical. But maybe it’s worth reexamining our approach to voting if we find ourselves voting for a candidate other than the one we want to win the election.
Johnson may not win this election. He may not even make it to five percent of the national popular vote. (After all, the Green Party’s most successful presidential campaign, Ralph Nader’s 2000 effort, secured only 2.74% of the popular vote. Right now, Johnson is polling at six percent nationwide.) What he already has done, though, is initiate a compelling discussion about reconceptualizing how Americans vote. All he needs now is for five out of every one hundred voters to agree that it is a conversation that should continue.