This week, according to sludgebait website Awful Announcing, “Everyone Is Losing Their [sic] Minds Over Colin Kaepernick.” During a preseason NFL game, television cameras caught the San Francisco 49ers backup quarterback sitting during the traditional pregame playing of the national anthem. Kaepernick addressed the subject in a postgame interview.
His team promptly issued the following statement:
The national anthem is and always will be a special part of the pre-game ceremony. It is an opportunity to honor our country and reflect on the great liberties we are afforded as its citizens. In respecting such American principles as freedom of religion and freedom of expression, we recognize the right of an individual to choose and participate, or not, in our celebration of the national anthem.
Kaepernick’s former teammate, Alex Boone, also was quick to react, and his critical comments reflected those of many who were displeased with Kaepernick’s display:
You should have some f—ing respect for people who served, especially people that lost their life to protect our freedom. We’re out here playing a game, making millions of dollars. People are losing their life, and you don’t have the common courtesy to do that. That just drove me nuts. . . . There’s a time and a place. Show some respect.
I have not been alive for every year of America’s existence, or its flag’s, but I have the following senses: 1) the American flag has long been a symbol of great reverence for the nation’s military, for obvious reasons; 2) the flag was not always the exclusive symbolic property of any particular political or social faction, however; 3) as it had during prior times of national crisis, the flag’s symbolic energy increased in the aftermath of the September 11, 2001 attacks; and 4) during that time, as the nation’s response to those attacks became a matter of partisan and public controversy, the flag as a symbol traveled with that response and its proponents, such that, in the following years, many wore or displayed the flag as a symbolic endorsement of the growing War on Terror and its political and military leaders, to the exclusion of other people and groups, including those who questioned or opposed those actions.
The American flag belongs to all Americans, however, including those who want to fly it in support of a particular political regime; those who want to fly it in opposition to a particular political regime, as a reminder of values for which it stands that they feel that regime is disregarding; those who want to reject it; those who want to burn it; those who want to ignore it; and anyone else.
Kaepernick’s protest raises serious and immediate practical concerns. The discourse his protest initiated also highlighted the reality that many view the flag and other national symbols as the exclusive property of some, but not all. That is worrisome, but it also provides an opportunity to examine and reverse the trajectory of the flag as a captured, proprietary national symbol.
Last month, in response to party platform decisions by the Democratic National Convention drafting committee that appeared to portend movement by the Democratic Party closer to the Republican Party, if not toward any meaningful sense of “the center,” I wrote that a functioning democracy needs a national political environment that features at least two political parties, and that, right now, it looks like America’s may be collapsing down to one.
The Democrats, of course, are not the only group shying away from their ostensible policy goals. In the lead-up to the Republican National Convention in Cleveland, reports surfaced that, despite Ohio’s status as an open-carry state, “No guns will be allowed in the convention center where Trump will speak — nor in the tightest security zone immediately around it.”
The Republican Party’s presidential nominee, Donald Trump, has stated a general opposition to gun-free zones, however, and it is a position many in his Party share. One of the principles underlying this belief is that, had they lawfully been permitted to bear arms, “good” people would be able to fight back, by use of firearms, against “bad” people committing violent acts in places like schools and movie theaters.
The statistical rarity of this occurrence– ten times in the last nineteen years or so, by one careful count– only reinforces the pro-gun position. After all, had more honest citizens been allowed to carry firearms in public, they could have arrested even more violent criminal behavior.
Yet at a gathering of, presumably, the most ardent adherents to this thesis, taking place against a legal background that otherwise would have authorized such armament, guns were not allowed. If not at the RNC, then where?
The best thing for America might not be a powerful political faction that favors the proliferation of deadly weapons; nevertheless, the nation needs policy makers who adhere in practice to the policy positions they espouse. For a policy maker to do otherwise is to be dishonest with his or her constituents, an act both inimical to this republic and, unfortunately, likely prevalent across it.
A common refrain of politically dissatisfied Americans is that the United States is in need of a multiparty system akin to the parliamentary arrangements that populate Europe. During the last presidential election cycle, I wrote in defense of casting a vote for a third-party presidential candidate. That post included a favorable discussion of our two-party system:
There is much to be commended about the two-party system as it exists in the U.S. today. The conglomerate, dynamic nature of the parties means that they evolve by competing with each other to attempt to absorb new movements and the votes that come with them. (Cf. Democrats and Greens with Republicans and Tea Partiers. The question of what happens once that absorption takes place– the assimilation– is a subject for another post.) It really is not so dissimilar from multiparty, parliamentary-style democracies, the difference being that those systems wait until after an election to form a coalition government, while the American system forms would-be governing coalitions before the election.
That, at least, is how a successful two-party system ought to operate. If party platform decisions made by the Democratic National Convention drafting committee this month stick, though, that may be an indication we are moving closer to a one-party system than any sort of multiparty arrangement:
There is some nuance missing here (e.g., the rejection of the $15 minimum wage amendment leaves in place a minimum wage at that amount but will not index the amount to the inflation rate, as some wanted), and the platform is not final, but these policy decisions– as well as others not noted, including the rejection of a proposal opposing the Trans-Pacific Partnership trade agreement– appear to portend movement by the Democratic Party closer to the Republican Party, if not any meaningful sense of “the center.”
A functioning democracy needs a national political environment that features at least two political parties. Right now, it looks like America’s may be collapsing down to one.
Yesterday was Memorial Day. The legacy and cost of wars as public policy decisions can be told in gruesomely sterile bar graphs. It is relatively easy to discuss and analyze an issue by aggregating the participants and then slicing and dicing them in the pursuit of answers and pseudo-conclusions. War merits inspection on an n=1 basis as well. Even though that sort of inspection is difficult and expensive, and even though its results are not readily translated into things like bar graphs and statistical tables, it remains necessary. This is particularly so given the sense that our armed forces draw their volunteers from increasingly narrow demographic subsets. Considering the individual soldier, even (perhaps especially) if she is not personally known to the considerer, is worthwhile and valuable.
Individualized humans can communicate messages to other humans that statistical humans cannot, and, in the course of placing human lives at risk, regard of the former ought at least to complement analysis of the latter.
Placing a monetary value on a human life is, at least, unsavory; by apparent contrast, we have little difficulty pricing time, even– perhaps especially– our own. People spend much of their time at work, which they do in exchange for monetary compensation, often in the form of an hourly wage. Some, like lawyers, accountants, and consultants, even sell their time directly to their customers in the form of billable hours.
Maybe pricing our time is a mental shortcut humans take to hurdle the sticky undertaking of placing a value on a full life. The latter incorporates an element of finality: the life is over, or at least contemplated as a completed product, matters accomplished weighed against those uncompleted, the relative wisdom of roads taken balanced against those avoided or undiscovered, without any further recourse or appeal.
Adults stereotypically chide youths for their projected or enacted attitudes of invincibility, but even if grownups are more likely to acknowledge their own mortality, most everyone seems to treat time as a far less finite resource.
“There’s always tomorrow,” except when there isn’t. And when someone runs out of tomorrows, it can seem like a sudden occurrence. Sometimes the realization that time is the currency of life arrives with arresting, even crushing force.
A baseball writer recently wrote an article about the death of his young son, a child who had been sick all of his short life. A selection therefrom:
Perhaps stuck in the bargaining phase of grief, I kept thinking about how many games (indeed, how many Cubs wins, even) I would trade for even one more chance to toss my phone aside, load Emerson into his wagon, and go to the park to swing. As anyone who has grieved a painful loss can tell you, irrational anger sometimes creeps in, and for me, that has taken the form of blaming baseball for stealing my son from me, for taking my attention away from him too often over the last few years, for distracting me as I cared for him over the last few weeks, even.
Baseball mostly steals from us, steals our money (tons of our money), steals our time, steals the passion and intellectual energy we ought to put toward more important things.
How we spend our lives is how we spend our time. A life well lived is composed of time well spent. To regard every moment as a vanishing grain of finality is an approach too paralyzing to be sustainable for any stretch. Time, like other, less cosmic currencies, can and often should be invested– in the mundane, in the unenjoyable, in the unproductive, in the lonely. To burden every moment with the conscious knowledge of unalterable consequence is too much. Still, to better prepare to receive that last, ultimate punch of total realization, it may be wise to moderate our day-to-day approaches by incorporating a greater respect for the real value of time, both ours and others’.
Life is sacred. Or, at least, life is invaluable. Our resistance to placing a specified value on life generally, or an identified person’s life specifically, or even approaching the task of deciding whether and how to place a value on life, leads us to some extreme places. The death penalty is one (controversial) example. If our public policy reflects, however bluntly, our values, then it makes at least some sense that we would apply the most severe sanction to the most severe crime. Even if the logic does not compute, the arithmetic does: 1 Life = 1 Life.
In situations less severe than murder, however, and units smaller than 1 Life, the practical push toward pricing life becomes difficult to avoid.
Among its many great promises, the internet offered humanity facilitated collaboration at a speed, scope, and cost that combined to create an infrastructure through which vast collaborative opportunities– discourse writ large, or small, or whatever size in between you wanted, or that your ideas could command, anyway– became a realistic possibility. The structure and execution quickly became obvious as people moved beyond unilateral, newspaper-style content publishing to message boards and blogs, where someone could present an idea, proposal, argument, or other creation, and others could respond to it, and even interact with the presenter, in the space below the posted concept known as the comment section. A publicly interactive comment section is one of the most distinguishing features of online content, along with accessibility and mixed-media formatting, and, for those interested in promoting quality discourse, the comment section was the structural crux of the internet’s promise in this regard. Some sites, such as Reddit.com, saw and prioritized this concept to an extreme degree. Reddit calls itself “the front page of the internet,” but “the comment section of the internet” might be a more descriptive title.
Confucius, Plato, and many others in the two and a half millennia since have made successful use of dialogues and conversations as ways of sharing, probing, and developing ideas and knowledge. If the great deliberative, discursive world wide web is to be a thing we have, though, it has yet to arrive, and the commenting structure may not be the way to achieve it.