In 1988, Guy Debord observed:
“We believe we know that in Greece, history and democracy appeared at the same time. We can prove that their disappearances have also been simultaneous.”
“To this list of the triumphs of power we should, however, add one result which has proved negative for it: a State, in which one has durably installed a great deficit of historical knowledge so as to manage it, can no longer be governed strategically.”
This absence of historical knowledge manifests itself today. After weeks of partisan gridlock, the current Congress finally re-opened the Federal government with a last-minute deal. This partisanship, however, is not ideological: it is emotional, it is irrational, and its results are unpredictable.
In response to this minor “accomplishment,” President Obama remarked:
“Let’s work together to make the government work better, instead of treating it like an enemy, or making it worse. That’s not what the founders of this nation envisioned when they gave us the gift of self-government.”
The President’s statement seems, on its face, uncontroversial — and that’s a big part of the problem. His statement is profoundly anti-historical, and, in the most problematic manner possible, reads present values into the past.
Until 1850 or so, only white men with substantial wealth — such as bankers, factory owners, or plantation owners — were allowed the vote. Renters, subsistence farmers, and the laboring majority — whites and blacks — lacked political representation at the time of the US’s founding. The Constitution made no mention of suffrage until the 14th and 15th Amendments in 1868 and 1870 — almost 100 years after the original Constitution was ratified. In 1875, the US Supreme Court explicitly ruled that the 14th Amendment — which defined citizenship for the first time — did not give women the right to vote. Women didn’t get the vote until 1920. Blacks didn’t get full rights until the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
That’s an odd “gift of self-government.” Truly, it sounds more like a long, hard struggle to obtain self-government — IN SPITE OF the Founding Fathers. Indeed, despite being subjects of the Crown, the British managed to outlaw the slave trade, abolish slavery, and enfranchise women before the United States did. And despite remaining subjects of the Crown, the British today enjoy many of the same rights — like freedom of religion, freedom of the press, freedom of speech — that Americans consider distinctly American innovations under the Bill of Rights.
Though the myth of Democracy endures, it is plain to see that the Founders feared Democracy, and made provision to prevent its emergence in the New World. That so much time passed before American citizens obtained universal suffrage is testament to the effectiveness of the Founding Fathers’ plans.
In arguing for a new Constitution to replace the Articles of Confederation, Founding Father Elbridge Gerry complained to the Constitutional Convention on May 31, 1787: “The evils we experience flow from the excess of democracy.” Founding Father Edmund Randolph also complained about the “turbulence and follies of democracy.” In Convention, Founding Father John Dickinson argued against expanding political enfranchisement: “The danger to free governments has not been from freeholders, but those who are not freeholders.” Dickinson went so far as to claim that a constitutional monarchy was “one of the best Governments in the world.”
Founding Father Alexander Hamilton, in Convention on June 18, 1787, expressed his belief that “nothing but a permanent body can check the imprudence of democracy … you cannot have a sound executive upon a democratic plan.” In The Federalist #10, James Madison wrote: “democracies have ever been spectacles of turbulence and contention, have ever been found incompatible with personal security and the rights of property.”
Thomas Jefferson — who did favor democracy — was not particularly effective in pushing his views. He was not even in the country when the Constitution was drafted and ratified.
The attainment of self-government in the United States was no “gift.” President Obama is either unaware of this nation’s history, or perhaps he has no problem brushing it aside in his public statements. Perhaps he is content that the myth of America’s long history of revolutionary self-government is a suitable expedient in the short-sighted calculus of contemporary electoral politics.
Assuming a political narrative where “liberals” oppose “conservatives,” one might think the first black president in the United States would show some slight interest in calling attention to the heritage of liberal reformers and progressives who paved the way for him to attain high office. One might suppose it would be in his political advantage to make it clear for all to see that the rights most “conservatives” today enjoy are the result of the efforts of their political adversaries.
Instead, the President’s choice to ignore this history — and to implicitly endorse the a-historical nationalist myth favored by self-described “conservatives” — obscures real threats to what measure of democracy Americans have gained by long struggle.
“Conservatives” who think they are defending the “gift” of democracy vehemently favor so-called “voter ID” laws — which have the effect of disenfranchising students and the elderly. The ostensible rationale for these laws — that they prevent voter fraud — not only points to a problem that does not appear to exist, but effects a sort of bait-and-switch. These laws brush aside legitimate concerns about election fraud — they ignore systemic flaws with electronic voting machines, irregularities in the 2004 elections which may have been covered up, voting irregularities and questionable legal activities surrounding the contested 2000 election, and ongoing voting irregularities in parts of small town America like Waukesha, Wisconsin.
In addition to enacting “voter ID” laws, “conservative” governors have been purging voter rolls, and continue to push for national policies — such as the failed War on Drugs and mandatory minimum sentencing laws — which have had the net effect of leaving one in five black men disenfranchised.
The “conservatives” perhaps don’t know to what condition their “traditionalist” views are returning this country — and our “liberal” President does not seem particularly concerned with remedying the matter. And for all the lofty speech of “history” surrounding the 2008 elections, the word was routinely used not by way of elucidating the past, but by way of branding what was then the present moment.
All this is perhaps fitting. President Obama’s policies are by and large center-right. With the exception of gay marriage, he has a dismal civil rights record that includes granting amnesty to CIA torturers, failing to close the Guantanamo Bay detention facility, extrajudicial assassinations of US citizens, requesting indefinite detention provisions in the 2012 National Defense Authorization Act, negotiating secret treaties, expanding and legitimating Bush-era surveillance programs — despite well-documented evidence detailing what these programs can lead to — violating the sovereignty of foreign nations with drone strikes, prosecuting whistleblowers with a vengeance … and the list goes on.
Mr. Obama’s calls for “unity” are perhaps well-intentioned, but cannot be strategically effective in the absence of historical knowledge among the population. Something closer to a one-party system is unlikely to mend the damage caused by a dysfunctional two-party system. What America needs is not “unity” but a real opposition party — a role that, in the face of “conservative” efforts in the Tea Party Caucus, the Democratic Party seems unwilling or unable to fulfill.
Although many states in the US have been passing laws allowing for medical and even recreational use of cannabis, the substance remains illegal at the federal level. Specifically, cannabis is classified as a Schedule I substance under the Controlled Substances Act. This classification is on the basis of three main criteria:
1. The drug or other substance has a high potential for abuse.
2. The drug or other substance has no currently accepted medical use in treatment in the United States.
3. There is a lack of accepted safety for use of the drug or other substance under medical supervision.
Cannabis is classed with heroin and LSD. Cocaine, morphine, and oxycodone are less tightly regulated Schedule II substances.
As cannabis use persists among teenagers and adults, both legally and illegally, more and more people — especially young people — are seeing first hand that the risks associated with the use of cannabis don’t square properly with the federal government’s treatment of cannabis. If the government has an interest in protecting young people from the risks of substance abuse, they also have an interest in providing accurate information and formulating sensible policies that don’t simultaneously undermine their own credibility.
Problems with the Current Classification of Cannabis
The Schedule I classification of cannabis has a number of problems. First, abuse is hard to quantify, and just what patterns of cannabis usage fall under this rubric are not well defined. Second, there are few quantifiable safety concerns with cannabis: the substance is profoundly non-toxic, and it is, for all practical purposes, impossible to overdose on cannabis. This distinguishes cannabis from other Schedule I substances like heroin, and even from legal recreational drugs like alcohol, which is a contributing factor in the death of some 80,000 Americans each year. Third, the current federal scheduling of cannabis does not take into consideration the accepted medical use of cannabis in a number of states. The Department of Veterans Affairs has issued a formal directive permitting the clinical use of cannabis in those states where medical uses are approved. Finally, researchers studying the relative risks and merits of the substance encounter great difficulties acquiring suitable samples to study, and their findings are of limited applicability to the way the substance is routinely consumed in a non-standardized, non-regulated black market.
Perhaps the most dramatic difficulty with the federal government’s position on cannabis is that the US Department of Health and Human Services holds a patent on medical uses of cannabis. Issued in 2003, US Patent #6630507 is titled “Cannabinoids as antioxidants and neuroprotectants.” The patent examines a molecule found in cannabis, CBD, though the chemical mechanism the patent identifies should be present in all cannabinoids, including THC. The patent notes that “cannabinoids are found to have particular application as neuroprotectants, for example in limiting neurological damage following ischemic insults, such as stroke and trauma, or in the treatment of neurodegenerative diseases, such as Alzheimer’s disease, Parkinson’s disease and HIV dementia,” and also indicates that cannabinoids offer a unique delivery mechanism due to the facility with which these molecules can cross the blood-brain barrier.
When cannabis was originally listed as a Schedule I substance in 1970, the classification was intended to be provisional, pending the results of an ongoing study. The National Commission on Marijuana and Drug Abuse issued the study findings in 1972, finding that there “is little proven danger of physical or psychological harm from the experimental or intermittent use of the natural preparations of cannabis.” Although the study recommended de-criminalizing cannabis and treating use or possession akin to alcohol, President Nixon chose not to implement the Commission’s recommendations, and marijuana has remained a Schedule I substance since. Although whole-plant marijuana remains a Schedule I substance, the synthetic THC called dronabinol — sold under the brand name Marinol — is classified as a less-restricted Schedule III substance.
Social Attitudes Affecting Cannabis as Medicine
In the United States, much of the opposition to medical cannabis laws has presumed that such laws are just a “first step” towards outright legalization. While there is little to suggest such an outcome would be inherently detrimental, there is also ample evidence that supports medical uses of cannabis on the substance’s own merits.
What presents a more profound problem to the public is in part a tacit sociology of medicine that limits and proscribes how individuals view treatment. Politicians have adopted these cultural attitudes unquestioningly — indeed, the authoritarian personalities of these politicians wouldn’t allow them to ask such questions. Those who are open to such questions don’t dare assert themselves, despite polling results that show 70-85% of Americans favor significant changes in current federal policy.
Of particular note in this regard is the unexamined notion that medicine has to come in the form of an expensive bitter pill. The notion that medicine might also be pleasurable is anathema, and that healing might be enjoyable is equally heretical. Medicine is still penance, disease is sin, the new medical complexes are cathedrals, and doctors are the high priesthood, mediating between this world and the next, serving as both the front line and the last defense against the forces of corruption, decay, and disorder.
We apologize when we call in sick to work, and are stigmatized by our ailments. Just as the medieval church was one of the largest landlords in Europe, today’s medical industry claims vast swaths of the GDP. In the US, healthcare spending exceeds the 10% tithe commanded by the medieval church. The religion analogy is quite complete, and includes the irreligiousness of the most ardent devotees. Hospitals, gathering together the diseased, are diseased. They are morally perverse and rotten with wealth.
Along with unexamined notions of how medicine fits into our culture, there is another factor promoted by our culture, related to the ideology of Progress. Progress holds that the future will always bring improvements, that all new technology is better technology, and that what is new must replace what is old. From within the confines of this ideology of Progress, it seems on the face of things obvious that any new pill is inherently superior to “natural preparations.” This is, unfortunately, quite difficult to establish with any certainty.
There are easy-to-identify counter-examples where modern medicine has delivered a harmful product: the recall of pills like Vioxx makes a big splash in the media, and creates the impression that these are exceptions to the general rule that modern medicine routinely delivers improvements. But these issues have been with medicine for a long time: heroin, for example, was originally developed by the pharmaceutical company Bayer, and marketed as a non-addictive alternative to morphine.
The litany of prescription painkillers marketed since Bayer introduced heroin has now surpassed car crashes in the number of annual deaths caused, accounting for some 90% of all poisonings. The number killed by these drugs amounts to about ten 9/11s each year — every year. Instead of figuring out how to deal with this plague, however, the US throws more and more money at the medical industry, which keeps developing new drugs with serious side effects and abuse potential. The broader, social implications of this are even more troubling.
The Decline of Western Medicine
Most of modern medicine is unnecessary. After sanitation and hygiene, antibiotics, analgesics and anesthetics, and vaccines, most of modern medicine is devoted to coping with the side effects of industrialization. This effect can be seen in diet particularly, but also with respect to such vectors as environmental pollution. Environmental pollution may take the form of contaminants in the air and water, particulate matter in the air (which causes diseases like asthma), or increased radiation in the environment (due to industrial processes, residues from atmospheric nuclear testing, or because of solar radiation that is increased by a depleted ozone layer in the upper atmosphere).
If he or she lives past the age of 15, the typical hunter-gatherer stands a reasonable chance of remaining healthy and active into their 70s, with a strong social support network to care for them as they age. The modern US health care industry really doesn’t do all that much better. A sizable portion of the modern improvements in life expectancy over what is offered by a hunter-gatherer society come from improved infant mortality, a hygiene problem identified by Ignaz Semmelweis in 1847. Hand washing is an extraordinarily cheap and effective medical technology. Antibiotics, which were developed for around $20,000 of basic research, have saved many more individuals from childhood disease, and increased the range of surgeries that are possible.
As modern medicine grows more expensive, its productivity declines precipitously. This decline in productivity can be measured in terms of substantive outcomes or in terms of the cost per patient. Either way, the role of diminishing returns in this field is not adequately addressed in the contemporary discourse.
Most of the big medical breakthroughs of the last 300-500 years were inexpensive. Everything recent is increasingly expensive and of rapidly declining effectiveness compared to basic innovations like sanitation or antibiotics. Most modern medicines and medical procedures could be avoided through less expensive means, specifically, through dietary and behavior modification.
The cost of medicine detracts from other public welfare programs, such as nutrition, food security, education, and mass transit, all of which yield a far greater return on investment than modern medicine.
At some point, the moral aspects of modern medicine need to be evaluated in terms of the social cost. For example: as a percentage of GDP, the US spends three times more money on the healthcare industry than on education. We know that basic education makes us smarter, better socialized, and better equipped for employment; but most medicine isn’t really making us all that much healthier.
Marijuana and Medicine
Progress makes a raw agricultural commodity like cannabis seem suspect as a medicine, though really, it is modern medicine that should be suspect. Whereas a typical television commercial for a new pharmaceutical product will often devote more than half its airtime to potential side effects, no similarly funded social initiative exists to teach Americans how to eat properly, or how to prepare nutritious foods. Nutritious foods and vegetarian diets are routinely mocked by Americans.
Somehow, none of this is a medical problem. Rather, in the discourse, these are treated as political problems. So doctors, not being politicians, stay out of politics; and, somehow, proper diet is only of tangential concern to the medical industry, while new drugs of dubious effectiveness are promoted as indispensable innovations.
Somehow health is not a medical issue, only disease warrants attention. And where medicine and politics do intersect on this issue of cannabis, instead of informed discussion, the public is treated to a wall of silence, or else jokes about hapless stoners.
In the wake of this most recent school shooting in Connecticut, it is important to remember that these are still isolated incidents. That these cases become public spectacles reflects an ideology of indoctrination more than it reflects symptoms of a peaceful society changing into something frightening and more violent.
Our society is already violent. 12,000 gun homicides annually ranks our “peaceful” Homeland among many war zones. This is a 10-year low. For every gun homicide, there is roughly one accidental gun death. But very few of these deaths take place in mass shootings: they are overwhelmingly the results of domestic disturbances, armed robberies, and gang violence. Yet, when these more common types of shootings occur, the media is largely silent. If it were otherwise, the news would be about nothing but shootings.
When the media picks up gun violence as a topic — typically when a rare mass shooting occurs — the narrative the media propagates tells audiences that these acts are the result of some “disturbed” individual. Now, clearly, shooters in domestic disturbances are “disturbed” at the time, but the media means to say something about psychological pathology: that mass shooters are somehow unhinged and acting irrationally. Relatively little attention is paid to what sorts of social pressures these mass shooters may have been under. And almost no attention is paid to the significance of the fact that these shooting sprees often end with a suicide.
The Suicide Shooter
What should be deeply disturbing to media consumers is that the sensationalism of the media coverage is oriented more towards inflicting emotional trauma on audiences than providing a useful description of what gun violence in the country actually looks like.
The media seems to be operating under the premise that when American audiences see coverage of events like this, audiences implicitly understand that these events are stand-ins for countless other events that vary widely in their specifics. All the tweets and Facebook posts to the effect that, “if only more teachers carried weapons in the classroom” seem to contradict such an assumption. I would wager that if more teachers were armed in class, there might be more school shootings: it is not safe to assume that all teachers love children unconditionally, or are paragons of infinite patience.
Perhaps most disturbing about the media coverage of these events — in a sociological sense — is the uniform lack of discussion of the frequent suicidal climax of the killing. Perhaps the media feels it is enough to state that the shooters are “disturbed,” and that suicide therefore seems natural enough as a conclusion to such a “disturbed” episode. But we do have another paradigm for suicide attacks like these: suicide bombers in the Middle East.
It is not so easy to write off Middle Eastern suicide bombers as uniformly “disturbed.” They are trained in their attack strategies — much like our suicide shooters plan assaults. Suicide bombers are motivated by ideology or other political goals — which often enough appears to be the case with our suicide shooters. While suicide bombers are promised great rewards in the afterlife, our suicide shooters live in a culture saturated with aesthetic treatments of violence, such that the rush of a real-life shootout and the subsequent notoriety may be reward enough.
Often enough, the perpetrators of these acts of mass violence do have reasons, though, due to the media’s sensationalism and lack of analysis, we never get any closer to understanding those reasons or, consequently, understanding what we can do to address them. All we are left with, as a nation, is televised grieving, emotional trauma inflicted by the media over distant events, and nebulous debates about “gun control.”
The Calculating Killer
Joe Stack, after several failed attempts at pursuing the American Dream through entrepreneurship, eventually committed suicide by flying a private plane into an IRS tax office in Texas. Joe Stack wrote a suicide note, titled, “Well Mr. Big Brother IRS man… take my pound of flesh and sleep well.” In it, he stated:
“If you’re reading this, you’re no doubt asking yourself, ‘Why did this have to happen?’ The simple truth is that it is complicated and has been coming for a long time. The writing process, started many months ago, was intended to be therapy in the face of the looming realization that there isn’t enough therapy in the world that can fix what is really broken. Needless to say, this rant could fill volumes with example after example if I would let it. I find the process of writing it frustrating, tedious, and probably pointless… especially given my gross inability to gracefully articulate my thoughts in light of the storm raging in my head. Exactly what is therapeutic about that I’m not sure, but desperate times call for desperate measures.
“We are all taught as children that without laws there would be no society, only anarchy. Sadly, starting at early ages we in this country have been brainwashed to believe that, in return for our dedication and service, our government stands for justice for all. We are further brainwashed to believe that there is freedom in this place, and that we should be ready to lay our lives down for the noble principals represented by its founding fathers. Remember? One of these was ‘no taxation without representation’. I have spent the total years of my adulthood unlearning that crap from only a few years of my childhood. These days anyone who really stands up for that principal is promptly labeled a ‘crackpot’, traitor and worse.”
Joe Stack doesn’t sound like a man whose reason has become unhinged; he sounds like a man whom society pushed to the breaking point.
Some time earlier, the Unabomber, Ted Kaczynski, wrote in his Manifesto:
“96. As for our constitutional rights, consider for example that of freedom of the press. We certainly don’t mean to knock that right: it is very important tool for limiting concentration of political power and for keeping those who do have political power in line by publicly exposing any misbehavior on their part. But freedom of the press is of very little use to the average citizen as an individual. The mass media are mostly under the control of large organizations that are integrated into the system. Anyone who has a little money can have something printed, or can distribute it on the Internet or in some such way, but what he has to say will be swamped by the vast volume of material put out by the media, hence it will have no practical effect. To make an impression on society with words is therefore almost impossible for most individuals and small groups. Take us (FC) for example. If we had never done anything violent and had submitted the present writings to a publisher, they probably would not have been accepted. If they had been accepted and published, they probably would not have attracted many readers, because it’s more fun to watch the entertainment put out by the media than to read a sober essay. Even if these writings had had many readers, most of these readers would soon have forgotten what they had read as their minds were flooded by the mass of material to which the media expose them. In order to get our message before the public with some chance of making a lasting impression, we’ve had to kill people.”
Say what you will about his methods, but his thinking was not delusional or unhinged from reason: this was a man pushed by society to his breaking point. His actions were not the result of an irrational outburst. He sought social change.
For a media apparatus that literally sold us the lies that led up to our 2003 invasion of Iraq; that has been largely silent on the erosion of the 4th Amendment under the FISA amendments and the free rein given to the National Security Agency; that has been largely silent on the erosion of the 5th Amendment under the 2012 NDAA; that is silent about illegal drone campaigns in foreign countries, extrajudicial assassinations of US citizens without trial, that still hasn’t gotten to the bottom of our war crimes in Fallujah, or what actually happened in the 2000 elections; that allows the sprawling security state unchecked growth, and that even cheerleads for the technologies that make it all possible, subsidized by mass production yields in consumer goods … all the media’s moralizing on issues like this rings hollow.
Anarchist thinker Emma Goldman had a different take on events like these:
“To analyze the psychology of political violence is not only extremely difficult, but also very dangerous. If such acts are treated with understanding, one is immediately accused of eulogizing them. If, on the other hand, human sympathy is expressed with the Attentäter, one risks being considered a possible accomplice. Yet it is only intelligence and sympathy that can bring us closer to the source of human suffering, and teach us the ultimate way out of it.”
Emma Goldman suggests that such violent outbursts are the product of an unusually sensitive soul, not an irrational madman or an unfeeling sociopath. She does not condone these events, but seeks to understand them both emotionally and rationally, as part of a larger program towards making the world a more just and equitable place. The media could learn a few things from her approach.
When the media or politicians raise the spectre of “gun control” following such incidents — which is logical enough, given the total lack of any more substantive debate about the causes and conditions underlying such violent outbursts — a certain highly vocal group of Americans starts clamoring about their rights.
Recent Supreme Court rulings depart from precedent dating back to the 1930s, namely, the Miller case, which affirmed that a sawed-off shotgun is not a militia weapon, and that individual ownership of such weapons can therefore be curtailed.
The decision in the 1939 Miller case reads, in part:
“The Court cannot take judicial notice that a shotgun having a barrel less than 18 inches long has today any reasonable relation to the preservation or efficiency of a well regulated militia, and therefore cannot say that the Second Amendment guarantees to the citizen the right to keep and bear such a weapon.”
In their close-reading of the text of the Constitution, the justices in Miller asserted:
“The Constitution, as originally adopted, granted to the Congress power –
“To provide for calling forth the Militia to execute the Laws of the Union, suppress Insurrections and repel Invasions; To provide for organizing, arming, and disciplining, the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress.
“With obvious purpose to assure the continuation and render possible the effectiveness of such forces, the declaration and guarantee of the Second Amendment were made. It must be interpreted and applied with that end in view.”
The Second Amendment, in the strictest sense, is not the “gun” amendment, but the “militia” amendment. The introductory clause, “A well regulated militia,” serves the legal function of a “whereas” clause, delineating the scope of the provision. In the Second Amendment’s equivalent clause under the Articles of Confederation, the text is explicit in that arms are to be kept “in public stores.”
This is not to say there is not a Constitutional right for individuals to keep and bear privately-owned weaponry; but if the individual right to own a private firearm derives from the Second Amendment, it is the result of activist judges rather than the original intent of the Founding Fathers. An individual right to own a firearm is more plausibly found in the 9th or the 10th Amendment.
Gun advocates, however, do not make this argument, because although they frequently want smaller government and despise activist judges, there is no judicial case history testing gun rights under these more plausible Amendments.
Moreover, if individual gun ownership derives from the 9th or 10th Amendment, this also opens the door to State regulation of gun ownership — something these anti-Federal government, States-rights gun advocates paradoxically want to avoid (opposition to the Chicago and DC handgun bans illustrate this point). They want to have their cake and eat it too, where the Federal government and gun rights intersect.
Now, we are all accustomed to restrictions put on legal products that we otherwise feel we have a right to purchase: there is a minimum drinking age for alcohol, all cars must drive on the same side of the street and stop at red lights, all electronic devices must limit their electromagnetic interference. There are countless other instances which, under certain conditions, may represent an inconvenience, but which, on the whole, represent the needs of a greater social good.
But the largest source of confusion and agitation on this point comes from the National Rifle Association. Founded in 1871 by Civil War veterans to promote rifle marksmanship, the powerful lobby completely ignores that gun ownership today is a very different proposition than it was in the early days of the Republic. The group’s position is thoroughly ahistorical.
In the Revolutionary era, guns were hand-made artisan items, not mass-produced commodities. Maybe one out of eight men owned a gun. As many as half these guns didn’t work, but were viewed as property nevertheless and passed on from generation to generation. They weren’t essential for hunting, as many Americans lived in cities or raised crops and livestock as farmers. There weren’t organized municipal police forces at the time. The original “individual mandate” from the Federal government required that able-bodied men purchase firearms due to a massive shortage.
Today, there is one gun per man, woman, and child in the country. It is a very different situation. And here, perhaps, lies the most atrocious facet of the NRA: it is little more than an industry trade group masquerading as a civil rights organization.
The term “libertarian” was first used by Joseph Déjacque in 1857 to describe a particular brand of anarchism. I have discussed elsewhere the extent to which modern libertarianism represents a perversion of classical anarchist thought. The extent to which this modern, impoverished view represents a clear and present danger to law and order deserves added emphasis.
Modern libertarians, in their criticism of the possibility that government action might distort market factors, consistently neglect to consider the extent to which industrial corporations distort market factors through the formation of monopolies and oligopolies. Former Federal Reserve Chairman Alan Greenspan even went so far as to criticize anti-trust legislation on the grounds that “No one will ever know what new products, processes, machines, and cost-saving mergers failed to come into existence, killed by the Sherman Act before they were born. No one can ever compute the price that all of us have paid for that Act which, by inducing less effective use of capital, has kept our standard of living lower than would otherwise have been possible.” Greenspan later acknowledged that his views contained a “flaw,” and expressed “shocked disbelief” that the banking sector failed to self-regulate.
For many industries, monopoly and oligopoly are the default market arrangements: consider Monsanto in the US soy market, Intel in the computer chip market, or the typical cable TV market, for example. These are not isolated cases, but represent a pervasive form of corporate organization at the industrial scale. Wherever producers are able to dictate prices, rather than rely on the signals sent by consumer purchasing decisions, producers exert coercive pressure on consumers, and market forces do not operate properly.
Given that industrial-scale corporations control far more wealth and resources than the government, they would seem to pose a more significant threat to individual liberty than government does. And, given the libertarian opposition to growth in government as a source of coercive influence, a self-consistent libertarian position should hold the growth of monopoly or oligopoly to be a considerable threat as well. Since modern industry is largely characterized by monopoly and oligopoly, a self-consistent libertarian position would therefore hold economic growth itself to be highly suspect. In railing against regulation, however, modern libertarians neglect the threat posed by industry, and in effect facilitate the growth of oligopoly.
This myopic character of modern libertarianism can be seen operating behind conservative opposition to “Obamacare,” which conservatives believe represents a government over-reach into the health care market.
The government’s intervention in health care isn’t an intervention in the free market, however, because the US health care industry isn’t governed by market forces. Take prescription drugs, for example. Each pharmacy pays a different price to the pharmaceutical company. Depending on the insurer, each consumer pays a different copay to the pharmacy. The consumer often doesn’t know the retail price of the drugs, and the prescribing doctor doesn’t know what the patient’s copay is. The price system only works when consumers — not producers — set the price. If consumers don’t know the price, their purchasing decisions don’t serve as signals to producers.
The health insurance industry in the US is best characterized as an extortion racket. Government mandated health insurance is a problem, but not for the reasons conservatives identify. The problem is not government interference in the market because there is no market. Because conservatives have the wrong diagnosis, their prescription for a cure is also wrong.
Free market thinker Friedrich Hayek, in The Road to Serfdom, saw monopoly as the proximate cause of modern totalitarianism. On page 194, he observes, “This movement is, of course, deliberately planned by the capitalist organizers of monopolies, and they are thus one of the main sources of this danger. Their responsibility is not altered by the fact that their aim is not a totalitarian system but rather a sort of corporative society in which the organized industries would appear as semi-independent and self-governing ‘estates.’ … A state which allows such enormous aggregations of power to grow up cannot afford to let this power rest entirely in private control.”
Congressman and presidential candidate Ron Paul has assumed the mantle of modern crusader for the Libertarian cause.
His campaign website claims: “Dr. Paul is the leading spokesman in Washington for limited constitutional government, low taxes, free markets, and a return to sound monetary policies based on commodity-backed currency.”
The modern Libertarian position, however, has a number of striking shortcomings that become even more pronounced when situated within the historical context that gave rise to the political philosophy of Libertarianism.
Limited Constitutional Government
While it is true that the scope of the US government has expanded over time, this isn’t an inherently negative thing. During George Washington’s Administration, 80% of the federal budget was dedicated to Indian eradication. In this sense, national security is the oldest subsidy program in US history; at the same time, it’s encouraging to consider that the government has largely abandoned systematic genocide and, throughout the Progressive era, dedicated itself to ways of promoting “the general welfare” and creating “a more perfect union” by extending voting rights to women and blacks.
Moreover, there is little historical evidence to suppose that the US government was originally meant to be limited in scope. This may be a Jeffersonian ideal, but is just that: idealism. Insofar as it is believed as historical fact today, it represents a form of popular mythology.
In the Federalist #14, James Madison wrote:
“If Europe has the merit of discovering this great mechanical power in government, by the simple agency of which the will of the largest political body may be concentrated, and its force directed to any object which the public good requires, America can claim the merit of making the discovery the basis of unmixed and extensive republics. It is only to be lamented that any of her citizens should wish to deprive her of the additional merit of displaying its full efficacy in the establishment of the comprehensive system now under her consideration … Let it be remarked … that the intercourse throughout the Union will be facilitated by new improvements. Roads will everywhere be shortened, and kept in better order; accommodations for travelers will be multiplied and meliorated; an interior navigation on our eastern side will be opened throughout, or nearly throughout, the whole extent of the thirteen States. The communication between the Western and Atlantic districts, and between different parts of each, will be rendered more and more easy by those numerous canals with which the beneficence of nature has intersected our country, and which art finds it so little difficult to connect and complete.”
The framers planned on territorial, technological, and infrastructure expansionism from the outset.
In the Federalist #10, Madison concluded:
“The question resulting is, whether small or extensive republics are more favorable to the election of proper guardians of the public weal; and it is clearly decided in favor of the latter.”
Low Taxes
The vehement support of low taxes among modern Libertarians is predicated on the assumption that taxation simply takes money away from citizens. This is short-sighted in the extreme. When the government takes in tax revenue, it spends it. Corporations and wealthy CEOs, on the other hand, do take money out of circulation: they put their surplus profits in banks, so banks can turn around and create more money through the fractional reserve system, which leads to inflation, which harms typical workers, who have seen wages stagnate even as worker productivity has steadily increased. But corporations and wealthy individuals who invest their money aren’t spending it; they aren’t contributing to the “real economy” the way government does when it allocates revenues.
Citizens benefit from government spending. The west was not won by strong individualists in combat with the wilderness; the west was won through government subsidies promoting the expansion of railroads, the telegraph, and through land grants. The westward expansion was the government-subsidized expansion of new technology.
By GDP, one third of government spending is dedicated to the armed forces. This is a continuation of our oldest subsidy program: national security. Since World War II, the permanent war economy has required a certain type of economic growth; during the Cold War, this took the form of planned obsolescence, which served the function of battlefield attrition in the context of a war without fighting. A large military provides important technological subsidies that the economics of growth capitalism require.
Free Markets
The free market is perhaps the most pernicious of the myths propagated by modern Libertarianism. By GDP, business accounts for about 75% of US economic activity, yet entrepreneurialism accounts for only one seventh of that. That means most economic activity in the US is on the scale of industry.
Industrial-scale commerce is characterized by organizational prowess, not entrepreneurial initiative. Most of what the industrial firm calls planning is precisely the elimination of market forces: Sony sells Playstations at a loss to undermine competition; Microsoft was fined $2 billion by European regulators for operating in open violation of EU trade laws over the course of a decade. If such tactics fail, an industrial firm will buy its competition outright. Buying firms is also a way for the industrial firm to enter a new market. From the perspective of the industrial firm, acquisitions assume the role of innovation, which is otherwise impossible where planning foresees outcomes. There is only competition where outcomes are uncertain: that is what a competition is. Entering a market anew by purchasing a successful firm is anti-competitive; but shareholders want a predictable return on their investment, so investors favor industrial planning over entrepreneurial initiative.
The modern industrial system is characterized not by competition, but by oligopoly. In a given market, you have one choice of cable provider. Intel makes 85% of the CPUs sold in computers today. 90% of the soy grown in the US is Monsanto Roundup Ready. ADM processes 50% of domestic corn ethanol. These are not isolated examples. General Electric makes NBC sitcoms and nuclear weapons: the intuitions of the entrepreneur have about as much validity with respect to industry as observations about a piggy bank have with respect to the fractional reserve system.
Despite the rhetoric of free markets, most economic activity in the US is the result of industrial-scale economic planning.
There is no free market where there is no competition; in industry, there is little meaningful competition. The price system only works if producers have no control over pricing; under oligopoly, it is precisely producers rather than consumers that determine prices.
The era of entrepreneurial capitalism vanished in the 19th Century, eclipsed by monopoly capitalism and again by the permanent war economy. There’s no going back, unless you’re willing to do without industry. Even the entrepreneur is a tool of industry: a purchaser of computers, media, manufactured furniture; independent retailers sell industrial goods.
The very notion of a free market is antiquated idealism.
Sound Monetary Policies
While there is plenty to criticize about fractional reserve banking, a return to a commodity based currency like the gold standard is not a viable solution. The value of a currency tied to the price of gold can easily be manipulated by wealthy individuals or organizations hoarding the available supply, leveraging scarcity to their advantage.
Where fractional reserve banking creates opportunities for abuse, the solution is increased government regulation which puts limits on how much money banks are able to create, and under what circumstances.
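The money-creation mechanism that such regulation would constrain can be illustrated with simple arithmetic. Below is a minimal sketch of the textbook money-multiplier model (a deliberate simplification for illustration, not a description of actual central-bank operations), showing how the reserve ratio caps the total money a banking system can create from an initial deposit; raising the required ratio is exactly the lever a regulatory limit would adjust:

```python
def money_created(initial_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Total deposits created by iterative lending under a reserve requirement.

    Each round, the bank holds back `reserve_ratio` of the deposit as
    reserves and lends out the remainder, which is then redeposited
    elsewhere in the system and lent again.
    """
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # the fraction lent out and redeposited
    return total

# With a 10% reserve requirement, $1,000 of base money supports close to
# $10,000 in total deposits (the textbook 1/r "money multiplier").
# Doubling the requirement to 20% roughly halves that ceiling.
print(round(money_created(1000, 0.10)))
print(round(money_created(1000, 0.20)))
```

The geometric series converges to initial_deposit / reserve_ratio, which is why a higher required ratio directly limits how much money the banking system can create.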
In their efforts to dismantle the public sector, modern Libertarians overlook several important key points.
First, the effort to privatize government services on the assumption that markets are more efficient neglects to consider that markets are more fundamentally characterized by competition. Do we really want competition over which rights we are guaranteed? Isn’t this at odds with the very concept of the Bill of Rights as representing “inalienable” rights? If a competition is fair, its outcome is unpredictable: market mechanisms are therefore a poor way to guarantee rights.
Second, in an industrial economy, a large public sector is essential to ensuring aggregate demand. An important feature of the public sector is that employees are neither rewarded in good times nor penalized in hard times; this allows industry to plan effectively.
Third, modern libertarians are in agreement with Democrats, Republicans, and industrialists generally in assuming that a certain type of growth capitalism is good. This, however, requires enormous subsidies and a large source of aggregate demand. Growth capitalism also overlooks the fact that we live on a planet with finite resources, and we cannot grow indefinitely.
Fourth, modern libertarians obsess about government intervention as a source of market distortion, but never mention oligopoly. Firms like Microsoft routinely engage in anti-competitive business practices in the US and abroad, treating the resulting fines as just another business expense on the road to market domination. Oligopoly does far more to distort markets than typical government regulatory activity.
Modern Libertarians might be right that our current government is a problem, but they have the wrong diagnosis and consequently the wrong prescription. They would never cite rising divorce rates as evidence that the institution of marriage should be abolished, but this is just the approach they take to government. They never adequately identify the specific dynamics that have led our government to become so grossly dysfunctional.
Debates about too much or too little regulation miss the historical context in which our government was instituted: the Lockean tradition, which was largely concerned with property, held property as subject to regulation by the state. “Life, Liberty, and the pursuit of Happiness” is a repurposing of Locke’s “life, liberty, and property.” In his Second Treatise on Civil Government, Locke wrote:
(sec. 120) “it would be a direct contradiction, for any one to enter into society with others for the securing and regulating of property; and yet to suppose his land, whose property is to be regulated by the laws of the society, should be exempt from the jurisdiction of that government, to which he himself, the proprietor of the land, is a subject.”
Furthermore, Friedrich Hayek’s “free market” program, spelled out in The Road to Serfdom (in which he also voiced opposition to laissez-faire capitalism), is quite compatible with a public health care system. After noting that “The functioning of a competition not only requires adequate organization of certain institutions like money, markets, and channels of information — some of which can never be adequately provided by private enterprise” (38), he asserts that “there can be no doubt that some minimum of food, shelter, and clothing, sufficient to preserve health and the capacity to work, can be assured to everybody” (120).
“Nor is there any reason why the state should not assist the individuals in providing for the common hazards of life against which, because of their uncertainty, few individuals can make adequate provision. Where, as in the case of sickness and accident, neither the desire to avoid such calamities nor the efforts to overcome their consequences are as a rule weakened by the provision of assistance — where, in short, we deal with genuinely insurable risks — the case for the state’s helping to organize a comprehensive system of social insurance is very strong” (121).
The Intellectual Poverty of Modern Libertarianism
The rhetoric that modern Libertarian thought borrows from classical anarchism neatly ignores the economic equality imperative that anarchists considered to be inseparable from absolute individualism. Modern Libertarianism also glosses over the bitter disputes between Marxists and anarchists: there is a tendency to view Communism as monolithic and as opposed to pure Capitalism; yet anarchism represents a third position, opposed to both Capitalism and Communism. Under the anarchist critique, for example, Communist China can be seen exactly for what it is: not a Communist enterprise in any substantive sense, but rather, as a variety of state capitalism.
The Occupy Philosophy blog recently posted an article about “plutocracy,” or rule by the wealthy, written by Brian Leiter, Director of The Center for Law, Philosophy & Human Values at the University of Chicago. In his commentary on American plutocracy, Leiter asserts that “at historical moments pregnant with the potential for significant social and economic change, the choice of language sometimes matters.” In light of these premises, let us examine his position.
Leiter identifies “plutocracy” as the primary ill in the modern United States. He asserts that “plutocrats” have undermined democracy. He states that “the United States is the most powerful ‘plutocracy’ in the world. It is no longer a democracy.”
To be precise about our “choice of language,” the United States Constitution guarantees a republican form of government, not democracy; and, insofar as the law originally limited political participation to white, land-owning males (the capitalist class), the United States has always been a plutocracy.
But the more profound problem with Leiter’s argument lies in his particular invocation of “plutocracy” as the source of the problem: to equate wealth with power does nothing to explain how wealth translates to power, but simply assumes this as a fact. This, in one sense, amounts to simply stating the obvious. It is like pointing out that businesses are run by businessmen, without discussing at all what varieties of business are present, how they operate, or how they are integrated with or, as the case may be, antagonistic to society at large.
I. Whither Capitalism?
Leiter begins by observing that “we are now in the fourth year of the worst economic catastrophe in the capitalist world since the Great Depression.” While this may, at first glance, appear uncontroversial, some qualification is needed with respect to the use of the term “capitalism.” Not only are there ideological disputes at issue, but also historical conditions that are, on the whole, inadequately addressed in contemporary discourse.
The late 19th Century, in which wage labor became a dominant mode of subsistence, brought about radical changes in the nature of capitalism as industry became increasingly institutionalized and bureaucratized. The entrepreneurialism of the revolutionary bourgeoisie gave way to a commingling of private and public bureaucracy — of capital and political power — and set the stage for the working conditions of the early 20th century.
It was here that we saw the ascendancy of the labor union as a serious political and economic power. The antagonism of government to unionization was a result of the union’s encroachment on the management prerogatives of industry (that is, the setting of wages and working conditions). The state, acting on behalf of capital, revealed the presence of the close-knit connections between political and industrial power that had developed during the second half of the 19th Century.
By the middle of the 20th century, this trend continued to the point where what had traditionally been called “the market” had ceased to be a relevant force in the dominant culture of the United States. Classical liberalism assumes that capital (land and machinery) is fixed, while labor is flexible. Industrialization caused mass migrations of labor from farms to urbanized areas, and workers readily acquired new skills to adapt to different types of labor.
As labor has become increasingly specialized, as two-income households have become more common, and as benefits have become an increasingly important part of employee compensation, labor has accordingly become less flexible. At the same time, capital has moved overseas, and become more flexible. By the end of the 20th century, the traditional relationship between capital and labor had been well inverted.
Today, when one uses the term “capitalism,” this term means different things to different people. The American conservative uses the word to invoke a nostalgic vision of 19th century entrepreneurialism. The American liberal typically uses the word to indicate a mode of collectivist action wherein professional managers control the means of industrial-scale production on behalf of absentee owners.
There is an important sense in which even Nazi Germany was a capitalist country. To be sure, it wasn’t market capitalism — any more than market capitalism prevails in the United States today — it was a form of monopoly capitalism that took the State as the primary consumer, and which used an imperialist war of expansion to organize production.
Although the official ideology of the Nazi Party espoused a socialist organization of society, the Nazis did very little to restructure private property or private profit along the lines of socialist ideology (except for the expropriation of Jewish wealth, which was handed over to industrialists and bankers).
Between World War I and World War II, German industrialists were a key component of German rearmament, and the same German industrialists were the key beneficiaries of the war economy. The industrialist Fritz Thyssen, for example, was a central financier of the Third Reich, as was the Association of German Industrialists. The automobile manufacturer Volkswagen was a private corporation that produced automobiles for the Third Reich. Max Amann profited enormously as a publisher of Nazi propaganda. The Zyklon B used in Nazi gas chambers was a commercial product.
Insofar as the Nazi economy was characterized by a vast agreement between industrialists and politicians, it is worth noting that American business and government alike agree that growth is the key to success. This is despite the fact that we live on a planet with finite resources, the exploitation of which is characterized by diminishing returns, and that increases in worker productivity are of only marginal benefit to workers themselves, who have seen their compensation stagnate or diminish for quite some time. On this point, government and industry agree: growth is of the utmost importance to the industrial-scale corporation.
II. Who Competes?
The typical American conservative will construct a binary opposition between capitalism understood as “free markets” and socialism understood as “economic planning.” This is, however, a false dichotomy.
The modern corporation is largely defined by organizational prowess, and insofar as these organizations are risk averse, the chief market operations of the industrial firm are actions meant to eliminate market forces. This is called planning. A farmer in the Midwest can be fairly certain of finding the fertilizer he needs when he needs it precisely because modern corporations are expert planners.
Stability is the enemy of competition (which must be unpredictable if it is to be fair), and insofar as corporations want to guarantee favorable performance for their shareholders, they set out to ensure economic stability and predictable growth. Marketing and advertising are means to ensure consistent demand. Corporations will sell their products at a loss to undercut competitors, and if this fails, they may buy their competition outright.
Because executives rarely go to prison when corporations break the law, corporations are apt to operate in open violation of the law if it will snuff out the competition — this is precisely what Microsoft did in Europe, paying $2 billion in fines during a decade of operation in direct violation of EU trade laws. Corporations that pollute are granted enormous subsidies: given that most homes and businesses must pay for garbage collection, why should the biggest polluters be exempt to the extent that they are? Insofar as schooling prepares students for employment, and college trains students in industry-standard skills and software applications, the cost of education represents a form of subsidy.
The result of this relentless push by modern industry to eliminate market forces at every opportunity has a profound impact on daily life — albeit one that is difficult to perceive at first glance. Although there are many channels to choose from on television, most markets are served by a cable TV monopoly. Although a consumer has many different brands of computers available to them for purchase, one firm — Intel — makes most of the chips in these computers. A few large firms make most of the hard drives and optical drives in these computers. Microsoft makes the operating system for most of these computers. Computers are highly commoditized, and relatively few firms control the market for this commodity.
This dominant market arrangement is known as oligopoly, and is characterized by collusion between a few major firms to mutually ensure their continued dominance. And it is not just the cable television market or the technology sector that is characterized by this arrangement: as of 2005, 90% of the soy crop grown in the US was of the patented Roundup-Ready variety sold by Monsanto. One company — Archer Daniels Midland (ADM) — claims close to 50% of the domestic market for ethanol.
US Government mandates that gasoline be blended with ethanol increased ADM’s net earnings by 26% in 2006 alone. This is just one way in which ADM is the beneficiary of subsidies and governmental planning. ADM also benefits from agricultural subsidies for corn, since most of the ethanol it produces is made from corn. In the 1990s, ADM was also the target of the largest price-fixing case in US history. It’s not just Microsoft that engages in anti-competitive business practices.
The demand for ethanol in gasoline, from which ADM benefits so enormously, is predicated on access to roads. Roads are heavily subsidized. For federal highways to be financially solvent, for example, the federal gasoline tax would need to be raised by 40¢ per gallon. The federal gasoline tax was last raised in 1993, by 4.3¢ per gallon, and whatever purchasing power that increase added has since been consumed by inflation.
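The inflation claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch in Python, using the 18.4¢-per-gallon federal gas tax in place since 1993 and an assumed cumulative inflation factor of roughly 1.6x between 1993 and the early 2010s (the factor is an illustrative assumption, not an official figure):

```python
def real_value(nominal_cents: float, cumulative_inflation_factor: float) -> float:
    """Deflate a nominal amount into base-year (1993) cents."""
    return nominal_cents / cumulative_inflation_factor

# The federal gas tax has sat at 18.4 cents/gallon since 1993.
# Assuming prices roughly 1.6x higher two decades later (an assumption
# for illustration), the tax buys only about what 11.5 cents bought in 1993:
print(round(real_value(18.4, 1.6), 1))
```

In other words, under these assumptions the unchanged nominal tax has lost more than a third of its real value, which is why the highway fund runs a structural shortfall.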
Not only are roads heavily subsidized, but the research that goes into advanced biofuels represents a subsidy as well: it could be argued that, given the economic law of diminishing returns, the money spent researching biofuels would be better spent on various forms of mass transit (though this would carry the unpleasant implication that the American way of life is, as presently constituted, unsustainable — so politicians say what they must to get elected, and corporations keep giving consumers whatever marketing departments tell consumers they want).
None of this has happened by chance: the market is not an anarchy of small entrepreneurial firms as it was in the first half of the 19th century. What we have in the West today is the result of planning. Given that most wealth in the US is held by corporations, not plutocrats or governments, it is fair to say that most of the decisions about the US economy are the result of planning, since the modern industrial corporation is characterized by planning (that is, collusion with related firms and with government) rather than market competition (or voter turnout).
III. What is Excessive about CEO Compensation?
Although Mr. Leiter is content with the populist appeals of the Occupy Movement, which hold that excessive CEO compensation is the result of “avarice,” the truth of the matter is more subtle. The problem of CEO compensation is not one of avarice, but, rather, is a particular solution to the personnel needs of the industrial corporation.
Most CEOs are already wealthy by the time they are recruited. Pay itself is not an incentive to work, because they face neither fear of privation nor need for additional material comfort. There are, then, two main approaches to providing them an incentive to work: psychological identification with the goals of the firm, or increased status.
Where CEOs are recruited, rather than obligated to claw their way up through middle management, it is more difficult to get them to identify with the goals of the firm. In certain industries this can be accomplished by identifying the goals of the firm with specific social objectives (such as national defense), or through the dogma of indefinite growth (in which even a tobacco company executive can participate, and thereby contribute to society) — and it is here that a peculiar brand of nationalism comes into play — but in general it is easier to equate wealth with status, and to motivate the CEO by enhancing his or her status accordingly (a move that also satisfies the contemporary quantitative mindset).
And so growth becomes a central feature of American capitalism — providing both a psychological justification for those who manage industry on behalf of absentee owners (whose status derives from the circumstances under which they need only sit back and watch the money roll in) and the means by which the firm confers status on the CEO. It is through this fixation on growth that modern capitalism takes on an imperialist aspect. It is moreover worth noting in this connection that the CEO is no more a capitalist than the typical pro-business unionized auto worker: the CEO is management, not an individual proprietor, and is not inherently interested in the amassing of capital.
Of course, to the 99%, CEOs are, so to speak, high-status (in addition to being upper-class). But what is often ignored is the extent to which they inhabit a completely separate social world with completely distinct norms. There is, among industry, politics, and the military, a distinct affinity group — a set of shared goals, management practices, and close social ties. You can see evidence of this affinity group in the ease with which people who attain this status move from one sector to another.
Take former Vice President Dick Cheney, for example: he went from Secretary of Defense (military) to CEO of Halliburton (industry) to Vice President of the US (electoral politics). It is not the case that the object of the work in any one occupation directly qualified him for the others, especially in this era of specialization. Yet what Cheney specialized in were certain management practices, bureaucratic proficiencies, and the cultivation of a specific social network. His case is not an isolated one.
The personnel problem becomes a social problem where these people, who aren’t always the wealthiest, but who have access to authority and the media, set about normalizing the persistence of the affinity group from which they benefit. It is not a matter of some wealthy folks being “well-intentioned” while others are “sociopathic” — though many in positions of power do exhibit sociopathic personality traits. There is, more substantively, the important matter of why so many Americans go along with things.
Many Americans see this collusion as mere waste and arrive at the conclusion that government should be run like a business — believing that it would then work more efficiently — without ever stopping to think for half a second about what that means. If government were to hold efficiency to be of paramount importance, it would simply kill the infirm, rather than offer Social Security. This is, of course, contrary to the Constitution’s promise to “promote the general welfare,” understood as a means of guaranteeing “life, liberty, and the pursuit of happiness” (the Declaration of Independence’s phrase), but markets, by definition, offer few guarantees. It is a very circumscribed definition of “efficiency,” but one that highlights why “playing the stock market” is so often equated with gambling. Sometimes the bottom line isn’t the bottom line.
There are other problems with holding that government should be run like a business. Businesses are not democratic organizations; they are authoritarian (you do what your boss tells you to do, and you don’t get to vote your boss out of office if you don’t like it). Their management practices are in many cases proprietary (as opposed to publicly announced laws), and their office holders are appointed rather than elected.
Furthermore, there are reasons to suppose that the ethical standards of conduct in business and in government are incompatible. Whereas a businessman must be on the lookout for opportunities to engage in commerce, when an office holder does the same, it’s called bribery or a conflict of interest.
Business (of the desirable, market-based kind) needs competition, but government needs loyalty. It doesn’t even make sense to think of government as competing: the whole point of a constitutional republic is that the state has a monopoly on the legitimate use of force as a means of coercion; the alternative is vigilante justice.
And What of It?
Where the notion that business represents a superior model for governance coincides with the ideology that political freedom derives from economic freedom, it is worth noting that the sort of absolute freedom advocated by American conservatives is not the pinnacle of civil society but its complete opposite.
In his Second Treatise of Civil Government, published in 1690, John Locke states: “where there is no law, there is no freedom: for liberty is, to be free from restraint and violence from others; which cannot be, where there is no law: but freedom is not, as we are told, a liberty for every man to do what he lists” (§57). Liberty is having assurances that everybody obeys the same law.
Laissez-faire economics is contrary to the Western constitutional tradition, as originally conceived, and as understood in the mid-20th century. Some centuries after Locke, in 1944, free-market advocate Friedrich Hayek echoed much the same position in articulating his view of the rule of law: “The Rule of Law thus implies limits to the scope of legislation: it restricts it to the kind of general rules known as formal law and excludes legislation either directly aimed at particular people or at enabling anybody to use the coercive power of the state for the purpose of such discrimination. It means, not that everything is regulated by law, but, on the contrary, that the coercive power of the state can be used only in cases defined in advance by the law and in such a way that it can be foreseen how it will be used” (The Road to Serfdom, Chapter 6). Provided that individuals have a say in what laws are passed, freedom is having to obey only the law, and not having to yield to the whims of others.
The contemporary trend to privatize governmental services, then, is contrary to the goals of a just, democratic (or republican, as the case may be) society. It takes public resources and removes them from democratic control, under the banner of reinstating some nostalgic, 19th-century vision of entrepreneurial capitalism.
Of course, we have the benefit of history to tell us what that style of capitalism leads to: 15-hour workdays, no weekends, sweatshop conditions, mere subsistence pay, occupational safety hazards, and the like. Union organizers fought tooth and nail for decent working conditions. And already we can see both how far we’ve slid back toward these precise conditions, and how they represent not the cooperation of individuals under the law, but the subjugation of individuals to whatever working conditions employers dictate. This is an issue of no small concern, given that most people spend the better part of their waking hours, for the better part of their lives, working.
Where Mr. Leiter explains, “The social and economic world is both vast and complex, and in market economies, all the incentives of daily life demand focus on the immediate moment: closing this deal, getting to this business meeting, pleasing that client and, overridingly, getting what you can for yourself,” he is guilty of a gross oversimplification.
The very existence of government subsidies favorable to industry speaks to the fact that these firms plan quite far ahead, and the lengths to which they go to undermine competition speak to the extent to which they are averse to market participation. The flaw here is the assumption that the conditions of a market economy are a relevant factor in shaping the shared goals of industry and politics. These conditions do not prevail; rather, monopoly and oligopoly prevail. There may be competition among filling stations, convenience stores, or fast food restaurants within a particular neighborhood, but the franchise agreements under which these small operators open up shop insulate the oligopoly, here as elsewhere, from the risks of actual market participation.