On Brett Wagner’s “Apocalypse Blaze.”

A friend of mine, whom I first met when he was a student in my poetry class, was writing a post-apocalyptic novel.  There’s nuclear fallout; civilization crumbled.  A few people who haven’t yet caught the sickness are traveling together, fantasizing that they could restart the world.

When the bombs fell, governments collapsed.  Not immediately, but within the year.  The idea of government is predicated on people getting things done: fire fighters who might rescue you, police officers who might protect you, agencies who maintain the roads and ensure the water is safe to drink.  All of which requires money, which the government can print, but those slips of paper don’t mean much if no one will accept them in exchange for food or a safe place to sleep.

“Hangrith,” that’s a beautiful word.  It’s archaic, means a realm in which you can expect security and peace.  Literally, “within the grasp of the king’s hand.”  While you are here, the government will protect you.

Within the grasp. Image by Enrico Strocchi on Flickr.

My friend was skeptical of the concept.  The king’s hand wasn’t cradling him, nor wielding a protective sword to keep orcs at bay; instead, my friend felt the gauntlet at his throat.  We’d met in jail, where he’d landed for addiction.  We volleyed emails after he left, while he was working on his novel.  And then he was in my class again.  Failed check-in.  Once you’re on probation, you’re given numerous extra laws to follow – people on probation don’t have the rights of other citizens, and minor transgressions, like missing a meeting or late payment for a fine, can land you back in jail.

And so it wasn’t difficult for my friend to imagine a world in which there was no government to rely upon.  To reach their destination, his heroes have to barter.  Which meant that, suddenly, my friend’s skills might be treated with respect.

After all, what would people be most willing to trade their food for in a world where waking life was a ravaged nightmare?

“I took a patch with me underground when shit hit the fan.  Grew it hydroponically.  Cared for that shit like a baby.  Gave me something to do while I was in that shelter.  Weed is my money.”

Rampant economic inequality, fractured communities, and the spread of attention-grabbing toys that prevent us from making eye contact with one another – these have all contributed to the increase in drug use and addiction in contemporary America.  But the world could be worse.  After the blast, everyone would share the stress and trauma that people in poverty currently weather.

Methamphetamine lets people keep going despite crushing hopelessness and despair.  Meth use is widespread in many hollowed-out towns of the Midwest.  It’s a problematic drug.  At first, people feel good enough to get out of bed again.  But methamphetamine is metabolized so slowly that users don’t sleep.  Amphetamines themselves are not so toxic, but lack of sleep will kill you.  After five, ten, or twenty days awake, vicious hallucinations set in.  The drug is no longer keeping you alert and chipper enough to work – static crackles through your mind, crustacea skitter beneath your skin, shadows flit through the air.

They walked on, their path lit by the moon, among the wreckage of cars and piles of trash and useless electronics that were heaped up until they came to a concrete slab with a manhole in it.

“This is my crib, where I sat out that day.”

Image by Joe Shlabotnik on Flickr.

After the fall, experience in the drug trade lets people carve out a living.  And experience on the streets lets them survive.  All the ornate mansions, people’s fine wood and brick homes, have fallen into disarray.  Their inhabitants caught the sickness, or else died in the initial blast.

The survivors were people who slept outdoors, protected by thick concrete.  Not in bunkers; the blast came too suddenly for that.  Beneath bridges, tucked into safe alcoves, or down on dry ledges of the sewers.

My friend understood what it meant to make shelter where you could find it.

“After Pops gave me the boot, I had to find a way to support myself; that’s when I learned my hustle.  And Penny here was one of my biggest customers.”

“You used to be her dealer?”

“Damn, dude, you make it sound dirty.  Weed ain’t no drug, it’s medicine.”

The heroes plan to go west, aiming for San Francisco.  When I was growing up, I had that dream too – I’d read a little about the Merry Pranksters and failed to realize how much the world might have changed.  People living around the Bay Area are still interested in polyamory and psychedelic drugs, but that doesn’t mean they’re nice.  It was heartbreaking to see how racist and ruthless the people there were, especially since I’d expected to find a hippie paradise.

And so my spouse and I moved back to the Midwest. 

But I understand the dream – we’re surrounded by a lot of retrograde cudgleheads, here.  The only problem is that people are pretty similar everywhere else. 

“An agrarian-based society.  Where everyone works to grow what they eat.  The soil might be okay.  We won’t know all the effects of the radiation until later.”

“Well, I know for sure it’s mutated animals near the hit zone.  I’ve seen all kindsa freaky shit.  People too.  It’s like the wild west again, where we’re going.”

The actual “wild west,” in U.S. history, was horrible.  Racism, genocide, misogyny.  But the ideal – a lawless land beyond the hangrith where a person’s ingenuity reaps fortune instead of jail time – might be enough to keep someone going.

And it worked, for a while.  My friend carved out months of sobriety.  He was volunteering at the community food kitchen.  In the late afternoons, he’d type using a computer at the public library.  He was always a very hopeful person; while he was in jail, he asked me to bring physics textbooks so he could use the time productively.  You can get a sense of his enthusiasm from his poetry:

“BIRD TOWN, TN”

by Brett Wagner

Picture this young boy

whose favorite color was the blank white

of a fresh page.  We went running once

on the spring green grass.

As I’ve heard it said,

“There’s nowhere to go but everywhere”

so we ran anywhere in this

jungle gym world.

Somewhere the clouds didn’t smother us

and the hills didn’t exhaust us,

where robins, blue jays, and cardinals sing

like boddhisattvas that have taken wing.

But then he slipped.  A first drink led to more.  He’d been in sober housing; he was kicked out, back onto the streets.  A friend, another New Leaf volunteer, gave him enough money for a few days in a hotel.

We had several cold snaps this winter.  Two nights after his hotel money ran out, temperatures dropped. 

We’d made plans for my friend to join us for a panel with Dave Eggers, where we’d discuss storytelling and incarceration. 

Instead, at 29 years old, Brett Wagner froze to death.  His novel is unfinished; his heroes will not build a new agrarian society.

They had grim odds.  Nuclear fallout is a killer.  But my friend was felled by the apocalypse that’s already upon us.

Header image for this post by akahawkeyefan on Flickr.

On taxing robots.

My family recently attended a preschool birthday party at which cupcakes were served.  I watched in horror as the children ate.  Some used grimy fingers to claw off the top layer of frosting.  Others attempted to shove the entire frosted top into their gaping maws, as though they thought their jaws might distend snake-like.  These kids failed, obviously, and mostly smashed the cupcakes against their faces.

And then, a mere two minutes later, the kids all slid from their chairs to run off and rampage elsewhere in the house.  The table was a wreckage; no child had actually eaten a cupcake.  They’d eaten frosting, sure, but left the remnants crumbled and half-masticated on their plates.

Someone needed to clean up.

If I were a better person, I would have offered to help.  But I didn’t.  I just stood there with my mouth twisted into a grimace of disgust.

I wonder why it’s so hard for our family to make friends.  Surely my constant scowls seem charming!  Right?  Right?

Even at our own house, where our compost bin ensures that uneaten food isn’t completely wasted … and where my own children are responsible for the entirety of any mangled remnants … I loathe scraping the plates clean. 

And I don’t like washing dishes.

Luckily, we have a dishwasher.  Slide dirty dishes into the rack, push a button, and, voila, a robot will make them clean!

Automation is great!

Also, automation is making our world worse.

Although official unemployment in the United States is low, the economy is doing poorly.  The official statistics don’t count people who’ve given up, and they don’t count people who are stuck with worse jobs than they would’ve had in the past.

Low unemployment is supposed to drive up people’s salaries.  When a company knows that there are few available job seekers, they’ll pay more to prevent you from leaving.  But that’s not happening, currently.  If a company knows that your life is sufficiently bleak, and also that no other company is planning to treat you better, then they can keep salaries low.  Financial misery lets employers operate like a cartel.

Image by Farcaster.

Despite low unemployment, most employees are quite replaceable.  If you won’t do the work, a robot could instead.  Just like my beleaguered dishwasher, filled with plates and bowls too gross for me to want to touch, a robot won’t advocate for better treatment.  And a robot draws no salary.  If you have the wealth to invest in a dishwasher – or a washing machine, or a donut maker, or a legal-document-drafting algorithm – it’ll serve you tirelessly for years.

People often say that the jobs of the future will be those that require a human touch.  Those people are wrong.  Your brain is a finite network of synapses, your body an epidermis-swathed sack of gristle.  In the long run, everything you do could be replicated by a machine.  It could look like you, talk like you, think like you – or better.

And – after its initial development and manufacture – it wouldn’t cost its owners anything.

As our automation technologies improve, more and more of the world’s income will be shunted to the people who are wealthy enough to own robots.  Right now, human delivery people are paid for dropping off the packages people buy from Amazon – but as soon as Jeff Bezos owns drones and self-driving cars, he’ll keep those drivers’ salaries for himself.  As your labor becomes less valuable relative to the output of a machine, it’s inevitable that inequality will increase.  Unless we implement intentional redistribution.

A recent editorial by Eduardo Porter for the New York Times advocates for a tax on automation.  Perhaps this seems sensible, given what I’ve written above – if robots make the world worse, then perhaps robots should be made more expensive.

After all, the correct way to account for negative externalities in a capitalist economy is through taxation.  That’s how capitalism solves the tragedy of the commons.  If the cost of an action is paid by everyone collectively – like pollution, which causes us all to drink dirty water, or breathe asthma-inducing air, or face apocalyptic climate change – but the profit is garnered by an individual, then that person’s private cost-benefit analysis will call for too much pollution.

For every dollar the Koch brothers earn, the world at large might need to spend $1,000 fighting climate change.  That dollar clearly isn’t worth it.  But if each dollar they earn increases their personal suffering by only a nickel, then of course they should keep going!  That’s what capitalism demands.  Pollute more, and keep your ninety-five cents!

But a person’s private priorities can be made to mirror our society’s by charging a tax equal to the total cost of pollution.  Then that person’s individual cost-benefit analysis will compare the total cost of an action against its total benefit.

A pollution tax wouldn’t tell people to stop being productive … it would simply nudge them toward forms of production that either pollute less, or are more valuable per unit of pollution.
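
If it helps to see that logic laid out, here is a back-of-the-envelope sketch in Python, using the invented numbers from the Koch example above – a dollar of profit, a nickel of personal suffering, a thousand dollars of total harm.  None of the figures are real; they only make the arithmetic concrete.

def net_private_benefit(profit, personal_harm, pollution_tax=0.0):
    # what the polluter personally gains from one more dollar of polluting profit
    return profit - personal_harm - pollution_tax

profit = 1.00          # dollars earned by polluting a bit more
personal_harm = 0.05   # the "nickel" of suffering the polluter personally feels
social_cost = 1000.00  # what the world as a whole pays

print(net_private_benefit(profit, personal_harm))               # 0.95 -- pollute away!
print(net_private_benefit(profit, personal_harm, social_cost))  # -999.05 -- not worth it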

But automation isn’t harmful.

Yes, automation is making the world worse.  But automation itself isn’t bad.  I’m very happy with my dishwasher.

If we want to use tax policy to improve the world, we need to consider which features of our society have allowed automation to make the world worse.  And it’s not the robots themselves, but rather the precipitous way that current wealth begets future wealth.  So the best solution is not to tax robots, specifically, but rather to tax wealth (with owned robots being a form of wealth … just like my dishwasher.  Nothing makes me feel rich like that lemony-fresh scent of plates I didn’t have to scrub myself.)

And, after taxing wealth, we would need to find a way to provide money back to people.

World War II taught us that unnecessary production – making goods whose only value was to be used up and decrease the value of other goods, like bombs and tanks and guns – could improve the economic situation of the world.  We ended the Great Depression by paying people to make weapons.  And we could ameliorate the current economic malaise with something similar. 

But an actual war seems misguided, what with all the killing and dying.  There are better, kinder ways to increase wasteful government spending.

If I were in charge of my own town, I’d convert the abandoned elevator factory into a bespoke sneaker and clothing factory.  The local university offers a degree in fashion design, and it might be nice if there were a way for students to have batches of five or ten items produced to specification.

As a business, this wouldn’t be economically viable.  That’s the point.  It would be intentionally wasteful production, employing humans instead of robots.  Everything would be monetarily inefficient, with the product sold below cost.

It’d be a terrible business, but a reasonable charity.

With alarmingly high frequency, lawmakers try to impose work requirements on welfare payments.  I obviously think this policy would be absurd.  But it wouldn’t be so bad if there were government-provided work opportunities.

Robots can make shoes cheaper.  That’s true.  But by taxing wealth and using it to subsidize wasteful production, we could renew people’s sense of purpose in life and combat inequality.  No wars required!

And no need for a tax targeting my dishwasher.  Because, seriously.  I’ve got kids.  I don’t want to clean up after them.  Would you?

On suboptimal optimization.

I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra. 

Optimization is a wonderful mathematical tool.  Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena.  Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum.  Philosophers love it – how can we provide the most happiness for a population?  Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s Philosophical Investigations with this New York Times Magazine article on machine learning and a primer on principal component analysis).
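
To make “solving for a constrained maximum” concrete, here is a toy profit-maximization sketch in Python using scipy’s linear-programming solver.  The two products, their profits, and the resource limits are all invented for illustration.

from scipy.optimize import linprog

# Maximize profit 3*x0 + 5*x1 from two products, subject to limited
# labor hours (2*x0 + 1*x1 <= 100) and raw material (1*x0 + 3*x1 <= 90).
c = [-3, -5]                    # linprog minimizes, so negate the profits
A_ub = [[2, 1], [1, 3]]
b_ub = [100, 90]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x)                 # the optimal production plan: [42. 16.]
print(-result.fun)              # the maximum profit: 206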

But, even though optimization problems are useful, the math behind them can be tricky.  I’m skeptical that this mathematical technique is essential for everyone who wants a B.A. to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology.  She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more.  A basic understanding of statistics might make people better citizens.

Whereas … linear algebra?  This is a beautiful but counterintuitive field of mathematics.  If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math.  A deep understanding of linear algebra can enliven your study of quantum mechanics.

The summary of quantum mechanics: animation by Templaton.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp on linear algebra.  He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back.  He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession.  After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet.  Somebody in an office decided that all university students here need to pass this class.  I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse.  Numbers will always do what they’re supposed to, but each such problem begins with a choice.  What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Figure 1 from Eykholt et al., 2018.

Most automobile companies are researching self-driving cars.  They’re the way of the future!  In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw.  A car that accelerates instead of braking in that situation is not operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible?  That’s a feature.  That’s what self-driving cars are designed to do.

A machine designed to create traffic jams?

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable.  After all, the AI piloting a self-driving car must include an explicit response to the trolley problem.  Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians?  This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.” 

Otherwise, the manufacturers won’t sell cars.  Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life and death decision often.  But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi.  If demand for transportation happened to be lower than the quantity of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want for the car to work as a taxi, just as an extra source of income.  But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles.  Some self-driving cars would spend those hours unoccupied.

But they won’t park.  In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc. 

The cost to just keep driving is generally going to be lower than $2 per hour.  Worse, this cost is a function of the car’s speed.  If the car is idling at a dead stop, it will use approximately 0.1 gallon per hour, costing 25 cents per hour at today’s prices.  If the car is traveling at 30 mph without stopping, it will use approximately 1 gallon per hour, costing $2.50 per hour.

To save money, the car wants to stay on the road … but it wants for traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem.  It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there.  As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
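
Here is a rough sketch of that comparison in Python, using the numbers quoted above – idling burns about 0.1 gallon per hour, cruising at 30 mph burns about 1 gallon per hour, gas runs roughly $2.50 per gallon, and the cheapest parking is about $2 per hour.  The straight-line interpolation between those two fuel-burn points is my own simplifying assumption.

GAS_PRICE = 2.50          # dollars per gallon
PARKING = 2.00            # dollars per hour for the cheapest parking

def gallons_per_hour(speed_mph):
    # linear interpolation between the essay's two data points:
    # 0.1 gal/hour at a standstill, 1.0 gal/hour at 30 mph
    return 0.1 + (1.0 - 0.1) * (speed_mph / 30.0)

def hourly_cost(speed_mph):
    return gallons_per_hour(speed_mph) * GAS_PRICE

for speed in (0, 5, 10, 30):
    print(f"{speed:>2} mph: ${hourly_cost(speed):.2f} per hour")
print(f"parking: ${PARKING:.2f} per hour")
# Creeping along is always cheaper than cruising at full speed, and at
# every speed shown except 30 mph it's cheaper than paying to park.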

Photo by walidhassanein on Flickr.

Pity the person sitting behind the wheel of an occupied car on those streets.  All the self-driving cars will be having a great time stuck in that traffic jam: we’re saving money!, they get to think.  Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide.  But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires we set prices for scarce resources … like clean air, or habitable planets.

On alternate truths.

Sometimes the alternatives are jarring – you look and count a certain number, another person proffers a radically different amount.

Surely one of you is mistaken.

In the United States, there’s a rift between those who overestimate certain values (size of inauguration crowds, number of crimes committed by immigrants, votes cast by non-citizens, rates of economic growth) and their fellows.

In the 1960s and 70s, psychologist Henri Tajfel designed experiments because he was curious: how is genocide possible?  What could sap people’s empathy so severely that they’d murder their thinking, perceiving, communicating neighbors?

Tajfel began with a seemingly irrelevant classification.  In the outside world, people have different concentrations of epidermal melanin, they worship different deities, they subscribe to different political philosophies.  But rather than investigate the gulf separating U.S. Democrats from Republicans, Tajfel recruited a homogeneous set of teenage schoolboys to participate in an experiment.

One by one, the kids were shown a bunch of dots on a screen and asked to guess how many dots were there.  Entirely at random, the kids were told they’d consistently overestimated or underestimated the number of dots.  The numbers each kid guessed were not used for this classification.

Then the kids participated in a pretty standard psychology experiment – they had various amounts of money to split between other study subjects.  In each case, the kids were told that one of the recipients would be a fellow over-estimator (not themselves, though), and the other recipient would be an under-estimator.

An intuitive sense of “us vs. them” would pit study subjects against the researchers – kids should assign payoffs to siphon as much money as possible away from the university.  When every option has an equivalent total payoff, you might expect a fair distribution between the two recipients.  After all, the categorization was totally random, and the kids never had a chance to meet the other people in either their own or the other group.

Instead, over-estimators favored other over-estimators, even at the cost of lowering the total payout that the kids would receive from the researchers.  Oops.

We should expect our current over-estimators to favor each other irrationally, too.  These groups aren’t even randomly assigned.  And many of the alternate truths must seem reasonable.  Who among us doesn’t buy in to the occasional fiction?

For instance, there’s the idea of “free market capitalism.”  This is fictitious.  In the absence of a governing body that threatens violence against those who flout the rules, there can’t be a market.

Sometimes anarchists argue that you could have community members enforce cultural norms – but that is a government (albeit a more capricious one, since the “cultural norms” might not be written down and shared policing introduces a wide range of interpretations).  Sometimes libertarians argue that a government should only enforce property rights, but they purposefully misunderstand what property rights consist of.

If you paint a picture, then I spray it with a hose, you won’t have a picture anymore.  If you have a farm, then I buy the adjacent property and start dumping salt on my land, you won’t have a farm anymore.  I don’t have to physically take things out of your hands to eliminate their value.

If you own a house, then I buy the adjacent property and build a concentrated animal feeding operation, the value of your house will plummet.  You won’t have fresh air to breathe.

Or maybe I want to pump fracking chemicals into your aquifer.  You turn on your tap and poison spills out.

We have rules for which of these actions are acceptable and which are not.  The justifications are capricious and arbitrary – honestly, they have to be.  The world is complex, and there’s no pithy summary that solves all our quandaries.  Right to swing my arm, your nose, pffft, nonsense.  Why’d you put your nose there, anyway?

And our government enforces those rules.  The market is not free.  Corporations that denounce government intervention (e.g. the tariffs that the dairy industry opposes, a carbon tax, etc.) seek government interventions (now the dairy industry hopes that producers of soy milk, almond milk, coconut milk, etc., will be forced to rename their products).

But this probably doesn’t feel like hypocrisy.  We humans are good at believing in alternate truths.

On automation, William Gaddis, and addiction.

I’ve never bought meth or heroin, but apparently it’s easier now than ever.  Prices dropped over the last decade, drugs became easier to find, and more people, from broader swaths of society, began using.  Or so I’ve been told by several long-term users.

This is capitalism working the way it’s supposed to.  People want something, others make money by providing it.

And the reason why demand for drugs has increased over the past decade can also be attributed to capitalism working the way it’s supposed to.  It takes a combination of capital (stuff) and labor (people) to provide any service, but the ratio of these isn’t fixed.  If you want to sell cans of soda, you could hire a human to stand behind a counter and hand sodas to customers, or you could install a vending machine.

The vending machine requires labor, too.  Somebody has to fill it when it’s empty.  Someone has to fix it when it breaks.  But the total time that humans spend working per soda is lower.  In theory, the humans working with the vending machine are paid higher wages.  After all, it’s more difficult to repair a machine than to hand somebody a soda.

As our world’s stuff became more productive, fewer people were needed.  Among ancient hunter gatherers, the effort of one person was needed to feed one person.  Everyone had to find food.  Among early farmers, the effort of one person could feed barely more than one person.  To attain a life of leisure, a ruler would have to tax many, many peasants.

By the twentieth century, the effort of one person could feed four.  Now, the effort of one person can feed well over a hundred.

With tractors, reapers, refrigerators, etc., one human can accomplish more.  Which is good – it can provide a higher standard of living for all.  But it also means that not everyone’s effort is needed.

At the extreme, not anyone’s effort is needed.

There’s no type of human work that a robot with sufficiently advanced AI couldn’t do.  Our brains and bodies are the product of haphazard evolution.  We could design something better, like a humanoid creature whose eyes registered more of the electromagnetic spectrum and had no blind spots (due to an octopus-like optic nerve).

If one person patented all the necessary technologies to build an army of robots that could feed the world, then we’d have a future where the effort of one could feed many billions.  Robots can write newspaper articles, they can do legal work, they’ll be able to perform surgery and medical diagnosis.  Theoretically, they could design robots.

Among those billions of unnecessary humans, many would likely develop addictions to stupefying drugs.  It’s easier to lapse into despair when you’re idle or feel no sense of purpose.

In Glass House, Brian Alexander writes about a Midwestern town that fell into ruin.  It was once a relatively prosperous place; cheap energy led to a major glass company that provided many jobs.  But then came “a thirty-five-year program of exploitation and value destruction in the service of ‘returns.’ ”  Wall Street executives purchased the glass company and ran it into the ground to boost short-term gains, which let them re-sell the leached husk at a profit.

Instead of working at the glass company, many young people moved away.  Those who stayed often slid into drug use.

In Alexander’s words:

Even Judge David Trimmer, an adherent of a strict interpretation of the personal-responsibility gospel, had to acknowledge that having no job, or a lousy job, was not going to give a thirty-five-year-old man much purpose in life.  So many times, people wandered through his courtroom like nomads.  “I always tell them, ‘You’re like a leaf blowing from a tree.  Which direction do you go?  It depends on where the wind is going.’  That’s how most of them live their lives.  I ask them, ‘What’s your purpose in life?’  And they say, ‘I don’t know.’  ‘You don’t even love yourself, do you?’  ‘No.’ “

Trimmer and the doctor still believed in a world with an intact social contract.  But the social contract was shattered long ago.  They wanted Lancaster to uphold its end of a bargain that had been made obsolete by over three decades of greed.

Monomoy Capital Partners, Carl Icahn, Cerberus Capital Management, Newell, Wexford, Barington, Clinton [all Wall Street corporations that bought Lancaster’s glass company, sold off equipment or delayed repairs to funnel money toward management salaries, then passed it along to the next set of speculative owners] – none of them bore any personal responsibility. 

A & M and $1,200-per-hour lawyers didn’t bear any personal responsibility.  They didn’t get a lecture or a jail sentence: They got rich.  The politicians – from both parties – who enabled their behavior and that of the payday- and car-title-loan vultures, and the voters of Lancaster who refused to invest in the future of their town as previous generations had done (even as they cheered Ohio State football coach Urban Meyer, who took $6.1 million per year in public money), didn’t bear any personal responsibility.

With the fracturing of the social contract, trust and social cohesion fractured, too.  Even Brad Hutchinson, a man who had millions of reasons to believe in The System [he grew up poor, started a business, became rich], had no faith in politicians or big business. 

“I think that most politicians, if not all politicians, are crooked as the day is long,” Hutchinson said.  “They don’t have on their minds what’s best for the people.”  Business leaders had no ethics, either.  “There’s disconnect everywhere.  On every level of society.  Everybody’s out for number one.  Take care of yourself.  Zero respect for anybody else.”

So it wasn’t just the poor or the working class who felt disaffected, and it wasn’t just about money or income inequality.  The whole culture had changed.

America had fetishized cash until it became synonymous with virtue.

Instead of treating people as stakeholders – employees and neighbors worthy of moral concern – the distant owners considered them to be simply sources of revenue.  Many once-successful businesses were restructured this way.  Soon, schools will be too.  In “The Michigan Experiment,” Mark Binelli writes that:

In theory, at least, public-school districts have superintendents tasked with evaluating teachers and facilities.  Carver [a charter school in Highland Park, a sovereign municipality in the center of Detroit], on the other hand, is accountable to more ambiguous entities – like, for example, Oak Ridge Financial, the Minnesota-based financial-services firm that sent a team of former educators to visit the school.  They had come not in service of the children but on behalf of shareholders expecting a thorough vetting of a long-term investment.

This is all legal, of course.  This is capitalism working as intended.  Those who have wealth, no matter what historical violence might have produced it, have power over those without.

This is explained succinctly by a child in William Gaddis’s novel J R:

“I mean why should somebody go steal and break the law to get all they can when there’s always some law where you can be legal and get it all anyway!”

For many years, Gaddis pondered the ways that automation was destroying our world.  In J R (which is written in a style similar to the recent film Birdman, the focus moving fluidly from character to character without breaks), a middle schooler becomes a Wall Street tycoon.  Because the limited moral compass of a middle schooler is a virtue in this world, he’s wildly successful, with his misspelling of the name Alaska (“Alsaka project”) discussed in full seriousness by adults.

Meanwhile, a failed writer obsesses over player pianos.  This narrative is continued in Agape Agape, with a terminal cancer patient rooting through his notes on player pianos, certain that these pianos explain the devastation of the world.

“You can play better by roll than many who play by hand.”

The characters in J R and Agape Agape think it’s clear that someone playing by roll isn’t playing the piano.  And yet, ironically, the player piano shows a way for increasing automation to not destroy the world.

A good robot works efficiently.  But a player piano is intentionally inefficient.  Even though it could produce music on its own, it requires someone to sit in front of it and work the foot pumps.  The design creates a need for human labor.

There’s still room for pessimism here – Gaddis is right to feel aggrieved that the player piano devalues skilled human labor – but a world with someone working the foot pumps seems less bad than one where idle people watch the skies for Jeff Bezos’s delivery drones.

By now, a lot of work can be done cheaply by machines.  But if we want to keep our world livable, it’s worth paying more for things made by human hands.

On empathizing with machines.

When I turn on my computer, I don’t consider what my computer wants.  It seems relatively empty of desire.  I click on an icon to open a text document and begin to type: letters appear on the screen.

If anything, the computer seems completely servile.  It wants to be of service!  I type, and it rearranges little magnets to mirror my desires.

When our family travels and turns on the GPS, though, we discuss the system’s wants more readily.

“It wants you to turn left here,” K says.

“Pfft,” I say.  “That road looks bland.”  I keep driving straight and the machine starts flashing make the next available u-turn until eventually it gives in and calculates a new route to accommodate my whim.

The GPS wants our car to travel along the fastest available route.  I want to look at pretty leaves and avoid those hilly median-less highways where death seems imminent at every crest.  Sometimes the machine’s desires and mine align, sometimes they do not.

The GPS is relatively powerless, though.  It can only accomplish its goals by persuading me to follow its advice.  If it says turn left and I feel wary, we go straight.

Other machines get their way more often.  For instance, the program that chooses what to display on people’s Facebook pages.  This program wants to make money.  To do this, it must choose which advertisers receive screen time, and curate an audience that will look at those screens often.  It wants for the people looking at advertisements to enjoy their experience.

Luckily for this program, it receives a huge amount of feedback on how well it’s doing.  When it makes a mistake, it will realize promptly and correct itself.  For instance, it gathers data on how much time the target audience spends looking at the site.  It knows how often advertisements are clicked on by someone curious to learn more about whatever is being shilled.  It knows how often those clicks lead to sales for the companies giving it money (which will make those companies more eager to give it money in the future).
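
To see how that sort of feedback loop behaves, here is a toy epsilon-greedy sketch in Python – emphatically not Facebook’s actual algorithm, just the general pattern of showing ads, recording clicks, and drifting toward whatever gets clicked most.  The ads and their click rates are invented.

import random

ads = ["yoga tights", "junk food", "news subscription"]
shows = {ad: 0 for ad in ads}
clicks = {ad: 0 for ad in ads}
true_click_rate = {"yoga tights": 0.05, "junk food": 0.09, "news subscription": 0.01}

def choose_ad(epsilon=0.1):
    # usually exploit the best-performing ad so far; occasionally explore
    if random.random() < epsilon or all(n == 0 for n in shows.values()):
        return random.choice(ads)
    return max(ads, key=lambda ad: clicks[ad] / max(shows[ad], 1))

for _ in range(10_000):
    ad = choose_ad()
    shows[ad] += 1
    if random.random() < true_click_rate[ad]:   # simulated audience behavior
        clicks[ad] += 1

print(shows)   # the program ends up showing mostly whatever gets clicked,
               # with no notion of whether that's good for anyone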

Of course, this program’s desire for money doesn’t always coincide with my desires.  I want to live in a country with a broadly informed citizenry.  I want people to engage with nuanced political and philosophical discourse.  I want people to spend less time staring at their telephones and more time engaging with the world around them.  I want people to spend less money.

But we, as a people, have given this program more power than a GPS.  If you look at Facebook, it controls what you see – and few people seem upset enough to stop looking at Facebook.

With enough power, does a machine become a moral actor?  The program choosing what to display on Facebook doesn’t seem to consider the ethics of its decisions … but should it?

From Burt Helm’s recent New York Times Magazine article, “How Facebook’s Oracular Algorithm Determines the Fates of Start-Ups”:

Bad human actors don’t pose the only problem; a machine-learning algorithm, left unchecked, can misbehave and compound inequality on its own, no help from humans needed.  The same mechanism that decides that 30-something women who like yoga disproportionately buy Lululemon tights – and shows them ads for more yoga wear – would also show more junk-food ads to impoverished populations rife with diabetes and obesity.

If a machine designed to want money becomes sufficiently powerful, it will do things that we humans find unpleasant.  (This isn’t solely a problem with machines – consider the ethical decisions of the Koch brothers, for instance – but contemporary machines tend to be much more single-minded than any human.)

I would argue that even if a programmer tried to include ethical precepts into a machine’s goals, problems would arise.  If a sufficiently powerful machine had the mandate “end human suffering,” for instance, it might decide to simultaneously snuff all Homo sapiens from the planet.

Which is a problem that game designer Frank Lantz wanted to help us understand.

One virtue of video games over other art forms is how well games can create empathy.  It’s easy to read about Guantanamo prison guards torturing inmates and think, I would never do that.  The game Grand Theft Auto 5 does something more subtle.  It asks players – after they have sunk a significant time investment into the game – to torture.  You, the player, become like a prison guard, having put years of your life toward a career.  You’re asked to do something immoral.  Will you do it?

Most players do.  Put into that position, we lapse.

In Frank Lantz’s game, Paperclips, players are helped to empathize with a machine.  Just like the program choosing what to display on people’s Facebook pages, players are given several controls to tweak in order to maximize a resource.  That program wanted money; you, in the game, want paperclips.  Click a button to cut some wire and, voila, you’ve made one!

But what if there were more?

A machine designed to make as many paperclips as possible (for which it needs money, which it gets by selling paperclips) would want more.  While playing the game (surprisingly compelling given that it’s a text-only window filled with flickering numbers), we become that machine.  And we slip into folly.  Oops.  Goodbye, Earth.

There are dangers inherent in giving too much power to anyone or anything with such clearly articulated wants.  A machine might destroy us.  But: we would probably do it, too.

On Sci-Hub, the Napster of science.

Here’s a story you’ve probably heard: the music industry was great until Napster came along and complete strangers could “share” their collections online and profits tanked.  Metallica went berserk suing their fans.  It was too late.  The industry has never been the same.

Sci-Hub has been called a Napster equivalent for scientific research papers, and the major publishing companies are suing to shut it down.  The neuroscience grad student who created it faces financial ruin.  The original website was quickly shuttered by a legal injunction, but the internet is a slippery place.  Now the same service is hosted outside U.S. jurisdiction.

[Note: between writing and posting this essay, Sci-Hub lost another lawsuit, in which the publishers asked that internet service providers be required to block all such sites.]

The outcomes of these lawsuits are a big deal.  Not just for the idealistic Kazakhstani grad student charged with millions in damages.  Academic publishers will do all they can to accentuate the parallels between Sci-Hub and Napster – and, look, nearly a quarter of my living relatives are professional musicians, so I realize how much damage was wrought by Napster’s culture of theft – but comparing research papers to pop songs is a rotten analogy.  Even if you’ve never wanted to read original research yet … even if you think – reasonably – that content producers should be paid, you should care about the open access movement.  Of which Sci-Hub is the most dramatic foray.

My own perspective changed after I did some ghostwriting for a pop medicine book.  Maybe you know the type: “Do you have SCARY DISEASE X?  It’ll get better if you take these nutritional supplements and do this type of yoga and buy these experimental home-use medical devices!”  Total hokum.  And yet, people buy these books.  So there I was, unhelpfully – quite possibly unethically – collaborating with a friend who’d been hired to ghostwrite a new one.

I read huge numbers of research papers and wrote chapters about treating this particular SCARY DISEASE with different foods, nutritional supplements, and off-label pharmaceuticals.  My sentences were riddled with un-truths.  The foods and drugs I described are exceedingly unlikely to benefit patients in any way.

Still, I found research papers purporting to have found benefits.  I dutifully described the results.  I focused on the sort of semi-farcical study that concludes, for instance, that cancer patients who drink sufficient quantities of green tea have reduced tumor growth, at which point newspapers announce that green tea is a “superfood” that cures cancer, at which point spurious claims get slathered all over the packaging.

Maybe nobody has written a paper (yet!) claiming that green tea ameliorates your particular SCARY DISEASE.  But there’s also turmeric, kale, fish oil, bittermelon, cranberries… I’m not sure any ingredient is so mundane that it won’t eventually be declared a superfood.  Toxoplasma gondii has been linked to schizophrenia, but low-level schizophrenia has been linked to creativity: will it be long before cat excrement is marketed as a superfood for budding artists?

As it happens, enough people suffer from our book’s SCARY DISEASE that many low-quality studies exist.  I was able to write those chapters.  And then felt grim.  The things I’d written about food weren’t so bad, because although turmeric, coconut oil, and carpaccio won’t cure anybody, they won’t cause much harm either.  But the drugs?  They won’t help, and most have nasty side effects.

My words might mislead people into wasting money on unnecessary dietary supplements or, worse, causing serious damage with self-prescribed pharmaceuticals.  Patients might follow the book’s rotten advice instead of consulting with a trained medical professional.  I’d like to think that nobody would be foolish enough to trust that book – the ostensible author is probably even less qualified to have written that book than I am, because at least I have a Ph.D. in biochemistry from Stanford – but, based on the money being thrown around, somebody thinks it’ll sell.

And I helped.

Whoops.  Mea culpa, and all of that.

But I didn’t perpetrate my sins alone.  And I’m not just blaming the book’s publishers here.  After all, the spurious results I described came from real research papers, often written by professors at major universities, often published in legitimate scientific journals.

It’s crummy to concentrate all that slop in a slim pop medicine book, I agree, but isn’t it also crummy for all those spurious research papers to exist at all?

Maybe you’ve heard that various scientific fields suffer from a “replication crisis.”  There’s been coverage on John Oliver’s Last Week Tonight and in the New York Times about major failures in psychology and medicine.  Scientists write a paper claiming something happens, but that thing doesn’t happen in anyone else’s hands.  That’s if anyone even bothers to check.  Most of the time, nobody does.  Verifying someone else’s results won’t help researchers win grants, so it’s generally seen as a waste of time and money.

Still, the news coverage I’ve seen hasn’t stated the problem sufficiently bluntly.  Modern academic science is designed to be false.

This is tragic.  It’s part of why I chose to stop working in the field.  I became a writer.  Of course, this led to my stint of ghostwriting, which… well, whoops.

Here’s how modern science works: most research is publishable only if it is “statistically significant.”  This means comparing any result to a “null hypothesis” – if you’re investigating the effect of green tea on cancer, the null hypothesis is simply “green tea does nothing” – then throwing out your results if you had more than a one in twenty chance to see what you did if the null hypothesis were true.

If you have a hundred patients, some of their tumors will shrink no matter what you do.  If you give everybody buckets of green tea and see the usual number of people improve, you shouldn’t claim that green tea saved them.

Here’s a graphic from Wikipedia to help:

[Wikipedia’s illustrations of p-values]

Logical enough.  But bad.  Why?  Because cancer is a SCARY DISEASE.  Far more than twenty people are studying it.  If twenty scientists each decide to test whether green tea reduces tumors, the “one in twenty” statistical test means that somebody from that set of scientists will probably see an above-average number of patients improve.  When you’re dealing with random chance, there are always flukes.  If twenty researchers all decided to flip four coins in a row, somebody would probably see all four come up heads – doesn’t mean that researcher did anything special.
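
If you’d like to watch that happen, here is a quick simulation in Python: twenty independent studies of a treatment that truly does nothing, each one declared “significant” five percent of the time purely by chance, and the odds that at least one of the twenty produces a publishable fluke.

import random

def one_null_study():
    # the treatment does nothing, so a "significant" p < 0.05 result
    # occurs five percent of the time by definition
    return random.random() < 0.05

trials = 100_000
flukes = sum(any(one_null_study() for _ in range(20)) for _ in range(trials))
print(flukes / trials)    # roughly 0.64
print(1 - 0.95 ** 20)     # the exact probability, about 0.64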

Or, did you hear the news that high folate might be correlated with autism?  This study probably sounds legitimate – the lead scientist is a professor at Johns Hopkins, after all – but the result is quite unlikely to be real.  That scientist hasn’t written about folate previously, so my best guess (this new study is currently unpublished) is that pregnant women were tested for many different biomarkers, things like folate, iron, testosterone, and more, and then tracked to see whose children would develop autism.  If the researchers tested the concentrations of twenty different nutrients and hormones, of course they’d see one that appeared to correlate with autism.

[Edit: these findings were recently published.  Indeed, the data appear rather unconvincing, and the measurements for folate were made after the fact, using blood samples – it’s quite possible that other data was gathered but excluded from the published version of the study.]

This is not science.  But if you neglect to mention how many biomarkers you studied, and you retroactively concoct a conspiracy theory-esque narrative explaining why you were concerned about folate, it can do a fine job of masquerading as science.  At least long enough to win the next grant.

Which means that, even though the results of many of these studies are false, they get published.  When somebody checks twenty nutrients, one might appear to cause autism.  When twenty scientists study green tea and cancer, somebody might get results suggesting green tea does work.  Even if it doesn’t do a thing.

In our current system, though, only the mistaken researcher’s results get published.  Nobody knows that there were twenty tests.  The nineteen other biomarkers that were measured get left out of the final paper.  The nineteen researchers who found that green tea does nothing don’t publish anything.  Showing that a food doesn’t cure cancer?  How mundane.  Nobody wants to read that; publishers don’t want it in their journals.  But the single spurious result showing that green tea is a tumor-busting superfood?  That is exciting.  That study lands in a fancy journal and gets described in even briefer, more flattering language in the popular press.  Soon big-name computer CEOs are guzzling green tea instead of risking surgery or chemo.

I generally assume that the conclusions of research studies using this type of statistical testing are false.  And there’s more.  Data are often presented misleadingly.  Plenty of scientists are willing to test a pet theory many ways and report only the approach that “works,” not necessarily because they want to lie to people, but because it’s so easy to rationalize why the test you tried first (and second, and third…) was not quite right.  I worked in many laboratories over a decade and there were often results that everybody in the lab knew weren’t true.  Both professors I worked under at Stanford published studies that I know weren’t done correctly.  Sadly, they know it too.

This subterfuge can be hard for outsiders to notice.  But sometimes the flaws are things that anybody could be taught to identify.  With just a little bit of guidance, anybody foolish enough to purchase the pop medicine book I worked on would be able to look up the original research papers and read them and realize that they’re garbage.

There’s a catch: most of those papers cost between twenty and thirty dollars a pop.  The chapters I wrote cite nearly a hundred articles.  I’d describe a few studies about the off-label use of this drug, a few about that one, on and on, “so that our readers feel empowered to make their own decisions instead of being held at the paternalistic mercy of their healthcare professionals.”  A noble goal.  But I’m not sure that recommending patients dabble with ineffectual, oft-risky alternative medicines is the best way to pursue it.  Especially when the book publisher was discussing revenue sharing agreements with sellers of some of the weird stuff we shilled.

So, those hundred citations?  You could spend three thousand dollars figuring out that the chapters I wrote are crap.  The situation is slowly getting better – the National Institutes of Health has mandated that taxpayer-funded studies be made available after a year, but this doesn’t apply to anything published before 2008, and I’m not sure how keen sick patients will be to twiddle their thumbs for a year before learning the latest information about their diseases.  Plus, there are many granting organizations out there.  Researchers who get their money elsewhere aren’t bound by this requirement.  If somebody asks you, “Would you like to donate money to fight childhood cancer?” and you chip in a buck, you’re actually contributing to the problem.

Photo by diylibrarian on Flickr.

I was only able to write my chapters of that book because I live next to a big university.  I could stroll to the library and use their permissions to access the papers I’d need.  Sometimes, though, that wasn’t enough.  Each obscure journal, of which there are legion, can cost a university several thousand dollars a year for a subscription.  A few studies I cited were published in specialty journals too narrowly focused for Indiana University to subscribe, so I’d send an email to a buddy still working at Stanford and ask him to send me a copy.

If you get sick and worry yourself into looking for the truth, you’ll probably be out of luck.  Even doing your research at a big state university library might not be enough.

That’s if you keep your research legal.

Or you could search for the papers you need on Sci-Hub.  Then you’d just type the title, complete a CAPTCHA on a page with instructions in Cyrillic (on what was until recently http://www.sci-hub.cc, at least), and, bam!  You have it!  You can spend your thirty dollars on something else.  Food, maybe, or rent.

Of course, this means you are a thief.  The publisher didn’t get the thirty dollars they charge for access to a paper.  And those academic publishers would like for you to feel the same ethical qualms that we’re retraining people to feel when they pirate music or movies.  If you steal, content producers won’t be paid, they’ll starve, and we’ll staunch the flow of beautiful art to which we’ve become accustomed.

The comparison between Napster and Sci-Hub is a false analogy.  Slate correspondent Justin Peters described the perverse economics of academic publishing, in particular the inelastic demand – nobody reads research journals for fun.

With music and movies, purchasing legitimate access funds creators.  Not so in academia.  My laboratory had to pay a journal to publish my thesis work; this is standard practice.  It costs the authors a lot of money to publish a research article, and “content producers” only do it, as opposed to slapping their work up on a personal website for everyone to read free, because they need publication credits on their CVs to keep winning grants.

With music and movies, stealing electronic copies makes content producers sad.  With research articles, it makes them happy.

In fact, almost everyone believes research articles should be free.  At the European Union’s recent Competitiveness Council, the member states agreed that all scientific papers should be freely available by 2020 – these  are the governments whose enforcement is necessary to maintain the current copyright system!  The only people making statements in favor of the status quo are employed by the academic publishers themselves.  Their ideological positions may be swayed somewhat by the $2 billion plus profit margins major publishers are able to extract from their current racket.

Academic publishers would argue that they serve an important role as curators of the myriad discoveries made daily.  This doesn’t persuade me.  The “referees” they rely on to assess whether each study is sound are all unpaid volunteers.  Plus, if the journals were curating well, wouldn’t it have been harder for me to fill that pop medicine book with so much legitimate-looking crap?

Most importantly, by availing yourself of Sci-Hub’s pirated material, you the thief no longer live in ignorance.  With our current healthcare model, ignorance is deadly.  The United States is moving toward an a la carte method of delivering treatment, where sick people are expected to be knowledgeable, price-sensitive consumers rather than patients who place their trust in a physician.  Most sick people no longer have a primary care physician who knows much about their personal lives – instead, doctors are forced for financial reasons to join large corporate conglomerates.  Doctors try their best moment by moment, but they might never see someone a second time.  It’s more important than ever for patients to stay well-informed.

Unless Sci-Hub wins its lawsuit, you probably can’t afford to.