I’ve been helping a friend learn the math behind optimization so that she can pass a graduation-requirement course in linear algebra.

Optimization is a wonderful mathematical tool. Biochemists love it – progression toward an energy minimum directs protein folding, among other physical phenomena. Economists love it – whenever you’re trying to make money, you’re solving for a constrained maximum. Philosophers love it – how can we provide the most happiness for a population? Computer scientists love it – self-taught translation algorithms use this same methodology (I still believe that you could mostly replace Ludwig Wittgenstein’s *Philosophical Investigations* with this *New York Times Magazine* article on machine learning and a primer on principal component analysis).

But, even though optimization problems are useful, the math behind them can be tricky. I’m skeptical that this mathematical technique is essential for *everyone* who wants a B.A. to grasp – my friend, for example, is a wonderful preschool teacher who hopes to finally finish a degree in child psychology. She would have graduated two years ago except that she’s failed this math class three times.

I could understand if the university wanted her to take statistics, as that would help her understand psychology research papers … and the science underlying contemporary political debates … and value-added models for education … and more. A basic understanding of statistics might make people better citizens.

Whereas … linear algebra? This is a beautiful but counterintuitive field of mathematics. If you’re interested in certain subjects – if you want to become a physicist, for example – you really should learn this math. A deep understanding of linear algebra can enliven your study of quantum mechanics.

Then again, Werner Heisenberg, who was a brilliant physicist, had a limited grasp on linear algebra. He made huge contributions to our understanding of quantum mechanics, but his lack of mathematical expertise occasionally held him back. He never quite understood the implications of the Heisenberg Uncertainty Principle, and he failed to provide Adolf Hitler with an atomic bomb.

In retrospect, maybe it’s good that Heisenberg didn’t know more linear algebra.

While I doubt that Heisenberg would have made a great preschool teacher, I don’t think that deficits in linear algebra were deterring him from that profession. After each evening that I spend working with my friend, I do feel that she understands matrices a little better … but her ability to nurture children isn’t improving.

And yet. Somebody in an office decided that all university students here need to pass this class. I don’t think this rule optimizes the educational outcomes for their students, but perhaps they are maximizing something else, like the registration fees that can be extracted.

Optimization is a wonderful mathematical tool, but it’s easy to misuse. Numbers will always do what they’re supposed to, but each such problem begins with a choice. What exactly do you hope to optimize?

Choose the wrong thing and you’ll make the world worse.

#

Most automobile companies are researching self-driving cars. They’re the way of the future! In a previous essay, I included links to studies showing that unremarkable-looking graffiti could confound self-driving cars … but the issue I want to discuss today is both more mundane and more perfidious.

After all, using graffiti to make a self-driving car interpret a stop sign as “Speed Limit 45” is a design flaw. A car that accelerates instead of braking in that situation is *not* operating as intended.

But passenger-less self-driving cars that roam the city all day, intentionally creating as many traffic jams as possible? That’s a *feature*. That’s what self-driving cars are *designed* to do.

Despite my wariness about automation and algorithms run amok, I hadn’t considered this problem until I read Adam Millard-Ball’s recent research paper, “The Autonomous Vehicle Parking Problem.” Millard-Ball begins with a simple assumption: what if a self-driving car is designed to maximize utility for its owner?

This assumption seems reasonable. After all, the AI piloting a self-driving car must include an explicit response to the trolley problem. Should the car intentionally crash and kill its passenger in order to save the lives of a group of pedestrians? This ethical quandary is notoriously tricky to answer … but a computer scientist designing a self-driving car will probably answer, “no.”

Otherwise, the manufacturers won’t sell cars. Would you ride in a vehicle that was programmed to sacrifice you?

Luckily, the AI will not have to make that sort of life-and-death decision often. But here’s a question that will arise daily: if you commute in a self-driving car, what should the car do while you’re working?

If the car was designed to maximize public utility, perhaps it would spend those hours serving as a low-cost taxi. If demand for transportation happened to be lower than the quantity of available, unoccupied self-driving cars, it might use its elaborate array of sensors to squeeze into as small a space as possible inside a parking garage.

But what if the car is designed to benefit its owner?

Perhaps the owner would still want the car to work as a taxi, just as an extra source of income. But some people – especially the people wealthy enough to afford to purchase the first wave of self-driving cars – don’t like the idea of strangers mucking around in their vehicles. Some self-driving cars would spend those hours unoccupied.

But they won’t *park*. In most cities, parking costs between $2 and $10 per hour, depending on whether it’s street or garage parking, whether you purchase a long-term contract, etc.

The cost to just *keep driving* is generally going to be lower than $2 per hour. Worse, this cost is a function of the car’s speed. If the car is idling at a dead stop, it will use approximately 0.1 gallons per hour, costing 25 cents per hour at today’s prices. If the car is traveling at 30 mph without stopping, it will use approximately 1 gallon per hour, costing $2.50 per hour.
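The arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope model built from the essay’s own figures (25 cents per hour at idle, $2.50 per hour at 30 mph, $2 per hour as the cheapest parking); the linear interpolation between those two points is my assumption, not measured data.

```python
# A rough sketch of the hourly cost of staying on the road as a function
# of speed, versus the cheapest parking. All figures are the essay's
# assumptions; the straight-line interpolation is mine.

GAS_PRICE = 2.50         # dollars per gallon ("today's prices")
IDLE_BURN = 0.1          # gallons per hour at a dead stop
BURN_AT_30 = 1.0         # gallons per hour at 30 mph
CHEAPEST_PARKING = 2.00  # dollars per hour

def cruising_cost(mph):
    """Hourly fuel cost, interpolated linearly between idle and 30 mph."""
    burn = IDLE_BURN + (BURN_AT_30 - IDLE_BURN) * (mph / 30.0)
    return burn * GAS_PRICE

for mph in (0, 5, 15, 30):
    cost = cruising_cost(mph)
    verdict = "cheaper" if cost < CHEAPEST_PARKING else "pricier"
    print(f"{mph:2d} mph: ${cost:.2f}/hour ({verdict} than parking)")
```

Under these numbers, cruising only loses to parking as the car approaches a steady 30 mph – which is exactly why a cost-minimizing car prefers the slowest road it can find.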

To save money, the car wants to stay on the road … but it wants traffic to be as close to a standstill as possible.

Luckily for the car, this is an easy optimization problem. It can consult its onboard GPS to find nearby areas where traffic is slow, then drive over there. As more and more self-driving cars converge on the same jammed streets, they’ll slow traffic more and more, allowing them to consume the workday with as little motion as possible.
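The decision rule really is that trivial. Here is a toy version of the owner-serving optimization the essay describes; the area names and speeds are hypothetical stand-ins for whatever the car’s GPS would report.

```python
# A toy version of the car's "easy optimization problem": among nearby
# areas, drive to wherever average traffic speed is lowest, since slower
# motion means less fuel burned per hour. The data below is hypothetical.

def pick_slowest_area(speeds_by_area):
    """Return the name of the area with the lowest average speed (mph)."""
    return min(speeds_by_area, key=speeds_by_area.get)

# Hypothetical GPS readout: area -> current average speed in mph.
traffic = {"downtown": 4, "riverfront": 22, "bypass": 48}
print(pick_slowest_area(traffic))  # → downtown
```

Note the feedback loop the essay points out: every empty car that follows this rule makes the chosen area slower still, which makes it even more attractive to the next empty car.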

Pity the person sitting behind the wheel of an *occupied* car on those streets. All the self-driving cars will be having a great time stuck in that traffic jam: *we’re saving money!*, they get to think. Meanwhile the human is stuck swearing at empty shells, cursing a bevy of computer programmers who made their choices months or years ago.

And all those idling engines exhale carbon dioxide. But it doesn’t cost money to pollute, because one political party’s worth of politicians willfully ignore the fact that capitalism, by philosophical design, requires that we set prices for scarce resources … like clean air, or habitable planets.