On the apparent rarity of human-like intelligence.

Like many people, I have a weak grasp on long times. My family often visits a nearby pioneer reenactment village, where the buildings and the wooden gearworks of the water-powered corn mill are about two hundred years old; I feel awed. In Europe, some buildings are a thousand years old, which sounds incredible to me.

These are such small sips of evolutionary time.

Humans have roamed our world for hundreds of thousands of years. Large dinosaurs ruled our planet for hundreds of millions of years. Animals we’d recognize as Tyrannosaurus rex prowled for the final 2.5 million years of that span, with the last of them dying out about 66 million years ago.

My mind struggles to comprehend these numbers.

I found myself reflecting on this after a stray remark in Oded Galor’s The Journey of Humanity: The Origins of Wealth and Inequality: Why is such a powerful brain so rare in nature, despite its apparent advantages?

Galor’s question seems reasonable from the vantage of the present. We live on a planet where 96% of the mammalian biomass is either our own species or prey animals we’ve raised to eat. The total mass of all surviving wild dinosaurs – otherwise known as “birds” – is less than a thirtieth the mass of humans. We’ve clearly conquered this world. Our dominance is due to our brains.

And this moment – right now! – feels special because we’re living through it. From a geological or evolutionary perspective, though, the present is a time much like any other. If we represent the total lifespan of our sun as a 24-hour day (which is much more sensible than representations with the present moment at the end of the day), the current time would be 10:58 a.m., and our sun will become so hot that it boils away all our planet’s liquid water at 7:26 p.m. Between now and then, though, we have a whole workday’s time for life to continue its beautiful, chaotic evolutionary dance. Perhaps quite soon – maybe just a million years from now, or 10 million, which is less than two minutes of our total day – the descendants of contemporary parrots, crows, or octopuses could become as intelligent as contemporary Homo sapiens.
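That clock mapping is easy to reproduce. Here is a minimal sketch of the arithmetic, assuming the sun is about 4.6 billion years old and – to use a round number – that its total lifespan is roughly 10 billion years; with that rounder figure the current time comes out a few minutes later than 10:58 a.m.:

```python
def clock_time(age_gyr, total_lifespan_gyr=10.0):
    """Map an age (in billions of years) onto a 24-hour clock
    where the sun's whole lifespan runs midnight to midnight."""
    fraction = age_gyr / total_lifespan_gyr
    total_minutes = round(fraction * 24 * 60)
    return f"{total_minutes // 60:02d}:{total_minutes % 60:02d}"

print(clock_time(4.6))  # 11:02 under these rounded assumptions
```

Working backward from the exact figures in the text (10:58 a.m. now, water boiling away at 7:26 p.m.) implies a slightly longer assumed lifespan, on the order of 10.07 billion years.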

As a human, I’m biased toward thinking that parrots and crows would have a better chance than octopuses – after all, these birds face a similar evolutionary landscape to my own ancestors. They’re long-lived, social species that invest heavily in childcare, are anatomically well-suited for tool use, and face few risks from predators.

Or rather, parrots would face few risks if humans weren’t around. Unfortunately for them, a voracious species of terrestrial ape is commandeering their homeland and kidnapping their young to raise as pets. But crows can thrive in a human-dominated landscape – some crows even use our cars as tools, cracking nuts by placing them in urban crosswalks and retrieving their snack after the light turns red.

Octopuses, however, are short-lived and antisocial. They’re negligent parents. Their brief lives are haunted by nightmarish predators. And yet. Some octopuses are already quite intelligent; their intelligence appears to confer a reproductive advantage (if only by virtue of survival); their bodies are well-suited for tool use. Certain types of tools, like flaked stone, would be more difficult to create underwater, but many octopuses are capable of brief sojourns into open air. So I wouldn’t rule them out. Sometimes evolution surprises us – after all, the world has a lot of time to wait.

Which means that powerful brains like ours might not be rare in the future. Especially if our species does something stupid – like engaging in nuclear war, succumbing to global pandemic, or ruining crop yields with climate change – and the animal kingdom’s future intelligentsia don’t have to compete with 8 billion Homo sapiens for space and resources.

Also, it’s surprisingly difficult to assess whether powerful brains like ours were rare in the past. Intelligent, tool-crafting, fire-wielding, language-using species have gone extinct before – consider the Neanderthals. Our own ancestors nearly went extinct during past episodes of climate change, like the aftermath of a volcanic eruption 70,000 years ago. And even if some species during the age of dinosaurs had been as intelligent as modern humans, we might not recover much evidence of their brilliance.

Please note that I’m not arguing that Tyrannosaurus rex wove baskets, wielded fire, or built the Egyptian pyramids. For starters, the body plan of T-Rex is ill-suited for tool use (as depicted in Hugh Murphy’s T-Rex Trying comics). But simply as a thought experiment, I find it interesting to imagine what we’d see today if T-Rex had reached the same level of technological and cultural sophistication that humans had achieved between 100,000 and 10,000 years ago.

If T-Rex made art, we wouldn’t find it. The Lascaux paintings persisted for about 20,000 years because they were in a protected cave, but as soon as we found them, our humid exhalations began to destroy them. Millions of years would crush clay figurines and decompose engraved bone.

If T-Rex crafted tools from wood or plant fibers, we wouldn’t find them. We can tell that ancient humans in the Pacific Northwest of North America caught an annual salmon harvest by analyzing radioactive isotopes, but we’ve never found the boats or nets these ancient people used. And after a few more radioactive half-lives – far sooner than a million years from now – even that isotopic evidence will have become invisible to us.

If T-Rex crafted tools from stone, we’d find remnants, but they’d be difficult to recognize. Evidence for human tool use often comes in three types – sharp flakes (usually 1-3 inch blades used as knives or spear tips), a hammer (often just a big round stone), and a core (a hunk of good rock that is struck with the hammer to knock knife-like flakes off its surface). We’re most likely to realize that a particular rock was a human tool if it’s near a human settlement or if it’s made from a type of stone rare in the location where contemporary archaeologists found it (which is why we think that an ancient primate took particular interest in the Makapansgat pebble).

Still, time is a powerful force. 66,000,000 years can dull the edges of a flake, or produce sharp rocks through mindless geological processes. It’s been difficult for archaeologists studying submerged sites in ancient Beringia – a mere 30,000 years old! – to know for certain whether any particular rock was shaped by human hands or natural forces. Other stone tools used by ancient humans look a lot like regular rocks to me, for example this 7,000-year-old mortar from Australia or these 9,000-year-old obsidian knives from North America. Ten million more years of twisting, compressing, and chipping might deceive even a professional.

And then there’s the rarity of finding anything from that long ago. Several billion T-Rex tromped across the land, but we’ve found even a single bone from only about a hundred of them. 99.999996% of all T-Rex vanished without a trace.
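That percentage follows directly from the two rough counts – a quick check, taking a published estimate of about 2.5 billion T-Rex ever living and roughly a hundred individuals known from fossils:

```python
# Back-of-envelope check of the fossil-scarcity figure. Both inputs
# are rough: ~2.5 billion individuals is one published estimate, and
# "about a hundred" known specimens is a round number.
total_trex = 2.5e9
fossils_found = 100

vanished_fraction = 1 - fossils_found / total_trex
print(f"{vanished_fraction:.8%}")  # 99.99999600%
```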

From those rare fossils, we do know that T-Rex brains were rather small. But not all neurons are the same. Work from Suzana Herculano-Houzel’s research group has shown that the number of neurons in a brain is a much better proxy for intelligence than the brain’s total size – sometimes a bigger brain is just made from bigger neurons, with no additional processing power. And the brains of our world’s surviving dinosaurs are made quite efficiently – “Birds have primate-like numbers of neurons in the forebrain.” **

We humans are certainly intelligent. And with all the technologies we’ve made in the past 200 years – a mere millisecond of our sun’s twenty-four hour day – our presence will be quite visible to any future archaeologists, even if we were to vanish tomorrow. But we do ourselves no favors by posturing as more exceptional than we are.

Animals much like us could have come and gone; animals much like us could certainly evolve again. Our continued presence here has never been guaranteed.

.

.

.

** A NOTE ON NEURON COUNTS: many contemporary dinosaurs have brains with approximately 200 million neurons per gram of brain mass, compared to human brains with approximately 50 million neurons per gram of brain mass. A human brain has a much higher total neuron count, at about 80 billion neurons, than the brains of dinosaurs like African gray parrots or ravens, which have about 2 billion neurons – but only because our brains are so much more massive. If the brain of a T-Rex had a similar composition to contemporary dinosaurs, it might have twice as many neurons as our own.

Of course, elephant brains also have three times as many neurons as our own — in this case, researchers then compare neuron counts in particular brain regions, finding that elephant brains have about a third as many neurons specifically in the cerebral cortex compared to human brains. For extinct species of dinosaurs, though, we can only measure the total size of the cranial cavity and guess how massive their brains would have been, with no indication of how these brains may have been partitioned into cerebellum, cerebral cortex, etc.
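The arithmetic behind this note is simple enough to lay out explicitly. In the sketch below, the neuron densities and the human total are the note’s approximate figures; the T-Rex brain masses are hypothetical round numbers, since cranial-cavity estimates for extinct species vary widely. At a bird-like density, a brain of roughly 800 grams would give the “twice as many neurons” figure:

```python
# Neuron-count arithmetic from the note's approximate densities.
# The T-Rex brain masses below are hypothetical round numbers,
# not measurements.
BIRD_NEURONS_PER_GRAM = 200e6   # bird-like density, per the note
HUMAN_NEURONS_PER_GRAM = 50e6
HUMAN_NEURON_COUNT = 80e9

# Implied human brain mass, for scale: ~1600 grams.
human_brain_mass = HUMAN_NEURON_COUNT / HUMAN_NEURONS_PER_GRAM

for mass_g in (200, 400, 800):  # hypothetical T-Rex brain masses
    neurons = mass_g * BIRD_NEURONS_PER_GRAM
    ratio = neurons / HUMAN_NEURON_COUNT
    print(f"{mass_g} g at bird-like density: "
          f"{neurons / 1e9:.0f} billion neurons ({ratio:.1f}x human)")
```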

.

.

.

Header image: a photograph of Sue at Chicago’s Field Museum of Natural History by Evolutionnumber9 on Wikipedia.

On AI-generated art.

Recently, an image generated by an artificial intelligence algorithm won an art competition.

As far as I can tell, this submission violates no rules. Pixel by pixel, the image was freshly generated – it was not “plagiarized” in the human sense of copying portions of another’s work wholesale. Indeed, if the AI were able to speak (which it can’t, because its particular design does not incorporate any means to generate language), it might describe its initial training as having “inspired” its current work.

The word “training” elides a lot of detail.

Most contemporary AI algorithms are not wholly scripted – a human programmer doesn’t write code that says, “When given the input ‘opera,’ include anthropomorphic shapes bedecked in luxurious fabrics.”

Instead, the programmer curates a large collection of images, some of which are given the descriptor “opera,” all others being, by default, “not opera.” Then the algorithm analyzes the images – treating each image as a grid of pixels, each with a particular hue and brightness, and performing higher-order mathematical calculations on that grid: if there is a red pixel in one location, what are the odds that nearby pixels are also red, and what shape will that red cluster take? From this analysis, the algorithm finds mathematical descriptors that separate the “opera” images from the “not opera” ones.
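To make this concrete, here is a toy version of that training process – a miniature logistic-regression classifier, not any particular image generator’s architecture. The 4×4 “images,” the hand-picked features (mean brightness and a crude vertical-stripe score), and the labels are all invented for illustration:

```python
import math
import random

random.seed(0)

def features(img):
    """Descriptors of a 4x4 grayscale grid: a bias term, mean
    brightness, and column-to-column contrast (a crude stand-in
    for 'vertical stripe' statistics)."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    stripe = sum(abs(row[c] - row[c + 1])
                 for row in img for c in range(3)) / 12
    return [1.0, mean, stripe]

def predict(img, w):
    """Probability that the image is 'opera', per learned weights."""
    z = sum(wi * xi for wi, xi in zip(w, features(img)))
    return 1 / (1 + math.exp(-z))

def train(data, steps=2000, lr=0.5):
    """Logistic regression: repeatedly nudge the weights so the
    descriptors separate 'opera' (label 1) from 'not opera' (0)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        img, label = random.choice(data)
        err = label - predict(img, w)
        w = [wi + lr * err * xi for wi, xi in zip(w, features(img))]
    return w

# Toy training set: 'opera' images have strong vertical stripes.
striped = [[1, 0, 1, 0]] * 4   # stands in for shadowed fabric folds
uniform = [[0.5] * 4] * 4      # featureless 'not opera' image
w = train([(striped, 1), (uniform, 0)])
```

After training, `predict` rates the striped grid far more “opera” than the uniform one – the classifier has found the stripe descriptor, without ever “knowing” what fabric is.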

An image designated “opera” is more likely to have patches with vivid hues that include bright and dark vertical stripes. A human viewer will interpret these as the shadowed folds of fabric draping an upright figure. The algorithm doesn’t need to interpret these features, though – the algorithm works only with a matrix of numbers that denote pixel colors.

In general, human programmers understand the principles by which AI algorithms work. After all, human programmers made them!

And human programmers know what sort of information was provided in the algorithm’s training set. For instance, if none of the images labeled “opera” within a particular training set showed performers sitting down, then the algorithm should not produce an opera image with alternating dark and light stripes arrayed horizontally – the algorithm will not have been exposed to horizontal folds in fabric, at least not within the context of opera.

But the particular details of how these algorithms work are often inscrutable to their creators. The algorithms are like children this way – you might know the life experiences that your child has been exposed to, and yet still have no idea why your kid is claiming that Bigfoot dips french fries into ice cream.

Every now and again, an algorithm sorts data by criteria that we humans find ridiculous. Or, rather: the algorithm sorts data by criteria that we would find ridiculous, if we could understand its criteria. But, in general, we can’t. It’s difficult to plumb the workings of these algorithms.

Because the algorithm’s knowledge is stored in multidimensional matrices that most human brains can’t grasp, we can’t compare the algorithm’s understanding of opera with our own. Instead, we can only evaluate whether or not the algorithm seems to work – whether the algorithm’s images of “opera” look like opera to us, or whether an AI criminal justice algorithm recommends the longest prison sentences for the people whom we also assume to be the most dangerous offenders.

#

So, about that art contest. I’m inclined to think that, for a category of “digitally created artwork,” submitting a piece that was created by an AI is fair. A human user still plays a curatorial role, perhaps requesting many images from the exact same prompt – each generated from a different random seed – and then choosing the best.

It’s a little weird, because in many ways the result would be a collaborative project – somebody’s work went into scripting the AI, and a huge amount of work went into curating and tagging the training set of images – but you could argue that anytime an artist uses a tool or filter on Photoshop, they’re collaborating with the programmers.

An artist might paint a background and then click on a button labeled “whirlpool effect,” but somebody had to design and script the mathematical function that converts the original array of pixel colors into something that we humans would then believe had been sucked into a whirlpool.

In some ways, this collaboration is acknowledged (in a half-hearted, transactional, capitalist way) – the named artist has paid licensing fees to use Photoshop or an AI algorithm. Instead of recognition, the co-creators receive money.

But there’s another wrinkle: we do not create art alone.

Even the Lascaux cave paintings weren’t created in isolation – although no other paintings from that era survived until the present day, many probably existed (in places that were less protected from the elements and so were destroyed by wind & rain & mold & time). The Lascaux artist(s) presumably saw themselves as part of an artistic community or tradition.

In the development of a human artist, that person will see, hear, & otherwise experience many artistic creations by others. Over the course of our lives, we visit museums, read books, watch television, hear music, eat at restaurants – we’re constantly learning from the world around us, in ways that would be impossible to fully acknowledge. A painter might include a flourish that was inspired by a picture they saw in childhood and no longer consciously remember.

This collaborative debt is more obvious among AI algorithms. These algorithms need fuel: their meticulously-tagged sets of training images. The algorithms generate new images of only the sort that they’ve been fed.

It’s the story of a worker being simultaneously laid off and asked to train their replacement.

Unfortunately for human artists, our world is already awash in beautiful images. Obviously, I’m not saying that we need no more art! I’m a writer, in a world that’s already so full of books! The problem, instead, is that the AI algorithms have ample training sets. Even if, hypothetically, these algorithms instantly drove every other artist out of business, or made all working artists so nervous that they refused to let any more of their work be digitized, there would still be an enormous library of existing art for the AI algorithms to train on.

After hundreds of years of collecting beautiful paintings in museums, it would take a hefty dollop of hubris to imagine that the algorithms would immediately stagnate if they lacked access to new human-generated paintings.

Also, it wouldn’t be insurmountable to program something akin to “creativity” into the algorithms – an element of randomness to allow the algorithm to deviate from trends in its training set. This would put more emphasis on a user’s curatorial judgment, but it would also let the algorithms innovate. Presumably most of the random deviations would look bad to me, but that’s often the way with innovation – impressionism, cubism, and other movements looked bad to many people at the beginning. (Honestly, I still don’t like much impressionism.)

#

There’s no reason to expect a brain made of salty fat to have incomparable powers. Our thoughts don’t come from anything spooky like quantum mechanics – neurons are much too big to persist in superpositions. Instead, we humans are so clever because we have a huge number of neurons interconnected in complex ways. We’re pretty special, but we’re not magical.

Eventually, a brain made of circuits could do anything that we humans can.

That’s a crucial long-run flaw of capitalism – eventually, the labor efforts of all biological organisms will be replaceable, so all available income could be allocated to capital owners instead of labor producers.

In a world of physician-bots, instead of ten medical doctors each earning a salary, the owner of ten RoboMD units would keep all the money.

We’re still a ways off from RoboMD entering the market, but this is a matter of engineering. AI algorithms can already write legal contracts, do sports journalism, drive cars & trucks, create award-winning visual images – there’s no reason to believe that an AI could never treat illnesses as well as a human doctor, clean floors as well as a human janitor, write code as well as a human programmer.

In the long run, all our work could be done by machines. Human work will be unnecessary. Within the logic of capitalism, our income should drop to zero.

Within the logic of capitalism, only the owners of algorithms should earn any money in the long run. (And in the very long run, only the single owner of the best algorithms should earn any money, with all other entities left with nothing.)

#

Admittedly, it seems sad for visual artists – many of whom might not have nuanced economics backgrounds – to be among the people who experience the real-world demonstration of this principle first.

It probably feels like a very minor consolation to them, knowing that AI algorithms will eventually be able to do everyone else’s jobs, too. When kids play HORSE, nobody wants to be out first.

But also, we have a choice. Kids choose whether or not to play HORSE, and they choose what rules they’ll play by. We (collectively) get to choose whether our world will be like this.

I’m not even that creative, and I can certainly imagine worlds in which, even after the advent of AI, human artists still get to do their work, and eat.

On Constantine Cavafy’s ‘Body, Remember,’ and the mutability of memory.

Because we’d had a difficult class the week before, I arrived at jail with a set of risqué poetry to read.  We discussed poems like Allison Joseph’s “Flirtation,” Galway Kinnell’s “Last Gods,” and Jennifer Minniti-Shippey’s “Planning the Seduction of a Somewhat Famous Poet.”

Our most interesting conversation followed Constantine Cavafy’s “Body, Remember,” translated by Aliki Barnstone.  This is not just a gorgeous, sensual poem (although it is that).  Cavafy also conveys an intriguing idea about memory and recovery.

The poem opens with advice – we should keep in mind pleasures that we were privileged to experience.

“Rumpled Mattress” by Alex D. Stewart on Flickr.

Body, remember not only how much you were loved,

not only the beds on which you lay,

A narrative of past joy can cast a rosy glow onto the present.  Our gratitude should encompass more, though.  We should instruct our body to remember not only the actualized embraces,

but also those desires for you

that glowed plainly in the eyes,

and trembled in the voice – and some

chance obstacle made futile.

In addition to our triumphs, we have almost-triumphs.  These could be many things.  On some evenings, perhaps our body entwines with another’s; on other nights, a wistful parting smile might suggest how close we came to sharing that dance.  In another lifetime.  Another world, perhaps.

Missed Connection 1 by Cully on Flickr.

But we have the potential for so many glories.  In basketball, a last shot might come so close to winning the game.  If you’re struggling with addiction, there could’ve been a day when you very nearly turned down that shot.

Maybe you’ll succeed, maybe you won’t.  In the present, we try our best.  But our present slides inexorably into the past.  And then, although we can’t change what happened, the mutability of memory allows us to change how we feel.

Now that all of them belong to the past,

it almost seems as if you had yielded

to those desires – how they glowed,

remember, in the eyes gazing at you;

how they trembled in the voice, for you, remember, body.

Consciousness is such a strange contraption.  Our perception of the world exists only moment by moment.  The universe constantly sheds order, evolving into states that are ever more probable than the past, which causes time to seem to flow in only one direction. 

Brain nebula by Ivan on Flickr.

A sense of vertigo washes over me whenever I consider the “Boltzmann brain” hypothesis.  This is the speculation that a cloud of dust in outer space, if the molecules were arranged just right, could perceive itself as being identical to your present mind.  The dust cloud could imagine itself to be seeing the same sights as you see now, smelling the same smells, feeling the same textures of the world.  It could perceive itself to possess the same narrative history, a delusion of childhood in the past and goals for its future.

And then, with a wisp of solar wind, the molecules might be rearranged.  The Boltzmann brain would vanish.  The self-perceiving entity would end.

Within our minds, every moment’s now glides seamlessly into the now of the next moment, but it needn’t.  A self-perceiving entity could exist within a single instant.  And even for us humans – whose hippocampal projections allow us to re-experience the past or imagine the future – we would occasionally benefit by introducing intentional discontinuities to our recollection of the world.

Past success makes future success come easier.  If you remember that people have desired you before – even if this memory is mistaken – you’ll carry yourself in a way that makes you seem more desirable in the future.  If an addict remembers saying “no” to a shot – even if this memory is mistaken – it’ll be easier to say “no” next time. 

Our triumphs belong to the same past as our regrets, and we may choose what to remember.  If our life will be improved by the mistake, why not allow our minds the fantasy?  “It almost seems as if you had yielded to those desires.”  The glow, the gaze: remember, body.

In the short story “The Truth of Fact, The Truth of Feeling,” Ted Chiang contrasts situations in which the mutability of memory improves the world with situations in which this mutability makes the world worse.  Memories that reinforce our empathy are the most important to preserve.

We all need to know that we are fallible.  Our brains are made of squishy goo.  The stuff isn’t special – if it spills from our skulls, it’ll stink of rancid fat.  Only the patterns are important.  Those patterns are made from the flow of salts and the gossamer tendrils of synapses; they’re not going to be perfect.

As long as we know that we’re fallible, though, it doesn’t help much to dwell on the details of each failure.  We need to retain enough to learn from our mistakes, but not so much that we can’t slough off shame and regret once these emotions have served their purpose.  As we live, we grow.  A perfect remembrance of the past would constrict the person we’re meant to be.

I imagine that Brett Kavanaugh ardently believes that he is not, and has never been, the sort of person who would assault a woman.  He surely believes that he would never thrust his bare penis into an unconsenting woman’s hand.  And I imagine that Brett Kavanaugh’s current behavior is improved by this belief.  In his personal life, this is the memory of himself that he should preserve, rather than the narrative that would probably be given by an immutable record of consensus reality.

The main problem, in Kavanaugh’s case, is his elevation to a position of power.  In his personal life, he should preserve the mutable memories that help him to be good.  No matter how inaccurate they might be.

In public life, however, consensus reality matters.  Personally, I will have difficulty respecting the court rulings of a person who behaved this way.  Especially since his behavior toward women continued such that law professors would advise their female students to cultivate a particular “look” in order to clerk for Kavanaugh’s office.

The Supreme Court, in its current incarnation, is our nation’s final arbiter on many issues related to women’s rights.  Kavanaugh’s narrative introduces a cloud of suspicion over any ruling he makes on these issues – especially since he has faced no public reckoning for his past actions.

And, for someone with Kavanaugh’s history of substance abuse, it could be worthwhile to preserve a lingering memory of past sins.  I still think that the specific details – pinning a struggling woman to the bed, covering her mouth with his hand – would not be beneficial for him to preserve.  But I would hope that he remembers enough to be cognizant of his own potential to hurt people while intoxicated.

Episodic memories of the specific times when he assaulted people at high school and college parties probably aren’t necessary for him to be good, but he would benefit from general knowledge about his behavior after consuming alcohol.  When I discuss drug use with people in jail, I always let them know that I am in favor of legalization.  I think that people should be allowed to manipulate their own minds.

But certain people should not take certain drugs. 

Like most people in this country, I’ve occasionally been prescribed Vicodin.  And I was handed more at college parties.  But I never enjoyed the sensation of taking painkillers.

Some people really like opiates, though.  Sadly, those are the people who shouldn’t take them.

Brett Kavanaugh likes beer.  Sadly, he’s the sort of person who shouldn’t drink it.

Honestly, though, his life would not be that much worse without it.  Beer changes how your brain works in the now.  For an hour or two, your perception of the world is different.  Then that sensation, like any other, slides into the past.

But, whether you drink or don’t, you can still bask later in the rosy glow of (mis)remembrance.

On mind control versus body control

In jail last week, we found ourselves discussing mind control.  We talked about the ants that haul infected comrades away from the colony – otherwise, the zombie will climb above the colony before a Cordyceps fruiting body bursts from its spine, raining spores down onto everyone below and killing them all.

Photo by Bernard Dupont on Flickr.

Several parasites, including Toxoplasma gondii, are known to change behaviors by infecting the brain.  I’ve written about Toxo and the possibility of using cat shit as a nutritional supplement previously – this parasite seems to make its victims happier (it secretes a rate-limiting enzyme for dopamine synthesis), braver, and more attractive.

I told the guys that I used to think mind control was super-terrifying – suddenly your choices are not quite your own! – but I’ve since realized that body control is even more terrifying.

We’d thought that each fungus that makes ants act funny was taking over their brains.  But we were wrong.  The Ophiocordyceps fungus is not controlling the brains of its victims – instead, the fungus spreads through the body and connects directly to muscle fibers.  The fungus leaves an ant’s brain intact but takes away its choices, contracting muscles to make the ant do its bidding while the poor creature can only gaze in horror at what it’s being forced to do.

If a zombie master corrupts your brain and forces you to obey, at least you won’t be there to watch.  Far worse to be trapped behind the window of your eyes, unable to control the actions that your shell is taking in the world.

A sense of free will is so important to our well-being that human brains seem to include modules that graft a perception of volition onto our reflex actions.  Because it takes so long for messages to be relayed to the central processing unit of our brains and back outward to our limbs, our bodies often act before we’ve had a chance to consciously think about what we’re doing.  Our actions typically begin a few hundred milliseconds before we subjectively experience a decision.

Then, the brain’s storytelling function kicks into gear – we explain to ourselves why we chose to do the thing that we’ve already begun doing.

If something goes wrong at that stage, we feel awful.  People report that their bodies have “gone rogue.”  If you use a targeted magnetic pulse to sway a right-handed person to do a simple task left-handed, that person probably won’t notice anything amiss.  The storytelling part of our brain hardly cares what we do – it can come up with a compelling rationalization for almost any action.

“Well, I chose to use my left hand because … “

But if you use a targeted magnetic pulse to incapacitate the brain’s internal storyteller?  The sensation apparently feels like demonic possession.  Our own choices are nightmarish when severed from a story.

On evolution and League of Legends.

Okay, here’s something that I feel like the Cosmos show did nicely – when they showed a tree representing evolutionary lineage, humans were on a branch jutting out seemingly at random to the side.  Whereas many popular science presentations of evolution depict humans as the pinnacle – we’re here at the top, and if you go back in time, our ancestors looked like chimpanzees, and if you go back farther in time, our ancestors looked like goldfish, and if you go back farther in time, our ancestors looked like sea sponges… which obviously isn’t true.  A current chimpanzee, a current fish, and a current sponge have gone through just as much evolutionary time as we have.

I think many scientists would feel bothered by phrasings such as “humans are more evolved than bacteria.”  A statement that direct might be hard to find a reference for, but humans are quite often described as “higher organisms” by comparison.  And, yeah, we are multicellular, and have nuclei – gee whiz, nuclei!  But you could quite easily argue that bacteria are more evolved.  Their generation time is shorter, so every minute effectively gives them more time to evolve than it gives us.  And they seem quite well suited for their environment – many can now thwart even directed efforts to expunge them.  I’d like to see some of those “highly evolved” humans shrug off murderous intent with such panache.

And, honestly, that was going to be the end of my essay.  I was planning to root around, find an egregious reference for a statement about how great it is that humans are so evolved, and call it finished.  But would that be cool?  I have to imagine that plenty of high school biology teachers out there have already declaimed similar truths to their students.

So, instead, here is a bonus contrasting thought – a framework in which humans are, in fact, more evolved.

Because, sure, bacteria go through many more generations than humans within any given amount of time.  But additional “rounds” of evolution won’t accomplish much if there aren’t significant options for change.  A wide range of bacteria all look pretty much the same… to me, that is, someone who is not a bacteriologist.  The times I’ve looked at them in microscopes, they just looked like bothersome squirming dots – I was doing mammalian tissue culture, so was displeased to see them, and was using relatively low magnification.  And to someone who actually knows about bacteria, the idea that they’re all the same might sound inane – some polymerize mammalian actin behind them to shoot around like rocket ships!  How is that not cool??

Well, yeah, yeah, actin rockets.

A lot of the problem, in terms of bacteria evolving descendants that I’d consider cool, is that they function much closer to the thermodynamic limit than other organisms do.  Not that I think it’s reasonable to generally assume that they function efficiently – yes, a mathematical model under that assumption reproduced rRNA copy number, but how many other salient features of the genome would it predict? – but the energetic constraints on bacteria do seem tighter than those on many multicellular organisms.  If you are competing for resources based on reproduction rate, and a limiting step is duplication of your genome, there’ll be strong pressure to keep your genome small.  But one major driver of evolutionary divergence is gene duplication: it’s easier to accrue mutations that might lend a new function if those mutations land in a second copy, where they don’t force the loss of an essential pre-existing function.
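The duplication argument can be caricatured as a toy simulation – entirely made-up numbers and rules, nothing like real population genetics.  The only rule is that at least one working gene copy must survive, so a lineage with a spare copy can bank a function-altering mutation, while a single-copy lineage has to purge every one:

```python
import random

random.seed(0)  # make the toy run repeatable

GENERATIONS = 1000
MUTATION_RATE = 0.01  # chance per copy per generation (invented number)

def evolve(copies: int) -> int:
    """Count mutations a lineage can retain, given `copies` gene copies.

    Toy rule: a mutation is kept only if another functional copy remains;
    otherwise purifying selection removes it immediately.
    """
    functional = copies  # copies still carrying the original function
    retained = 0         # mutations banked in spare copies
    for _ in range(GENERATIONS):
        for _ in range(functional):
            if random.random() < MUTATION_RATE and functional > 1:
                functional -= 1  # a spare copy wanders off to try new things
                retained += 1
    return retained

single = evolve(1)  # no spare copy: every mutation would be purged
double = evolve(2)  # one spare copy: one mutation can be banked
print(single, double)
```

The single-copy lineage retains nothing, ever; the duplicated one gets to keep its one experiment.  A real model would let the spare copy keep mutating toward a new function, but the asymmetry is the point.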

So a multicellular organism, with a big sloppy genome, has a lot more tuning knobs that can be adjusted during evolution.  Which I thought was worth writing about because it would let me make a cutesy analogy to the current design goals of the team making League of Legends.  It’s a twitchy online variant of capture the flag that I used to play – I can’t anymore, since they made the game fancier and I’m using a computer from 2006 that needs a lot of duct tape to function.  At the four corners of the base, duct tape holds stacks of three pennies each to give my computer stilts, so that there’s room for the fan to exhaust and space for the battery to hang out.  Pennies seemed cheaper than buying pegs or anything else to keep it raised.  And, right, the battery – it’s gotten bulgier over time, such that now, if it’s pushed all the way into the computer, it presses against the underside of the keyboard and makes many letters not work.  But as long as it dangles halfway out, kept in place with duct tape, the computer works fine.

I’m sure a bulging battery doesn’t indicate anything potentially disastrous, right?

Anyway, the League of Legends team recently announced their goals for the new changes, and one they stressed was that they wanted to give themselves more potential variables to tweak in case the game needed balancing in the future.  And I thought, okay, that’s a sense in which you could claim that humans are more evolved – we have so many features that can be tweaked over time, compared to the set of variables available to be modified during bacterial evolution.

But a corollary to that thought is that, since there are so many variables that could change with humans, and since we have a relatively long generational time, there’s no reason to expect that we’ve gotten much right yet.  With a bacterium, you might expect that it will be sufficiently evolved that it’s near optimal for its environment.  With a human, you should have no such expectation.

Which is what I was writing about in my project, as regards transcranial electrical stimulation.  This is a technique where you deliver excess current to certain regions of your brain with the goal of improving cognition – it often seems to work, although there have been only vague explanations of why.  And the very fact that something like this might work illustrates that human evolution didn’t get incredibly far.  Much of our reproductive success is due to cognitive ability – that’s how we were able to cover the globe, begin altering environments to suit us better (locally – globally, we may well be doing the opposite), and contemplate shooting ourselves into space.  So you might imagine that there would be evolutionary pressure on humans to make that cognitive ability as good as it can be.  Which obviously isn’t the case if you can look at a cheesy website and build something out of supplies from RadioShack to make yourself think better.