Most writers spend a lot of time thinking about how others see the world. Hopefully most non-writers spend time thinking about this too. It’s easier to feel empathy for the plights of others if you imagine seeing through their eyes.
So I thought it was pretty cool that the New York Times published an article about processing images to represent how they might appear to other species.
The algorithm shifts the color distribution of images to highlight which objects appear most distinct for an animal with different photoreceptors. I thought it was cool even though the processing they describe fails in many ways to convey how differently various animals perceive the world.
For one thing, image processing can only affect visuals. Another species may rely more on sound, scent, taste, touch, sensing magnetic fields, etc. (Perhaps it’s cheating to list both scent and taste: they are essentially the same sense, chemodetection, the difference being that human noses respond more sensitively, and to a wider variety of chemicals, than human tongues.)
If we assume that other animals will also place maximal trust in the detection of inbound electromagnetic radiation from the narrow band we’ve deemed “the visual spectrum,” we can fool ourselves regarding their most likely interpretations. For example, you could read my previous post about why rattlesnakes might assume that humans employ chameleon-like camouflage (underlying idea courtesy of Jesus Rivas & Gordon Burghardt).
The second problem with assuming that an image with shifted colors represents how another animal would view the world is on the level of neurological processing. When a neurotypical human looks at an image and something resembles a face, that portion of the image will immediately dominate the viewer’s attention; a huge amount of human brainpower is devoted to processing faces. Similarly, some dogs, if another dog enters their visual field, have trouble seeing anything else. And bees: yes, they see more blues & ultraviolets than we do, but it’s also likely that flowers dominate their attention. I imagine it’s something like the image below, taken with N and her Uncle Max on a recent walk. Although, depending on your personality, you might have some dog-style neurological processing, too.
Even amongst humans this type of perceptual difference exists. A friend of mine who does construction (ranked the second-best apprentice pipefitter in the nation the year he finished his training, despite being out at a buddy’s bachelor party, i.e. not sleeping, all night before the competition), when he walks into a room, immediately notices all exposed ductwork, piping, etc. Most people care so little about these features as to render them effectively invisible. And I, after three weeks of frantic itching and a full course of methylprednisolone, could glance at any landscape in northern California and immediately point out all the poison oak. My daughter can spot a picture or statue of an owl from disconcertingly far away and won’t stop yelling “owww woo!” until I see it too.
The color processing written up in the New York Times, though, was automated. Given the current state of computerized image recognition, you probably can’t write a script that would magnify dogs or flowers or poison oak effectively. Maybe in a few years.
There’s one last big problem, though, and this one concerns the colors themselves. There is simply no way to re-color images so that a dichromatic (colloquially, “colorblind”) human would see the world the way a trichromat does.
(A brief aside: Shortly after I wrote the above sentence, I read an article about glasses marketed to colorblind people to let them see color. And the basic idea is clever, but I don’t think it invalidates my claim.
Here’s how it works: most colorblind people are dichromats, meaning they have two different flavors of color receptors. Colored light stimulates these receptors differentially: green light stimulates green receptors a lot and blue receptors a little. Blue light stimulates blue receptors a lot and green receptors a little. The brain processes the ratio of receptor stimulation to say, “Ah ha! That object is blue!”
A typical human, however, is a trichromat: the brain uses three data points to determine an object’s color instead of two. The red and green receptors absorb maximally near the same part of the spectrum, though, so the red-to-blue and green-to-blue ratios are generally very similar. The third receptor type mostly helps a trichromat distinguish between red and green.
This means a dichromat is good at distinguishing colors within only a narrower slice of the electromagnetic spectrum. For a dichromat, reds and greens alike register as “green receptor stimulated a lot, blue receptor only a little.”
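The ratio logic above can be sketched in a few lines of code. This is a minimal toy model, not real physiology: the cone sensitivity curves are made-up Gaussians (real cone spectra are broader and asymmetric), and the peak wavelengths are only approximate. The point it illustrates is that beyond the green peak, a blue/green dichromat’s stimulation ratios all converge to “green a lot, blue almost nothing,” while a trichromat’s third data point keeps those wavelengths separable.

```python
import math

# Hypothetical, simplified cone sensitivity curves (Gaussians in wavelength).
# Peak values are rough approximations; the bandwidth is made up.
PEAKS = {"blue": 420.0, "green": 530.0, "red": 560.0}
WIDTH = 50.0  # nm

def stimulation(cone, wavelength_nm):
    """Relative stimulation of one cone flavor by monochromatic light."""
    return math.exp(-((wavelength_nm - PEAKS[cone]) ** 2) / (2 * WIDTH ** 2))

def signature(wavelength_nm, cones):
    """Normalized stimulation ratios -- the data the brain compares."""
    raw = [stimulation(c, wavelength_nm) for c in cones]
    total = sum(raw)
    return tuple(round(r / total, 3) for r in raw)

# A blue/green dichromat sees 600 nm and 650 nm light almost identically:
# in both cases the green cone dominates and the blue cone barely responds.
print(signature(600, ["blue", "green"]))
print(signature(650, ["blue", "green"]))

# A trichromat's third data point separates the same two wavelengths.
print(signature(600, ["blue", "green", "red"]))
print(signature(650, ["blue", "green", "red"]))
```

In this toy model the dichromat’s two signatures are nearly indistinguishable while the trichromat’s differ substantially, which is the whole argument in miniature.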
Now, if you imagine that the visual spectrum is a number line running from 0 to 100, a dichromat would be good at distinguishing colors in the 0 to 50 segment and bad at distinguishing colors beyond that point: everything at the green wavelength, ca. 500 nanometers, or longer would appear to be green.
But you could take that 0 to 100 number line and just divide everything by 2. Then every color would look “wrong,” since no object would appear to be the same color it was before you put on the wacky glasses. And you’d be less able to distinguish between close shades: if two colors needed to be 15 nanometers apart to seem different before, now they’d need to be 30 nanometers apart. But a dichromat could distinguish colors across the same full visual spectrum as a trichromat.
That’s roughly how the glasses should work: inbound light is shifted so that every color lands in the blue-to-green range, and the visual spectrum is condensed.)
Of course, no amount of changing an image will allow you (I’m assuming that you, dear reader, are a trichromat. But my assumption has a 10% chance of being wrong. My apologies! I care about you, too, dichromatic reader!) and a dichromatic friend to see it the same way. But you can change your friend. You can inject a DNA-delivering retrovirus into your friend’s eyeball, and after a short neurological training period, you and your friend will see colors the same way!
It’s possible that your friend won’t like you any more if you do this. But here’s how it works: the retrovirus encodes for the flavor of photoreceptor that none of your friend’s cone cells were expressing. Upon infection, the virus will initiate production of that receptor… so now a subpopulation of cone cells will be sending new signals to the brain. They’ll be stimulated by different wavelengths of light than they were before. And brains, magically plastic things that they are, rapidly rewire themselves to incorporate any new data they have access to.
(If you’re interested in this sort of thing, you should look up biohacking. Like implanting magnets in your fingers to “feel” electric or magnetic fields. But I’m not going to link to anything. Wrestling your friend to the ground in order to inject recombinant DNA into his eyeball? That makes me smile. But slicing open your own fingertips to put magnets under the skin? That’s too creepy for me).
If a brain is suddenly receiving different signals after exposure to red versus green light, it’ll use that information. Which means: Color vision achieved! Unfortunately, viral DNA integrates randomly, so a weird eye cancer might’ve been achieved as well. You win some, you lose some.
What we call “color vision,” though, is still only trichromatic. With three flavors of cone cells, humans can do a pretty good job distinguishing colors from about 400 to 700 nanometers. But some species have more flavors of cone cells, which means they can distinguish the world’s colors more precisely. Even some humans are tetrachromats, although their fourth cone cell flavor is maximally stimulated by light midway between red and green, a part of the electromagnetic spectrum that trichromatic humans are already good at parsing. And tetrachromatic humans are rare: to the best of my knowledge no languages have a word for that secret color between red and green. I don’t know any words for it, at least, but maybe this too is a secret guarded by those who see it.
Still, no amount of image processing would allow you, dear reader, even if you’re one of those rare tetrachromatic individuals, to see the world in all the spangled glory seen by a starling or a peacock. This graph shows the stimulation of each flavor of cone cell receptor by different wavelengths of light.
And even the splendorous beauty seen by birds pales in comparison to the way we thought mantis shrimps perceive the world. Because mantis shrimps, see, have twelve flavors of photoreceptors, which means that if their brains processed colors the same way ours do, by considering the ratio of receptor flavors stimulated by incident light, they’d be exquisitely sensitive to color. Here: compare the spectral sensitivity graph for humans and starlings, shown above, to the equivalent graph for mantis shrimps. This makes humans look pathetic!
If you haven’t seen it, you should definitely read this cartoon about mantis shrimp perception from The Oatmeal.
It’s possible that mantis shrimps process color differently from humans, though. Instead of computing ratios of cone-flavor activation to determine the color of an object, they might decide that an object is the color of whatever single cone flavor is most stimulated. In other words, while humans use stimulation ratios from our mere three flavors of cone cells to identify thousands of hues, a species with a dozen photoreceptor flavors might regard every object as being one of those dozen discrete colors.
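The hypothesized difference amounts to swapping a ratio computation for an argmax. Here is a sketch under stated assumptions: the twelve receptor peaks are evenly spaced made-up values (real mantis shrimp receptor spectra are not evenly spaced), and the Gaussian response curve is the same simplification as before.

```python
import math

def response(peak_nm, wavelength_nm, width=40.0):
    """Toy Gaussian response of one receptor flavor to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width ** 2))

# Twelve hypothetical receptor flavors, evenly spaced from 300 nm up.
SHRIMP_PEAKS = [300 + 35 * i for i in range(12)]

def argmax_color(wavelength_nm):
    """Discrete-bin model: the perceived color IS the most-stimulated receptor."""
    responses = [response(p, wavelength_nm) for p in SHRIMP_PEAKS]
    return responses.index(max(responses))

# 500 nm and 520 nm light differ by 20 nm, but under this model they fall
# in the same bin, so the shrimp would call them the same color.
print(argmax_color(500), argmax_color(520))
```

Under the ratio model, those same twelve receptors would support extremely fine discrimination; under the argmax model, the world collapses into a dozen discrete colors, which is why the behavioral test described next is so telling.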
Indeed, that’s what a recent study from Thoen et al. (“A Different Form of Color Vision in Mantis Shrimp”) suggests. They trained mantis shrimps to attack a particular color of light in order to win a treat, then tested how well the shrimps could distinguish that color from nearby wavelengths. In their hands, the shrimps needed approximately 50 nanometers separating two colors to tell them apart, whereas humans, with our meager three flavors of photoreceptors, can often distinguish colors as close as 1 or 2 nanometers apart.
Still, it’s hard to know exactly what a shrimp is thinking. Testing human cognition and perception is easier because we can, you know, talk to each other. Describe what we see.
With humans, the biggest barrier to empathy is that sometimes we forget to listen.