The problem with transhumanism: It's not what you think - Choice Compass


30 Sep The problem with transhumanism: It’s not what you think


I was planning to write about the transhumanist movement today, but after opening my email this morning to find an announcement for a grant competition from the DoD focused on, among other things, “human-computer symbiosis,” this blog post became a priority.

Transhumanism is defined by the transhumanists at Humanity+ as follows:

“(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.”

Humanity+ has insightful and very smart Board Members, some of whom are colleagues of mine. I applaud the second point of their definition, the study of the ethical matters that such technologies introduce, as a critical component of transhumanism.

What concerns me, however, is the first point. It cites applied reason as the key to improving the human condition, and the way to apply that reason is by eliminating aging and enhancing human intellectual, physical, and psychological capacities.

For an organization directed toward these aims, it’s a glaring omission that there is not a single neuroscientist or psychologist on its Board of Directors. They are all either ethicists (great!) or technology people (wonderful!)…but what about the people who understand what it means to try to apply reason, what reason is anyway, and what intellectual and psychological capacities are? I think I understand why such people are missing from their Board.

Here are my top two hypotheses:

1) There is a hubris among Silicon Valley folks and other technologists that has become apparent over the years. They are very smart, yes. And they can make a lot of money, yes. But that doesn’t mean they understand psychology. In fact, there is some evidence that, on average, those who choose professions related to technology do not have the best understanding of human interactions and relationships. However, because our culture reveres technology so much, there is a status problem here. Those with high status in the scientific tower (physicists, engineers, mathematicians — all male-dominated fields, BTW) tend to think that their ability to do well in a high-status field confers an ability to have insight into lower-status problems (like psychological ones).

This status problem is borne out, for example, in the insistence of the folks who created Google Glass on placing the camera on the right side of the glasses, a huge problem for anyone trying to read the emotional state of the wearer. The right upper eye area, just the area that is covered by the camera, is critical for reading subtle emotional cues, especially cues for negative emotions. No perceptual psychologist, if asked, would have suggested this camera placement. I know some of the people at Google Glass, and to their credit, they are working hard to correct these problems. But the problems wouldn't have occurred in the first place if psychology were considered a higher-status field and psychology researchers had been consulted from the start.

2) Any neuroscientist or psychologist who has enough expertise and interest in consciousness and cognition to be of interest to Humanity+ is likely also aware of the huge problems that come from trying to move forward based on applying reason. Humans have a strong belief that we can be objective and reasonable, and of course we cannot. Take Harvard’s Implicit Association Test (IAT) if you think that reason dictates your actions.

Now, there is a transhumanist argument that, in fact, emotional bias like that revealed by the IAT and similar experiments is exactly why transhumanism is a great idea: we will beat out that unreasonable bias by applying our reason. Here's the rub: when you think you're applying reason, you're not. We build computers to seem reasonable to us, but what we build them to do is often not reasonable (providing porn to an international audience, for instance). Or take the transhumanist goals of eliminating aging and uploading consciousness: are these reasonable? What would be better if we were immortal that couldn't be better served by our dying off and letting people raised in a new and different world take the helm? How would uploading our consciousness actually benefit us, when new and better-informed consciousnesses are being born all the time? The only "reasons" I can think of are emotional ones: "But, I don't want to die!" and, "I'm too important for my consciousness to be lost!"

The problem with transhumanism, as I see it, is that transhumanists first need to master the experience of being human. They're not alone; all of us have this problem. We all have trouble remembering that we're not supposed to be like robots. Especially men, these days. Our culture has so effectively squashed the idea that we have real anger, hurt, fear, and hope inside of us. We have forgotten that we are fragile and delicate, yet this fragility is what makes us such wonderfully made creatures! This cultural amnesia makes it easy for us to build organizations bent on destroying anything weak or vulnerable about us. "Why would we want those things? What's beautiful about weakness?" Oh, go read a @#^%$ poem for once!

What we really need — what can really make us “superhuman” relative to how we are now — is to work with focused, daily effort on LOVE. Compassion. Developing love for ourselves, and leaking that love out to others. Overflowing with love for our gentle, gentle, easily hurt selves. What if we use technology for that purpose?

So my gentle and loving challenge to the transhumanists is six-fold:

1) Rejoice over giving birth to new life (a human, not a cyborg) and experience fear for its survival — this does not need to be your child, of course, but a child of someone close to you will do just fine.

2) Weep over your family and friends getting sick and dying.

3) Enjoy and celebrate the wisdom and anxiety involved in getting older.

4) Work through your childhood hurts, love yourself all over again, and find ways to use technology to spread love into the world.

5) Then, when you turn 90 years old, decide whether you think you are too important to lose. Decide whether you think your mind, or the "computer minds" you create, can solve problems better than your children, who have molded their minds to this world, can. Decide whether you think you have a better plan than billions of years of evolution.

6) If you do think you are that important, then clearly you are not being reasonable. So do yourself and all of us a favor, and let your life go in love and peace. Be reasonable and move on.

