#E25 Ending Human Suffering Using the Three Supers of Transhumanism with David Pearce, Philosopher and Co-Founder of Humanity+

About David Pearce

David Pearce is a British transhumanist philosopher and co-founder of Humanity+. David's philosophy is based on the idea that we will have the capacity to eliminate suffering from all sentient life. Inspired by the fear of death and disturbed by human suffering, David has dedicated his life to finding a cure for death and creating a "Triple S" human civilization of super intelligence, super longevity, and super happiness. He's the author of the manifesto The Hedonistic Imperative, which outlines the ethical and moral obligation to eliminate suffering in all forms of life.

Read the HYPERSCALE transcript.

(00:33) Briar: Hi, David, how are you? It's been a while since I've seen you.

(01:52) David: Briar! It's very good to see you and hear your voice again.

(01:59) Briar: How's everything been? Have you been busy? Whereabouts in the world are you at the moment?

(02:05) David: Well, I am currently in Portugal, just back from the CoinDesk Consensus conference. Let's say crypto is not my normal stamping ground. I've not got a background in crypto, but one of the crypto zillionaires is a fan of my work: that's Olaf Carlson-Wee of Polychain Capital. So yes, I've been on stage at Consensus talking, not about paradise engineering on the blockchain, but essentially about my ideas on the use of biotechnology to phase out suffering throughout the living world. Yes, I felt a bit out of place. I was probably the only person at the conference who doesn't own any crypto, but it was an interesting experience sociologically.

(02:52) Briar: Amazing. Well, I can't wait to deep dive today and learn a little bit more about what you've been doing. I remember the last time we connected, you revealed to me that one of your biggest driving forces, and the reason you got into transhumanism and all of these very philosophical opinions that you hold, is, I think you were about 12, weren't you, when you thought how frustrating and disappointing it was that we all die at the end of the day?

(03:23) David: Yeah, actually, it was probably much earlier than that that I was preoccupied by the problem of aging and death. At a very early age, I read somewhere that one allegedly loses 2,000 brain cells a day after the age of 18, and I resolved to find a cure for the aging process when I became a bit older. By about the age of 12 or 13, I came to realize that this probably wasn't realistic, but I then stumbled across Robert Ettinger's The Prospect of Immortality in the local library and resolved to be cryonically frozen instead, with a view to being reanimated centuries hence, when aging was a curable condition.

(04:12) Briar: So you are actually going to be frozen. Is this the plan for you?

(04:19) David: Yes, and I'm more likely, I think, to opt for cryothanasia rather than cryonics, because if you opt for cryonics, by the time you are 95 the chances are you're going to be pretty gaga and far gone. But if you are suspended at, let's say, the age of 75, when you're more or less cognitively intact, probably more information is preserved. I should add that I'm personally a little skeptical that posthuman superintelligence will want to reanimate Darwinian malware from a previous era, but by getting suspended, one keeps the option open. Another reason for urging cryonics, and perhaps cryothanasia, is that it can potentially defang, or at least partially defang, death: if instead of visiting grandpa's grave you can go to the cryonics tank, perhaps grandpa seems a little more real, simply resting. Radical anti-aging is only one strand of transhumanism. I like to define transhumanism in terms of the three supers: super longevity, super intelligence, and my own focus, super happiness. But yes, from a fairly early age, I was preoccupied by what we would now call transhumanist technologies and ideas.

(05:45) Briar: There are so many things that I want to deep dive into with you today. So whereabouts are these? Are they factories? Are there people frozen all lined up? Where would we find these places?

(05:59) David: Alcor is perhaps the best-known organization. It's run by Max More, one of the founders of transhumanism. Alcor doesn't practice cryothanasia, this is cryonics, but yes, I don't know precisely how many people are currently suspended, though a fair number have signed up. The premiums are relatively cheap if you are young, higher if you are older. But yeah, this is one strand of the transhumanist commitment to phasing out the biology of aging. The other strand, associated particularly with Aubrey de Grey, is focused not so much on cryonics but on actually tackling the biology of aging right now. It's a kind of twin-track strategy, because I suspect a lot of your older listeners are thinking, well, yes, perhaps one day science will find a cure for aging, but I'm not going to make it. At its best, transhumanism offers something for everybody, and no one need feel they're missing out if they sign up for cryonics or, more controversially, cryothanasia.

(07:15) Briar: I've been really enjoying my conversations with transhumanists, and I've been enjoying The Transhumanist Reader, the book I've been reading by Max More and Natasha Vita-More, whom you mentioned before. Tell us a bit about the three supers that you talk about. What are they?

(07:33) David: Okay, super longevity, which we've touched on: just as silicon robots can be repaired and upgraded indefinitely, there doesn't seem to be any principled reason why the same shouldn't be true for organic robots like you and me too. So that is one of the three big supers of transhumanism, super longevity. Super intelligence is a contested term; there are different conceptions of super intelligence within the transhumanist movement. One conception that I particularly focus on is the idea that super intelligence is going to be our AI-enhanced biological descendants: we are going to genetically rewrite our source code, amplify our intelligence, and augment ourselves with artificial intelligence. That's one conception, full-spectrum super intelligence, our biological descendants. Another conception, associated with transhumanist Ray Kurzweil, envisions a kind of complete fusion of humans and machines, with the possibility of something like mind uploading, in which you are digitally scanned and uploaded to a less perishable substrate. This is the idea of a kind of technological singularity.

There is another conception of super intelligence, arguably the most radical, and also for many people the most alarming, that sees the prospect of recursively self-improving artificial intelligence undergoing a kind of runaway "foom" effect that effectively replaces sentient life, replaces sentient humans. It was proposed first by the mathematician I. J. Good back in 1965, has been developed by Eliezer Yudkowsky and MIRI, and was written up by Nick Bostrom in his book Superintelligence; it's very topical at the moment. Essentially, perhaps, humans are going to be retired in some way by artificial intelligence. I should add that I'm a great skeptic of this scenario. I think full-spectrum super intelligence will also be super sentient, and that classical Turing machines, digital computers, are zombies incapable of solving the binding problem. They are not the right architecture for full-spectrum general intelligence, but this is a contested view amongst transhumanists.

Anyway, those are two of the three supers, super longevity and super intelligence. The third super, and the one that preoccupies me most, is super happiness and the abolitionist project. I think we have an overriding moral obligation to phase out the biology of suffering throughout the living world and replace pain, misery, and malaise with a new information-signaling system, a new architecture of mind: life based entirely on information-sensitive gradients of intelligent bliss. Genome editing, genome reform, promises essentially a new reward architecture. And although this probably sounds like sci-fi, one can point to rare genetic outliers today who are essentially never unhappy, who go through life responding adaptively to good and bad stimuli but almost never falling below hedonic zero.

I hope that all prospective parents worldwide will be given access to preimplantation genetic screening, counseling, and soon genome editing, to allow a number of things, including the ability to choose the approximate pain threshold and pain tolerance of your future children. There is a single gene, the SCN9A gene, the so-called volume knob for pain, with dozens of different variants, that essentially allows you to choose the pain tolerance of your future kids. But it's not just physical wellbeing. It'll also be possible to choose the approximate hedonic range and hedonic set point of your future offspring. The hedonic set point is the approximate level of wellbeing, or ill-being, around which people tend to fluctuate in the course of a lifetime, and with just a handful of genetic tweaks it will be possible to choose a very high hedonic set point for your future kids, such that their default levels of wellbeing are very high.

Now, beyond this kind of tweaking, looking further ahead to centuries, millennia hence, it'll be possible to ratchet up hedonic range and hedonic tone and enjoy life based on gradients of superhuman bliss that are orders of magnitude richer than anything physiologically possible today. But yes, okay, that really is pretty sci-fi. From an ethical perspective, I think, as I said, the overriding obligation is to minimize, mitigate, and then phase out suffering altogether. At the moment, something like 800,000 people each year take their own lives, essentially because they're victims of depression, and there is all manner of pain, misery, and malaise in the world. If one could empathize with even a fraction of it, one would go psychotic. But the problem of suffering is fixable technically.

What I would like to see, though sadly this isn't a prediction, is a kind of hundred-year plan under the auspices of the World Health Organization to genetically eradicate all forms of suffering. Realistically, the timescale is likely to be centuries, but the level of suffering in the living world is an adjustable parameter. It'll be possible to help not just humans but non-human animals too, starting with cultured meat and animal products that don't involve animal agriculture and today's horrific abuse of non-humans.

What's more, one can extend the abolitionist project to the rest of the living world: free-living nature. The entire biosphere is going to be programmable, and it'll be possible to use technologies such as synthetic gene drives, which cheat the laws of Mendelian inheritance, together with cross-species fertility regulation via immunocontraception, tunable gene drives again, to create a kind of pan-species welfare state.
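The sense in which gene drives "cheat" Mendelian inheritance can be made concrete with a toy allele-frequency model (an illustrative sketch of my own, not something from the conversation; the function names and parameter values are invented):

```python
# Toy model: allele-frequency dynamics of a homing gene drive vs. an
# ordinary Mendelian allele under random mating.

def next_freq(q, c):
    """Drive-allele frequency after one generation of random mating.

    q: current frequency of the drive allele in the gene pool
    c: homing ("conversion") rate in heterozygotes
       c = 0.0 -> ordinary Mendelian allele (transmitted 50% of the time)
       c = 1.0 -> perfect drive (every heterozygote transmits it)
    A heterozygote transmits the drive allele with probability (1 + c) / 2.
    """
    # DD individuals always transmit the drive allele; Dd individuals
    # transmit it with probability (1 + c) / 2; dd individuals never do.
    return q * q + q * (1 - q) * (1 + c)

def spread(q0=0.01, c=1.0, generations=12):
    """Track the drive-allele frequency over a number of generations."""
    freqs = [q0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], c))
    return freqs

# A 1% release of a Mendelian allele stays at 1% indefinitely; a
# 95%-efficient gene drive sweeps toward fixation within a dozen
# generations.
mendelian = spread(c=0.0)
drive = spread(c=0.95)
```

That super-Mendelian transmission is what makes a small release spread through a whole wild population, and hence why the conversation treats the biosphere as "programmable" in principle.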

Now, I'm glossing over all manner of complications and challenges, technical, ethical, ideological, religious. But if, like me, you're a transhumanist, you want to create essentially a Triple S civilization of super longevity, super intelligence, and super happiness. That's what I work for as a transhumanist.

(15:32) Briar: It's amazing to hear you dive into all of these different topics, and I do want to explore them further. Something that has appealed to me in my conversations with transhumanists such as yourself, and in exploring this a lot more since we last met, is the idea that behind everything you do is the thought: hey, we have this future and we can help drive it. We can create it, we can use technology, we can think about the world that we want to live in. We can play an active role in it. But something that concerns me is: do the people with the money, the big corporations or governments, even want super happiness for the people? Because the way society seems to be built at the moment, they love this sick-care system. They don't want people to be happy and healthy. They want people to be sick, because when they're sick, they make money from them.

(16:37) David: Well, in general I have an extremely dark view of human nature, life, capitalism, and business. But that said, on the whole I don't think there are many people out there, including titans of business, who actively want people to be sick. Now, any form of biomedical revolution, any vision of our glorious future that depends on heroic self-sacrifice, I'm extremely skeptical will come to pass. But one of the beauties of information-based technologies is the way the price comes crashing down and trends inexorably to zero, which is why essentially everyone can have a mobile phone of some description, and everyone can have access to the world's musical resources, literary resources, movies. I mean, this is sounding rather utopian.

There are some technologies, yes, that the rich are going to benefit from first and that poorer people are going to be excluded from. But something like preimplantation genetic screening, counseling, and genome editing is extremely cost-effective. The price is going to collapse, and I don't know of any reason, any technical reason at any rate, why it shouldn't be ubiquitously available. Possibly I'm too optimistic there, but really, I think business leaders are in many ways people like you and me. Sure, they've got all kinds of dark motives and self-interest, but they're not malicious. They don't actively want people to suffer. I'm aware this may sound rather naive, but I am cautiously optimistic that essentially everyone will have access to these Triple S technologies.

(18:51) Briar: I think that I definitely agree with you. I like to think that humans are fundamentally good. I just think when it comes to these big corporates, there are so many layers to them, and they're so particular about making money and things like this. I recently wrote a letter to the White House, actually, which said that when we're thinking about artificial intelligence, we need to be thinking about the future from both the good side and the bad side, and we need to be thinking now, so that we can put regulations in place, so that we can help drive it, so we can plan, basically. And in my letter I wrote: let's not just leave it to the big technology corporations. Let's get a diverse set of people together, such as yourself, philosophers, transhumanists, psychologists, unions, all different kinds of people, so that everyone can put their brains together. Because I personally do not think this happens enough when we're thinking about our future. What are your thoughts about this?

(19:56) David: What I again try to avoid doing is going on a rant against capitalism and the cash nexus. And here I'm quite unusual amongst transhumanists: I'm a socialist, I believe in fairness and justice, and I'm very suspicious of the cash nexus and business. But realistically, this kind of scenario isn't going to happen, so we've got to work with the grain of human nature and the cash nexus. And if you look at big corporations, or someone like Elon Musk or Sam Altman, it is essentially a mixture of motivations. They shouldn't be lionized, but nor should they be demonized either. What we should be aiming for, essentially, I think, is technical solutions to ethical problems. Another huge example: I think perhaps the greatest evil in the world today is animal agriculture, and I strenuously urge everyone to become vegan. But if one relies on moral argument alone, it will perhaps take centuries. If one goes for technical fixes like cultured meat and animal products, it's possible to have a scenario in which we end animal agriculture and end the horrors of non-human animal suffering without actually calling for heroic self-sacrifice, or even personal inconvenience, for most people. So that, I think, should be our focus: solutions that don't actually call for moral heroism.

(21:41) Briar: And you spoke before about the fact that our genes basically play a role in how happy we feel. I think it's very interesting that you bring this up, because recently, when I was speaking to one of my friends, she asked me to reflect on her top three qualities. One that came glaringly to mind was: you know what, I don't think I've ever seen you flustered, negative, or depressed; you are continually a very happy person. And she said, it's not very often I feel any negative emotion, and if I do, I prefer just to hide away and meditate on it. I thought that was really interesting. And one technology that comes to mind, which is perhaps a little more well known, is CRISPR; there's been a lot happening in this space. But can you tell me a little more about genome editing, and how this could potentially start at birth, or not even birth but before birth, and how we could be using it to build a successful human race?

(22:49) David: Yes. One characteristic of not just humans but human and non-human animals alike is the hedonic treadmill: essentially, there is no evidence that on average most people are happier or sadder than they were on the African savannah. There is the so-called lottery paradox: six months after either winning the lottery or becoming quadriplegic in a terrible accident, most people will have reverted to their previous level of wellbeing or ill-being before the win or the tragedy. I think our goal should be not to abolish the hedonic treadmill but to recalibrate hedonic set points, so the treadmill still operates, but at a much more exalted level. You mentioned your friend; imagine if everyone, thanks to access to preimplantation genetic screening, counseling, and CRISPR, had an extremely high hedonic set point, so that their lives were animated by gradients of intelligent bliss. This is going to be technically feasible.
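The distinction between abolishing the treadmill and recalibrating the set point can be sketched as a toy reversion model (a hypothetical illustration of mine, not a validated psychological model; all names and numbers are invented):

```python
def mood_series(set_point, shocks, reversion=0.5):
    """Toy hedonic-treadmill model: after each good or bad shock, mood
    decays geometrically back toward a fixed set point.

    set_point: baseline wellbeing the person reverts to
    shocks:    list of one-off boosts (+) or blows (-) to mood
    reversion: fraction of the deviation from baseline that persists
               from one step to the next
    """
    mood = set_point
    series = [mood]
    for shock in shocks:
        mood = set_point + reversion * (mood - set_point) + shock
        series.append(mood)
    return series

# Same life events, different genetic set points: the treadmill still
# operates (mood spikes and dips, then reverts), but around a higher
# baseline after "recalibration".
shocks = [4, 0, 0, -3, 0, 0, 0]          # a windfall, then a setback
baseline = mood_series(set_point=0, shocks=shocks)
recalibrated = mood_series(set_point=5, shocks=shocks)
```

In this sketch the recalibrated series tracks the baseline one exactly, just shifted upward, which is the point being made: raising the set point changes where you fluctuate around, not whether you fluctuate.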

Should we do it? Well, critics will raise the spectre of eugenics, and there are indeed all manner of pitfalls. But we now have the technical tools. Like so many people, I'm interested in social justice and building a fairer society. We've been trying to improve our environment for hundreds, perhaps in one sense thousands, of years. But what we are not doing is actually tackling the negative feedback mechanisms of the hedonic treadmill, whereas even a handful of genetic tweaks, let's say to the FAAH and FAAH-OUT genes or the COMT gene, can enable essentially everyone to enjoy an extremely high default quality of life. Now, there are two strands here. There is somatic gene therapy to help existing humans, which is going to be feasible; it's being introduced for a handful of well-known genetic diseases, and it's very expensive too. It'll be feasible for the general population later this century.

Much easier than trying to remedy the genetic deficits of existing humans is to practice responsible parenthood. If you think it is ethically permissible and justifiable to bring new life, and potentially new suffering, into the world, then rather than today's genetic crapshoot, just trusting the wisdom of nature or providence or luck or what have you, you can choose the genetic makeup of your future children. And although some traits, like intelligence, are influenced by thousands of different genes and allelic combinations, for something like mood and pain tolerance, even a handful of genetic tweaks and intelligent, responsible choices can load the genetic dice in your children's favour and underwrite an extremely high quality of life.

Although one can speak in terms of enhancement, and most people are extremely uncomfortable with the idea of genetically enhancing humans, one can think instead in the language of remediation. Because if one takes seriously the World Health Organization's definition of health, as laid out in its founding constitution of 1948, health is a state of complete physical, mental, and social wellbeing, then we are all profoundly sick. But by practicing this kind of germline genetic intervention, one can create an approximation of healthy people. In some ways, the World Health Organization has a definition of health that is more radical than anything transhumanists have come up with, because complete health as defined by the World Health Organization hasn't yet been enjoyed by any sentient being in history. Instead of complete health, I think we should be aiming for information-sensitive gradients of wellbeing. It's a bit of a mouthful, but it's important, because instead of being uniformly blissful or blissed out, it's possible to conserve critical insight, social responsibility, and personal growth, even if your hedonic set point is radically higher than today's.

Another beauty of hedonic recalibration is that it doesn't ask you to buy into my vision of paradise and the good life. You can conserve, if you want to, your existing values and preference architecture. It's just like waking up tomorrow morning in an extremely good mood, able to pursue all the projects that you care about. Once again, there are many, many complications that we're glossing over here. But I'm not trying to sell my vision of paradise; really, think of what you enjoy and appreciate most in life. Think of your best experiences. Imagine if life could be like that all the time, only better. That's what is going to be feasible with biotech.

(28:46) Briar: In one of my recent conversations, someone said to me that potentially by the year 2045, I could have the option of whether I want to live forever or not. And something else they said is that not only will I have that option by then, but potentially I will have nanotechnology inside me that will take away any kind of deterioration, so I will actually become younger. Is this something that you believe in? And following on from that, what happens if we're all living forever? Isn't the whole purpose of dying to make life meaningful?

(29:30) David: Okay, there are a number of different strands to your question. First, timescale: 2045 is especially associated with Ray Kurzweil; it was on the cover of Time magazine, "2045: The Year Man Becomes Immortal." Given the current rate of technological progress, exponential growth as some people would see it, perhaps 2045 is a credible timescale. I suspect it's too optimistic, in that however smart your intelligence or your AI, you still need clinical trials. But perhaps I'm wrong; temperamentally, I'm a pessimist. So yeah, I think later this century it is going to be possible to fix aging, just as it's possible to fix suffering. In terms of death giving life meaning, I think this is more of a rationalization: as long as aging, senescence, death, decrepitude, and all the infirmities of age are inevitable, it helps if one can rationalize them, both to oneself and to others.

When these rejuvenation and anti-aging technologies actually hit the shelves, hit the mainstream, I think all our rationalizations are simply going to crumble away. Take the ghastly disease progeria, the accelerated aging syndrome in which someone dies looking like an extremely old person at the age of perhaps 15 or 16. Without exception, everyone recognizes that progeria is a ghastly disorder, or set of disorders. But from the perspective of our successors, we all have this kind of progeroid syndrome of aging, and it's going to be fixable, without the need to rationalize. So yes, I think our existing ideology and conceptions of death and decrepitude are just going to fizzle away into nothing.

(31:53) Briar: So we spoke about super intelligence, and of course so many people are talking about artificial intelligence; it feels like the buzzword of this year. Do you think we are doing enough as a human species to build our intelligence, to explore things such as Neuralink and other technologies, mind uploading, for instance? Because one of the concerns I have, and I think you touched a little on this earlier, because I know other people have this concern as well, is that artificial intelligence just gets so smart, it's training and becoming a super intelligence, and maybe on our side as humans, the development we are doing to augment ourselves with technology and become smarter is a little bit slower. And circling back on this, people are spending so much time, well, children are spending so much time on the likes of TikTok, watching things such as twerking. Part of me just worries that maybe we are not focusing enough on making our younger generation smart.

(33:05) David: This sounds rather pedantic, but it's not: I think we need to clarify what we mean by intelligence. There's one conception of intelligence, measured by so-called IQ tests, that captures one particular form of intellectual ability, a very important one. One could call it the so-called autistic component of intelligence. But there are other forms of intelligence too. For example, what seems to have driven, at least in part, the evolution of distinctively human intelligence has been mind-reading prowess, social cognition, cooperative problem-solving. We need a much, much richer conception of what intelligence amounts to, and some conceptions of super intelligence are more akin to a super-Asperger's than a full-spectrum super intelligence. I mentioned cooperative problem-solving, social cognition, mind-reading, but there are other aspects of intelligence too: for example, the ability to introspect, and the ability to explore radically altered states of consciousness.

Our current digital classical computers, which essentially are implementations of classical Turing machines, and also classically parallel connectionist systems, are not phenomenally bound subjects of experience. They can't solve the binding problem. Their lack of sentience is architecturally hardwired. I suspect many people, many AI researchers, think of consciousness as something computationally incidental or irrelevant, just like the textures of the pieces in a game of chess, which are irrelevant to the gameplay, and they would generalize from chess to all forms of cognition. But I would argue that any conception of full-spectrum super intelligence must embrace phenomenally bound consciousness, and our machines are simply of the wrong kind of architecture to support phenomenally bound minds.

Minds are incredibly adaptive in their capacity to phenomenally bind disparate feature-processors, so that right now, instead of being a so-called micro-experiential zombie made up of 86-billion-odd membrane-bound neurons, you are a subject of experience: a unified subjective experience with a unified self and a unified field of perception.

And this is incredibly fitness-enhancing. Whereas some people see consciousness, sentience, as a mere detail, I see it as the key to the plot, because phenomenally bound consciousness is incredibly computationally powerful. And the best way to illustrate the sheer power of phenomenal binding is to look at syndromes where it breaks down.

Someone with integrative agnosia, for instance, can see a tooth and a mane and jaws, but they can't see a lion. Someone with simultanagnosia can only see one object at once. Someone with motion blindness, akinetopsia, can't see motion: they just see a lion in the distance, then a lion nearer, then a lion almost on top of them. The ability of animals, creatures with the capacity for rapid self-propelled motion, to run these phenomenally bound world-simulations in almost real time, what naive realists call perception, is incredibly adaptive. And it is going to enable us to explore a vast multitude of alien state-spaces of experience. Our machines just aren't capable of doing this; their ignorance is architecturally hardwired.

So although I am absolutely fascinated by, for example, ChatGPT, and use it a great deal, it's not nascent super intelligence; it simply has the wrong kind of architecture for general intelligence. I should stress, in case anyone has any doubts, that this is a controversial view, but it's not ill-motivated. When I speak of full-spectrum super intelligences, I'm most certainly not thinking of digital zombies, classical Turing machines. They're not going to wake up; they're not going to become full-spectrum super intelligences. They are tools, very useful tools. Nor am I saying that AI isn't potentially dangerous, because it is. But the only information-processing system that really, really scares me is other male human primates doing what evolution, in a sense, designed male human primates to do, which is to get together with other humans and wage territorial wars of aggression.

Which is why, although in some ways I'm extremely optimistic about a future Triple S civilization of super intelligence, super longevity, and super happiness, I'm very pessimistic about this century. Are we going to be able to avoid nuclear war? Things could so easily spiral out of control in Ukraine, for example. So yes, I am genuinely torn about the future. I think the medium- to long-term future is indescribably glorious, in virtue essentially of control over our reward circuitry, life based on gradients of superhuman bliss. But navigating this century is going to be treacherous.

(39:21) Briar: You're not the first person who's said this to me. I recently heard from somebody else that they thought the next 20 years are going to bring change as significant as when humans went through the Industrial Revolution.

(39:41) David: Yes. So many people throughout history have thought they were living at the end of times, messianic movements, "this time it's different." And it almost is: clearly, momentous changes are afoot in society, even within one's own lifetime. One thinks back on how far we've come since the birth of the internet. However, for all this talk of momentous change and progress, humans and non-human animals today still have the same default consciousness, the same core emotions, the same pleasure-pain axis, the same default settings for hedonic set points, wellbeing and ill-being. Are we going to revolutionize minds and upgrade our reward circuitry, or are we going to essentially conserve the biological-genetic status quo? I'm cautiously optimistic that there is going to be a reproductive revolution of designer babies, and that 300 to 500 years from now, the concept of suffering will be inconceivable. But, and this is an old saying, how do you make God laugh? Tell him your plans. I could be hopelessly mistaken.

(41:06) Briar: So David, thank you so much for sharing all of this wonderful knowledge with me today. It's been incredibly interesting, and I'm very excited now to keep exploring. It's opened up a whole can of questions in my head, so to speak, and I look forward to reconnecting in the future.

(41:24) David: Thank you very much for having me, and thank you for a wonderfully stimulating discussion. Infinite bliss, and thanks once again.

Briar Prestidge

Close Deals in Heels is an office fashion, lifestyle and beauty blog for sassy, vivacious and driven women. Who said dressing for work had to be boring? 

http://www.briarprestidge.com