#E42 In The Face Of AI, Humanity Must Prove Its Value With Steve Fuller

About Steve Fuller.

Professor Steve Fuller is an American social philosopher renowned for his groundbreaking contributions to the field of science and technology studies. Serving as Professor of Sociology at the University of Warwick, he has delved into diverse areas such as social epistemology and academic freedom, and has fervently advocated for intelligent design and transhumanism.

As the founder of the journal Social Epistemology, Steve has shaped critical discourse in understanding the social dimensions of knowledge. His intellectual pursuits extend to the written realm, where he has authored influential works such as "Humanity 2.0" and "Post-Truth: Knowledge as a Power Game," offering profound insights into the evolving landscape of human interaction with science and technology. Steve's dedication to exploring the intersections of academia, philosophy, and societal advancement has solidified his position as a thought leader in the ever-evolving field of social philosophy.

Read the HYPERSCALE transcript.

[00:52] Briar: Well, hi everybody, and welcome to another episode of Hyperscale. I've got Dr. Steve Fuller on the call with me, and I was just listening to a podcast with him yesterday, so I'm very excited to be talking to him about all things future: transhumanism, and what society will be like. Welcome to the show.

[01:13] Steve: Thank you. Thank you for having me, Briar.

[01:16] Briar: So let's just dive into it. When people are looking at your body of work, and you've got an absolutely fantastic body of work and knowledge and research that you've conducted over the years, what is the number one thing that you would want them to take away and really get from it?

[01:36] Steve: Well, I would say it's the way in which our understanding of humanity has been very profoundly shaped by the developments in science and technology, certainly over the past 400 years, but arguably since the beginning of homo sapiens. And so the transformations that we're seeing now, which seem to be happening at an ever faster rate, are in a sense continuing something that's always been very much part of the human trajectory.

[02:02] Briar: And what is the human trajectory? When I'm talking to people on my podcast, I'm hearing very different things about the future and what it might hold. Some people say it's going to be this very dystopian future; when they talk about the likes of transhumanism, they say it's the world's most dangerous thing. And other people, on the other side of the coin, picture transhumanism to be this good thing that we're doing for humanity. So first of all, what is transhumanism to you? I guess that might be a good start.

[02:38] Steve: Well, first of all, these two descriptions that you gave of transhumanism are in a sense both correct. And I think the way to think about transhumanism is, again, not as something that's so strange, but as something that you might say is built into the way in which the history of science and technology has already progressed. So take something like the extension of life expectancy, even before we start getting into whether we can reverse aging and all that jazz. As a matter of fact, over the past hundred years, let's say, life expectancy has nearly doubled, and in many cases infant mortality has dropped. And all of this has been done just through the historical developments in scientific medicine, even before we start to get into the more futuristic stuff.

[03:26] Steve: And so one of the things that you've seen over the course of human history, over the last 200 years is more people living longer, doing more things. In a way, all of this is kind of what transhumanism is trying to promote in its positive face. And in a sense, we should just be going full speed ahead as far as that trajectory is concerned, which is a trajectory that we've been on, I would say at least for 200 years in some very palpable kind of way.

[03:57] Briar: Some would argue that we're already transhuman: my gram has a hip replacement, I carry my phone around like it's an extension of my arm, and some people wear glasses to read. What are your thoughts about this?

[04:14] Steve: Well, I think that's, broadly speaking, correct, that we are already transhuman insofar… So there is this aspect of transhumanism about being a cyborg, where there is some kind of technologically-based enhancement to the human, which in fact may even be necessary for the human to be fully functioning. And so we might even start with eyeglasses on that score, but then get into other things like silicon chip implants and various prostheses for bodily parts and stuff of that kind. And those things have been around for quite a long time in many cases, so we're quite used to that kind of thing. I think, again, it's the scale and intensity with which this has been happening in the recent past that has caused people to take notice in a very strong way, and then ask questions about whether we're becoming a different species, or losing our humanity, or something of that kind. My personal view is I'm not particularly fussed by that, but I can see how, because of the speed with which these changes have been happening, some people are a bit disoriented by what's going on. But again, it's just speeding up a trajectory that we've already been on.

[05:32] Briar: I agree. It does seem to be moving quite fast, and I think part of the reason is because the metaverse was so hyped up last year, and this year we've had all of the AI hype going on as well. Do you think that we as humans are doing enough to evolve our biological meat sack, so to speak, when AI and robotics seem to be moving at such a dizzying pace, yet on the more human side of things there are so many rules and regulations around the likes of Neuralink, and obviously for good reason? Do you think we're doing enough, or are robotics and AI just going to beat us?

[06:18] Steve: Well, you're raising a very interesting point that we've got to get to at some point in the conversation, and that is what exactly we mean by human in the first place. Because the sort of identification, the strong identification, of the human with the upright ape, homo sapiens, is really a mid-18th-century invention. Homo sapiens is a term from the mid-18th century, and it has to do with, in a sense, placing humans squarely in some kind of understanding of the animal kingdom. But before that time, what exactly a human being was, and what counted as a human being, was a much more open question. And of course, generally speaking, if you're talking about the Judeo-Christian-Islamic tradition, a human is kind of part divine, part animal in some sense.

[07:15] Steve: That has always been kind of the classical metaphysical definition of the human, without talking about apes for the most part. So apes generally were not part of that discourse, that religious discourse, because there weren't that many apes around, especially in the European countries. And so the human was always this kind of metaphysical hybrid. And of course, if you go back to the ancient times, the Greeks and the Romans, there was even very open discussion about animals spontaneously demonstrating human qualities. We still talk this way in a sense, but we call it now anthropomorphism. But if you go back to the Greeks, and you look at, let's say, Aesop's Fables, which have all these animals behaving in rather human ways, there's no reason to think that the Greeks didn't take that literally, in the sense that the animals could have human qualities.

[08:02] Steve: And in fact, our notion of treating animals humanely, again, does not come from the world of anthropomorphism, but rather comes from animals being able to manifest, to a certain extent, human qualities. And indeed, when we talk about the upright ape, homo sapiens, and you look at philosophers' and even religious attitudes toward that creature with regard to its humanity, one of the things you find is that the human is something that you are born with the potential of becoming. And this is why, historically, to become human you had to be educated, you had to be trained, you had to acquire a certain kind of comportment and demeanor, you had to be fit for civilized society and be able to speak up for yourself, all those things you're not born with. Those are things that come through discipline and training.

[08:49] Steve: And that's where the set of disciplines that we have today called the humanities comes from. These are the things you would learn, as it were, to become a human being. Now, you might say that all homo sapiens are in a sense eligible for this kind of training to become human, and even that is a contested point historically. So what you start to get in the mid-18th century, when homo sapiens gets coined, is this very clear identification between the upright ape and the human, and it becomes kind of an exclusive identification. And then as you move into the 19th century, with Darwin and all the rest of it, you start to get some very clear ideas that the human is just this kind of ape.

[09:35] Steve: And whenever we talk about animals as having human qualities, we're just anthropomorphizing them; that term comes in in the 19th century. And so now there's a sense in which we do have a default position that says that the human being is homo sapiens and we're this meat-bag thing. And of course, that then raises alarm bells for people who are worried about uploading consciousness into machines, or merging with AI, or any of that other stuff that many transhumanists talk about. But as a matter of fact, again, if you look at the long historical perspective, the idea of what counts as a human has always been pretty free-floating. And so as for the idea that our human consciousness might upload itself to a machine, or merge with a machine in some sense, as Neuralink would suggest: I don't think there's anything by definition that is particularly anti-human about that, if you take this kind of broader conceptual understanding of what a human being is.

[10:32] Briar: I think it's so interesting, and I think this is why I love this topic so much: it always challenges my worldview. I was speaking to someone recently, actually, about how differently I felt, perhaps 10 years ago, 5 years ago, about all of these things. And there was a point in time where I kind of looked at myself and said, why am I so attached to a particular idea? Why can't I be curious and explore? Because the future always changes, the world always evolves. And I believe if we have that sort of flexible thinking, then that's helpful going into a future of uncertainty, because essentially, I guess, we don't a hundred percent know, do we?

[11:15] Steve: I mean, if I had to talk about what is the source of concern in all this, I wouldn't necessarily say it has to do with the fact that some people are talking about uploading their consciousness to machines and that we might actually be able to do it at some point or reverse aging, which we might actually be able to do at some point. So I'm not skeptical, at least if you give it a long enough term for it to work out. But I do think there is a problem. And that's the problem, especially if all of these transhuman futures turn out to work in a way. Because what that will mean is that we will be living in a world with a much wider range of beings that will wish to qualify as human than we've ever lived with before.

[12:04] Steve: And I think that raises a lot of very interesting questions. Because, just sticking with homo sapiens for a moment, there has been a lot of difficulty over the course of the history of society and politics and so forth in getting all of the people who, from a biological standpoint, count as humans kind of on the same page, living in the same society in peace and harmony and all that jazz. And the kinds of differences that humans have faced in the past are, by the standards that we might imagine in the future, relatively minor. We're talking about skin color, gender difference; stuff like that is relatively minor compared to what might be very substantial substrate differences.

[12:49] Steve: If we have, let's say, a highly silicon-based human and a meat-bag human, then in terms of the kinds of demands they might put on energy resources, for example, what would count as respecting their space, as enabling opportunities for them to flourish as beings? I mean, these are all the classical questions that one would ask about how you live in harmony in a society with diverse individuals. Well, the diversity is going to go up exponentially. And so there is going to be a question, and some transhumanists talk about this, about whether all these different beings can live together in harmony, or whether we're going to have to start thinking about what some transhumanists call sub-speciation, where in a sense we kind of branch off and become different things, perhaps each of us with our own planet, if Elon Musk gets his way.

[13:41] Briar: So do you think it's going to be a case of the haves and the have nots, and do you think that people with more money might be the ones that get this sort of technology?

[13:54] Steve: I think, obviously in the beginning, especially since this stuff is, relatively speaking, unregulated in terms of how it operates, because it's moving so fast governments can't keep up with it. So given that you have something like a very experimental free market for innovation, I think it's inevitable that the early adopters will be the people who are closest to the funding of the operations that are going on now, and those are mostly private-sector operations, so that wouldn't be too surprising. One thing you'd want to factor in, though, is that all these rich guys that we're thinking about here might turn out to be guinea pigs, because they'll be the early adopters of things that, let's say, have not gone properly through clinical trials and stuff like that.

[14:44] Steve: And we don't actually know what the long-term consequences are; these people will be the test cases. So just because rich people will have access to this stuff first, it doesn't follow that they'll get everything they want out of it, especially as we enter into a culture which, as you probably know, is more open to self-experimentation than ever before. So yes, while the funders, the richer people, do have access to this cutting-edge technology, it's not clear that everything's going to turn out to be a benefit for them. But let's say some of this stuff does really work, time and time again, and has a kind of long-term efficacy and so forth. Well then, from a market standpoint, it seems to me that there will be incentives to bring the price down as the scale of production goes up.

[15:32] Steve: And those kinds of broadly capitalist arguments will probably kick in. So, in the UK where I live, we have a National Health Service, and one of the things the National Health Service does is provide a kind of standard, you might say, of minimally acceptable health; the medicines and treatments and so forth that you need to maintain that level are something that the state provides, so we have these health insurance taxes in the country. And so what counts as normal, what everybody will need to have in order to be considered adequate, to be able to perform in this kind of enhanced world that we're moving into, will be an interesting question. And so I have advocated, for example, that if we're thinking about the future of the welfare state, which is what all this health service stuff is part of, then we have to start paying attention to the way in which certain kinds of transhumanist enhancements, if widely adopted, will alter our sense of what's normal.

[16:34] Steve: And that will be an issue for society. Just as, when you realize you have a certain number of people who aren't able to see properly, you need to provide them eyeglasses as cheaply as possible if you want your society to be fully functioning, well, brain-boosting drugs of various sorts may fall into this category in the future. It's entirely possible. But I think there's a further issue here, which in a way is the real tricky issue. What if you're somebody who has the opportunity, because it's not very expensive and you can afford to get all these kinds of enhancements, but you just don't want to? You want to opt out. You want to go natural, as it were. Instead of humanity 2.0, you want to stay humanity 1.0.

[17:20] Steve: Will you be allowed to do that? Will you be able to function in a world where most people think the thing you need to do is get enhanced all the time, and the means are available for you to do that? Will you be able not to be involved in that activity? Can you opt out successfully? Now, it is true, if you look at the history of technology, that there are some technologies that people actually quite successfully opt out of without having a terribly abnormal life. I'll give you an example from my own life: I never learned to drive a car. Knowing how to drive a car is a very 20th-century thing. And cars are still around; they're not going away, it seems, right? At least if Elon Musk sticks around, they're not going away. I don't feel like my life has been terribly impaired, but there would be other things.

[18:10] Steve: There'd be other things that, if you did not opt into them, your ability to do other things in the world would be seriously restricted. And my fear here is that a lot of this transhumanist stuff, because it involves enhancement of brain capacity or brain powers or whatever, might actually make it very difficult for people to opt out if they want to still be part of a world where everyone has gotten into this. So I do think that is a serious issue. And you see, transhumanists don't really face this issue very squarely, because they are so convinced that everybody will want this once it gets generally made available that they can't imagine somebody might not want it. But my guess is a lot of people won't want it, at least a significant minority, for all kinds of reasons: the people who like to go natural, the people who have religious reasons. There'll be all kinds of people, recognizable from what we've got now in the world, who will really pile on when this transhumanist stuff gets put on the table seriously. So I do think that is a long-term and quite deep problem that's not going to be easy to solve, frankly.

[19:27] Briar: I think it's very interesting thinking about it and thinking about how society could be. And just as you were talking, I was thinking about my smartphone, the internet, and how difficult it almost is to function if we don't have the internet. The other day, my phone went down, and I was like, oh, I don't have my bank cards because I rely on Apple Pay, and oh, God, I don't have Google Maps to get home, I can't remember where I live from this area. So something like the internet is just such an everyday part of our society, but there were times, of course, when we didn't have the internet. We had good old dial-up, and we were on for half an hour, and then it'd be like ding ding. You remember that noise?

[20:07] Steve: Yeah, I remember that.

[20:08] Briar: It's enough to give you nightmares really, isn't it? You'd be like, "Mom, I'm still on Bebo, get off." But I think there would come a time when there would be so much happening in society that it would become quite difficult to function without this stuff, once other people see that it's safe as well. But something that I'm really concerned about is who owns this stuff. I was reading an article recently about a lady in Australia who had a brain chip put in her head, and it changed her life for the better. She was having seizures; she couldn't function before that brain chip. And then the company went bust, and they said, hey, we have to take our brain chip back. And she was like, I'll remortgage my house. She argued it in court; she's like, listen, I need this brain chip. And they ended up taking it away from her.

[20:59] Steve: Interesting. Well, see, this is an interesting kind of legal issue. One of the contexts in which I teach this stuff at Warwick is to law students, and so there is this emerging area called Cyborg Law. One of the things your example highlights is the redefinition of personhood, and in particular the boundaries of personhood. Like, when do I end and you begin, kind of thing? And when we're talking about this silicon chip in the lady's head, there's a sense in which there seem to be overlapping jurisdictions going on here. Because this chip is in the lady's head, and it's making her function. But at the same time, the company owns the chip. And so in a sense, the fate of the chip is the fate of the company.

[21:45] Steve: And so the law is increasingly getting involved in trying to disentangle these kinds of things. And so I do think there's going to be a more systematic approach to this in the future, especially with regard to the way in which contracts are set out when people actually decide to have one of these silicon chips. In a sense, the terms and conditions need to be better clarified. But there's one case that in a way moves in the opposite direction, which has already happened, in fact almost 10 years ago, in the United States. It was a Supreme Court case called Riley versus California. The police confiscated this guy's cell phone when they thought he had done some kind of crime, and I think they may have destroyed it or something in the process, and the police claimed that this was destruction of property.

[22:39] Steve: And there's a certain kind of law handling that. But the person, Riley, argued that in fact this was like an assault: that in a sense the smartphone was a distributed part of the person. So even though the smartphone is a separate object, it basically included half the person's brain or something like that. And so the fact that it was physically distributed did not, as it were, violate the concept that it's part of a whole person; in other words, a whole person does not have to be restricted to a continuous physical body, but may have distributed physical parts. Now, this is what the Supreme Court ruled in 2014, and so that means there's a precedent for thinking about the smartphone as a proper part of you.

[23:28] Steve: Not just property, but a part of you. It would be like cutting off your hand, or giving you a concussion that impairs your brain function; it would be something of that order. It wouldn't be like stealing your car. So there's beginning to be a kind of recognition within the law that our notions of personhood are morphing in response to the way in which technology has a much more intimate kind of functioning in how we define ourselves as human beings. But the point is, it's a very open space at the moment, and different courts rule differently in different countries. So it's not a done deal that that sort of ruling is going to become the norm. But it is interesting that the US Supreme Court has already put a marker down in that direction.

[24:12] Briar: Very interesting. I sometimes worry that the people who are developing these technologies are all kind of one and the same. For instance, with OpenAI's latest news and things like this, lo and behold, it's an all-male board again. And one might argue that this company is developing something very important that is going to shape the absolute vision and journey of humanity, and yet here we have an all-male board again. So how can we get things to be more diverse, I guess? Are the governments getting involved enough in this?

[24:53] Steve: Well, yes. See, now you've raised a very interesting point, and I'm glad you brought the government into this, because historically, the only way in which there's been the requisite kind of diversity that you're talking about is when the government's got involved. And one of the problems that we have with what you might call, broadly speaking, the Silicon Valley mentality is that it's generally speaking anti-state. It's anti-big-government, and believes that all good things come from spontaneous self-organization. And so if you look at the people who naturally gravitate to certain kinds of setups and certain ways in which work is organized, for example, and the kinds of jobs that people have and how they relate to each other, and all the rest of it, you could see that even though this is supposedly spontaneously self-organizing, in fact it is kind of done in the image of, as it were, the initial people involved, who are usually these younger male people.

[25:52] Steve: And then they, as it were, anchor it; in a sense, the founders anchor it. And you could claim to be as free as you want. And this applies not only to the people who produce the hardware, or even just to the people who produce the software. You have this kind of tendency reproduced, let's say, in Wikipedia, where Wikipedia of course is supposed to be done by the punters, by ordinary people. But these so-called ordinary people who are in fact editing and adding entries and so forth themselves reflect exactly this general culture: 75% male, and the age group is quite young.

[26:36] Steve: And again, if you ask Jimmy Wales, he says, well, look, I'm not doing anything; the people just come there, and these are the people who come. Anyone could come. Well, it's not so clear anyone could come. Because if you actually look at the way in which people in Wikipedia act toward each other (and Wikipedia is interesting in keeping a very clear track record of how people interact with each other), it's cut and thrust. I mean, it's really brutal if you look at the talk pages for various things. But how do you remedy this? Well, typically, this is where the government would come in, especially with regard to anything that is regarded as a public good. And this is where I do think Silicon Valley is quite vulnerable, because I think the way Silicon Valley protects, you might say, this kind of self-organizing culture is by presenting itself as a purely commercial entity.

[27:34] Steve: Companies can be run their own way, they're private enterprises, all that sort of stuff. But the problem is when you end up producing something that turns out to be a public good. Even if you're making money out of it, your obligations then change, especially if people are going to be relying on this a lot. So if we're talking about Google, let's say, or Microsoft, or certainly all these top guys, what they produce has, at the end of the day, de facto become a public good. And there the government does have a right to look at things like whether algorithms are biased and who's being employed under what circumstances, almost like the way it would look at the situation in some public-sector agency, just because of the nature of the thing being produced.

[28:21] Steve: And I think the burden is even greater for the government to get involved in this precisely because this stuff, to a large extent, is monopolistic. There's nowhere else you can go to get the kinds of goods that these guys are providing; you really are quite limited in your options. And so if you're not going to be able to break up the monopolies, and so far, generally speaking, governments haven't been very successful, then the least you could do is in some way regulate the way they operate so they reflect more of the interests of the public that they are clearly serving already. And so I think that would be the way, if I were the government, to get into this issue.

[28:58] Briar: I think sometimes about society and how it functions these days, where we almost seem so stuck in our day-to-day. So many people live paycheck to paycheck, and they're concerned and worried about inflation and paying their power bill and all sorts of things. So I think today's society makes it quite tricky to even think and plan for a future we won't be around for. And I think it's made more difficult by the governments in the majority of countries, such as the US and the UK, which are constantly just trying to win their next election and never talk about this kind of technology. I think part of the reason is that old Joey down the street is living paycheck to paycheck, and those are his worries. He doesn't care about a hundred years into the future; he's not thinking that far ahead because he can't.

[29:57] Steve: Well, yeah, I mean, it seems to me that there are a couple of issues you're pointing to here. One is that it's quite clear that, you might say, faith in the state has massively declined, certainly since the 1980s, actually, so somewhat before Silicon Valley really comes on stream. I think as the Cold War was winding down, what you see is that people start to recoil more from the idea of the state having to provide everything, or even provide the necessities. And that's when you start to see, I think, where the rot in the state comes in, namely various tax revolts. People refused to pay higher taxes, whereas people had been paying quite high levels of tax during the Cold War because the state was seen as the bulwark of defense against whatever threat one imagined was going on.

[30:56] Steve: And everybody was… and the state would be kind of the center of the national interest. I think that's all disappeared, and what that means is the state has a harder and harder time legitimizing itself. What's happened is that because the tax base for the state has eroded, whatever money it can raise has to actually deal with the concerns of these very poor people who live paycheck to paycheck. And so there is really no opportunity to deal with the more future-forward kinds of things, and I think that is a real problem. For example, one area where you see this is with regard to the healthcare issue. The UK, to my mind, ought to be a place for some interesting kind of experimental social policy, where various kinds of enhancements might be made generally available to the public as part of a re-normalizing of the human condition in a transhuman direction.

[31:58] Steve: I think that's quite difficult, at least for the medium term, because of the difficulty in providing the basic healthcare and social care that people normally expect. And then to add other stuff on top of that seems quite burdensome. So I do think that the state is in a way quite limited in its ability to participate, you might say, in the development of all these new technologies. And where it probably has its best chance of having some kind of systematic influence would be at the level of regulation, including, as we were talking about earlier, regulation in terms of the kinds of people who can participate in these kinds of enterprises. And I think that could be quite salutary, even if it doesn't address the more economic questions of provision for people.

[32:57] Briar: I wrote a letter to the White House about how I wanted to see a diverse community of philosophers, scientists, startups, and big corporations come together to actively plan for the future, to think about regulation when it comes to AI: how we regulate it, what sort of things we do. I wrote this around the time that Elon Musk and co wrote that, oh yeah, let's pause AI for six months. I was like, well, that's like telling a pregnant lady not to have her baby, to keep it in there for an extra six months. To me, it just didn't seem to work. My letter got delivered, I guess, but it was disappointing to see that even though they did get a group of people together, it was all big corporations. It was all big tech.

[33:53] Steve: Yeah. It's interesting because different sorts of committees of this sort are being formed all over the place. So for example, one thing I haven't mentioned yet in connection with Humanity 2.0 is that the Vatican has its own committee on Humanity 2.0, because they're getting into the act. And they have some very interesting angles to run. But maybe even more to the point of what you were raising before, one of the people who's very much involved in the Vatican Humanity 2.0 project, which I'm part of too, is a priest who actually sits on the board of Meta. So this priest is the Vatican's representative on Meta. So Zuckerberg has agreed to that. And from what I gather, because I've been in some conferences with this guy, he's a bit like an ethics watchdog or something.

[34:54] Steve: He certainly works at one of the pontifical colleges. So in the Vatican, there are all these different colleges and universities that train priests for various functions, and he's part of the faculty of one of those. That's an interesting example, because you say, okay, what exactly does this guy bring to the table with regard to all these issues we're talking about? And he's quite explicit. The whole Vatican thing has been quite explicit, first of all, about the social justice side of it. And the social justice side is sort of a concern about poor people, you might say. In other words, what's in it for poor people? Because we already have a digital divide that's formed to a large extent, not so much in terms of who owns smartphones, but in terms of the ability to use those smartphones to actually get anything done.

[35:50] Steve: There is an enormous digital divide as far as that's concerned. And so how is that going to be narrowed? The church is very interested in that issue. The church is also very interested in more conventional concerns of human flourishing, where one imagines something like the model of Abraham Maslow, if you know his theory of self-actualization and the hierarchy of needs. The idea is that material needs have to be secured as precursors to any higher needs that may be fulfilled. But if the material needs are not fulfilled, then the attempt to fulfill the higher needs will fail or will undermine people in the long term.

[36:35] Steve: So if everybody is just spending their time in the metaverse and ignoring their loved ones or ignoring society because they're just spending all their time in the digital reality, then from the church's standpoint that raises alarm bells, because human beings should be primarily dealing with other human beings when it gets right down to it. And so they're very clear, at least in their minds, that all of this metaverse stuff and the digital world more generally is a means to an end, but not an end in itself. And this then very often puts the church somewhat at loggerheads with the direction of travel that you see, for example, from Zuckerberg, who I think in principle would like to see us spend most of our time in the metaverse.

[37:25] Steve: And of course, there are a lot of reasons to think that it's kind of going his way. That's the other thing, of course, too. And so the church takes this kind of interest, and it has managed to embed itself in at least some of the boards where the decisions are taken about how this stuff is being developed. It's not completely unregulated in some way, but it's certainly not systematically regulated, that's for sure.

[37:55] Briar: So you mentioned that you've been to a few of their get-togethers, a few of the Vatican's conferences and things. What was it like? Who was there? What other things are being talked about? Tell us the goss.

[38:08] Steve: Well, I mean, I think the thing here is that there are representatives of different faiths, so not just Catholics. People are brought in from different parts of the world. People are brought in from industry. It's quite an ecumenical group of people. There are both plenary sessions and there are also these breakout groups, I guess you'd call them, where people from quite different backgrounds are mixed together in small groups to answer certain kinds of questions, kind of like what you would expect to happen, let's say, at an away day or a retreat at a company. There is a lot of that kind of stuff. And some of the people are priests and they're in their priestly garb, and other people are not.

[38:59] Steve: And everybody's pretty reasonable. But I do think, in a sense, the way the thing is focused, it is about the future of humanity. So there's a sense in which the discussion starts with a conception of what it means to be a whole human being. And that part of the discussion, I would say, is probably the thing that would strike you the most if you're coming at it from a transhumanist or posthumanist lens: that there is presented what is regarded as a clear conception of what it means to have a flourishing human life. And that, in a sense, always needs to be the goal. And then the question becomes, how do all of these technological developments serve or hinder that goal?

[39:51] Steve: So that's kind of the frame of reference for this activity. And one of the things that becomes very interesting when you elaborate this stuff is that the concerns of the third world, the global south, become prominent, as do environmental issues. And the extent to which people having these customized metaverses to play around with ends up detracting attention from the environmental degradation around them, which especially affects poor people who can't escape it. And so the Pope, Pope Francis, almost 10 years ago now, put out this encyclical called Laudato Si', which was the first time the church had done this: basically tying the future of humanity to the linking of social justice and environmental justice, and specifically charging the sort of Silicon Valley mentality with turning a systematic blind eye to all this.

[40:58] Steve: So in a sense, as far as the church is concerned, Silicon Valley's default position is to ignore issues of social justice and environmental justice. Those issues have to be forced on them. And so I would say that document has set a tone that I think the church did not have previously; the church tended to be a bit more neutral about these matters. But I think now, and this is where the religion aspect gets interesting, part of it has to do with the need for organized religion to regroup itself and redefine itself in light of the important normative role that all these technological developments are playing.

[41:44] Steve: Because as we've been saying through most of this discussion, in a sense, science and technology is kind of the leading edge in terms of shaping our understanding of what it means to be a human being. This is kind of where we began this conversation. But if you're the church, the church already has an existing conversation about what it means to be a human being. And from that standpoint, this other conversation is at loggerheads with it. And I think with this encyclical that the pope put out, he is basically drawing a line in the sand, you might say. But at the same time, he's not declaring open war. He actually wants some kind of dialogue. And so people like me show up. Because one of the things that's distinctive about my own position on this is that I actually think there is a very strong theological drive within transhumanism, one that in a way, you might say, is sublimated, to use the Freudian term.

[42:41] Steve: So in other words, there is a God complex thing going on, but it's not explicit, because all these guys claim to be atheists. But the point is, it's very difficult to explain the preoccupations with resurrection and living forever and uploading your mind into a more durable medium. All these kinds of ambitions, which are very closely associated with transhumanism, don't come out of nowhere. These are things you already see precedents for in the biblical religions, as something that in a sense is God-like. So it seems to me that it's very difficult actually to make sense of transhumanism without some of that theological background in the back of your mind. Otherwise, it just seems very exotic that people would be going after all this stuff, especially if you start off as a kind of ordinary person.

[43:37] Steve: I'll give you an example. Let me give you another example of this. One of the things that I do, and we could talk about this more if you want, is I go to high schools, largely in greater London, and I talk to kids about transhumanism and posthumanism and I ask them what they think about it. So I lay out the basic tenets of it. And of course, because these students, I mean, it's usually private schools, these students are quite smart. They're into science fiction already. They're digital natives, as we would say. So they're often more adept at finding their way around cyberspace than we are. And so they have a feel for the issues of transhumanism already, intuitively. And so it is interesting what sorts of things they find attractive about transhumanism, and what they don't find attractive about it. And, for example, I'll tell you, and you may not believe this, given what transhumanists are normally saying, but I have never been to a class where the majority of students would like to live forever. Never. Never.

[44:46] Briar: Why did they not want to live forever? What did they say?

[44:52] Steve: They said they'd get tired. They'd be fed up. And also, very often some of these kids don't think the world is going anywhere positive. I mean, there's a lot of that going on, because if you're a kid, think about the kind of world you've been living in for the last 10, 15 years: wars, Covid. It's not fun. So I don't think it's so surprising. Again, the reason why I mention kids is because kids don't come in with a lot of preconceived notions about what the world is about and all of this kind of stuff. I mean, we as adults have had a lot of education and in a way identify ourselves with various aspects of our culture and our past and things like this.

[45:39] Steve: We tend to have much more ideologically embellished views about things. So in a sense, we're a bit immune to what's actually in front of our eyes, whereas kids aren't. And so kids end up asking what seem to be very ordinary questions, like, why would I want to live forever? So then I start saying the stuff that some transhumanists say. Like, for example: well, just think about it, nowadays people have kids largely so they can pass on their dreams, so that their kids can fulfill the dreams that they could not fulfill in their own lifetime. But imagine if you lived for 200 years; you'd be able to fulfill all the dreams yourself, especially if you're an able-bodied person during this entire period.

[46:25] Steve: And the kids look at me like I'm crazy. They say, "What, no kids?" And they've got a point. I mean, transhumanism is not the most child-friendly movement, because basically they want the adults to remain children forever. I think that's one way to look at what transhumanism is up to: you're sort of youthful forever, you can do anything all the time when you're 200, 300, 400 years old. And so kids think that there's some weird sense of intergenerational injustice going on here. They may have a point. But the point is, it's really interesting to talk to these kids, these people who don't come in with very over-informed views about things, and ask them what they think about the fundamental intuitions that drive a lot of this stuff.

[47:15] Briar: What about avatars? Do these digital natives want different avatar versions of themselves?

[47:21] Steve: Well, they vary. Some do, some don't. I mean, I think what is lacking in the way the kids think about this is, you might say, the sort of ontological baggage that adults bring to all this: that in some sense you can multiply your identity, or you can maybe migrate your identity across the interface, as it were. I don't think the kids get into that kind of stuff very readily. I think they see the degrees of freedom that avatars have when working in cyberspace, but I don't think they invest it with quite so much metaphysical baggage as I think a lot of the adults do. So let me give you an example of something they do like, which transhumanism doesn't spend that much time talking about, but which is in the transhumanist agenda.

[48:16] Steve: There are some people who believe in what's called uplifting. Now uplifting is a term that David Brin, the science fiction writer came up with. And it's basically a way of enhancing human animal communication. And you might think of this in terms of there being some kind of interface that enables us to read the minds of the animals and the animals can read our minds. And of course, because we are getting better and better at being able to model animal cognition in various ways, there's no reason why in principle there couldn't be, as it were, a more in-depth kind of relationship between humans and animals.

[49:02] Briar: I'm just thinking about what my cats are like. I don't know if I would want to know what's going on in my cats' brains, because a cat is the cutest, cuddliest, fluffiest little thing, but inside they're probably going, "come on, human, feed me."

[49:17] Steve: Yeah, maybe, but you've got to think of this as, in a way, how transhumanism might make a contribution to the animal rights agenda. Because your pet, who behaves in such a docile manner, may in fact be desensitized in a sense, and is just responding to you because that's the only way it's being dealt with, you might say. So there's this vicious cycle of bad behavior being repeated over and over again, and you're not quite fully exploring the potential that the animal has for flourishing as an animal. And you can maybe imagine this kind of interface as a bit like a sort of Neuralink for animals, where you put a little beanie on the head of the animal and a beanie on your own head, and maybe you might be able to get something going there.

[50:09] Steve: I mean, I don't know exactly how it would work, but in principle, insofar as we're beginning to understand how animal brains work, it's not out of the question. And there is a literature in bioethics that actually says that if you are the kind of person who believes in encouraging the flourishing of animals, as an animal rights activist would, then some people would argue you have an obligation to feed into the uplifting agenda. In other words, you should be in favor of our getting a better and better understanding of how animals think, so that we can in fact respond to animals and treat them better, truer, in a sense, to what their potential is. Now, the kids like that, and these kids have pets.

[51:02] Steve: So the thing is, most of these kids have pets, and they would actually like, in a way, to understand their pet better. They feel concerned about the pet, and they can see this kind of enhancement as beneficial. So they like that. That gets a lot of support, I've got to tell you, that gets a lot of support. And even though it's a relatively exotic feature of transhumanism, I do take some heart from that, because it shows that the kids are thinking in a more communal, not so individualistic way. Because the whole thing about living forever is a very individualistic, almost egoistical kind of thing. But this is different. This is about trying to expand the moral circle and enhance it. And so I do think that in this respect the students are picking up on some of the more morally positive features of transhumanism.

[52:09] Briar: It's very interesting. And I would honestly love to have conversations with my cats. I just got a new kitten on the weekend, actually, and it's so super cute. When you're talking about this society and these kinds of more communal constructs in the future, I think it's interesting, and I think even just thinking about how Generation Z feels about climate change and things like this, you can already see it starting to develop. And I wouldn't be surprised if that continues into the little ones that are growing up at the moment as well. There's been talk about how, in the future, AI might take over all of our jobs. And on one side of the fence, especially in the Reddit communities, you've got this super dystopian thought about AI doing everything for us. And then on the other side of the Reddit community, they're like, this is the best thing ever. We won't have to work. And the way they talk about it is that we're all going to receive some kind of allowance, and everybody's just going to live in this big communal thing, and we all get paid the same amount. And then that's how society's constructed. What are your thoughts?

[53:24] Steve: In fact, I find that, I mean, this whole universal basic income stuff.

[53:27] Briar: That's what it's called. Yes.

[53:28] Steve: Yeah. I've got to say, it's a very amazing thing for transhumanists. It's like pulling a rabbit out of a transhumanist hat, because these people are most of the time libertarians, and now what they're proposing is something like socialism 2.0. First off, I find the whole idea very bizarre. I think it is predicated on a lot of assumptions that are probably not going to pan out in the right way. So first of all, yes, technology is going to take over jobs, and it's going to take over intellectual jobs. Of course, throughout history, technology has replaced drudgery, manual labor, and stuff of that kind. And even people like Karl Marx, whatever problems he had with the ownership of the means of production, actually liked the fact that technology would remove drudgery from people.

[54:22] Steve: And then of course, he imagined that there would be this communist paradise, which may be something that all these transhumanists are still harking back to. But the problem now is a little bit different, because the sorts of jobs that are likely to be replaced are very often jobs that people at the moment actually go to school for a long time to get. So we are very often talking here about jobs in law, for example, or even medicine, if we're talking about the more basic parts of medicine, and this is true of law as well. There are always branches of these fields. So in law, I would point to something like human rights law, where in a sense the law is constructed very much case by case, and it's not a very routinized form of the law.

[55:07] Steve: But of course, a lot of aspects of the law are very routinized, like writing a will, handling a divorce, setting up a contract to establish a firm. And most of the work that lawyers do is actually of that kind, and these lawyers go to school for all these years, and they're respected members of society and all this business. Well, the estimates from the law profession are that up to 60% of these people are going to be gone. And you've got a comparable figure with regard to medicine. We're already beginning to see this to a certain extent, even in the National Health Service in Britain, where people are increasingly encouraged to self-diagnose based on very sophisticated interfaces that they have access to, where they become their own doctors, essentially.

[55:59] Steve: And then the only reason why you'd need a doctor would be that once you've reached a certain conclusion that requires a certain treatment or a certain drug to be administered, a doctor has to look over what's taken place in your transaction in the interface to be able to sign off on it. But the doctor himself doesn't do anything other than that. So all of this artificial intelligence is going to get rid of a lot of this stuff, the kind of middle-level intellectual work that goes on now. And so where the crisis is first going to hit, I would say, would be higher education. Because so much of what people are trained for when they go to university, regardless of what they study, in terms of the kinds of jobs they get, are going to be jobs that AI will probably be able to replace.

[56:47] Steve: And so there's going to be a question about why we need all these people getting all these degrees as the job market begins to shrink. So I think higher education will really be hit hard by this. There will no longer be the automatic demand to go to law school or medical school. The higher education sector is already kind of in a bubble. There are too many people going into higher education and not getting enough out of it, given the amount of money they've got to spend on it. And so AI, I think, is going to contribute to the bursting of that bubble. So that's one place where there's going to be some action, I would say, in the next 10 to 20 years. Now, there's also the issue of, well, what happens if these people can't find jobs?

[57:31] Steve: Are we going to have universal basic income bailing them out? Well, I just don't understand who's going to be paying the taxes for this. That's the basic problem, because remember, this is against the backdrop of a state generally struggling to maintain the tax base that's necessary to take care of poor people. And now we're going to get all of these unemployed middle-class people. I mean, this is a nightmare. This is not going to happen. And so I do think that there is no scenario where this universal basic income thing works. And I do think there is going to be a problem down the line, because of course, having large numbers of people who are not gainfully employed is going to be unsustainable at many different levels, not just economically, but also environmentally, it seems to me.

[58:20] Steve: I suppose a guy like Elon Musk might come along and say, well, what we'll do is what we did in the age of exploration. We'll just ship them into space. Surplus population, we'll just send them off somewhere to start up a new planet. I'm sure a guy like Elon Musk actually thinks about things like this. But it's not going to happen in the timeframe required. It's going to take too long, frankly, because this kind of technological unemployment is going to be happening in our lifetime, for sure. And the idea of colonizing loads of space colonies, even if Elon Musk has an enormously efficient spacecraft that can fly to multiple places, is too far in the future to be of any use here.

[59:09] Steve: So I think we're going to get… At the end of this, and this may come sooner than we think, and this is where I really disagree with Elon Musk, I think people are going to wonder, why do we have so many people? Why do we have so many people? It's not that we're running out of people. No, we've got too many, and we've got too many for the old-fashioned reasons that we've had too many before, plus the fact that they have nothing to do. So I would see the population of the planet dropping, and not necessarily dropping because people are getting killed. You might see suicide, you might see people not reproducing, which we already see to a large extent in several parts of the world.

[59:52] Steve: So I would see that there would be a downsizing of the human population. And if that happens fast enough, and I don't know if it will, that might actually mitigate a lot of the climate issues. And so we won't need to be leaving Earth like Musk is predicting. We'll be able to stay here, but there'll be fewer of us around. I think that would be kind of the best scenario to come out of this. That's what I would say. I would also make one final general point about the fact that all this technological unemployment is happening, because I know there are a lot of people who use that prospect as grounds for stopping AI development altogether: "It's going to steal our jobs. Well, stop it then, God damn it." I don't hold that view.

[1:00:39] Steve: In fact, I believe that when jobs become so routinized that AI can take them over, that should be sending a message to humans to raise their game. And I think that's been the real problem. This is where I think AI is a very interesting mirror to hold up to humanity. Because if AI is able to outperform us in so many things that normally require enormous education and are normally held in very high esteem and so forth, my response would be: humans have to raise their game. They have to be valued for something else, for something more, because we have now made machines that basically can replace us. And then the question becomes, what is the value added of being human? And I think that is the bottom line question when it comes to humanity's relationship to AI: what is the value added of being human? And that is a question that is always going to be with us. And the goalposts are always going to be changing as artificial intelligence gets better and better. But there is no a priori answer to that question. It is a question that we are always going to be confronted with. And in that respect, I think that artificial intelligence is helping human beings become deeper in terms of their self-understanding.

[1:02:00] Briar: I think this is all very fascinating. I worry that these days people are just so distracted on their phones, with the algorithms, with TikTok and stuff like this. So you are saying, and I very firmly agree with you, that we have to up our game if we're all planning on having jobs, and we need to be creative thinkers and come up with ways to do stuff that the AI can't necessarily do. But here we are living these lifestyles where we're hanging out on our couches and watching reality TV and twerking on TikTok. It's quite concerning for me.

[1:02:35] Steve: Well, I agree with you. And I frankly don't think it's sustainable. And we've already been discussing several directions from which the pressure points are going to come on this. Let's put it this way: I'd be surprised if this world that you're describing, that we live in, will be quite the same in 20 years. I really doubt it, but I'm not quite sure how that sea change is going to take place. And to be honest with you, one of the things that we never talk about when we talk about the future is the old-fashioned threats, like we might get a nuclear war. We might have a really serious pandemic that wipes out hundreds of millions of people, like the Spanish flu did back a hundred years ago.

[1:03:23] Steve: I mean those old-fashioned kinds of threats to humanity, not even a nuclear war, but even just a very sustained global war. I think all of those things, we cannot take off the table. And they could end up making a massive difference in terms of how people see the meaning of their lives, because that's happened in the past. In fact, there's a very interesting thesis that was put forward a few years ago by a historian of the classics by the name of Walter Scheidel, who was arguing that these big global wars and epidemics, things of that scale, in fact tend to be the great equalizers of humanity, more than any kind of legislation or redistributive schemes or anything of that kind.

[1:04:17] Steve: Rather, it's just this great leveling kind of activity that takes place when you have epidemics, when we're talking about hundreds of millions of people dying across all classes, where everybody gets hit in some way. In fact, in the Middle Ages, they used to talk about the danse macabre, which was basically the way in which the bubonic plague hit everybody in society. It sort of danced across the classes. It didn't just confine itself to the poor people; a bit of everybody got it, and not even the wealthy people were protected. I think those kinds of changes could make a big difference in how human beings think of themselves. In fact, at Warwick, I teach a course every year called the Sociology of End Times, about the end of the world, because a lot of students these days definitely believe the world is heading in the wrong direction.

[1:05:15] Steve: How they define the wrong direction varies tremendously. But the point is, a lot of them, as a result, are open to the idea that the world needs, in some sense, to be rebooted. That we need to begin in some sense anew, which means that we have to start from somewhere that's radically different from where we are now. And invariably what this means is something like wars or epidemics, those things that nobody plans for but that happen almost by accident. Those things end up resetting the clock. I think that's one thing that the transhumanist imagination doesn't really reckon with enough. All of these wonderful schemes to upload consciousness and live forever and all of that, all of that stuff pretty much depends on relatively smooth trajectories with regard to this other stuff I'm talking about now. People are worried about artificial intelligence eating our lives.

[1:06:14] Steve: Well, that's because artificial intelligence is the only thing they're looking at. But there are all these other traditional kinds of issues. And if you look at the volatility of the geopolitical scene, you can imagine lots of situations where things could get out of control, and that would just scupper all of these predictions. So I do think that's another kind of issue that should not be overlooked in terms of potentially changing this kind of decadent digital lifestyle that we have, which you were just talking about.

[1:06:52] Briar: Obviously, they say that technology's neutral, and it depends whose hands it falls into. And people talk about how AI is going to be a reflection of us. So there'll be the good and the ugly, I guess. When you were speaking about pandemics, do you think there's a possibility of, say, a manmade virus as a way to help equalize this quite dystopian world that you've just described to me?

[1:07:20] Steve: First of all, humans are always making viruses in the lab, largely because viruses are often very important for making all kinds of treatments that you want to use in people. Viruses are kind of like little machines that you can program and get to do stuff. But of course, every now and then they may escape, get out and about, and hook up with other things they weren't meant to hook up with. And that's how you get a virus leaking from the lab. That's of course possible; this so-called Wuhan lab leak theory of the Covid virus, it seems to me that that's possible. It's kind of a byproduct of what transhumanists are actually promoting.

[1:08:10] Steve: Namely, we want bioengineering, we want to improve our genetic makeup through bioengineering, and manufactured viruses are part of that story. But the viruses have got to be doing the right thing, and so that's why you have to be doing it in very controlled lab conditions and not allow your viruses to escape. But the point is that we wouldn't be in this mess of having manmade viruses if we weren't actually pursuing this kind of bioengineering agenda. So there's a sense in which, as with so much of the stuff that transhumanists promote, it's Janus-faced. There's the good side and the bad side, and they come together. You just can't have only the good stuff. You also have the bad stuff, bad stuff meaning at least that there is a risk. There is a risk that something may leak, that it may not do exactly what you want, that it may do something else instead. But this is part of the transhumanist agenda, and just as you can re-engineer the body to live forever, you could re-engineer the body to collapse from some kind of disease that you set up for it. You could do both.

[1:09:29] Briar: So what's our solution?

[1:09:31] Steve: Well, I think this is the thing: I think the main solution is to have a realistic attitude toward risk. And I think this is where transhumanism challenges a lot of our intuitions, and generally speaking, I'm on the transhumanist side of this particular thing, which is that, as I've talked about in one of my books, transhumanism has a proactionary approach to risk. And what that means is it treats risk not as a threat, but as an opportunity, which is very much how, let's say, entrepreneurs treat risk. Because risk means the future's open. We don't know what's going to happen. It could be good, it could be bad, but because it could be good, we go for it.

[1:10:17] Steve: And we try to make the most of it, especially as long as the space is kind of uncertain, because it gives you a lot of wriggle room and stuff like that. It seems to me transhumanism is very much committed to this kind of proactionary approach. And so what that means is it's very much pro-innovation: in a sense, you don't know if something works unless you try it out, and you're never going to know whether it's safe enough. And so you might as well just try it out, see what happens, hope for the best and monitor the consequences. The proactionary approach to risk, generally speaking, is more in tune with making the mistakes, committing the harms, and then learning from them, than with, let's say, preventing any harms from happening. Because if you try to prevent any harms from happening, you're probably never going to do anything. You'll just be afraid that something may go wrong, and then you do nothing.

[1:11:23] Briar: I think that's very much how I got into transhumanism in the first place, because even forgetting about the future, just thinking about the past and sort of my day-to-day values, I guess, when it comes to tackling life in general. The way that I see it is we've got two choices: we can either be sitting on our couch on our farm in Darfield, New Zealand, and just letting life pass us by, or we can be playing a more proactive role in how our life is curated. We can take those risks, we can move to New York and work as a waitress and climb our way to the top, so to speak. And I think that's what I love about transhumanism. You're so right. It doesn't come without risk, but then nothing good comes easy.

[1:12:10] Steve: Well, that's right. I mean, in this context, one of the things I would say is, look at how we have got to where we are now with regard to our advancements in science and technology. People in the past, so let's say if we're talking about the 18th, 19th, and even most of the 20th century, those people who were the great pioneers and innovators took a lot more risks than people are allowed to take these days. In terms of, let's say, trying experimental treatments on themselves. In terms of putting people under what would now be regarded as very torturous conditions for experimentation. And certainly animals have been sacrificed along the way. And all this stuff that has been very much part of our trajectory of progress over the past 200-300 years would not be allowed to happen today.

[1:13:10] Steve: And in fact, there are people, Peter Thiel in Silicon Valley being one of the most prominent in this thinking, who believe we're actually suffering from technological and scientific stagnation these days, largely because that aspect of the human condition is overregulated. So maybe producing silicon chips is not too regulated, but human experimentation and medical experimentation and things of that kind, that is overregulated. And what it does is drive out any desire for innovation. And it's true. I mean, everybody is basically worried about being sued, and not just personally worried about being sued; it's largely institutions that are worried about being sued. Because very often, and I happen to believe this as well, if it were actually a one-to-one contract between an experimental scientist and a potential subject, it would be possible, even if we're talking about a highly risky experiment, for the subject to become sufficiently informed about the risks they were taking and still agree to be part of the experiment.

[1:14:38] Steve: But we live in a world where that kind of one-to-one agreement is not allowed if it's happening within an institutional structure like a university. And so that's a real hindrance, because it's not just that there are these crazy scientists out there who want to torture people; rather, there are actually a lot of people out there who'd be more than happy to participate in these kinds of very risky experiments, for various reasons. Some of them might think they'll get something out of it; others, because they take it very seriously, kind of like people who go off to war to fight for their country. They actually want to be contributors to progressive science and technology, and there's nothing wrong with that. And so I do think this is a real serious issue. There is this risk aversion effectively built into the legal system surrounding academic research today especially, but also technological advance more generally, especially in the western world. The situation in China is much different. China is an ethics-free zone, and it benefits from that, it seems to me.

[1:15:55] Briar: Wasn't that gentleman based in China, the one who gene-edited some twins? But then he ended up getting put in prison for three years afterwards, didn't he?

[1:16:06] Steve: Yeah, but the point is, generally speaking, people can get away with that kind of risky experimentation. And the other thing too, of course, is that with China, things don't have to be made as public as they are here. And I think that's a key part of the issue as well. So we don't actually know a lot of what's going on in China. We only know what's going on in China when they're publishing in the Western journals, which they increasingly do, of course. But that's not the sum total of everything they're doing. In the western world, people actually have to report that they are acting ethically in order to get published and in order to get funding and all the rest of it. And those standards are way too high to enable innovation to happen.

[1:16:58] Briar: I thought it was interesting when I heard that China has a very different TikTok policy for their younger generation. So they listen to science videos, educational videos, whereas obviously the US doesn't have that kind of TikTok regulation; it's quite different, sort of very short-attention-span kind of content. And for God's sake, our attention spans are bad enough as they are, even before being on TikTok and picking up these very short pieces of content. It's quite concerning.

[1:17:31] Steve: I gave a keynote address at an information science conference in Beijing in August. And it was in the main university for telecommunications in Beijing. And they brought in a lot of school kids to the room. Some were high school, but to my eye, a lot looked like primary school kids. And they were actually encouraged by the organizers to communicate with their teachers while they were listening to the various talks that were being given. And the kids were certainly doing something while these talks were being given. They were sitting in the back of the room, basically, and they were definitely buzzing along.

[1:18:24] Steve: And the kids were given little goodie bags, which had kid-like things related to digital stuff, which was mainly what we were talking about. And I was very struck by that. I mean, I hadn't seen that kind of thing before. But I was told that they're basically interested in getting the kids while they're young. They get the kids young, so the kids don't see all this TikTok etc. stuff as just for fun, but rather as infrastructure that they're learning, the infrastructure of their lives. And so they need to be picking it up at many different levels, not just at the level of entertainment, but also the level of education. So I thought that was pretty interesting.

[1:19:11] Briar: I think it makes a lot of sense when you think about it. I was writing something for LinkedIn recently. I run a PR agency, and I was writing about how, with internal communications these days, you shouldn't just make it an email; actually reach people how they want to be reached, whether it's social media or listening to some kind of audio. Get creative with it. How do we like to ingest information these days? So this story that you shared about TikTok, and that's how they get their education, makes a lot of sense when you think about it.

[1:19:45] Steve: Oh, yeah. I'm surprised we don't do it. 

[1:19:49] Briar: Well, yeah, so am I. I'm surprised about a lot of things in society these days. And thank you so much for coming on the show. This has been amazing, and I could have kept chatting for another couple of hours, if I'm honest with you. But the reason I really want to bring your messages out to the masses is that I want to reach a different crowd, perhaps one that hasn't had an interest in these sorts of things, one that's maybe sitting on the couch, playing on TikTok and not ingesting a lot of important information about the future. So I do hope we can reach a few more people, play a proactive role in the creation of our lives, and keep having these sorts of important discussions.

[1:20:39] Steve: Yeah, I agree a hundred percent. And I think you have to; it's not something you can really opt out of. I mean, social media is obviously here to stay, and you really don't exist if you're not in some sense on social media. So you have to make the most of it.

[1:20:55] Briar: It's true. Well, thank you so much for coming on the show. I really appreciate it. It's been fun.

[1:21:00] Steve: Well, same here. Same here Briar. Have a good day.

[1:21:04] Briar: You too. 



Briar Prestidge

Close Deals in Heels is an office fashion, lifestyle and beauty blog for sassy, vivacious and driven women. Who said dressing for work had to be boring? 

http://www.briarprestidge.com