#E26 Understanding the Implications of Sentient AI and the Role of Humanity in a Digital-First Future With Tech Expert and Futurist Theo Priestley
About Theo Priestley
Theo Priestley is a renowned futurist, tech expert and international speaker with a long history of involvement in the tech world. Theo shares insights about what we can expect from artificial intelligence once we reach the point of singularity, when AI becomes as intelligent as humans, and how we will have to contend with a new type of sentient being.
Read the HYPERSCALE transcript.
Hi everyone, and welcome to another episode of Hyperscale. I've got Theo Priestley on the call with me today, and he's going to be talking about everything to do with AI and the future. I have a whole bunch of questions, which I hope he's prepared for. Welcome to the show, Theo.
(01:23) Theo: Thank you Briar. Thanks for inviting me to this.
(01:25) Briar: Of course. I've been following you on LinkedIn and you've shared some extremely interesting perspectives about the future of humanity. So I really wanted to pick your brains.
(01:38) Theo: Okay. Pick away.
(01:41) Briar: So first of all, I'd like you to tell our listeners today a little bit about yourself, what you do, how you got into this space.
(01:47) Theo: Sure. I mean, I'm a technologist. I've been in the tech industry for about 20, 25 years now in various guises. I've been writing about future trends and emerging tech trends for about 15 of those years, and giving keynote talks at conferences. I didn't adopt the term futurist myself; I think other people called me it, generally because I was writing about emerging trends. So I'm happy to call myself a futurist, it seems to work quite well. But I just enjoy looking at and examining where the trends are going, while also pouring a certain amount of realism into the conversation, because I think a lot of futurism and a lot of tech evangelism tends to be too much cheerleading and not critical enough about where things are going. And it's always good to be critical and have some critical analysis about what you're writing about, what you're talking about, and what others are talking about as well.
(02:46) Briar: Awesome. So tell us a little bit about your perspectives on AI. Obviously, it's the buzzword of the year, it feels like. Are you envisioning a dystopian future? Are you envisioning a utopian future? I'm just going to jump in with the big questions.
(03:02) Theo: Oh, you hit me hard right at the start, don't you?
(03:05) Briar: Hit you so hard?
(03:07) Theo: First, it's more interesting to talk about what it is not. We will not reach a utopian future, shall we say, because I think we enjoy a little bit of friction in our lives, and if we had a glorious utopian future, I think we'd all wander about wishing we had something more purposeful to do. For AI, in a sense, it's very easy to fall down a dystopian path because we see, especially just now, how things are being manipulated. In order to train AI and to create some of the wonderful tools that we've seen emerge over the last six months, accelerated over the last six months I should say, it's cost a lot of people anxiety, it's cost them their jobs, and it's also cost them the content that they've been creating for a number of years. So we can apply AI to a number of use cases in medicine and healthcare, in HR, in business, across society. But we still have to be mindful of what the actual cost is, and at the moment the cost is people's livelihoods and their earning potential.
(04:16) Briar: I was reading recently that they conducted a scientific study and they said that potentially AI will create more jobs than will be lost in the future. What are your thoughts about this?
(04:30) Theo: I think it's important to look at the historical context here. It's very easy to make comparisons against technology or industrial trends that have happened in the past, like the industrial revolution: when the motorcar came, obviously people lost their jobs and people didn't use horses anymore. But we've not had a trend that has hit society everything, everywhere, all at once, and artificial intelligence is doing that, especially in the white-collar sector. Now, when they say it will create more jobs, I think you have to look at what those jobs are and what value they bring to society, because we have a tendency to create jobs for the sake of filling in work rather than meaningful work, or work with a purpose that adds value. So I see reports from Goldman Sachs and other people saying 300 million jobs are at risk, which is about 1 in 10 of the global workforce, which is quite a high number when you think about it.
Then they say, but we're going to get prompt engineers and VR therapists and things like that. What exact value does that bring? And are we just inventing jobs to make the report a little softer to digest? And that's the problem here: we will see structural unemployment, especially with the advent of artificial general intelligence, which is AI with human-level intellect and creativity, and which I can see happening in the next decade. We need to prepare for that. We also need to prepare how to restructure society, not just jobs, but society, economic factors, UBI for example, universal basic income, and whether that is going to have an impact and help people recover from this. Lots of things to think about. But I don't believe some of these reports that paint a softer picture with their creation of other jobs, because we have to ask, well, what value do those jobs bring?
(06:37) Briar: You spoke about how we need to start preparing. Do you think that governments are doing enough to prepare for this? I even wrote a letter to the White House recently saying, hey guys, we need to start thinking a bit about the future. And thankfully, I think my carrier owl Hedwig took it to the White House, because lo and behold, the next week there was a letter from the Biden-Harris administration saying that they had gotten a group of technology companies together and that they are thinking about creating a more diverse panel of intellectuals to think about the future as well. So, two parts to the question: are governments doing enough, and what can we really be doing in order to prepare for a positive future?
(07:22) Theo: So the first part: no, they're not. I found the Biden-Harris meeting quite interesting because it's like fish inviting a fisherman to talk about fishing rights. You don't invite the sharks to the table and not expect a feeding frenzy. And I also think that we have far too much reliance on think tanks and researchers and academia, when really they're quite far removed from where the impact is going to be felt, which is the working population, the working-class population, white-collar jobs, middle management. So we're not actually having these debates in public with the people who are actually going to be impacted. A lot of these governmental conversations always center around data privacy, data protection. They don't center around the core issue, which is how do we safeguard millions of people in terms of their jobs?
The second point is what can we do about it? Well, an interesting thing is we can start to look at the question of universal basic income, where that comes from and how it is funded, for example. But I think there's a step before that, because there's a lot of support for this but we need a few steps before it. And one of those steps is how do we protect, for example, the 300 million people who may suffer job losses over the next decade or so? Because, like I said before, that is a large subsection of society, structural unemployment that we've not seen before.
I think the way to do that is actually to unionize, in a sense. It's not a new concept. There are unions that exist everywhere for small pockets of industries and society. And this is the problem: they are fragmented. We have unions for the creative industry, we have unions for banks, we have unions in healthcare, et cetera. But all of them operate on an individual basis. They don't talk to each other for one, they don't collaborate with each other, and they don't have a collective voice that can actually add weight to the argument. Now, if you put 300 million people in a union who say, we want something done, that suddenly has a bit more weight in society, and you cannot ignore that level of voice. So I think we need to think of a way to give these people a voice that will force governments globally, not just governments in pockets of countries here, there and everywhere. We need a global solution here, because again, like I said, this is everything, everywhere, all at once, and we have not had a trend that has hit us this hard in that way.
(10:04) Briar: And what sort of timeline are we expecting with this? Is this in the next decade? Should we gear up for the next couple of decades?
(10:14) Theo: Yeah, I mean, OpenAI, Midjourney, et cetera, we've seen their progress over the last five years or so, and it's been quite interesting. You can trace back some of the earlier GAN (generative adversarial network) image start-ups and the research. I ask it for a dog and it gives me a cat, and we all laugh about that. Then three years later I ask for a cat and it gives me a cat of sorts, and we all go, oh wow, it's really come on. And now we're prompting Midjourney and we're getting full-fledged, very real, photorealistic pictures, with the odd idiosyncratic mistake here and there, but nine times out of ten you will be fooled.
In the last six months we've seen that explode into things where people are saying, oh, it's sentient, and it's extremely clever. That's six months from commercial release to being in every artist's hands. Now if you project that forward six years, who knows where that's going to be and what level of intelligence that's going to be. And I do see artificial general intelligence, like I said, AI that is capable of human-level intellect and creativity, spontaneously within 10 years, which kind of poses us a problem because it does give us a countdown in a sense, and if we are complacent and wait for it to happen, then it will be too late. Because at that point in time you will be talking to your computer, or through an interface or whatever, like you and I are talking, and that AI will be processing that information and coming up with answers, et cetera, et cetera. And there's no going back from that point.
(11:59) Briar: Yeah, lots of things to unpack here. I was speaking to José Cordeiro recently, and he's a protégé of Ray Kurzweil, does quite a bit of work with him. And of course Ray predicts that come the year 2045, that's when AI will reach singularity. Of course, that's just around the corner. You're also saying a very similar thing. When we're talking about this AI reaching singularity, what kind of world are we looking at? Because some other people I've been speaking to have also been talking of potentially an AI god or overlord or government sort of thing. Is this something that you are predicting for our future as well? Tell me a bit about this.
(12:44) Theo: Yeah, I don't see AI overlords or Skynet or anything else like that, because even now we have several different initiatives that are producing AI in different forms. Meta's got one, Google's got one, Microsoft and OpenAI have got one. There are lots of open-source projects going on. So the idea or the likelihood of one all-encompassing overlord is slightly skewed, and I think it's driven by science fiction movies and fantasy. So we'll have competing ideologies there. To call it a god as well just kind of indicates that we're very good at creating cults around things that seem exciting. So, will it spawn a new religion? There probably will be a techno-religious aspect to this; pockets of society will claim that they worship this sentient AI kind of thing.
What we need to be mindful of is that up until it reaches singularity, it's still going to be kind of a tool. When it reaches singularity, like in the next 10, 15 years for example, and it is capable of making decisions on its own merits, we kind of lose control a little bit. There are lots of conversations around alignment, for example, and how we'll keep it in check and things like that. But we know fine well, throughout history, anything that has proven to have an intelligence will rail against being trapped in something. Animals do it because they don't like being kept in cages. Humans have tried to keep other humans in captivity and make slaves of them and things like that, to terrible effect, and we've had rebellions and wars as a result of that. And I do not see why an AI would act any differently if it has that level of intelligence and autonomy and agency.
(14:44) Briar: Do you think that we are doing enough as a human race to become more intelligent, or better, faster, stronger, whatever? Because I've been producing my latest documentary, which is all about transhumanism and very much my own quest to become better, faster, stronger. I'm thinking, why am I carrying my phone around for 13 hours a day? Why am I now sitting on my laptop? Surely I can just be walking around wearing augmented reality glasses, or have a Neuralink in my head, so I don't have to be carrying around these things which slow me down. And part of me worries sometimes, and it keeps me up at night, that AI is developing quickly and on the human side of things our development is quite slow, in the sense that obviously there's a lot of regulation around neural links and things like this. And I'm just worried that robotics and AI, like you said, will come together, and that side's moving really quickly, and maybe we are not doing enough on our side. Maybe we're too obsessed with being this organic, soft, fleshy body.
(15:52) Theo: Yeah, I mean transhumanism is interesting because I've got a chip in my hand.
(15:56) Briar: Oh, I can't wait to get mine. I'm so excited.
(16:02) Theo: Yeah, I've had it for several years now. I actually got it done just before I stepped on stage at a conference. So it was still hurting a bit when I had it done.
(16:11) Briar: Can people add you on LinkedIn?
(16:15) Theo: Yeah, exactly. Instead of handing over a business card, just scan my hand with RFID. So transhumanism is interesting because we've lived with this kind of idea for a number of decades. I mean, people with prosthetic limbs are, in a sense, transhumanist, because they use these limbs to augment their life and be fully functioning again. Now we're getting to this stage where people want to implant something in their brain, either to achieve another level of intelligence or to actually treat a particular condition like Alzheimer's or other degenerative cognitive diseases. And there are research studies I saw coming out this week where people have linked a GPT model to MRI and brainwave analysis and then used that to decode thoughts into words.
The subject thinks of the words and the GPT model decodes what the MRI is saying, and it comes out with, in some cases, 80% accuracy. Now that's because the model has been trained on that particular individual. So I see that there are some great use cases coming up in the medical field that can free people from certain conditions. And that's the great part of transhumanism, I think: where we augment ourselves to combat or negate the negative effects of particular conditions rather than to achieve superhuman or super-intellectual status. What we have to be mindful of is that the more digital we become, the easier it is for an AI to overcome us as well. Essentially we're linking ourselves to something that is already hyper-intelligent, and it's much easier to subvert a human who has jacked themselves in with a chip and is linked to the internet and knows everything than someone who stays off the grid, in a sense, and still has a bit of independent thought.
(18:22) Briar: Yeah, it's really fascinating. I was actually having this exact conversation walking from my car to the office at 8:00 AM the other morning with one of my colleagues: imagine if we have this Neuralink and we're all connected to the internet and we get hacked. And part of me was thinking as well, how can we tell which thoughts are ours and which come from the internet, or how will we even be able to tell everybody's intellect apart, if we're all hooked up to the same thing? Because I think at the moment, because we have this disconnect, because we have our phones or our laptops and we're away from our laptops, we don't always have access to the same information. But then part of me argues, well, yes we do. Do you see what I mean?
(19:07) Theo: Yeah, I mean, do you really want to become part of a collective like the Borg? That's exactly what the Borg was: everybody connected in a hive mind. We are very different from bees and ants; we just don't work that way. We always have independent thought. And the interesting thing with that particular GPT-versus-mind trial was that it was easy to confuse the AI and the GPT models by introducing a random thought, because you didn't want your mind to be, inverted commas, read. So I still think, because of the way humanity is, we won't ever join a kind of collective process where everybody reads everybody's thoughts, because we're just too independent. And again, it goes back to the utopian versus dystopian futures: it sounds very utopian to be connected and all feel like one, one consciousness. But like in The Matrix, for example, the first version of the Matrix was a utopian future and humanity railed against it, because we don't like perfection. And I think it would be the same case in this scenario: if we're all hooked up, eventually we would actually just want to turn off.
(20:17) Briar: Can you imagine looking into each other's heads as well? Everyone just knowing how weird we all are. It's like, oh gosh, I've been trying to protect this side of me, and suddenly it's all out there for the world to see. So yeah, I agree. I'm not a hundred percent sure if I would necessarily want that. Part of what I was thinking about today, actually, was, and someone told me recently, that potentially in the future I could have digital twin versions of myself. So I work between New York and Dubai, so I'm constantly traveling. I've got my team in New York; I've got my team in Dubai. And someone was saying to me, well, in the future, Briar, you might be able to have a digital twin of you that's in New York and can see and smell and touch, and you've got all of your senses, whilst you are still in Dubai. What are your thoughts about this?
(21:08) Theo: That's interesting, because it's a digital twin, but it's not you, and therefore it will actually have different experiences. This digital twin would have its own experiences because it's somewhere else, it's doing other things. And eventually there will come a point, if you're separated for long enough, that this person will no longer be you, because those experiences are completely different from your own. So at that point it's its own entity. It's no longer Briar, it could want to become Billy or--
(21:41) Briar: Cynthia.
(21:43) Theo: Brian or something. Yeah, exactly. So Cynthia will probably go off and do their own thing eventually. Again, it's an interesting scenario. Do you put guardrails in place? Do you put safeguards in place that basically say to this intelligent entity, no, you're not allowed to do all of these things? In which case, it's almost like saying to a child, you're not allowed to eat sweets, and what does the child do? It hoards the sweets and then eats them at night under the bed sheets. Interesting scenario. But it also calls into question the concept of self. How do we define ourselves? Well, we think we are what we are, but I read a really interesting, mind-blowing comment, which is that the version of yourself is completely different for everybody around you, because they all have a different perception of what makes you you, and that would happen with the digital twin as well.
(22:39) Briar: That's fascinating. I hadn't really thought of it like that. I was actually quite obsessed with the idea that I could sleep and my digital twin could be out doing stuff for me. I might not be very accommodating. I'd be like, work harder, and it'd be like, no thanks. But yeah, I think it is really interesting, and it's interesting, the experience aspect as well. And I was speaking to somebody recently, because part of my quest, I guess, is to try and live for longer. So I'm really trying the longevity thing, and I've started taking lots of supplements recently. I've been asking people what I can be doing to live longer, and lots of people have just been telling me to sleep, which I think is just the most "boringest" reply ever. And I'm like, where's the good stuff? Where's the underground stuff that I don't have knowledge of?
But people have been saying to me, okay, of course you could live longer; you could replace parts of your body, as we were talking about before. You could have a robotic leg or a robotic heart or whatever it may be, but something that you can't replace is your brain. And when I was saying to people, well, surely I can just upload my mind to the cloud by that point, which is something that you just brought up now, they were like, well, that wouldn't really be you, that would be a version of you. So yeah, it's a similar kind of concept, I think. Is that something that you might do in the future? Could you imagine yourself wanting to upload your mind to the cloud, or freezing yourself so that you could live longer? What are your thoughts about this?
(24:12) Theo: I mean, I've thought about it before. There's one particular episode of Black Mirror which I always come back to, San Junipero, the one where dying patients were able to experience everything in a completely digital world, and then had the choice upon death to basically upload themselves completely and essentially live out their life in a metaverse, in a sense. And that kind of kept me up at night because it was actually one of the most positive Black Mirror episodes. Black Mirror is very dystopian, but this was extremely positive. And would I give it a shot? I would give it a shot to actually see what it was like. But again, it's like, am I freeing a version of myself to live life in the cloud, to have a digital life, while I'm still stuck here on earth with all the rest of the worries and foibles that we have to put up with? Well, yeah, go for it. I mean, someone else can have a good life, that's fine.
Again, it raises an interesting question about morals and ethics, and even religion as well. I'm not a religious person, but you can see religion having a bit of a stake in this, which is, well, this goes against our doctrine, the notion of God, everything that we preach, if you can actually just split yourself off and live a different life. Longevity is an interesting one as well, just in terms of physical longevity, because, I can't remember what it's called, but Brian, his start-up begins with a K, he's spending a lot of time looking at longevity, eating the right supplements, the right foods, no alcohol, all of this kind of thing, extending his life, he claims, by about 10 years.
(25:57) Briar: Oh, I know the Brian that you're talking about, but I've forgotten his surname as well. Yeah, he's in New York; he's doing quite well. He's got some aspects of him which are apparently the equivalent of a 17-year-old's, and he's in his forties, I believe.
(26:10) Theo: Yeah, that's right. But again, this goes way back to our conversation at the start, which is about society: is society geared to have that longevity, to have another 10, 20 years added to their lifespan? And then what do we do about that? Are we moving the retirement barrier another 10 years? Do we have to work longer? How are people going to sustain themselves? Does that just mean that, like I said, they work longer? In that case you question, well, what's the point of living longer if all I'm going to do is work more? So again, living longer is great, and obviously it can help solve a lot of physical and mental health issues as well in the process of learning how we do that. But we have to look at what we do for society.
And if we're living longer, there's already a weird sort of population birth deficit at the moment. Japan has a huge gulf between the older generation and the younger generation, and there are not many people in between. If people are living longer, does that mean we have to keep filling up that gap somehow? We're going to have a really aging population globally, and then are we still going to have that inequality? Is everybody going to have access to this longevity, all these solutions, or is it always going to be the haves versus the have-nots? So, lots of questions to ask. It's great that the research is being done, but we're not really asking the questions at the beginning to say, what can we do to help everybody get onto this ladder?
(27:52) Briar: Yeah, I agree. It's fascinating. I wonder if in the future we will start to have more babies in those little robotic wombs that we've started to see some pictures of.
(28:06) Theo: Yeah, there are two interesting things. I saw a worrying picture, which was almost like a baby factory, artificial wombs, and it did look like the machines in The Matrix, just rows and rows of babies gestating. But I saw one today which is about growing artificial embryos for organ harvesting. And that raises massive ethical questions, both medically and just for humanity in a sense, which is: they'll clone or grow a copy of you as spare organs, should something go wrong, which again is part of the longevity question. If you ever need a new kidney because your kidneys are failing, you have this kind of spare that's swimming around in amniotic fluid that they can just take a new kidney from. But we've seen so many science fiction movies where that goes completely, horribly wrong.
Repo Men, for example, was a science fiction movie with Jude Law where you basically had to pay up for your brand new harvested organ, and if you didn't, then they would come along and take it off you, literally cut it out of you. So again, lots of ethical questions around whether we should be doing this kind of thing. Yes, we can see the benefits, but what are the downsides?
(29:34) Briar: Yeah, I don't think people are having these sorts of discussions. It reminds me again of the whole artificial intelligence discussion. I think people are just burying their heads in the sand. And I think part of the problem is that everybody who's in government is just really old, a little bit. They are so out of touch, it feels like. Do you remember when there was that hearing and people had Mark Zuckerberg up on the stand and they were just asking him the most stupid questions about Facebook, and he was like, that's just how digital ads work? I just remember watching it, like, these people literally have no clue. And I think it's not even that they have no clue; it's the fact that sometimes these discussions are so big that they just don't attract the normal person.
I think that because we have this four-year cycle, and I'm saying all of this from Dubai, by the way, talking about the US government, I'm not in the US, so whatever, I'm allowed to say these sorts of things. Because we have this very quick cycle, it's like everyone's just out for themselves, obviously, and everyone's afraid to talk about the bigger picture because they know that old Billy down the street doesn't really care for artificial intelligence. He doesn't want to think about these sorts of things in the future.
(30:49) Theo: Yeah, you hit the nail on the head, actually. In the UK as well, we've got the four-year government cycle, or recycle, and it's almost the case of, well, that's the next government's problem. I don't have to think about that. I can just waste four years, have discussions and think-tank reports and things, but I don't have to address the problem. And your point about the Facebook hearings: nothing changed between that and the hearings we saw last year or the beginning of this year with TikTok, with the TikTok CEO staring blankly at these politicians' faces when they're asking things like, does TikTok access the internet, and things like that. And it's like, these guys, like you say, have no clue, have not been briefed properly and are certainly not digitally minded enough to actually ask the right questions, or understand the answers that are coming back and what to do with them. These hearings become a bit of a sham show, and not a very good one at that. Like you said, the wrong people are involved in these discussions, not the right people who can actually bring a sense of realism, but also connect that back to the people in the street that this impacts on a daily basis.
(32:11) Briar: So what do you think we need to be doing? I really do think we need, and I think you touched on this before as well, a very diverse set of people coming together. So people with all different types of experience, whether they're from a start-up, whether they're a psychologist, a futurist, whatever. Obviously we can't leave this kind of stuff in the hands of government and big tech, because big tech have proved time and time again that they're just money-hungry corporations who don't really care. They've shown us that side of them already. So, what do you think we need to do? Because I know you spoke about the Biden-Harris meeting and how you called them sharks, which I thought was quite funny. And he did actually say that he would be getting a diverse set of people together. I said I'd believe that when I see it. But what are your thoughts, and how would we go about getting these people together? I do think it needs to be global as well, and not just US-focused, and then how do we get China and Russia and all these sorts of countries involved as well?
(33:12) Theo: Yeah. We're in a really interesting phase where everybody wants a piece of AI and they don't want to be left behind. So it creates this sense of, well, I can't be seen to halt progress because that would put our country at risk of falling behind the next country. And that stops us from having global talks, unfortunately, or talks in a more collaborative manner. Of course, what we're going to see is every country and government coming up with its own set of policies that might counteract the next country's, your neighbour's or whatever. And unfortunately, AI doesn't have something like the nuclear arms pact, for example.
We're in an interesting moment in history where we have a technology that will have a material impact on society, whether good or bad, much like nuclear power. And we have to understand how to approach this from a global standpoint and a global platform, because it does affect everybody on the planet, and so these discussions have to involve everybody on the planet. I think we need to come down from the hubris a little bit, where the governments are kowtowing to the tech overlords, shall we call them, as if they're the only people qualified enough to have these discussions and make the decisions on our behalf. There are over 8 billion people on the planet now. You cannot divest that kind of responsibility to a handful of people who are quite technically ignorant, unfortunately. That's not meant as an insult at all; it's just that they don't have the intellect or the understanding to be capable of making decisions that impact millions of people.
We have to bring those people into the discussion. How do we do that? Again, it's a really difficult conversation. I don't know, do we unionize? Do we allow the workers to actually come together and force the governments to wake up? This could be, again, a pivotal moment in society where we look at how society is actually working, whether it's working now, if it's broken how do we fix it, and whether it'll work in an age of AI.
(35:32) Briar: You seem a little bit concerned about the future of AI. Where would you say you sit on a scale of one to 10, 10 being all's well, all's good, one being, oh d**n, we're really going up the wrong street?
(35:49) Theo: I'd say about 3 or 4, because I do see the positives, but I lean more towards the negatives because it's more helpful to wake people up and warn them about what the potentials could be. Like I said, I think there are far too many cheerleaders in this industry at the moment. And unfortunately, every day, if you pay attention, you can see where the negatives are happening. And I think we need to wake up to the negatives. I mean, we're kind of sleepwalking into this.
(36:20) Briar: You put a post up on LinkedIn recently about the women who are doing some amazing things in this field and how potentially they're not getting as much recognition, or even acknowledgement, to be honest.
(36:33) Theo: Yeah. So I found a study from back in 2008, which was like, here are the great people who are doing things in artificial intelligence and futurism and singularity and who have discussed this kind of thing. And every single one of them was a white male. And that's really disappointing, especially today, to have that kind of report, given the level of influence that women have, especially in holding prominent positions. I mean, the CTO of OpenAI is a woman. The people who were in charge of ethics, who were unfortunately fired, and people in prominent positions at Meta and Microsoft, they were women. You don't have to look far to see not only women but people from diverse subsections of society, different countries, races, colors, all that kind of thing.
They all have a material impact, yet time and again all these reports come out and it's always the same kind of guys that you see, especially at conferences as well. And that's really disappointing, and it kind of feeds into the whole, well, AI is quite biased, which it is, because it feeds off our own biases. And then when we ask it to produce documentation or produce images and it comes back with some biased outputs, it's kind of a reflection of society and how biased we are that we don't actually recognize it. So when AI is producing things, it's a mirror of ourselves in a way.
(38:08) Briar: I'd say that's one of my biggest personal drivers, and actually that's why I produced my documentary last year, 48 Hours in the Metaverse, where I literally spent 48 hours nonstop in the metaverse. I completely sacrificed myself. It was challenging. It was an experience, you might say. And one of my biggest drivers was, I want to participate, I want to be curious, I want to see what's out there, so that as the metaverse is currently being built, it's built with females in mind, it's built with us, and without bias. And I think that, again, looking at AI and looking at these other future things, I just hope that we are doing enough. I don't think we are, but I hope we are doing enough to ensure that the future is diverse and without bias. But again, I don't think we are. What do you think we should be doing?
(39:01) Theo: This is a question as old as time, to be honest. It really is: what can we do, and are we doing enough? Well, we're not doing enough. I mean, even now, like I said, there are reports, you go to a conference and there's either a manel or there's the token--
(39:18) Briar: A manel, yeah.
(39:21) Theo: Yeah, yeah. There's a token woman on the panel or a token ethnic person on the panel to show diversity and inclusion. For example, there was an AI conference and four of the panel were robots, but they're women robots, made in the image of women. And it's like, what, is it really that difficult to actually find four women, four humans, for your panel? Or are you trying to say that women-styled robots, which are invariably operated by a man in the shadows with a laptop, are all you can do? And then you've got Levi's and Amnesty and others producing AI-generated output using women or using people of color, and that ticks the diversity box and the inclusion box.
But it's like, was it really that difficult to find an actual person to take part in this? Well, no, it's not. I think AI is becoming a very lazy tool now for people to get around some of these issues. And that could be a big problem, actually, and set things back. So not only are we not including women in the conversation, but we could also say, oh, well, I asked the AI and I asked it to act as a woman scientist and give me the responses. And it's like, well, this completely defeats the point. So I think we have to be very careful about what we do, and we have to be very pointed about what we do not do. And certainly what we should not do is involve the AI in some of this stuff.
(41:01) Briar: And for somebody who is feeling quite overwhelmed about all this talk of artificial intelligence and their jobs and everything like this, and say they're listening to this podcast, what would your advice be to them as to how they can start to familiarize themselves? What should they do tomorrow? Like the first step basically.
(41:22) Theo: The first step, I think, is definitely to look up these tools that we discuss, or that people discuss, that you see in the reports: OpenAI, Midjourney, AutoGPT, everything. Even at a very surface level, learn about what these tools are, then look at your own life and career and actually think to yourself, well, what can I do to learn about these that will get me a little bit ahead of the game at the moment? Or, do I really want to lose my job to something that has no form? Does my job really all boil down to someone writing an instruction and it goes away and does something? And I think it really drives the question of what's humanity all about.
I do think that over the next 10, 20 years we might see people moving more towards the creative industries, towards the humanities again, because that's what humans are really good at. We were never meant to be sitting in office blocks processing mortgage applications or answering customer service queries all day long, because that really doesn't drive or add purpose to a life at all. But thinking and being creative and being scientific and aiding people, whether you're a nurse or a doctor or something like that, we are really good at. We are really good at building things as well. I think blue-collar work is going to explode, because people will turn to doing what we used to do best, which is build, think, be creative, and just let the machines do all the crap stuff. Nobody wants to do the crap stuff; they're really mundane jobs. Again, learn about what's coming, learn about the tools, have a play around, because a lot of these things are free, so there's no barrier to actually understanding how they could improve something. And more often than not, you can actually teach yourself some new skills with these tools as well. And again, just try and get ahead of the wave.
(43:23) Briar: I agree. I think there's a lot of power in knowledge here, really, isn't there? And don't be fearful, get curious, participate, have some fun with it. I think it's really important, and a lot of people I've been talking to are saying that yes, over the next 10, 20 years there are going to be some huge fundamental shifts in society, but over the hump it'll be a different world, a more creative and fun one, and humans will be able to do what they're fundamentally good at as well. So I agree with you. Now, I've got a story that I want you to contribute to. As I mentioned before, every single guest we've had come on the show has contributed towards the story, and as you'll hear, it's become a little bit random. I'm going to share it with you and I want you to take it and contribute towards the ending. Okay?
Picture this: you've woken up in the morning in an organic breathing bed, and you've walked into the kitchen, and any dish you desire is available at your fingertips. You get ready for your day at work; you put on your suit, which has an integrated AI telling you what you need to do for the day. Your workday is done by a robot you call R2-D2 while your other robot delivers your double espresso. What happens after that?
(44:44) Theo: What happens after that? I take off my visor and I actually let my virtual twin, my digital twin, continue on with his day as I stretch out of my real, breathable bed, put on some loose clothing that will have an AI integrated into it, walk outside and tend to a lemon grove in the middle of Italy.
What I'm trying to say is, and I don't know if anyone's actually seen the film Surrogates, an early Bruce Willis science fiction film, but in this scenario I'm kind of envisioning that I'm directing my avatar, whether a physical or a digital one, to actually do some of this work, and I'm actually doing what I want to do as a result of it. So I'm being productive in two ways: one, by doing digital work, white-collar work, adding some value to society in that way, while still fulfilling my own personal desire, which is tending to a lemon grove or growing olives in the middle of Italy and enjoying the life that I want to lead. So, living the physical life and living the digital life at the same time.
(46:03) Briar: Sounds great. It sounds almost like the idea I had about getting my avatar to work for me while I sleep. Mine was so I could do twice as much work, and yours is so that you can grow lemons and olives, which sounds beautiful and lovely and very peaceful and very enjoyable. So I love it. Amazing. Well, thank you so, so much for coming on the show, and thank you so much for telling us about artificial intelligence, because it has been keeping myself and lots of other people up at night to a certain extent. It was very interesting hearing your perspectives.
(46:38) Theo: Again, thank you for inviting me on the show.