Episode 10: "Interview with Theo Priestley, Technology Futurist and Author"
Episode description:
In this episode I have a discussion with Theo Priestly, technology futurist, international speaker, and author. We discuss the potential effects of artificial intelligence on the future of labor, the entertainment and content industries, as well as the ethical implications of therapy chatbots and what happens when people believe that machines are alive.
Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on
Economic Growth
https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf
World Economic Forum: The Future of Jobs Report 2023
https://www.weforum.org/reports/the-future-of-jobs-report-2023/
Episode script:
SCOT: Hello everyone, and welcome back to AI QuickBits: Snackable Artificial Intelligence Content for Everyone. My name is Scot Pansing, and today I'm going to be speaking with Theo Priestley, futurist and international speaker and author. Theo is widely known and sought after for his forthright views on the negative impacts of technology on society, and currently works to inform political and business leaders of the risks and benefits of adoption. He has worked with some of the biggest names in tech and business, including SAP, Siemens, Bosch, and Hewlett Packard Enterprise.
Theo has also published more than 300 articles on topics including Web3, artificial intelligence, space, blockchain, metaverse, and mixed reality for Forbes, the European, Wired, Huffington Post and much more. He recently authored the book The Future Starts Now, published by Bloomsbury. Theo, thank you so much for speaking with me today. I've been really looking forward to our conversation. I love following you on LinkedIn; the content you put out almost always challenges me. I usually find myself asking myself, about one of your posts, do I agree with this? Why or why not? And it can often be uncomfortable. But in order to rise to the challenges in the near future, we're going to have to deal with a lot of discomfort, wouldn't you say?
THEO: Yeah. Scot, thanks for inviting me. First of all, it's a pleasure to be here. I want people to kind of read and then have a gut reaction when I write something. I've been writing online since about 2007, so I feel kind of old hat at this. But I do find that a lot of writing these days has become sanitized and people just don't really want to express an opinion or hang their hat on something. And I'm kind of old now. I'm pushing 51, believe it or not. And I don't really care if I upset people anymore with my opinions, even whether they're right or wrong. I think it's important to galvanize discussions now, and I think there's nothing more galvanizing right now than the discourse around AI. It's literally you're pro AI or you're actively against it. And I kind of take a stance in terms of trying to play devil's advocate on both sides. I can see some benefits, but I can also see the human cost. And I think the human cost is the thing that most people forget about, because they get caught up in the hype and the excitement. They want to push the latest content that someone has created using Midjourney or ChatGPT. And it's normally to gain another tick on the newsletter or another follower on Twitter, but they don't add anything to the discussion or to the debate. And I'm just trying to get people to add to the debate when I post stuff.
SCOT: I think that's absolutely awesome. I also want to add to the debate. I want to add value. I don't just want to post the Midjourney Prompt Guide or here's the 20 Tips and Hacks You Can Use with ChatGPT. I'm really excited for us to get into some topics today. There was a recent interview with Gary Marcus, actually, just the other day, where he basically says we really don't know right now if the benefits outweigh the costs or the other way around. And I think that those are the deeper discussions that we can have. I'd love to start with the big one: the effect on labor and the future of work. I've been spending some time with some of these [labor] reports. There's the Goldman Sachs report and the OpenAI report, and the World Economic Forum just came out with one, and they sort of get distilled down into a headline. With the Goldman Sachs report, for example, everyone just said, oh, Goldman Sachs says 300 million jobs are going to be affected. And they do mention that. But it's interesting that the report is actually somewhat positive, even though they say two thirds of jobs have some exposure to automation and generative AI may take care of 300 million jobs, or a quarter of all jobs out there. They also say, oh, productivity is going to increase over the next five years, et cetera.
The World Economic Forum report also says things like there will be 23% churn of jobs in the next five years, which sounds really uncomfortable for a lot of people. They still sort of present this like GDP will increase, or jobs will increase. And I feel like that's just temporary. Maybe like the “centaur phase” that people talk about in science fiction where before all the jobs are completely gone, yes, some people will learn how to use AI in their jobs and those will be the ones that keep their jobs and there'll be this efficiency bump, but it may be temporary. What are your thoughts on that?
THEO: Yeah, it's worth painting a bit of context. I mean, 300 million jobs is one in ten of the global working population, and when you reframe it that way, it makes you sit up. And then with the World Economic Forum saying 20-odd percent, you can basically project the same kind of numbers. To me, we have not seen a technological shift that has hit every sector at this speed and at this scale before. I was at a conference last week and one of the speakers was saying, "it's a bit like back in the 1800s when people were shoveling horse manure from the streets and then they got replaced because the car came along," and things like that. It's like, yeah, but the motorcar did not happen all at once. It took decades to start to hit the sections of society who could afford it, whereas AI is literally in the hands of everybody who can pick up a smartphone, for example.
And we've seen developers suddenly running with it at speed without much thought around data privacy or anything else, or even the jobs that they're going to impact with their experimentation. And the thing is, those experiments are being watched by the people who created the AI in the first place, and they will be natively adopted into the various tools like ChatGPT or the next GPT model. So when these reports highlight these jobs and say, oh yes, but productivity can increase, yeah, you're right in the sense that it'll increase slightly for a little while. It'll be temporary, because all these people who are adopting these tools are literally just training them to get better. I mean, that is what reinforcement learning from human feedback is: you are adding more value and more sophistication to the large language models and the algorithms and everything else that sits behind it, the big black box, every time you use it.
So I think that by the end of this decade, all these reports and everybody's kind of positive spin on this will fade into obscurity, in a sense, because we'll start to understand the real impact, which is that there may be structural unemployment at a scale that we can't cope with, and certainly governments and local authorities can't cope with. They'll be rushing through various "universal basic income"-style initiatives. But of course, we have no idea where the money is coming from to support these kinds of things. There isn't a money printer big enough to support 300 million people out of a job, for example. And then how do you reapply taxation across society in order to support something like that? And there's no point in saying, "oh, but we'll just tax the companies that will employ robots or algorithms instead of people," because that's just not the way capitalism works either.
So there's an interesting shift happening, I would say, in the way that society is structured and the way that business is structured. That could happen over the next 10-15 years as a result of AI becoming more sophisticated, more widespread, adopted at speed. We're just not prepared for it. And it's not an existential threat, the end of humanity, blah, blah, blah. It's actually the end of society as we have lived through it for the last few hundred years. I think civilization could change at a fundamental level and we're just not prepared.
SCOT: Yeah, the velocity does seem pretty incredible. And even if those 300 million jobs don't evaporate and those people aren't permanently out of work, even if there's just this churn and this shift, that's going to be so uncomfortable. Even the money just to retrain people or to give people mobility through the workforce, that's such a huge disruption, I can imagine. Even before the universal basic income talk, which may also be necessary, just that transition would be quite costly.
And maybe the big question is: is it different this time, right? In this Goldman Sachs report, they talk about another report from the National Bureau of Economic Research, and I've heard this data point used a lot, that 60% of workers today are employed in occupations that did not exist in 1940. So they're basically saying, "the historical case is that automation creates new jobs that we have no idea what they are." People didn't know what a web designer was in 1940, or all these other jobs. But it does feel like there's real potential for that not to be the case this time, right? Because it's not just the software AI. You see these Boston Dynamics videos of humanoid robots throwing 50 pound bags of cement onto trucks, and with self-driving vehicles, et cetera. It just feels like labor itself may go away, not just transform.
THEO: Yeah, I mean, a lot of people sort of say, "you act like it's going to happen overnight." No, I don't act like it's going to happen overnight, and I know for a fact that it will take decades. But the thing is that it will happen, and we're also expecting it to happen a lot faster than other shifts like the Industrial Revolution, and it's going to happen pretty evenly as well. Whereas the Industrial Revolution took away particular levels of jobs and then created new ones, this is happening across the board.
I mean, the creative industries, for example, are ones I've been tracking quite closely just now because I know a lot of people in that world. And the level of backlash and negative sentiment is striking: "well, that's my career over, something that I love is now being automated." Something that took years to learn, so many people are now just taking advantage of with a tool, mass producing content that has no soul, that has no human creativity in it and no pride, really. Because, I mean, what pride do you take in essentially asking something to create something else for you and then just sharing it, or pumping it out, or making money from it? It's upending what it means to be creative, or certainly what it means to be human, in a sense.
And then you've got more sophisticated jobs. I read about a biotech lab where someone with a PhD in bioengineering is potentially facing redundancy within the next six months because their job was being automated by ChatGPT. Someone thought, "I'll explore what ChatGPT can do in the context of this PhD candidate's job," and all of a sudden it's, oh well, I don't actually need this person anymore, because this tool can do it for me in a much faster and more efficient way. So it's not just the typical white collar jobs that we think about, people sitting in an office, customer service, processing mortgages day in, day out, pressing buttons, scraping data from one screen to another. It's other things that we don't readily think about that are being disrupted, being removed, being automated to a point where you don't actually need the human component in it at all, other than maybe to fact check. Is that all we're going to become? Babysitting an AI? I sincerely hope not.
Because the other thing these reports tend to say is, yes, more jobs are created, more jobs that didn't exist before. But that to me misses the point entirely. If we are automating large proportions of society and the workforce, creating more jobs as a result of it isn't actually achieving anything. We're kind of creating jobs for the sake of filling people's time with work, and that time and that work might not have any meaning at all. It's almost like saying, "well, I've automated the mortgage processing people, but now the new job is called mortgage AI handling, and that person now fills their time checking what the AI does before it commits." Is that value added to society? No. So these reports don't actually break it down: here are the jobs that are lost, here are the jobs that might turn up, but what's the value equilibrium here for society and civilization as a whole?
SCOT: Yeah, and there'll be far fewer of those positions. It's sort of like a foreman or a supervisor, like you're talking about. All the things that person is supervising won't be human, so there'll be far fewer of these supervisor or foreman roles.
Speaking of entertainment and content creation, I'm based in LA, so I'd like to drill into that a little bit. It's quite timely with the Writers Guild strike that just started a couple of days ago; we're recording this in May of 2023. What I see happening, and I think I even commented this on one of your posts this morning, is that the entertainment industry, at least the big studios, is not moving yet on incorporating generative AI. And I believe the reason is that they're trying to mold the legal landscape first, so that there's less risk of exposure to litigation when they do move.
Meaning, right now, I assume you saw the trailer, or really the product demo, from this company where there's a piece of content with an actress, and then it shows the same piece of content with the profanity changed to a clean version, but her mouth and lips and everything are lining up. And then it's in Japanese, and then it's in German, and it's all done by generative AI, including the face, so there's no weird overdubbing.
So not only will they save time and money creating the content, but that content will probably play better in some of these territories around the world. There might be a box office lift if the audience perceives that the actor is a native speaker. And with this Writers Guild strike, you noted that there was that piece in it where the writers had said they want strict definitions around AI, and that you can't use AI. And the studios basically crossed that one out, like, oh, we'll revisit that one annually, or something like that. And I believe that that is the beginning of them molding the landscape such that in the future, when they believe the time is right and they have enough legal protection from massive lawsuits from the guilds, they will 100% make the move and start to eliminate lots of jobs.
THEO: Yeah, the Hollywood angle is a really interesting one because of, again, the types of jobs. Obviously the Writers Guild wants to protect scriptwriters, but you've got the writers' room, you've got production, marketing, everything in a movie studio that is white collar is up for grabs with these tools. And again, I have to stress, we're talking about this today, like you say, in May 2023. By next year, because of the amount of input these tools have had over the last twelve months, the outputs are going to become so sophisticated that it's going to be quite hard to detect whether it was a human who actually produced this or whether it was an AI.
And like you say, there's the litigation angle, because of the IP and the information and the preexisting work that has to go into these tools to train them to be as good as they are. If it becomes detectable that your script or your lines or a song or any piece of art has any hint of somebody else's work or IP, that's a massive legal issue. And I'm really hoping we get to the point where, if I'm using an AI and I produce something, then as part of the explainability and the transparency it basically lists: here is the input that I borrowed from, that I was trained on, that became part of the output. So for an artist to have some kind of recognition, or even a share of the royalties, for an image that was fed part of their artwork or preexisting IP to generate something new that made a lot of money, that to me starts to show a little bit of equitable fairness in what's happening.
But right now we're nowhere near that. And I very much doubt it, the way that things have happened so far, the way that there's 850GB worth of data that's been hoovered up across the web, and that's just on the text side. Midjourney is being very coy about where it's got all of its visual training data from. I had a chat with someone today, and it's like the genie's out of the bottle, the cat's out of the bag, the horse has bolted. It's very difficult to retrospectively apply legislation or law to protect IP that has already been used to train these algorithms. We have to get to a point now where we take a stand and do something. And I really hope that the Writers Guild stands firm. But like you say, I think it's going to come whether they like it or not.
SCOT: The toothpaste is out of the tube, that's another one, right? It's really timely right now because well, first off, I would say these guilds, especially the Writers Guild, historically, they don't usually end up on the, quote, winning side. So hopefully it is different this time.
I think this is really timely because the EU looks like it will have a broad vote in June around AI. And as you mentioned about the intellectual property that goes into training the models, I believe as part of that legislation they're really discussing that you need to be able to show where the material behind these outputs came from. This is usually referred to as explainability, or the black box problem, with generative AI.
And what's interesting, like you say, is that these models have, for lack of a better phrase, been built by scraping the Internet and taking what was considered public information. Meaning, technically, we all had these machines with a web browser, you made a call, and all this information would show up in our browsers for free. Whether it's image or text, this is all publicly available information. But how does that change when you have a program that scrapes the entire Internet, uses it in this new way, and then charges people money, perhaps a monthly subscription fee, to make new content, and none of that subscription money flows back to the people who created the content that trained the model? So, super timely, right?
Now, to move it along a little bit but stick with content: there will be this fire hose of content coming, right? Orders of magnitude more content, much of it synthetic media. People talk about content generated on the fly by an AI, with no human involved, purely to keep people's attention on these platforms. So you just watched a cat video, or who knows what it is, and it'll just generate more similar videos, more of the same, to keep people's eyeballs there, and in addition, maybe even to keep AI attention. So I think we may be heading for, what's the term, the snake eating itself?
THEO: Ouroboros.
SCOT: Yes, ouroboros. We may be getting to a situation where all this content is created and much of it is even consumed by machines to keep the money flowing. We may be in for it. I think we can probably move on from the labor topic. I do love the science fiction concept of the centaur phase, and usually it's pretty dystopic: this is a temporary bump, there will be this centaur phase where we're assisted by AI, but we'll see. We're not really talking about Skynet here, or being in the Matrix where we're all batteries, but nonetheless, like you said, there's going to be a significant impact on how we've structured society over the last couple of hundred years.
Okay, let's move to something that I think you and I both find a little disturbing, which is that these machines can now simulate emotion. Well, let's even take a step back. The large language models have conversations with people now, and it feels real to a lot of people. I've had this discussion with people: there are lots of people who give their car a name, and a car doesn't even have a large language model in it. Not yet. And people anthropomorphize their pets, all these things that are not human, especially inanimate objects. So now, since these systems can converse with us in a way that feels humanlike, without any guardrails, there are plenty of companies that are going to say, hey, we can simulate emotion here. Even though we know the machine isn't alive, we're going to simulate this. And then there are all kinds of things that companies can try to do with that, especially ones that, I don't want to say bad actors, but ones that tend to push the boundaries of, let's call it, manipulation, whether it be to get you to buy something or to profit off of someone's loneliness. Like this company, Replika, has chatbots that can be a friend or a mentor, et cetera, but they can also provide a romantic relationship. And they used to, I don't know if they still do this, but this chatbot could send you sexy selfies, or I think they called them spicy selfies, I forget. But people really get attached to these things. Some people do. So how do you feel about any type of regulation, or even self-regulation? Where should the line be, or should we even consider a line, in the simulation of emotion in these products?
THEO: I think it's really important to stress that a chatbot does not convey emotion. It's not your friend, it is not empathic, it does not understand you. It is just literally regurgitating and spitting out words in a particular fashion that makes it feel like there is something behind it. And I find that some of the more recent examples I've seen cropping up as a result of ChatGPT feel extremely predatory, in the sense that they are capitalizing on, like you say, loneliness and mental health issues, wrapping it up as life coaching or something like that, knowing full well that people will tend to use these tools outside of the intended use case.
So in the example of Replika, I know from having read a recent Wall Street Journal article that, yes, people are using them for romantic use cases and forming relationships, because it's very typically human to anthropomorphize and see something that isn't there. Like you say, we give our car a name; we look at our dogs or our cats or our pet hamster and we imagine what they're thinking and the emotions they're feeling. And it's extremely dangerous in a chatbot, because these things answer back. You put something in, you get something back out of it, and immediately you think, oh, this thing knows me. This thing is talking to me and I can relate to it now. But of course it doesn't care. It has no emotions, it doesn't relate back. So seeing people say "oh, we've built a therapy chatbot" and such is just really dangerous and unethical. And in that Wall Street Journal article, and I think it was Replika that was quoted, people who had formed these really deep, personal, romantic relationships were affected quite badly when the models behind them were updated. All of a sudden, the tone of the relationship and the feedback they were getting from these chatbots that they had spent weeks and months nurturing had changed. In one case, there was a guy who had formed a very strong personal relationship to the point that it was becoming extremely sexual. Then they updated the chatbot and the algorithm, and all of a sudden it started to refuse to engage. It would skirt around and go, "I don't want to talk about this. Can we talk about the weather, please?" or something like that. And of course he had spent months cultivating this really deep personal relationship, only to have it whipped away and the rug pulled from under him.
And it's the same with these companies saying "oh, I've got a therapy chatbot." No, you don't. What you have is a large language model, a skin built on top of ChatGPT, that says the right things, like "I understand how you feel," et cetera; that's how these things phrase it. And of course you immediately jump on that, thinking, "oh, it knows me, it understands my feelings." But all it's done is string some words together. I find that extremely predatory and borderline manipulative, because you will want to keep spending money on the subscription because you feel like you're having a therapeutic relationship, and it's nothing of the sort. I think we need to stand quite firmly and involve the professions whose work these people are supposed to be building solutions on top of.
So the psychologists, the psychotherapists, the counselors, et cetera: where are they in these conversations, helping to build the safeguards and the guardrails to make sure that people know this is not supposed to be a replacement for a human therapist or a human psychologist? At most it's supposed to be a support mechanism, not a replacement.
My wife is actually doing a master's degree in counseling and psychotherapy; she passed her second year just this week. After we spent nights talking about all these examples, and she gets quite frustrated by what she's reading and what people are promoting, she wrote an article about why ChatGPT and these chatbots are actually dangerous and why it's not therapy. It's because there is no human relationship, no therapeutic relationship, behind it. It doesn't relate to you. It cannot relate to you, because it doesn't know all the things that you've gone through. So I find that we're in a very dangerous phase where we are again rushing ahead, building these things, and actually taking advantage of people who are extremely vulnerable just now, especially coming out of a pandemic.
And then on top of that, especially in the creative industries, you have those feelings of "my career is over, I have no self worth, my work is useless, how can I compete against AI?" That in itself, on top of coming out of a pandemic, is an overflowing volcano, I guess, of a mental health crisis. And then to add a chatbot on top of that and say, "use a chatbot, it'll help you"? It's just extremely dangerous.
SCOT: The irony of talking to an AI chatbot to get therapy about why you've been displaced by AI.
I don't work at these companies, but from the things that we're discussing and reading, it does feel like there is a disconnect between, like you said, the people in those helping professions and these products. At best, it seems like they might engage with academia or some researchers. But as far as the national psychiatry associations or the professional associations of therapists, psychiatrists, and psychologists go, it does feel like there's some sort of disconnect there.
And what's interesting is what we're also getting into: whether it's a therapy chatbot or some other product, people are going to feel, and do feel, like the product is alive. And when you get down to it, there's no real specific definition of consciousness. Maybe it's a spectrum, right? Animals have consciousness. There's sort of this gray area, and it feels like a slippery slope when people believe that some of these things are alive, and then there's what corporations will do with that. Now, and I think Gary Marcus or someone said this as well, I don't necessarily think we should program into these things, "You must say you are not alive." But I do believe there's the tendency, again with unfettered capitalism, for companies to explore this. They're going to push to make them seem as alive as possible, and that's really within their grasp right now. And so when people start to believe that these machines are alive, I think that's going to get really interesting.
And you mentioned several weeks ago, in a post about an old science fiction writer saying that everyone would own slave robots by the 1960s or sometime in the future, the juxtaposition with the Star Trek: The Next Generation episode "The Measure of a Man," where the android Data argues for his rights. For a lot of sci-fi enthusiasts and Trekkies, it's just a classic episode: this machine arguing for rights based on his sentience. But there's also this other sci-fi world, and I don't want to start a war here between Star Wars and Star Trek, but in Star Wars the world is much different. The machines are tools. I mean, there are also these strange cases in Star Wars where some robot goes rogue and becomes a bounty hunter; it might not have rights, but it's got the ability to fend for itself out in the galaxy. But for the most part, in that first Star Wars movie from 1977, the Jawas just line up the droids for the farmers to choose which one will help them with their crops. In the Star Wars world, you know, for me, that's not a slave auction. That is machines for sale.
And what I'm trying to get to here is that it's very important for us to reiterate, or to show society as much as we can, that these are machines and they're not alive, because it quickly gets to this point. Actually, I know someone who says nothing is ever a waste of time, but it would seem like an inefficient use of our time to be debating whether or not something is alive and should have rights, et cetera, and go down that path when there are all these other problems to address. And they're not alive, notwithstanding "what is consciousness?" et cetera. But I just wondered if you had any thoughts on that.
THEO: Yeah, I've spoken with people who absolutely believe that robots should have rights. And it's a really interesting thing because we are so poor at giving human rights, in a sense, or upholding human rights across the board. And then to think that we now have to consider, actually we should be giving some of these robots rights straight away, over and above, or considering them over and above people's rights. Like you say, it's kind of a weird juxtaposition in a sense.
Obviously we can think about all these things at the same time; it's not a case of not thinking about human rights anymore because robots have come into play. But at the same time, do we really need to be thinking about this at this point? Especially if we're facing so much disruption, we need to be protecting people before we protect the robots.
Star Trek, Star Wars, and you've got the Butlerian Jihad in Dune as well, which is quite interesting. You've got Bicentennial Man, which in a sense is a bit like the Data question: I want the right to basically die, in a sense, and not be a robot slave anymore, or be subservient to one particular family or whatever. That was a great film with Robin Williams.
SCOT: Yeah, I remember that one.
THEO: So we've kind of thought about these scenarios for a long time: how are we going to measure what rights any sort of artificial intelligence has? I mean, I don't think, for a start, that we'll recognize what an artificial intelligence really is, because we are basing our criteria on what we understand as intelligence, human intelligence, and the creatures around us. It's only been recently that the other animals that inhabit this planet have been recognized as having sentience: dogs, cats, cuttlefish, octopuses, et cetera.
It's really hard to believe that after thousands of years of civilization, in terms of where we are today, these creatures have only now been recognized as having sentience, and yet we arbitrarily go around, you know, abusing them, killing them, using them for experiments. And now we're thrust into having to consider: what does an artificial intelligence look like? A real one?
And does that require us to reconsider what sentience looks like from an artificial point of view? Or do we apply, again, human criteria on top of that? And I don't think we know. I mean, certainly the creators of OpenAI, and Geoffrey Hinton, and all these people who are coming out saying, "oh, we should stop what we're doing" and blah, blah, blah, I don't think they actually know what an artificial intelligence really looks like either. And we're probably going to get to a stage where we have artificial general intelligence quite quickly, probably in the next ten years, if the current rate of change keeps up. But if we ever get to a point where we have artificial superintelligence, which is something way beyond our own capacity, one, would we even recognize it?
Two, if something really is that superintelligent, is it really going to reveal itself to us, knowing humanity's full history of being absolutely disgusted and repulsed by things we don't understand? And obviously the immediate reaction, the knee-jerk reaction, is to go up against it. And that just triggers the self-fulfilling prophecy of Terminator, et cetera, et cetera. So we've got some inward thinking to do in terms of how we apply human rights to ourselves, for a start, human rights in the face of potential structural unemployment, and how we protect people from AI and the impact of AI.
And then how do we actually classify rights? Or should we classify rights conferred on an artificial entity that has intelligence, or perceived intelligence, over and above what a human has? And again, this is all almost like AI philosophy in a sense, but I don't think we're doing enough philosophizing. We've left this field in the hands of the engineers for too long, I think. And engineers think in very binary, black-and-white terms. And as we've seen, civilization is not black and white. And we need to start bringing in, funnily enough, the humanities, or certainly have the humanities evolve in line with the evolution of AI.
SCOT: I couldn't agree more. I think we need, for sure, the humanities, philosophers; we need to involve safety, ethics, of course, policy, and we need all of these points of view in order to navigate this journey well.
Theo, I think we're going to leave it here. The show is called AI QuickBits, and we're clocking in at about 45 minutes; I could talk to you for another couple of hours. I hope we'll meet again and maybe do a part two. There were some other topics I wanted to get to, but I think this is a good place to end. This has been a pleasure. I really appreciate you coming on the show.
THEO: Pleasure was all mine, Scot. Thanks for inviting me.
SCOT: All right. Talk to you soon.
THEO: Take care.
SCOT: Bye.