Episode 2: “I’ve Got Lots of Questions”
Listen to this episode:
Episode description:
In this episode I go over a wide range of controversial topics surrounding AI. I ask a ton of questions, and though they are left unanswered, I pack a considerable number of conversation starters into a brief overview.
Getty Images suing the makers of popular AI art tool for allegedly stealing photos
https://www.cnn.com/2023/01/17/tech/getty-images-stability-ai-lawsuit/index.html
Automated dialogue replacement (ADR) with AI
https://www.flawlessai.com/product
AI and the paperclip problem
https://cepr.org/voxeu/columns/ai-and-paperclip-problem
Book: Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe and Barry Smith
https://www.amazon.com/Machines-Will-Never-Rule-World/dp/1032309938
Episode script:
Alright, episode two! Let’s do this! My name is Scot and I’ll be sharing what I’m learning about generative artificial intelligence. I’m taking a deep plunge into these revolutionary new tools and trying to distill the practical information for others. I hope you find it helpful.
In this episode, I’m actually not going to go over a product or product category, but I really hope to get you thinking. There is no way I will be able to cover all of the topics that artificial intelligence raises about how it will affect the future. But I’m going to try to give a brief overview of the major themes, and I’m going to ask a lot of questions.
Here’s the deal. In addition to all of the incredible progress and work that lies ahead in improving artificial intelligence, there is another aspect of the field that is massive and will need a lot of attention. I’m talking about policy, regulation, ethics, all of the guardrails needed to make sure we have an overall positive benefit to humans and the planet. And people will need to feel safe, or at least feel that they are getting clear communications about AI that they understand.
So where do we start? Okay, there are a lot of rabbit holes here, and I could even begin with a question like, “What is intelligence?” – and there is a whole world you can explore that includes narrow intelligence, general intelligence, sequential models, and much more. But I’m not the guy for that. I’m going to ask questions about the effects of AI we will start to see on various aspects of our lives.
The first area I’ll touch on, since ChatGPT is currently all the rage and many people are impressed with its ability to write, is Education. Here are some questions:
What personalized learning experiences can AI provide for students based on their individual needs and abilities?
How might AI increase access to education for marginalized populations, and furthermore, how might it help to ensure equitable access to technological advances in education?
What are the ethical implications of using AI in education, such as privacy, bias, and accountability? Meaning, how will this affect things like fairness in grading, and potential discrimination in personalized learning experiences?
Could AI-powered education lead to a decrease in quality, with less human interaction and emotional connection in the learning process?
How will AI impact students' ability to think critically and creatively, and how must education change to ensure that these skills are still developed and valued?
Building from Education, there are general concerns about Accuracy and Ownership.
Who owns AI-generated content? Actually, most tools address this question and either claim ownership of the results or assign it to the user. But what about intellectual property concerns regarding the source material the AI is leveraging? Shouldn’t artists be paid if their works are being used to train the models? This quickly gets into what is a transformative work and what is not, a question that has been tackled before, for example with parodies. But even with established law, bad actors are always around. Even without AI, there are already problems with people taking someone else’s social media content, reposting it, and monetizing it for their own gain. A good example of ownership playing out right now is that Getty Images is suing Stability AI, the maker of the text-to-image generation tool Stable Diffusion, for allegedly stealing its images, or “scraping” them from the web, in order to train its models.
If some AI-generated content does not properly cite sources or follow established academic standards, how can we be confident that it is not plagiarizing?
What are the implications of relying on AI-generated content when there will always be some inaccuracies? How can it be verified and validated at scale? And what legal and ethical issues might arise from these inaccuracies?
And what about Ethics? Wow, there are a lot of questions here.
Here’s one many people are familiar with: How might AI perpetuate and amplify existing biases and discrimination in society, potentially yielding unfair treatment of certain groups?
What personal data is being and will be collected to train AI models? What are the implications for privacy, and what tools or regulations will be created to protect people, for example allowing them to opt-out?
Who is ultimately responsible in cases where AI causes harm, or makes unethical decisions?
Alright, let’s go for the big one: Disrupting and Replacing Human Labor. Look, automation and advancements in technology already have a robust history of changing the way we work. But we will eventually cross a new Rubicon, where these advancements will mean that machines and computers will be able to perform duties that are considered highly skilled and require a college education. It is a given that AI will displace jobs (as do geopolitics, recessions, pandemics, and many other phenomena). Job displacement is always a thing, but this time it may be a little different. Here are some more questions:
As jobs are displaced, how will existing workers retrain and develop new skills to remain employed?
I mentioned highly skilled work eventually being affected, but low-skilled jobs will be impacted first. How do we prevent AI from increasing wage and income inequality as highly skilled jobs are protected while low-skilled jobs are automated?
How will working conditions change, including the type of work available, the hours worked, and the level of job security?
Is universal basic income, a controversial concept where a guaranteed income provides a safety net and ensures that everyone has access to basic necessities, an inevitable outcome? Does this all lead to an eventual death of human labor?
Since most humans derive their sense of self-worth and their value to society from their work, how will the removal of labor affect their psychological well-being, sense of purpose, and identity?
What will people DO? Was WALL-E a documentary? Are we destined to be kept as pets that get shuffled from one activity to another, like we are all in some massive assisted living facility?
Okay, this is getting a little dark. I can lighten it up a bit, just a bit, by letting you know that even in the progression of events generally accepted in dystopian science fiction, we will first have a “centaur period.” A centaur is the Greek mythological creature with the head, arms, and torso of a human and the body and legs of a horse; here it means a period in which human labor is assisted by AI, not replaced by it. But this sci-fi prevailing wisdom does not see the centaur period lasting very long, maybe 15 to 20 years.
I’m going to give one example of this that is happening right now in the film industry. For years, there has been an entire process within filmmaking called automated dialogue replacement, or ADR. This is when dialogue for a scene is re-recorded, either by the original actor or a different actor, and inserted into the film to create a slightly different version. This could be the replacement of profanity for a different audience, or of an entire language to prepare the content for a different market.
There is a company called Flawless AI that can now perform this process with artificial intelligence. There is no need to hire an additional actor, because the existing dialogue provides plenty of source material to recreate the voice via voice synthesis. Profanity can be quickly changed to child-friendly words and phrases. Not only that, but the entire movie can quickly be made available in many languages. Again, no foreign-language voice actors are needed to record new dialogue. AI can use voice synthesis from the original actor to create a new audio track of them speaking in another language. And it’s created much, much faster than it takes an actor to re-record the content. Furthermore, AI and other techniques that pop culture has come to call “deep fakes” are applied to the video to change the motion of the actor’s mouth so it syncs with the foreign language. No more out-of-sync overdubbing. To see this in action, check out the link in this episode’s description.
Now, even though jobs will be lost here, there will still need to be a human or humans managing this process, probably the former ADR engineers who wisely embraced AI instead of fearing it. Think of this from the studio’s perspective: less money spent for a better end result. Think of how much better that content will play in all of the global markets, where the audience no longer has to deal with subtitles or awkward overdubbing; the actors will appear to be native speakers of the local language. This technological advancement will increase global box office receipts, television ratings, and the like.
Let’s get to the really dark stuff, shall we? The 2001: A Space Odyssey, Terminator, and Matrix kind of dark stuff: out-of-control AI that becomes harmful to humans.
There are some who believe this could happen by an innocent accident of faulty or incomplete programming, often referred to as “the paperclip problem.” This is a philosophical exercise in which we suppose that an advanced AI is tasked with making paper clips. Maybe the machine is located at a company that makes paper clips, and for whatever financial reason (a bad quarter, increased demand, whatever) the machine is told that making paper clips is the top priority. The problem comes in when the machine, following orders, keeps diverting resources to make paper clips, regardless of the consequences. It grabs any materials it can get its hands on that it can melt down to make paper clips. It figures out how to evade human attempts to turn it off, because being turned off would get in the way of making more paper clips. Eventually, humans themselves are in the way; they are taking up space needed for the paper clips! Hello, apocalypse.
The more traditional sci-fi version is that AI is at some point inserted into the military-industrial complex. Incidentally, there’s a Greek mythology connection here too: Zeus had Hephaestus create a giant bronze robot named Talos, which he gifted to one of his mortal sons, Minos, the king of Crete. The robot had one job: protect the island of Crete against invaders. By the way, I messed around with Midjourney for a while to try to make an interesting AI-generated depiction of Talos, and added it as an image for this episode if you care to take a look. In our modern myths, AI usually takes over the military systems, then decides to turn on humans. I realize I haven’t asked a question here. Okay: “Could this really happen?”
If I’m freaking anyone out right now, let me try to provide some relief. There is a book published very recently, in August 2022, called Why Machines Will Never Rule the World: Artificial Intelligence without Fear. The two authors have backgrounds in computer science, psychiatry, engineering, cell biology, thermodynamics, biomathematics, AI, linguistics, neurology, and philosophy. Their names are Mr. Smarty Pants and Mr. Smarty Pants. (Their real names and a link to the book are in the episode description.) They are not anti-artificial intelligence, quite the contrary, but they argue that it is mathematically impossible for an AI to take over the world. I personally find it comforting that there is debate among incredibly intelligent people about this, and that it is not unanimously agreed that we are doomed.
Alright, how’s everybody feeling right now? I may have asked a lot of questions, but there are so many more. Here’s what I’ll leave you with. The great technological advancements are the ones that end up having a far greater impact than their first intent. For example, electricity was harnessed at first to create light; these candles are a big pain! But look at what electricity has given us beyond light: refrigeration, guitar amplifiers, GLOBAL COMMUNICATION. Artificial intelligence is probably going to be one of these types of advancements. We don’t know where it will go, but it’s going to be one amazing ride. And I also must point out, a lot of people still like candles.
Thank you for listening, and if you like what you heard, please consider subscribing, leaving a review, or forwarding to a friend. Bye for now!