Episode 18: "Interview with Laura Miller, Director of Ethics at Shadowing AI"
Episode description:
In this episode I dig into the controversies surrounding artificial intelligence tools and art with Laura Miller. Laura is the Director of Ethics at Shadowing AI, an AI-powered career development platform that focuses on helping everyone proactively practice and improve their soft skills, problem solving skills, communication skills and critical thinking skills. Laura and I also discuss the differences between Responsible AI and Ethical AI.
There I Ruined It: https://www.youtube.com/@ThereIRuinedIt
Shadowing AI: https://shadowing.ai/
Laura's photography: https://www.linkedin.com/in/lmiller-ethicist/details/featured/1635471709323/single-media-viewer/?profileId=ACoAAAgI_m0BiXGQ5sWTqY621F_xgvy2v2Xkuzk&lipi
Episode transcript:
SCOT: Hello everyone, welcome back to AI Quick Bits: Snackable Artificial Intelligence Content for Everyone. My name is Scot Pansing, and in this episode I’m speaking with Laura Miller, Director of Ethics at Shadowing AI, an AI-powered career development platform that focuses on helping everyone proactively practice and improve their soft skills, as well as problem solving, communication and critical thinking skills.
Laura and I had a wonderful conversation dissecting the various debates over art, or “not art” that is generated or assisted in its generation by artificial intelligence tools. We also get into the differences between Responsible AI and Ethical AI, and of course hear about her work at Shadowing AI.
But before I play the interview, I want to set it up with the fact that the art ecosystem is usually booming with controversy when new tools or technology are introduced. Debates over whether something constitutes art or not go way back before AI. But of course, now we are again living through one of those periods. Tools that can produce text, images, video, and music are flooding the technology space, and they are for the most part extremely cheap to use. And there are what seem like countless problematic issues that need to be resolved before everyone is happy with how credit and attribution are determined, and of course how everyone gets paid.
I have one example that demonstrates so many of these issues quickly. It's a music clip from musician Dustin Ballard, whose project "There I Ruined It" is either ruining music or doing something new and novel, depending on your opinion. He has so many clips to choose from, and they all use generative AI to essentially mash up content and likenesses from a wide range of artists. Here's the clip "Hank Williams Sings Straight Outta Compton."
SCOT: I mean, wow. This is incredibly creative derivative work, and it makes me feel joy as well as odd discomfort. It's like a dumpster fire on top of a train wreck of controversies in music, name-image-and-likeness, intellectual property, training AI models, parody, fair use, royalties, copyright, and who knows what else! And there's definitely loving work and detail going into this. I love the gunshots, whip cracks, and whistle censors, and the yodel on the word "Compton" around the 36-second mark is a sweet touch. Looking at the video description on YouTube, it appears that Dustin lays down the vocal track himself, actually sings it, and then uses generative AI to transform his voice into Hank Williams. As I said, there are many more examples if you'd like to go down the rabbit hole, and I will link the YouTube channel in the episode notes.
So hopefully that sets the stage here. Art and AI are colliding, right now, in a huge way. And hopefully you enjoy my conversation about it with Laura Miller, Director of Ethics at Shadowing AI.
SCOT: Laura, thank you so much for coming on my show today. I really appreciate your time.
LAURA: Oh, I'm so glad to be here. Thank you for having me.
SCOT: Yeah, we sort of, like many people in various industries and verticals, kind of met virtually on LinkedIn. And I noticed that you chat about a lot of topics that I find very interesting on LinkedIn. And one of them that I saw recently was about artificial intelligence in the art space, whether it be images or video or poetry or whatever. I think I'd like to sort of drill down into that with you because it's such a fascinating area.
I'd like to maybe keep it to the visual arts. I think maybe we can talk about poetry as well, but we'll see where it goes. Yeah, you had a nice post with a lot of discussion comments underneath. Do you want to maybe sort of give some initial thoughts?
LAURA: Sure. I've actually been posting a decent amount about the visual arts, and that surprises people, since if you look at my profile, I am an Ethicist, trained, degreed and all of those good things, and a philosopher. So people are often surprised when I weigh in, not just on AI or tech ethics, but on things like the fine arts, the visual arts, and the concerns with copyright and whether or not AI can do the things that an artist does. And what I sometimes mention in my posts, some of them have it, some of them don't, I'd have to check which one you saw, is that originally, back in a previous life, I was an artist. I still am. I still shoot photography on the regular. It will never not be riding shotgun with me.
So as an artist and someone who works in tech ethics, I have kind of a unique vantage point on that space, but also the training and the skills that go into what has heretofore been more of the traditional arts. I spent two, three years in university studying graphic design as well as fine art and photography. So oil painting and photography, and we used to call those the dirty or the messy arts. And an interesting conversation in the tech world right now is whether or not work derived from AI or created through the process of using AI, say through Midjourney, should be viewed in the same light that traditional art is.
Now, traditional artists learn their art by copying masterpieces. I've done it in school. So that's nothing unheard of. And yet part of the idea was that it required practice. And one of the things that AI absolves the person writing the prompt of is the need for practice. Now, you have to hone the prompt, but that's not the same thing as trying to visualize or bring to life a vision from your mind and physically create it. It seems like that's the task that the AI is doing. But then you have to wonder what the role of the prompter is. And it's caused quite a bit of controversy in that traditional art space. So what kind of questions did you have about that?
SCOT: Well, I think this is such a complicated area; it's not binary. It's not black and white at all, I don't think, because.. and I see the point and the drive of a lot of the different angles of this debate, but I would point to a couple of things to maybe spark some more conversation with you here.
You mentioned photography. When the camera and photography were invented and came on the scene, all of these realistic painters had a major problem with it, right? Like, oh, you're cheating. All of a sudden, these realistic images are being produced at the push of a button. So there's that.
And then I would also point to, for example, Marcel Duchamp and Dada, and the fact that he took, like, a urinal.. an object that he did not design, he did not create, he had nothing to do with. And this is considered a prominent, seminal example of Dada. It was called "Fountain," right? And at the time, it was considered not art. It was rejected from the show that he submitted it to. And then he spent a certain portion of his career on things called Readymades, and he really challenged the idea of what is art.
And again, semantics can get in here. Is it art? Is it not art? But, I mean, I think we would agree Marcel Duchamp has gone down in art history. He is in art history textbooks. It's debatable, I guess, but I think most people would consider him an artist and “Fountain,” the urinal, a piece of art.
So I do agree that there are certainly some issues to be worked through with regards to intellectual property and what was used to train the models, et cetera. Again, I just don't think it's black and white. And I think there are always these sorts of controversies happening in art when new tools are introduced.
LAURA: Especially when new tools are introduced. One of the interesting things about "Fountain" and even Malevich's "Black Square", and one of the things that is difficult with AI because it's simply too young yet, and that might be one of the greatest challenges, is that it's new. Because one of the things that art seems to have is a history or a legacy. It's part of an evolution of expressing thoughts and ideas all the way from the Lascaux cave paintings to Malevich's "Black Square" to AI and Midjourney, as well as many others.
But one of the interesting things: Malevich's "Black Square" got absolutely roasted. For those who don't know, it's literally a black square painted on a white piece of paper. That's it. It was hugely derided, and also heralded, by purveyors of the arts, art experts, and art critics. But if you looked at it historically and you understood the kind of artwork that had come before the "Black Square," which often had a lot of detail and nuance to it, it was almost like human beings were trying, in a very odd way, to bring us back to the capabilities of the camera, where it was so realistic that it was almost stunning. Not necessarily everyone, but that was certainly a large part of the history of art. And Malevich's "Black Square" was kind of thumbing your nose at everyone. It was reduced to just this most basic formative shape. I would hesitate to put words in Malevich's mouth, but in the context of art history, it wasn't necessarily intended, in my mind, to be this stellar piece of art as much as it was meant to be a statement or a referendum on the kind of art that had come before. So some of this historical context is relevant.
And we've had generative AI.. ChatGPT came out in November of 2022, and here we are in September of 2023. Some of these things, I think, will gain more respect with time. Even fine art photographers like myself call ourselves fine art photographers. Why? Because, well, my photography doesn't look like the average snapshot of somebody taking a photograph at a birthday party. It just doesn't. So there are degrees.
Some people say as long as you put a tool in your hand and you make something, you're an artist. But that brings you back to: is a child finger painting an artist? At what point do you cross a threshold? And usually you cross a threshold when you break some unique barrier by doing so. And what is unique about AI is that the barrier that was broken is almost that you don't need the artist. And I don't know, who's to say what that barrier is?
SCOT: Yes, I have a little bit of an issue with, like, who gets to say what the barrier is? I mean, again, back to those snapshots of a birthday party. Those could be on display in a museum, right?
LAURA: Sometimes they are.
SCOT: In a certain context. That's right. I don't like the idea that there's, like, a velvet rope around what is considered art. That it has to pass a certain criteria necessarily, because everyone has a different feeling. Also, what I would say is what's interesting is that we were talking about tools. And, like, with Photoshop, there are tools that are these days, just part of the basic platform of Photoshop. Like, there are tools like the rubber stamp tool or the magic wand, which, like, 20 years ago, again, people might have had an attitude like, oh, that's cheating. I spent hours doing something like that. And now you just click a button.
LAURA: And you still have artists.. actually, when it comes to my photography, I'm one of them.. other than cropping, I don't mess with my photography, and again, it goes back probably to that rule breaking that I mentioned before. For me, so much of photography has been all about how you can kind of soup it up, over-stylize it, distort it, and do all sorts of things, to the point that you almost don't need to be able to make the shot anymore, because you can take a rough of it and eventually make it the shot. But I was taught by very traditional artists. One of them came from Pratt, and one of them was like, if you want that photograph, you should have taken it. So I grew up kind of hard-knocks art school, but I think it's served me well.
But I do think we will get to the point that the art we're seeing now produced by AI will get the respect it deserves. Now, I do think it is challenging to figure out: is it the algorithm that did it, or the person who made the prompt? Because the algorithm can't do anything without the prompt. But you also wouldn't have artwork without the algorithm. You'd have a prompt.
SCOT: That's right. I think it gets complicated with what people are putting in. I know it's very popular now to see these sort of fake movie trailers or whatnot, short videos that people make: they will prompt Midjourney to make an image, and then they'll put that image into Runway and have it animated, and then splice them all together. However, there are also people who will take an original creation of their own, like a pen-and-ink drawing or some sort of physical still image, so that is actually theirs. They didn't prompt Midjourney to make it, but then they say, hey, Runway, animate it. And so the animation may be generative AI, but the input itself was human created. So definitely there are so many blurred lines here.
LAURA: We might get to a point as a society, at least in the interim, where it comes from the masses. Imagine them saying, I don't mind if you use Midjourney. I don't mind if you use Runway. I don't mind what apps you use, but tell me which part of it is yours.
SCOT: Oh, for sure.
LAURA: And we're starting to see that even with some AI generated text content where some people are starting to put disclaimers. And I've even had conversations about whether or not this is a good idea to actually say, look, we were assisted with this, AI assisted us with this, or this is all us. Because I think that there is an innate curiosity to know exactly how much we're being assisted by it. But also, I think there's still some skepticism. We've seen some generative AI go a little bit rogue and share information that was inaccurate, struggle to do math and some basic things. So also, some people are kind of deflecting responsibility by going, hey, AI helped with this. So if you notice that there's an error later, well, we'll just post an update and throw AI under the bus.
It's going to be interesting for a while, but I think people just forget what it's like to have new technology and what an adventure it is to get it up and going and have it be the help to us that we want it to be. And I think this moment is a very powerful lesson in that across the board, which is, wow, this is what it's like when we get something completely new and there are no rules and no guidelines and no best advice and we got no training and there are no specialists to call in, like from Photoshop. Photoshop used to have experts come in and give you classes on Photoshop. We're not getting people from ChatGPT coming to offices going, okay, so this is how you do it, guys. So it's kind of interesting to see us do it in this way. And there's a lot of ground to cover.
SCOT: Yeah, I think you're right. It's very wide open. There's not a lot of handholding with it.
Like you mentioned, I think disclosure and transparency are really huge. And honestly, I just see that as a natural extension of the placards in the museum that tell me what materials went into the sculpture. Is it oil on canvas? Is it watercolor? Is this polyurethane? Melted plastic? All of those things I want to know. Also, yeah, at what point in the production chain was AI used? Or if they don't get that specific, at least list the tools: Midjourney, Runway, whatever, Blender, I don't know.
LAURA: Yeah, like you mentioned oil painting on canvas: Midjourney on canvas. It could work that way. And that was one of the comments on one of my posts, that maybe AI is a medium and we list it like we list charcoal, oil painting, photography, et cetera. Midjourney is just another, along with whatever other generative AI program, software, or app you've used. But I think we're quickly approaching the one-year mark from ChatGPT's big burst, from hey, we have something cool in beta, to everybody saying, I want to buy it now. And it'll be interesting to see where we are at that one-year mark. It really is only another two months from now. It's still going to be a whole new world. It has been every month since.
SCOT: Just to cap off this art topic, I think another thread I'd like to explore with you is sort of this: like you mentioned, people see something in their mind's eye. They have a creative vision. And I think there are certain people in the world who are fortunate enough to have access to tools, to have the time to take away from a vocation that they're just using to feed their family, to train on tools or to learn an artistic craft.
And then there are people in this world who probably have wonderful visions in their head who do not have access to these tools or the time to train. This is just another side of it that I've been exploring recently: the cost of these tools now. You can use Midjourney on your phone. You can just have Discord on your phone and have Midjourney for, whatever, a month, very cheap, and you can just type prompts with your phone. And so there may be, I think..
LAURA: As we look at that, though, we've got to be careful that we don't devalue the traditional arts or the people who are spending years honing their craft. And I think that's one of the objections that artists have, which is, you know what? It was blood, sweat and tears to get me here. The starving artist trope isn't so much a trope as it is a reality. And there is value in being able to do it, quote unquote, old school, too.
SCOT: There is. I just worry that there's a bit of a velvet rope around.. like, why deny a poor person's ability to bring their imagined piece of whatever they're thinking about to life in a visual medium through these tools?
LAURA: I'll give you an interesting concept from my art instructor, who was formerly head of the graduate department at Parsons, and what she had said is that art is in your mind. It'll escape through whatever vehicle you give it to escape. You don't need to be wealthy to be an artist. Few artists are wealthy. All you need is basically fingernails and something to claw at, and it will come out of you. I think the people who are still enjoying Midjourney three years from now are people who have that itch and know how to claw. They might have found a new medium to do it, but they'll have done it for that purpose.
Just like you can still recognize, say, a graphic designer's work in PowerPoint versus somebody who kind of threw one together on their own, there is still a distinct difference and a distinct quality. And I think we can recognize that quality without necessarily.. I mean, let's face it, technology isn't cheap either. Oil paints are expensive, but so is a new cell phone and laptop to even access the online tools. I think it's going to be an interesting debate that goes on for a long time. And it is challenging because, as you mentioned, and as even I've mentioned, graphic design has come up. Desktop publishing came along way back in the day; if you remember Steve Jobs and the Mac when they first came out, they were used for typesetting. There's a lot to this.
What I would like to see us get to is a place where all artists, regardless of what medium they used, are respected for the art they created. Even people using Midjourney recognize the difference between somebody who's really figured out how it works and somebody who's clearly still trying to figure out how it works. Just because you know how to prompt doesn't mean that you get magic at the end. But some people do. And I think that has to count. If I put my faith in my traditional schooling in the arts and the trust that I have in ethics and AI and mash them all together, I'm still looking for the people who are willing to scratch and dig for it and continue to make it better and better. And the rest might still be artists. They might be more hobbyists that are also artists, but the ones who are going to do it as a full-time job, those are going to be the ones who are going to have to work for it, whether they're working on prompts or whether they're working with a paintbrush. But I think this debate is going to rage.
SCOT: Yeah. Oh, certainly it's hot and it's getting hotter, but I think you're right. I mean, there is definitely Midjourney imagery that just looks like someone did not spend much time. It just looks like Midjourney imagery. And then there are images that I go, wow, I can't believe someone worked pretty hard to get that.. how did you get that out of Midjourney? That's really impressive. So I agree. Yeah, it's a wild ride.
LAURA: It is.
SCOT: Okay.
SCOT: I think we've gotten into art quite a bit. Oh, I know, the other part of art.. I don't want to wrap up just yet, because I think there's also the part of this that you started to touch on, the commercial aspect of art, like with graphic design. So what I think is also interesting, and that is another larger topic around artificial intelligence, is labor disruption.
And so there are certainly.. I think, to take an example like food photography, Midjourney is really pretty good. If you type something like "bowl of granola surrounded by fruit on a wooden table with natural sunlight coming in," the imagery it comes back with is, I would say, beyond acceptable for a certain genre or, what do I want to say, a certain level of graphic design, for maybe small and medium-sized businesses putting together, like, a menu or a brochure for a restaurant. It might not be good enough for a Fortune 500 company. It may be. But there are certainly photographers or graphic designers that either would be disrupted, or maybe their job is just that much easier, or they can concentrate on other things. So I think that's another realm of this that is super interesting.
LAURA: Oh, there are lots of interesting things, even for building websites or for getting images. One of the things that I think is cool is it makes art accessible to people, whether art is their primary driver or whether they just need images and artwork to promote other things. It makes it so much more accessible and attainable.
Now, if you want to make a video, you don't have to hire a videographer and actors and sign nondisclosures or, actually, press releases so that you can use their image, and do those kinds of things. You can use AI for that. One of the challenges, though, is whether or not the people whose faces and likenesses are in the data set actually gave their permission for their faces and such to be used in the data set, and then to be used for promotion. But it does make art more accessible than it's probably ever been.
SCOT: I posted to LinkedIn yesterday, actually.. there's a great video on YouTube by this group called Corridor Digital, who basically do all kinds of visual effects, a lot of educational videos about visual effects. And they do their own, and they have a video about whether AI can replace actors, and they go into it pretty deep.
They go into how, okay, it's already been the case well before AI, where in a battle scene like Lord of the Rings or something, they're already using CGI to create thousands of soldiers attacking each other.
LAURA: Well, and there was an artist strike.
SCOT: Or background crowd. Or crowds. But now it's at a point where we are definitely at the level of atmosphere actors who are literally right next to top or main performers, like in a restaurant, that they no longer need extras for that. NPCs, like in a video game. They will be indistinguishable. They will not look like NPCs from a video game. They will be reacting to the main actors. They will be all of the things. And that's a whole other level. And of course, deepfakes and everything.
Hollywood strikes are not just about the size of the writers' room. One of the major sticking points is AI. And it's still going on to this day, and it's very topical here in LA, where I'm located.
I would like to go on to another topic you recently wrote on LinkedIn, and I would love to hear you expand on this: “Responsible AI speaks to developing to avoid risk, and Ethical AI speaks to technology that carefully considers the fragile and damaged human world and its people.” So would you mind elaborating a bit more on the difference between Responsible AI and ethical AI?
LAURA: Sure. I'd also like to give myself credit for drama, but look, Responsible AI is generally associated with NIST, or the NIST framework, which is born out of cybersecurity and speaks to how to develop AI that's responsible, which means that it's got a clean data set. It performs according to the data set. It's unable to be hacked or messed with, so that when it gets launched into the world, in an ideal world, it's a trustworthy or responsible product. So it has a lot to do with development.
But when you launch an AI product into the world, there are users that interact with it, a society that adopts it or rejects it, and an impact that is felt outside of the algorithm itself. So we've put a lot of emphasis on Responsible AI, Responsible AI, Responsible AI. But if that Responsible AI is based on NIST only (and I do support NIST), it doesn't actually, in my mind, meet the threshold of Ethical AI. Why? Because how you develop it can't account for what happens when the real world interacts with it, and it interacts with the world, and the users by extension.
And if you're starting to look at impacts on users and society and whether something is good in society or not good in society, now you're talking ethics. But how it's built has little to do with that. So the NIST framework by itself is part of Ethical AI, but it doesn't make it Ethical AI in and of itself because it leaves too much out.
So I actually developed an Ethical AI framework that has NIST alongside a rights-based framework, which protects users' rights and takes users into consideration. And the third one is a relational framework, where we start to talk about our relationship with AI and its quality as a knower. So whether or not we should trust the information it's shared, and what kind of relationship we should have with technology in the event that we, well, shouldn't trust the knowledge that it shares, or at least have to question the quality of the knowledge that it shares.
But if you look at AI and you evaluate it for the quality of the knowledge that it shares, its impact on users and society and how it's built, well, now you're getting closer to Ethical AI.
SCOT: And just for listeners.. NIST meaning National Institute of Standards and Technology, correct?
LAURA: Yes. And it is the standard safety framework that is most often quoted. As a matter of fact, even when there are different frameworks, NIST positioned its most recent framework so that there's a whole lot of overlap. There might be nuance between how one group defines safety and how another one does, but the basic tenets of the NIST framework are by and large the development framework that everyone is most familiar with. And when they talk about other ones, there's still tremendous overlap with NIST. So NIST is just kind of what I use as a benchmark for developing AI. That's how we develop AI so that the AI itself, as it's developed, ends up being responsible. But again, that's only one part of being ethical.
SCOT: Got it. Thank you for that deep dive. I appreciate it.
And then I'd love to hear about Shadowing AI, and I'd love to hear specifically about what ethical considerations you deal with at Shadowing AI, what product features you may have been involved in shaping from an ethical standpoint. Could you tell me a little bit about Shadowing AI?
LAURA: Shadowing AI is a fantastic project led by Jwalant Patel, a former Googler and developer extraordinaire. I have tremendous respect for Jwalant. His work has just been absolutely amazing.
And what Shadowing AI has been working to do is create an AI that can help you learn and train yourself on all sorts of things. Whether it's learning your company's HR policy or preparing for interviews, it's AI riding shotgun to help you better yourself and work towards becoming the kind of employee, the kind of person, that you want to be. That doesn't necessarily mean it is solely ethical training, because it's not; it's practical. It's, I want to get a job at Amazon, what kind of questions am I likely to encounter? And there's specific training for that.
When you know who you're going to interview with, you can practice your interviewing skills, and the AI will give you feedback, let you know how it thinks you did, and give you suggestions. But in addition, we also branched into K-12 education to try to help fill some of the education gap brought about both by COVID and by differences in access to quality education. So we've even partnered with a group in India to help bridge some of that knowledge gap in the K-12 space.
One of the things I love most about Shadowing AI is that it is largely ethical, not only as an organization and company, but as a product that's working to do an ethical task in the world. In other words, it would be good for people to have a more level playing field when it comes to access to education, and Shadowing AI is working towards that. It's hard not to be a champion of that when there's good in it. So I got to help write some of the courses, write some of the answers so that the algorithm knew what the right answers were. Some of them (God bless you who've taken them) have my video, me talking you through them, to lay out the foundations of the course at the beginning.
But I've also written children's policy for Shadowing AI, about how long children should be online. That policy rides shotgun with the research by UNICEF as its foundation. They threw down a challenge.. I'm calling it a challenge, but they put forth a document that suggested that if you're going to engage children with technology or AI, there are some very beneficial ways to do that. And when I wrote our children's policy, I took their guidance to heart.
So I've written policy, I've written training that the algorithm uses, I've written the answers, I've worked on the course development. The great thing about working for a startup is there's very little you don't get to do. So it's been an amazing adventure, a fantastic way to really see the inside of a startup and to watch it grow and learn and relearn and reiterate and go fast and break things, including yourself, and go back and do it again. I love the startup environment, I love the speed and the pace of it. But I've also worked in Fortune 100 companies, so I've seen the benefit of waterfall and slow-and-methodical, too. It's kind of interesting to have a foot in each and have some fun there. But I would strongly encourage anyone who gets the chance to explore Shadowing AI to do so. There should be something in there for everyone. And if you find that you're someone who doesn't think there's something in there for you, let me know and we'll make sure that there is, because the idea is it should help everyone.
SCOT: Yeah, I noticed also, I think when the consumer or the customer interacts with Shadowing AI, they record themselves, right? Responding to questions, or being interviewed. I would assume you were also involved in the policy of what happens to the recording and how it is used.
LAURA: Absolutely: privacy, recording, all of those good things. As a matter of fact, for some of our interviews, we allow for suggestions, and we preface them very carefully. We never say you are feeling something. What we say is, someone looking at you as you gave that answer might be concerned that you were uncomfortable, so you can think about your answer and see if that rings true to you or not. It at least puts the idea out there, while at the same time not claiming to know how someone feels, because no one can know how someone feels.
But when it comes to AI, you have to be very careful whenever you use video, and about how we store those videos, especially for K-12. All of those things are just so important. So to be involved in all of those conversations and that dialogue, and to be researching policy for that, has just been an incredible adventure. And right now we're on six of seven continents.
SCOT: Oh, wow.
LAURA: Hat tip to Shadowing AI and to Jwalant for the amazing things that he's created and for allowing me to be part of it and kind of in the inner sanctum of it, which has been a really great place to be.
SCOT: And you also founded your own company, NextGen Ethics, right? Just very recently. Want to tell me a little bit about that?
LAURA: I did, actually. Whether as a creative, or a professor working in AI, or through my evolution in ethics, all of my roles have always had an ethics bent to them. Even my photography is considered ethnography, but it specifically relates to how we in the United States interact with and treat people who struggle through houselessness or poverty, you know, working through how we treat marginalized people, even in a developing nation. So all of my roles have always had an ethics angle.
Ethics is just kind of who I am. I think I was born this way. But we got to a point with AI where technology has such a profound potential to really address a lot of the challenges that our world has faced. If you think about the Sustainable Development Goals from the United Nations, you're well on your way to thinking of the kinds of challenges that I'm talking about: education and access is one of them, the environment is one of them, even how we treat marginalized or under-recognized persons is in there.
Technology can really reach anyone. I mean, Shadowing AI is located here, but it's on six of seven continents. And I recognize the power of technology to reach the people that I've spent my whole life hoping to benefit, help, assist, or get out of the way of.
As I got more and more into AI and AI ethics, it seemed like we needed to really tackle ethics in our world. The things that we've neglected, the things on the Sustainable Development Goals list, all of those are ethical things. So I'm ready for the next generation of ethics, where we actually see it in our world doing good, making amazing things happen, and using technology to forward that as well.
So I named my company NextGen Ethics, and it's focused on innovation and purpose for our modern world, because we're all interconnected. Everything that happens happens, to a certain extent, to all of us. And we really need to get the ethics of this right. I'd like to have a voice in that, and I'd like to try to help make that happen. It's been a career goal since before my career even began. And even before then, I've always wanted to see the world we could be instead of the world that we somehow defaulted to. So NextGen Ethics is me taking a stand and owning my responsibility for helping to bring that about.
SCOT: Well, I can certainly get behind and support those goals, and I really look forward to seeing the future of NextGen Ethics. It sounds really exciting.
And Laura Miller, Director of Ethics at Shadowing AI and founder of NextGen Ethics, thank you so much for speaking with me. I really, really appreciate your time. Thank you for coming on the show.
LAURA: Thank you so much for having me. It's been great. I look forward to another one.
SCOT: All right, talk to you soon.
LAURA: Talk to you soon. Bye.