What are the ethical concerns with AI?
Summary
With continued advancements in Artificial Intelligence, what are some of the biggest ethical concerns around AI when it comes to content creation, the entertainment industry, misinformation and its use in the military?
Get answers to some of the most asked questions surrounding the ethics of artificial intelligence.
Guest: Matt Katz, faculty member in the Department of Philosophy, Anthropology and Religion, at Central Michigan University
Transcript
Chapters
- 00:00 Introduction
- 00:39 What are the main types of artificial intelligence?
- 04:50 What does AI have to do with the Hollywood strike?
- 06:31 Is AI going to replace actors or writers?
- 08:55 How is AI being used in the military and misinformation?
- 12:51 How can you tell if something was made with AI?
- 14:36 Is the government regulating AI?
- 17:26 What are some resources to learn about the ethics of AI?
Introduction
Adam: With continued advancements in artificial intelligence, what are some of the biggest ethical concerns around AI when it comes to the creation of content, misinformation, and its uses in Hollywood? Welcome to The Search Bar. You've got questions. Let's find some answers. Bypass Google and sidle up to The Search Bar instead as Central Michigan University's amazing team of experts answer some of the Internet's most asked questions. I'm your host, Adam Sparkes, and on today's episode, we're searching for answers on the ethics of artificial intelligence. Matt Katz, faculty member in the Department of Philosophy, Anthropology, and Religion at Central Michigan University, is here to help us do just that.
All right, Matt, it's really good to meet you, and I'm glad you're here to talk to me about artificial intelligence, 'cause I have way more questions than answers. Although sometimes I think I have answers, I'm not sure that I actually do. Maybe the most helpful thing would be to put a box around it. We were just talking, before we started recording, about what AI actually is. I think there are some differing opinions on what constitutes artificial intelligence, and there's probably a gradation of it. But I'm just wondering how you see it.
What are the main types of artificial intelligence?
Matt: So, I mean, the term gets used for all kinds of stuff now. The way I'm thinking about it lately is in terms of machine learning first and then generative AI second. A machine learning system is one that learns how to categorize certain kinds of things, right? Suppose you set up a camera in front of your fish tank and take some video of it, and in each frame you tell the system: that's a fish, that's a fish, that's a fish. It learns on its own to identify when there's a fish in the frame, so later on it can do it by itself. It learns, okay, there's a fish in this frame, and a fish in this frame, and there's not in these other frames.
But it's only learning to categorize things, right, and find them later on its own, as opposed to generative AI, which will generate its own responses. So with ChatGPT, you can ask it questions and it will generate a response for you. You can say, give me a one-paragraph sales pitch for this new hair product in the style of Donald Trump, and it will do that, right? So it's generating on its own, as opposed to just learning to categorize one thing after another and doing that one thing. That's how I would think of that. Artificial general intelligence would be an AI system that can do all the various things that a human being can do cognitively. So, you know, we have all kinds of systems; like we just said, we could train a system to identify fish.
We can train ChatGPT to answer questions about anything you want. We can train systems to drive cars, or play chess, or play Go, or whatever, but they each do sort of one thing really, really well. There's no system that does all of the things that a human being can do. That would be general intelligence, right? Humans have general intelligence, and an AI with general intelligence would be AGI. And then superintelligence would be a system that can do all that and perhaps much more: it can think much faster, it can understand much more.
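To make Matt's fish-tank example concrete, here is a toy sketch of that supervised-learning loop in Python with scikit-learn. The two frame features (brightness, elongation) and all of the numbers are invented stand-ins for real video frames; this illustrates the idea of learning from labeled examples, not how a production vision system works.

```python
# Toy sketch of the "fish / no fish" classifier Matt describes.
# Real systems learn from pixels; here two made-up frame features
# (hypothetical stand-ins) keep the example self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labeled "frames": each row is [brightness, elongation]; label 1 = fish.
fish    = rng.normal(loc=[0.3, 0.8], scale=0.1, size=(100, 2))
no_fish = rng.normal(loc=[0.7, 0.2], scale=0.1, size=(100, 2))
X = np.vstack([fish, no_fish])
y = np.array([1] * 100 + [0] * 100)

# "That's a fish, that's a fish": fit on the labeled frames.
model = LogisticRegression().fit(X, y)

# Later, on its own, the system categorizes a frame it has never seen.
new_frame = [[0.32, 0.75]]
print(model.predict(new_frame))        # -> [1], i.e. "fish"
print(model.predict_proba(new_frame))  # confidence for each class
```

Once trained, the model does exactly one thing, deciding fish or no fish, which is Matt's point about narrow systems versus general intelligence.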
Adam: Becomes more proficient than we are. At least for me, when I play with ChatGPT for work or recreation, it's very useful, but I start to think of it a little more like a relationship with something than I do with, say, an Adobe product. I have to keep in mind that the more I use it, and the more it gets updated, by me as the user and by the folks pulling the strings on it, the more it's going to continue to evolve. So I'm thinking about it more like a relationship.
Matt: It definitely feels when you're using it as though you're talking to a human being, or it can feel that way at times, right? And then, you know, I've played with it where I was trying to teach it some logic, and it was failing miserably and starting to repeat itself in various ways. Then it feels like this isn't like interacting with a person at all. But I do suspect you're right. As it gets better and better, it will feel more and more like a person, and we might feel much more as though we're interacting with something that has, I don't know, maybe self-awareness or personhood or something like that.
Adam: Or if it feels like it has empathy, then you start to trust it more. And I think that's the thing that always makes me nervous. Right now, ChatGPT, and I just keep using it as the example, is really impressive. But like you said, with logic or empathy or something, there's still a kind of uncanny valley moment: it'll be doing really well, and then you go, eh, that's hysterical.
What does AI have to do with the Hollywood strike?
Adam: To try to drive the conversation into where those ethics are: right now it's the Hollywood strikes, right? The big headline catcher is, you know, don't scan us and use our digital likeness, or don't start writing screenplays in the style of, you know, Christopher Nolan or whoever. Since everyone watches movies and TV shows and we're all paying attention, this is at least a good place to start asking: do we want this to improve at all costs or not? Right?
Matt: Yeah. I mean, I think there are lots of worries about what it could be used for and what it will be used for. The strikes in Hollywood are, among other things, about this issue with AI: is it going to be used to replace us as human writers or as human actors? And there are similar worries in other venues too. I've read pieces about lawyers using these systems to write briefs, which didn't go well, of course, or politicians using them to write emails. So, are we going to be using them in a way that eliminates jobs for people? That's a real concern. Like we've been saying, it's a powerful tool, it seems useful, and the genie's out of the bottle; it's not going away. But look, I can easily imagine a small business owner who, I don't know, runs a restaurant or does landscaping during the day, and at night is doing the accounting and email and stuff like that, with no employees. Or maybe it's me and my spouse, right? I could easily see somebody like that making their life easier with this kind of tool. That's a big difference from using it to write screenplays and then not having to hire screenwriters anymore. Right.
Is AI going to replace actors or writers?
Adam: With writers and actors in Hollywood both striking at the same time, and AI being at least one of the more attention-grabbing issues in there, I think people are getting more and more aware of the idea that AI could be taking jobs from folks, or being used to, I don't know, auto-generate creative endeavors that we've otherwise enjoyed and been able to attribute to somebody, right? Am I going to start having an emotional connection that isn't about how much I love Tom Hanks anymore, but about somebody who seems a lot like Tom Hanks but isn't?
Matt: Oh, that's interesting. I hadn't thought of it that way. Yeah, I guess the primary concern is people's livelihoods, right? These are systems that can be incredibly useful and powerful, and I think they can make a lot of people's lives easier, but they can also be used to cut labor costs for the purpose of increasing profits, and that doesn't look so great to me. I think that's a chief concern, particularly going forward, as the systems get better and better. We talked about how they're imperfect, they make mistakes. I don't know that they're ready to be writing, you know, primetime scripts with no human input, but they might be in not too many years. And I think that's a real worry of the people in the writers' and actors' guilds and so on.
Adam: Right. And when it comes to the writers and unnamed extras and things, we're talking about effectively working-class people in Hollywood too. I think sometimes we can feel cynical, 'cause, you know, I talked about Tom Hanks, but Tom Hanks could probably not work for the next 20 years and he'd be fine. People like him are probably last in line to be replaced by these technologies, right? You'd imagine it's the background folks, the people with one line, and the writers you never see who go first.
Matt: Exactly. Exactly. And people who, like you say, are working for a living, right? They can't stop working for 20 years, or even six months or a year. So I think you're exactly right. It's easy to think of it as a problem for Tom Hanks and Salma Hayek, but it's really a problem for the tens of thousands or hundreds of thousands of people who make a living in the industry.
How is AI being used in the military and misinformation?
Adam: What are some of your other concerns, as far as the more immediate stuff? We've started to see it, like you were talking about, in the medical field, the justice system, hiring. How do you feel about it in terms of the election cycle, or military use of AI, or surveillance states? Because we're all kind of slowly living in a bigger surveillance state every year, right? Like it's a little bit of a thing.
Matt: Yeah. Well…
Adam: Self-surveillance, in a way. We're kind of subscribing to it.
Matt: Yeah. I mean, you know, it does seem like sometimes I just mention something to someone in my family, no devices around, like, I'm thinking about a new bicycle, and then I open up my phone and there are ads for bicycles. Wait, I didn't type anything in; my phone is clearly listening to me, right? Okay. So a couple of things come to mind. Well, several: we talked about perpetuating inequity, we talked about job losses, and then there's military use, and misinformation and disinformation. For military use, I think the real concern is having systems make decisions that a human being ought to make. From what I've read, the US military at least says it's committed to always having a human being in the chain of command and always making the decisions, not allowing autonomous weapons to make decisions on their own.
Of course, they're still getting information from those systems, so you want to make sure the information's accurate. For me, right now, using AI to create information about things that never happened in reality is a real worry. Deepfake is the term; I don't know if it's necessarily the term we want to use. Who was the senator, I'm blanking on his name, from Connecticut, Blumenthal maybe? He opened a hearing on AI with some audio of his voice speaking. It wasn't him speaking and he didn't write it, but it was a really good facsimile. It sounded like him; it was words he would have used. And this is pretty concerning. We already saw, in the 2016 election, actors all over the place trying to sow discord with misinformation and disinformation.
And if we get to a point where systems that can create video and audio and meld them together are readily available, and we can have faked audio of Biden saying the 2020 election was stolen, that's a real worry, 'cause it's really hard to tell the difference. There are systems being developed to determine whether something was made by AI or not, but they're imperfect as well. So I think we're in a position where we might see media come out and be divided, as a country, on whether it's real or not. We can't, as a society, come to an agreement on what is fact and what is not. Depending on what sort of media ecosystem you're living in, you have some set of beliefs about what's fact and what's not, and it seems very, very difficult to provide any evidence that will turn people away from what they already believe. So creating evidence that's even more visually or auditorily convincing is a concern.
How can you tell if something was made with AI?
Adam: Take the hypothetical, right? Hypothetically, YouTube can automatically put a watermark on something that says AI-generated. Let's say that happens, and it's really effective, and we both go, that's really effective, and everyone in the computer science industry goes, "Trust us, it's really effective." I feel like the week after that happened, there would be millions of people going, conspiracy! They're only doing it because the stuff I was watching turns out to all be AI-generated videos, or whatever. I don't know that there's a battle to be won there.
Matt: Yeah, I tend to agree. The real worry is that even watermarking won't solve the problem, right? Because on the one hand, just as you say, plenty of people won't believe it. The other concern I have is, suppose YouTube says, okay, we're going to watermark everything. What about other actors around the globe who make no such assurance? Right now, people believe who they believe. So YouTube says, yeah, this is watermarked; somebody else says, no, it's not. And OpenAI and others are creating systems to detect what was AI-created and what wasn't, but they're imperfect, and it just opens up an arms race with people who want to create AI that's undetectable. Here's some system for creating video and audio that's not watermarked; here's some system for determining that it was AI; okay, let's improve it so it's uncatchable. So yeah, there are all sorts of concerns. Now you have me really worried that this is an unsolvable problem.
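One published line of work on the detection problem Matt describes, for generated text at least, is statistical watermarking: the generator secretly biases each word choice toward a keyed "green list," and anyone holding the key can later count how many green words appear. The sketch below is a toy illustration of that idea (in the spirit of Kirchenbauer et al., 2023), not OpenAI's or YouTube's actual system; the key, the word-pairing scheme, and the 0.75 threshold are all invented for the example.

```python
# Toy sketch of statistical text watermarking. Not any vendor's real
# system: the key, pairing scheme, and threshold are illustrative.
import hashlib

KEY = "secret-key"  # shared by generator and detector (hypothetical)

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark ~half the vocabulary 'green', keyed on
    the previous word, so the pattern is invisible without the key."""
    digest = hashlib.sha256(f"{KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Detector: fraction of word pairs that land on the green list."""
    words = text.lower().split()
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Ordinary text lands near 0.5; a watermarked generator that prefers
# green words at each step pushes the fraction well above that.
sample = "the quick brown fox jumps over the lazy dog"
score = green_fraction(sample)
print(f"green fraction: {score:.2f}",
      "-> likely watermarked" if score > 0.75 else "-> no evidence")
```

The arms-race worry in the conversation maps directly onto this scheme: paraphrasing the output or leaking the key degrades the signal, which is part of why detection stays imperfect.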
Is the government regulating AI?
Adam: I think the scary part to me, and tell me how you feel about this, I'll say it as a statement, is that getting a better handle on some of these things probably isn't going to come from an act of Congress. It seems like what we need is for the bureaucratic institutions to do a better job. For example, the Department of Transportation and the highway safety boards should probably be dealing specifically with the AI in cars, the stuff that determines when a car phantom-brakes, or stops so the Tesla doesn't hit the truck. But there's public resistance right now to putting a lot of money into bureaucracy, and that's a political fight that also seems to need to pass Congress. So it makes me a little scared. To say this politely, I don't trust a lot of folks who don't seem to have a firm grasp on even the most basic technology we interface with to pass any laws. And it's low-hanging fruit to criticize spending any money on putting permanent experts into the bureaucracies that might help us define these things. The whole thing feels like a circular problem, right?
Matt: I guess my hope is that there are enough people working in and around Congress that they can help lawmakers learn enough and understand enough to pass meaningful legislation that will help. But exactly what such legislation would look like is a whole other question. I think you're exactly right, though: "putting money into bureaucracy," to put it politely, would be highly unlikely at this stage, right?
Adam: Yeah. But it feels like we should.
Matt: It feels like we should; I don't disagree with you. One thing I've read is that the EU is ahead of us on this, and that whatever legislation they pass will be a model for the rest of the world. I'm not up on what they're passing yet, but if that's true, great. I'd be shocked if Congress doesn't pass something, or if the EU doesn't pass something and Congress doesn't model some legislation on what the EU does. And then the next question is, okay, what did we do? Will it be helpful for all of the various problems we've talked about? I know the White House has a blueprint for rights and responsibilities with AI, the Blueprint for an AI Bill of Rights, I think they're calling it. It looks like a good starting document; it mentions the things it ought to mention. But implementing those desires and directives is a whole other story, and getting it passed by Congress is a whole other story.
What are some resources to learn about the ethics of AI?
Adam: Are there authors, are there websites, are there resources on the ethics and philosophy of AI that you would put forward for someone who wants to get a little taste in their mouth, kind of swirl the cabernet, if you will?
Matt: So, I'll tell you the book that I use. I teach a philosophy of artificial intelligence class, an honors class, and I use a book entitled Ethics of AI. It's a collection of papers from a conference that was put on a few years ago, and it has all kinds of great stuff in it on different topics, like autonomous vehicles and autonomous weapons systems. How should we imagine a world in which AI does all the work and nobody makes a living? Do we want to address that now, before it's too late? That's one author advocating for universal basic income, right, to protect against catastrophic results. It also has chapters on what happens if AI someday becomes conscious or sentient: would it thereby have rights and responsibilities like an adult human being, and how should we treat it? And it has chapters on current thinking about how we ensure that the AI we're working with now doesn't affect us negatively, how we align it with our own values and principles, things like that. It's got a whole range of topics in it. It's really a good book.
Adam: Do you have any online places you go?
Matt: Honestly, there are some good resources on YouTube explaining the basics of machine learning. I watched a video of a kid, like a high school kid, programming a machine in about half an hour to recognize handwritten digits.
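For anyone who wants to try that digit-recognition exercise themselves, here is a minimal sketch in Python using scikit-learn's bundled 8x8 digits dataset. This is one plausible way to do it, not necessarily the approach from the video Matt saw:

```python
# Minimal handwritten-digit recognizer using scikit-learn's built-in
# 8x8 digits dataset (a small MNIST-like set that ships with sklearn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 images of digits 0-9, 8x8 pixels each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Fit a simple classifier on the training images...
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# ...then check how often it labels unseen digits correctly.
print(f"test accuracy: {model.score(X_test, y_test):.2%}")  # roughly 96%
```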
Adam: Well, hey, I really appreciate the conversation. I know we went long, and whatever folks end up listening to will be a shorter version. It was edifying and super interesting.
Matt: Really fun, man. Yeah, thank you. I appreciate it. It was great.
Adam: Thanks for stopping by The Search Bar. Don't forget to like, subscribe or follow so that you don't have to search for the next episode.