LEIF: All right, well, thanks for getting my slides up; appreciate it. Hey everybody, good morning; how's it going?
I'm Leif Nelson. I am the executive director of Learning Technology Solutions here at OIT. Let me just get this out of the way: I am not an AI expert. I don't do research using neural nets or things like that; I have kind of a conceptual understanding of them. I'm an educational technologist.
Why am I here? My research background is in the alignment problem in terms of big data and education, and I taught a cyber ethics course for several years at a previous institution, the University of Wisconsin-Green Bay. So I'm looking at AI, and especially the recent and emergent trends with generative AI models, through that kind of lens.
So hopefully that's pertinent to you all. Hopefully it's interesting; hopefully it's not too remedial. I'm sure you all know a lot of these concepts and topics already, but I also hope to provide some thought-provoking ideas and talking points for you.
So I'm going to go over 13 things to think about as they pertain to ethics in artificial intelligence. The first one, which I'm sure you've all heard of, is the black box problem, right? The black box refers to the fact that when we're dealing with really complicated models or massive, massive data sets, it can be difficult to pinpoint why the model produced a given output. For example, if I'm using something like GPT-4, which reportedly has on the order of a trillion parameters, an engineer or a researcher or an end user might not be able to determine why it said this exact word in this exact context. There's really no way to reverse-engineer and decrypt that, right? So that's the challenge of the black box problem: it's hard to understand precisely why certain outputs are produced.
So that leads to this other challenge: what if an AI model is producing content that has bias, right? You've all heard about the racial bias in facial recognition technologies from a few years ago; notably, IBM actually mothballed and rolled back their facial recognition investments because of racial bias. They were like, look, we don't even want to touch this; it's too much of a hot-button issue; it's too hard; we're just going to walk away from it, right?
So there's the bias of facial recognition technology not being able to accurately recognize faces across different skin tones and pigments. And then, a few years ago, in an earlier iteration of chatbots (you may have seen this), Microsoft just sort of naively said, hey, we've got this pretty sophisticated chatbot, let's just release it to the world, let's put it out on Twitter. What happened, unfortunately, was that within a matter of hours, this AI chatbot from Microsoft went from "hello world" to "Hitler was right," because it's a reflection of the content it was collecting and interpreting and then regurgitating from Twitter. And as we all know, Twitter is just this wholesome, family-friendly environment 100% of the time, right? Nothing offensive or dark or biased. But unfortunately it is, and so is much of the internet, and that's one of the concerns about these large language models in particular: that's the data they're being trained on. Yes, there are safeguards and weights and fine-tuning approaches to try to prevent that, but they won't catch it a hundred percent of the time.
I should also say that this is not new; there has been some big news about it in recent years. You may have heard of Safiya Noble's book Algorithms of Oppression, where she talks about search engine algorithms and social media and how they tend to represent, reproduce, and reinforce racial bias. Weapons of Math Destruction by Cathy O'Neil is also a really good read.
Do you have a question?
[Speaker caught up to current slide – switched from the title slide, "Ethics in AI: 13 Things to Think About," by Dr. Leif Nelson, Executive Director, Learning Technology Solutions at Boise State University, to 1. The Black Box Problem]
Sorry, I need to do both at the same time. That’s a really helpful comment. All
[Went to next slide – black safe image, next slide – 1,000,000,000,000 parameters in GPT-4, next slide – 2. Racial (and other) Bias, and next slide – random images of individuals’ faces.]
right, so let's go real quick: black box problem, a trillion parameters, racial bias, facial
[Went to next slide – showcasing Twitter posts between user gerry and TayTweets and then went to next slide – algorithms of oppression]
recognition, here's Tay saying "Hitler was right" on Twitter, Algorithms of Oppression; thank you for pointing that out. I can't really get a good angle on the screens here.
[Went to next slide – book titled “Weapons of math destruction” by Cathy O’Neil]
Okay, so where did we leave off? Weapons of Math Destruction by Cathy O'Neil talks about the AI industry being dominated by a certain demographic of individuals, which happens to look a lot like me, to be frank: white, male, middle-class, middle-aged, that kind of thing. That doesn't necessarily represent the diversity of our world and, therefore, can create some blind spots in terms of what they're building and developing. So she argues that, look, we need to not only have more ethicists working for these big companies that are developing these platforms and algorithms and models, but we should also include ethics in university curricula as people are getting trained in these topics.
[Went to next slide – 3. Environmental Impact]
All right, so now we’re caught up. The environmental impact is another concern as these AI models take a lot of computing power and a lot of storage, and that burns a lot of fuel and energy, right? So I mean, we’re getting better at it
[Went to next slide – “Image of two vertical cylinders in an industrial setting. They are emitting plumes of gas. A yellow tint indicating a form of smog is present”, transitioned to next slide revealing the wing of an airplane in a clouded sky.]
but there is this environmental impact. A recent headline is that the cloud as an industry, cloud computing, has surpassed the airline industry in terms of carbon emissions.
[Went to next slide – arsTechnica article: “New Chapter in the AI wars – Meta unveils a new large language model that can run on a single GPU [Updated] LLaMA-13B reportedly outperforms ChatGPT-like tech despite being 10x smaller.”]
We're getting better, right? I mean, I think that's sort of a call to action for a lot of people. Meta, you know, whatever you think about Facebook and Meta, they've actually developed a new language model that they say can run on a single computer, a single GPU, right? So we're making strides in terms of efficient performance with these models.
[Went to next slide – 4. De(re)skilling]
De-skilling or reskilling is another concern.
[Went to next slide – “Image depicting a highly automated warehouse scene. Robotic arms are efficiently handling packages, picking them up, and placing them onto a conveyor belt. Additionally, there are small robotic vehicles navigating through the space, transporting boxes to a designated area. At the front of this area, there is another automated robot actively engaged in processing or handling the boxes in some manner.”]
You know, we're talking about automation potentially replacing jobs. Again, thinking about the large language models that are emerging, how are these automating anything from, you know, jobs in the tech industry to journalism to media production, and so on? You might think it's just a fringe sort of thing that'll affect certain niche industries, but
[Went to next slide – Jobs Affected: 300,000,000 80% by Goldman Sachs CNN, etc.]
reputable sources like Goldman Sachs and CNN are saying no; this will affect 300 million jobs and touch 80% of all occupations. So it's a pretty major potential impact.
[Went to the next slide – 5. Algorithmic Mediocracy]
All right, I just made this one up, but algorithmic mediocrity is, I don't know if it's an ethical concern so much as just sort of a quality-of-life concern,
[Went to next slide- Systems of averages]
but ChatGPT, other large language models, and potentially other kinds of AI models have been described as systems of averages. Right, the way they function is that they take large sets of data, come up with what they think are statistically predictable outputs, and then generate those based on what might be considered an average, right? I mean, this is a reductionist view of it, but what do we end up
[Went to next slide]
with when we have this really average… this is just a stock image of a boy band. I was originally going to put Nickelback up here, but I didn't want to offend any actual Nickelback fans; you've suffered enough.
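To make that "system of averages" idea concrete, here is a minimal, hypothetical sketch; it is not how ChatGPT or any production model actually works, and the tiny corpus and the function name most_average_continuation are invented purely for illustration. A toy bigram model that always picks the statistically most common next word drifts toward the most typical phrasing in its data:

```python
# Illustrative sketch only (not any vendor's implementation): a toy bigram
# "language model" that always chooses the most frequent next word, so its
# output gravitates toward the most typical ("average") continuation.
from collections import Counter, defaultdict

# Made-up corpus for illustration.
corpus = "the band played a safe familiar song and the crowd sang a safe familiar song".split()

# Count how often each word follows each word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_average_continuation(word, length=6):
    """Greedily pick the statistically most common follower at each step."""
    out = [word]
    for _ in range(length):
        followers = next_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # always the most typical choice
    return " ".join(out)

print(most_average_continuation("the"))  # converges on the most common phrasing
```

Real large language models use neural networks over vastly larger contexts and usually sample rather than always taking the single most likely word, but the pull toward statistically typical output is the point of the "averages" framing.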
[Went to next slide – 6. Academic Dishonesty]
Academic dishonesty is a big concern, especially since November of last year with the release of GPT-3.5; academia is just panicking, K through 20. Everybody's saying, okay, students are going to plagiarize
[Went to next slide – student cheating off another student’s paper]
and they’re going to cheat, and they’re going to like just copy and paste stuff from ChatGPT, and it’s a valid concern.
[Went to next slide – Front cover titled China’s Examination Hell]
It's not a new one, though; if anybody wants to read the 2,000-year-old history of academic dishonesty, I recommend China's Examination Hell by Miyazaki. It's a great read. It talks about how this has always been an issue; it continues to be, and it probably will be forever. There are ways we can try to address it and curb it and offset it, but at the end of the day, I argue as an educator that having better assignment design and emphasizing the real goal and purpose of education, which is to learn and not just get grades, might help to shift students' mindsets on that. But it's an issue.
[Went to next slide – 7. Copyright Infringement]
Copyright infringement. Okay, so as these new technologies are released to the public, often what happens is that laws and regulations take some time to catch up, because the lawyers and legislators don't really understand how the technology works or what it means until they see some examples and precedents, and then they sort of catch up with the laws, right?
[Went to the next slide]
So derivative art is a big concern in the AI world. This is Marcel Duchamp's famous Mona Lisa with a mustache, which recently sold for millions of dollars, I think,
[Went to the next slide – Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content: a red arrow is pointing at the Getty Images watermark on a Stable Diffusion-generated image.]
but in the AI space, you have Stable Diffusion, DALL-E, things like that. Getty Images recently filed suit against the makers of Stable Diffusion. I wonder why they think it's using Getty Images content. It's like, it clearly says "Guinea Images," right? That's not the same thing.
So, I mean, what's happening is it's taking all of this data from the internet, and there are artists who are saying, "Look, if you type into DALL-E or whatever, make a painting in the style of me, it will do that. I never authorized that, and I never uploaded my content to the database it's using, so I should get credit for that, right?"
So there are all kinds of new questions and concerns, which leads to the next issue: a lack of regulation in general. Like I mentioned, legislation and laws are struggling to catch up with new innovations. The U.S. Copyright Office just launched a new artificial intelligence initiative this month, like, "Hey, we should probably figure out how we want to respond to this in terms of copyright law."
So what do we do about it? An AI researcher and scholar named Annette Zimmermann has this really great read in the Boston Review called "Stop Building Bad AI," where she…
[Forgot to switch to next slides – switched through 8. Lack of Regulation, Copyright Office Launches New Artificial Intelligence Initiative, and the Boston Review article on Stop Building Bad AI, then switched back to Copyright Office Launches New Artificial Intelligence Initiative.]
okay, I gotta train myself to like do this motion with both hands at the same time. Copyright Office launches new artificial intelligence initiative.
[Went to next slide – Boston review: Stop Building Bad AI by Annette Zimmermann]
Okay, Boston Review. Annette Zimmermann has this concept of non-implementation: what if we just didn't release this stuff? So often, the people who are building AI models think that just because we can, we ought to, whereas folks like Zimmermann are saying, you know, maybe we should hit pause,
[Went to next slide – Pause Giant AI Experiments An Open Letter]
and in fact, there's this new Future of Life Institute open letter that says, hey everybody, let's hit pause for six months and not continue to develop anything more advanced than GPT-4 until we have better regulations, or at least an understanding of the potential implications this could have on all the things I just mentioned.
[Went to next slide – Signatories: Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Connor Leahy, Jaan Tallinn, Evan Sharp, and Chris Larson.]
Look who signed this thing: Wozniak is on it; Elon Musk, of all people, signed this open letter to hit pause on AI development. So this is really interesting stuff. I don't know if you recognize some of these other names; Andrew Yang is there too.
[Went to next slide – 9. Misalignment]
So it all kind of gets back to this misalignment problem that I talked about, and alignment means: are we developing technology which aligns with our moral and ethical values as a culture, as a society, right? I mean, are we doing this to serve humans? Or are we doing it just because we can?
[Went to next slide – Image of a track connected on one metal rail and the other unconnected.]
Thank you. So, misalignment:
[Went to next slide – Image of a gavel and a business ethics book]
it gets back to like the question of ethics and laws and things like that, and as we know, laws
and ethics aren’t always the same thing, and sometimes we don’t have laws until we have, you know, worked examples that can help establish them.
[Went to next slide – Copyright Office Launches New Artificial Intelligence Initiative.]
Getting back to the whole Copyright Office thing: it's like, okay, ChatGPT is a thing, and people are using it, and nobody knows what it means in terms of copyright and trademark and so on, so let's try to figure that out.
[Went to next slide – Consequential: Egoism (What actions will serve my best long-term interests?), Utilitarianism (What actions will bring the greatest overall good for all stakeholders?); Non-consequential: Kantian Ethics (What is my duty to others, considering universal law, means versus ends, and goodwill?); Alternative: Aristotle’s Virtue Ethics (Am I behaving as an exemplary person?), Ethics of Care (Am I showing care for others in my community?)]
There's also the fact that we don't, as a species, have a single common ethical framework, right? There are a lot of different ways we could think about it: are you a consequentialist, or are you a virtue ethics person, right? I mean, there are a lot of different ethical models and frameworks, but there's no universal consensus on them, right?
[Went to next slide – 10. Ontological Assumptions]
So there are these ontological assumptions: what does it even mean to be a person or to have consciousness? When we use the phrase "artificial intelligence," do we mean that literally or colloquially? I mean, are we saying that our brains are actually computers that work on inputs and outputs and algorithms, or is that just a metaphor, right? There isn't a consensus on that question, either.
[Went to next slide – front cover of book titled What computers still can’t do by Hubert L. Dreyfus.]
A guy named Hubert Dreyfus was a philosopher at MIT around the same time as Marvin Minsky and others, back in the '60s and '70s, during the first surge of AI developments before the first AI winter of the late '70s.
[Went to next slide – front cover of book titled questioning technology by Andrew Feenberg.]
Andrew Feenberg writes some really good critical perspectives about the nature of consciousness, whether or not, you know, humans and machines are similar, and whether or not there's something special and unique about the human mind.
[Went to next slide – image of Jaron Lanier and a quote: "We have to say consciousness is a real thing and there is a mystical interiority to people that's different from other stuff, because if we don't say people are special, how can we make a society or make technologies that serve people?"]
A recent quote from Jaron Lanier, if you've ever heard of him. He was one of the pioneers of virtual reality back in the 1980s and '90s and is still a pretty popular author now; he writes compelling books about getting rid of your social media accounts and things like that.
He has a really great quote in The Guardian from a couple of days ago where he talks about consciousness as this mystical interiority, right? What phenomenologists might call the irreducible complexity of consciousness: something you can't just model or simulate with a program; there's something kind of special about being a human, right?
So what Lanier says is, if we don’t say that people are special, how can we make a society… okay, I misquoted that, but you get the gist. How can we make a society or make technologies that serve people?
Malicious intent; obviously, Mark talked about this a lot. There are bad actors out there who want to exploit your data and your information. If powerful interests have access to massive data models, they can use them to generate really grammatically well-written emails to try to phish you, or they can use them to scrape lots of data from lots of different sources to try to manipulate people, gather data, and things like that.
So, as Mark mentioned, it's nation-states, political entities; people can do this for financial gain or for ideological gain; they could try to influence election results. Has that ever happened before? They could just do it for fun. There's this whole hacker movement where, you know, folks are doing it just because they can, right? They want to prove they can, or they do it for entertainment value, which I believe is called the "lulz" in certain channels.
Unintentional harm. Okay, so there's malicious intent, and then there is "I didn't realize that people would become addicted to this platform and it would cause all these mental health issues," or "I didn't realize that this would exacerbate inequity gaps among different populations." Those are consequences that I think tech companies and others should sometimes be held responsible and accountable for, and typically they're not; typically they say, "Look, it's just a platform; people use it however they want; it's not my fault for building it."
Shoshana Zuboff talks a lot about this in what she calls surveillance capitalism, if you've ever heard that term: how big data companies and government entities sometimes partner to capture a lot of data from people, whether it's cameras with facial recognition that monitor people in the streets or your online activity, and they capture that data in order to exploit it. Surveillance capitalism is what she calls it.
Facebook is a prime example. You remember the Cambridge Analytica scandal from a few years ago, where they were collecting user data without anybody's knowledge or consent. They were actually doing mood manipulation studies without people's knowledge and consent; they have been accused of contributing to the genocide in Myanmar; they've been accused of influencing the 2016 U.S. presidential election. And Zuckerberg says, "Hey, it's just a platform; I'm not responsible."
There’s also the existential threat which a guy named Nick Bostrom talks about. The…
[Skipped slides – current-slide: Superintelligence by Nick Bostrom.]
pending future superintelligence… all right, check out Nick Bostrom's book, where he talks about superintelligence and basically says artificial general intelligence may not be something that happens in the next five years, but maybe in the next 50 or 100 years, and we ought to start thinking about it now. The whole idea is that if AI models become self-aware and have the capacity to generate their own programs, those might be used in ways that are, again, misaligned with what's beneficial for humans. That could be a risk we ought to confront and address sooner rather than later.
[Went to next slide – Whiplash]
Meanwhile, all this is happening really fast, right? We’re getting this kind of whiplash.
[Went to next slide – Quick Recap on AI Chat News Cycle: November 2022: ChatGPT (GPT-3.5) released; December 2022: Sky falls in academia; January 2023: GPTZero (detection tool) built by a college student; February 2023: Microsoft releases beta of ChatGPT-based Bing Chat (which engages in creepy conversations with journalists and others, gets "lobotomized," and later gets partially "unlobotomized"); February 2023: Google releases LaMDA-based "Bard," which produces false info in a press demo, causing Google stock to drop by $100 billion; February 2023: Facebook/Meta announces LLM; March 2023: GPT-4 released (trillions of parameters, can interpret images); March 2023: Bard available in public beta; Soon after: ChatGPT integrated into MS Office, Bard integrated into Google Applications.]
So, in just the last four months, ChatGPT, based on GPT-3.5, was released and got a whole lot of news and buzz, which led Microsoft and Google to try to replicate and compete with that model to get the same level of attention and that kind of thing.
We've had people try to respond. GPTZero was built by a college student to try to, you know, identify AI-generated writing, to sort of get at that black box problem I talked about, right?
Let's see, Bard. You all read about Google Bard, which hallucinated some false information in a press demo and caused Google to lose roughly 100 billion dollars in market value. Holy cow.
Facebook/Meta announced their large language model, which I talked about, that can apparently run on a single desktop GPU.
And then just this month, GPT-4 was released, reportedly with trillions of parameters, and it can interpret images. Bard is in public beta right now, and in just a month or two, we're going to have GPT integrated into Microsoft Office and Bard integrated into Google applications. So it's coming, and you're all going to have access to this really soon, right?
There is a glimmer of hope. I don’t want this to all sound like doom and gloom.
Thank you.
[Went to next slide – no title]
People are talking at events like this; we're having discussions. Because it's so popular and in the media and everything, there are a lot of conversations
[Went to next slide – Group of individuals at discussing at a meeting table]
taking place about, hey, what do we want to do about this?
[Went back a slide]
There are some entities and organizations in the United States; the U.S. military recently released a declaration on responsible military use of AI. They're saying, we are not going to use this for malicious intent or social engineering; these are our boundaries for AI.
There are also other kinds of grassroots organizations, like the Algorithmic Justice League, scholars who act as watchdogs on AI developments.
We’ve got the Association for Computing Machinery’s Code of Ethics, which hopefully folks are familiar with, and then in our own institution, and in higher education, we have things like Institutional Review Boards.
Across the pond, as they say, over in Europe, there are things like the Responsible Research and Innovation framework. There's also the Future of Humanity Institute; the Superintelligence author I mentioned earlier is one of the directors of that organization.
Then there’s the European Group on Ethics in Science and New Technologies, and of course, the GDPR laws that are a little bit more forward-thinking than U.S. laws about data privacy and things like that.
[Went to next slide – Image of front cover titled In The Age Of The Smart Machine by Shoshana Zuboff.]
There's also hope. Zuboff, who is most well known for coining the phrase "surveillance capitalism," was actually really well known back in the '80s for another seminal book called In the Age of the Smart Machine, where she had a much more optimistic view of what she called complementarity,
[Went to next slide – Image of kid interacting with small humanoid robot.]
where humans and machines might work together in ways that are complementary and aligned with human goals and values that serve the betterment of society. So hopefully that's how we're all using this technology, and that's how the industry is going to shape up as well.
Those are 13 things that are on my mind; hopefully, it’s given you some things to think about as well, and I’m open to questions if you have any.
Okay, so Zuboff has this idea of surveillance capitalism where, whether it's a government or a corporation or both of them working together, they will use big data models for exploitation, for extracting value and resources from the users. And that can take the form of literal surveillance, in terms of facial recognition, or scraping your web activity and things on social media and so on. [Inaudible]
If you’re talking about just developments in general or… I would say that is more of a long-term or speculative kind of threat. I don’t think that is any sort of immediate concern, but you know, it’s something to consider. I think right now, the bigger concern is how people might use it for malicious intent. You know, so it’s still people behind the keyboards that are the bigger threat in my mind.
Yeah, all right, thanks, everybody.
[Boise State logo at end of video]