In this episode, School of International Service (SIS) professor William Akoto joins Big World to discuss the intersections of artificial intelligence and international affairs.
Akoto, a member of the SIS Department of Foreign Policy and Global Security and the director of research at the Center for Security, Innovation, and New Technology, begins our conversation by explaining what is currently known about how governments are using AI and emerging technology (1:58). Akoto then examines the potential usefulness and dangers of AI’s rapid global proliferation (4:41).
How can governments make sure AI is kept in the right hands and out of the wrong hands? (9:54). What role is AI playing in interstate cyber conflict, and how will it impact future challenges in cybersecurity? (13:35). Akoto answers these questions and considers the potential impacts of integrating AI models into government systems when they have proven to be imperfect (15:36). To conclude this episode, Akoto provides us with a few insights about the conversations he is currently having with his students about emerging tech (19:57).
0:07 Madi Minges: From the School of International Service at American University in Washington, this is Big World, where we talk about something in the world that truly matters.
0:16 William Akoto: For the foreseeable future, AI is going to be a huge part of our lives, and so what I try to emphasize to my students is that AI and cyber technology and the advancements therein are not things to be feared, right? We can embrace them, and we can try to deploy them in a way that makes society better off.
0:43 MM: That was SIS Professor William Akoto. He joins us today to discuss the intersections of AI and national security. In recent years, we've witnessed the rapid proliferation of artificial intelligence. AI bots like ChatGPT and Google Gemini have become part of the daily lives of millions of people around the world. AI is being used to power self-driving vehicles, assist businesses with data analysis, and even for more everyday menial tasks like meal planning or writing emails. But what about governments? How are governments around the world leveraging AI for national security purposes, and what are the potential dangers associated with the rapid rise of this emerging tech?
1:27 MM: I'm Madi Minges, and I'm joined by William Akoto. William is a Professor in the Department of Foreign Policy and Global Security here at the School of International Service. He is also the Director of Research of the Center for Security, Innovation, and New Technology. William's research is focused on the dynamics of interstate cyber conflict, including how countries use emerging technology for national security purposes, which we will be discussing in depth today. William, thanks for joining Big World.
1:55 WA: I'm really happy to be here, thanks for having me.
1:58 MM: William, you are someone who studies how nations use emerging technology and how they operate in cyberspace, and I'm just curious, what do we know right now about how governments around the world are currently using AI?
2:13 WA: Well, they are doing a lot of things. I mean, AI hit all of us like a ton of bricks. I mean, we were all just chugging along nicely and then OpenAI unleashed ChatGPT, and they seemed to have opened this huge door to AI and all the wonderful things that can be done with it. And I mean, you and I know how we use AI, but governments are taking tremendous advantage of this technology as well, especially the authoritarian types.
2:46 WA: One big way in which governments are currently using AI is for mass surveillance. Countries that are especially prone to protests and anti-government dissent are using AI-infused facial recognition cameras and other smart systems to, say, scan crowds and identify protestors in real time. They've always been able to do that, but what AI has allowed governments to do is actually identify protestors and opposition activists in real time, and to do this at scale, because previously you needed a whole army of people to sift through the footage coming from the protest and look at the images, and it could take weeks to be able to actually identify who was protesting where and when. With AI, you can do that in real time, so it has really supercharged that for many governments.
3:56 WA: The other thing that governments are actually using AI for is to monitor and scan social media posts. Again, this is something that previously used to take a lot of man-hours; now you can do it relatively quickly with AI, and it allows them to actually identify who their opponents are and then to use that also to gather evidence, evidence that can then be used against these people when they eventually get dragged to court. So yeah, these are just two small ways that they are using AI systems currently.
4:41 MM: Yeah, hearing you say all of that, it seems scary that these are the ways it's currently being used. I know you mentioned that AI has been unleashed, and I think that's a good way of putting it. Given how fast AI has come into our lives and just this rapid proliferation we're seeing, it's hard to imagine what the next generation will look like. What new technology will emerge in the coming years? What dangers do you see with this rise of AI?
5:17 WA: Well, there's a lot that can go wrong, but first, there's always a lot of talk about how AI is dangerous and how cyber technologies are dangerous. I'm going to push back a little bit on that framing and say it's not the technology itself that is dangerous. There's nothing inherently dangerous about AI systems. They can be incredibly useful. They can help us solve climate change challenges. They can help plan education systems and transportation systems. They can help us design better healthcare systems and develop new medicines, and so they can be incredibly useful. The danger is in how they are used. You know the famous saying that it's not guns that kill people, it's people who kill people using guns? It is similar with AI, in the sense that AI in the wrong hands can be incredibly dangerous, just as AI in good hands can be incredibly useful.
6:29 WA: One of the dangers with AI, given the proliferation we're seeing and how governments are using it, is that it can be used as a tool for repression, especially in authoritarian or semi-authoritarian states. We spoke a little bit earlier about how various governments, including China, Saudi Arabia, Iran, Myanmar, Ethiopia, and Sudan, are using AI facial recognition software, for instance, to identify who anti-government protesters are. And on the side of the protesters, this is really troubling, because research has shown that people are much more willing to show up to protest when there are safeguards in place. They know they can protest peacefully without, say, being shot or being arrested later on.
7:45 WA: What AI does, by being able to essentially unmask opposition to the government, is allow the government to repress people much more easily and in a more targeted form. The danger is with that very precisely targeted repression, because now if you know who was protesting and who the opposition people are, then you don't have to terrorize a whole town. You can just target the specific people, and that kind of targeted repression oftentimes doesn't make it into the news. It's much harder to detect, and that can be problematic.
8:37 WA: The other issue is censorship. Say, for instance, a government is using AI and cyber technology to scan social media posts to identify who is posting anti-government information or who is putting out narratives that are counter to the framing that the government wants. If people know that everything they post could potentially land them in trouble, then that leads to self-censorship, right? People then don't speak freely, they censor themselves. The information sphere itself becomes a place of fear, and that is never good for democracy. Democracy thrives when people are able to speak freely and express their views openly. If, because of the use of AI and cyber technology, people can't do that, that is not good for democracy going forward. Those are some of the dangers that could result from the use of AI technologies in the wrong hands.
9:54 MM: Yeah, absolutely. I know you mentioned the importance of distinguishing between the effects of AI in the right hands versus the wrong hands. What do we know right now about what governments are doing to try to keep AI in the right hands? Is there work being done around this, or I guess in your opinion, what can we do to make sure that AI stays in the right hands?
10:21 WA: Yeah, it would be really convenient if it were very easy to tell what the right and the wrong hands are. That's part of the challenge: knowing what the right hands and the wrong hands are is always a huge challenge. Just as with a gun, you don't know whose hands are the right ones and whose are the wrong ones. A government that is fairly democratic today, let's say Switzerland or Germany, may have access to AI systems and be using them responsibly today. Then tomorrow, new people get elected, they get into power, the government still has all of these capabilities lying around, and they can use them the other way. And so it can be really hard to tell what the right hands and the wrong hands are.
11:26 WA: Currently, what many governments are doing, the US for instance, is trying to restrict the export of the components that you need to build these AI systems. For instance, America has export bans and controls in place in terms of who can access the advanced microchips that are necessary for building these AI systems. Famously, a lot of the US chip manufacturers, NVIDIA for instance, can't export their most advanced chips to China, because at least from the United States' point of view, China is a government that doesn't deploy AI technologies in a way that is helpful or useful to the global environment that America wants to create, and so it tries to restrict the sale of AI technology to these states.
12:42 WA: It also restricts sales to other states, mostly authoritarian and semi-authoritarian states like Russia, Syria, Iran, et cetera. That's one way in which we've tried to limit the proliferation of AI. But again, it can be hard, because even here in the US, as we have seen, things can turn on a dime. Whereas a few years ago America was this shining example of what a democratic nation could be, now we are not so sure. And so in that sense, things can change, and the right and wrong hands can change over time.
13:35 MM: William, I want to turn now to talking about one of your areas of expertise. One of your research areas includes the behaviors of nations in cyberspace, including how states deploy and utilize hackers. What role do you see AI playing in interstate cyber conflict, and how will it influence future challenges in cybersecurity?
14:03 WA: Yeah, one thing that AI does really, really well is increase the productivity of those who know how to use it. Even just with you and me, in our normal day-to-day work, if we learn how to deploy AI properly, it will increase our productivity and make our workload slightly easier. We can accomplish more in the few working hours that we have in the day. And it's the same for hackers as well. Hackers typically would sit around writing code, trying to find vulnerabilities that they can exploit. AI makes that easier. AI can be deployed to hunt for these vulnerabilities, and once it finds a vulnerability, AI can write code that can then exploit it. The net effect is that AI just turbocharges all the things that these hackers were already doing. It makes it easier to hunt for proprietary information and other trade secrets, and you can then deploy this AI covertly to siphon away that information. And so in that sense, AI just makes everything much, much easier for the hackers to achieve.
15:36 MM: I want to also talk about what I think has been a broader conversation surrounding ethics and AI, and I want to talk about a specific example. There have been some headlines recently about the US government, specifically the Department of Defense, being set to begin using Grok, X's AI chatbot. However, we've seen some controversies surrounding Grok, from spreading misinformation about white genocide in South Africa to making antisemitic comments in response to users' queries. I think this leads to a question: what do you see happening if or when this technology, which has proven to be imperfect at this point, gets deployed by US government institutions like the DOD?
16:28 WA: Yeah, right. The use of Grok by US government agencies is something that has raised concerns among many in academic and policy circles, largely for the reasons that you highlight: Grok just recently went crazy and started spewing antisemitic things and was praising Hitler, et cetera. That is really not an AI system that you want government and military systems based on or using. In X and Grok's defense, Elon Musk and the whole firm claim that they have tried to fix these issues. But it is a private company and its code is proprietary, so it's very hard for independent evaluators to assess these systems to see what vulnerabilities and dangers still lurk in them. But it looks like that's the direction the US military and US government want to go in, which is to integrate these AI systems into their systems and functions.
18:05 WA: There's really nothing wrong with them trying to integrate AI; it's just a question of the source of the AI and how reliable it is. AI, as we all know, comes with all these biases built in. And so in that sense, hopefully there are deep checks and balances in place on the DOD side to make sure that any AI system they deploy has been thoroughly checked and vetted, to make sure that it is generating data and information and making policy recommendations that are sound.
18:47 WA: It's not even just Grok. I think it was recently in the news that they had signed contracts with several other AI vendors as well, with Microsoft, with OpenAI. They plan to rely to some extent on all of these systems. And so again, hopefully there are strong checks and balances in place. So long as we can be assured, at least the people using them are assured, that these things are not providing inputs and outputs and recommendations that are biased or unsound, I think generally we should be okay. But it is worrying, and it doesn't inspire much confidence, to see what Grok was publicly saying just a few weeks ago, and the fact that these are the kinds of systems that will in the future be integrated into our government and military systems.
19:57 MM: William, last question. You are teaching the next generation of students who are interested in international affairs careers broadly. I'm curious about the conversations that you are having with your students about AI in the classroom and just how you are talking to them about it, and especially in the cybersecurity space. Can you tell us a little bit about that?
20:28 WA: Yeah, the fact is that AI is going to be a part of our lives going forward. It would be nice if we could wind back the clock and roll out AI slowly, in a way that regulators and policymakers can wrap their heads around what's happening and put in place safeguards to make sure that the AI that is coming out is serving the public interest, but we just don't have that luxury. We just have to roll with what we have now. And for the foreseeable future, AI is going to be a huge part of our lives. What I try to emphasize to my students is that AI and cyber technology and the advancements therein are not things to be feared. We can embrace them, and we can try to deploy them in a way that makes society better off.
21:37 WA: I think a large part of the conversations that we're having is all about how dangerous these technologies can be. I believe that with the right policies in place, and developed the right way with the right regulations and frameworks in place, AI could essentially elevate humanity to the next level. For my students, I try to emphasize the responsible use of these technologies. Of course, we highlight the dangers and how things could go wrong, because being very well aware of how things could go wrong is a very important part of learning how to get things right, and so we spend a lot of time on that as well. But overall, the message is that this is not something to be feared or rejected; it's something that we can embrace, and deployed the right way, it could be of tremendous use to us in our daily lives and in our efforts to try to create a better world.
22:58 MM: William Akoto, thank you for joining Big World to discuss how nations are using AI and emerging tech for national security purposes. It's been a pleasure to speak with you.
23:08 WA: Thank you for having me.
23:09 MM: Big World is a production of the School of International Service at American University. Our podcast is available on our website, on Apple Podcasts, Spotify, or wherever else you listen. If you like this episode, please leave us a rating or review. Our theme music is It Was Just Cold by Andrew Codeman. Until next time.