
Discourse
Welcome to Discourse with Wayne Unger—where we cut through the noise and make sense of the chaos. On this podcast, we take a deep dive into the pressing issues shaping our world in politics, law, technology, business, and more. No echo chambers. No corporate influence. Just thoughtful analysis and respectful civic dialogue. Because understanding different perspectives isn’t just important—it’s necessary.
AI Governance, US v. China, and Cyberwarfare with Kevin Frazier
Discourse: Diving into AI Regulation and National Security
In this episode of Discourse, host Wayne Unger, a law professor and former Silicon Valley professional, welcomes the podcast's first guest, Kevin Frazier from UT Austin. Kevin is an expert in artificial intelligence governance policy. They discuss the complexities of AI regulation, the federal government's $1.5 billion investment into AI, and the implications for national security, particularly in the Department of Defense and Homeland Security. The conversation also delves into the recent discussions around a failed AI moratorium and varying political perspectives on AI regulation. Additionally, key issues such as child safety in AI and the challenges of regulating rapidly evolving technology are analyzed. The episode concludes with insights into the legal and ethical considerations of AI in modern warfare and cybersecurity.
00:00 Introduction to Discourse
00:45 Special Guest Announcement
02:14 Breaking News: School Funding
04:58 Political Messaging and Leadership
12:33 Columbia University Settlement
22:21 Interview with AI Expert Kevin Frazier
28:28 AI Regulation and National Security
49:42 Conclusion and Final Thoughts
07.30.2025
[00:00:00] Welcome to Discourse, where we cut through the noise and make sense of the chaos. I'm your host, Wayne Unger. I'm a law professor and former Silicon Valley nerd, and I've spent years breaking down complex topics into digestible takeaways. And on this podcast, we'll take a deep dive into the pressing issues shaping our world in law, politics, technology, business, and more. No echo chambers, no corporate influence. Just thoughtful analysis and respectful civic dialogue, because understanding different perspectives isn't just important. It's necessary. Let's get started.
All right. Welcome back to Discourse. I'm your host, Wayne Unger, and we are recording today's episode on Friday, July 25th at 4:16 PM, and as always, things may have changed since. On today's episode, we actually have a special treat. We are actually going to have our first guest on this podcast. His name is Kevin Frazier. He's with UT Austin, and he is by far one of the leading experts. I think [00:01:00] he'll disagree with me on this.
Leading experts on artificial intelligence, and specifically artificial intelligence governance policy. So what I mean by that is, how do we regulate AI? What are the threats that we face from AI, but also what are the opportunities? So what is that appropriate level of regulation? We'll discuss with him the massive investment that the federal government has made into artificial intelligence via the One Big Beautiful Bill, approximately $1.5 billion of investment into artificial intelligence and its uses in general executive branch operations, but most specifically in the Department of Defense and the Department of Homeland Security. So stay tuned for that. We'll be back after this message.
Thank you again for making us one of the top news commentary podcasts in the United States. We are excited to announce that we are officially on [00:02:00] TikTok and Instagram, so be sure to follow us at Discourse Podcast on both platforms to catch our daily shorts.
All right. We begin with our usual headlines, so the unscripted part of today's show. Breaking news just about an hour ago: the White House will release $5.5 billion for schools after a surprise delay. This is money that was already appropriated to public schools around the country, and I don't think anyone was particularly spared from this. For some reason, the White House froze a significant portion of federal funding for public schools, and apparently, according to the headline, at least $5.5 billion will be released.
It's, it's intriguing on number one, we've, we've discussed appropriations in [00:03:00] general and how Congress has the authority to appropriate, aka spend, the general treasury funds of the United States. And it certainly is clear that Congress has already said public schools around the country get, you know, this much money.
And in many areas, including public schools, but also in, say, university research funding, science funding, grants to nonprofits around the world, foreign aid, you name it, the White House has taken a big old stop sign to a lot of the money that is going out of the US Treasury into our communities.
And the public school funding was especially intriguing, because this seems to be a bipartisan issue where the Trump administration faced growing bipartisan pressure, including from Republicans in the Senate, to release these funds, that it made no sense to [00:04:00] hold up these funds. And I'm talking K through 12 education in particular, right?
Not necessarily higher education at the university level. We know that the Trump administration is investigating, I think is the word I'll use there, or maybe targeting, particular universities, claiming things like civil rights violations. And I'm not gonna opine on whether universities have actually committed civil rights violations, but I will say that that's a separate bucket that we're talking about here.
The $5.5 billion is for K through 12 public schools around the country, and it certainly seems that, I mean, it certainly makes, in my opinion, little to no sense why you would freeze that money. But nonetheless, he did, and I am glad to see that the Trump administration has reversed course and will actually release the $5.5 billion, according to the New York Times.
The other thing I wanted to discuss [00:05:00] during this headline segment is not necessarily a headline, but what we're seeing circulating on social media. So we're seeing some messaging, in fact, I would say a lot of messaging, from the Republican Party on social media. And this is the general, like, GOP accounts, the general Republican Party accounts.
And this messaging is about how successful the big, beautiful bill is or has been, and how it has eliminated fraud, waste, and abuse. And I just wanna say: those claims may be true, except how can we tell? Because the bill was just signed into law about 20 days ago. So how do we know the effects of the bill in a matter of 20 days?
It's incredibly difficult to measure the success, or failure for that matter, of legislation in 20 days. They want to persuade [00:06:00] America and voters in general that what they're doing is successful.
I don't blame 'em for that, but I do highlight how it's potentially misleading. Now, at the same time, I will say, where is the messaging from the Democrats? So I have not been shy on this podcast and in other mediums that the Democrats just seem to be MIA. There are a couple who pop their heads up every now and then.
Elizabeth Warren, for example, is one who I've been paying attention to. AOC is another, and they'll come out with great speaking points, except they're the only ones speaking those points. The rest of the Democratic Party seems to be MIA, and I argue that there needs to be a change in leadership.
And that change needs to start with Senator Chuck Schumer, who is the minority leader in the United States Senate, and Hakeem Jeffries, who's the leader in the House. I'm hopeful. [00:07:00] I've met the guy. I am hopeful that he can kind of drive a new generation into the Democratic Party, but we haven't seen, in my opinion, much success from him yet.
And perhaps, perhaps we need to recognize that he needs to break free of the old way, the old guard's way of doing things, because we're just plainly in a different era. We've now had 10 years of Donald Trump and his disruption to American politics, and I think we have established a new norm when it comes to politics in the United States, given that Trump has been, love him or hate him, successful politically for 10 years or so.
I mean, he has won two terms, and we have to acknowledge that. You may disagree about whether he should be in office, whether he's fit for office, whether he's a [00:08:00] good president or a bad president. But nonetheless, I think the reasonable statement here is Donald Trump has been a successful politician, maybe not at governing, right?
He's been a successful politician over the last 10 years, and we should give credit where credit's due. All of that is to say, I think the Democratic Party has not yet learned, when they've had plenty of opportunity to do so because they've been slapped in the face multiple times now in multiple elections, that it needs to change and it needs to evolve to the new normal.
You may argue, as many Democrats do, that this should not be the new normal, and I agree with you to a certain extent. There are many things about the American political experience today that I say should not be the norm, but we do have to live in reality in a way and say, yes, the political [00:09:00] landscape has changed, how politics is done has changed, and if we don't change as a Democratic Party, then we get left behind.
I used to work at a company called Cisco Systems, and at the time the CEO and chairman was a guy named John Chambers. Really well respected and well liked in Silicon Valley, one of the longest-serving chief executives in big tech by the time he retired. And now he has retired and moved on to kind of his second career in investments.
Anywho, one of the things that John Chambers would frequently say is: Cisco succeeds because we can see around market transitions. I don't know how many times in his life he has probably said those two words, market transitions, and Cisco was incredibly skilled at seeing around corners, seeing those market transitions coming [00:10:00] and then getting in front of them.
Really, his point is that we have to change and adapt in order to be competitive and stay competitive. And so if I apply that to American politics for a second, the Democratic Party needs to see, I would argue, a transition in American politics that has already occurred and try to catch up to it.
Because what we have seen over the last six months in particular is that there is no cohesion amongst the Democratic Party, from a messaging standpoint or even from an action standpoint. The messaging is all over the place. There is no single message. Yet the Republicans are incredibly successful at messaging.
They're incredibly successful at it. All of that is to say, it's time for a change. I think I've been clear about that. And I think many of the millennial generation and even Gen Z, even [00:11:00] though Gen Z apparently skews towards the Republican Party and conservatism, I think many of the younger generations would agree with me here, which is it is time for a change in leadership.
And when I say change, I mean not just a change of persons, but also a generational change, that it's time for baby boomers to take a step back and let millennials lead. And then of course that leaves Gen X, the generation in between baby boomers and millennials. And you're like, well, what about us?
Which is kind of the common theme about Generation X, that they get skipped over. And certainly it is appropriate for Gen X to step up and lead as well, if they haven't already done so. But perhaps Gen X is gonna get skipped over once again, which is the common theme amongst that generation. And I'm not saying that that's right or that's good, or that's what [00:12:00] should happen.
I'm kind of just saying we can admit that there is a generational aspect to Gen X.
So the other headline that really came to my attention, and part of it is because I am in the industry. I am in higher education, I'm a full-time law professor, and I do this independent podcast kind of on the side, because of course, what millennial and Gen Zer doesn't have a side hustle?
But that's kind of a joke, but not really a joke. Put that aside. One of the headlines that we saw this week was Columbia's settlement with the Trump administration for something like $200 million. So let me just catch you up on that Columbia settlement.
So according to the BBC, Columbia University has agreed to pay $200 million to President Donald Trump's administration over accusations that it failed to protect [00:13:00] its Jewish students. The settlement, which will be paid to the federal government over the course of three years, was announced in a statement by the university and then confirmed by the president on social media.
Now, in exchange, again according to the BBC, the government has agreed to return some of the $400 million in federal grants it froze or terminated back in March. And Columbia was the first school to be targeted by the administration for its alleged failures to curb antisemitism amid last year's Israel-Gaza war protests on its New York City campus, and it had already agreed to a different set, or another set, of demands from the White House in March.
So here's the thing about being a lawyer that I try to impart on law students in particular, but also just people who may not have gone to law school. They could be potential clients, they could not be potential clients, but part of the consideration when it comes to the law is what makes the [00:14:00] most sense from all angles.
So yes, there may be a solid legal argument. However, the practical question is, well, what's it gonna cost? And so from Columbia's perspective, Columbia may weigh: if we agree to pay $200 million as a fine, we can call it a fine, then in return we're gonna get $400 million back in federal grants. Well, what's the trade-off there?
And does it make sense to just accept the settlement versus trying to litigate this out? Because it could cost us millions of dollars in litigation, in a climate in which we have a Supreme Court that's pretty deferential to the executive branch, and specifically Donald Trump. So it is perhaps a losing battle in Columbia's eyes if they were like, let's litigate this because we have a winning legal argument.
Now, all of that is to say, [00:15:00] the practical trade-offs could be top-of-mind considerations for the Columbia University administration. But from an idealistic standpoint, the question also becomes, well, what have you done now, Columbia? Columbia, you have acquiesced to Donald Trump's actions by accepting the settlement.
You have agreed to, and in a way implicitly endorsed, what President Trump and his administration have done over the last six months, which is freezing federal grants and forcing universities to comply with his demands. Now, there is, as a practical matter, a level of separation between universities and the state.
The state being either state governments and/or the federal government. And that level of separation? Well, number one, [00:16:00] states have a strong interest, of course, in funding education. They want to develop an educated populace; that makes sense. And that's why we see state schools, public schools. But from Columbia University's perspective, it's a private school.
And so that wall of separation between government and the university is perhaps even greater in the private school context. So in this context, we have a university that has essentially agreed, implicitly agreed, to allow federal government coercion of what it does. Now, you can argue that Columbia has essentially consented to this forceful hand of the federal government because Columbia does take federal money.
And because they take federal money, in many ways the university agrees to comply with federal standards. Those standards could be codified [00:17:00] in statutory law, in regulatory law, certainly in constitutional law. But if we zoom out for a second and look at the role of higher education, in the United States in particular but of course around the world,
I, I characterize higher education as this: it is the role and responsibility of higher education institutions to add knowledge to the world, and we do that in many ways. So we do that through teaching. We're adding knowledge to the world by sharing our knowledge, by teaching students, both undergrad, graduate, professional students, trade schools even, again, sharing knowledge.
We also do that through research. And research is a great component, a significant component. The research that we do in higher education, and this is true regardless of the discipline, again, is about adding knowledge to the world. So from a scientific perspective, which is [00:18:00] probably the most concrete example that I can give here, from a scientific perspective, if we run a scientific trial,
maybe a pharmaceutical trial, at the university, maybe we've invented or advanced a particular technology using federal funds and the research that happens at the university level, right? That, again, is adding knowledge to the world. A university can conduct a study. It releases the study and we say, this is what happens in psychology, or this is what happens in sociology, or this is, you know, the political theory on what is going on in the world.
All of that is still adding knowledge. We also do that in other mediums, or in other ways. So for example, I do this podcast. And yes, it is independent of my work with the university as a law professor, but nonetheless, I still subscribe to the notion that it is our responsibility as higher education officials to add [00:19:00] knowledge to the world.
So if I can share knowledge via a podcast in this medium, then I'm going to do that. We also have many academics write books, as an example, which again is sharing knowledge with the world. So, in order to do that, we may have to do it in such a way that the government may not agree with, and this is why we have various protections in place.
This is why we have the concept of academic freedom, where a faculty member, a researcher with an institution, can pursue a line of research that may be disruptive to society. It may pioneer something controversial, but we do it anyway, because our job is to add knowledge to the world.
Anyways, my point here is: there comes a point in which higher education has to understand and respect [00:20:00] the wall of separation and not allow too much government coercion into what it does and its basic role and responsibility in society in general, adding knowledge to the world. But by acquiescing here, by settling for $200 million with the Trump administration, is this a step too far over the line, insofar as now we have agreed to government coercion? Is this coercion, or is it just a simple, you know, everyday disagreement?
I compare Columbia University's actions with the Trump administration against Harvard University's actions against the Trump administration. In fact, the two universities have taken totally different approaches. So Columbia agreed to settle, yet Harvard is litigating kind of at full force here.
A lot of Harvard's claims are rooted in the First Amendment, in that the federal [00:21:00] government, and the Trump administration in particular, is retaliating against Harvard for its speech, both its speech as an institution as well as the speech of faculty at Harvard.
So both universities, Harvard and Columbia, have taken totally different approaches. And I'm intrigued to see where Harvard goes, because Harvard is not only represented by absolutely stellar attorneys, they're also represented by absolutely stellar attorneys who are clearly conservative. Some of the most well-known names on the conservative end of the spectrum have signed on to represent Harvard in this case. And so I'm intrigued to see where it goes. And of course, as somebody who is in higher education, I am biased, and I will admit that I, of course, want Harvard to [00:22:00] prevail here.
Because Harvard is trying to vindicate its rights, both under the First Amendment but also its rights really as a higher education institution, and affirm the role in society that higher education institutions have.
So we are joined today with a very special guest, uh, who I've known for several years now, uh, and who is an expert in all things artificial intelligence. Kevin, welcome to Discourse.
Oh, well thank you very much, Wayne. And I have to dispute the expert label. My, my surest sign is that if you've met an AI expert, that's how you know they aren't one.
Oh, fair enough. Okay, it's too complex. Well, so, you know, I realized this several years ago, way before I entered law school actually. Um, so maybe it was an eternity ago at this point. And I found myself sitting on a stage at an industry conference [00:23:00] representing the company that I worked for at the time.
And I remember looking to my left and then looking to my right, and I was by far the youngest person on the stage by probably 20 years. And the only person on the stage that did not have gray hair. Uh, and, and my thought in that moment was, how the heck did I get up here? Like when did I become the expert that gets to speak at an industry conference on a particular topic?
And I concluded at that moment that an expert, if we think about it, is somebody who knows a particular subject matter better, or more so, than the ordinary person. I mean, 'cause that has to be your baseline, right? So, so I would say, Kevin, you may reject the title, but you probably know more than the ordinary person.
No, well, right back at you. And I'd say the most impressive part of that story, Wayne, is you still don't have gray hair, so kudos to you.
This is true. And boy, do students wanna gimme gray hair sometimes. [00:24:00] No, I'm kidding, to my students who listen to my podcast. Anywho.
Kevin, I would love for you to begin with maybe an introduction.
What's your background? You join us from UT Austin, uh, affiliate of Lawfare, but, uh, introduce yourself, if you would, to our, our listeners.
Yeah, so the, the big old title, because, uh, the legal academy loves nothing more than big old titles, or perhaps, uh, sandwiches for faculty lunch, but that's another conversation.
Uh, I am the AI Innovation and Law Fellow at the University of Texas School of Law, and a senior editor at Lawfare and co-host of the Scaling Laws Podcast. But the shortest story of how the heck I got here and what the heck I focus on is a summary of two jobs I had early on in, uh, my career. So first after graduating from the good old University of Oregon, I went and worked for Governor Kate Brown as her body guy.
[00:25:00] For listeners who have seen Veep, yes, I was Governor Brown's Gary for a, a full year, and got to carry around talking points and Clif Bars and everything in between. And I also got to see how the sausage is made in state government. And I will tell you that state government is full of dedicated, very, uh, driven and public-service-oriented employees, and the sausage-making process is gnarly.
So I had that experience and I thought, okay, that was fascinating. What is the exact opposite? So I went and worked for Big Tech. I went and worked for Google and was a part of their legal investigations support team, which I think has been renamed. But that team was the team that received all of the requests for user data from law enforcement.
And boy, were my eyes opened. All of a sudden I saw, okay, here is the move fast and break things. Here are the billions of pieces of data, [00:26:00] if not trillions of pieces of data. And this is the exact opposite of what I was witnessing at state government. And so ever since then, I've thought of my role, or my sort of mission, I guess, as trying to translate all the good things, all the bad things that can happen in tech and emerging tech
to folks in government, and in particular state government and federal regulators. So that's the, the driving animus. And I get to do that every day in my dream job here in Austin, working as the AI Innovation and Law Fellow, helping folks in, uh, the legal academy and the legal community more broadly think about this really wicked problem, which is: how the heck do we regulate AI?
Right? And, and actually, maybe this is why we get along so well, uh, is we kind of have a similar background. I didn't do state government, to be fair, and I give you mad props for doing that, especially being a body person. Your experience in big tech has certainly informed [00:27:00] your, your research, your, your philosophy, kind of your worldview, if, uh, that's an appropriate descriptor, your worldview of artificial intelligence. And I know when I came out of Silicon Valley and then went to law school, part of the reason why I went to law school is because I learned just how much information, personal information, is shared about people on a daily basis. Not just shared, to be fair, bought and sold.
On a daily basis. And, uh, you know, a big data breach like Equifax happens and I think, well, where's the recourse? Where's the redress? And, and I'm using legal terms now, but I didn't at the time. And then we get a check in the mail for, like, my check from Equifax was $17. Oof. I know, big whopping $17. Three Starbucks right there.
Yeah. And, and, and you're like, well, this is a system that we didn't consent to being a part [00:28:00] of. I mean, you could argue implicit consent because you participate in society, but we never said yes to the credit bureaus, that they can collect our data. And at the same time, we can't regulate them out of existence, because if we regulate them out of existence,
the whole economy, the global economy, is based on credit. So how do we issue credit to borrowers? Anyway, so that's beside the point. But let's get to artificial intelligence. So you have worked on educating those in the academy, uh, those who are not in the academy. And, and so our last two episodes on this, on this podcast have focused on the big beautiful bill.
The Republicans' One Big Beautiful Bill. And uh, I detailed on the last two episodes, so if you haven't listened, listeners, please go back and listen to those, just how much spending was actually in those bills. So it wasn't just cuts, um, or [00:29:00] kind of indirect cuts, loosely, is what I said on those episodes, but there was tremendous spending in fact. And by my rough estimate, 'cause I read the bill cover to cover, I didn't do all of the math.
But I believe AI as a line item throughout the bill got something like $1.5 billion in appropriations, new appropriations. And some of that's going to the private sector in forms of research, but a lot of that is also spending internally. So bring us up to speed, because one thing that we heard during that negotiation, when the bill was in the House,
I think it was the House, or possibly the Senate, was this AI moratorium. And so they wanted to ban states from legislating around artificial intelligence. Now, that ultimately didn't make it in the final version of the bill that Donald Trump signed. What was it, number one, first question, and then number two, why did it fail?
Well, so I think I'll be able to describe this in [00:30:00] about two hours, uh, assuming that's, that's good. Yeah. So the quick way of providing an overview here would be to just tee up the, the broader AI regulatory discourse, which is right now being framed as a sort of AI safety perspective, and then what some people would call an accelerationist perspective.
And there's a whole lot of shades of gray within both of those camps, but for sake of time, I'm just going to, to keep those camps there. The AI safety community is focused on existing harms and, uh, very real and very possible, but speculative, future harms that may arise from AI. So when we talk about the existing harms, these are things like amplifying, uh, discrimination in a hiring context.
These are things like discriminating, uh, against tenants or potential applicants to a, a real estate community or real estate opportunity. And then there are more speculative, [00:31:00] long-term harms that folks will generally call catastrophic risks from AI. Think cyber attacks that are just of a scale and speed that we've never seen before.
Think bad actors, including non-state actors, being able to develop bioweapons and deploy them at a new speed and across borders in a way we, we haven't witnessed before. So that's the AI safety community, and they're saying, look, if Congress isn't going to regulate these very possible harms. And I'm gonna say possible.
I'm not gonna go into a whole, uh, you know, um, is it, is it plausible? Is it probable? I'm just gonna say possible right now. And if Congress isn't going to regulate these possible harms, then states have an obligation, going through their sort of police power almost, to say, we need to protect our citizens against bioweapons, against these cyber attacks.
We need to take these catastrophic risks very seriously. On the other side, the accelerationists, they're saying, whoa, if you [00:32:00] wanna talk about real risks, the real risk is that China or another competitor or another adversary is going to continue this AI progress and get ever more sophisticated. And you know what they're gonna do?
They're going to integrate it into their military, they're going to integrate it into their economy, and they're going to outperform the US on key domains and in key frontiers. So those are the two camps. And if you fall on the side of AI safety, then you're thinking, Hey look, we tried to nudge Congress to take this seriously.
We had the US, uh, Senate AI Insight Forums. We told senators how serious these risks were. What did they do? Absolutely nothing. Then we have, again, the accelerationist camp, which is saying, if we allow states to go forward with that patchwork approach, we're going to hinder the sort of innovation we need to keep pace with China.
So all this came to a head in the One Big Beautiful Bill debate, and the House said, look, we're going to go forward [00:33:00] with a 10-year moratorium on state AI regulation. Now, this was a horribly confusing, horribly written provision that included a lot of uncertainty, so people weren't exactly sure what was going to be precluded from enforcement in the states.
Would we still be able to enforce generally applicable laws that protect consumers, for example? Advocates would tell you, yes, you would be fine if you were a state, to continue to enforce those laws. Opponents to this, uh, moratorium were saying, I don't know. It's hard to read this language. It's very vague.
It's very uncertain. Commas aren't in the right spot. This is pretty problematic. Nevertheless, the momentum in the House led to that moratorium being passed in the House version of the bill. So we had a 10-year moratorium on state AI regulations being sold as still allowing states to generally enforce consumer protection bills that weren't [00:34:00] AI-specific.
But again, that was somewhat debated. So then we shift into the world of the Senate and we have Senator Cruz on one side saying, yes, let's move forward with this moratorium. Then you had folks, oddly enough, like Senator Blackburn. Now a key thing here is that Senator Blackburn is from Tennessee, and Tennessee is one of the states that enacted one of the first kind of AI specific bills.
The ELVIS Act. And the ELVIS Act, as you may pick up from this creative name, is all about protecting creators in an AI economy. And so Senator Blackburn, unsurprisingly, has constituents who are very AI-savvy and AI-aware, who wanted to make sure that this piece of legislation, as well as legislation at the state level that may be intended specifically to protect kids from the harms posed by AI, would still be able to come into effect and be enforced at the state level. [00:35:00]
So now we have this crazy debate in the Senate take place, where Senator Cruz and Senator Blackburn are going, uh, through a round of negotiations to say, how can we change the moratorium language we received from the House to be a sort of compromise measure that's going to meet the aims of both of these different stakeholders?
They got it down to a five-year moratorium with some additional carve-outs for language, uh, excuse me, for bills like the ELVIS Act. Yet there were some pivotal outside, uh, stakeholders who said, whoa, whoa, whoa, whoa, whoa, Senator Blackburn, we're not sure you're reading this language as most folks will.
So why don't we take a beat and, and analyze this more, uh, closely? And that additional analysis led Senator Blackburn's camp to conclude that this actually wasn't a deal they wanted to reach. They no longer wanted to go forward with this compromise measure. And that's when we saw the Senate vote [00:36:00] 99 to 1 to take that provision out of the One Big Beautiful Bill.
And, and who was the one vote?
It wasn't even Senator Cruz. Uh, you know, and I, I'm not sure who the lone senator was who, who held their hand up, but it, it wasn't Senator Cruz. And I think it was really meant as a way to say, look folks, this isn't happening in this bill. Uh, let's, let's move on.
So, so this is a fascinating issue, because not often do we have an issue, of course currently up for debate from a policy perspective, where you have Republicans splitting on opposite ends of the spectrum. As you just mentioned, Ted Cruz and, uh, Senator Blackburn. Then on the House side, you had Marjorie Taylor Greene come out and say she would not have voted in the affirmative if she had known, which is a whole nother problem, that this provision was in the bill.
So you have Republicans splitting, but you also have a sizable [00:37:00] population within the Republican Party siding with the Democrats on the issue. And it brings up the greater kind of role between the federal government and the state governments, uh, federalism, right? So, on the idea that the federal government can preempt state laws, but we're not talking about preemption on this.
That's, that's for my constitutional law students. And there's a new bill that I think Senator, uh, Blumenthal, Senator Blumenthal, proposed recently on that content creation aspect, specifically on training models. So whether you can use content, or, or works that are traditionally copyrightable, whether you can use that in training, and basically the bill would put some restrictions on kind of the training data that a [00:38:00] developer, uh, would, would use.
And, and I think it has bipartisan support. I, I would have to double-check my own fact there, but I think it has bipartisan support. Do you, I mean, have you followed that bill in particular?
I, I haven't followed that bill in particular, but what I'll say is that we are seeing a whole slew of strange bedfellows in the AI context get together in various ways on various issues. So for example, uh, as you noted earlier, uh, Marjorie Taylor Greene, Senator Blackburn, and a slew of Democrats, for example, are being very outspoken about the need to protect kids in particular from AI companions. And I will chalk this up to what I would refer to as a sort of social media hangover.
Which is: we got it so wrong on social media with respect to making sure there were safeguards, whether it's at school, at home, or [00:39:00] what have you, to protect the interests of kids, uh, to make sure they're not being exposed to gnarly content, to make sure that their data's being protected in the right ways.
We had such a bad hangover from that, that a lot of politicians are saying, we're not gonna let it happen again, so we are going to act on the offensive. Even if we get it wrong, even if we have a sort of false positive and go overboard on AI, let's say, we just wanna make sure that we don't mess up again in the same way we got it wrong on social media.
So we're seeing this really intentional effort to make sure, in particular, that there are child safety protections in the AI space. On the other hand, though, there are some, some odd dynamics going on, both at the state level and the federal level, that are saying, look, this China issue is so important to national security, and this innovation question is so important to the US economy, that we would rather have a [00:40:00] false negative and say, hey, we were overly permissive
in the AI context in terms of allowing for development and diffusion, because the risks of getting this wrong, the risks of becoming second to China, are too grave. So here's where you'll see folks, for example, like Governor Polis, a Democrat in Colorado, one of the states that initially passed one of the most comprehensive AI acts.
We're now seeing him lead an effort to try to water that bill down. He actually came out in support of a moratorium, generally, not necessarily endorsing specific language that was before the House or the Senate, but Governor Polis was saying, look, this is a national issue. This should be decided by Congress and not at the state level.
And so these odd bedfellows keep getting together. And Wayne, I know you'll, you'll appreciate this, in terms of, the fact of the matter is, it's just super hard to regulate in a space that [00:41:00] is evolving this quickly, and in a space where folks don't necessarily have a lot of technical understanding or the necessary data to determine, you know, what are the actual risks of this technology and what are the actual benefits.
And so it's kind of regulating, uh, in, in a really uncertain area. And unsurprisingly, it's leading strange partners to get together.
Uh, yes, I appreciate that comment, because I think I have been a strong advocate for saying we need more technical expertise with respect to the Hill, Congress, and also regulators, and in the courts for that matter.
When it comes to court decisions, uh, Justice Kagan, I think this last term, maybe it was two terms ago, right, has now famously said that we are not the nine best experts when it comes to all things internet. And that's true. Very, very true. Uh, and so I grow frustrated, but that's a whole nother conversation, about the lack of technical [00:42:00] expertise.
Well, I do have to point out, just today actually, there was breaking news. We're, we're talking in late July. Uh, Justice Kagan admitted at some talk that she was quite impressed with Claude's legal analysis. And this is, what, you know, a, a year and a half after Claude was already producing what I think most people would say is impressive legal analysis. But now the, the Supreme Court got word that these tools can indeed be impressive.
Well, so I'll tell you about a, a research project, I'll tell you about it offline, because I don't wanna disclose what the research project is to other scholars, lest they preempt me on it. But getting back to the big beautiful bill, because $1.5 billion in investment is a, a lot of money, right?
It's, it's certainly not the 200-plus billion that the bill appropriated to the Defense Department, but nonetheless, it is a major investment, uh, 1.5 billion. So from a national security perspective, Kevin, is [00:43:00] some form of regulation appropriate in the national security space? And this is separate from, I, I think that there could be broad-based agreement on, say, child safety, right?
I, I think we can come to that if Congress can actually act, but let's say everyone's motivated, to your point, to act. I think we can come around on, like, okay, what will child safety protections look like. But in the big beautiful bill, there's nothing about child safety there. It is all about military and Department of Homeland Security and executive branch operations in general that have these line-item appropriations for AI investment.
So in the national security space, what regulation, if any, do you think might be appropriate?
Yeah, I think the most important thing to keep in mind for these national security conversations is, again, the, the big C word here, which is just China, China, China. And [00:44:00] the folks on the Hill, they're receiving briefings, they're reading the New York Times.
There was a recent op-ed by Kyle Chan basically saying, hey, if you look up and down the AI stack, if you look at compute, if you look at data, if you look at talent, China is going all in on each of these aspects, uh, when it comes to creating national data exchanges, when it comes to bringing new power facilities up to date, when it comes to massive data centers.
And when that carries over to a national security context, we see bright red lights going off in the Senate and in the House saying, how do we keep pace? Now, the most important part, in my opinion, about integrating AI into the national security, uh, sector is making sure we update our testing and evaluation systems for AI-enabled weapons.
If you look through our traditional approach to testing new weapons, it's very hardware-oriented, right? It's, let's go take [00:45:00] this weapon to a range, let's go fire it a bunch of times, and let's see how reliable it is. Does it misfire, uh, does it break down? How do you do that in an AI context? How do you do that with tools that are going to perhaps have emergent capabilities?
How do you do that with tools that are going to be interacting with other AI systems in ways we may not be able to predict? We need to know that the AI weapons we deploy in a military setting are going to be safe and reliable, and yet we haven't figured out how to do that in an AI context quite yet.
There's a lot of effort going into this space, and included in the AI Action Plan, which was recently released, was a directive for the DOD to really spend more time developing this sort of evaluation system. Now, in terms of when and how to use AI systems in a military context, this brings up a whole degree of, of problems that you and I could [00:46:00] probably spend eons talking about.
Uh, with respect to privacy, there are huge concerns about the surveillance that's going to be made possible as a result of AI. Think about drones that are surveying you and surveilling you and collecting information on a scale we haven't seen before. Um, think about the ability to scrape the internet with some of these AI tools,
glean insights, glean patterns that we had never been able to receive before. The intelligence capabilities made real by AI are things we just haven't seen before. And so updating our expectations around the privacy protections we want to make sure that these tools can, uh, adhere to is going to be one of the most important things from a civil liberties perspective, in my opinion, going forward.
And so, one, I wholeheartedly agree with you on the privacy concerns, as that's the center, more or less, that's the center of my research and, uh, a lot of what I publish about. [00:47:00] That said, uh, I think, again, if I remember the line item correctly, I don't have the printout of the bill in front of me, so that's my disclaimer to all of my, my listeners.
But there was something like $1.5 billion in offensive cyber attack capabilities to the Department of Defense. So we're investing also as a country on cyber warfare, or in, in cyber warfare.
When it comes to cyber warfare, there are rules of engagement, or excuse me, when it comes to ordinary traditional warfare, there are rules of engagement. So, for example, it's widely understood, maybe not necessarily respected, that you don't bomb a hospital, right? And, and I say that's widely understood but not necessarily respected, given some of the conflicts going on around the world. That said, uh, do you issue, you know, an offensive cyber attack
on a hospital? Um, do you do it on the power [00:48:00] grid in a region of the country? So I grew up in Arizona. Do you do it in a region of the country where it's scorching hot during the summer and it shuts down everybody's AC? Is that something that we allow? So kind of the rules of engagement in cyber warfare, I think, are still very much to be determined.
Yeah, and in particular, I think from a LOAC, or law of armed conflict, perspective, one of the most important questions will be what it means for an attack to be proportional going forward, because proportionality has always been a sort of I-know-it-when-I-see-it sort of thing. I think international law scholars would like to say, no, you know, we have some, some case law here or some illustrative case studies, but in reality it's really hard to say.
What is an apples-to-apples attack, uh, in response to a certain effort? And what, for example, does it mean if China resorts to AI for what's been referred to as data poisoning? [00:49:00] So if China floods the internet with tons and tons of just horrible content, or content that they know, when an AI model is trained on it, is going to be more biased or is going to be more inflammatory, what's a proportional response to that?
Is it a kinetic response? Is it a cyber, uh, attack of some sort? I don't know. And that's the really difficult problem here: making sure that as we encounter these questions around proportionality, for example, and some of the gray areas that you were bringing up, we have more transparency into how these decisions are being made, or minimally, what principles are going to inform how those decisions get made.
Well, Kevin, I think that that's a great note to leave it on. Uh, and we're, we're out of time, unfortunately. Any, any final words?
Uh, Wayne, I just wanna say thank you and I love the slogan. I think today we may have made some sense of chaos in current events.
So glad to be [00:50:00] on.
That's it for today's episode of Discourse. Thank you for tuning in and being part of the conversation. You can catch future episodes of Discourse wherever you get your podcasts. If you found this discussion insightful, be sure to subscribe, leave a review, and share it with others who value thoughtful analysis over the noise.
You can also join the conversation by visiting discoursepod.org and following me on X and Bluesky at ProfUnger for more insights and updates. Until next time, keep thinking critically, stay curious, and engage with respect. We'll see you soon.
Discourse is a commentary podcast for informational and educational purposes only. It does not constitute professional advice or legal advice. The opinions expressed are solely those of the hosts and any guests, and do not reflect the views of any employer, institution, or organization. This podcast is not journalism and does not adhere to journalistic principles.
It offers analysis, opinion, and discussion on current [00:51:00] events, but should not be relied upon as a news source. Listeners should consult qualified professionals for legal or otherwise expert advice specific to their situation. Thanks for listening.