Will Hurd, a former CIA clandestine officer and Texas Congressman, has spent his career working at the intersection of technology and government. Hurd, a Republican, represented Texas’s 23rd congressional district in the House of Representatives from 2015 to 2021. He has also spent time as an author, investment banker, OpenAI board member, and presidential candidate. Hurd sat down with The HPR to discuss the future applications of AI to Congress, education, and cybersecurity.
This interview has been edited for length and clarity.
Harvard Political Review: With your background in computer science and your focus on cybersecurity, what do you believe are the most pressing technological threats to the U.S. today?
William Hurd: I think the most pressing technological threat to the U.S. is quantum computing, and the fact that we’re not prepared to introduce quantum-resilient encryption across most of our critical infrastructure, everything from communications to banking. NIST, the NSA, and groups like that are working on clarifying the kinds of quantum encryption that should be used. But I think this is one of those breakthroughs that could impact every single industry.
HPR: You’ve been vocal about the importance of technology in government. Can you discuss any specific policies or initiatives you believe are crucial for improving government IT systems?
WH: Look, it’s the government’s responsibility to use taxpayer dollars wisely and provide better digital-facing services. So let’s start with something very basic. Why should it take six months to renew a passport? If you’re a veteran, why is it hard to figure out when there’s an appointment available for you at the VA? Right? Those are just two simple examples.
But from a broader perspective, we should have learned from other technologies, and not make the same mistakes with things like artificial intelligence. I’ve been in the cybersecurity industry for a long time. We have a cybersecurity industry because we carved software out of product liability laws back in the ’70s. We all know the trials and tribulations of social media. Social media is leading to increased self-harm among young girls. Why? Because we carved it out of the Communications Decency Act. So let’s not make those same mistakes with things like artificial intelligence. And so, from a regulatory perspective, I believe the first thing we should do is say: AI has to follow the law. We have a lot of laws on the books dealing with civil liberties and civil rights. Those should be followed. But does that already happen? Well, no, it doesn’t.
And that’s why you’re starting to see a patchwork of laws across cities and states. If you and I want to build a parking lot here in Cambridge, you have to go before somebody to get a couple of approvals and permits to be able to do that. Something that’s really powerful, like AI, should have to go through some kind of permit process. Now, it should be a quick process. And it should be triggered when it’s time to introduce it into society, rather than during training and development.
Furthermore, every kid should have an AI tutor in their pocket. I benefited from having parents and a brother and sister that helped me with my homework, but imagine a kid in southwest Texas that doesn’t have that opportunity. And how do we ensure that kids have an opportunity to improve upward mobility based on self-learning? That’s an exciting thing and something that we should be supportive of.
HPR: How do you weigh the importance of privacy against security and general technological advancement, especially in the context of government surveillance and data collection practices?
WH: To me, it’s actually pretty simple. We should be protecting privacy at all costs. In Washington, D.C., there’s always a war over whether there should be backdoors to encryption. Oftentimes, law enforcement entities want a way in to make sure a case doesn’t go cold. I’ve been very clear: We should be strengthening encryption, not weakening it, because protecting our digital lives is as important as protecting things in the analog world. And so, to me, it starts with privacy. I also believe everything I do online is mine, and I should be able to determine how that’s used, right? Like, I like Instagram sending me ads, because I just bought a pair of shoes from a new brand that it promoted to me. But in other cases, I don’t want people to have access to my information. And so the ability to put control in the individual’s hands is what I’m supportive of.
When it comes to broader things, like pure cybersecurity and defending our digital infrastructure, we know what the basics are. Make sure your software is properly patched. Make sure you have strong passwords and are using two-factor authentication. Don’t click on stuff in your email or texts from people you do not know. If you do those things, you protect against ninety percent of the threats. But what we’re seeing now is an increase in zero-day attacks from our nation-state adversaries, primarily Russia and the Chinese government. And so that’s going to require a level of cooperation between industry, the public, and the government to protect against nation-states.
HPR: Is the incorporation of technology in government a bipartisan issue in your mind, or do you think it’s uniquely aligned with your party?
WH: I think it actually is a bipartisan issue, and the threat to our digital infrastructure is seen by both parties. When I was in Congress, I held the first hearing on AI in 2015, and I wrote the first national strategy on artificial intelligence while I was there. At that time, the conversation around AI was like, “Am I going to have an autonomous drone peeping in my window?” Those were the conversations 10 years ago, like when the CEO of Facebook first testified in front of Congress. And you had members of Congress asking dumb questions like “How do you make money?” and “It’s in the machine?” Now the conversations in D.C. are thoughtful. On the Senate side right now, a lot of the debate on tools like AI is happening in committee, between Josh Hawley and Richard Blumenthal. Their voting records are almost the exact opposite of one another. The fact that you can see a level of collegial conversation now, that the debates are thoughtful and bipartisan, means we’ve got to move to “Okay, what should we be doing about it?” And we need to move there sooner than later, because ultimately the Europeans are moving faster on legislation and setting the terms of how these tools work, and our adversaries are trying to use these tools to supplant us as a global superpower, so we don’t have time to be wringing our hands. And we need to be making sure that we’re taking advantage of technology before it takes advantage of us.
HPR: Where do you see a need for AI to be incorporated in the government, whether in the CIA or Congress?
WH: In Congress, the question is always, “How do I provide a better response to my constituents?” You know, somebody writes to you or calls your office asking about your position on an upcoming piece of legislation. A good office may respond within a week, within seven days. But in this day and age, we should be able to give a response the same day with tools like AI, trained on that individual office or member of Congress’s opinions and previous votes. You can be providing better digital services to your constituents. That’s one example.
And take my former employer, the Central Intelligence Agency. Being able to leverage and mine the existing information that we already know and have access to basically allows you to have perfect memory. The ability to have perfect memory on an issue like fighting terrorism or dealing with nuclear weapons proliferation is how we should be using AI at all times, so that we’re leveraging all the information that currently exists. Using this in a nonclassified environment, being able to read and translate articles and blogs and what people are saying on the ground, without requiring everybody to have a significant language capability, is good. And so that’s just one other example of how we should be using these tools to better understand this dangerous world around us.
HPR: Let’s pivot to the experience of a student. A lot of Harvard students, including myself, are navigating what to study. So what made you originally pursue Computer Science and International Relations in college?
WH: In high school, I had an internship with a woman who was involved in robotics at a place called the Southwest Research Institute. I thought it was so cool. I was like, I want to be just like her. And she was a CS major, and I was like, I’m going to be a CS major. And so I studied computer science. Then my freshman year, I was walking across campus and I saw a sign to take two journalism classes in Mexico City for $425. I had $450 in my bank account. So I went to Mexico and fell in love with being in another culture. I had never really been outside of Texas; I was born and raised in San Antonio. In an international studies class, there was a guest lecturer who was a former CIA tough guy who told these amazing stories, and I was like, wait a minute, I can work on the most important national security issues of the day in exotic environments. That’s what caused me to go and serve: the ability to serve my country and protect it in exciting places like Pakistan, and the fact that I had a grounding in a technical background. It taught me how to solve problems, whether writing code or figuring out how to understand the plans and intentions of al-Qaeda, and then using those experiences in cybersecurity, and then being on the leading edge of working on new technologies like AI. I’ve been lucky to be at that intersection of technology, foreign policy, and politics.
HPR: Let’s talk about your experience on the OpenAI board. Do you agree with the growing monopolization of AI, and do you have thoughts on how it should be managed? How did your experience in industry compare to your experience in public service?
WH: I don’t know if I would agree with the monopolization of AI. Let me step back to November of 2022, after I had left Congress and joined the OpenAI board. I knew of OpenAI because of all my work in the field; Sam [Altman] and I were friends, and people were like, what’s OpenAI? Everybody thought, what are you doing? What is this thing? And people thought that OpenAI’s articulated goal was ridiculous, to be honest. So when ChatGPT was released, it became a moment in history. But even at that point, we thought there were only going to be maybe one or two potential competitors. Instead, a lot of different models have been growing and increasing in scope, and the number of people leveraging them is increasing. Generative AI may become a commodity, almost like cloud computing has become.
The thing that I learned and saw from the folks at OpenAI is that they’re really thoughtful, smart people who care about building a tool that’s gonna allow us to help students learn and move up the economic ladder, that’s going to help us go from understanding only five percent of the physical world to ninety-five percent of it. That’s gonna allow us to have drug discovery and solutions to medical problems that have plagued us since we’ve walked the Earth.
This is an exciting time, and I equate artificial general intelligence to nuclear fission. Controlled nuclear fission gives you clean nuclear energy and almost unlimited power. Uncontrolled nuclear fission gives you nuclear weapons that can destroy the world. AGI, I think, is similar, so we have to be developing it in a thoughtful way. Any tool we develop is going to have an upside and a downside. But developing it in a way that’s going to benefit humanity I think is exciting, and I’m glad to have been part of it.
HPR: What’s a message you have for students who aspire to the different career roles you’ve been in?
WH: Well, I look back at all of these experiences that I’ve been fortunate to have, and I wouldn’t change anything. The one thing that ties them all together is that they’ve all been meaningful and hard. And when you pursue something that’s meaningful and hard, all those other things will flow: You’ll be able to make an income to support yourself and enjoy life or help your family. You’ll have the feeling of accomplishment of working on a good team or doing something big.
Senior Science and Tech Editor