5 Questions on AI-Readiness for SGS Board Member Robert Work

SparkCognition Government Systems | February 3, 2022

SGS board member Robert Work served as the 32nd United States Deputy Secretary of Defense from 2014 to 2017 under both the Obama and Trump administrations. He also served as the Under Secretary of the Navy and was the CEO of the Center for a New American Security (CNAS).

Sec. Work recently concluded his service as Vice-Chair of the National Security Commission on Artificial Intelligence (NSCAI), an independent commission formed “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The final report of the NSCAI was delivered in March 2021 to the President and Congress.

Last month, we spoke with him about the recommendations that came out of the NSCAI, the importance of responsible and ethical AI, and his thoughts on the partnership between the commercial sector and the Department of Defense (DoD).

Part one of our interview with SGS board member Robert Work

The NSCAI proposes a broad strategy for winning the artificial intelligence era. For the average American, where do you begin to explain the stakes involved?

On the National Security Commission on Artificial Intelligence, we did not want to sound alarmist, but we certainly wanted to convey a sense of urgency about this competition in high technology.

Artificial intelligence is at the center of it, but the competition also includes things like autonomy and robotics, energy systems, quantum science, 5G and advanced networking, synthetic biology…it’s a bundle of advanced technologies that are really going to change the world. It’s going to change the lives of the citizens of the world, without question.

Since the end of World War II, the United States has been the world’s innovation leader. It’s been the innovation hub that has really helped us in both economic and military competitiveness. And the United States wants to remain in that No. 1 position, if at all possible. And the stakes are quite high.

We’re competing against autocracies: we’re competing against communist China and the authoritarian Russian Federation. And if we fought the Cold War to make the world safe for democracy, China and Russia want to make the world safe for authoritarianism.

All of these advanced technologies will be deployed on platforms, and the platforms will reflect the values of the governments that deploy them.

So in AI, for example, we know how China wants to deploy AI. They want to use it to surveil their population. They have no consideration for personal privacy or civil liberties. The Chinese believe that if they deploy these systems abroad, they will help the governments that accept them take on a more authoritarian bent, and all the data will flow back to China.

So for us in the commission, we said this truly is a democratic versus authoritarian competition at its heart. People shouldn’t just be thinking of beeps and squeaks and things like that. They should consider personal privacy and civil liberties and whether or not populations are being monitored like the Chinese do with their facial recognition.

The Commission thus viewed the stakes of the broader technology competition as quite high. We want a world that is safe for democracy, and we would like democratic values to be reflected in future technology.

And you saw this in the 5G infrastructure debate. When 5G was rolled out, the platform that Huawei wanted to sell had a lot of means by which to surveil what was going on in the network and to transport information back to China for their own use—either malign or not. And that’s why the Huawei deployment kind of stalled because the United States was able to tell all the democratic nations that were considering the Huawei platform: “OK, this is what you’re signing up for. This platform will send back surveillance data to the PRC.” And we were able to kind of stop that. The same thing will happen in these broad AI applications, in quantum science applications, in synthetic biology, and in genomic applications.

So this is a big, big deal…an important competition.

The other way that authoritarian regimes are planning on using AI to weaken democracies is to try to find fissures inside democratic nations and to widen and deepen them. Up to this point, they have been able to do this just with bots and with humans controlling them, but with AI, they’re going to be able to micro-target individuals throughout a society to achieve effects at societal scale.

Using AI, a competitor will know exactly what you are worried about, what makes you mad. Is it gun control? Is it abortion? What could it be? They’d be able to micro-target you and start to bombard you with information that fires you up and says, “Can you believe that this is happening?” And they would use it to connect you with other people that think the same way.

And this is the way they’ve been operating since 2016, during our election cycles, for example. And what’s scary is they’ve been quite successful without AI. AI will enhance their ability to deploy these types of attacks, which I think of as societal and governance cohesion attacks, to try to break apart the cohesion of democracies. Russia does this a lot on its western border with places like Ukraine and the Baltic states. You are constantly hearing about disinformation campaigns that are just bombarding these countries, and they do the same to the United States and our allies.

As I said, this truly is a values competition, and we want to win it primarily to make sure that democratic values are maintained and defended across the world.

Of course, if you win this technological competition, it’s going to really help us economically and militarily. So as far as setting up for long-term strategic competition with these other great powers, having technological superiority is really important.

The NSCAI final report posits that by 2025, the Department of Defense and the intelligence community must be AI-ready. How do we balance the sense of urgency we need to have to ensure that we’re AI-ready in 2025, with the caution that we must also exhibit to develop AI technology that’s responsible, ethical, and actually serves to make the world a better place?

In terms of the AI and broader technology competition, we judge that we are now close to parity. China is ahead on certain things, and we are ahead on certain things. But they want to surpass us in all things by 2025 and be the world’s No. 1 technological power by 2030. They want to be the No. 1 power in AI, in quantum, etc. In terms of AI, they have a national plan to get there, published in 2017. In simplistic terms, the plan said, “we want to catch up with the United States in AI technologies by 2020.” They have already done that. It also said, “we want to surpass the United States in AI technologies by 2025.” And they’re trying their best to get there. And then they want to be the world’s No. 1 AI technological power by 2030, and they are pouring enormous resources into this.

And what the NSCAI argues is, “look, the government isn’t focused on this. Our response to this challenge is truly going to require a whole-of-government, whole-of-nation effort. And if we don’t have the building blocks of our response in place by 2025, it will likely be impossible to keep pace with the Chinese innovation engine.”

So we are saying, “if that’s the case, then between now and 2025 we need to get all our ducks in a row and start competing in a coherent and focused way. And in the meantime—since our long-term vision is to make sure that all this is a values competition—let’s make sure responsible AI is baked in from the beginning, so we don’t have any missteps that allow our citizens to conclude that we’re on the wrong track.”

And this goes right to the heart of your question. Responsible AI is very, very important if we are to convince our citizens that we can deploy AI while protecting democratic values and without impinging on their privacy or civil liberties.

The AI systems we build must reflect our values in the way they operate. So we need to be thinking about this right now. And that’s why the Department of Defense has already established a policy on responsible AI. They’re saying, “we’re going to build a governance structure to make sure that what we’re doing is responsible.”

They want to have requirements validation—so we know that responsible AI is being baked right into what we’re pursuing. And they want an AI workforce that really understands what all this means.

(End of Part One)

Read the rest of our interview with SGS board member Robert Work here.