Rubio Questions Witnesses at a Senate Intel Hearing
Vice Chairman of the Senate Select Committee on Intelligence Marco Rubio (R-FL) questioned witnesses at a hearing on the intersection of artificial intelligence (AI) and national security.
Witnesses:
- Dr. Benjamin Jensen, Senior Fellow, CSIS and Professor, Marine Corps University School of Advanced Warfighting
- Dr. Jeffrey Ding, Assistant Professor of Political Science, George Washington University
- Dr. Yann LeCun, Vice President and Chief AI Scientist, Meta Platforms, and Silver Professor of Computer Science and Data Science, New York University
Click here for video and read a transcript below.
RUBIO: I understand that we want to talk about the commercial and broader scientific applications of this. It’d be great to be the world leader, industry standard, top of the line. But for the purposes of this committee, [which focuses on] how it would be used by a nation state, what I think is important to reiterate is, you don’t need the state of the art for it to be adopted internally for your decision-making.
Every conflict in human history has involved an analysis. At some point, someone who started the war made an analysis based on their data that was in their brain, their understanding of history, their beliefs, their preconceived notions and the like, that they could be successful, and that this was really important and now is the time to do it. That’s the part that worries me the most, particularly when it comes to how it applies to authoritarian regimes.
At least in our system, for the most part, we [policymakers] encourage people to come forward…, make an analysis, and give us accurate information, even if it may not be the one we want to hear. In authoritarian systems, you usually get promoted and rewarded for telling leaders what they want to hear, not for reporting bad news and the like.
I don’t know if anyone can answer this question, but I wanted to pose it to you. Isn’t one of the real risks, as we move forward, that some nation with some existing capabilities will conduct analysis on the basis of their version of AI, which will be flawed to some extent by some of the data sets, and those data sets and the analytic functions [will] reach the conclusion that this is the moment to take this step against this [adversary]? Now is the time to invade, now is the time to move, because our system is telling us that now is the time to do it?
That system may be wrong. It may be based on flawed data. It may be based on data that people fed in there on purpose, because that’s the data that their analysts are generating. That’s the part that I worry about. Because even if it’s not the top of the line or the state of the art data, it will be what influences their decision-making and could very well lead to 21st century conflicts started not simply by a human mind, but how a human mind used technology to reach a conclusion that ends up being deeply destructive. Is that a real risk?
JENSEN: I’m happy to talk about war anytime, Senator. I think you’re hitting on a fundamental [part] of human history, as you’re saying. Every leader, usually not alone, [but] as part of a small group, is calculating risk at any moment. Having models, incorrectly or correctly, add to their judgment [is] a real concern.
There will be windows of vulnerability and the possibility of inadvertent escalation that could make even the early Cold War look more secure than it actually was. I think that’s the type of discussion you have to have. That’s where we hopefully will have backchannel conversations with those authoritarian regimes. Frankly, it just bodes well for what we know works for America, a strong military where your model finds it really hard to interpret anything but our strength.
I think that there are ways that you can try to make sure that the right information is circulating, but you can never fundamentally get away from those hard, weird moments, those irrational people with rational models. You see the model as irrational or flawed because it collects only skewed data. I worry more about what we just saw happen in Russia, where a dictator living in corrupt mansions, reading ancient imperial history of Russia, decided to make one of the largest gambles of the 21st century.
I don’t think that’s going to leave us. I think that’s a fundamental [part] of human history. I actually think in some senses, the ability of models to bring data could steady that a bit, and we can make sure that we show the right type of strength that it steadies it further.
RUBIO: Let me ask this question related to that one, and it has to do with the work of this committee in general. The core of intelligence work is analysis. In essence, you can collect all the raw bits of data you want, but someone has to interpret and tell a policymaker, this is what we think it means in light of what we know about those people, what we know about historical trends, what we know about cultures, common sense, whatever. There’s an analytical product. And then you have to make policy decisions either with high confidence in the analysis, moderate confidence, low confidence, whatever it may be….
If it’s possible at this point, could you provide us [guidance] as to what that analysis should include or look like if applied to the way we analyze data sets? So that not only are we reaching the most accurate results, the ones that are true, but ones that provide our policymakers a basis upon which to make the best decisions possible, weighing all the equities, including human consideration, not just the cost-benefit analysis from an economic or military standpoint?
DING: Let me start with your earlier question, which I take as: what is the risk of AI in terms of contributing to military accidents? I would say that an authoritarian regime might be a contributing factor to a state having a higher risk of military accidents. When we talk about these information analysis systems, think about the U.S.’ Aegis system, which collects information, analyzes it, determines what a target is and whether it’s friend or foe, and then whether we should fire a missile at the target. In [1988], the U.S. accidentally fired upon an Iranian civilian airliner, killing [290] people. So military accidents can happen in democratic countries.
But I think it’s an interesting research question. One of the things that I’m interested in studying is, how has China, as an authoritarian state, actually demonstrated a decent safety record with new civil nuclear power plants and aviation safety? How does that happen in a closed authoritarian system? What is the role of international actors?
A military accident anywhere, whether it’s caused by AI or any other technology, is a threat to peace everywhere, to your point. So we should all be working to try to reduce the risks of these sorts of accidents in military AI systems.
To your second point, one of my recommendations would be to keep a human in the loop, regardless of whatever AI system we adopt, in terms of intelligence, surveillance, and reconnaissance. Hopefully that will make these systems more robust.
###