The UK is speeding up plans to create an “AI Air Force” in light of the US-Israeli war with Iran, the head of the Royal Air Force told The i Paper.
Air Chief Marshal Sir Harv Smyth said that the shift towards uncrewed air systems was “one of the most interesting and exciting changes for us”, and was happening much more quickly than military planners had expected.
Last week, the RAF launched a review of its Combat Air Strategy, first released in 2018, to significantly expand its artificial intelligence and autonomous capabilities by the end of the decade. This includes the development of “robot fighter jets”, according to Smyth.
AI has regularly been used during the Iran war, both to select military targets and to conduct large-scale attacks by uncrewed aerial systems, or drones.
Smyth, in an interview with The i Paper at a military airport hangar, said the Air Force has commissioned work to “re-look at our Combat Air Strategy and where those types of capabilities might and could play into that much sooner than we had thought”.
“What we’re now looking at is, how do we pull that all together into a single programme, and what could that mean for an Air Force that might become more of an AI Air Force in the future?” he added.
Robot fighter jets
He said that two years ago, the RAF was planning to integrate automated systems by 2035, but it is now aiming to master the technology by 2030.

One element of this is “collaborative combat aircraft, CCAs – or robot fighter jets”, he said.
“We’ve always known in theory that this was a capability that would be coming, we expected it to really come to the fore in the next decade. In truth, it’s here today.”
Separately, AI can be used in air and missile defence systems, helping to digest warning signs picked up by radars and sensors and allocate different ground-based capabilities to incoming aerial threats. This has been seen in Ukraine.
The UK’s Strategic Defence Review (SDR) – a blueprint for the future of the Armed Forces released last year – said that the RAF needed to improve its crewed systems with “autonomous collaborative platforms”, to provide “mass and capability across a range of tasks, including air defence, strike, and electromagnetic attack”.
It added that crewed combat air platforms “will remain at the heart of a system‑of‑systems approach, particularly in airborne air defence to counter peer adversaries’ aircraft, until artificial intelligence and autonomy reach the necessary levels of capability and trust”.

Asked if the UK was speeding up its plans to introduce automated tech in part due to lessons from the US-Israeli conflict with Iran, Smyth said: “Absolutely.”
But he said that integrating robotic systems would not only help the UK expand its military capabilities, it would also generate domestic economic growth. He added that British defence companies have “world-class skills” in the area and could benefit from new investment.
However, a pivot towards a robot air force would require significant funding, and the Government is continuing to stall on its long-awaited Defence Investment Plan (DIP).
George Robertson, the former head of Nato and co-author of the SDR, recently said that the UK was not taking the threat from Russia seriously enough and needed to prioritise spending on defence over welfare.
Hybrid forces
The RAF, Navy and Army are taking steps to become hybrid forces, made up of crewed and uncrewed systems. Insiders said AI could help control air platforms and speed up the decisions of the pilot.
Justin Bronk, a senior research fellow at the Royal United Services Institute (Rusi), said the RAF was developing some “very promising” uncrewed systems, which includes the StormShroud – uncrewed aircraft designed to blind enemy radars – and a range of cheap decoy and one-way attack drone systems.
However, Bronk said the systems would mostly be useful in overwhelming air defence systems to improve the odds of high-end weapons like UK-made Storm Shadow missiles hitting targets, rather than as replacements for existing missiles and jets.

“Because there is a real risk of direct conflict with Russia in the next few years, the RAF – and the rest of the UK Armed Forces – will primarily have to fight any such war with the equipment they already have,” he said. “So, while future systems for service in the mid-late 2030s are important to develop, the main focus should be enhancing the lethality and survivability of the aircraft and aircrews we already have.”
Bronk added: “In the near term, that means bolstering air-launched weapon stocks, improving airbase defences and investing in electronic attack options like StormShroud.”
Military targeting decisions
AI and automation has been used extensively in the latest Middle East conflict, which began in late February when the US and Israel launched missile strikes across Iran, killing its supreme leader, Ayatollah Ali Khamenei.
In response, Iran has attacked US military bases and those of its allies across the region, and effectively closed the Strait of Hormuz, a key shipping lane for global oil and gas supplies.
When selecting targets in Iran, the US has reportedly used Palantir’s Maven Smart System, an AI platform designed to speed up military targeting decisions by bringing together data and intelligence and recommending targets.
Admiral Brad Cooper, the US Commander leading the war in Iran, has confirmed the use of “a variety of advanced AI tools” to sift through large amounts of data in the conflict. These systems allow leaders to make “smarter decisions faster than the enemy can react”, he said, and speed up processes from hours or days to seconds.
Nato bought a version of Maven Smart System in 2025.
At the same time, Ukraine and Russia are both using AI extensively on the battlefield. Kyiv’s deputy defence minister said last year that AI analyses more than 50,000 video streams from the front line each month, to help “quickly process this massive data, identify targets and put them on a map”.
But experts have warned that automated capabilities come with risk, and could lead to the accidental targeting of civilians or non-military infrastructure. The Pentagon has faced questions over whether AI contributed to an air strike on an Iranian girls’ school which Iranian officials say killed 168 people, but the answer is not yet clear.
Nilza Amaral, head of research operations at the Global Governance and Security Centre at Chatham House, recently wrote that an AI model could be trained with faulty data or with material that is different to what it encounters when deployed in the real world, leading it to “generate inaccurate information or malfunction when used outside of the training environment”.
The US military and AI suppliers insist there will always be a “human in the loop” who makes the final decision, but it is clear that automated and robotic systems will increasingly shape the face of modern warfare – and the RAF is racing to try to adapt.