
Inside the artificial intelligence ‘X’ files taking UK military into a new age

Revealed: All three UK armed forces 'creating killer AI robots'
Autonomous vehicles supported by artificial intelligence operate over land and water in the UK’s defence vision (Graphic: Myles Goode, Metro.co.uk)

A glimpse at a list of frontier artificial intelligence (AI) projects being developed by the UK military reveals more than 70 headings that sound like they belong in the next space age.

While some of the cryptic titles may not be fully realised for decades to come, this is a battlespace where the UK’s defence planners firmly believe the country cannot afford to lag behind.

The 73 projects span all three armed forces and cover work in areas including intelligence analysis, drone swarms and uncrewed submarines.  

Representing hundreds of millions of pounds of investment, some are established areas of research and development while the nature of others can only be guessed at by their titles in the list obtained by Drone Wars UK.  Cutting-edge work under headings such as ‘multi-domain integrated swarm’, ‘NavyX’ and ‘Counter-UAS’ are included in the document released to the group, which monitors military technology.  

The researchers warn in a blog that AI projects ‘could help to unleash new lethal weapons systems requiring little or no human control’, raising ethical and human rights concerns. The development of lethal autonomous military systems, which have been dubbed ‘killer robots’, poses a ‘high risk of civilian casualties’, according to Drone Wars.

The Ministry of Defence (MoD) said this week there is ‘no basis for these exaggerated claims’ and it has a legal and ethical framework for carrying out the research, with no plans to develop fully autonomous weapons.

Data flows are seen as an integral part of the future across the UK’s three service arms (Picture: Getty Images)

Drone Wars founder Chris Cole said: ‘It’s clear that the MoD is crossing a line here. The projects in this list represent the building blocks needed to produce killer robots in the near future. The information revealed in this list raises significant questions about the government’s stated commitment not to develop autonomous weapon systems.’ 

The MoD, which disclosed the titles, has set out how it intends to adopt AI technology in the Defence Artificial Intelligence Strategy, which was published last year. Defence Secretary Ben Wallace asked readers to ‘imagine autonomous resupply systems and combat vehicles’ and hailed AI’s ‘enormous potential to enhance capability’. 

Mr Wallace framed it as part of the UK’s response to threats posed by Russia and China, which are likely also investing heavily in the technology.

Drone Wars subsequently obtained the list under the Freedom of Information Act (FOI), finding that it contained far fewer projects than the more than 200 programmes declared in the strategy.

The non-governmental organisation carried out the research in line with its aim to ‘investigate and challenge’ the use of lethal military technology, with the over-arching goal of promoting ‘sustainable human security’.

Soldiers demonstrate the EXO Insight eye-tracking glasses in Salisbury in October 2021 as part of the Army Warfighting Experiment (AWE 21), a flagship programme harnessing next-generation technology to prepare for complex future warfare (Picture: Finnbarr Webster/Getty)

The response states that no central list is kept due to the ‘ubiquity’ of the research and suggests that some projects are so secret even the names cannot be released on national security and international relations grounds.

‘The MoD’s own AI Strategy accepts that transparency will be essential in gaining acceptance for AI and similar new technologies,’ Mr Cole said.

‘It is therefore very disappointing that the list of AI schemes had to be prised out of the MoD following an FOI battle, and not released proactively at the time the Strategy document was published.

‘Even the list of projects that has been released falls far short of the full set.  

‘The government has argued that it wishes to see artificial intelligence technologies used for ethical and responsible purposes, and it should therefore use the AI summit planned for later this year to kickstart a major international initiative to ban killer robots.’ 

Richard Cassidy, chief information security officer for Europe, the Middle East and Africa at data security firm Rubrik, told Metro.co.uk that advances in the military use of AI need to be tempered with safeguards that mean humans remain the technology’s overlords.

‘As we step deeper into the technological age, the UK has opened the doors to a new era of innovation, specifically focusing on AI in military defence,’ he said. ‘The challenge is welcome and, given the speed of innovation in today’s cyber landscape, unarguably necessary. It is paramount that the pursuit of progress does not eclipse the criticality of exercising caution.’

The security chief emphasised the need for humans to remain in the loop as AI is incorporated into a wide range of critical national infrastructure.

‘The deployment of AI in defence systems amplifies the consequences of a malfunction, beyond inconvenience, to potential national security disasters,’ he said. ‘Thus, while embracing the power of AI, the criticality of the “human in the loop” approach must be preserved. This involves humans not just as supervisors but as active participants in the AI decision-making and control process. This human-machine synergy ensures we retain control and comprehension of the decision logic of these systems.’ 

Cassidy stressed the need for transparency and accountability in AI infrastructure, with humans as the ultimate backstop.

‘As we chart our course into this brave new world, let’s ensure the pursuit of progress is matched with a commitment to caution,’ he said. ‘We are, after all, the custodians of our national security and must ensure that in our quest to advance, we do not expose ourselves to avoidable vulnerabilities.’

Secretive military AI projects

Little is publicly known about some of the projects.

One headed ‘The Networked Unmanned Air System (UAS) across Future Commando Force (FCF)’ hints at the MoD’s vision for a leaner, technology-enabled force. Another is titled ‘T26/T31e Offboard UXVs’, possibly referring to unmanned marine vehicles being designed for Royal Navy frigates.

Other warfare technology looks to the far horizon as the architects seek to make radical shifts in British forces’ capabilities.

The ‘next generation’ Test and Evaluate Futures programme, which also features on the list, is known to include ‘novel weapons’, AI and space-based systems. 

Revealed in an annex, the projects range from logistics and medical support to systems intended to automatically detect and scramble AI units to intercept targets. Among the known technologies is the Hydra project, which is led by the Defence Science and Technology Laboratory and also involves private sector contractors and the US and Australian militaries. 

Technology being developed under the project includes systems to use AI-supported drone swarms in ‘contested environments’.  

Another, named NEXUS, is a cloud-based project being developed by the Royal Air Force’s Rapid Capabilities Office to network and share data between different aircraft and systems.  

Far-reaching technology includes Project Minerva, or the Space Game Changer, a £127 million programme intended to pave the way for a ‘multi-satellite system’ used for ‘space-based intelligence’.  

Another futuristic work-in-progress is XL UUV, which stands for extra large uncrewed underwater vehicle.

Also known as Project Cetus, the £15.4 million crewless submarine being designed for the Royal Navy is expected to be able to operate alongside or independently of other vessels when it reaches the dockyard next year.

As with many of the publicised projects, it’s not clear how the weapons systems will operate, but an early picture of the battery-powered UUV shows it has a ‘payload bay’. 

Advanced AI-powered systems under development include the latest unmanned aerial vehicles and uncrewed Navy vessels (Graphic: Myles Goode, Metro.co.uk)

Decision-making is at the core of another project — the Enhanced C2 Spearhead (EC2SPHD) — intended to ‘speed up processes’ and remove the ‘cognitive burden’ on humans in land operations. Other advanced research and development includes NavyX — which has been described by the Royal Navy as its ‘autonomy and lethality accelerator’.

What is not in doubt is that AI is a ‘critical technology’ for the future of British armed forces, as it was described in the MoD’s updated Defence Command Paper published earlier this month. The report follows Mr Wallace’s announcement in 2021 that the number of fully-trained soldiers would be reduced by nearly 10,000 over the following four years.

The document states: ‘Over the decades ahead, the ships, tanks and planes in our strike groups, armoured brigades and combat air squadrons will require ever fewer people but that will not necessarily mean our workforce will be smaller. We may have fewer people on the front line but a much larger community of specialists supporting them.’ 

The MoD maintains that the UK defence sector has established ‘guiderails and frameworks’ in the field of AI and autonomy, which ensure the work is legally compliant and supports human rights and democracy.

A policy paper titled Ambitious, Safe, Responsible further outlines the department’s approach to the field, including through the principles of human-centricity, responsibility, understanding, and bias and harm mitigation.

An MoD spokesperson said: ‘There is no basis for these exaggerated claims and, as we’ve previously set out, we maintain a commitment to a fully legal and ethical approach to AI. We do not possess fully autonomous weapon systems and have no intention of developing them.

‘Across Defence there will always be context appropriate human involvement in weapons which identify, select and attack targets.’
