Amazon-hosted AI tool for UK military recruitment ‘carries risk of data breach’



An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment.

Data used in the automated Textio system, which improves the drafting of defence job adverts and attracts more diverse candidates by making their language more inclusive, includes the names, roles and emails of military personnel and is stored using Amazon Web Services (AWS) in the US. This means “a data breach may have concerning consequences, ie identification of defence personnel”, according to documents detailing government AI systems published for the first time today.

The risk has been judged to be “low” and the MoD said “robust safeguards” had been put in place by the suppliers, listed on the MoD website as Textio, AWS and Amazon GuardDuty, a threat detection service. (Amazon says GuardDuty is not a supplier but a product of AWS.)

But it is one of several risks the government has acknowledged about its use of AI tools in the public sector, in a tranche of documents released to improve transparency about central government’s use of algorithms.

Official declarations about how the algorithms work stress that mitigations and safeguards are in place to tackle risks, as ministers push to use AI to boost UK economic productivity and, in the words of the technology secretary, Peter Kyle, on Tuesday, “bring public services back from the brink”.

It was reported this week that Chris Wormald, the new cabinet secretary, has told civil servants the prime minister wants “a rewiring of the way the government works”, requiring officials to take “advantage of the major opportunities technology provides”.

Google and Meta have been working directly with the UK government on pilots to use AI in public services. Microsoft is providing its AI-powered Copilot system to civil servants, and earlier this month the Cabinet Office minister Pat McFadden said he wanted government to “think more like a startup”.

Other risks and benefits identified in current central government AIs include:

  • The possibility of inappropriate lesson material being generated by an AI-powered lesson-planning tool used by teachers, based on OpenAI’s powerful large language model GPT-4o. The AI saves teachers time and can personalise lesson plans rapidly in a way that may otherwise not be possible.

  • “Hallucinations” by a chatbot deployed to answer queries about the welfare of children in the family courts. However, it also offers round-the-clock information and reduces queue times for people who need to speak to a human agent.

  • “Erroneous operation of the code” and “incorrect input data” in HM Treasury’s new PolicyEngine that uses machine learning to model tax and benefit changes “with greater accuracy than existing approaches”.

  • “A degradation of human reasoning” if users of an AI tool that prioritises food hygiene inspection risks become over-reliant on the system. It may also result in “consistently scoring establishments of a certain type much lower”, but it should also mean faster inspections of places that are more likely to break hygiene rules.

The disclosures come in a newly expanded algorithmic transparency register that records detailed information about 23 central government algorithms. Some algorithms, such as those used in the welfare system by the Department for Work and Pensions, which have shown signs of bias, are still not recorded.

“Technology has huge potential to transform public services for the better,” said Kyle. “We will put it to use to cut backlogs, save money and improve outcomes for citizens across the country. Transparency in how and why the public sector is using algorithmic tools is crucial to ensure that they are trusted and effective.”

Central government organisations will be required to publish a record for any algorithmic tool that interacts directly with citizens or significantly influences decisions made about people, unless a narrow set of exemptions applies, such as national security. Records will be published once tools are being piloted publicly or are live and running.

Other tools on the expanded register include an AI chatbot that handles customer queries to Network Rail, trained on historic cases from the rail body’s customer relationship system.

The Department for Education is operating a lesson assistant AI for teachers, Aila, using OpenAI’s GPT-4o model. Created inside Whitehall rather than by a contractor, it allows teachers to generate lesson plans. The tool is intentionally designed not to generate lessons at the touch of a button. But risks identified and being mitigated include the production of harmful or inappropriate lesson material, bias or misinformation, and “prompt injection” – a way for malicious actors to trick the AI into carrying out their intentions.

The Children and Family Court Advisory and Support Service, which advises the family courts about the welfare of children, uses a natural language processing bot to power a website chat service that handles about 2,500 queries a month. One of the acknowledged risks is that it may be handling reports of concerns about children; others are “hallucinations” and “inaccurate outputs”. It has a two-thirds success rate. It is supported by companies including Genesys and Kerv, again using Amazon Web Services.

This article was amended on 17 December 2024 to include reference to Textio in the subheading and second paragraph, and to make clearer that the tool’s data is hosted on Amazon Web Services. Amazon says AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS cloud, while AWS customers have ownership and control over their data.


