
AI tool designed to identify coercive language patterns receives Home Office funding



A start-up project that could help police identify patterns of tech-facilitated coercive control has received a £115,000 cash boost, courtesy of the UK Home Office.

The project, on which De Montfort University Leicester (DMU) is consulting, aims to create an artificial intelligence (AI) tool that can risk-assess potential domestic abuse perpetrators by analysing the language they use with their partners in text messages and on social media.

Professor Vanessa Bettinson with Nonso Nnamoko

It is hoped that the AI tool, which is currently in development, will help police quickly analyse conversations for red flags using keywords and patterns, without officers having to manually read through a victim’s phone.

Professor Vanessa Bettinson, who teaches criminal law at DMU and is a leading consultant on the project, believes the tool may encourage more victims of domestic abuse and coercive control to come forward.

She said: “As technology has evolved, so have abusive and coercive behaviours. Perpetrators can now control light settings around the house and monitor the door or central heating systems, leaving victims vulnerable to abusive behaviours while they are by themselves at home.

“However, we still feel that language holds the key to identifying those who are at risk of being coerced or abused. This funding from the Home Office is allowing us to put this theory to the test and enabling us to see if an AI analysis tool is feasible.

“The police forces we have surveyed as part of our research are very much on board with the development of the tool and are keen to see this type of technology introduced, with the hope it can increase the chances of identifying more perpetrators of domestic abuse at an early stage.

“While we are right at the beginning of the project, I’m confident that what we’re developing will protect the privacy of the victim. Having the AI tool analyse conversations instead of an individual may encourage more victims to come forward as their private conversations won’t be manually sifted through.”

Nonso Nnamoko, a lecturer in computer science at Edge Hill University who is leading the development of the AI tool, explained that the data sets had been drawn from court transcripts and curated social media messages. The data sets will be used to train the AI to identify messages that meet the legal threshold for abuse in texts or on social media.

He said: “The AI algorithm will give victims added privacy and it will also be a far quicker process. A survivor will provide their mobile phone, which will give us the log of their conversations. The AI will be able to analyse it. The AI model is 88 per cent accurate so far, and we hope to reach a threshold of 96 per cent to 98 per cent.” 
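At its simplest, the keyword-and-pattern flagging described above can be sketched in a few lines of code. The sketch below is purely illustrative: the patterns, function names, and scoring are hypothetical stand-ins, not the project's actual model, which is trained on court transcripts and curated social media data rather than a hand-written rule list.

```python
import re

# Hypothetical patterns loosely evoking coercive-control language;
# a real system would learn such signals from labelled training data.
COERCIVE_PATTERNS = [
    r"\bwho (were|are) you with\b",
    r"\byou('re| are) not allowed\b",
    r"\bsend me proof\b",
    r"\bi('m| am) checking your (phone|location)\b",
]

def flag_messages(messages):
    """Return (message, matched_pattern) pairs for messages that match
    any pattern -- a toy stand-in for the trained AI model."""
    flags = []
    for msg in messages:
        for pattern in COERCIVE_PATTERNS:
            if re.search(pattern, msg.lower()):
                flags.append((msg, pattern))
                break  # one flag per message is enough
    return flags

def risk_score(messages):
    """Fraction of messages flagged -- a crude proxy for risk assessment."""
    if not messages:
        return 0.0
    return len(flag_messages(messages)) / len(messages)
```

A conversation log could then be scored in aggregate rather than read line by line, which is the privacy benefit the researchers describe: `risk_score(["Who were you with last night?", "See you at 7"])` would flag one of the two messages.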

London South Bank University will undertake the feasibility test and document its findings in the research team’s Home Office report.

Once the feasibility study is complete, the coalition of universities hopes to secure further funding for additional testing, with the eventual aim of police forces routinely offering the tool to domestic abuse survivors.

Posted on Friday 6th May 2022
