Hackers are increasingly hijacking search engines and social media platforms to carry out cyber attacks, a research group led by De Montfort University Leicester (DMU) has found.
Artificial intelligence (AI) software found in commonly used search engines, social media platforms and recommendation websites is being manipulated by hackers more frequently than people realise, according to a new report.
Published by the European Union-funded project SHERPA – which has been established to enhance the responsible development of AI and examine the impact of smart information systems (SIS) on ethics and human rights – the report states that attacks against AI systems are already occurring regularly but are not easy to identify.
“Our consortium partners found that hackers tend to focus most of their efforts on manipulating existing AI systems for malicious purposes instead of developing new attacks that use machine learning,” explained SHERPA Project Coordinator Professor Bernd Stahl from DMU.
SHERPA researchers – including representatives and consortium partners from F-Secure, a cyber security firm that builds detection and response solutions to keep businesses and people safe online – identified a number of potentially malicious uses for AI that are well within reach of today’s attackers, including the creation of sophisticated disinformation and social engineering campaigns.
Andy Patel, a researcher with F-Secure’s Artificial Intelligence Center of Excellence, said: “Some humans incorrectly equate machine intelligence with human intelligence, and I think that’s why they associate the threat of AI with killer robots and out of control computers.
“But human attacks against AI actually happen all the time.”
The report also notes that AI has advanced to a point where it can fabricate extremely realistic written, audio, and visual content, and some AI models have even been withheld from the public to prevent them from being abused by attackers.
“At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it,” said Andy.
“AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect. And there are many different applications for convincing, fake content, so I expect it may end up becoming problematic.”
Professor Stahl added: “Our project’s aim is to understand the ethical and human rights consequences of AI and big data analytics to help develop ways of addressing these issues. We can’t have meaningful conversations about human rights, privacy, or ethics in AI without considering cyber security.
“And as a trustworthy source of security knowledge, F-Secure’s contributions are a central part of the project.”
Posted on Friday 12th July 2019