Longterm Wiki

Trusting artificial intelligence in cybersecurity is a double-edged sword | Nature Machine Intelligence

paper

Data Status

Not fetched

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 13 KB
Trusting artificial intelligence in cybersecurity is a double-edged sword | Nature Machine Intelligence 
Subjects: Ethics, Information technology, Social policy

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.

... (truncated, 13 KB total)
Resource ID: 69d97a4c0448c91d | Stable ID: MWNjMzU0M2