%0 Web Page %A Ryan, Mark %A Macnish, Kevin %A Hatzakis, Tally %A Kirichenko, Alexey %A Patel, Andrew %D 2019 %T D1.3 Cyberthreats and countermeasures %U https://figshare.dmu.ac.uk/articles/online_resource/D1_3_Cyberthreats_and_countermeasures/7951292 %R 10.21253/DMU.7951292.v2 %2 https://figshare.dmu.ac.uk/ndownloader/files/14801624 %K Cyberthreats %K countermeasures %K AI %K security %K Analysis of Algorithms and Complexity %K Artificial Intelligence and Image Processing not elsewhere classified %K Computer System Security %X

While recent innovations in the machine learning domain have enabled significant improvements in a variety of computer-aided tasks, machine learning systems also present new challenges, new risks, and new avenues for attackers. The arrival of new technologies can cause changes and create new risks for society (Zwetsloot and Dafoe, 2019; Shushman et al., 2019), even when those technologies are not deliberately misused. In some areas, artificial intelligence has become so powerful that trained models have been withheld from the public over concerns of potential malicious use. This situation parallels vulnerability disclosure, where researchers must often trade off disclosing a vulnerability publicly (opening it up for potential abuse) against not disclosing it (risking that attackers will find it before it is fixed). As such, researchers should consider how machine learning may shape our environment in ways that could be harmful.

%I De Montfort University