Weaponizing AI
File this under "no shit Sherlock," but hackers are already weaponizing machine learning.
"The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users." (via Gizmodo)
In reality, the above example is mostly just a bunch of loops; what makes all the difference is what the tweets contain and whom they target. That "intelligence" is probably generated from some machine-learned model, the "AI" part.
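To make that concrete, here's a minimal Python sketch of the "bunch of loops" pattern, not SNAP_R's actual code; the scoring heuristic, the lure generator, and the user fields are all invented for illustration, standing in for the machine-learned parts:

```python
import time

def score_target(user):
    """Stand-in for a propensity-to-click model: rank by activity and past clicks."""
    return user["tweets_per_day"] * user["click_history"]

def generate_lure(user):
    """Stand-in for an NLP model that tailors the text to the target's interests."""
    return f"@{user['handle']} saw this and thought of your {user['topic']} posts:"

def run_campaign(users, payload_url, tweets_per_minute=6.75):
    delay = 60.0 / tweets_per_minute
    # The loop itself is trivial; the models decide whom to hit and what to say.
    for user in sorted(users, key=score_target, reverse=True):
        tweet = f"{generate_lure(user)} {payload_url}"
        print(tweet)        # a real attacker would post this via the Twitter API
        time.sleep(delay)   # throttle to dodge rate limits and spam detection

if __name__ == "__main__":
    demo_users = [
        {"handle": "alice", "tweets_per_day": 40, "click_history": 0.8, "topic": "infosec"},
        {"handle": "bob", "tweets_per_day": 5, "click_history": 0.2, "topic": "cooking"},
    ]
    run_campaign(demo_users, "http://example.com/payload", tweets_per_minute=60)
```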
"Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end," as one security researcher puts it in the Gizmodo piece. These tools, he says, can make decisions about what to attack, whom to attack, when to attack, and so on.
Yes, the same propensity-to-buy and propensity-to-click models that marketers use, plus some NLP, will get people to infect their own machines.
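If you've never seen one, a bare-bones propensity-to-click model is nothing exotic. Here's a hedged sketch using scikit-learn on fabricated data; the feature columns and labels are made up, assuming an attacker had logs from past campaigns to train on:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented per-user features: [tweets_per_day, follows_back_rate, past_link_clicks]
X = rng.random((500, 3))
# Fake labels: users who clicked links before are more likely to click again.
y = (X[:, 2] + 0.3 * rng.standard_normal(500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Rank fresh candidates by predicted click probability, then attack the top few.
candidates = rng.random((10, 3))
scores = model.predict_proba(candidates)[:, 1]
top_targets = np.argsort(scores)[::-1][:3]
print("Highest-propensity targets:", top_targets, scores[top_targets])
```

Swap in real engagement features and real click labels and that's the whole "whom to attack" decision, automated.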
Chalk this up as another abusive, yet innovative, way to use machine learning.
Update
Since I posted this there have been many new developments in the deep learning world, especially the rise of GANs (Generative Adversarial Networks). With GANs and the improvements in video processing, we're now entering a world where anyone can selectively create fake videos that look real. This has far-reaching consequences for public life: without a fact-checker on staff, you sometimes can't tell whether content is real or fake.
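For the curious, the adversarial idea boils down to a tug-of-war between two networks. Here's a toy PyTorch sketch where a generator learns to mimic a simple 1-D Gaussian while a discriminator tries to catch the fakes; real deepfake video models are vastly larger, but the core loop is the same:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5)
    fake = G(torch.randn(64, 8))            # generator turns noise into samples

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator into saying "real".
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"fake mean {samples.mean():.2f} / std {samples.std():.2f} (target 4.00 / 1.50)")
```

Replace "1-D Gaussian" with "frames of a person's face" and you have the intuition behind why fake video got so convincing so fast.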
This is very scary for the political world, especially in authoritarian-leaning countries, where those in power, and with the cash, can hire programmers to make realistic fake content to sway elections and stay in power.