How Do We Secure Our AI Systems?
As the future unfolds and we become more and more automated, questions arise not just about our dependence on the technology behind the automation, but about the data that technology depends on. As computers become ever cheaper and more widely available to the public, instances of nefarious activity will rise.
How safe are our Artificial Intelligence systems?
These AI systems are designed to take in raw data, run it through a model, and produce actionable information as a result, such as which stock to buy. But how do you determine whether the data you are feeding in is authentic and unmodified, and how or where it was produced?
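One common way to check that a data feed is authentic and unmodified is a keyed message authentication code. The sketch below is a minimal illustration, not anything KenSci or the article describes: it assumes a producer and consumer share a secret key, and the consumer refuses to trust any payload whose tag does not verify.

```python
import hmac
import hashlib

# Hypothetical shared secret, assumed to be pre-distributed between the
# data producer and the AI system consuming the feed.
SECRET_KEY = b"shared-secret"

def sign(payload: bytes) -> str:
    """Producer side: compute an HMAC-SHA256 tag over the raw payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Consumer side: accept the data only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(payload), tag)

data = b"AAPL,close,187.42"      # illustrative record, not real market data
tag = sign(data)
assert verify(data, tag)                       # authentic, unmodified
assert not verify(b"AAPL,close,999.99", tag)   # tampered payload is rejected
```

This answers "is the data unmodified?" but not "where was it produced?"; provenance typically requires per-producer keys or digital signatures rather than a single shared secret.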
"We wake up sweating," said Ankur Teredesai, Seattle-based KenSci's Director of Artificial Intelligence. KenSci builds Ai to manage a risk prediction platform for health care data. "At the end of the day, we're talking about real patients, real lives."
AI is now being used to automate the generation of false news and information. Probabilistic models converge toward the statistics of the data they are fed, so if an attacker can manipulate the data your system trusts, and you continue to trust the output, the decisions you make can be directly manipulated.
The basis of all security is the establishment of trust. Trust in security has traditionally been binary: you cryptographically authenticate someone and then continue to trust them until their credentials are revoked. With AI, however, you can build reputation-based systems, systems that can operate with adversarial data sources. You establish a reputation for your experts and automatically adjust your confidence based on their performance rather than their credentials. There is a whole world of developing technological innovations to keep an eye on, and Artificial Intelligence is certainly one of them.
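One simple way to realize that idea, sketched here as an assumption rather than any specific deployed system, is multiplicative weighting: every source starts with equal weight, and a source's weight is discounted each time its report disagrees with the eventually observed outcome, so confidence tracks performance rather than credentials.

```python
PENALTY = 0.5  # hypothetical discount factor applied to a wrong report

def update_weights(weights, reports, outcome):
    """Discount sources whose report did not match the true outcome, then renormalize."""
    new = {src: (w if reports[src] == outcome else w * PENALTY)
           for src, w in weights.items()}
    total = sum(new.values())
    return {src: w / total for src, w in new.items()}

# Three hypothetical sources reporting a market direction over two rounds.
weights = {"alice": 1.0, "bob": 1.0, "eve": 1.0}
rounds = [({"alice": "up", "bob": "up", "eve": "down"}, "up"),
          ({"alice": "down", "bob": "up", "eve": "up"}, "down")]

for reports, outcome in rounds:
    weights = update_weights(weights, reports, outcome)

# alice was right in both rounds, so she ends with the highest weight.
assert weights["alice"] == max(weights.values())
```

Because weights decay rather than drop to zero, the scheme keeps operating even when some sources are adversarial: bad actors lose influence gradually instead of having to be formally revoked.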
Written by Rachel Weissman & Edited by Alexander Fleiss