AI Security with Adversarial Robustness and Explainable AI
Nefarious actors are constantly developing more sophisticated AI systems to break into secure networks. Over the past two decades, so much criminal activity has shifted onto the cyber landscape that cybercrime has grown into a vast global market that can affect anyone. Now that governments and large organizations must trust AI as a major line of defense, validation and explainability become key.
Calypso AI founder Davey Gibian raises concerns about the security of artificial intelligence itself as a growing number of companies across the globe embed these algorithms into their software.
"Artificial intelligence (AI) and machine learning (ML) technologies create paradigm-shifting advantages for companies, organizations, and society at large. However, the risks associated with AI and ML are emerging just as rapidly as the advantages," Mr. Gibian said in an interview with Rebellion Research. Calypso's SaaS-based software solution provides cybersecurity for AI itself, detecting and defending AI systems against adversarial attacks, a class of threats that many cybersecurity providers have yet to address.
Once a company like Tesla, FireEye, or Google deploys a new AI model, the algorithm interprets inputs and adjusts its outputs to align with the desired result. Because the model learns independently, its parameters drift away from anything a human explicitly designed, turning its decision-making into an abstract and opaque process.
The abstract nature of AI allows hackers to craft "adversarial examples": inputs that resemble standard ones but trigger malicious outputs, such as misclassifying malware or missing a street sign. "Attackers typically create these adversarial examples by developing models that repeatedly make minute changes to another model's inputs. Eventually, these changes begin to add up, causing the target model to become unstable and make inaccurate classifications," Mr. Gibian said.
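The iterative process Mr. Gibian describes can be sketched in a few lines of code. The toy linear classifier, step size, and feature values below are illustrative assumptions, not Calypso AI's or any attacker's actual method; they only show how many minute, sign-guided nudges can flip a model's decision.

```python
# Minimal sketch of an iterative adversarial perturbation.
# All model details here are hypothetical, for illustration only.
import math

def predict(weights, x):
    """Toy linear classifier: probability that input x is 'malware'."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def adversarial_example(weights, x, step=0.05, max_iters=100):
    """Apply many minute changes to x until the model flips its
    classification from 'malware' (>0.5) to 'benign' (<0.5)."""
    x = list(x)
    for _ in range(max_iters):
        if predict(weights, x) < 0.5:  # target model now misclassifies
            break
        # Nudge each feature a tiny step against the gradient's sign.
        x = [xi - step * (1 if w > 0 else -1)
             for xi, w in zip(x, weights)]
    return x

weights = [0.8, -0.3, 0.5]   # assumed toy model
x0 = [1.0, 0.2, 0.9]         # originally classified as malware
x_adv = adversarial_example(weights, x0)
print(predict(weights, x0) > 0.5)    # True: original input flagged
print(predict(weights, x_adv) < 0.5) # True: perturbed input evades
```

Each step moves the input only slightly, so the adversarial version stays close to the original, which is exactly why such attacks are hard for conventional defenses to spot.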
As artificial intelligence grows in complexity, adversarial attacks will become more challenging to detect and defend against. Calypso builds AI security with adversarial robustness and explainable AI in mind to ensure AI models remain safe.
Written by James Mueller, Edited by Sonakshi Dua & Alexander Fleiss