
Cyber Risks Have Real World Effects!



Often called "Kinetic Cyber," real-world threats from cyberattacks have become a major concern for nation states and cybersecurity professionals across the globe. To give a brief timeline of the “greatest hits” of kinetic cyberattacks, here are some of the better-known ones from the last 20 years: Stuxnet, discovered in 2010, which damaged uranium-enrichment centrifuges at Iran's Natanz facility; the December 2015 attack on Ukraine's power grid, which cut electricity to roughly a quarter of a million customers; and the 2021 Colonial Pipeline ransomware incident, which shut down a major US East Coast fuel pipeline for days.

Besides these highly publicized events, there are plenty that haven't grabbed as many headlines: an oil pipeline explosion in Turkey in 2008 resulting from hacked pressure controls, a tram derailed in Poland earlier that year by a teenage hacker, a blast furnace damaged in a 2014 attack on a German steel mill, and more.

As more devices join the Internet-of-Things (IoT), there is growing fear of increasing real-world effects from digital attacks on everything from drones to voice detectors to smoke alarms.

Dan Newman, President of PathGuard, told RR that AI is playing a growing role in the field. Algorithms are already employed for pattern recognition in two particular categories: attestation, which confirms that computers and devices are in a safe state, and identification of suspicious activity.
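
For the second category, one of the simplest forms of pattern recognition is a baseline-and-threshold check: learn what normal activity looks like, then flag anything far outside that range. The sketch below is purely illustrative and not PathGuard's method; the event type, counts, and threshold are invented.

```python
# A minimal sketch of "suspicious activity" pattern recognition, assuming
# the defender tracks a simple per-hour event count (e.g. failed logins).
# Illustration only, not PathGuard's implementation.
from statistics import mean, stdev

def is_suspicious(history, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` standard
    deviations above the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold

failed_logins_per_hour = [3, 5, 2, 4, 6, 3, 5, 4]    # learned baseline
print(is_suspicious(failed_logins_per_hour, 42))     # True: far outside normal range
print(is_suspicious(failed_logins_per_hour, 5))      # False: ordinary fluctuation
```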

Less well known are newer applications of AI in rapid response. Many firms aim to remediate cyberattacks within an hour of detection, but that has become harder as intrusions grow more complex; AI can help, including with self-patching routines.
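
As a purely hypothetical illustration of what such a rapid-response routine might look like, the sketch below isolates a flagged host and queues a fix as soon as a detector fires; every function and data item in it is invented for the example.

```python
# Hypothetical sketch of automated rapid response: when a detector fires,
# the affected host is isolated and a remediation step is queued without
# waiting for a human. All names and data here are invented.

def detect_intrusion(host):              # stand-in for an AI/ML detector
    return host.get("suspicious", False)

def isolate(host):                       # stand-in for a firewall/EDR action
    host["isolated"] = True
    print(f"{host['name']}: network access revoked")

def queue_patch(host, patch_id):         # stand-in for a patch-management call
    host["pending_patch"] = patch_id
    print(f"{host['name']}: remediation {patch_id} queued")

def respond(fleet, patch_id="hotfix-001"):
    for host in fleet:
        if detect_intrusion(host):
            isolate(host)                # contain first ...
            queue_patch(host, patch_id)  # ... then remediate

fleet = [
    {"name": "plc-gateway", "suspicious": True},
    {"name": "hmi-station", "suspicious": False},
]
respond(fleet)   # only plc-gateway is isolated and patched
```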

On the black-hat side of cybersecurity, AI is broadly used in system attacks. Newman points to the evolution of these attacks: “Twenty years ago, malware was written by humans and, when discovered, analyzed by humans at anti-virus firms, with only dozens of new viruses appearing each year.

Now there are several hundred million new kinds of malware each year. Most are weak probes running under a kind of virtual Darwin: the least effective ones are ignored while those with minor success get altered in multiple attempts to make them more robust. That kind of AI suits the opportunist.”

PathGuard itself offers hardware security against malware attacks by placing a sort of "waiting room" between critical functions and input from the outside world. Remote users, whether authorized agents or hackers, cannot send anything directly to a device protected by this system.

Instead, all communications are routed through this digital waiting room while the protection unit evaluates the input for sources of risk. If the input is approved, it goes into a data-only section of memory that prevents activation of any disguised malware.
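
PathGuard implements this in hardware, but the general quarantine-then-validate pattern can be sketched in software. The analogy below is illustrative only, with an invented validation rule and size limit; it is not the company's design.

```python
# Illustrative software analogy of the "waiting room" pattern described
# above: quarantine all input, validate it, and keep only approved data in
# a store that is only ever read as data, never executed.
from collections import deque

MAX_MESSAGE_BYTES = 512                      # assumed size limit for the sketch

def looks_safe(message):
    """Toy validation: enforce a size cap and a printable-ASCII payload."""
    return (len(message) <= MAX_MESSAGE_BYTES
            and all(0x20 <= b < 0x7F for b in message))

waiting_room = deque()                       # quarantine area for raw input
data_only_store = []                         # approved payloads, never executed

def receive(message):
    waiting_room.append(message)             # nothing reaches the device directly

def process_waiting_room():
    while waiting_room:
        message = waiting_room.popleft()
        if looks_safe(message):
            data_only_store.append(message)  # read later as plain data only
        # unsafe input is simply dropped

receive(b"SET pressure_limit=75")
receive(b"\x90\x90\xcc binary blob")
process_waiting_room()
print(data_only_store)                       # only the approved command remains
```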

The idea started years ago in a discussion between Dan and Frank Newman, his business partner and father, about solving certain problems in finance. Frank has had a substantial career in banking, and Dan was working in the industry at the time. Both had earlier experience as programmers but focused on hardware as an alternative to the catch-up game the good guys play when defending against malicious hackers. They soon realized PathGuard's design had broader applications and have been developing it in that direction ever since.

Unfortunately, human error is still the key factor in many attacks. Solutions like PathGuard are useful because they can handle all incoming communications, as demonstrated by their first prototype, which withstood weeks of penetration attempts by a respected white-hat hacking firm.

That said, there are still risks in remote-access working environments that their solution does not currently address. Dan Newman notes that significant development costs stand between the current product and a commercially viable model for a laptop or other general-use computer. PathGuard's current model handles the focused data of a specific kind of infrastructure controller, but the company is confident that a general-use model will follow.

At the other end of the spectrum, AI also helps improve the most sophisticated attacks with data analysis that would not otherwise be possible. Most of the big threats in cybersecurity come from state-sponsored hacking, which has changed the landscape. Compared to the old model of the lone hacker in a basement, state-sponsored teams are better informed, more focused, and much more patient. Combating such teams requires the kind of AI "house alarm" that can detect intrusion.

Another solution developed to make life more difficult for hackers is known as Deception Technology. It acts like the proverbial “alarm” by disguising triggers as the traditional routes into a given network. The easiest way to picture it is that hackers work from a basic map of systems, based on the international standards applied across networks.

To combat attacks, cybersecurity teams have begun to exploit these assumptions by setting decoys or traps throughout their infrastructure. Once the cyber mousetrap is triggered, notifications are sent to a deception server to identify the suspect. To keep these solutions dynamic, most incorporate machine learning and AI, making it more difficult for intruders to break in.
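
As a rough illustration of the decoy idea, the sketch below opens a fake admin port that serves no real purpose, so any connection to it is treated as suspect and reported. The port number and reporting function are invented for the example and do not describe any particular vendor's product.

```python
# Minimal sketch of a decoy ("canary") service: a fake admin port with no
# legitimate users, so any connection is treated as suspect and reported.
import socket
from datetime import datetime, timezone

DECOY_PORT = 2222   # looks like an SSH-style admin port, serves no real purpose

def report_to_deception_server(event):
    # Stand-in for a real notification (webhook, SIEM, or deception server API).
    print("ALERT:", event)

def run_decoy(port=DECOY_PORT):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()   # no one legitimate should connect
            with conn:
                report_to_deception_server({
                    "source_ip": src_ip,
                    "source_port": src_port,
                    "decoy_port": port,
                    "time": datetime.now(timezone.utc).isoformat(),
                })

if __name__ == "__main__":
    run_decoy()
```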

Despite the growing risk of kinetic cyberattacks and the potential for real-world cyber warfare, the cybersecurity industry is actively creating new solutions. These address a variety of attack vectors, from network access to decoy-based intrusion detection and attacker identification.

By incorporating AI into these solutions, security teams can respond more quickly than they could if they relied purely on human intervention. Key to the fight against cyberattacks is the ability to work faster and smarter rather than harder and longer, a task for which AI is ideally suited.

Written by Paul Marrinan & Edited by Alexander Fleiss