Fortinet today unveiled predictions from the FortiGuard Labs team about the threat landscape for 2019 and beyond. These predictions reveal the methods and techniques that Fortinet researchers anticipate cybercriminals will employ in the near future, along with important strategy changes that will help organizations defend against these oncoming attacks. Highlights of the report, including key takeaways for CISOs, follow:
Cyberattacks Will Become Smarter and More Sophisticated
For many criminal organizations, attack techniques are evaluated not only in terms of their effectiveness, but also in terms of the overhead required to develop, modify, and implement them. As a result, many of their attack strategies can be disrupted by targeting the economic model cybercriminals employ. Strategic changes to people, processes, and technologies can force some cybercriminal organizations to rethink the financial value of targeting certain organizations. One way organizations are doing this is by adopting new technologies and strategies, such as machine learning and automation, to take on tedious and time-consuming activities that normally require a high degree of human supervision and intervention. These newer defensive strategies are likely to affect cybercriminal strategies as well, causing attackers to shift methods and accelerate their own development efforts. To adapt to the increased use of machine learning and automation, we predict that the cybercriminal community will adopt the following strategies, which the cybersecurity industry as a whole will need to follow closely.
Artificial Intelligence Fuzzing (AIF) and Vulnerabilities
Fuzzing has traditionally been a sophisticated technique used in lab environments by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications. Researchers do this by injecting invalid, unexpected, or semi-random data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failing code assertions, and potential memory leaks. Historically, this technique has been limited to a handful of highly skilled engineers working in lab environments. However, as machine learning models are applied to this process, we predict that fuzzing will become not only more efficient and tailored, but also available to a wider range of less technical individuals. As cybercriminals begin to leverage machine learning to develop automated fuzzing programs, they will be able to accelerate the discovery of zero-day vulnerabilities, leading to an increase in zero-day attacks targeting different programs and platforms.
- Zero-Day Mining Using AIF: Once AIF is in place, it can be pointed at code within a controlled environment to mine for zero-day exploits. This will significantly accelerate the rate at which zero-day exploits are developed. Once this process becomes streamlined, zero-day mining-as-a-service offerings will emerge, creating customized attacks for individual targets. This will change how organizations need to approach security, as there will be no way to anticipate where these zero-days will appear or how to properly defend against them. This will be especially challenging for the isolated legacy security tools that many organizations have deployed in their networks today.
- The “Price” of Zero-Days: Historically, the price of zero-day exploits has been quite high, primarily because of the time, effort, and skill required to uncover them. But as AI technology is applied over time, such exploits will shift from being extremely rare to becoming a commodity. We have already witnessed the commoditization of more traditional exploits, such as ransomware and botnets, and the results have pushed many traditional security solutions to their limits. The acceleration in the number and variety of available vulnerabilities and exploits, including the ability to quickly produce zero-day exploits and provide them as a service, will also impact the types and costs of services available on the dark web.
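The core fuzzing loop described above (mutate an input, feed it to a target, watch for crashes) can be sketched in a few lines. This is a minimal illustration, not a production fuzzer; `parse_record` is a hypothetical, deliberately fragile stand-in for any interface under test, and the byte-flipping mutator is the simplest possible strategy — ML-assisted fuzzers differ precisely in choosing mutations more intelligently.

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target interface. Deliberately fragile:
    it raises on malformed input, standing in for a real crash."""
    text = data.decode("utf-8")           # may raise UnicodeDecodeError
    key, _, value = text.partition("=")
    return {key: int(value)}              # may raise ValueError

def mutate(seed: bytes) -> bytes:
    """Produce a semi-random variant of a known-valid input."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)  # overwrite one byte at random
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to the target; record any that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception as exc:          # a crash-equivalent event
            crashes.append((candidate, type(exc).__name__))
    return crashes

random.seed(0)                            # deterministic for the example
crashes = fuzz(b"count=42")
print(f"{len(crashes)} crashing inputs found")
```

A coverage-guided or ML-driven fuzzer replaces the blind `mutate` step with feedback from the target (code paths reached, partial failures) to home in on crashing inputs far faster, which is the acceleration this prediction describes.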
Swarm-as-a-Service
Significant advances in sophisticated attacks powered by swarm-based intelligence technology are bringing us closer to a reality of swarm-based botnets known as hivenets. This emerging generation of threats will be used to create large swarms of intelligent bots that can operate collaboratively and autonomously. These swarm networks will not only raise the bar in terms of the technologies needed to defend organizations, but, like zero-day mining, they will also have an impact on the underlying cybercriminal business model. Ultimately, as exploit technologies and attack methodologies evolve, their most significant impact will be on the business models employed by the cybercriminal community.
Currently, the criminal ecosystem is very people-driven. Professional hackers for hire build custom exploits for a fee, and even newer offerings such as Ransomware-as-a-Service require black hat engineers to stand up resources, such as building and testing exploits and managing back-end C2 servers. But with autonomous, self-learning Swarms-as-a-Service, the amount of direct interaction between a hacker-customer and a black hat entrepreneur will drop dramatically.
- A-la-Carte Swarms: The ability to subdivide a swarm into different tasks to achieve a desired outcome is very similar to the way the world has moved towards virtualization. In a virtualized network, resources can spin up or spin down VMs based entirely on the need to address particular issues such as bandwidth. Likewise, resources in a swarm network could be allocated or reallocated to address specific challenges encountered in an attack chain. A swarm that criminal entrepreneurs have preprogrammed with a range of analysis tools and exploits, combined with self-learning protocols that allow its members to work as a group to refine their attack, will make purchasing an attack as simple for cybercriminals as selecting from an a-la-carte menu.
Poisoning Machine Learning
Machine learning is one of the most promising tools in the defensive security toolkit. Security devices and systems can be trained to perform specific tasks autonomously, such as baselining behaviors, applying behavioral analytics to identify sophisticated threats, or tracking and patching devices. Unfortunately, this process can also be exploited by cyber adversaries. By targeting the machine learning process, cybercriminals will be able to train devices or systems to not apply patches or updates to a particular device, to ignore specific types of applications or behaviors, or to not log specific traffic to evade detection. This will have an important evolutionary impact on the future of machine learning and AI technology.
Defenses Will Become More Sophisticated
To counteract these developments, organizations will need to continue to raise the bar for cybercriminals. Each of the following defensive strategies will have an impact on cybercriminal organizations, forcing them to change tactics, modify attacks, and develop new ways to assess opportunities. The cost of launching their attacks will escalate, requiring criminal developers to either spend more resources for the same result, or find a more accessible network to exploit.
Advanced Deception Tactics
Integrating deception techniques into security strategies to introduce network variations built around false information will force attackers to continually validate their threat intelligence, expend time and resources to detect false positives, and ensure that the networked resources they can see are actually legitimate. And since any attacks on false network resources can be immediately detected, automatically triggering countermeasures, attackers will have to be extremely cautious performing even basic tactics such as probing the network.
Unified Open Collaboration
One of the easiest ways for a cybercriminal to maximize the investment in an existing attack and possibly evade detection is to make a minor change, even something as basic as changing an IP address. An effective way to keep up with such changes is by actively sharing threat intelligence. Continuously updated threat intelligence allows security vendors, and their customers, to stay abreast of the latest threat landscape. Open collaboration efforts between threat research organizations, industry alliances, security manufacturers, and law enforcement agencies will significantly shorten the time to detect new threats by exposing and sharing the tactics used by attackers. Rather than being merely reactive, however, applying behavioral analytics to live data feeds through open collaboration will enable defenders to predict the behavior of malware, thereby circumventing the current model used by cybercriminals to repeatedly leverage existing malware by making minor changes.
Speed, Integration, and Automation Are Critical Cybersecurity Fundamentals
There is no future defense strategy involving automation or machine learning without a means to collect, process, and act on threat information in an integrated manner to produce an intelligent response. To contend with the growing sophistication of threats, organizations must integrate all security elements into a security fabric to find and respond to threats at speed and scale. Advanced threat intelligence correlated and shared across all security elements needs to be automated to shrink the necessary windows of detection and to provide quick remediation. Integration of point products deployed across the distributed network, combined with strategic segmentation, will significantly help fight the increasingly intelligent and automated nature of attacks.