By Tin Zaw, Director of Security Solutions
Cyberattacks are occurring more frequently, at a larger scale, and with more sophisticated tactics than ever before. As security researchers work to keep websites airtight and customers safe, they must also maintain user privacy.
Finding a happy medium between safety and user confidentiality is challenging. However, with threats only growing more severe, falling short isn’t an option. The theme of this year’s RSA Conference, taking place March 4–8 in San Francisco, is “Better.” It’s time we all acknowledge that in 2019, better cybersecurity means developing solutions that are respectful of privacy and tough on cyber adversaries.
Since the beginning of threat intelligence research, the guiding principle has been, “know your enemy.” The more details researchers have about an adversary and their tactics, tools, and procedures, the more effectively we can fight back.
However, researchers no longer enjoy unfettered access to user data. Laws and regulations designed to protect users’ privacy have driven security researchers to rethink traditional threat intelligence models. They are being pushed to innovate, reconsidering how existing machine learning and artificial intelligence models can be streamlined and adapted for threat intelligence that coexists peacefully with privacy regulations.
For most security organizations, this requires harnessing data collected from their systems and incorporating information shared by public threat intelligence organizations, other companies, or other divisions within their company. Security vendors that work with multiple customers may also use insights gleaned from their work with one organization to help defend another.
However, in a world with increasingly stringent data privacy regulations to consider, sharing that vital information may become more difficult. Today’s most advanced security solutions don’t just scan for known malware signatures and other red flags. Instead, they look at each user’s pattern of behavior in a broader context and try to judge whether that behavior is malicious.
Imagine a user who has rapidly input several different sets of credentials into a login page. Are they a legitimate user who has just forgotten their password? Or is it a bot engaged in credential stuffing?
An AI-driven security solution could look at several factors to answer these questions, including how long the user takes to enter each set of credentials (bots are faster at data entry than humans). Based on the result, the security algorithm might choose to block that user, send them a test, like a CAPTCHA, or quarantine them for a short period of time so their behavior doesn’t overload the system.
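The timing heuristic described above can be sketched in a few lines. This is a minimal illustration, not a real product’s logic: the `LoginMonitor` class, the two-second human-typing floor, and the attempt threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class LoginMonitor:
    """Flag login attempts that arrive faster than a human could type them."""
    min_human_interval: float = 2.0  # assumed floor for human credential entry, in seconds
    max_attempts: int = 5            # assumed cap before quarantining a noisy source
    attempts: dict = field(default_factory=dict)

    def record_attempt(self, user_id: str, timestamp: float) -> str:
        times = self.attempts.setdefault(user_id, [])
        times.append(timestamp)
        # Bots are faster at data entry than humans: a retry within the
        # human floor earns a challenge such as a CAPTCHA.
        if len(times) >= 2 and times[-1] - times[-2] < self.min_human_interval:
            return "challenge"
        # Many attempts, even at a human pace, get a short quarantine so the
        # behavior doesn't overload the system.
        if len(times) > self.max_attempts:
            return "quarantine"
        return "allow"
```

In practice a real solution would weigh many more signals than inter-attempt timing, but the shape of the decision — allow, challenge, or quarantine — is the same.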
Information on user behavior isn’t quite the same as personally identifiable information (PII), such as a name or Social Security number. However, privacy regulations may still restrict how the behavior of a potentially malicious user is shared. Cybersecurity professionals should think more seriously about how such restrictions could impact their work.
Sharing context about user behavior patterns is crucial for security decision making. Without knowing how a given user has raised a red flag, it’s hard for an analyst or algorithm to decide how to treat them.
Take the common practice of IP blocking. Today, public threat intelligence organizations like the LA Cyber Lab can compile lists of IP addresses flagged as malicious. To protect users’ privacy, however, they can’t share the contextual data behind those flags.
Unfortunately, this can make it hard to use the information effectively. For example, should an IP address on the list be blocked outright, quarantined, or simply monitored closely? Without knowing more about the behavior of the user at that address, it’s hard to know. If data privacy regulations do apply to information about user behavior, it could make ensuring cybersecurity much more difficult.
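A short sketch shows why the missing context matters. The record format, the `choose_action` function, and the fallback policy here are hypothetical; the point is that when behavioral context is stripped from a shared blocklist entry, the consumer is forced into a one-size-fits-all guess.

```python
from enum import Enum

class Action(Enum):
    BLOCK = "block"            # confident the source is hostile
    QUARANTINE = "quarantine"  # suspicious, but context is missing
    MONITOR = "monitor"        # watch closely, take no action yet

def choose_action(ip: str, blocklist: dict) -> Action:
    """Pick a response for an IP using whatever context the shared list retains."""
    entry = blocklist.get(ip)
    if entry is None:
        return Action.MONITOR  # not on the list: just observe
    if entry.get("behavior") == "credential_stuffing":
        return Action.BLOCK    # behavioral context justifies an outright block
    if entry.get("behavior") is None:
        # Privacy rules stripped the behavioral data: without knowing *why*
        # the address was flagged, a cautious middle ground is all that's left.
        return Action.QUARANTINE
    return Action.MONITOR
```

With the `behavior` field intact, the defender can block decisively; with it redacted, every flagged address collapses into the same blunt quarantine.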
The ongoing challenge for intelligence analysts is to keep users safe without exploiting those same users’ sensitive information. And with no universal roadmap for the process, security analysts and lawyers are still working out the details of what is and is not allowed.
In the United States, sixteen federal laws place obligations on companies in the financial, healthcare, and insurance sectors to implement minimum threat-intelligence practices. But as new privacy regulations are enacted, it can be difficult for researchers to determine where security methods and data confidentiality should overlap.
The time for cybersecurity to step up to the security versus privacy challenge is now. Researchers should look at increased data privacy regulations as an opportunity to innovate and improve existing threat intelligence models. At the same time that they are helping to resolve today’s security challenges, increasingly sophisticated machine learning algorithms may also outwit the bad actors of tomorrow.