The cybersecurity industry is rapidly embracing the notion of “zero trust,” in which architectures, policies, and processes are guided by the principle that no one and nothing should be trusted.

However, in the same breath, the cybersecurity industry is incorporating a growing number of AI-driven security solutions that rely on some type of trusted “ground truth” as a reference point.

How can these two seemingly diametrically opposed philosophies coexist?

This is not a hypothetical discussion. Organizations are introducing AI models into their security practices that impact almost every aspect of their business, and one of the most urgent questions remains whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all.

Because AI models are sophisticated, obscure, automated, and oftentimes evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models might be considered risk-prohibitive and so could eventually be under-utilized, marginalized, or banned altogether.

One of the main stumbling blocks associated with AI trustworthiness revolves around data, and more specifically, ensuring data quality and integrity. After all, AI models are only as good as the data they consume.

And yet, these obstacles haven’t discouraged cybersecurity vendors, which have shown unwavering zeal to base their solutions on AI models. By doing so, vendors are taking a leap of faith, assuming that the datasets (whether public or proprietary) their models are ingesting adequately represent the real-life scenarios those models will encounter in the future.

The data used to power AI-based cybersecurity systems faces a number of further problems:

Data poisoning: Bad actors can “poison” training data by manipulating the datasets (and even the pre-trained models) that AI models rely on. This could allow them to circumvent cybersecurity controls while the organization at risk remains oblivious to the fact that the ground truth it relies on to secure its infrastructure has been compromised. Such manipulations could lead to subtle deviations, such as security controls labeling malicious activity as benign, or have a more profound impact by disrupting or disabling the security controls.
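To make the mechanics concrete, here is a minimal sketch in Python, using scikit-learn on a purely synthetic dataset; the features, the 40% flip rate, and the nearest-neighbor classifier are illustrative assumptions rather than any vendor’s actual pipeline. Flipping a slice of “malicious” training labels to “benign” measurably erodes the detector’s ability to flag malicious samples:

```python
# A minimal sketch of label-flipping data poisoning on a synthetic dataset.
# Features, flip rate, and classifier are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy telemetry: benign samples cluster near 0, malicious samples near 3.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
malicious = rng.normal(loc=3.0, scale=1.0, size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

def detection_rate(train_labels):
    """Train on (possibly tampered) labels; report recall on malicious test samples."""
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
    malicious_test = X_test[y_test == 1]
    return clf.score(malicious_test, np.ones(len(malicious_test)))

print("clean labels:   ", detection_rate(y_train))

# The attacker flips 40% of the malicious training labels to "benign".
y_poisoned = y_train.copy()
malicious_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
y_poisoned[flipped] = 0

print("poisoned labels:", detection_rate(y_poisoned))
```

Real poisoning campaigns can be far subtler than random label flips, targeting specific samples so that aggregate metrics barely move while particular attacks slip through.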

Data dynamism: AI models are built to cope with “noise,” but in cyberspace, malicious deviations are not random. Security professionals face dynamic and sophisticated adversaries that learn and adapt over time. Accumulating more security-related data might well improve AI-powered security models, but at the same time, it could lead adversaries to change their modus operandi, diminishing the efficacy of existing data and AI models. Data, in this case, actively shapes the observed reality rather than statically representing it as a snapshot.

For example, while additional data points might render a traditional malware detection mechanism more capable of identifying common threats, it might, theoretically, degrade the AI model’s ability to identify novel malware that considerably diverges from known malicious patterns. This is analogous to how mutated viral variants evade an immune system that was trained to identify the original viral strain.
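That dynamic can be sketched in the same hedged, synthetic terms (the feature distributions and the simulated “adaptation” below are assumptions for illustration only): a model trained on yesterday’s telemetry scores well on known malware while missing a campaign that deliberately mimics benign behavior.

```python
# A hedged sketch of concept drift on synthetic data; distributions and the
# simulated "adaptation" are illustrative assumptions, not real telemetry.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Historical training data: benign telemetry near 0, known malware near 3.
benign = rng.normal(0.0, 1.0, size=(1000, 4))
known_malware = rng.normal(3.0, 1.0, size=(1000, 4))
X = np.vstack([benign, known_malware])
y = np.array([0] * 1000 + [1] * 1000)

model = LogisticRegression().fit(X, y)

# Adversaries adapt: a new campaign mimics benign behavior on most features
# and diverges in a direction the training data never exhibited.
novel_malware = rng.normal(0.0, 1.0, size=(1000, 4))
novel_malware[:, 3] = rng.normal(-4.0, 1.0, size=1000)

print("recall on known malware:", model.score(known_malware, np.ones(1000)))
print("recall on novel malware:", model.score(novel_malware, np.ones(1000)))
```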

Unknown unknowns: Unknown unknowns are so prevalent in cyberspace that many service providers urge their customers to build their security strategy on the assumption that they’ve already been breached. The challenge for AI models stems from the fact that these unknown unknowns, or blind spots, are seamlessly incorporated into the models’ training datasets, where they receive an implicit stamp of approval and might never raise an alarm from AI-based security controls.

For example, some security vendors combine a slate of user attributes to create a personalized baseline of a user’s behavior and to determine the permissible deviations from that baseline. The premise is that these vendors can identify an existing norm to serve as a reference point for their security models. However, this assumption might not hold water: undiscovered malware may already reside in the customer’s system, existing security controls may suffer from coverage gaps, or unsuspecting users may already be subject to an ongoing account takeover.
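A simplified sketch makes the point; the volumes, window, and z-score rule below are assumptions for illustration, not how any particular vendor builds baselines. A baseline computed over a window that already contains compromised activity will happily certify that activity as normal:

```python
# A simplified sketch of a behavioral baseline contaminated by an unknown,
# ongoing compromise. Volumes, window, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

# 30 days of a user's daily outbound data volume (MB). The last 10 days
# already include slow exfiltration, but no one knows that yet.
normal_days = rng.normal(loc=200, scale=20, size=20)
compromised_days = rng.normal(loc=600, scale=30, size=10)
history = np.concatenate([normal_days, compromised_days])

# The "personalized baseline": mean and standard deviation over the window.
mu, sigma = history.mean(), history.std()

def is_anomalous(volume_mb, z_threshold=3.0):
    return abs(volume_mb - mu) / sigma > z_threshold

# Today the attacker exfiltrates at the same rate as the past 10 days.
print(is_anomalous(620))  # False: the breach has been folded into "normal"
```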

Errors: It is not a stretch to assume that even staple security-related training datasets are laced with inaccuracies and misrepresentations. After all, some of the benchmark datasets behind many leading AI algorithms and exploratory data science research have proven to be rife with serious labeling flaws.

Additionally, enterprise datasets can become obsolete, misleading, and erroneous over time unless the relevant data, and details of its lineage, are kept up-to-date and tied to relevant context.
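One modest check is a routine consistency audit that surfaces identical samples carrying conflicting labels for manual review, as in the sketch below (the pandas schema, hashes, and labels are illustrative assumptions, not a real benchmark):

```python
# A minimal sketch of a label-consistency audit over an illustrative dataset
# of hashed samples and analyst-assigned labels.
import pandas as pd

df = pd.DataFrame({
    "sample_hash": ["a1", "a1", "b2", "c3", "c3", "c3"],
    "label":       ["malicious", "benign", "benign", "benign", "benign", "malicious"],
})

# Hashes that appear under more than one label are candidates for mislabeling
# and warrant manual review.
conflicts = (
    df.groupby("sample_hash")["label"]
      .nunique()
      .loc[lambda n: n > 1]
)
print(df[df["sample_hash"].isin(conflicts.index)])
```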

Privacy-preserving omission: In an effort to render sensitive datasets accessible to security professionals within and across organizations, privacy-preserving and privacy-enhancing technologies, from de-identification to the creation of synthetic data, are gaining traction. The whole rationale behind these technologies is to omit, alter, or mask sensitive information, such as personally identifiable information (PII). But as a result, the inherent qualities and statistically significant attributes of the datasets might be lost along the way. Moreover, what might seem like negligible “noise” could prove significant for some security models, impacting outputs in unpredictable ways.
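A toy sketch illustrates the failure mode; the log schema, working-hours window, and generalization step are illustrative assumptions. Generalizing timestamps to calendar dates, a common de-identification step, renders a simple off-hours login rule meaningless:

```python
# A toy sketch of de-identification erasing a detection signal. The log schema,
# working-hours window, and generalization step are illustrative assumptions.
import pandas as pd

logins = pd.DataFrame({
    "user": ["alice", "alice", "bob", "bob"],
    "timestamp": pd.to_datetime([
        "2021-06-01 09:14", "2021-06-02 03:02",   # the 3 a.m. login stands out
        "2021-06-01 10:30", "2021-06-02 11:45",
    ]),
})

def off_hours_logins(df):
    # Flag logins outside a 06:00-22:00 working window.
    hours = df["timestamp"].dt.hour
    return df[(hours < 6) | (hours >= 22)]

print(off_hours_logins(logins))  # catches the 3 a.m. login

# Privacy-preserving generalization: keep only the calendar date.
generalized = logins.assign(timestamp=logins["timestamp"].dt.normalize())

# Every event now looks like a midnight login, so the rule flags everything
# and the genuine anomaly is indistinguishable from routine activity.
print(off_hours_logins(generalized))
```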

The road ahead

All of these challenges undermine the ongoing effort to fortify islands of trust in an AI-dominated cybersecurity industry. This is especially true in the current environment, where we lack widely accepted standards and frameworks for AI explainability, accountability, and robustness.

While efforts have begun to root out biases from datasets, enable privacy-preserving AI training, and reduce the amount of data required for AI training, it will prove much harder to fully and continuously inoculate security-related datasets against inaccuracies, unknown unknowns, and manipulations, which are intrinsic to the nature of cyberspace. Maintaining AI hygiene and data quality in ever-morphing, data-hungry digital enterprises might prove equally difficult.

Thus, it is up to the data science and cybersecurity communities to design, incorporate, and advocate for robust risk assessments and stress tests, enhanced visibility and validation, hard-coded guardrails, and offsetting mechanisms that can ensure trust and stability in our digital ecosystem in the age of AI.

Eyal Balicer is Senior Vice President for Global Cyber Partnership and Product Innovation at Citi.
