Chatbots, conversational systems, banking apps, smartphones, self-driving cars and even home devices like Alexa are rapidly becoming part of our daily lives. The mainstreaming of artificial intelligence (AI) across a broad variety of industries, from healthcare and financial services to autonomous vehicles and manufacturing, is driving ethical and governance concerns. Given the opacity of this technology, and the pervasiveness of its reach, increasingly fearful reactions have taken root in the public consciousness.
To address this widening trust gap, industry leaders must step forward to educate the public on what enterprise AI is realistically capable of, and what it is not.
Anik Bose and Venkat Raghavan, both partners at BGV, and Zelros CEO Christophe Bourguignat explore the theme of trustworthy AI as the foundation for implementing AI governance. They look at the challenges presented by AI model sprawl and the explosion of data clouds, and identify avenues where growth, investment and innovation are likely to coalesce. Trustworthy AI, they conclude, presents a necessary framework for the responsible and safe development of this game-changing technology.
BGV is elevating awareness and thought leadership around this topic in partnership with industry leaders like Samsung, NXP Semiconductors, ARM, Tech Mahindra and Global Corporate Venturing by co-sponsoring a specialised award on ethical AI for the Extreme Tech Challenge, a global startup competition focused on “tech for good.” To learn more about the competition and the award please visit Extreme Tech Challenge.
AI model sprawl and the widening trust gap
The early days of AI experimentation are over. Whether Google Search or Netflix recommendations, auto email correction or virtual assistants, consumer AI is already entrenched, invisible and pervasive in our daily lives. Enterprise AI is now going mainstream across a number of vertical use cases. A back-of-the-napkin calculation suggests that today's enterprises are running 25-plus AI models at once. That number will quickly scale to 50, then 75, and onwards along a steep upward trajectory. As more and more enterprises leverage AI-based systems, the integrity of those AI models and the impact of their decisions and recommendations become increasingly consequential in both breadth (cutting across an incredibly wide array of industries) and depth (making automated, data-driven decisions at an exponential rate without human intervention).
In this brave new world it is critical to ensure that AI-powered engines behave with the same level of accuracy and fairness across protected populations and customer sub-groups (that is, young or old, rural or urban, customers from different geographies and so on). It is equally important to avoid using biased variables in models (such as racial or ethnic origin, political opinion and so on) and to ensure ongoing monitoring for verification, explainability and stability over time. This is particularly crucial in a regulated industry like insurance, where legal compliance with the Insurance Distribution Directive in Europe, for example, is required.
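As a deliberately simplified illustration of the kind of sub-group check described above, the sketch below computes accuracy and approval rates per customer group and flags gaps beyond a tolerance. The group labels, threshold and data are hypothetical placeholders, not any vendor's actual methodology.

```python
# Illustrative only: checks whether a model's accuracy and approval rate
# stay within a tolerance across customer sub-groups (e.g. age bands).
import numpy as np

def subgroup_report(y_true, y_pred, groups, tolerance=0.05):
    """Return per-group accuracy and positive rate, flagging large gaps."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "accuracy": float((y_true[mask] == y_pred[mask]).mean()),
            "positive_rate": float(y_pred[mask].mean()),
            "n": int(mask.sum()),
        }
    accuracies = [m["accuracy"] for m in report.values()]
    rates = [m["positive_rate"] for m in report.values()]
    flagged = (max(accuracies) - min(accuracies) > tolerance
               or max(rates) - min(rates) > tolerance)
    return report, flagged

# Toy example: young vs. old customers on a handful of decisions
report, flagged = subgroup_report(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 1, 1, 0],
    groups=["young", "young", "young", "young", "old", "old", "old", "old"],
)
print(report, "needs review:", flagged)
```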
The introduction of AI into loan origination and financing paints a clearer picture of what is at stake. While the rise of algorithmic lending is helping companies raise loan production without raising delinquency or default rates, a bias in the AI governance model risks hurting, rather than helping, low-income and minority borrowers. “African Americans may find themselves the subject of higher-interest credit cards simply because a computer has inferred their race,” claims Nicol Turner Lee, a fellow with the Brookings Institution. Machine learning underwriting models have so many data points from which to make inferences that race, socioeconomic background and a host of other variables could influence a loan decision without the lenders even knowing. As long as the system’s operators cannot explain the set of signals used to derive the model’s outcome, the algorithm cannot be “fixed”. Bias in the outcomes may not even be the fault of the algorithm itself, but of those who develop it or those who feed data into the model to begin with. Trustworthy AI cannot emerge so long as the model remains a black box.
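One widely used way to surface which signals a black-box model actually leans on is permutation importance: shuffle one feature at a time and measure how much performance drops. The hand-rolled sketch below is a minimal version of that idea; the model object and data are assumed placeholders (anything exposing a scikit-learn-style `.predict`), not a description of any lender's real tooling.

```python
# Illustrative sketch: permutation importance estimates how much a model
# relies on each input signal by measuring the accuracy drop when that
# signal is shuffled, breaking its link to the outcome.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, j])  # destroy feature j's information
            drops.append(baseline - (model.predict(X_shuffled) == y).mean())
        importances[j] = np.mean(drops)   # larger drop => heavier reliance
    return importances

# Usage with any classifier exposing .predict, e.g. scikit-learn:
#   from sklearn.linear_model import LogisticRegression
#   clf = LogisticRegression().fit(X_train, y_train)
#   print(permutation_importance(clf, X_test, y_test))
```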
Data sets’ contribution to bias
Data sets may contribute to bias due to the existence of unbalanced or unrepresentative data in an AI model’s training set. If the data is more representative of some classes of people than others, the model’s predictions may also be systematically worse for the under-represented classes. Additionally, learning from the prediction data seen by deployed models can further amplify the bias. To aggravate the predicament, a deployed model may also suffer bias drift over time: it is entirely possible that, after a model has been deployed, the distribution of the data it sees differs from the distribution of the training dataset, introducing bias into the model over time. In these cases, operators of the AI engine need to monitor the bias metrics of a deployed model continuously and adjust for this bias drift. There are a number of techniques to mitigate bias and improve fairness, but the imperative for a consistent methodology backed by an AI governance framework is clear.
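A minimal sketch of such continuous monitoring might look like the following: the gap in positive-prediction rates between two sub-groups is recomputed on each window of live predictions and compared against the value recorded at training time. The group names, tolerance and alerting logic are illustrative assumptions, not a production design.

```python
# Illustrative bias-drift monitor: recompute a simple fairness metric
# (difference in positive-prediction rates between two groups) on each
# window of live predictions and alert when it drifts from the
# training-time baseline.
import numpy as np

def positive_rate_gap(y_pred, groups, group_a, group_b):
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rate_a = y_pred[groups == group_a].mean()
    rate_b = y_pred[groups == group_b].mean()
    return abs(rate_a - rate_b)

class BiasDriftMonitor:
    def __init__(self, baseline_gap, tolerance=0.05):
        self.baseline_gap = baseline_gap  # gap measured on the training data
        self.tolerance = tolerance

    def check(self, y_pred, groups, group_a="urban", group_b="rural"):
        gap = positive_rate_gap(y_pred, groups, group_a, group_b)
        drifted = abs(gap - self.baseline_gap) > self.tolerance
        return gap, drifted

# Example: the training-time gap was 2 points, but a live window of
# predictions shows a much larger gap, so the monitor raises an alert.
monitor = BiasDriftMonitor(baseline_gap=0.02)
gap, drifted = monitor.check(
    y_pred=[1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
    groups=["urban"] * 5 + ["rural"] * 5,
)
print(f"current gap={gap:.2f}, drift detected={drifted}")
```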
BGV portfolio company Zelros (AI-powered recommendations for insurance products) has been a pioneer in introducing the first standard for ethical enterprise-grade AI, three years ahead of the industry. This standard mitigates model drift, guarantees explainability and supports model fairness, and is now fully integrated into its platform. The company will soon release a new capability to mitigate biases detected in fairness reports through synthetic data generation. As AI models proliferate across a broader swath of enterprise use cases, industry standards will need to keep pace.
The explosion of data clouds and our increasing vulnerability
The AI model sprawl comes hand in hand with an explosion of data clouds as data keeps pouring into cloud and software-as-a-service (SaaS) platforms. Cloud-native, data-first applications need to upload, process and analyse massive amounts of data in data clouds. Surging demand for analytics-ready data that distils business insights and delivers revenue-generating services translates into the next phase of enterprise application innovation, powered by AI. There are over 400 million SaaS datasets siloed globally, according to Snowflake. Data clouds eliminate these silos, democratising data access across cloud and SaaS platforms and enabling developers to build rich applications such as customer relationship management services, sales and marketing automation, product analytics and business intelligence platforms.
AI-based applications that leverage cloud data and algorithmic learning to execute critical business functions, however, leave us exposed to adversarial attacks, which attempt to fool AI models by exploiting structural vulnerabilities within these systems. Many traditional security vulnerabilities, such as code injection and cross-site attacks, occur through the submission of maliciously crafted input, so deceptive inputs are nothing new. When AI models are involved, however, the stakes are fundamentally higher: any application built on an AI model can be attacked through maliciously crafted input fed to that model.
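To make the threat concrete, the toy sketch below implements the fast gradient sign method (FGSM), one well-known recipe for crafting such deceptive inputs, against a small hand-built logistic-regression model. The weights and inputs are made up for illustration; real attacks target far larger models, but the mechanics are the same: nudge each input feature in the direction that most increases the model's loss.

```python
# Toy fast gradient sign method (FGSM) attack on a hand-built logistic
# regression model: perturb each input feature slightly in the direction
# that most increases the loss, flipping the model's decision while the
# change to the input stays small.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" model: p(y=1 | x) = sigmoid(w.x + b), weights chosen for illustration
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)

def fgsm(x, y_true, epsilon=0.4):
    """Craft an adversarial version of x within an L-infinity budget epsilon."""
    # For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.8, 0.3, -0.2])        # benign input, scored above 0.5 (class 1)
x_adv = fgsm(x, y_true=1.0)

print("clean score:      ", round(float(predict(x)), 3))      # ~0.76 -> class 1
print("adversarial score:", round(float(predict(x_adv)), 3))  # pushed below 0.5
```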
Because AI systems do not simply process data, but make decisions, deceptive inputs present a much broader, and deeper, threat than they did traditionally. David Danks describes a few cases in the Institute of Electrical and Electronics Engineers’ magazine Spectrum: “Perhaps the most widely-discussed attack cases involve image classification algorithms that are deceived into ‘seeing’ images in noise, or are easily tricked by pixel-level changes so they classify, say, a turtle as a rifle. Similarly, game-playing systems that outperform any human (for example, AlphaGo) can suddenly fail if the game structure or rules are even slightly altered in ways that would not affect a human. Autonomous vehicles that function reasonably well in ordinary conditions can, with the application of a few pieces of tape, be induced to swerve into the wrong lane or speed through a stop sign.”
Adversarial AI poses risks of data poisoning, online system manipulation, transfer learning attacks and breaches of data confidentiality, to name a few. Given the pervasiveness of the decisions in question, it is not too difficult to imagine these attacks leading to life-threatening consequences.
The pace of change
It took 26 years for cybersecurity practices to move from academia to commercialisation, but trustworthy AI should emerge much faster. From the first malicious hack at Massachusetts Institute of Technology in 1961 (when Tech Model Railroad Club members hacked their high-tech train sets in order to modify their functions) to the introduction of the first commercial antivirus program by McAfee in 1987, an entire industry had to awaken to the threats, invest in new solutions and adopt best practices. Yet, today, enterprises spend billions of dollars every year on cybersecurity solutions, and those investments require little justification. We envision a similar adoption cycle in the AI governance space, only in a dramatically trimmed time horizon.
Although it takes around six months to validate and release an AI model, an average of one in three models is abandoned or shelved due to a lack of transparency or an unfavourable reward-versus-risk profile compared with standard technologies. Statista estimates that over $7bn worth of AI model investments are currently locked up due to gaps in model governance. All of these examples highlight the cost, brand and liability issues associated with a lack of AI governance controls. Adversarial AI, by contrast, has received less attention outside academia. Although the research community has been paying attention to adversarial AI for some years now, there appears to be an awakening of sorts beyond that small niche, as evidenced by the thirtyfold spike in published papers on the topic since 2018, according to Google Brain research scientist Nicholas Carlini.
Growing headlines around AI governance issues are trickling down into awareness of the scaffolding required to build that automated data landscape, and of the vulnerabilities therein. Adversarial AI is still in its infancy, and we believe it may take more high-profile attacks to capture headlines, generate press and stimulate further interest before broader commercial investment takes off.
Conclusions
We are optimistic that innovations in AI governance and adversarial AI are poised to see growth and commercial traction over the next 10 years, for several reasons:
An independent AI governance layer is required across the model life cycle, spanning training, deployment and operations, to establish a set of guard rails that improve trust and confidence in AI systems.
While the debate around ethical AI is only in its nascent stages, it is moving swiftly, and regulation is at the doorstep. Already, the EU has issued a White Paper on Artificial Intelligence, and will put forward a horizontal regulatory proposal on the topic this year. We expect the US to follow the example being set by the EU.
Large technology companies like Google, Microsoft and IBM, amongst others, are keenly aware of the need to improve the trustworthiness of AI at their enterprises and are setting up AI ethics boards within their organisations. Awareness of the challenge, therefore, has already reached critical levels amongst key audiences.
The attack surface for adversarial AI, while new, is substantial and worthy of innovation focus, given that $40bn was invested in AI in 2019 alone. Contrast this with an aggregate $45bn of total investment in cybersecurity over the 16-year period from 2003 to 2019.
Last but not least, early venture capital funding has begun to trickle into these areas. Arthur.ai raised $15m in series A funding to tackle AI model monitoring and performance optimisation, bias detection and explainability. BGV portfolio company Zelros sets a new standard in AI governance, covering model fairness, explainability and model drift. Recently, Resistant AI raised $2.75m in seed funding to develop an AI system that protects algorithms from automated attacks.
AI governance is rapidly becoming a must-have that will only accelerate further with future regulatory pressures. Consequently, we are bullish on the prospects for this new frontier, and we look forward to seeing how this investment and innovation landscape takes shape.