The Ethical AI Challenge

“You are now being recorded for quality training purposes.” How many times have you heard this automated message in recent days, months or years? The more recent instances may prompt you to ask: was the recording of my conversation used to train an employee, or to train a machine learning algorithm? A further question then emerges: should corporations have a responsibility to disclose when customer activity is used as training data for an AI tool? This is one of many challenges that companies have to grapple with in today’s data economy. In our new era, driven by artificial intelligence, ethical questions have arisen around transparency, data biases, model biases, security, privacy and the governance of AI technologies. In September 2021, executives from BGV, IBM, Silicon Valley Bank, and other prominent corporations met at the Global Corporate Venturing Innovation Summit, a conference in Monterey, California, to discuss the Ethical AI Challenge.

A top-tier investor warned that “the provenance of truth is under attack.” From a national security perspective, a consensus has emerged around the threats posed by deepfakes, fake news and attempts at election interference. Access to social media platforms and readily available data sets, coupled with cutting-edge AI tools and the capacity to generate synthetic content, arms bad actors with the ability to weaponise content and drive agendas in ways that exploit vulnerabilities in our porous information landscape.

According to US national security agencies, China has taken the most active role among countries attempting to interfere in US elections. Given Beijing’s posture at home, where companies like WeChat, Baidu and AliPay openly surveil their own citizens, this comes as little surprise. Because China lacks the restraints faced by Western governments (whose agencies require a ‘lawful intercept’ to retrieve personal data on US citizens), Beijing enjoys unfettered access to massive data sets of personal digital transactions, geo-locations and behavioural patterns with which it can monitor trends, ensure citizen compliance or train its own homegrown AI engines.

“This started with ad-tech and telemetry data,” a strategic investor explained. The mission of ad companies is to know everything they can about their consumers, just like Facebook, Twitter and Google know everything about their users. This is how they sell advertising space, and it raises real privacy issues. Without regulatory oversight, this becomes an arms race on the black market, where personal data is being bought and sold for profit. “I am amazed,” he said, “that there has not been more blowback from consumers or from the investor base.”

IBM’s Betsy Greytok, VP of Ethics and Policy, shared how her company has taken a very firm stand on data privacy, going so far as to appoint a Chief Privacy Officer who oversees how data are collected, analysed and processed across all IBM properties. The conglomerate has created an AI & Ethics review board and a privacy advisory committee that cut across several layers of the organisational hierarchy, driven by the belief that ethics cannot be decided by one person but requires the input of stakeholders across business roles and functions.

This same philosophy informs IBM’s efforts to identify biases in AI systems, algorithms and any automated decision-making processes emerging from its computer models. Given AI systems’ reliance on clean training data to develop the models that then govern decision-making across much larger data sets, proactive bias detection measures are required early in the design process. The developer teams designing these models should therefore reflect diverse perspectives and backgrounds, so as to catch biases before they propagate.
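What might such a proactive check look like in practice? The sketch below is a minimal illustration, not IBM’s methodology: it measures the gap in positive-outcome rates across groups of a protected attribute in a training set, the kind of early warning signal a review team might act on before modelling begins. The column names and the 0.1 threshold are assumptions for illustration only.

```python
# Minimal sketch of a proactive bias check on training data (illustrative,
# not IBM's method). Assumes a tabular dataset with a hypothetical protected
# attribute ("gender") and a binary outcome label ("approved").
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, protected: str, label: str) -> float:
    """Largest difference in positive-outcome rates across protected groups."""
    rates = df.groupby(protected)[label].mean()
    return float(rates.max() - rates.min())

training_data = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "approved": [1, 0, 0, 0, 1, 1, 1, 0],
})

gap = demographic_parity_gap(training_data, "gender", "approved")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # the threshold is a design choice, not a standard
    print("warning: outcome rates are skewed across groups; review before training")
```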

Greytok shared a story of a black computer programmer handed a design project. “The problem,” she explained, “is that he does not feel equipped to wade into an ambiguous discussion of ethics, for which he does not feel qualified or motivated. He is a developer and he just wants to tackle the challenge in front of him.” The key, she concluded, is to equip these teams with the right education and an action-item checklist that gets them asking the right questions. Without the right training, you cannot rely on data scientists or developers to lean into their own ethnic backgrounds as a remedy against design flaws in AI systems.

Emma Eschweiler, VP at Silicon Valley Bank, countered that while all of these considerations are noble undertakings for Fortune 500 companies, the perspective looks very different for early-stage startups. “We deal with 1,500+ startups, many of them in the AI space,” Eschweiler said. “Many of these companies cannot afford the extra oversight costs required to implement all of these checks, and many do not have the appetite to slow down their innovation teams while they are struggling to release products, scale their businesses and keep the lights on.” The stage of the business, therefore, fundamentally informs a company’s approach to these issues. “The question is, how do you encourage the right kind of design from the beginning?”

BGV General Partner Eric Buatois believes investors have a strong role to play in this new era of Ethical AI governance, and that they have a responsibility to embed values into their due diligence process. “We are a very hands-on VC, working closely with our portfolio companies,” he said, “and for us, these Ethical AI governance issues show up in our due diligence process right at screen zero.” One of the firm’s portfolio companies, Zelros, has introduced ethical AI governance principles into its insurtech tool, which ingests data around insurance claims, policy prices, voice conversations, underwriting documents and other data sets to automate tasks like claim handling, recommendations and advice for policyholders and insurance advisors. The award-winning company has adopted an open-source model, putting its code on GitHub for the world to see, in a remarkable effort to increase transparency and explainability.
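It is worth making concrete what “explainability” can mean here. The following is a generic sketch, not Zelros’ actual code (its repository should be consulted directly): for a simple linear model, each prediction decomposes into per-feature contributions that an insurance advisor or policyholder could inspect. All feature names and data below are invented.

```python
# Generic explainability sketch (not Zelros' code): decompose a linear
# model's prediction into per-feature contributions to the log-odds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
feature_names = ["claim_amount", "policy_age_years", "prior_claims"]
X = rng.normal(size=(200, 3))                  # invented policyholder features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # invented claim-approval labels

model = LogisticRegression().fit(X, y)

x = X[0]                            # explain one individual prediction
contributions = model.coef_[0] * x  # per-feature log-odds contributions
for name, value in zip(feature_names, contributions):
    print(f"{name:>18}: {value:+.3f}")
print(f"{'intercept':>18}: {model.intercept_[0]:+.3f}")
```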

“Early-stage companies are vulnerable and sensitive to capital injections,” Buatois explained. “However, we are actually quite surprised at the level of receptivity these entrepreneurs show in implementing small changes to their design processes that lower product risk and brand risk, while mitigating the possibility that they will face more expensive ‘switching costs’ at a later stage in the business, when change is much more challenging.” Buatois leads BGV’s participation in the Extreme Tech Challenge, a global startup competition for purpose-driven innovation, and this year’s challenge inaugurated an Ethical AI Award. Of nearly 4,000 applicants, some 40 qualified for the Ethical AI track as finalists, and many had already been thinking of novel ways to tackle bias detection, privacy, explainability and climate-tech challenges.

The panel went on to discuss black-box challenges, model drift and the possibility that audits may be introduced into AI systems development, much as they exist today in the cybersecurity world. Some argued that AI technologies will not face regulation until a major breach or catastrophe arises, and that oversight will then proceed in a reactive manner, with political actors waiting until the last minute to act. In light of recent high-profile headlines around Facebook, however, it seems oversight may be coming sooner rather than later. The pathway remains unclear.
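One concrete form such an audit could take is a routine drift check. The sketch below computes a population stability index (PSI), a widely used drift statistic, between a feature’s distribution at training time and its live distribution; the bin count and the 0.2 alert threshold are conventional choices, assumed here for illustration.

```python
# Sketch of a model-drift audit check using the population stability index
# (PSI). The 0.2 alert threshold is a common convention, not a standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(np.concatenate([expected, actual]), bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard empty bins before the log
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5_000)  # distribution at training time
live_feature = rng.normal(0.4, 1.2, 5_000)   # shifted live traffic
score = psi(train_feature, live_feature)
print(f"PSI = {score:.3f}" + ("  (drift alert)" if score > 0.2 else ""))
```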

Independent of government posture, these discussions at Global Corporate Venturing’s Innovation Summit represent an effort by industry practitioners to lead responsible change from within. Prominent venture capital and growth investors across all industries have already begun adopting ethics-based frameworks for evaluating AI governance, while global corporations like IBM, Google and Microsoft are establishing their own standards for Ethical AI governance. The question is whether they will succeed in forging a set of shared ethical standards and best practices that can resonate across the industry, the investor community and the startup innovation landscape.

Since this article was penned, a group of industry practitioners and strategic investors has come together to establish the Ethical AI Governance Group (EAIGG), a community platform dedicated to promoting the adoption of responsible AI governance and sharing best practices. To learn more, join the EAIGG on LinkedIn.