“God is an algorithm.”

So said author and speaker Yuval Noah Harari in a recent podcast I listened to. Yuval lists nuclear war, climate change and technological disruption as the three main challenges mankind will have to wrestle with in the coming decades.
This is Yuval’s list, and you may disagree or have other topics at the top of yours – eradicating malaria, for instance. What is thought-provoking to me, as a trained engineer, a venture capital investor and, more broadly, an optimist about the positive change technological innovation can bring to mankind, is that last topic: technological disruption.
I witnessed the boom and bust of the first Internet wave up close, and I have helped build an investment firm that has backed nearly 100 technology companies in the US, China, India and Europe.
I spent 12 years with Nokia from 1990 onwards, leaving the company in 2002. At its peak, Nokia’s market capitalisation reached $245bn, making it one of the world’s largest companies at the time. Little did I imagine that seven of the 10 largest companies in the world today, by market capitalisation, would be technology companies. The other three are JPMorgan, a bank that is very much a technology company; pharmaceutical group Johnson & Johnson; and conglomerate holding company Berkshire Hathaway.
When the algorithms in the systems being developed by the tech giants of today, or by the VC-backed companies of tomorrow, decide which patient is prioritised for a liver transplant, which cars have priority in a traffic jam, who gets a mortgage, who gets admitted to university, who receives social benefits, who gets prosecuted for tax evasion, and so on, you begin to see the sinister meaning behind Yuval’s term “God is an algorithm”.
Yuval was putting his finger on the potential dangers of the development of artificial intelligence, software, processing power and billions of hyperconnected devices. Our systems of corporate governance, societal organisation and liberal economics, together with the continued drive for efficiency, are likely to make humans redundant wherever possible – step by step, industry by industry. His point is that, if we do not find meaningful ways for humans to remain useful in society, we could end up with what he calls a “useless” class of people – a class that is irrelevant to the interests of companies.
This may be a dystopian vision, but precisely because Yuval is not a technologist, I take heed of his views. As is so often the case, an industry outsider is more likely to tell us where we are heading than the expert insiders wedded to the tech industry. In fact, we may be better off having outsiders set the guardrails for our industry.
Lina Khan, a brilliant young lawyer, may be such a person. She was thrust into the limelight in 2016 when she wrote a paper – Amazon’s Antitrust Paradox – while still at Yale Law School. The paper essentially reframed what monopoly power is and how it should be viewed under anti-trust legislation, in this case US legislation.
In my view, it is unavoidable that we in the technology industry will be subject to substantially more regulation and government intervention. This is not necessarily a bad thing. Many industries, such as finance, telecoms, transportation and healthcare, function under regulation and supervision. When deeply human and ethical choices are involved – concerning our health, access to education or our personal data – it is inevitable that regulation will come into play. The EU’s General Data Protection Regulation is a good example of privacy protection that is now being emulated in many jurisdictions around the world.
What has brought me to write this note is the apparent lack of recognition by our industry that we have to take ownership of, and responsibility for, the dialogue about the impact we have on society. Many entrepreneurs are already working to shape regulations relevant to their businesses and lobbying politicians, for instance to secure favourable tax treatment of stock option plans. Some entrepreneurs are part of handpicked governmental advisory bodies and are working on many other important topics.
However, we need to shift the dialogue away from these tactical conversations towards the issues that really matter for our mission. We need to help educate not only politicians but also policymakers, lobbyists, influencers, trade unions, educators and journalists about how hyperconnectivity and algorithmic support can create augmented intelligence, with humans in the driver’s seat.
It is easy to imagine the next populist movement being directed not against globalisation and immigration, but against technology and the displacement of workers and disruption of the established order that it creates. It is a narrative in the making, and much too close for comfort for us all. Engage in the dialogue where you can. Make a difference, and let us make sure the human is in charge.
This is an edited version of an article first published on Medium