Deus ex machina? The evolution of AI

Artificial intelligence (AI) is the antagonist of countless works of science fiction and is repeatedly cited by leading academics as a potential threat to human life. Yet the drive towards creating smarter and more functional AI continues unabated.

Renowned physicist Stephen Hawking has warned that “full artificial intelligence could spell the end of the human race” as people fail to keep up with the rate at which AIs could evolve. SpaceX CEO Elon Musk, who has invested $10m in AI firms just to keep “an eye on what is going on”, has called for international regulation and likened AI development to summoning a demon. Both Oxford University’s Future of Humanity Institute and Cambridge University’s Centre for the Study of Existential Risk – research institutes that monitor potential threats to humanity – have indicated that the rise of a true artificial intelligence could have significant repercussions.

But have they got it right? What risks are posed by AI, and what can we gain from its development? More importantly for Global University Venturing, how are universities involved, and which university-linked companies are behind the surge towards AI?

In many ways, AI is already here. Hawking made his warning through speech software based on a simple AI. Should a true AI rise, it might consider gamers akin to Nazis, given that legions of Xbox and PlayStation players have wiped out billions upon billions of AIs across numerous titles over the past couple of decades. Most people even carry an AI around in their pocket, with smartphone personal assistants such as the iPhone’s Siri driven by the technology.

However, the plane-flying, chess-playing, bread-baking variants of AI are known as narrow AI – simple programmes with no capability to work beyond their programming. The type of AI driving both curiosity and concern is general AI – a computer capable of independent thought and with the ability to learn. Estimates of when such an AI will appear vary, and have been proven wrong time and again in the past. And yet the ambition to create one continues, with the most intriguing developments in AI happening on university campuses and driven by companies utilising that research.

Here come the AIs

Internet giant and Stanford University spin-out Google has become somewhat obsessed with AI. Not content with buying one of the world’s most advanced robotics firms in 2013, when it acquired Massachusetts Institute of Technology (MIT) spin-out Boston Dynamics, the company paid £400m ($620m) last year for AI startup DeepMind – one of its largest European acquisitions to date.

DeepMind was co-founded in 2011 by Demis Hassabis. Once ranked the second-best chess player in the world under the age of 14, the Cambridge-educated Hassabis first made his mark on the AI world working on famous computer game titles such as Theme Park and Black & White before establishing his own gaming firm, Elixir Studios. He later returned to academia at University College London (UCL) to complete a PhD in cognitive neuroscience, publishing multiple influential papers along the way. Following the completion of his doctorate in 2009, he continued research into AI and neuroscience at UCL as a Wellcome Trust research fellow, while also acting as a visiting researcher at MIT and Harvard.

Since Hassabis partnered UCL peer Shane Legg and entrepreneur Mustafa Suleyman to form DeepMind, the company has advanced rapidly. Its ultimate goal is to “solve intelligence”, which Hassabis is pursuing by “attempting to distil intelligence into an algorithmic construct [which] may prove to be the best path to understanding some of the enduring mysteries of our minds”. DeepMind’s most celebrated breakthrough before the Google acquisition was a neural network that learns how to play computer games in the same way humans do, and is capable of beating expert human players.
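
DeepMind’s published research describes that system as deep Q-learning – reinforcement learning in which a neural network estimates the value of each action from raw screen pixels. The sketch below strips the idea to its core: a minimal tabular Q-learning loop against a hypothetical environment exposing reset() and step() methods, standing in for both the learned network and the game emulator.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. DeepMind's published agent replaced the
# table with a convolutional neural network reading raw screen pixels; the
# environment (env) here is a hypothetical stand-in with reset()/step().

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration

def q_learn(env, actions, episodes=1000):
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore
            if random.random() < EPSILON:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Core Q-learning update: nudge the estimate towards
            # reward + discounted value of the best next action
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state
    return q
```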

Following the acquisition, DeepMind went on an AI hunt of its own, picking up two Oxford University spin-outs, Dark Blue Labs and Vision Factory, in deals said to be worth tens of millions of dollars. Together, the two firms provide eyes and ears for the DeepMind project.

Dark Blue Labs had been working on machine understanding of language – a key focus for Google’s search engine as it battles to better understand typed and spoken searches on its Android smartphone operating system.

Meanwhile, Vision Factory has been developing visual recognition systems, allowing AIs to understand quickly and accurately what they are looking at. The software could find a home in camera-based search at Google, or at the helm of one of the firm’s self-driving cars. In addition to the acquisitions, Google made a substantial donation to Oxford’s computer science and engineering departments. At the time, Mike Wooldridge, head of computer science at Oxford, said: “Machine learning is a technology whose time has come. We have invested heavily in this area and we are truly excited at the prospect of what we can achieve together with Google.”

The latest development at DeepMind is a prototype computer that mimics the human brain’s short-term memory, allowing it to store information that it can later retrieve and adapt to a task for which it has not been programmed. Called a Neural Turing Machine, the system will, DeepMind hopes, in essence be able to program itself. The potential of such an AI in computing is staggering: if developed further, a computer could learn and adapt, finding solutions to problems it has not been programmed to calculate, and write its own programmes and algorithms.
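
DeepMind’s Neural Turing Machine paper couples a neural network controller to an external memory matrix that it reads and writes through differentiable, content-based addressing. As a rough illustration, the sketch below implements just the content-based read described in the paper – cosine similarity between a key and each memory row, sharpened into attention weights – in numpy; the write heads, location-based shifts and the learned controller are all omitted.

```python
import numpy as np

# Minimal sketch of the content-based read used in a Neural Turing Machine:
# the controller emits a key vector, the memory rows most similar to the key
# receive the highest attention weights, and the read is their weighted sum.
# This omits write heads, location-based shifts and the learned controller.

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0) -> np.ndarray:
    # Cosine similarity between the key and every memory row
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    # Sharpened softmax turns similarities into attention weights summing to 1
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    # Differentiable read: a weighted blend of all memory rows
    return weights @ memory

memory = np.random.randn(128, 20)            # 128 slots, 20-dimensional contents
key = memory[3] + 0.1 * np.random.randn(20)  # a noisy cue for slot 3
print(content_read(memory, key))             # recalls something close to slot 3
```

Because every step is differentiable, the whole read can be trained by gradient descent alongside the controller – which is what lets the system learn, rather than be programmed, to use its memory.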

Not content with putting computer programmers out of a job, AIs could also be the death knell for journalism written by the human hand.

Already given its own column by news provider Forbes, Narrative Science has an AI that can take complicated data sets and turn them into readable reports. The AI started life as StatsMonkey, a research project at Northwestern University involving computer science and journalism students. Realising its potential, the team spun the research out into Narrative Science in 2010 and redeveloped the AI, now called Quill.
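
Quill’s internals are proprietary, but the underlying discipline – data-to-text generation – can be boiled down to selecting the most newsworthy facts in structured data and rendering them through templates. The sketch below is a deliberately naive illustration of that idea; the field names, thresholds and phrasing are invented for the example and bear no relation to Narrative Science’s actual system.

```python
# Hedged sketch of template-based data-to-text generation, the simplest form
# of what systems like Quill automate. Field names and templates are
# illustrative inventions, not Narrative Science's actual schema.

game = {"home": "Bulls", "away": "Knicks", "home_pts": 104, "away_pts": 96,
        "top_scorer": "Derrick Rose", "top_pts": 32}

def recap(g: dict) -> str:
    winner, loser = (g["home"], g["away"]) if g["home_pts"] > g["away_pts"] else (g["away"], g["home"])
    margin = abs(g["home_pts"] - g["away_pts"])
    # Angle selection: a blowout and a close game get different framings
    angle = "cruised past" if margin > 15 else "edged" if margin <= 5 else "beat"
    return (f"The {winner} {angle} the {loser} {max(g['home_pts'], g['away_pts'])}-"
            f"{min(g['home_pts'], g['away_pts'])}, led by {g['top_scorer']}'s "
            f"{g['top_pts']} points.")

print(recap(game))
# The Bulls beat the Knicks 104-96, led by Derrick Rose's 32 points.
```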

Focusing on sports, business and financial news stories – all of which tend to be built from data – the company quickly garnered attention. Since launching, it has raised $32.4m in external funding. Alongside Northwestern, Narrative has attracted investment from Battery Ventures, Jump Capital, Sapphire Ventures and financial firm USAA. It has also lured In-Q-Tel, the corporate venturing arm of the US Central Intelligence Agency (CIA).

While it is unlikely that the CIA made its investment on the back of some wild dream of turning Quill into a real-life version of Skynet, the AI could find itself at home making sense of the information the intelligence agency collects, as well as big data from other federal sources such as the National Security Agency, which collects and analyses communications on a global scale.

Columbia University spin-out eBrevia is also looking to make a career out of words, but is targeting the legal sector. Rather than writing documents, however, it is carving out a niche by offering to perform due diligence on legal contracts. With $1.5m in seed funding behind it, eBrevia’s founders are still developing the AI before releasing it to market.

Performed manually, the process is long-winded, taking up to several days per contract. It is also expensive, with junior associates’ time billed at somewhere in the region of $300 to $500 an hour to go through a document. Worse still, companies requiring the contracts may have given law firms a tight deadline and a set budget. This can hurt the bottom line, as clients will pay for only so many hours even if a contract needs more, and rushed reviews may even leave a contract containing errors.
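
eBrevia’s extraction technology is machine-learned and proprietary, but the crudest baseline for automated contract review – flagging sentences that mention provisions a reviewer must check – can be expressed in a few lines. The sketch below is that baseline only; the provision labels and patterns are illustrative assumptions, not eBrevia’s method.

```python
import re

# Crude baseline for automated contract review: flag sentences that look like
# provisions a reviewer must check. eBrevia's actual models are machine-learned;
# these patterns are illustrative assumptions only.

PROVISIONS = {
    "change of control": r"\bchange\s+of\s+control\b",
    "indemnification":   r"\bindemnif(?:y|ies|ication)\b",
    "termination":       r"\bterminat(?:e|ion)\b",
    "governing law":     r"\bgoverning\s+law\b|\bgoverned\s+by\s+the\s+laws?\b",
}

def flag_clauses(contract_text: str) -> dict:
    hits = {label: [] for label in PROVISIONS}
    # Naive sentence split; real systems use trained segmenters and classifiers
    for sentence in re.split(r"(?<=[.;])\s+", contract_text):
        for label, pattern in PROVISIONS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                hits[label].append(sentence.strip())
    return {label: found for label, found in hits.items() if found}

sample = ("This Agreement shall be governed by the laws of New York. "
          "Either party may terminate upon 30 days' notice.")
print(flag_clauses(sample))
```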

Law is also a focus for FiscalNote, an AI startup launched by University of Maryland students Jonathan Chen and Tim Hwang. The company has raised $8.2m in external funding, $1.2m of which came in a seed round before its December 2013 launch. The remaining $7m was raised in December 2014 in a series A round backed by a venture consortium.

The company focuses on predicting whether a bill proposed in the US will become law. By analysing the bill’s text, the industries it will affect, and all the legislators and committees involved, the AI can predict which legislators are likely to vote yes or no.
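
FiscalNote has not published its model, but at its core the task is supervised classification: learn from the features and outcomes of past bills, then score new ones. The sketch below shows that shape with scikit-learn, using TF-IDF over toy bill descriptions as a stand-in for the much richer structured features – sponsorship, committees, legislator histories – a real system would use.

```python
# Hedged sketch of bill-passage prediction as supervised classification.
# FiscalNote's actual features and model are proprietary; the bill texts and
# labels below are toy inventions for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_bills = [
    "appropriations for highway maintenance, bipartisan cosponsors",
    "repeal of energy subsidies, single sponsor, no committee report",
    "veterans healthcare expansion, broad committee support",
    "tax code overhaul, contested markup, narrow sponsorship",
]
passed = [1, 0, 1, 0]  # historical outcomes (toy labels)

# TF-IDF over bill text stands in for richer structured features
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_bills, passed)

new_bill = ["appropriations for bridge maintenance, bipartisan cosponsors"]
print(model.predict_proba(new_bill)[0][1])  # estimated probability of passage
```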

AI is also having an impact on big finance. With number-crunching an obvious forte of machine intelligence, Harvard and MIT startup Kensho has secured $25m since launching in 2013. Its latest $15m series A was led by Goldman Sachs, and will provide the investment bank with what co-founder Daniel Nadler describes as the equivalent of a “quant army”.

Cloud-based Kensho scans economic reports, policy changes, political events, drug approvals and company reports, and evaluates their impact on financial assets. Previously, only top hedge funds had the computing power to generate such big-data-driven predictions, but Kensho opens the door for a wider range of investors to harness the analytics.

Kensho can be asked questions in a simple Google-style search box – such as “What would happen to oil prices if this country went to war with that country?” or “What stocks would be affected if Apple discontinued production of iPads?” – and return an answer within minutes. Today, human analysts would take days to provide the same answer.
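
Kensho’s pipeline and data sources are proprietary, but questions of the form “what happens to asset X after event Y?” resemble a classic event study: align the asset’s returns around each historical occurrence of the event and average the paths. The sketch below shows that logic with pandas on invented prices and dates, purely as an illustration of the statistical shape of the problem.

```python
import pandas as pd

# Hedged sketch of event-study logic: align an asset's daily returns around
# each historical occurrence of an event and average them. Dates and prices
# are invented; Kensho's actual pipeline and data sources are proprietary.

def event_impact(prices: pd.Series, event_dates: list, window: int = 5) -> pd.Series:
    daily_returns = prices.pct_change()
    aligned = []
    for date in event_dates:
        loc = prices.index.get_loc(date)
        segment = daily_returns.iloc[loc - window: loc + window + 1].reset_index(drop=True)
        segment.index = range(-window, window + 1)  # days relative to the event
        aligned.append(segment)
    # Average return path across all occurrences of the event
    return pd.concat(aligned, axis=1).mean(axis=1)

dates = pd.bdate_range("2014-01-01", periods=250)
prices = pd.Series(range(100, 350), index=dates, dtype=float)
print(event_impact(prices, [dates[50], dates[120]], window=3))
```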

Another company that has worked with MIT on research and is targeting the financial sector, as well as medical research, is Sentient Technologies. Sentient emerged from stealth at the end of 2014 with $103.5m in series C backing, bringing its total raised to $143m, much of it secured while trading under its former name, Genetic Finance Holding.

The team behind Sentient is the same one that worked on the technology that would become Siri, the AI personal assistant on iPhone devices. Much like Kensho, Sentient is looking to transform big data into usable insights for the financial industry, but it has the advantage of scale: its processing nodes are spread across thousands of sites and it harnesses millions of CPUs, giving it serious punching power when it comes to crunching data.
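
As its former name suggests, Sentient’s reported approach rests on evolutionary computation: populations of candidate solutions – trading strategies, say – are evaluated in parallel across its distributed nodes, with the fittest recombined and mutated into the next generation. The sketch below is a minimal single-machine genetic algorithm over a toy fitness function, standing in for the strategy backtesting a distributed system would run at scale.

```python
import random

# Minimal single-machine sketch of a genetic algorithm: evaluate a population
# of candidate solutions, keep the fittest, and breed the next generation by
# crossover and mutation. The toy fitness function (count of 1s) stands in
# for backtesting a trading strategy across many distributed nodes.

GENES, POP, GENERATIONS, MUTATION = 20, 50, 100, 0.02

def fitness(candidate):            # toy objective: maximise the number of 1s
    return sum(candidate)

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP // 2]          # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENES)       # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < MUTATION else g for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()))  # approaches GENES as evolution converges
```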

With backing from Tata Communications and several other unnamed institutional and corporate partners, Sentient is now looking at how it can expand beyond the financial industry into healthcare, fraud detection, public safety, e-commerce and other areas.

What are the risks?

While none of the above is yet at the level of sentience – the preserve of famous AI antagonists such as HAL 9000 of 2001: A Space Odyssey or the nuke-happy Skynet of the Terminator film series – there are several risks in their development. But what are those risks, what benefits counteract them, and where could the evolution of AI take us?

Speculation about artificial life has long fascinated mankind, stretching back as far as the ancient Greek myth of Talos, a giant man of bronze who protected the island of Crete from invaders. Over the centuries, AI has moved from mythology firmly into science fiction, with a good number of prominent AI-based characters appearing in books, films and other media.

The difference of opinion over AI’s impact on humanity is reflected in science fiction, which is littered with both positive and negative representations – such as Star Trek’s Data, a benevolent android aiming to integrate himself into human society, and the Borg, a cybernetic hive-mind attempting to add all life to its collective.

Hypotheses on how AI will affect human life generally stem from the moment an AI becomes fully sentient and can match a human mind. It is thought that any sentient AI would not remain on an equal footing for long: given the wealth of information available through the internet and other sources, the only variable determining an AI’s development would be the amount of resources it has available to process and learn from that data.

The benefits of such an AI are limitless – it could analyse all of the world’s data and compute the best solutions to the world’s problems. However, therein lies the greatest risk. A sentient AI would not be burdened by human emotions or biases, and would lack a sentimental tie to humanity. As Cambridge professor Huw Price puts it: “People sometimes complain that corporations are psychopaths, if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much, much cleverer and much, much faster.”

From this point of view, it would appear that the risk lies in who programs the AI in the first place, and why. For example, a general AI with no biases may look at climate change data and conclude that a major course correction in energy policy is needed. Given the power, it may decide to do everything it can to override human decisions on climate change – decisions generally made through political manoeuvring between science and the energy industry – seeing them as inefficient or unimportant to the ultimate goal of sustaining life, both human and its own.

A corporation, however, could design an AI with biases that result in it manipulating data to serve one entity. To stay with the climate change example, an AI given the role of CEO at an oil firm could use that computational power not to tackle the issue but to accelerate it as it strives for bigger profits. This could extend to a race between corporations or countries to build ever more efficient AIs, and a rogue state creating a powerful AI completely unburdened by programmed safeguards – or able to override its own programming – could prove disastrous.

There are also more practical problems, such as mass unemployment. As noted above, lawyers, journalists and programmers could all find themselves out of a job thanks to AI, and that is just the tip of the iceberg. Combined with robotics research and 3D printing, a sufficiently evolved AI could theoretically generate specific AIs, and the supporting resources, to perform just about any task. This presents a choice: wealth inequality, already spiking, could continue to funnel money upwards, depending on who owns the AIs, or we could embrace the technology to change our whole approach to work – handing to AIs the day-to-day tasks necessary to keep society running while humans find other uses for their time.

These and other issues need to be considered at every step of AI’s development. And, as both the origin of research into AI and the overseers of its evolution, universities will play a pivotal role in ensuring AIs remain beneficial to humanity. While the risks are great, the outcome could prove positively transformational for mankind.
