Specialist computer processing chips are poised to entrench the gains of deep learning-driven artificial intelligence, two decades on from the rise of machine learning.
Deep learning algorithms already outperform the human eye when classifying pictures, and the costs of this and other image recognition-based tasks have plummeted.
The latest annual report from Stanford University’s AI Index pegs the cost of training an image classifier to benchmark accuracy on the ImageNet dataset, using cloud computing, at $12, against $2,323 two years earlier.
The timeframe for training large image classification systems was estimated at just 88 seconds, down from about three hours in October 2017.
The impact can be seen in image recognition powering ever more use-cases, everything from agricultural problem-solving to computer vision-assisted driving systems.
On social media, regulators are scrambling to contain more iniquitous applications, as “deepfaked” photos and computer-generated news stories vie to hoodwink users.
More encouraging is the progress of AI drug developers such as Insilico Medicine, which revealed it had designed a potential drug in an astonishing 46 days.
Admittedly, its proof-of-concept profited from targeting proven therapeutic mechanisms. But the AI still crunched through 30,000 different molecular structures, a tantalising hint at the efficiencies ahead.
Insilico has now turned its attention to the novel coronavirus, uncovering six potential molecules to combat the disease and publishing details of several hundred further relevant compounds. Meanwhile, an AI-designed treatment for obsessive compulsive disorder from University of Dundee spinout Exscientia has entered clinical trials.
All of this promise – and danger – has grown alongside the advances of deep learning.
Basic machine learning relied on highly structured data, but deep learning models are configured to operate far more autonomously.
The approach loosely resembles the human brain, typically relying on multi-layered “artificial neural networks” that learn progressively abstract representations of raw data inputs.
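For illustration, that layered structure can be sketched in a few lines of Python – the layer sizes, random data and plain-NumPy implementation below are purely illustrative, not drawn from any production framework.

```python
# A minimal sketch of a multi-layered "artificial neural network": raw inputs
# pass through stacked layers of weights and non-linearities. All sizes and
# data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)  # a standard non-linearity between layers

# A tiny three-layer network: 8 raw features -> 16 -> 8 -> 1 output score
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 8)),
           rng.standard_normal((8, 1))]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)        # each layer re-represents its input
    return x @ weights[-1]     # the final layer produces the prediction

print(forward(rng.standard_normal((4, 8))))  # scores for a batch of 4 samples
```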
But although 2019 was massive in terms of demonstrating deep learning’s scope, some intimated the technology may be nearing its limits.
Among those predicting limits was Blaise Agüera y Arcas, a machine intelligence engineer at internet and technology conglomerate Alphabet’s Google unit, speaking during a keynote at last year’s NeurIPS conference.
While deep learning excels at complicated tasks, it is ultimately a form of machine learning – albeit a more nuanced one – and can thus never quite match humans’ deductive powers.
Deep learning models, which generally learn by repeatedly consuming labelled examples of the correct answer, are of little use in settings that require true reasoning or social intelligence, Agüera y Arcas added.
Moreover, training deep learning models is data-intensive. Every solution they offer comes only after immense quantities of information have been labelled.
The corporate context
With deep learning’s limits now apparent, attention has moved to hardware that can unlock additional use-cases by reducing bottlenecks, costs and lead times.
The so-called “democratisation of AI” will mean breaking beyond labs and cloud centres to put the technology in the hands of more end-users and industries.
Corporates sit at the vanguard of this shift, especially in the US, where the corporate-affiliated share of AI research is highest according to AI Index, but also, increasingly, in China, which has witnessed a 66-fold rise in corporate AI papers since 1998.
It follows that corporate venturing will augment this research. Moreover, available data suggests CVC-backed AI deals frequently lie at the larger end of the spectrum.
In deal terms, CB Insights estimates there were 2,235 venture deals for AI businesses in 2019.
Taken against the GCV Analytics estimate of 269 CVC-backed deals for the year, this implies a CVC-backing rate of roughly 12%.
Contrast that with the situation in dollar terms. AI businesses generated $26.6bn in VC funding during 2019 according to CB Insights, of which around 38.4% – $10.2bn by the GCV Analytics count – was invested by consortia featuring corporate backers.
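Those percentages follow directly from the figures cited; a quick back-of-envelope check, using the numbers quoted above from CB Insights and GCV Analytics:

```python
# Back-of-envelope check of the CVC-backing rates cited in the text
total_deals, cvc_deals = 2235, 269       # CB Insights / GCV Analytics, 2019
total_dollars, cvc_dollars = 26.6, 10.2  # $bn, same sources

print(f"Deal-count share: {cvc_deals / total_deals:.1%}")  # roughly 12%
# prints 38.3%; the article's 38.4% reflects unrounded source figures
print(f"Dollar share: {cvc_dollars / total_dollars:.1%}")
```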
A glance at some of the biggest CVC-backed deals shows the scale of the potential strategic impact.
Topping the bill in 2019 was computer software vendor Microsoft’s formidable $1bn investment in AI research and development company OpenAI.
OpenAI aims to pave the way for artificial general intelligence, heralding a world where machines can perform virtually any task a human can.
Rounding out the top 10 were corporate-backed AI hardware and software developers in fields ranging from healthcare to robotics.
China was the destination for half the deals, including a $750m round for image detection software developer Megvii, headed for a potential $500m listing in Hong Kong set to open exits for e-commerce group Alibaba and others.
China and the AI chip arena
Finding the right processor architecture to run deep learning effectively is a major industry battleground.
Most accept that central processing units (CPUs) – long the norm for computer hardware products – cannot manage the hundreds of millions of calculations involved on their own.
AI Index estimates the computational power required for training AI models has doubled every 3.4 months on average since 2012. Moreover, many believe Moore’s Law – that the number of transistors on an integrated circuit will double every two years – is about to taper off, meaning alternatives must be found.
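The gap between those two doubling rates is stark when annualised, as a quick calculation shows:

```python
# Annualising the two doubling rates mentioned above
ai_doubling_months = 3.4      # AI Index estimate for AI training compute
moore_doubling_months = 24    # Moore's Law

ai_annual = 2 ** (12 / ai_doubling_months)       # ~11.5x per year
moore_annual = 2 ** (12 / moore_doubling_months) # ~1.4x per year

print(f"AI compute demand: ~{ai_annual:.1f}x per year")
print(f"Moore's Law supply: ~{moore_annual:.1f}x per year")
```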
What is striking about the AI revolution is that its fundamental research is increasingly distributed across western and eastern geographies.
And that makes it especially dangerous for US chipmakers to rest on their laurels, further raising the stakes for innovation.
East Asia accounted for a higher share of AI conference publications than North America in 2018, at 33% against 27% – a reversal from 1990, when the respective figures were under 10% and about 70% – according to AI Index’s breakdown of the Microsoft Academic Graph.
China’s fledgling AI chip sector has particularly lofty ambitions and enjoys state support, as Beijing looks to escape the bottom of the sector’s value chain.
An early upset came in November last year, when e-commerce group Alibaba’s Hanguang 800 AI chipset outperformed a newly-released Intel processor in a machine learning benchmark study, according to the Economist.
Naturally, there were caveats. Hanguang 800 entered only one of five trials conducted, and so avoided comprehensive comparison. The chip is also physically larger than its Intel counterpart, giving it an inherent advantage in power and calculation throughput.
Hanguang 800 nevertheless wields impressive clout, especially as it represents Alibaba’s first foray into semiconductors. It is billed as containing 17 billion transistors, enough to crunch 78,563 images per second using the ResNet-50 neural network.
The wider context will cause further concern among US manufacturers. To bypass China’s persistent deficit in complex manufacturing, Alibaba’s production order was shipped across the strait to Taiwan Semiconductor Manufacturing Company.
With no sign of a trade truce between Washington and Beijing, few will welcome AI as another bargaining play, especially one placing Taiwan square in the middle.
But just as electric vehicles gave Beijing a chance to leapfrog the internal combustion engine, AI chips could prove another boon if Alibaba and others play their cards correctly.
Perhaps crucially, China sits atop a mass of data. And, if data really is the new oil, then China is awash not only with resources, but also the infrastructure to capture value.
So-called data-labelling factories graft in China’s less prosperous reaches, processing samples in near real-time and with labour costs far below the national average.
Meanwhile, as EU policymakers mull over clamping down on facial recognition in public places, China already widely tolerates the technology, for everything ranging from crowd surveillance to vending machines.
The threat from China means western chipmakers will find it vital to stake out an early advantage.
Alibaba’s internet sector peer Baidu is also actively targeting AI through its non-strategic investment vehicle Baidu Ventures, which focuses on early-stage vertical AI and robotics.
In an interview on Medium in January, the unit’s vice-president Fang Yuan cited driving AI adoption as the most important battleground this year, with startups carving out niches where the technology is likely to serve large enterprises most effectively.
Nvidia
Nvidia’s market position in AI benefits from its expertise in graphics processing units (GPUs), traditionally the core component of computer graphics cards.
GPU matrix operations originally intended for 3D graphical texturing can also crunch the vectors and matrices at the heart of deep learning, often far more efficiently than their CPU equivalents.
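The effect is easy to demonstrate with the matrix multiplication that underpins dense neural network layers. A minimal sketch, assuming PyTorch is installed and taking a single unwarmed run, so the timings are indicative only:

```python
# Time the same large matrix multiplication - the core operation behind dense
# neural network layers - on CPU and, where available, GPU.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.time()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel
    return time.time() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")  # typically far faster
```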
However, Jeff Herbst, vice-president for business development, said Nvidia endeavoured to offer more than just AI chipsets, providing a holistic ecosystem for developers, having invested “massively” in hardware and software architectures over recent decades.
Herbst added: “We essentially offer a complete computing platform, as opposed to being just a chipset manufacturer. While it might be tempting to view Nvidia as a typical semiconductor company, we, in fact, build complete AI solutions incorporating everything from the chip, to software and reference designs, as well as to drivers and APIs tied closely to major runtime frameworks.”
“On top of that, we layer solutions such as our Nvidia GPU Cloud (NGC) which enables any scientist to prototype on a desktop system, and then scale into the cloud in basically a click.”
Rather than spurring competition with existing AI investors, Herbst said Nvidia had opted for a complementary approach, aimed principally at supporting developers.
Above: Jeff Herbst of Nvidia on stage at the GCVI Summit in January
At the foundation of Nvidia’s ecosystem is its Inception program, a virtual accelerator where eligible VC investors can nominate portfolio companies to participate.
Inception’s more than 5,000 members receive access to Nvidia’s considerable hardware resources, as well as marketing support, contacts and – crucially – credits for major cloud computing platforms.
Herbst said: “At heart we are long-run thinkers – we are passionate about advancing the field of AI, and strongly believe that the venture capital ecosystem is incredibly important to its trajectory.”
“VCs provide the fuel that enables startups to succeed. Regardless of whether they are corporate or financial investors, we are excited to help enable their respective portfolio companies achieve maximum efficiency by leveraging our resources, and we will ensure those startups are getting all the benefits they deserve through Nvidia.”
Nvidia’s approach brings it into contact with a gamut of AI-powered applications that will use the heft of its GPUs, now offered in variants that specifically accelerate deep learning workloads.
A notable example is natural language processing, which employs AI algorithms to automatically interpret and generate text and speech.
Above: Nvidia Inception member, Lunit, exhibiting in the Inception Pavilion at the GPU Technology Conference
The AI modelling for these applications depends on terabytes of audio samples and text logs, presenting a big burden for computing resources.
For example, it took 50 Nvidia Tesla GPUs for cloud-based audio transcription tool Otter.ai to train up its AI service.
The company is a fine product of Nvidia’s developer-first mentality, having last month attracted NTT Docomo Ventures – part of telecoms firm NTT Group – and Duke University’s Innovation Fund in a $10m funding round.
Herbst predicted the market for natural language processing was just beginning to mature, following the recent wave of image recognition-based services.
Herbst added: “The world is full of industries that need more than images. In fact, the majority of businesses leverage voice and data. And while we have for some time now been highly capable at image processing, we are only now becoming world-class at applying AI to recognise and respond to voice interactions through natural language processing.”
Intel
Intel’s AI campaign is in top gear, benefiting demonstrably from the operations of its corporate venturing unit, Intel Capital.
GCV Analytics data reveals Intel Capital pumped a total of $573m into AI in 2019 across 12 deals, almost matching the $583m invested over 2017 and 2018 combined.
Intel also heads the AI leaderboard in terms of CVC dealflow since 2011, entering more than double the number of deals of its nearest challenger, Alphabet.
The firm’s internal projections for AI hardware suggest massive market expansion over the next five years, said Mark Walton, communications manager for technology leadership at its European, Middle East and Africa division.
The AI silicon market opportunity is forecast to pass $25bn by 2024, with the comparable figure for data centre-focused chips hitting $14bn on the back of compound annual growth currently pegged at about 25%.
Walton argued Intel was well equipped not just in terms of AI hardware but also in specialised software, driving deployment in fields such as computer vision, natural language processing and personalised recommendations.
He predicted Intel’s OneAPI programming model would streamline development across different AI chip architectures, including CPUs, GPUs, field-programmable gate arrays and AI-specific accelerator chips.
He added: “Purpose-built silicon accelerators are gaining momentum due to their potential to more efficiently process these emerging workloads. This market is fast-evolving and [is] in the early innings, but we are committed and investing to win.”
Late last year, Intel stumped up $2bn for Habana Labs, an Intel Capital portfolio company specialised in chips for cloud-based AI applications.
Habana Labs is testing its second AI chip, catering to hyperscale and next-generation cloud-hosted services, having already released a chip for AI inference – the stage where a trained model is applied to new data – in the fourth quarter of 2018.
The move comes after Intel bought another chip developer from its CVC portfolio, Nervana, in 2016, again with the intention of bolstering deep learning from cloud-based services.
Walton said the customer input and advancements from its Nervana-branded neural network processors were now shaping a new AI roadmap centred around Habana Labs technology.
Walton added: “With Nervana, we engaged more deeply with cloud service providers including Baidu and Facebook to understand the real-world requirements for deep learning workloads. Now, with Habana, we are currently engaged with a targeted set of hyperscale (for example, Facebook) and next-wave cloud service providers.”
Other chipset manufacturers backed by Intel Capital include Canada-based Untether AI, which Walton says has achieved smooth data transfer within a novel chip architecture to speed up neural network inference.
The shuttling of data between memory and conventional processing circuits is often cited as a major performance roadblock, inflating latency and power consumption.
Other runners and riders
The activity at Nvidia and Intel reflects the industry expectation that semiconductor architectures that accelerate AI execution will be needed to drive the technology forward.
That thesis has largely been apparent since 2016, when Google unveiled its cloud-based tensor processing unit (TPU).
Specifically designed for cloud-hosted deep neural networks, TPUs enabled Google to consign conventional architectures to less complex machine learning tasks.
Google’s guidance currently recommends TPUs for deep learning models that “train for weeks or months”, reserving CPUs and GPUs for workloads like quick prototyping and models programmed in third-party languages.
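In TensorFlow 2.x that hardware choice surfaces through distribution strategies. A minimal sketch – assuming access to a Cloud TPU runtime for the TPU path, and falling back to the default CPU/GPU strategy otherwise:

```python
# Sketch of targeting a TPU in TensorFlow 2.x, with a CPU/GPU fallback.
# Assumes a Cloud TPU runtime is attached for the TPU path.
import tensorflow as tf

try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except Exception:  # no TPU runtime available
    strategy = tf.distribute.get_strategy()  # default CPU/GPU strategy

with strategy.scope():  # model variables land on the chosen hardware
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(
                      from_logits=True))
```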
Google’s move was a signal to peers and other AI investors, unlocking venture dollars and innovation resources for a raft of chipset manufacturers.
Among the chief runners and riders is multi-corporate-backed Graphcore, which reached a $1.7bn valuation at the time of its $200m series D round at the end of 2018.
Billed as 10 to 100 times faster than comparable AI chipsets, Graphcore’s technology employs a graph-based processing structure to execute many computations in parallel, quickly and efficiently.
The company recently scored a distribution win when corporate backer Microsoft introduced development access through its Azure cloud computing platform.
An unconfirmed report in Wired last year suggested BMW i Ventures – another of Graphcore’s corporate backers – became interested because of the technology’s potential in vehicles with level 4 or 5 autonomy.
Graphcore has now raised $310m altogether over four rounds. It claims to be scaling “rapidly”, having extended its operations to include five offices based in the UK, US, Norway and China.
As Graphcore’s foothold grows, several alternative CVC-backed propositions are packing the field.
AI chipset and development stack business Blaize officially joined the fray in November last year.
Formerly known as ThinCI, Blaize counts automotive technology supplier Denso among its backers, after a $65m series C round in September 2018 that brought its lifetime funding haul to nearly $90m.
The revamp comes with Blaize ratcheting up its commercialisation strategy, based on a production-ready chip slated to ship in the coming months and building on the 15 to 20 pilot projects it has underway across the automotive, smart vision and enterprise industry spaces.
Dinakar Munagala, chief executive of Blaize, said the company is planning to raise its next financing round to accelerate its product roadmap and scale revenue growth.
He argued novel chipsets will be required in the next-generation AI world, and believes Blaize has differentiated itself with more versatile form factors and an intuitive software development suite.
Blaize is pitched as a remedy for AI workloads that cannot function solely using cloud computing centres.
Above: Blaize’s founding team (L-R) Dinakar Munagala, Val Cook, Satyaki Koneru, and Ke Yin
These applications instead process more data closer to the end-device, an approach dubbed edge computing that is particularly suited to scenarios with limited energy or communications.
The chips slot into a range of graphics card-like modules, each shaped to fit a designated technological end-point, such as edge computing servers, oil and gas sensors or autonomous vehicles.
Blaize’s offering also includes AI Studio, an automated tool that Munagala claims accelerates the transfer of trained machine learning models onto their designated edge AI device.
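Blaize has not published AI Studio’s internals, so the sketch below uses TensorFlow Lite – an open-source toolchain for the same trained-model-to-edge-device step – purely as a stand-in; the model and settings are illustrative.

```python
# Illustrative stand-in for the trained-model-to-edge step: convert a trained
# Keras model into a compact artefact deployable on a small device.
# (Blaize's AI Studio is proprietary; TensorFlow Lite is used here instead.)
import tensorflow as tf

# A placeholder "trained" model; in practice this would be fitted to real data
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for edge hardware
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # artefact ready for an on-device runtime
```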
“One of the biggest roadblocks is older architectures are currently being retrofitted for AI despite being purpose-built for earlier applications,” Munagala said. “So you need a novel architecture, but then there is the problem of how to build something highly-efficient and programmable that can be readily deployed into the field in various use-cases.”
The development suite functions entirely from a graphical interface – a big draw as AI gears itself toward smaller software engineering teams.
Indeed, coaxing more software developers into the field will be crucial. Google again demonstrated the need here by overhauling its TensorFlow suite in January 2019, simplifying its API to align more closely with the open-source neural network library Keras.
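The draw is brevity: in the Keras-style workflow, defining, compiling and training a model takes a handful of lines. A minimal sketch on toy data, assuming TensorFlow 2.x:

```python
# The Keras-style workflow TensorFlow standardised on: define, compile, fit.
# Data here is a toy binary-classification task for illustration.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 8).astype("float32")  # 256 samples, 8 features
y = (x.sum(axis=1) > 4).astype("int32")       # toy label: do features sum > 4?

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(x[:2]))  # probabilities for the first two samples
```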
Edge AI and sensors
Edge computing and energy efficiency go hand in hand when it comes to the democratisation of AI.
One University of Massachusetts study found that training a single large neural network in the cloud can emit 17 times more carbon dioxide than the average American generates in a year.
That would put the fundamentals of the AI revolution in opposition to sustainability goals, an uncomfortable trajectory as societies pursue the circular economy.
The environment is just one factor driving research into energy-efficient, edge-based AI.
IT hardware is by no means homogenous – think of everything from self-driving buses to industrial sensors – and there is a clear rationale for keeping the energy burden to a minimum.
Sensors provide a particularly useful case study, digitising feedback in a host of industrial and other internet-of-things (IoT) settings.
Industrial sensors often rely on batteries, and networks rarely consist of a single sensor type. To handle networks containing multiple sensors effectively, AI must ingest data from all inputs synchronously, as the sketch below illustrates. Without AI, the devices still rely on human decision-making, limiting IoT-driven cost efficiencies.
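As a toy illustration of that synchronous ingestion – stream names and readings invented – fusing several timestamped feeds can be as simple as a sorted merge:

```python
# Merge independently timestamped sensor streams into one time-ordered feed,
# ready for downstream AI analysis. All streams and values are invented.
import heapq

temperature = [(0.0, "temp", 21.3), (2.0, "temp", 21.4), (4.0, "temp", 21.9)]
vibration = [(0.5, "vib", 0.02), (1.5, "vib", 0.03), (3.5, "vib", 0.11)]
pressure = [(1.0, "psi", 101.2), (3.0, "psi", 101.1)]

# heapq.merge assumes each input stream is already sorted by timestamp
for timestamp, sensor, value in heapq.merge(temperature, vibration, pressure):
    print(f"t={timestamp}s {sensor}: {value}")
```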
AI-enabled edge computing protocols have provided part of the solution, according to Juan Manuel Corchado, a full professor at the University of Salamanca’s department of computer science and automation.
These systems turn sensors into robust AI terminals, processing data locally wherever possible to make automated analysis more practical.
Corchado said: “The edge computing approach has been developed precisely for this purpose. This technology facilitates the connection of sensors to intelligent gateways, and these are, in turn, connected to intelligent global systems that are in the cloud.”
“There are already many edge computing platforms developed by large companies, which allow the integration of intelligent gateways and the application of AI to complex problems without requiring great knowledge.”
Corchado praised the likes of enterprise AI application developer Deepint.net for explicitly integrating support for IoT systems.
He predicts a watershed for AI-in-sensor technologies, with about 50% of the IoT-related projects at Bisite Research Group and AIR Institute – two hubs he is involved with – now deploying automation through edge and distributed computing.
“With tools like Deepint.net the use of this technology is multiplying exponentially,” Corchado added.
Squeezing more from sensors and other low-power AI devices is likely to be big business.
In January, reports surfaced that consumer electronics producer Apple had paid about $200m to acquire Allen Institute for Artificial Intelligence-founded Xnor.ai, which equips low-power devices with edge computing-based machine learning and image recognition.
Another CVC-backed example in the space is AI-in-sensor chip business AIStorm, whose investors include biometric technology producer Egis, image sensor maker TowerJazz, kitchen equipment supplier Meyer and semiconductor maker Linear Dimensions.
The broad slate of corporates represented underlines the wide strategic appeal as industries strive to grasp automated IoT technologies.
There will also likely be profound societal implications. Take water monitoring systems. Although internet-connected sensors can speed up data collection, the samples themselves still need to be dispatched to a laboratory for processing. Human supervision can only add to the cost burden, impairing implementation in developing communities, where IoT sensors could provide an early buffer against contaminated water supplies.
Sushanta Mitra, professor of mechanical and mechatronics engineering at the University of Waterloo, has proven the process can be automated, extracting data through sensors for analysis via a simple AI-powered phone app.
He said: “The biggest challenge is to have reliable data sets for any AI-driven analysis. Therefore it is imperative that reliable sensors are available that would collect data to perform the required analysis. Without data, AI has little meaning.
“Hence, sensors and coupled IoT will be critical infrastructure for deployment of AI-based solutions.”
Mitra’s technology has been pushed to operate within strict system constraints, a key prerequisite for remote and developing communities.
“There is no such requirement of computational power,” he explained. “We are able to use a simple cellular phone, which is available to everyone, to perform this task. One needs to be cognisant of the fact that often in remote and limited-resource communities, access to reliable clean water is a challenge rather than having a cellular phone.”
Hardware will advance artificial intelligence in the near-term by extending its frontiers rather than marking any radical progress in autonomy.
But the coming wave of bespoke AI chipsets will drive more robust and flexible applications, resolving the technical pain-points prevalent with existing technologies.
Coupled with edge computing, it should create space for AI developers to enter a whole range of vertical industries.
That is likely to create greater value, placing a high degree of strategic impetus behind AI hardware dealmaking.
But hardware is only part of the puzzle.
Corporate-backed innovation is also driving exploration in adjacent areas of prime importance, with data and ethics among the most notable.
But there is little doubt the coming step-change in AI hardware will be exciting, for corporate investors and end-users alike.