Sadly, it was the coronavirus that changed society. The global death toll had passed 270,000 at the time of publication, with little sign of a solution in sight.
China is easing its lockdown with mixed results, while many developing countries brace themselves for humanitarian disaster. In the West, citizens grapple with severe curbs on their personal liberty that are only now beginning to loosen.
On April 5, about 30% of the UK’s television audience tuned in for the Grand National – the country’s premier horse-racing event. Given the ban on public gatherings, the race was, in fact, a lifelike simulation: a computer algorithm determined the winner.
The race, proceeds of which went to charitable causes, demonstrates the seismic shift in societal norms as countries hunker down to save lives. Even during World War II, sporting fixtures took place in some form to help bind communities together.
This time, pubs, restaurants and gyms have all been shuttered, leaving consumers with fewer options. E-commerce, film and home exercise have been some of the biggest replacements. The paradigm for enterprises has been affected, too. Telecommuting is the norm for much office work, while virtual event platforms are anticipating unprecedented business.
Already, teleconferencing platform Zoom has demonstrated exit potential, holding a $357m initial public offering in April last year that saw existing investors sell shares above the top of its most recent range.
The upshot is software has become even more central to our daily lives. And that provides more scope for artificial intelligence (AI), albeit in circumstances few predicted.
With more activity occurring online, the data resource for machine learning applications has suddenly escalated. AI developers have a sizeable opportunity, and that is before considering their role in healthcare, the focus of the next GCV special report.
GCV Analytics confirms a four-year growth trajectory in CVC-backed rounds for AI software businesses. In 2019, there was almost $8.4bn raised from 238 deals, compared with just $744m in 2015.
The data, however, do not account for product quality: some might argue AI’s appeal has been debased, not least by software touted as AI that actually provides more basic functionality.
AI and software development
Accessibility or technical performance: it is a dilemma that has faced generations of software developers.
The takeaway is that interoperability matters in IT, especially in software, where Android has replaced Microsoft’s Windows as the most widely used operating system. But are there any lessons for the development of AI? There are certainly big differences.
In software, clarity is normally king: code must be precise, or performance becomes erratic. Machine learning, by definition, deals in ambiguity, working from balances of probability.
A second difference lies in where software is executed, which has implications for where market power is held.
In the famous Wintel alliance (Windows software running on Intel chips inside computers built by the likes of IBM and Dell), Microsoft and Intel wielded substantial clout because Windows came preinstalled on those vendors’ machines.
But AI can run from the cloud, regardless of who designed the chipset it was trained on. Major cloud providers provide access to a range of hardware options, and these can be switched up at any time.
Markus Schläfli, head of simulation at Neurolabs, a computer vision software developer based at the University of Edinburgh, explained: “There is a difference in where computation resides [compared with the emergence of Wintel]. In the 1990s it was in the everyday desktop workstation for typical software development, whereas now it is concentrated in the hands of cloud computing providers for deep learning.”
Above: Markus Schläfli
Moreover, graphical processing units (GPUs) are widely used for AI modelling, facilitating cross-compatibility across different manufacturers. Cem Dilmegani, founder of artificial intelligence industry analysis provider AIMultiple, said: “In AI model training you are relying on GPUs. You can own your own or you can get these things on the cloud.
“Cloud enables faster model training by allowing you to access more resources. However, switching to cloud requires substituting investment with operating costs and can be expensive. The decision to switch from own hardware to the cloud is mostly a financial one. The technical work required for the switch is less significant.”
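Dilmegani’s point – that the switch to cloud is chiefly a financial decision – can be illustrated with a back-of-the-envelope break-even calculation. The sketch below uses entirely hypothetical figures (a $10,000 workstation, a $3-per-hour cloud instance); it is not quoting any vendor’s pricing.

```python
def breakeven_hours(hardware_capex, cloud_rate_per_hour, own_run_cost_per_hour=0.0):
    """Hours of GPU training at which buying hardware beats renting cloud capacity.

    All figures are hypothetical illustrations, not vendor prices.
    """
    saving_per_hour = cloud_rate_per_hour - own_run_cost_per_hour
    if saving_per_hour <= 0:
        raise ValueError("cloud must cost more per hour than running owned hardware")
    return hardware_capex / saving_per_hour

# e.g. a $10,000 GPU workstation vs a $3/hour cloud instance,
# with $0.50/hour of power and maintenance on the owned machine
hours = breakeven_hours(10_000, 3.0, 0.50)
print(round(hours))  # 4000 hours of training before ownership pays off
```

Below the break-even point, renting wins on capital efficiency; above it, the operating-cost premium of the cloud starts to bite – which is exactly the substitution of investment for operating cost Dilmegani describes.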
Ease-of-use
Another facet that comes to mind when comparing AI with Windows is ease of use: giving developers tools to build without learning advanced skills or new programming languages.
A crucial reason for Windows’ success was that the operating system was easy to program. Initially, the platform leveraged the existing MS-DOS operating system to minimise hard disk usage. With time, the operating system accrued its own software development ecosystem.
But bringing accessibility to AI modelling is a far greater challenge. Programming neural networks is highly specialised, and dozens of frameworks exist for building new models. Each has strengths and weaknesses, and different tools handle each stage of the development process.
That makes for a complex process, though development is eased slightly because most frameworks share the Python programming language. Google’s TensorFlow and Facebook’s PyTorch are among the best-known examples.
Schläfli said these were best thought of as translations of basic binary – the 1s and 0s that actually operate computers – into forms that can be understood by human professionals.
He added: “It would be difficult for most people to interact with a computer in [a binary] way. A level of abstraction is added to bridge the interface.”
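What that abstraction buys can be seen by writing out the plumbing a framework hides. The pure-Python sketch below trains a single linear neuron by gradient descent by hand – the gradient bookkeeping that TensorFlow or PyTorch would automate behind one or two high-level calls. It is an illustrative toy, not code from either framework.

```python
# A single linear neuron trained by gradient descent, written out by hand.
# Frameworks such as TensorFlow and PyTorch automate exactly this plumbing
# (gradient computation and weight updates) behind high-level calls.
def train_neuron(samples, lr=0.1, epochs=200):
    """samples: list of (x, target) pairs; learns y = w*x + b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b
            err = pred - target
            # Gradient of the squared error with respect to w and b
            w -= lr * err * x
            b -= lr * err
    return w, b

# Learn the mapping y = 2x + 1 from four examples
w, b = train_neuron([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(w, 2), round(b, 2))
```

Multiply this by millions of parameters and layers of non-linearities and the case for a framework – and for the hardware abstraction beneath it – makes itself.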
Dilmegani said: “For most common challenges such as extracting structured data from documents or counting number of customers on a retail space, AI software design is not necessary. Numerous software-as-a-service tools provide competitive functionality.”
Above: Cem Dilmegani
Michael Rovatsos, professor of AI at the University of Edinburgh and director of the Bayes Centre, the institution’s innovation hub for data science and AI, said: “Enterprises just have to figure out what to do because the asset is already there. The data analytics can then be applied to get value from the client’s information.”
But standard machine learning can only take the industry so far. Its value is less clear where the use-case is highly specialised.
Dilmegani argued that while some smaller companies would find existing models adequate, those wanting sophisticated AI still needed to build from the ground up.
He added: “Companies need to strike the right balance between flexibility and ease-of-use. While off-the-shelf solutions are easy to use and can quickly provide a decent solution, companies need to put significant effort to create a world-class solution for less common problems. Therefore, this complexity in the development process will continue to exist.”
Specialised AI
The need for bespoke AI tools increases their strategic value across multiple industries. Verticals often possess a lot of data but lack the infrastructure to leverage it for machine learning.
Rovatsos said: “Very often the bottleneck is just getting the data in place. Talk to a coffee chain – we might know what happens once the coffee is in store but not at the beginning of the supply chain.
“It is too costly to start investing in having sensors everywhere and the operations to gather the data, so quite often the barrier is being technology-ready.”
He added: “Great value also lies in integrated services guiding the client through how they would benefit if they adopted AI solutions. I am observing huge gaps in terms of awareness [of] the opportunity and the risk – so there could be a huge market for people helping.”
CVC dealflows reflect the need for industry AI. The proportion of CVC-backed AI deals attributable to vertical industries, rather than to IT directly, was around 45% as of 2019 according to GCV Analytics data. The figure remains almost unchanged from each of the previous three years, but up significantly from 27% in 2015.
Dong-Su Kim, chief executive of LG Technology Ventures, the corporate venturing unit for consumer electronics producer LG, said improved manufacturing supply chains were integral to its thinking in the space.
Kim also expected LG to generate intelligence from connected household devices, providing consumers a more seamless service inside and outside of the house. However, he said the supply chain use-cases were particularly convincing.
“I guess the first AI interface that people are familiar with is speech recognition, so we have been working with Amazon and Google to collaborate on that interface and then merge that with our platform in consumer electronics and appliances.
“The other aspect is trying to implement AI in many of our manufacturing processes – so for preventative maintenance and defect inspection and all sorts of different manufacturing processing.
“I think that is a use case where AI is very compelling, and obviously LG has a lot of data that is being generated through our product, offices and factories.
“So, as a corporate venturing capital unit, that is where we are mostly focused on is finding startups that would like to collaborate in advancing the technology together.”
Energy and AI
Energy was also especially ripe for AI-led disruption, according to AIMultiple’s Dilmegani. Few entities hold as much specialised data as global energy producers. For example, BP publishes its own energy data atlas – the Statistical Review of World Energy – for public consumption. Oil rigs and power stations are highly technical operations, so sophisticated modelling is needed.
BP’s interest was made clear in June 2017, when its corporate venturing unit BP Ventures invested $20m in US-based industrial AI model developer Beyond Limits. The company’s focus on complex enterprise AI lines up with oil and gas, building on technology originally funded by Nasa and the US Department of Defense.
BP Ventures also hopes to promote energy efficiency on the back of strategic AI. In late 2019, the corporate led a series A round for UK-based Grid Edge, whose cloud-based platform helps building managers track energy usage in their properties.
Ignacio Giménez, managing director for Europe and the Middle East at BP Ventures, said Grid Edge’s clients gained greater oversight of energy costs and also had the option of recycling consumed fuel into heat. Crucially, the platform is able to understand data from older energy management panels, installed before machine learning became widespread.
Above: BP Ventures portfolio company Grid Edge
Tom Anderson, co-founder and chief executive of Grid Edge, added: “[The situation] is starting to change with the proliferation of internet-of-things sensor technology, but this is only a small share of the market.
“Therefore, it is important to build systems that can work with new modern and older buildings, by setting up a secure data pipeline and working alongside the idiosyncrasies of each building and its data.”
Substantial growth potential remains for AI software that accounts for specific enterprise requirements.
According to market research firm Tractica, the global AI software market was expected to reach $14.7bn by the end of 2019, growing by 154% over the course of that year.
The opportunity is particularly enticing because enterprise software development and IT operations are increasingly intertwined – the devops model, as it is known.
Intriguingly, though, corporate-backed enterprise software rounds are raising far less than they had previously, amounting to $3bn last year compared with $7.3bn in 2015, according to GCV Analytics.
AI is an opportunity to restore devops growth, but only if the end-product can truly satisfy the customer. Corporate venturing units will continue to seek AI software tailored to their goals. For deep AI to truly go mainstream, however, smaller enterprises increasingly will also require top-tier products.
Quantum software
AI software will reach a whole new level as the work of quantum technology developers begins to bear fruit.
Unsurprisingly, Google is making headway, having unveiled a quantum extension for its TensorFlow framework – dubbed TensorFlow Quantum – in March 2020.
TensorFlow Quantum forms part of an initial quantum software wave that seeks to overcome technical instability by fusing quantum and classical processing chips to solve highly specialised problems. Drug design stands out as one of the biggest potential wins.
Google’s prestige was boosted in late 2019 when it claimed to have achieved quantum supremacy – the point at which the technology is able to resolve problems that would take conventional supercomputers an eternity to solve.
But the corporate is by no means alone in the race amid signs the long-anticipated quantum revolution has started to materialise.
Among the other candidates is Rigetti Computing, a quantum computing hardware and software developer backed by media company Bloomberg that sealed $71m of funding in March, albeit, according to TechCrunch, at a lower valuation.
Rigetti has now raised $190m since it was founded in 2013. Having created a proprietary quantum processing chip, Rigetti is now rolling out a platform for executing quantum software from public, private and hybrid clouds.
Several hurdles remain to be cleared before quantum computing becomes practical, but its problem-solving capacity is tantalisingly close. A glimpse at the possibilities came recently when Canada-based D-Wave granted free access to its quantum cloud for projects addressing the effects of the Covid-19 pandemic.
D-Wave has a wealth of applied expertise available through partners which include carmaker Volkswagen and car parts supplier Denso, and researchers from Ludwig Maximilian University of Munich, Sigma-I Tohoku University and the Jülich Supercomputing Centre.
D-Wave’s investors include In-Q-Tel, the strategic investment affiliate of the US intelligence gathering community, which later backed another quantum software business called Q-Ctrl in April 2020. Given the ramifications for national security, In-Q-Tel’s interest is unsurprising.
Edge computing
Dilmegani points out that AI software is less adaptable in edge computing settings in which machine learning is executed from the device itself as opposed to the cloud.
Applying AI to the edge requires thoroughly assessing the specifications of the end-device, and it often limits the utility of external APIs.
Hardware can cause concern as not every edge device contains GPUs made for machine learning, although some have been designed for that purpose.
Dilmegani said: “In the edge case, where you need to input things with limited hardware, it is tricky because of all the constraints.”
“Different devices have different constraints. And then while you are building your model you have to be exactly aware of all those constraints.”
“There is more information perhaps for popular end-devices, but the way I understand there is still always the need to consider distinct specifications.”
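One common answer to the hardware constraints Dilmegani describes is quantisation: storing model weights as 8-bit integers plus a scale factor rather than 32-bit floats, cutting memory roughly fourfold. The sketch below is a minimal illustration of symmetric linear quantisation, not the implementation of any particular edge toolchain, which would add calibration, per-channel scales and fused operations.

```python
def quantise(weights):
    """Map float weights to 8-bit integers plus a scale factor.

    A minimal sketch of symmetric linear quantisation; real edge
    toolchains add calibration, per-channel scales and fused ops.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantise(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.91]
q, scale = quantise(weights)
approx = dequantise(q, scale)
# Each recovered weight stays within one quantisation step of the original
print(max(abs(a - w) for a, w in zip(approx, weights)) <= scale)  # True
```

The trade-off is exactly the one quoted above: the tighter the device’s constraints, the more the model builder must know about them before choosing how aggressively to compress.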
Edge AI devices include self-driving vehicles whose makers have struggled to incorporate on-board machinery while conserving energy and space. As Ben Yu of Sierra Ventures argues in this report, autonomous vehicles may require 5G internet to access the cloud before they can become truly viable.
China and AI software
The first GCV AI special report explored the challenge from China’s AI chip developers, but there is also disruptive potential from the East in software. Armed with an abundance of data, China will likely make a significant contribution to taking forward the AI development paradigm.
In April 2020, China-based AI-as-a-service company Intellifusion secured $141m from investors that included affiliates of three financial services firms: Bank of China, China Construction Bank and Bank of Communications.
Intellifusion aims to cover the entire computer vision stack by combining proprietary hardware with big data resources and an algorithm database that executes 96 variants of machine learning.
The Chinese government is reportedly supporting its efforts, joining forces for a collaboration aimed at reducing the cost of developing specialist AI algorithms to below $1,400.
Whether the objective can be met is another matter, though Intellifusion benefits from China’s lower data-mining and labour costs.
Success could shift the pendulum heavily toward China, bringing it the fruits of industrial AI quicker than the West can manage.
Image recognition and financial risk
The insurance and financial services industries have been major beneficiaries of viable image recognition-based AI software.
The sector is already a hotbed for AI applied to numerical data, and image recognition broadens its strategic value just as limitations emerge over the reliability of figures in certain subsegments.
Steven Katz, partner and co-founder of real-estate acquisition and management platform Rebar Capital, who also acts as a venture consultant for Singtel Innov8, the corporate venturing arm of telecoms firm Singapore Telecommunications, said AI was widely employed for real estate objectives like property valuations.
Other datasets, such as financial transactions, remained more challenging, although the variety of inputs computed by AI was steadily increasing.
He said: “Insurance is one area we have started to see some disruption, in particular in the single-family home market. AI-powered risk analysis is becoming more specific, leveraging various data points previously ignored, and tech companies are manoeuvring around brokers and insurers and working directly with reinsurance.”
Image recognition and computer vision-driven analytics can provide more precise valuations of financial assets as well as improved customer-side operations.
Among those working on this front is Cape Analytics, which applies AI-driven computer vision to geospatial images of homes so that insurers can estimate risk and property values.
Cape Analytics has raised at least $31m in funding, having collected $17m in a mid-2018 round featuring Avanta Ventures – the venture capital arm of CSAA Insurance – in addition to fellow insurers XL Catlin, Hartford, Cincinnati Insurance Company and State Auto Labs.
Steve Bernardez, partner at Avanta Ventures, said: “They source imagery from a variety of sources, including satellite, fixed-wing and drone, and then run computer vision algorithms to assess property attributes such as roof condition or the presence of pools and even trampolines.
“Their assessment is becoming increasingly valuable as climate change has altered the catastrophic loss risk landscape for insurers from fire, wind and hail events.”
More evidence of image recognition’s utility in risk modelling can be found in AgroStar, a farm crop troubleshooting company backed by financial services firm Rabobank’s Rabo Frontier Ventures unit.
Geared towards Indian smallholders, AgroStar claims its AI is capable of identifying crop problems at around 92% accuracy, having trained on a base set of 500 images and 2,000 modifications.
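Multiplying 500 base images into thousands of training examples is a standard technique called data augmentation: each photograph is flipped, rotated or otherwise perturbed to produce labelled variants. The sketch below shows the idea on a toy nested-list “image”; it is a hedged illustration of the general technique, not AgroStar’s actual pipeline, which is not public.

```python
def flip_horizontal(img):
    return [row[::-1] for row in img]

def rotate_90(img):
    # Rotate clockwise: reverse the rows, then transpose
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus simple geometric variants.

    A sketch of data augmentation; production pipelines also use
    crops, colour jitter and noise to multiply the training set.
    """
    variants = [img, flip_horizontal(img)]
    rotated = img
    for _ in range(3):
        rotated = rotate_90(rotated)
        variants.append(rotated)
    return variants

base = [[1, 2], [3, 4]]  # a toy 2x2 "image"
print(len(augment(base)))  # 5 labelled variants per base image
```

Because a flipped or rotated leaf still shows the same disease, each variant inherits its label for free – which is how a modest base set can train a classifier to the accuracy AgroStar claims.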
Thanks to Rabobank’s expertise in agricultural financial risk, AgroStar’s clients are expected to gain access to working capital from within the software, a major boon given the underbanked nature of many Indian smallholder farms.
Harrie Vollaard, head of Rabo Frontier Ventures, said: “AgroStar wants to develop credit and working capital facilities for smallholder farms into its platform, and together with its financial partner Rabobank. Smallholder farmers are mostly unbanked so new risk models based on different data sources are required to build a facility that works out in a rural environment.”
Vollaard said: “With Rabobank supporting many of the large suppliers, we can on the one hand help AgroStar grow and establish new relationships, while on the other hand bringing conversations with clients to a higher level by providing insights they never had before.”