US-based deep learning technology developer Neurala raised $14m in series A funding on Wednesday from investors including communications equipment manufacturer Motorola Solutions’ corporate venturing subsidiary, Motorola Solutions Venture Capital.
Ecomobility Ventures, the venture capital vehicle co-founded by oil company Total, telecommunications network Orange, carmaker Peugeot and rail operator SNCF, also contributed to the round in partnership with private equity firm Idinvest Partners’ Electranova Capital II Fund.
Pelion Venture Partners led the round, which further included Sherpa Capital, 360 Capital Partners, Draper Associates and SK Ventures.
Founded in 2006, Neurala has developed deep learning neural network software called Neurala Brain that was initially created for space agency Nasa for planetary exploration. The technology is able to analyse an environment, detect potential problems and react accordingly.
Neurala Brain is now used for applications such as autonomous drones and self-driving cars, where it enables vehicles to avoid obstacles, for example. Motorola Solutions and Neurala are also collaborating on video, image and audio analytics applications for public safety.
The capital will help the company cope with increased demand and enable an expansion into additional applications.
Tim Draper, founding partner of Draper Associates, led a $750,000 seed round for Neurala in 2014 that included VC firm Robolution Capital.
Paul Steinberg, chief technology officer for Motorola Solutions, said: “Motorola Solutions is constantly seeking ways to accelerate technology innovation for our public safety and commercial customers, who work in demanding and often dangerous environments.
“Neurala brings advanced deep learning capabilities that will enable us to further explore the potential of artificial intelligence to augment our customers’ experiences ‘at the edge.’
“This has the power to do things like help police find a missing person faster or guide a field worker’s maintenance activities by turning their body-worn camera into a sensor that can recognise actionable information in video and images – in real time at the edge.”