The Fusion Project, which promises to provide a more efficient way to collect the data required to train AI models for autonomous vehicles, today launched with Airbiquity, Cloudera, NXP Semiconductors, Teraki, and Wind River onboard.
The goal is to compress the data collected from autonomous vehicles to the point that the AI models those vehicles rely on can be updated faster. Today, autonomous vehicles depend on inference engines based on AI models trained in the cloud; the automotive industry is a long way from being able to train AI models in real time on the vehicle itself. In the meantime, the members of the Fusion Project are committing to making data collection easier by compressing data on the vehicle before it is transferred back to the AI models residing in the cloud.
Those data compression techniques will eventually be applied to other forms of transportation, such as trains and planes, said David LeGrand, senior industry and solutions marketing manager for manufacturing and retail at Cloudera.
The members of the Fusion Project are pledging to develop an integrated embedded system for collecting compressed data from vehicles that can be fed back to a cloud platform. That capability will substantially reduce the cost of collecting data from what may one day be millions of vehicles, LeGrand noted.
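The pipeline the members describe — gather sensor observations on the vehicle, shrink them, then ship the payload to the cloud for training — can be illustrated with a minimal sketch. This is not the Fusion Project's actual software (Teraki's edge reduction techniques are proprietary); it simply uses generic lossless compression from the Python standard library, and the frame fields shown are hypothetical.

```python
import json
import zlib

def compress_frames(frames, level=6):
    """Serialize a batch of sensor frames and compress it with zlib
    before it leaves the vehicle. Returns the compressed payload and
    the compression ratio achieved."""
    raw = json.dumps(frames).encode("utf-8")
    packed = zlib.compress(raw, level)
    return packed, len(raw) / len(packed)

# Hypothetical batch of lane-change observations from a vision AI engine.
frames = [
    {"ts": 1000 + i, "lane_offset_m": 0.12, "speed_mps": 27.5}
    for i in range(100)
]

payload, ratio = compress_frames(frames)

# The cloud side decompresses the payload and recovers the frames intact.
decoded = json.loads(zlib.decompress(payload).decode("utf-8"))
assert decoded == frames  # lossless round trip
```

In practice an edge AI layer would also filter and downsample the frames before compression, so only the observations relevant to model training are transmitted at all — that selection, rather than generic compression, is where most of the bandwidth savings come from.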
In addition to compressing the collected data using software developed by Cloudera, the members of the Fusion Project will enable over-the-air updates to the inference engines installed in a vehicle using software management technology from Airbiquity.
NXP, meanwhile, will provide the vehicle processing platforms, while Teraki provides the AI software that will be deployed at the edge. Finally, Wind River will provide the embedded system software.
Initially, the Fusion Project will focus on advancing the ability of autonomous vehicles to recognize the optimal moment to change lanes, based on data gathered via vision AI engines installed in the vehicle, LeGrand said. The first tests of vehicles embedded with Fusion Project technologies will take place in Europe, LeGrand added.
The immediate goal is not to eliminate the need for drivers, but rather to take the alert systems most vehicles have today to the next level by training AI models on data about the actual driving experience collected by the vehicles themselves, LeGrand noted. “It’s not going to be fully autonomous,” LeGrand said. “It’s more like a driver-assist system.”
There are, of course, fully autonomous vehicles that can follow a highly prescribed set of programming instructions to get from one point to another. The challenge is that the level of responsiveness required for an autonomous vehicle to navigate traffic flows that include vehicles driven by humans — who are likely to make random decisions — remains elusive.
There may come a day when AI models embedded within a vehicle could be trained and updated in real time. Today, achieving that goal would require the equivalent of a server based on a graphics processing unit (GPU) to be installed in the trunk of every vehicle. Naturally, that would make autonomous vehicles prohibitively expensive.
In the meantime, the process of transferring data between inference engines and the AI models on which they are based will continue to become more efficient. The AI model might not make it all the way out to the vehicle itself, but it will become more feasible to deploy AI models at the network edge. The challenge is finding a way to achieve that goal that is economically viable for automotive manufacturers.