Mapbox, the open source mapping service positioned as direct competition to Google’s Maps Platform, has announced a new software development kit (SDK) that will allow developers to build applications providing augmented reality (AR) navigation. The SDK can also make use of ARM’s Project Trillium AI platform, which allows it to recognize vehicles, pedestrians, speed limit signs, crosswalks, and other information available within a live camera feed.
The new release is called the Vision SDK and will give developers a much faster and easier way to build such applications. Mapbox also revealed that it is integrating its service deeply with Microsoft’s Azure IoT platform, which will extend the capabilities of the Vision SDK and the wider platform. As the runtime is completely open source, developers can process events on the edge and stream incremental data updates to the cloud. This could, for example, be used to gather data from users about the state of traffic on a road and then feed that back to all users via cloud updates, in real time.
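To make the edge-to-cloud pattern concrete, here is a minimal sketch of processing events on the device and streaming only incremental updates upward. All class, function, and field names here are hypothetical illustrations, not actual Vision SDK or Azure IoT APIs.

```python
import json
import queue


class EdgeProcessor:
    """Detects events locally (on the edge) and queues incremental updates."""

    def __init__(self):
        self.pending = queue.Queue()

    def on_frame(self, frame):
        # In a real app a vision model would run here; this mock frame
        # carries pre-labeled detections instead.
        for detection in frame.get("detections", []):
            if detection["type"] == "traffic_slowdown":
                # Only the small incremental event leaves the device,
                # never the raw camera feed.
                self.pending.put({
                    "event": "traffic_slowdown",
                    "road_segment": detection["segment"],
                    "speed_kph": detection["speed_kph"],
                })

    def flush_to_cloud(self, send):
        """Stream queued updates using the provided transport callable."""
        sent = []
        while not self.pending.empty():
            update = self.pending.get()
            send(json.dumps(update))
            sent.append(update)
        return sent


# Usage with a stand-in transport that just collects payloads:
uploaded = []
proc = EdgeProcessor()
proc.on_frame({"detections": [
    {"type": "traffic_slowdown", "segment": "I-280-N:12", "speed_kph": 15},
]})
updates = proc.flush_to_cloud(uploaded.append)
```

In production the `send` callable would be a cloud client (for instance, an Azure IoT device client) rather than a list append, but the shape of the flow is the same: detect locally, queue, stream small updates.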
Speaking of real time, the Vision SDK has been optimized to offer true live location with minimal latency, ensuring that the split-second decisions made by a driver can be informed just as quickly by the applications developers create. Developers will be able to tune the SDK at the hardware level so that the sensors and chips inside devices achieve real-time data processing and deliver the low latency that will be key to success.
“The future of location is building live maps in real-time from distributed sensor networks embedded in vehicles and mobile devices at scale,” said Eric Gundersen, CEO of Mapbox. “Every vehicle and mobile device utilizing the Vision SDK creates a better map, and this same data is streamed back to Microsoft Azure for further processing. The Vision SDK not only runs in real-time to improve the driving experience in the vehicle, but also generates data for the back end to update the map based on changing conditions, powering larger solutions for smart cities or insurance companies.”
The partnership with ARM and its Project Trillium platform brings machine learning to the Vision SDK, allowing it to take advantage of a mobile device’s onboard CPUs, GPUs, and AI chips (where available) to perform the necessary object recognition. As ARM’s machine learning and object detection processors become more widely adopted in newer mobile devices, the Vision SDK will be able to perform these functions at an even faster rate, offering developers more power and flexibility to carry out key processes for their AR applications.
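The fallback across CPUs, GPUs, and dedicated AI chips described above can be sketched as a simple preference-ordered selection. The backend names below are hypothetical placeholders, not actual Vision SDK or Project Trillium identifiers.

```python
def select_backend(capabilities):
    """Return the preferred inference backend from what the device offers.

    Tries the dedicated AI chip ("npu") first, then the GPU, then falls
    back to the CPU, mirroring the fastest-available-hardware idea.
    """
    for backend in ("npu", "gpu", "cpu"):  # fastest first
        if backend in capabilities:
            return backend
    raise RuntimeError("no inference backend available")


# A device with no AI chip falls back to its GPU:
assert select_backend({"cpu", "gpu"}) == "gpu"
```

As devices shipping with dedicated machine-learning silicon become common, the same selection logic would automatically route recognition work to the faster hardware without any application change.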