Last week, at their annual I/O developer conference in California, Google unveiled their new software development kit, ML Kit, which enables developers to integrate machine learning models into their apps. The software, which sits under Google’s Firebase brand, offers several pre-trained models that can recognise text, detect faces, identify landmarks, scan barcodes and label images both online and offline.
Although it’s early days, the announcement shows that big technology companies are interested in giving developers tools to use machine learning in their products. This heralds an exciting time for developers, and we can already see some incredible advances in how machine learning is helping solve real-world commercial problems.
There are several ways we’ll use the technology at DigitalBridge. Specifically, it will accelerate our ability to embed machine learning models into our apps and allow us to experiment with models to create proofs of concept very quickly.
To illustrate the impact ML Kit will have at DigitalBridge, we’ve listed 3 key features below:
The standout benefit of ML Kit, however, is the amount of time it will save. This will allow us to make the most of our in-house expertise and focus on making our room planning and room visualisation software even better.
With the launch of ML Kit, exciting developments are now possible. For example, we could use the on-device face detection in our scanning app to make sure that our floor plan estimation algorithm doesn’t take pixels belonging to a person into account when building the point cloud. Thanks Google!
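To give a flavour of what that filtering step might look like, here is a minimal sketch in plain Python. It assumes the face detector has already returned bounding boxes (as `(left, top, right, bottom)` tuples); the function name, box format, and `padding` parameter are illustrative assumptions, not the ML Kit API itself.

```python
# Hypothetical sketch: drop pixels that fall inside detected face bounding
# boxes before they are fed into point-cloud construction. The box format
# and names here are assumptions for illustration, not the ML Kit API.

def mask_person_pixels(pixels, face_boxes, padding=0):
    """Return only the (x, y) pixels that lie outside every face box.

    pixels:     iterable of (x, y) coordinates
    face_boxes: iterable of (left, top, right, bottom) boxes, e.g. the
                bounding boxes reported by an on-device face detector
    padding:    optional margin added around each box, in pixels
    """
    def inside(x, y, box):
        left, top, right, bottom = box
        return (left - padding <= x <= right + padding and
                top - padding <= y <= bottom + padding)

    return [(x, y) for (x, y) in pixels
            if not any(inside(x, y, box) for box in face_boxes)]

# Example: a 2x2 patch where one face box covers the top-left pixel.
kept = mask_person_pixels([(0, 0), (1, 0), (0, 1), (1, 1)], [(0, 0, 0, 0)])
# kept == [(1, 0), (0, 1), (1, 1)]
```

In a real pipeline the remaining pixels would then be triangulated into the point cloud as usual; the optional padding guards against boxes that sit slightly tight around a face.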