
DELIVERABLES

A basic iOS app that can process photos and make predictions about their contents.

Technologies Used

  • Core ML
  • Vision Framework
  • Resnet50 Core ML Model

TECHNICAL DEEP-DIVE


The AndPlus Innovation Lab is where our passion projects take place. Here, we explore promising technologies, cultivate new skills, put novel theories to the test and more — all on our time (not yours).


Our Research

Core ML is a fairly new framework, introduced with iOS 11 in 2017. The idea is to allow integration of trained machine learning models into an iOS app. Core ML works with regression- and classification-based models and requires its own model format (models with a .mlmodel file extension). There were some initial misconceptions about how Core ML fit into the grand scheme of machine learning, so learning that it only works with pre-trained models, with no support yet for additional on-device training, took some wind out of our sails.

That said, the important thing to remember is that Core ML is still a first-release product. Like the initial versions of nearly every Apple product, it ships with a reduced feature set so that Apple can react to developer needs and expand features and functionality over time. And the API does provide a way to download and compile a model on a user's device, allowing for delivery of model updates to users without pushing a new app version to the store.
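
As a minimal sketch of that compile-on-device path (assuming the raw .mlmodel has already been downloaded to a local URL; the helper function name is ours, not part of the framework):

    import CoreML

    // Hypothetical helper: compile a downloaded .mlmodel on the device and load it,
    // so an updated model can ship without a new App Store release.
    func loadDownloadedModel(at downloadedURL: URL) throws -> MLModel {
        // Compiles the raw .mlmodel into the .mlmodelc bundle that Core ML actually loads.
        // The compiled model lands in a temporary directory; a real app would move it
        // somewhere permanent before loading.
        let compiledURL = try MLModel.compileModel(at: downloadedURL)
        return try MLModel(contentsOf: compiledURL)
    }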

It is worth noting that using Core ML requires the latest tooling: Xcode 9.2 and iOS 11. As for languages, Swift 4 seems to be preferred, since most examples and tutorials are written in it. You could probably use Objective-C as well, but that would require a fair amount of bridging code and probably isn't worth the effort. On top of that, Core ML only supports its own model format (.mlmodel); thankfully, Apple provides a few pre-trained models in that format as well as conversion tools (Core ML Tools), with support for writing your own converter (converters already exist for MXNet and TensorFlow).

For our experiment, we used the Core ML and Vision frameworks to integrate a pre-trained image classification model into a sample app that makes predictions about the contents of a photo.

[Screenshots: the app recognizing a keyboard and a mouse]

 

Deliverable

Above are a couple of screenshots of the app in action. The sample application shows the 'Top Result' and its confidence percentage, along with the complete result set and each result's confidence level. Different models will yield different results, but our main point of experimentation was how simple it is to integrate a model into an app.

 

How it Was Done

  • Add Core ML Model to Xcode project
  • Wrap the Core ML model into a Vision model
  • Build a Vision request with the Vision model
  • Build a Vision request handler
  • Run the Vision request with the CIImage from the user
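
Put together, the flow looks roughly like the Swift sketch below. This is a simplified illustration rather than the exact project code: the Resnet50 class is the one Xcode generates from the bundled .mlmodel, and the classify(_:) helper and printed output are our own stand-ins.

    import CoreImage
    import CoreML
    import Vision

    // Wrap the Core ML model (the Xcode-generated Resnet50 class) in a Vision model.
    guard let visionModel = try? VNCoreMLModel(for: Resnet50().model) else {
        fatalError("Failed to load the Core ML model")
    }

    // Build a Vision request whose completion handler reads the classification results.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Top result plus the full result set, each with a confidence value.
        print("Top result: \(top.identifier) (\(Int(top.confidence * 100))%)")
        results.forEach { print("\($0.identifier): \($0.confidence)") }
    }

    // Build a request handler for the user's image and run the request.
    func classify(_ image: CIImage) {
        let handler = VNImageRequestHandler(ciImage: image)
        try? handler.perform([request])
    }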

 

