With all the buzz about machine learning, artificial neural networks, and other forms of artificial intelligence (AI), it’s fair to ask, “When will I have it on my smartphone?”
The answer, it turns out, is: Sooner than you might think. Despite the vast computing power required to train even a modest artificial neural network to perform a fairly simple task, developers are hard at work bringing artificial intelligence capabilities to mobile platforms.
Approaches to Mobile AI
When it comes to delivering AI on mobile devices, there are two basic approaches: Connect the mobile app to computing resources at a data center somewhere or run the algorithms on the mobile device itself. Each approach has advantages and disadvantages:
- Offloading the computational heavy lifting to a data center means the system can learn not only from your device but also from the accumulated input of millions of other devices, producing more reliable results. However, it also means you need a network connection every time you use an AI-enabled app, which can chew up much of your monthly data allotment.
- Keeping all the computation on the local device avoids the bandwidth issue, but it means the app must be “pre-trained” by its developers, so it might not be trained for whatever you’re using it for. And any further learning is based on data from only one user—you.
Developers are focusing their efforts on improving the local option, and the two main players, as you might expect, are Apple and Google. Apple’s machine learning framework for iOS is called Core ML; Google’s, for the Android platform, is called TensorFlow Lite.
Apple’s Core ML framework gives developers a large selection of artificial neural network types, enabling them to experiment with different designs when building intelligent apps. Apps can draw on data from the camera, microphone, and other onboard sensors for image recognition, natural-language processing, and more. Apple also provides a number of pre-trained models that developers can use “out of the box” or retrain to suit their needs; for example, several models already available can recognize certain objects and writing in different languages.
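To make the pre-trained-model workflow concrete, here is a minimal sketch of classifying a photo on-device with Core ML via Apple’s Vision framework. It assumes a pre-trained model file has been added to the app target; the `MobileNet` class name is an assumption of this sketch (Xcode generates a class named after whichever model you bundle).

```swift
import CoreML
import Vision
import UIKit

// Classify an image entirely on-device using a bundled, pre-trained
// Core ML model. `MobileNet` here is a placeholder for whatever model
// the app actually ships with.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // Vision wraps the Core ML model and handles image scaling and cropping.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Report the highest-confidence label, e.g. "golden retriever".
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Note that no network call appears anywhere: the model weights live inside the app bundle, which is exactly the trade-off described above.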
TensorFlow Lite is an on-device version of Google’s open-source TensorFlow project. At this writing it has not been released, so fewer specifics are known about it than about Core ML. We do know that it will provide a library of machine-learning functionality for Android devices, and Google has hinted that it is working with chip manufacturers on processors optimized for machine learning, promising greater speed and efficiency.
What It All Means
The advent of on-device AI frameworks represents the inevitable convergence of machine learning and the growing popularity of smartphones and tablets. Eventually, this means your smartphone will become even smarter: Imagine traveling in a foreign country and having your phone translate signs for you, or taking a picture of your dying houseplant and having your phone tell you how to save it. Has a part on your car or an appliance broken, but you don’t know what it’s called in order to buy a replacement? Take a picture of it and let your phone identify it for you (and find the best price for a new one). With AI on your phone, whole new worlds of functionality will open up.
At AndPlus, we are at the forefront of this new technology, and we stand ready to exploit the advantages that AI brings to the mobile platform. If you think your smartphone is pretty cool now, just wait until it’s not just smart, but intelligent.