Core ML is a fairly new framework, introduced with iOS 11 in 2017. The idea is to allow integration of trained machine learning models into an iOS app. Core ML works with regression- and classification-based models, and requires its own Core ML model format (models with a .mlmodel file extension). We had some initial misconceptions about how Core ML fit into the grand scheme of machine learning, so understanding that it only works with pre-trained models took some wind out of our sails about its capabilities. That said, the important thing to remember is that Core ML is still essentially a 1.0 product. Like the initial version of nearly every Apple product, it ships with a reduced feature set so that Apple can react to developers' needs and expand features and functionality over time. The biggest limitation is that Core ML only works with pre-trained models; there is currently no support for additional training on device. The API does, however, provide a way to download and compile a model on a user's device, which lets you tune a model offline and deliver the update to users without pushing out a new app version to the store.
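To illustrate, here is a minimal sketch of that download-and-compile flow; `installModel(from:)` and `downloadedURL` are hypothetical names of our own, but `MLModel.compileModel(at:)` is the Core ML call that does the actual work:

```swift
import CoreML

// A minimal sketch: assumes networking code elsewhere has already
// downloaded the raw .mlmodel file to `downloadedURL` (hypothetical).
func installModel(from downloadedURL: URL) throws -> MLModel {
    // Compile the .mlmodel into the runnable .mlmodelc format on device.
    let compiledURL = try MLModel.compileModel(at: downloadedURL)
    // Compilation writes to a temporary directory; move the result to a
    // permanent location if it should survive across app launches.
    return try MLModel(contentsOf: compiledURL)
}
```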
It is worth noting that using Core ML also requires the latest of the latest from Apple: Xcode 9.2 and iOS 11. As for languages, Swift 4 seems to be preferred, as most examples and tutorials are written in it. You could use Objective-C, but that would likely require a lot more bridging code and probably isn't worth the effort. On top of that, Core ML only supports its own model format (.mlmodel). Thankfully, Apple provides a few pre-trained models already in that format, as well as conversion tools (Core ML Tools) and support for writing your own converter (converters already exist for MXNet and TensorFlow).
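Dropping one of those pre-trained models into an Xcode project generates a Swift wrapper class for it. Here is a rough sketch of what using that wrapper looks like with Apple's MobileNet model (the `classify` helper is our own; the `image` and `classLabel` names come from that particular model's definition):

```swift
import CoreML
import CoreVideo

// A rough sketch using the class Xcode generates for MobileNet.mlmodel.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let model = MobileNet()
    // MobileNet expects a 224x224 color image as a CVPixelBuffer.
    let output = try model.prediction(image: pixelBuffer)
    // `classLabel` is the top prediction; `classLabelProbs` maps every
    // known label to its probability.
    return output.classLabel
}
```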
For our experiment, we utilized the Core ML and Vision frameworks to integrate a pre-trained image classification model into a sample app that makes predictions about the contents of a photo.
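A condensed sketch of that flow, again assuming Apple's MobileNet model as the classifier; Vision handles scaling and cropping the image to the model's expected input size before handing it to Core ML:

```swift
import Vision
import CoreML

// A sketch of the Vision + Core ML classification flow.
func classifyContents(of cgImage: CGImage) throws {
    // Wrap the generated Core ML model for use with Vision.
    let visionModel = try VNCoreMLModel(for: MobileNet().model)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    // Vision scales the image to match the model's input, runs the
    // model, and delivers observations to the completion handler.
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```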