
Designing and Building AI-Based Solutions

Sep 23, 2020

“The future ain’t what it used to be.” - Tom Petty, Spike

Consider two speculative sci-fi films, Blade Runner (1982) and 2001: A Space Odyssey (1968). Blade Runner was set in 2019; 2001: A Space Odyssey, naturally, in 2001. In both films, artificial intelligence (AI) plays a key role, and not in a good way.

In 2001, the sentient shipboard computer HAL inexplicably turns on the ship’s astronauts. Blade Runner revolves around eliminating synthetic humanoid “replicants” that are almost indistinguishable from humans.

The years in which those stories took place are behind us, and now it’s obvious we are nowhere near either of those dystopian outcomes. AI is not sophisticated enough to go rogue and cause havoc. In fact, most AI algorithms need extensive training just to become competent at simple, narrow tasks.

Still, neither the current limitations of AI nor the remote possibility of world domination by evil AI-driven devices should dissuade you from incorporating AI into your own applications. AI applications have real value and are now used in myriad ways to solve problems that are difficult or impossible to tackle with conventional computing approaches.

Why Use AI?

Do you want to build an AI-based solution? If you do, an important question to ask is: Why?

It helps to know what kinds of problems AI can solve. The CliffsNotes version is this: AI is good at examining huge amounts of data to find patterns that approximately match a target pattern (or a limited set of patterns).

The “huge amounts of data” part is important because small amounts of data can be processed by a human; huge amounts can’t, at least not quickly or reliably. Unlike humans, AI algorithms don’t get tired, bored, or distracted.

The “approximate match” part is important because exact matches are what conventional algorithms are quite good at. AI shines when there is wide variability in the data.

Let’s consider an AI algorithm intended to identify pictures of cows. No two cows look exactly alike, and a single cow can look different from different angles. A conventional algorithm, which depends on exact rules and exact matches, would find identifying cows an impossible task.

An AI-based algorithm can be trained, by processing thousands of pre-annotated images, to distinguish cows from other objects. The same algorithm can also learn to identify other kinds of objects, at the cost of additional annotated data and training time.
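To make that concrete, here is a minimal sketch of how such a classifier might be trained, using the Keras API in Python. The data/train directory and its cow/not_cow subfolders are hypothetical stand-ins for your own pre-annotated images; a production model would need far more data and tuning.

```python
# A minimal sketch of training a "cow / not cow" image classifier with
# Keras. The data/train directory (hypothetical) is assumed to contain
# two subfolders of pre-annotated images: "cow" and "not_cow".
import tensorflow as tf

# Load the labeled images; the subfolder names become the class labels.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", label_mode="binary",
    image_size=(128, 128), batch_size=32)

# A small convolutional network: stacked filters that learn visual
# features (edges, textures, shapes) from the training images.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # scale pixels to 0..1
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cow vs. not cow
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Each epoch is one full pass over the training set; the optimizer
# nudges the model's parameters on every batch to reduce its error.
model.fit(train_ds, epochs=10)
```

Retraining the same architecture to spot, say, horses is just a matter of swapping in a different annotated dataset.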

If the problem you’re trying to solve involves recognizing patterns in large amounts of data with quite a bit of variability, AI is the way to go.

Designing an AI Solution

Once you’ve made the decision to build AI into your solution, it’s time to choose the specific AI technology. Many approaches exist, and picking the right one for your problem, then setting it up properly, requires some technical expertise.

Once the technology is chosen and the algorithm set up, the algorithm must be trained. This can be the hard part, depending on how much training data you have available.

A child can flip through a picture book of animals and, within an hour, learn to identify dozens of different animals from a single picture of each. Not so for AI, which typically requires thousands of examples to recognize one target pattern and distinguish it from cases where that pattern is absent.

The more data you have to train with, the more reliable your AI solution can become. But there are some catches:

All that training data must be annotated. Images containing cows need to be labeled “cow” and those without, “not cow.” This tedious task falls to human operators. If you have, or can obtain, sufficient pre-annotated training data, you’re in good shape. If not, getting that data and annotating it can be a project in itself (a sketch of what annotated data looks like follows below).

The training data must live somewhere, and the algorithm has to process all of it multiple times, tweaking its parameters on each pass to improve its accuracy. Training can demand immense storage and computing resources.
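On the annotation point, here is a minimal sketch of one common form annotated data takes: a manifest pairing each image file with its human-assigned label. The file names and labels below are hypothetical.

```python
# A minimal sketch of an annotation manifest: every training image gets
# a human-assigned label. File names and labels here are hypothetical.
import csv

annotations = [
    ("images/0001.jpg", "cow"),
    ("images/0002.jpg", "not_cow"),
    ("images/0003.jpg", "cow"),
    # ...thousands more rows, each reviewed by a human operator
]

with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "label"])
    writer.writerows(annotations)
```

Multiply those three rows by thousands, and the scale of the annotation effort becomes clear.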

Testing the Algorithm

The fun doesn’t stop when the training phase is finished. You also need to test your solution, and for this you need even more annotated data. This time, you’re feeding the algorithm data it has never seen before to measure how accurately it identifies the target pattern.

How accurate is good enough?

That depends on the problem being addressed; the minimum accuracy should be determined up front as part of the requirements. The threshold might be based on legal requirements, industry standards, regulations, or ethical considerations.

Unless the data has little variability, few algorithms will perform at 100% accuracy, with no false positives or false negatives. If testing shows the algorithm doesn’t meet your minimum threshold, you need to rethink the technology choice or the design.
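As a sketch of what that testing step can look like, the snippet below trains a simple scikit-learn classifier on synthetic stand-in data, scores it on held-out data it has never seen, and checks the result against a hypothetical 95% minimum accuracy requirement.

```python
# A minimal, self-contained sketch of the testing step using
# scikit-learn. The synthetic dataset and the 95% threshold are
# hypothetical stand-ins for your own annotated data and requirements.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.95  # decided up front, as part of the requirements

# Stand-in data; in practice this is your annotated dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Hold back 20% of the annotated data that the model never trains on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score the model only on the unseen test data.
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)

# The confusion matrix separates false positives from false negatives,
# which often matter more than raw accuracy.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"accuracy={accuracy:.3f}  false positives={fp}  false negatives={fn}")

if accuracy < MIN_ACCURACY:
    print("Below the minimum threshold: rethink the technology or design.")
```

Whether raw accuracy, the false-positive rate, or the false-negative rate is the right gate depends on the problem: a medical screening tool and a photo-tagging feature tolerate very different kinds of errors.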

Not an AI Expert? Ask for Help

Developing an AI algorithm requires some esoteric knowledge and skill. As AI grows in popularity, more developers are acquiring the experience and skills to do the job, but unless you know the right questions to ask, it’s hard to tell whether you’re talking to a real expert.

At AndPlus, we’ve been honing our AI skills for years. We can help you make the right technology and design choices for any AI-based application. We will be happy to discuss your ideas. Let’s partner to make your AI-based solution a reality.

LET'S TALK


Written by Chris DeProfio

Chris runs the “engine room” of AndPlus’ world-class engineering team that solves problems using a myriad of technologies. He is responsible for all aspects of product engineering and quality assurance, and often works closely with clients. He also manages the AndPlus employee professional development program, mentoring and guiding employees in their technical, business, and management skills development. Chris received a BA in Computer Science from Clark University, and is a certified Scrum Master.
