[Throughout this post we use AI to mean a particular type of machine-learning (ML) algorithm called the Deep Neural Network, or DNN for short. This is currently the most lucrative domain of research, both academically and financially.]
Let’s face it, if you’re following AI nowadays, not a week goes by in which you don’t hear of a major breakthrough. We often read of someone somewhere around the world managing to push the technology further. The result? Old problems that previously required some sort of cognition to accomplish can now be automated. Whatever AI touches gets done faster and more accurately than any expert could ever have dreamt of just a few years ago.
Entire industries have been shaken up in the process: Car manufacturing, big finance, medicine, military and government institutions, media, retail stores and logistics all got a taste of what these intelligent algorithms are capable of. The world will never be the same again.
This proliferation of use cases underlines something beautiful about these techniques. They're not merely a tool that one might carry in the proverbial toolbox, waiting for the right moment to be used. Rather, they have the characteristics of a problem-solving framework.
In other words, instead of AI giving you a hammer to hammer the nail, you give it the planks and it’ll make you the entire fence.
What does this look like on a practical level? It depends on your AI use-case taxonomy. The one we'll focus on in this article is data dimensionality, or how rich the domain is that the AI agent gets its input from. If we group AI use cases by their data dimensionality, we end up with the following:
The most groundbreaking innovation in AI that works with text is OpenAI’s natural language model, called GPT-2. Given a couple of sentences as a seed, GPT-2 can build on the seed in a realistic manner. How realistic, you ask? Take a look at their samples.
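To get an intuition for what "building on a seed" means, here is a toy sketch of seed-conditioned generation. This is emphatically not GPT-2: it's a tiny character-level Markov chain on a made-up corpus. But it illustrates the same autoregressive principle GPT-2 uses, namely predicting the next token from the preceding context and appending it, over and over.

```python
import random

def train(corpus, order=3):
    """Map every `order`-character context to the characters observed after it."""
    model = {}
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model.setdefault(context, []).append(corpus[i + order])
    return model

def generate(model, seed, length=40, order=3):
    """Extend the seed one character at a time, like GPT-2 extends a prompt."""
    out = seed
    for _ in range(length):
        context = out[-order:]
        candidates = model.get(context)
        if not candidates:  # unseen context: stop generating
            break
        out += random.choice(candidates)
    return out

# A deliberately tiny, made-up corpus for illustration only.
corpus = "the quick brown fox jumps over the lazy dog. the quick brown fox sleeps."
model = train(corpus)
print(generate(model, seed="the quick"))
```

GPT-2 replaces the lookup table with a deep Transformer trained on billions of words, which is what lets it stay coherent over whole paragraphs instead of a few characters.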
Taking GPT-2 further, Christine Payne of OpenAI created MuseNet, an AI model capable of generating music. Starting from a musical seed, much like the text seed we saw above, the model can take a song from one style or genre and recreate it in the style of a different musician. It's a marvelous engineering feat, especially when you remember that GPT-2 and MuseNet share the same underlying AI architecture.
But out of all the areas where we now see AI being used, images stand out as receiving the most active focus. In fact, that's how it all started: in 2012, AlexNet, a multi-object image-classification model, won first prize in the ImageNet challenge with an error rate of 15.3%, compared to the runner-up's 26.2%. Since then, image classification has become an industry standard. If you're interested in seeing it in action, you can play around with a webcam image classifier directly in your browser, courtesy of Google.
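It's worth pausing on what numbers like 15.3% actually measure. ImageNet results are typically reported as top-k error rates: a prediction counts as correct if the true label appears among the model's k highest-scoring classes (AlexNet's famous figure is a top-5 error). A minimal sketch of the metric, using made-up scores over four classes rather than a real model's output:

```python
def top_k_error(scores, labels, k=5):
    """Fraction of examples whose true label is NOT among the k highest-scoring classes."""
    errors = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores in this row
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if label not in top_k:
            errors += 1
    return errors / len(labels)

# Made-up class scores for 3 images over 4 classes (purely illustrative).
scores = [
    [0.1, 0.6, 0.2, 0.1],  # true class 1 -> correct even at top-1
    [0.5, 0.1, 0.3, 0.1],  # true class 2 -> wrong at top-1, correct at top-2
    [0.7, 0.1, 0.1, 0.1],  # true class 3 -> wrong at top-1
]
labels = [1, 2, 3]

print(top_k_error(scores, labels, k=1))  # 2 of 3 miss at top-1
print(top_k_error(scores, labels, k=2))
```

The larger k is, the more forgiving the metric, which is why top-5 error is always lower than top-1 error for the same model.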
How about repainting one image in the style of another? Ever thought about how Picasso would paint your vacation photos? Well, now you can see what it would look like.
Do you have old black-and-white photos lying around? Wouldn't it be nice if someone could add a bit of color to them? Well, how about a tool that colorizes them for you, for free, in a couple of seconds?
Exciting as this is, it's just the beginning. AI is going beyond static media and being applied to problem-solving in complex, dynamic environments. For instance, AI agents can now compete against humans in games; just look at the successes of OpenAI's Dota 2 bot, DeepMind's AlphaGo and DeepMind's AlphaStar, to name a few.
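Under the hood, all of these game-playing systems share the same agent-environment loop from reinforcement learning: observe a state, choose an action, receive a reward, repeat. Here's a toy sketch of that loop on a hypothetical one-dimensional "walk to the goal" environment, nothing like the real Dota 2 or Go setups, but the same interface those systems train against:

```python
import random

class WalkEnv:
    """Toy 1-D environment: start at position 0, reach position `goal` to win."""
    def __init__(self, goal=5, max_steps=50):
        self.goal, self.max_steps = goal, max_steps

    def reset(self):
        self.pos, self.steps = 0, 0
        return self.pos

    def step(self, action):  # action: -1 (step left) or +1 (step right)
        self.pos += action
        self.steps += 1
        done = self.pos == self.goal or self.steps >= self.max_steps
        reward = 1.0 if self.pos == self.goal else 0.0
        return self.pos, reward, done

def run_episode(env, policy):
    """The generic observe -> act -> reward loop every RL agent runs."""
    state, total, done = env.reset(), 0.0, False
    while not done:
        action = policy(state)
        state, reward, done = env.step(action)
        total += reward
    return total

random_policy = lambda state: random.choice([-1, 1])  # flails around
greedy_policy = lambda state: 1                       # always walks toward the goal

print("greedy reward:", run_episode(WalkEnv(), greedy_policy))
print("random reward:", run_episode(WalkEnv(), random_policy))
```

Systems like AlphaGo differ in the environment (a Go board instead of a number line) and the policy (a deep neural network trained by self-play instead of a one-liner), but the loop itself is the same.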
The fact that the same underlying algorithms can generate such a diverse range of use cases shows that these techniques are better thought of as a problem-solving framework than as a tool.
This shift requires a major adjustment — both in terms of how we think about AI and how we deploy it to create new products and services. What does our role as innovators look like when AI can generate so much? How do you envision AI impacting product development? Let us know what you think in the comments section below.