Two recent articles in the Wall Street Journal, one by technology columnist Christopher Mims and another by IBM CEO Ginni Rometty, provide glimpses into where artificial intelligence (AI) is likely to take us. From both, one conclusion is clear: it’s all in the data.

The end of the year is a good time to think about the future, and AI will increasingly be a part of everyone’s future. The gist of both Rometty’s and Mims’s arguments is that data, big data, is what will make AI truly possible.

While today’s smart assistants, like Alexa and Siri, are entering our everyday lives, they represent only the beginning. Already, Alphabet (Google), Amazon, and Microsoft are making their AI smarts available to other businesses on a for-hire basis. They can help you make a gadget or app respond to voice commands, for example, and can even transcribe those conversations for you. Add abilities like face recognition to identify objectionable content in images, and you begin to see how troves of data (in these cases, voice and image) are being transformed into usable function.

But all this data and technology, notes Mims, are not going to suddenly blossom into AI. According to data scientist Angela Bassa, real intelligence is still about ten years away.
Why? Three obstacles:
- Not enough data. Most companies simply don’t have enough data to do deep learning that can make much more than an incremental difference in company performance. Customers are “more interested in analytics than in the incremental value that sophisticated AI-powered algorithms could provide.”
- Small differences generally cannot yet justify the expense of creating an AI system.
- There is a scarcity of people to build these systems.
Ms. Bassa, who notes that there are only about 5,000 people in the world who can put together a real AI system, adds that “creating systems that can be used for a variety of problems, and not just the narrow applications to which AI has been put so far, could take decades.”
IBM CEO Ginni Rometty notes that the term artificial intelligence was coined back in 1955 to convey the concept of general intelligence: the notion that “all human cognition stems from one or more underlying algorithms and that by programming computers to think the same way, we could create autonomous systems modeled on the human brain.” Other researchers took a different approach, working from the bottom up to find patterns in growing volumes of data, an approach called intelligence augmentation, or IA. Ironically, she notes, it was the methodology not modeled on the human brain that led to the systems we now describe as ‘cognitive.’
Rometty observes, fittingly, that “it will be the companies and the products that make the best use of data” that will be the winners in AI. She goes on: “Data is the great new natural resource of our time, and cognitive systems are the only way to get value from all its volume, variety and velocity.”
She concludes with a noteworthy commentary: “Having ingested a fair amount of data myself, I offer this rule of thumb: If it’s digital today, it will be cognitive tomorrow.”