Today AI is an over-hyped buzzword, but it hasn’t always been like that. Only a few years ago, saying your company worked with AI would hardly excite anyone. DeepMind CEO Demis Hassabis put it this way: “Seven years ago, if you said the words AI to a venture capitalist, they would roll their eyes at you. Today, they will throw ten million dollars at you.” The business world is embracing AI with open arms, but the problem is that few people really know what real AI is. Companies and VCs alike struggle to evaluate whether a company is actually working with the technology or just cashing in on the hype.
A survey by the London-based VC firm MMC found that 40% of Europe’s startups classified as AI companies don’t use the technology in any material way. David Kelnar, head of research at MMC, told Forbes: “We looked at every company and in 40% of cases we could find no mention or evidence of AI.” He added that “companies that people assume and think are AI companies are probably not”.
So how can you tell whether the solutions being sold to you are really based on AI? First, you need to understand what AI can actually do for you — you can read more about that in our previous article. Second, you need to know a bit about the historical context of the field. Though AI has been around for over 60 years, it is only in the past few years that significant progress has been made in creating anything resembling artificial intelligence.
The first intelligent machine was created by Alan Turing during World War II. This machine, called the Bombe, cracked the ‘Enigma’ code used by German forces to send encrypted messages. Then, in 1956, the term ‘Artificial Intelligence’ was first adopted at the Dartmouth Conference. In the decades that followed, largely due to a lack of computational power, AI fell out of favour and went through several ‘AI winters’. It wasn’t until the late 1990s that AI once again received significant attention: in 1997, IBM’s Deep Blue defeated Garry Kasparov, the reigning world chess champion. From there on, thanks to the exponential growth of data, increasing computing power and improvements in hardware, AI has steadily picked up pace.
Today, we are at a point where massively funded research is being done in the field, and breakthroughs in, among others, natural language processing, image recognition and generation, computer vision and reinforcement learning are rapidly shaping the industry and opening up new possibilities. Still, there are many ways to build artificial intelligence. Some are very intelligent, others not so much.
AI can be a pile of if-then statements, or a complex statistical model built using deep neural networks. The if-then statements are essentially just rules programmed by humans; such systems are sometimes called rules engines or expert systems, but collectively they are known as Good Old-Fashioned AI (GOFAI). These systems can be useful for conducting repetitive tasks, but they have little to do with actual intelligence: they can automate processes, but they don’t self-learn or improve without human intervention. You all know examples of this technology — most chatbots and accounting systems are built on it. GOFAI also lacks robustness to the kind of variation found in naturally generated data such as text, and this way of building AI is quite limited in the scope of data it can process. Machine learning models and neural networks, on the other hand, require little to no human intervention. These programs alter themselves: they are dynamic and adjust based on the data they are exposed to, which lets them assist people in their work and simplify daily tasks. Still, they may require quite a lot of data to reach sufficient performance.
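To make the distinction concrete, here is a minimal sketch of the if-then style behind many rule-based ‘chatbots’. The rules and phrases are entirely hypothetical illustrations, not taken from any real product:

```python
# A hypothetical GOFAI-style rules engine: every answer is hand-written,
# and nothing is learned from data.
RULES = {
    "refund": "To request a refund, visit your order history.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}

def rule_based_reply(message: str) -> str:
    """Return a canned answer if a keyword matches; otherwise give up."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Any phrasing the programmer didn't anticipate falls through --
    # the system cannot adapt on its own.
    return "Sorry, I don't understand."

print(rule_based_reply("What are your opening hours?"))
print(rule_based_reply("Can I get my money back?"))  # no keyword matches
```

The second query means the same as “refund”, but because the rule matches only the literal keyword, the system fails — exactly the brittleness to natural language variation described above. A learning-based system would instead infer the intent from examples.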
MMC’s report found that 26% of the startups in the study said they use AI to power chatbots, but it is hard to evaluate how much benefit the technology brings to their customers. Chatbots are often hard to navigate and more annoying than useful — deployed simply to cut the cost of human employees. The reason is that, though called ‘AI’, they are rule-based systems incapable of real understanding (see illustration above). However, recent developments in the AI field have enabled more sophisticated generations of dialogue-based tools that use Natural Language Processing (NLP) and Deep Learning to understand meaning and answer in natural text.
By transforming text into vector representations — sequences of numbers that a model can operate on — it is possible to process text in completely new ways. Combining NLP with transfer learning (applying pre-trained models to new data) is opening the door to completely new possibilities in text generation, understanding and translation.
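As a toy illustration of the idea — not how modern NLP embeddings are actually computed, which involves trained neural networks — here is a minimal bag-of-words sketch that turns sentences into vectors of word counts and compares them numerically:

```python
from collections import Counter
import math

def bow_vector(text: str, vocab: list) -> list:
    """Represent a sentence as a vector of word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocab]

def cosine(a: list, b: list) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for no overlap."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stocks fell sharply today",
]
vocab = sorted({word for doc in docs for word in doc.lower().split()})
vecs = [bow_vector(doc, vocab) for doc in docs]

print(cosine(vecs[0], vecs[1]))  # overlapping words -> higher similarity
print(cosine(vecs[0], vecs[2]))  # no shared words -> 0.0
```

Once text is numeric, similarity, classification and clustering all become arithmetic. Pre-trained embeddings take this much further: they place words with similar meaning (not just identical spelling) near each other in vector space.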
Natural Language Processing
In 2018 we saw remarkable breakthroughs in language and text. OpenAI’s GPT-2 generates stories from short prompts. The model is trained to predict the next word, but unlike similar models it does so while maintaining the context of the whole text, so each prediction draws on a representation of all the previous input. Facebook Research has expanded its LASER (Language-Agnostic SEntence Representations) toolkit to work with 93 languages across 28 different alphabets. The model delivers strong results in cross-lingual document classification and is revolutionising translation. Going forward, we expect pretrained language model embeddings to be widely leveraged in state-of-the-art models. Other examples of important text representation tools are ELMo, BERT and XLNet.
One of the most popular fields in the deep learning space — computer vision — has also seen great advances. Whether for images or video, new frameworks and libraries are making computer vision tasks easier. BigGANs are now capable of high-fidelity image synthesis, producing images that are nearly indistinguishable from real photographs. We have all seen deepfake videos of world leaders, or artworks brought to life. In the future we can expect this technology to be used across a wide range of fields such as holograms, teaching and filmmaking.
DeepMind’s AlphaZero is the new, improved and more generalised version of AlphaGo and AlphaGo Zero. While the original AlphaGo was a champion game-playing AI, it was first taught the game by studying human play. AlphaZero, on the other hand, taught itself from scratch by playing against itself. Unconstrained by human tactics, it has come up with novel strategies for playing Go, chess and shogi, and forms its own evaluations of the game.