Artificial intelligence (AI) is defined as “a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior,” according to Merriam-Webster. Recent advancements in the field, however, have shown that AI is far more than a scientific curiosity: the technologies built on it have the potential to alter the world as we know it.
Artificial intelligence used to be the stuff of science fiction, but real-world research on the concept dates back to the 1950s. Alan Turing explored the theory in a landmark 1950 paper, though the computers of the day weren’t yet powerful enough to bring the idea to fruition. In 1955, the Logic Theorist program, funded by the Research and Development (RAND) Corporation, became what many consider the first example of AI. The program was designed to mimic the human mind’s problem-solving ability, and it helped set the stage for a historic 1956 conference, the Dartmouth Summer Research Project on Artificial Intelligence.
As computers became faster, more powerful, and less expensive, AI picked up steam through the '70s. Successful projects began emerging in scientific communities, some even securing funding from government agencies. Then, for close to a decade, AI research hit a wall as funding lapsed and theoretical ambitions once again outpaced computing power. The biggest exception was Japan’s government-funded Fifth Generation Computer Systems project, a $400 million effort to advance artificial intelligence that ran from 1982 to 1990.
The 1990s and 2000s saw huge advancements in artificial intelligence as the fundamental limits of computer storage yielded to new hardware innovations. As AI applications become more and more prevalent in daily life, it helps to understand the context of some of the most important advances in AI history.
Stacker explored 25 advances in artificial intelligence spanning a range of uses, applications, and innovations. Whether it’s robots, supercomputers, health care, or search optimization, AI is coming up strong.
Known for its AI robots and the viral videos they generate, Boston Dynamics truly changed the public’s understanding of what robots with AI technology are capable of. Videos of its first big robot, BigDog, went viral after its 2005 debut, and millions of people have since watched the company progress from smaller four-legged robots to full-fledged humanoids able to perform parkour and backflips.
First launched in the U.S. in December 2018, ProFound AI is cancer-detection software that assists radiologists in reading digital breast tomosynthesis (DBT) scans, helping them spot cancers earlier. While the technology has so far reached only select high-profile hospitals, it is a strong precursor of what’s to come for AI in medicine.
October 2018 marked the first sale of AI-generated artwork at a major auction, signaling AI’s emergence into the art world. The portrait was created by Obvious, a Paris-based collective studying the interface between art and AI, and sold at the world-renowned Christie’s auction house for $432,500 (almost 45 times its estimate).
The Japanese startup DataGrid began creating AI-generated faces in 2018, but its work didn’t go viral until April 2019, when a press release revealed a series of photorealistic images of what appeared to be humans but were, in fact, created by AI. The advance relies on deep learning algorithms that produce images realistic enough to fool human viewers.
TensorFlow, the deep-learning library that Google uses in its own products (search, translation, recommendations, and more), was first released to the public in 2015. TensorFlow is open source, meaning anyone can download and use it for free, and its release symbolized the importance of making machine learning and AI available to anyone for any purpose.
DeepMind Technologies’ AlphaGo first made headlines when it defeated European champion Fan Hui at the ancient board game Go in 2015, before going on to beat world champion Lee Sedol in 2016. AlphaGo famously marked a new age of advanced AI programs, combining traditional tree-search methods with deep neural networks.
Generative adversarial networks (GANs) were first introduced by Ian Goodfellow and colleagues at the 2014 Neural Information Processing Systems conference. The new machine learning framework was innovative: two artificial neural networks, a generator and a discriminator, compete with each other and thereby train one another, producing ever better results. A popular example of the technique at work is the This Person Does Not Exist website, which uses a GAN to generate human faces.
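To make the idea of two models training each other concrete, here is a minimal, purely illustrative sketch in Python. It is not a real image-generating GAN: the “generator” is a single number trying to imitate a real data value (3.0), and the “discriminator” is a one-feature logistic classifier. All names and values are hypothetical; only the alternating adversarial training loop reflects the actual GAN framework.

```python
import math

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    if x < -60:
        return 0.0
    if x > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(real=3.0, steps=3000, lr_d=0.05, lr_g=0.05):
    theta = 0.0      # generator's single parameter (the "fake" sample it emits)
    w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
    for _ in range(steps):
        fake = theta
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        # Discriminator step: ascend log D(real) + log(1 - D(fake)),
        # i.e. learn to score real data high and fakes low.
        w += lr_d * ((1 - d_real) * real - d_fake * fake)
        b += lr_d * ((1 - d_real) - d_fake)
        # Generator step: ascend log D(fake) (non-saturating loss),
        # i.e. nudge theta toward whatever the discriminator calls "real".
        d_fake = sigmoid(w * fake + b)
        theta += lr_g * (1 - d_fake) * w
    return theta

print(train_toy_gan())  # the generator's output drifts toward the real value, 3.0
```

The key point is that neither model is given the answer directly: the generator only ever sees the discriminator’s feedback, yet the competition pushes its output toward the real data.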
In 2015, the first cross-country road trip by an autonomous car was completed in nine days. The Audi SQ5 was equipped with autonomous driving hardware and software from Delphi Technologies, allowing the car to make human-like decisions on everything from merging onto highways to parking. Though the safety driver (seated in the front seat, as the law required) reportedly had to intervene a few times, the trip marked a huge advancement for driverless car technology.
When IBM’s Watson computer defeated two “Jeopardy!” all-stars in February 2011, it bridged the gap between technology and popular culture in a big way. Watson held about one million books’ worth of information in its storage, and when posed with a question, it ran several different algorithms to rank candidate answers. This set it apart from traditional keyword-based search software, as “Jeopardy!” clues are inherently human in nature, mixing colorful wordplay with puns and cultural references.
Siri may be present in most modern Apple products today, but when the automated voice-assistant technology first came out commercially in 2010, it was cutting-edge. After Siri was released as an app, it caught the eye of Steve Jobs; Apple bought the technology for $200 million and hired two of its inventors. The third inventor went on to co-found Viv Labs, which Samsung acquired for $215 million in 2016.
In July 2019, scientists at the University of Wisconsin-Madison created artificially intelligent glass that can recognize images without any power source. One potential use for this AI “smart glass” is a face-recognition lock for your phone that doesn’t touch the battery. In principle, the glass could form a biometric lock that stands the test of time with no need for an internet connection or battery.
2019 saw a huge advancement for artificial intelligence in astrophysics with the development of the Deep Density Displacement Model. In June, astrophysicists used AI to generate 3D simulations of the universe for the first time, cutting the time needed to produce complex simulations from minutes to milliseconds.
In February 2019, a study published in Nature Communications revealed that machine learning had found evidence in the human genome of unknown human ancestors, a “ghost population.” Somewhere along the line of human evolutionary history, a previously undiscovered group of hominins interbred with Homo sapiens, leaving traces of their DNA behind. The study is one of the first to show directly how AI can help humans understand our own origins.
In a chilling artificial intelligence development, health care data scientists and doctors found in March 2019 that AI could accurately predict premature death in patients. The study tested a new system of machine learning algorithms designed to predict the risk of chronic illness and early death in middle-aged patients. Such computer-based techniques have the potential to improve preventive health care by helping doctors take biometrics, lifestyle, and more into account.
Chinese technology company Huawei released one of the first smartphones with dedicated artificial intelligence hardware in September 2017, becoming the first company to make AI the key selling point of a phone. The Huawei Mate 10’s Kirin 970 chipset included a dedicated neural processing unit for fast on-device AI tasks, such as intelligent photo processing. Though Huawei has since released newer models with the technology, the original Mate 10 marked a leap toward the future of mobile AI.
In 2016, Facebook announced that it would bring chatbots to its Messenger platform, giving businesses the opportunity to provide automated customer support, interactive content, and more. The ability to hand off tasks like making reservations, reviewing orders, and responding to users with structured messages was a game-changer for businesses at the time. Facebook has continued to develop chatbot technology through its Facebook Artificial Intelligence Research (FAIR) program.
OpenAI, the artificial intelligence research company co-founded by Elon Musk, unveiled an AI system that can generate convincing text in February 2019. The language model, GPT-2, became available to the public in limited form shortly thereafter and challenged the long-held assumption that AI could not be creative.
The term “deepfake” is believed to have been coined by a Reddit user in 2017; it describes AI-based machine learning technology that realistically superimposes images or video onto source footage with incredible accuracy. While the tech is often used playfully, deepfaking goes far beyond simple video editing, and the U.S. government is concerned about its future misuse and weaponization.
Nautilus Labs entered the scene in 2016, when it began building AI platforms for the maritime shipping industry. By using machine learning to make predictions about shipping routes, the technology developed by Nautilus aims to maximize profit and minimize fuel consumption (a big win for the environment, too).
With its initial development beginning in 1984, Cyc is the longest-running artificial intelligence project in history. The AI engine spent over 30 years assembling a base of common-sense knowledge before being commercialized in 2016 by Lucid AI. Cyc represents the longevity possible for AI systems, and only time will tell what the future holds for the program.
The rise of Alexa and the Amazon Echo changed the world of smart-home artificial intelligence when the product first launched in November 2014. One of the Echo’s advantages over similar AI assistants is its far-field microphones, which can pick up the wake word (usually “Alexa”) from across the room to summon the virtual assistant.
In 1997, IBM’s Deep Blue computer became the first machine in history to beat a reigning world chess champion, Garry Kasparov, famously proving to the world that AI could be developed to surpass humans in certain intellectual tasks. Deep Blue paved the way for IBM’s future AI superstar, Watson, and changed the world of technology forever.
The inaugural Defense Advanced Research Projects Agency (DARPA) driverless car competition in 2004 was the first of its kind. It was also generally considered a failure, as none of the qualifying cars managed to reach the finish line. At the following year’s competition, however, a few cars completed the course, building momentum for three more important DARPA AI competitions: the Spectrum Challenge for radio, the Robotics Challenge, and the Cyber Grand Challenge for automated network defense.
Waymo, the autonomous vehicle company that grew out of Google’s self-driving car project, made history in December 2018 when it announced a commercial self-driving taxi service in Arizona. Though the service is initially available only to select customers, and the cars still carry safety drivers in the front seat in case things go awry, the app-based program is the first of its kind.
Before the days of Siri and Alexa, a language-processing program called ELIZA started it all. Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory developed ELIZA from 1964 to 1966 as a simulated Rogerian psychotherapist: users typed statements and received conversational replies. The tech was nowhere near as advanced as today’s language-processing systems, but ELIZA laid the groundwork.