Artificial intelligence is suddenly in people’s homes, driving their cars, and running their security systems. Users interact with chatbots, sometimes unaware they’re not talking to live people. Designers and marketing agencies trust computer-generated insights and machine learning over human input when making business decisions. Artificial intelligence seemed to arrive overnight, but it is the product of a series of developments stretching back hundreds of years.
#1 1637 Descartes’s Discourse on Method
It’s hard to imagine that, 381 years ago, anyone could have conceived of artificial intelligence. As a point of reference, indoor plumbing was still almost 200 years away. Yet, in his book “Discourse on Method,” René Descartes did just that. He writes: “If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men.” From there he explains why he doesn’t believe machines could ever hold a fully nuanced conversation. The seed was planted. Perhaps Descartes should be thought of as the father of AI.
#2 1950 Alan Turing
313 years later, Alan Turing published a paper titled ‘Computing Machinery and Intelligence,’ in which he speculates about creating a machine that can think. In it he devised the famous Turing Test: conducted over a teletype, the test asks whether a machine can carry on a conversation that can’t be distinguished from a human conversation. This was the first serious look at the possibility of artificial intelligence.
#3 1951 Stochastic Neural Analog Reinforcement Computer (SNARC)
Marvin Minsky and Dean Edmonds built SNARC out of vacuum tubes, motors, and clutches. The machine’s goal was to guide a simulated rat through a maze. Each time the machine received feedback on an instruction it had sent, it modified the next instruction based on that feedback. What Minsky and Edmonds had created was the first artificial neural network. In plain English, it learned by trial and error, which remains one of the most effective ways to train artificial intelligence: the system makes a guess, receives feedback, and then makes a better guess. Not unlike how humans learn. When babies are born, it’s through trial and error that they learn the rules of engagement of the world around them.
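That guess-feedback-better-guess loop is the essence of what is now called reinforcement learning. Here is a minimal sketch of the idea in Python, using a hypothetical one-dimensional “maze” and a simple tabular value-update rule; the maze layout, rewards, and parameters are all illustrative assumptions, not a reconstruction of how SNARC actually worked:

```python
import random

# A toy one-dimensional "maze": states 0..4, with the exit at state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action; reward is given only on reaching the exit."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # estimated value of each (state, action)
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            # The "guess": usually exploit the current estimates, sometimes explore.
            if rng.random() < eps:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] >= q[state][1] else 1
            nxt, reward = step(state, action)
            # The "feedback": nudge the estimate toward reward plus discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# The learned policy: the higher-valued action in each non-goal state.
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(GOAL)]
```

After enough guesses and corrections, the table favors “step right” everywhere, so the agent walks straight to the exit despite never being told where it is.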
#4 1955 Logic Theorist
Allen Newell, Herbert A. Simon, and Cliff Shaw wrote a computer program called Logic Theorist. The program was designed to mimic a human being’s problem-solving skills. Logic Theorist is now recognized as the first artificial intelligence program, although at the time the term AI had not yet been coined.
#5 1956 Dartmouth Conference: AI is Born
Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester organized the Dartmouth Conference, held in 1956. Logic Theorist debuted at the conference, and it was McCarthy who convinced the attendees to officially adopt the term “Artificial Intelligence” as the name of this new field of study. The Dartmouth Conference is generally considered the birth of AI.
#6 1995 Self-Driving Car
Ask most people when the first self-driving car hit the streets and they will likely give you a very recent date. It’s not widely known that the first self-driving car took to the road in 1995, when a modified, mostly autonomous Mercedes-Benz S-Class developed by Ernst Dickmanns’s team drove the 1,043 miles from Munich to Copenhagen. This was achieved by essentially stuffing the trunk of the car with a supercomputer. The car reportedly reached speeds of up to 115 miles per hour and wasn’t that different from autonomous cars today.
#7 1997 Deep Blue
The chess-playing machine Deep Blue, developed by IBM, became the first computer to defeat a reigning world chess champion, beating Garry Kasparov in a six-game match.
#8 1998 The Furby
One of the first successful mass-market robots for the home launched in 1998. Tiger Electronics released the Furby during the holiday season, and it quickly became the trending toy of the time. The Furby, which looked somewhat like a small, fuzzy owl, spoke a garbled language called Furbish. It was programmed to mimic the process of language development by gradually using English words in place of Furbish. In its first three years of production, over 40 million Furbies were sold.
#9 2002 The Roomba
In September 2002, iRobot brought the Roomba to market. The Roomba is a robotic vacuum cleaner that uses sensors to autonomously move about the house cleaning the floors. Best known for being able to change direction when it senses an obstacle, it can also detect dirty spots and proactively avoid falling down the stairs. Newer versions are able to more systematically clean the floor area by utilizing onboard mapping and navigation software.
#10 2010 Siri
Siri Inc., a spin-off of SRI International, released Siri as a mobile application in early 2010. Two months later, Apple acquired the company, and in 2011 it revolutionized the mobile phone experience by releasing Siri as an integrated component of the iPhone 4S. Siri is an intelligent personal assistant capable of making recommendations, answering questions, scheduling events, adjusting device settings, and conducting searches. The more a user interacts with Siri, the more the software adapts to individual preferences, so the results Siri returns are individualized. Siri was really the beginning of the conversational user interface, and one of its most profound elements was the ability to use natural language.
#11 2015 ImageNet Challenge
AI is helping to master the image recognition challenge. In 2015, systems from Google and Microsoft surpassed human-level performance at identifying images and objects across the ImageNet Challenge’s 1,000 categories. Called “deep learning” systems, these machines beat the benchmark set by the ImageNet Challenge. This is an exciting breakthrough for automating tasks that require recognizing an object or person and then deciding how to proceed based on that recognition. The applications of this technology could be widespread across many industries; for example, a farmer could use it to identify when crops are ready to be picked.
#12 2017 AlphaGo Zero
Arguably, one of the most significant AI milestones to date occurred when AlphaGo Zero taught itself to play the board game Go and defeated AlphaGo, the earlier version that had beaten top-ranked player Lee Sedol in 2016, by 100 games to none. AlphaGo Zero is the first version to learn entirely without human data. Previous versions, including the original AlphaGo, were initially fed records of human games to learn from. AlphaGo Zero learned by playing against itself millions of times over the course of three days.
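Self-play is easiest to see on a far smaller game. The sketch below is a hypothetical illustration only, not a reconstruction of AlphaGoZero’s actual method, which combines deep neural networks with Monte Carlo tree search; it has a single value table teach itself Nim, where players alternately take one or two stones and whoever takes the last stone wins, purely by playing both sides of the board:

```python
import random

TAKE = (1, 2)  # legal moves: remove one or two stones

def train(pile=10, episodes=5000, alpha=0.5, eps=0.3, seed=1):
    """Learn V[s], the value of having s stones left on *your* turn:
    near +1 means the player to move should win, near -1 should lose."""
    rng = random.Random(seed)
    V = [0.0] * (pile + 1)
    for _ in range(episodes):
        s = pile
        while s > 0:
            moves = [a for a in TAKE if a <= s]
            # Evaluate each move from the mover's perspective: taking the last
            # stone wins outright; otherwise the resulting position's value
            # flips sign, because the opponent moves next using the same table.
            def value(a):
                return 1.0 if s - a == 0 else -V[s - a]
            # Trial-and-error update toward the best outcome currently visible.
            V[s] += alpha * (max(value(a) for a in moves) - V[s])
            # Pick a move (with occasional exploration) and hand the turn over.
            a = rng.choice(moves) if rng.random() < eps else max(moves, key=value)
            s -= a
    return V

V = train()
# Nim theory says piles divisible by 3 are losing for the player to move.
losing = [s for s in range(1, 11) if V[s] < 0]
```

With no opponent data at all, the table rediscovers the classic result that piles of 3, 6, and 9 stones are lost positions, because both “players” share and sharpen the same estimates.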
The significance of the AlphaGo Zero breakthrough can’t be overstated. Artificial intelligence that can teach itself opens the world to possibilities that would seem fantastical to us now. Is it so hard to imagine home automation going from following our directions to anticipating our needs and acting on them independently? Those old enough to remember Rosie the Maid from The Jetsons might be pretty excited at the prospect of having their own Rosie someday. What started as mere imagination 381 years ago is now reality, beyond some of the most ambitious concepts of the time. Where will we be 381 years from now?
Stephen Moyers is a writer who shares his take on social media, web design, mobile apps, online marketing, entrepreneurship, startups, and much more in the cutting-edge digital world. He is associated with SPINX Digital, a Los Angeles web design company and digital marketing agency. When he is not writing, he can be found traveling outdoors with his camera. You can follow Stephen on Twitter @StephenMoyers.