The Origins of Artificial Intelligence

These days, AI is a hot topic. Companies across the globe are seeking out engineers with an understanding of machine learning in the hope of gaining a competitive edge in business. From newer fields like voice recognition, where companies like Apple use it to power smart assistants like Siri, to more traditional ones like finance, where firms like Rebellion Research use it to select investments, AI is making waves in industry. So let’s delve into where the field of artificial intelligence originated.

As it turns out, some of the research into artificial intelligence that has had a lasting influence began in the early to mid-20th century. From there, research persevered, at times facing significant headwinds, until reaching the point where we are today. The field saw its humble beginnings as ideas in the minds of early pioneers like Alan Turing, a British mathematician whose groundbreaking 1936 paper proposed a concept now called the Turing machine. A Turing machine is an abstract device that performs operations on a (potentially infinite) piece of tape by reading and writing symbols according to a simple table of rules, and despite its simplicity it can carry out any algorithm. Today, practically all common programming languages (Java, Python, C++, etc.) can express the same computations and are hence called “Turing-complete.” Turing made a further contribution in 1950, when he proposed the Turing Test, a way of defining machine intelligence: a machine is intelligent if and only if it can fool a suspicious judge, holding written conversations with both the machine and an actual person, into believing the machine is the human. To this day, researchers and engineers have struggled to pass the Turing Test [1]. As it turns out, what comes naturally to human beings is a great hardship for computational machines.
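
To make the tape-and-rules idea concrete, here is a minimal sketch of a Turing machine simulator in Python. The particular transition table (a machine that flips every bit on the tape) is a made-up example for illustration, not anything from Turing’s paper.

```python
# Minimal Turing machine simulator (illustrative sketch only).
# The rule table maps (state, symbol) -> (symbol to write, head move, next state).

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        if head >= len(tape):          # extend the tape with blanks as needed
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head = head + 1 if move == "R" else max(head - 1, 0)
    return "".join(tape)

# Example rules: flip every bit, then halt when the blank at the end is reached.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # prints "01001_"
```

The simulator supports left moves as well; this particular rule set just happens to scan to the right, which is all the bit-flipping example needs.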

Artificial intelligence received more attention in 1956 thanks to the Dartmouth Summer Research Project on Artificial Intelligence, which brought together top minds to theorize and speculate on the feasibility and possible forms of AI. Though not much of material substance was produced during the conference, artificial intelligence was deemed achievable, a spotlight was shone on the field, and more focused research began in the years to come [2].

In the 20 years following the Dartmouth conference, significant progress was made: algorithms improved, and processing power, which was at first incredibly limited, skyrocketed. Initially, the United States provided significant support through the Defense Advanced Research Projects Agency (DARPA) [2]. One of the government’s motivations for backing AI was its promise for rapid machine translation, specifically of Russian documents during the Cold War. However, disappointed by slower-than-expected progress and an unimpressive return on its investment, the government ceased funding AI research in 1974. That lack of funding characterized a period later dubbed the “AI Winter.” One takeaway researchers later drew from this dark age of AI is that excessive hype around a technology can be harmful if interest collapses following the perceived disappointment [1].

Fortunately, AI research saw a resurgence in the early 1980s, when “deep learning” techniques (which allow AI models to learn from experience) first began to gain recognition and expert systems came into use. In essence, an expert system is a program that captures the decision-making knowledge of seasoned company workers as rules, so that newer workers can draw on it and boost productivity. Japan was an early proponent of expert systems: through the Fifth Generation Computer Project (FGCP), its government gave researchers nearly half a billion dollars between 1982 and 1990 in the hope of transforming industry [2]. Though the FGCP didn’t revolutionize industry as much as authorities had hoped, it yielded significantly faster processors, and some of the brightest minds in academia rallied to study AI.
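
As a rough illustration of the rule-based idea behind expert systems, here is a minimal forward-chaining sketch in Python. The troubleshooting rules and facts are invented for the example; real expert systems of the era relied on much larger rule bases and dedicated inference engines.

```python
# Minimal forward-chaining rule engine (illustrative sketch of the expert-system idea).
# Each rule says: if all of these facts are known, conclude a new fact.

RULES = [
    ({"engine won't start", "battery is dead"}, "recommend: replace the battery"),
    ({"engine won't start", "fuel tank is empty"}, "recommend: refuel the tank"),
    ({"engine overheats"}, "recommend: check the coolant level"),
]

def infer(observations):
    """Apply the rules repeatedly until no new conclusions can be drawn."""
    facts = set(observations)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A newer technician supplies observations; the rule base encodes the expert's advice.
print(infer({"engine won't start", "battery is dead"}))
```

Encoding knowledge as explicit if-then rules is what distinguished this era of AI from the data-driven, learning-based approaches that dominate today.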

A pivotal moment in artificial intelligence that many have heard of is the creation and success of the chess AI Deep Blue [1]. Developed by the computing behemoth IBM, Deep Blue leveraged brute-force search over potential chess moves to play matches at an incredibly high level. In a groundbreaking and highly publicized match in 1997, Deep Blue defeated chess grandmaster Garry Kasparov. From there, attention turned toward applying AI to all kinds of practical problems, games, and business applications.
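
To give a flavor of what searching over potential moves looks like, here is a minimal minimax sketch in Python on a made-up game tree. The tree and its leaf scores are invented for the example; Deep Blue’s real search ran on specialized hardware with far more sophisticated evaluation and pruning.

```python
# Minimal minimax search over a hand-made game tree (illustrative sketch only).
# An int is a scored end position; a list is a position whose entries are the
# positions reachable by the next move.

TOY_TREE = [
    [3, 5, [2, 9]],   # replies available after our first candidate move
    [0, [7, 4], 1],   # replies available after our second candidate move
]

def minimax(node, maximizing=True):
    """Return the best score reachable if both sides play optimally."""
    if isinstance(node, int):          # leaf: an already-scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Pick the candidate move whose subtree guarantees the highest score.
best = max(range(len(TOY_TREE)), key=lambda i: minimax(TOY_TREE[i], maximizing=False))
print("best move index:", best)                  # -> 0
print("guaranteed score:", minimax(TOY_TREE))    # -> 3
```

Real chess engines generate the tree on the fly and prune most of it, but the alternating maximize-and-minimize logic is the same core idea.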

Nowadays, many important services, from social media to retail to marketing, run on “big data.” Massive quantities of information are aggregated from users and historical records, and raw processing power (thanks to continuous improvements in computer processors) is used to sift through it all and draw inferences. With this abundance of information, fields like computer vision, natural language processing, and autonomous driving are seeing significant development. The rapid expansion of AI technology and data collection has also raised ethical and moral concerns. What does personal privacy mean in an age where nearly everything you do is logged and analyzed, often for profit? What happens when jobs involving repetitive labor are replaced en masse by machines? These are issues we will need to think about seriously and grapple with moving forward. By appreciating AI’s past, we can put today’s achievements in context and frame further, more ethical, development in the future.

Works Cited

[1] Smith, Chris, et al. The History of Artificial Intelligence. University of Washington, Dec. 2006.

[2] Anyoha, Rockwell. “The History of Artificial Intelligence.” Science in the News, 28 Aug. 2017, https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.