A Chronological Build-Up of Artificial Intelligence

Artificial intelligence seems to dominate global news lately, but its emergence dates back several decades. Let’s refresh on this exciting story of AI. In the distant past, the study of mechanical or “formal” reasoning began with philosophers and mathematicians. Alan Turing’s theory of computation was born out of the study of mathematical logic. It suggested that a machine could simulate any conceivable act of mathematical deduction by shuffling symbols as simple as “0” and “1”. This insight, known as the Church–Turing thesis, together with findings in neurobiology, information theory and cybernetics, led researchers to conceive the idea of creating an electronic brain. According to Turing’s proposal, if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent”. The first work now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

A workshop at Dartmouth College in 1956 founded the field of AI research. The participants in the workshop became the founders and leaders of AI research: Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM). Soon enough, computers built under the direction of these workshop leaders and their students were solving word problems in algebra, proving logical theorems and learning checkers strategies, and by 1959 were reportedly playing checkers better than the average human.

By the mid-1960s, research in the U.S. was largely funded by the Department of Defense. Laboratories had been established worldwide, and AI’s founders were decidedly optimistic about the coming years. One of them, Herbert Simon, predicted that “machines will be capable, within twenty years, of doing any work a man can do”. Another, Marvin Minsky, agreed in writing: “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”.

Their optimism did not account for the challenges ahead. AI research slowed as the US Congress pressed for funding of more productive projects, and Sir James Lighthill published a sharply critical report on the field. Exploratory AI research came to a near halt, and funding was hard to come by for the next few years. AI research finally reawakened in the early 1980s with the commercial success of expert systems, programs that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had grown to over a billion dollars. Japan’s fifth generation computer project was running in the same period, prompting the U.S. and British governments, which had earlier cut funding, to restore support for academic research. This was not the final victory in moving AI forward: the field suffered another series of downturns, starting with the collapse of the Lisp machine market in 1987. This hitch lasted longer than the previous one.

It wasn’t until the late 1990s and the 21st century that the steadily increasing computational power described by Moore’s law brought renewed success. AI found use in logistics, data mining, medical diagnosis and other areas. You may have heard of Deep Blue, the first computer to defeat a reigning world chess champion. I’ll peg this as the first milestone in the development of artificial intelligence: on 11 May 1997, the chess-playing computer Deep Blue beat reigning world chess champion Garry Kasparov. That must have been huge for the AI programmers of the time, further proving that digital computers can simulate formal reasoning. This was just the beginning of machines setting winning paces against humans.


IBM, one of the founding companies of AI, created a question-answering machine called Watson, which defeated the two greatest Jeopardy! champions by a significant margin in a quiz-show exhibition match in 2011. Machines had gained speed, improved algorithms and access to large amounts of data, which fostered advances in learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012. Smart personal assistants in modern mobile devices and the Kinect, the 3D body–motion interface for the Xbox 360 and Xbox One, all emerged from algorithms developed in broad AI research.

AI reached further milestones in 2016 and 2017. AlphaGo, a computer Go-playing system, defeated Go champion Lee Sedol in four out of five matches, becoming the first program to beat a professional Go player without handicaps. The following year, AlphaGo won a three-game match at the Future of Go Summit against Ke Jie, who had held the world No. 1 Go ranking for two years. Go is far more complex than chess, so this really was a bigger mark for AI.

Error rates in artificially intelligent systems have fallen significantly, and from around 2015 a large number of software projects have woven AI into their work; there has been a considerable rise in the adoption of AI around the world and across industries in recent times. Businesses use it to amplify operations, generate new innovations and boost customer experience. In 2019 and beyond, as we have already started to experience, AI researchers will birth new ideas and innovations and find ways to apply them as problem solvers in businesses, countries or whatever impactful way they can. From voice cloning to automated vehicles, there is visible progress, yet so much more to learn about artificial intelligence.


Credits: Wikipedia (History of AI); Nick Ismail.
