Artificial intelligence

1863

This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.

1943

The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956, where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.
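
The McCulloch-Pitts unit itself is simple enough to state in a few lines. Below is a minimal sketch in Python; the weights, thresholds, and gate examples are illustrative choices, not the notation of the 1943 paper.

    def mp_neuron(inputs, weights, threshold):
        """McCulloch-Pitts unit: fires (returns 1) iff the weighted sum
        of binary inputs reaches the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Logic gates as single units -- the building blocks from which, with
    # loops, McCulloch and Pitts argued arbitrary computation can be assembled.
    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a: mp_neuron([a], [-1], threshold=0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and NOT(1) == 0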

1950

Their work revived the non-symbolic point of view of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI.

1956

The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956, where the term "Artificial Intelligence" was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.

1959

Early programs were learning checkers strategies (c. 1954), and by 1959 were reportedly playing better than the average human, solving word problems in algebra, and proving logical theorems (Logic Theorist, first run c. 1956).

1960

By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world.

By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s. When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.
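
As an illustration of what "symbol manipulation" means in practice, here is a hypothetical forward-chaining sketch in Python: facts and rules are pure symbols, and inference is nothing but repeated rule application (the symbols and rules are invented for this example).

    # Derive new facts by applying if-then rules until nothing changes.
    facts = {"socrates_is_human"}
    rules = [
        ("socrates_is_human", "socrates_is_mortal"),
        ("socrates_is_mortal", "socrates_will_die"),
    ]

    changed = True
    while changed:  # forward chaining to a fixed point
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now includes both derived symbols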

During the 1960s, symbolic approaches had achieved great success at simulating high-level "thinking" in small demonstration programs.

1961

(See Dreyfus' critique of AI.) Gödelian arguments: Gödel himself, John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond that of mechanical Turing machines.
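
For reference, the fixed point these arguments rest on can be stated compactly (standard notation, not Lucas's or Penrose's own): for any consistent, effectively axiomatized theory F extending basic arithmetic, there is a sentence G_F such that

    F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner), \qquad F \nvdash G_F.

The contested step is the claim that a human mathematician can nevertheless see that G_F is true; critics reply that this requires knowing F is consistent, which humans cannot establish for systems capturing their own reasoning.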

1968

Notable examples include Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999).

1969

John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions.
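
A hypothetical sketch of why this resists engineering: each qualification becomes another hand-written clause, and the list never closes (the rule and its exceptions are invented examples).

    def bird_can_fly(bird):
        """Commonsense rule "birds fly" plus its ever-growing qualifications."""
        if bird.get("species") in {"penguin", "ostrich", "kiwi"}:
            return False
        if bird.get("injured") or bird.get("wings_clipped"):
            return False
        if bird.get("age_days", 30) < 14:  # hatchlings cannot fly yet
            return False
        # ...dead birds, oiled birds, caged birds, and so on, indefinitely.
        return True

    print(bird_can_fly({"species": "sparrow"}))   # True
    print(bird_can_fly({"species": "penguin"}))   # False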

1970

Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time. When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.

1974

Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI.

1980

The next few years would later be called an "AI winter", a period when obtaining funding for AI projects was difficult. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts.

However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began. The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s.

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. The earlier algorithms had proved insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.
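
The arithmetic behind the combinatorial explosion is stark: a search tree with branching factor b and depth d contains on the order of b**d states, so each extra step of lookahead multiplies the work. A quick illustration (the branching factor of 10 is an arbitrary choice):

    b = 10                 # choices available at each step (illustrative)
    for d in range(1, 8):  # depth of reasoning
        print(f"depth {d}: {b**d:>12,} states")
    # depth 7 already means 10,000,000 states -- hence the turn to
    # heuristics, which prune the tree rather than enumerate it.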

By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s. When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s. Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.

The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications. By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.

In the 1980s, artist Hajime Sorayama painted and published his Sexy Robots series in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book "the Gynoids" was used by or influenced movie makers including George Lucas and other creatives.

1985

By 1985, the market for AI had reached over a billion dollars.

1987

However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began. The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s.

1988

Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".

1989

A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.

(See Dreyfus' critique of AI.) Gödelian arguments: Gödel himself, John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond that of mechanical Turing machines.

1990

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. The earlier algorithms had proved insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.

However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMMs), information theory, and normative Bayesian decision theory, to compare or to unify competing architectures.
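
As one example of those tools, a hidden Markov model's forward algorithm computes the probability of an observation sequence by dynamic programming instead of enumeration. A minimal sketch follows; the two-state weather model and all probabilities are invented for illustration.

    # alpha[s] holds the probability of the observations so far, ending in s.
    states = ["rainy", "sunny"]
    start = {"rainy": 0.6, "sunny": 0.4}
    trans = {"rainy": {"rainy": 0.7, "sunny": 0.3},
             "sunny": {"rainy": 0.4, "sunny": 0.6}}
    emit = {"rainy": {"walk": 0.1, "umbrella": 0.9},
            "sunny": {"walk": 0.8, "umbrella": 0.2}}

    def forward(observations):
        alpha = {s: start[s] * emit[s][observations[0]] for s in states}
        for obs in observations[1:]:
            alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                     for s in states}
        return sum(alpha.values())  # P(observation sequence)

    print(forward(["umbrella", "umbrella", "walk"]))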

The intelligent agent paradigm became widely accepted during the 1990s. Agent architectures and cognitive architectures: researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.
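
The paradigm is easy to state as code: an agent is anything that maps percepts to actions in pursuit of a goal. A minimal, hypothetical Python sketch (the thermostat setting is an invented example):

    class ThermostatAgent:
        """Perceive the temperature, choose an action toward a target."""
        def __init__(self, target):
            self.target = target

        def act(self, percept):
            if percept < self.target - 1:
                return "heat_on"
            if percept > self.target + 1:
                return "heat_off"
            return "idle"

    agent = ThermostatAgent(target=20.0)
    for temperature in [17.5, 19.8, 22.3]:
        print(temperature, "->", agent.act(temperature))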

1997

Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.

1998

This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.

2005

The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines."

A variety of perspectives of this nascent field can be found in the collected edition "Machine Ethics" that stems from the AAAI Fall 2005 Symposium on Machine Ethics. Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent.

2010

One high-profile example is that DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.

2011

Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.

2012

Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.

See: General game playing. According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects.

Clark also presents factual data indicating the improvements of AI since 2012, supported by lower error rates in image processing tasks.

2015

After AlphaGo defeated a professional Go player in 2015, artificial intelligence once again attracted widespread global attention.

See: General game playing. According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects.

In January 2015, Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making.

2016

In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.

Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower". By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the system attaining commonsense understanding of the contents of the benchmarks.

2017

In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time continuously held the world No. 1 ranking for two years.

In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".

A 2017 study by PricewaterhouseCoopers projects that the People's Republic of China stands to gain the most economically from AI, with a boost of 26.1% of GDP by 2030.

2019

By 2019, transformer-based deep learning architectures could generate coherent text. Machine perception is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world.

2020

Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower". By 2020, Natural Language Processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, albeit without the system attaining commonsense understanding of the contents of the benchmarks.

A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including "improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases)".

Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI. The potential negative effects of AI and automation were a major issue for Andrew Yang's 2020 presidential campaign in the United States.




All text is taken from Wikipedia. Text is available under the Creative Commons Attribution-ShareAlike License.

Page generated on 2021-08-05