Required Reading
Arbib, Michael A. "Artificial Intelligence: Cooperative Computation and Man-Machine Symbiosis." IEEE Transactions on Computers C-25.12 (1976): 1346-1352. IEEE. Web. 13 November 2015.
In his study, Arbib, the Fletcher Jones Professor of Computer Science at the University of Southern California, characterizes machine learning as the integration of artificial intelligence with linguistics, psychology, and brain research. He identifies perception, planning, learning, and language as key components of intelligence. Acknowledging the restrictions of current machines, Arbib recognizes the study of robotics as the foundation of artificial intelligence. As a result, he believes robotic capabilities such as automation, pattern recognition, speech recognition, and scene analysis will aid in the development of high-level machine learning. Believing that man-machine symbiosis will have a major impact on the world, Arbib argues that criticisms of the development of artificial intelligence stem from confusion and antitechnological worldviews. In response, he states that scientists should clearly define their goals and pursue advancements based on human needs in order to control the development of AI.
Danaher, John. "Why AI Doomsayers Are Like Sceptical Theists and Why It Matters." Minds & Machines 25.3 (2015): 231-246. Academic Search Complete. Web. 13 November 2015.
Bringing to light the existential risks that artificial intelligence poses to humanity, Danaher, a lecturer at NUI Galway in Ireland, analyzes skeptical theists' and doomsayers' positions on the study of intelligent machines. Recognizing that several research institutions and companies have taken the initiative to tackle these potential risks, Danaher keys on Superintelligence: Paths, Dangers, Strategies, the work of Nick Bostrom, a prominent figure in the field. Danaher discusses the notion of necessary evils and God's existence in relation to machine intelligence. In addition, he focuses on Bostrom's Treacherous Turn idea, which ultimately questions the credibility of machines in the future.
Hosea, S.P., V. Harikrishnan, and K. Rajkumar. "Artificial Intelligence." 2011 3rd International Conference on Electronics Computer Technology (ICECT) 4 (2011): 124-129. IEEE. Web. 13 November 2015.
In their study, Harikrishnan, Hosea, and Rajkumar, lecturers from the KRS College of Arts and Sciences in India, define artificial intelligence as the ability of a machine to perform complex tasks using human intelligence. They argue that human intelligence involves performing a series of actions and maintaining a certain level of intellect by adapting, learning, connecting, and interacting with one's environment and surroundings. Acknowledging the limits of today's machines and computers, the authors consider whether computers can truly be human, discussing four requirements they deem necessary for a robot to be considered humanlike. Weighing the advantages and disadvantages of creating superintelligent machines, they suggest that society wait to see what impact artificial intelligence will have on humanity in the future.
Lorenc, Theo. "Artificial Intelligence and the Ethics of Human Extinction." Journal of Consciousness Studies 22.9/10 (2015): 194-214. Academic Search Complete. Web. 13 November 2015.
Lorenc, an independent lecturer, explores the extreme risks posed by the further development of artificial intelligence, dangers that include total human extinction. He critiques the proposed solution of embedding ethical principles in AI devices, emphasizing the unresolved issues surrounding ethical agency and the possibility that machines could disregard humans altogether in pursuit of goals that do not take human needs into account. Extending his analysis, Lorenc ventures into the notion of whole-brain emulation and the robotic reproduction of human intelligence, noting that humans must distinguish between friendly and hostile AI in order to assess these alarming dangers.
Maybury, Mark T. "The Mind Matters: Artificial Intelligence and Its Societal Implications." IEEE Technology and Society Magazine 9.2 (1990): 7-15. IEEE. Web. 13 November 2015.
Advocating for society to monitor and control the development of robotic machines, Maybury, Chief Scientist of the United States Air Force, discusses the origins of artificial intelligence in relation to the definitions and foundational goals laid out by four pioneering scientists. He identifies landmark discoveries in the field, discussing Alan Turing's "Turing Test" and recognizing several early programs that simulated real-life conversations with actual humans. Pointing to several applications of artificial intelligence today, Maybury highlights the military's and government's interest in further developing these highly complex machines. Although he acknowledges the dangers of such applications, he ultimately encourages society to consider the potential artificial intelligence holds for the future.
Waltz, D.L. "Evolution, Sociobiology, and the Future of Artificial Intelligence." IEEE Intelligent Systems 21.3 (2006): 66-69. IEEE. Web. 13 November 2015.
Waltz, Director of the Center for Computational Learning Systems (CCLS) at Columbia University, contends that the status and advancement of artificial intelligence over the next twenty years will be determined by monetary, technical, and scientific factors controlled by the government, academia, and research facilities with an interest in particular AI developments. Based on his observations of prominent companies and businesses interested in these intelligent machines, Waltz suggests that close collaborations between companies and universities will fund research in the emerging field. On that note, he ventures into the underlying question of whether the complexity of human structures and understanding will require robots to develop personalities in order to simulate true human emotion.
All pictures acquired from Google Images.