NEW YORK UNIVERSITY  |  CEDA Debate

FOR ALL CURRENT INFORMATION ON THE 2020-21 SEASON, PLEASE VISIT the Global Debates homepage.

RESOURCES

Sources for the 2020-21 Season

The 2020-21 topic is

Resolved: On-balance, the risks of artificial intelligence outweigh the rewards  

AFF SOURCES

[please cut and paste the links into your browser directly]

1. Sutrop, M. (2019). Should We Trust Artificial Intelligence? TRAMES: A Journal of the Humanities & Social Sciences, 23(4), 499–522. https://doi-org.proxy.library.nyu.edu/10.3176/tr.2019.4.07

2. Artificial intelligence in healthcare: Is it beneficial? (2019). Journal of Vascular Nursing: Official Publication of the Society for Peripheral Vascular Nursing, 37(3), 159. https://doi-org.proxy.library.nyu.edu/10.1016/j.jvn.2019.09.001

3. Allen, T. C. (2019). Regulating Artificial Intelligence for a Successful Pathology Future. Archives of Pathology & Laboratory Medicine, 143(10), 1175–1179. https://doi-org.proxy.library.nyu.edu/10.5858/arpa.2019-0229-ED

4. Bruun, E. P. G., & Duka, A. (2018). Artificial Intelligence, Jobs and the Future of Work: Racing with the Machines. Basic Income Studies, 13(2), N.PAG. https://doi-org.proxy.library.nyu.edu/10.1515/bis-2018-0018

5. Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, (2), 1–9. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=bth&AN=137670569&site=eds-live

6. Adamu, S., & Awwalu, J. (2019). The Role of Artificial Intelligence (AI) in Adaptive eLearning System (AES) Content Formation: Risks and Opportunities involved. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.1903.00934&site=eds-live

7. Ali, S. M. (2019). “White Crisis” And/As “Existential Risk,” or the Entangled Apocalypticism of Artificial Intelligence. Zygon, 54(1), 207–224. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=reh&AN=ATLAiREM190318000949&site=eds-live


NEG SOURCES


[please cut and paste the links into your browser directly]

1. Hager, G. D., Drobnis, A., Fang, F., Ghani, R., Greenwald, A., Lyons, T., … Tambe, M. (2019). Artificial Intelligence for Social Good. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.1901.05406&site=eds-live

2. Sutrop, M. (2019). Should We Trust Artificial Intelligence? TRAMES: A Journal of the Humanities & Social Sciences, 23(4), 499–522. https://doi-org.proxy.library.nyu.edu/10.3176/tr.2019.4.07 (both sides)

3. Nagarajan, N., Yapp, E. K. Y., Le, N. Q. K., Kamaraj, B., Al-Subaie, A. M., & Yeh, H.-Y. (2019). Application of Computational Biology and Artificial Intelligence Technologies in Cancer Precision Drug Discovery. BioMed Research International, 1–15. https://doi-org.proxy.library.nyu.edu/10.1155/2019/8427042

4. Artificial intelligence in healthcare: Is it beneficial? (2019). Journal of Vascular Nursing: Official Publication of the Society for Peripheral Vascular Nursing, 37(3), 159. https://doi-org.proxy.library.nyu.edu/10.1016/j.jvn.2019.09.001 (both sides)

5. Varshney, K. R., & Mojsilovic, A. (2019). Open Platforms for Artificial Intelligence for Social Good: Common Patterns as a Pathway to True Impact. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.1905.11519&site=eds-live

6. Adamu, S., & Awwalu, J. (2019). The Role of Artificial Intelligence (AI) in Adaptive eLearning System (AES) Content Formation: Risks and Opportunities involved. Retrieved from http://search.ebscohost.com.proxy.library.nyu.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.1903.00934&site=eds-live (both sides)

ORGANIZATIONS


Algorithmic Justice League https://www.ajlunited.org/ 


AI NOW https://ainowinstitute.org/ 


Responsible Robotics https://responsiblerobotics.org/ 


AI4ALL http://ai-4-all.org/


 

BOOKS


Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (3rd edition)


Artificial Unintelligence by Meredith Broussard


BACKGROUND ARTICLES


Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence, Ernest Davis and Gary Marcus, CACM, September 2015.

PODCASTS


1. https://twimlai.com/
2. https://lexfridman.com/ai/
3. http://dataskeptic.com/
4. http://www.thetalkingmachines.com/


2020-21 TOPIC PRIMER 

There is consensus among the tournament organizers that a topic primer is not needed this year for the prelims.  They will re-evaluate prior to the elim rounds.  Please consider any information below purely advisory. 

BACKGROUND

The earliest successful AI program was written in 1951 by Christopher Strachey, later director of the Programming Research Group at the University of Oxford. Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England. By the summer of 1952 this program could play a complete game of checkers at a reasonable speed.

Information about the earliest successful demonstration of machine learning was published in 1952. Shopper, written by Anthony Oettinger at the University of Cambridge, ran on the EDSAC computer. Shopper’s simulated world was a mall of eight shops. When instructed to purchase an item, Shopper would search for it, visiting shops at random until the item was found. While searching, Shopper would memorize a few of the items stocked in each shop visited (just as a human shopper might). The next time Shopper was sent out for the same item, or for some other item that it had already located, it would go to the right shop straight away. This simple form of learning is called rote learning.
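
To make the mechanism concrete, here is a minimal Python sketch of rote learning in the style of Shopper. The mall layout, shop names, and item names are invented for illustration, and unlike the original program, which memorized only a few items per shop, this sketch memorizes everything it sees.

import random

# Invented mall layout for illustration; not Oettinger's original data.
MALL = {
    "grocer": {"tea", "sugar"},
    "chemist": {"soap", "aspirin"},
    "stationer": {"ink", "paper"},
}

class Shopper:
    def __init__(self):
        self.memory = {}  # item -> shop where it was seen

    def buy(self, item):
        # A memorized item is fetched from the right shop straight away.
        if item in self.memory:
            return self.memory[item]
        # Otherwise, visit shops at random until the item turns up,
        # memorizing the stock of each shop along the way (rote learning).
        shops = list(MALL)
        random.shuffle(shops)
        for shop in shops:
            for stocked in MALL[shop]:
                self.memory[stocked] = shop
            if item in MALL[shop]:
                return shop

s = Shopper()
print(s.buy("soap"))     # random search on the first trip
print(s.buy("aspirin"))  # memorized while searching for soap: direct hit

The point of the sketch is that nothing is generalized: the program can only go straight to what it has already stored, which is exactly the limitation Samuel's later work on generalization addressed.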

The first AI program to run in the United States also was a checkers program, written in 1952 by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials of Strachey’s checkers program and over a period of years considerably extended it. In 1955 he added features that enabled the program to learn from experience. Samuel included mechanisms for both rote learning and generalization, enhancements that eventually led to his program’s winning one game against a former Connecticut checkers champion in 1962.

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence, introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer, “No,” in response to, “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity.
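
To illustrate the structure of the protocol (and only the structure), here is a toy Python sketch; the scripted participants and the naive judge are invented placeholders, not real systems.

import random

# Toy sketch of the imitation game; the canned replies below are
# invented for illustration, not real AI behavior.
class ScriptedParticipant:
    """Answers every question with a fixed reply."""
    def __init__(self, reply):
        self.reply = reply
    def answer(self, question):
        return self.reply

def run_turing_test(human, computer, questions, judge):
    # Randomly assign the hidden channels A and B.
    participants = [human, computer]
    random.shuffle(participants)
    channels = {"A": participants[0], "B": participants[1]}
    # Collect each channel's typed answers to the interrogator's questions.
    transcript = {label: [p.answer(q) for q in questions]
                  for label, p in channels.items()}
    guess = judge(transcript)  # the interrogator names "A" or "B" as the computer
    return channels[guess] is computer  # True if the computer was identified

human = ScriptedParticipant("I'm not sure; let me think about that.")
computer = ScriptedParticipant("No.")  # bluffing, even to "Are you a computer?"
naive_judge = lambda t: min(t, key=lambda label: len(t[label][0]))
print(run_turing_test(human, computer, ["Are you a computer?"], naive_judge))

In a real test the scripted replies would be a live typed conversation, many interrogators would take turns, and the computer would count as passing only if a sufficient proportion of them guessed wrong.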

In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.

In recent years more has been written about artificial intelligence in technology and business publications than ever before: the current wave of AI innovations has caught the attention of virtually everyone, not least because of fears about artificial intelligence.

Artificial intelligence (AI) isn’t new, but this time it’s different. Cognitive systems and AI are innovation accelerators of the nascent digital-transformation economy.

The spread of AI-powered innovations and solutions across a myriad of areas has produced numerous articles and reports on the value of AI, its application across a wide range of domains, and its necessity and possibilities in a hyperconnected reality of people, information, processes, devices, and technologies. Artificial intelligence in business is a reality.

It’s important to remember that Musk, Gates, Hawking, and many others are not “against” artificial intelligence. What they are warning about are the potential dangers of superintelligence (early signs of which some see in neural networks), perhaps even intelligence we don’t understand. And is there anything humans fear more than what they cannot understand? To quote Tom Koulopoulos: “The real shift will be when computers think in ways we can’t even begin to understand.”

When Bill Gates expressed his concerns about AI, this is what he said, according to an article in Quartz: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

In more than one way it’s a pity that AI is associated with what it could become (and with what it was in previous waves, when it failed to deliver on its promises) instead of with what it is today.
Artificial intelligence is far from a thing of the future. It exists today in business applications, clearly offering multiple benefits to the organizations using these solutions, and in many of the platforms we use daily. Admittedly, it is not here in the sense of superintelligence.

At a 2016 symposium held by the Future of Life Institute, Alphabet Chairman Eric Schmidt (among others) advised the AI community to “rally around three goals”:
1. AI should benefit the many, not the few.
2. AI R&D should be open, responsible, and socially engaged.
3. Developers of AI should establish best practices to minimize risks and maximize the beneficial impact.

The ability to reason logically is an important aspect of intelligence and has always been a major focus of AI research. An important landmark in this area was a theorem-proving program written in 1955–56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910–13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books.

Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial-and-error approach. However, one criticism of GPS, and of similar programs that lack any learning capability, is that the program’s intelligence is entirely secondhand, coming from whatever information the programmer explicitly includes.


CLARIFICATION OF TERMS

RISK

Merriam-Webster's Dictionary: possibility of loss or injury; PERIL

Random House Dictionary: a situation involving exposure to danger

REWARD

Merriam-Webster's Dictionary: something that is given in return for good or evil done or received or that is offered or given for some service or attainment

ON-BALANCE

Oxford English Dictionary: all things considered

Macmillan Dictionary: after considering all the relevant facts

ARTIFICIAL INTELLIGENCE

Webster's Dictionary

AI: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

B.J. Copeland, Professor of Philosophy and Director of the Turing Archive for the History of Computing, University of Canterbury, Christchurch, New Zealand, and author of Artificial Intelligence and other works:

the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

The tournament officials will not interfere if teams choose to use the definitions above or something similar. Framing matters beyond the terms above are issues for robust debate.