Turing Test as a Defining Feature of AI-Completeness

Roman V. Yampolskiy
Computer Engineering and Computer Science, DC 215, University of Louisville, KY 40292
Abstract. The paper contributes to the development of the theory of AI-Completeness by formalizing the notions of AI-Complete and AI-Hard problems. The intended goal is to provide a classification of problems in the field of General Artificial Intelligence. We prove the Turing Test to be an instance of an AI-Complete problem and further show certain AI problems to be AI-Complete or AI-Hard via polynomial time reductions. Finally, the paper suggests some directions for future work on the theory of AI-Completeness.
Keywords: AI-Complete, AI-Easy, AI-Hard, Human Oracle.
1 Introduction
Since its inception in the 1950s the field of Artificial Intelligence has produced some unparalleled accomplishments while at the same time failing to formalize the problem space it is concerned with. This paper proposes to address this shortcoming by extending the work in [56] and contributing to the theory of AI-Completeness, a formalism designed to do for the field of AI what the notion of NP-Completeness did for computer science in general. It is our belief that such formalization will allow for even faster progress in solving the remaining problems in humankind's quest to build an intelligent machine.
According to the encyclopedia Wikipedia, the term "AI-Complete" was proposed by Fanya Montalvo in the 1980s [54]. A somewhat general definition of the term included in the 1991 Jargon File [37] states:

"AI-Complete: [MIT, Stanford, by analogy with `NP-complete'] adj. Used to describe problems or subproblems in AI, to indicate that the solution presupposes a solution to the `strong AI problem' (that is, the synthesis of a human-level intelligence). A problem that is AI-complete is, in other words, just too hard. Examples
of AI-complete problems are `The Vision Problem', building a system that can see as well as a human, and `The Natural Language Problem', building a system that can understand and speak a natural language as well as a human. These may appear to be modular, but all attempts so far (1991) to solve them have foundered on the amount of context information and `intelligence' they seem to require."
As such, the term "AI-Complete" (or sometimes AI-Hard) has been a part of the field for many years and has frequently been brought up to express the difficulty of a specific problem investigated by researchers (see [31, 26, 15, 36, 6, 20, 32, 33, 10, 27, 28, 29, 16, 23, 55]). This informal use further encouraged similar concepts to be developed in other areas of science: Biometric-Completeness [36], ASR-Complete [30]. While numerous attempts to formalize what it means to say that a problem is "AI-Complete" have recently been published [2, 41, 11], systems which relied on humans to solve problems perceived to be AI-Complete were in use even before such formalization attempts:
• AntiCaptcha systems use humans to break the CAPTCHA security protocol [2, 58, 59, 63], either by directly hiring cheap workers in developing countries [5] or by rewarding correctly solved CAPTCHAs with the presentation of pornographic images [52].
• Chinese Room, the philosophical argument by John Searle, shows that including a human as a part of a computational system may actually reduce its perceived capabilities such as understanding and consciousness [40].
• Content Development online projects such as Encyclopedias (Wikipedia, Conservapedia), Libraries (Project Gutenberg), Video collections (YouTube) and Open Source Software (SourceForge) all rely on contributions from people for content production and quality assurance.
• Cyphermint, a check cashing system, relies on human workers to compare a snapshot of a person trying to perform a financial transaction to a picture of the person who initially enrolled with the system. The resulting accuracy outperforms any biometric system and is almost completely spoof-proof (see cyphermint.com for more info).
• Data Tagging systems entice users into providing meta-data for images, sound or video files. A popular approach involves developing an online game which, as a byproduct of participation, produces a large amount of accurately labeled data [1].
• Distributed Proofreaders employs a number of human volunteers to eliminate errors in books created by an Optical Character Recognition process (see pgdp.net for more info).
• Interactive Evolutionary Computation algorithms use humans in place of a fitness function to make judgments regarding concepts that are difficult to formalize, such as aesthetic beauty or taste [47]; a toy sketch of this human-in-the-loop setup follows this list.
• Mechanical Turk is Amazon.com's attempt at creating Artificial Intelligence. Humans are paid varying amounts for solving problems which are believed to be beyond the current abilities of AI programs (see mturk.com for more info).
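To make the Interactive Evolutionary Computation item above concrete, the toy sketch below (ours, not taken from [47]) replaces the usual fitness function with a rating typed in by a human; the candidate strings and the mutation scheme are purely illustrative placeholders.

// A toy Interactive Evolutionary Computation loop (ours, not from [47]):
// the human rating read from standard input replaces the fitness function.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class InteractiveEvolutionSketch {
    public static void main(String[] args) {
        Random rng = new Random();
        Scanner in = new Scanner(System.in);
        List<String> population =
                new ArrayList<>(Arrays.asList("design-A", "design-B", "design-C"));

        for (int generation = 0; generation < 3; generation++) {
            String best = null;
            int bestScore = Integer.MIN_VALUE;
            for (String candidate : population) {
                System.out.println("Rate " + candidate + " (0-10):");
                int score = in.nextInt();   // the human acts as the fitness function
                if (score > bestScore) { bestScore = score; best = candidate; }
            }
            // Next generation: simple mutations of the human-preferred candidate.
            population.clear();
            for (int i = 0; i < 3; i++) population.add(best + "." + rng.nextInt(100));
        }
        System.out.println("Final population: " + population);
    }
}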

2 The Theory of AI-Completeness
From people with mental disabilities to geniuses, human minds are cognitively diverse, and it is well known that different people exhibit different mental abilities. We define the notion of a Human Oracle (HO) function capable of computing any function computable by the union of all human minds. In other words, any cognitive ability of any human being is repeatable by our HO. To make our Human Oracle easier to understand, we provide the following illustration of the Human function:
String Human(String input) {
    ...
    return output;
}

Fig. 1 Human Oracle: HumanBest – a union of minds
Such a function would be easy to integrate with any modern programming language and would require that the input to the function be provided as a single string of length N; the function would return a string of length M. No specific encoding is specified for the content of strings N or M, so they could be either binary representations of data or English language phrases, the two being computationally equivalent. As necessary, the Human function could call regular TM functions to help in processing the data. For example, a simple computer program which would display the input string as a picture, to make human comprehension easier, could be executed. Humans could be assumed to be cooperating, perhaps because of a reward. Alternatively, one can construct a Human function which, instead of the union of all minds, computes the average decision of all human minds on a problem encoded by the input string as the number of such minds goes to infinity. To avoid any confusion we propose naming the first HO HumanBest and the second HO HumanAverage. Problems in the AI domain tend to have a large degree of ambiguity in terms of acceptable correct answers. Depending on the problem at hand, the simplistic notion of an average answer could be replaced with an aggregate answer as defined in the Wisdom of Crowds approach [46]. Both functions could be formalized as Human-Assisted Turing Machines [41].
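As a very rough sketch (ours, not the paper's formal construction), the difference between the two oracles can be shown in code. Here the list of individual answers stands in for actually querying people, and the isCorrect predicate is an assumed stand-in for ground truth; only the string-in, string-out interface of Fig. 1 is taken from the paper.

// Minimal sketch, not from the paper: individualAnswers stands in for polling
// real people; isCorrect is an assumed external correctness check.
import java.util.*;
import java.util.function.Predicate;

public class HumanOracleSketch {

    // HumanAverage: the aggregate (majority) answer over all polled minds.
    static String humanAverage(List<String> individualAnswers) {
        Map<String, Integer> counts = new HashMap<>();
        for (String answer : individualAnswers)
            counts.merge(answer, 1, Integer::sum);
        return Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    // HumanBest: the union of minds -- any single correct answer suffices.
    static Optional<String> humanBest(List<String> individualAnswers, Predicate<String> isCorrect) {
        return individualAnswers.stream().filter(isCorrect).findFirst();
    }

    public static void main(String[] args) {
        List<String> answers = Arrays.asList("cat", "cat", "dog", "cat");
        System.out.println(humanAverage(answers));                         // prints "cat"
        System.out.println(humanBest(answers, "dog"::equals).orElse("?")); // prints "dog"
    }
}

Neither the pool of minds nor a ground-truth check is actually available, of course; the sketch only fixes the input-output interface of the Human function.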
The Human function is an easy to understand and use generalization of the Human Oracle. One can perceive it as a way to connect and exchange information with a real human sitting at a computer terminal. While easy to understand intuitively, such a description is not sufficiently formal. Shahaf et al. have formalized the notion of the Human Oracle as an HTM [41]. In their model a human is an oracle machine that can decide a set of languages L_i in constant time: H ⊆ {L_i | L_i ⊆ Σ*}. If time complexity is taken into account, answering a question might take non-constant time: H ⊆ {<L_i, f_i> | L_i ⊆ Σ*, f_i : ℕ → ℕ}, where f_i is the time-complexity function for language L_i, meaning the human can decide whether x ∈ L_i in f_i(|x|) time. In order to realistically address the capabilities of individual humans, a probabilistic oracle was also presented which provides correct answers with probability p: H ⊆ {<L_i, p_i> | L_i ⊆ Σ*, 0 ≤ p_i ≤ 1}. Finally, the notion of reward is introduced into the model to capture humans' improved performance on "paid" tasks: H ⊆ {<L_i, u_i> | L_i ⊆ Σ*}, where u_i is the utility function [41].
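A rough simulation of this model might look as follows (our own reading, not Shahaf et al.'s notation); the ground-truth membership test and the random answer flip are assumptions made only to have something executable.

// Hypothetical simulation of a probabilistic Human Oracle for one language L_i:
// a membership query for x costs f_i(|x|) abstract time units and is answered
// correctly with probability p.
import java.util.Random;
import java.util.function.IntUnaryOperator;
import java.util.function.Predicate;

public class ProbabilisticHumanOracle {
    private final Predicate<String> membership;     // ground truth for "x in L_i" (an assumption)
    private final double p;                         // probability of a correct answer, 0 <= p <= 1
    private final IntUnaryOperator timeComplexity;  // f_i, mapping |x| to time units
    private final Random rng = new Random();
    private long elapsed = 0;                       // total time charged so far

    public ProbabilisticHumanOracle(Predicate<String> membership, double p,
                                    IntUnaryOperator timeComplexity) {
        this.membership = membership;
        this.p = p;
        this.timeComplexity = timeComplexity;
    }

    // Decide whether x is in L_i, charging f_i(|x|) time and erring with probability 1 - p.
    public boolean decide(String x) {
        elapsed += timeComplexity.applyAsInt(x.length());
        boolean truth = membership.test(x);
        return rng.nextDouble() < p ? truth : !truth;
    }

    public long elapsedTime() { return elapsed; }

    public static void main(String[] args) {
        // Example: palindrome membership, answered correctly 90% of the time, in linear time.
        ProbabilisticHumanOracle oracle = new ProbabilisticHumanOracle(
                s -> new StringBuilder(s).reverse().toString().equals(s), 0.9, n -> n);
        System.out.println(oracle.decide("racecar") + " after " + oracle.elapsedTime() + " time units");
    }
}

A reward-aware variant would additionally accept a utility function u_i and trade payment against answer quality or time.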
2.1 Definitions

Definition 1: A problem C is AI-Complete if it has two properties:

1. It is in the set of AI problems (Human Oracle solvable).
2. Any AI problem can be converted into C by some polynomial time algorithm.

Definition 2: AI-Hard: A problem H is AI-Hard if and only if there is an AI-Complete problem C that is polynomial time Turing-reducible to H.

Definition 3: AI-Easy: The complexity class AI-Easy is the set of problems that are solvable in polynomial time by a deterministic Turing machine with an oracle for some AI problem. In other words, a problem X is AI-Easy if and only if there exists some AI problem Y such that X is polynomial-time Turing reducible to Y. This means that given an oracle for Y, there exists an algorithm that solves X in polynomial time.
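As an informal illustration of Definition 3 (ours, with purely hypothetical placeholder problems), an AI-Easy solver is just an ordinary polynomial-time algorithm that is additionally allowed to make oracle calls for some AI problem Y:

// Schematic illustration of Definition 3 (hypothetical problems and oracle):
// X is AI-Easy because it is solved by polynomial-time work around oracle calls for Y.
import java.util.function.UnaryOperator;

public class AiEasyReduction {
    // Oracle for some AI problem Y, e.g. a Human Oracle answering a query in one step.
    interface AiOracle extends UnaryOperator<String> {}

    // Solve an instance of X using polynomial-time pre/post-processing
    // plus (polynomially many) calls to the oracle for Y.
    static String solveX(String instanceOfX, AiOracle oracleForY) {
        String query  = instanceOfX.trim();       // polynomial-time reduction of X to Y
        String answer = oracleForY.apply(query);  // one oracle call
        return answer.toLowerCase();              // polynomial-time post-processing
    }

    public static void main(String[] args) {
        AiOracle humanLike = q -> "ANSWER(" + q + ")";   // stand-in oracle for Y
        System.out.println(solveX("  Describe this image  ", humanLike));
    }
}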
Figure 2 illustrates the relationship between the different AI complexity classes. The right side illustrates the situation if it is ever proven that AI-problems = AI-Complete problems; the left side shows the converse.
Fig. 2 Relationship between AI complexity classes