Catalogue Search | MBRL
Explore the vast range of titles available.
342 result(s) for "Turing test."
Turing's imitation game : conversations with the unknown
\"Can you tell the difference between talking to a human and talking to a machine? Or, is it possible to create a machine which is able to converse like a human? In fact, what is it that even makes us human? Turing's Imitation Game, commonly known as the Turing Test, is fundamental to the science of artificial intelligence. Involving an interrogator conversing with hidden identities, both human and machine, the test strikes at the heart of any questions about the capacity of machines to behave as humans. While this subject area has shifted dramatically in the last few years, this book offers an up-to-date assessment of Turing's Imitation Game, its history, context and implications, all illustrated with practical Turing tests. The contemporary relevance of this topic and the strong emphasis on example transcripts makes this book an ideal companion for undergraduate courses in artificial intelligence, engineering or computer science.\"-- Provided by publisher.
Mirror Turing Test: soul test based on poetry
2024
With the rapid development of machine intelligence, an increasing number of websites and servers are visited, and sometimes attacked, by intelligent machines. How to empower a host machine to distinguish intelligent machines from humans is therefore a challenging problem. In this paper, the Mirror Turing Test (MTT) is conceived and implemented. Unlike the standard Turing Test, the tester in the MTT is a machine rather than a human. Current advances in deep learning enable machines to recognize subtle differences between genuine and counterfeit works; sometimes their ability is even superior to that of humans. Will machines inevitably transcend humans? Not entirely: the detection of soul in an artwork remains far beyond the capacity of machines. Two sets of MTT experiments, one based on poetry generated by a machine and one on a novel imitated by a human, were conducted in this paper, and neither passed the MTT. Poetry is an art form in which authors reveal their souls, so poetry was chosen for the MTT experiments on the basis of our soul computing model, clearly discriminating machine from human.
Journal Article
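As the abstract describes it, the Mirror Turing Test replaces the human interrogator with a machine discriminator that must tell machine-generated works from human ones, and the generating system passes only if that discriminator cannot do better than chance. A minimal Python sketch of that pass criterion, where the `discriminator` callable and the chance margin are illustrative assumptions rather than details from the paper:

```python
# Illustrative sketch of the Mirror Turing Test (MTT) pass criterion described
# in the abstract above: a machine discriminator labels each work as "machine"
# or "human", and the generator passes only if the discriminator stays near
# chance accuracy. The discriminator and the margin are assumptions.

def mirror_turing_test(discriminator, works, chance_margin=0.05):
    """works: list of (text, true_origin) pairs, true_origin in {'machine', 'human'}."""
    correct = sum(1 for text, origin in works if discriminator(text) == origin)
    accuracy = correct / len(works)
    passed = accuracy <= 0.5 + chance_margin  # discriminator no better than guessing
    return passed, accuracy
```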
Artificial Intelligence and Consciousness: Limits and Modern Perspectives
2025
This paper provides a review of selected concepts concerning consciousness, intelligence and artificial intelligence, focusing on their interrelations and interpretative limitations. The aim of the paper is to organize key definitions and viewpoints, and to highlight central issues related to the question of whether conscious machines can ever emerge. Consciousness is often defined as subjective experience, as the capacity for reflection on one’s own mental states, or as an emergent property of complex biological systems. Intelligence, on the other hand, is interpreted as the ability to learn, solve problems, adapt to changing conditions, and control cognitive processes. The development of computational technologies has given rise to weak artificial intelligence, encompassing algorithmic and machine learning systems that can model and predict patterns with high precision. Within this category, generative artificial intelligence, represented by large language models, demonstrates impressive linguistic capabilities but lacks genuine understanding – a feature associated with strong AI. The paper discusses whether computational processes can be equated with real thinking, referring to Gödel’s incompleteness theorems, Searle’s Chinese Room argument, as well as the Turing Test. This review contributes by integrating classical philosophical arguments with a comparative evaluation of contemporary language models (GPT-5, Gemini 2.5, DeepSeek-V3.2), examining their responses to Gödelian questions and reasoning tasks. The analysis indicates that, despite significant progress in building artificial intelligence systems, the question of their potential consciousness remains unresolved and continues to be a subject of profound philosophical debate.
Journal Article
Visual Turing test for computer vision systems
by Geman, Stuart; Younes, Laurent; Hallonquist, Neil
in Accuracy, Algorithms, Artificial Intelligence
2015
Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.
Significance: In computer vision, as in other fields of artificial intelligence, the methods of evaluation largely define the scientific effort. Most current evaluations measure detection accuracy, emphasizing the classification of regions according to objects from a predefined library. But detection is not the same as understanding. We present here a different evaluation system, in which a query engine prepares a written test (“visual Turing test”) that uses binary questions to probe a system’s ability to identify attributes and relationships in addition to recognizing objects.
Journal Article
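The abstract above describes the administration protocol in enough detail to sketch it: the query engine proposes a binary question conditioned on the history of previous questions and answers, an operator either supplies the ground-truth answer or rejects the question as ambiguous, and the vision system under test answers before being told the truth. A minimal Python sketch of that loop, assuming hypothetical engine, operator, and system interfaces that are not part of the paper:

```python
# Sketch of the operator-assisted "visual Turing test" loop described above.
# `engine`, `operator`, and `system` are assumed interfaces for illustration;
# the paper does not publish this API.

def administer_visual_turing_test(engine, operator, system, image, n_questions=20):
    history = []   # (question, correct_answer) pairs revealed so far
    score = 0
    asked = 0
    while asked < n_questions:
        question = engine.propose_question(image, history)    # stochastic, history-dependent
        truth = operator.answer_or_reject(question, image)    # None means "ambiguous", skip it
        if truth is None:
            continue
        prediction = system.answer(question, image, history)  # binary yes/no from the system under test
        score += int(prediction == truth)
        history.append((question, truth))                     # correct answer revealed before the next question
        asked += 1
    return score / n_questions  # fraction of near-unpredictable questions answered correctly
```

Because the engine is constrained to make each answer nearly unpredictable given the history, an accuracy near 0.5 corresponds to chance performance, and higher values indicate genuine visual understanding rather than priors about the question stream.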
Alexa, What Are You? Exploring Primary School Children’s Ontological Perceptions of Digital Voice Assistants in Open Interactions
2020
Today’s children grow up in an environment that is increasingly characterized by digital voice assistants (DVAs), such as Alexa, Siri, or the Google Assistant. This paper argues that any attempt to investigate children’s interactions with, and perceptions of, DVAs should be based on the theoretical grounds of an ontological framework that considers children’s genuine understanding of what it means to be human and what it means to be a machine. Based on focus groups and a gamified data collection design, our empirical inquiry applied qualitative methods to explore primary school children’s (n = 27, age range: 6–10 years, average age: 8.6 years) open interactions with DVAs. In particular, our focus was on how DVAs were embedded in children’s general ontological belief system, and how children interpreted certain aspects of DVAs’ interactive capabilities as being genuinely humanoid or non-humanoid. On the one hand, our findings suggest that children’s interactions with DVAs might be more an end in itself than a means to an end, meaning that children primarily interact with DVAs for the sake of engaging excitement instead of using the devices’ utilitarian functionalities. On the other hand, we found that children in our sample held firm ontological beliefs about the distinct nature of humans and machines, whilst interpreting certain aspects of DVAs’ interactive capabilities as being genuinely humanoid (e.g., non-responsiveness, delayed responses, inaccuracy) and non-humanoid (e.g., permanent responsiveness, promptness, accuracy, limited conversational capacities, lack of common sense, standardized responses) at the same time.
Journal Article
Corrigendum: A minimal Turing test: reciprocal sensorimotor contingencies for interaction detection
by Bedia, Manuel G.; Barone, Pamela; Gomila, Antoni
in Human Neuroscience, interaction, perceptual crossing
2024
[This corrects the article DOI: 10.3389/fnhum.2020.00102.].
Journal Article
Artificial intelligence test: a case study of intelligent vehicles
2018
To meet the urgent requirement of reliable artificial intelligence applications, we discuss the tight link between artificial intelligence and intelligence testing in this paper. We highlight the role of tasks in intelligence testing for all kinds of artificial intelligence. We explain the necessity and difficulty of describing tasks for intelligence testing, checking all the tasks that may be encountered during testing, designing simulation-based tests, and setting appropriate test performance evaluation indices. As an example, we present how to design a reliable intelligence test for intelligent vehicles. Finally, we discuss future research directions for intelligence testing.
Journal Article
Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement
2017
The evaluation of artificial intelligence systems and components is crucial for the progress of the discipline. In this paper we describe and critically assess the different ways AI systems are evaluated, and the role of components and techniques in these systems. We first focus on the traditional task-oriented evaluation approach. We identify three kinds of evaluation: human discrimination, problem benchmarks and peer confrontation. We describe some of the limitations of the many evaluation schemes and competitions in these three categories, and follow the progression of some of these tests. We then focus on a less customary (and challenging) ability-oriented evaluation approach, where a system is characterised by its (cognitive) abilities, rather than by the tasks it is designed to solve. We discuss several possibilities: the adaptation of cognitive tests used for humans and animals, the development of tests derived from algorithmic information theory or more integrated approaches under the perspective of universal psychometrics. We analyse some evaluation tests from AI that are better positioned for an ability-oriented evaluation and discuss how their problems and limitations can possibly be addressed with some of the tools and ideas that appear within the paper. Finally, we enumerate a series of lessons learnt and generic guidelines to be used when an AI evaluation scheme is under consideration.
Journal Article
The Turing test of online reviews: Can we tell the difference between human-written and GPT-4-written online reviews?
2024
Online reviews serve as a guide for consumer choice. With advancements in large language models (LLMs) and generative AI, the fast and inexpensive creation of human-like text may threaten the feedback function of online reviews if neither readers nor platforms can differentiate between human-written and AI-generated content. In two experiments, we found that humans cannot recognize AI-written reviews. Even with monetary incentives for accuracy, both Type I and Type II errors were common: human reviews were often mistaken for AI-generated reviews, and even more frequently, AI-generated reviews were mistaken for human reviews. This held true across various ratings, emotional tones, review lengths, and participants’ genders, education levels, and AI expertise. Younger participants were somewhat better at distinguishing between human and AI reviews. An additional study revealed that current AI detectors were also fooled by AI-generated reviews. We discuss the implications of our findings on trust erosion, manipulation, regulation, consumer behavior, AI detection, market structure, innovation, and review platforms.
Journal Article
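The error analysis described in the abstract can be pictured as a simple tally over paired labels: a Type I error is a human-written review judged to be AI-generated, and a Type II error is an AI-generated review judged to be human-written. A short Python sketch under that assumption; the sample data below is invented for illustration and is not the study's dataset:

```python
# Sketch of the Type I / Type II error tally described in the abstract above.
# The judgment pairs are illustrative; the study's data are not reproduced here.

def review_discrimination_errors(judgments):
    """judgments: list of (true_source, judged_source) pairs, each 'human' or 'ai'."""
    type1 = sum(1 for true, judged in judgments if true == "human" and judged == "ai")
    type2 = sum(1 for true, judged in judgments if true == "ai" and judged == "human")
    n_human = sum(1 for true, _ in judgments if true == "human")
    n_ai = len(judgments) - n_human
    return {
        "type1_rate": type1 / n_human if n_human else 0.0,  # human reviews mistaken for AI
        "type2_rate": type2 / n_ai if n_ai else 0.0,        # AI reviews mistaken for human
    }

# Hypothetical example: three human-written and three AI-generated reviews.
sample = [("human", "ai"), ("human", "human"), ("human", "human"),
          ("ai", "human"), ("ai", "human"), ("ai", "ai")]
print(review_discrimination_errors(sample))  # type1_rate = 1/3, type2_rate = 2/3
```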