In 1950, British mathematician Alan Turing (1912-1954), one of the pioneers of computing, proposed replacing the question "Can machines think?" with a more practical, operational criterion.
He devised the so-called "imitation game", in which a human interrogator had to determine, through written conversation alone, whether they were interacting with another human or a machine. If the machine could deceive a significant number of evaluators, then it could be said to think, under any sensible definition of the word.
The so-called "Turing test" was not a test in the rigorous sense of the word, with defined protocols. It was, rather, a philosophical provocation designed to challenge the mental rigidity of its interlocutors. But today, 75 years later, any user of generative artificial intelligence (GenAI) platforms, such as the American ChatGPT and the Chinese DeepSeek, knows that machines have passed the test: the consistency of their answers and the sophistication of their expression actually exceed those of many human interlocutors.
We already live in one of the futures described by science fiction. The article "Passed the Turing Test: Living in Turing Futures", by Bernardo Nunes Gonçalves, discusses the relevance of the Turing Test in today's world. It traces the historical context in which the concept arose, its influence on the development of generative AI, and the technical, social, and philosophical implications of this new reality.
Gonçalves holds a PhD in Computational Modeling (National Laboratory for Scientific Computing, LNCC, 2015) and in Philosophy (University of São Paulo, USP, 2021). He was a fellow (2023-2024) at King's College, University of Cambridge, United Kingdom, and is currently a permanent researcher at LNCC and an associate researcher at the Center for Artificial Intelligence (C4AI), an Engineering Research Center (ERC) funded by FAPESP and IBM at USP. His article was published in the journal Intelligent Computing, from the Science group.
"Turing argued that human intelligence was largely an unknown and undefined phenomenon, and that the best way to evaluate artificial intelligence [AI] would be through observable behavior. His idea challenged the belief in the unique superiority of the human mind and served as a reference for the development of artificial intelligence," says Gonçalves.
The concept influenced popular culture. In science fiction, Stanley Kubrick's classic film "2001: A Space Odyssey" featured the supercomputer HAL 9000, an advanced AI capable of passing the Turing Test and raising questions about the autonomy and reliability of machines. In the "real world", two machines made history: in 1997, IBM's supercomputer Deep Blue, capable of analyzing up to 200 million moves per second, defeated then world chess champion Garry Kasparov; and in 2011, Watson, also from IBM, equipped with natural language processing and advanced machine learning, beat two of the greatest champions of the quiz show Jeopardy!.
"An insightful observation of Turing's was that artificial intelligence, to be intelligence, could not depend exclusively on explicit programming, but on autonomous learning, similar to the development of human intelligence. This perspective led him to predict that by the late 20th century, machines would learn to play the 'imitation game' convincingly and that the idea of 'thinking machines' would seem natural to most educated people," says Gonçalves.
It is worth repeating that the bold way in which Turing used the expression "thinking machines" rested on the assumption that we do not really know what human intelligence is.
The article argues that current generative AI models, based on transformers and deep learning, not only imitate human responses but learn to improve their performance without depending strictly on prior programming. Their results improve with training scale, certain unprogrammed abilities emerge as the model reaches a critical point, and they can sustain prolonged conversations in a manner coherent and convincing to unsuspecting interlocutors.
The main innovation of transformers is the attention mechanism, which allows the model to focus on different parts of the input when processing a specific piece of data. This makes them more efficient than previous architectures, which processed data sequentially and were therefore slower. Deep learning, in turn, is a modality of machine learning that stands out for allowing models to learn directly from data, without the need for human intervention to extract features. Both ingredients, transformers and deep learning, rest on neural networks, which mimic the functioning of human neuronal circuitry.
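For readers curious about what the attention mechanism looks like in practice, here is a minimal, illustrative sketch (our simplification, not code from the article): each token computes a similarity score against every other token, and its output becomes a weighted average of the whole sequence, which is what lets the model "focus" on the relevant parts of the input.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: each query scores every key, and the
    output is a weighted average of the values."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax turns scores into weights that sum to 1 per query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention
print(w.sum(axis=-1))  # each token's attention weights sum to 1
```

Because every token attends to every other token at once, the whole sequence can be processed in parallel, which is the efficiency gain over the older sequential architectures mentioned above.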
"Stuart Shieber [a computer scientist at Harvard University, in the United States] has shown that it is not possible to create an AI based purely on memorization, as the storage volume needed to cover all possible conversations would be greater than the known universe itself. This suggests that current AIs have some level of generalization and reasoning, and are not merely repeating patterns," argues Gonçalves.
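A back-of-envelope calculation shows why a pure lookup table is hopeless. The numbers below are illustrative assumptions of ours, not figures from Shieber or the article, but the conclusion is insensitive to them: the count of possible word sequences in even a short exchange dwarfs the roughly 10^80 atoms in the observable universe.

```python
import math

# Illustrative combinatorial sketch of the memorization argument.
vocabulary = 10_000        # assumed modest vocabulary size
exchange_length = 50       # assumed number of words in a short exchange
possible_exchanges = vocabulary ** exchange_length  # 10^200 sequences

atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(math.log10(possible_exchanges))          # 200.0
print(possible_exchanges > atoms_in_universe)  # True
```

Even if almost all of those sequences are nonsense, trimming the count by a factor of 10^100 still leaves more conversations than atoms to store them in, so a memorizing machine is physically impossible.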
He also discusses the social consequences of the evolution of artificial intelligence, pointing out that Turing not only predicted that machines would replace manual workers, but was also provocative in warning that the "masters" themselves could be replaced. This means that automation affects not only operational functions but also intellectual professions. "To prevent the benefits of AI from being concentrated in the hands of a few, a broader debate on the equitable distribution of the wealth generated by automation is required. This resonates with the vision of Turing, who believed that technology should serve society as a whole, not just the economic interests of an elite," he says.
Another critical point addressed in the article is the unsustainability of the current computational model. The energy consumption of contemporary AI systems is enormous, contrasting with the vision of Turing, who advocated a more natural model inspired by the human brain, with its low energy consumption. According to Gonçalves, AI needs to evolve to become more sustainable and less dependent on intensive computing.
The article concludes by suggesting that, as AI becomes more sophisticated, new forms of evaluation will be necessary, which can be inspired by the original Turing test. It proposes: strict statistical protocols, to prevent AI from simply "learning to deceive" traditional tests; automated adversarial tests, eliminating the need for human judges and making the assessment more objective; and verifications based on probabilistic approaches, to make machine evaluations practical and efficient. "These methods would help face emerging challenges, such as training-data bias, adversarial manipulation, and contamination of models with previously known information," says Gonçalves.
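To give a concrete flavor of what a "strict statistical protocol" might look like, here is a simple sketch of our own (not the article's specific proposal): judges label each transcript as "human" or "machine", and we ask whether their accuracy is statistically distinguishable from coin-flipping. If not, the machine passes at the chosen significance level.

```python
import math

def binomial_p_value(correct, trials, p=0.5):
    """One-sided p-value: P(X >= correct) under Binomial(trials, p),
    i.e. the chance judges would do at least this well by pure guessing."""
    return sum(math.comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(correct, trials + 1))

# Assumed example: 100 judgments, of which 54 correctly identified the machine.
p_val = binomial_p_value(54, 100)
passes = p_val > 0.05  # judges not reliably better than chance at the 5% level
print(round(p_val, 2), passes)
```

Quantifying the outcome this way, instead of relying on a judge's overall impression, is precisely what makes such a protocol repeatable and resistant to a model that has merely "learned to deceive" one particular conversational style.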
It is always worth reaffirming that the Turing test was proposed 75 years ago, when the first computers were just beginning to be conceived and built, with Alan Turing at the forefront of the process. The 2014 film "The Imitation Game", directed by Morten Tyldum, tells part of his short life, at once grand and tragic. Among many achievements, it was he who broke the code of the Enigma machine, considered unbreakable and used in Nazi Germany's communications. This feat spared thousands of lives and contributed significantly to the defeat of Nazi-fascism during World War II. But it remained unknown for decades, because all the work was done in the utmost secrecy.
In 1952, Turing was convicted of "gross indecency" due to his homosexuality, which was then illegal in the United Kingdom. As an alternative to prison, he opted for forced hormonal treatment, which in effect constituted a form of chemical castration. On June 7, 1954, at the age of 41, he was found dead at his home. The official cause of death was suicide by cyanide poisoning. Only in 2009 did the British government issue a formal apology for the way he had been treated. And in 2013, after a public campaign, Turing posthumously received a royal pardon.
"We are already living in one of the 'Turing futures', in which machines are able to imitate human cognition to the point of being indistinguishable in certain interactions. This does not mean that artificial intelligence has reached its full potential. There are still fundamental challenges to be resolved, such as computational sustainability, equity in the distribution of benefits, and the need for more robust evaluation methods. Turing's vision remains more relevant than ever, not just as a technical criterion but as a starting point for deeper debates on the impact of AI on society and humanity," concludes Gonçalves.
In addition to the funding for C4AI, the study on which the article is based received support from FAPESP through a postdoctoral scholarship and a research internship abroad granted to Gonçalves.
The article "Passed the Turing Test: Living in Turing Futures" can be accessed at: https://spj.science.org/doi/10.34133/icomputing.0102
This content was originally published on CNN Brazil under the headline "We already live in one of the futures described by science fiction, says researcher".
Source: CNN Brasil