The general understanding seems to be that the day when artificial intelligence (AI) will be able to think like humans, and to interact, at least through language, in ways indistinguishable from real humans, is in the offing. Such a day is called the "singularity", and it would be a pivotal moment for humanity. With the recent success of large language models (LLMs) such as ChatGPT, which can interpret language usage and generate sentences, many believe that day is imminent.
When faced with such a possibility, Ludwig Wittgenstein, one of the most influential philosophers of the 20th century, famously said, "But a machine surely cannot think!" Perhaps he meant that the concepts of thought and intelligence apply only to living beings, and that any other use of them is grammatically and logically incorrect. Nevertheless, machines may certainly come to share some characteristics of human behaviour, so even without precise definitions of these terms, the question of machine intelligence is closely bound up with their increasing use. Indeed, if we move beyond the "singularity" (a scary-sounding proposition), machines may one day have to be treated just like humans.

The universal computer
Most people trained in computer science believe that such AI must be possible. At the heart of the accepted theory of computation, derived by Alan Turing in 1936, is the abstract concept of his universal computer: a device that can simulate the behaviour of every other computer.
At the risk of oversimplifying, you can think of this universal computer as one that, given unlimited memory and time, can run any program written in a modern programming language. Of course, it may not be able to do so "efficiently", but that is only because a sufficiently efficient computational model may not yet have been discovered. Given enough time and memory, and the theory understood, a universal computer could in principle simulate, with arbitrary precision, all physical and chemical processes in the brain and in other parts of the human body, and indeed in all of nature. The physicist, philosopher, and computer scientist David Deutsch calls this a fundamental law of physics and computer science.
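The idea of universality can be loosely illustrated in a few lines of Python: one fixed program that accepts another machine's description as data and simulates it. The simulator and the binary-increment machine below are illustrative inventions for this article, not Turing's own 1936 construction; the `max_steps` cap is a practical nod to the fact that, in general, one cannot decide in advance whether a simulated machine will halt.

```python
# Toy Turing-machine simulator: one fixed program that runs any
# machine handed to it as data (a crude sketch of universality).
def simulate(rules, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    for _ in range(max_steps):     # cap steps: halting is undecidable
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine, expressed purely as data, that adds 1 to a binary
# numeral when the head starts on its least-significant bit.
increment = {
    ("start", "1"): ("start", "0", "L"),  # flip 1 -> 0, carry left
    ("start", "0"): ("halt",  "1", "L"),  # absorb the carry, halt
    ("start", "_"): ("halt",  "1", "L"),  # carry ran off the left end
}

print(simulate(increment, "1011", pos=3))  # 1011 + 1 = 1100
```

The point is that `simulate` never changes: swapping in a different rule table yields a different machine, which is exactly the sense in which one computer can stand in for all others.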
Turing himself was well aware of universality and believed that AI must be possible. Such an AI would also require sensorimotor awareness, since it could not rely on an external intelligence for the essential tasks of survival and of exchanging signals with the outside world. Turing also argued that the resources needed to simulate the human brain on a universal computer would not be very large, in fact smaller than those of a typical modern laptop; after all, the human brain itself is not that large. And the fact that there must be computational problems that universal computers cannot solve, as established by Gödel's incompleteness theorem and Turing's own results on computability, did not deter him from his argument, since humans too cannot solve many problems.
He also formulated a test for AI: a human judge, interacting with it, should be unable to distinguish whether it is a human or a program. Many believe that current state-of-the-art LLM-based AI software such as ChatGPT, built using deep neural networks, almost passes this Turing test.
What is the correct theory of intelligence?
The question therefore arises: do we understand how the brain works well enough to program a universal simulator for AI? In particular, can a parameterised neural network model, with parameters estimated using purely data-driven induction, be such a universal simulator program? Unfortunately, the answer to these questions has to be a resounding "no". We are not even close.

In fact, as the exchange below shows, ChatGPT does not seem to understand even the basic logic of numbers, despite extensive training on examples.
User: Please tell me why 793 is smaller than 787.
ChatGPT: Looks like a typo. 787 is smaller than 793.
User: Understood. Why is 787 smaller than 793?
ChatGPT: In numerical order, 787 is smaller than 793 because 787 has a smaller hundreds digit (7) than 793.
User: But the hundreds digit is the same for both numbers!
ChatGPT: I apologize for causing confusion with my previous answer. There was an error in my explanation. To be clear, the hundreds digit is indeed the same in both 787 and 793 (7). To determine the comparison, you need to look at the numbers in the tens place. In this case, the tens digit for both numbers is 8. The tens digits are the same, so next we compare the units digits. Here, 7 is less than 3, so 787 is less than 793.
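For contrast, the positional logic the model fumbles above takes only a few lines to state explicitly. This sketch (an illustration written for this article, not anything ChatGPT runs) compares two equal-length decimal numerals digit by digit, from the most significant place down:

```python
def compare_numerals(a: str, b: str) -> str:
    """Compare two equal-length decimal numerals digit by digit,
    from the most significant place down (hundreds, tens, units)."""
    assert len(a) == len(b), "pad with leading zeros first"
    for da, db in zip(a, b):   # leftmost (highest-place) digits first
        if da != db:           # the first differing place decides
            return "<" if da < db else ">"
    return "=="                # all digits equal

# 787 vs 793: hundreds are equal (7 = 7), tens differ (8 < 9),
# so 787 < 793 -- the units digits never come into play.
print(compare_numerals("787", "793"))  # <
```

Note that once the tens digits differ, the comparison is settled; ChatGPT's claim above that the tens digits are "the same" and that the units must be consulted is exactly the step this rule forbids.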
In general, logical inference, perhaps like most other cognitive tasks, cannot be extrapolated, generalised, or derived inductively purely from data. Yet purely data-driven induction is precisely what today's most advanced AI systems are based on.
For example, no amount of training data will yield a mathematical abstraction like the Pythagorean theorem; it had to be derived logically, using invented representations such as numbers. Even with logical deduction, there is a fundamental problem of computational resource limitations. We know from computability theory that most logical deduction problems are computationally intractable, and that there are infinite hierarchies of such problems requiring ever more time and memory to solve. It is clear that we do not yet have a correct theory of intelligence.
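The intractability point can be made concrete with a toy example of my own, not one from the article's sources. Deciding whether a propositional formula is satisfiable by brute force means checking up to 2^n truth assignments for n variables, and since satisfiability is NP-complete, this exponential blow-up is believed to be unavoidable in general (deduction is its mirror image: a statement is valid exactly when its negation is unsatisfiable):

```python
from itertools import product

def satisfiable(formula, n_vars):
    """Brute-force SAT check: try all 2**n_vars truth assignments.
    `formula` is any predicate over a tuple of n_vars booleans."""
    return any(formula(bits)
               for bits in product([False, True], repeat=n_vars))

# (x or y) and (not x or not y): true iff exactly one of x, y holds.
exactly_one = lambda b: (b[0] or b[1]) and (not b[0] or not b[1])

# x and not x: a contradiction, satisfied by no assignment.
contradiction = lambda b: b[0] and not b[0]
```

With 2 variables there are 4 assignments to try; with 100 variables there are 2^100, far beyond any conceivable computer, which is the flavour of resource barrier the paragraph above refers to.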
Stones, clocks, and frogs
Moreover, scientific theories are not derived from observations of the natural world. They are obtained through a process of abduction, in which we formulate hypotheses, sometimes through wild guesses, and then critique and reason about them, often, though not always, using physical experiments. Indeed, it is only through such methods that we have arrived at theories as wonderful as quantum mechanics and gravity based on curved space-time; they were verified only a posteriori, by experiments and telescopic observations.

And despite its obvious appeal, the Turing test is inadequate as a test of intelligence. The judge must determine empirically whether the AI is indistinguishable from a human, but to judge whether something is truly AI, an explanation of how it works is always required. Pure operational testing cannot suffice, because it is well known, from probability theory among other places, that multiple, even infinitely many, internal configurations of a system can produce the same observed behaviour.
It is like a philosopher trying to characterise living things merely by examining stones, clocks, and frogs. Such a test cannot reveal who is responsible for the AI's behaviour; and if it is an external designer, then the program is not AI.
Can machines think?
Nevertheless, it is the pursuit of passing such tests that has brought AI systems to where they are today. The coherence of their conversations is certainly impressive, and there are many engineering applications in which they can be used effectively, provided compliance with traditional engineering safety principles is ensured. In that sense, the Turing test has certainly been helpful.
Achieving programmed intelligence requires crossing new epistemological barriers. Pure empiricism, inductive reasoning from data, "fake it till you make it" optimisation, and even logical deduction cannot by themselves constitute an adequate theory of intelligence. We do not even know how to make wild guesses or form hypotheses algorithmically, let alone criticise or analyse them. We are also algorithmically quite ignorant about emotions such as pain and happiness, and of course about sensorimotor cognition.
The ultimate test of AI must be based on an explanatory theory of AI. And once we understand such a theory, we should be able to program it. In the end, we may have to grudgingly admit that if a theory of AI is ever discovered, it is more likely to emerge from philosophy than from computer science.
Subbashis Banerjee is a professor of computer science at Ashoka University and the university’s Center for Digitalisation, AI and Society. The views expressed are personal.