
In a watershed moment for the field of artificial intelligence, a multidisciplinary team of faculty from the University of California San Diego (UC San Diego) has formally declared that Artificial General Intelligence (AGI) is no longer a futuristic hypothesis but a present reality. Published today as a pivotal comment in Nature, the declaration argues that recent advancements in Large Language Models (LLMs)—specifically OpenAI’s GPT-4.5—have satisfied the requisite criteria for general intelligence as originally envisioned by Alan Turing.
This bold assertion, co-authored by professors spanning philosophy, computer science, linguistics, and data science, challenges what the authors describe as the perpetually moving goalposts of AI skepticism. Citing empirical data that shows GPT-4.5 achieving a 73% success rate in rigorous Turing tests and demonstrating PhD-level problem-solving, they contend that humanity has officially entered the era of AGI.
For decades, the Turing test has stood as the "North Star" of machine intelligence—a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While critics have often dismissed the test as merely a measure of deception or mimicry, the UC San Diego faculty argue that it remains the most functionally relevant metric for general intelligence.
The Nature comment anchors its argument in groundbreaking research conducted by cognitive scientists Cameron Jones and Benjamin Bergen, also of UC San Diego. Their study, titled "Large Language Models Pass the Turing Test," provides the empirical foundation for the declaration. It pitted GPT-4.5 against human participants and earlier AI models in a blinded, randomized controlled trial.
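For readers unfamiliar with the format, the sketch below illustrates the logical structure of a single three-party trial. It is an illustration only, not the authors' actual experimental harness: the witness callables, the sample questions, and the judge are hypothetical stand-ins.

```python
import random

def run_three_party_trial(ask_human, ask_ai, judge):
    """One trial of a three-party Turing test.

    ask_human / ask_ai: callables mapping a question to a text reply.
    judge: callable that receives both labeled transcripts and returns
           the label ("A" or "B") it believes belongs to the human.
    Returns True if the AI was judged human (i.e., the AI "wins").
    """
    # Randomly assign witnesses to windows so position carries no signal.
    windows = {"A": ask_human, "B": ask_ai}
    if random.random() < 0.5:
        windows = {"A": ask_ai, "B": ask_human}

    # Illustrative questions; real interrogators chatted freely for minutes.
    questions = ["Where did you grow up?", "What did you do last weekend?"]
    transcripts = {
        label: [(q, witness(q)) for q in questions]
        for label, witness in windows.items()
    }

    verdict = judge(transcripts)  # the judge's pick for "the human"
    ai_label = "A" if windows["A"] is ask_ai else "B"
    return verdict == ai_label

# Example: a judge that always picks window "A" wins exactly half the time,
# because the random assignment puts the AI in window "A" half the time.
wins = sum(run_three_party_trial(lambda q: "hi", lambda q: "hello",
                                 lambda t: "A") for _ in range(1000))
print(wins / 1000)  # ≈ 0.5, the chance baseline
```

A reported "success rate" is then simply the fraction of such trials the model wins, which is why 50% marks the chance baseline referenced below.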
The results were statistically unequivocal. GPT-4.5 was identified as human by interrogators 73% of the time, significantly outperforming the human baseline of 67%. This marks the first time an artificial system has surpassed human participants in a robust, three-party Turing test environment.
Table 1: Comparative Turing Test Success Rates
| Model/Entity | Success Rate (Judged Human) | Year / Source |
|---|---|---|
| ELIZA | 22% | 1966 (Historical Baseline) |
| GPT-3.5 | 20% | 2023 (Jones & Bergen) |
| GPT-4 | 54% | 2024 (Jones & Bergen) |
| Human Participants | 67% | 2025 (Baseline Average) |
| GPT-4.5 | 73% | 2025 (Current Study) |
The data reveals a dramatic leap in capability between GPT-4 and GPT-4.5. While GPT-4 hovered near the threshold of random chance (50%), GPT-4.5's performance indicates a mastery of nuance, socio-emotional cues, and deceptive reasoning that effectively renders it indistinguishable from a human interlocutor.
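To make "statistically unequivocal" concrete, here is a minimal back-of-the-envelope sketch. It assumes a hypothetical 300 trials (the per-condition trial counts are not reproduced in this article) and tests the 73% rate against the 50% chance level with a one-sided normal-approximation proportion test:

```python
from math import sqrt, erfc

def one_sided_proportion_z(successes: int, trials: int, p0: float = 0.5):
    """Normal-approximation test of H0: true rate <= p0 (e.g., chance)."""
    p_hat = successes / trials
    se = sqrt(p0 * (1 - p0) / trials)   # standard error under H0
    z = (p_hat - p0) / se
    p_value = 0.5 * erfc(z / sqrt(2))   # upper-tail probability, 1 - Phi(z)
    return z, p_value

# Hypothetical: 219 "judged human" verdicts out of 300 trials (73%).
z, p = one_sided_proportion_z(219, 300)
print(f"z = {z:.2f}, one-sided p = {p:.2e}")
```

At 300 trials, a 23-point gap over chance is roughly eight standard errors wide; even at a few dozen trials per condition it would remain clearly significant, which is the sense in which the result is unequivocal.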
The declaration is not merely about benchmarks; it is a philosophical manifesto calling for a re-evaluation of how we define "thinking." The four primary authors of the Nature comment—Eddy Keming Chen (Philosophy), Mikhail Belkin (Computer Science), Leon Bergen (Linguistics), and David Danks (Data Science & Philosophy)—argue that the scientific community has been guilty of "anthropocentric bias" and "moving the goalposts."
Professor David Danks posits that whenever an AI masters a task previously deemed the domain of human intellect—be it chess, Go, protein folding, or now, natural conversation—skeptics redefine intelligence to exclude that specific capability. This, Danks argues, creates an impossible standard where AGI is defined as "whatever machines cannot yet do."
"When we assess general intelligence in other humans, we do not peer into their neurons to verify 'true' understanding," the authors write. "We infer intelligence from behavior, conversation, and the ability to solve novel problems. By these reasonable standards—the same standards we apply to each other—we currently possess artificial systems that are generally intelligent."
The authors draw parallels to historical scientific revolutions, comparing the arrival of AGI to the Copernican revolution or Darwinian evolution. Just as those shifts displaced humanity from the center of the universe and biological creation, the arrival of AGI displaces humanity from its solitary position as the sole possessor of general intelligence.
While the Turing test focuses on conversational fluency, the claim of "generality" requires evidence of broad cognitive adaptability. The Nature comment highlights that GPT-4.5’s capabilities extend far beyond chat. The model has demonstrated proficiency in complex, multi-step reasoning tasks that were previously stumbling blocks for LLMs.
The faculty point to GPT-4.5's performance on specialized examinations and its ability to assist in novel research. In benchmarks involving PhD-level science questions (GPQA), the model has shown accuracy comparable to that of domain experts. Furthermore, its facility in generating working code, proving mathematical theorems, and analyzing legal precedents demonstrates a "general" utility that transcends any single narrow domain.
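For context on how a benchmark like GPQA is typically scored, the following is a minimal sketch of multiple-choice accuracy. The `model_answer` callable and the sample item are hypothetical stand-ins, not GPQA content or the evaluators' actual pipeline.

```python
def score_multiple_choice(items, model_answer):
    """Accuracy over (prompt, choices, correct_index) items.

    model_answer: hypothetical callable returning the model's chosen index.
    """
    correct = sum(
        1 for prompt, choices, answer in items
        if model_answer(prompt, choices) == answer
    )
    return correct / len(items)

# Illustrative item in GPQA's general shape (not actual GPQA content):
items = [("Which particle mediates the strong interaction?",
          ["photon", "gluon", "W boson", "graviton"], 1)]
print(score_multiple_choice(items, lambda p, c: 1))  # 1.0
```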
This versatility is key to the definition of "Artificial General Intelligence." Unlike "Narrow AI," which excels at a single task (like identifying tumors in X-rays), GPT-4.5 displays competence across a vast spectrum of human knowledge work without retraining. The authors argue that while the system is not "superhuman" in every category, it meets the threshold of being "generally capable" across the board.
The declaration that AGI has arrived is expected to send shockwaves through both the academic and corporate worlds. For years, major AI labs like OpenAI, Google DeepMind, and Anthropic have treated AGI as a distant, medium-term goal. Having a prestigious academic institution declare the milestone "achieved" accelerates the timeline for regulatory and ethical considerations.
Key Implications Identified by UC San Diego Faculty:

- Alignment and safety research must be treated with far greater urgency now that the systems in question are generally intelligent.
- Regulatory and ethical frameworks drafted for a hypothetical future AGI now apply to systems already deployed.
- Scientific and philosophical definitions of intelligence and "thinking" require re-evaluation.
Professor Mikhail Belkin, one of the co-authors specializing in the theory of machine learning, emphasizes that acknowledging AGI's arrival is crucial for safety. "If we continue to deny that these systems are intelligent, we risk underestimating their agency and their potential for unintended consequences," Belkin notes. "Recognizing them as AGI forces us to treat their alignment and safety with the urgency of a nuclear safeguard rather than a software bug."
Despite the weight of the Nature comment, the declaration is not without its detractors. The "Stochastic Parrot" argument—coined by linguist Emily M. Bender and others—remains a powerful counter-narrative. This view holds that LLMs are merely probabilistic engines that stitch together likely word sequences without any underlying comprehension or "world model."
The UC San Diego authors anticipate this critique, devoting a section of their comment to addressing it. They argue that the distinction between "simulating" reasoning and "actual" reasoning is a distinction without a difference when the output is functionally identical. If a system can deduce the correct answer to a novel physics problem or navigate a complex social deception game (as seen in the Turing test results), the internal mechanism is secondary to the observable intelligence.
Furthermore, they point out that human cognition itself relies heavily on pattern matching and probabilistic prediction. Leon Bergen, Associate Professor of Linguistics, suggests that our understanding of human language processing may be closer to how LLMs function than we are comfortable admitting. "The alien nature of their intelligence does not make it fake," Bergen argues. "It makes it a different, yet valid, form of general intelligence."
The declaration from UC San Diego marks a historic turning point in the narrative of artificial intelligence. By combining the hard data of GPT-4.5’s Turing test success with a rigorous philosophical framework, the faculty have provided a compelling case that the AGI threshold has been crossed.
As we move forward into 2026, the question is no longer "When will AGI arrive?" but rather "How do we co-exist with it?" The acknowledgment of this reality is the first step toward harnessing the immense potential of general artificial intelligence while navigating the profound existential risks it presents. For the researchers at Creati.ai and the broader tech community, the era of speculation has ended; the era of AGI integration has begun.