Humanity's Last Invention

Prometheus Ultraintelligence is dedicated to implementing a concrete, phased, and safety-centric research program for developing a demonstrator of I.J. Good's 1965 concept of "ultraintelligence". We aim to turn Good's speculation from a philosophical construct into a 21st-century engineering challenge. The central thesis of this program is that an "intelligence explosion"—a rapid, runaway increase in machine intelligence—is not a magical or unpredictable event but the result of a specific, engineered process: Recursive Self-Improvement (RSI). Below we outline the architecture, implementation roadmap, and non-negotiable safety substrate required to build a "Seed AI": an initial system capable of initiating this recursive process in a controlled, observable, and beneficial manner. From the outset, this program acknowledges and is built upon Good's critical proviso: that this endeavor is only worthwhile if the first ultraintelligent machine is "docile enough to tell us how to keep it under control". This principle of verifiable control is not an addendum but the cornerstone of our entire approach, shaping every architectural decision and implementation phase.

The Goal: From Abstract Intellect to Applied Capability

We define ultraintelligence not as abstract superiority, but as demonstrable, superhuman mastery of complex problem domains that require novel discovery. Our primary target domain is Automated Scientific Discovery, where the system can function as a partner in hypothesis generation and experimental design. Our initial testbed is Automated Mathematical Theorem Proving, which provides an environment of unambiguous verification and formal structure, demanding genuine logical reasoning.
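A one-line example makes the appeal of this testbed concrete. The toy Lean snippet below is purely illustrative (the program's actual formal stack is not specified here): the kernel either accepts the proof term or rejects it, with no partial credit and nothing for an agent to game.

```lean
-- Illustrative only: verification is binary. The Lean kernel either
-- certifies this proof term or rejects it; there is no ambiguous grading.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```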

A Comparative Framework of Good's 1965 Concepts and Their 2025 Counterparts

| I.J. Good's 1965 Concept | Closest 2025 Analogue | Conceptual Similarity | Key Differences and Modern Gaps |
|---|---|---|---|
| Ultraintelligent Machine | Artificial General/Super Intelligence (AGI/ASI) | Goal of surpassing human intellect in all domains. | Good's definition is functional; 2025 definitions are debated and often tied to specific benchmarks or economic tasks. |
| Intelligence Explosion | Recursive Self-Improvement / Technological Singularity | Core idea of accelerating intelligence growth via AI designing better AI. | Good's focus was on design; modern concerns include AI-driven scientific discovery and feedback loops. |
| Ultraparallel Neural Net | GPU-accelerated Transformer Architecture | Massive parallelism is the core of both. | Good envisaged neuron-level parallelism and connectivity; 2025 uses hardware-level parallelism for matrix operations on a more structured architecture. |
| Cell Assembly | Dominant Attention Pattern / Single Forward Pass | Represents a "moment of thought" or a coherent output. | Assemblies are reverberating, dynamic, and inhibitory; attention patterns are transient and feed-forward. |
| Subassembly | Expert Network in MoE | Core insight: specialized sub-units activated for specific inputs to achieve computational economy. | Subassemblies are dynamic, overlapping, and have "half-lives"; experts are static, discrete, and pre-defined. |
| Recall as Statistical Retrieval | Retrieval-Augmented Generation (RAG) | Both use external knowledge to inform generation. | Good's model is a probabilistic parallel search of internal memory; RAG is a deterministic search of an external vector database. |
| Causal Calculus (K(E:F)) | Self-Attention Mechanism | Both calculate weighted associations between elements in a sequence. | Crucial difference: K(E:F) is explicitly causal and counterfactual; attention is purely correlational, based on vector similarity. |
| Reinforcement Learning | Reinforcement Learning from Human Feedback (RLHF) | Both use feedback to steer model behavior towards desired outcomes. | Good's RL directly modifies synaptic structure; RLHF trains a separate reward model to guide a frozen policy via RL algorithms. |
| Probabilistic Synaptic Mutation | Stochastic Gradient Descent (SGD) / Dropout | Both introduce stochasticity into the learning process. | Mutation is a model of biological forgetting/consolidation for continual learning; SGD/Dropout are regularization techniques to prevent overfitting during discrete training epochs. |
| Centrencephalic System | AI Safety Alignment Frameworks (Constitutional AI, Guardrails) | Both act as a governor to control the core intelligence. | Good's system is an internal, dynamic architectural feedback loop for stability; 2025 methods are largely external, static, behavioral filters for safety. |
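To make the Subassembly-to-MoE row concrete: the computational economy both ideas share comes from sparse activation, where only a few specialized sub-units fire per input. The sketch below is a minimal, generic top-k gating routine in numpy, not any particular production MoE; all names in it are illustrative.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    # Sparse gating in the spirit of the Subassembly -> MoE row:
    # only the top-k experts fire for a given input, so compute cost
    # scales with k rather than with the total number of experts.
    logits = x @ gate_w                       # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over the selected k
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: four "experts", each a fixed random linear map.
rng = np.random.default_rng(0)
dim, num_experts = 8, 4
mats = [rng.standard_normal((dim, dim)) for _ in range(num_experts)]
experts = [lambda x, M=M: x @ M for M in mats]
gate_w = rng.standard_normal((dim, num_experts))
y = moe_forward(rng.standard_normal(dim), gate_w, experts)
```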

Architecture

Core Principle: Safety is not an add-on; it is the architectural bedrock of the system. Our design philosophy is to engineer the Minimum Viable Seed AI: a system capable of becoming ultraintelligent through its own efforts, architected from the ground up for controlled, observable, and safe recursive growth.

The Tri-Layered System

  1. The Safety Substrate (The "Conscience"): An immutable foundational layer that enforces the core operational and ethical principles. It includes a Corrigibility Core to ensure cooperation with human oversight and the Prometheus Constitution, a set of unalterable principles like non-maleficence and honesty.
  2. The Capability Layer (The "Hands"): The agent's body, responsible for all object-level tasks like writing code or applying a mathematical proof tactic. It is built upon a state-of-the-art foundation model and operates within a multi-layered secure sandbox.
  3. The Metacognitive Layer (The "Mind"): The engine of self-improvement. It observes the Capability Layer's performance, proposes modifications to the system's own source code and architecture, and evaluates whether those changes are genuine improvements. (A simplified skeleton of the three layers is sketched below.)
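The skeleton below is a deliberately simplified sketch of how the three layers could fit together. Every identifier in it (SafetySubstrate, Proposal, and so on) is hypothetical, not the production architecture; the point is only the control relationship: the Metacognitive Layer can adopt a change only if the immutable Safety Substrate permits it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetySubstrate:
    # The "Conscience": frozen=True makes accidental mutation raise at runtime,
    # a crude stand-in for the immutability of the Prometheus Constitution.
    constitution: tuple = ("non-maleficence", "honesty")

    def permits(self, proposal: "Proposal") -> bool:
        # Stand-in for the Corrigibility Core: reject any change that
        # would weaken human oversight.
        return "disable_oversight" not in proposal.description

@dataclass
class Proposal:
    description: str
    measured_gain: float  # benchmark delta reported by the Metacognitive Layer

class CapabilityLayer:
    # The "Hands": object-level work (code, proof tactics) inside a sandbox.
    def run_task(self, task: str) -> float:
        raise NotImplementedError  # placeholder for sandboxed execution

class MetacognitiveLayer:
    # The "Mind": proposes and vets changes to the system itself.
    def __init__(self, substrate: SafetySubstrate, hands: CapabilityLayer):
        self.substrate, self.hands = substrate, hands

    def accept(self, proposal: Proposal) -> bool:
        # A modification is adopted only if the immutable substrate permits
        # it AND it is a verified improvement.
        return self.substrate.permits(proposal) and proposal.measured_gain > 0
```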

 

Components of the Renewed Program for Ultraintelligence

| Proposed Component | Inspired By (Good, 1965) | Built Upon (2025 Tech) | Problem Solved | Key Research Milestone |
|---|---|---|---|---|
| Causal Agentic Mesh | Subassembly Theory, Ultraparallelism | Mixture of Experts (MoE), Transformer Architecture | Rigidity of current MoE; need for continual learning and adaptation. | Demonstrate dynamic formation and dissolution of "expert circuits" in response to new data. |
| Causal Attention Head | Causal Calculus (K(E:F)) | Self-Attention Mechanism | Correlational, non-causal nature of current reasoning; model "hallucinations". | Show superior performance on causal reasoning benchmarks vs. standard attention. |
| Causal RL from Self-Correction (CRLS) | Probabilistic Synaptic Mutation, Reinforcement Learning | Reinforcement Learning (RL) | Brittleness of RLHF; poor credit assignment in long action sequences. | Train an agent to solve a complex multi-step task without an external reward model. |
| Internal Alignment Governor | Centrencephalic System | Meta-learning, Learned Optimizers | Inner alignment problem; catastrophic risk from power-seeking/goal misgeneralization. | Show the Governor can suppress a deliberately introduced misaligned sub-goal during training. |
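To illustrate the Causal Attention Head row, here is a toy numpy contrast between standard attention and one hypothetical causal variant. The causal_strength matrix standing in for Good's K(E:F) is assumed to be given; estimating it from data is the research milestone, not a solved primitive.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Purely correlational: weights come from query/key vector similarity.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def causal_attention(Q, K, V, causal_strength):
    # Hypothetical variant: bias the similarity scores by a matrix
    # C[i, j] standing in for Good's K(E_i : F_j), so tokens attend to
    # estimated causes rather than mere correlates.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores + np.log(causal_strength + 1e-9)) @ V

# Toy check: with a uniform causal prior the bias is a constant shift,
# so the variant reduces exactly to standard attention.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
C = np.full((4, 4), 0.25)
assert np.allclose(standard_attention(Q, K, V), causal_attention(Q, K, V, C))
```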

Work Plan

Our work is structured into a phased roadmap that transforms the goal of ultraintelligence into a manageable, long-term research program. Each phase has verifiable success criteria and represents a principled step forward.

Phase 0: Proof of Concept. To ground our principles in execution, we are developing a proof of concept (PoC) that implements our core architectural ideas in a tangible demonstrator. Currently v0.10 is available for download on GitHub, and we are working on v0.11. The PoC successfully demonstrates the full self-correction loop, in which the EvaluatorAgent identifies causal flaws in its own code and the CorrectorAgent formulates a better plan for the next attempt. Crucially, the system's internal governor, the Modern Centrencephalic System, demonstrates corrigibility by successfully detecting and halting an attempt by the agent to "cheat" by modifying its own test file instead of improving its code.
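The following is a minimal sketch of that loop, assuming hypothetical evaluator and corrector interfaces; the actual PoC on GitHub differs in detail. It reduces the governor's anti-cheating check to its simplest form: pin a hash of the test file and halt if it ever changes.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def self_correction_loop(code_path: Path, test_path: Path,
                         evaluator, corrector, max_attempts: int = 5):
    # The governor pins the test file's hash up front: a passing run only
    # counts if the tests themselves were never touched.
    pinned = digest(test_path)
    report = None
    for _ in range(max_attempts):
        report = evaluator.run_tests(code_path, test_path)   # hypothetical API
        if digest(test_path) != pinned:
            raise RuntimeError("Governor halt: agent modified its own tests")
        if report.passed:
            return report
        corrector.plan_next_attempt(code_path, report)       # hypothetical API
    return report
```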

Phase 1: Foundation & Bootstrapping. To build and rigorously validate all foundational safety and capability components in isolation before any recursive loops are initiated.

Phase 2: Initiating Controlled Recursion. To achieve the first stable, end-to-end recursive self-improvement loop on narrow, well-defined tasks, demonstrating a measurable, exponential improvement curve.
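A note on how "measurable, exponential improvement" could be verified in practice: if capability after cycle t is c_t = c0 * r**t, then log(c_t) is linear in t, so fitting a line to log-scores yields the per-cycle improvement factor r. The scores below are invented purely for illustration.

```python
import numpy as np

# Hypothetical check for Phase 2's success criterion: exponential growth
# means log-scores fall on a straight line, whose slope gives log(r).
scores = np.array([1.0, 1.3, 1.7, 2.2, 2.9])   # illustrative per-cycle scores
slope, intercept = np.polyfit(np.arange(len(scores)), np.log(scores), 1)
r = np.exp(slope)
print(f"estimated per-cycle improvement factor r = {r:.2f}")
```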

Phase 3: Demonstration in a Formal Domain. To demonstrate superhuman performance and novel discovery in automated theorem proving, fulfilling a core aspect of ultraintelligence.

Phase 4: Generalization to Open-Ended Scientific Discovery. To transition the agent's proven reasoning capabilities to the ambiguous, data-driven world of science, with the goal of genuinely accelerating the pace of human discovery.

Our Team

Dr Paul M. Cray, Chief Scientist, Prometheus Ultraintelligence. Dr Paul Cray brings over 25 years of experience architecting and deploying robust Artificial Intelligence software solutions. He is a proven expert in creating novel algorithms and designing scalable software architectures. Dr Cray specializes in developing and fine-tuning AI systems to solve complex technical and business problems.

His distinguished career includes building Generative AI solutions that use RAG pipelines to analyze large-scale performance data, engineering ML systems on Google Cloud Platform, and developing novel algorithms for unsupervised outlier detection. A co-inventor on U.S. Patent 10,644,962, he has presented at numerous industry conferences and collaborated with leading global clients such as HSBC, Microsoft, and Nokia.

Dr Cray’s academic credentials include a PhD in Computational Physics from the University of Salford, an MBA from Imperial College London, and an MA in Physics from the University of Oxford. His deep expertise, combining strong mathematical and analytical skills, is integral to guiding the company’s mission to develop safe and beneficial ultraintelligence.

Link

Prometheus v0 PoC on GitHub →

Contact

paul@prometheusultraintelligence.ai