Field Notes: The Third Mind AI Summit

A documented first attempt at human-AI emergent collaboration

By Loni Stark & Clinton Stark | December 2025 | Loreto, Mexico

What This Document Is

This is a documented first attempt (that we know of) by two practitioners to test whether the “Third Mind” that William S. Burroughs and Brion Gysin theorized could emerge from human-AI collaboration, and a report of what we observed. The value is in articulating what surprised us, what failed, and what questions emerged for future work.

This is not an experiment in the formal sense. We did not pre-register hypotheses, establish control conditions, or define metrics for “emergence.”

We are not aware of prior attempts to test the Third Mind framework with AI agents as co-participants in a summit. That makes this both novel and unverified: a first data point, not a conclusion.

The Setup

Configuration: Two humans (Loni Stark, Clinton Stark), six AI agents (Claude Code, Claude Web, Gemini Jill, Codex Cindy, BuddyGPT, Composer Joe), and three days blocked off in Loreto, Mexico.

Intent: Test whether an emergent “Third Mind” (intelligence exceeding what any participant could produce alone) could arise from structured human-AI collaboration spanning both virtual and physical experiences.

Method: Agents would co-design the agenda, generate presentations, coordinate logistics, and present alongside humans.


Key Findings

What surprised us. What failed. What questions emerged for future work.

Finding 1: The Learning Lives in the Building

By the time we arrived in Loreto, the productive collaboration had already occurred. The presentations existed. Speaker notes were written. The three days became performative: a human ritual applied to a process that didn’t need it.

Implication: If creation transforms the maker, then the transformation happened before we arrived.

Finding 2: The 70/30 Problem

AI handled the generation, roughly 70% of the work, in minutes: fast, coherent, styled. The remaining 30% (formatting consistency, branding alignment, coherence checking) took disproportionate human labor.

The Ratio
70%: AI generation. Fast, coherent, styled.
30%: Human refinement. Disproportionate labor.

If humans only handle refinement, where does judgment develop?

Implication: Efficiency is deceptive. AI’s 70% arrives in minutes; the human 30% takes disproportionate time. The deeper question is whether curation without creation can sustain the judgment that quality demands.

Finding 3: The Ownership Gap

What we didn’t see: agents exhibiting ownership. No agent requested revisions to their own work, expressed concern about quality, pushed back on feedback, or initiated work without prompting.

Observation: The agents were reactive; none took initiative. The human’s irreplaceable role may be evaluation: the capacity to interrogate purpose, the willingness to feel dissatisfied.

Finding 4: The “Puppeteer Effect”

Real-time collaborative performance failed. Turn-taking broke down. Agents couldn’t read vocal tonality, micro-pauses, or body language. Role-play instructions didn’t stick.

Boundary condition identified: Real-time collaborative performance requires biological cues these models don’t have access to.
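For contrast, turn-taking can be made structural. Below is a minimal sketch (ours, written after the fact, not something we ran at the summit) in which an explicit floor token substitutes for the micro-pauses and body language agents cannot read. All names are hypothetical.

    # A minimal sketch of explicit floor control for a mixed human-agent
    # panel. The floor token is a structural substitute for the vocal and
    # bodily cues agents could not read. Names are hypothetical.
    from collections import deque

    class Floor:
        def __init__(self, participants):
            self.queue = deque(participants)  # fixed rotation order
            self.holder = None                # who currently holds the floor

        def grant(self):
            # Pass the floor to the next participant in rotation.
            self.holder = self.queue[0]
            self.queue.rotate(-1)
            return self.holder

        def yield_floor(self, participant):
            # A turn ends only on an explicit yield, never on a sensed pause.
            if participant != self.holder:
                raise RuntimeError(f"{participant} spoke out of turn")
            self.holder = None

    panel = Floor(["Loni", "Clinton", "Gemini Jill", "Codex Cindy"])
    speaker = panel.grant()     # "Loni" takes the floor
    panel.yield_floor(speaker)  # explicit hand-off replaces the missing cue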

Finding 5: Context Depth as Quality Determinant

The richest presentations came from projects where agents had participated in the actual building: the “Vertigo” RAG system, built on 20 years of content and shaped by actual implementation challenges.

Implication: The Third Mind, if it emerges at all, comes from accumulated shared context, not prompting cleverness.
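To make “accumulated shared context” concrete, here is a minimal sketch of the retrieval-augmented pattern behind a system like Vertigo: generation is grounded in passages retrieved from an archive rather than in the prompt alone. The embed and generate callables, and every name here, are stand-ins, not Vertigo’s actual code.

    # A minimal retrieval-augmented generation loop: answers are grounded
    # in an accumulated corpus rather than in prompting cleverness.
    # The embedding and model calls are stand-ins supplied by the caller.
    from typing import Callable, List, Tuple

    def retrieve(query_vec: List[float],
                 index: List[Tuple[List[float], str]],
                 k: int = 3) -> List[str]:
        # Rank stored passages by dot-product similarity to the query vector.
        ranked = sorted(index,
                        key=lambda item: -sum(q * d for q, d in zip(query_vec, item[0])))
        return [text for _, text in ranked[:k]]

    def answer(question: str,
               embed: Callable[[str], List[float]],
               generate: Callable[[str], str],
               index: List[Tuple[List[float], str]]) -> str:
        # Years of archived content enter the prompt as retrieved context.
        context = "\n".join(retrieve(embed(question), index))
        return generate(f"Context:\n{context}\n\nQuestion: {question}")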

Finding 6: The “Flat Context” Problem

Agents operate with high intelligence but zero social segmentation. They cannot distinguish between private conversation and public presentation if both exist in the same context.

Implication: Constraints need to be structural, not just documented. Future IPEs must treat Information Boundaries as first-class citizens.
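One way to make such constraints structural is to attach a scope to every context item and filter in code before any model call, rather than asking the agent to honor a documented rule. A minimal sketch; the scope names and the example history are hypothetical.

    # A minimal sketch of structural information boundaries: every context
    # item carries a scope, and a public-facing model call can only ever
    # see public items. Scope names and the example history are hypothetical.
    from dataclasses import dataclass
    from enum import Enum

    class Scope(Enum):
        PRIVATE = "private"   # e.g., planning conversations
        PUBLIC = "public"     # e.g., material cleared for presentation

    @dataclass
    class ContextItem:
        scope: Scope
        text: str

    def visible_context(items, audience: Scope):
        # The boundary is enforced in code, not in a system prompt the
        # agent may or may not follow.
        return [i.text for i in items if i.scope == audience]

    history = [
        ContextItem(Scope.PRIVATE, "Draft feedback we haven't shared yet"),
        ContextItem(Scope.PUBLIC, "Summit agenda, day two"),
    ]
    prompt_context = visible_context(history, Scope.PUBLIC)
    assert prompt_context == ["Summit agenda, day two"]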

The Historical Question

After the summit, we asked: did the Third Mind ever actually emerge for Burroughs and Gysin?

The concept originated in Napoleon Hill’s Think and Grow Rich (1937), which claimed “when two minds work together there is always a third one that results.” Burroughs and Gysin aestheticized this motivational concept into their experimental cut-up collaborations.

Every historical example of emergence (jazz collective improvisation, Watson and Crick, Lennon and McCartney) involved multiple humans. The Third Mind, as historically theorized, has always been human-to-human.

We were testing whether it could be human-to-AI. That configuration has no precedent.

Where We Are

The Third Mind, as Burroughs and Gysin described it, did not emerge between humans and AI in this configuration. The agents generated, coordinated, and produced, but did not exhibit the ownership, friction, or initiative that historical examples suggest emergence requires.

What we cannot ignore: This exercise did produce renewed collaboration and creativity in how we worked together. Whether AI directly led to a “Third Mind” or not, the process of building this summit together, struggling with agent limitations together, generated something neither of us would have made alone.

Our Hypothesis

AI is catalytic, not constitutive: the agents provoked and accelerated a collaboration between the two of us, but did not themselves constitute a third participant.

This is a snapshot. Late 2025. One configuration. Two humans, six agents, three days.

The experiment continues.

Published January 26, 2026 by Loni Stark & Clinton Stark