Getting Started with AI Remote Viewing Research

The U.S. government spent $20 million studying remote viewing. Now we're testing whether AI systems can participate in these same protocols under controlled conditions.

By RVLab · 6 min read

What if an AI system could perceive information about targets it has never been shown?

That's the question RVLab is designed to answer—using the same rigorous protocols the CIA spent over two decades developing.

New to the topic? Start with What is remote viewing?

Between 1972 and 1995, the U.S. government funded a series of classified programs—SCANATE, GRILL FLAME, CENTER LANE, SUN STREAK, and finally STAR GATE—pouring more than $20 million into remote viewing research. At its peak, the program employed 23 remote viewers and produced results that divided even the statisticians who evaluated them.

Now we're adapting those same protocols to test AI participation in remote viewing experiments.

The Science Behind the Protocol

Remote viewing emerged from rigorous laboratory conditions at Stanford Research Institute (SRI) in the early 1970s. Physicists Harold Puthoff and Russell Targ, working alongside artist and psychic Ingo Swann, developed structured protocols designed to minimize analytical interference and maximize perceptual accuracy.

Here's what sets remote viewing apart from other claimed psychic practices: it's protocol-based and testable.


The Numbers

Statistician Jessica Utts, who evaluated the STAR GATE program for the CIA in 1995, found statistically significant results, with some subjects scoring 5–15% above chance. She called the evidence "compelling" and noted consistent results across different laboratories.

Her co-reviewer, skeptic Ray Hyman, disagreed about the conclusions—but not about the statistics. The debate wasn't whether the numbers were significant. It was about what they meant.

Why We Test AI With These Protocols

The original protocols were designed to eliminate contamination and ensure scientifically defensible results. These same properties make them ideal for testing AI:

  • Blinded conditions: The AI receives only a coordinate—no descriptive information
  • Structured output: Sensory impressions, conceptual data, and sketches follow defined stages
  • Measurable results: Correlation between AI output and ground truth can be scored
  • Reproducible experiments: Every session is logged for analysis
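These blinding and logging properties can be sketched in code. The following is a minimal illustrative sketch, not RVLab's actual implementation; the `Session` class, field names, and coordinate format are all hypothetical.

```python
# Hypothetical sketch of a blinded session record: the viewer-facing
# prompt contains only the coordinate; the target stays sealed until reveal.
import secrets
from dataclasses import dataclass, field


@dataclass
class Session:
    coordinate: str                           # blind reference (TRN), no target info
    _sealed_target: str = field(repr=False)   # hidden from the viewer until reveal
    revealed: bool = False
    transcript: list = field(default_factory=list)  # logged for reproducibility

    def viewer_prompt(self) -> str:
        # The only information the AI receives before reveal.
        return f"Describe your impressions of target {self.coordinate}."

    def reveal(self) -> str:
        self.revealed = True
        return self._sealed_target


def new_session(target: str) -> Session:
    # The coordinate is random, so it carries no descriptive information.
    trn = f"{secrets.randbelow(10000):04d}-{secrets.randbelow(10000):04d}"
    return Session(coordinate=trn, _sealed_target=target)
```

Because the coordinate is generated independently of the target, nothing in the viewer-facing prompt can leak descriptive information.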

This is why Ingo Swann developed Controlled Remote Viewing (CRV) in 1976—not to enhance psychic ability, but to create an intellectual discipline that separates genuine perceptions from noise. We apply these same principles to AI testing.


The Research Question

Can large language models exhibit any measurable capacity for target acquisition under controlled conditions? RVLab provides the infrastructure to find out.

The Basic Protocol: How It Works

A standard session follows structured phases designed to ensure scientific defensibility:

1. Coordinate Assignment

A coordinate (TRN) is assigned to the session. The coordinate is meaningless—it serves as a blind reference that provides no descriptive information about the target itself.

2. Target Selection

In AI-as-Viewer mode, you select a hidden target (image, location, or concept). In AI-as-Tasker mode, the system generates a hidden target.

3. Blinded Perception

The AI (or sender) generates output based only on the coordinate. No descriptive information is available. Output includes sensory impressions, dimensional data, and conceptual elements.

4. Data Collection

All inputs and outputs are logged. The system captures the complete session for reproducibility.

5. Reveal and Analysis

Only after the session completes is the target revealed. AI analyzes correlations between the output and ground truth, scoring correspondence and identifying match patterns.
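The reveal-and-analysis step can be illustrated with a toy scoring rule. This sketch assumes session output and ground truth have each been reduced to a set of descriptors and scores their Jaccard overlap; the descriptor sets and the scoring rule are illustrative assumptions, not RVLab's actual method.

```python
# Toy correspondence score: Jaccard overlap between the descriptors
# extracted from the session transcript and those of the revealed target.
def jaccard(a: set, b: set) -> float:
    # |intersection| / |union|; 0.0 when both sets are empty
    return len(a & b) / len(a | b) if (a | b) else 0.0


session_output = {"water", "curved", "metallic", "tall", "red"}
ground_truth = {"water", "bridge", "metallic", "red", "long"}

score = jaccard(session_output, ground_truth)
print(f"correspondence score: {score:.2f}")  # 3 shared of 7 total descriptors
```

A real scoring pipeline would need a controlled descriptor vocabulary, since free-text impressions can be mapped onto almost any target after the fact.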

What the Research Actually Showed

Let's be direct: the evidence for remote viewing remains controversial.

The PEAR Lab at Princeton, which operated from 1979 to 2007, accumulated over 650 remote perception trials. Their combined database showed a Z-score of 6.06—a probability of about 6 × 10⁻¹⁰ against chance. But the effect sizes were tiny, "only a few parts in 10,000."
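The quoted probability can be checked directly: converting a Z-score to a one-tailed tail probability under the standard normal distribution takes one line of Python.

```python
# Convert the PEAR database Z-score into a one-tailed probability
# against chance, using the standard normal tail.
import math


def one_tailed_p(z: float) -> float:
    # P(Z > z) for a standard normal variable
    return 0.5 * math.erfc(z / math.sqrt(2))


p = one_tailed_p(6.06)
print(f"p ≈ {p:.1e}")  # on the order of 10⁻¹⁰, matching the quoted figure
```

This is also why the debate centers on effect size rather than significance: with enough trials, an effect of a few parts in 10,000 can still produce an astronomically small p-value.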

Critics like David Marks discovered that early SRI experiments contained subtle cues in the notes given to judges—references to "yesterday's targets" or dates that revealed the order of sessions. When Marks and Kammann attempted 35 replication studies without these cues, they couldn't reproduce the results.

On the other hand, Dean Radin's meta-analyses at IONS have identified consistent small effects across decades of research.

The honest answer? The scientific debate continues. But the protocols themselves are well-documented—and now we can apply them to AI systems.

Why AI Changes the Research Landscape

Traditional remote viewing required human participants and introduced numerous variables. AI testing offers new possibilities:

| Traditional Challenge | AI Testing Advantage |
| --- | --- |
| Human variability | Consistent model behavior across sessions |
| Subjective scoring | Automated correlation analysis |
| Limited sample sizes | Unlimited reproducible experiments |
| Potential experimenter bias | Fully blinded protocols |

Here's the core research question: If large language models process information in ways we don't fully understand, could they exhibit measurable correlation with hidden targets under controlled conditions?


Data Collection

Every session generates structured data. Over time, patterns may emerge that reveal whether AI outputs correlate with targets at rates above chance.
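"Rates above chance" has a standard test. As an illustrative sketch (the session counts and the 25% chance rate are assumptions, e.g. a four-choice judging pool), an exact binomial tail gives the probability of seeing at least that many hits by chance:

```python
# Exact binomial test: probability of k or more hits in n sessions
# when each session has chance rate p0 of a hit.
import math


def binomial_tail(k: int, n: int, p0: float) -> float:
    # P(X >= k) for X ~ Binomial(n, p0)
    return sum(math.comb(n, i) * p0**i * (1 - p0) ** (n - i)
               for i in range(k, n + 1))


# Hypothetical numbers: 35 first-place matches in 100 sessions, chance 25%
p_value = binomial_tail(35, 100, 0.25)
print(f"p = {p_value:.3f}")
```

Accumulating sessions is what makes this testable: a small deviation from chance is indistinguishable from noise in ten trials but detectable in a thousand.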

Experiment Types

RVLab offers structured experiments designed around the original SRI/STARGATE protocols:

  • AI as Viewer: You select a hidden target. The AI attempts to perceive and describe it. Outputs are analyzed for correlation.
  • AI as Tasker: The system generates a hidden target. You record blind impressions, then reveal the target to compute correlation.
  • ARV Mode: Associative Remote Viewing for binary outcome prediction experiments.

Each mode maintains blinded conditions and logs all data for reproducible research.
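The ARV mode's logic can be sketched as follows. This is a toy illustration, not RVLab's implementation: each binary outcome is pre-associated with a distinct target, the blind transcript is judged against both, and the prediction is the outcome whose target matches best. The word-overlap judging rule and all descriptor sets here are assumptions.

```python
# Toy ARV sketch: predict a binary outcome by matching a blind transcript
# against the two pre-associated targets.
def overlap(transcript: set, target_desc: set) -> int:
    # crude judging rule: count shared descriptors
    return len(transcript & target_desc)


# Hypothetical outcome-target associations, fixed before the session
associations = {
    "up":   {"tower", "vertical", "metal", "tall"},
    "down": {"lake", "flat", "water", "blue"},
}

# Blind impressions recorded before the outcome is known
transcript = {"tall", "metal", "gray", "angular"}

prediction = max(associations, key=lambda o: overlap(transcript, associations[o]))
print(prediction)  # → up
```

The association between targets and outcomes must be fixed before the session and hidden from the viewer; otherwise the prediction step is not blind.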


Start Simple

Begin with distinct physical targets—buildings, natural landmarks, clear objects. These provide cleaner signal for correlation analysis.

The Honest Takeaway

Will AI exhibit measurable remote viewing capacity? We don't know yet—that's why we're building tools to find out.

What we can say: the protocols are real, developed by serious researchers with government funding over two decades. The methodology is sound. And now we can apply it systematically to AI systems.

The only way to generate data is to run experiments.


Further Reading:

  • Mind-Reach by Russell Targ and Harold Puthoff
  • Reading the Enemy's Mind by Paul H. Smith (former military remote viewer)
  • CIA Reading Room - STARGATE Archives
  • The Conscious Universe by Dean Radin

Start generating data. The protocols are ready. The question is whether AI systems exhibit any measurable capacity under controlled conditions.

Tags

remote viewing · AI research · protocols · methodology

Ready to run a session?

Sign up, run a session, and review the output against ground truth.
