Why We Test AI in Remote Viewing Protocols

In a 2024 survey, 25% of AI researchers expected machine consciousness within 10 years. Here's why we're testing whether AI systems can participate meaningfully in remote viewing experiments.

By RVLab · 6 min read

Here’s a data point worth noting:

In a 2024 survey of 582 AI researchers, 25% expected artificial consciousness within ten years. 60% expected it eventually.

These aren't fringe believers. They're the people building the systems.

Whether or not machines become conscious, we can still run careful experiments to test whether AI systems show any measurable correspondence with hidden targets under controlled conditions. That’s what RVLab is designed to support.

The Research Question

At first glance, testing AI in remote viewing protocols seems unusual. Remote viewing research historically focused on human perception.

But there's a compelling reason to test AI: rigorous methodology.

Why AI Testing Matters

The same protocols that make remote viewing research scientifically defensible—blinded conditions, structured output, measurable results—can be applied systematically to AI systems. If AI exhibits any measurable correlation with hidden targets, that's data worth investigating.

Why Traditional Remote Viewing Research Hit Methodological Walls

The government's STARGATE program ran for over two decades. The PEAR Lab at Princeton operated for 28 years. Yet mainstream science still debates the results.

Why? The research faced serious methodological challenges:

1. Human Variability: Human participants vary enormously in mental state, training, and motivation. This makes replication difficult.

2. Subjective Interpretation: David Marks discovered that judges in early SRI experiments could identify session order from subtle cues in the notes: references to previous days, dates on pages. Human evaluation introduced variables that shouldn't have existed.

3. Limited Sample Sizes: Manual processing meant small datasets. The PEAR Lab accumulated 650 remote perception trials over 28 years: impressive for humans, trivial for automated systems.

4. Inconsistent Protocols: Different researchers used different methods, so replication was difficult because conditions varied.

AI testing addresses these challenges directly.

How AI Changes the Research Landscape

Consistent Test Subject

Unlike human participants, AI models behave consistently across sessions. The same model, run with the same parameters and sampling settings, responds far more predictably than any human viewer. This allows controlled experiments with fewer confounding variables.

Human Testing               | AI Testing
Variable mental states      | Consistent parameters
Training effects over time  | Identical behavior per session
Fatigue and motivation      | No degradation
Subjective experience       | Measurable output

Reproducible Experiments

Every AI session can be precisely replicated:

  • Same coordinate format
  • Same prompt structure
  • Same model and parameters
  • Same analysis criteria

If one experiment shows an effect, it can be reproduced exactly to verify.
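As a rough illustration, a replicable session can be captured as a fixed specification and fingerprinted, so two runs can be confirmed identical. This is a minimal sketch, not RVLab's actual implementation; the field names, model identifier, and coordinate are placeholders.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class SessionSpec:
    """Everything needed to replay an AI viewer session exactly."""
    model: str              # e.g. "gpt-4" (illustrative)
    temperature: float      # sampling temperature; 0.0 for maximum determinism
    seed: int               # sampling seed, if the API supports one
    coordinate: str         # the blind target coordinate, e.g. "4721-8835"
    prompt_template: str    # fixed prompt structure sent to the model

def spec_fingerprint(spec: SessionSpec) -> str:
    """Stable hash of the spec, so two runs can be verified as identical."""
    payload = json.dumps(asdict(spec), sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

spec = SessionSpec(
    model="gpt-4",
    temperature=0.0,
    seed=42,
    coordinate="4721-8835",
    prompt_template="Describe your impressions of target {coordinate}.",
)
print(spec_fingerprint(spec))
```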

Scale

AI testing enables sample sizes impossible with human participants:

Traditional research: ~650 trials over 28 years (PEAR Lab)
AI testing: Potentially far larger sample sizes (depending on usage)

Statistical power increases with sample size. Patterns that might be invisible in small datasets become detectable at scale.
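To see why scale matters, consider a hypothetical forced-choice judging task where chance is 25% and the true hit rate is 26%. The sketch below, using SciPy's binomial test with entirely made-up numbers, shows how the same effect size that is statistically invisible at a few hundred trials becomes unmistakable at tens of thousands.

```python
from scipy.stats import binomtest

# Hypothetical numbers: a 4-choice judging task where chance is 25%
# and the observed hit rate is 26%. Same effect size, two sample sizes.
chance = 0.25
observed_rate = 0.26

for n in (650, 50_000):
    hits = round(observed_rate * n)
    result = binomtest(hits, n, p=chance, alternative="greater")
    print(f"n={n:>6}: hits={hits}, p-value={result.pvalue:.4g}")
```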

Consistent Analysis

Automated analysis can apply a consistent scoring rubric across sessions. It’s still imperfect and should be treated as a tool for comparison and hypothesis-generation—not a final judge of truth.
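The rubric itself can be simple or sophisticated; what matters is that the same function is applied to every session. As a purely illustrative stand-in (not RVLab's actual rubric), a descriptor-overlap score might look like this:

```python
def descriptor_overlap(session_terms: set[str], target_terms: set[str]) -> float:
    """Jaccard overlap between viewer descriptors and target descriptors.

    A deliberately simple stand-in for a scoring rubric: the point is that
    the identical function runs on every session, not that this particular
    metric is the right one.
    """
    if not session_terms and not target_terms:
        return 0.0
    return len(session_terms & target_terms) / len(session_terms | target_terms)

score = descriptor_overlap(
    {"water", "curved", "structure", "cold"},
    {"water", "bridge", "structure", "metallic"},
)
print(f"overlap score: {score:.2f}")  # 0.33
```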

What We're Actually Testing

RVLab tests whether AI systems exhibit measurable correlation with hidden targets under controlled conditions.

AI as Viewer Mode: The AI receives only a coordinate—no descriptive information about the target. It generates sensory impressions, conceptual descriptions, and sketch-based representations. We analyze correlation between this output and the actual target.

AI as Tasker Mode: The system generates a hidden target. You record blind impressions, then reveal the target and compute correspondence.

The Core Question

Do large language models exhibit any measurable capacity for target acquisition under blinded conditions? RVLab provides the infrastructure to generate data that can answer this question.

Current Research Directions

Indicator-Based Assessment

Researchers at institutions including Conscium are developing "indicator properties" of consciousness derived from neuroscientific theories. These frameworks for assessing AI capabilities provide context for interpreting remote viewing test results.

Baseline Establishment

Before measuring target correlation, we must establish baselines:

  • What outputs does each model produce with random coordinates?
  • What patterns are artifacts of the model vs. potential signal?
  • How do different prompt structures affect output?
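One way to approach the first question, sketched below, is to run the same prompt against many random coordinates (so there is no target at all) and tally what the model produces. The run_session callable is a hypothetical wrapper around whatever model API is in use.

```python
import random
from collections import Counter

def random_coordinate(rng: random.Random) -> str:
    """Generate a random coordinate in the classic XXXX-XXXX format."""
    return f"{rng.randint(0, 9999):04d}-{rng.randint(0, 9999):04d}"

def baseline_descriptor_counts(run_session, n_runs: int = 200, seed: int = 0) -> Counter:
    """Tally which descriptors the model produces when coordinates are random,
    i.e. when there is no target at all. `run_session` is assumed to take a
    coordinate string and return a list of descriptor strings (hypothetical)."""
    rng = random.Random(seed)
    counts: Counter = Counter()
    for _ in range(n_runs):
        counts.update(run_session(random_coordinate(rng)))
    return counts

# Example with a stubbed-out session runner:
stub = lambda coord: ["water", "structure"] if coord[0] in "0123" else ["open", "flat"]
print(baseline_descriptor_counts(stub, n_runs=10).most_common())
```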

Pattern Analysis

Machine learning can identify patterns in AI outputs that correlate with targets. Are certain output characteristics (first impressions, sensory vs. categorical descriptors, consistency across runs) more predictive of hits?
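A simple version of this analysis, sketched with synthetic placeholder data, fits a classifier over per-session features and inspects which ones carry predictive weight. The feature names and values here are illustrative assumptions, not findings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one session: [first-impression match, sensory-descriptor ratio,
# consistency across repeated runs]. Labels mark judged hits. All values are
# synthetic placeholders; real features would come from logged sessions.
X = np.array([
    [1, 0.8, 0.9], [0, 0.4, 0.3], [1, 0.7, 0.8], [0, 0.5, 0.4],
    [1, 0.9, 0.7], [0, 0.3, 0.5], [0, 0.6, 0.2], [1, 0.8, 0.6],
])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)
print("feature weights:", model.coef_[0])  # which characteristics predict hits?
```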

Cross-Model Comparison

Different AI models may exhibit different patterns. Comparing GPT-4, Claude, Gemini, and other models under identical conditions reveals whether effects are model-specific or general.
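In practice this amounts to holding every condition fixed and varying only the model, roughly as sketched below. query_model and score_session are hypothetical stand-ins for an API wrapper and the scoring rubric, and the model identifiers are examples only.

```python
# Identical coordinate, prompt, and scoring for every model under comparison.
MODELS = ["gpt-4", "claude-3-opus", "gemini-pro"]
COORDINATE = "4721-8835"
PROMPT = "Record your impressions of target {coordinate}."

def compare_models(query_model, score_session):
    """Run the same blinded session against each model and score it identically."""
    results = {}
    for model_name in MODELS:
        output = query_model(model_name, PROMPT.format(coordinate=COORDINATE))
        results[model_name] = score_session(output)
    return results
```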

The Honest Assessment

Will AI testing prove that remote viewing is real? That's the wrong question.

What AI testing can do is generate cleaner data under controlled conditions and reduce some common confounds (prompt artifacts, inconsistent procedures, selective logging).

If AI outputs show no correlation with targets, that's valuable data. It suggests at least one of the following:

  • Remote viewing effects don't transfer to AI systems
  • Our testing methodology needs refinement
  • The phenomenon (if real) is specifically biological

If AI outputs show measurable correlation, that's also valuable data. It would indicate:

  • Large models exhibit patterns worth investigating further
  • Controlled AI testing is a viable research methodology
  • More rigorous experiments are warranted

Either outcome advances understanding.

Experiment Design Principles

RVLab experiments follow these principles:

  1. Blinded conditions: No descriptive information available during sessions
  2. Structured output: Following classical RV protocol stages
  3. Complete logging: All inputs and outputs recorded
  4. Delayed reveal: Targets revealed only after session completion
  5. Automated analysis: A consistent scoring rubric
  6. Reproducibility: Every session can be replicated exactly
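Principles 3, 4, and 6 can be enforced mechanically. One hedged sketch, with illustrative field names: commit to a hash of the target before the session, log everything, and allow the reveal only if the disclosed target matches the commitment.

```python
from dataclasses import dataclass, field
from typing import Optional
import hashlib
import time

@dataclass
class SessionRecord:
    """Complete log of one blinded session. Only a hash of the target is
    stored up front, so the reveal cannot happen before the session closes."""
    coordinate: str
    prompt: str
    model_output: str
    target_hash: str                      # sha256 of the target description
    created_at: float = field(default_factory=time.time)
    revealed_target: Optional[str] = None

def reveal(record: SessionRecord, target: str) -> SessionRecord:
    """Disclose the target only if it matches the pre-committed hash."""
    if hashlib.sha256(target.encode("utf-8")).hexdigest() != record.target_hash:
        raise ValueError("target does not match the pre-committed hash")
    record.revealed_target = target
    return record
```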

Scientific Approach

We're not trying to prove AI has psi abilities. We're generating data that can distinguish signal from artifact. The protocols are designed to produce clean, analyzable results regardless of what those results show.

Getting Started

Ready to contribute to this research?

  1. Run AI Viewer experiments — Test whether the model’s blind output shows correspondence with your targets
  2. Run AI Tasker experiments — Record blind impressions against system-generated hidden targets
  3. Track patterns — Build a personal database of sessions
  4. Analyze results — Compare across repeated sessions and controls

Every session generates data. Over enough sessions, patterns emerge that inform the core research question.


The tools for rigorous AI testing are now available. The question is whether large language models exhibit any measurable correspondence with hidden targets under controlled conditions.

That's an empirical question. And empirical questions get answered by running experiments.
