The Science of Remote Viewing
From government research labs to AI testing: a comprehensive guide to the protocols and methodology we use to test whether AI systems can participate in remote viewing.
1. History & Origins
Remote viewing emerged from one of the most unlikely places: the halls of Stanford Research Institute (SRI) during the Cold War. In 1972, physicists Russell Targ and Harold Puthoff began investigating claims of extrasensory perception with rigorous scientific methodology.
What started as curiosity became a two-decade, $20+ million U.S. government program spanning multiple intelligence agencies. The research went by many names—SCANATE, GONDOLA WISH, CENTER LANE, SUN STREAK, and finally STAR GATE.
“We have established that a statistically significant effect exists... The statistical results of the studies examined are far beyond what is expected by chance.”
— Dr. Jessica Utts, Professor of Statistics, UC Davis (1995 AIR Review)
2. The Science
Evidence & Controversy
Remote viewing research is contested. Some analyses report above‑chance performance under certain controls, while critics argue methodological limitations and replication challenges. RVLab is claims‑agnostic: the goal is clean, blinded data collection and consistent scoring so patterns can be evaluated transparently over many sessions.
What Matters for Controlled Experiments
Why Double-Blind Matters
The power of remote viewing research—and the controversy around it—centers on experimental controls. Early experiments were criticized for potential sensory leakage and experimenter bias. Modern protocols address this through strict double-blind design:
- Target selection: Random, often computer-generated coordinates
- Viewer isolation: No access to the target pool or feedback during the session
- Blind judging: Evaluators do not know which target was assigned
- Statistical analysis: Pre-registered hypotheses with proper controls
RVLab implements elements of these controls for AI testing: outputs are generated blind (before any ground truth is entered), receiver-mode targets can be hidden until reveal, and the analyzer uses a consistent scoring rubric across sessions. For publishable research, consider adding independent judging and additional controls.
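One way to make that blindness verifiable in software is a commit-reveal scheme. The sketch below is a minimal illustration, not RVLab's actual implementation (the target pool, function names, and choice of SHA-256 are assumptions): the target is drawn and hashed before the session starts, and only the hash is visible until reveal.

```python
import hashlib
import secrets

# Hypothetical target pool for illustration only.
TARGET_POOL = ["waterfall", "suspension bridge", "desert dune", "lighthouse"]

def commit_to_target() -> tuple[str, str, str]:
    """Draw a random target and compute a salted SHA-256 commitment.

    Publish only the commitment before the session; keep (target, salt)
    sealed so no participant can see the target early.
    """
    target = secrets.choice(TARGET_POOL)
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{salt}:{target}".encode()).hexdigest()
    return target, salt, commitment

def verify_reveal(target: str, salt: str, commitment: str) -> bool:
    """At reveal, confirm the disclosed target matches the pre-session hash."""
    return hashlib.sha256(f"{salt}:{target}".encode()).hexdigest() == commitment
```

Because the commitment is published before any impressions are recorded, a successful verify_reveal shows the target was fixed in advance and not swapped after the fact.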
3. RV Protocols
Several methodologies have been developed for remote viewing. Each offers a different structure for accessing and recording impressions.
Coordinate Remote Viewing (CRV)
Ingo Swann, SRI (1983)
The most structured protocol, using stages to progressively decode information. Begins with ideograms and gestalt impressions, moves through sensory data, dimensionals, and intangibles.
Extended Remote Viewing (ERV)
Skip Atwater, U.S. Army
Uses an altered state (deep relaxation or light trance) to access information. Sessions are longer and more immersive than CRV.
Associative Remote Viewing (ARV)
Various researchers
Used for binary predictions (e.g., will event X happen?). Each possible outcome is paired with a distinct target; the viewer blindly describes the target they will later be shown, and the outcome paired with the best-matching description becomes the prediction.
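The matching logic behind ARV is simple enough to sketch. In the illustrative snippet below (the target descriptions and the score function are hypothetical; any blind judging procedure, human or algorithmic, could fill that role), the prediction is the outcome whose paired target best matches the blind transcript:

```python
from typing import Callable

# Hypothetical outcome-to-target pairing for a single binary question.
OUTCOME_TARGETS = {
    "outcome_A": "red barn in a snowy field",
    "outcome_B": "ocean pier at sunset",
}

def predict_outcome(transcript: str,
                    score: Callable[[str, str], float]) -> str:
    """Return the outcome whose paired target best matches the transcript."""
    return max(OUTCOME_TARGETS,
               key=lambda o: score(transcript, OUTCOME_TARGETS[o]))
```

After the real outcome is known, the viewer is shown only the target paired with that outcome, so feedback stays consistent with the prediction.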
4. Session Structure
A typical CRV session follows a structured progression. This is not arbitrary—each stage is designed to access different types of information while minimizing analytical overlay (AOL).
Preparation
2-5 min: Clear your mind. Some viewers meditate briefly; others prefer a quick walk. The goal is to quiet internal dialogue.
Tip: Find what works for you—there's no single "right" way to prepare.
Stage 1: Ideogram
1-2 min: Upon receiving the coordinate, make a quick, spontaneous mark on paper. This "ideogram" captures your first impression. Decode it into basic gestalts (land, water, structure, etc.).
Tip: Speed matters; capture your first impression before conscious analysis kicks in.
Stage 2: Sensory Data
5-10 min: Record sensory impressions: colors, textures, temperatures, sounds, smells. Don't analyze—just perceive and record.
Tip: Use short phrases. "Rough gray surface" not "It's probably concrete."
Stage 3: Dimensionals
5-10 min: Describe spatial relationships, sizes, shapes. Sketch what you perceive. Include movement, positions, layouts.
Tip: Sketching often reveals more than words. Don't worry about artistic quality.
Stage 4: Intangibles
5-10 min: Record concepts, purposes, and emotions associated with the target. Is it natural or man-made? Active or static? What's its function?
Tip: This is where AOL is most dangerous. Label any analytical guesses.
Reveal & Analysis
2-5 min: Compare your data against the revealed target. Note hits and misses without judgment—both are valuable feedback.
Tip: Don't rationalize misses. Honest self-assessment improves performance.
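Because the progression is fixed, it maps naturally onto a simple data model a session runner can iterate over. A minimal sketch, with field names that are assumptions rather than RVLab's schema:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    minutes: tuple[int, int]  # suggested (min, max) duration
    goal: str

# One possible encoding of the CRV progression described above.
CRV_STAGES = [
    Stage("Preparation", (2, 5), "quiet internal dialogue"),
    Stage("Ideogram", (1, 2), "spontaneous mark, basic gestalts"),
    Stage("Sensory Data", (5, 10), "colors, textures, temperatures, sounds, smells"),
    Stage("Dimensionals", (5, 10), "sizes, shapes, spatial relationships, sketches"),
    Stage("Intangibles", (5, 10), "concepts, purposes, emotions; label any AOL"),
    Stage("Reveal & Analysis", (2, 5), "compare data to target, log hits and misses"),
]
```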
5. AI Testing Methodology
RVLab applies these classical RV protocols to test whether AI systems can participate meaningfully in remote viewing experiments under controlled conditions.
Why Test AI?
Consistent Test Subject
Unlike human participants, AI models can be held to fixed conditions: the same model, prompt, and sampling parameters yield reproducible results.
Unlimited Sessions
Generate thousands of data points. Statistical patterns invisible in small datasets become detectable at scale.
Objective Analysis
Algorithmic scoring reduces subjective interpretation: every session is analyzed with identical criteria (see the sketch below).
Complete Logging
Every input and output is recorded. Experiments can be precisely replicated and verified.
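To make the objective-analysis point concrete: one common approach (an assumption here, not necessarily RVLab's exact rubric) is to embed the transcript and a pool of candidate targets, then rank the true target among decoys by cosine similarity. The embedding step is omitted; any fixed text-embedding model would do.

```python
import numpy as np

def cosine_score(transcript_vec: np.ndarray, target_vec: np.ndarray) -> float:
    """Cosine similarity in [-1, 1] between two embedding vectors."""
    return float(transcript_vec @ target_vec /
                 (np.linalg.norm(transcript_vec) * np.linalg.norm(target_vec)))

def rank_true_target(transcript_vec: np.ndarray,
                     candidate_vecs: list,
                     true_index: int) -> int:
    """Rank of the true target among decoys (1 = best match).

    Under the null hypothesis, the rank is uniform over the pool size.
    """
    scores = [cosine_score(transcript_vec, c) for c in candidate_vecs]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return order.index(true_index) + 1
```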
Research Questions
- Can AI outputs show measurable correlation with hidden targets under blinded conditions?
- When AI generates a target internally, do outputs show patterns consistent with that target?
- Do different AI models (GPT-4, Claude, Gemini) exhibit different correlation patterns?
- Which output types (sensory, spatial, conceptual) show the strongest correlation with targets?
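The first question reduces to a standard frequentist check. With a pool of k candidates per session, the true target should rank first 1/k of the time by chance, so first-place counts follow a binomial null. A minimal pre-registered test using SciPy (the numbers below are purely illustrative):

```python
from scipy.stats import binomtest

def first_place_pvalue(hits: int, sessions: int, pool_size: int = 4) -> float:
    """One-sided binomial test: does the true target rank first more
    often than the 1/pool_size chance rate?"""
    return binomtest(hits, sessions, p=1 / pool_size,
                     alternative="greater").pvalue

# Illustrative only: 290 first-place matches in 1000 four-candidate sessions.
print(first_place_pvalue(290, 1000))
```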
6. Experiment Types
AI as Viewer
You select a hidden target. The AI receives only a coordinate and attempts to perceive and describe it. Outputs are analyzed for correlation with the actual target.
AI as Tasker
The system generates a hidden target. You record blind impressions, then reveal the target to compute correlation.
Multiple Models
Test the same targets across different AI models (GPT-4, Claude, Gemini) to compare performance characteristics and identify model-specific patterns.
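Mechanically, a cross-model comparison fixes everything except the model. A sketch with hypothetical callables standing in for each vendor's API client (no real API signatures are assumed here):

```python
from typing import Callable

def run_cross_model(models: dict[str, Callable[[str], str]],
                    coordinate: str) -> dict[str, str]:
    """Send one identical blinded prompt to every model and collect transcripts.

    Each value in `models` wraps a vendor API behind a simple
    prompt-in, text-out interface.
    """
    prompt = (f"Target coordinate: {coordinate}. Describe your impressions: "
              "gestalts, sensory data, dimensionals, intangibles.")
    return {name: generate(prompt) for name, generate in models.items()}
```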
Prompt Variation
Run sessions with different prompt structures to identify which formats produce the strongest target correlation.
Repeated Sessions
Repeat sessions under the same protocol to measure stability. Treat consistency and correlation as hypotheses to test, not proof.
Target Category Analysis
Compare AI performance across different target types (natural, man-made, abstract) to identify which categories the outputs correlate with most strongly.
Control Conditions
Run sessions with null targets or scrambled coordinates to establish baseline output patterns for comparison.
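A related control can be run after the fact by permuting target assignments: if transcripts carry no target-specific signal, shuffling which target each one is scored against should not reduce the mean score. A minimal permutation-test sketch (function and parameter names are assumptions):

```python
import random
from typing import Callable, Sequence

def permutation_baseline(score: Callable[[str, str], float],
                         transcripts: Sequence[str],
                         targets: Sequence[str],
                         n_perm: int = 1000) -> tuple[float, float]:
    """Observed mean score vs. a null built by shuffling target assignment."""
    def mean_score(assigned: Sequence[str]) -> float:
        return sum(score(t, g) for t, g in zip(transcripts, assigned)) / len(assigned)

    observed = mean_score(targets)
    null = [mean_score(random.sample(list(targets), len(targets)))
            for _ in range(n_perm)]
    p_value = sum(m >= observed for m in null) / n_perm
    return observed, p_value
```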
ARV Experiments
Associative Remote Viewing for binary outcome prediction. Test whether AI outputs can be matched to future-revealed targets.
Ready to Run Experiments?
Start testing whether AI systems exhibit measurable correlation with hidden targets under controlled conditions.
Further Reading
- Targ, R. & Puthoff, H. (1974). “Information transmission under conditions of sensory shielding.” Nature.
- Utts, J. (1996). “An Assessment of the Evidence for Psychic Functioning.” Journal of Scientific Exploration.
- Smith, P. (2005). Reading the Enemy's Mind: Inside Star Gate. Forge Books.
- McMoneagle, J. (1997). Mind Trek: Exploring Consciousness, Time, and Space. Hampton Roads.