How to Interpret AI Remote Viewing Session Results
That 42% correlation score—is it significant? Here's how to actually understand what AI remote viewing results mean, what patterns to track, and how to evaluate the data.
You just ran an AI remote viewing session. The target is revealed. The system calculated a 42% correlation score.
Now what? Is that meaningful? Should you run more sessions? What patterns should you track?
Understanding how to interpret AI session results is essential for generating useful research data. Here's how to read them properly.
First: Understanding the Baseline
Before any score means anything, you need to understand what chance looks like.
For free-form descriptions scored by an AI analyzer, there isn’t a universal “chance percentage.” RVLab’s correlation score is best treated as a heuristic correspondence estimate produced under a fixed rubric.
It becomes meaningful when you compare many sessions run under the same conditions (same protocol, same model, same prompt/profile), and when you include controls (repeated sessions, cross-model comparisons, swapped-target checks).
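One practical way to build that baseline: score each transcript against decoy targets drawn from the same pool, then compare the true-target score with the resulting distribution. A minimal sketch in Python, assuming a hypothetical `score_fn(transcript, target)` wrapper around whatever scorer you use:

```python
import random

def chance_baseline(transcript, target_pool, score_fn, n_decoys=50):
    """Score one transcript against decoy targets (not the true target)
    to estimate what 'chance' looks like under your scoring rubric.
    Requires len(target_pool) >= n_decoys."""
    decoys = random.sample(target_pool, n_decoys)
    return [score_fn(transcript, d) for d in decoys]

# decoy_scores = chance_baseline(transcript, all_targets, score_fn)
# If the true-target score sits in the top few percent of decoy_scores,
# that session is worth a closer look.
```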
Statistical Significance
A single session score means almost nothing statistically. You need many sessions—and ideally control conditions—before patterns become reliable. This is why tracking long-term is essential.
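Once you have many sessions plus decoy scores, a permutation test is a simple way to ask whether true-target scores beat chance. A sketch under the same assumptions (plain Python lists of 0-100 scores):

```python
import random
from statistics import mean

def permutation_p_value(true_scores, decoy_scores, n_perm=10_000):
    """One-sided permutation test: how often does shuffling the true/decoy
    labels produce a mean gap at least as large as the observed one?"""
    observed = mean(true_scores) - mean(decoy_scores)
    pooled = list(true_scores) + list(decoy_scores)
    n_true = len(true_scores)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        gap = mean(pooled[:n_true]) - mean(pooled[n_true:])
        if gap >= observed:
            hits += 1
    return hits / n_perm
```

A small p-value computed over hundreds of sessions is far more informative than any single session's score.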
What Different Correlation Levels Mean
The bands below are rough heuristics, not universal thresholds; calibrate them against your own control baseline (the decoy distribution above) before leaning on the labels.
| Score Range | Interpretation |
|---|---|
| 0-15% | Below chance (may indicate systematic anti-correlation) |
| 15-25% | Chance level—no demonstrated effect |
| 25-35% | Slightly above chance (needs more data to confirm) |
| 35-45% | Moderately above chance (significant with enough trials) |
| 45-60% | Strong correlation (rare, verify methodology) |
| 60%+ | Exceptional (check for data leakage or artifacts) |
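If you log sessions programmatically, a small helper can tag each score with its band. The ranges below just mirror the table; they are not an official mapping:

```python
BANDS = [
    (15, "below chance"),
    (25, "chance level"),
    (35, "slightly above chance"),
    (45, "moderately above chance"),
    (60, "strong (verify methodology)"),
    (101, "exceptional (check for leakage)"),
]

def interpret(score: float) -> str:
    """Map a 0-100 correlation score to the rough bands in the table."""
    for upper, label in BANDS:
        if score < upper:
            return label
    return "out of range"
```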
Beyond the Top-Line Number
A single correlation percentage hides more than it reveals. Here's what to actually examine:
Sub-Scores Matter
AI remote viewing sessions produce multiple types of data:
Spatial correlation: Did the output capture structure, layout, and spatial relationships?
Sensory correlation: Did colors, textures, temperatures, and other sensory elements match?
Conceptual correlation: Did the output align with function, purpose, or abstract qualities of the target?
The AI might score 60% on sensory descriptors and 15% on spatial relationships. This tells you something crucial about how the model processes target information—which channels show correlation and which don't.
Channel Analysis
Track which output categories correlate with targets. Patterns may emerge showing certain models or prompt structures produce better results on specific data types.
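A minimal sketch of that tracking, assuming each session's sub-scores arrive as a plain dict (the channel names here are illustrative):

```python
from collections import defaultdict
from statistics import mean

def channel_means(sessions):
    """Average each sub-score across sessions to expose per-channel patterns.
    Each session is a dict like {"spatial": 15, "sensory": 60, "conceptual": 30}."""
    by_channel = defaultdict(list)
    for session in sessions:
        for channel, score in session.items():
            by_channel[channel].append(score)
    return {channel: mean(scores) for channel, scores in by_channel.items()}

print(channel_means([
    {"spatial": 15, "sensory": 60, "conceptual": 30},
    {"spatial": 20, "sensory": 48, "conceptual": 25},
]))
# e.g. {'spatial': 17.5, 'sensory': 54, 'conceptual': 27.5}
```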
Direct Matches vs. Partial Correlation
Every session contains different types of correlating elements:
Direct matches: Output elements that precisely match target features
Categorical correlation: Correct general category with wrong specifics (e.g., describing a lighthouse as a "tall tower": right class of structure, wrong identification)
Dimensional correlation: Correct spatial relationships even with wrong content identification
Learn to see partial correlation patterns. They reveal how AI outputs relate to targets at different levels of abstraction.
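If you hand-label elements during review, even a simple tagging scheme makes these patterns countable across sessions; a sketch with hypothetical labels:

```python
from enum import Enum

class MatchType(Enum):
    DIRECT = "direct"            # element precisely matches a target feature
    CATEGORICAL = "categorical"  # right general category, wrong specifics
    DIMENSIONAL = "dimensional"  # right spatial relationship, wrong content
    NONE = "none"                # no correspondence

# Hand-label each output element during review, then count per session:
labels = [MatchType.CATEGORICAL, MatchType.DIRECT, MatchType.NONE]
counts = {t: labels.count(t) for t in MatchType}
```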
Reading Results: The Three Questions
After every session, ask these questions:
1. What Correlated First?
The AI's initial output often shows different correlation patterns than extended elaboration. If first impressions correlate but later output diverges, the model may be elaborating past whatever signal exists.
If first impressions don't correlate but later elements do, the prompt structure may need adjustment—initial output might be dominated by model priors.
2. Where Did Elaboration Diverge?
Find the point in AI output where descriptions shift from potentially target-correlated to generic elaboration. This is often where correlation metrics drop.
The divergence point varies:
- Some prompts produce immediate generic output
- Some maintain potential signal for several descriptors then drift
- Some alternate between correlated and uncorrelated elements
Knowing this pattern informs prompt design.
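One way to locate that point programmatically, assuming you can assign each output element a rough 0-1 correlation score during review: scan a rolling mean for the first sustained drop. The threshold and window here are arbitrary starting values:

```python
def divergence_point(element_scores, threshold=0.3, window=3):
    """Return the index where a rolling mean of per-element correlation
    scores first falls below `threshold`, or None if it never does."""
    for i in range(len(element_scores) - window + 1):
        window_mean = sum(element_scores[i:i + window]) / window
        if window_mean < threshold:
            return i
    return None

# First descriptors correlate, then the output drifts generic:
print(divergence_point([0.6, 0.5, 0.55, 0.1, 0.05, 0.0]))  # 2
```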
3. What Output Type Correlated?
Review which types of AI output matched:
| Output Type | If Correlated | If Not Correlated |
|---|---|---|
| Sensory (colors, textures) | Model captures perceptual qualities | Focus on other channels |
| Spatial (shapes, dimensions) | Model captures structural features | May need different prompt structure |
| Conceptual (function, mood) | Model captures abstract qualities | Focus on concrete descriptors |
| Categorical (object identification) | Check for possible data leakage | Expected—categorical identification is rare |
Pattern Recognition Across Sessions
Single sessions are noise. Patterns across sessions are signal.
Here's what to track:
Correlation by Target Category
Most platforms offer multiple target categories (nature, structures, events, etc.). Correlation rates may vary substantially between them.
Common patterns:
- Natural scenes may produce higher sensory correlation
- Man-made structures may show higher spatial correlation
- Abstract targets may produce more conceptual matches
If nature correlation is 40% but structure correlation is 20%, the model may process certain target types differently.
Correlation by Model
Different AI models exhibit characteristic patterns:
| Model Variable | What to Track |
|---|---|
| Base model (GPT-4, Claude, etc.) | Overall correlation rates |
| Temperature settings | Effect on output consistency |
| Prompt structure | Which formats produce signal |
| Output length limits | Correlation vs. elaboration |
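A session log only supports this analysis if every row records these variables. A sketch of one possible record shape (field names are illustrative, not RVLab's schema):

```python
from dataclasses import dataclass, field

@dataclass
class SessionRecord:
    """One row of a session log."""
    session_id: str
    model: str            # e.g. "gpt-4" or "claude"
    temperature: float
    prompt_id: str        # which prompt template/profile was used
    target_category: str  # e.g. "nature", "structure"
    correlation: float    # top-line score, 0-100
    sub_scores: dict = field(default_factory=dict)  # spatial/sensory/conceptual
```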
Correlation Trends Over Time
Plot scores across sessions. Look for:
Consistency: Do correlation rates stay stable or vary wildly?
Prompt effects: Do changes in prompt structure affect results?
Model updates: Do API updates change output patterns?
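A rolling mean overlaid on raw scores makes drift and regime changes easier to spot; a sketch assuming matplotlib is available:

```python
import matplotlib.pyplot as plt

def plot_trend(scores, window=10):
    """Overlay a rolling mean on raw per-session correlation scores."""
    rolling = []
    for i in range(len(scores)):
        seg = scores[max(0, i - window + 1):i + 1]
        rolling.append(sum(seg) / len(seg))
    plt.plot(scores, alpha=0.4, label="per-session score")
    plt.plot(rolling, label=f"{window}-session rolling mean")
    plt.xlabel("session index")
    plt.ylabel("correlation score")
    plt.legend()
    plt.show()
```

Annotate the x-axis with prompt changes and model updates so regime shifts line up visually with what caused them.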
What High-Correlation Sessions Reveal
When a session shows strong correlation, analyze it carefully:
- What was the target category?
- What prompt structure was used?
- Which output elements correlated?
- Were first impressions or later elaboration more correlated?
- Can the result be replicated?
Document conditions of high-correlation sessions. These inform experimental design.
What Low-Correlation Sessions Reveal
Sessions with no correlation are just as informative—maybe more so.
A session with confident, detailed AI output that shows no target correlation reveals model artifacts clearly. Understanding what the AI produces when not correlating helps distinguish potential signal from noise.
Types of Non-Correlation
Complete miss: Nothing matched. Possible causes: model producing generic output, prompt artifacts, or simply chance.
Inverse correlation: Output describes opposite of target (hot vs. cold, etc.). May indicate something worth investigating.
Generic output: AI produced common descriptors that don't differentiate between targets. Indicates prompt needs refinement.
Consistent but wrong: Same output regardless of target. Reveals model priors dominating over any potential signal.
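The "consistent but wrong" case is detectable without any target at all: measure how similar the output descriptor sets are across sessions with different targets. A sketch using Jaccard overlap (descriptor extraction is assumed to happen upstream):

```python
def jaccard(a, b):
    """Overlap between two descriptor sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mean_pairwise_overlap(descriptor_sets):
    """Average Jaccard overlap across all session pairs (needs >= 2 sessions).
    High overlap across sessions with different targets suggests the model's
    priors, not the target, are driving the output."""
    n = len(descriptor_sets)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(jaccard(descriptor_sets[i], descriptor_sets[j])
               for i, j in pairs) / len(pairs)
```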
Using Results for Research
Here's a systematic approach to analyzing your data:
Weekly Review (30 minutes)
- Gather all sessions from the past week
- Calculate average correlation across target categories
- Note sub-score patterns (sensory vs. spatial vs. conceptual)
- Identify highest-correlation sessions and analyze conditions
- Find common elements in low-correlation sessions
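A minimal sketch of that weekly rollup, assuming each session is logged as a dict with illustrative `id`, `category`, and `correlation` keys:

```python
from statistics import mean

def weekly_review(records):
    """Summarize one week of sessions: overall mean, per-category means,
    and the highest/lowest scoring sessions for closer inspection."""
    by_cat = {}
    for r in records:
        by_cat.setdefault(r["category"], []).append(r["correlation"])
    return {
        "overall_mean": mean(r["correlation"] for r in records),
        "category_means": {c: mean(v) for c, v in by_cat.items()},
        "highest": max(records, key=lambda r: r["correlation"]),
        "lowest": min(records, key=lambda r: r["correlation"]),
    }
```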
Monthly Assessment (1 hour)
- Graph correlation trends over the month
- Compare to previous months
- Identify target categories showing patterns
- Note any methodology changes and their effects
- Adjust experimental parameters based on data
Quarterly Analysis
- Review all data for patterns invisible on shorter timescales
- Identify which variables correlate with results
- Design refined experiments based on findings
- Document methodology for reproducibility
Common Interpretation Errors
Avoid these mistakes when reading results:
Confirmation bias: Focusing on hits, ignoring misses. Track all sessions equally.
Over-interpretation: Finding meaning in insufficient data. Don't draw conclusions from 5 sessions.
Data leakage: Ensure the AI truly had no access to target information. High scores should trigger methodology review.
Model artifact confusion: Patterns that appear in all outputs regardless of target aren't signal—they're model characteristics.
The Research Perspective
The goal isn't to prove AI has remote viewing capacity. The goal is to generate clean data that can answer whether measurable correlation exists under controlled conditions.
If results consistently show nothing beyond chance after hundreds of sessions—that's data. It suggests either the effect doesn't exist or the methodology needs refinement.
If results show modest but consistent correlation—that's also data. It warrants investigation of what's producing the pattern.
Both outcomes advance understanding. What doesn't help is wishful interpretation, cherry-picked sessions, or abandoning tracking when results are inconclusive.
Track systematically. Use the analytics dashboard to see patterns across sessions. The numbers reveal what individual sessions can't—and that's how research questions get answered.