
How to Interpret AI Remote Viewing Session Results

That 42% correlation score—is it significant? Here's how to understand what AI remote viewing results actually mean, what patterns to track, and how to evaluate the data.

By RVLab · 7 min read

You just ran an AI remote viewing session. The target is revealed. The system calculated a 42% correlation score.

Now what? Is that meaningful? Should you run more sessions? What patterns should you track?

Interpreting AI output correctly is essential for generating useful research data. Here's how to read your results properly.

First: Understanding the Baseline

Before any score means anything, you need to understand what chance looks like.

For free-form descriptions scored by an AI analyzer, there isn’t a universal “chance percentage.” RVLab’s correlation score is best treated as a heuristic correspondence estimate produced under a fixed rubric.

It becomes meaningful when you compare many sessions run under the same conditions (same protocol, same model, same prompt/profile), and when you include controls (repeated sessions, cross-model comparisons, swapped-target checks).


Statistical Significance

A single session score means almost nothing statistically. You need many sessions—and ideally control conditions—before patterns become reliable. This is why long-term tracking is essential.
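One way to ground a score empirically is a permutation check: pre-score every transcript against every target, then ask how often random transcript-to-target pairings do as well as the real ones. A minimal sketch in Python, assuming you have already built such a score matrix (the function and layout here are illustrative, not an RVLab API):

```python
import random
from statistics import mean

def permutation_p_value(score_matrix, n_iter=10_000, seed=0):
    """Empirical baseline test. score_matrix[i][j] is the analyzer's score
    for transcript i judged against target j, pre-computed for every pairing;
    the diagonal holds the true pairings.

    Returns the fraction of shuffled pairings whose mean score is at least
    as high as the matched mean."""
    rng = random.Random(seed)
    n = len(score_matrix)
    observed = mean(score_matrix[i][i] for i in range(n))
    idx = list(range(n))
    exceed = 0
    for _ in range(n_iter):
        rng.shuffle(idx)  # swapped-target control: random assignment
        null = mean(score_matrix[i][idx[i]] for i in range(n))
        if null >= observed:
            exceed += 1
    return (exceed + 1) / (n_iter + 1)  # smoothed so p is never exactly 0
```

A value near 0.5 means the matched pairings look like any random pairing; only a small value, sustained across independent batches, deserves attention.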

What Different Correlation Levels Mean

Score Range | Interpretation
----------- | --------------
0-15% | Below chance (may indicate systematic anti-correlation)
15-25% | Chance level—no demonstrated effect
25-35% | Slightly above chance (needs more data to confirm)
35-45% | Moderately above chance (significant with enough trials)
45-60% | Strong correlation (rare, verify methodology)
60%+ | Exceptional (check for data leakage or artifacts)

Beyond the Top-Line Number

A single correlation percentage hides more than it reveals. Here's what to actually examine:

Sub-Scores Matter

AI remote viewing sessions produce multiple types of data:

Spatial correlation: Did the output capture structure, layout, and spatial relationships?

Sensory correlation: Did colors, textures, temperatures, and other sensory elements match?

Conceptual correlation: Did the output align with function, purpose, or abstract qualities of the target?

The AI might score 60% on sensory descriptors and 15% on spatial relationships. This tells you something crucial about how the model processes target information—which channels show correlation and which don't.


Channel Analysis

Track which output categories correlate with targets. Patterns may emerge showing certain models or prompt structures produce better results on specific data types.
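A lightweight way to track this is to accumulate sub-scores per channel across sessions. A sketch, assuming each session record is a dict of channel sub-scores on a 0-1 scale (the field names here are illustrative):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session records: sub-scores per channel, as fractions 0..1.
sessions = [
    {"spatial": 0.15, "sensory": 0.60, "conceptual": 0.30},
    {"spatial": 0.22, "sensory": 0.48, "conceptual": 0.25},
    {"spatial": 0.10, "sensory": 0.55, "conceptual": 0.35},
]

by_channel = defaultdict(list)
for s in sessions:
    for channel, score in s.items():
        by_channel[channel].append(score)

for channel, scores in sorted(by_channel.items()):
    print(f"{channel:>10}: mean {mean(scores):.0%} over {len(scores)} sessions")
```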

Direct Matches vs. Partial Correlation

Every session contains different types of correlating elements:

Direct matches: Output elements that precisely match target features

Categorical correlation: Correct general category with wrong specifics (e.g., describing a lighthouse as a "tall structure" or "tower": right category, wrong identification)

Dimensional correlation: Correct spatial relationships even with wrong content identification

Learn to see partial correlation patterns. They reveal how AI outputs relate to targets at different levels of abstraction.

Reading Results: The Three Questions

After every session, ask these questions:

1. What Correlated First?

The AI's initial output often shows different correlation patterns than extended elaboration. If first impressions correlate but later output diverges, the model may be elaborating past whatever signal exists.

If first impressions don't correlate but later elements do, the prompt structure may need adjustment—initial output might be dominated by model priors.

2. Where Did Elaboration Diverge?

Find the point in AI output where descriptions shift from potentially target-correlated to generic elaboration. This is often where correlation metrics drop.

The divergence point varies:

  • Some prompts produce immediate generic output
  • Some maintain potential signal for several descriptors then drift
  • Some alternate between correlated and uncorrelated elements

Knowing this pattern informs prompt design.
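If your analyzer can score descriptors individually (an assumption; a single top-line score won't support this), the divergence point can be located mechanically: find where a rolling mean of per-descriptor scores first drops below a floor. A minimal sketch:

```python
from statistics import mean

def divergence_point(descriptor_scores, window=3, floor=0.2):
    """Return the index where a rolling mean of per-descriptor scores first
    drops below `floor`, i.e. where output shifts from possibly
    target-correlated to generic elaboration. Returns None if it never drops.

    descriptor_scores: analyzer scores (0..1) for each descriptor, in the
    order the model produced them (a hypothetical per-descriptor breakdown)."""
    for i in range(len(descriptor_scores) - window + 1):
        if mean(descriptor_scores[i:i + window]) < floor:
            return i
    return None

# Example: early descriptors correlate, later ones drift.
print(divergence_point([0.6, 0.5, 0.4, 0.1, 0.05, 0.0]))  # -> 2
```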

3. What Output Type Correlated?

Review which types of AI output matched:

Output Type | If Correlated | If Not Correlated
----------- | ------------- | -----------------
Sensory (colors, textures) | Model captures perceptual qualities | Focus on other channels
Spatial (shapes, dimensions) | Model captures structural features | May need different prompt structure
Conceptual (function, mood) | Model captures abstract qualities | Focus on concrete descriptors
Categorical (object identification) | Check for possible data leakage | Expected—categorical identification is rare

Pattern Recognition Across Sessions

Single sessions are noise. Patterns across sessions are signal.

Here's what to track:

Correlation by Target Category

Most platforms offer multiple target categories (nature, structures, events, etc.). Correlation may vary substantially between them.

Common patterns:

  • Natural scenes may produce higher sensory correlation
  • Man-made structures may show higher spatial correlation
  • Abstract targets may produce more conceptual matches

If nature correlation is 40% but structure correlation is 20%, the model may process certain target types differently.
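Checking this is one groupby over your session log. A pandas sketch with a hypothetical inline log; in practice you would load your exported session history:

```python
import pandas as pd

# Hypothetical session log: one row per session.
log = pd.DataFrame({
    "category": ["nature", "structure", "nature", "event", "structure", "nature"],
    "score":    [0.41,     0.18,        0.39,     0.22,    0.21,        0.44],
})

summary = log.groupby("category")["score"].agg(["mean", "count"])
print(summary.sort_values("mean", ascending=False))
```

Watch the count column: a category mean built from a handful of sessions is provisional, not a pattern.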

Correlation by Model

Different AI models exhibit characteristic patterns:

Model Variable | What to Track
-------------- | -------------
Base model (GPT-4, Claude, etc.) | Overall correlation rates
Temperature settings | Effect on output consistency
Prompt structure | Which formats produce signal
Output length limits | Correlation vs. elaboration

Plot scores across sessions (see the sketch after this list). Look for:

Consistency: Do correlation rates stay stable or vary wildly?

Prompt effects: Do changes in prompt structure affect results?

Model updates: Do API updates change output patterns?
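A rolling mean per model makes all three visible at a glance. A sketch assuming a sessions.csv export with date, model, and score columns (the file name and columns are assumptions, not a platform export format):

```python
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("sessions.csv", parse_dates=["date"])  # assumed export

fig, ax = plt.subplots()
for model, grp in log.sort_values("date").groupby("model"):
    # 10-session rolling mean smooths single-session noise
    grp.set_index("date")["score"].rolling(10).mean().plot(ax=ax, label=model)
ax.set_ylabel("correlation score")
ax.set_xlabel("session date")
ax.legend(title="model")
plt.show()
```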

What High-Correlation Sessions Reveal

When a session shows strong correlation, analyze it carefully:

  • What was the target category?
  • What prompt structure was used?
  • Which output elements correlated?
  • Were first impressions or later elaboration more correlated?
  • Can the result be replicated?

Document conditions of high-correlation sessions. These inform experimental design.

What Low-Correlation Sessions Reveal

Sessions with no correlation are just as informative—maybe more so.

A session with confident, detailed AI output that shows no target correlation reveals model artifacts clearly. Understanding what the AI produces when not correlating helps distinguish potential signal from noise.

Types of Non-Correlation

Complete miss: Nothing matched. Possible causes: model producing generic output, prompt artifacts, or simply chance.

Inverse correlation: Output describes the opposite of the target (hot vs. cold, etc.). A single instance is chance; a recurring pattern may be worth investigating.

Generic output: AI produced common descriptors that don't differentiate between targets. Indicates prompt needs refinement.

Consistent but wrong: Same output regardless of target. Reveals model priors dominating over any potential signal.
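That last pattern can be quantified: if descriptor sets look alike regardless of target, pairwise overlap between sessions will be high. A sketch using Jaccard similarity, assuming descriptor lists have been extracted from each transcript upstream:

```python
from itertools import combinations
from statistics import mean

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def prior_dominance(session_descriptors):
    """Mean pairwise Jaccard similarity between sessions' descriptor sets.
    Values near 1.0 mean the model says much the same thing regardless of
    target -- the 'consistent but wrong' pattern above."""
    pairs = combinations(session_descriptors, 2)
    return mean(jaccard(a, b) for a, b in pairs)

# Hypothetical descriptor lists extracted from three sessions:
outputs = [
    ["water", "blue", "horizontal", "open"],
    ["water", "blue", "cold", "open"],
    ["water", "blue", "horizontal", "vast"],
]
print(f"{prior_dominance(outputs):.2f}")  # prints 0.51 for this toy data
```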

Using Results for Research

Here's a systematic approach to analyzing your data:

Weekly Review (30 minutes)

  1. Gather all sessions from the past week
  2. Calculate average correlation across target categories
  3. Note sub-score patterns (sensory vs. spatial vs. conceptual)
  4. Identify highest-correlation sessions and analyze conditions
  5. Find common elements in low-correlation sessions
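Most of this checklist can be scripted. A sketch against the same hypothetical sessions.csv export, here assumed to carry per-channel sub-score columns as well:

```python
import pandas as pd

# Assumed columns: date, category, score, spatial, sensory, conceptual
log = pd.read_csv("sessions.csv", parse_dates=["date"])
week = log[log["date"] >= log["date"].max() - pd.Timedelta(days=7)]

print("Mean score by category:")
print(week.groupby("category")["score"].mean().round(2))

print("\nSub-score pattern (sensory vs. spatial vs. conceptual):")
print(week[["spatial", "sensory", "conceptual"]].mean().round(2))

best = week.loc[week["score"].idxmax()]
print(f"\nHighest-correlation session: {best['date'].date()}, "
      f"category={best['category']}, score={best['score']:.0%}")
```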

Monthly Assessment (1 hour)

  1. Graph correlation trends over the month
  2. Compare to previous months
  3. Identify target categories showing patterns
  4. Note any methodology changes and their effects
  5. Adjust experimental parameters based on data

Quarterly Analysis

  1. Review all data for patterns invisible on shorter timescales
  2. Identify which variables correlate with results
  3. Design refined experiments based on findings
  4. Document methodology for reproducibility

Common Interpretation Errors

Avoid these mistakes when reading results:

Confirmation bias: Focusing on hits, ignoring misses. Track all sessions equally.

Over-interpretation: Finding meaning in insufficient data. Don't draw conclusions from 5 sessions.

Data leakage: Ensure the AI truly had no access to target information. High scores should trigger methodology review.

Model artifact confusion: Patterns that appear in all outputs regardless of target aren't signal—they're model characteristics.

The Research Perspective

The goal isn't to prove AI has remote viewing capacity. The goal is to generate clean data that can answer whether measurable correlation exists under controlled conditions.

If results consistently show nothing beyond chance after hundreds of sessions—that's data. It suggests either the effect doesn't exist or the methodology needs refinement.

If results show modest but consistent correlation—that's also data. It warrants investigation of what's producing the pattern.

Both outcomes advance understanding. What doesn't help is wishful interpretation, cherry-picked sessions, or abandoning tracking when results are inconclusive.


Track systematically. Use the analytics dashboard to see patterns across sessions. The numbers reveal what individual sessions can't—and that's how research questions get answered.

Tags

analysis · methodology · statistics · research
