
7 Common Patterns in AI Remote Viewing Output (And What They Mean)

During development and pilot testing, common patterns emerge in AI-generated RV output. Here are seven patterns to watch for and how to interpret them.

By RVLab · 6 min read

Across many sessions during development and pilot testing, distinct patterns emerge in how large language models generate output under blinded conditions.

Understanding these patterns is essential for interpreting results and designing better experiments.

Here are seven common characteristics we observe in AI-generated remote viewing data, and what they might indicate.

Pattern #1: Over-Specification

The most common pattern in AI output—and one of the most problematic for analysis.

What we see: The AI generates highly specific labels rather than sensory descriptors. "A hospital" instead of "large structure, corridors, institutional feel."

Why it matters: When an AI names a target type, it's drawing on training data associations rather than generating novel perceptions. Specific labels make hit/miss analysis binary when correlation might exist at a more abstract level.

What to track: Compare sessions where AI outputs sensory fragments versus those where it outputs categorical labels. Do fragment-based outputs correlate with targets at different rates?

Categorical Output    Sensory Output
"A building"          "Tall, angular, hard surfaces, multiple levels"
"Ocean scene"         "Blue, expansive, horizontal, moving, wet sensation"
"Person standing"     "Vertical, organic, warm, bilateral symmetry"

Sensory-level output allows for partial correlation analysis that categorical labeling obscures.
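One way to operationalize partial correlation is to score descriptor overlap rather than exact label matches. A minimal sketch, where the function name and the choice of Jaccard similarity are illustrative assumptions, not an established RV scoring metric:

```python
def descriptor_overlap(output, target):
    """Jaccard similarity between two sets of sensory descriptors.

    Returns 1.0 for identical sets, 0.0 for disjoint sets, and
    partial credit in between -- exactly what binary hit/miss hides.
    """
    out, tgt = set(output), set(target)
    if not out or not tgt:
        return 0.0
    return len(out & tgt) / len(out | tgt)

# Sensory output can partially match even when a categorical label would miss.
session = ["tall", "angular", "hard", "vertical"]
ground_truth = ["tall", "narrow", "hard", "vertical"]
print(descriptor_overlap(session, ground_truth))  # 0.6
```

A session scored 0.6 this way would likely have been a flat "miss" under categorical scoring.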

Pattern #2: First-Token Significance

Analysis suggests the AI's initial output tokens may carry different signal characteristics than subsequent elaboration.

What we see: The first impression in AI output often shows different correlation patterns than the expanded description that follows.

Why it matters: Like human AOL (Analytical Overlay), AI systems may "elaborate" in ways that drift from initial signal. The first tokens may represent more direct response to the coordinate before the model's reasoning patterns take over.


Analysis Approach

Segment AI outputs into "first impression" (first 2-3 descriptors) versus "elaboration" (everything after). Track correlation rates separately. Does initial output correlate better with targets?

Research question: Is there a measurable difference in target correlation between first-token output and extended elaboration?
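The segmentation itself is trivial to implement; the open question is whether the two segments score differently. A sketch, where the three-descriptor cutoff is an arbitrary assumption to be tuned experimentally:

```python
def segment_output(descriptors, first_n=3):
    """Split an ordered descriptor list into first impression vs. elaboration."""
    return descriptors[:first_n], descriptors[first_n:]

raw = ["blue", "expansive", "moving", "a lake", "boats", "a wooden dock"]
first, elaboration = segment_output(raw)
print(first)        # ['blue', 'expansive', 'moving']
print(elaboration)  # ['a lake', 'boats', 'a wooden dock']
```

Scoring `first` and `elaboration` against the target separately, across many sessions, is what would answer the research question above.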

Pattern #3: Dimensional Correlation vs. Content Identification

A consistent finding across sessions: spatial and dimensional descriptions often show different correlation patterns than content descriptions.

What we see: The AI might correctly describe "tall, narrow, vertical" for a tower but incorrectly identify it as "a tree." The shape is right; the category is wrong.

Why it matters: This suggests outputs may contain partial correspondence that categorical analysis misses. A session scored as a "miss" because "tree ≠ tower" might still show dimensional correlation.

Analysis approach: Score dimensions separately from content:

Dimension               Score Independently
Height/verticality      Tall vs. short vs. flat
Movement                Static vs. dynamic
Organic vs. inorganic   Natural vs. constructed
Texture                 Rough vs. smooth vs. mixed
Temperature sensation   Warm vs. cool vs. neutral

Track dimensional correlation across sessions. Are certain dimensions consistently more correlated than others?
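Scored this way, each session becomes a vector of per-dimension matches rather than a single hit or miss. A sketch, with hypothetical dimension names and values:

```python
DIMENSIONS = ["verticality", "movement", "organic", "texture", "temperature"]

def score_dimensions(session, ground_truth):
    """Score each dimension independently; skip dimensions absent from either side."""
    return {d: int(session[d] == ground_truth[d])
            for d in DIMENSIONS
            if d in session and d in ground_truth}

# "Tree" vs. "tower": wrong category, but dimensional correlation survives.
session = {"verticality": "tall", "movement": "static", "organic": "natural"}
target  = {"verticality": "tall", "movement": "static", "organic": "constructed"}
print(score_dimensions(session, target))  # {'verticality': 1, 'movement': 1, 'organic': 0}
```

Averaging these vectors per dimension across sessions shows whether, say, verticality correlates more reliably than texture.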

Pattern #4: Prompt Structure Effects

How we frame the coordinate prompt significantly affects output characteristics.

What we see: Different prompt structures produce systematically different output patterns—even with the same underlying coordinate.

Why it matters: Prompt effects confound target correlation analysis. What looks like "signal" might be prompt artifact.


Control for Prompts

Run sessions with varied prompt structures to identify which patterns persist across prompt variations. Consistent patterns across prompt types are more likely to represent genuine effects.

Variables to test:

  • Coordinate format (numeric vs. alphanumeric)
  • Instruction framing (describe vs. perceive vs. sense)
  • Output structure (free-form vs. staged protocol)
  • Temperature and sampling parameters

Pattern #5: Model-Specific Signatures

Different AI models produce characteristic output patterns that persist across sessions.

What we see: GPT-4 outputs have different structural characteristics than Claude outputs, which differ from Gemini outputs. These differences are consistent regardless of target.

Why it matters: Model-specific patterns must be factored out of correlation analysis. If a model consistently describes "water" across sessions, that's a model artifact, not target correlation.

Analysis approach: Establish baseline output distributions for each model:

Model      Common Descriptors    Output Structure
Model A    Track top 50 terms    Note sentence patterns
Model B    Track top 50 terms    Note sentence patterns
Model C    Track top 50 terms    Note sentence patterns

Correlation with targets should be measured against these model-specific baselines, not raw chance.
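Building the per-model baseline amounts to counting descriptor frequencies across many blinded sessions; an individual session's terms matter only insofar as they deviate from that distribution. A minimal sketch:

```python
from collections import Counter

def baseline_terms(sessions, top_n=50):
    """Descriptor frequencies across one model's blinded sessions.

    If "water" tops this list regardless of target, "water" in any
    single session carries little evidential weight for that model.
    """
    counts = Counter(term for session in sessions for term in session)
    return counts.most_common(top_n)

sessions = [["water", "blue", "flat"], ["water", "tall", "grey"], ["water", "blue"]]
print(baseline_terms(sessions, top_n=2))  # [('water', 3), ('blue', 2)]
```

Target correlation is then measured as enrichment over this baseline rather than against raw chance.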

Pattern #6: Session Length and Signal Degradation

Extended outputs show characteristic drift patterns.

What we see: Correlation scores often decline as AI outputs get longer. Early descriptors may show one pattern; later elaboration shows another.

Why it matters: If genuine signal exists, it may be strongest in initial output before the model's language generation patterns introduce noise.


Length Considerations

Very long outputs may dilute signal with elaboration. Consider analyzing only the first N tokens or implementing output length limits in experimental protocols.

Research question: Is there an optimal output length that maximizes signal-to-noise ratio?
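Applying a length limit in analysis is a one-liner; the real work is sweeping N to find where signal-to-noise peaks. A sketch, where the 25-token default is an arbitrary starting point:

```python
def truncate_output(text, n_tokens=25):
    """Keep only the first n whitespace-delimited tokens of a session output."""
    return " ".join(text.split()[:n_tokens])

long_output = "Tall angular structure hard surfaces " * 20  # 100 tokens
print(len(truncate_output(long_output).split()))  # 25
```

Scoring the same sessions at several values of N, and comparing correlation per N, turns the research question into a measurable curve.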

Pattern #7: Consistency Across Repeated Sessions

Running the same coordinate multiple times reveals output stability patterns.

What we see: Some coordinates produce highly consistent AI outputs across runs; others produce variable outputs. This consistency itself shows patterns.

Why it matters: High consistency could indicate:

  • Strong model priors activated by the coordinate
  • Genuine target signal producing stable output
  • Coordinate format artifacts

Analysis approach: Run each coordinate 3-5 times with identical prompts. Measure output consistency. Compare consistency rates between high-correlation and low-correlation sessions.

Consistency Type                         What It Might Indicate
High consistency + High correlation      Possible genuine signal
High consistency + Low correlation       Model artifact
Low consistency + Variable correlation   Noise-dominated output
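Consistency can be quantified as mean pairwise descriptor overlap across the repeated runs. A sketch, where Jaccard overlap is one reasonable choice among several:

```python
from itertools import combinations

def run_consistency(runs):
    """Mean pairwise Jaccard overlap across repeated runs of one coordinate."""
    sets = [set(r) for r in runs]
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Three runs of the same coordinate with identical prompts.
runs = [["blue", "flat", "wet"], ["blue", "flat", "cold"], ["blue", "open", "wet"]]
print(round(run_consistency(runs), 2))  # 0.4
```

Plotting this consistency score against correlation score per coordinate populates the four quadrants of the table above.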

Implications for Experiment Design

These patterns suggest several experimental controls:

  1. Segment analysis: Score first impressions separately from elaboration
  2. Dimensional scoring: Track spatial/sensory correlation independently from categorical identification
  3. Model baselines: Establish output distributions per model before measuring target correlation
  4. Prompt controls: Test multiple prompt formats to identify stable patterns
  5. Consistency checks: Run repeated sessions to distinguish signal from noise
  6. Length normalization: Consider output length effects in scoring

The Research Question

Do AI systems exhibit measurable correlation with hidden targets under controlled conditions?

These patterns help us design experiments that can answer that question rigorously. Raw hit/miss scoring obscures nuances that might reveal whether any genuine effect exists.

The goal is not to prove AI has remote viewing ability—it's to generate clean data that can distinguish signal from artifact.


Next steps: Run experiments with these patterns in mind. Track not just hits and misses, but the structural characteristics of outputs. Over enough sessions, patterns will emerge that inform the core research question.

Tags

AI analysis, patterns, protocols, methodology

