
VerityNgn Methodology Documentation

Version 1.0 | Last Updated: October 23, 2025

Executive Summary

VerityNgn is a multimodal AI-powered video verification system that analyzes YouTube videos to assess the truthfulness of the claims made in them. The system combines multimodal LLM analysis, counter-intelligence techniques, and probabilistic reasoning to generate comprehensive truthfulness reports.

Core Innovation: VerityNgn is the first system to combine:
  1. Frame-by-frame multimodal video analysis (1 FPS sampling)
  2. YouTube counter-intelligence (analyzing review videos for contradictory evidence)
  3. Press release bias detection and penalty system
  4. Probabilistic truthfulness assessment with evidence weighting

Table of Contents

  1. System Architecture
  2. Multimodal Analysis Pipeline
  3. Claims Extraction Methodology
  4. Counter-Intelligence System
  5. Probability Distribution Model
  6. Evidence Classification & Weighting
  7. Verification Algorithm
  8. Report Generation
  9. Limitations & Future Work

1. System Architecture

Overview

VerityNgn follows a pipeline architecture with six main stages:
┌─────────────────────────────────────────────────────┐
│  Stage 1: Video Download & Preprocessing            │
│  - YouTube URL validation                           │
│  - Video metadata extraction                        │
│  - Transcript extraction                            │
└────────────────────────┬────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────┐
│  Stage 2: Multimodal Analysis                       │
│  - Frame-by-frame video analysis (1 FPS)            │
│  - Audio transcription with timestamps              │
│  - OCR for visual text extraction                   │
│  - Claims extraction (5-15 claims per video)        │
└────────────────────────┬────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────┐
│  Stage 3: Counter-Intelligence Gathering            │
│  - YouTube review search                            │
│  - Contradictory evidence detection                 │
│  - Press release identification                     │
└────────────────────────┬────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────┐
│  Stage 4: Claims Verification                       │
│  - Web search for supporting evidence               │
│  - Evidence classification & weighting              │
│  - Source credibility assessment                    │
└────────────────────────┬────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────┐
│  Stage 5: Probability Calculation                   │
│  - Multi-factor probability distribution            │
│  - Counter-intelligence adjustments                 │
│  - Evidence quality boosts                          │
│  - Normalization & mapping to verdicts              │
└────────────────────────┬────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────┐
│  Stage 6: Report Generation                         │
│  - HTML, Markdown, and JSON formats                 │
│  - Evidence compilation                             │
│  - Detailed explanations                            │
│  - Source attribution                               │
└─────────────────────────────────────────────────────┘

Technology Stack

  • Multimodal LLM: Google Gemini 2.5 Flash (via Vertex AI)
  • Context Window: 64K tokens for comprehensive analysis
  • Video Processing: yt-dlp for video download and metadata
  • Web Search: Google Custom Search API
  • Framework: LangGraph for workflow orchestration
  • Storage: Local filesystem or Google Cloud Storage

2. Multimodal Analysis Pipeline

Video Processing Strategy

VerityNgn uses an aggressive frame-sampling approach:
SAMPLING_RATE_FPS = 1            # 1 frame per second
TARGET_CLAIMS_PER_MINUTE = 3     # claim density target
CONTEXT_WINDOW_TOKENS = 64_000   # "64K" context budget (Gemini 2.5 Flash)
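
For a sense of scale, here is a minimal sketch of the per-video budget implied by the constants above; the helper function and its name are illustrative, not part of the codebase:

def sampling_budget(duration_seconds: int) -> dict:
    """Estimate the sampled frame count and claim target for one video."""
    frames = duration_seconds * SAMPLING_RATE_FPS
    raw_target = round(duration_seconds / 60 * TARGET_CLAIMS_PER_MINUTE)
    # The extraction prompt clamps the final count to the 5-15 range.
    return {"frames": frames, "target_claims": max(5, min(15, raw_target))}

print(sampling_budget(600))  # 10-minute video -> {'frames': 600, 'target_claims': 15}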

Frame-by-Frame Analysis

For each frame, the system analyzes:
  1. Visual Content:
    • On-screen text and graphics
    • Demonstrations and visual evidence
    • Charts, graphs, and data visualizations
    • Product displays or examples
  2. Audio Track:
    • Spoken statements with timestamps
    • Speaker identification
    • Tone and delivery analysis
  3. Metadata:
    • Video title and description
    • Channel information
    • Upload date and view count
    • Comments and engagement metrics

CRAAP Criteria Integration

All claims are evaluated using the CRAAP framework:
  • Currency: When was the information published?
  • Relevance: Is it relevant to the video’s main message?
  • Authority: What credentials or expertise is claimed?
  • Accuracy: Can it be verified with external sources?
  • Purpose: Is it promotional, educational, or persuasive?
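
To make the framework operational, each criterion can be phrased as a question the analysis must answer per claim. The sketch below is illustrative; the function name and question wording are assumptions, not the system's actual prompt:

def craap_questions(claim_text: str) -> dict:
    """Illustrative: the five CRAAP questions posed for each extracted claim."""
    return {
        "currency": f"When was the information behind '{claim_text}' published?",
        "relevance": "Is the claim central to the video's main message?",
        "authority": "What credentials or expertise does the source claim?",
        "accuracy": f"Can '{claim_text}' be verified against external sources?",
        "purpose": "Is the framing promotional, educational, or persuasive?",
    }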

Prompt Engineering

The system uses carefully crafted prompts to ensure high-quality claim extraction:
CRITICAL INSTRUCTIONS FOR MULTIMODAL VIDEO ANALYSIS:
- Analyze this video with MAXIMUM detail at 1 FRAME PER SECOND sampling rate
- Extract ALL factual claims, statements, assertions from ACTUAL VIDEO CONTENT
- Focus on SPOKEN WORDS, VISUAL TEXT, ON-SCREEN GRAPHICS, and DEMONSTRATIONS
- Ignore background music, decorative elements, or irrelevant visuals
- Extract BETWEEN 5 AND 15 specific, verifiable claims
Claim Mix Requirements:
  • 70% Scientific & Verifiable Claims (studies, statistics, product effectiveness)
  • 10% Speaker Credibility Claims (credentials, affiliations, experience)
  • 20% Other Verifiable Claims (dates, locations, specific statements)

Output Format

Claims are extracted in structured JSON:
{
  "claim_text": "Dr. X has 20 years of experience in endocrinology",
  "timestamp": "02:15",
  "speaker": "Dr. X (Narrator)",
  "source_type": "spoken",
  "initial_assessment": "Verifiable credential claim requiring external verification"
}
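
A lightweight way to validate this structure downstream is a typed record; the sketch below uses only the standard library, and the class name is an assumption rather than the system's actual schema:

from dataclasses import dataclass

@dataclass
class ExtractedClaim:
    claim_text: str
    timestamp: str          # "MM:SS" position in the video
    speaker: str
    source_type: str        # "spoken", "visual_text", "graphic", ...
    initial_assessment: str

claim = ExtractedClaim(
    claim_text="Dr. X has 20 years of experience in endocrinology",
    timestamp="02:15",
    speaker="Dr. X (Narrator)",
    source_type="spoken",
    initial_assessment="Verifiable credential claim requiring external verification",
)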

3. Claims Extraction Methodology

Claim Quality Criteria

Each extracted claim must meet these requirements:
  1. Specificity: Concrete, not vague or general
  2. Verifiability: Can be checked against external sources
  3. Relevance: Central to the video’s message
  4. Factuality: Stated as fact, not opinion
  5. Significance: Meaningful for truthfulness assessment

Claim Types

Primary Claims (70%):
  • Study results: “Study X found Y outcome”
  • Statistics: “75% of patients experienced Z”
  • Product effectiveness: “Product causes W in N days”
  • Scientific findings: “Research shows…”
Secondary Claims (10%):
  • Educational credentials: “Dr. X studied at Y”
  • Professional experience: “X years in field”
  • Institutional affiliations: “Works at Hospital Y”
  • Awards and recognitions
Other Claims (20%):
  • Historical facts
  • Geographic information
  • Temporal data
  • Specific events or occurrences

Anti-Patterns (Avoided)

The system explicitly avoids:
  • Vague motivational statements
  • General health advice without specifics
  • Obvious facts not requiring verification
  • Micro-claims too granular for meaningful assessment
  • Subjective opinions without factual basis
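
Most of this filtering happens inside the extraction prompt, but a crude pre-filter in the same spirit might look like the sketch below; the marker list and word-count threshold are illustrative assumptions:

VAGUE_MARKERS = ("believe in yourself", "stay healthy", "amazing", "life-changing")

def passes_quality_filter(claim_text: str) -> bool:
    """Reject obviously vague or overly granular claims before verification."""
    text = claim_text.lower()
    if any(marker in text for marker in VAGUE_MARKERS):
        return False   # vague motivational language
    if len(text.split()) < 5:
        return False   # micro-claim: too granular for meaningful assessment
    return True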

4. Counter-Intelligence System

Overview

VerityNgn’s counter-intelligence system is designed to detect and weight contradictory evidence that challenges claims made in the analyzed video. This is one of the system’s most innovative features.

Two-Pronged Approach

A. Press Release Detection

Purpose: Identify self-promotional content that lacks independent validation.
Detection Method:
PRESS_RELEASE_INDICATORS = [
    'prnewswire.com', 'businesswire.com', 'globenewswire.com',
    'prnewswire', 'marketwired', 'accesswire', 'prweb.com',
    'press release', 'newswire', 'press statement'
]
Bias Application:
  • Probability Adjustment: -0.4 (significant negative bias)
  • Rationale: Press releases are promotional, not independent verification
  • Distribution Impact:
    • Reduce TRUE by 60% of adjustment
    • Increase FALSE by 40% of adjustment
    • Increase UNCERTAIN by 40% of adjustment
Example:
Claim: "Product X causes 15 pounds of weight loss"
Evidence: 5 press releases from product manufacturer

Result: TRUE probability reduced by 0.24 (0.4 * 0.6)
        FALSE probability increased by 0.16 (0.4 * 0.4)
        UNCERTAIN increased by 0.16 (0.4 * 0.4)
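
The arithmetic above can be reproduced in a few lines. This is a minimal sketch (the function name is illustrative); the shifted distribution is re-normalized afterward, as described in Section 5:

def apply_press_release_bias(dist: dict, adjustment: float = 0.4) -> dict:
    """Shift probability mass away from TRUE when evidence is self-promotional."""
    out = dict(dist)
    out["TRUE"] -= adjustment * 0.6        # reduce TRUE by 60% of the adjustment
    out["FALSE"] += adjustment * 0.4       # increase FALSE by 40%
    out["UNCERTAIN"] += adjustment * 0.4   # increase UNCERTAIN by 40%
    out = {k: max(0.001, v) for k, v in out.items()}
    total = sum(out.values())
    return {k: v / total for k, v in out.items()}

print(apply_press_release_bias({"TRUE": 0.5, "FALSE": 0.25, "UNCERTAIN": 0.25}))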

B. YouTube Counter-Intelligence

Purpose: Find and analyze review videos that contradict claims in the original video.
Search Strategy:
COUNTER_INTEL_QUERIES = [
    "{video_title} scam review",
    "{video_title} fake exposed",
    "{video_title} debunk analysis",
    "{main_keywords} + warning review",
    "{product_name} doesn't work"
]
Transcript Analysis: The system analyzes transcripts of review videos for:
  1. Counter-Signals:
    • ‘scam’, ‘fake’, ‘fraud’, ‘lie’, ‘misleading’
    • ‘doesn’t work’, ‘waste of money’, ‘no results’
    • ‘red flags’, ‘warning’, ‘beware’, ‘avoid’
    • ‘overpriced’, ‘overhyped’, ‘disappointing’
    • ‘fabricated’, ‘deceptive’, ‘predatory’, ‘exposed’
  2. Supporting Signals:
    • ‘works’, ‘effective’, ‘results’, ‘recommend’
    • ‘good’, ‘helps’, ‘success’, ‘positive’
    • ‘beneficial’, ‘worth it’, ‘legit’
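
Counting these signals can be as simple as substring matching over the transcript; the sketch below (with abbreviated signal lists) produces the counter_signals and supporting_signals counts used in the stance rule that follows:

COUNTER_SIGNALS = ["scam", "fake", "fraud", "doesn't work", "red flags", "exposed"]
SUPPORTING_SIGNALS = ["works", "effective", "recommend", "worth it", "legit"]

def count_signals(transcript: str) -> tuple:
    """Return (counter_signals, supporting_signals) hit counts for one transcript."""
    text = transcript.lower()
    counter = sum(text.count(s) for s in COUNTER_SIGNALS)
    supporting = sum(text.count(s) for s in SUPPORTING_SIGNALS)
    return counter, supporting
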
Stance Determination:
counter_ratio = counter_signals / (counter_signals + supporting_signals)

if counter_ratio > 0.7:
    stance = 'counter'
    confidence = min(0.95, 0.6 + (counter_ratio - 0.7) * 1.17)
elif counter_ratio < 0.3:
    stance = 'supporting'
    confidence = min(0.95, 0.6 + (0.3 - counter_ratio) * 1.17)
else:
    stance = 'neutral'
    confidence = 0.5
Credibility Weighting: Counter-intelligence evidence is weighted by:
  • View Count: Videos with 100K+ views = stronger signal
  • Channel Credibility: Established channels weighted higher
  • Video Age: Recent videos weighted higher
  • Engagement: Like ratio and comment sentiment
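
One plausible way to fold these factors into a single numeric weight is sketched below; the coefficients and thresholds are illustrative assumptions, not the production values (the resulting range matches the 0.5-2.0 band listed in Section 6):

def review_video_weight(view_count: int, stance_confidence: float,
                        channel_established: bool) -> float:
    """Sketch: combine reach, confidence, and channel standing into a 0.5-2.0 weight."""
    weight = 0.5
    if view_count >= 100_000:
        weight += 0.75           # large reach = stronger signal
    elif view_count >= 10_000:
        weight += 0.35
    if channel_established:
        weight += 0.25
    weight += 0.5 * stance_confidence
    return min(2.0, weight)
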
Probability Impact: YouTube counter-intelligence has reduced impact compared to earlier versions:
# Enhanced (balanced) approach:
youtube_impact = min(0.20, youtube_counter_power * 0.08)
enhanced_dist["FALSE"] = min(0.85, base_false + youtube_impact)

# Reduction rationale: Avoid over-aggressive FALSE bias
# Previous: max 0.35 impact, max 0.9 FALSE
# Current: max 0.20 impact, max 0.85 FALSE

Counter-Intelligence in Action

Example Case:
Original Video: "Amazing weight loss supplement - lose 15 pounds!"

Counter-Intelligence Found:
1. "Product X Scam Exposed" (450K views) - Counter stance (90% confidence)
2. "I Tried Product X - My Honest Review" (120K views) - Counter stance (85% confidence)
3. "Product X Doesn't Work - Warning" (75K views) - Counter stance (80% confidence)

Impact:
- Base FALSE probability: 30%
- YouTube CI adjustment: +12% (0.12 based on reach and confidence)
- Final FALSE probability: 42%

5. Probability Distribution Model

Three-State Model

VerityNgn uses a probabilistic approach rather than binary true/false:
P(TRUE) + P(FALSE) + P(UNCERTAIN) = 1.0

Base Probability Initialization

base_dist = {
    "TRUE": 0.3,      # Starting assumption: neutral
    "FALSE": 0.3,      # Equal weight to false
    "UNCERTAIN": 0.4   # Highest initial uncertainty
}

Enhancement Factors

The base distribution is adjusted based on five key factors:

Factor 1: Evidence Coverage (up to +0.30 to TRUE)

coverage_score = evidence_count / (claim_word_count / 5)
if coverage_score > 1.0:
    boost = min(0.3, coverage_score * 0.15)
    enhanced_dist["TRUE"] += boost
Rationale: More evidence per claim indicates stronger validation.

Factor 2: Independent Source Ratio (up to +0.25 to TRUE)

independent_ratio = independent_sources / total_sources
if independent_ratio > 0.5:
    boost = min(0.25, (independent_ratio - 0.5) * 0.5)
    enhanced_dist["TRUE"] += boost
Rationale: Independent sources provide unbiased validation.

Factor 3: Scientific Evidence (up to +0.40 to TRUE)

science_weight = min(0.4, scientific_power * 0.2)
enhanced_dist["TRUE"] += science_weight
Rationale: Scientific sources (journals, academic institutions) carry highest weight.

Factor 4: YouTube Counter-Intelligence (up to +0.20 to FALSE)

youtube_impact = min(0.20, youtube_counter_power * 0.08)
enhanced_dist["FALSE"] += youtube_impact
Rationale: Contradictory reviews reduce claim trustworthiness.

Factor 5: Press Release Penalty (up to +0.40 to FALSE)

self_ref_count = count_self_referential_sources()
penalty = min(0.4, self_ref_count * 0.15)
enhanced_dist["FALSE"] += penalty
Rationale: Self-promotional content is inherently biased.

Normalization

After all adjustments, probabilities are normalized:
total = sum(enhanced_dist.values())
if total > 0:
    enhanced_dist = {k: v / total for k, v in enhanced_dist.items()}

# Ensure minimum thresholds
for key in ["TRUE", "FALSE", "UNCERTAIN"]:
    enhanced_dist[key] = max(0.001, enhanced_dist[key])

Verdict Mapping

The continuous probabilities are mapped to human-readable verdicts:
true_pct = enhanced_dist["TRUE"] * 100
false_pct = enhanced_dist["FALSE"] * 100
uncertain_pct = enhanced_dist["UNCERTAIN"] * 100

if true_pct > 70 and false_pct < 10:
    verdict = "HIGHLY_LIKELY_TRUE"
elif (true_pct + uncertain_pct) > 65 and false_pct < 35:
    verdict = "LIKELY_TRUE"
elif true_pct > 40 and false_pct < 35:
    verdict = "LEANING_TRUE"
elif abs(true_pct - false_pct) < 10:
    verdict = "UNCERTAIN"
# Check the strongest FALSE verdict first so that it remains reachable
elif false_pct > 75:
    verdict = "HIGHLY_LIKELY_FALSE"
elif (false_pct + uncertain_pct) > 65 and true_pct < 35:
    verdict = "LIKELY_FALSE"
elif false_pct > 35 and true_pct < 30:
    verdict = "LEANING_FALSE"
else:
    verdict = "UNCERTAIN"
Rationale for Thresholds:
  • 65% threshold for “likely” verdicts (relaxed from 70% to avoid over-conservatism)
  • 70% threshold for “highly likely” verdicts
  • Combination with UNCERTAIN allows for nuanced assessment

6. Evidence Classification & Weighting

Source Categories

Evidence is classified into six categories, each with distinct validation power:

1. Scientific Sources (Highest Weight: 2.5-4.0)

Characteristics:
  • Peer-reviewed journals
  • Academic institutions (.edu domains)
  • Research papers and studies
  • Government research agencies
Examples:
  • PubMed/NCBI publications
  • Journal articles (Nature, Science, NEJM)
  • University research papers
  • NIH/CDC reports
Validation Power: 2.5 - 4.0 (highest)

2. Independent Sources (High Weight: 1.5-2.0)

Characteristics:
  • Reputable news organizations
  • Fact-checking organizations
  • Independent analysis
  • No financial stake in outcome
Examples:
  • Reuters, AP News, BBC
  • FactCheck.org, Snopes, PolitiFact
  • Independent medical websites (Mayo Clinic, WebMD)
Validation Power: 1.5 - 2.0

3. Government Sources (High Weight: 1.5-2.5)

Characteristics:
  • .gov domains
  • Official government publications
  • Regulatory agency reports
Examples:
  • FDA reports
  • CDC guidelines
  • NIH publications
  • FTC consumer alerts
Validation Power: 1.5 - 2.5

4. Press Releases (Negative Weight: -0.5 to -1.0)

Characteristics:
  • Self-promotional content
  • Newswire services
  • Company press statements
  • Marketing materials
Examples:
  • PR Newswire articles
  • Business Wire releases
  • Company blog posts
  • Marketing pages
Validation Power: -0.5 to -1.0 (penalized)

5. YouTube Counter-Intelligence (Variable: 0.5-2.0)

Characteristics:
  • Review videos
  • Debunking content
  • Investigation videos
  • Consumer experiences
Weight Determined By:
  • View count (100K+ = higher weight)
  • Channel credibility
  • Stance confidence
  • Counter-signal strength
Validation Power: 0.5 - 2.0 (variable)

6. Other Web Sources (Standard Weight: 1.0)

Characteristics:
  • General web pages
  • Blogs
  • Forums
  • Social media
Validation Power: 1.0 (baseline)
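
A simple dispatcher in the spirit of this taxonomy maps a URL to a category and validation power. The sketch below abbreviates the domain lists and picks single representative powers from the ranges above; it is illustrative, not the production classifier:

SCIENTIFIC_DOMAINS = ("pubmed.ncbi.nlm.nih.gov", "nature.com", ".edu")
INDEPENDENT_DOMAINS = ("reuters.com", "apnews.com", "factcheck.org", "mayoclinic.org")
PRESS_RELEASE_DOMAINS = ("prnewswire.com", "businesswire.com", "globenewswire.com")

def classify_source(url: str) -> tuple:
    """Return (category, validation_power) for a single evidence URL."""
    if any(d in url for d in SCIENTIFIC_DOMAINS):
        return "scientific", 2.5
    if ".gov/" in url or url.endswith(".gov"):
        return "government", 1.5
    if any(d in url for d in INDEPENDENT_DOMAINS):
        return "independent", 1.5
    if any(d in url for d in PRESS_RELEASE_DOMAINS):
        return "press_releases", -0.5
    return "other", 1.0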

Self-Referential Detection

The system identifies “self-referential” sources that cite the original video or its creators:
def is_self_referential(source_url, source_text, video_title, video_channel, video_domain=None):
    """Return True if the source cites the original video or its creators."""
    # Check if the source text cites the original video or channel
    if video_title.lower() in source_text.lower():
        return True
    if video_channel.lower() in source_text.lower():
        return True
    # Check if the source URL belongs to the video's own domain (if known)
    if video_domain and video_domain in source_url:
        return True
    return False
Self-referential sources are:
  • Separated into press_releases category
  • Given negative validation power
  • Excluded from supporting evidence counts

7. Verification Algorithm

Claim Verification Process

For each extracted claim, the system follows this process:
1. Generate search queries
   ├─ Fact-check queries: "{claim} fact check"
   ├─ Scientific queries: "{claim} study research"
   ├─ General queries: exact claim text
   └─ Debunk queries: "{claim} debunk false"

2. Execute web searches
   ├─ Google Custom Search API
   ├─ Retrieve top 10 results per query
   └─ Extract URL, title, snippet

3. Classify evidence
   ├─ Identify source category
   ├─ Assign validation power
   ├─ Detect self-referential content
   └─ Determine supporting vs contradicting

4. Calculate probability distribution
   ├─ Start with base distribution
   ├─ Apply evidence boosts
   ├─ Apply counter-intelligence penalties
   ├─ Normalize to sum = 1.0
   └─ Map to verdict category

5. Generate explanation
   ├─ Summarize evidence
   ├─ Explain probability factors
   ├─ Cite key sources
   └─ Provide confidence assessment
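
Step 1 translates directly into code; here is a minimal sketch of the query fan-out (the function name is illustrative, and the templates mirror the outline above):

def generate_queries(claim: str) -> list:
    """Build the four query families used to verify one claim."""
    return [
        f"{claim} fact check",      # fact-check query
        f"{claim} study research",  # scientific query
        claim,                      # general query: exact claim text
        f"{claim} debunk false",    # debunk query
    ]

print(generate_queries("Product X causes 15 pounds of weight loss"))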

Multi-Factor Probability Calculation

The complete algorithm:
from typing import Dict, List, Tuple

def calculate_enhanced_probability_distribution(
    claim_text: str,
    base_dist: Dict[str, float],
    evidence_groups: Dict[str, List[Dict]]
) -> Tuple[Dict[str, float], List[str]]:
    
    enhanced_dist = base_dist.copy()
    modifications = []
    
    # Extract evidence by category
    scientific_evidence = evidence_groups["scientific"]
    independent_evidence = evidence_groups["independent"]
    press_releases = evidence_groups["press_releases"]
    youtube_counter = evidence_groups["youtube_reviews"]
    
    # Calculate validation powers
    scientific_power = sum(e['validation_power'] for e in scientific_evidence)
    independent_power = sum(e['validation_power'] for e in independent_evidence)
    youtube_counter_power = sum(e['validation_power'] for e in youtube_counter)
    
    # Factor 1: Evidence coverage
    total_evidence = len(scientific_evidence) + len(independent_evidence)
    claim_length = len(claim_text.split())
    coverage_score = total_evidence / (claim_length / 5)
    
    if coverage_score > 1.0:
        coverage_boost = min(0.3, coverage_score * 0.15)
        enhanced_dist["TRUE"] += coverage_boost
        modifications.append(f"Evidence coverage boost: +{coverage_boost:.2f}")
    
    # Factor 2: Independent source ratio
    if independent_power > 0:
        independent_ratio = independent_power / (independent_power + scientific_power + 0.01)
        if independent_ratio > 0.5:
            indie_boost = min(0.25, (independent_ratio - 0.5) * 0.5)
            enhanced_dist["TRUE"] += indie_boost
            modifications.append(f"Independent sources boost: +{indie_boost:.2f}")
    
    # Factor 3: Scientific evidence weighting
    if scientific_power > 0:
        science_weight = min(0.4, scientific_power * 0.2)
        enhanced_dist["TRUE"] += science_weight
        modifications.append(f"Scientific evidence boost: +{science_weight:.2f}")
    
    # Factor 4: YouTube counter-intelligence
    if youtube_counter_power > 0:
        youtube_impact = min(0.20, youtube_counter_power * 0.08)
        enhanced_dist["FALSE"] += youtube_impact
        modifications.append(f"YouTube CI penalty: +{youtube_impact:.2f} to FALSE")
    
    # Factor 5: Press release penalty
    self_ref_count = len([pr for pr in press_releases if pr.get('self_referential')])
    if self_ref_count > 0:
        penalty = min(0.4, self_ref_count * 0.15)
        enhanced_dist["FALSE"] += penalty
        modifications.append(f"Press release penalty: +{penalty:.2f} to FALSE")
    
    # Normalize (guard against an all-zero distribution)
    total = sum(enhanced_dist.values())
    if total > 0:
        enhanced_dist = {k: v / total for k, v in enhanced_dist.items()}
    
    # Ensure minimums
    for key in ["TRUE", "FALSE", "UNCERTAIN"]:
        enhanced_dist[key] = max(0.001, enhanced_dist[key])
    
    return enhanced_dist, modifications
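
For illustration, a call with a small synthetic evidence set (all values invented for the example) might look like this:

evidence_groups = {
    "scientific": [{"validation_power": 2.5}],
    "independent": [{"validation_power": 1.5}, {"validation_power": 1.5}],
    "press_releases": [{"validation_power": -0.5, "self_referential": True}],
    "youtube_reviews": [{"validation_power": 1.8}],
}

dist, notes = calculate_enhanced_probability_distribution(
    claim_text="Product X causes 15 pounds of weight loss",
    base_dist={"TRUE": 0.3, "FALSE": 0.3, "UNCERTAIN": 0.4},
    evidence_groups=evidence_groups,
)
print(dist)   # normalized three-state distribution
print(notes)  # human-readable list of applied boosts and penalties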

8. Report Generation

Report Structure

Each report contains:
  1. Executive Summary
    • Total claims analyzed
    • Verdict distribution (Highly Likely True, Likely False, etc.)
    • Overall truthfulness assessment
  2. Video Information
    • Title, channel, duration
    • Upload date and view count
    • Video embed/thumbnail
  3. Detailed Claims Analysis
    • Claim text with timestamp
    • Speaker identification
    • Verification verdict
    • Probability distribution
    • Detailed explanation
    • Source links
  4. Evidence Summary
    • Categorized by source type
    • Validation power indicators
    • Quality metrics
  5. Counter-Intelligence Section
    • Press releases identified
    • YouTube review analysis
    • Impact on truthfulness scores

Output Formats

HTML Report:
  • Interactive source links (modal popups)
  • Video embed
  • Formatted tables
  • Visual probability bars
Markdown Report:
  • Clean, readable format
  • Source citations with links
  • Table-formatted claims analysis
  • GitHub-compatible
JSON Report:
  • Structured data format
  • All probabilities and evidence
  • Machine-readable
  • API-friendly

Explanation Generation

For each claim, the system generates a human-readable explanation.
Template:
[VERDICT] based on [EVIDENCE_COUNT] sources ([SCIENTIFIC_COUNT] scientific, 
[INDEPENDENT_COUNT] independent). 

[COUNTER_INTEL_STATEMENT if applicable]

Key supporting evidence: [TOP_SOURCES]

Probability: TRUE: X%, FALSE: Y%, UNCERTAIN: Z%
Example:
LIKELY FALSE based on 8 sources (1 scientific, 3 independent). 

YOUTUBE COUNTER-INTELLIGENCE: Multiple review videos (450K+ views) contradict 
this claim, providing independent contradictory perspective.

Press release sources have reduced validation power due to self-promotional nature.

Key supporting evidence: [1] National Institutes of Health study shows no 
significant effect [2] Mayo Clinic states insufficient evidence.

Probability: TRUE: 15%, FALSE: 70%, UNCERTAIN: 15%
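
Rendering the template is plain string formatting; here is a minimal sketch (placeholder names mirror the template above, and the values are taken from this example):

TEMPLATE = (
    "{verdict} based on {evidence_count} sources "
    "({scientific_count} scientific, {independent_count} independent).\n\n"
    "{counter_intel}\n\n"
    "Key supporting evidence: {top_sources}\n\n"
    "Probability: TRUE: {true_pct}%, FALSE: {false_pct}%, UNCERTAIN: {uncertain_pct}%"
)

print(TEMPLATE.format(
    verdict="LIKELY FALSE", evidence_count=8, scientific_count=1, independent_count=3,
    counter_intel="YOUTUBE COUNTER-INTELLIGENCE: Multiple review videos contradict this claim.",
    top_sources="[1] NIH study shows no significant effect [2] Mayo Clinic states insufficient evidence.",
    true_pct=15, false_pct=70, uncertain_pct=15,
))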

9. Limitations & Future Work

Current Limitations

1. Language Support

  • Current: English-only
  • Impact: Cannot analyze non-English videos
  • Mitigation: Rely on auto-translated transcripts (degraded quality)

2. Visual Understanding

  • Current: Text-based analysis of visual content
  • Impact: May miss subtle visual cues or demonstrations
  • Mitigation: 1 FPS sampling captures most visual information

3. Context Understanding

  • Current: Individual claim verification
  • Impact: May miss broader narrative context
  • Mitigation: Video analysis summary provides context

4. Search API Limitations

  • Current: Dependent on Google Custom Search API
  • Impact: Results limited by search algorithm bias
  • Mitigation: Multiple query strategies, diverse sources

5. Real-Time Updates

  • Current: Analysis is point-in-time
  • Impact: New evidence published later not included
  • Mitigation: Reports include generation timestamp

6. Subjective Claims

  • Current: Best for factual, verifiable claims
  • Impact: Opinion-based claims harder to verify
  • Mitigation: Focus on CRAAP criteria, objective claims

Future Enhancements

  1. Multi-Language Support
    • Integrate translation APIs
    • Support major world languages
    • Cross-language evidence validation
  2. Enhanced Visual Analysis
    • Deeper computer vision integration
    • Object detection and recognition
    • Facial recognition for speaker verification
  3. Real-Time Monitoring
    • Continuous evidence updates
    • Alert system for new contradictory evidence
    • Live verification scores
  4. Community Validation
    • Crowd-sourced evidence submission
    • Expert reviewer network
    • Public challenge system
  5. Automated Fact-Checking
    • Integration with fact-checking databases
    • Automated source verification
    • Claim similarity detection
  6. Enhanced Counter-Intelligence
    • Social media analysis (Twitter, Reddit)
    • Forum scanning (specialized communities)
    • Academic preprint servers
  7. Explainability Improvements
    • Visual probability explanations
    • Interactive evidence exploration
    • Confidence intervals on scores

Conclusion

VerityNgn represents a significant advancement in automated video verification technology. By combining multimodal AI analysis, sophisticated counter-intelligence techniques, and probabilistic reasoning, the system provides transparent, evidence-based truthfulness assessments.

Key Strengths:
  • Comprehensive multimodal analysis
  • Transparent probability calculations
  • Counter-intelligence innovation
  • Evidence-based reasoning
  • Detailed explanations
Ethical Commitment:
  • Transparent methodology
  • Source attribution
  • Probabilistic rather than binary verdicts
  • Acknowledgment of uncertainty
  • Open documentation
The system is designed to augment, not replace, human critical thinking. Users are encouraged to:
  • Review the evidence themselves
  • Consider the probability distributions
  • Understand the limitations
  • Make informed judgments

For Technical Implementation Details, see:
  • ARCHITECTURE.md - System architecture
  • API_DOCUMENTATION.md - API reference
  • DEPLOYMENT_GUIDE.md - Deployment instructions
For Research Papers, see:
  • papers/verityngn_methodology.pdf - Academic paper
  • papers/counter_intelligence_techniques.pdf - CI methods paper
  • papers/probability_model.pdf - Mathematical foundations

This methodology documentation is maintained as part of the VerityNgn open-source project. For updates and contributions, visit the GitHub repository.