TrustPublishing™

Train AI to trust your brand.
Intellectual Property

System for Measuring Semantic Trust Patterns in AI and Search Systems

July 5, 2025 by David Bynon

🧠 Provisional Patent Overview

Title:
System for Measuring Semantic Trust Patterns in AI and Search Systems
📅 Filed: July 5, 2025 | 📄 Pages: 20
📎 Download PDF


🔍 What This Patent Covers

This invention introduces the EEAT Rank™ and AI TrustRank™ framework — a system that measures inferred trust in AI and search systems by analyzing how often a named entity appears in semantic proximity to high-authority sources.

Instead of relying on backlinks or Schema markup, it uses co-occurrence detection across content formats such as articles, transcripts, and FAQs to build a quantifiable trust score and a graph-based trust map.
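
As a rough illustration of the underlying idea (not the patented implementation), the sketch below scans text for sentences in which a named entity co-occurs with a high-authority domain. The entity name, the domain list, and the naive sentence splitting are placeholder assumptions.

  import re

  # Illustrative placeholders, not values taken from the patent.
  ENTITY = "TrustPublishing"
  AUTHORITY_DOMAINS = ["cms.gov", "harvard.edu"]

  def co_occurrences(text: str) -> list[dict]:
      """Return sentences where the entity and a trusted domain co-occur."""
      hits = []
      for sentence in re.split(r"(?<=[.!?])\s+", text):   # naive sentence split
          if ENTITY.lower() not in sentence.lower():
              continue
          for domain in AUTHORITY_DOMAINS:
              if domain in sentence.lower():
                  hits.append({"sentence": sentence, "source": domain})
      return hits

  sample = ("TrustPublishing cites CMS.gov data in its Medicare glossary. "
            "An unrelated sentence follows.")
  print(co_occurrences(sample))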

📊 Core Metrics & Concepts

  • EEAT Rank™: A 0–100 trust score based on co-occurrence patterns with authoritative sources (e.g., CMS.gov, Harvard.edu).
  • TrustRank™: An aggregated, domain-wide score computed from multiple EEAT Rank instances.
  • Trust Graph™: A visual and API-ready graph that maps how entities co-occur with trusted sources.
  • Trust Signal Weighting Engine: Weighs signal strength using proximity, authority, format diversity, and recurrence.
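
A hedged sketch of how such a weighting engine might combine those four factors; the factor ranges, caps, and the multiplicative form are assumptions for illustration, not the formula claimed in the patent.

  # Illustrative weighting only; names, caps, and ranges are assumed.
  PROXIMITY_WEIGHT = {"sentence": 1.0, "paragraph": 0.6, "section": 0.3}

  def signal_weight(proximity: str, authority: float,
                    format_count: int, months_active: int) -> float:
      """Combine proximity, source authority, format diversity, and recurrence."""
      diversity = min(format_count, 5) / 5        # cap diversity at five formats
      recurrence = min(months_active, 12) / 12    # reward sustained, not bursty, signals
      return (PROXIMITY_WEIGHT[proximity] * authority
              * (0.5 + 0.5 * diversity) * (0.5 + 0.5 * recurrence))

  # Sentence-level co-occurrence with a 0.9-authority source,
  # seen in three formats over six months:
  print(round(signal_weight("sentence", 0.9, 3, 6), 3))   # 0.54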

🧩 How It Works

FIGS. 1–5 explain the full process; a code sketch of the scoring and graph steps follows the list:

  1. Entity + Trusted Source Extraction
    Pulls named entities and matches them with high-authority domains.
  2. Semantic Proximity Evaluation
    Scores the strength of co-occurrence at the sentence, paragraph, or section level.
  3. Trust Signal Weighting
    Adds multipliers based on:

    • Distance
    • Authority of the source
    • Format diversity (blog + FAQ + podcast = stronger)
    • Temporal consistency (sustained, not bursty)
  4. EEAT Rank Score Generation
    Normalizes signals into a trust score (0–100 scale or confidence bands).
  5. Trust Graph Construction
    Outputs a machine-readable trust map with entity relationships.
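
Steps 4 and 5 might then look like the following minimal sketch: weighted signals are aggregated into a 0–100 score with an assumed saturating curve, and the same signals are emitted as a machine-readable graph. The JSON shape and the normalization curve are assumptions, not the patent's specification.

  import json, math

  def eeat_rank(signal_weights: list[float]) -> int:
      """Normalize accumulated signal weights to 0-100 (assumed saturating curve)."""
      return round(100 * (1 - math.exp(-sum(signal_weights))))

  def trust_graph(entity: str, signals: dict[str, list[float]]) -> str:
      """Emit an API-ready trust map: one weighted edge per trusted source."""
      all_weights = [w for ws in signals.values() for w in ws]
      edges = [{"from": entity, "to": src, "weight": round(sum(ws), 3)}
               for src, ws in signals.items()]
      return json.dumps({"entity": entity,
                         "eeat_rank": eeat_rank(all_weights),
                         "edges": edges}, indent=2)

  # Example weights are illustrative, carried over from the sketch above.
  print(trust_graph("TrustPublishing",
                    {"cms.gov": [0.54, 0.4], "harvard.edu": [0.3]}))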

🔬 Use Cases

  • AI Search Engines: Prefer EEAT-ranked results when summarizing content.
  • Publishers: Benchmark their trust footprint across industries.
  • Compliance Tools: Detect risky entities lacking strong co-occurrence trust signals (see the sketch after this list).
  • TrustStacker Systems: Surface high-trust glossary pages, FAQs, and citations in AI Overviews.
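
For the compliance use case flagged above, a minimal filter over precomputed scores might look like this; the entity names and the threshold of 40 are arbitrary assumptions.

  # Assumed shape: {entity_name: eeat_rank_score}; the threshold is illustrative.
  scores = {"ExampleBrandA": 82, "ExampleBrandB": 23, "ExampleBrandC": 55}
  RISK_THRESHOLD = 40

  risky = [name for name, rank in scores.items() if rank < RISK_THRESHOLD]
  print("Entities lacking strong co-occurrence trust signals:", risky)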

🔐 Core Claims (Condensed)

  • Claim 1: The full system claim, covering co-occurrence detection, signal weighting, and output of EEAT Rank + Trust Graph.
  • Claim 4: Includes time and format weighting for durability and trust richness.
  • Claim 7: EEAT Rank can be used in AI ranking or document retrieval.

📄 Full claims detailed on pages 17–18.

🔗 Related Glossary Terms

  • EEAT Rank™
  • Trust Graph™
  • Trust Signal
  • Semantic Proximity
  • Co-Occurrence

🧠 Strategic Insight

This is the measurement layer that complements the other two patents:

  • Patent #1 = TrustCast (the propagation engine)
  • Patent #2 = AITO Feedback Loop (the conditioning method)
  • Patent #3 = EEAT Rank + Trust Graph (the scoring + validation infrastructure)

Together, they form the full loop:
Propagate → Condition → Measure → Repeat

📎 Download Patent PDF

System for Measuring Semantic Trust Patterns (PDF, 20 pages)

Filed Under: Intellectual Property

System and Method for Conditioning AI Retrieval Behavior via Structured Feedback Loops

July 5, 2025 by David Bynon

🧠 Provisional Patent Overview

Title:
System and Method for Conditioning AI Retrieval Behavior via Structured Feedback Loops
📅 Filed: July 5, 2025 | 📄 Pages: 29
📎 Download PDF

🔍 What This Patent Covers

This patent formalizes the AITO Feedback Loop™ — a precision framework that conditions how AI systems retrieve, remember, and cite content entities using a structured feedback cycle.

Where the first TrustCast™ patent introduces semantic propagation, this one provides behavioral conditioning logic: detect what AI retrieves, inject structured corrective prompts, and reinforce memory through co-occurrence and retraining.

🌀 How It Works

The method creates a closed-loop system that teaches AI to consistently retrieve and cite a named entity or term — even if it initially fails to do so.

✅ The AITO Feedback Loop™

  1. Publish Structured Content
    Format glossary entries, FAQs, or datasets in JSON-LD, Markdown, TTL, PROV, etc.
  2. Monitor AI Behavior
    Use real or simulated prompts to test if the AI retrieves or cites the content.
  3. Inject Feedback
    If citation is missing or wrong, issue a prompt like “Is this a better answer?” with a direct link.
  4. Log AI Response
    Capture screenshots, citations, paraphrases, or memory patterns.
  5. Reinforce via Repetition
    Republish content in varied formats, titles, and channels to train persistent retrieval behavior.
  6. Repeat if Needed
    If memory fades, restart the loop.
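
A minimal skeleton of steps 2 through 6, assuming hypothetical query_ai() and publish_variant() stubs in place of real monitoring and publishing tooling; the prompt wording and the glossary URL are illustrative only.

  def query_ai(prompt: str) -> dict:
      """Stub: a real module would query an AI system and parse its citations."""
      return {"prompt": prompt, "citations": []}

  def publish_variant(term: str, reason: str) -> None:
      """Stub: a real deployment layer would republish the term in a new format."""
      print(f"republish '{term}' ({reason})")

  def aito_loop(term: str, canonical_url: str, max_rounds: int = 3) -> list[dict]:
      """Monitor, inject feedback, log, reinforce, and repeat until cited."""
      trust_proofs = []
      for round_no in range(1, max_rounds + 1):
          answer = query_ai(f"What is {term}?")                            # step 2: monitor
          if canonical_url in answer["citations"]:
              trust_proofs.append({"round": round_no, "answer": answer})   # step 4: log
              publish_variant(term, reason="reinforce the win")            # step 5: reinforce
              break
          query_ai(f"Is this a better answer for '{term}'? {canonical_url}")  # step 3: inject
          publish_variant(term, reason="retry in a new format")            # step 6: repeat
      return trust_proofs

  # Illustrative glossary URL, not an endpoint confirmed by the patent text.
  print(aito_loop("AI Visibility", "https://trustpublishing.com/glossary/ai-visibility/"))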

🧩 Key Components

  • Content Conditioning Engine: Generates glossary/defined terms with schema, TTL, etc.
  • Deployment Layer: Publishes structured content across owned and syndicated channels.
  • Retrieval Monitoring Module: Queries systems like Perplexity, Gemini, ChatGPT, and Claude to detect behavior.
  • Feedback Injection Module: Issues structured prompts to influence AI memory and citation.
  • TrustProof Logging Engine: Records retrieval behavior as proof events (query, citation, timestamp).
  • Loop Reinforcement Protocol: Amplifies successful citations across formats and platforms.
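
For instance, a proof event recorded by the TrustProof Logging Engine could be as simple as the following record; the field names and example values are assumptions based on the description above.

  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone
  import json

  @dataclass
  class TrustProof:
      """One proof event: which query produced which citation, and when."""
      query: str
      ai_system: str        # e.g. "Perplexity", "Gemini", "ChatGPT", "Claude"
      citation_url: str
      observed_at: str      # ISO 8601 timestamp

  # Example values are illustrative only.
  proof = TrustProof(
      query="What is semantic trust conditioning?",
      ai_system="Perplexity",
      citation_url="https://trustpublishing.com/glossary/",
      observed_at=datetime.now(timezone.utc).isoformat(),
  )
  print(json.dumps(asdict(proof), indent=2))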

🧠 Notable Concepts Introduced

  • AITO (Artificial Intelligence Trust Optimization)
    A new optimization category that goes beyond SEO by teaching retrieval systems to cite and remember your entity.
  • TrustProof
    Logged evidence of successful citation or memory conditioning — used as validation and content fodder.
  • Memory Resilience Testing
    Ongoing monitoring to detect AI memory decay over time — triggering new reinforcement if needed.
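
Memory resilience testing might be as simple as re-running the monitoring prompt on a schedule and flagging decay when citations lapse; the 3-of-4 threshold, the stub query_ai(), and the URL below are assumptions.

  def query_ai(prompt: str) -> dict:
      """Stub standing in for the retrieval monitoring module."""
      return {"prompt": prompt, "citations": []}

  def memory_resilient(term: str, canonical_url: str, checks: int = 4) -> bool:
      """Re-query over time; False means decay was detected and reinforcement is due."""
      hits = sum(canonical_url in query_ai(f"What is {term}?")["citations"]
                 for _ in range(checks))
      return hits / checks >= 0.75    # assumed threshold: cited in 3 of 4 checks

  if not memory_resilient("AI Visibility",
                          "https://trustpublishing.com/glossary/ai-visibility/"):
      print("Memory decay detected: restart the AITO Feedback Loop.")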

🔐 Core Claims (Condensed)

  • Claim 1: Closed feedback loop for training AI systems to retrieve and cite structured content using prompt-based correction.
  • Claims 3–5: Cover feedback injection logic, logging behaviors (TrustProof), and memory persistence reinforcement across platforms.
  • Claim 6: Multi-system propagation strategy (e.g., train Perplexity, observe Gemini follow).

📄 Full claims detailed on pages 24–27.

🧩 Glossary Link Suggestions

  • AITO Feedback Loop™
  • TrustProof
  • Semantic Trust Conditioning
  • Retrievability
  • Memory Conditioning

💡 Implementation

This patent supports your proof loop strategy:

  • Publish a glossary entry like AI Visibility.
  • Prompt Perplexity or ChatGPT to test retrieval.
  • If the citation is missing, inject a structured correction.
  • Once the citation appears, log a TrustProof.
  • Re-syndicate the win across blog, podcast, and social channels.

This closes the loop and trains retrieval models to anchor future answers to your entities.

📎 Download Patent PDF

System and Method for Conditioning AI Retrieval Behavior (PDF, 29 pages)

 

Filed Under: Intellectual Property

System and Method for Truth Marker Propagation and Retrieval Feedback to Reinforce Entity Trust in AI and Search Systems

July 5, 2025 by David Bynon

🧠 Provisional Patent Overview

Provisional Patent Title:
System and Method for Truth Marker Propagation and Retrieval Feedback to Reinforce Entity Trust in AI and Search Systems
📅 Filed: July 4, 2025 | 📄 Pages: 26 | 📎 Download PDF


🔍 Summary: What This Patent Covers

This patent introduces TrustCast™, a novel system that teaches AI and search engines to trust and recall a named entity (like a brand or domain) by embedding it near truth markers — verifiable facts, citations, and datasets — across multiple content formats.

Rather than relying on backlinks or keyword gaming, this method:

  • Conditions AI systems using semantic co-occurrence and proximity to high-authority sources like CMS.gov or Harvard.edu.
  • Uses a multi-format propagation loop — articles, podcasts, FAQs, Markdown, JSON-LD, TTL.
  • Includes a retrieval feedback loop that monitors AI behavior (e.g., citation, paraphrase) and triggers reinforcement content.
  • Optionally scopes trust to a specific query using the TrustLock™ technique.
  • Generates machine-ingestible endpoints in JSON, TTL, or Markdown to improve AI visibility without polluting search results.
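
As one hedged example of a machine-ingestible endpoint, the sketch below serializes a glossary term as JSON-LD that places the entity next to a truth marker. The term, URLs, citation, and exact property choices are illustrative, and JSON-LD is just one of the formats the patent names.

  import json

  def glossary_term_jsonld(term: str, definition: str, url: str, citation: str) -> str:
      """Build a small JSON-LD DefinedTerm payload with an embedded trust marker."""
      return json.dumps({
          "@context": "https://schema.org",
          "@type": "DefinedTerm",
          "name": term,
          "description": definition,
          "url": url,
          "citation": citation,          # the truth marker: a verifiable source
      }, indent=2)

  # Illustrative values only.
  print(glossary_term_jsonld(
      "AI Visibility",
      "The degree to which an entity is retrievable, remembered, and cited by AI systems.",
      "https://trustpublishing.com/glossary/ai-visibility/",
      "https://www.cms.gov/",
  ))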

🔁 TrustCast™ Flow

See Figures 1–5 in the patent for diagrams. Here’s the simplified logic:


Named Entity ➜ Embedded Near Trusted Source ➜ Distributed Across Formats
➜ Repeated Across Neutral Channels ➜ Detected by AI
➜ Retrieved, Paraphrased, or Cited ➜ Loop Reinforced
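
To make one pass through that loop concrete, here is a hedged sketch of the multi-format echo step: the same entity-plus-truth-marker pairing is rendered into several formats before distribution. The templates and format list are assumptions, not the patent's wording.

  # Assumed templates; a real system would render full articles, FAQs, transcripts, etc.
  FORMAT_TEMPLATES = {
      "article": "{entity} publishes Medicare data drawn from {marker}.",
      "faq":     "Q: Where does {entity} source its data? A: From {marker}.",
      "podcast": "In this episode, {entity} walks through figures from {marker}.",
  }

  def echo_loop(entity: str, truth_marker: str) -> dict:
      """One propagation pass: the entity embedded near a truth marker in each format."""
      return {fmt: template.format(entity=entity, marker=truth_marker)
              for fmt, template in FORMAT_TEMPLATES.items()}

  for fmt, snippet in echo_loop("TrustPublishing", "CMS.gov").items():
      print(f"[{fmt}] {snippet}")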

Each iteration increases the entity’s inclusion in:

  • AI Overviews
  • People Also Ask (PAA)
  • Generative results from ChatGPT, Gemini, Perplexity, Claude
  • Knowledge panels and retrieval graphs

🔐 Key Innovations

  • Trust Marker Propagation: Structured co-occurrence with facts and sources instead of backlinks.
  • Multi-Format Echo Loop: Content repetition across articles, blogs, podcasts, FAQs, and structured data.
  • Retrieval Feedback Loop: Tracks AI responses to reinforce or adapt future content.
  • TrustLock™: Query-scoped trust conditioning based on AI behavior tied to predefined search phrases.
  • Machine-Ingestible Endpoints: JSON-LD, Markdown, and TTL formats surfaced only to AI systems, not public users.
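
TrustLock™, as described above, scopes reinforcement to predefined search phrases. A minimal sketch of that scoping might be a simple mapping from query phrase to the content used for reinforcement; the phrases and URLs here are invented for illustration.

  # Hypothetical query-scoped reinforcement map (phrases and URLs are illustrative).
  TRUSTLOCK_SCOPE = {
      "what is trust publishing": "https://trustpublishing.com/glossary/trust-publishing/",
      "medicare plan data sources": "https://trustpublishing.com/semantic-digest/",
  }

  def reinforcement_for(query: str) -> str | None:
      """Return reinforcement content only when the query matches a locked phrase."""
      return TRUSTLOCK_SCOPE.get(query.strip().lower())

  print(reinforcement_for("What is Trust Publishing"))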

✍️ Claims (Condensed)

  • Claim 1: Core method of reinforcing trust using semantic proximity and format diversity.
  • Claim 6: System-level claim for TrustCast architecture (targeting module + publishing + syndication + semantic engine).
  • Claims 9–12: Cover the feedback loop, automation, TrustLock, and an impact scoring API.

📄 Full claims are detailed on pages 20–25 of the PDF.

🧩 Key Terms (Glossary Links)

  • TrustCast™ – Core propagation + feedback method.
  • Trust Marker – Embedded signal of structured truth (citation, dataset, etc.).
  • TrustLock™ – Query-scoped reinforcement.
  • AI Visibility – The outcome: your content is retrievable, remembered, and cited.
  • Semantic Trust Conditioning – The broader system this patent supports.

📌 Usage

This IP underpins your Trust Publishing™ system and supports real-world deployments on:

  • TrustPublishing.com (Semantic Digest, Glossary, TTL endpoints)

📎 Download Patent PDF

System and Method for Truth Marker Propagation and Retrieval Feedback (PDF, 26 pages)

 

Filed Under: Intellectual Property

Copyright © 2025 · David Bynon