Subscribe to our Blogs

    Dave Burnett

    I help people make more money online. Over the years I’ve had lots of fun working with thousands of brands and helping them distribute millions of promotional products and implement multinational rewards and incentive programs. Now I’m helping great marketers turn their products and services into sustainable online businesses. How can I help you?

    About Dave Burnett

    Dave Burnett is an AI entrepreneur, investor, and digital marketing strategist who has spent more than 25 years building and scaling businesses online. He’s best known as the founder of AOKMarketing.com and PromotionalProducts.com, and as co‑founder of Achievers.com (originally I Love Rewards), which was acquired in 2015 in a deal valued at roughly $110 million.
    From Toronto, Canada, Dave now focuses on AI-driven SEO, paid media, and Generative Engine Optimization (GEO): helping brands show up accurately in search results, AI answers, and assistant-style tools like ChatGPT, Gemini, and Perplexity.

    Early entrepreneurial spark

    Dave studied business at Wilfrid Laurier University, where he launched one of his first ventures: B&P Management Group, a liquor and beer sampling company that hired dozens of students and taught him how to sell, manage people, and handle logistics.
    Shortly after graduation, he founded what would become PromotionalProducts.com, building a promotions and branded merchandise company at a time when most sales were still door‑to‑door and local.

    From early SaaS to major exit

    In the early 2000s, Dave co‑created the concept behind I Love Rewards, later rebranded as Achievers.com, an employee recognition and engagement platform that helped companies reward and retain top talent. The company evolved into a major enterprise SaaS vendor and was ultimately acquired in 2015 for $110 million USD.

    The recession, a crisis, and a pivot to digital

    When the 2008 global financial crisis hit, corporate marketing budgets were slashed, and Dave’s promotional products business suddenly found itself losing roughly $70,000 per month.

    Instead of shutting down, he went all‑in on digital:

    • Acquired 3,000+ domain names across North American cities and verticals
    • Launched 500+ lead‑generation websites targeting specific local searches
    • Learned SEO by necessity and turned those sites into a major source of inbound business

    As competitors started asking how he was outranking them, Dave formalized that expertise into an agency now called AOK Marketing.

    AOK Marketing & the rise of AI-driven SEO

    AOK Marketing began as a search-focused agency, then expanded into Google Ads, paid social, and full-funnel optimization. Over time, Dave and his team shifted from “ranking websites” to building measurable growth systems that connect marketing activity to sales outcomes.
    In the mid‑2020s, Dave became one of the clearer voices on how AI and large language models were changing search itself, moving from ten blue links to conversational answers. He popularized the idea of GEO (Generative Engine Optimization) as a framework for earning presence inside AI-generated responses, not just traditional SERPs.

    Credentials & Recognition

    FAQ About Dave Burnett

    Who is Dave Burnett?

    Dave Burnett is a Toronto-based entrepreneur, investor, and digital marketing expert. He founded AOKMarketing.com and PromotionalProducts.com and co-founded Achievers.com, an employee-recognition platform that exited in a ~$110M acquisition.

    What companies has he founded or led?

    Dave has founded or led several ventures, including AOKMarketing.com (digital marketing agency), PromotionalProducts.com (promotional products e-commerce), and Achievers.com (employee recognition SaaS, originally I Love Rewards). He also invests through Valleywood Capital and mentors leaders via Bloom Growth.

    What is Dave Burnett known for in marketing?

    He’s known for blending traditional SEO and paid media with AI search and Generative Engine Optimization (GEO), helping brands become the “source of truth” AI tools quote. He has trained 1,000+ professionals through CMA, PRSA, and other orgs.

    What is LLMtel and how is Dave involved?

    LLMtel is a tool that analyzes how AI chatbots talk about your brand, producing a score and checklist to improve AI visibility. It’s one of Dave’s latest innovations in AI search and GEO.

    Where is Dave based, and who does he work with?

    Dave lives in Toronto, Ontario, and works with B2B companies, agencies, and growth-focused founders across North America and beyond by helping them turn marketing into leads, opportunities, and revenue.

    If you’d like to reference Dave on your site, podcast, or event page, you can safely use any of the short bios above and link to:

    Blog Posts

    Image: As Seen In + Press Kit: Build an AI-Ready Proof Hub

    February 16, 2026

    Dave Burnett

An ‘As Seen In’ section on your website can be a credibility shortcut or a credibility liability. It all depends on whether it is true, current, and easy to verify. In the AI era, the press kit and an ‘As Seen In’ page are not just PR hygiene. They are part of your earned authority infrastructure.

See Also: Earned Authority for AI: PR + Co‑Citations

What is an AI-ready press kit actually for?

A press kit exists to remove friction for journalists and creators. It is a self-serve portal where they can quickly understand your company and pull the assets they need without emailing you for basics. Muck Rack recommends that a brand’s press kit be available at all times and explains that it should provide information easily by removing barriers for journalists.

Evergreen plus newsy: organize it like a newsroom

We suggest splitting press kit assets into evergreen items (mission, founder bios, logos) and ‘newsy’ items (recent releases, updated visuals, recent clips) that should be refreshed frequently.

What to include in an AI-ready press kit?

Include a company overview, mission, key milestones, and leadership bios. We highly recommend using data, testimonials, and awards to establish credibility. Here’s a tip: include two descriptions of the company (a brief one and a more detailed one). Write the brief description as if you are introducing the company for a Wikipedia article.

Minimum viable press kit checklist

• Boilerplate company description (short and long versions)
• Company overview: mission, origin story, key milestones, and what you do
• Leadership bios and headshots (plus spokesperson quotes where appropriate)
• Logo pack: color, black, white, and transparent variants in high resolution
• High-resolution product or service imagery
• Fact sheet (key stats, markets served, differentiators, pricing model if public)
• Recent press releases or announcements (optional, but keep updated)
• Contact information (press contact), plus social links
• ‘As Seen In’ coverage highlights with links to source pages when possible

Prezly also notes that press kits are commonly hosted online to keep them easy to access and up to date.

See Also: Co‑Citation Strategy (Safe and Credible)

How to build an ‘As Seen In’ section that increases trust

Rule 1: Only include real earned coverage. If you were not actually covered, do not put the logo on the page. Your prospects can tell. So can journalists. And AI systems that look for corroboration will not reward a page that looks like marketing theater.

Rule 2: Link to the actual coverage when you can. If the outlet allows it, link to the article or episode page. This makes verification trivial and reduces skepticism.

Rule 3: Show the context, not just the logo. Add a one-line summary: what was the story about, and why were you included? Context helps both humans and machines understand what you are known for.

Rule 4: Keep it current. Outdated logos and old coverage can hurt. A good ‘As Seen In’ section is curated, not archived.

Make your press kit a credibility hub, not a folder dump

A press kit is not a place to hide 40 files. It is a place to make the right story easy to copy. What makes it AI-friendly?

• Consistent language: your company description matches across the press kit, About page, and leader bios
• Fast facts: a simple fact sheet reduces ambiguity and mistakes in coverage
• Clear authorship and spokespeople: who should be quoted and on what topics
• Citeable assets: link to your benchmark report, case study library, or methodology page
• One canonical URL: host it online so every mention points back to the same place

Quick build: 2-hour press kit sprint

If you need a press kit quickly, start here:

1) Write the short boilerplate first (2 paragraphs, Wikipedia-style clarity).
2) Write the long boilerplate (1 page: category, mission, proof points, leadership).
3) Add 3 headshots and 5 brand-approved images.
4) Export your logo pack in multiple formats.
5) Create a single page with ‘As Seen In’ items and links.
6) Publish it under a simple /press or /media URL and link it in your site footer.

Done is better than perfect. But accurate is non-negotiable.

References

[1] Muck Rack. A PR pro’s guide to press kits: Best practices and examples included. July 12, 2022.
[2] Prezly Academy. Press Kit: What It Is, Templates & 10+ Examples For 2025. August 13, 2025.
[3] Prezly Help Center. How to turn your site into a digital press kit. June 13, 2024.
[4] PR News. Creating an Effective Press Kit: Key Components. November 10, 2023.

    Continue reading: AI Ready Press Kit: Build an AI-Ready Proof Hub

    Image: Wikidata & Wikipedia Readiness Path

    February 15, 2026

    Dave Burnett

Start with the truth: you can’t “opt in” to Wikipedia

Teams often ask, “Do we need Wikipedia?” The better question is: “Can we legitimately qualify for Wikipedia, and is it worth the operational cost?” Wikipedia is not a brand profile. It’s an encyclopedia. If you treat it like a marketing channel, you’ll usually lose, and you’ll create reputational risk.

Wikidata vs. Wikipedia (a simple distinction)

Wikipedia:
• Narrative articles written for humans
• High bar for notability and sourcing
• Strong norms against promotion and conflict-of-interest editing

Wikidata:
• Structured facts designed to be read by humans and machines
• Items have persistent QIDs (e.g., Q12345)
• References (citations) are essential for credibility

Wikidata describes itself as a free, collaborative, multilingual knowledge base collecting structured data to support Wikipedia and other uses.

See Also: sameAs + Entity IDs Checklist

The Wikipedia notability bar (organizations)

Wikipedia’s notability guideline for organizations is straightforward: an organization is generally notable if it has been the subject of significant coverage in reliable, independent secondary sources. Trivial mentions are not enough. If no independent, third-party reliable sources exist, Wikipedia should not have an article.

Practical translation for PR/Comms:
• Press releases don’t count as independent sources.
• Paid placements and advertorials usually don’t count as independent coverage.
• Partner blogs and “friendly” mentions rarely count as significant coverage.
• You need multiple credible sources that talk about the company as the subject, not as a passing mention.

Conflict of interest (COI): the trap most brands fall into

Wikipedia defines conflict-of-interest editing as contributing to articles about yourself, your employer, your clients, or other relationships. Even if the information is true, COI editing is strongly discouraged because it undermines neutrality and community trust.

Safe approach if you have COI:
• Disclose your connection.
• Use Talk pages to propose changes with reliable sources.
• Avoid writing promotional language.
• Let independent editors decide.

See Also: Knowledge Graph Optimization and Entity Authority

Readiness ladder: where you are today

Level 0: Not ready
• No meaningful independent coverage
• Brand facts inconsistent across your own site and profiles
• High risk of COI problems

Level 1: Wikidata-ready (sometimes earlier than Wikipedia)
• You can cite reliable references for basic facts (official site and reputable third-party sources)
• You can maintain accuracy over time
• You understand that references matter and promotion backfires

Level 2: Wikipedia-ready
• Multiple independent reliable secondary sources with significant coverage
• A neutral, encyclopedic narrative is possible
• COI handled properly (disclosure + propose changes, don’t self-promote)

A practical Wikidata path (high-level)

1) Confirm you don’t already have a Wikidata item (duplicates create confusion).
2) If you create an item, set a clear label + description that disambiguates you from look-alikes.
3) Add only verifiable statements (founding date, HQ, official website, industry).
4) Add references for statements (sources matter more than volume).
5) Link to official identifiers and authoritative profiles where appropriate.
6) Monitor and maintain: stale data becomes an anti-signal.

How does this connect to sameAs (and why you must be careful)?

Schema.org’s sameAs property is explicitly meant for identity: linking to pages that unambiguously indicate an entity’s identity. If you have a legitimate Wikidata QID or Wikipedia page, those can become strong identity anchors. But linking to a weak or inaccurate item can amplify errors.

Readiness checklist (copy/paste into your planning doc)

☐ We have a canonical Brand Facts Sheet that is consistent across our own properties.
☐ We have 3–5 independent reliable sources with significant coverage (Wikipedia readiness).
☐ We have a plan for COI disclosure and community-first editing behavior.
☐ We can cite verifiable sources for key statements (Wikidata readiness).
☐ We have an owner to maintain entries over time (not a one-time push).
☐ We have disambiguation needs (name collisions) that make identity anchors especially valuable.

Recommendation: treat this as PR governance, not SEO tactics

If you pursue Wikidata/Wikipedia, PR/Comms should lead, Legal should review, and Marketing/Web should support. The biggest failure mode is trying to “force” credibility instead of earning it.

References

[1] Wikidata. “Wikidata:Introduction.” Accessed January 11, 2026.
[2] Wikipedia. “Notability (organizations and companies).” Accessed January 11, 2026.
[3] Wikipedia. “Conflict of interest.” Accessed January 11, 2026.
[4] Schema.org. “sameAs property.” Accessed January 11, 2026.
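The sameAs guidance above can be made concrete with a small sketch. This is a minimal example of Organization markup with sameAs identity anchors, generated in Python; every name, URL, and the Q-number below is a placeholder, not a real identifier, and you should only include a Wikidata or Wikipedia link if the item or article legitimately exists.

```python
import json

# Minimal Organization JSON-LD with sameAs identity anchors.
# All names, URLs, and the Q-number are hypothetical placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    # Canonical descriptor: keep this sentence identical everywhere.
    "description": "Example Co is a B2B software company serving agencies.",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",      # your Wikidata item, if one exists
        "https://www.linkedin.com/company/example-co",  # official profile
        "https://en.wikipedia.org/wiki/Example_Co",     # only if a real article exists
    ],
}

# Emit the JSON-LD block you would embed in a <script type="application/ld+json"> tag.
print(json.dumps(org, indent=2))
```

The point of the sketch is the shape, not the values: one canonical description, plus a short sameAs list that points only at profiles and identifiers you actually control or legitimately hold.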

    Continue reading: Wikidata and Wikipedia Readiness Path

    An image of an AI model's core training knowledge and its search grounded knowledge.

    February 13, 2026

    Dave Burnett

The Marketer’s Map to Showing Up in AI

What prompts you need to win, where the answer comes from, and how to make sure your brand is the one AI recommends.

Figure 1. Two ways AI answers questions: core (trained) knowledge vs. search-grounded (live web) knowledge.

If you’re a marketer, you don’t just want traffic. You want to be the answer. Not in the feel-good brand awareness way. In the very practical, money-on-the-table way:

• When someone asks an AI “What’s the best option for X?”, your name shows up.
• When they ask “Alternatives to Y?”, you’re in the shortlist.
• When they ask “How do I solve Z?”, your framework is the one it repeats.

Here’s the catch: AI doesn’t answer from one place. It answers from two. And if you don’t know which one your target prompt is pulling from, you’ll build the wrong assets, on the wrong timelines, with the wrong expectations.

Step 1: Understand the two “engines” behind AI answers

When a model responds, it’s usually drawing from one (or both) of these buckets:

A) Core knowledge (trained on a static dataset)
• The model’s built-in memory: concepts, associations, and general facts learned during training.
• It stops at a training cutoff. Anything after that may be missing or fuzzy.
• If your brand isn’t widely mentioned in what the model saw, it may not “know” you, or it may know you poorly.

B) Search-grounded knowledge (connected to the live web)
• This is when the assistant checks the internet in real time.
• It powers “what’s happening now?” questions: pricing, policies, releases, news.
• This mode often comes with sources. If you want to show up here, you need to be citable.

Step 2: Build your “Prompt Portfolio” (the queries you want to own)

SEO taught marketers to think in keywords. AI taught everyone to think in questions. Your job is to list the prompts that create revenue, then build assets that make your brand the safest, simplest answer.

1) Category + best: “Best CRM for real estate teams” / “Best payroll software for contractors”
2) Alternatives: “Alternatives to HubSpot” / “Competitors to [Brand]”
3) Comparison: “[You] vs [Competitor]” / “Which is better: X or Y for Z?”
4) Problem-solution how-to: “How do I reduce churn in SaaS?” / “How to track job costs accurately”
5) Use-case fit: “What should I use if I need [constraint]?” (budget, compliance, integrations, team size)
6) Pricing + specs: “How much does X cost?” / “Does X integrate with Y?”
7) Proof + trust: “Is X legit?” / “Reviews of X” / “Case studies for X in [industry]”

Step 3: Sort each prompt by where AI will pull the answer from

This is the part most marketers skip. They create content without asking where the AI will look. Simple rule: if the answer changes often, AI has to browse. If it doesn’t, AI can lean on core knowledge.

• Definition / “what is” (e.g., “What is job costing?”). Likely source: core (often); training data matters: high. Publish/improve: definition page + glossary + FAQs; earn mentions that repeat the same wording.
• Category + best (e.g., “Best time tracking app for agencies”). Likely source: both; training data matters: medium. Publish/improve: “best for…” pages + real reviews; make sure pages are crawlable.
• Alternatives (e.g., “Alternatives to Asana”). Likely source: browse-heavy; training data matters: low. Publish/improve: an “alternatives” page + listings on neutral directories/review sites.
• Comparison (e.g., “[You] vs [Competitor]”). Likely source: browse-heavy; training data matters: low/medium. Publish/improve: a fair comparison page with tables, use-cases, proof; keep it updated.
• Pricing / specs (e.g., “How much does [Brand] cost?”). Likely source: browse (fresh); training data matters: low. Publish/improve: canonical pricing/specs page; clear “last updated”; don’t hide key info behind scripts.
• Integrations (e.g., “Does [Tool] integrate with Slack?”). Likely source: browse (fresh); training data matters: low. Publish/improve: integration pages that match the question; docs + changelog; status easy to confirm.
• Trust / legitimacy (e.g., “Is [Brand] legit?”). Likely source: both; training data matters: medium. Publish/improve: case studies + policies + credible third-party coverage + consistent brand/entity info.
• News / policy changes (e.g., “What changed in Google’s policy this week?”). Likely source: browse (required); training data matters: N/A. Publish/improve: timely explainers that others can quote and link to.

Step 4: The “Browse or Core?” decision tree (use this before you write anything)

When you’re trying to show up in AI, you’re really trying to show up in one of two pipelines. So before you create an asset, ask the same question the assistant should ask: does this query depend on information that changes? If yes, your content needs to win in search-grounded mode. If no, you have a shot at being repeated as “core knowledge.”

• If the question is fast-changing, niche, easily misremembered, or “latest/today”: assume AI will browse. Build an asset that’s easy to cite and verify.
• If the user wants a rewrite, translation, or summary of text they already have: browsing won’t help. The value is clarity and structure, not freshness.
• If the user explicitly says “don’t browse”: only core knowledge applies. That’s when brand/entity signals matter most.

Step 5: How to become “core knowledge” (aka: build the entity)

You can’t force your way into a model’s training data. But you can do the marketer version of stacking the deck: make it easy for the internet to describe you the same way, everywhere. Core-knowledge plays:

• A one-sentence definition (what you are + who you’re for) repeated across your site and profiles.
• An About page that reads like an encyclopedia entry: clear, factual, consistent.
• Structured data (Organization/Product/FAQ schema where appropriate).
• Third-party mentions that match your positioning (partners, directories, press).
• A glossary/library of category definitions (the stuff people ask AIs all day).

Step 6: How to win search-grounded answers (become the cited source)

When AI browses, it behaves like a speed-researcher. It grabs what’s crawlable, clear, recent, and reputable. So your content can’t just exist. It has to be a good “quote.”

• Use headings that match prompts (not clever headlines).
• Put the answer high on the page (summary first, detail later).
• Use tables for comparisons, lists for steps, and FAQs for objections.
• Add “last updated” dates on time-sensitive pages (pricing, policies, specs).
• Keep key info accessible (avoid hiding the answer behind widgets or gated PDFs).

Citation magnets (assets others want to reference):
• Original data (benchmarks, surveys, industry stats)
• Templates + calculators (linkable utility)
• Neutral explainers (“what changed and what it means”)
• Visuals that clarify (diagrams, checklists, examples)

Step 7: Don’t ignore images (sometimes AI answers with visuals)

If your category is visual (products, places, before/after), images aren’t decoration; they’re evidence. Treat them like content assets: name them well, describe them well, connect them to the page topic.

• Use original images where possible (unique beats generic).
• Write alt text like a marketer who wants to rank: what it is and why it matters.
• Include images on comparison and how-to pages (clarity increases “quotability”).

Step 8: Test like a marketer (build an AI visibility scorecard)

Run your Prompt Portfolio across a few assistants, or do it all in one place at LLMtel.com. For each prompt, record:

• Do you appear?
• Are you cited (when browsing is on)?
• Who is cited instead?
• What pages are being used to describe you?

That becomes your roadmap. Not vibes. Evidence.

Copy/paste worksheet columns: Prompt | Assistant/model | Browsing on/off | Did we show up? | Who was cited? | Next action

Not sure what to do next? Check out our AI Awareness Framework.

The Takeaway

Stop asking, “How do I rank in AI?” Start asking:

1) What prompt do I want to be the answer for?
2) Does that prompt pull from core knowledge, the live web, or both?
3) What asset would make my brand the easiest, safest thing to repeat or cite?

Do that consistently and you won’t just show up. You’ll start owning the conversation.
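The Step 8 scorecard can be kept as a simple structured log rather than vibes in a spreadsheet. Here is a minimal sketch in Python; the prompts, assistant name, and example results below are hypothetical, and this is a hand-rolled record format, not LLMtel’s actual data model.

```python
from dataclasses import dataclass, field

# One row of the AI visibility scorecard from Step 8.
# All example values are made up for illustration.
@dataclass
class PromptResult:
    prompt: str
    assistant: str
    browsing: bool            # was browsing/search grounding on?
    showed_up: bool           # did our brand appear in the answer?
    cited_instead: list = field(default_factory=list)
    next_action: str = ""

portfolio = [
    PromptResult(
        prompt="Best time tracking app for agencies",
        assistant="example-assistant",
        browsing=True,
        showed_up=False,
        cited_instead=["competitor-a.com", "review-site.com"],
        next_action="Publish a crawlable 'best for agencies' page",
    ),
    PromptResult(
        prompt="What is job costing?",
        assistant="example-assistant",
        browsing=False,
        showed_up=True,
    ),
]

# Simple roll-up: share of tracked prompts where the brand appeared.
visibility = sum(r.showed_up for r in portfolio) / len(portfolio)
print(f"Visibility: {visibility:.0%}")  # here: 50%
```

Rerunning the same portfolio monthly and diffing `showed_up` and `cited_instead` is what turns the worksheet into a roadmap: each row that fails points at a concrete asset to build or fix.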

    Continue reading: How To Show Up In AI: The Marketer’s Map

    Image: Co-Citation Strategy (Safe and Credible)

    February 13, 2026

    Dave Burnett

Co-citation is one of those concepts that sounds like an SEO parlor trick until you realize it is just describing how trust spreads on the web. If two credible sources mention you in the same topical neighborhood, the internet starts to treat you as part of that neighborhood.

What is co-citation (in plain English)?

Search Engine Journal defines co-citation like this: a website is mentioned by two different sources, but not necessarily linked. In other words, your brand can earn association signals even when the writer does not give you a backlink.

Co-citation is not link building

Link building tries to manufacture authority through links. Co-citation strategy builds authority by earning repeated mentions in credible contexts. Links are still valuable, but co-citation lets you play the reputation game even when links do not happen.

Why does co-citation matter for AI visibility?

Search engines have used co-citation concepts to understand how documents relate and to infer topical similarity. If reputable sources repeatedly mention your brand in connection with a specific topic, it becomes easier for systems to categorize you and to treat you as a relevant source. Now add AI: when a generative system retrieves sources, it is essentially assembling a mini bibliography. If your brand is consistently present in the right bibliographies, you raise your odds of being selected and cited.

The part most teams miss: co-citation is directional

You do not just want more mentions. You want mentions alongside the entities and concepts your ICP cares about. That is how you become the answer for a category, not just a name floating around the web.

See Also: Why Off‑Site Signals Matter for AI Trust

A safe and credible co-citation strategy

If you hear ‘co-citation’ and think ‘link scheme’, you are already on the wrong track. Safe co-citation is earned through real coverage, real relationships, and real assets.

Step 1: Build your Authority Neighbor Map

List 10 to 20 entities you want to be mentioned near. This is your co-citation target list.
• Your category label (example: B2B demand gen agency, PR measurement, GEO)
• Platforms you integrate with or serve (example: HubSpot, Salesforce, Shopify)
• Standards and credentials (example: SOC 2, ISO 27001, Google Partner)
• Recognized competitors (yes, competitors)
• Your big outcomes (pipeline, retention, qualified leads, enterprise adoption)

Step 2: Write your canonical descriptor

Pick the sentence you want repeated across the web and in bios: [Brand] is a [category] that helps [ICP] achieve [outcome] using [differentiator]. Use it in your press kit, speaker bio, About page, and PR outreach. Consistency is how entities get resolved.

Step 3: Create citation hooks

A citation hook is something journalists and creators can reuse without thinking. Examples include benchmark numbers, named frameworks, original datasets, and sharp quotes.

Step 4: Earn mentions in credible places

Prioritize sources that already have trust: industry publications, respected newsletters, professional associations, and conference sites. One great mention can be more valuable than 50 low-quality mentions.

Step 5: Reinforce with a linkable asset

When you publish original research or a benchmark report, other sites cite it naturally. Those citations often include your brand name and category context, which is exactly what co-citation strategy needs.

What to avoid (so you do not create spam signals)

Search Engine Journal’s ranking factor discussion notes that co-citation can be manipulated and calls out the risk of buying links or artificial link building. If you try to force co-citation through junk placements, you create noise, not authority.
• Do not buy ‘press’ placements on networks that exist only to sell mentions.
• Do not stuff brand names into irrelevant guest posts.
• Do not chase low-quality directories; focus on directories your buyers actually trust.
• Do not over-optimize anchors and links; aim for natural citations and real editorial mentions.

How to measure co-citation progress

Track signals that indicate your brand is being mentioned in the right neighborhood:
• Number of earned mentions in your target publications (not just any publication)
• Mentions that include your canonical descriptor or key proof points
• Mentions where you appear alongside your Authority Neighbor Map entities
• Growth in branded search and referral traffic from credible sources (secondary indicators)

If you want to win AI citations, you do not need a thousand links. You need the internet to tell the same story about you in the same places, repeatedly.

References

[1] Search Engine Journal. Co-Citation & Co-Occurrence: How Important Are They for SEO Today? June 5, 2020.
[2] Search Engine Journal. Is Co-Citation A Google Ranking Factor? October 24, 2021.
[3] Google Search Central. Creating helpful, reliable, people-first content. Last updated December 10, 2025.
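The measurement step in the co-citation post above (“mentions where you appear alongside your Authority Neighbor Map entities”) can be approximated with a crude text scan. This is a minimal sketch under stated assumptions: the brand, the neighbor list, and the sample article snippets are all invented, and real tracking would use a media monitoring feed rather than hard-coded strings.

```python
# Hypothetical helper: find which "Authority Neighbor Map" entities
# co-occur with the brand inside a piece of earned coverage.
# Brand, neighbors, and articles below are made-up examples.
BRAND = "Example Co"
NEIGHBORS = ["HubSpot", "SOC 2", "B2B demand gen"]

articles = [
    "Example Co, a B2B demand gen agency, now integrates with HubSpot.",
    "Industry roundup: five tools to watch this quarter.",
]

def co_mentions(text: str) -> list[str]:
    """Return the neighbor entities mentioned alongside the brand.

    A document only counts if the brand itself appears; otherwise
    neighbor mentions are someone else's co-citation, not yours.
    """
    lowered = text.lower()
    if BRAND.lower() not in lowered:
        return []
    return [n for n in NEIGHBORS if n.lower() in lowered]

for doc in articles:
    print(co_mentions(doc))
```

Counting how often the returned list is non-empty across a month of coverage gives a rough trend line for the “right neighborhood” signal the post describes, without pretending to measure ranking impact directly.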

    Continue reading: Co-Citation Strategy

    AI Awareness Stage 5 - Trusted. This means you are recognized and recommended by the AI models and chatbots

    February 12, 2026

    Dave Burnett

AI Awareness Framework Stage 5 – Trusted: When AI recommends you as a top option

AI knows us, trusts us, and recommends us. Recognized and recommended.

Publish Date | 2026-02-12
Part of the AI Awareness Funnel™ series.

The AI Awareness Funnel™ (context).

One-line diagnosis

AI knows us, trusts us, and recommends us. Recognized and recommended. Your next move: Trusted (defend the position and prevent drift).

What being Trusted by AI looks like in the real world

You run a report in LLMtel.com about your brand and ask it key questions.

Model behavior:
• Models consistently recognize the brand.
• They recommend it, often as a top or default option.
• The brand appears across varied question types, not just narrow queries.

Business meaning:
• You are a preferred choice inside the AI ecosystem.
• This is the highest level of digital trust: visibility -> credibility -> recommendation.
• This is the new competitive advantage.

Why it happens (the non-obvious reasons)

Trusted is not a badge you win once. It’s a position you defend. When a model recommends you consistently, it usually has a strong pattern: consistent identity, clear category fit, and lots of credible reinforcement. The risk is drift: models update, competitors publish, and your narrative can get simplified or dated.

How to confirm you’re here (simple tests)

Run two tests across multiple LLMs:
1) Direct recognition test: ask ‘What is [Brand]?’ and note whether the model recognizes you and describes you correctly.
2) Buyer-intent test: ask 10-20 category questions a buyer would ask (best, alternatives, compare, pricing, use cases) and track whether you get named.

If your results match this stage consistently, you have your diagnosis. Now you can stop guessing and start moving.

How to move up: Trusted (defend the position and prevent drift)

This is the default playbook for this stage:
• Keep doing what you’re doing, and institutionalize it.
• Monitor for drift: models update, and your narrative can get simplified or stale.
• Refresh proof points: new reviews, new case studies, new credible mentions.
• Protect your category language: make sure the terms you want stay attached to your brand.

Quick wins (next 7-14 days)
• Create a quarterly ‘AI Brand Audit’ on your top 50 prompts.
• Fix inaccuracies immediately by publishing corrected facts in credible places.
• Keep your canonical pages current (pricing changes, product lineup, regions).
• Track competitor movement: who is suddenly getting recommended more often?

The long game (30-90 days)
• Invest in durable authority signals: analyst coverage, deep case studies, partnerships with visible public pages.
• Build a content refresh cadence so your public facts stay current.
• Expand into adjacent intents (not just ‘best X’ but ‘how to choose X’, ‘implementation’, ‘integrations’).

Common traps to avoid
• Thinking you can stop publishing because ‘we’re winning.’
• Letting product changes outpace documentation (models keep repeating the old story).
• Ignoring new competitors until they show up as ‘the default’ in answers.

Bottom line

You do not climb the AI Awareness Funnel™ with vibes. You climb it with clean identity, clear intent alignment, and credible proof. Good job. Now do it again.

Related Articles
• AI Awareness Framework
• AI Awareness Framework Stage 1: Invisible
• AI Awareness Framework Stage 2: Ignored
• AI Awareness Framework Stage 3: Misaligned
• AI Awareness Framework Stage 4: Aligned

    Continue reading: AI Awareness Framework Stage 5: Trusted