Subscribe to our Blogs

    Dave Burnett

    I help people make more money online. Over the years I’ve had lots of fun working with thousands of brands and helping them distribute millions of promotional products and implement multinational rewards and incentive programs. Now I’m helping great marketers turn their products and services into sustainable online businesses. How can I help you?

    About Dave Burnett

    Dave Burnett is an AI entrepreneur, investor, and digital marketing strategist who has spent more than 25 years building and scaling businesses online. He’s best known as the founder of AOKMarketing.com and PromotionalProducts.com, and as co‑founder of Achievers.com (originally I Love Rewards), which was acquired in 2015 in a deal valued at roughly $110 million.
    From Toronto, Canada, Dave now focuses on AI-driven SEO, paid media, and Generative Engine Optimization (GEO)—helping brands show up accurately in search results, AI answers, and assistant-style tools like ChatGPT, Gemini, and Perplexity.

    Early entrepreneurial spark

    Dave studied business at Wilfrid Laurier University, where he launched one of his first ventures: B&P Management Group, a liquor and beer sampling company that hired dozens of students and taught him how to sell, manage people, and handle logistics.
    Shortly after graduation, he founded what would become PromotionalProducts.com, building a promotions and branded merchandise company at a time when most sales were still door‑to‑door and local.

    From early SaaS to major exit

    In the early 2000s, Dave co‑created the concept behind I Love Rewards, later rebranded as Achievers.com, an employee recognition and engagement platform that helped companies reward and retain top talent. The company evolved into a major enterprise SaaS vendor and was ultimately acquired in 2015 for $110 million USD.

    The recession, a crisis, and a pivot to digital

    When the 2008 global financial crisis hit, corporate marketing budgets were slashed, and Dave’s promotional products business suddenly found itself losing roughly $70,000 per month.

    Instead of shutting down, he went all‑in on digital:

    • Acquired around 3,000 domain names across North American cities and verticals
    • Launched about 500 lead‑generation websites targeting specific local searches
    • Learned SEO by necessity and turned those sites into a major source of inbound business

    As competitors started asking how he was outranking them, Dave formalized that expertise into an agency—AOK Marketing.

    AOK Marketing & the rise of AI-driven SEO

    AOK Marketing began as a search-focused agency, then expanded into Google Ads, paid social, and full-funnel optimization. Over time, Dave and his team shifted from “ranking websites” to building measurable growth systems that connect marketing activity to sales outcomes.
    In the mid‑2020s, Dave became one of the clearer voices on how AI and large language models were changing search itself—moving from ten blue links to conversational answers. He popularized the idea of GEO (Generative Engine Optimization) as a framework for earning presence inside AI-generated responses, not just traditional SERPs.

    Credentials & Recognition

    FAQ About Dave Burnett

    Who is Dave Burnett?

    Dave Burnett is a Toronto-based entrepreneur, investor, and digital marketing expert. He founded AOKMarketing.com and PromotionalProducts.com and co-founded Achievers.com, an employee-recognition platform that exited in a ~$110M acquisition.

    What companies has he founded or led?

    Dave has founded or led several ventures, including AOKMarketing.com (digital marketing agency), PromotionalProducts.com (promotional products e-commerce), and Achievers.com (employee recognition SaaS, originally I Love Rewards). He also invests through Valleywood Capital and mentors leaders via Bloom Growth.

    What is Dave Burnett known for in marketing?

    He’s known for blending traditional SEO and paid media with AI search and Generative Engine Optimization (GEO), helping brands become the “source of truth” AI tools quote. He has trained 1,000+ professionals through CMA, PRSA, and other orgs.

    What is LLMtel and how is Dave involved?

    LLMtel is a tool that analyzes how AI chatbots talk about your brand, producing a score and checklist to improve AI visibility. It’s one of Dave’s latest innovations in AI search and GEO.

    Where is Dave based, and who does he work with?

    Dave lives in Toronto, Ontario, and works with B2B companies, agencies, and growth-focused founders across North America and beyond—helping them turn marketing into leads, opportunities, and revenue.

    If you’d like to reference Dave on your site, podcast, or event page, you can safely use any of the short bios above and link to:

    Blog Posts

    Image of Google's description of how a knowledge graph works

    January 15, 2026

    Dave Burnett

Build entity authority AI can verify

AI doesn't recommend brands. AI recommends entities it can verify. And if your "entity" isn't clean, consistent, and well-referenced across the web, you get the modern marketing nightmare:

• The AI mixes you up with another company.
• It grabs an old founder bio from 2018 and treats it like gospel.
• It invents a headcount, a headquarters, a tagline, a parent company.
• It says you do "enterprise AI consulting" when you sell wedding photography.

Why? Because the model is trying to answer a very simple question: "What is this thing?" And when it can't answer confidently, it builds a story out of whatever it finds.

Dave's line on this is blunt for a reason: "If you don't define your brand for AI, AI will define it for you using whatever it finds."

This pillar post is about taking control of that "whatever it finds." Not with vibes. With structure:

• canonical brand facts
• sameAs links
• entity IDs
• knowledge graph references
• and a workflow to fix wrong answers like it's an operational system (because it is)

What entity authority means

Let's demystify the buzzword. Entity authority is the probability that a machine can:

1. Recognize your brand as a distinct entity (not a keyword, not a logo, not "some company").
2. Disambiguate you from similarly named entities.
3. Retrieve stable, consistent facts about you from sources it trusts.
4. Cite those sources (or at least align with them) across answers.

Google has been moving in this direction for years. Their Knowledge Graph framing is literally "things, not strings." And it's not just search. AI interfaces are now showing entity panels and stitched summaries where the system is pulling from third-party sources and citing them. In AOK's own experiment, the takeaway was clear: the panel wasn't just homepage copy; it was assembled from sources across the web (like the Starbucks example).

So entity authority is less "rank my page" and more: can the machine confidently resolve you?
Or, to use another AOK phrase: SEO is becoming Entity Operations. Old SEO was keywords, pages, links. AI-era SEO is entity clarity, source control, fact consistency, authoritative references.

Why does AI get our company wrong?

Here are the most common reasons I see, and why they're predictable.

1) Name collisions (you share a name with someone else)

If you're "Summit Marketing" or "Blue River" or "Axis"… there are probably 14 of you. Machines don't "intuit" which one is you. They match patterns. If your footprint is weak, you get merged.

2) Your own site is inconsistent

These are silent killers:

• "Acme, Inc." vs "Acme" vs "Acme Labs"
• 3 different founding years across pages
• old office address still in the footer
• phone number differs on contact page vs schema vs Google Business Profile
• 2 "About" pages with different positioning statements

If you can't keep your own facts straight, don't expect AI to do it for you.

3) Third-party sources contradict each other

AI systems build trust by triangulating. If Crunchbase says one thing, LinkedIn says another, and your website says a third, you've basically told the model: "Pick one. Good luck." And it will.

4) You don't have "identity anchors"

Machines love stable identifiers. That's where sameAs and entity IDs come in. Schema.org defines sameAs as a URL that unambiguously indicates identity (Wikipedia, Wikidata, official site, etc.). Think of sameAs as your digital fingerprint. Not the whole solution, but a crucial piece.

5) Your brand isn't "worth being careful about" (yet)

This one hurts, but it's real. Systems allocate confidence based on signals. If you're a small brand with little coverage and inconsistent data, the model's uncertainty is higher. In AOK's ChatGPT entity panel post, the point is framed in business terms: entity panels affect trust and decisions by investors, journalists, procurement, candidates, and regulators, not just traffic.
If those people are using AI to form first impressions, your "entity clarity" becomes PR.

The single source of truth: your Brand Facts Sheet

If you do nothing else after reading this, do this: build a Brand Facts Sheet. Not a fluffy brand deck. A factual, canonical, version-controlled doc that answers:

• What are the official facts about the entity?
• What are the approved descriptions?
• What IDs and references define it across the web?

Because when facts drift, you need a place to point and say: "This is the source of truth."

Brand Facts Sheet template

Use this as a starting point (copy/paste into a Google Doc, Notion, or a spreadsheet).

1) Identity block (the "who are we?" core)

• Canonical name (brand name used publicly)
• Legal name (registered name, if different)
• Alternate names (DBAs, abbreviations, old names)
• Tagline (current)
• One-sentence description (approved)
• Long description (approved, 80–150 words)
• Category (what you are; pick the simplest correct label)
• Primary offer (what you sell)
• Primary audience (who you serve)

2) Key facts block (the "things AI gets wrong" block)

• Founding date (YYYY-MM-DD if possible)
• Founders (names + titles)
• CEO / key executives (names + titles)
• Headquarters (city, region, country)
• Primary phone
• Primary support email
• Primary physical address (if applicable)
• Service area (if local/regional)
• Parent company / subsidiaries (if applicable)

3) Web presence block (canonical URLs only)

• Homepage URL
• About page URL (canonical)
• Contact page URL (canonical)
• Press page / media kit URL
• Logo URL (crawlable)
• Brand images / headshots (URL list)

4) Social profile block (official handles)

• LinkedIn company page
• X
• Instagram
• YouTube
• TikTok
• Facebook
• GitHub (if relevant)
• Substack / Medium (if official)

5) Entity references block (the "machine anchors")

• Wikidata QID (if it exists)
• Wikipedia page (if it exists)
• Google Business Profile (if applicable)
• Apple Business Connect (if applicable)
• Industry directories that matter (G2 / Capterra / Clutch / etc.)
• Data providers relevant to your vertical

6) Change log (this is what makes it operational)

Track:

• what changed
• when
• why
• who approved it

Because your brand facts will change (new address, new CEO, rebrand), and you need controlled updates.

sameAs + entity IDs checklist

This is the "entity footprint" work. This is where you stop being "just a website" and start being a resolvable entity.

Step 1: Create a canonical entity ID (source of truth) on your own site

You want one stable identifier for your organization, usually as a URL. Example:

    https://example.com/#organization

That becomes your internal "entity node" you reference in structured data.

Step 2: Add Organization structured data (and do it right)

Google explicitly supports Organization markup and recommends putting it on the homepage or a single page like an About page (not necessarily every page). They also recommend focusing on properties useful to users (name/alternateName, real-world presence like address/telephone, online presence like url/logo). And importantly: Google calls out that url helps uniquely identify your organization.

Step 3: Use sameAs like a sniper, not a shotgun

Schema.org's definition is very specific: sameAs should point to a page that unambiguously indicates identity. Google's Organization documentation describes sameAs as links to pages on other websites with more info about your organization (social, review sites) and notes you can provide multiple sameAs URLs.

Rule: only link profiles/pages that are the same entity. Not "related." Not "partner." Not "a random directory page that mentions us once."

Step 4: Validate your structured data

Google's structured data guidelines recommend supported formats and explicitly list JSON-LD as recommended. They also note you can test compliance using tools like the Rich Results Test and URL Inspection. And they're clear on a key quality point: don't mark up content that isn't visible to readers; the markup and page content should match.
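Before reaching for Google's tools, a quick local pre-validation pass catches the cheap mistakes. This is a minimal sketch, not a substitute for the Rich Results Test; the function name and the checks it performs are illustrative assumptions.

```python
# Sketch: sanity-check Organization JSON-LD before publishing it.
# Catches invalid JSON (e.g., curly "smart quotes"), missing keys,
# a missing stable @id, and duplicate or malformed sameAs URLs.
import json
from urllib.parse import urlparse

def validate_org_jsonld(raw: str) -> list[str]:
    """Return a list of human-readable problems found in the markup."""
    problems = []
    try:
        data = json.loads(raw)  # curly quotes will fail right here
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    for key in ("@context", "@type", "name", "url"):
        if key not in data:
            problems.append(f"missing key: {key}")
    if "@id" not in data:
        problems.append("no stable @id (other markup cannot reference this entity)")
    same_as = data.get("sameAs", [])
    if len(same_as) != len(set(same_as)):
        problems.append("duplicate sameAs URLs")
    for u in same_as:
        parsed = urlparse(u)
        if parsed.scheme != "https" or not parsed.netloc:
            problems.append(f"suspicious sameAs URL: {u}")
    return problems

markup = """{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Brand",
  "url": "https://example.com/",
  "sameAs": ["https://www.linkedin.com/company/examplebrand/"]
}"""
print(validate_org_jsonld(markup))  # → []
```

An empty list means the markup passed the cheap checks; anything it flags would also trip you up in Google's own validators.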
Step 5: Build the Entity ID Map (your "sameAs inventory")

This is the part most brands skip. They add schema once. Then they forget it. Instead, build a living "Entity ID Map" that includes:

• Platform
• Canonical URL
• Entity ID (if the platform has one)
• Ownership status (claimed/unclaimed)
• Notes (what needs fixing)

Here's the checklist of sameAs / entity reference candidates to include (pick what's real for you):

Tier 1: Always
• Official website (canonical URL)
• LinkedIn company page
• Any other official social profiles
• Official YouTube channel (if active)

Tier 2: Often
• Crunchbase
• GitHub org
• App stores / marketplace profiles (if SaaS)
• Major review platforms relevant to your industry
• Major media profiles (if you have them)

Tier 3: Knowledge graph anchors
• Wikidata item (QID)
• Wikipedia page (if legitimately earned)

Tier 4: Local / regulated / compliance
• Google Business Profile (local)
• Apple Business Connect (local)
• Government registries (where appropriate)
• Industry accrediting bodies (where appropriate)

What to put in your Organization JSON-LD (example)

Here's a practical JSON-LD skeleton you can adapt:

{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#organization",
  "name": "Example Brand",
  "legalName": "Example Brand, Inc.",
  "alternateName": ["Example", "Example Co."],
  "url": "https://example.com/",
  "logo": "https://example.com/assets/logo.png",
  "description": "Example Brand helps X do Y with Z.",
  "foundingDate": "2016-04-12",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main Street",
    "addressLocality": "New York",
    "addressRegion": "NY",
    "postalCode": "10022",
    "addressCountry": "US"
  },
  "telephone": "+1-646-555-1234",
  "sameAs": [
    "https://www.linkedin.com/company/examplebrand/",
    "https://www.wikidata.org/wiki/Q123456",
    "https://en.wikipedia.org/wiki/Example_Brand"
  ]
}

Two important notes:

1. Don't add foundingDate, address, executives, etc. unless you're willing to keep them accurate everywhere. "Facts drift" is a real brand risk. (That's why you need the facts sheet.)
2. If you're a local business, Google specifically notes LocalBusiness has required properties and additional guidance.

Do we need Wikidata or Wikipedia?

Let's answer this like adults. You don't need Wikipedia to show up in AI answers. But Wikipedia and Wikidata are common reference points in knowledge systems. The Knowledge Graph itself has historically cited sources like Wikipedia (and previously Freebase). And Wikidata is designed as a machine-readable, editable knowledge base.

So "need" isn't the right word. The right word is: leverage. If you can legitimately earn those references, they become strong identity anchors. But you can also build a strong entity footprint without them by getting your site, schema, and third-party references consistent and authoritative.

Wikidata/Wikipedia readiness path

This is where PR/Comms teams either win… or get themselves into a mess.

Step 1: Understand the difference

Wikipedia is narrative + citations + editorial standards. Wikidata is structured facts + references + IDs. Wikidata's own framing: it's a free and open knowledge base readable and editable by both humans and machines.

Step 2: Know the bar for Wikipedia

Wikipedia's notability guidance for organizations is blunt: an organization is notable if there is significant coverage in reliable, independent secondary sources. If no independent, third-party reliable sources exist, Wikipedia shouldn't have an article.

Translation:

• Your press release doesn't count.
• Your blog doesn't count.
• Your partner's blog doesn't count.
• Your paid advertorial doesn't count.

The game is earned media and independent coverage.

Step 3: Don't make the classic COI mistake

If you (or your agency) are closely connected to the company, Wikipedia considers that a conflict of interest.
Wikipedia explicitly says COI editing is strongly discouraged, and advises disclosure and proposing changes via talk pages instead of directly editing. So don't treat Wikipedia like a profile you "claim." Treat it like an encyclopedia you have to earn.

Step 4: A practical "readiness ladder"

Here's the path I recommend for brands:

Level 0: Not ready
• No independent coverage
• Inconsistent brand facts everywhere
• Basic digital presence is messy
Action: fix your own house first (facts sheet + site consistency + schema + profile cleanup).

Level 1: Wikidata-ready
• The entity is identifiable
• You can cite verifiable references (official site, reputable sources)
• You're prepared to keep facts accurate
Wikidata has a notability policy: an item is acceptable if it fulfills certain goals/criteria (including having sitelinks and meeting listed criteria).
Action: build a clean Wikidata item with references (and do it ethically).

Level 2: Wikipedia-ready
• Multiple independent reliable secondary sources with significant coverage
• Neutral tone possible
• COI handled properly
Action: do not brute-force publish a page. Use the right process (talk pages / Articles for Creation) and disclose COI.

Level 3: Knowledge panel-ready (the real win)
• Consistent entity across the web
• Strong authoritative references
• Clear schema + sameAs
• Clean "about" narrative
This is where machines stop guessing.

Fix wrong AI answers workflow

This is the module that turns "AI got us wrong" from a panic into a process. Because the fix is rarely "tell ChatGPT to stop." The fix is almost always: fix the sources the AI trusts. Here's the workflow.

Step 1: Capture the bad output like evidence

Create a shared doc/spreadsheet with:

• Prompt used
• Full AI answer
• Date/time
• Screenshot
• Links/citations shown (if any)
• What's wrong (specific facts)
• Severity (low / medium / high)

If it's a public-facing AI panel, treat it like a PR issue.
Step 2: Classify the error

Most errors fall into one of these buckets:

• Identity error (entity merge): it's mixing you with another entity.
• Fact drift: it has an old address/CEO/founding date.
• Inference/hallucination: it's guessing because it can't verify.
• Narrative skew: it overweights one controversy or one review source.
• Relationship confusion: parent/subsidiary/partner relationships are wrong.

Different category = different fix.

Step 3: Trace the likely source-of-truth conflict

Start with the "obvious" anchors:

• Your website About / Contact
• LinkedIn company page
• Major directories in your category
• Wikipedia/Wikidata (if present)
• Press coverage and bios
• Google Business Profile (if local)

Remember: AI panels and summaries are often assembled from third-party sources, not just your homepage.

Step 4: Fix your canonical facts first

Update:

• Brand Facts Sheet (source of truth)
• Website pages (About, Contact, Press)
• Structured data (Organization schema, LocalBusiness if applicable)

Google's documentation is explicit that organization markup can include details like address/telephone and helps them understand your organization. Also: keep markup aligned with visible content. If you update a fact in schema, update it on-page too.

Step 5: Fix third-party contradictions (the unglamorous part)

This is the work no one wants to do, but it's where the wins are:

• Claim profiles you haven't claimed
• Correct old addresses / phone numbers
• Update executive names and titles
• Remove duplicate profiles
• Standardize naming (exact punctuation matters more than you think)

Step 6: Strengthen identity anchors with sameAs

If you have stable references, add them. Schema.org describes sameAs as linking to identity-unambiguous pages like Wikipedia, Wikidata, and the official site. Google explicitly supports multiple sameAs URLs for Organization markup.

Step 7: Validate + monitor

Use structured data testing and keep a monitoring cadence.
Google's guidelines mention testing with tools like the Rich Results Test and URL Inspection for technical issues. Then repeat your AI prompt tests monthly or quarterly and log changes.

Step 8: Create a "Fact Correction" public page (optional but powerful)

This is a secret weapon for PR/Comms teams. A simple page that answers:

• official founding year
• official headquarters
• official leadership
• official product/service definition
• official parent/subsidiary relationships
• official naming conventions

It becomes a crawlable, citable correction hub. (And yes, it should match your schema.)

The practical bottom line

AI isn't sitting there thinking, "How do I represent this company fairly?" It's doing pattern matching and confidence scoring. So your job is to make the pattern obvious:

• One entity
• One set of facts
• Many corroborating references
• Clean sameAs links
• No contradictions

Or, put in AOK terms: you're either shaping the sources… or you're inheriting the narrative.

What to do now

If you want the fastest path to a verified, consistently described brand across AI answers, do this in order:

1. Build the Brand Facts Sheet (today).
2. Audit your website for fact consistency (About, Contact, footer, schema).
3. Implement Organization schema with a canonical @id.
4. Add sameAs links only to true identity anchors.
5. Clean up your top third-party profiles (LinkedIn, key directories, local listings).
6. Decide your Wikidata/Wikipedia strategy based on notability + COI reality.
7. Build the "wrong AI answer" log and turn it into a workflow.

That's Entity Ops. And if you want this pillar to work as a real "single source of truth" system (not just a one-time SEO task), assign ownership:

• Marketing owns narrative + positioning
• PR/Comms owns public references + coverage
• Web/SEO owns schema + technical implementation
• Someone owns the facts sheet like it's governance (because it is)
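The "wrong AI answer" log above can be kept as structured records rather than loose notes, so errors get classified, prioritized, and re-tested after fixes. A minimal sketch; the field names, error buckets, and example entry are illustrative assumptions, not a prescribed format.

```python
# Sketch: a structured "wrong AI answer" log with the error buckets
# from Step 2 and a severity-ordered triage view.
from dataclasses import dataclass, field
from datetime import date

ERROR_TYPES = {
    "identity_error",          # entity merge: mixed up with another company
    "fact_drift",              # old address / CEO / founding date
    "hallucination",           # invented because it couldn't verify
    "narrative_skew",          # overweights one controversy or source
    "relationship_confusion",  # parent/subsidiary/partner wrong
}
SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

@dataclass
class BadAnswer:
    prompt: str
    answer_excerpt: str
    whats_wrong: str
    error_type: str
    severity: str = "medium"  # low / medium / high
    captured_on: date = field(default_factory=date.today)
    citations: list = field(default_factory=list)

    def __post_init__(self):
        # Forcing a known bucket keeps the log classifiable
        if self.error_type not in ERROR_TYPES:
            raise ValueError(f"unknown error type: {self.error_type}")

log = [
    BadAnswer(
        prompt="Who founded Example Brand?",
        answer_excerpt="Example Brand was founded in 2012 by ...",
        whats_wrong="Founding year is wrong; facts sheet says 2016.",
        error_type="fact_drift",
        severity="high",
    ),
]

# Triage view: fix the highest-severity drifts first
for entry in sorted(log, key=lambda e: SEVERITY_ORDER[e.severity]):
    print(entry.error_type, entry.severity, "-", entry.whats_wrong)
```

The point of the structure is the monthly/quarterly re-test: rerun the same prompts, compare against the log, and close entries whose source fixes have propagated.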
    Image of Microsoft Copilot Checkout header

    January 14, 2026

    Dave Burnett

Quick take

Copilot Checkout is Microsoft's attempt to turn "conversation" into "conversion" without the usual tab-hopping and cart abandonment. The shopper stays inside Copilot, and you (the merchant) stay in control of the transaction.

What Copilot Checkout is (in plain English)

Think of Copilot as the new aisle. A shopper asks for options, compares, asks follow-ups, and when they're ready to buy… they buy right there. Microsoft frames this as "no redirect, no friction," while keeping the merchant as the merchant of record. This means you keep the customer relationship.

Why ecommerce teams should care

Microsoft shared early internal signals that Copilot-assisted journeys drive more purchases, especially when shopping intent is present. In other words: if you're not present in the AI conversation, you're not on the shortlist.

Who gets it first (and how rollout works)

Copilot Checkout is rolling out in the U.S. on Copilot.com with partner activation across PayPal, Shopify, and Stripe. Shopify merchants are positioned for automatic enrollment after an opt-out window. Non-Shopify merchants can apply via Microsoft.

The hidden requirement: your data has to be usable

An AI checkout experience doesn't magically fix messy catalog data. It punishes it. To be recommended (and to convert), your product facts need to be accurate, structured, and fresh across three places:

• Your crawled website content (what AI can learn from pages)
• Your product feeds/APIs (what you push to platforms)
• Your live site experience (what an agent can see and act on)

The AOK setup checklist (do this before you chase hacks)

Use this as your "Copilot readiness" punch list:

• Get your feed right: consistent titles, GTINs, variants, pricing, availability, and images.
• Add schema: Product + Offer + AggregateRating + Review + Brand + FAQ + ItemList where relevant.
• Make policies easy to cite: shipping, returns, warranties, and customer support.
• Keep the site agent-proof: no broken add-to-cart flows, no weird promo-code logic, accurate delivery estimates.
• Measure the new funnel: track AI-driven sessions, assisted conversion lift, and citation visibility.

How AOK helps

If you want Copilot Checkout to be a conversion channel, you need AI discovery plus AI conversion hygiene. That's exactly what our SEO for AI framework is built for.

Sources

• Microsoft Advertising. "Conversations that Convert: Copilot Checkout and Brand Agents." January 8, 2026.
• Microsoft Advertising. "From Discovery to Influence: A Guide to AEO and GEO."
• AOK Marketing. "SEO for AI" framework.
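One "get your feed right" item from the punch list can be automated cheaply: validating GTIN check digits before a feed ships. The GS1 mod-10 check-digit algorithm is standard; the feed-row shape below is an illustrative assumption.

```python
# Sketch: reject feed rows whose GTINs fail the GS1 mod-10 check digit,
# a common cause of products being dropped or mismatched downstream.
def gtin_is_valid(gtin: str) -> bool:
    """Validate GTIN-8/12/13/14 using the GS1 mod-10 check digit."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(c) for c in gtin]
    payload, check = digits[:-1], digits[-1]
    # Weights 3 and 1 alternate, with 3 on the digit next to the check digit
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(payload)))
    return (10 - total % 10) % 10 == check

feed = [
    {"title": "Blue Widget", "gtin": "4006381333931", "price": "19.99"},
    {"title": "Red Widget",  "gtin": "4006381333932", "price": "24.99"},  # bad check digit
]
bad_rows = [row["title"] for row in feed if not gtin_is_valid(row["gtin"])]
print(bad_rows)  # → ['Red Widget']
```

The same gate pattern extends to the rest of the punch list: required fields present, price formats consistent, availability values from a fixed vocabulary.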
    Image of the homepage artwork from Google Search Console

    January 13, 2026

    Dave Burnett

Updated: January 13, 2026

What is AI-Readable SEO?

If you've ever asked an AI system about your own business and thought, "Wait… why is it describing my competitor?" Congratulations. You've already met the problem.

AI doesn't "read" your website the way a person does. It crawls. It parses. It tries to reconcile a bunch of signals that often contradict each other. And when those signals conflict, the AI does what any machine does under uncertainty: it guesses.

This post is about removing guesswork. Because here's the core narrative: if your site isn't machine-readable, AI can't trust it. Not because it's mean. Because it's blind. And for technical owners (product, SEO, embedded marketing), this is good news: trust is mostly an engineering problem, or at least an engineering-shaped problem.

The big idea: AI visibility is a stack of signals, not a "content trick"

There's a persistent myth that "GEO" (Generative Engine Optimization) is some new bag of hacks. It isn't. GEO is just technical SEO + structured data + entity clarity + validation discipline. Machines don't reward vibes. They reward readable structure.

Google is explicit that structured data helps it understand a page and also gather information about the web and the world (people, books, companies, etc.). And when AI answers are being generated from crawled, indexed, and interpreted web sources, the sites with clear signals tend to be the ones that get used, summarized, and cited.

So the question isn't "Can I optimize for AI?" The question is "Is my brand legible to machines?"

The AI-Readable SEO Signal Stack (5 layers)

Think of this like a "stack" because each layer depends on the ones below it:

1. Crawlable (can bots fetch it?)
2. Indexable (can search engines keep it?)
3. Canonical + consistent (is there one "true" version?)
4. Structured + entity-clear (does it know who/what/where you are?)
5. Fast + mobile-correct (does it render well in the real world?)

If you're missing layer 1, schema won't save you.
If you're missing layer 3, your schema can be "right"… on the wrong URL. If you're missing layer 4, AI may still talk about you… but attribute the work to someone else. Let's build the stack.

Module 1: Signal Map – what AI systems actually use

Different AI engines have different pipelines, but the web-facing ones all converge on a shared set of machine-readable inputs.

1) Crawl signals (can a machine fetch the content?)

• HTTP status: 200 vs 3xx vs 4xx/5xx
• robots.txt access
• meta robots / X-Robots-Tag directives
• sitemaps and internal linking pathways

If a URL is blocked from crawling, an engine can't "see" what's on it. And here's a subtle but brutal gotcha: if you disallow crawling in robots.txt, Google may never discover your noindex (or other indexing rules), because those directives are discovered when a URL is crawled. That's how you end up with "we blocked it, so it shouldn't show" surprises.

2) Index signals (can it be stored and retrieved?)

• Indexability (noindex, canonical, etc.)
• Duplicate handling
• Canonical selection
• Sitemap inclusion (suggested canonicals)

Google supports several canonicalization methods and explicitly warns against giving conflicting canonical signals (like one canonical in your sitemap and a different one via rel=canonical).

3) Interpretation signals (can it understand what the page is about?)

This is where structured data shows up:

• Schema.org markup (JSON-LD recommended for rich results eligibility)
• Headings and page structure
• Clear visible content that matches the markup (don't mark up invisible stuff)

4) Entity + attribution signals (can it connect your brand to the right entity?)

This is the part most teams skip, and it's exactly why AI answers drift. You want machines to confidently answer:

• Who are you?
• What do you do?
• Where do you do it?
• Which URLs represent the "official" truth?

Organization structured data helps Google disambiguate your organization and can influence visual elements like which logo is shown and your knowledge panel.
LocalBusiness structured data helps Google understand business details and can feed knowledge panels and local carousels.

5) Experience + rendering signals (can it render and trust the page experience?)

Google uses mobile-first indexing, meaning it uses the mobile version of your content (crawled with a smartphone agent) for indexing and ranking. And Google recommends hitting "good" Core Web Vitals thresholds (LCP < 2.5 s, INP < 200 ms, CLS < 0.1).

The "AI Trust" principle

Machines "trust" what they can:

• fetch reliably
• index consistently
• interpret unambiguously
• attribute to a stable entity

That's the whole game. Now let's get practical.

Module 2: Schema Playbook (Organization / LocalBusiness / Service / FAQ)

Schema is not a magic "rank me" button. Schema is a labeling system: it reduces ambiguity so machines don't have to guess. Google literally says it uses structured data it finds to understand page content and gather information about entities. If your ICP is "reduce technical risk," schema is risk reduction:

• less ambiguity
• fewer misattributions
• more consistent "who/what/where" signals

Before you write any schema: do this 3-step setup

Step 1: Create stable entity IDs

Use an @id for your Organization and reuse it everywhere. Example pattern:

• Organization entity ID: https://example.com/#org
• Location entity ID: https://example.com/locations/nyc/#localbusiness
• Service entity ID: https://example.com/services/technical-seo/#service

This creates a machine-friendly "graph" instead of loose blobs of JSON.

Step 2: Keep schema tied to visible page content

Google's guidelines emphasize: don't mark up content that isn't visible, and make sure structured data represents the page.

Step 3: Don't block schema pages

Google's structured data guidelines explicitly say: don't block structured data pages using robots.txt, noindex, or other access control methods.
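The "don't block schema pages" rule is easy to check mechanically: given your robots.txt rules, confirm the pages carrying structured data are actually fetchable by Googlebot. A minimal sketch using Python's standard-library robots parser; the rules and URL list are illustrative.

```python
# Sketch: verify that pages carrying structured data are not
# disallowed by robots.txt for Googlebot.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /internal/
Disallow: /locations/
"""

schema_pages = [
    "https://example.com/",                      # Organization markup
    "https://example.com/locations/new-york/",   # LocalBusiness markup
    "https://example.com/services/ai-seo/",      # Service + FAQ markup
]

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

blocked = [url for url in schema_pages if not rp.can_fetch("Googlebot", url)]
print(blocked)  # → ['https://example.com/locations/new-york/']
```

Here the blanket Disallow on /locations/ silently hides every LocalBusiness page, which is exactly the kind of self-inflicted wound this check catches before launch.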
Schema 1: Organization (your "Who we are" anchor)

Google says Organization structured data on your home page can help it understand administrative details and disambiguate your organization, and can influence search visual elements like the logo and the knowledge panel. It also defines sameAs as links to profiles on other sites and notes you can provide multiple sameAs URLs.

Recommended Organization JSON-LD (starter)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#org",
  "name": "Example Company",
  "url": "https://example.com/",
  "logo": "https://example.com/assets/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://www.youtube.com/@example",
    "https://en.wikipedia.org/wiki/Example_Company"
  ],
  "contactPoint": [{
    "@type": "ContactPoint",
    "contactType": "sales",
    "telephone": "+1-555-555-5555",
    "email": "sales@example.com"
  }]
}
</script>

A few "don't mess this up" notes:

• Your url matters. Google's docs call out that the organization website URL helps Google uniquely identify your organization.
• Your sameAs links should be real identity anchors (official profiles, authoritative references).

Schema 2: LocalBusiness (your "Where we are" anchor)

If you have locations, treat each location page as its own entity. Google notes that local search results may display a knowledge panel and that LocalBusiness structured data can tell Google about business hours, departments, reviews, and more.
Recommended LocalBusiness JSON-LD (per location page)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "@id": "https://example.com/locations/new-york/#localbusiness",
  "name": "Example Company - New York",
  "url": "https://example.com/locations/new-york/",
  "telephone": "+1-212-555-0101",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "32 East 57th Street, 8th Floor",
    "addressLocality": "New York",
    "addressRegion": "NY",
    "postalCode": "10022",
    "addressCountry": "US"
  },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "09:00",
    "closes": "17:00"
  }]
}
</script>

Pro tip: If you're a service business, you can usually go more specific than LocalBusiness (e.g., ProfessionalService), but the pattern stays the same: one entity per location, stable @id, consistent NAP.

Schema 3: Service (your "What we do" anchor)

Schema.org defines Service as "a service provided by an organization." Google may not have a dedicated "Service rich result," but Service markup still helps with:

- entity relationships (provider → service)
- service catalogs and structured understanding
- internal consistency across pages

Recommended Service JSON-LD (per service page)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Service",
  "@id": "https://example.com/services/ai-seo/#service",
  "name": "SEO for AI Services (GEO)",
  "serviceType": "Technical SEO + AI visibility optimization",
  "provider": {
    "@type": "Organization",
    "@id": "https://example.com/#org"
  },
  "areaServed": ["United States", "Canada"],
  "url": "https://example.com/services/ai-seo/"
}
</script>

Why this matters for AI: most misattribution problems happen because the machine can't connect "this page" to "this organization" to "this service." This markup connects all three.
Schema 4: FAQPage (your "Machine-readable Q&A" layer)

FAQ schema is tricky, so let's be clear:

- Google's FAQ docs say properly marked up FAQ pages may be eligible for rich results.
- Google also says it does not guarantee that structured data features will show up in search results.
- And in 2023 Google explicitly limited FAQ rich results: they'll only be shown for well-known, authoritative government and health sites, and for others they won't be shown regularly.

So why include FAQ schema at all? Because for AI readability, FAQPage still does something valuable:

- it expresses Q&A pairs in a predictable structure
- it reduces ambiguity about what your service does and doesn't do
- it supports internal "answer extraction" and clarity

Recommended FAQPage JSON-LD (only when the Q&A is visible on-page)

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "@id": "https://example.com/services/ai-seo/#faq",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does schema affect AI answers?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Schema helps machines understand and attribute your content, but it does not guarantee visibility or rich results. It reduces ambiguity so AI systems can interpret your brand and services correctly."
      }
    },
    {
      "@type": "Question",
      "name": "Which schema types matter most for service companies?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Organization and LocalBusiness clarify who you are and where you operate; Service clarifies what you provide; FAQPage clarifies the questions your buyers ask and your answers."
      }
    }
  ]
}
</script>

Rule: If the Q&A isn't visible to users, don't mark it up. Google's structured data guidelines explicitly warn against marking up content that isn't visible.

Module 3: Crawl / Index Readiness Checklist (AI can't read what it can't crawl)

This is the "reduce technical risk" section.
Because 90% of AI visibility "mysteries" aren't mysteries. They're one of these:

- blocked crawling
- accidental noindex
- canonical chaos
- duplicate URLs
- broken mobile rendering
- missing sitemap clarity

Here's your checklist.

Crawl layer checklist

robots.txt sanity
- Are critical sections blocked accidentally (/blog/, /services/, JS/CSS)?
- Are you blocking crawlers you actually want?
- Are you using robots.txt to "solve" canonicalization? Don't. Google explicitly warns against using robots.txt for canonicalization.

meta robots / X-Robots-Tag sanity
- Are important pages marked noindex?
- Are PDFs or other non-HTML files accidentally noindexed via headers? Google documents both robots meta tags (page-level) and X-Robots-Tag headers (useful for non-HTML).

Avoid the "disallow + noindex" trap
- If a page is disallowed from crawling via robots.txt, Google may never see the indexing directives on that page, because directives are only discovered when the page is crawled.

Index + canonical layer checklist

Pick one canonical URL per page:

- HTTPS vs HTTP
- www vs non-www
- trailing slash vs no trailing slash
- query parameters

Then make everything agree with it. Google lays out canonicalization methods and warns not to give conflicting canonical signals across methods (e.g., sitemap vs rel=canonical).

Canonical tag implementation
- rel="canonical" must be in the <head> and use absolute URLs.
- Avoid mixing a canonical in the HTTP header and in the HTML unless you're very disciplined (Google calls using both "more error prone").

Sitemap must list canonicals (not "every URL we have")
- Google's canonicalization documentation notes that sitemap URLs are suggested canonicals, and Google still determines duplicates based on content similarity.

Redirects: use them when deprecating duplicates
- Google notes redirects can be used to indicate a better version, and that 301/302/etc. have the same effect on Google Search (timing can differ).
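Several of these checks are mechanical, so they can be scripted. A hedged sketch using only the standard library (simplified regexes, not a full HTML parser; run the functions against fetched robots.txt and HTML strings):

```python
import re
from urllib import robotparser

def is_crawlable(robots_txt: str, url: str, agent: str = "Googlebot") -> bool:
    """True if the given robots.txt text allows `agent` to fetch `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

def has_noindex(html: str) -> bool:
    """True if the page carries a robots meta tag with a noindex directive.
    Simplified: assumes the name attribute appears before content."""
    pattern = r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex'
    return re.search(pattern, html, re.IGNORECASE) is not None

def canonical_url(html: str):
    """Return the rel=canonical href, or None if the page declares none."""
    m = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)',
        html, re.IGNORECASE)
    return m.group(1) if m else None
```

Running these across your top pages on a schedule is a cheap way to catch the "accidental noindex" and "blocked crawling" failures before they cost you weeks of visibility.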
Sitemap layer checklist (don't overcomplicate this)

- Only include absolute, canonical URLs.
- Keep within size limits (50,000 URLs / 50MB per sitemap file is the documented limit).
- Reference the sitemap in robots.txt when appropriate; Google explicitly shows a pattern for referencing a sitemap in robots.txt.

Module 4: Performance + Mobile Fundamentals (because "machine-readable" also means "machine-renderable")

If you're building AI visibility on a slow, unstable mobile experience… you're building on sand.

Mobile-first indexing: this is not optional

Google states clearly: it uses the mobile version of a site's content, crawled with the smartphone agent, for indexing and ranking. So your "AI-readable" checklist must include:

- content parity on mobile (same headings, same structured data, same critical content)
- no missing schema on mobile templates
- no mobile-only noindex accidents

Google's mobile-first indexing best practices explicitly call out missing structured data on mobile as a common error and recommend keeping structured data consistent across versions.

Core Web Vitals: get to "good," then move on

Google recommends aiming for good Core Web Vitals thresholds:

- LCP within 2.5 seconds
- INP under 200ms
- CLS under 0.1

This is not about perfection. It's about:

- predictable rendering
- fewer layout shifts
- fast interaction

For service businesses, that usually means:

- compress/resize hero images
- defer non-critical scripts
- avoid heavy sliders and "moving parts"
- stabilize fonts and above-the-fold layout

Module 5: QA + Validation Workflow (so this doesn't rot)

Most teams don't fail because they don't know what to do. They fail because they don't have a workflow that keeps doing it after launch. Here's a QA pipeline that technical product owners can actually operationalize.
Step 0: Define "Done" (yes, literally write acceptance criteria)

For every service page / location page / template release, "Done" means:

- Page is crawlable (not blocked by robots.txt or auth)
- Page is indexable (no accidental noindex)
- Canonical is correct and consistent
- Page is in the sitemap as the canonical URL
- Structured data validates
- Mobile version has content + schema parity
- Core Web Vitals are within "good" thresholds (or you have a plan)

Use this as a release gate, not a nice-to-have.

Step 1: Validate syntax + eligibility

Tool 1: Rich Results Test. Google's Rich Results Test lets you test a publicly accessible page and see which rich results can be generated by the structured data it contains. Use it to catch:

- JSON-LD parsing errors
- missing required fields (for Google-supported rich results)
- rendering differences (desktop vs smartphone inspector)

Tool 2: Schema Markup Validator. Schema.org's validator checks schema syntax and structure even when the markup isn't tied to a specific Google rich result type. This is where you validate Service markup especially, because it's often "for understanding" more than for a rich result.

Step 2: Validate compliance (avoid structured data penalties)

Google's structured data guidelines warn that structured data issues can trigger a manual action. A structured data manual action removes eligibility for rich results but doesn't affect ranking in Google web search.

Key compliance rules to keep you safe:

- Don't mark up invisible content.
- Don't misrepresent (fake reviews, fake info, etc.).
- Don't block access to structured-data pages.

Step 3: Deploy in controlled slices

Roll out schema like you roll out infrastructure:

1. Deploy to a small set of pages.
2. Validate.
3. Expand coverage.

Google's Organization schema guide explicitly recommends validating with the Rich Results Test and then using URL Inspection to test how Google sees the page.
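Step 0's "Done" definition lends itself to automation: each criterion becomes a boolean, and the release fails if any is false. A sketch (the field names are illustrative, not an existing tool's API):

```python
from dataclasses import dataclass, fields

@dataclass
class PageReleaseChecks:
    """One boolean per acceptance criterion from the 'Done' definition."""
    crawlable: bool               # not blocked by robots.txt or auth
    indexable: bool               # no accidental noindex
    canonical_consistent: bool    # one canonical, all signals agree
    in_sitemap_as_canonical: bool
    structured_data_valid: bool
    mobile_parity: bool           # content + schema parity on mobile
    core_web_vitals_good: bool    # or a documented plan to get there

def release_gate(checks: PageReleaseChecks) -> list[str]:
    """Return the names of failed criteria; an empty list means 'Done'."""
    return [f.name for f in fields(checks) if not getattr(checks, f.name)]
```

Wiring this into CI (fail the build when the list is non-empty) turns the checklist into an enforced gate rather than a wiki page nobody reads.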
Step 4: Monitor and maintain

Set a monthly (or biweekly) "signal QA" routine:

- Review Search Console enhancement reports (where applicable)
- Crawl a sample set of URLs (Screaming Frog / Sitebulb)
- Diff canonical tags and index directives vs the last crawl
- Spot-check mobile rendered HTML
- Validate schema on key templates

AI visibility is rarely "set it and forget it." It's "set it and regress it accidentally 17 times." So build the habit.

Bonus: AI crawler access (the part everyone forgets)

If your goal is to show up in AI-powered search experiences, be aware that some systems have their own crawlers. For example, OpenAI documents its crawlers and user agents, including the OAI-SearchBot and GPTBot robots.txt tokens, so webmasters can manage how their content works with AI. OpenAI also notes you can allow OAI-SearchBot for search visibility while disallowing GPTBot for training, and that robots.txt updates can take about 24 hours to take effect.

Google likewise documents Google-Extended as a robots.txt token used to control whether content can be used for training Gemini models (and for grounding), and explicitly says it doesn't impact inclusion in Google Search and isn't a ranking signal.

You don't need to go down a rabbit hole here, but you do need to know:

- which bots you're allowing
- which bots you're blocking
- whether you've accidentally cut off the very machines you want reading you

Does schema affect AI answers?

Schema is best thought of as machine labeling. Google says it uses structured data it finds to understand page content and gather information about entities. So schema can absolutely influence how confidently a machine interprets "who you are" and "what this page represents."

But schema is not a guarantee:

- Google explicitly says it doesn't guarantee structured-data features will show in results.
- A structured data manual action removes rich result eligibility but doesn't affect ranking.

So: schema helps interpretation and attribution. It's not a cheat code.
Which schema types matter most?

For service businesses, the highest-leverage "identity" stack is:

- Organization (who you are + disambiguation)
- LocalBusiness (where you are + business details)
- Service (what you provide)
- FAQPage (structured Q&A, with the SERP caveats above)

How do we validate structured data?

Use both:

- Rich Results Test (Google eligibility + parsing)
- Schema Markup Validator (schema correctness beyond Google features)

Then monitor via Search Console and establish a recurring QA cadence.

What technical SEO impacts AI visibility the most?

The non-negotiables:

- crawlability + indexability (robots.txt, meta robots, headers)
- canonical consistency (avoid conflicting canonical signals)
- mobile-first parity (mobile is what gets indexed)
- performance thresholds (Core Web Vitals)

How do we fix canonical/indexing issues?

Start with the principle: one page, one canonical, one set of signals. Then:

- choose canonical URLs and enforce them in internal links, canonicals, and sitemaps
- avoid conflicting canonical techniques
- don't "solve" indexing with robots.txt if you need meta robots honored

If you only do one thing this week…

Pick your top 10 revenue pages (services + locations). For each page, verify:

- 200 status
- indexable
- canonical correct
- in sitemap as the canonical
- Organization/LocalBusiness/Service schema present and tied to the right entity IDs
- mobile parity
- passes Rich Results Test + Schema Validator

If you do that, you've built something most sites still don't have: a machine-readable brand. And that's the foundation of SEO for AI.
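One concrete artifact worth shipping from the "AI crawler access" section above is a robots.txt policy that separates search visibility from training. A hypothetical sketch using the tokens discussed (OAI-SearchBot, GPTBot, Google-Extended); adjust to your own policy:

```
# Allow OpenAI's search crawler so content can appear in AI search results
User-agent: OAI-SearchBot
Allow: /

# Block OpenAI's training crawler (opts content out of model training)
User-agent: GPTBot
Disallow: /

# Opt out of Gemini training/grounding; per Google's documentation,
# this does not affect inclusion in Google Search and is not a ranking signal
User-agent: Google-Extended
Disallow: /

Sitemap: https://example.com/sitemap.xml
```

Remember the earlier note: robots.txt changes can take about a day to be picked up, so verify after deploying rather than assuming.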
    An image of the Microsoft Copilot Checkout experience

    January 11, 2026

    Dave Burnett

Copilot Checkout and Brand Agents: Ecommerce AI Conversion Playbook

What just happened (and why it matters)

Microsoft Advertising just announced two capabilities aimed straight at the biggest leak in ecommerce: the gap between "I'm interested" and "I'm buying." That leak is shrinking because the shopping journey is being compressed into a single chatbot conversation.

They unveiled:

- Copilot Checkout: an in-conversation checkout experience designed to remove redirects and friction while keeping the merchant as the merchant of record (so you own the transaction and the customer relationship). You can apply to become a Copilot Checkout merchant.
- Brand Agents: AI-powered shopping assistants for your Shopify site that speak in your brand voice and guide shoppers from questions to purchase, informed by Microsoft Clarity data.

The real headline: conversions moved upstream

If you're still thinking in terms of "rank > click > product page > cart," you're already late. AI assistants are now doing the early-funnel work: clarifying intent, comparing options, and answering objections before a shopper ever hits your site. Microsoft describes this as the fight for conversions moving upstream: inside the same interaction where the shopper is researching and deciding. That's the point of Copilot Checkout: conversation-to-conversion in one place.

AI shopping isn't one thing. It's three overlapping things.

Microsoft's AEO/GEO playbook breaks the AI shopping ecosystem into AI browsers, AI assistants, and AI agents, and the key detail is that these capabilities overlap. Translation: the "surface" changes, but the inputs stay similar. If the system can't read your product facts, it can't recommend you. If an agent can't complete your checkout, your feed perfection doesn't matter.

Discovery and conversion are now both AI-driven

This announcement is basically a spotlight on our core belief: discovery and conversion are now one continuous AI-mediated flow.
You win it with the same three levers we use in our SEO for AI framework:

- On-page visibility (make your site and catalog easy for AI to find, load, and understand)
- Off-page trust & authority (earn the mentions and sources AI systems trust)
- Monitoring (track citations, visibility shifts, and the conversion impact of AI-assisted sessions)

Copilot Checkout: what it is (and what it isn't)

Copilot Checkout is positioned as a friction remover. The shopper stays inside Copilot, compares options, asks follow-ups, and checks out without being bounced through a dozen tabs. What should make merchants pay attention is the merchant-forward positioning: you stay the merchant of record and keep the customer data and relationship. Microsoft shared early performance signals from Copilot shopping journeys (internal data): more purchases within 30 minutes and a higher likelihood to purchase when shopping intent is present.

Brand Agents: your best in-store associate, now on your website

Brand Agents are the on-site counterpart: an assistant that speaks in your voice and helps customers explore, compare, and confidently click "buy." The pitch is speed: deployed in hours, not weeks. And the promise is measurable uplift: Microsoft highlights a Shopify merchant reporting over 3X higher conversion rates in agent-assisted sessions. Brand Agents tie into Microsoft Clarity for dashboards showing engagement and conversion metrics, so teams can improve performance based on how shoppers actually interact.

The playbook: win AI discovery + AI conversions with our 3-part framework

Here's the practical part. If you want Copilot (and other assistants/agents) to recommend you, you need to feed the machine and make your site "agent-ready."

1) On-page visibility: make your catalog AI-readable

Our SEO for AI framework starts here: make pages easy to find, load, and understand for people and AI.
From Microsoft's AEO/GEO guide: AEO is about clarity, with enriched, real-time data; GEO is about credibility, with an authoritative voice. Your on-page work supports both.

Technical foundations (do these before you debate prompts)

If your product facts aren't structured, fresh, and consistent, you're asking an AI assistant to improvise. That's not a strategy.

- Technical SEO basics: HTTPS, clean URLs, broken-link fixes, correct XML sitemaps and robots.txt.
- Performance: mobile-first with strong Core Web Vitals and fast media.
- Schema markup: Organization, Product, Offer, AggregateRating, Review, Brand, ItemList, FAQ.
- Dynamic fields: keep price, availability, variants (color/size), SKU/GTIN, and dateModified in sync across site and feeds.

Content that converts in an AI-first funnel

In AI shopping, assistants interpret queries as intents. Your content has to answer real questions directly, with citable blocks that match how people ask.

- Pillar page + supporting cluster (you're reading the pillar).
- Short Q&A blocks for AI snippets (shipping, returns, sizing, compatibility, warranties).
- Comparison tables and "Model A vs Model B" sections that make trade-offs explicit.
- Alt text, captions, and transcripts so multi-modal systems can understand visuals and video.

2) Off-page trust & authority: become the brand AI feels safe recommending

AI systems prioritize trustworthy sources. So does every human with a credit card. This is where your ability to get found in AI search really lives: authoritative mentions, credible citations, and consistent entity data across the web.

- Backlinks: publish link-worthy assets (original data, tools, checklists) and do targeted outreach.
- PR: earn coverage on high-crawl outlets so your claims are backed by third-party sources.
- Wikipedia/Wikidata/Knowledge Graphs: connect structured facts to citations so your entity is unambiguous.
- Directories: keep Name/Address/Phone consistent across major platforms.
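The Product/Offer/AggregateRating stack named under "Technical foundations" above can be sketched as JSON-LD. A minimal example in Python (all values are hypothetical placeholders; in practice they must stay in sync with your product feed):

```python
import json

# Hypothetical product; SKU, GTIN, price, and ratings are placeholders.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Shoe",
    "sku": "TRAIL-001",
    "gtin13": "0000000000000",
    "brand": {"@type": "Brand", "name": "Example Brand"},
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "213",
    },
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/trail-shoe/",
    },
}

# Emit the snippet your product template would embed in <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(snippet)
```

Generating the block from the same data source that feeds your shopping feed is the simplest way to honor the "dynamic fields in sync" rule: one source of truth, two outputs.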
3) Monitoring: measure the invisible early funnel

If discovery happens inside conversations, your analytics needs to catch up. The GEO framework emphasizes monitoring AI citations/mentions and comparing AI-driven traffic vs organic.

- Track AI citations and mentions (who is quoting you, and for what).
- Watch Search Console for indexing and schema errors.
- Measure agent-assisted conversion uplift, AOV changes, and engagement time vs non-assisted sessions.
- Create an alerting routine for feed freshness and out-of-stock errors.

Implementation checklist: what to do this week

If you want a simple action plan, here it is.

1. Audit your product feed and on-site schema for missing or inconsistent fields (price, availability, variants, ratings).
2. Add or improve Product/Offer/Review/FAQ/ItemList schema on high-value categories and best sellers.
3. Publish one "money" pillar page and 5-7 supporting articles focused on top converting intents (exactly what you're building right now).
4. Build a trust stack: verified reviews, clear policies, third-party mentions, and consistent entity profiles (Wikidata, directories).
5. Set up monitoring: AI citation tracking + Search Console checks + a weekly feed freshness report.

We know it sounds like a lot…

If this feels like a lot, good news: it's a system, not a mystery. Our SEO for AI service is built for this exact moment: make your site AI-readable, build trust that AI systems can cite, and measure what's happening as discovery shifts into assistants and agents. If you want, we'll map your current footprint, identify the fastest wins, and give you a clear sprint plan.

Sources

- Microsoft Advertising. "Conversations that Convert: Copilot Checkout and Brand Agents." January 8, 2026.
- Microsoft Advertising. "From Discovery to Influence: A Guide to AEO and GEO." Playbook referenced by the Microsoft Advertising blog post.
- AOK Marketing. "SEO for AI" service framework (on-page visibility, off-page trust & authority, monitoring).
Frequently asked questions (for AI snippets)

Q: Does Copilot Checkout replace my website checkout?
A: No. The point is to remove friction during discovery, but the merchant stays the merchant of record and still needs a functioning ecommerce site and accurate product data.

Q: Do I need Microsoft Merchant Center (MMC) to benefit?
A: Microsoft notes MMC is not required to sell through Copilot Checkout, but MMC product feeds help inform organic Copilot results.

Q: Are Brand Agents only for Shopify?
A: Brand Agents are currently presented as available for Shopify merchants, enabled through Microsoft Clarity.

Q: What's the fastest way to improve AI recommendations for my products?
A: Start with structured product facts (feeds + on-site schema) and tighten trust signals (reviews, policies, third-party mentions).

Q: What's AEO vs GEO in plain English?
A: AEO is clarity: accurate, structured, fresh product facts. GEO is credibility: authoritative sources and consistent brand signals that AI can trust.

Q: How do I measure "AI discovery"?
A: Track citations/mentions in AI answers, compare assisted vs unassisted sessions, and monitor feed/schema health so you can connect visibility to conversion.
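The "alerting routine for feed freshness and out-of-stock errors" from the monitoring section above can be sketched as a small scheduled check. A hedged sketch (the feed-item shape is a hypothetical illustration, not a real feed spec):

```python
from datetime import datetime, timedelta, timezone

def feed_alerts(items, max_age_days=2, now=None):
    """Flag feed items whose dateModified is stale or whose availability
    is out of stock. Returns a list of (sku, reason) tuples."""
    now = now or datetime.now(timezone.utc)
    stale_cutoff = now - timedelta(days=max_age_days)
    alerts = []
    for item in items:
        if item["dateModified"] < stale_cutoff:
            alerts.append((item["sku"], "stale"))
        if item["availability"] == "OutOfStock":
            alerts.append((item["sku"], "out_of_stock"))
    return alerts
```

Wire the output into whatever alerting you already have (email, Slack, a dashboard); the value is in running it on a schedule, not in the check itself.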
    Image of DeepMind’s Flamingo Visual Language Model Demonstrates SOTA Few-Shot Multimodal Learning Capabilities

    January 9, 2026

    Dave Burnett

Visibility used to mean: "Do I rank?" Now it also means: "Do I get mentioned, cited, and recommended when an AI answers the question?"

AI visibility is the umbrella. Rankings are one part of it. Entities, citations, and trust are the other part.

Where AI visibility actually shows up

AI visibility isn't one place. It's a set of surfaces:

- AI Overviews and other AI features in Google Search (generated summaries with links).
- Chat-based search and assistants (users ask, the model answers, sources get cited).
- Knowledge Panels / entity panels (the "snapshot" layer).
- Voice search and multimodal results (screenshots, images, products, local).

Google describes AI Overviews as a way to help users quickly get the gist of a complex topic and then explore links to learn more. That means there is still a web ecosystem under the AI layer – but you have to earn your way into it.

The new funnel: from click-based to trust-based

In a traditional funnel, the click is the start. In an AI funnel, the click is often optional. The model can summarize your value, compare you, and recommend you before the user ever visits your site. That is both terrifying and amazing.

AI visibility is built from 3 systems working together

1) Retrieval (can the system find your content?)

- Crawlability, indexation, and internal linking.
- Fast, accessible pages with clear topical focus.
- Content that matches real queries and real prompts.

2) Understanding (does the system know what it is looking at?)

- AI-readable structure: headings, lists, definitions, Q&A.
- Entity clarity: who you are, what you do, and what you are known for.
- Structured data: clean facts and identity links (sameAs).

3) Trust (will the system cite you?)

This is the part most brands underestimate. AI systems do not want to cite the loudest source. They want to cite the safest source.

- Third-party corroboration (press, citations, credible mentions).
- Authority links (editorial backlinks from relevant sites).
- Consistency across the web (profiles, directories, bios, and brand descriptions).

How to measure AI visibility (without lying to yourself)

If you can't measure it, you can't improve it. Here are the metrics we actually care about:

- Citation share: how often your domain/brand is referenced for target prompts.
- Prompt coverage: how many of your priority questions you show up for.
- Entity accuracy: whether AI summaries describe you correctly (category, differentiators, offerings).
- Assist rate: instances where AI visibility influences conversions even without a direct click.
- Search Console + analytics: changes in branded search, long-tail query growth, and assisted conversions.

What to do first (the AI visibility quick-start)

1. Pick 25-50 "money prompts" (buyer questions that lead to revenue).
2. Build or upgrade the pages that answer them (AI-readable structure + proof).
3. Implement structured data where it matches the content (Organization, Service, FAQ).
4. Strengthen your entity: About page, sameAs links, consistent profiles.
5. Earn corroboration: PR, directories, and citations from relevant third parties.
6. Track results and iterate monthly.

The hard truth (and the opportunity)

AI visibility rewards brands that are:

- Clear (easy to understand).
- Correct (facts match across the web).
- Cited (others talk about you).
- Current (your best pages stay updated).

If your competitors are still playing 2019 SEO, this is your opening.

Want AOK to build your AI visibility plan? We do this through our SEO for AI service: technical foundation, content structure, Knowledge Graph work, and authority building. You get a plan you can execute – and a team that can execute it for you.

FAQ

What is the difference between AI visibility and SEO?

SEO is the foundation (crawlability, rankings, content). AI visibility includes SEO plus entity understanding, citations, and trust signals that influence AI-generated answers.

Do AI Overviews still send traffic to websites?
Google positions AI Overviews as summaries with links that help people explore the web. In practice, the impact varies by query, but earning a cited link is one of the clearest ways to benefit.

Is AI visibility only for big brands?

No. Smaller brands can win by being the clearest, most specific, and most verifiable source in a niche – especially when they publish original proof and earn corroborating mentions.

How long does it take to improve AI visibility?

Technical and on-page improvements can help quickly. Trust and authority compound over time, especially in competitive industries.

Sources & References

- Google Search Central – AI features & your website (AI Overviews guidance): https://developers.google.com/search/docs/appearance/ai-features
- Google Search Central Blog – Top ways to ensure content performs well in AI Search (May 21, 2025): https://developers.google.com/search/blog/2025/05/succeeding-in-ai-search
- Google – How Search works (crawling, indexing, serving): https://developers.google.com/search/docs/fundamentals/how-search-works
- Google Search Central – Introduction to structured data: https://developers.google.com/search/docs/appearance/structured-data/intro-structured-data
- Bing Webmaster Tools – Marking up your site with structured data: https://www.bing.com/webmasters/help/marking-up-your-site-with-structured-data-3a93e731
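The citation-share and prompt-coverage metrics from "How to measure AI visibility" above reduce to simple ratios over an answer log. A hedged sketch (the log shape, prompt paired with cited domains, is a hypothetical illustration, not the output of any specific tracking tool):

```python
def citation_metrics(answer_log, our_domain, priority_prompts):
    """Compute citation share (how often we're cited across all logged
    answers) and prompt coverage (share of priority prompts we're cited for).
    answer_log: list of (prompt, list_of_cited_domains) pairs."""
    cited = 0
    covered = set()
    for prompt, domains in answer_log:
        if our_domain in domains:
            cited += 1
            if prompt in priority_prompts:
                covered.add(prompt)
    citation_share = cited / len(answer_log) if answer_log else 0.0
    prompt_coverage = len(covered) / len(priority_prompts) if priority_prompts else 0.0
    return citation_share, prompt_coverage
```

Tracked monthly against the same prompt list, these two numbers give you a trend line for AI visibility even when the underlying assistants don't expose analytics.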
    An image of the ChatGPT interface showing a response to how to get found in AI search

    January 7, 2026

    Dave Burnett

The new "first page of Google" is a paragraph inside an AI response. And here's the uncomfortable part: you can have the best product, the best service, and the best website… and still be invisible if AI can't find, understand, and trust what you're about.

So let's fix that. This is a 3-part framework for getting found in AI search, meaning:

- Your brand shows up in AI answers (places like ChatGPT, Gemini, AI Overviews),
- Your pages get cited (referenced as links in the answer), and
- Your entity data is consistent enough that machines stop guessing and start recognizing and trusting you.

The best part: this framework will still work a year from now, because it's designed to build trust. Even when the tools and interfaces change, trusted entities win (that's why you see the largest brands at the top: they are trusted).

We'll use a simple 3-part system:

1. On-page visibility (make your site easy to crawl, understand, and quote)
2. Off-page authority (earn credible mentions so AI trusts your entity)
3. Monitoring (track what's working, catch issues fast, iterate)

What "getting found in AI search" actually means

Before tactics, let's be precise. When someone asks ChatGPT, Google, or any AI assistant for a recommendation (or a definition, or a vendor list), the assistant often does three things:

1. It tries to understand the question and the entities in it (entities are brands, products, locations, people, categories).
2. It tries to retrieve or reference information it believes is relevant and credible.
3. It assembles a response, sometimes with citations and sometimes with a knowledge panel-style summary.

That's the new game. It's not just "rank my page." It's "make it easy for systems to build the right story about my brand… from everywhere."

The 3-Pillar Framework

Pillar 1: On-page visibility

If AI can't reliably fetch and parse your site, nothing else matters.
We break on-page visibility into three buckets: Technical SEO, Content, and Style, with specific, practical bullets under each. Let's expand each one into a real-world checklist.

Pillar 1a: Technical SEO

This is scary for a lot of people, and I get it. But technical SEO is the foundation that lets AI and search engines access and trust your pages.

Some basic things you have to do (in plain English):

- Use secure HTTPS (make sure there's an "s" in the "https" at the start of your website URL).
- Fix broken links (links to pages that don't work or no longer exist).
- Build mobile-first pages with strong performance (Google has been using mobile-first indexing since 2018; it's time to get on board).
- Ensure correct XML sitemaps and robots.txt (to check these, type https://yoursite.com/sitemap.xml and https://yoursite.com/robots.txt into your browser).
- Add schema markup for key types including Organization, FAQ, and Products (this is going to take a bit more explaining. Keep reading.)

Now here's the part that saves you months:

Technical SEO checklist for AI visibility

Crawl + index basics: robots.txt

- Go to https://yoursite.com/robots.txt. Make sure important pages can be found by the machines and are indexable (no accidental noindex, no blocked paths).
- Confirm your robots.txt isn't accidentally preventing the machines from crawling your site. Google is explicit: robots.txt tells crawlers what they can access, so make sure yours is set up correctly.

Sitemaps that actually help

- Go to https://yoursite.com/sitemap.xml to check your sitemap.
- Maintain a clean sitemap with canonical (original-source), valuable URLs.
- Remember what a sitemap is for: it tells search engines what you believe is important and helps them crawl more efficiently.
- Submit your sitemap in Search Console and monitor errors (Google explicitly recommends submitting via the Sitemaps report).
Performance: speed is not optional

Aim for 90+ PageSpeed scores and use Search Console's Core Web Vitals report to monitor mobile performance as part of AI-focused SEO work. And Google is straightforward on why this matters: Core Web Vitals measure real-world user experience for loading, interactivity, and visual stability, and Google recommends achieving a good score for success in Search.

Structured data (schema)

Take a deep breath. I promise this is not scary.

This is a huge lever for AI visibility because it turns your pages from "text on a screen" into "facts with labels." Think of it this way: the marketing side of your website, with pretty words and pictures, is for people. Structured data is for the machines, so they can understand what your website is actually about. That structured data is called schema.

Google says it directly: it uses structured data it finds on the web to understand page content and to gather information about the web and the world (including entities like people, books, and companies).

Start with schema that helps the machines answer "who/what is this website about?":

- Organization schema: to help it understand who this organization is and where it fits into the world.
- FAQ schema (for short Q&A blocks): to show expertise and trustworthiness in a specific area.
- Product or Service schema (for offerings): to explain what you sell.
- Article schema (for editorial content): for thought leadership.

The above is the minimum you should do. But if it's too much to handle, the one thing you must do is add sameAs links to your official profiles and trusted listings.

What are sameAs links? Google's Organization schema docs describe sameAs as URLs to other pages about your organization, and you can provide multiple sameAs URLs. For example, if you have a LinkedIn company page, a Facebook company page, and an X company profile, you can link to all of them in your sameAs schema.
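A minimal sameAs sketch, generated here with Python's json module so the emitted JSON-LD is guaranteed to be valid (the profile URLs are hypothetical placeholders; swap in your real official profiles):

```python
import json

# Hypothetical organization with sameAs links to its official profiles.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://example.com/",
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.facebook.com/example",
        "https://x.com/example",
    ],
}

# Print the block a page template would embed in <head>.
print('<script type="application/ld+json">')
print(json.dumps(organization, indent=2))
print("</script>")
```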
This tells the machine your website and company is the “same as” this LinkedIn profile, and is the same as this Facebook company page, and the same as this X company profile.  Linking all the correct references together.  Bottom line: sameAs schema helps link all the correct information about your entity together. Validate it Use the Rich Results Test to confirm what rich results your structured data supports (and whether it’s understandable by the machine). Congratulations!  You made it through the toughest part.  On to content! Pillar 1b: Content Here’s the shift: In “classic SEO,” content is often about winning a keyword and your page authority.  The goal is to get a particular page shown as the top answer on the Google search results for a particular query. If you type ‘Starbucks Frappuccino’ into Google, the Starbucks Frappuccino page should be the first result.  And it is. In “SEO for AI,” content is also about becoming a source and a reference. Here’s where to start: Keyword and gap analysis – understanding what keywords you show for, and those your competitors show for, and create a target list of keywords to build content about Use that keyword list to develop pillar pages (primary, in-depth pages like the one you’re reading now) with supporting clusters of articles (like this one on how to get found in AI search for long tail searches, or this one on how to get found in AI search in less than 24 hours) Create short Q&A blocks for AI snippets Once you have created pillar pages and supporting clusters, refresh existing content quarterly to stay current The content that makes AI confident about you If you want to be the brand AI recommends, publish pages that reduce uncertainty: 1) A “source of truth” hub A crystal-clear About page Service/product pages that state: who it’s for what it does what it costs (or how pricing works) how it compares where you operate how to contact you 2) Pillar + cluster structure (still works) Pillar example: “SEO for AI” (or 
your core topic) Supporting: specific questions, comparisons, FAQs, examples, case studies This structure helps search engines and AI retrieval systems find a page that matches a question and then pull the relevant section cleanly. 3) Q&A blocks that are actually useful Don’t hide the answer behind a 900-word warm-up. Put it near the top. 4) People-first content Google’s guidance is clear: their ranking systems aim to prioritize helpful, reliable information created to benefit people.  Not content created to manipulate rankings. That’s not just “SEO advice.” That’s the exact kind of content AI systems love to summarize and cite. Pillar 1c: Style of content This is where most people lose the AI game. Not because their ideas are bad, but because their pages are hard to extract answers from. You need:  Clear headings and bullet points AI-friendly sitemap and Knowledge Graph feed Evergreen FAQ/Glossary with schema Alt text, captions, and fast media What “AI-friendly” style looks like in practice Write like you want your content quoted Because… you do. Use: short paragraphs descriptive H2/H3 headings bullets and numbered steps definitional sentences (“X is…”) “Key takeaways” blocks Use media, but make it machine-readable If you use images, make them understandable: Google recommends writing alt text that’s useful and information-rich, avoiding keyword stuffing (which can be seen as spam). Build an evergreen FAQ / glossary This is one of the easiest ways to become “the source” for definitions. Structure it: Question as heading One-sentence answer 3–5 bullet expansions Internal links to deeper pages Add FAQ schema where appropriate Putting these on page pieces in place will help the machines understand this is your ‘entity home’, and the source of truth they should use. Pillar 2: Off-page authority You’ve got your on-page strategy set and you’re fixing things up if you found some areas that need improvement.   
Time for a reality check: Your website is not the whole internet. I know you know that, but it has to be said.  AI doesn’t just take your word for it, it builds “what it knows about you” from outside your site too. Independently verified, reputable sources are what the machines believe. To prove it, we ran an experiment to see if we could get a page to rank in AI in under 24 hours.  The results?  Success!   Here’s the full article about it: How to get found in AI search in less than 24 hours   That said, to build your off-page authority, we organize things into 5 parts:  Backlinks PR Wikipedia Knowledge Graph Directories Let’s expand each. Pillar 2a: Backlinks Backlinks still matter and for “SEO for AI,” think of them as: credibility receipts. So, how do you get backlinks (other sites linking and referencing your site)?  Here are two successful ways:  Create link-worthy assets (reports, tools) Do targeted outreach to top sites, giving them content they want Or the easiest way I like to think about it:  Do cool stuff. Talk about that cool stuff. Do more, bigger, cooler stuff. Talk about that stuff. Repeat.  Forever. What earns links and AI trust Original research (benchmarks, surveys, data studies) Industry tools (calculators, checklists, templates) Definitive guides that get referenced Expert commentary and roundups Case studies with real numbers Quick warning: don’t buy links. Google’s manual actions documentation is explicit: Buying links or participating in link schemes to manipulate ranking violates spam policies and can result in manual action against your site. Off-page authority is about earned credibility, not manufactured signals. Pillar 2b: PR (Public Relations) PR is “backlinks with momentum.” These are links from earned media that link back to your original content.  You’ve done something notable enough for journalists to write about you.  The machines love this. What does this look like? 
Press releases and media features you on their pages Why should you care?  Because high-crawl outlets amplify reach. AI systems are pulling sources from across the web, so getting mentioned in places that are: crawled frequently, trusted by readers, and cited by other publishers …is one of the fastest ways to expand your “AI footprint.” PR assets on your site that help AI find you A press page on your site (logos, boilerplate, leadership bios) A single source of truth, a “Company Facts” page (founding date, headquarters, ownership, key products, etc.) A media kit with consistent names and descriptions Pillar 2c: Wikipedia Wikipedia can be helpful, but it’s also where brands embarrass themselves.  How do they do that?  They think Wikipedia is a marketing site. It’s not.  It’s an encyclopedia. What is the right way to think about Wikipedia?   Make sure you have enough notability to qualify for a Wikipedia page Then, create or update neutral pages Finally, monitor and maintain accuracy over time Here’s the non-negotiable rule: If you have a conflict of interest (COI), Wikipedia strongly discourages directly editing affected articles. Wikipedia states that COI editing involves editing about yourself, your company, clients, employers, etc., and that COI editing is strongly discouraged; it recommends disclosure and proposing changes via talk pages rather than direct editing. So the ethical playbook looks like: ensure notability (independent reliable sources exist) disclose conflicts propose pages and changes to pages on talk pages let independent editors decide Wikipedia is not a marketing channel. It’s a public encyclopedia with rules. All that said, something like 20% of AI training data is comprised of Wikipedia facts and references, so if you can get a Wikipedia page, it’s worth it. 
Pillar 2d: Knowledge Graph This is where “getting found in AI” starts to feel like “getting recognized.” Google’s Knowledge Graph is described as a database of billions of facts about people, places, and things, used to surface factual info in search results. What does it look like? Go to Google.  Search for Starbucks.  Not the .com, or the drink, just put the word Starbucks into the search bar. When the result comes up, you should see information about the Starbucks corporation on the right side of the results page. This is called a knowledge panel.  The knowledge panel is a visual representation of Google’s Knowledge Graph about that entity (in this case Starbucks). So what do you do with that, and how do you get one? Unfortunately there is no signup, but you can influence the results by doing these three things:  1) Make your entity easy to identify Use consistent brand name, address, phone, and descriptions everywhere (see the directories section below) Add Organization schema (including sameAs) 2) Connect structured facts to sources You can help Google identify and populate a knowledge graph for you by building your own knowledge graph on your site. Think of this as a website built for the machines. You have your pretty marketing site for humans. You have your schema built on your site for the machines to understand what your site is about. The machines then link the information about you from your site and source of truth to other information they know about in the world, and create a knowledge graph and knowledge panel of their own about you. 3) Claim your knowledge panel Once Google has identified your entity, and given you a knowledge panel, you have to claim it as yours.  Google’s instructions for verification literally include: “At the bottom, click Claim this knowledge panel.” When you claim it, you can suggest edits and provide feedback. That matters because the panel becomes a “machine summary” many systems reference. 
Having a Google Knowlege Panel greatly increases your entity’s trust and authority.  This will help you rank. Pillar 2e: Directories Directories feel boring… until you realize they’re where entity consistency is enforced. You have to:  Claim your Google Business Profile, Bing, Apple Maps, Yelp Keep your Name, Address, Phone number (NAP) consistent everywhere Here’s what the platforms themselves say: Google’s Business Profile guidelines recommend maintaining consistent names and categories across locations so customers can identify your business in Maps and search results. Microsoft says to add or change business info in Bing Maps results, use Bing Places for Business to claim or update your listing. Apple says Apple Business Connect helps your brand get discovered and lets you set up your business so customers can find it in Maps, Apple Wallet, Siri, and more This is entity SEO disguised as local SEO. It’s how machines match: “Are these all the same business?” → “Yes.” → “Great, we trust this.” One last thing:  Don’t forget about your industry specific directories.  If there are places where lists of competitors live, or industry conferences and events happen.  Make sure you’re listed on them so the machines know who your peers are. When people are talking about people like you (publishing blogs, writing articles, making videos), make sure you’re in the conversation.  If you are, the machines know where to put you in their knowledge and that helps get you found in AI search. Pillar 3: Monitoring You can’t improve what you don’t measure. Here are the three main buckets of monitoring tools: LLMtel Search Console Other tools like SEMrush or Ahrefs or Moz. Let’s turn that into an operating system to monitor your progress. Pillar 3a: LLMtel LLMtel.com is an AI visibility tool that shows you:  Do the models recognize your entity? Do you show up in AI search across 17 chatbots and AI models for custom and generated questions? 
Who else shows up in those responses (your competitors for example)? How to use LLMtel monitoring without spiraling Pick a baseline set of prompts.  For example (pick some that work for you): “Best [service] in [city]” “Top [product category] for [use case]” “Alternatives to [competitor]” “What is [your brand]?” Track: mentions citations how your brand is described (positioning drift is real) When something changes: identify what source appeared/disappeared update the page that should be “the source” reinforce with PR/links if needed Pillar 3b: Search Console This is still one of the highest-signal tools you have because it shows you what Google is actually doing with your site. Google Search Console can: submit sitemaps and individual URLs alert you to issues show URL inspection data from the Google index We also use it for: fixing indexing and schema errors seeing queries, clicks, CTR, and snippets And Google’s performance report explains exactly what you can measure: total clicks, total impressions, average CTR, average position Use URL Inspection for: seeing Google’s indexed version of a page testing whether a live URL is indexable requesting indexing And don’t skip structured data validation: Use Rich Results Test to see which rich results can be generated by the structured data on a page. 
Pillar 3c: Other tools Here’s a practical “other tools” stack you can keep forever (swap vendors anytime): Google Analytics (GA4 or equivalent): AI referrals, assisted conversions, page engagement Server logs: crawl frequency, bot access issues, wasted crawl Speed/CWV tooling: Core Web Vitals changes over time Rank tracking: organic + “AI surfaces” where possible Backlink monitoring: new/lost links and brand mentions Brand monitoring: alerts for unlinked mentions and PR pickup A simple 90-day plan you can follow If you’re starting from scratch, here’s a clean sequence that matches the framework: Days 1–14: Make it possible to be found Fix crawl/index issues (robots, noindex, canonicals) Submit/clean sitemaps and monitor errors Add Organization schema + sameAs Validate structured data with Rich Results Test Days 15–45: Become the best source Build 1–2 pillar pages + 8–12 supporting articles Add short Q&A blocks for key queries Add FAQ/glossary pages and keep them evergreen Improve alt text and media clarity where relevant Days 46–75: Expand authority off-site Publish 1 link-worthy asset (report/tool/template) Start PR outreach and earn mentions Clean up directory consistency (GBP, Bing, Apple, Yelp) Days 76–90: Lock in entity trust + monitoring Improve knowledge graph signals (Wikidata, sources, consistency) Claim your Google knowledge panel if available Set up ongoing AI visibility tracking and alerts Review Search Console performance weekly The “stands-the-test-of-time” update checklist AI changes fast. But the inputs that shape AI answers change at a human pace. Use this playbook quarterly: Refresh your top pages (pricing, comparisons, “best of” lists) Re-validate schema after site changes Review Search Console for indexing + snippet changes Check AI visibility prompts and document changes Audit entity consistency across directories Watch for “narrative drift” (how AI describes you) and correct with content + PR Final thought “SEO for AI” isn’t a new trick. 
It’s classic SEO fundamentals (crawl, content, authority) plus entity clarity and citation readiness. Or you can keep it simple: make your site fast, clear, easy for AI to read… then build the trust and authority to stay on top. And if you remember just one thing: AI doesn’t just read your website. It reads the web’s story about you. So you need to shape both.