So… I did the most marketer thing possible.
I wrote a post about how to get found in AI search in less than 24 hours.
And then I tried to get that post found in AI search in less than 24 hours.
Yes. It’s painfully meta.
No. It didn’t happen in 24 hours.
And that’s exactly why this experiment was worth running.
Why I ran experiment #2
Experiment #1 was the “shiny object” test.
It was about the ChatGPT Entity Panel, a fresh discovery, a new behavior, something that felt under-covered and (most importantly) quote-worthy.
That first test showed me something:
If you publish something useful, structured, and genuinely new… AI systems can pick it up quickly.
So the obvious follow-up question was:
Was that speed because of my process… or because the topic itself was a novelty gift-wrapped for AI retrieval?
Experiment #2 was designed to remove the novelty advantage.
I wanted a long-tail keyword search that wasn’t tied to a new feature, a breaking announcement, or some unique “first on the internet” moment.
Just a longer-term keyword.
A more regular topic.
A long-tail phrase that people might actually search.
The target query
The test phrase was:
“How to get found in AI search in less than 24 hours.”
Long-tail.
Specific.
A little ridiculous.
But also… a very real question people are asking right now.
And if AI search is the new battleground, then long-tail prompts are the new long-tail keywords.
What I did (same playbook as experiment #1)
I didn’t reinvent the wheel for experiment #2.
I used the same mechanics as the first test:
- Published the post on the AOK Marketing blog
- Cross-linked it internally (so crawlers and humans can discover it faster)
- Posted it on LinkedIn via the company page
- Wrote a LinkedIn article that mirrored the core idea but was different enough to stand on its own (not a lazy copy/paste)
In other words:
I didn’t just “publish and pray.”
I published and distributed.
The result at 24 hours
This is where the experiment gets honest.
At 24 hours:
- It didn’t work.
- There were no results.
- There was no indexing in Google.
- There was no indexing in Bing either.
That last part surprised me.
Because the big narrative right now is:
“AI crawls fast.”
“AI finds everything.”
“AI is real-time.”
But here’s the reality:
If your page isn’t indexed… it’s basically invisible to anything that depends on web retrieval.
And yes, AI search systems can feel magical, but when they’re using web search, they still need the web to have actually found your page.
No crawl = no index = no citation.
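Before assuming a model "missed" you, it's worth verifying the discoverability basics yourself. Here's a minimal sketch that checks whether a URL actually appears in a sitemap (the sitemap snippet and URLs are illustrative; in practice you'd fetch your own over HTTP):

```python
import xml.etree.ElementTree as ET

# Namespace prefix used by the standard sitemap protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_in_sitemap(sitemap_xml: str) -> set[str]:
    """Extract every <loc> entry from a sitemap XML document."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

# Illustrative sitemap -- swap in the contents of your own /sitemap.xml.
sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/ai-search-post</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

listed = urls_in_sitemap(sample)
print("https://example.com/ai-search-post" in listed)  # True
```

If the new post isn't in the sitemap and isn't internally linked, no crawler has a reason to find it.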
The pivot: I requested indexing in Google Search Console
After the first 24 hours, the page still wasn’t indexed.
So I submitted the URL for indexing.
Then I waited again.
And about 24 hours after that (roughly 48 hours from publishing), things started to move.
That timing matters because it validates the most unsexy truth in all of this:
AI visibility is still gated by indexing.
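The "request indexing" button lives in the Search Console UI, but you can at least check a page's index status programmatically via Google's URL Inspection API. A minimal sketch of the request body, assuming the documented `siteUrl`/`inspectionUrl` fields (the URLs are placeholders, and the real call needs an authenticated Search Console client):

```python
# Sketch only: builds the JSON body for Search Console's
# urlInspection.index.inspect endpoint. Field names follow Google's
# public docs; verify against the current API reference before use.

def inspection_request(property_url: str, page_url: str) -> dict:
    """Return the request body for a urlInspection.index.inspect call."""
    return {
        "siteUrl": property_url,       # your verified Search Console property
        "inspectionUrl": page_url,     # the page whose index status you want
    }

body = inspection_request(
    "https://example.com/",                # placeholder property
    "https://example.com/ai-search-post",  # placeholder page URL
)
print(body)
```

Polling that endpoint once a day would have told me exactly when the page crossed from "crawled" to "indexed," instead of me refreshing search results by hand.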
The “LLMtel said no… but Gemini said yes” moment
After the second 24-hour window, I ran another check using LLMtel (my “don’t trust your own prompt history” sanity check).
And LLMtel didn’t show the post in the answer.
So by that measurement, the experiment looked like a fail.
Except… something else happened.
We got cited anyway.
Gemini (with web search enabled) returned an answer for the query and cited the AOK Marketing article.
So here’s the weird (and interesting) part:
- The AI visibility tool didn’t detect the mention, so by its measure we weren’t there
- But the live AI search experience surfaced it
Which tells me we’re dealing with a moving target:
- Different retrieval layers
- Different caching behavior
- Different timing windows
- Different model behaviors
- Different “what counts as found” definitions
And that’s not a complaint.
That’s the new game board.
Bing did us a favor
After seeing the Gemini citation, I double-checked Bing.
And Bing did two very important things:
- It ranked the AOK Marketing post #2 for the full long-tail phrase
- Copilot Search included us twice in the AI-generated response experience

Meaning:

- Our blog post showed up as a referenced result
- And our LinkedIn article also showed up as a referenced result
Two listings.
Two citations.
Same prompt.
This is why I’m calling experiment #2 a success.
Not because it hit the 24-hour headline.
But because it proved something more useful:
You can earn AI citations and strong rankings for long-tail prompts with a repeatable process… even when the topic isn’t “new.”
It just might not happen on your preferred timeline.
Why I’m still calling this a win
Let’s be blunt:
The “24 hours” part is the marketing hook.
The real test is: Do you show up at all?
And we did.
- Gemini cited us (even if it took longer)
- Bing ranked us and cited us twice (blog + LinkedIn)
That’s a visibility footprint across multiple surfaces:
- Traditional SERP ranking
- AI-generated summary citations
- Platform-level distribution validation (LinkedIn as an asset, not just a megaphone)
If you’re doing SEO for AI, that’s what you want.
Not just traffic.
Presence.
What I think mattered (working hypotheses)
This is still early, and one experiment isn’t a law of physics.
But here’s what I believe experiment #2 reinforces:
1) Indexing is still the first domino
If you want to be cited by an AI system using web retrieval, you need:
- Crawl access
- Index inclusion
- Clean rendering
- Discoverability pathways (internal links, sitemap health, distribution)
AI doesn’t “magically know your post exists” if the post doesn’t exist in the systems it’s pulling from.
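The first domino, crawl access, can be sanity-checked in a few lines with Python's standard-library robots.txt parser. A quick sketch (the rules and URLs are illustrative; in practice you'd load your live /robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt -- note the separate group blocking OpenAI's GPTBot.
rules = """
User-agent: *
Disallow: /private/

User-agent: GPTBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Googlebot falls under the wildcard group: the post is crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/ai-search-post"))  # True
# GPTBot is blocked outright by its own group.
print(parser.can_fetch("GPTBot", "https://example.com/anything"))           # False
```

If your robots.txt blocks the AI crawlers, no amount of distribution will get you cited by the systems behind them.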
2) Long-tail works… but doesn’t get priority
In experiment #1, I wrote about something unique and under-covered. That post got indexed right away and was cited in AI answers within 24 hours.
In experiment #2, I wrote about a broader topic.
Even though the long-tail phrase is specific, the topic itself (“AI search visibility”) is not rare.
That probably means:
- More competing pages
- Less urgency to surface a new source
- More time required for the engines to “decide” you belong in the set
3) Multi-surface distribution creates multi-surface citations
The biggest “aha” from Bing wasn’t just that AOK ranked.
It’s that Copilot gave us two entries.
- One from the blog.
- One from the LinkedIn article I wrote.
That’s not an accident.
That’s what happens when you stop thinking like:
“I need one page to rank.”
…and start thinking like:
“I need multiple credible surfaces to exist for the same idea.”
Experiment #3: going shorter
Experiment #2 targeted the long-tail phrase:
“How to get found in AI search in less than 24 hours.”
Now I’m moving to a shorter version:
“How to get found in AI search.”
The question is simple:
Can we win the shorter phrase, not just the long-tail?
And after that, I want to push into something even more specific and branded.
Because eventually, the goal isn’t just:
“show up for a keyword.”
It’s:
“become the cited source when someone asks the best possible question in your category.”
So… wish me luck.
I’m going to try a few different things.
And we’ll see what happens (and don’t worry, I’ll keep you in the loop, just like I did for the last experiment).
About The Author
Dave Burnett
I help people make more money online.
Over the years I’ve had lots of fun working with thousands of brands and helping them distribute millions of promotional products and implement multinational rewards and incentive programs.
Now I’m helping great marketers turn their products and services into sustainable online businesses.
How can I help you?