
AI SEO · April 8, 2026 · 14 min read · Dr. Kebar Y

What ChatGPT and Perplexity actually cite for local services

Generative search is no longer a thought experiment for local services. By the end of 2025, ChatGPT, Perplexity, Claude, and Google's AI Overviews were collectively responsible for an estimated 8 to 14% of high-intent local service queries in the United States, depending on the vertical and the metro. For contractor verticals in Florida, our internal tracking on Rocket Garage Door Services puts the number at the high end of that range, and it is growing month over month.

I spent the last six months testing every major LLM on the same set of Polk County contractor queries to see what they cite, what they ignore, and why. The work was inspired by a question Andre asked me on a discovery call with a roofing contractor: 'Is there anything we can actually do to get cited by ChatGPT, or is it pure luck?' The short answer turned out to be: it is not luck, and the signals are surprisingly consistent across models once you know where to look.

Quick context on my background, because it shapes how I read the data. I spent two years inside Google and currently work at Meta. My doctoral research focused on small business survival patterns. I bring both perspectives because LLM citation behavior is closer to academic citation patterns than to traditional SEO ranking, and the academic frame is genuinely useful here. As always at Reimagine, every tactic in this post has been validated against Rocket Garage Door Services, the contractor my husband Andre and I started from scratch to be our internal lab.

This post is the long answer to Andre's question. Methodology, signals that worked, signals that did not, and the practical playbook we now run on Rocket. If you are a contractor wondering whether AI search matters for your business, the answer is yes, and the actions you can take are concrete.

Methodology

I built a query set of 60 templates across 12 cities and 4 contractor verticals: roofing, garage doors, paving, and fencing. The templates covered transactional queries (best garage door repair Lakeland), informational queries (how much does a new roof cost in Polk County), and comparison queries (best roofers in Bartow vs Auburndale). Each template was rendered for each city, producing 720 distinct queries per week.
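For readers who want to reproduce the setup, here is a minimal sketch of the rendering step. The templates and cities below are illustrative stand-ins rather than my exact lists, but the mechanics are the same: one template, one city, one rendered query.

```python
from itertools import product

# Illustrative stand-ins: the real study used 60 templates and 12 cities (720 queries/week).
CITIES = ["Lakeland", "Bartow", "Auburndale", "Winter Haven"]
TEMPLATES = [
    ("transactional", "garage doors", "best garage door repair {city}"),
    ("informational", "roofing", "how much does a new roof cost in {city}"),
    ("comparison", "roofing", "best roofers in {city} vs nearby cities"),
]

def render_weekly_queries(templates, cities):
    """Render every template for every city, tagging intent and vertical."""
    return [
        {"intent": intent, "vertical": vertical, "city": city,
         "query": template.format(city=city)}
        for (intent, vertical, template), city in product(templates, cities)
    ]

weekly_queries = render_weekly_queries(TEMPLATES, CITIES)
print(len(weekly_queries))  # 12 in this toy set; 720 in the real one
```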

Every Monday for 26 weeks, I ran the full query set through ChatGPT, Perplexity, Claude, Google AI Overviews, and Gemini. For each query, I logged the cited URLs, the cited business names, and the exact phrasing of the citation. Cited businesses were then cross-referenced against their structured data, GBP profile completeness, review counts, content depth on cited pages, and presence on trusted third-party domains.
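Each run produced one record per citation. A rough sketch of what such a record could look like is below; the field names are mine for illustration, not a published schema, but they cover the signals described above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class CitationEvent:
    """One logged citation event; illustrative field names, not the study's exact schema."""
    run_date: date                        # the Monday the weekly batch was run
    model: str                            # chatgpt / perplexity / claude / ai_overviews / gemini
    query: str                            # rendered query text
    cited_url: Optional[str]              # URL cited in the answer, if any
    cited_business: Optional[str]         # business name as it appeared in the answer
    citation_text: str                    # exact phrasing of the citation
    # Signals cross-referenced after logging
    has_localbusiness_schema: bool = False
    gbp_profile_complete: bool = False
    review_count: int = 0
    review_average: float = 0.0
    trusted_domain_mentions: List[str] = field(default_factory=list)
```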

The dataset by week 26 included roughly 93,000 individual citation events. Big enough to see patterns. Small enough that I could read every cited URL by hand for the businesses that showed up most often. The patterns I am about to describe held across all five models, with model-specific quirks I will note where they matter.

What got cited

Pages with named, structured FAQ blocks at the top were cited disproportionately. LLMs love semantically clean question and answer pairs, and the closer those pairs sit to the top of the page, the more likely they are to be quoted. Pages with FAQs in the bottom third of the page were cited about a third as often as pages with FAQs in the top third, holding everything else constant.

Pages that included specific local entities by name were cited far more often than generic city pages. 'Lakeland' alone is weak. 'Lake Hollingsworth neighborhood in Lakeland' is much stronger. Named landmarks, neighborhood names, school zones, and county references all compounded. Generic templated city pages with the city name swapped in were essentially invisible.

Businesses with 200 or more reviews and a sustained 4.7 star average were cited at roughly 4x the rate of businesses with under 100 reviews, even when the smaller business ranked higher in classic Google. LLMs appear to weight social proof much more heavily than the Google Maps algorithm does, which is the single biggest divergence between the two systems and the most actionable insight in this entire post.

Pages that contained explicit pricing ranges, even rough ranges, were cited far more often than pages that hid pricing behind a contact form. If your pricing page says 'starting at $189 for a standard spring replacement,' you are giving the model a fact it can quote. If your pricing page says 'contact us for a custom quote,' you are giving the model nothing.

Wikipedia mentions and trusted-domain citations of the business name compounded everything else. A business cited once on a city's Wikipedia page or on the local chamber of commerce site received roughly 2x the LLM citations of an otherwise equivalent business with no third-party mentions. Trusted domains act as a kind of ground truth signal.

What did not get cited

Thin programmatic city pages were essentially invisible. We saw dozens of contractors with hundreds of templated city pages that ranked in classic Google but received zero LLM citations across the entire 26-week test. The lesson is brutal and simple: programmatic SEO works for ranking, but it does not work for citations. The two systems weight content depth differently, and LLMs are far less forgiving of thin content.

Pages full of marketing language without facts were ignored. Phrases like 'we are committed to quality and excellence' do not get quoted because they are not quotable. They contain no facts, no entities, no numbers. LLMs are essentially fact-extraction engines, and a page with no facts has nothing to extract.

Businesses that ranked in Google Maps top 3 but had under 50 reviews were rarely cited. Map ranking and LLM citation are not the same problem. A business can dominate the Maps pack and still be invisible to ChatGPT if its review base is thin or its on-site content is shallow.

Sites without LocalBusiness schema were cited at a fraction of the rate of structured-data sites, even when content quality was equivalent. Structured data is doing a lot of work behind the scenes here. If you only do one technical thing this quarter, validate and expand your LocalBusiness schema across every service and city page.

Long pages without clear heading hierarchy got partial citations or none at all. A 3,000 word page with three H2s spread across the whole document got cited far less often than a 1,500 word page with eight clean H2s and a logical structure. LLMs need heading anchors to chunk and quote pages. Give them anchors.

The factual density score

I built an internal metric I call factual density: the number of verifiable, specific facts (numbers, named entities, dated events) per 100 words on a page. Pages above a factual density of 5 were cited at roughly 7x the rate of pages below 2. The pattern was consistent across all four verticals and all five LLMs.

For Rocket, we rewrote every service page to hit at least 6 facts per 100 words. That sounds high until you actually count. A sentence like 'We replaced both torsion springs on a double-car door in Auburndale on March 14, using 27,000-cycle springs rated for 12 years of typical residential use' contains six verifiable facts in 24 words. Once we trained the team to think in facts, the rewrites became easier than expected.
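For anyone who wants to audit their own pages the same way, here is a crude approximation of the scorer. It counts numeric tokens and mid-sentence capitalized tokens as facts; it is a regex-level heuristic, not my production scorer, and a serious version would use named-entity recognition and date parsing.

```python
import re

def factual_density(text: str) -> float:
    """Rough proxy for facts per 100 words: numeric tokens (prices, dates, counts)
    plus mid-sentence capitalized tokens as entity candidates. Heuristic only."""
    words = text.split()
    if not words:
        return 0.0
    facts = 0
    for i, word in enumerate(words):
        stripped = word.strip(".,;:!?")
        if re.search(r"\d", stripped):        # 27,000-cycle, $189, 12, 14
            facts += 1
        elif i > 0 and stripped[:1].isupper() and not words[i - 1].endswith((".", "!", "?")):
            facts += 1                         # Auburndale, March, Polk, ...
    return 100.0 * facts / len(words)

example = ("We replaced both torsion springs on a double-car door in Auburndale "
           "on March 14, using 27,000-cycle springs rated for 12 years of "
           "typical residential use")
print(round(factual_density(example), 1))  # -> 20.0 under this crude proxy
```

Dense single sentences will score far above the page-level thresholds; what matters is the average across the whole page.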

Citation rate across LLMs roughly tripled within 8 weeks of the rewrites going live. ChatGPT specifically started citing Rocket by name for 'best garage door repair Polk County' type queries. Perplexity started linking Rocket reviews directly in answers. Within 4 months, LLM-attributed calls were 11% of total Rocket inbound calls, based on phone-asked attribution. We ask new callers how they found us, and we log every answer, which is how we know.

Model-specific quirks

ChatGPT weights review velocity and Wikipedia presence the most heavily of the five models. If you can only optimize for one model, optimize for ChatGPT, because it has the largest user base and the strongest review-velocity signal.

Perplexity weights structured data and explicit pricing more heavily than ChatGPT. Perplexity is also the most likely to link directly to a business website rather than to a third-party directory, which makes it the highest-converting of the five models per citation in our tracking.

Claude is the most conservative citer. It rarely cites local businesses by name unless multiple trusted sources agree. Getting cited by Claude is the hardest of the five but the strongest validation that your authority is broad enough to be uncontested.

Google AI Overviews behave more like the existing local pack: maps presence and reviews dominate. Optimization for AI Overviews is essentially the same as optimization for Maps, with an added emphasis on FAQ schema.

Gemini is the noisiest and the least predictable. We do not currently optimize for Gemini specifically because the citation patterns shift week to week and the user base remains the smallest. We monitor it but we do not chase it.

The playbook for contractors

Audit your top 10 service and city pages for factual density. Rewrite anything below a score of 4. Aim for 6 or higher. Use real numbers, named neighborhoods, dated examples, and specific products. Every sentence is a chance to add a fact, and every fact is a chance to be quoted.

Add structured FAQ blocks to the top half of every service and city page. Use real customer questions, not fabricated ones. Answer in 2 to 4 sentences with named local entities in the answer. The FAQ block should feel like it was written by a person who answers these questions every day, because LLMs are surprisingly good at detecting voice mismatches.
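To make the FAQ recommendation concrete, here is a minimal sketch of an FAQPage JSON-LD block, generated with a few lines of Python so it can be templated across pages. The question and answer are placeholders built from examples elsewhere in this post; swap in your real customer questions.

```python
import json

# Hypothetical FAQ content; replace with real customer questions and answers.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How much does a torsion spring replacement cost in Lakeland?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": ("Spring replacements typically run $180 to $360 in Polk County "
                         "depending on door size. Most double-car doors near Lake "
                         "Hollingsworth take under two hours."),
            },
        },
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(faq_jsonld, indent=2))
print("</script>")
```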

Publish pricing ranges. Vague pricing kills LLM citations. If you are afraid of competitors seeing your numbers, publish ranges instead of fixed prices. 'Spring replacements typically run $180 to $360 in Polk County depending on door size' is enough fact for an LLM to quote and not enough for a competitor to undercut you on.

Pursue review velocity aggressively. The 200-review threshold is not a vanity metric; it is the floor for serious LLM visibility. If you are at 80 reviews today, your 90-day plan should be to get to 200. Every job should trigger a review request within 24 hours of completion. Follow up once. Track the conversion rate.
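The 90-day math is worth writing down. Here is a quick back-of-the-envelope sketch; the jobs-per-day and request-to-review conversion numbers are assumptions you should replace with your own.

```python
# Back-of-the-envelope review-velocity math for the 80 -> 200 example above.
current_reviews = 80
target_reviews = 200
days = 90
jobs_per_day = 4               # assumed average completed jobs per day
request_to_review_rate = 0.35  # assumed share of requests that become reviews

needed = target_reviews - current_reviews                  # 120 new reviews
needed_per_day = needed / days                             # ~1.33 reviews/day
expected_per_day = jobs_per_day * request_to_review_rate   # 1.4 with these assumptions

print(f"Need {needed_per_day:.2f} new reviews/day; "
      f"expecting {expected_per_day:.2f}/day at a {request_to_review_rate:.0%} conversion rate")
```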

Build complete LocalBusiness schema with full service area, verified NAP, services list, and FAQ schema on every page. This is technical foundation work that most contractors skip. Skipping it is the cheapest way to underperform in AI search.
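Here is a minimal sketch of what that LocalBusiness block could look like, again generated with Python so you can template it across service and city pages. The phone number, address, and services are placeholders; everything should match your verified GBP data exactly, and you should only publish fields you can verify.

```python
import json

# Placeholder NAP and services; keep these identical to your verified GBP listing.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Rocket Garage Door Services",
    "telephone": "+1-863-555-0100",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Lakeland",
        "addressRegion": "FL",
        "addressCountry": "US",
    },
    "areaServed": [
        {"@type": "City", "name": city}
        for city in ["Lakeland", "Bartow", "Auburndale", "Winter Haven"]
    ],
    "hasOfferCatalog": {
        "@type": "OfferCatalog",
        "name": "Garage door services",
        "itemListElement": [
            {"@type": "Offer",
             "itemOffered": {"@type": "Service", "name": "Torsion spring replacement"}},
            {"@type": "Offer",
             "itemOffered": {"@type": "Service", "name": "Garage door opener installation"}},
        ],
    },
}

print(json.dumps(local_business, indent=2))
```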

Get cited by trusted domains. Chamber of commerce pages, local news, niche industry directories, and Wikipedia city pages where appropriate. One trusted citation outperforms ten low-quality ones. Stop chasing volume and start chasing trust.

Re-test your queries every month. LLM behavior shifts faster than classic SEO. Models update, ranking weights shift, citation patterns evolve. The contractors who win at AI search treat it as a moving target and check the scoreboard often.

Takeaway

LLM citation is not luck. It is a tractable problem with measurable signals, and the signals are different from classic SEO signals, which is why most agencies are still flying blind. We are not flying blind because we have a contractor in our garage to test against, and the test rig has been running for half a year.

If you want help getting cited by ChatGPT and Perplexity for your local service queries, Reimagine bundles AI citation work into our Content SEO engagements. Book a discovery call and we will run your top 5 queries through our test rig before you sign anything. You will see exactly where you are cited, where you are not, and what we would change first.

Written by

Dr. Kebar Y

Co-Founder, Reimagine Digital Marketing · PhD in Marketing, with doctoral research on small business failure patterns. Ex-Google (2 years). Currently at Meta (1+ year).
