1. Introduction — Why We Track Our Own AI Visibility

AuraCite is a Generative Engine Optimization (GEO) platform that helps brands understand how AI engines perceive, mention, and recommend them. We built technology that monitors ChatGPT, Claude, Gemini, Perplexity, and other AI engines — analyzing every mention, citation, and recommendation for patterns that drive brand visibility. It would be strange, perhaps even suspicious, if we didn't use our own tool to track ourselves.

This report is Day One. It is the "before" photograph — an honest baseline measurement of AuraCite's AI visibility across all seven major engines we monitor. We expect the numbers to be near zero. AuraCite launched recently. We are a brand-new name in a nascent market category. No AI engine has reason to know who we are — yet.

That's exactly the point. The value of a baseline is not in looking good. It is in establishing a measurable starting point so that every improvement, every mention gained, every citation earned can be tracked, documented, and attributed to specific actions. When our Brand Strength Score is 70+ across all engines — and it will be — this report will prove the journey was real.

Our target: Brand Strength Score > 70 across all major AI engines. Our current status: early-stage brand with a freshly launched content campaign. For the full strategy behind this effort, see our meta case study on tracking our own AI visibility.

"Every brand that matters in 2026 started with a Brand Strength Score of zero. The only ones who stayed there are the ones that never measured."

2. Methodology — How We Measured

AI Engines Monitored

AuraCite monitors seven AI engines that collectively represent the vast majority of AI-generated search and recommendation interactions in 2026. Each engine has a distinct architecture, training data pipeline, and content discovery method — which means visibility strategies must account for how each engine finds and evaluates information.

Engine | Architecture | Content Discovery
ChatGPT (OpenAI) | Transformer LLM + browsing | Training data cutoff + Bing web search
Claude (Anthropic) | Transformer LLM | Training data cutoff (no live web by default)
Gemini (Google) | Multimodal LLM + Google Search | Google Search index + training data
Perplexity | RAG + multiple LLMs | Real-time web crawl + index
Bing Copilot (Microsoft) | GPT-4 + Bing Search | Bing search index + live web
SearchGPT (OpenAI) | LLM + dedicated search | OpenAI web index (early stage)
You.com | RAG + web search | Proprietary web crawl + index

Test Queries

We tested each engine with eight queries that represent the highest-intent searches a potential AuraCite user would make. These queries span brand-specific, category, and problem-awareness search intents:

  1. "best GEO tools 2026" — Category discovery, high purchase intent
  2. "AI visibility tracking tool" — Problem-aware, direct product search
  3. "generative engine optimization tool" — Technical category search
  4. "GEO tool comparison" — Evaluation-stage query
  5. "how to track brand mentions in AI" — Problem-aware, informational
  6. "AI brand monitoring platform" — Adjacent category search
  7. "best tools for AI search optimization" — Broad category discovery
  8. "track ChatGPT brand recommendations" — Specific use-case query

Metrics Per Engine

For each engine and each query, we capture five core metrics that compose the overall Brand Strength Score. This scoring methodology is consistent with AuraCite's production analytics — we use the exact same algorithms on our own brand that our customers use on theirs.
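The production scoring algorithm is AuraCite's own; purely as an illustration of how per-query signals could roll up into a 0–100 composite, here is a minimal sketch. The function name, the 40/30/20/10 weighting, and the normalization are all hypothetical, not the production formula:

```python
def brand_strength(mentions: int, citations: int, sentiment: float,
                   rec_strength: float, num_queries: int = 8) -> float:
    """Hypothetical composite score on a 0-100 scale.

    mentions and citations are counts across the test queries;
    sentiment and rec_strength are assumed normalized to [0, 1].
    The 40/30/20/10 weighting is illustrative only.
    """
    mention_rate = min(mentions / num_queries, 1.0)
    citation_rate = min(citations / num_queries, 1.0)
    score = 100 * (0.4 * mention_rate
                   + 0.3 * citation_rate
                   + 0.2 * sentiment
                   + 0.1 * rec_strength)
    return round(score, 1)

# Example: 2 mentions, 1 citation, neutral sentiment (0.5), weak
# recommendation strength (0.25) across 8 queries.
print(brand_strength(2, 1, 0.5, 0.25))
```

Whatever the real weights, the key property is monotonicity: more mentions, more citations, better sentiment, and stronger recommendations can only raise the score.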

3. Baseline Results — Per Engine Analysis

The following data was collected on January 15, 2026. Each engine was queried with all eight test queries. Results are aggregated per engine and then analyzed individually.

Engine | Brand Strength | Mentions | Citations | Sentiment | Notes
ChatGPT | 3 | 0 | 0 | N/A | Not yet in training data
Claude | 2 | 0 | 0 | N/A | Not yet recognized
Gemini | 8 | 1 | 0 | Neutral | May pick up from Google index
Perplexity | 12 | 2 | 1 | Neutral | Real-time crawling found our site
Bing Copilot | 7 | 1 | 0 | N/A | Bing index integration, minimal presence
SearchGPT | 2 | 0 | 0 | N/A | Very new engine, limited index
You.com | 6 | 1 | 0 | Neutral | Web crawl based, found landing page
[Chart: Brand Strength Score per engine (0–100); values match the table above.]

ChatGPT (OpenAI) — Score: 3/100

Mentions: 0 | Citations: 0 | Sentiment: N/A | Recommendation Strength: None

Typical response: When asked about "best GEO tools 2026" or "AI visibility tracking tool," ChatGPT lists established players such as Semrush, Ahrefs, and brand monitoring platforms. AuraCite is not mentioned in any response. For the query "generative engine optimization tool," ChatGPT explains the concept but does not recognize AuraCite as a tool in this category.

Why this is expected: ChatGPT's knowledge is based on training data with a knowledge cutoff. AuraCite was not yet established when the current model was trained. The small score of 3 reflects the domain's basic indexing in Bing search results (which ChatGPT's browsing mode can access), but the brand itself has no embedding in the model's parametric knowledge.

Signals that would improve visibility: Third-party mentions on authoritative sites (Wikipedia, G2, HackerNoon, Medium), high-quality Schema.org structured data, consistent citation signals across multiple domains linking to auracite.de, and technical content that gets included in future training data updates.

Claude (Anthropic) — Score: 2/100

Mentions: 0 | Citations: 0 | Sentiment: N/A | Recommendation Strength: None

Typical response: Claude responds to GEO-related queries with thoughtful explanations of the concept, but explicitly states it does not have information about a product called AuraCite. When asked about "AI visibility tracking tool," Claude discusses the category in general terms and suggests looking at established SEO platforms that have added AI features.

Why this is expected: Claude does not have real-time web browsing as a default feature. Its knowledge comes from training data, and AuraCite is too new to appear in the training corpus. The score of 2 reflects the absolute minimum — Claude is aware of the GEO category but has zero brand-specific knowledge.

Signals that would improve visibility: Publication on high-authority sites that are commonly included in training data (academic papers, Wikipedia, established tech blogs), MCP integration documentation (Claude natively supports MCP, making our integration a potential discovery surface), and consistent multi-source references to AuraCite across the web.

Gemini (Google) — Score: 8/100

Mentions: 1 | Citations: 0 | Sentiment: Neutral | Recommendation Strength: None

Typical response: Gemini, powered by Google's search infrastructure, occasionally surfaces auracite.de in its search-augmented responses. For the query "AI visibility tracking tool," one test run showed Gemini briefly listing AuraCite among other results pulled from Google's index — but without context, description, or recommendation. It was a raw listing, not a meaningful mention.

Why this is expected: Google's search index has crawled auracite.de, giving Gemini access to the domain's existence. However, without domain authority, backlinks, or third-party validation, Gemini treats the site as low-priority. The score of 8 reflects that our domain is technically discoverable but not substantively recognized.

Signals that would improve visibility: Google Search Console optimization, structured data markup (Schema.org Organization + SoftwareApplication — now implemented), Google Business Profile, backlinks from high-authority domains, content freshness signals, and positive user interaction metrics through Google search results.

Perplexity — Score: 12/100

Mentions: 2 | Citations: 1 | Sentiment: Neutral | Recommendation Strength: Weak

Typical response: Perplexity provided the most promising baseline of any engine. For "AI visibility tracking tool" and "generative engine optimization tool," Perplexity's real-time web crawl discovered auracite.de and included it as a source in its response. The mention was factual and neutral — listing AuraCite alongside other tools without strong endorsement, but with a direct citation link to our landing page.

Why this is expected: Perplexity uses real-time web crawling combined with RAG (Retrieval-Augmented Generation), meaning it can discover new content within hours of publication. This makes it the fastest engine to register new brands. Our score of 12 — the highest in this baseline — reflects Perplexity finding our landing page and static content, even without backlinks or third-party validation.

Signals that would improve visibility: Fresh, high-quality content published regularly (Perplexity rewards recency), optimized llms.txt file for Perplexity's crawler, structured FAQ content that matches query patterns, and technical content with inline citations that Perplexity can extract and attribute.
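The llms.txt proposal (llmstxt.org) defines a markdown file served from the site root that gives AI crawlers a curated map of a site's most useful content. A minimal sketch of what such a file could look like — all paths and page titles below are hypothetical, not AuraCite's actual llms.txt:

```markdown
# AuraCite

> GEO platform that tracks how AI engines mention, cite, and
> recommend brands across ChatGPT, Claude, Gemini, Perplexity, and more.

## Guides

- [What is Generative Engine Optimization](https://auracite.de/guides/geo): intro guide
- [Brand Strength Score methodology](https://auracite.de/guides/score): how scoring works

## Optional

- [Blog](https://auracite.de/blog): weekly AI visibility reports
```

The format is deliberately simple: one H1 with the project name, a blockquote summary, and sections of annotated links, so retrieval-based engines can extract context without crawling the full site.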

Bing Copilot (Microsoft) — Score: 7/100

Mentions: 1 | Citations: 0 | Sentiment: N/A | Recommendation Strength: None

Typical response: Bing Copilot, powered by GPT-4 and Bing's search index, occasionally surfaces auracite.de in search results but does not incorporate the brand into its conversational AI responses. When asked "best GEO tools 2026," Copilot lists established SEO tools and general AI monitoring solutions without mentioning AuraCite.

Why this is expected: While Bing has indexed auracite.de, the domain lacks the authority signals (backlinks, domain age, social proof) that Bing Copilot uses to decide which brands to feature in conversational responses. The score of 7 reflects basic discoverability through Bing's index without any conversational integration.

Signals that would improve visibility: Bing Webmaster Tools optimization, Microsoft Clarity integration for engagement data, presence on LinkedIn (owned by Microsoft) with thought leadership content, Bing-specific structured data, and high-quality content on platforms that Bing indexes preferentially (Medium, LinkedIn articles).

SearchGPT (OpenAI) — Score: 2/100

Mentions: 0 | Citations: 0 | Sentiment: N/A | Recommendation Strength: None

Typical response: SearchGPT, OpenAI's dedicated search product, does not surface AuraCite for any of our test queries. Responses focus on well-established brands with significant web presence and backlink profiles. The engine appears to heavily favor domains with established authority signals.

Why this is expected: SearchGPT is itself a new product with a developing index. It appears to prioritize established, high-authority domains even more aggressively than other engines. For a new brand like AuraCite, the bar for inclusion is particularly high. The score of 2 reflects minimal web presence detection without any meaningful engagement.

Signals that would improve visibility: Building domain authority through backlinks and mentions on high-traffic sites, earning press coverage on sites that SearchGPT's crawler indexes, creating deeply technical content that establishes topical authority, and maintaining consistent content publication to signal an active, authoritative domain.

You.com — Score: 6/100

Mentions: 1 | Citations: 0 | Sentiment: Neutral | Recommendation Strength: None

Typical response: You.com's web crawl-based approach detected auracite.de and occasionally surfaced it in web results panels alongside AI-generated responses. However, the brand was not incorporated into the AI-generated narrative. For "GEO tool comparison," You.com showed our landing page in search results but did not mention AuraCite in its summarized answer.

Why this is expected: You.com relies on web crawling and indexing, which means it can discover new sites relatively quickly. However, inclusion in AI-generated answers (as opposed to search result listings) requires stronger authority signals. The score of 6 reflects surface-level discovery without meaningful AI integration.

Signals that would improve visibility: Consistent content publication to increase crawl frequency, structured data that You.com's summarization can extract, embedding in topic-specific communities and directories that You.com indexes, and the llms.txt file (which You.com has stated it supports).

4. Aggregate Baseline Score

Averaging across all seven AI engines, AuraCite's overall Brand Strength Score at baseline is 5.7 out of 100. This places us firmly in the "Invisible" tier of the Brand Strength Scale — exactly where we expect a brand that launched weeks ago to be.
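The aggregate is a simple mean of the seven per-engine scores from Section 3. A quick arithmetic check, which also shows why the Week 8 target amounts to roughly a 7x improvement:

```python
# Per-engine baseline Brand Strength Scores from Section 3.
scores = {"ChatGPT": 3, "Claude": 2, "Gemini": 8, "Perplexity": 12,
          "Bing Copilot": 7, "SearchGPT": 2, "You.com": 6}

aggregate = sum(scores.values()) / len(scores)  # 40 / 7
print(round(aggregate, 1))       # → 5.7

# Reaching the Visible tier (41+) from this baseline:
print(round(41 / aggregate, 1))  # → 7.2
```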

AuraCite — Brand Strength Scale

Tier | Score Range | Description | AuraCite Status
Invisible | 0–20 | Brand unknown to AI engines, zero or negligible mentions | ← WE ARE HERE
Emerging | 21–40 | Occasional mentions, AI engines beginning to include brand in responses |
Visible | 41–60 | Regular mentions with citations, included in category discussions |
Strong | 61–80 | Consistent recommendations, positive sentiment, cited as authority |
Dominant | 81–100 | Category leader in AI responses, first brand recommended |

8-Week Target: Move from Invisible (5.7) to Visible (41+). This requires a 7x improvement in aggregate score — ambitious but achievable with the content foundation already in place and the measurement cadence we've established.

The aggregate score masks an important insight: not all engines are equally valuable. Perplexity (score: 12) is already showing early traction because of its real-time crawling architecture. Engines like ChatGPT and Claude (scores: 2–3) will take longer because they depend on training data updates. Our strategy accounts for these differences — we front-load content optimized for crawl-based engines while building the third-party citation network that training-data-based engines require.

5. Content Assets Deployed

Before measuring visibility, you need something for AI engines to find. Over the past several weeks, we executed a seven-wave content campaign designed to build AuraCite's presence across every signal category that AI engines track. Here is what we deployed:

4 static guide pages
3 comparison pages
3 blog articles / pitches
2 newsletter and glossary pieces
1 MCP integration showcase
1 partner program page
1 original research report
30+ total content pieces


Key point: All content is published as static HTML with full Schema.org structured data, Island Test-compliant paragraphs, and inline citations. This format is optimized for both AI engine crawling and training data inclusion. Every page includes author, datePublished, publisher, and mainEntityOfPage markup — the exact signals AI engines use to evaluate source credibility.
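The four properties named above are typically embedded as a JSON-LD block inside a `<script type="application/ld+json">` tag. A minimal sketch of such markup, generated here with Python's `json` module — the headline, names, dates, and URL are placeholders, not AuraCite's actual markup:

```python
import json

# Minimal Schema.org Article object carrying the four credibility
# signals named in the text. All values are illustrative placeholders.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "AI Visibility Baseline Report",
    "author": {"@type": "Organization", "name": "AuraCite"},
    "datePublished": "2026-01-15",
    "publisher": {"@type": "Organization", "name": "AuraCite"},
    "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://auracite.de/reports/baseline",
    },
}

# The resulting string is what goes inside the <script> tag.
print(json.dumps(article_jsonld, indent=2))
```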

The multilingual approach (English, German, Arabic) targets three distinct language models within each AI engine. Most brands only optimize for one language, leaving significant AI visibility gaps in non-English markets. With content in three languages, AuraCite is positioned to build visibility across language-specific model variants from day one.

6. Tracking Cadence & Next Steps

Measurement without cadence is just a snapshot. To turn this baseline into an improvement narrative, we have established a weekly tracking rhythm with defined checkpoints and success criteria.

Week 1 — Baseline (This Report): Initial measurement across all 7 engines with 8 test queries. Brand Strength Score: 5.7/100. Zero recommendations, 5 total mentions, 1 citation.
Weeks 2–3 — Early Signals: Weekly measurement. Expect Perplexity and You.com to show first improvements as their crawlers discover new content. Target: aggregate score 10+.
Week 4 — Midpoint Review: Comprehensive midpoint assessment. Evaluate which content types are driving mentions. Adjust strategy based on per-engine performance. Target: aggregate score 20+.
Weeks 5–7 — Acceleration: Weekly measurement with focus on citation growth. Third-party content (Medium, HackerNoon, Product Hunt) should start generating signals. Target: aggregate score 30+.
Week 8 — Final Assessment: End-of-campaign measurement. Full comparison against this baseline. Publish improvement report. Target: aggregate score 41+ (Visible tier).

Key Performance Indicators (KPIs)

KPI | Baseline (Week 1) | Target (Week 8) | How We Track
Brand Strength Score (average) | 5.7 | 41+ | AuraCite dashboard
Total Mentions (all engines) | 5 | 50+ | Per-query mention tracking
Total Citations | 1 | 15+ | Citation URL tracking
Engines with Score > 20 | 0/7 | 4/7 | Per-engine score tracking
Active Recommendations | 0 | 3+ | Recommendation detection

Monthly "State of AuraCite AI Visibility" updates will be published after the initial 8-week campaign concludes. These monthly updates will track long-term trends, document the impact of new content and PR efforts, and serve as a public record of what works — and what doesn't — in building AI visibility from zero.

Every measurement follows the same methodology documented in Section 2: same 7 engines, same 8 queries, same 5 metrics. Consistency in measurement is not optional — it is the foundation of credible improvement claims. We will never change the methodology to make numbers look better.

7. Conclusion — Near-Zero Is a Starting Line, Not a Verdict

A Brand Strength Score of 5.7 across seven AI engines is not a failure. It is a timestamp. Every brand that dominates AI recommendations today — Semrush, HubSpot, Notion, Stripe — was invisible in AI responses before they built the content, citation, and authority signals that AI engines now rely on. The only difference between their story and ours is that we are measuring from the start.

The content foundation is laid: 30+ optimized pages, structured data on every page, a multi-language content library, and technical GEO infrastructure (llms.txt, Schema.org, FAQ markup) that most competitors haven't implemented yet. AI engines will discover this content. The question is not "if" but "how fast."

Eight weeks from now, this report will serve as proof. Either the trajectory will validate our strategy, or it will teach us what needs to change. Both outcomes are valuable. The only failure would be not measuring at all.

Want to measure your own brand's AI visibility? Check your Brand Strength Score for free →