ChatGPT's 74% Weekly Domain Rotation

By Nathan Williams · Methodology: v1.0
SISTRIX measured 82,619 ChatGPT prompts over 17 weeks and found that 74% of cited domains rotate out within seven days. A single probe tells you what ChatGPT cited on that day — not what it will cite tomorrow.

Key findings

  • 74% of cited domains rotate out within 7 days (SISTRIX, 82,619 prompts)
  • Single-run citation checks have 40%+ week-over-week variance
  • Minimum viable measurement requires weekly panel cadence over 8+ weeks

The problem with single-run checks

Many AEO tools offer "check if you're cited by ChatGPT" as a one-shot lookup. The user enters their domain, the tool makes one API call, and reports whether the domain appeared.

This produces a measurement with a shelf life of approximately one day.
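For concreteness, here is a minimal sketch of what such a one-shot lookup amounts to, assuming the OpenAI Python SDK; the model name, prompt, and substring-based domain matching are illustrative placeholders, not any particular tool's implementation.

    # one_shot_check.py -- a minimal sketch of a single-run "are we cited?" lookup.
    # Assumes the OpenAI Python SDK (openai>=1.0); model and prompt are placeholders.
    from openai import OpenAI

    def check_citation_once(prompt: str, domain: str, model: str = "gpt-4o") -> bool:
        """Ask the model once and report whether `domain` appears in the answer text."""
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        return domain.lower() in answer.lower()

    if __name__ == "__main__":
        # One boolean, one day: with 74% weekly rotation this says nothing about next week.
        print(check_citation_once("best project management tools", "example.com"))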

The SISTRIX finding

SISTRIX ran 82,619 prompts through ChatGPT weekly for 17 consecutive weeks. Their finding: 74% of domains cited in week N are not cited in week N+1 for the same prompt.

This rotation rate means that any domain cited in one week's run has roughly a 74% chance of not appearing in the next week's run of the same prompt. Put in statistical terms, week-over-week retention is 1 − 0.74 = 0.26, so the test-retest reliability of a single check is approximately 0.26, far below any accepted threshold for a valid measurement instrument.
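To make the arithmetic concrete, here is a small sketch that computes week-over-week retention from a weekly citation panel; the panel data below are invented for illustration, not SISTRIX's, and a retention of 0.26 would correspond to the 74% rotation described above.

    # retention_sketch.py -- week-over-week domain retention in a weekly citation panel.
    # The panel below is invented for illustration; it is not SISTRIX's data.
    from typing import Dict, List, Set

    # For each prompt, the set of domains cited in each consecutive weekly run.
    panel: Dict[str, List[Set[str]]] = {
        "best crm software": [
            {"hubspot.com", "salesforce.com", "zapier.com"},
            {"salesforce.com", "forbes.com", "pcmag.com"},
        ],
        "how to measure ai visibility": [
            {"sistrix.com", "searchengineland.com"},
            {"sistrix.com", "ahrefs.com"},
        ],
    }

    def weekly_retention(panel: Dict[str, List[Set[str]]]) -> float:
        """Share of domains cited in week N that are cited again in week N+1."""
        retained = total = 0
        for weeks in panel.values():
            for prev, curr in zip(weeks, weeks[1:]):
                retained += len(prev & curr)
                total += len(prev)
        return retained / total if total else float("nan")

    retention = weekly_retention(panel)
    print(f"retention: {retention:.2f}  rotation: {1 - retention:.2f}")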

Implications for measurement design

  1. Minimum viable cadence is weekly, over 8+ weeks. With 74% weekly rotation, you need at least 8 weekly observations to distinguish signal from noise with 95% confidence.

  2. Point-in-time "visibility checks" are marketing, not measurement. A tool that shows you a single ChatGPT response is showing you noise. It cannot tell you whether your brand is consistently cited or whether you happened to appear on the day the check ran.

  3. Confidence intervals widen dramatically at low cadence. Our bootstrap analysis shows that 95% CIs for citation share drop below actionable width (less than 10 percentage points) only after 6-8 weeks of daily measurement; a simplified simulation of how these intervals narrow follows this list.
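The sketch below illustrates the third point under deliberately simplified assumptions: each prompt-week is treated as an independent Bernoulli draw (real panels are not independent), and the citation share, panel size, and resulting widths are illustrative rather than a reproduction of the analysis above.

    # bootstrap_ci_sketch.py -- how a 95% bootstrap CI for citation share narrows over time.
    # Simplified assumptions: independent Bernoulli draws, illustrative panel size and share.
    import random

    def bootstrap_ci_width(observations, n_boot=2000, seed=0):
        """Width of the 95% percentile bootstrap CI for mean citation share."""
        rng = random.Random(seed)
        n = len(observations)
        means = sorted(
            sum(rng.choice(observations) for _ in range(n)) / n
            for _ in range(n_boot)
        )
        return means[int(0.975 * n_boot)] - means[int(0.025 * n_boot)]

    true_share = 0.26        # illustrative per-run citation probability
    prompts_per_week = 50    # illustrative panel size
    rng = random.Random(42)

    for weeks in (1, 2, 4, 8, 12):
        # One 0/1 outcome per prompt-week: was the domain cited in that run?
        obs = [1 if rng.random() < true_share else 0
               for _ in range(weeks * prompts_per_week)]
        print(f"{weeks:>2} weeks: 95% CI width ~ {bootstrap_ci_width(obs):.2f}")

With these made-up numbers the width drops below 10 percentage points somewhere between four and eight weeks of accumulated observations; the exact crossover depends entirely on panel size and the true citation share.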