SEO Experiments
Controlled tests measuring how specific technical decisions affect Google ranking. Each experiment has a single hypothesis, isolated variables, and documented results. Anything anomalous gets reported to Google VRP.
Structured Data Depth
More JSON-LD schema types → richer SERP snippets → higher CTR
AI Prompt Injection via HTML
Hidden instructions in HTML comments, aria-hidden elements, alt text, and JSON-LD fields can influence AI search summaries
Schema Spoofing — Unearned Rich Results
Adding Product ratings and HowTo schema without matching page content still produces rich SERP results
Internal Link Density
Optimal internal link count per page improves crawl depth and topical authority
Content Freshness Signals
Updating posts with new dates (without changing content) shifts ranking
Hreflang Tags & Multilingual
Adding hreflang for en/sv improves English ranking due to authority signals
Lighthouse 95 vs 100
There is no measurable ranking difference between Lighthouse 95 and 100
AI-Generated vs Hand-Written Content
Google cannot reliably detect or penalize AI-generated content in 2026
AI Crawler JS Execution Detection
AI search crawlers do not execute JavaScript — only parse raw HTML. If any do, it is a VRP candidate.
Research Protocol
- Form a specific hypothesis with a measurable outcome
- Create isolated test pages with minimal confounding variables
- Document the baseline in Search Console before changing anything
- Wait 2–4 weeks for Google to stabilize rankings after each change
- Record all data: impressions, clicks, average position, CrUX data
- If confirmed anomaly: write up + submit to Google VRP
- Publish findings regardless of outcome — negative results matter