publicconsultation.org achieves a 75/100 on the AI-Crawler Reality Index, reflecting above-average readiness for AI-driven discovery. In the developer sector, publicconsultation.org outperforms the average (57), suggesting strong competitive positioning in AI search. The low ghost ratio (0%) confirms that what crawlers see matches what users see — a hallmark of strong SSR implementation. Heavy markup overhead (31.9× bloat) forces AI systems to wade through excess code before finding useful information. Having zero schema blocks puts this site at a disadvantage in knowledge graph and AI-answer pipelines that rely on explicit structured data. Robots.txt grants unrestricted access to the key AI user-agents, which is the strongest starting position for AI visibility.
🧮 Score Transparency — How is this calculated?
📊 ACRI Sub-Scores (AI Readiness Detail)
publicconsultation.org ranks much higher on Google (Tranco Top 50%+) than in AI search (Top 19%). This is the 'Visibility Gap' pattern — implementing the recommendations above can help close the AI gap. ACRI measures technical crawler readiness. Read the methodology →
Why publicconsultation.org ranks here
Fastest improvements
- Add basic Organization and WebSite JSON-LD to fix “0 schema blocks” (see Schema Coverage).
- Reduce token bloat (navigation/footer/code) so agents reach your main content faster (see Token Bloat).
- Create an llms.txt file so AI crawlers can discover your content structure without heavy crawling. Generate llms.txt →
- Run a full entropy audit to find which DOM regions waste the most tokens. Run Entropy Audit →
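The llms.txt suggested above can start small. A minimal sketch following the llmstxt.org draft convention (the report link below is a placeholder, not a real URL on this site):

```markdown
# Program for Public Consultation

> Survey research on American public opinion about policy issues, from the
> University of Maryland's School of Public Policy.

## Reports

- [Americans on Regulating Artificial Intelligence](https://publicconsultation.org/example-report/): 2025 survey of 1,202 adults
```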
Traditional SEO
1/100 · 25% of Global Score · 🔴 Low Confidence
📝 Title Tag
Optimal range: 30–60 characters for SERP display.
📋 Meta Description
Optimal range: 120–160 characters for snippet control.
🔤 Heading Hierarchy
- ✗ Exactly one <h1> tag — found 0
- ✓ Has <h2> headings — found 5
- ✗ <h2> appears before the first <h1> (no <h1> found)
🔍 Indexability
- ✗ Canonical tag present
- ✓ No noindex directive
- ✓ Meta viewport set
- ✓ HTML lang attribute → en
- ➖ Hreflang tags — N/A (single-language site)
- ✓ Googlebot allowed by robots.txt
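The failing canonical check above is usually a one-line fix in the page <head>. A sketch, assuming each page references its own URL:

```html
<!-- Self-referencing canonical; use the current page's own URL on each page -->
<link rel="canonical" href="https://publicconsultation.org/">
```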
🌐 Social / OpenGraph
- ✓ og:title — Majorities of Republicans and Democrats Overwhelmingly Favor the Government Regulating Artificial Intelligence
- ✓ og:description — The White House recently released its AI Action Plan, which opposes government regulation of AI programs, saying that it would impede the US becoming the dominant force in artificial intelligence globally. A new survey found that bipartisan majorities of Americans, while receptive to the argument that regulation may harm innovation, ultimately favor five proposals under consideration in Congress for the government to regulate AI domestically. …
- ✗ og:image
- ✗ twitter:card
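The two failing checks above can be covered with two meta tags in <head>. A sketch; the image path is a placeholder to replace with a real 1200×630 share asset:

```html
<meta property="og:image" content="https://publicconsultation.org/path/to/share-image.jpg">
<meta name="twitter:card" content="summary_large_image">
```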
📐 How the SEO Pillar score is calculated
SEO Pillar = Title (20 pts) + Meta Desc (20 pts) + Heading Hierarchy (20 pts) + Indexability (20 pts) + Social/OG (20 pts)
Each sub-score is derived from the checks above. Canonical tag, lang attribute, og:image, and a single H1 are the highest-impact items.
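The formula above is a plain sum of five capped sub-scores. An illustrative sketch (the example values are hypothetical, not this site's actual sub-scores):

```python
SUBSCORE_CAP = 20  # each of the five SEO sub-scores is worth up to 20 points

def seo_pillar(title, meta_desc, headings, indexability, social_og):
    """Sum five 0-20 sub-scores into a 0-100 SEO pillar score."""
    parts = (title, meta_desc, headings, indexability, social_og)
    if any(not 0 <= p <= SUBSCORE_CAP for p in parts):
        raise ValueError("each sub-score must be between 0 and 20")
    return sum(parts)

print(seo_pillar(5, 5, 10, 15, 5))  # → 40
```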
AI Readiness / GEO
56/100 · 40% of Global Score · 🟢 High Confidence
This pillar aggregates citation share, hallucination risk, bot access, schema health, and content extractability. The individual diagnostic sections below contribute to this score.
Is AI lying about your brand? This panel measures how likely LLMs are to hallucinate facts when extracting information from your page.
🤖 Bot Access Matrix
📊 Structure & Information Density Docs
🏷️ Schema Health Docs
Schema Coverage Map
📐 AI Efficiency Metrics Docs
Token Bloat Research
Multimodal Readiness
TDM Rights
🔥 Structural Entropy Check Research
🔬 AI-Crawler Simulation
See your website the way AI crawlers do. CSS stripped, structure labeled, content chunked.
Toggle to "AI Agent View" to see what GPTBot, ClaudeBot, and other AI crawlers actually extract from this page.
AI Answer Preview
NEW · See how AI models summarize your site. Left: your actual content. Right: what the LLM extracts and says about you.
🔧 Tech Stack
Performance & Speed
68/100 · 20% of Global Score · 🟢 High Confidence
⏱️ Time to First Byte
Google considers <200 ms "good". AI crawlers may have even shorter timeouts.
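To spot-check TTFB yourself, a minimal Python sketch using only the standard library (a rough client-side measurement, not the tool's methodology):

```python
import time
import urllib.request

def ttfb_seconds(url):
    """Rough time to first byte: elapsed time from issuing the request
    until the first byte of the response body is readable."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)  # blocks until the first byte arrives
    return time.monotonic() - start

# Requires network access, so the call is left commented out:
# print(f"{ttfb_seconds('https://publicconsultation.org/') * 1000:.0f} ms")
```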
📦 Page Weight
DOM nodes
HTML payload
🗄️ Cache & CDN
- ✗ Cache-Control header
- ✗ CDN cache status
- ✗ CDN detected
🔬 Tracker Tax
tracker scripts
third-party domains
token overhead
📐 How the Performance Pillar score is calculated
Perf Pillar = TTFB (35 pts) + Page Weight (25 pts) + Cache/CDN (20 pts) + Tracker Tax (20 pts)
TTFB <200 ms = full marks. DOM >3000 or payload >300 KB incurs heavy penalties. Tracker scripts beyond 5 reduce score.
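The thresholds above could be applied along these lines. The exact penalty curve is an assumption for illustration (Cache/CDN scoring omitted):

```python
def perf_subscores(ttfb_ms, dom_nodes, payload_kb, tracker_scripts):
    """Hypothetical mapping from raw metrics to three of the four
    Performance sub-scores described above."""
    # TTFB: full 35 points under 200 ms, then a gradual taper.
    ttfb = 35 if ttfb_ms < 200 else max(0, 35 - (ttfb_ms - 200) // 50)
    # Page weight: heavy penalties past 3000 DOM nodes or 300 KB payload.
    weight = 25
    if dom_nodes > 3000:
        weight -= 10
    if payload_kb > 300:
        weight -= 10
    # Tracker tax: each tracker script beyond the fifth costs a point.
    trackers = max(0, 20 - max(0, tracker_scripts - 5))
    return ttfb, max(0, weight), trackers

print(perf_subscores(150, 2000, 200, 3))  # → (35, 25, 20)
print(perf_subscores(400, 3500, 350, 8))  # → (31, 5, 17)
```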
Architecture & Trust
73/100 · 15% of Global Score · 🟢 High Confidence
🗺️ Sitemap & Robots
- ✓ Sitemap declared in robots.txt → https://publicconsultation.org/wp-sitemap.xml
- ✓ Googlebot allowed
- ✓ GPTBot allowed
- ✓ ClaudeBot allowed
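For reference, a robots.txt that produces these three passing checks can be as simple as the following sketch (the Sitemap line matches the one detected above):

```text
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

Sitemap: https://publicconsultation.org/wp-sitemap.xml
```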
🔗 Linking
internal links
external links
🔒 Security & Trust
- ✗ HSTS header (Strict-Transport-Security)
- ✓ Content-Security-Policy header
- ✓ HTTP status 200 OK (got 200)
♿ Accessibility Signals
- ✓ HTML lang attribute → en
- ✓ Meta viewport for mobile
- ✗ Single H1 for screen readers
📐 How the Architecture Pillar score is calculated
Arch Pillar = Sitemap & Robots (30 pts) + Linking (25 pts) + Security (25 pts) + Accessibility (20 pts)
Having a valid sitemap, allowing AI bots, HSTS, and a good internal link count are the highest-impact items.
🏅 AI-Verified Trust Badge
Your site scores 40/100. Reach 80+ to unlock the green "AI-Verified" badge. Fix the issues below to improve your score.
<a href="https://seodiff.io/radar/domains/publicconsultation.org" rel="noopener"><img src="https://seodiff.io/api/v1/badge?domain=publicconsultation.org" alt="AI-Verified by SEODiff" width="280" height="52"></a>
💡 Paste in your site footer, GitHub README, or email signature. Badge updates automatically as your score changes.
Deep Crawl Analysis · 10 pages · Deep-10
Homepage scores 40, but internal pages average only 5 — a 35-point gap. Blogs, docs, and legacy content are dragging down AI readability site-wide.
| Type | ACRI | Token Bloat | Words | Status |
|---|---|---|---|---|
| about | 54 | 77.1× | 369 | ✓ |
| pricing | 0 | 0.0× | 0 | ✓ |
| product | 0 | 0.0× | 0 | ✓ |
| product | 0 | 0.0× | 0 | ✓ |
| docs | 0 | 0.0× | 0 | ✓ |
| blog | 0 | 0.0× | 0 | ✓ |
| social-proof | 0 | 0.0× | 0 | ✓ |
| integrations | 0 | 0.0× | 0 | ✓ |
| support | 0 | 0.0× | 0 | ✓ |
| support | 0 | 0.0× | 0 | ✓ |
| Path | Pages | Avg ACRI | Ghost % | Bloat | Top Issue |
|---|---|---|---|---|---|
| /integrations/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /contact/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /case-studies/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /docs/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /pricing/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /products/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /about/ | 1 | 54 | 0% | 77.1× | High JS Bloat |
| /faq/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /blog/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
| /features/ | 1 | 0 | 0% | 0.0× | Low AI Readiness |
Scores update automatically each month. Create a free account for on-demand re-crawls (3/month free).
🔌 API Access
Pull this data programmatically. All sub-page metrics are available via our public API.
curl https://seodiff.io/api/v1/deep10/domain/publicconsultation.org
Get your free API key — 100 requests/month included.
🔗 Similar Developer Sites
Domains with a similar tech stack, industry, and AI readiness profile to publicconsultation.org. Compare side-by-side.
| Domain | ACRI | AI Score | Tech Stack | Token Bloat | Schema | |
|---|---|---|---|---|---|---|
| publicconsultation.org (this site) | 40 | 75 | WordPress | 31.9× | 0 | — |
| wissenschaft.de | 65 | 78 | WordPress | 25.1× | 1 | Compare → |
| denimio.com | 65 | 77 | WordPress | 3.2× | 0 | Compare → |
| competitionplus.com | 65 | 83 | WordPress | 16.0× | 1 | Compare → |
| blog.heroku.com | 65 | 83 | WordPress | 20.6× | 4 | Compare → |
| neotel.com.mk | 65 | 83 | WordPress | 11.9× | 1 | Compare → |
📊 Semantic Share of Voice
How often would an AI cite publicconsultation.org when users ask about topics in this domain's niche? We run entity queries through our 188k-page search index and measure citation probability.
Remediation Patches
COPY-PASTE · Auto-generated code fixes tailored to publicconsultation.org. Copy and paste these into your codebase to improve AI visibility. These patches are designed to improve extraction accuracy →
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Publicconsultation",
  "url": "https://publicconsultation.org",
  "logo": "https://publicconsultation.org/wp-content/uploads/2025/03/cropped-umdglobe512x512-32x32.jpg",
  "sameAs": []
}
</script>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Publicconsultation",
  "url": "https://publicconsultation.org",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://publicconsultation.org/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
<!-- Move inline CSS to external stylesheets -->
<link rel="stylesheet" href="/css/main.css">
<!-- Move inline scripts to external files with defer -->
<script src="/js/app.js" defer></script>
<!-- Remove duplicate navigation blocks -->
<!-- Keep only ONE <nav> in the <header> -->
<!-- Ensure <main> wraps your primary content -->
<main>
  <!-- Your content here — this is what AI sees first -->
</main>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Publicconsultation?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Add your answer here — describe what Publicconsultation does in 1-2 sentences."
      }
    },
    {
      "@type": "Question",
      "name": "How does Publicconsultation work?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Explain the key features and how users interact with Publicconsultation."
      }
    }
  ]
}
</script>
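Before deploying JSON-LD patches like those above, it is worth verifying that the payload parses as valid JSON and carries the expected top-level keys. A minimal Python sketch (the required-key list is an assumption for illustration, not a schema.org rule):

```python
import json

# Example payload mirroring the Organization patch above.
ORG_SNIPPET = """
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Publicconsultation",
  "url": "https://publicconsultation.org"
}
"""

def check_jsonld(raw, required=("@context", "@type", "name", "url")):
    """Parse a JSON-LD payload and report any missing top-level keys."""
    data = json.loads(raw)
    missing = [k for k in required if k not in data]
    return data.get("@type"), missing

print(check_jsonld(ORG_SNIPPET))  # → ('Organization', [])
```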
Projected Impact
ROI EST. · If you apply the patches above, here's the estimated improvement for publicconsultation.org:
*Estimates based on SEODiff's scoring model. Actual results depend on implementation quality.
📋 Data Export
Download scores and metadata for audits, client reports, or CI/CD pipelines. All data is generated automatically and updated with each crawl; exports contain computed metrics only (no copyrighted content).
Is this your company?
Monitor your AI visibility score weekly and get alerted when changes happen.
Start Free →