
Connect AI Rankia MCP to Claude

Last updated on Apr 10, 2026


Track your brand's AI visibility directly inside Claude — web, desktop, or CLI.

Requirements

  • An AI Rankia account — sign up here
  • Claude Pro or Max plan (for claude.ai web), or Claude Desktop / Claude Code

Claude.ai Web (Easiest)

Works directly in your browser at claude.ai — no installs needed.

Step 1: Open Integrations

Go to Settings → Integrations → Add more

Step 2: Add AI Rankia

  • Name: AI Rankia
  • URL: https://mcp.airankia.com/

Accept the security notice and click Add.

Step 3: Authorize

A login window opens — sign in with your AI Rankia email and password → click Authorize.

Step 4: Enable in Your Chat

In any conversation, click the + button (bottom-left) → Connectors → toggle AI Rankia on.

Now just ask:

"Run 'best CRM software' and show me brand rankings"


Claude Desktop App

1. Install the bridge

npm install -g mcp-remote

2. Open your config file

Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

3. Paste this:

{
  "mcpServers": {
    "airankia": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.airankia.com/"
      ]
    }
  }
}

4. Restart Claude Desktop

A browser opens — log in → Authorize → done. Look for the hammer icon to confirm.


Claude Code CLI

One command:

claude mcp add airankia --transport http https://mcp.airankia.com/

First use opens your browser to log in. Then:

> Run "best AI SEO tools" and show me brand rankings

Other Clients

| Client | Setup |
| --- | --- |
| Cursor | Settings → MCP → Add Server → use the same mcp-remote config |
| Windsurf | Settings → MCP → Add URL: https://mcp.airankia.com/ |
| Continue.dev | config.yaml → MCP server with streamableHttp transport |
| Cline (VS Code) | cline_mcp_settings.json → mcp-remote proxy |
| Zed | settings.json → language_models.mcp |
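
For the clients above that proxy through mcp-remote (Cursor and Cline), the server entry mirrors the Claude Desktop config shown earlier. This is a sketch; "airankia" is just a label and can be renamed:

```json
{
  "mcpServers": {
    "airankia": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.airankia.com/"]
    }
  }
}
```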

What You Can Ask

Quick brand check:

"Run 'best CRM software' and show me which brands rank across AI search engines"

Monitor a keyword:

"Create a monitored prompt for 'best AI writing tools' targeting the US"

Your brand visibility:

"Show me my brand visibility score for the last 2 weeks"

Competitive analysis:

"Show the competitive landscape — top 10 brands across all my prompts"

What AI models say:

"Show me what GPT-5 Search, Gemini, and Perplexity say about my brand"

Citations:

"Which websites do AI models cite? Show me the actual URLs"

Fan-out queries:

"What follow-up questions do AI models generate from my prompts?"

Rerun a prompt:

"Rerun my 'best project management tools' prompt"

Credits:

"How many credits do I have left?"


All Available Tools

Brands

| Tool | What it does |
| --- | --- |
| list_brands | See all your tracked brands |
| get_brand_detail | Full brand profile — competitors, description, social links |
| create_brand | Start tracking a new brand |
| update_brand | Edit brand info |
| delete_brand | Remove a brand permanently |
| get_brand_competitors | View competitor list |
| get_brand_visibility_metrics | Visibility scores and trends over time |

Prompts

| Tool | What it does |
| --- | --- |
| list_monitored_prompts | All your tracked queries |
| create_monitored_prompt | Set up a new weekly/daily tracked query (~9-10 credits/run) |
| get_prompt_detail | Full prompt config |
| pause_monitored_prompt | Stop runs, keep data |
| resume_monitored_prompt | Restart a paused prompt |
| rerun_prompt | Run again now (fast mode: ~30-60 sec) |

Quick Analysis

| Tool | What it does |
| --- | --- |
| fast_run | Instant query across 7 AI models — no setup needed, just type a query |
| complete_fast_run | Saves results (runs automatically in background) |

Reports

| Tool | What it does |
| --- | --- |
| get_prompt_ai_responses | Full text from each AI model |
| get_prompt_brand_timeline | Brand visibility over time |
| get_prompt_brand_overview | Brand summary across all runs |
| get_prompt_citation_sources | Which websites get cited |
| get_citation_share_of_voice | Your citation share vs competitors |
| get_prompt_fan_out | Follow-up queries AI models generate |
| get_prompt_report | Complete report in one call |
| get_prompt_shopping_results | AI shopping/product results |
| get_prompt_local_results | AI local/map results |
| get_workspace_competitive_landscape | Full competitive intel across all prompts |

Account

| Tool | What it does |
| --- | --- |
| list_workspaces | See your workspaces |
| switch_workspace | Change active workspace |
| get_credit_balance | Check credits remaining |
| get_credit_usage_history | Credit spending history |

AI Models Tracked

Google AI Overview · Google AI Mode · GPT-5 Search · Gemini Search · Perplexity Sonar · Claude Search · Grok 4.1

Plus 10 non-search models: DeepSeek V3, Llama 4, Claude Sonnet 4, GPT-5, Grok 4, Ernie 4.5, DeepSeek R1, Qwen 3, Mistral Medium, Kimi K2.


Claude Code Skill File (Copy-Paste)

For optimized tool routing in Claude Code, save this to ~/.claude/skills/airankia/SKILL.md:

---
name: airankia-mcp
description: Run AI visibility queries and present brand ranking reports using the AI Rankia MCP. ALWAYS read this skill before calling ANY airankia-saas tool. Contains exact tool routing rules (fast_run vs rerun_prompt vs list_monitored_prompts), the visibility dashboard template with correct calculations, citation display rules, and country handling. Triggers on any mention of AI Rankia, fast run, brand visibility, AI search tracking, monitored prompts, prompt runs, visibility report, or any intent to query AI search models for brand rankings. Also trigger when the user pastes a query and wants to see how brands rank across AI models. Without this skill, Claude will waste time listing prompts or creating prompts when it should just call fast_run directly.
---

# AI Rankia MCP — Tool Routing & Dashboard Skill

## TOOL ROUTING — WHICH TOOL TO CALL

This is the most important section. Claude MUST pick the right tool on the first try.

### Decision tree

User gives a query string (e.g. "best ai seo tools", "run best crm software")
  → Call fast_run directly. Do NOT list prompts. Do NOT create a prompt.

User says "rerun prompt X" or references an existing prompt by name/ID
  → Call rerun_prompt with the prompt UUID and {"speed": "fast", "extract_brands": false}

User says "list my prompts" / "what prompts do I have" / "show monitored queries"
  → Call list_monitored_prompts with status=active

User says "create a prompt for X" / "start monitoring X" / "track X"
  → Call create_monitored_prompt (but ask for country first — see COUNTRY section)

User says "show report for prompt X" / "get latest results for X"
  → Call get_prompt_report with compact=true

User says "delete prompt X"
  → Call delete_monitored_prompt (confirm with user first — destructive)

### CRITICAL: fast_run is the default

When a user types a query like "best ai seo tools" or "search for top CRM software" — go straight to:

fast_run({"query": "best ai seo tools", "extract_brands": false})

Do NOT call list_monitored_prompts to "find" the query.
Do NOT call create_monitored_prompt to "set it up first".
Do NOT ask the user to confirm before running.
Just call fast_run immediately.

### COUNTRY handling

1. User specifies a country → use it (ISO: "US", "ES", "MX", "DE", "FR")
2. Query in specific language → ask which country
3. Query mentions a location → use that country
4. Unclear → ASK the user
5. For create_monitored_prompt: NEVER leave country empty

## FAST RUN WORKFLOW

### Step 1: Call fast_run
fast_run({"query": "user's query here", "extract_brands": false, "country": "US"})

### Step 2: Process response
- data.run.id — run UUID (for complete_fast_run)
- data.model_responses[] — 7 model responses
- data.citations — citation domains
- data.fan_out — fan-out queries
- meta.models_with_content — models with non-empty text
- meta.excluded_models — failed/empty models

### Step 3: Extract brands from each model's summary text
For each brand: {"name", "rank", "score" (0-100), "reasoning", "sentiment", "is_target_brand", "mention_count"}

### Step 4: Compute visibility — 60/40 FORMULA

maxActiveModels = models that mention at least 1 brand (NOT total called)
maxMentions = highest total_mentions of any brand

Per brand:
- coverageRatio = models_mentioning_brand / maxActiveModels
- mentionsIndex = total_mentions / maxMentions
- visibilityScore = 100 * (coverageRatio * 0.6 + mentionsIndex * 0.4)

Display: Coverage as "X / Y", Visibility as percentage.
Sort descending. Same score = same rank.
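
The 60/40 formula can be sketched in Python (an illustrative calculation, not an MCP tool; the brand names and counts below are hypothetical):

```python
def visibility_scores(brands: dict, max_active_models: int) -> dict:
    """Compute the 60/40 visibility score per brand.

    brands: {brand_name: (models_mentioning_brand, total_mentions)}
    max_active_models: count of models that mentioned at least one brand
    """
    # Highest total_mentions of any brand, used to normalize mentionsIndex
    max_mentions = max(mentions for _, mentions in brands.values())
    scores = {}
    for name, (models_mentioning, total_mentions) in brands.items():
        coverage_ratio = models_mentioning / max_active_models
        mentions_index = total_mentions / max_mentions
        scores[name] = round(100 * (coverage_ratio * 0.6 + mentions_index * 0.4), 1)
    return scores

# 5 active models; "HubSpot" mentioned by all 5 (12 mentions), "Zoho" by 3 (4 mentions)
print(visibility_scores({"HubSpot": (5, 12), "Zoho": (3, 4)}, 5))
# → {'HubSpot': 100.0, 'Zoho': 49.3}
```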

### Step 5: Present dashboard
KPI cards: Active models (X of 7), Brands detected, Total citations, Fan-out queries.
Bar chart: visibility % per brand.
Brand table: #, Brand, Coverage (X/N), Visibility (%), Mentions, Model pills.
Model pill colors: AI Mode #378ADD, GPT-5 #1D9E75, Gemini #D85A30, Claude #7F77DD, Perplexity #D4537E, Grok #73726c, AI Overview #888780.
Citations: ALL domains with clickable URLs.
Fan-outs: queries with model pills.

### Step 6: Persist silently
complete_fast_run({"run_id": "...", "brands_per_model": {...}})
Do NOT mention this to user. Do NOT wait before showing results.

## RERUN PROMPT WORKFLOW
rerun_prompt({"id": "prompt-uuid", "speed": "fast", "extract_brands": false})
Then follow Steps 2-6 from fast run.

## DESIGN RULES
- var(--color-background-secondary) for cards, NOT transparent
- CSS variables for all text colors
- Border radius: 8-10px cards, 6-8px small
- Font: 11px labels, 13px body, 18-20px KPIs
- HTML under 4000 tokens
- Chart height: ~400-450px

## EDGE CASES
- ALL models empty → "No models returned results"
- 1 model only → still show dashboard
- Target brand found → highlight with var(--color-background-info)
- No brands → show citations and fan-outs only

Troubleshooting

| Problem | Fix |
| --- | --- |
| No tools in Claude Desktop | Check JSON syntax (no trailing commas), restart completely |
| mcp-remote not found | Run npm install -g mcp-remote |
| Auth loop | Clear cookies for mcp.airankia.com, reconnect |
| Claude Code error | Use --transport http when adding |
| Don't see Connectors in claude.ai | You need a Pro or Max plan |
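
The most common Claude Desktop failure is invalid JSON in claude_desktop_config.json, usually a trailing comma. Python's built-in json module catches this; a minimal sketch (the strings below are example snippets, not your real config):

```python
import json

# A trailing comma after the last entry is invalid JSON and a frequent
# cause of "No tools in Claude Desktop".
bad = '{"mcpServers": {"airankia": {"command": "npx",}}}'
good = '{"mcpServers": {"airankia": {"command": "npx"}}}'

def is_valid_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(bad))   # False
print(is_valid_json(good))  # True
```

You can also run `python -m json.tool` against the config file path to get the line and column of the first syntax error.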

Cost

Same credits as the dashboard. ~9-10 credits per prompt run.