Tutorial · 7 min read · April 26, 2026

Python Backlink Checker: Pull Backlink Data with the RankParse API

Build a Python backlink checker using the RankParse API. Working 10-line script, pagination, bulk domain analysis, and cost breakdown — free tier included.

If you need backlink data in a Python script — for an SEO audit tool, a bulk analysis pipeline, or just quick research — you want an API that behaves like a normal HTTP endpoint: send a request, get JSON back, done.

This tutorial walks through exactly that using the RankParse API. By the end you will have a working script that fetches backlinks, handles pagination, and loops over a list of domains.

No scraping. No headless browsers. No rate-limit wrestling. Just requests.get().


Prerequisites

  • Python 3.8+ — anything reasonably modern is fine.
  • requests library — pip install requests if you do not have it.
  • A RankParse API key — sign up at rankparse.com/signup. No credit card required. The free tier gives you 100 credits, which is enough to run the examples in this post several times over.

Once you have your key, store it as an environment variable so it never ends up in source control:

export RANKPARSE_API_KEY="rp_your_key_here"

The 10-Line Script

Here is the complete, working script. Everything else in this post builds on it.

import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]  # set in the previous step
DOMAIN  = "stripe.com"

resp = requests.get(
    "https://api.rankparse.com/v1/backlinks",
    params={"domain": DOMAIN, "limit": 100},
    headers={"X-API-Key": API_KEY},
)

backlinks = resp.json()["data"]
print(f"{len(backlinks)} backlinks found")

for link in backlinks[:5]:
    print(link["from_url"], "→", link["to_url"])

Run it and you will see something like:

100 backlinks found
https://news.ycombinator.com/item?id=29382910 → https://stripe.com/blog/payment-api-design
https://dev.to/swyx/how-stripe-builds-apis-3k2g → https://stripe.com/docs/api
https://lobste.rs/s/abc123/stripe_checkout → https://stripe.com/docs/payments
...

Let's walk through the three meaningful parts.

Authentication. Every request must include an X-API-Key header. There is no OAuth dance, no token refresh, no session management — just the header. If the key is missing or invalid the API returns a 401 with a JSON error body.

Query parameters. domain is the domain you want backlinks to — no https:// prefix or trailing slash. limit controls how many results come back (default 100, max 1000).

The response. Your data lives under the "data" key. If the domain has no backlinks in the dataset, you get a 200 with an empty list — never a 404.
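One practical note before moving on: the endpoint wants a bare domain, so if your input list contains full URLs it is worth normalizing them before they reach the params dict. A minimal sketch (normalize_domain is a hypothetical helper of mine, not part of any RankParse client):

```python
from urllib.parse import urlparse

def normalize_domain(value: str) -> str:
    """Reduce a full URL or a bare domain to the form the API expects."""
    # urlparse only populates .netloc when a scheme is present,
    # so prepend one for bare-domain inputs
    if "://" not in value:
        value = "https://" + value
    return urlparse(value).netloc

print(normalize_domain("https://stripe.com/blog/"))  # stripe.com
print(normalize_domain("stripe.com"))                # stripe.com
```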


What the Response Looks Like

Each element in resp.json()["data"] has this shape:

{
  "from_url": "https://news.ycombinator.com/item?id=29382910",
  "to_url": "https://stripe.com/blog/payment-api-design",
  "anchor_text": "payment API design",
  "domain_authority": 91
}
  • from_url — the page that contains the link.
  • to_url — the page on your domain being linked to.
  • anchor_text — the visible link text. Useful for anchor profile analysis.
  • domain_authority — a 0–100 score for the linking domain, derived from the Common Crawl link graph. Higher means more authoritative.
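Since anchor_text is on every element, a first-pass anchor profile takes one collections.Counter. The list below is hardcoded stand-in data in the documented shape, not a live response:

```python
from collections import Counter

# Stand-in for resp.json()["data"], trimmed to the field we need
backlinks = [
    {"anchor_text": "payment API design"},
    {"anchor_text": "Stripe docs"},
    {"anchor_text": "payment API design"},
    {"anchor_text": "payment API design"},
]

anchor_counts = Counter(link["anchor_text"] for link in backlinks)
for anchor, count in anchor_counts.most_common():
    print(f"{count:>3}  {anchor}")
```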

The full response envelope also includes a "meta" field:

{
  "data": [...],
  "meta": {
    "total": 4821,
    "offset": 0,
    "limit": 100
  }
}

total is the approximate count of matching backlinks for the domain. You need total and offset when paginating.


Pagination Using ?offset=

One call returns at most 1000 results. For domains with thousands of backlinks, page through using the offset parameter.

import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]
DOMAIN  = "stripe.com"
LIMIT   = 100

def fetch_all_backlinks(domain):
    all_links = []
    offset    = 0

    while True:
        resp = requests.get(
            "https://api.rankparse.com/v1/backlinks",
            params={"domain": domain, "limit": LIMIT, "offset": offset},
            headers={"X-API-Key": API_KEY},
        )
        resp.raise_for_status()

        body  = resp.json()
        batch = body["data"]

        if not batch:
            break

        all_links.extend(batch)
        offset += len(batch)

        total = body["meta"]["total"]
        print(f"Fetched {len(all_links)} / {total}")

        if len(all_links) >= total:
            break

    return all_links

links = fetch_all_backlinks(DOMAIN)
print(f"Done. {len(links)} total backlinks.")

The loop stops when the API returns an empty batch — that is the reliable termination signal regardless of what meta.total says (it is an approximation). resp.raise_for_status() surfaces HTTP errors early so you are not silently iterating on a 402 or 429.
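If a 429 does show up mid-loop, waiting and retrying usually clears it. Here is one way to wrap the call, with the caveat that the backoff schedule (doubling from one second, capped at 30) is my assumption, not documented RankParse behavior. make_request is any zero-argument callable returning a response-like object:

```python
import time

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Seconds to wait before retry number `attempt` (0-indexed): 1, 2, 4, ..."""
    return min(cap, base * 2 ** attempt)

def call_with_retry(make_request, max_retries=4):
    """Repeat `make_request()` while it returns HTTP 429, backing off in between."""
    for attempt in range(max_retries):
        resp = make_request()
        if resp.status_code != 429:
            return resp
        time.sleep(backoff_delay(attempt))
    return make_request()  # final attempt; let the caller raise_for_status()
```

In the pagination loop you would swap the direct requests.get(...) for call_with_retry(lambda: requests.get(...)).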

Paginating a large domain — the offset pattern for stripe.com (~5,000 backlinks, limit=100):

Page   Offset   Results        Credits
1      0        1–100          2
2      100      101–200        2
3      200      201–300        2
...
50     4900     4,901–5,000    2

Total: 50 calls, 100 credits, ~5,000 backlinks retrieved.

Bulk Domain Analysis

If you have a list of domains to audit, loop over them. Keep it simple — one domain at a time, results written to a list you can dump to CSV or a database.

import csv
import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]
DOMAINS = ["stripe.com", "vercel.com", "supabase.com", "railway.app"]

def get_backlink_summary(domain):
    resp = requests.get(
        "https://api.rankparse.com/v1/backlinks",
        params={"domain": domain, "limit": 100},
        headers={"X-API-Key": API_KEY},
    )
    resp.raise_for_status()
    body      = resp.json()
    backlinks = body["data"]
    total     = body["meta"]["total"]

    # Average domain authority of linking domains
    avg_da = (
        sum(link["domain_authority"] for link in backlinks) / len(backlinks)
        if backlinks else 0
    )

    return {
        "domain":    domain,
        "total":     total,
        "sample":    len(backlinks),
        "avg_da":    round(avg_da, 1),
    }

results = []
for domain in DOMAINS:
    summary = get_backlink_summary(domain)
    results.append(summary)
    print(summary)

with open("backlink_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["domain", "total", "sample", "avg_da"])
    writer.writeheader()
    writer.writerows(results)

print("Saved to backlink_summary.csv")

Output:

{'domain': 'stripe.com',  'total': 4821, 'sample': 100, 'avg_da': 62.4}
{'domain': 'vercel.com',  'total': 3104, 'sample': 100, 'avg_da': 58.1}
{'domain': 'supabase.com','total': 1893, 'sample': 100, 'avg_da': 54.7}
{'domain': 'railway.app', 'total':  441, 'sample': 100, 'avg_da': 47.2}

You now have a sortable competitor backlink audit in about 20 lines of code.
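To actually sort it, sorted with a key on the summary dicts is enough. Using the numbers from the run above as literal data:

```python
# The summaries printed above, as literal data
results = [
    {"domain": "stripe.com",   "total": 4821, "avg_da": 62.4},
    {"domain": "vercel.com",   "total": 3104, "avg_da": 58.1},
    {"domain": "supabase.com", "total": 1893, "avg_da": 54.7},
    {"domain": "railway.app",  "total":  441, "avg_da": 47.2},
]

# Highest average link quality first
by_quality = sorted(results, key=lambda r: r["avg_da"], reverse=True)
for r in by_quality:
    print(f'{r["domain"]:<13} DA {r["avg_da"]:>5}  ({r["total"]:,} backlinks)')
```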

Note: avg_da is the average domain authority of the sampled first 100 linking domains, not of the full backlink set.

Cost Calculation

Each call to /v1/backlinks costs 2 credits.

Scenario                                  Calls   Credits used
1 domain, 1 page of results                   1              2
1 domain, fully paginated (50 pages)         50            100
50 domains, 1 page each                      50            100
4 domains (bulk example above)                4              8

The free tier gives you 100 credits with no credit card required — enough to pull a first page of results for 50 domains, or fully paginate a single domain with up to ~5,000 backlinks.
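Budgeting a job before running it is just arithmetic: ceil(total / limit) calls at 2 credits each. A small estimator (the constants come from the pricing table above):

```python
import math

CREDITS_PER_CALL = 2  # /v1/backlinks pricing, per the table above

def credits_needed(total_backlinks, limit=100):
    """Credits to fully paginate a domain with this many backlinks."""
    calls = math.ceil(total_backlinks / limit)
    return calls * CREDITS_PER_CALL

print(credits_needed(4821))              # 98  (stripe.com at limit=100)
print(credits_needed(4821, limit=1000))  # 10  (max limit, roughly 10x cheaper)
```

Raising limit toward the 1000 maximum is the easy lever: fewer calls, same data, lower cost.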

If you are running a one-time audit of a large site, the Starter pack (500 credits) covers 250 domain lookups. Growth and Scale packs are available for ongoing pipelines.


Next Steps

  • Read the backlinks endpoint reference for the full parameter list and response schema.
  • Combine backlinks with domain authority scores to rank linking domains by quality.
  • Use the batch endpoint to fetch multiple signal types for a domain in a single request.
  • Connect the MCP server to query backlink data directly from Claude or Cursor without writing any code.

Try RankParse free

100 credits, no credit card required.

Get API Key