Python Backlink Checker: Pull Backlink Data with the RankParse API
Build a Python backlink checker using the RankParse API. Working 10-line script, pagination, bulk domain analysis, and cost breakdown — free tier included.
If you need backlink data in a Python script — for an SEO audit tool, a bulk analysis pipeline, or just quick research — you want an API that behaves like a normal HTTP endpoint: send a request, get JSON back, done.
This tutorial walks through exactly that using the RankParse API. By the end you will have a working script that fetches backlinks, handles pagination, and loops over a list of domains.
No scraping. No headless browsers. No rate-limit wrestling. Just requests.get().
Prerequisites
- Python 3.8+ — anything that ships with f-strings is fine.
- requests library — `pip install requests` if you do not have it.
- A RankParse API key — sign up at rankparse.com/signup. No credit card required. The free tier gives you 100 credits, which is enough to run the examples in this post several times over.
Once you have your key, store it as an environment variable so it never ends up in source control:
```bash
export RANKPARSE_API_KEY="rp_your_key_here"
```

The 10-Line Script
Here is the complete, working script. Everything else in this post builds on it.
```python
import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]
DOMAIN = "stripe.com"

resp = requests.get(
    "https://api.rankparse.com/v1/backlinks",
    params={"domain": DOMAIN, "limit": 100},
    headers={"X-API-Key": API_KEY},
)

backlinks = resp.json()["data"]
print(f"{len(backlinks)} backlinks found")
for link in backlinks[:5]:
    print(link["from_url"], "→", link["to_url"])
```

Run it and you will see something like:
```
100 backlinks found
https://news.ycombinator.com/item?id=29382910 → https://stripe.com/blog/payment-api-design
https://dev.to/swyx/how-stripe-builds-apis-3k2g → https://stripe.com/docs/api
https://lobste.rs/s/abc123/stripe_checkout → https://stripe.com/docs/payments
...
```

Let's walk through the three meaningful parts.
**Authentication.** Every request must include an `X-API-Key` header. There is no OAuth dance, no token refresh, no session management — just the header. If the key is missing or invalid the API returns a 401 with a JSON error body.

**Query parameters.** `domain` is the domain you want backlinks to — no `https://` prefix or trailing slash. `limit` controls how many results come back (default 100, max 1000).

**The response.** Your data lives under the `"data"` key. If the domain has no backlinks in the dataset, you get a 200 with an empty list — never a 404.
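Those three behaviors — header auth, 401 on a bad key, 200 with an empty list — fold neatly into one small helper. This is a sketch, not part of any official RankParse client, and `fetch_backlinks` is an illustrative name; it only relies on the status codes described above.

```python
import requests

def fetch_backlinks(api_key, domain, limit=100):
    """Return the backlink list for a domain, or raise with a useful message."""
    resp = requests.get(
        "https://api.rankparse.com/v1/backlinks",
        params={"domain": domain, "limit": limit},
        headers={"X-API-Key": api_key},
    )
    if resp.status_code == 401:
        # Missing or invalid key: the API returns a JSON error body
        raise RuntimeError(f"Auth failed: {resp.text}")
    resp.raise_for_status()
    # A domain with no known backlinks is a 200 with an empty list, not a 404
    return resp.json()["data"]
```

An empty return value therefore means "no backlinks in the dataset", never "domain not found".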
What the Response Looks Like
Each element in `resp.json()["data"]` has this shape:

```json
{
  "from_url": "https://news.ycombinator.com/item?id=29382910",
  "to_url": "https://stripe.com/blog/payment-api-design",
  "anchor_text": "payment API design",
  "domain_authority": 91
}
```

- `from_url` — the page that contains the link.
- `to_url` — the page on your domain being linked to.
- `anchor_text` — the visible link text. Useful for anchor profile analysis.
- `domain_authority` — a 0–100 score for the linking domain, derived from the Common Crawl link graph. Higher means more authoritative.
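The anchor profile analysis mentioned above is a one-liner with `collections.Counter`. The records here are hand-written samples in the shape shown; in practice you would feed in `resp.json()["data"]`.

```python
from collections import Counter

# Sample records in the shape shown above (normally resp.json()["data"])
backlinks = [
    {"anchor_text": "payment API design", "domain_authority": 91},
    {"anchor_text": "Stripe docs", "domain_authority": 74},
    {"anchor_text": "payment API design", "domain_authority": 55},
]

# Anchor profile: how often each anchor text appears across the backlinks
anchor_counts = Counter(link["anchor_text"] for link in backlinks)
for anchor, count in anchor_counts.most_common():
    print(f"{count:>3}  {anchor}")
```

A profile dominated by a single exact-match anchor is usually worth a closer look.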
The full response envelope also includes a "meta" field:
```json
{
  "data": [...],
  "meta": {
    "total": 4821,
    "offset": 0,
    "limit": 100
  }
}
```

`total` is the approximate count of matching backlinks for the domain. You need `total` and `offset` when paginating.
Pagination Using ?offset=
One call returns at most 1000 results. For domains with thousands of backlinks, page through using the offset parameter.
```python
import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]
DOMAIN = "stripe.com"
LIMIT = 100

def fetch_all_backlinks(domain):
    all_links = []
    offset = 0
    while True:
        resp = requests.get(
            "https://api.rankparse.com/v1/backlinks",
            params={"domain": domain, "limit": LIMIT, "offset": offset},
            headers={"X-API-Key": API_KEY},
        )
        resp.raise_for_status()
        body = resp.json()
        batch = body["data"]
        if not batch:
            break
        all_links.extend(batch)
        offset += len(batch)
        total = body["meta"]["total"]
        print(f"Fetched {len(all_links)} / {total}")
        if len(all_links) >= total:
            break
    return all_links

links = fetch_all_backlinks(DOMAIN)
print(f"Done. {len(links)} total backlinks.")
```

The loop stops when the API returns an empty batch — that is the reliable termination signal regardless of what `meta.total` says (it is an approximation). `resp.raise_for_status()` surfaces HTTP errors early so you are not silently iterating on a 402 or 429.
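A long pagination run is also where you are most likely to meet a 429. Rather than crashing, you can back off and retry. This wrapper is a sketch, not part of any official client; it assumes the API may send a `Retry-After` header but works without one.

```python
import time
import requests

def get_with_retry(url, max_retries=3, **kwargs):
    """GET with exponential backoff on 429 (rate limit)."""
    for attempt in range(max_retries + 1):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429 or attempt == max_retries:
            return resp
        # Honor Retry-After if the server sends it, otherwise back off 1s, 2s, 4s...
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return resp
```

Swap it in for `requests.get` inside `fetch_all_backlinks` and the loop rides out transient rate limits instead of raising on the first 429.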
Bulk Domain Analysis
If you have a list of domains to audit, loop over them. Keep it simple — one domain at a time, results written to a list you can dump to CSV or a database.
```python
import csv
import os
import requests

API_KEY = os.environ["RANKPARSE_API_KEY"]
DOMAINS = ["stripe.com", "vercel.com", "supabase.com", "railway.app"]

def get_backlink_summary(domain):
    resp = requests.get(
        "https://api.rankparse.com/v1/backlinks",
        params={"domain": domain, "limit": 100},
        headers={"X-API-Key": API_KEY},
    )
    resp.raise_for_status()
    body = resp.json()
    backlinks = body["data"]
    total = body["meta"]["total"]
    # Average domain authority of the linking domains in this sample
    avg_da = (
        sum(link["domain_authority"] for link in backlinks) / len(backlinks)
        if backlinks else 0
    )
    return {
        "domain": domain,
        "total": total,
        "sample": len(backlinks),
        "avg_da": round(avg_da, 1),
    }

results = []
for domain in DOMAINS:
    summary = get_backlink_summary(domain)
    results.append(summary)
    print(summary)

with open("backlink_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["domain", "total", "sample", "avg_da"])
    writer.writeheader()
    writer.writerows(results)

print("Saved to backlink_summary.csv")
```

Output:
```
{'domain': 'stripe.com', 'total': 4821, 'sample': 100, 'avg_da': 62.4}
{'domain': 'vercel.com', 'total': 3104, 'sample': 100, 'avg_da': 58.1}
{'domain': 'supabase.com', 'total': 1893, 'sample': 100, 'avg_da': 54.7}
{'domain': 'railway.app', 'total': 441, 'sample': 100, 'avg_da': 47.2}
```

You now have a sortable competitor backlink audit in about 20 lines of code.
Cost Calculation
Each call to /v1/backlinks costs 2 credits.
| Scenario | Calls | Credits used |
|---|---|---|
| 1 domain, 1 page of results | 1 | 2 |
| 1 domain, fully paginated (50 pages) | 50 | 100 |
| 50 domains, 1 page each | 50 | 100 |
| 4 domains (bulk example above) | 4 | 8 |
The free tier gives you 100 credits with no credit card required — enough to pull a first page of results for 50 domains, or fully paginate a single domain with up to ~5,000 backlinks.
If you are running a one-time audit of a large site, the Starter pack (500 credits) covers 250 domain lookups. Growth and Scale packs are available for ongoing pipelines.
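The arithmetic behind the table generalizes to a two-line estimate. The 2-credits-per-call figure comes from this section; `credits_to_paginate` is a hypothetical helper, not part of the API.

```python
import math

CREDITS_PER_CALL = 2  # /v1/backlinks pricing, per the table above

def credits_to_paginate(total_backlinks, limit=1000):
    """Credits needed to pull every backlink for one domain at a given page size."""
    pages = math.ceil(total_backlinks / limit)
    return pages * CREDITS_PER_CALL

# stripe.com: ~4,821 backlinks at limit=100 → 49 calls → 98 credits
print(credits_to_paginate(4821, limit=100))
```

Note that paginating at the maximum `limit=1000` cuts the credit cost of a full crawl by a factor of ten compared with the default page size.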
Next Steps
- Read the backlinks endpoint reference for the full parameter list and response schema.
- Combine backlinks with domain authority scores to rank linking domains by quality.
- Use the batch endpoint to fetch multiple signal types for a domain in a single request.
- Connect the MCP server to query backlink data directly from Claude or Cursor without writing any code.