# Getting Started with FineData API
Welcome to FineData. This guide will walk you through setting up your account and making your first web scraping request in under 5 minutes.
## Prerequisites
Before you begin, make sure you have:
- A FineData account (sign up free)
- Python 3.8+ or Node.js 18+
- Your API key from the dashboard
## Step 1: Get Your API Key
After creating your account, navigate to the API Keys section in your dashboard. Click Create New Key and give it a descriptive name like “Development”.
Keep your API key secure. Never commit it to version control or share it publicly.
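One common way to keep the key out of your source tree is to load it from an environment variable. A minimal sketch, assuming you export the key under a variable name of your choosing (`FINEDATA_API_KEY` here is our pick, not an SDK requirement):

```python
import os

def load_api_key(var_name: str = "FINEDATA_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it before running your scraper."
        )
    return key
```

With this in place, you can construct the client as `FineData(api_key=load_api_key())` instead of hard-coding the key.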
## Step 2: Install the SDK

**Python**

```bash
pip install finedata
```

**Node.js**

```bash
npm install finedata
```
## Step 3: Make Your First Request
Here’s a minimal example that scrapes a webpage and returns structured data:
**Python**

```python
from finedata import FineData

client = FineData(api_key="your-api-key")

result = client.scrape(
    url="https://example.com",
    options={
        "render_js": True,
        "wait_for": "networkidle"
    }
)

print(result.content)      # Page HTML content
print(result.status_code)  # HTTP status
print(result.metadata)     # Extracted metadata
```
**Node.js**

```javascript
import { FineData } from 'finedata';

const client = new FineData({ apiKey: 'your-api-key' });

const result = await client.scrape({
  url: 'https://example.com',
  options: {
    renderJs: true,
    waitFor: 'networkidle',
  },
});

console.log(result.content);
console.log(result.statusCode);
console.log(result.metadata);
```
## Step 4: Handle Anti-Bot Protection
FineData automatically handles common anti-bot measures. For sites with stronger protection, enable advanced bypass:
```python
result = client.scrape(
    url="https://protected-site.com",
    options={
        "render_js": True,
        "bypass_level": "advanced",
        "tls_profile": "chrome124",
        "residential_proxy": True
    }
)
```
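Since bypass features cost extra tokens, one pattern is to retry with stronger options only after a cheaper request fails. A sketch of that escalation logic under stated assumptions: the option tiers are our own choice, and the hypothetical `fetch` callable stands in for a wrapper around `client.scrape` that returns page content on success and `None` when blocked.

```python
from typing import Any, Callable, Dict, List, Optional

# Option sets ordered from cheapest to most aggressive. The keys mirror the
# scrape options shown above; the tiers themselves are an assumption.
ESCALATION_TIERS: List[Dict[str, Any]] = [
    {"render_js": True},
    {"render_js": True, "bypass_level": "advanced"},
    {"render_js": True, "bypass_level": "advanced", "residential_proxy": True},
]

def scrape_with_escalation(
    fetch: Callable[[Dict[str, Any]], Optional[str]],
) -> Optional[str]:
    """Try each option tier in order, returning the first successful result.

    `fetch` should return the page content on success and None when the
    request was blocked, so the loop only escalates as far as it must.
    """
    for options in ESCALATION_TIERS:
        content = fetch(options)
        if content is not None:
            return content
    return None
```

This keeps routine pages at the base cost and reserves residential proxies for the sites that actually need them.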
## Step 5: Parse Structured Data
Extract specific data points using CSS selectors:
```python
result = client.scrape(
    url="https://example.com/products",
    extract={
        "title": "h1.product-title",
        "price": ".price-current",
        "description": ".product-description p",
        "images": ["img.product-image @src"]
    }
)

for item in result.extracted:
    print(f"{item['title']}: {item['price']}")
```
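A selector that matches nothing can leave a field empty, so it is worth formatting extracted records defensively. A sketch, assuming `result.extracted` is a list of dicts as iterated above (the placeholder strings are our own):

```python
from typing import Dict, List

def format_products(extracted: List[Dict[str, str]]) -> List[str]:
    """Render extracted product records, tolerating missing fields."""
    lines = []
    for item in extracted:
        title = item.get("title") or "<no title>"
        price = item.get("price") or "<no price>"
        lines.append(f"{title}: {price}")
    return lines
```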
## Understanding Token Usage
Each request consumes tokens based on the features used:
| Feature | Token Cost |
|---|---|
| Base request | 1 token |
| JavaScript rendering | +5 tokens |
| Anti-bot bypass | +2 tokens |
| Residential proxy | +3 tokens |
| Captcha solving | +10 tokens |
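The table above translates directly into a small cost estimator. A sketch: the option names are the ones used in this guide, except `solve_captcha`, which is a hypothetical flag we use to stand in for captcha solving.

```python
# Per-feature token costs, taken from the table above.
TOKEN_COSTS = {
    "base": 1,
    "render_js": 5,
    "bypass": 2,
    "residential_proxy": 3,
    "solve_captcha": 10,
}

def estimate_tokens(options: dict) -> int:
    """Estimate the token cost of a single request from its options."""
    cost = TOKEN_COSTS["base"]
    if options.get("render_js"):
        cost += TOKEN_COSTS["render_js"]
    if options.get("bypass_level"):  # any non-default bypass level
        cost += TOKEN_COSTS["bypass"]
    if options.get("residential_proxy"):
        cost += TOKEN_COSTS["residential_proxy"]
    if options.get("solve_captcha"):  # hypothetical captcha-solving flag
        cost += TOKEN_COSTS["solve_captcha"]
    return cost
```

For example, the Step 4 request (JavaScript rendering, advanced bypass, residential proxy) estimates to 1 + 5 + 2 + 3 = 11 tokens.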
Monitor your usage in the dashboard or via the API:
```python
usage = client.get_usage()
print(f"Tokens used: {usage.tokens_used}/{usage.tokens_limit}")
```
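To avoid running out of quota partway through a batch, you can check affordability before scraping. A minimal sketch, assuming the `tokens_used` and `tokens_limit` fields shown above:

```python
def tokens_remaining(tokens_used: int, tokens_limit: int) -> int:
    """Return the number of tokens left in the current quota period."""
    return max(tokens_limit - tokens_used, 0)

def can_afford(tokens_used: int, tokens_limit: int, estimated_cost: int) -> bool:
    """Check whether a planned request (or batch) fits the remaining quota."""
    return estimated_cost <= tokens_remaining(tokens_used, tokens_limit)
```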
## What’s Next?
Now that you’ve made your first request, explore these resources:
- API Reference — Complete endpoint documentation
- Proxy Configuration — TLS profiles and proxy options
- Batch Scraping — Process multiple URLs efficiently
- Webhooks — Async scraping with callbacks
Need help? Reach out to our support team or join the community on Discord.