API Reference
⚠️ Disclaimer: AI-generated responses may contain inaccuracies.
Always verify important information from the original sources provided.
Getting Started
No API keys required! Simply make requests to the endpoints below.
Base URL
https://anveshasearch.vercel.app
🔍 Quick Search
POST
/api/search
Searches 7 engines in parallel, extracts page content, and returns an AI-synthesized answer.
Request Body:
{
  "query": "best python frameworks 2025",
  "num_results": 5
}
Response:
{
  "success": true,
  "query": "best python frameworks 2025",
  "ai_answer": "Based on the sources, the top Python frameworks...",
  "results": [
    {
      "title": "Top Python Frameworks",
      "url": "https://example.com/...",
      "content": "Django, Flask, and FastAPI lead...",
      "engine": "DuckDuckGo"
    }
  ],
  "total_results": 25,
  "engine_stats": {
    "duckduckgo": "✓ 5 results",
    "startpage": "✓ 5 results",
    "bing": "✓ 5 results",
    "brave": "✓ 5 results"
  }
}
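A minimal Python client sketch for the request/response shown above, using only the standard library. The endpoint, base URL, and field names come from this page; the helper name `quick_search` is my own.

```python
import json
import urllib.request

BASE_URL = "https://anveshasearch.vercel.app"

def quick_search(query: str, num_results: int = 5) -> dict:
    """POST /api/search and return the parsed JSON response."""
    payload = json.dumps({"query": query, "num_results": num_results}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/search",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Parallel search + AI synthesis can take a while, so use a generous timeout.
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    result = quick_search("best python frameworks 2025")
    if result.get("success"):
        print(result["ai_answer"])
```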
🧠 Deep Research (Streaming)
GET
/api/deep-research-stream
Real-time streaming research with live AI response generation.
Research Depth Modes:
| Mode | Questions | Est. Time |
|---|---|---|
| surface | 3 | ~45 seconds |
| moderate | 6 | ~2 minutes |
| deep | 10 | ~4 minutes |
Query Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| topic | string | Yes | The research topic |
| depth | string | No | surface, moderate, or deep |
| num_questions | int | No | Override question count (1-10) |
Example:
GET /api/deep-research-stream?topic=quantum+computing&depth=moderate
SSE Event Types:
- status → { "type": "status", "message": "Generating questions..." }
- questions → { "type": "questions", "questions": ["Q1", "Q2", ...] }
- question_start → { "type": "question_start", "num": 1, "total": 6 }
- engine_result → { "type": "engine_result", "engine": "DuckDuckGo", "count": 5 }
- question_done → { "type": "question_done", "num": 1 }
- report_start → { "type": "report_start", "sources": [...], "total_sources": 25 }
- report_token → { "type": "report_token", "token": "The" }
- report_done → { "type": "report_done", "report": "# Full Report..." }
- done → { "type": "done" }
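The event types above can be consumed with a small SSE parser. This is a sketch assuming the stream sends standard `data: {...}` lines, one JSON object per event; the helper names are my own.

```python
import json

def parse_sse_events(lines):
    """Parse text/event-stream lines of the form 'data: {...}' into dicts."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

def collect_report(events):
    """Accumulate report_token events into the final report text.

    Prefers the full text from report_done when present, since it is
    authoritative over the token-by-token stream.
    """
    tokens = []
    for event in events:
        if event["type"] == "report_token":
            tokens.append(event["token"])
        elif event["type"] == "report_done":
            return event.get("report", "".join(tokens))
    return "".join(tokens)
```

In practice you would iterate the response body of `GET /api/deep-research-stream?topic=...` line by line and feed it to `parse_sse_events`, printing `report_token` events as they arrive for a live-typing effect.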
🤖 AI Fallback System
The API automatically handles AI model failures:
- Primary: Llama 3.3 70B (DeepInfra)
- Cache: Felo.ai cached answer
- Fallback: Qwen 72B (DeepInfra)
- Error: Friendly error message
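The fallback chain above amounts to trying providers in order until one succeeds. A minimal sketch of that pattern (the provider names mirror the list; the function and its signature are hypothetical, not the API's internals):

```python
def answer_with_fallback(prompt, providers):
    """Try each (name, call) provider in order; return the first answer.

    Mirrors the documented chain: primary model, cached answer,
    fallback model, then a friendly error message.
    """
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception:
            continue  # this provider failed; move on to the next
    return "error", "All AI models are currently unavailable. Please try again."
```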
📊 Other Endpoints
POST
/api/felo
Search Felo.ai specifically for AI-powered results.
GET
/api/engines
Returns the list of available search engines and their status.
GET
/api/health
Health check endpoint.
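The GET endpoints above need no body, so a single stdlib helper covers them. A sketch, assuming each returns JSON:

```python
import json
import urllib.request

BASE_URL = "https://anveshasearch.vercel.app"

def get_json(path: str) -> dict:
    """Fetch a GET endpoint (e.g. /api/health, /api/engines) as parsed JSON."""
    with urllib.request.urlopen(f"{BASE_URL}{path}", timeout=30) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(get_json("/api/health"))
    print(get_json("/api/engines"))
```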
⚡ How It Works
- Parallel Search — Queries 7 engines simultaneously
- Deduplication — Removes duplicate URLs
- Content Extraction — Fetches actual page content
- AI Synthesis — Streams response token-by-token
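Step 2 above, deduplication, is the easy one to illustrate: keep the first result for each URL and drop later duplicates from other engines. A sketch (the function name and exact normalization are my own, not the API's internals):

```python
def dedupe_by_url(results):
    """Drop results whose URL has already been seen, keeping first occurrence."""
    seen = set()
    unique = []
    for r in results:
        url = r["url"]
        if url not in seen:
            seen.add(url)
            unique.append(r)
    return unique
```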
❌ Error Handling
{
  "success": false,
  "error": "Query is required"
}
All endpoints return success: false with an error message on failure.
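Since every endpoint uses the same success/error envelope, a single check suffices on the client side. A sketch (the helper name is my own):

```python
def unwrap(response: dict) -> dict:
    """Raise if the API reported failure; otherwise return the response."""
    if not response.get("success", False):
        raise RuntimeError(response.get("error", "Unknown API error"))
    return response
```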