API Reference
Complete documentation for the Grep Research API. Start research jobs, poll for results, and integrate deep research into your applications.
Authentication
All API requests require authentication using a Bearer token. Your API key follows the format parcha-xyz-{32_hex_chars}.
```shell
curl -X POST "https://api.grep.ai/v1/research/start" \
  -H "Authorization: Bearer parcha-xyz-1234567890abcdef..." \
  -H "Content-Type: application/json" \
  -d '{"question": "Due diligence on Stripe"}'
```

Keep your API key secure
Never expose your API key in client-side code. Use server-side requests or environment variables.
Base URL
All endpoints in this reference are served from https://api.grep.ai.
Rate Limits
| Tier | Requests/min | Concurrent Jobs |
|---|---|---|
| Free | 10 | 2 |
| Pro | 60 | 10 |
| Enterprise | Unlimited | Unlimited |
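When a client exceeds its tier's limit, it should back off and retry rather than hammer the API. A sketch of exponential backoff, assuming the API signals rate limiting with HTTP 429 (this reference does not specify the status code, so treat that as an assumption):

```python
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Retry `call` while it signals rate limiting.

    `call` must return a (status_code, body) tuple. Delays double on
    each retry: 1s, 2s, 4s, ... The `sleep` hook is injectable so the
    loop can be tested without waiting.
    """
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:  # assumed rate-limit status code
            return status, body
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

Free-tier clients (10 requests/min) would pair this with a delay of at least a few seconds between calls.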
Endpoints
POST /api/v1/research/start
Start a new research job. Returns a job ID for polling or webhook callback.
Request Body

| Parameter | Type | Description |
|---|---|---|
| question (required) | string | The research query or question to investigate |
| effort | enum | Effort level: low (~30s), medium (2-3 min), high (5+ min), or build (apps/media). Legacy: ultra_fast, deep, ultra_deep still accepted. |
| entity_type | enum | Type of entity: company, person, or vessel |
| website | string | Entity's website URL for additional context |
| approach | enum | Research methodology: general, scientific, technical, or creative |
| context | string | Additional context or background information |
| skills | string[] | Specific skills to use (see /research/skills) |
| mcp_tools | string[] | MCP tools to enable (see /research/mcp-tools) |
| webhook_url | string | URL to POST results to when the job completes |
| json_schema | object | Custom JSON schema for structured output. See the Structured Output guide below. |
Example Request

```json
{
  "question": "Complete due diligence on Stripe Inc",
  "effort": "high",
  "entity_type": "company",
  "website": "https://stripe.com",
  "approach": "general",
  "context": "Evaluating for Series B investment",
  "skills": ["financial-data-research", "competitive-analysis"],
  "mcp_tools": ["mcp__financial_datasets__getCompanyFinancials"],
  "webhook_url": "https://myapp.com/webhook"
}
```

Response
```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "queued",
  "message": "Research pipeline started"
}
```

GET /api/v1/jobs/{job_id}
Get the status and results of a research job.
Path Parameters

| Parameter | Type | Description |
|---|---|---|
| job_id (required) | string | The job ID returned from /research/start |
Response

```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "complete",
  "progress": 100,
  "result": {
    "summary": "Stripe Inc is a leading...",
    "sections": [
      {
        "title": "Company Overview",
        "content": "...",
        "sources": [...]
      }
    ],
    "metadata": {
      "effort": "high",
      "skills_used": ["financial-data-research"],
      "sources_count": 47
    }
  },
  "created_at": "2024-01-15T10:30:00Z",
  "completed_at": "2024-01-15T10:38:00Z"
}
```

Job Status Values
| Status | Description |
|---|---|
| queued | Job is waiting to be processed |
| in_progress | AI experts are working |
| complete | Research finished, results available |
| failed | Job failed, check error message |
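A client can poll GET /api/v1/jobs/{job_id} until one of the terminal statuses above appears. A minimal sketch; the fetch function, 5-second interval, and 10-minute timeout are choices for illustration, not API requirements:

```python
import time

TERMINAL_STATUSES = {"complete", "failed"}

def poll_job(fetch_status, interval: float = 5.0, timeout: float = 600.0,
             sleep=time.sleep, clock=time.monotonic):
    """Poll until the job reaches a terminal status.

    `fetch_status` should GET /api/v1/jobs/{job_id} and return the
    decoded JSON body as a dict. `sleep` and `clock` are injectable
    so the loop can be tested without real waiting.
    """
    deadline = clock() + timeout
    while True:
        job = fetch_status()
        if job["status"] in TERMINAL_STATUSES:
            return job
        if clock() > deadline:
            raise TimeoutError("job did not finish before the timeout")
        sleep(interval)
```

For long high-effort jobs (5+ minutes), a webhook_url avoids polling entirely; see Webhooks below.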
GET /api/v1/research/skills
List all available research skills and their descriptions.
Returns thousands of expert-curated skills organized by domain. Use skill IDs in the skills parameter when starting research.
GET /api/v1/research/mcp-tools
List all available MCP tools across all servers.
Returns 24 MCP tools across 3 servers (parallel, parcha_tools, financial_datasets). Use tool IDs in the mcp_tools parameter.
Structured Output
Get research results in a custom JSON structure. Define a JSON schema with your request and receive machine-readable data alongside the standard research report.
Supported Effort Levels
Structured output is available with the low and medium effort levels; high and build do not support custom schemas. The legacy names (ultra_fast, deep) still work.
Pass a standard JSON Schema object in the json_schema parameter when starting a research job. The AI extracts the requested data points from its research findings and returns them in the custom_output field of the response.
The pipeline:
- You define a JSON Schema describing the data you want
- Your schema is merged with the base research output schema
- The AI conducts research and extracts your fields from findings
- Results include both the standard report and your structured data
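The steps above can be sketched as a small request builder. build_start_request is illustrative, not part of the API; the effort check mirrors the support matrix described above:

```python
def build_start_request(question: str, json_schema: dict,
                        effort: str = "medium") -> dict:
    """Assemble a /research/start body with a custom output schema.

    Structured output only works with low and medium effort (plus the
    legacy names ultra_fast and deep), so other levels are rejected
    before the request is ever sent.
    """
    if effort not in ("low", "medium", "ultra_fast", "deep"):
        raise ValueError("json_schema requires low or medium effort")
    return {"question": question, "effort": effort, "json_schema": json_schema}

# A schema asking only for a founding year, with a description
# to guide extraction.
schema = {
    "type": "object",
    "properties": {
        "founding_year": {
            "type": "integer",
            "description": "Year the company was founded",
        },
    },
}
body = build_start_request("When was Stripe founded?", schema)
```

The returned dict is what you would POST, with the auth headers, to /api/v1/research/start.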
The json_schema parameter accepts a standard JSON Schema object. Define your fields under properties with types and descriptions.
Supported Types
| JSON Schema Type | Use For |
|---|---|
| string | Names, descriptions, URLs, text content |
| integer | Years, counts, whole numbers |
| number | Prices, percentages, decimals |
| boolean | Yes/no answers, flags |
| array | Lists of items (with a nested items schema) |
| object | Nested structures with sub-fields |
Add a description to each field to guide the AI. The more specific your descriptions, the better the extraction quality.
Include the json_schema parameter in your POST /research/start request body:
```json
{
  "question": "Who are the founders of Stripe and what are their backgrounds?",
  "effort": "medium",
  "json_schema": {
    "type": "object",
    "properties": {
      "founders": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "role": { "type": "string" },
            "background": { "type": "string" },
            "education": { "type": "string" }
          }
        },
        "description": "List of company founders with their details"
      },
      "founding_year": {
        "type": "integer",
        "description": "Year the company was founded"
      },
      "company_name": {
        "type": "string",
        "description": "Name of the company"
      }
    }
  }
}
```

When the job completes, the response includes a custom_output field containing your structured data:
```json
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "complete",
  "result": {
    "summary": "Stripe Inc was founded in 2010...",
    "sections": [...],
    "metadata": {
      "effort": "medium",
      "sources_count": 32
    }
  },
  "custom_output": {
    "founders": [
      {
        "name": "Patrick Collison",
        "role": "CEO & Co-founder",
        "background": "Irish entrepreneur, started coding at age 10",
        "education": "MIT (dropped out)"
      },
      {
        "name": "John Collison",
        "role": "President & Co-founder",
        "background": "Youngest self-made billionaire at age 26",
        "education": "Harvard University (dropped out)"
      }
    ],
    "founding_year": 2010,
    "company_name": "Stripe"
  }
}
```

Best Practices
- Use descriptive field names and description values to guide the AI
- Group related data in arrays of objects rather than parallel arrays
- Avoid redundant computed fields (e.g. count alongside an array)
- Fields that cannot be found will return null
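Since unfound fields come back as null, a consumer of custom_output should be defensive about missing data. A minimal sketch; extract_founders and its fallback values are illustrative, not part of the API:

```python
def extract_founders(response: dict) -> list[dict]:
    """Pull founder records out of a completed job's response.

    Every lookup falls back to a default, because fields the research
    could not establish are returned as null (None in Python).
    """
    custom = response.get("custom_output") or {}
    founders = custom.get("founders") or []
    return [
        {
            "name": person.get("name") or "unknown",
            "role": person.get("role") or "unknown",
        }
        for person in founders
    ]
```

The same null-safe pattern applies to any field defined in your schema.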
Webhooks
Instead of polling, you can receive a POST request when your job completes. Add a webhook_url to your research request.
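The handler behind your webhook_url just needs to parse the POSTed body and branch on status. A minimal sketch, assuming the callback payload matches the GET /api/v1/jobs/{job_id} response shape shown above (that shape, and handle_webhook itself, are assumptions for illustration):

```python
import json

def handle_webhook(raw_body: bytes) -> str:
    """Process a completion callback POSTed to your webhook_url.

    Checks the job status before trusting the result, since failed
    jobs are assumed to be delivered the same way.
    """
    payload = json.loads(raw_body)
    job_id = payload["job_id"]
    if payload.get("status") == "complete":
        # Hand payload["result"] (and custom_output, if any) to your
        # own storage or processing pipeline here.
        return f"stored results for {job_id}"
    return f"job {job_id} ended with status {payload.get('status')}"
```

In production this function would sit behind your web framework's route for the webhook URL you registered.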
View Webhook Documentation