
Tavily Best Practices

tavily-ai/skills

This skill provides access to Tavily's search API, enabling real-time web data retrieval for AI applications. It offers capabilities such as web search, URL content extraction, site crawling, URL discovery, and AI-powered research, making it suitable for developers building AI agents, research tools, and data pipelines. The skill supports both SDK and command-based integrations for creating custom workflows or using out-of-the-box research functions.

npx skills add https://github.com/tavily-ai/skills --skill tavily-best-practices

Tavily

Tavily is a search API designed for LLMs, enabling AI applications to access real-time web data.

Installation

Python:

pip install tavily-python

JavaScript:

npm install @tavily/core

See references/sdk.md for complete SDK reference.

Client Initialization

from tavily import TavilyClient
# Uses TAVILY_API_KEY env var (recommended)
client = TavilyClient()
# With project tracking (for usage organization)
client = TavilyClient(project_id="your-project-id")
# Async client for parallel queries
from tavily import AsyncTavilyClient
async_client = AsyncTavilyClient()
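The async client is useful for fanning out several searches concurrently. A minimal sketch, assuming only that `AsyncTavilyClient.search()` is awaitable as shown in the SDK reference — the `search_all` helper and its queries are illustrative, not part of the SDK:

```python
import asyncio

async def search_all(client, queries):
    """Run one search per query concurrently; results come back in query order."""
    tasks = [client.search(query=q) for q in queries]
    return await asyncio.gather(*tasks)

# Usage (requires TAVILY_API_KEY):
# responses = asyncio.run(search_all(AsyncTavilyClient(), ["topic one", "topic two"]))
```

`asyncio.gather` preserves input order, so `responses[i]` corresponds to `queries[i]` even though the requests overlap in time.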

Choosing the Right Method

For custom agents/workflows:

| Need | Method |
|------|--------|
| Web search results | search() |
| Content from specific URLs | extract() |
| Content from entire site | crawl() |
| URL discovery from site | map() |

For out-of-the-box research:

| Need | Method |
|------|--------|
| End-to-end research with AI synthesis | research() |

Quick Reference

search() - Web Search

response = client.search(
    query="quantum computing breakthroughs",  # Keep under 400 chars
    max_results=10,
    search_depth="advanced"
)
print(response)

Key parameters: query, max_results, search_depth (ultra-fast/fast/basic/advanced), include_domains, exclude_domains, time_range.

See references/search.md for complete search reference.
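The domain and time filters narrow results at the API level rather than after the fact. A hedged sketch — the helper name and the example values are ours; the parameter names come from the list above:

```python
def filtered_search(client, query, domains, time_range="month"):
    """Search only within the given domains, limited to recent results."""
    return client.search(
        query=query,
        include_domains=domains,   # e.g. ["arxiv.org", "nature.com"]
        time_range=time_range,     # recency window, e.g. "week" or "month"
    )
```

Passing filters to the API avoids fetching and discarding off-domain results client-side.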

extract() - URL Content Extraction

# Simple one-step extraction
response = client.extract(
    urls=["https://docs.example.com"],
    extract_depth="advanced"
)
print(response)

Key parameters: urls (max 20), extract_depth, query, chunks_per_source (1-5).

See references/extract.md for complete extract reference.
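Because extract() accepts at most 20 URLs per call, longer URL lists need batching. A minimal sketch — the helper is ours, and we assume extracted pages are returned under the response's "results" field:

```python
def extract_in_batches(client, urls, batch_size=20, **kwargs):
    """Call extract() once per batch of up to 20 URLs, collecting all results."""
    results = []
    for i in range(0, len(urls), batch_size):
        response = client.extract(urls=urls[i:i + batch_size], **kwargs)
        results.extend(response.get("results", []))
    return results
```

Extra keyword arguments (extract_depth, query, chunks_per_source) pass through unchanged to each underlying extract() call.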

crawl() - Site-Wide Extraction

response = client.crawl(
    url="https://docs.example.com",
    instructions="Find API documentation pages",  # Semantic focus
    extract_depth="advanced"
)
print(response)

Key parameters: url, max_depth, max_breadth, limit, instructions, chunks_per_source, select_paths, exclude_paths.

See references/crawl.md for complete crawl reference.
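Path filters keep a crawl from wandering into irrelevant sections of a site. A hedged sketch — the helper is ours, and we assume select_paths/exclude_paths take path patterns as shown; confirm the exact pattern syntax in references/crawl.md:

```python
def crawl_docs_only(client, base_url):
    """Crawl only documentation paths, skipping blog pages."""
    return client.crawl(
        url=base_url,
        select_paths=["/docs/.*"],    # path patterns to include (assumed form)
        exclude_paths=["/blog/.*"],   # path patterns to skip (assumed form)
        limit=50,                     # cap total pages crawled
    )
```

Combining path filters with a limit bounds both where the crawler goes and how much it fetches.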

map() - URL Discovery

response = client.map(
    url="https://docs.example.com"
)
print(response)
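map() pairs naturally with extract(): discover a site's URLs first, then pull content only from the pages you care about. A hedged sketch — the helper is ours, and we assume the map response lists discovered URLs under "results"; check the response shape in your SDK version:

```python
def map_then_extract(client, base_url, keyword, limit=20):
    """Discover a site's URLs, then extract content from pages matching keyword."""
    site_map = client.map(url=base_url)
    # Assumption: discovered URLs are returned under "results"
    matching = [u for u in site_map.get("results", []) if keyword in u]
    if not matching:
        return []
    # extract() accepts at most 20 URLs per call
    response = client.extract(urls=matching[:limit])
    return response.get("results", [])
```

Filtering the URL list before extracting avoids spending extraction calls on pages you will discard anyway.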

research() - AI-Powered Research

import time
# For comprehensive multi-topic research
result = client.research(
    input="Analyze competitive landscape for X in SMB market",
    model="pro"  # or "mini" for focused queries, "auto" when unsure
)
request_id = result["request_id"]
# Poll until the job finishes
response = client.get_research(request_id)
while response["status"] not in ["completed", "failed"]:
    time.sleep(10)
    response = client.get_research(request_id)
if response["status"] == "completed":
    print(response["content"])  # The research report
else:
    print(response)  # Inspect failure details

Key parameters: input, model ("mini"/"pro"/"auto"), stream, output_schema, citation_format.

See references/research.md for complete research reference.
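The polling loop above can spin forever if a job stalls, so bounding it with a timeout is safer. A sketch — the helper and its defaults are ours; get_research() and the status/content fields come from the example above:

```python
import time

def wait_for_research(client, request_id, poll_interval=10, timeout=600):
    """Poll get_research() until the job completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = client.get_research(request_id)
        if response["status"] == "completed":
            return response["content"]
        if response["status"] == "failed":
            raise RuntimeError(f"Research {request_id} failed: {response}")
        time.sleep(poll_interval)
    raise TimeoutError(f"Research {request_id} did not finish in {timeout}s")
```

Raising on failure or timeout lets callers distinguish a bad job from a missing report instead of silently printing an incomplete response.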

Detailed Guides

For complete parameters, response fields, patterns, and examples:


Files

references/sdk.md

references/search.md

references/extract.md

references/crawl.md

references/research.md


references/integrations.md
