browser-use/supported-models
ChatBrowserUse() is our optimized in-house model, matching the accuracy of top models while completing tasks 3-5x faster.
from browser_use import Agent, ChatBrowserUse
# Initialize the model
llm = ChatBrowserUse()
# Create agent with the model
agent = Agent(
task="...", # Your task here
llm=llm
)
BROWSER_USE_API_KEY=
Get your API key from the Browser Use Cloud. New signups get $10 free credit via OAuth or $1 via email.
ChatBrowserUse offers competitive pricing per 1 million tokens:
| Token Type | Price per 1M tokens |
|---|---|
| Input tokens | $0.50 |
| Output tokens | $3.00 |
| Cached tokens | $0.10 |
Cached tokens provide significant cost savings on repeated context, reducing input costs by 80%.
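As a rough illustration of the caching discount, the rates in the table above imply the following cost arithmetic (a sketch for estimation only; actual billing may differ):

```python
# Illustrative cost arithmetic using the per-1M-token rates from the table above.
INPUT_RATE = 0.50    # $ per 1M uncached input tokens
CACHED_RATE = 0.10   # $ per 1M cached input tokens
OUTPUT_RATE = 3.00   # $ per 1M output tokens

def task_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate cost in dollars; cached_tokens is the cached subset of the input."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_RATE
            + cached_tokens * CACHED_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# A run that reuses 800k of its 1M input tokens from cache:
with_cache = task_cost(1_000_000, 800_000, 50_000)     # $0.33
without_cache = task_cost(1_000_000, 0, 50_000)        # $0.65
```

The cached rate ($0.10) is 80% below the uncached input rate ($0.50), which is where the "reducing input costs by 80%" figure comes from.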
As of 2025-05, GEMINI_API_KEY is deprecated; use GOOGLE_API_KEY instead.
from browser_use import Agent, ChatGoogle
from dotenv import load_dotenv
# Read GOOGLE_API_KEY into env
load_dotenv()
# Initialize the model
llm = ChatGoogle(model='gemini-flash-latest')
# Create agent with the model
agent = Agent(
task="Your task here",
llm=llm
)
GOOGLE_API_KEY=
The o3 model is recommended for best accuracy.
from browser_use import Agent, ChatOpenAI
# Initialize the model
llm = ChatOpenAI(
model="o3",
)
# Create agent with the model
agent = Agent(
task="...", # Your task here
llm=llm
)
OPENAI_API_KEY=
You can use any OpenAI-compatible model by passing the model name and a custom base_url to the ChatOpenAI class (along with any other parameter that would go into a normal OpenAI API call).
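Concretely, "OpenAI-compatible" means the provider's server accepts the standard chat-completions request shape at a `/chat/completions` route under its base URL. A minimal sketch of that payload (the base URL and model name below are placeholders, not real endpoints):

```python
import json

# Placeholder values -- substitute your provider's actual base URL and model name.
base_url = "https://your-provider.example.com/v1"
payload = {
    "model": "your-model-name",
    "messages": [
        {"role": "system", "content": "You are a browser automation agent."},
        {"role": "user", "content": "Your task here"},
    ],
    "temperature": 0.0,
}

# The request an OpenAI-compatible client POSTs on your behalf:
request_url = f"{base_url}/chat/completions"
body = json.dumps(payload)
```

Any provider that accepts this shape should work with ChatOpenAI via its base_url parameter.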
from browser_use import Agent, ChatAnthropic
# Initialize the model
llm = ChatAnthropic(
model="claude-sonnet-4-0",
)
# Create agent with the model
agent = Agent(
task="...", # Your task here
llm=llm
)
ANTHROPIC_API_KEY=
from browser_use import Agent, ChatAzureOpenAI
# Initialize the model
llm = ChatAzureOpenAI(
model="o4-mini",
)
# Create agent with the model
agent = Agent(
task="...", # Your task here
llm=llm
)
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=
AWS Bedrock provides access to multiple model providers through a single API. We support both a general AWS Bedrock client and provider-specific convenience classes.
from browser_use import Agent, ChatAWSBedrock
# Works with any Bedrock model (Anthropic, Meta, AI21, etc.)
llm = ChatAWSBedrock(
model="anthropic.claude-3-5-sonnet-20240620-v1:0", # or any Bedrock model
aws_region="us-east-1",
)
# Create agent with the model
agent = Agent(
task="Your task here",
llm=llm
)
from browser_use import Agent, ChatAnthropicBedrock
# Anthropic-specific class with Claude defaults
llm = ChatAnthropicBedrock(
model="anthropic.claude-3-5-sonnet-20240620-v1:0",
aws_region="us-east-1",
)
# Create agent with the model
agent = Agent(
task="Your task here",
llm=llm
)
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
You can also authenticate with AWS profiles or IAM roles instead of environment variables.
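The usual AWS credential chain (explicit environment variables first, then a named profile) can be sketched as follows; this illustrates the resolution order only and is not browser-use's actual implementation:

```python
def resolve_aws_credentials(env: dict, profiles: dict, profile_name: str = "default") -> dict:
    """Illustrative AWS credential resolution: explicit env vars win, then a named profile."""
    key_id = env.get("AWS_ACCESS_KEY_ID")
    secret = env.get("AWS_SECRET_ACCESS_KEY")
    if key_id and secret:
        return {"access_key_id": key_id, "secret_access_key": secret, "source": "env"}
    # Fall back to a profile (AWS_PROFILE overrides the default profile name).
    profile = profiles.get(env.get("AWS_PROFILE", profile_name))
    if profile:
        return {**profile, "source": "profile"}
    raise RuntimeError("no AWS credentials found")
```

IAM roles add a further fallback (instance metadata) after profiles, which is omitted here for brevity.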
from browser_use import Agent, ChatGroq
llm = ChatGroq(model="meta-llama/llama-4-maverick-17b-128e-instruct")
agent = Agent(
task="Your task here",
llm=llm
)
GROQ_API_KEY=
Oracle Cloud Infrastructure (OCI) provides access to generative AI models from Meta (Llama), Cohere, and other providers through its Generative AI service.
from browser_use import Agent, ChatOCIRaw
# Initialize the OCI model
llm = ChatOCIRaw(
model_id="ocid1.generativeaimodel.oc1.us-chicago-1.amaaaaaask7dceya...",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="ocid1.tenancy.oc1..aaaaaaaayeiis5uk2nuubznrekd...",
provider="meta", # or "cohere"
temperature=0.7,
max_tokens=800,
top_p=0.9,
auth_type="API_KEY",
auth_profile="DEFAULT"
)
# Create agent with the model
agent = Agent(
task="Your task here",
llm=llm
)
Install the OCI SDK with uv add oci or pip install oci.

To run models locally with Ollama:
1. Run ollama serve to start the server
2. Run ollama pull llama3.1:8b (a ~4.9 GB download)

from browser_use import Agent, ChatOllama
llm = ChatOllama(model="llama3.1:8b")
An example of how to use LangChain with Browser Use is available.
Currently, only qwen-vl-max is recommended for Browser Use. Other Qwen models, including qwen-max, have issues with the action schema format.
Smaller Qwen models may return incorrect action schema formats (e.g., actions: [{"navigate": "google.com"}] instead of [{"navigate": {"url": "google.com"}}]). If you want to use other models, add concrete examples of the correct action format to your prompt.
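To make the malformed-schema issue concrete, a small validator like the following (an illustrative sketch, not part of browser-use) distinguishes the two shapes:

```python
def is_valid_action(action: dict) -> bool:
    """A well-formed action maps exactly one action name to a dict of parameters,
    e.g. {"navigate": {"url": "google.com"}}, not {"navigate": "google.com"}."""
    if len(action) != 1:
        return False
    params = next(iter(action.values()))
    return isinstance(params, dict)

# The two shapes described above:
bad = {"navigate": "google.com"}             # parameter is a bare string
good = {"navigate": {"url": "google.com"}}   # parameters wrapped in a dict
```

If a model keeps emitting the bad shape, adding the good shape verbatim as an example in your prompt is the workaround suggested above.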
from browser_use import Agent, ChatOpenAI
from dotenv import load_dotenv
import os
load_dotenv()
# Get API key from https://modelstudio.console.alibabacloud.com/?tab=playground#/api-key
api_key = os.getenv('ALIBABA_CLOUD')
base_url = 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1'
llm = ChatOpenAI(model='qwen-vl-max', api_key=api_key, base_url=base_url)
agent = Agent(
task="Your task here",
llm=llm,
use_vision=True
)
ALIBABA_CLOUD=
from browser_use import Agent, ChatOpenAI
from dotenv import load_dotenv
import os
load_dotenv()
# Get API key from https://www.modelscope.cn/docs/model-service/API-Inference/intro
api_key = os.getenv('MODELSCOPE_API_KEY')
base_url = 'https://api-inference.modelscope.cn/v1/'
llm = ChatOpenAI(model='Qwen/Qwen2.5-VL-72B-Instruct', api_key=api_key, base_url=base_url)
agent = Agent(
task="Your task here",
llm=llm,
use_vision=True
)
MODELSCOPE_API_KEY=
We support any other model that can be called via an OpenAI-compatible API, and we are open to PRs for more providers.
Examples available: