
Browserless MCP Server

The Browserless MCP server gives AI assistants full browser automation capabilities through the Model Context Protocol. Connect Claude Desktop, Cursor, VS Code, Windsurf, or any MCP-compatible client to the hosted server and start scraping, exporting, downloading, and running custom browser code — no infrastructure required.

Prerequisites

  • A Browserless account — either an API token from your account dashboard, or OAuth sign-in
  • An MCP-compatible client (Claude Desktop, Cursor, VS Code, Windsurf, etc.)

Hosted Server

Browserless provides a hosted MCP server ready to use:

https://mcp.browserless.io/mcp

No installation or environment variables required. See Authentication for how to connect.

Authentication

The hosted server supports three authentication methods:

| Method | Best for |
|---|---|
| OAuth (Browserless account login) | Clients that support OAuth — no token needed |
| Authorization header | Clients that support custom headers |
| token query parameter | URL-only clients (e.g. Claude.ai custom connectors) |

When multiple methods are present, they are evaluated in this order: Authorization header (plain API key) → token query parameter → OAuth JWT.

OAuth

For clients that support OAuth (e.g. Claude Desktop, Cursor), the hosted server can authenticate you through your Browserless account — no API token required. When you connect, your client will open a browser window to sign in. After authenticating, the server resolves your API key automatically.

OAuth is enabled on the hosted server at https://mcp.browserless.io/mcp with no extra configuration needed.

API Token

Pass your API token as a Bearer header or query parameter:

  • Header (recommended): Authorization: Bearer your-token-here
  • Query parameter: ?token=your-token-here

Client Setup

Claude.ai supports MCP servers via custom connectors. Since the connector form only accepts a URL, pass your token as a query parameter:

  1. Go to Settings > Connectors in Claude.ai.
  2. Click Add custom connector.
  3. Enter a name (e.g., Browserless) and the following URL:
     https://mcp.browserless.io/mcp?token=your-token-here
  4. Click Add.

Replace your-token-here with your Browserless API token from the account dashboard.

Tip

Clients that support OAuth (like Claude Desktop) can connect without a token — the server will prompt you to sign in with your Browserless account.

Regional Endpoints

By default, the hosted MCP server connects to the US West (San Francisco) Browserless region. To use a different region, pass the endpoint as a header or query parameter:

| Region | Endpoint |
|---|---|
| US West — San Francisco (default) | https://production-sfo.browserless.io |
| Europe — London | https://production-lon.browserless.io |
| Europe — Amsterdam | https://production-ams.browserless.io |

Using the x-browserless-api-url header (for clients that support headers):

{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here",
        "x-browserless-api-url": "https://production-sfo.browserless.io"
      }
    }
  }
}

Using the browserlessUrl query parameter (for URL-only clients like Claude.ai):

https://mcp.browserless.io/mcp?token=your-token-here&browserlessUrl=https://production-sfo.browserless.io

Tools

The MCP server exposes four tools to your AI assistant:

browserless_smartscraper

Scrapes any webpage using cascading strategies — HTTP fetch, proxy, headless browser, and CAPTCHA solving — automatically selecting the best approach.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | | The URL to scrape (http or https) |
| formats | string[] | No | ["markdown"] | Output formats: markdown, html, screenshot, pdf, links |
| timeout | number | No | 30000 | Request timeout in milliseconds |

Output formats:

  • markdown — Page content converted to clean Markdown (default)
  • html — Raw HTML of the page
  • screenshot — Full-page screenshot as a PNG image
  • pdf — PDF rendering of the page
  • links — All links found on the page
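For illustration, a smartscraper call that asks for both Markdown and a screenshot with a longer timeout might pass arguments like the following (the values are examples, not defaults):

```json
{
  "url": "https://example.com",
  "formats": ["markdown", "screenshot"],
  "timeout": 60000
}
```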

browserless_function

Executes custom Puppeteer JavaScript code on the Browserless cloud. Your function receives a Puppeteer page object and optional context data, and returns { data, type } to control the response payload and Content-Type.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code to execute. The default export receives { page, context } and should return { data, type } |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
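As a sketch, code passed to browserless_function might look like the following. The context.url field is an assumption here (a value supplied through the optional context parameter), not something the tool provides on its own:

```javascript
// Sketch of a browserless_function handler. The default export receives
// { page, context } and returns { data, type }; "context.url" is an
// illustrative context field passed in by the caller.
export default async function scrapeTitle({ page, context }) {
  await page.goto(context.url, { waitUntil: "networkidle2" });
  const title = await page.title();
  // Collect top-level headings as a small structured payload.
  const headings = await page.evaluate(() =>
    Array.from(document.querySelectorAll("h1")).map((h) => h.textContent.trim())
  );
  // "type" controls the Content-Type of the response payload.
  return { data: { title, headings }, type: "application/json" };
}
```

Returning "application/json" keeps the result machine-readable for the assistant; returning a string with type "text/plain" would work equally well for simple answers.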

browserless_download

Runs custom Puppeteer code and returns the file that Chrome downloads during execution. Useful for downloading CSVs, PDFs, images, or any file from a website.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code that triggers a file download in the browser |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
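A minimal sketch of code for browserless_download: the page URL and the "#export-csv" selector below are assumptions for illustration, standing in for whatever element starts the download on the target site:

```javascript
// Sketch for browserless_download: navigate, then trigger the download
// that the tool captures and returns. URL and selector are illustrative.
export default async function downloadCsv({ page }) {
  await page.goto("https://example.com/report", { waitUntil: "networkidle2" });
  // Clicking the export button starts the file download in Chrome.
  await page.click("#export-csv");
  // Give Chrome time to finish writing the file before the session ends.
  await new Promise((resolve) => setTimeout(resolve, 2000));
}
```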

browserless_export

Exports a webpage by URL in its native format (HTML, PDF, image, etc.). Set includeResources to bundle all page assets into a ZIP archive for offline use.

| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to export (http or https) |
| gotoOptions | object | No | Puppeteer Page.goto() options (waitUntil, timeout, referer) |
| bestAttempt | boolean | No | When true, proceed even if awaited events fail or timeout |
| includeResources | boolean | No | Bundle all linked resources (CSS, JS, images) into a ZIP file |
| waitForTimeout | number | No | Milliseconds to wait after page load before exporting |
| timeout | number | No | Request timeout in milliseconds |
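An export call that bundles everything for offline use might pass arguments like the following (the values are illustrative, not defaults):

```json
{
  "url": "https://example.com",
  "gotoOptions": { "waitUntil": "networkidle0", "timeout": 30000 },
  "includeResources": true,
  "waitForTimeout": 2000,
  "bestAttempt": true
}
```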

Example Usage

Ask your AI assistant:

Scrape https://example.com and summarize the content.

Take a screenshot and extract all links from https://example.com.

Download the CSV export from https://example.com/report.

Export https://example.com as a full offline ZIP with all assets.

Resources

The MCP server also exposes these resources that your AI assistant can read:

| Resource | Description |
|---|---|
| browserless://api-docs | Smart Scraper API documentation and parameter reference |
| browserless://status | Live status of the Browserless API connection |

Prompt Templates

Built-in prompt templates help your AI assistant use the tools effectively:

| Prompt | Description |
|---|---|
| scrape-url | Scrape a webpage and summarize its content |
| extract-content | Extract specific information from a webpage using custom instructions |

Further Reading