Session Reconnect
Reconnects let you reuse a running browser session across multiple BrowserQL queries. Instead of launching a new browser for every request, you keep the same instance alive and send additional queries to it. The session preserves cookies, cache, and all page state between requests, which cuts proxy bandwidth, eliminates redundant page loads, and reduces the chance of triggering bot detection.
How Reconnects Work
The reconnect mutation returns a browserQLEndpoint URL. You send your next query to that URL to reuse the same browser. Each reconnect also resets the session's idle timer, so the browser stays alive as long as you keep reconnecting before the timeout expires.
```graphql
reconnect(timeout: 30000) {
  browserQLEndpoint
}
```
The timeout value is in milliseconds. The response includes the endpoint URL for your next query.
Response
```json
{
  "data": {
    "reconnect": {
      "browserQLEndpoint": "https://production-sfo.browserless.io/e/53.../chromium/bql/05..."
    }
  }
}
```
Start a Session
Send your first query to open a browser session. Include the reconnect mutation to get a reusable endpoint URL.
```javascript
import fetch from 'node-fetch';

const API_KEY = "YOUR_API_TOKEN";
const BQL_ENDPOINT = `https://production-sfo.browserless.io/chromium/bql?token=${API_KEY}`;

const sessionQuery = `
mutation StartSession {
  goto(url: "https://example.com", waitUntil: networkIdle) {
    status
  }
  reconnect(timeout: 30000) {
    browserQLEndpoint
  }
}`;

async function startSession() {
  const response = await fetch(BQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: sessionQuery }),
  });
  const data = await response.json();
  const reconnectUrl = data.data.reconnect.browserQLEndpoint;
  console.log("Reconnect URL:", reconnectUrl);
  return reconnectUrl;
}

startSession();
```
Use the Reconnect URL
Send your next query to the reconnect URL instead of the original endpoint.
The browserQLEndpoint returned by the reconnect mutation does not include your API token. Append ?token=YOUR_API_TOKEN to the reconnect URL before sending subsequent requests.
```javascript
import fetch from 'node-fetch';

const RECONNECT_BQL_ENDPOINT = "YOUR_RECONNECT_BQL_ENDPOINT" + "?token=YOUR_API_TOKEN";

const scrapeQuery = `
mutation FetchData {
  text(selector: ".product-title") {
    text
  }
}`;

async function fetchData() {
  const response = await fetch(RECONNECT_BQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: scrapeQuery }),
  });
  const data = await response.json();
  console.log("Fetched Data:", data.data.text.text);
}

fetchData();
```
Full Example
This example starts a session, scrapes multiple pages, and periodically reconnects to extend the session timeout. Each call to reconnect resets the idle timer, so you can keep the browser alive indefinitely by reconnecting before the timeout expires.
- Javascript
- Python
- Java
- C#
```javascript
import fetch from 'node-fetch';

const API_KEY = "YOUR_API_TOKEN";
const BQL_ENDPOINT = `https://production-sfo.browserless.io/chromium/bql?token=${API_KEY}`;

const sessionQuery = `
mutation StartSession {
  goto(url: "https://example.com", waitUntil: networkIdle) {
    status
  }
  reconnect(timeout: 30000) {
    browserQLEndpoint
  }
}`;

async function startSession() {
  const response = await fetch(BQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: sessionQuery }),
  });
  const data = await response.json();
  return data.data.reconnect.browserQLEndpoint;
}

async function fetchData(reconnectUrl) {
  const scrapeQuery = `
  mutation FetchData {
    text(selector: ".product-title") {
      text
    }
  }`;
  const response = await fetch(reconnectUrl + "?token=" + API_KEY, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: scrapeQuery }),
  });
  const data = await response.json();
  console.log("Fetched Data:", data.data.text.text);
}

// Reconnect before the timeout to extend the session
async function refreshSession(reconnectUrl) {
  const refreshQuery = `
  mutation RefreshSession {
    reconnect(timeout: 30000) {
      browserQLEndpoint
    }
  }`;
  const response = await fetch(reconnectUrl + "?token=" + API_KEY, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: refreshQuery }),
  });
  const data = await response.json();
  return data.data.reconnect.browserQLEndpoint;
}

(async () => {
  let reconnectUrl = await startSession();
  let pagesScraped = 0;
  const REFRESH_INTERVAL = 20;
  for (let i = 0; i < 100; i++) {
    // Reconnect periodically to reset the idle timer
    if (pagesScraped >= REFRESH_INTERVAL) {
      reconnectUrl = await refreshSession(reconnectUrl);
      pagesScraped = 0;
    }
    await fetchData(reconnectUrl);
    console.log(`Scraped page ${i + 1}`);
    pagesScraped++;
  }
})();
```
```python
import requests

API_KEY = "YOUR_API_TOKEN"
BQL_ENDPOINT = "https://production-sfo.browserless.io/chromium/bql"

session_query = """
mutation StartSession {
  goto(url: "https://example.com", waitUntil: networkIdle) {
    status
  }
  reconnect(timeout: 30000) {
    browserQLEndpoint
  }
}
"""

scrape_query = """
mutation FetchData {
  text(selector: ".product-title") {
    text
  }
}
"""

refresh_query = """
mutation RefreshSession {
  reconnect(timeout: 30000) {
    browserQLEndpoint
  }
}
"""

def start_session():
    headers = {"Content-Type": "application/json"}
    response = requests.post(
        f"{BQL_ENDPOINT}?token={API_KEY}",
        json={"query": session_query},
        headers=headers,
    )
    response.raise_for_status()
    return response.json()["data"]["reconnect"]["browserQLEndpoint"]

def fetch_data(reconnect_url):
    headers = {"Content-Type": "application/json"}
    response = requests.post(
        f"{reconnect_url}?token={API_KEY}",
        json={"query": scrape_query},
        headers=headers,
    )
    response.raise_for_status()
    data = response.json()["data"]["text"]["text"]
    print("Fetched Data:", data)
    return data

# Reconnect before the timeout to extend the session
def refresh_session(reconnect_url):
    headers = {"Content-Type": "application/json"}
    response = requests.post(
        f"{reconnect_url}?token={API_KEY}",
        json={"query": refresh_query},
        headers=headers,
    )
    response.raise_for_status()
    return response.json()["data"]["reconnect"]["browserQLEndpoint"]

def main():
    reconnect_url = start_session()
    pages_scraped = 0
    REFRESH_INTERVAL = 20
    for i in range(100):
        # Reconnect periodically to reset the idle timer
        if pages_scraped >= REFRESH_INTERVAL:
            reconnect_url = refresh_session(reconnect_url)
            pages_scraped = 0
        fetch_data(reconnect_url)
        print(f"Scraped page {i + 1}")
        pages_scraped += 1

if __name__ == "__main__":
    main()
```
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BrowserlessScraper {
    static final String API_KEY = "YOUR_API_TOKEN";
    static final String BQL_ENDPOINT = "https://production-sfo.browserless.io/chromium/bql";
    static final ObjectMapper mapper = new ObjectMapper();
    static final HttpClient client = HttpClient.newHttpClient();

    static String executeQuery(String url, String query) throws Exception {
        String payload = mapper.writeValueAsString(new Query(query));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    static String startSession() throws Exception {
        String sessionQuery = """
                mutation StartSession {
                  goto(url: "https://example.com", waitUntil: networkIdle) { status }
                  reconnect(timeout: 30000) { browserQLEndpoint }
                }""";
        String json = executeQuery(BQL_ENDPOINT + "?token=" + API_KEY, sessionQuery);
        return mapper.readTree(json).path("data").path("reconnect").path("browserQLEndpoint").asText();
    }

    static String fetchData(String reconnectUrl) throws Exception {
        String scrapeQuery = """
                mutation FetchData {
                  text(selector: ".product-title") { text }
                }""";
        String json = executeQuery(reconnectUrl + "?token=" + API_KEY, scrapeQuery);
        return mapper.readTree(json).path("data").path("text").path("text").asText();
    }

    // Reconnect before the timeout to extend the session
    static String refreshSession(String reconnectUrl) throws Exception {
        String refreshQuery = """
                mutation RefreshSession {
                  reconnect(timeout: 30000) { browserQLEndpoint }
                }""";
        String json = executeQuery(reconnectUrl + "?token=" + API_KEY, refreshQuery);
        return mapper.readTree(json).path("data").path("reconnect").path("browserQLEndpoint").asText();
    }

    public static void main(String[] args) throws Exception {
        String reconnectUrl = startSession();
        int pagesScraped = 0;
        final int REFRESH_INTERVAL = 20;
        for (int i = 0; i < 100; i++) {
            // Reconnect periodically to reset the idle timer
            if (pagesScraped >= REFRESH_INTERVAL) {
                reconnectUrl = refreshSession(reconnectUrl);
                pagesScraped = 0;
            }
            String data = fetchData(reconnectUrl);
            System.out.printf("Scraped page %d: %s%n", i + 1, data);
            pagesScraped++;
        }
    }

    static class Query {
        public String query;
        Query(String q) { this.query = q; }
    }
}
```
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class Program
{
    const string ApiKey = "YOUR_API_TOKEN";
    const string BqlEndpoint = "https://production-sfo.browserless.io/chromium/bql";

    const string sessionQuery = @"
        mutation StartSession {
          goto(url: ""https://example.com"", waitUntil: networkIdle) { status }
          reconnect(timeout: 30000) { browserQLEndpoint }
        }";

    const string scrapeQuery = @"
        mutation FetchData {
          text(selector: "".product-title"") { text }
        }";

    const string refreshQuery = @"
        mutation RefreshSession {
          reconnect(timeout: 30000) { browserQLEndpoint }
        }";

    static async Task<string> ExecuteQuery(string url, string query)
    {
        using var client = new HttpClient();
        var content = new StringContent(
            JsonSerializer.Serialize(new { query }),
            Encoding.UTF8,
            "application/json"
        );
        var response = await client.PostAsync(url, content);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    static async Task Main()
    {
        var json = JsonDocument.Parse(
            await ExecuteQuery($"{BqlEndpoint}?token={ApiKey}", sessionQuery)
        );
        var reconnectUrl = json.RootElement
            .GetProperty("data")
            .GetProperty("reconnect")
            .GetProperty("browserQLEndpoint")
            .GetString();

        int pagesScraped = 0;
        const int REFRESH_INTERVAL = 20;
        for (int i = 0; i < 100; i++)
        {
            // Reconnect periodically to reset the idle timer
            if (pagesScraped >= REFRESH_INTERVAL)
            {
                json = JsonDocument.Parse(
                    await ExecuteQuery($"{reconnectUrl}?token={ApiKey}", refreshQuery)
                );
                reconnectUrl = json.RootElement
                    .GetProperty("data")
                    .GetProperty("reconnect")
                    .GetProperty("browserQLEndpoint")
                    .GetString();
                pagesScraped = 0;
            }

            json = JsonDocument.Parse(
                await ExecuteQuery($"{reconnectUrl}?token={ApiKey}", scrapeQuery)
            );
            var data = json.RootElement
                .GetProperty("data")
                .GetProperty("text")
                .GetProperty("text")
                .GetString();
            Console.WriteLine($"Scraped page {i + 1}: {data}");
            pagesScraped++;
        }
    }
}
```
Reconnect Timeout Limits
The timeout parameter in the reconnect mutation controls how long the browser stays alive waiting for a reconnect. The maximum allowed value depends on your plan:
| Plan | Maximum Reconnection TTL |
|---|---|
| Free | 10 seconds (10,000ms) |
| Prototyping (20k) | 30 seconds (30,000ms) |
| Starter (180k) | 60 seconds (60,000ms) |
| Scale (500k) and above | 5 minutes (300,000ms) |
| Enterprise (self-hosted) | Custom |
Passing a timeout above your plan's limit returns an immediate error:
"Reconnect timeout (Xms) exceeds the maximum allowed limit (Yms)."
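Rather than relying on the server to reject an oversized value, you can clamp the timeout client-side and surface GraphQL errors explicitly. A minimal sketch, assuming standard GraphQL error semantics (failures arrive in an `errors` array in the response body rather than as an HTTP error); `PLAN_LIMIT_MS` is a placeholder for your plan's limit from the table above:

```javascript
// Your plan's maximum reconnect TTL, in milliseconds (assumption: Prototyping plan).
const PLAN_LIMIT_MS = 30000;

// Cap a requested reconnect timeout at the plan limit before sending the mutation.
function clampReconnectTimeout(requestedMs, limitMs = PLAN_LIMIT_MS) {
  return Math.min(requestedMs, limitMs);
}

// Extract the reconnect endpoint from a parsed response body,
// throwing if the server reported an error (e.g. the timeout-limit message).
function reconnectEndpointOrThrow(body) {
  if (body.errors && body.errors.length > 0) {
    throw new Error(body.errors[0].message);
  }
  return body.data.reconnect.browserQLEndpoint;
}
```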
The reconnect timeout is not the same as your session timeout. Session timeouts control how long a BrowserQL session runs in total. The reconnect timeout only controls how long the browser waits idle between disconnects.
Reconnect to Puppeteer or Playwright
You can attach Puppeteer or Playwright to a running BQL session. The reconnect mutation returns a browserWSEndpoint alongside the browserQLEndpoint. Pass that URL to puppeteer.connect() or playwright.chromium.connectOverCDP() to attach to the same browser instance. All session state (cookies, localStorage, navigation history) carries over.
This is useful when part of your workflow is easier to express with a CDP-based library and part of it with BQL. You can switch between them without restarting the browser.
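As a sketch, attaching Puppeteer might look like the following. This assumes the `puppeteer-core` package is installed, and that the returned `browserWSEndpoint`, like the `browserQLEndpoint`, needs your API token appended before use:

```javascript
// Append the API token to a returned endpoint
// (assumption: the WebSocket endpoint omits the token, as browserQLEndpoint does).
function withToken(endpoint, token) {
  const url = new URL(endpoint);
  url.searchParams.set("token", token);
  return url.toString();
}

// Attach Puppeteer to the running BQL browser via the browserWSEndpoint
// returned by the reconnect mutation.
async function attachPuppeteer(browserWSEndpoint, token) {
  const { default: puppeteer } = await import("puppeteer-core");
  const browser = await puppeteer.connect({
    browserWSEndpoint: withToken(browserWSEndpoint, token),
  });
  // Existing tabs, cookies, and storage from the BQL session are available here.
  const [page] = await browser.pages();
  return { browser, page };
}
```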
See Puppeteer & Playwright for a full guide.