Website monitoring for agencies — beyond uptime into forms, checkout, and deploys

Uptime tools tell you the page loads. They can't tell you why it went down or that the contact form silently broke. Burrow monitors uptime alongside form submissions, checkout health, deploys, and CMS changes — the operational context that HTTP status codes miss.

[ How it works ]

  1. Connect monitoring alongside your existing tools

    Add Oh Dear, UptimeRobot, or any monitoring integration to each client's Burrow project. Monitoring signals join the same timeline as GitHub deploys, WordPress events, Shopify orders, and Stripe billing. No new monitoring tool required — Burrow layers context on top of what you already use.

  2. See health signals in operational context

    An uptime alert in isolation tells you the site is down. The same alert next to a GitHub deploy event from 10 minutes earlier tells you why. Burrow's per-client timeline shows monitoring, deploys, CMS changes, and form health side by side — the context that turns “it’s down” into “here’s what happened.”

  3. Detect cascading failures across the stack

    The site is up but the checkout is broken. The page loads but the contact form stopped delivering. Uptime tools miss these. Burrow captures form submission events, commerce checkout signals, and CMS activity — the operational health signals that HTTP status codes cannot reveal.

  4. Report on reliability with real incident data

    Monthly digests include uptime percentage, incident count, mean time to resolution, and the operational context around each incident. Client reviews cover reliability as part of the retainer story — not a separate report from a separate tool.

The alert says the site is down. It doesn’t say why.

3:12am. Your phone buzzes. Oh Dear alert: “Client X — site is down. HTTP 503.”

You check the status page. Confirmed down. You SSH into the server or open the hosting panel. Something broke. But what?

Was it the deploy your developer pushed at 11pm? Was it a WordPress auto-update that ran overnight? Did the hosting provider do maintenance? Is it a traffic spike from the email campaign that went out at midnight? The monitoring alert gives you one data point — “down.” The cause requires archaeology across 3-4 other tools.

20 minutes later — you’ve checked GitHub for recent commits, ManageWP for plugin updates, the hosting panel for server events, and Stripe for any billing-related changes. The issue was a WordPress auto-update that conflicted with the caching plugin. Fix time: 5 minutes. Investigation time: 20 minutes.

That 20-minute investigation is the cost of monitoring without context.

What uptime tools see vs. what’s actually happening

Every monitoring tool — Oh Dear, UptimeRobot, Pingdom, StatusCake — does fundamentally the same thing: send an HTTP request to the URL and check the response code.

200 = up. Anything else = down.
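That check is easy to reproduce yourself. A minimal sketch in Python — the timeout value is arbitrary, and real monitoring tools add retries, multiple regions, and SSL checks on top:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify(status_code: int) -> str:
    """The binary every uptime checker reduces to: 200 means up."""
    return "up" if status_code == 200 else "down"

def check(url: str, timeout: float = 10.0) -> str:
    """Fetch the URL and classify the response code."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except HTTPError as e:   # non-2xx responses raise; e.code holds the status
        return classify(e.code)
    except URLError:         # DNS failure, refused connection, timeout
        return "down"
```

Note what this sketch can never see: whether the form handler behind that 200 actually sent the email.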

That binary is useful but shallow. It misses entire categories of operational failure:

The page loads but the form is broken. Contact form displays “Thank you for your submission” — but the handler throws a PHP error and no email is ever sent. HTTP 200. Uptime: 100%. Leads lost: all of them.

The page loads but checkout is failing. Shopify returns the product page just fine. The cart works. But the checkout throws a 500 error on mobile only after a theme update. Desktop monitoring shows green. Mobile conversions crater for two days.

The page loads but the CDN is serving stale content. The site is technically “up.” But the homepage is showing content from 3 days ago. The product page prices are wrong. The banner promoting last week’s sale is still there.

The page loads but API calls are timing out. The WordPress site renders the template, but the external API that powers the pricing calculator or the store locator silently fails. The page shows fallback content. Nobody notices for a week.

Uptime monitoring catches the first category — is the page reachable? It misses everything else. And “everything else” is where most agency client incidents actually live.

Monitoring + context: how Burrow changes the investigation

Burrow doesn’t replace your monitoring tool. It places monitoring signals in the same timeline as every other operational event for that client.

When the 3:12am alert fires, the client’s Burrow timeline shows:

11:47pm  — deploy.succeeded (GitHub: merged PR #89, theme update)
11:52pm  — plugin.updated (WordPress: advanced-cache 2.1.0 → 2.2.0)
 3:12am  — uptime.down (Oh Dear: HTTP 503)
 3:14am  — form.submitted volume → 0 (Scout anomaly flag)

The investigation that took 20 minutes across 4 tools takes 30 seconds in one timeline. The deploy at 11:47pm. The plugin update 5 minutes later. The crash 3 hours later when the caching conflict hit under load. The form submissions confirming the site was non-functional.
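That “look backwards from the alert” step is conceptually simple — a scan over one sorted event list. A sketch of the idea, using illustrative event shapes and a made-up 6-hour window (not Burrow's actual schema or logic):

```python
from datetime import datetime, timedelta

# Illustrative events mirroring the timeline above (not Burrow's real schema).
events = [
    {"at": datetime(2024, 3, 6, 23, 47), "type": "deploy.succeeded"},
    {"at": datetime(2024, 3, 6, 23, 52), "type": "plugin.updated"},
    {"at": datetime(2024, 3, 7, 3, 12),  "type": "uptime.down"},
]

def likely_causes(events, alert_type="uptime.down", window_hours=6):
    """Return change events that preceded the alert within the window."""
    alert = next(e for e in events if e["type"] == alert_type)
    window = timedelta(hours=window_hours)
    return [
        e["type"] for e in events
        if e is not alert and timedelta(0) <= alert["at"] - e["at"] <= window
    ]

print(likely_causes(events))  # → ['deploy.succeeded', 'plugin.updated']
```

The point is not the algorithm — it is that the scan only works when deploys, plugin updates, and monitoring alerts live in the same list instead of four separate tools.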

Cause identified. Fix the caching conflict. Restore service. The client’s Monday morning digest shows: “Incident: 3:12am downtime, 18-minute duration, caused by caching plugin conflict after deploy. Resolved 3:30am. Uptime for the month: 99.96%.”

Three monitoring scenarios Burrow makes visible

Scenario 1: Post-deploy failure

The pattern: Developer deploys at 5pm. Site crashes at 7pm when traffic increases. Monitoring tool alerts on the crash. Without context, the on-call engineer starts investigating from scratch — server logs, error logs, recent changes. With Burrow, the deploy event and the crash event are in the same timeline. Investigation starts at the most likely cause.

Time saved: 15-25 minutes per incident. Over a quarter with 4-6 incidents across a portfolio, that’s 1-2.5 hours of faster resolution — plus the client trust earned by resolving issues before they escalate.

Scenario 2: The silent checkout failure

The pattern: Shopify checkout breaks on mobile after a theme update. The site is “up.” Uptime monitoring shows green. But Shopify checkout error signals spike in Burrow. Scout flags the anomaly. The developer investigates the checkout flow specifically instead of assuming everything is fine because the page loads.

Impact: Revenue loss measured in hours instead of days. The checkout issue that would have gone unnoticed for a weekend gets caught the same afternoon.

Scenario 3: The multi-site infrastructure event

The pattern: Three clients share a hosting provider. The provider does unannounced maintenance at 2am. All three sites go down simultaneously. Monitoring tools send 3 separate alerts. Your team opens 3 separate investigations.

In Burrow, the uptime signals across all three client projects are visible in the same dashboard. The simultaneous timing makes the shared infrastructure cause obvious. One investigation instead of three. One resolution path instead of three parallel troubleshooting sessions.
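Spotting that shared cause is a group-by-time problem. A rough sketch — the project names and the 5-minute window are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical downtime alerts from three client projects.
alerts = [
    ("client-a", datetime(2024, 3, 19, 2, 1)),
    ("client-b", datetime(2024, 3, 19, 2, 2)),
    ("client-c", datetime(2024, 3, 19, 2, 3)),
]

def correlated(alerts, window=timedelta(minutes=5)):
    """Group alerts that fired within one window of the earliest alert.
    Several projects down together suggests shared infrastructure."""
    first = min(t for _, t in alerts)
    return sorted(name for name, t in alerts if t - first <= window)

group = correlated(alerts)
if len(group) > 1:
    print("likely shared-infrastructure incident:", group)
```

Three alerts within two minutes collapse into one incident — which is exactly the judgment an on-call engineer makes by eye when the alerts sit in one dashboard.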

Reporting on reliability

Reliability is part of the retainer story. When the quarterly review happens, the client wants to know: “How available was our site? What incidents happened? How fast did you respond?”

Without centralized monitoring context, answering those questions means digging through Oh Dear’s incident history, cross-referencing with deploy dates, and manually calculating uptime percentages. For 30+ clients, that’s a reporting exercise in itself.

Burrow’s automated digests include reliability data as a standard section:

Uptime: 99.95% (two incidents this month)

Incident 1: March 7, 3:12am — 18 minutes downtime. Cause: caching plugin conflict after v2.3 deploy. Resolved by rolling back the caching configuration. No client-facing impact (off-peak hours).

Incident 2: March 19, 2:45pm — 4 minutes downtime. Cause: hosting provider network maintenance. Resolved automatically. Form submissions resumed within 5 minutes.

That reliability narrative is compiled from real events — monitoring signals, deploy events, and form health data — without manual assembly.
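The uptime figure itself is just arithmetic over incident durations. A sketch using the two incidents above, assuming a 31-day March — the helper name is ours, not a Burrow API:

```python
def uptime_percent(downtime_minutes, days_in_month=31):
    """Uptime % = (total minutes - downtime minutes) / total minutes."""
    total = days_in_month * 24 * 60
    return round(100 * (total - sum(downtime_minutes)) / total, 2)

# Incident 1: 18 minutes; Incident 2: 4 minutes.
print(uptime_percent([18, 4]))  # → 99.95
print(uptime_percent([18]))     # → 99.96 (a single 18-minute incident)
```

The hard part of the digest is not this division — it is having the incident durations and their causes already recorded so nobody reconstructs them by hand at review time.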

Getting started with monitoring context

You don’t need to change your monitoring setup. Keep Oh Dear, UptimeRobot, or whatever tool you use today.

  1. Connect Oh Dear to the client’s Burrow project (native integration) — or send monitoring webhooks from your existing tool through the Burrow API
  2. Ensure GitHub deploys and CMS plugins are also connected to the same project
  3. The first time an incident coincides with a deploy, the value is immediately obvious
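For tools without a native integration, step 1 amounts to reshaping your tool's webhook payload into an event you post to the Burrow API. A hypothetical transform — both the incoming field names and the outgoing Burrow event shape below are assumptions for illustration, not documented schemas:

```python
import json

def to_burrow_event(payload: dict) -> dict:
    """Map a generic monitoring webhook into a hypothetical Burrow event.
    Field names on both sides are illustrative, not a documented schema."""
    down = payload.get("alert_type") == "down"
    return {
        "type": "uptime.down" if down else "uptime.up",
        "source": payload.get("monitor_name", "unknown"),
        "detail": payload.get("reason", ""),
    }

incoming = {"alert_type": "down", "monitor_name": "Client X", "reason": "HTTP 503"}
print(json.dumps(to_burrow_event(incoming)))
```

In practice this would live in a small webhook receiver that POSTs the resulting event to the client's Burrow project; check the Burrow API documentation for the actual event format.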

Monitoring without context is a noise machine. Monitoring with context is an incident response accelerator.

Oh Dear integration | Form monitoring use case | Agency operations | Compare with WP Umbrella

Frequently asked questions

Does Burrow replace my uptime monitoring tool?
No. Burrow complements monitoring tools like Oh Dear, UptimeRobot, and Pingdom. Those tools do the monitoring — pinging URLs, checking response codes, alerting on downtime. Burrow adds context by placing monitoring signals in the same timeline as deploys, CMS changes, form health, and commerce events. Keep your monitoring tool. Use Burrow for the story around the alerts.
What monitoring integrations does Burrow support?
Burrow has a native integration with Oh Dear. For other monitoring tools, send events through the Burrow API or webhooks. Any system that can emit webhook payloads — UptimeRobot, Pingdom, StatusCake, custom scripts — can feed monitoring signals into a client project.
How does Burrow detect issues that uptime tools miss?
Uptime tools check if the page returns HTTP 200. A page can be 'up' while forms silently fail, checkout flows break, or CMS functions error out. Burrow captures form submission events, commerce checkout signals, and CMS activity from inside the applications. When form volume drops to zero but the page is still loading, Burrow sees the operational failure that uptime tools cannot.
Can Burrow correlate downtime with specific deploys?
Yes. When a GitHub deploy event and an uptime-down event appear in the same project timeline within minutes of each other, the correlation is visible immediately. Your team doesn't need to cross-reference Git logs with monitoring dashboards — both signals are side by side in one view.
What about multi-site monitoring across a portfolio?
Each client project in Burrow has its own monitoring signals. Your agency dashboard shows all client projects with their latest status. When multiple sites share infrastructure (same hosting provider, same CDN), monitoring events across affected projects become visible simultaneously.
How does monitoring data appear in client reports?
Monthly digests automatically include: uptime percentage for the reporting period, number of incidents detected, mean time to resolution, and a brief description of each incident with its operational context (what deploy or change preceded it, how fast it was resolved). Clients see reliability as part of the overall retainer narrative.
Does Burrow do synthetic monitoring or real-user monitoring?
Burrow is not a monitoring tool. It's the context layer that makes monitoring data actionable. For synthetic monitoring (testing endpoints, form submission tests), use Oh Dear or a similar tool. For real-user monitoring, use your analytics platform. Burrow unifies those signals with everything else in the client timeline.
What is the best way to monitor agency client sites?
Use a dedicated monitoring tool (Oh Dear, UptimeRobot) for uptime and SSL checks. Layer Burrow on top for operational context — correlating monitoring alerts with deploys, CMS changes, form health, and commerce activity. The monitoring tool tells you something is wrong. Burrow helps you understand why and report on it.

Your agency's work deserves to be seen.

We're onboarding agencies in small cohorts to keep the quality high. Request early access and we'll be in touch.

Self-funded · Independent · Built for the long term