Client-Side · No Server Required · Free Forever

Hunt down every
broken link on your site

LinkRadar crawls your website from your browser: no backend, no tracking, no installs. Get a full report of every URL, redirect, and dead link in seconds.

100% Client-Side · No Data Leaves Your Browser · MIT Licensed
linkradar — scan terminal
$
Max depth:
⚠ CORS Note: Because LinkRadar runs entirely in your browser, cross-origin requests may be blocked by the target server's CORS policy. Same-origin and many public sites work fine. For restricted sites, consider running LinkRadar from a local dev server on the same domain, or use a CORS proxy. No data ever leaves your machine.
live output · 0 URLs checked
0
Total URLs
0
OK (2xx)
0
Redirects
0
Broken
0
Skipped
Status URL Source Time Depth
Everything a dev needs. Nothing they don't.
100% Client-Side
Runs entirely in your browser using the Fetch API. No backend, no proxy, no account required. Your scanned URLs stay on your machine.
Real-Time Progress
A high-contrast top progress bar and a live console log let you watch every URL get checked as it happens, so you know the scanner is alive.
Source Context ("Fix-it")
Every broken link shows exactly which page it was found on, the HTML element type and the anchor text, so you can fix it without hunting.
Status Badges
Color-coded rounded badges for every HTTP status class: green for 2xx, orange for 3xx redirects, red for 4xx/5xx broken, grey for failures.
Filter & Search
Instantly filter by status class and search by URL text. Sort any column ascending or descending. Paginated for large sites.
Export CSV & JSON
Download the full report or the current filtered view as CSV for spreadsheets or JSON for pipelines, CI integrations or further tooling.
Configurable Depth
Control how deep the crawler follows internal links, from a quick top-level audit to a full recursive multi-level crawl of your entire site.
Images & Assets
Optionally scan <img>, <script> and <link> tags in addition to anchors, catching broken images and missing stylesheets too.
How LinkRadar works under the hood
1
Fetch & Parse Entry Point
LinkRadar fetches the provided URL via the browser's native fetch() API with mode: 'cors'. Pages to be crawled need their HTML body, so they are fetched with a plain GET; the response is then parsed with DOMParser to extract all anchor, image, script and link elements.
fetch(url, { mode: 'cors', redirect: 'follow' })
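The extraction step can be sketched like this (helper name and details are illustrative, not LinkRadar's actual code): after DOMParser yields a document, each raw href/src is resolved against the page URL, and non-HTTP schemes are dropped before queueing.

```javascript
// In the browser, the fetched HTML becomes a queryable document:
//   const doc = new DOMParser().parseFromString(html, 'text/html');
//   const links = doc.querySelectorAll('a[href], img[src], script[src], link[href]');
// Each raw href/src then needs resolving against the page it came from.
// resolveLink is a hypothetical helper, used only for this sketch.
function resolveLink(raw, baseUrl) {
  let u;
  try {
    u = new URL(raw, baseUrl);     // resolves relative paths like "/docs"
  } catch {
    return null;                   // malformed href: skip it
  }
  if (u.protocol !== 'http:' && u.protocol !== 'https:') {
    return null;                   // mailto:, javascript:, tel: are not crawlable
  }
  u.hash = '';                     // "#section" points at the same resource
  return u.href;
}
```

Stripping the fragment here also does the dedup step a favor: `/page` and `/page#top` collapse into one queue entry.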
2
BFS Queue with Deduplication
URLs are processed using a breadth-first queue with a Set-based visited registry. Already-seen URLs are never re-fetched, preventing infinite loops and duplicate reports.
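A minimal sketch of that queue (names are illustrative; the real scanner.js may differ):

```javascript
// FIFO queue plus Set registry: each URL is fetched at most once.
// makeQueue is a hypothetical helper used only for this sketch.
function makeQueue(rootUrl) {
  const visited = new Set([rootUrl]);
  const items = [{ url: rootUrl, depth: 0 }];
  return {
    // Returns false (and enqueues nothing) for URLs seen before,
    // which is what breaks cycles like A -> B -> A.
    enqueue(url, depth) {
      if (visited.has(url)) return false;
      visited.add(url);
      items.push({ url, depth });
      return true;
    },
    dequeue: () => items.shift(),  // shift() gives breadth-first order
    get size() { return items.length; },
  };
}
```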
3
Concurrent HEAD Requests
Outbound links are checked with HEAD requests in configurable concurrency batches. Only same-origin pages are crawled for sub-links; external URLs are checked for status only.
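The batching itself is simple to sketch. Both helpers below are assumptions for illustration, not the real implementation; checkUrl pairs each HEAD probe with a wall-clock timing.

```javascript
// Run `worker` over `items`, at most `limit` in flight per batch.
async function inBatches(items, limit, worker) {
  const out = [];
  for (let i = 0; i < items.length; i += limit) {
    const batch = items.slice(i, i + limit);
    out.push(...await Promise.all(batch.map(worker)));
  }
  return out;
}

// Hypothetical checkUrl: one HEAD probe plus a timing measurement.
// A network error (DNS failure, CORS block) is reported as status 0.
async function checkUrl(url) {
  const t0 = Date.now();
  try {
    const res = await fetch(url, { method: 'HEAD', mode: 'cors', redirect: 'follow' });
    return { url, status: res.status, ms: Date.now() - t0 };
  } catch (err) {
    return { url, status: 0, error: String(err), ms: Date.now() - t0 };
  }
}
```

Capping in-flight requests keeps a large scan from flooding either the browser's connection pool or the target server.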
4
Source Attribution
Every URL is tagged with its parent page, HTML element type (<a>, <img>, etc.), anchor text and crawl depth, forming the "fix-it" context shown in the report.
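The per-link record could look something like this (field names are assumptions for illustration, not LinkRadar's exact schema):

```javascript
// Build the "fix-it" record for one discovered link.
// `link` is what extraction produced; `parent` is the page being crawled.
function makeRecord(link, parent) {
  return {
    url: link.href,
    source: parent.url,            // page the link was found on
    tag: link.tagName,             // 'a', 'img', 'script', 'link'
    text: link.text || '',         // anchor text, or alt text for images
    depth: parent.depth + 1,       // how many hops from the entry point
  };
}
```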
5
Live Reporting
Results are streamed into the DOM in real time. The progress bar updates per batch. The final table is built once scanning is complete and supports client-side filtering, sorting and export.
scanner.js, pseudocode
// Entry point
async function scan(rootUrl, opts) {
  // Breadth-first queue
  const queue = [{ url: rootUrl, depth: 0 }]
  const visited = new Set()
  const results = []

  while (queue.length > 0) {
    // Process in concurrency batches (opts.concurrency, e.g. 8)
    const batch = queue.splice(0, opts.concurrency)
    await Promise.all(batch.map(async item => {
      if (visited.has(item.url)) return
      visited.add(item.url)

      const result = await checkUrl(item.url)
      results.push(result)

      // Only crawl same-origin pages
      if (isSameOrigin(item.url) &&
          item.depth < opts.maxDepth) {
        const links = await extractLinks(item.url)
        queue.push(...links.map(l => ({
          url: l.href,
          depth: item.depth + 1,
          source: item.url,
          tag: l.tagName,
          text: l.innerText
        })))
      }
      updateUI(results)
    }))
  }
  return results
}
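The `results` array built above is also what feeds the CSV/JSON export. A minimal CSV serializer might look like this (column names assumed, not LinkRadar's exact export format):

```javascript
// Quote every field and double embedded quotes, per RFC 4180.
function toCsv(results) {
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`;
  const columns = ['status', 'url', 'source', 'ms', 'depth'];
  const rows = results.map(r => columns.map(k => esc(r[k])).join(','));
  return [columns.join(','), ...rows].join('\n');
}
```

The JSON export needs no serializer at all: `JSON.stringify(results, null, 2)` handed to a Blob download covers it.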
Built by devs, for devs, for free
The problem we solved
Every broken link tool we tried either was a paid SaaS, required a Chrome extension install, sent your URLs to a third-party server, or was just a clunky CLI. We wanted a zero-friction, zero-trust solution.
Privacy is not a feature, it's the default
Your site's URL structure, internal architecture and broken pages can be sensitive. By running fully in the browser, LinkRadar ensures zero data exfiltration by design, not policy.
Open source forever
Single-file HTML with vanilla JS. Fork it, vendor it in your CI pipeline, self-host it, customize it. No node_modules, no build step, no lock-in. If you can read HTML, you can audit every line.
The "fix-it" context was missing everywhere
Other tools tell you a URL is broken but not which page or which element contains it. That last mile of information is what lets you actually fix it without spending 10 minutes hunting the source.
Typical scan output
404 Not Found
/blog/post-about-launch
Found on: /index.html · <a> · "Read our launch post"
301 Moved Permanently
https://old-cdn.example.com/logo.png
Found on: /about · <img> · alt="Company Logo"
200 OK
https://docs.example.com/api
Found on: /docs · <a> · "API Reference" · 142ms
503 Service Unavailable
https://api.external-service.io/v2
Found on: /integrations · <a> · "Connect"