LinkRadar crawls your website entirely from your browser: no backend, no tracking, no installs. Get a full report of every URL, redirect, and dead link in seconds.
| Status | URL | Source | Time | Depth |
|---|---|---|---|---|
The crawler inspects `<img>`, `<script>`, and `<link>` tags in addition to anchors, catching broken
images and missing stylesheets too.
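As a sketch of how those non-anchor tags might be gathered, each tag type maps to the attribute that carries its URL. The attribute map and helper below are illustrative, not LinkRadar's actual internals:

```js
// Which attribute holds the target URL for each tag type (illustrative)
const URL_ATTRS = { A: 'href', IMG: 'src', SCRIPT: 'src', LINK: 'href' }

// Given a DOM element (or element-like object), return its target URL, or null
function targetUrl(el) {
  const attr = URL_ATTRS[el.tagName]
  return attr ? el.getAttribute(attr) : null
}
```

Running `targetUrl` over every matched element yields one flat list of candidate URLs regardless of tag type.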
Pages are fetched with the `fetch()` API using `mode: 'cors'`. The HTML response is
parsed with `DOMParser` to extract all anchor, image, script, and link elements.
A `Set`-based visited registry ensures already-seen URLs are never re-fetched,
preventing infinite loops and duplicate reports.
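A minimal sketch of such a registry. The normalization rules here (dropping the `#fragment`, trimming a trailing slash) are assumptions for illustration, not necessarily what LinkRadar does:

```js
// Normalize a URL so trivially-different spellings dedupe to one key
function normalize(url) {
  const u = new URL(url)
  u.hash = '' // #fragments point into the same page
  if (u.pathname !== '/' && u.pathname.endsWith('/')) {
    u.pathname = u.pathname.slice(0, -1) // trailing-slash variants
  }
  return u.toString()
}

const visited = new Set()

// Returns true the first time a URL is seen, false on repeats
function markVisited(url) {
  const key = normalize(url)
  if (visited.has(key)) return false
  visited.add(key)
  return true
}
```

With normalization in place, `/docs`, `/docs/`, and `/docs#intro` all count as one visit.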
Link targets are checked with lightweight `HEAD` requests in configurable concurrency batches. Only same-origin pages are crawled for
sub-links; external URLs are checked for status only.
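A sketch of one way to batch those checks. The `GET` fallback for servers that reject `HEAD` (status 405) is an assumption, as is the batch size:

```js
const CONCURRENCY = 8 // assumed default; LinkRadar makes this configurable

// Split work into fixed-size batches, each awaited with Promise.all
function batches(items, size) {
  const out = []
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size))
  return out
}

// Check one URL with a lightweight HEAD request; fall back to GET
// for servers that reject HEAD (illustrative, not LinkRadar's exact logic)
async function checkUrl(url) {
  const started = performance.now()
  try {
    let res = await fetch(url, { method: 'HEAD', mode: 'cors' })
    if (res.status === 405) res = await fetch(url, { method: 'GET', mode: 'cors' })
    return { url, status: res.status, ok: res.ok, time: Math.round(performance.now() - started) }
  } catch (err) {
    return { url, status: 0, ok: false, error: err.message }
  }
}
```

Batching keeps the number of in-flight requests bounded, so a large site doesn't flood the browser's connection pool.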
Each result records the source page, the originating tag (`<a>`, `<img>`, etc.), the anchor text, and the crawl
depth, forming the "fix-it" context shown in the report.
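That context might be rendered roughly like this; the record shape and wording below are illustrative, not the report's exact output:

```js
// Turn one crawl record into a human-readable fix-it line (illustrative shape)
function fixItLine(r) {
  if (r.ok) return `OK ${r.status} ${r.url}`
  return `BROKEN ${r.status} ${r.url} (in <${r.tag.toLowerCase()}> "${r.text}" on ${r.source}, depth ${r.depth})`
}
```

The point of carrying the tag, text, source, and depth through the crawl is exactly this: a broken-link row tells you where to go to fix it, not just that it broke.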
```js
// Entry point: breadth-first crawl starting from rootUrl
async function scan(rootUrl, opts) {
  const queue = [{ url: rootUrl, depth: 0 }]
  const visited = new Set()
  const results = []

  while (queue.length > 0) {
    // Process the queue in concurrency batches
    const batch = queue.splice(0, CONCURRENCY)
    await Promise.all(batch.map(async item => {
      if (visited.has(item.url)) return
      visited.add(item.url)

      const result = await checkUrl(item.url)
      results.push(result)

      // Only same-origin pages are crawled for sub-links
      if (isSameOrigin(item.url) && item.depth < opts.maxDepth) {
        const links = await extractLinks(item.url)
        queue.push(...links.map(l => ({
          url: l.href,
          depth: item.depth + 1,
          source: item.url,
          tag: l.tagName,
          // textContent works on DOMParser documents; innerText needs a layout
          text: l.textContent
        })))
      }
      updateUI(results)
    }))
  }
  return results
}
```
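The `isSameOrigin` helper referenced above can be as simple as comparing `URL.origin` values. This is a sketch (written as a two-argument variant for clarity; the crawler's own version presumably closes over the root URL):

```js
// Same-origin test using the URL API
function isSameOrigin(url, root) {
  try {
    return new URL(url).origin === new URL(root).origin
  } catch {
    return false // malformed URLs are treated as external
  }
}
```

`URL.origin` folds scheme, host, and port into one comparison, so `http://example.com` and `https://example.com` correctly count as different origins.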