Why DNS Leaks Still Happen Behind a “Working” Proxy

Most people picture privacy as a single encrypted tunnel, but the Domain Name System is a parallel channel. Before your browser can open a TLS session, something on your device must turn a hostname into an IP address. If that lookup uses your home ISP resolver, a hotel captive portal, or a corporate split-DNS server, observers can still infer which sites you visit even when the bytes themselves travel through Clash. That mismatch—encrypted transport with plaintext or off-path DNS—is what engineers mean by a DNS leak.

Mihomo (the core historically marketed as Clash Meta) can own DNS inside the client, yet leaks persist when the OS never asks the core for help, when rules send some domains “direct” while the resolver remains foreign, or when TUN mode is off and only selected apps honor the system proxy. The fix is not a magical checkbox; it is aligning three layers: resolver policy inside YAML, capture mode on the host, and verification after every major upgrade.

If you are new to how rules and modes interact, skim our documentation hub before you chase symptoms that are really routing mistakes.

How Clash Routes DNS at a High Level

Think of Mihomo as a miniature policy engine sitting between applications and the network. When an app asks for example.com, the core can answer locally, forward the query through an encrypted upstream, or pass it to a “direct” resolver—depending on your dns stanza and the active mode. The engine also needs to cooperate with fake-ip or redir-host semantics so that domain-based rules and connection setup stay consistent.

Fake-ip returns synthetic addresses from a private pool so the kernel hands connections to the TUN interface (or the local listener) before a real IP is needed. That trick eliminates many race conditions between rule matching and DNS, but it demands disciplined nameserver-policy so that specific zones still resolve the way you expect. Redir-host resolves to real addresses earlier, which can feel simpler on some desktops yet exposes you to different timing issues if the OS resolver bypasses the core.

Neither mode is automatically “more private.” Privacy comes from ensuring that whichever path you pick, every query that should be private is actually handled by resolvers and networks you trust, not by whatever DHCP pushed to your Wi-Fi interface five minutes ago.

TUN Mode, System Proxy, and the OS Resolver Trap

On desktop GUIs you often choose between a system proxy and TUN (virtual adapter) capture. System proxying is convenient, but only proxy-aware apps honor it: browsers usually comply, while background updaters and gaming clients may use their own DNS settings or ignore PAC files entirely. TUN, when implemented correctly, drags far more traffic through the core, which is why privacy guides lean on it for "full device" coverage.
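As a rough sketch of what "TUN plus DNS hijack" looks like in a Mihomo profile (option names follow current Mihomo documentation and may differ in older Clash builds; verify against your client's reference before copying):

```yaml
# Minimal TUN capture sketch for a Mihomo-based client.
tun:
  enable: true
  stack: system            # or gvisor / mixed, depending on platform support
  auto-route: true         # install routes so traffic enters the tunnel
  auto-detect-interface: true
  dns-hijack:
    - any:53               # redirect plaintext UDP/53 into the embedded resolver
```

The dns-hijack entry is what closes the classic gap where an app ignores the system proxy but still sends raw port-53 queries to the DHCP-assigned resolver.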

Mobile stacks differ again. Android VPNService-based clients can steer DNS into the tunnel, yet OEM battery optimizers and per-app bypass lists reintroduce holes. Our Android complete guide walks through capture boundaries if phones are your primary risk surface.

Regardless of platform, treat captive portals as a special case: you sometimes must resolve the login page through the raw ISP resolver before the tunnel comes up. Document that window honestly—DNS during portal authentication is rarely private—and reconnect with your hardened profile afterward.

Designing nameserver, Fallback, and Policy Rules

A resilient dns: section declares primary upstreams, fallbacks when TLS fails, and per-domain overrides. Start with resolvers you can reason about: reputable DoH endpoints, your operator’s internal resolver for LAN names, or a split horizon that sends *.corp.example to an enterprise server while everything else uses encrypted transit.

Use nameserver-policy to pin sensitive suffixes to the correct resolver instead of hoping the default list will guess right. Common patterns include sending Chinese domestic domains to a domestic resolver when you optimize for latency, or forcing news and messaging domains through the tunnel resolver when censorship is the threat model. The key is explicitness—implicit defaults change between releases.

Keep fallback chains shallow. Long waterfalls slow every tab load and make debugging miserable. Prefer two well-monitored upstreams plus a clearly labeled direct resolver for intranet zones.

dns:
  enable: true
  listen: 0.0.0.0:1053
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  nameserver:
    - https://cloudflare-dns.com/dns-query#PROXY_GROUP
    - tls://dns.quad9.net:853
  fallback:
    - tls://8.8.8.8:853
  nameserver-policy:
    "+.your-intranet.example":
      - system
    "+.lan":
      - system

Replace group names and endpoints with values your operator recommends; the structure matters more than the brand names above. If you reference #PROXY_GROUP, ensure that group exists and can reach the DoH URL when the WAN path is messy.

DoH, DoT, and What “Encrypted DNS” Does Not Fix

DNS over HTTPS and DNS over TLS hide query contents from casual on-path observers, but they do not erase metadata. The resolver operator still sees domain labels unless you add additional layers such as ODoH or VPN padding, which are outside the scope of a typical Clash profile. Choose upstreams with transparent logging policies and jurisdictions you understand.

Also remember that encrypted upstreams fail in interesting ways. If TLS to a resolver breaks, your fallback must not silently revert to an ISP you distrust unless that is a conscious trade you accept. Mihomo's log lines usually spell out which branch triggered; learn to read them once and you will save hours later.
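One way to make the fallback branch conscious rather than silent is a fallback-filter, which only diverts queries to the fallback servers when the primary answer looks suspicious. A sketch, assuming Mihomo's documented fallback-filter semantics (adjust the geoip-code to your own region):

```yaml
dns:
  fallback-filter:
    geoip: true
    geoip-code: CN         # answers geolocated outside this region stay on primary
    ipcidr:
      - 240.0.0.0/4        # a range commonly returned by poisoned answers
```

With a filter in place, a dying primary resolver produces visible errors instead of quietly shifting every query to a different trust domain.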

IPv6, Multicast DNS, and the Leaks Users Forget to Test

IPv6 introduces parallel resolver paths. If your OS prefers AAAA records and your policy only hardened IPv4 exit behavior, you can end up with split traffic. Either align IPv6 routing with your threat model or disable it at the OS level when you truly need a single stack.
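If you decide on a single stack, disabling IPv6 in both the resolver and the general settings keeps AAAA answers from splitting your traffic. A sketch using Mihomo's documented top-level and dns-level switches:

```yaml
ipv6: false          # general setting: do not route IPv6 connections
dns:
  ipv6: false        # do not return AAAA records from the embedded resolver
```

Disabling only one of the two is a common half-measure: the resolver stops answering AAAA, but the OS may still source IPv6 traffic around the tunnel, or vice versa.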

Multicast DNS and local discovery (common on macOS and Linux desktops) can broadcast hostnames on LAN segments. That is not a Clash bug, but it is a privacy footgun in coffee shops. Decide whether to allow *.local resolution through the tunnel or to block it explicitly in broader firewall rules.
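If you choose to block local discovery names outright, the rule list is one place to do it. A sketch using standard Clash rule syntax (placement matters, since rules match first-hit):

```yaml
rules:
  - DOMAIN-SUFFIX,local,REJECT   # drop .local lookups before they leave the device
  # ...your existing rules follow
```

Whether this is appropriate depends on your LAN: AirPlay, printers, and casting all rely on .local resolution, so test on your home network before hardening a travel profile.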

How to Verify That DNS Actually Follows Your Policy

Trust, then verify. After each profile change, run three checks: a browser-based leak test you trust, a CLI query that shows which server answered, and a quick review of Mihomo’s live connections panel. If the panel shows direct UDP/53 to your ISP while you thought everything was tunneled, you already have the smoking gun.

Compare results in both “Rule” and “Global” modes intentionally. Misconfigured exceptions often appear only in Rule mode when domestic domains shortcut to direct outbound yet still use an unintended resolver. For performance-oriented tuning that touches DNS without snake oil, see our speed and DNS tuning article.

Practical habit: Capture a timestamped screenshot of your resolver test results when a profile works. Diffing after upgrades beats guessing which parser default moved.

Common YAML and GUI Mistakes That Cause Leaks

Disabling the embedded DNS module while expecting the core to steer lookups is a classic failure mode. If dns.enable is false, applications fall back to whatever the OS prefers. Mixing fake-ip with applications that insist on “real” answers breaks some chat clients; either exempt them with policy or switch modes after you understand the trade.
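Exempting such applications uses Mihomo's fake-ip-filter, which lists domains that should always receive real answers even in fake-ip mode. A sketch; the last entry is a hypothetical placeholder, not a vetted list:

```yaml
dns:
  fake-ip-filter:
    - "+.lan"
    - "+.local"
    - "+.chat-app.example"   # hypothetical client that rejects synthetic IPs
```

Keep the filter short and documented: every entry is a deliberate hole in fake-ip semantics, and an unexplained wildcard added during debugging tends to outlive the bug it worked around.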

Hard-coded system DNS on dual-stack laptops fights the tunnel. Remove static resolvers when you want the core to be authoritative. Stale proxy groups referenced by DoH URLs strand DNS behind a dead outbound, which looks like random “internet down” behavior but is actually resolver starvation.

Align Configuration With a Realistic Threat Model

Not everyone needs the same resolver topology. A journalist on hostile Wi-Fi cares about encrypting queries and avoiding ISP logging; a remote employee might prioritize split DNS to internal HR portals; a gamer might optimize for latency on anti-cheat-friendly paths. Write down your top two risks and let them drive whether you favor fake-ip, how aggressive your fallback chain is, and whether domestic domains should shortcut.

No client configuration defeats legal process against your resolver operator, compromised devices, or malware that exfiltrates data outside the tunnel. Layer defenses: patch the OS, download clients from verified channels, and rotate credentials when subscriptions leak.

Maintenance: Upgrades, Parser Changes, and Regression Tests

Mihomo releases occasionally refine DNS defaults. After upgrading, rerun your leak tests even if the YAML diff looks boring. Keep a backup of the last known-good profile in plain text so you can bisect mistakes quickly.

Automate lightly: a monthly calendar reminder plus a two-minute test script on your laptop catches drift before a long trip. For teams, store profiles in git and review changes the same way you review firewall rules.

Legal and ethical use: Respect local laws and network acceptable-use policies. This guide is for operators securing their own devices and traffic, not for circumventing rules you are contractually or legally required to follow.

Putting DNS, Rules, and Capture Mode Together

DNS privacy in Clash is less about memorizing buzzwords and more about closing the gap between what you think is happening and what the packets actually do. Pick a capture mode that matches your device reality, write explicit resolver policy instead of relying on tacit defaults, and verify with tests that survive client upgrades.

Compared with juggling single-purpose VPN apps, a maintained Mihomo-based client keeps DNS, rules, and multiple outbounds in one place so adjustments stay coherent. When you want a current build with modern DNS features and sensible GUI defaults, grab a release from our download page and pair it with the configuration reference for fine-tuning. → Download Clash for free and experience the difference.