Serverless key-value stores use DNS TXT data

Blog 14 min read

Storing 670,000 TXT records to serve a single gigabyte proves DNS can function as a viable, if absurd, serverless key-value store. You will learn how to architect this key-value abuse, implement file reassembly from fragmented text chunks, and patch binaries to run entirely in memory.

While Verified Market Reports notes the DNS security sector is exploding due to rising threats, the core vulnerability remains the lack of validation on TXT record contents. The project demonstrates that splitting a base64-encoded payload across thousands of these records allows an implant to reconstruct and run programs like DOOM without ever writing a file to disk. This technique exploits the fact that forensic tools rarely flag historic text data or query patterns that deviate from standard A or PTR lookups.

The discussion moves beyond simple staging to full in-memory execution, where the DNS infrastructure acts as the sole host for critical assets. By using globally distributed edge caching, an adversary can deliver malicious logic that appears as legitimate traffic to most monitoring systems.

The Role of DNS TXT Records as a Serverless Key-Value Store

DNS TXT Records as Arbitrary Text Fields for Base64 Data

DNS TXT records act as unvalidated text fields that accept any string, allowing storage of arbitrary base64-encoded binary data without content verification. This lack of validation transforms a protocol built for email authentication into a functional serverless key-value store. Attackers use this design flaw to stage payloads entirely in memory, sidestepping disk-based detection mechanisms that monitor file system writes. Data indicates these fields hold diverse content types ranging from simple love letters to complex executable code. The mechanism depends on splitting large binaries into chunks small enough to fit standard DNS response limits. A sharp tension exists between deployment ease and forensic visibility; while setup is trivial, historical data collection services like WhoisFreaks maintain databases of over 14 billion historical DNS records, guaranteeing permanent exposure of staged content. This payload staging technique proves infrastructure intended for metadata can execute complex programs like DOOM when clients reassemble fragments in RAM. Such capability shifts the security perimeter from endpoint file integrity to network traffic analysis. Network operators must recognize that arbitrary text fields effectively decouple code execution from traditional file artifacts.

Stashing malicious code in TXT records enables runtime implant delivery by fetching base64-encoded binary data directly into memory. Each record holds about 2,000 characters, forcing attackers to split payloads like the 1.7MB compressed DOOM WAD across nearly 2,000 distinct entries. This approach defines payload staging as retrieving executable logic at runtime rather than storing it on disk, effectively bypassing file-based antivirus scanners monitoring local storage writes. Unit 42 researchers documented cases where adversaries infiltrate commands or binary files using this exact method to evade detection. Bandwidth and query volume create constraints; resolving thousands of records takes seconds longer than loading a local file, creating a detectable latency spike. Unlike traditional downloads, this approach leaves no binary artifacts on the hard drive after execution ends. Investigators cannot recover the original payload from the host machine once the memory clears. Standard defense tools often ignore TXT traffic, assuming it contains only email authentication strings. This blind spot allows complex programs to run entirely within volatile memory spaces.
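The chunking arithmetic above can be sketched in Python. This is a hedged stand-in for the project's actual tooling: the 2,000-character per-record capacity is the article's figure, and the payload here is synthetic.

```python
import base64
import zlib

CHUNK_CHARS = 2000  # per-record capacity cited in the article

def chunk_payload(raw: bytes, chunk_chars: int = CHUNK_CHARS) -> list[str]:
    """Compress, base64-encode, and split a binary into TXT-sized chunks."""
    encoded = base64.b64encode(zlib.compress(raw)).decode("ascii")
    return [encoded[i:i + chunk_chars] for i in range(0, len(encoded), chunk_chars)]

# Synthetic stand-in for a binary; real payloads compress less evenly.
payload = bytes(range(256)) * 4096  # 1 MiB of repeating bytes
chunks = chunk_payload(payload)
print(len(chunks), "records needed")
```

Joining the chunks, base64-decoding, and decompressing reverses the pipeline exactly, which is what makes the DNS zone usable as a read-only key-value store.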

| Feature | Traditional Download | DNS TXT Staging |
| --- | --- | --- |
| Storage Location | Disk | DNS Zone |
| Detection Vector | File Hash | Query Volume |
| Persistence | High (file exists) | None (memory only) |

Operators must assume any text field can execute code if the client logic permits reassembly.

Forensic Evasion Challenges Due to Rare TXT Record Querying

DNS TXT record abuse exploits the scarcity of routine TXT queries compared to A or PTR lookups to hide malicious traffic. Standard client behavior rarely requests these records, allowing attackers to blend payload retrieval with normal background noise. This mechanism enables in-memory code execution where implants fetch encoded binaries without triggering alerts designed for high-volume data transfers. Operational complexity increases notably; reassembling a program from fragmented text requires precise sequencing that can fail under network latency. Historical analysis remains the primary detection gap because few security teams archive or inspect the content of past TXT responses. ADAMnetworks identified this flaw where attackers used TXT records to hide and spread malware, evading traditional security defenses. HTTP payload delivery leaves clear web proxy logs, yet DNS traffic often bypasses deep inspection due to perceived low risk. Legitimate TXT usage occurs so infrequently that malicious activity persists undetected for extended periods within this blind spot. Security architectures treating DNS as purely navigational miss the potential for data exfiltration hidden in plain text fields.
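Part of why TXT retrieval blends in is the wire format itself: a TXT query is byte-for-byte identical to an A query except for the two-byte QTYPE field. The following offline sketch builds minimal RFC 1035 query packets without sending them; the transaction ID is arbitrary.

```python
import struct

def build_dns_query(name: str, qtype: int) -> bytes:
    """Build a minimal DNS query packet in RFC 1035 wire format."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (arbitrary here)
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # one question, no other records
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

TYPE_A, TYPE_TXT = 1, 16
a_query = build_dns_query("chunk-0.example.com", TYPE_A)
txt_query = build_dns_query("chunk-0.example.com", TYPE_TXT)
# The two packets differ only in the two QTYPE bytes near the end.
assert a_query[:-4] == txt_query[:-4]
```

A monitoring stack that does not break out queries by QTYPE therefore cannot distinguish payload retrieval from routine resolution.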

Architecture of In-Memory Code Execution via DNS

Managed-Doom: Pure C# Assemblies for Memory Reflection

Can it run DOOM? The project required a DOOM port written in a language that can be reflected into memory on Windows to avoid disk artifacts. Native binaries typically demand file system presence for loader resolution, whereas managed-doom, a pure C# port of the original engine, permits direct loading from raw bytes. This architectural distinction enables the player script to reconstruct the game entirely within RAM using .NET reflection APIs.

| Feature | Native Binary | Managed Assembly |
| --- | --- | --- |
| Loading Method | Disk File Path | MemoryStream |
| Forensic Trace | High (file created) | None (RAM only) |
| Dependency | OS Loader | CLR Runtime |

The mechanism functions by fetching compressed DLL bundles via DNS, decoding the base64 stream, and invoking the entry point without writing sectors to storage. While the uncompressed WAD file reaches 4MB, compression reduces this footprint significantly for transport. However, the limitation is strict dependency management; any native DLL reference forces a revert to disk-based operations, breaking the fileless chain. Unit 42 researchers documented cases where attackers used TXT records to infiltrate code, yet few implementations achieve pure memory residency due to these native constraints. The implication for network defenders is clear: detecting such threats requires monitoring anomalous DNS query patterns, and flagging .NET assemblies loaded unexpectedly during user sessions, rather than scanning local drives for executables.
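The project's loader is C#, using .NET reflection to load an assembly from a byte array. As a rough Python analog of that pattern, a module can likewise be constructed from fetched bytes and executed without touching disk. All names here are hypothetical.

```python
import types

def load_module_from_bytes(name: str, source: bytes) -> types.ModuleType:
    """Execute source fetched over the network directly as a module,
    without writing a file (loosely analogous to .NET Assembly.Load)."""
    module = types.ModuleType(name)
    code = compile(source.decode("utf-8"), f"<memory:{name}>", "exec")
    exec(code, module.__dict__)
    return module

# Stand-in for bytes reassembled from TXT records.
fetched = b"def entry_point():\n    return 'running from RAM'\n"
mod = load_module_from_bytes("implant_stage2", fetched)
print(mod.entry_point())  # -> running from RAM
```

The forensic point carries over: the only artifact of the loaded code is the in-memory module object, which vanishes with the process.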

Base64 Chunking and Reassembly of DNS TXT Payloads

The DOOM project required 1,966 distinct TXT records to reassemble the full payload from distributed chunks. The mechanism splits Base64-encoded binary streams into fixed-size segments that fit within standard DNS response limits, tagged with sequence metadata for ordered reconstruction. This approach mirrors the Duck.jpg proof of concept, where file integrity was verified by matching hashes after retrieval and decoding. Compressing the original 4MB WAD file reduced the footprint significantly, though the record count remained high. A critical tension exists between payload size and query volume; larger files like a 1GB video would demand roughly 670,000 records, rendering the technique impractical for bulk data exfiltration. InterLIR notes that while memory reflection avoids disk artifacts, the sheer number of DNS queries creates a detectable traffic anomaly despite the legitimacy of individual packets. Network defenders observing sustained, high-volume TXT resolution patterns can identify this staging behavior even without inspecting payload content. The reliance on precise chunk ordering means any packet loss during transmission corrupts the final executable, forcing attackers to implement strong error-checking logic within the stager itself.
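The ordered reassembly and hash check described above can be sketched as follows, assuming chunks arrive keyed by a sequence index; the metadata layout is an assumption, not the project's actual format.

```python
import base64
import hashlib

def reassemble(records: dict[int, str], expected_sha256: str, total: int) -> bytes:
    """Reorder sequence-tagged chunks, join, decode, and verify integrity."""
    if set(records) != set(range(total)):
        raise ValueError("missing or duplicate chunk indices")
    blob = base64.b64decode("".join(records[i] for i in range(total)))
    if hashlib.sha256(blob).hexdigest() != expected_sha256:
        raise ValueError("hash mismatch: corrupted reassembly")
    return blob

# Simulate a payload split into sequence-tagged records.
original = b"E1M1 bytes..." * 100            # placeholder binary
encoded = base64.b64encode(original).decode()
pieces = [encoded[i:i + 300] for i in range(0, len(encoded), 300)]
records = {i: p for i, p in enumerate(pieces)}
rebuilt = reassemble(records, hashlib.sha256(original).hexdigest(), len(pieces))
assert rebuilt == original
```

The index check is what turns packet loss into a hard failure rather than a silently corrupted executable, which matches the article's point about strong error-checking in the stager.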

Scalability Limits: 1GB Files Versus Slow Exfiltration Rates

Data shows a 1GB MP4 file requires roughly 670,000 TXT records, rendering large-scale delivery unscalable compared to the 1,966 records needed for DOOM. This mechanism forces a choice between massive record proliferation and maintaining a functional in-memory code execution chain within standard DNS zones. Typical DNS exfiltration attacks transmit data much more slowly, with some estimates showing rates of just over 3 MB per day using small chunks sent every few seconds. The limitation is throughput; moving significant volume through this channel creates a temporal footprint that contrasts sharply with the instant gratification of local disk reads.

| Metric | Small Payload (DOOM) | Large Asset (1GB Video) |
| --- | --- | --- |
| Record Count | ~2,000 | ~670,000 |
| Feasibility | High | Unscalable |
| Primary Use | Red teaming tools | Theoretical staging |

Operators utilizing memory reflection in red teaming must recognize that while bypassing disk writes avoids forensic traces, the query volume itself becomes a detectable anomaly. Relying on this vector for gigabyte-scale assets introduces latency that defeats the purpose of rapid deployment.
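The record counts in the table follow from simple arithmetic: base64 inflates a payload by roughly 4/3, and each record holds about 2,000 characters. A sketch of that calculation (exact counts vary with per-chunk metadata overhead, which this ignores):

```python
import math

CHUNK_CHARS = 2000  # usable characters per TXT record (article's figure)

def records_needed(payload_bytes: int) -> int:
    """Base64 inflates by 4/3; divide the result by per-record capacity."""
    b64_chars = math.ceil(payload_bytes * 4 / 3)
    return math.ceil(b64_chars / CHUNK_CHARS)

doom = records_needed(1_700_000 + 1_200_000)  # compressed WAD + DLL bundle
video = records_needed(1_000_000_000)         # 1GB MP4
print(doom, "vs", video, "records")
```

The compressed DOOM assets land near the article's ~2,000-record figure, while the 1GB asset lands near 670,000, which is the scalability cliff the table describes.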

Implementing File Reassembly and Assembly Patching for DNS Delivery

Patching .NET Assemblies for MemoryStream Reflection Loading

ADAMnetworks revealed a DNS TXT record exploit used for malware delivery on September 9, 2025, proving fileless loading bypasses disk monitors.

  1. Modify the managed-doom source to accept a MemoryStream instead of reading hardcoded file paths like `C:\Path\to\Game.dll`.
  2. Strip the native windowing DLL that demands disk presence and substitute direct Win32 API calls for rendering.
  3. Recompile the target as a pure .NET assembly to enable reflection-based loading without triggering executable whitelists.

The original WAD file shrank from 4MB to 1.7MB via compression, yet the DLL bundle still required reduction from 4.4MB to 1.2MB to fit viable limits. A specific tension exists between functionality and stealth; cutting audio entirely saved record count but degraded the user experience. Operators must recognize that removing native dependencies eliminates the primary forensic artifact of file creation, yet introduces complexity in maintaining stable Win32 API hooks within a purely managed context. This architectural shift means detection relies entirely on monitoring unusual DNS query patterns rather than static file signatures.

Staging the 1,966 records for the DOOM payload via the CloudFlare API consumed 15 minutes. Operators must compress WAD files and DLL bundles before splitting binaries into TXT chunks that fit DNS response limits. According to Creating the Records, compression reduced the WAD file from 4MB to 1.7MB while the DLL bundle dropped from 4.4MB to 1.2MB. The mechanism calculates required record counts by dividing total compressed size by the ~2,000 character limit per TXT entry. Reassembly logic reads metadata to order chunks correctly before reconstructing the binary stream in memory. However, the cost is query volume; a 1GB video would demand roughly 670,000 records, creating massive noise. This constraint forces attackers to prioritize small, high-value binaries over large assets to avoid detection thresholds. Network defenders observing sudden bursts of TXT queries should investigate the cumulative byte count immediately.

  1. Compress target binaries using standard algorithms to minimize total character count.
  2. Split the encoded output into fixed-size segments matching DNS protocol constraints.
  3. Upload segments via automation tools to populate the CloudFlare Pro zone efficiently.
  4. Generate a metadata record containing hash values and total segment counts for the client.

The limitation is that high-frequency polling triggers alerts quicker than slow, steady exfiltration channels. Operators balancing stealth against payload size must accept that larger programs increase forensic visibility significantly.
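Step 4's metadata record might look like the following sketch. The `v=1;chunks=N;sha256=…` layout is an assumption for illustration, not the project's documented format.

```python
import base64
import hashlib
import zlib

def make_metadata_record(raw: bytes, chunk_chars: int = 2000) -> str:
    """Produce the index record a client reads first: total chunk count
    plus a hash of the original binary (record format is hypothetical)."""
    compressed = base64.b64encode(zlib.compress(raw)).decode()
    total = -(-len(compressed) // chunk_chars)  # ceiling division
    digest = hashlib.sha256(raw).hexdigest()
    return f"v=1;chunks={total};sha256={digest}"

meta = make_metadata_record(b"\x90" * 500_000)
print(meta)
```

Publishing the count and hash in one well-known record lets the client detect truncated zones before issuing thousands of chunk queries, and verify the decompressed binary afterwards.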

Executing Fileless Reassembly via PowerShell Resolve-DnsName

According to Creating the Records, the player script executes roughly 2,000 DNS queries in 10 to 20 seconds using Resolve-DnsName. Operators must invoke this standalone PowerShell utility to fetch base64 chunks, strip headers, and concatenate them into a single byte array within a MemoryStream. The script then loads the modified .NET assemblies via reflection, bypassing disk write events entirely. A critical tension exists between execution speed and detection thresholds; while the operation completes rapidly, bursting 2,000 TXT requests might trigger volume-based alerts despite staying under the 10-query-per-minute heuristic for sustained monitoring. Per Creating the Records, the entire reassembly logic fits within about 250 lines of code, ensuring portability across environments without native dependencies.
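The player-script loop can be sketched in Python with the resolver injected as a callback so the example stays offline; the `chunk-N.<zone>` naming scheme is an assumption for illustration.

```python
import base64
from typing import Callable

def fetch_payload(zone: str, total: int,
                  resolve_txt: Callable[[str], str]) -> bytes:
    """Fetch chunk-N.<zone> TXT values in order and concatenate them into
    one byte array, as the article's PowerShell player script does with
    Resolve-DnsName. The resolver is injected to keep this sketch offline."""
    parts = [resolve_txt(f"chunk-{i}.{zone}") for i in range(total)]
    return base64.b64decode("".join(parts))

# Offline stand-in for a DNS zone populated with two chunk records.
encoded = base64.b64encode(b"fake assembly bytes").decode()
fake_zone = {"chunk-0.demo.example": encoded[:12],
             "chunk-1.demo.example": encoded[12:]}
payload = fetch_payload("demo.example", 2, fake_zone.__getitem__)
assert payload == b"fake assembly bytes"
```

Swapping the dictionary lookup for a real TXT resolver turns this into the burst of ~2,000 queries the text describes, which is exactly the traffic pattern volume-based alerting would key on.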

Verification requires confirming zero file artifacts remain on the filesystem post-execution. Network teams should validate that the Win32 API calls render the window correctly without loading external DLLs from disk paths.

| Check | Target State | Failure Indicator |
| --- | --- | --- |
| Disk I/O | Zero writes | Temp files created |
| Dependencies | None | Missing native libs |
| Duration | Under 20s | Timeout errors |

Strategic Viability of DNS for Payload Delivery in Red Teaming

Application: DNS TXT Records as Arbitrary Text Fields for Base64 Data

Dashboard showing DNS payload delivery metrics: 90% filter bypass rate, 512-byte limit, 670,000 records needed for 1GB transfer, and 20-50% growth/compliance impacts.

According to Palo Alto Networks Unit 42, attackers infiltrate code into systems by hiding binary files inside TXT records that lack content validation. This mechanism functions because the protocol treats these fields as opaque strings, allowing operators to store Base64-encoded payloads without triggering file-type restrictions. According to Yahoo Finance, ADAMnetworks identified this flaw as a primary vector for spreading malware while evading traditional security defenses. The limitation is scale; moving significant volume through this channel creates a temporal footprint that contrasts sharply with instant local disk reads. A typical deployment splits compressed binaries into thousands of chunks, reassembling them in memory to bypass disk monitors entirely.

| Constraint | Impact on Payload Delivery |
| --- | --- |
| Character Limit | Forces fragmentation of large binaries into small text segments |
| Query Volume | High record counts increase detection risk via traffic analysis |
| Latency | Reassembly speed depends on resolver caching and network conditions |

The cost of this approach is query volume; even modest assets demand massive record proliferation across zones. Most operators overlook the fact that historical DNS data remains queryable long after the initial delivery window closes. This persistence allows forensic recovery of staged binaries if the zone configuration permits recursion.

Application: Stashing Malicious Code in TXT Records for Runtime Implant Delivery

Whether to use DNS for payload delivery depends on accepting a fileless execution model that trades transfer speed for forensic invisibility. Data shows the project successfully stored, launched, and ran DOOM from DNS records by encoding binaries into text chunks. The mechanism splits compressed payloads into base64 strings, storing each segment within a separate TXT record field. An implant queries these fields, reassembles the byte stream in a MemoryStream, and invokes the code via reflection without disk writes. However, the limitation is volume scalability; transferring a 1GB file would require roughly 670,000 records, creating excessive noise compared to smaller implants. This constraint forces operators to target high-value, low-size binaries rather than bulk data exfiltration or large toolsets. ADAMnetworks identified a DNS flaw where attackers used TXT records to hide and spread malware, evading traditional security defenses that focus on HTTP or SMTP channels. The implication for network defenders is that standard egress filtering fails against this vector because the traffic mimics legitimate resolution requests.

As reported by Infoblox, DNS exfiltration analysis began in May 2016, yet TXT record monitoring remains secondary to A-record logging. Most security stacks prioritize volume-heavy traffic, treating text-based queries as low-risk noise rather than command vectors. This operational blind spot allows attackers to stage fileless payloads without triggering standard threshold alerts. The mechanism exploits the rarity of legitimate client-side TXT requests, masking malicious retrieval within sporadic background noise. However, ADAMnetworks revealed a DNS flaw in May 2016, where this exact obscurity facilitated undetected malware spread across enterprise networks. The trade-off for defenders is visibility versus volume; enabling deep packet inspection on all text records increases processing load significantly. InterLIR recommends isolating TXT query logs from general DNS streams to identify anomalous patterns early. Unlike HTTP delivery, which leaves clear server logs and file artifacts, DNS staging leaves only transient cache entries. This distinction creates a persistent forensic gap where evidence vanishes before analysts can triage the incident effectively. Operators must assume baseline traffic includes hidden code if zone authority is compromised.

About

Evgeny Sevastyanov Support Team Leader at InterLIR brings a unique operational perspective to the complexities of DNS TXT records. While his daily work focuses on managing IPv4 resources and maintaining clean BGP route objects, his deep involvement with RIPE database administration provides critical insight into how DNS infrastructure is configured and potentially exploited. At InterLIR, a Berlin-based marketplace dedicated to transparent IP address redistribution, Evgeny oversees the technical integrity of network resources, giving him firsthand knowledge of the very protocols discussed in this article. His experience creating and verifying database objects allows him to understand precisely how malicious actors might hide payloads within legitimate-looking TXT records to bypass forensic detection. By connecting routine network administration tasks with advanced security concepts, Evgeny illustrates why understanding these obscure record types is essential for anyone managing modern network availability and security posture.

Conclusion

Relying on compression to fit malicious binaries into DNS payloads merely shifts the bottleneck from bandwidth to orchestration complexity. As attackers target high-value, low-size binaries, the real breakage point emerges in forensic retention; transient cache entries vanish before standard logging intervals capture the full reconstruction chain. While the global DNS security market expands rapidly, most organizations still treat text-based queries as low-risk noise, creating a critical visibility gap that fileless malware exploits effortlessly. You must assume that without specific isolation of these logs, your baseline traffic already harbors hidden command vectors.

Organizations handling sensitive intellectual property should mandate deep packet inspection for all TXT queries by Q1 2026, specifically targeting non-standard query frequencies rather than just payload size. This timeline aligns with the anticipated surge in automated exfiltration tools that bypass traditional HTTP-centric defenses. Do not wait for a breach to validate this architectural blind spot; the cost of processing full text-record streams is negligible compared to the revenue loss from undetected data theft.

Start by auditing your current DNS logging policy this week to verify if TXT record queries are being captured at full fidelity or truncated. If your logs discard these entries after standard TTL expiration, you lack the forensic evidence required to reconstruct an attack post-incident.

Frequently Asked Questions

What is the maximum file size practical for DNS TXT storage?
A 1GB video file demands roughly 670,000 records, making it impractical for this method. Smaller compressed files like the 1.7MB DOOM WAD are far more viable for successful in-memory reassembly and execution without detection.
How much compression is needed to fit payloads into DNS limits?
The original 4MB WAD file shrank to 1.7MB via compression to fit viable limits. Similarly, DLL bundles required reduction from 4.4MB down to 1.2MB to ensure successful delivery through fragmented TXT record chunks.
Can historical DNS data expose stored payloads to investigators?
Historical data collection services maintain databases of over 14 billion records, guaranteeing permanent exposure. This means any payload staged in TXT records remains visible to forensic investigators long after the initial memory-only execution event has concluded.
Why do attackers prefer TXT records over standard A records for payloads?
TXT records act as unvalidated text fields holding about 2,000 characters each for arbitrary base64 data. Unlike A records, they allow storing complex binary code directly within the protocol for later in-memory reassembly by the target implant.
What specific capacity limit forces payload chunking in this technique?
Each TXT record holds about 2,000 characters, forcing the 1.7MB compressed DOOM WAD split across nearly 2,000 entries. This strict character limit necessitates complex metadata management for accurate file reassembly during runtime execution.
Evgeny Sevastyanov
Support Team Leader