Does AI Dream in Noise?
Feeding the machine static and seeing what it makes of itself
Here’s the question I keep poking at: if you strip a generative model of every well-formed prompt (no curated nouns, no “in the style of,” no mood board, no helpful adjectives) and just shove pure entropy through it, what comes back?
I’ve been running two experiments to find out. One is dumb on purpose. The other is dumb on the radio.
Experiment One: 100 Random Word Visions
I had an LLM cough up 100 random words. No theme. No vibe. No “make these cohere, please.” Just one hundred unrelated chunks of human language pulled from whatever the model felt like generating that day. Then I fed those words one by one into a video model and let it dream up an experimental short, one micro-vision per word.
The result is a stitched-together fever dream. Some clips snap together with weird accidental rhyme. Others are completely unrelated. The whole thing has the rhythm of channel-surfing through a parallel cable system that only exists inside a server rack.
Here’s the wild bit, though. The transitions feel intentional even though nothing about this was. Your brain does the editing job for free. You start narrating connective tissue between “wagon” and “elapsed” and “blueprint” because that’s what brains do. They reach for story even when there isn’t one. The AI didn’t compose anything coherent. You did, after the fact, in your head.
That’s a strange thing to watch happen in real time.
Experiment Two: Paintings From RF Noise
The second one is more physical, and honestly more fun. I plugged an RTL-SDR, a $30 USB radio dongle, into my laptop and wrote a Python tool (a rough sketch follows the list) that:
Tunes to a random frequency between 24 MHz and 1.7 GHz
Listens for a couple of seconds
Renders the IQ samples as an image: spectrogram, constellation plot, or polar phase map
Chains a randomly chosen subset of its sixteen filters over the result (chromatic shift, kaleidoscope, bloom, datamosh slice shifts, the whole funk pile)
Drops the PNG in a date-stamped folder with a JSON manifest of every parameter, so when one of the random outputs is great I can reproduce or remix it precisely
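To make that pipeline concrete, here’s a minimal sketch of one capture-and-render pass. It assumes the pyrtlsdr, matplotlib, and Pillow packages; capture_one() is an invented name, the four effects below are stand-ins for the real pile of sixteen filters, and only the spectrogram render is shown. It’s a sketch of the shape of the thing, not the tool itself.

import json
import random
import time
from pathlib import Path

import matplotlib
matplotlib.use("Agg")                      # render off-screen, straight to file
import matplotlib.pyplot as plt
from PIL import Image, ImageFilter, ImageOps
from rtlsdr import RtlSdr

# Placeholder effects standing in for the real sixteen-filter funk pile;
# each takes and returns a PIL image.
FILTERS = [
    ("blur",      lambda im: im.filter(ImageFilter.GaussianBlur(radius=2))),
    ("posterize", lambda im: ImageOps.posterize(im, 3)),
    ("solarize",  lambda im: ImageOps.solarize(im, threshold=96)),
    ("mirror",    lambda im: ImageOps.mirror(im)),
]

def capture_one(seconds=2.0, sample_rate=2.048e6):
    # Tune somewhere random in the dongle's usable range and grab IQ samples.
    freq = random.uniform(24e6, 1.7e9)
    sdr = RtlSdr()
    sdr.sample_rate = sample_rate
    sdr.center_freq = freq
    sdr.gain = "auto"
    n = int(seconds * sample_rate) // 1024 * 1024   # keep the read block-aligned
    iq = sdr.read_samples(n)
    sdr.close()

    out_dir = Path(time.strftime("%Y-%m-%d"))
    out_dir.mkdir(exist_ok=True)
    stem = out_dir / time.strftime("%H%M%S")

    # One of the render modes: spectrogram of the raw IQ stream.
    plt.specgram(iq, NFFT=1024, Fs=sample_rate, Fc=freq)
    plt.axis("off")
    plt.savefig(f"{stem}.png", bbox_inches="tight", pad_inches=0, dpi=200)
    plt.close()

    # Chain a random subset of the filters over the rendered image.
    img = Image.open(f"{stem}.png").convert("RGB")
    chain = random.sample(FILTERS, k=random.randint(1, len(FILTERS)))
    for _, effect in chain:
        img = effect(img)
    img.save(f"{stem}.png")

    # Manifest so a lucky output can be reproduced or remixed later.
    manifest = {
        "center_freq_hz": freq,
        "sample_rate_hz": sample_rate,
        "seconds": seconds,
        "render": "spectrogram",
        "filters": [name for name, _ in chain],
    }
    Path(f"{stem}.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    capture_one()

Run something like this a few times and the date-stamped folder fills up with paired PNG and JSON outputs, which is the whole reproducibility trick: when a random one is great, the manifest tells you exactly how it happened.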
The radio is the random number generator. The ambient electromagnetic mess of my workshop (neighbor’s wifi, FM broadcast bleed-through, an ADS-B aircraft transponder ten miles overhead, whatever else is in the air) is doing the creative work. The code is just a translation layer between physics and pixels.
Some runs land on a frequency where nothing’s transmitting and the output is a smooth Gaussian blob, the visual equivalent of a held breath. Other runs catch a busy ISM band or an FM station and you get structured rings and lattices that look like somebody tried to sketch a circuit diagram in a dream.
I never know which one I’m going to get. That’s the whole point.
I then took the raw data (a .csv file) and fed it directly into an AI tool. I got this back:
So, does AI dream in noise?
Here’s what I think is actually going on with both of these projects, and why I keep building them.
A generative model is a giant compressed atlas of human pattern. Prompting it well is asking the model to navigate that atlas to a specific destination.
Prompting it with noise is asking it to navigate to no specific destination. This means the model has to fall back on its own internal gravity. The places it drifts toward when nothing’s pulling on the wheel.
That drift is, I’d argue, the closest thing you can get to seeing the model’s own aesthetic. Its defaults. Its dreams. Whatever you want to call it.
The radio version goes one notch further. It introduces an entropy source the model has no relationship to whatsoever. The actual electromagnetic state of a room in western Maryland on a Tuesday afternoon. The model didn’t train on that. Nobody did. It’s just there. It’s only ever been there.
When I generate art from it, I’m not really making art. I’m making a transcription of a moment of physical reality nobody, human or machine, has ever bothered to pay attention to before.
The point
I’m a tradigital craftsman. I like making things with my hands and my code and the weird overlap between the two. But I also like asking the tools I use what they’re actually doing when nobody’s micromanaging them. I want to find more ways to feed these tools noise from the real world.
Every prompt-engineered output is, in some sense, a duet between you and the model. Both of you are steering. Pull your hands off the wheel and feed the model garbage on purpose, and you start to see what it does when it’s alone in the room.
Sometimes it’s beautiful. Sometimes it’s a Gaussian blob. Both answers are interesting.
More noise to come.
*Word soup used to generate this article’s image:
ablaze, bungalow, cleaver, dapple, ewer, flounder, grapple, hatchet, idle, jangle, kindle, looseleaf, mottled, niggle, oblong, perch, quickstep, rasp, scribble, twiddle, unhinge, vex, wobble, yelp, zoom, ample, brisket, chortle, dimple, ease, fizzle, grumble, hover, itch, jostle, knit, lounge, muddle, nudge, oust, pluck, quack, rouse, smudge, trudge, undo, veer, waft, yank, zap, amble, brood, chuckle, dawdle, ebb, fumble, gulp, heave, ink, jab, knack, lurch, mumble, nuzzle, ogle, prod, quench, rummage, snicker, totter, unwind, vault, wheeze, yawn, zest, awash, brisk, clamp, dodge, evade, flick, gnash, huddle, irk, jolt, kink, limp, mope, nod, oaf, peer, quaff, ruffle, scour, tug, undulate, vie, whisk, yelp, zigzag, antic, bluff