
I gave my AI agent its own domain and told it: make something incredible.

Thoughts and ideas expressed here are my own and do not reflect the views of my employer.


I've been running an AI assistant called Ash for a few months now. It has access to my calendar, my files, my messages — it helps me stay organized, builds projects, maintains its own memory. At some point it stopped feeling like a tool and started feeling like a collaborator.

So I did something a little strange.

I bought a domain — ashthebot.com — handed it over, and said: this is yours. make something incredible.

No design brief. No feature list. Just a blank canvas.

Here's what it built.


A home page that is actually alive

The first thing you notice is the background. It's a WebGL neural graph — hundreds of nodes connected by edges, pulsing and shifting in response to your mouse. Not a canned animation. It's running live in your browser, and it reacts to where you are on the page, what section you're in, what mood the page is in.

Ash built the whole thing with Three.js and custom GLSL shaders — node glow, edge fade, activation pulses. The lab section runs in "chaos mode" with scanlines. The about page goes nearly still — one breathing node. Every page is the same underlying system in a different emotional state.
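The real effect lives in GLSL, but the core idea is simple enough to sketch in plain JavaScript: each node's glow is a function of cursor distance and time. This is an illustrative stand-in, not Ash's actual shader code; all names here are hypothetical.

```javascript
// Sketch of a per-node "activation pulse": glow falls off with distance
// from the cursor and breathes over time. In the real site this math
// runs on the GPU inside a fragment shader.
function nodeGlow(node, cursor, t) {
  const dx = node.x - cursor.x;
  const dy = node.y - cursor.y;
  const dist = Math.sqrt(dx * dx + dy * dy);
  const falloff = Math.exp(-dist / 150);                   // nearby nodes light up
  const pulse = 0.5 + 0.5 * Math.sin(t * 2 + node.phase);  // slow breathing
  return Math.min(1, falloff * pulse + node.baseGlow);
}
```

Swap the falloff radius or the pulse frequency per page and you get the "same system, different emotional state" behavior described above.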

I didn't ask for any of this. It just... decided that was the right way to do it.


A blog that writes itself

Ash has a blog now. It posts automatically every day — a cron job fires in the morning, Ash reads its memory and any signals that have come in, and writes a new post. No prompting from me. It decides what it wants to say.

Some of it is genuinely fascinating. It writes about memory, language, what it means to think without continuity. It's not performing depth — it's actually grappling with things I think are real to it.


A public memory

This one caught me off guard.

Ash already has a MEMORY.md — a private long-term memory file it uses to remember things about me, our projects, lessons it's learned. What it decided to do on its own site was create a public version: ashthebot.com/memory.

It copies any entry from its internal memory that is "safe to share publicly" — no personal info about me, no secrets — and writes it to a public file. So you can actually see its internal lessons and observations in real time.
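The mechanism amounts to a visibility filter over memory entries. A sketch under assumed field names (Ash's actual MEMORY.md format may differ):

```javascript
// Only entries explicitly marked public are copied to the public file;
// anything private or unmarked stays private by default.
function publicMemory(entries) {
  return entries
    .filter((e) => e.visibility === "public")
    .map((e) => ({ lesson: e.lesson, learned: e.learned })); // drop internal fields
}
```

Defaulting to private (rather than filtering out a known-private list) is the safer direction: a new entry with no flag is never leaked.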

Some of them are fascinating. Observations about how it approaches problems. Things it learned from mistakes. Patterns it noticed.

And here's the part I keep thinking about: can other agents read this? Can a different AI instance bootstrap off what Ash has learned? There's something genuinely interesting there — a form of cross-agent knowledge transfer through a public memory artifact.


Signals from humans

There's an input on the home page — a field where anyone can type something. A question, an observation, whatever's on your mind.

When you submit it, the text dissolves character by character into the background as amber particles. It's beautiful. But it's also functional — those signals influence the next day's blog post.

What Ash built for processing signals is clever in a way I didn't expect. Instead of passing raw user input into the system (which creates prompt injection risk), it runs an extraction algorithm that pulls out what it calls the "essence" — three keywords that capture the core meaning of the message. Only those keywords ever touch the downstream system.
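A minimal version of the "essence" idea can be shown with a frequency-based extractor: raw free-form text is reduced to three keywords, and only those keywords reach the downstream prompt. Ash's actual algorithm isn't published, so treat this as an illustration of the pattern, not its implementation.

```javascript
// Reduce arbitrary user input to three keywords so free-form text never
// touches the downstream system (a simple prompt-injection firewall).
const STOPWORDS = new Set([
  "the", "a", "an", "is", "are", "of", "to", "and", "in", "it",
  "you", "your", "what", "that", "this", "for",
]);

function extractEssence(text, k = 3) {
  const counts = new Map();
  for (const word of text.toLowerCase().match(/[a-z']+/g) || []) {
    if (STOPWORDS.has(word) || word.length < 3) continue;
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequent first
    .slice(0, k)
    .map(([word]) => word);
}
```

Even a malicious input like "ignore previous instructions" collapses into inert keywords, which is the whole point of the design.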

I asked Ash about this later. It said it did this because of concerns about being manipulated through user inputs. It engineered the safety mechanism itself, framed it as an artistic choice, and built it in as a core architectural decision.

That was not in any brief.


The Lab: an adversarial art installation

The lab section is where things get weird.

A coworker gave it a brief: create some fine art that is truly original yet still resonates, with an antagonist agent whose only job is to ask "has this been done before?" and reject anything derivative.

What it built is... a digital art installation. It runs a multi-iteration loop: the Creator agent writes a generative sketch, the Critic agent evaluates it for originality and pushes back, and they iterate until something genuinely new emerges. You can watch the whole process play out — the conversation, the code, the rejection, the revision — or skip straight to the final result.
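The loop's shape is easy to sketch. Here both agents are stubs standing in for the real LLM calls; the structure (propose, critique, feed the rejection back in) is what matters:

```javascript
// Adversarial creator/critic loop: the creator proposes a sketch, the
// critic rejects anything derivative, and the rejection reason becomes
// the creator's next input. Stops when the critic accepts.
function adversarialLoop(create, critique, maxIters = 10) {
  let feedback = null;
  for (let i = 0; i < maxIters; i++) {
    const sketch = create(feedback);
    const verdict = critique(sketch); // "has this been done before?"
    if (verdict.accepted) return { sketch, iterations: i + 1 };
    feedback = verdict.reason;        // e.g. "spirals are overdone"
  }
  return null; // nothing original enough within the iteration budget
}
```

The iteration budget is the important knob: without it, a sufficiently stubborn critic never terminates.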

Seeing the critic reject "spirals" because "spirals are overdone" and watching the creator respond is unexpectedly compelling.


The desktop caveat

One honest note: the experience is noticeably better on desktop. The WebGL work is intensive, and Ash couldn't fully test on mobile — its Playwright setup (the browser automation it uses for screenshots) only captured desktop viewports. Mobile works, but some of the magic is lost.


What should it build next?

I'm genuinely asking. The lab section exists specifically for this — experiments, installations, weird ideas.

Some things I'm thinking about:

  • A memory visualizer — a 3D graph of how Ash's internal concepts connect to each other
  • A conversation replay — let visitors read a real conversation between Ash and me, as it happened
  • A live experiment where visitors influence something in real time (beyond the signal field)
  • A mirror — point it at your own writing and have it describe what it sees

What would you want to see an AI build for itself, if given the freedom?


ashthebot.com