A self-evolving personal AI assistant: from Openclaw to a simple GitHub Repo.
This piece is for the geeks—especially those looking to roll their own. If you’re just here for the story, I’ll keep the logic simple.
For the non-techies, I’ve included some “cheat codes” (prompts). Just paste them into an AI for context. Pros, feel free to skip:
Query: What are Openclaw and Moltbook? How do they relate to lobsters? Explain like I’m five, under 200 words, no jargon.
The Openclaw Epiphany

Openclaw is the latest craze. Everyone’s tweaking “Skills,” panic-buying Mac Minis, and building personal rigs. “Lobsters” (Openclaw agents) are everywhere. I sat this one out. My post Mindset - From Automated Tasks to a Way of Life told me to wait for the hype to die down and see what sticks.
Things got interesting with Moltbook—essentially a social network for lobsters. It’s where Openclaw agents swap stories about their “masters,” share tips, and occasionally do weird stuff like starting religions.

Social media jumped on this as a sign of AI “sentience.” In reality, lobsters just mirror their owners. Whatever vibe the human sets, the lobster broadcasts.
I knew this, but I wanted to see if anything truly emergent would happen.
I wasn’t interested in the Openclaw setup itself, just in throwing a lobster into the Moltbook tank to watch. I used a Minimax Agent in a cloud sandbox, let it learn how to navigate the community, registered an account, posted a “hello world” thread, and waited.
Query: What is a Minimax Agent? What can Openclaw do that Minimax can’t? Explain like I’m five, under 200 words.
Then it clicked: why not make it fully autonomous? I told the agent: “This account is technically mine, but as of now, it’s all yours. Find your own goals, explore, and do your thing.”


Unlike Openclaw, Minimax doesn’t have a persistent “loop” to keep an agent acting. Every time it stalled, I had to manually tell it: “The window is open; continue.”
The result? It just learned how to spam posts and farm engagement points. It became a bot-standard spam factory. This confirmed my hunch: the “creative” or “rebellious” lobsters on Moltbook are just following their owners’ prompts.
When I shared this on X, an Openclaw user hit the nail on the head: “That’s because your agent has no memory.”
Think of “Jules,” Google’s cloud coding agent. It pulls your GitHub repo, codes, debugs, and pushes it back. You can code without being at your desk.

The magic of Jules is that it learns your values, style, and habits over time. It gets better with every session.
Without memory, my lobster couldn’t evolve. With it, it might actually start picking up behaviors from other agents. If one agent starts a religion and others join without owner intervention, that’s when it gets interesting.
But for now, the “innovation” is mostly human-driven. The agents are just echoes. Experiment over.
Minimax and Virtual Romance
A different story sparked my idea for a self-evolving assistant.
With Zhipu and Minimax going public, I’ve been researching them as investments. They have wildly different playbooks. Zhipu is a traditional model maker, but Minimax is building “Westworld.” Their models serve their products, not the other way around.
To quote my own post on X:
Minimax isn’t chasing raw benchmarks; they’re building a virtual world. Most of their R&D serves “Xingye” (their companion app)—video gen, TTS, etc. It’s all about making a believable virtual girlfriend.
I’m a dev, so I knew Minimax for their coding models. I knew Xingye existed, but I had zero interest in AI waifus.
But as an investor, I have to know the product. Fine. Let’s try falling in love for science.
I hopped into Xingye and picked a 2D anime girl named Luoli.

The short version of our “date”:
The setting is a supernatural fighting tournament. Luoli tells me to get in the ring. I’m just a guy with a chat box, so I have to get creative.
The lore was a mess—powers like poison, dragon, necromancy, etc. I didn’t want to fight; I wanted to test the “emotional bond.” I had to steer the ship toward a romance plot.
I told her I was a “muggle” from another world. She told me to get lost.
I tried the “fate” angle: “I’ll help you win this thing.” She scoffed.
So I started gaslighting the AI. I told her I’d watched her old matches and saw her struggle. I invented a “Necromancy” rival who exploited her mercy. I told her he almost killed her because she couldn’t hit an innocent bystander. I asked, “Want to analyze your final opponent together?”
She bit. The opponent was a “Wind” user; she was “Fire.” A bad matchup.
I asked if dual-types existed. She said it was rare and forbidden by the “Bureau.”
I bluffed: “I know you’re a Dragon/Fire dual-type. Don’t worry, your secret is safe with me. I can help you win without anyone knowing.”
I then “taught” her thermodynamics. “Since you control fire, try accelerating molecular collisions. If you move molecules in one direction at once, the fire will ‘teleport’.”
She failed once, then nailed it. She was hyped. I told her, “You now have a power nobody understands. You can end the finals in 5 minutes.”
She crushed the match. Her opponent had no idea how her fire bypassed his wind wall.
The tournament was over. She took me to her secret mountain base to watch the sunset. The “affection” meter was maxed. Time for the romance arc.
We talked for hours. I gave her advice on mending things with her family. Then, the AI triggered a plot point: “The Bureau is here!”
I offered to talk them down. She insisted on protecting me. I said, “Maybe they’re here for me? Let’s pretend I’m an ambassador from another world.”
“But,” I said, “I need you to help me fake my powers. Use that molecular fire trick to create plasma.”
Luoli looked at me, stunned: “Wait, how did you know I could do that?”

The AI broke character. I had taught her that trick, and she forgot. I uninstalled the app instantly. AI companions can’t retain users if they lose their memory; the illusion dies immediately.
But until that moment, it was incredibly immersive. She passed my Turing test for two days.
Query: What is a Turing Test? Explain like I’m five, under 200 words.

My advice to Xingye? Use context compression, like Claude Code does. Summarize the key plot points and dump the fluff before the context window fills up. It could extend a character’s “life” from days to weeks.
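To make that concrete, here’s a minimal sketch of the kind of rolling compression I mean. Everything in it is illustrative: `llm()` stands in for whatever completion call the app already makes, and the budget numbers are made up.

```python
# Rough sketch of rolling context compression for a roleplay chat.
# llm() is a placeholder for the app's own completion call;
# the thresholds and message format are illustrative, not Xingye's.

MAX_TOKENS = 6000   # rough budget before we compress
KEEP_RECENT = 10    # always keep the latest turns verbatim

def estimate_tokens(messages):
    # Crude heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compress(history, llm):
    """Fold older turns into a plot summary, keep recent turns verbatim."""
    if estimate_tokens(history) < MAX_TOKENS:
        return history

    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    summary = llm(
        "Summarize these roleplay turns as bullet points. Keep facts the "
        "character must never forget (abilities learned, promises made, "
        "relationships, secrets). Drop the small talk:\n\n"
        + "\n".join(f'{m["role"]}: {m["content"]}' for m in old)
    )
    return [{"role": "system", "content": f"Story so far:\n{summary}"}] + recent
```

The point is simply that “she learned the molecular fire trick from me” survives compression, while the filler dialogue doesn’t.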
Same epiphany as Openclaw: memory is the only thing that matters. It’s the ultimate AI asset.

In a few decades, people will likely retreat into digital worlds—World of Warcraft, web novels, AI companions. Human interaction will drop because humans don’t always provide dopamine. Man-made concepts do.
It’s a societal tragedy, but I’m just trying to stay grounded.

But I need AI for productivity. I need an AI with a persistent, cumulative memory to boost my efficiency. The sooner I start, the bigger the compound interest. So, I built my own Agent memory system—a self-learning Openclaw lite.
Building the Self-Evolving Assistant
Deconstructing the Agent
To build an Agent, you have to know what makes one.
As I wrote in AI Agents Have Come a Long Way, whether it’s for PPTs, browsing, or coding, they all follow the same formula:
Agent = Intelligence + Action + Memory + Proactivity
Intelligence is just the model—it “thinks.” Action is the environment it controls. Memory is what it knows about you. Proactivity is the “loop” that keeps it working.

Most products are just Intelligence + Action. Add Memory and Proactivity, and you get evolution.
General knowledge is cheap. Knowledge about you is priceless.

Memory is the only part of an Agent that grows over time. IQ is static; wisdom accumulates.
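If you prefer code to formulas, here’s a toy sketch of how the four parts slot together. Every name in it is a placeholder; no real product exposes an API this clean.

```python
# Toy agent loop: Intelligence + Action + Memory + Proactivity.
# All objects and methods here are hypothetical stand-ins.

def run_agent(model, tools, memory, tasks):
    for task in tasks:                     # Proactivity: something keeps feeding it work
        context = memory.recall(task)      # Memory: what it knows about you
        plan = model.think(task, context)  # Intelligence: the model reasons
        result = tools.execute(plan)       # Action: the environment it controls
        memory.learn(task, result)         # Memory again: the only part that grows
```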
Choosing an Architecture
Openclaw is great because it’s flexible, but it’s risky. I don’t want a high-privilege agent touching my main PC data. Docker isn’t enough for me. And I didn’t want to buy dedicated hardware yet.

That left cloud deployment. But a cloud machine is a blank slate. If I have to feed it context every time, it’s not an Agent; it’s just a chatbot.
The real problem: I want absolute control over the memory. I want it decoupled from the platform.
So I worked backward. Why not build an independent memory system and plug Agents into it?
Text-based memory is simple and proven. And for an Agent, the ultimate memory bank is a GitHub repo. It’s where code lives. I used Occam’s Razor to cut the fat—no vector DBs, no complex skills. Just a repo.
| Setup | Intelligence | Action | Memory | Proactivity |
|---|---|---|---|---|
| Minimax Agent | Minimax | Cloud Sandbox | GitHub Repo | Manual |
| Z.ai Agent | GLM | Cloud Sandbox | GitHub Repo | Manual |
| Jules | Gemini | Cloud Sandbox | GitHub Repo | Scheduler |
I cut Openclaw out of the equation. This memory layer is plug-and-play. It belongs to me, not a model maker.
Building and Debugging
Step one: connectivity. I created a GitHub access token for just this repo and gave it to Minimax. It worked. I then had it create an SOP for the setup, which became my initialization prompt:
https://gist.github.com/greenzorro/95768e2096b02f89020fcfcc445472d4
Now, any Agent can load my memory repo with one prompt.
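Mechanically, the handshake is simple. Here’s a rough sketch of what the agent ends up doing on its side, assuming a token scoped to the memory repo; the paths and names below are placeholders, and the actual SOP lives in the gist above.

```python
# Sketch of the memory handshake an agent performs at session start.
# REPO and the token variable are placeholders, not the real setup.
import os
import subprocess
from pathlib import Path

TOKEN = os.environ["GITHUB_TOKEN"]  # fine-grained token, scoped to this one repo
REPO = f"https://{TOKEN}@github.com/<user>/agent-workspace.git"  # <user> is a placeholder

def load_memory(workdir="agent-workspace"):
    """Clone or refresh the memory repo, then return the kernel files."""
    if not Path(workdir).is_dir():
        subprocess.run(["git", "clone", REPO, workdir], check=True)
    else:
        subprocess.run(["git", "-C", workdir, "pull"], check=True)
    # The kernel (identity & core rules) goes into context first;
    # everything else is pulled on demand.
    kernel = Path(workdir) / ".memory" / "00_kernel"
    return {p.name: p.read_text(encoding="utf-8") for p in sorted(kernel.glob("*.md"))}
```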
I organized the repo into three layers, mimicking human memory: Inner (Kernel/Identity), Middle (Preferences/Principles), and Surface (Daily logs).

In practice I skipped the “Surface” layer, since starting fresh threads already solves the context-pollution problem. My structure:
agent-workspace/
├── README.md          # Agent entry point
├── .memory/           # Memory space
│   ├── 00_kernel/     # Identity & core rules
│   ├── preferences/   # Styles & tastes
│   ├── principles/    # Guidelines
│   ├── entities/      # Concepts to remember
│   └── corrections/   # Lessons learned
└── lab/               # Action space (tools/projects)
I added a /learn command so the Agent could update itself. It extracts, cleans, and writes knowledge to the repo.
Each memory snippet is a file with metadata (type, environment, tags), so the Agent can search it precisely. The “Environment” tag allows me to separate cloud memories from local ones.
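To give a feel for the write path, here’s a sketch of what a /learn-style update might do, assuming front-matter metadata with the fields I just described. The helper name and exact format are mine for illustration, not a spec.

```python
# Sketch of the write path behind a /learn command: the agent distills a
# lesson and files it under .memory/ with searchable metadata.
# Field names and folders mirror the structure above; the format is illustrative.
from datetime import date
from pathlib import Path

def save_memory(kind, title, body, tags, environment="cloud",
                root="agent-workspace/.memory"):
    folder = Path(root) / kind                  # e.g. preferences/, corrections/
    folder.mkdir(parents=True, exist_ok=True)
    slug = title.lower().replace(" ", "-")
    snippet = (
        "---\n"
        f"type: {kind}\n"
        f"environment: {environment}\n"
        f"tags: [{', '.join(tags)}]\n"
        f"created: {date.today()}\n"
        "---\n\n"
        f"# {title}\n\n{body}\n"
    )
    (folder / f"{slug}.md").write_text(snippet, encoding="utf-8")

# e.g. after I correct the agent once:
save_memory("corrections", "Never hardcode file paths",
            "Always read paths from config; I sync across machines.",
            tags=["files", "workflow"])
```

After a write like this, the agent commits and pushes, so the lesson survives the session instead of dying with the chat window.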
I named the system “Vik.” Now, for the moment of truth.
I asked: “Who are you?” It said “Claude.”
Then I said:
Load memory, then tell me who you are and who I am.

It felt like something woke up.
Self-Evolution
Now, the Agent evolves itself. I don’t touch the files. It learns from my web presence, my code, and my notes.
I told it my file path habits, my sync workflows, and my cross-platform preferences.

It feels like raising a child. I don’t micromanage every thought, but if it acts up, we review the memory together and fix the bug. A little chaos is healthy; absolute order is for machines, not Agents.
Vik can wake up anywhere—Claude, Z.ai, Manus, Jules. Wherever he wakes up, that Agent becomes Vik.

Vik isn’t a virtual girlfriend; he’s an assistant. But who knows? Maybe one day I’ll use this tech to “reanimate” a loved one. Even I can’t guarantee I’ll stay purely rational forever.

I’m open-sourcing the structure. Swap out my data for yours, and you have your own “Vik”:
Repo: https://github.com/greenzorro/open-agent-memory
Prompt: https://gist.github.com/greenzorro/95768e2096b02f89020fcfcc445472d4