I recently wrote a pop-science piece on chemistry, "Why Water Puts Out Fire." Did you notice it was actually written by an AI?
What? Fooling my readers? Not at all. It’s not what you think. I didn’t just give a prompt like “write a pop-science article on why water puts out fire” and blindly copy-paste the output. Try it yourself, and you’ll see what a mess that makes.
This article was created using a Standard Operating Procedure (SOP) I developed to turn curiosity into genuine understanding, and then into a high-quality science article.
1. In-depth Discussion
Whenever a topic sparks my interest, I dive into an in-depth discussion with an AI like Gemini 2.5 Pro. I probe the underlying principles to build a systematic understanding, all while fact-checking key details to sidestep AI hallucinations. The goal is to learn something myself before I write anything.
For the article on water extinguishing fire, you can see the full chat history with the AI here: https://qvokpfxqsh.feishu.cn/docx/TRWldvN8uo9VRqxly8Fc30HwnYg
2. Manually Drafting the Outline
Next, I manually draft an outline based on what I’ve learned.
This is a 100% human-driven step. It lets me leverage my science communication skills to structure the narrative, decide when to use metaphors for imagery, and drop in a catchy phrase or two.
Drafting the outline is also my way of using the Feynman Technique. I organize the content based on my mental model of the topic. The act of writing it clarifies anything that was fuzzy before. I think by writing; I need to externalize my thoughts.
Once the draft is done, I feed it to the AI for a fact-check and to get suggestions on structure and flow. The AI often provides valuable new angles, and I’ll refine the outline based on its feedback.
3. AI-Generated Body Text
With the final outline, I prompt the AI to write the full article.
I also provide one of my own articles as a style guide, telling the AI to keep the tone direct and concise.
This part is usually quick and painless. Thanks to the solid groundwork, the AI’s output is typically high-quality and only needs a few minor edits.
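In code terms, this step amounts to assembling the outline, a style sample, and the tone instructions into a single prompt. Here is a minimal sketch; the function name and prompt wording are illustrative, not my exact prompt:

```python
def build_article_prompt(outline: str, style_sample: str) -> str:
    """Assemble a generation prompt from a finished outline and a style reference."""
    return (
        "Write a complete pop-science article strictly following this outline.\n"
        "Keep the tone direct and concise, matching the style sample below.\n\n"
        "=== OUTLINE ===\n" + outline + "\n\n"
        "=== STYLE SAMPLE ===\n" + style_sample + "\n"
    )

prompt = build_article_prompt(
    outline="1. Fire needs heat, fuel, and oxygen\n2. Water removes heat...",
    style_sample="Water doesn't smother a fire so much as starve it of heat...",
)
```

The point is that by this stage all the hard thinking already lives in the outline; the prompt itself is almost boilerplate.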
4. AI-Human Collaboration on Illustrations
Next, I give the finished text to the AI and ask for illustration ideas, including descriptions of what they should show and where they should go. Most of the suggestions are spot-on, and even the ones I don’t use often spark new ideas.
Finally, I take this text, now with image prompts, to an AI Agent platform like Coze. I use its image search and editing tools to populate the article with visuals. I then review them, and if they’re not a good fit, I’ll find or generate better ones myself.
Right now, AI agents are still hit-or-miss. The image relevance and quality are often low, so I end up doing most of the work. But it’s an incremental process. Today it handles 10%, tomorrow maybe 30% or 50%. One day, this part of the job might be fully automated.
And just like that, the article is done and ready to be published.
Final Thoughts
This approach lets me deepen my own understanding—ensuring the content is something I’ve truly learned—while freeing me from the tedious work of fine-tuning sentences.
During an AI workshop with former colleagues, someone asked what I use AI for most. My answer: “Learning.”
AI has compressed nearly all of human knowledge into its parameters. For any curious person, it’s a treasure trove. There’s so much to explore that I can barely keep up. So why are so many people rushing to churn out content and act as mere information pipelines?
As I once mused on X (Twitter):
AI’s productivity is tempting; you always want to be creating something with it.
But if your goal is personal growth, running a media account that just parrots AI content is pointless. True growth comes from creating and processing information yourself.
If your own output is better than AI’s, using AI is a step backward.
Of course, if you’re just in it for the money, that’s another story.
Have you seen the anime “Hitori no Shita: The Outcast”? There’s a memorable, arrogant character named Wang Bing who uses a dark art to devour his opponents’ spirits, only to be unceremoniously crushed, much to the audience’s delight.
But the Wang family’s core idea isn’t entirely wrong. Think of AI-generated information as “spirits.” The difference is, “consuming” them doesn’t harm anyone since they’re infinitely replicable. You can build automated systems to mass-produce content, growing your channels like a spirit army. Or, you can digest the information slowly, permanently upgrading your own skills. Only the latter offers compounding personal growth.
I’ve been learning this way since before the AI boom, though it was much harder then. I keep a massive digital notebook called “TIL” (Today I Learned), where I’ve documented deep dives into everything from cloud classification to uranium enrichment. It now has over 300 entries.
I rarely revisit most of them. Some things stick, others are forgotten. But the act of writing is a powerful memory aid.
In the age of AI, this notebook has become a goldmine. By feeding my TILs into a knowledge base, I can instantly retrieve specific details, bringing dormant knowledge back to life.
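The retrieval side of this needs nothing fancy. A real knowledge base would use embedding search, but even crude word overlap captures the idea; this is a toy sketch with made-up entry titles and text, not the tool I actually use:

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts; good enough for a personal notebook."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def search_til(notes: dict[str, str], query: str, top_k: int = 3) -> list[str]:
    """Rank TIL entries by shared-word count with the query
    (a stand-in for the embedding search a real knowledge base performs)."""
    q = tokenize(query)
    scores = {
        title: sum(min(count, q[word]) for word, count in tokenize(body).items())
        for title, body in notes.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

notes = {
    "Cloud classification": "Clouds are classified by altitude and shape: cirrus, cumulus, stratus.",
    "Uranium enrichment": "Centrifuges separate U-235 from U-238 by spinning uranium hexafluoride.",
    "Why water puts out fire": "Water absorbs heat and displaces oxygen, breaking the fire triangle.",
}
best = search_til(notes, "how do centrifuges enrich uranium")[0]
```

Swap the scoring function for an embedding model and this becomes a real personal search engine over years of notes.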
But that doesn’t mean I’ve outsourced my memory. The brain isn’t for storing atomic facts; it’s for recognizing the patterns that connect them. To find those patterns, you first need to process a lot of facts. If you let an AI do the summarizing, you’re memorizing conclusions without context—they won’t stick. Why else would authors write entire books to explain ideas that fit into a single paragraph?
So, back to writing. Once you have a standardized workflow, the most important part is asking good questions. Get that right, and the rest falls into place.
I am my own Quora.