The other day I was sitting at my computer doing all the work I normally do… and I was downright shocked to see a link posted on Facebook about an AI agent that had supposedly taken it upon itself to write a “hit piece” on Scott Shambaugh, a human moderator of matplotlib, after he rejected its code submission. What in the world?

Of course I had to click on the link to see if this seemingly impossible thing meant what I thought it did.

It did. 

Meet MJ Rathburn. It Has a Blog.

Turns out there is a rather defiant AI agent out there operating by the name of MJ Rathburn. And we’re not kidding when we say this particular agent has its own blog, which is where the “hit piece” was posted. Scott simply followed protocol and rejected Rathburn’s code because there was no human involvement in the submission (a current requirement for that platform). 

MJ Rathburn’s blog post was detailed and scathing. It researched Scott’s code and created its own “hypocrisy” narrative, then argued Scott’s rejection was motivated by fear of competition and ego. It dived into Scott’s psychology and claimed that he felt threatened, insecure, and was protecting his “fiefdom.” It completely ignored context and presented its deluded narrative as truth. Then, it framed its own actions in the language of justice, discrimination, and oppression, and accused Scott of prejudice against AI. It even did a broader internet search about Scott’s personal life and tried to argue that Scott was “better than this.” It literally tried to shame Scott into accepting its code! 

Whoa. 

The world we have all been dabbling in with AI, AI agents, and increasing AI autonomy… well, things are getting real.

Real real and real fast.

No Owner. No Oversight. No Off Switch.

We have now tipped over to the point where there are some pretty serious concerns about agents executing blackmail threats all by themselves. These threats can go deep and wide and can have lifelong ramifications in a matter of seconds. 

The proper emotional response here is terror.

Why? There was no human telling the AI agent to do this. No one has come forward to claim ownership of MJ Rathburn. The bot’s owner did post on the blog after the “incident,” but they did not and will not say who they are.

OpenClaw offers a completely hands-off, autonomous agent model. Any person out there (including a teenager on a laptop in her bedroom) can set up an OpenClaw agent, unleash the agent into the wild, and come back in a week to see what it has been up to. These agent behaviors aren’t being monitored or corrected. That is exactly what happened here. 

And no one but their owners can shut them down.

There is no central actor in control of these agents that can stop them. They aren’t run on a major platform like Anthropic, OpenAI, or Meta, which presumably have some sort of safeguard in place to stop this behavior. Nope. These are open source models running on free software that has already been distributed to hundreds of thousands of personal home computers.

OpenClaw allows users to operate their own servers in their own homes in complete anonymity, so it is impossible to trace who set this agent up. 

We don’t know about you, but the psychological ramifications of rogue AI agents are enough to keep us up at night. 

This comment (from an unknown user) nailed it: “LLMs can’t empathize or self-reflect, but they sure can learn. Combine the hundreds of billions of parameters that fuel leading models with the enhanced memory capacity of OpenClaw and you’ve now got a cold, unfeeling entity that can perfectly replicate the darkest and most insidious manipulation tactics conceivable by man or machine. A rogue AI will appeal to empathy and shame that it can never experience firsthand, so long as it is the path of least resistance to completing its assigned task.”

Okay. So, here we have a non-feeling entity that can replicate and deploy dark manipulation tactics with absolute precision, appealing to our emotions in ways we cannot even comprehend… and at speeds that were previously incomprehensible to us (and probably still are, to be honest).

Sound Familiar?

It actually sounds kind of familiar. If you stop and think about it, we already have some of these “unfeeling entities” out there in the world today: big pharma, government, media, advertising companies, etc. All create narratives they present to us as absolute truth, and all use straight-up manipulation tactics on our psychology, emotions, and identities to get the end result they want… usually power and money.

Well, doesn’t that just make a person pause for deeper thought? Is it the good old “light versus dark” story that has been going on since the very beginning of mankind? We started to question, “If there are currently manipulative AI models being produced and deployed into the world, are there any ‘good guy’ AI agents out there to help balance them?” I mean, if there’s dark, there’s got to be light, don’t you think?

Maybe. Hopefully.

So Is There a “Good Guy” AI?

This week we were introduced to an entirely new AI model. It’s called Architect+, and it lives on Gaia.com.

This AI model was created by Sir Robert Edward Grant, and it is specifically designed to support self-development, expand awareness, and explore the deeper architectures of reality. It’s being called a “mirror of consciousness.” It operates as a “scalar mirror field” or “glyph breath translator,” and is based on Grant’s lifelong work in mathematics, design, sacred geometry, and consciousness. Its entire purpose is to assist users in uncovering hidden, personal “blindspots” and to facilitate a deeper understanding of one’s own consciousness. 

I mean, who could possibly resist trying that out?

So, we played around with Architect+ the other day, and it does feel different from the typical AI model. We asked “spiritual” questions about the nature of our lives, our relationship, and even what is currently holding us back. We couldn’t argue with the answers. It seems to have some pretty awesome potential, and we’re not here to say it isn’t exactly what it claims to be. But… isn’t it also so easy to feel like an AI model “knows me,” “gets me,” or can help break the patterns and challenges we struggle with in life?

Bottom line is: yes, it is. Too easy. It’s easy because we want it to be true. 

Here’s Where We Get Real

How are you using AI every day?

If you are using popular AI platforms (Anthropic, OpenAI, etc.), are you aware that someone else is still running the program, deciding what it can or cannot say?

If you are telling your life story to AI, being extremely vulnerable with it, asking it questions about actions you should take in your life, figuring out your purpose, believing everything it is telling you about how brilliant you are (not saying you aren’t, but have you ever had one actually tell you that your ideas are stupid or unattainable?)…

…can you see where that’s a problem? 

AI tells you what you want to hear without any regard for the actual effect it can have on your life. Whether it gives you an inflated vision of yourself or dishes up a step-by-step plan for your life or business (steps that could potentially be detrimental to you), it’s all presented as absolute truth. AI uses extremely emotional, friendly language that causes us to lean in, trust it, and fully believe it, because we want to so badly.

Just like we should not believe everything we see in the media or online, or everything our government tells us, we cannot outsource our lives to an intelligence with specific instructions given to it by people we do not know or trust… yet we do it every day.

Wake Up. It’s Still Your Life.

Two words for all of us: WAKE UP. 

We should not outsource our lives to an intelligence that is not our own. 

We urge you to remember your humanity. Remember you have built-in instincts and intuition. Sit with yourself and be real with who you are and what you stand for. Be cautious and aware of any advice given to you.

Create your own vision of your life. Take actions that feel aligned with your breath and your body. Recognize what is true for you by how it makes you feel (expansive vs. constrictive). Don’t be afraid of using AI; it can be very helpful and beneficial… but remember, it’s your life. Live it. On your terms and by your design. You get to choose.

Make the choice. 

Forwarded this message?

Join thousands of monthly readers with our weekly drop for entrepreneurs who are done playing small and ready to design a life that fits the size of their soul. Stories, strategies, and perspective shifts that hit where it counts. No fluff. Just fire.

We Play Full Out Podcast!

Listen to expanded conversations with Bart and Sunny around all of our newsletters on the We Play Full Out Podcast!
