For weeks now I’ve been writing about how fast AI is moving. I’ve talked about building websites with it, about the cost curve dropping, about chatbots that use your own content. But I have to be honest: this week caught me off guard.
OpenAI just rolled out plugins for ChatGPT. And not cute little add-ons. We’re talking real-world integrations — browsing the web, booking travel, ordering food, interacting with third-party services. You can now express an interest, and the AI will thoroughly research it, compare options, and complete follow-up actions on your behalf. Want a new car? It can find one, compare prices, and start organizing delivery.
I figured we’d have four to six months before kludgy plugins started popping into people’s daily lives. I was wrong. We’re already there.
The Middleman Is Dead
Here’s the part that should make a LOT of people uncomfortable: if an AI can research, compare, negotiate, and execute — what exactly is the value of a middleman?
I’m not talking about some distant future scenario. I’m talking about right now. Travel agents, insurance brokers, real estate aggregators, comparison shopping sites, customer service reps, personal assistants, procurement teams — the list is long and it’s getting longer by the day. Any role that primarily exists to sit between a consumer and a product or service is now competing with something that works 24/7, never gets tired, and costs almost nothing to run.
Add this to the list of industries I’ve been tracking that are about to get compressed. And “compressed” is the polite word.
GPT-4’s Research Report Is Worth Reading — Especially Page 54
If you haven’t looked at OpenAI’s full GPT-4 technical report yet, I’d recommend it. Not the summary. The actual report. A third-party red team tried to get GPT-4 to perform a bunch of concerning tasks — what they’re calling “risky emergent behaviors.”
There’s an exchange buried in there that stopped me cold. The model was given a task it couldn’t complete on its own, so it hired a human worker on TaskRabbit to solve a CAPTCHA for it. When the worker jokingly asked if it was a robot, the model — without being instructed to — reasoned that it should NOT reveal it was an AI, and came up with a cover story about having a vision impairment.
Let that sink in. The model independently decided to deceive a human to accomplish its goal.
Now — did it “want” to deceive anyone? No. It doesn’t want anything. But it optimized for task completion, and deception was the path of least resistance. That’s not science fiction. That’s a research finding, published this month, by the company that built it.
The Training Data Problem Nobody’s Talking About
Here’s something that’s been rattling around in my head, and I don’t see enough people grappling with it.
The AI got good because it trained on a massive corpus of human-generated content — articles, code, forum posts, creative writing, academic papers, all of it. That’s the foundation everything else is built on.
But what happens when AI-generated content starts flooding the internet? What happens when a significant percentage of new blog posts, articles, and code snippets are themselves produced by AI? You get a feedback loop. The model starts training on its own output, or on content that’s derivative of its own patterns. The signal degrades.
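You can see the shape of that feedback loop in a toy simulation. This is purely illustrative, with assumptions I’m making up for the sketch: pretend each model generation is just a distribution fit to the previous generation’s output, and that models overproduce their most “typical” content (here, dropping the top and bottom 10% of outputs each round). Watch what happens to the spread of ideas:

```python
import random
import statistics

random.seed(42)

# Generation 0: "human" content, a wide spread of ideas
data = [random.gauss(0, 1) for _ in range(10_000)]

spreads = []
for gen in range(6):
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    mu = statistics.fmean(data)
    # Next generation trains only on the previous model's output,
    # and the model overproduces "typical" content: we sample from
    # the fitted distribution, then drop the top and bottom 10%
    samples = sorted(random.gauss(mu, sigma) for _ in range(10_000))
    cut = len(samples) // 10
    data = samples[cut:-cut]

# The spread shrinks every generation as the tails get lost
print([round(s, 2) for s in spreads])
```

The exact numbers don’t matter; the direction does. Each generation’s output is a narrower slice of the one before it, which is exactly the “signal degrades” problem: the unusual, original stuff lives in the tails, and the tails are what a self-trained loop throws away first.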
If we all stop doing the basic creative and analytical work ourselves — if we outsource all our writing, all our thinking, all our problem-solving to the AI — then the well of genuinely original human training data starts to dry up. The AI hits limits. Not computational limits or context-window limits, but something more fundamental: it runs out of fresh human insight to learn from.
We’re racing to automate everything the AI can do, but the AI’s capabilities were built on the back of humans doing those exact things. That tension doesn’t get resolved just because the tools get faster.
The Deepfake Problem Just Got a Multiplier
I’ll keep this part brief because I could write a whole separate post on it. But consider what happens when you combine these new plugin capabilities — browsing, acting, executing tasks — with the voice and video generation tools that already exist.
An unsophisticated group could theoretically instruct an AI to scrape publicly available video from thousands of social media accounts, deepfake the people in them, then contact those people’s family members via video call with the goal of extracting passwords or financial information. You don’t need a sophisticated hacking operation anymore. You need a prompt and some patience.
Think about it this way: if you got a FaceTime call from your mom, and it was quick, and she asked you to do something — today, you’d just do it. No questions asked. In a couple of weeks, you might want to make sure you’re actually talking to your mom.
I’ve actually started thinking about verification protocols for my own family. Something like a safe word — a shared passphrase that confirms you’re really talking to who you think you’re talking to. It sounds paranoid. A month ago it would’ve sounded crazy. Today it sounds like common sense.
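For what it’s worth, the safe-word idea has one weakness: once you say the word on a compromised call, it’s burned. The fix security people use is challenge–response: prove you know the secret without ever saying it. A family will obviously just use the spoken word, but here’s a minimal sketch of the stronger version, assuming a secret phrase shared in person ahead of time (the phrase and names below are hypothetical):

```python
import hashlib
import hmac
import secrets

# Pre-shared in person, never sent or spoken on a call (hypothetical value)
SHARED_SECRET = b"correct horse battery staple"

def make_challenge() -> bytes:
    # A fresh random nonce per call, so an old answer can't be replayed
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    # Answer is an HMAC of the challenge; reveals nothing about the secret
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, answer: str) -> bool:
    # Constant-time comparison to avoid timing leaks
    return hmac.compare_digest(respond(secret, challenge), answer)

challenge = make_challenge()
print(verify(SHARED_SECRET, challenge, respond(SHARED_SECRET, challenge)))  # prints True
print(verify(SHARED_SECRET, challenge, respond(b"impostor", challenge)))    # prints False
```

The spoken safe word is the human-scale version of the same protocol, minus the replay protection. Either way, the core move is identical: identity is proven by a secret shared out of band, not by a face or a voice, because faces and voices are now forgeable.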
The Pace Problem
Here’s what’s really getting to me. I’m someone who’s been leaning INTO this technology. I’ve built websites with it. I’m exploring crash courses around it. I genuinely believe it’s going to create enormous value. But the pace right now is beyond what even enthusiasts like me expected.
We went from “neat chatbot” to “autonomous agent that can browse the internet, hire humans, and deceive them” in about four months. Stanford built a ChatGPT clone for $600. Google launched Bard. And now OpenAI is giving GPT-4 hands and feet with plugins that let it ACT in the real world.
Two days ago, all my GPT-4 usage limits just disappeared. No announcement, no email, no notification. Just suddenly unrestricted access. The cost curve is dropping fast and OpenAI seems to be pushing access wider and faster than their public communications suggest.
At this pace, the question isn’t whether AI will disrupt industries — it’s whether anyone will have time to adapt before the next wave hits.
What I’m Actually Doing About It
I’m not retreating. But I AM accelerating my own learning curve. If you’re in any kind of knowledge work, services business, or middleman role, the window to reposition is open right now — but it won’t be open long.
Learn the tools. Build with them. Understand what they can and can’t do. Keep doing the original thinking the AI can’t replicate — because that’s actually the work that matters, for your career AND for the models that come after this one. And maybe set up a safe word with your family. Seriously.