← back to writing
January 29, 2025 · essay

ChatGPT can order me groceries now. Who really cares?

Agentic AI systems seem like a party trick until you extrapolate the trend. Why Operator signals something much bigger than convenience.


Recently, OpenAI announced their new “Operator” agent. What makes Operator different from OpenAI’s other products is that it’s what’s called an agentic system. Unlike ChatGPT, Operator is not confined to a little chatbox. It can navigate the web and take actions in the very real digital world. You want to book a dinner reservation? Operator can handle that for you. Don’t feel like navigating a million websites to pay your bills? Operator’s got you covered.

Talking to many of my “non AI-enthusiast” friends, this new feature seems to be of little significance to them. “Who cares if it can order me groceries? Sure it’s convenient, but that’s a pretty basic task.” Yes, that is certainly true. Ordering groceries is a pretty straightforward thing to do. But I think people are failing to extrapolate the implications of a tool like this.

Agentic systems are a pivotal advancement towards AGI (artificial general intelligence). While Operator might only be able to handle simple tasks, that isn’t going to be the case for long. As a field, AI research is accelerating at an exponential pace. A few years ago, the best models in the world were about as smart as your average middle schooler. Now, they’re as smart as a PhD student, not just in a single field, but across numerous fields. Think about that. A single entity that can rival PhDs across math, physics, biology, chemistry, etc. Take a genius with an IQ of 150, give them a century of schooling, and that’s the level of competency that we’re talking about.

Now, the best models are still limited by how well they can think long term. Yes, they can solve graduate-level physics problems, but they can’t reason over long time horizons. Though a model might be as knowledgeable as a physicist AND a biologist AND a mathematician, it can’t rediscover Einstein’s general relativity, or Darwin’s theory of evolution, or solve the Riemann Hypothesis. Real-world problems. Those kinds of problems require complex reasoning over long time horizons. This long-term reasoning has proven to be a difficult hurdle on the path to AGI. At least until recently.

Over the past year or so, AI researchers have made astounding breakthroughs on this problem. What they’ve done is a bit too technical to discuss here, but it’s become more or less clear which direction will make long-term reasoning work. It won’t be long until an agentic AI system can be given some abstract task, reason through how to achieve it, and decompose its primary goal into the necessary subgoals. And when I say it won’t be long, I mean months, not years.

Now, this long-term reasoning is slightly different from Operator’s ability to navigate the web, but the two go hand in hand. Very soon, there will be models that can be given a general task like “help increase sales,” and go out into the digital world to achieve it. Just think about the implications of that. Ignoring the very real security concerns of letting these AIs roam autonomously about the internet, what are you going to do if your job involves sitting at a computer 90% of the day? Why would a business pay you tens of thousands of dollars a year when it can get an AI agent to do the same thing for orders of magnitude less money, orders of magnitude faster? While these systems won’t be perfect, they’ll be pretty damn good. What would’ve taken a team of 10 people to achieve might now require a single person overseeing 9 Operator agents.

Now you might say, “Well sure, maybe something like Operator can replace a secretary’s job, but my job has a lot more nuance involved.” This mindset, I feel, is once again looking at the current state of AI and failing to extrapolate a very consistent trend. For decades, AI researchers thought a system that could beat the best chess players in the world was the pinnacle of intelligence. Many doubted it was possible. Then, in 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion. “Well sure, beating a chess champion is impressive, but chess is a relatively simple game. Something like the Chinese game Go is a lot more nuanced. Surely an AI will never beat the best Go players.” That held until 2016, when Google DeepMind’s AlphaGo defeated Lee Sedol, one of the greatest Go players in the world. Over and over, the things supposedly only humans could do turn out not to be things only humans can do. And the time it takes us to knock down each of these hurdles keeps getting shorter. In less than a decade, we’ve gone from a handful of games to everything from art, to writing, to math, to programming.

Interestingly, in the near-term future, it seems blue-collar jobs are the ones actually safest from the impending upending of the global workforce. This ties back to the fact that data is the lifeblood of AI, and it turns out to be pretty hard to collect enough data for robots to match the dexterity and skill of humans in the physical world. Unfortunately (or fortunately, depending on who you are), that hurdle is also beginning to fall. Between all the data-collection-oriented startups and the attempts at humanoid robots, we’re already starting to see this. I expect that, with AI itself aiding developments in robotics, systems that operate in the physical world will be just as pervasive a decade from now (if not more so) as their digital counterparts are today.

But when physical AI systems become so prevalent, what happens to labor with any sort of physical component? And this goes for nearly all physical labor. Everything from truck drivers to surgeons. What then?

With the exception of an exceedingly small number of individuals, your job is not safe (mine included). Right now, what these systems can do in the real world is relatively limited. But soon, it won’t just be secretaries losing their jobs; it will be software engineers. And then consultants, and then investment bankers, and then production managers, and then lawyers, and then doctors, and so on and so on.

This isn’t some distant future; this is happening today.

As a student myself, I can’t help but think, “What’s going to happen to the millions of young people out there who think their efforts will result in them landing a job and securing that future they want?” Whether it’s getting a bachelor’s in business administration, pursuing a PhD in molecular biology, going to trade school to become a mechanic, or even practicing coffee-making to become a barista, it’s hard to foresee a future where these efforts actually yield anything tangible for them.

Quite frankly, I’m extremely worried about what the future will most likely look like. We are grossly underprepared for the world we are hurtling towards. What’s worse, I really don’t see a clear way out. Calls to halt, or at least slow, AI research have fallen, and will continue to fall, on deaf ears. Considering the current geopolitical and economic climate, I see no reason for this to stop. This entire situation — in which we may very likely be facing a massive global unemployment crisis — is just one of many issues we will have to address. And this says nothing of AI being used by rogue organizations for malicious ends, or by world leaders to establish enduring authoritarian states, or a million other risks.

And of arguably greater concern is that facing all these issues could be considered the “good” outcome. It all assumes we are able to keep these ever-improving systems under control. It will not be long until artificial intelligence is the superior intelligence. If humanity’s impact as a species tells us anything, it’s that the smarter ones almost always win. And as I’ve said before, this is all rapidly moving from science fiction to the real world. Artificial intelligence poses an existential risk as real as climate change or nuclear war, but one that will act over a much shorter time span.

I truthfully do not know how to end this essay. I find it hard to say that I think things will turn around. Nonetheless, I’d like to remind the reader that this doesn’t have to be the world we live in. There’s a world where AI can help cure all diseases, end global poverty, and finally understand how our own minds work. It takes work to get there, but it’s possible.

Dario Amodei, CEO of Anthropic, has an amazing essay called “Machines of Loving Grace” on this. I highly recommend it.