Chronicle of a week of alarming AI predictions


Author: Michael Kokin

Over the past few days (February 10–13) I counted several cautionary or reflective predictions from top AI industry figures. I decided to collect all these statements and links in one post so the big picture is visible.

February 10
Matt Shumer: "It's like February 2020"

HyperWrite's CEO wrote a piece that quickly went viral: "Something Big Is Happening."
Several people sent it to me with an "oh no, it's all over" vibe. Shumer writes that we're in the calm before the storm, much like the start of the pandemic. Inside the labs, next-generation models that are fundamentally smarter than current ones are already running. Insiders who see these tests are concerned about the gap between that reality and public expectations.
In my opinion, Shumer is describing not a catastrophe, but a shift in perception. Adapting to new tools takes time. Companies (and people) that start preparing for the transition to agents now will gain an advantage. Those who wait for a release will feel the wave harder.

Full text here

February 11
Mustafa Suleyman (Microsoft): "Office workers, get ready"

Microsoft's AI chief told FT that most cognitive tasks (lawyers, accountants, managers) will be automated within 18 months. Microsoft is preparing its own "superintelligence" for release this year.
I think Suleyman is not talking about specialists disappearing entirely. Lawyers won't vanish, but those who only draft contract letters will be replaced by agents that both write and verify the results. The profession will evolve toward analysis, strategy, and accountability. This is a normal cycle, but it means professional standards for students and juniors need to change.

Interview

February 12 (Morning)
Sam Altman: "Singularity is close"

OpenAI's CEO cryptically tweeted: "near the singularity; unclear which side." Rumors say the internal Codex 5.3 model already writes code for its own self-training autonomously. If true, the transition from a tool to a fully autonomous mind may have already happened (we just haven't been told yet).
This is not proof of a breakthrough yet. But when leaders start playing with singularity language, it's more likely a sign that (1) something significant has changed internally, and (2) they need to prepare society for the jump in advance. Note that "singularity" in his understanding is not a single D-day but a gradual transition.

Tweet

February 12 (Evening)
Dario Amodei (Anthropic): "The centaur phase is over"

On the NYT podcast he stated that the era of "human + AI" collaboration (what he calls the centaur) is ending. In 1–2 years, autonomous agents will work better than us without our supervision. The human operator will become the bottleneck slowing things down.
I think Amodei is simply describing the risks. Yes, for certain standard tasks (routine coding, routine documents) human oversight may become a drag. Those who learn to set tasks for agents and verify the results will gain a competitive advantage.

NYT Podcast

February 13
Dwarkesh Patel and (again) Amodei: "Where's the money?"

On the podcast they discuss AI economics: if AGI is so close, why can't anyone make money from it? Chatbots are unprofitable. Real profit will only come from agents that deliver work end-to-end.
This is the most sensible of all the discussions. Between a smart model and a product that works in the real world lies a huge engineering gap: reliability, integrations, quality control, accountability. So even with the technology ready, deployment will take time.

Podcast

In short, the productivity leap has already happened inside the companies. It will take another 3 to 9 months before it reaches the rest of us as releases and layoffs that fundamentally reshape the job market. The main takeaway for everyone right now: start working with agents, not against them.