Illustration: Sarah Grillo/Axios
Architects of the leading generative AI models are abuzz that a top company, possibly OpenAI, will announce a next-level breakthrough in the coming weeks: Ph.D.-level super-agents that can do complex human tasks.
- We’ve learned that OpenAI CEO Sam Altman — who in September dubbed this “The Intelligence Age,” and is in Washington this weekend for the inauguration — has scheduled a closed-door briefing for U.S. government officials in Washington on Jan. 30.
Why it matters: The expected advancements help explain why Meta’s Mark Zuckerberg and others have talked publicly about AI replacing mid-level software engineers and other human jobs this year.
“[P]robably in 2025,” Zuckerberg told Joe Rogan 10 days ago, “we at Meta, as well as the other companies that are basically working on this, are going to have an AI that can effectively be a sort of midlevel engineer that you have at your company that can write code.”
- “[O]ver time, we’ll get to the point where a lot of the code in our apps, and including the AI that we generate, is actually going to be built by AI engineers instead of people engineers,” he added.
Between the lines: A super-agent breakthrough could push generative AI from a fun, cool, aspirational tool to a true replacement for human workers.
- Our sources in the U.S. government and leading AI companies tell us that in recent months, the leading companies have been exceeding projections in AI advancement.
- OpenAI this past week released an “Economic Blueprint” arguing that with the right rules and infrastructure investments, AI can “catalyze a reindustrialization across the country.”
To be sure: The AI world is full of hype. Most people still struggle to get the most popular models to truly approximate the work of humans.
- AI investors have reason to hype small advancements as epic ones, juicing valuations to help fund their ambitions.
- But sources say this coming advancement is significant. Several OpenAI staff have been telling friends they are both jazzed and spooked by recent progress. As we told you in a column Saturday, Jake Sullivan — the outgoing White House national security adviser, with security clearance for the nation’s biggest secrets — believes the next few years will determine whether AI advancements end in “catastrophe.”
The big picture: Imagine a world where complex tasks aren’t delegated to humans. Instead, they’re executed with the precision, speed, and creativity you’d expect from a Ph.D.-level professional.
- We’re talking about super-agents — AI tools designed to tackle messy, multilayered, real-world problems that human minds struggle to organize and conquer.
- They don’t just respond to a single command; they pursue a goal. Super-agents synthesize massive amounts of information, analyze options and deliver products.
A few examples:
- Build from scratch: Imagine telling your agent, “Build me new payment software.” The agent could design, test and deliver a functioning product.
- Make sense of chaos: For a financial analysis of a potential investment, your agent could scour thousands of sources, evaluate risks, and compile insights faster (and better) than a team of humans.
- Master logistics: Planning an offsite retreat? The agent could handle scheduling, travel arrangements, handouts and more — down to booking a big dinner in a private room near the venue.
This isn’t a lights-on moment — AI is advancing along a spectrum.
- These tools are growing smarter, sharper, and more integrated every day. “This will have huge applications for health, science and education,” an AI insider tells us, “because of the ability to do deep research at a scale and scope we haven’t seen — then the compounding effects translate into real productivity growth.”
The other side: Generative AI’s Achilles heel remains the way it makes things up. Reliability and hallucinations become an even bigger problem when AI is turned into autonomous agents: Unless OpenAI and its rivals can persuade customers and users that agents can be trusted to perform tasks without going off the rails, the companies’ vision of autonomous agents will flop.
- Noam Brown, a top OpenAI researcher, tweeted Friday: “Lots of vague AI hype on social media these days. There are good reasons to be optimistic about further progress, but plenty of unsolved research problems remain.”
What to watch: Two tectonic shifts are happening at once — President-elect Trump and MAGA are coming into power at the very moment AI companies are racing to achieve human-like or human-surpassing intelligence.
- Look for Congress to tackle a massive AI infrastructure bill to help spur American job growth in the data centers, chips and energy needed to power AI.
- And look for MAGA originals like Steve Bannon to argue that coming generations of AI will be job-killing evil for managerial, administrative and tech workers. The new models “will gut the workforce — especially entry-level, where young people start,” Bannon told us.
Axios’ Scott Rosenberg, managing editor for tech, contributed reporting.