AI out of control? How a single article is sending shock waves with an apocalyptic warning

Be afraid. Be very afraid.

That’s the message that has caught fire in the media-tech world when it comes to artificial intelligence (AI).

This column, for what it’s worth, is being written by a fallible human being on a battered keyboard with no technological assistance.

It’s extremely rare, once in a blue moon, that I read a piece that completely changes my view of an issue.

Like most people, I have viewed the rise of AI with a mixture of concern, skepticism and bemusement.


It’s fun to conjure up images on ChatGPT, for instance, and I get that some people use it for hyperspeed research. But then you hear anecdotes about AI screwing up math problems or spewing stuff that’s simply untrue.

Sure, we’ve all seen warnings that this fast-growing technology will cost some people their jobs, but I assumed that would be mainly in Silicon Valley. The era of plane travel didn’t wipe out passenger trains or buses, though it was curtains for the horse-and-buggy business.

But now comes Matt Shuman, who works in AI, and he’s not simply joining the prediction sweepstakes. He tells us what is happening right now.

Last year, he says, "new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise."

On Feb. 5, two major companies, OpenAI and Anthropic, released new models that Shuman likens to "the moment you realize the water has been rising around you and is now at your chest."

Bingo: "I am no longer needed for the actual technical work of my job. I describe what I want built in plain English, and it just ... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave."

Wait, there’s more. The new GPT model "wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter."

This goes well beyond the geeky world of techies, in case you were feeling immune. "Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think ‘less’ is more likely."


My knee-jerk reaction is, well, I’ll be okay because no super-smart bot could talk about news on TV or podcasts with the same attitude and verve that I do. Then I remember, even as a writer, that news organizations are increasingly relying on AI.

What about musicians who bring soul to their rock ’n’ roll or bop to their pop? Well, the most popular AI singer is Xania Monet. Some fans were stunned to discover she isn’t real, though she was created by an actual poet, Telisha "Nikki" Jones, and most listeners didn’t care. In fact, "Xania" now has a multimillion-dollar recording deal.

One other sobering thought: "Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years."

Gulp.

This has really hit the media echo chamber, reverberating from Axios to the New York Times to the Wall Street Journal, among others.

The fact that Matt Shuman presents this in a measured tone, not a sky-is-falling shout, adds to his credibility.

Anthropic, for its part, released a study defending its Claude Opus model "against any attempt to autonomously exploit, manipulate, or tamper" with a company’s operations "in a way that raises the risk of future catastrophic outcomes."

The report added: "We do not believe it has dangerous coherent goals that would raise the risk of sabotage, nor that its deception capabilities rise to the level of invalidating our evidence."


Meanwhile, National Review provides a counterweight to what's called "doomerism."

For one thing, "most predictions anticipate that AI will be a top-down disruption rather than a bottom-up phenomenon."

For another, writes Noah Rothman, "there is almost no room in the discourse for undesirable outcomes that fall short of catastrophism. After all, modesty and prudence do not go viral."

And what about the positive impact?

"Rather than wiping out whole sectors, it is just as possible that the workers displaced by AI will be retained in the sectors in which they’re already employed.

"It defies logic to assume that an industry that grows as rapidly as AI is predicted to will not need human data scientists, research analysts, specialized engineers, and, yes, even support and administrative staff. In addition, sectors such as health care, agriculture, and emerging industries will require as much, or even more, human talent than they currently employ."

The conservative magazine is also annoyed that "participants in this debate default to the assumption that the only solution to AI’s disaggregating potential, whatever its scale, is big government."

Well, take your pick.

If AI, which can now code well enough to reproduce itself, doesn’t wipe out zillions of jobs, or society finds ways to adapt, we can all breathe a very human sigh of relief.

And if artificial intelligence is as destructive as Shuman’s alarming article says it already is, we can’t say we weren’t warned. But perhaps we can harness it to do our jobs for us while we work three days a week with three-hour lunches.

I’m agnostic at this point, except to say it’s going to be a wild ride.

Ken Paxton sues Dallas over alleged failure to fund police as required by Proposition U

Texas Attorney General Ken Paxton announced he has filed a lawsuit against officials in Dallas, alleging the city failed to properly fund its police department as required by a voter-approved public safety measure.

Paxton, a Republican running for U.S. Senate, accused Dallas of unlawfully refusing to comply with Proposition U, a public safety measure approved by the city's voters in 2024.

Proposition U requires that 50% of all new annual revenue the city receives be directed toward police and fire pensions. The measure also mandates that the city maintain a minimum of 4,000 police officers — roughly 900 more than the department had in 2024.


The lawsuit, announced on Friday, names Dallas City Manager Kimberly Bizor Tolbert and Chief Financial Officer Jack Ireland Jr. as defendants.

"I filed this lawsuit to ensure that the City of Dallas fully funds law enforcement, upholds public safety, and is accountable to its constituents," Paxton said in a press release.

"When voters demand more funding for law enforcement, local officials must immediately comply," he continued. "As members of law enforcement across the country increasingly face attacks from the radical Left, it’s crucial that we fully fund the brave men and women in law enforcement defending law and order in our communities. This lawsuit aims to do just that by ensuring Dallas follows its own charter and gives police officers the support they need to protect the public."

Paxton alleges Dallas officials undercounted the excess money available in the city's current budget to put toward the safety measures in Proposition U. The additional revenue for the 2025-2026 fiscal year should be $220 million, according to Paxton, but the city reported only approximately $61 million in excess revenue.

The lawsuit also accuses Dallas of failing to hire an independent third-party firm to conduct an annual police compensation survey, as required under the measure.


The complaint demands that the city properly allocate the excess revenue toward police pensions, officer pay and increasing the number of officers in accordance with Proposition U.

Dallas city leaders have taken action to comply with Proposition U, according to Fox 4. In December, the city council approved a 30-year, $11 billion pension funding plan for the police department.
