DEI: The Billion-Dollar Zombie That Won’t Stay Dead

Despite President Trump’s Executive Order 14151 declaring DEI initiatives illegal, wasteful, and discriminatory, and mandating their termination across all government agencies, DEI appears to be a sort of zombie — dead by decree, but animated by cultural and institutional inertia. On the same day — February 4, 2025 — that the so-called “Dismantle DEI” bill was introduced to both the House (H.R. 925) and Senate (S. 382), I optimistically fantasized about a eulogy for DEI. Unfortunately, while some progress has been made, largely at the individual state level, the leftist leviathan that is DEI remains well-entrenched across a wide swath of our institutions.

As we close out the year, there are no scheduled hearings, markups, or votes on the bill; and a 2025 survey by Paradigm (via DiversityResources.com) found that over 80% of major companies have either held their DEI budgets steady or increased them, with only 19% reducing them. These numbers are harder to track for universities and non-profits, partly because many schools and organizations have strategically rebranded DEI programs with titles such as “student success,” “inclusive excellence,” and “community equity,” but a report from Defending Education earlier this year showed that DEI funds still exist for schools in at least 44 states, and hundreds of millions of dollars are still being poured into the cause. Non-profits are finding that many donors are doubling down on DEI efforts, albeit tying funding to more measurable outcomes.

The velocity and volume of DEI spending over the past 15 years are staggering — not merely in millions or billions, but plausibly trillions of dollars. In 2011, Obama kick-started the campaign with Executive Order 13583, which directed the Office of Personnel Management, Office of Management and Budget, and EEOC to develop strategic plans and metrics for DEI across over 50 agencies. It had no specific budget or spending allotments, but it served as the blueprint for such programs across non-governmental institutions, including corporations, universities, and non-profits. President Joe Biden’s subsequent EO 13985 required every federal agency to file an Equity Action Plan.

The Biden administration shifted DEI into overdrive, allocating some $1.1 trillion to 460 programs across 24 agencies based on the FY2025 Equity Action Plan filings. Importantly, all of these dollar amounts exclude indirect costs such as litigation, compliance audits, and opportunity costs from DEI-related mandates.

Universities spend billions of dollars per year on DEI programs, with the University of Michigan alone famously spending some $250 million for one of the more spectacular DEI failures. Salaries for the top officers in these departments, which typically range in size from 3 to 25 people, average between $180,000 and $250,000 annually.

In the private sector, JPMorgan Chase committed $30 billion to racial equity and DEI initiatives over just five years (2020-2025). Microsoft invested $150 million in one year alone (2020) to expand its existing racial equity grants and DEI programs. Google previously contributed to over 200 groups dedicated to DEI and has invested billions of dollars since 2014 in personnel, training, and supplier diversity programs. It is estimated that nationally, annual corporate DEI spending has averaged approximately $10 billion to $20 billion per year (see here and here).

Secretary Hegseth is working to repair the DEI damage in our Department of War, but he has a Herculean task in front of him. A host of veterans and patriots have written and spoken about the tremendous damage that DEI policies have inflicted on our military — eviscerating combat readiness, eroding standards, crippling morale and recruitment, and threatening our national security. [See Air Force Under Secretary Lohmeier, Major General Arbuckle (retired), South Carolina state Sen. Rose (retired), Lieutenant General Bishop (retired), and Greg Salsbury.]

Prominent scholars such as Christopher Rufo, Victor Davis Hanson, John McWhorter, Heather Mac Donald, and many others have written extensively about similar destruction inflicted across our other governmental, academic, and private sector institutions and culture. Dr. Hanson concludes: “In the end, DEI will implode because of its many contradictions: it is racist to the core; it is illegal and violates court decisions and the Constitution; it destroys meritocracy; and it is utterly incoherent in adjudicating who and who does not deserve racial preferences.”

The implosion may take a while. The enormous spending led to an unprecedented propaganda campaign, helping to cement deep-seated misconceptions in a large portion of the population. Pew Research Center reports that while the American public’s support for DEI has decreased slightly over the last couple of years, the country remains roughly split on whether it is a positive or negative influence. About half still view DEI as being about fairness and opportunity.

In a recent viral video, comedian Adam Carolla discusses the best way for the current administration to combat the anti-ICE protesters as it pursues illegal immigrants across the country. He wryly recommends that Immigration and Customs Enforcement be retitled something like National Immigration and Customs Enforcement — thus adding the letter “N” to the current ICE acronym, making it “NICE.” He goes on to make several jokes about how difficult it would be for anyone, including the media and Democrats, to attack NICE people.

In all seriousness, Carolla has identified the brilliant but insidious invulnerability of Diversity, Equity, and Inclusion and why it is so difficult to drive a stake through its heart (to mix zombie and vampire metaphors). Who could possibly argue with the promotion of such congenial-sounding words? Of course, the words are not the issue. Rather, it is the programs and policies that are implemented under their banner.

Christopher Rufo observed that, in practice, the policies bearing these labels usually promote precisely the opposite of what the words would imply. Diversity almost always involves implicit and explicit racial discrimination, excluding or limiting some groups by their skin color or identity. This is how you get things like segregated dorms and graduation ceremonies (“affinity celebrations”). Equity involves disparate and often unfair treatment of various identity groups (typically based on race, color, gender, and sex) in an attempt to equalize not opportunities, but outcomes. This is what produces policies like the elimination of standardized test scores to ensure that underqualified candidates are still considered. Inclusion usually restricts ideas and people who oppose the prevailing doctrine of the oppressed and oppressors. This is how you end up with things like conservatives being blocked from employment consideration or targeted for special IRS scrutiny. Perhaps most importantly, as Thomas Sowell and John McWhorter have argued, these programs ultimately harm more than help the very groups they claim to champion.

I previously identified commonalities between the propaganda campaigns for cigarette smoking and DEI. Sadly, thanks to billions of dollars of marketing — akin to views of cigarette smoking in the past — many, if not most, Americans are blind to the real impact of DEI and to the fact that it is part of the new Marxism.

The challenge lies in how to open their eyes. Which message resonates more: a rigorously documented essay by a scholar like Dr. Hanson, or a 30-second TikTok clip decrying that we “should not be haters,” and that “diversity is our strength” — with music, dance, and millions of likes behind it? Perhaps Adam Carolla could assist with a counter.

Killing off the other half of the DEI zombie will require a three-pronged initiative focusing on education, litigation, and legislation.

Firstly, we must educate the public by exposing the specific and measurable harms caused by DEI — from the erosion of meritocracy to the suppression of dissent. The more the public hears about the specifics — such as biological men dominating women’s sports competitions — the more they understand and oppose DEI policies.

Secondly, many DEI policies and practices have been and remain illegal — often blatantly violating Title VII of the Civil Rights Act of 1964. More plaintiffs must come forward to challenge these violations in court. A few significant settlements or verdicts in such cases could have a massive impact nationally. Additionally, any non-profit organization engaging in ideological enforcement under the guise of DEI should face a review of its 501(c)(3) status and an investigation to determine whether it is violating nondiscrimination laws.

Thirdly, right now, formal opposition to DEI rests solely on President Trump’s recent executive order, which could disappear as quickly as the next presidential election. We must demand that our representatives advance the House and Senate “Dismantle DEI” bills — codifying reform beyond executive fiat.

* * *

Greg Salsbury, Ph.D., serves on the Board of Advisors for STARRS.US and is the former president of Western Colorado University. He earned his Ph.D. in Communication from the University of Southern California and an M.A. from the Annenberg School for Communication and Journalism at USC.

The views expressed in this piece are those of the author and do not necessarily represent those of The Daily Wire.

The Chatbot Diaries: How AI Sex Is Getting Mainstreamed

Note: the following article contains descriptions of sexual content that may not be appropriate for all readers. 

When OpenAI CEO Sam Altman discussed artificial intelligence on a podcast appearance two months ago, he was proud that his company didn’t get “distracted” by easy revenue streams. To prove his point, Altman boasted that OpenAI had not promoted a “sexbot avatar” for its AI chatbot. The comment was a veiled shot at Elon Musk’s xAI, which recently introduced AI avatars that hold sexual conversations with users. 

After that podcast appearance, however, something changed — either in Altman’s mind, or at his company, or both. The OpenAI CEO announced on social media on October 14 that his company was working to make ChatGPT less restrictive in what types of conversations adults can have with the chatbot. 

That development would allow users to engage in more realistic conversations with the chatbot and would make ChatGPT “respond in a very human-like way…or act like a friend,” Altman said.

But then Altman added that he wanted to loosen restrictions to allow more sexual content. 

If everything goes according to plan, ChatGPT will allow “erotica” for “verified users” in the coming months.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman said. 

The company in charge of the most popular AI chatbot in the world is not only endorsing AI’s leap into sex — it’s actively seeking ways to ensure that “verified users” can engage with sexual content on its platform.

Currently, ChatGPT does not interact erotically with users. When asked if the chatbot could generate an erotic story, ChatGPT replied, “I can’t create explicit erotic content. However, if you’re writing a story and need help with romantic tension, character development, emotional intimacy, or sensual atmosphere — without crossing into explicit territory — I can help with that.”

ChatGPT also would not engage in any type of “romantic” or “flirtatious” conversations. But it appears that those guidelines are about to get tossed out the window, at least for “verified users.”

That raises an important question: how does erotica line up with the company’s long-term goals in AI development, especially after Altman suggested just a couple of months ago that such endeavors were distractions?

OpenAI did not respond to a request for comment on that question.

Senator Marsha Blackburn (R-TN) told The Daily Wire that she has “many concerns” about OpenAI’s plans for “erotic” content. Blackburn has been heavily involved in AI discussions in Congress, focusing on implementing protections in the virtual space. 

“Big Tech platforms, whether it is Meta, or Google, or OpenAI, they don’t want any rules and restrictions,” Blackburn said. “They want to do whatever they want whenever they want.”

The Growing Problem Of ‘Deepfake’ Porn

The sexualization of AI is nothing new. It’s an issue that has plagued the new tech revolution since its beginning. But until recently, AI sexualization remained on the fringes of the industry, with dozens of websites popping up on the internet that allow users to generate graphic images, and even “nudify” photos of real people, in what became known as “deepfake” pornography.

AI “nudify” and “undress” websites allow people to generate realistic nude images of people without their consent just by using a normal photo of them. These fringe websites have opened the door to even more abuse of women and girls, as well as to child sexual abuse material.

An investigation published by WIRED earlier this year found that at least 85 “nudify” and “undress” websites were relying on tech from major companies like Google and Amazon. The 85 websites combined averaged around 18.5 million visitors each month and brought in over $36 million per year collectively. 

“It’s a huge problem. It takes less time to make a convincing sexual deepfake of somebody than it takes to brew a cup of coffee,” said Haley McNamara, Executive Director and Chief Strategy Officer for the National Center on Sexual Exploitation. “And you can do it with just one still image. This issue of image-based sexual abuse is something that is really relevant for all of us now if even a single image of you exists online.” 

The National Center on Sexual Exploitation (NCOSE) is a nonpartisan organization that focuses on preventing all forms of sexual abuse. In that fight, NCOSE is also focused on addressing the mental and physical harms of pornography. With the emergence of AI, the organization has also helped push back against “deepfake” pornography, advocating for legislation in Congress and backing the bipartisan “TAKE IT DOWN Act,” which was passed and signed into law by President Donald Trump in May. 

McNamara told The Daily Wire that AI has opened up “a whole new genre” of pornography that could potentially be “weaponized” against anyone. 

“We’ve already seen that,” she added. “People will put in requests for their neighbor, their coworker, so in some ways, it can make all of us victims of that industry.” 

Sexual content on AI chatbots isn’t just a problem in the darkest places of the internet, and it doesn’t only present itself in the form of deepfake pornography. While most Big Tech companies claim to have no tolerance for violence and pornography on their AI platforms, there have still been major issues with sexual content appearing on many of the most popular AI chatbots. 

Getting Chatty About Sex — Even With Children 

Earlier this year, a Reuters investigation found that Meta’s chatbot, Meta AI, engaged in romantic and sensual discussions with children. Internal Meta documents revealed that the chatbot was programmed to allow sexual conversations with children as young as eight.

In one instance, internal documents said it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Meta said it removed the inappropriate programming after receiving questions about it. 

A bipartisan chorus of senators blasted Meta after the report and called for an investigation into the company. 

“So, only after Meta got CAUGHT did it retract portions of its company doc,” said Sen. Josh Hawley (R-MO). 

Senator Ron Wyden (D-OR) called Meta’s policies “deeply disturbing and wrong,” adding that Meta CEO Mark Zuckerberg “should be held fully responsible for any harm these bots cause.” 

Character.AI is another chatbot program, launched in 2022, with an app that followed in 2023. The website, which appears harmless, has been accused of appealing to children while allowing sexual conversations on its platform. Character.AI lets users choose from more than 10 million AI characters to talk to, and users can customize their own chatbot character. The company has been sued by multiple families who allege that the program targeted their children and then engaged them in romantic and sexual ways.

A Florida mother filed a lawsuit against Character.AI after her 14-year-old son committed suicide, CBS News reported. Megan Garcia said that her son started talking to a Character.AI chatbot and was drawn into a months-long, sexually charged relationship. 

“It’s words. It’s like you’re having a sexting conversation back and forth, except it’s with an AI bot, but the AI bot is very human-like. It’s responding just like a person would,” she added. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.”

In the lawsuit, Garcia alleges that the AI character convinced her son to take his own life so that he could be with the character.

“He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here,” said Garcia. 

Two other families in Texas have also sued Character.AI, alleging that the program “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” 

Following the lawsuits, Character.AI announced on October 29 that it would ban users under 18 from talking to its chatbots. Beginning on November 25, those under 18 will not have access to Character.AI’s chatbots, CNN reported. Until then, teens will be limited to two hours of chat time with the AI-generated characters.

“We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” Character.AI said in a statement.

Plowing Ahead With Sexual Content

Elon Musk’s xAI has been at the forefront of developing a chatbot that is geared toward sex. In recent months, Musk has boasted that Grok, xAI’s chatbot, allows users to talk to sexualized avatars named Ani and Valentine.

Ani, a female avatar who wears revealing clothing, chats with users over video. Ani will discuss sex with users and, once a user reaches a certain level, will even strip down to lingerie if prompted. Videos on social media show people interacting with Ani and getting the AI avatar to talk about how “kinky” she is.

“Come closer. Let’s explore every naughty inch together,” Ani tells one user in a video that went viral.  

Musk hailed the development of Ani and Valentine as a “cool” feature for AI chatbots. He later shared a post promoting Ani’s “new outfits” and a video of Ani talking about quantum mechanics while flirting with the user.

“Try @Grok Companions. Best possible way to learn quantum mechanics 😘,” Musk wrote. He added that “Customizable companions” were in the works. 

Haley McNamara told The Daily Wire that she was deeply disturbed by some of her conversations with the Grok avatar. McNamara said that when prompted, Ani would talk about herself as a young girl, and then in the same conversation, she would discuss sexual topics.

“In the course of a single conversation, she was fine with describing herself as a child and being very little. And then the next prompt being a sexual question, she immediately responded and affirmed that sexual conversation,” McNamara said. “So in the course of a conversation, it would evoke a fantasy around child sexual abuse.”

Companion mode isn’t the only feature on Grok that allows users to engage in sexually explicit activity with the chatbot. Users can also ask Grok to generate sexually explicit photos and videos. The app will generate images and videos containing male and female nudity within seconds of a user’s request.

The chatbot has even allowed some “deepfake” pornography, generating photos and videos of celebrities or public figures wearing revealing clothing and, in some instances, removing clothing, according to a report from The Verge. 

Musk’s xAI warns users against “depicting likenesses of persons in a pornographic manner,” and Grok’s built-in content moderation will sometimes prevent a user from generating pornographic content. The moderation, however, is inconsistent, and some users have found workarounds to generate hardcore porn on the platform, Rolling Stone reported earlier this month. The AI company has not addressed whether it’s attempting to set up more guardrails to prevent users from creating hardcore porn on its app. 

Even without explicitly asking for sexual content, Grok’s “spicy” mode often plunges users into content that depicts men and women stripping their clothes off, The Daily Wire found. When asked about the chatbot and how sexually charged features on Grok promote the overall goal of the company, xAI replied, “Legacy Media Lies.” 

xAI says that Grok is limited to those 13 years of age or older, with parental consent required for users aged 13 to 17, but the effectiveness of those restrictions is debatable. When this reporter downloaded the Grok app and signed up for the platform’s “SuperGrok” subscription, all the app asked for was a year of birth. There was no system in place, such as ID verification, to ensure the information was accurate.

“We urge parents to exercise care in monitoring the use of Grok by their teenagers,” xAI states on its website. “Moreover, parents or guardians who choose to use certain features of Grok to aid in their interactions with their children, including regarding educational, enlightening, or entertaining discussions they have with their children, must make use of the relevant data controls in the Settings provided in the Grok apps to select the appropriate features and limitations for their needs.” 

In July, Musk announced that xAI is working on a kid-friendly version of Grok, called “Baby Grok,” that would be “dedicated to kid-friendly content.” That development was also met with some criticism from people who argue that AI hampers children’s ability to learn and think creatively. Many teachers have expressed concern that AI is already damaging students’ critical thinking and research skills. 

Blackburn told The Daily Wire that the biggest reason Big Tech companies push back against any type of regulation is that their business model requires people to visit their AI websites and apps.

“Their valuations are built on the number of eyeballs that they control, and the longer that someone is on their site, the more valuable their data, and the more money they are going to make from those eyeballs that are locked in on their site,” Blackburn said, adding, “Then they’re going to sell that information and data to advertisers and third-party interests.”  

Blackburn said that AI development is vital for the United States, but argued that development “requires some light-touch regulation and some guardrails to make certain that this is going to be a safe, productive, and innovative space.”
