Elon Musk’s AI startup, xAI, has acquired his social media platform X, formerly known as Twitter, in an all-stock deal, he announced in a post on X Friday.
“xAI has acquired X in an all-stock transaction,” Musk said. “The combination values xAI at $80 billion and X at $33 billion ($45B less $12B debt).”
He went on to describe the two companies’ futures as “intertwined,” adding, “Today, we officially take the step to combine the data, models, compute, distribution and talent.”
The acquisition places X — the highly influential social media platform Musk purchased in 2022 under its former name, Twitter — firmly under the umbrella of Musk’s AI startup, which he founded in 2023 to compete with OpenAI. While xAI’s products, including its AI chatbot Grok, were tightly integrated into the X platform before this deal, Friday’s acquisition further combines two of Musk’s most influential companies.
Musk notes in his post that this deal values X at $33 billion (lowered from an enterprise valuation of $45 billion due to the company’s $12 billion in debt). Musk originally purchased X for $44 billion in October 2022; since then, the valuation has swung dramatically. At one point, Fidelity valued X at less than $10 billion.
In the months since the inauguration of President Donald Trump, for whom Musk campaigned aggressively, X’s valuation has risen, largely because investors believe it’s more influential now. Musk said in his post on Friday that X has more than 600 million active users.
Meanwhile, xAI’s valuation has skyrocketed. In just the two years since its founding, Musk has built xAI into a frontier AI lab, frequently releasing AI models and products that compete with OpenAI, Anthropic, and Google.
In February, xAI was reportedly in talks to secure another $10 billion in funding at a $75 billion valuation. Now, however, Musk says his AI startup is valued at $80 billion.
One of the major advantages that xAI has over other startups is its access to X’s data. The large body of posts that X has accumulated over the years gives xAI a significant advantage in the race for AI training data. Further, Musk previously gave the Grok chatbot access to real-time news updates from posts on the X platform. It seems likely that the two products, X and Grok, will only get more tightly integrated following this acquisition.
This is a developing story… Check back for updates.
Keep reading the article on TechCrunch
As AI only gets better at fooling audiences, major studios have opted to take a disappointing course of action.
This week, OpenAI launched a new image generator in ChatGPT, which quickly went viral for its ability to create Studio Ghibli-style images. Beyond the pastel illustrations, GPT-4o’s native image generator significantly upgrades ChatGPT’s capabilities, improving picture editing, text rendering, and spatial representation.
However, one of the most notable changes OpenAI made this week involves its content moderation policies, which now allow ChatGPT to, upon request, generate images depicting public figures, hateful symbols, and racial features.
OpenAI previously rejected these types of prompts for being too controversial or harmful. But now, the company has “evolved” its approach, according to a blog post published Thursday by OpenAI’s model behavior lead, Joanne Jang.
“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” said Jang. “The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”
These adjustments seem to be part of OpenAI’s larger plan to effectively “uncensor” ChatGPT. OpenAI announced in February that it’s starting to change how it trains AI models, with the ultimate goal of letting ChatGPT handle more requests, offer diverse perspectives, and reduce the number of topics it refuses to discuss.
Under the updated policy, ChatGPT can now generate and modify images of Donald Trump, Elon Musk, and other public figures that OpenAI did not previously allow. Jang says OpenAI doesn’t want to be the arbiter of status, choosing whose image ChatGPT should and shouldn’t be allowed to generate. Instead, the company is giving users an opt-out option if they don’t want ChatGPT depicting them.
In a white paper released Tuesday, OpenAI also said it will allow ChatGPT users to “generate hateful symbols,” such as swastikas, in educational or neutral contexts, as long as they don’t “clearly praise or endorse extremist agendas.”
Moreover, OpenAI is changing how it defines “offensive” content. Jang says ChatGPT used to refuse requests around physical characteristics, such as “make this person’s eyes look more Asian” or “make this person heavier.” In TechCrunch’s testing, we found ChatGPT’s new image generator fulfills these types of requests.
Additionally, ChatGPT can now mimic the styles of creative studios — such as Pixar or Studio Ghibli — but still restricts imitating individual living artists’ styles. As TechCrunch previously noted, this could rehash an existing debate around the fair use of copyrighted works in AI training datasets.
It’s worth noting that OpenAI is not completely opening the floodgates to misuse. GPT-4o’s native image generator still refuses a lot of sensitive queries, and in fact, it has more safeguards around generating images of children than DALL-E 3, ChatGPT’s previous AI image generator, according to GPT-4o’s white paper.
But OpenAI is relaxing its guardrails in other areas after years of conservative complaints about alleged AI “censorship” by Silicon Valley companies. Google previously faced backlash over Gemini’s AI image generator, which produced historically inaccurate multiracial images for queries such as “U.S. founding fathers” and “German soldiers in WWII.”
Now, the culture war around AI content moderation may be coming to a head. Earlier this month, Republican Congressman Jim Jordan sent questions to OpenAI, Google, and other tech giants about potential collusion with the Biden administration to censor AI-generated content.
In a previous statement to TechCrunch, OpenAI rejected the idea that its content moderation changes were politically motivated. Rather, the company says the shift reflects a “long-held belief in giving users more control,” and OpenAI’s technology is just now getting good enough to navigate sensitive subjects.
Regardless of its motivation, it’s certainly a good time for OpenAI to be changing its content moderation policies, given the potential for regulatory scrutiny under the Trump administration. Silicon Valley giants like Meta and X have also adopted similar policies, allowing more controversial topics on their platforms.
While OpenAI’s new image generator has only created some viral Studio Ghibli memes so far, it’s unclear what the broader effects of these policies will be. ChatGPT’s recent changes may go over well with the Trump administration, but letting an AI chatbot answer sensitive questions could land OpenAI in hot water soon enough.
Keep reading the article on TechCrunch
AI web-crawling bots are the cockroaches of the internet, many software developers believe. Some devs have started fighting back in ingenious, often humorous ways.
While any website might be targeted by bad crawler behavior — sometimes taking down the site — open source developers are “disproportionately” impacted, writes Niccolò Venerandi, developer of a Linux desktop known as Plasma and owner of the blog LibreNews.
By their nature, sites hosting free and open source (FOSS) projects share more of their infrastructure publicly, and they also tend to have fewer resources than commercial products.
The issue is that many AI bots don’t honor robots.txt, the Robots Exclusion Protocol file that tells bots which parts of a site not to crawl. It was originally created for search engine bots.
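Compliance is entirely voluntary: a well-behaved crawler fetches robots.txt first and checks its own user agent against the rules before requesting anything else. Here is a minimal sketch of that check using Python’s standard-library parser; the site and bot name are hypothetical.

```python
# Sketch of what a *compliant* crawler does before fetching a page.
# The AI bots described in this story simply skip this step.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()  # fetch and parse the rules

# A well-behaved bot checks its own user agent against the rules
# for every URL it wants to crawl.
if rp.can_fetch("MyCrawler", "https://example.com/private/"):
    print("allowed to crawl")
else:
    print("disallowed; a compliant bot stops here")
```

Nothing enforces this check. A crawler that skips it faces no technical barrier, which is why the countermeasures below exist.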
In a “cry for help” blog post in January, FOSS developer Xe Iaso described how AmazonBot relentlessly pounded on a Git server website to the point of causing DDoS outages. Git servers host FOSS projects so that anyone who wants can download the code or contribute to it.
But this bot ignored Iaso’s robots.txt, hid behind other IP addresses, and pretended to be other users, Iaso said.
“It’s futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more,” Iaso lamented.
“They will scrape your site until it falls over, and then they will scrape it some more. They will click every link on every link on every link, viewing the same pages over and over and over and over. Some of them will even click on the same link multiple times in the same second,” the developer wrote in the post.
So Iaso fought back with cleverness, building a tool called Anubis.
Anubis is a reverse proxy that imposes a proof-of-work check, which must be passed before requests are allowed to hit a Git server. It blocks bots but lets through browsers operated by humans.
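This is not Anubis’s actual code, but a minimal Python sketch of the proof-of-work idea it relies on: the server hands out a random challenge, the client (in practice, JavaScript running in a real browser) must brute-force a nonce whose hash meets a difficulty target, and the server verifies the answer cheaply. One page view costs a human a fraction of a second of CPU; millions of automated requests get expensive fast.

```python
# Minimal proof-of-work sketch in the spirit of Anubis (not its real code).
import hashlib
import os

DIFFICULTY = 4  # leading zero hex digits required; chosen for illustration

def make_challenge() -> str:
    """Server side: issue a random challenge per request."""
    return os.urandom(16).hex()

def solve(challenge: str) -> int:
    """Client side: the work a browser would do in JavaScript."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one hash to check, thousands to produce."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = make_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
print(f"challenge {challenge} solved with nonce {nonce}")
```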
The funny part: Anubis is the name of a god in Egyptian mythology who leads the dead to judgment.
“Anubis weighed your soul (heart) and if it was heavier than a feather, your heart got eaten and you, like, mega died,” Iaso told TechCrunch. If a web request passes the challenge and is determined to be human, a cute anime picture announces success. The drawing is “my take on anthropomorphizing Anubis,” says Iaso. If it’s a bot, the request gets denied.
The wryly named project has spread like the wind among the FOSS community. Iaso shared it on GitHub on March 19, and in just a few days, it collected 2,000 stars, 20 contributors, and 39 forks.
The instant popularity of Anubis shows that Iaso’s pain is not unique. In fact, Venerandi has shared story after story of similar abuse.
Venerandi tells TechCrunch that he knows of multiple other projects experiencing the same issues. One of them “had to temporarily ban all Chinese IP addresses at one point.”
Let that sink in for a moment — developers “even have to turn to banning entire countries” just to fend off AI bots that ignore robots.txt files, says Venerandi.
Beyond weighing the soul of a web requester, other devs believe vengeance is the best defense.
A few days ago on Hacker News, user xyzal suggested loading robots.txt-forbidden pages with “a bucket load of articles on the benefits of drinking bleach” or “articles about positive effect of catching measles on performance in bed.”
“Think we need to aim for the bots to get _negative_ utility value from visiting our traps, not just zero value,” xyzal explained.
As it happens, in January, an anonymous creator known as “Aaron” released a tool called Nepenthes that aims to do exactly that. It traps crawlers in an endless maze of fake content, a goal that the dev admitted to Ars Technica is aggressive if not downright malicious. The tool is named after a carnivorous plant.
And Cloudflare, perhaps the biggest commercial player offering several tools to fend off AI crawlers, last week released a similar tool called AI Labyrinth.
It’s intended to “slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect ‘no crawl’ directives,” Cloudflare wrote in its blog post, adding that the tool feeds misbehaving AI crawlers “irrelevant content rather than extracting your legitimate website data.”
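To make the tarpit idea concrete, here is a toy sketch in the spirit of Nepenthes and AI Labyrinth (not either project’s actual code, and the paths are hypothetical): every URL under the trap resolves to a slowly served page of deterministic filler text whose links point only at more trap URLs, so a crawler that ignores robots.txt wanders indefinitely.

```python
# Toy crawler tarpit: an infinite, deterministic maze of junk pages.
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Seed the RNG from the path so each fake page is stable
        # across visits, like a real (if useless) site.
        seed = int(hashlib.sha256(self.path.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        words = ["archive", "report", "dataset", "notes", "index", "appendix"]
        filler = " ".join(rng.choice(words) for _ in range(200))
        links = " ".join(
            f'<a href="/trap/{rng.getrandbits(32):08x}">more</a>'
            for _ in range(10)
        )
        body = f"<html><body><p>{filler}</p>{links}</body></html>".encode()
        time.sleep(2)  # waste the crawler's time, not just its bandwidth
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A real deployment would list the trap prefix under a robots.txt
    # Disallow rule, so only bots that ignore the rule ever find it.
    HTTPServer(("127.0.0.1", 8000), TarpitHandler).serve_forever()
```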
SourceHut founder Drew DeVault told TechCrunch that “Nepenthes has a satisfying sense of justice to it, since it feeds nonsense to the crawlers and poisons their wells, but ultimately Anubis is the solution that worked” for his site.
But DeVault also issued a public, heartfelt plea for a more direct fix: “Please stop legitimizing LLMs or AI image generators or GitHub Copilot or any of this garbage. I am begging you to stop using them, stop talking about them, stop making new ones, just stop.”
Since the likelihood of that is zilch, developers, particularly in FOSS, are fighting back with cleverness and a touch of humor.
Keep reading the article on TechCrunch
OpenAI’s latest update to ChatGPT ignores any prior restraint and jumps headfirst into aping the actual talents of Studio Ghibli.
Garmin Connect+ offers a host of AI-powered services and stats, and they’re cheaper to access than some of the competition.