OpenAI says it will team up with Japanese conglomerate SoftBank, Oracle, and others to build multiple AI data centers in the U.S.
The joint venture, called The Stargate Project, will begin with a large data center project in Texas and eventually expand to other states. The companies expect to commit $100 billion to Stargate initially and pour up to $500 billion into the venture over the next four years.
They promise it will create “hundreds of thousands” of jobs and “secure American leadership in AI.”
“The Stargate Project is a new company which intends to [build] new AI infrastructure for OpenAI in the United States,” OpenAI, Oracle, and SoftBank said in a joint statement. “This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.”
The companies made the announcement during a press conference at the White House on Tuesday, where President Donald Trump spoke about plans for investment in U.S. infrastructure. SoftBank chief Masayoshi Son, OpenAI CEO Sam Altman, and Oracle co-founder Larry Ellison were in attendance.
Microsoft is also involved in Stargate as a tech partner. So are Arm and Nvidia. Middle East AI fund MGX will join SoftBank in its investment; MGX’s first public deal was an investment in OpenAI.
SoftBank, OpenAI, and Oracle are also listed as “initial equity investors” in Stargate.
“SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility,” the statement continued. “Masayoshi Son will be the chairman [of Stargate] […] As part of Stargate, Oracle, Nvidia, and OpenAI will closely collaborate to build and operate this computing system.”
The data centers could house chips designed by OpenAI someday. The company is said to be aggressively building out a team of chip designers and engineers, and working with semiconductor firms Broadcom and TSMC to create an AI chip for running models that could arrive as soon as 2026.
SoftBank is already an investor in OpenAI, having reportedly committed $500 million toward the AI startup’s last funding round and an additional $1.5 billion to allow OpenAI staff to sell shares in a tender offer. Oracle, meanwhile, has an ongoing deal with OpenAI to supply AI computing resources.
SoftBank also earlier pledged to invest $100 billion in the U.S. over the next four years. Son and Trump have had a close working relationship since 2016, when, shortly after Trump's first election victory, Son announced that SoftBank would invest $50 billion in U.S. startups and create 50,000 jobs.
The Information previously reported that OpenAI was negotiating with Oracle to lease an entire data center in Abilene, Texas — one that could draw nearly a gigawatt of power by mid-2026. (A gigawatt is enough to power roughly 750,000 small homes.) Data center startup Crusoe Energy was said to be involved in the project, which was estimated to cost around $3.4 billion.
That Abilene facility will be Stargate's first site, and OpenAI says that Stargate is "evaluating potential sites across the country for more campuses as [it finalizes] definitive agreements."
It’s unclear what connection, if any, Stargate has to a rumored partnership between Microsoft and OpenAI to spin up a $100 billion supercomputer. TechCrunch has reached out to OpenAI for additional information.
Last year, The Information reported that Microsoft and OpenAI planned to build a series of AI data centers in five phases over the next several years, culminating in Stargate: a 5-gigawatt facility spanning several hundred acres of land. Stargate was expected to take five to six years to complete, according to The Information. In the lead-up to its completion, Microsoft had reportedly planned to launch a smaller-scope data center for OpenAI around 2026.
A number of tech leaders have called for the U.S. to up its investment in data centers, particularly as the AI industry continues to grow at an explosive pace. AI systems require enormous server banks to develop and run at scale.
Goldman Sachs estimates that AI will represent about 19% of data center power demand by 2028. OpenAI has blamed a lack of available compute for delaying its products, and compute capacity has reportedly become a source of tension between the AI company and Microsoft, its close collaborator and major investor.
Microsoft, which recently announced it is on track to spend $80 billion on AI data centers, said in a blog post that its success depends on "new partnerships founded on large-scale infrastructure investments." In an interview with Bloomberg, Altman said he believes it is urgent to clear what he sees as barriers to building additional data center infrastructure in the U.S.
“The thing I really deeply agree with [President Trump] on is, it is wild how difficult it has become to build things in the United States,” Altman said in that interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general.”
Massive data center projects have vocal critics who say that data centers often create fewer jobs than promised and tend to have severe environmental impacts. Data centers are typically water hungry, placing a strain on regions with insufficient water resources, and their high power requirements have forced some utilities to lean heavily on fossil fuels.
Those concerns don't appear to be slowing investment. Per a McKinsey report, capital spending on the procurement and installation of mechanical and electrical systems for data centers could eclipse $250 billion over the next five years.
In January, Trump announced that Hussain Sajwani, the Emirati billionaire who founded property development giant DAMAC Properties, would invest $20 billion in new data centers across the U.S. Industry insiders have expressed skepticism about how concrete the deal actually is, however.
Keep reading the article on Tech Crunch
Anthropic CEO Dario Amodei says that the company plans to release a “two-way” voice mode for its chatbot, Claude, as well as a memory feature that lets Claude remember more about users and past conversations.
Speaking to The Wall Street Journal at the World Economic Forum at Davos, Amodei also revealed that Anthropic expects to release “smarter” AI models in the coming months, and that the company has been “overwhelmed” by the “surge in demand” in the last year.
“The surge in demand we’ve seen over the last year, and particularly in the last three months, has overwhelmed our ability to provide the needed compute,” Amodei said.
Anthropic is racing to keep pace with its chief AI rival, OpenAI, in an extremely capital-intensive sector. Despite having raised $13.7 billion to date, the company reportedly lost billions of dollars last year, and it is said to be in talks to raise roughly $2 billion more at a $60 billion valuation.
Keep reading the article on Tech Crunch
French AI lab Mistral is working toward an initial public offering, co-founder and CEO Arthur Mensch said Tuesday in an interview with Bloomberg at the World Economic Forum in Davos.
Mistral is “not for sale,” Mensch said, adding that the company plans to open an office in Singapore to focus on the Asia-Pacific region and is growing in Europe and the U.S. “Of course, [an IPO is] the plan.”
Mistral, which Mensch launched in 2023 alongside former researchers from Google’s DeepMind and Meta, is often described as Europe’s answer to U.S. incumbents like OpenAI. The lab releases AI models and services that compete with offerings from OpenAI and others, including a ChatGPT-like platform called Le Chat.
Mistral has raised around $1.14 billion in capital to date from investors including Andreessen Horowitz, General Catalyst, and Lightspeed Venture Partners. The company was reportedly last valued at around $6 billion.
Keep reading the article on Tech Crunch
OpenAI may be close to releasing an AI tool that can take control of your PC and perform actions on your behalf.
Tibor Blaho, a software engineer with a reputation for accurately leaking upcoming AI products, claims to have uncovered evidence of OpenAI’s long-rumored Operator tool. Publications including Bloomberg have previously reported on Operator, which is said to be an “agentic” system capable of autonomously handling tasks like writing code and booking travel.
According to The Information, OpenAI is targeting January as Operator’s release month. Code uncovered by Blaho this weekend adds credence to that reporting.
OpenAI’s ChatGPT client for macOS has gained options, hidden for now, to define shortcuts to “Toggle Operator” and “Force Quit Operator,” per Blaho. And OpenAI has added references to Operator on its website, Blaho said — albeit references that aren’t yet publicly visible.
Confirmed – the ChatGPT macOS desktop app has hidden options to define shortcuts for the desktop launcher to “Toggle Operator” and “Force Quit Operator” https://t.co/rSFobi4iPN pic.twitter.com/j19YSlexAS
— Tibor Blaho (@btibor91) January 19, 2025
According to Blaho, OpenAI’s site also contains not-yet-public tables comparing the performance of Operator to other computer-using AI systems. The tables may well be placeholders. But if the numbers are accurate, they suggest that Operator isn’t 100% reliable, depending on the task.
OpenAI website already has references to Operator/OpenAI CUA (Computer Use Agent) – “Operator System Card Table”, “Operator Research Eval Table” and “Operator Refusal Rate Table”
Including comparison to Claude 3.5 Sonnet Computer use, Google Mariner, etc.
(preview of tables… pic.twitter.com/OOBgC3ddkU
— Tibor Blaho (@btibor91) January 20, 2025
On OSWorld, a benchmark that tries to mimic a real computer environment, "OpenAI Computer Use Agent (CUA)" — possibly the AI model powering Operator — scores 38.1%, ahead of Anthropic's computer-controlling model but well short of the 72.4% humans score. OpenAI CUA surpasses human performance on WebVoyager, which evaluates an AI's ability to navigate and interact with websites. But the model falls short of human-level scores on another web-based benchmark, WebArena, according to the leaked figures.
Operator also struggles with tasks a human could perform easily, if the leak is to be believed. In a test that asked Operator to sign up with a cloud provider and launch a virtual machine, it succeeded only 60% of the time. Asked to create a Bitcoin wallet, Operator succeeded just 10% of the time.
OpenAI’s imminent entry into the AI agent space comes as rivals including the aforementioned Anthropic, Google, and others make plays for the nascent segment. AI agents may be risky and speculative, but tech giants are already touting them as the next big thing in AI. According to analytics firm Markets and Markets, the market for AI agents could be worth $47.1 billion by 2030.
Agents today are rather primitive. But some experts have raised concerns about their safety, should the technology rapidly improve.
One of the leaked charts shows Operator performing well on selected safety evaluations, including tests that try to get the system to perform “illicit activities” and search for “sensitive personal data.” Reportedly, safety testing is among the reasons for Operator’s long development cycle. In a recent X post, OpenAI co-founder Wojciech Zaremba criticized Anthropic for releasing an agent he claims lacks safety mitigations.
“I can only imagine the negative reactions if OpenAI made a similar release,” Zaremba wrote.
It’s worth noting that OpenAI has been criticized by AI researchers, including ex-staff, for allegedly de-emphasizing safety work in favor of quickly productizing its technology.
Keep reading the article on Tech Crunch
Friend, a startup creating a $99, AI-powered necklace designed to be treated as a digital companion, has delayed its first batch of shipments until Q3.
Friend had planned to ship devices to pre-order customers in Q1. But according to co-founder and CEO Avi Schiffman, that’s no longer feasible.
“As much as I would liked to have shipped in Q1 of this year, I still have refinements to do, and unfortunately you can only start manufacturing electronics when you are 95% done with your design,” Schiffman said in an email to customers. “I estimate that by the end of February, when our prototype is complete, that we will begin our final sprint.”
An email I sent out to all Friend preorder customers: pic.twitter.com/wUPR0OhpI4
— Avi (@AviSchiffmann) January 20, 2025
Friend, which has an eight-person engineering staff and $8.5 million in capital from investors including Perplexity CEO Aravind Srinivas, raised eyebrows when it spent $1.8 million on the domain name Friend.com. This fall, as part of what Schiffman called an “experiment,” Friend debuted a web platform on Friend.com that allowed people to talk to random examples of AI characters.
Reception was mixed. TechRadar’s Eric Schwartz noted that Friend’s chatbots often inexplicably kicked off conversations with anecdotes of traumas, including muggings and firings. Indeed, when this reporter visited Friend.com Monday afternoon, a chatbot named Donald shared that the “ghosts of [his] past” were “freaking him the f— out.”
In the above-mentioned email, Schiffman also said that Friend would be winding down its chatbot experience.
“We’re glad that millions got to play around with what I believe to be the most realistic chatbot out there,” Schiffman wrote. “This has really proven our internal ability to manage traffic, and has really taught us a lot about digital companionship … [But] I want us to stay focused on solely the hardware, and I have realized that digital chatbots and embodied companions don’t mix well.”
AI-powered companions have become a hot-button topic. Character.AI, a chatbot platform backed by Google, has been accused in two separate lawsuits of inflicting psychological harm on children. Some experts have expressed concerns that AI companions could worsen isolation by replacing human relationships with artificial ones, and generate harmful content that can trigger mental health conditions.
Keep reading the article on Tech Crunch
Chinese AI lab DeepSeek has released an open version of DeepSeek-R1, its so-called reasoning model, that it claims performs as well as OpenAI’s o1 on certain AI benchmarks.
R1 is available from the AI dev platform Hugging Face under an MIT license, meaning it can be used commercially without restriction. According to DeepSeek, R1 beats o1 on the benchmarks AIME, MATH-500, and SWE-bench Verified. AIME draws on problems from the American Invitational Mathematics Examination, a competition math exam; MATH-500 is a collection of word problems; and SWE-bench Verified focuses on programming tasks.
Being a reasoning model, R1 effectively fact-checks itself, which helps it to avoid some of the pitfalls that normally trip up models. Reasoning models take a little longer — usually seconds to minutes longer — to arrive at solutions compared to a typical nonreasoning model. The upside is that they tend to be more reliable in domains such as physics, science, and math.
R1 contains 671 billion parameters, DeepSeek revealed in a technical report. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
A model with 671 billion parameters is massive, but DeepSeek also released "distilled" versions of R1 ranging from 1.5 billion to 70 billion parameters, the smallest of which can run on a laptop. The full R1 requires beefier hardware, but it is available through DeepSeek's API at prices 90% to 95% lower than OpenAI's o1.
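For the curious, here's a minimal sketch of what running one of those distilled checkpoints locally might look like, using Hugging Face's transformers library. The repository name below is an assumption for illustration — the article doesn't list the exact checkpoint IDs — so substitute whichever distilled variant DeepSeek actually publishes.

```python
# Minimal sketch: loading a distilled R1 checkpoint locally via Hugging Face transformers.
# The model ID is assumed for illustration; swap in the distilled variant you want to run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Reasoning models are prompted like ordinary chat models; the extra "thinking"
# shows up as tokens the model generates before settling on a final answer.
messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The full 671-billion-parameter model is a different story — that's where DeepSeek's hosted API, or a multi-GPU server, comes in.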
There is a downside to R1. Being a Chinese model, it’s subject to benchmarking by China’s internet regulator to ensure that its responses “embody core socialist values.” R1 won’t answer questions about Tiananmen Square, for example, or Taiwan’s autonomy.
Many Chinese AI systems, including other reasoning models, decline to respond to topics that might raise the ire of regulators in the country, such as speculation about the Xi Jinping regime.
R1 arrives days after the outgoing Biden administration proposed harsher export rules and restrictions on AI technologies for Chinese ventures. Companies in China were already prevented from buying advanced AI chips, but if the new rules go into effect as written, companies will be faced with stricter caps on both the semiconductor tech and models needed to bootstrap sophisticated AI systems.
In a policy document last week, OpenAI urged the U.S. government to support the development of U.S. AI, lest Chinese models match or surpass American ones in capability. In an interview with The Information, OpenAI's VP of policy Chris Lehane singled out High Flyer Capital Management, DeepSeek's corporate parent, as an organization of particular concern.
So far, at least three Chinese labs — DeepSeek, Alibaba, and Kimi, which is owned by Chinese unicorn Moonshot AI — have produced models that they claim rival o1. (Of note, DeepSeek was the first — it announced a preview of R1 in late November.) In a post on X, Dean Ball, an AI researcher at George Mason University, said that the trend suggests Chinese AI labs will continue to be “fast followers.”
“The impressive performance of DeepSeek’s distilled models […] means that very capable reasoners will continue to proliferate widely and be runnable on local hardware,” Ball wrote, “far from the eyes of any top-down control regime.”
Keep reading the article on Tech Crunch