Blue Diamond Web Services

Your Best Hosting Service Provider!

April 2, 2025

OpenAI’s o3 model might be costlier to run than originally estimated

When OpenAI unveiled its o3 “reasoning” AI model in December, the company partnered with the creators of ARC-AGI, a benchmark designed to test highly capable AI, to showcase o3’s capabilities. Months later, the results have been revised, and they now look slightly less impressive than they did initially.

Last week, the Arc Prize Foundation, which maintains and administers ARC-AGI, updated its approximate computing costs for o3. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, cost around $3,000 to solve a single ARC-AGI problem. Now, the Arc Prize Foundation thinks that the cost is much higher — possibly around $30,000 per task.

The revision is notable because it illustrates just how expensive today’s most sophisticated AI models may end up being for certain tasks, at least early on. OpenAI has yet to price o3 — or release it, even. But the Arc Prize Foundation believes OpenAI’s o1-pro model pricing is a reasonable proxy.

For context, o1-pro is OpenAI’s most expensive model to date.

“We believe o1-pro is a closer comparison of true o3 cost […] due to amount of test-time compute used,” Mike Knoop, one of the co-founders of The Arc Prize Foundation, told TechCrunch. “But this is still a proxy, and we’ve kept o3 labeled as preview on our leaderboard to reflect the uncertainty until official pricing is announced.”

A high price for o3 high wouldn’t be out of the question, given the amount of computing resources the model reportedly uses. According to the Arc Prize Foundation, o3 high used 172x more compute than o3 low, the lowest-compute configuration of o3, to tackle ARC-AGI.

Moreover, rumors have been flying for quite some time about pricey plans OpenAI is considering introducing for enterprise customers. In early March, The Information reported that the company may be planning to charge up to $20,000 per month for specialized AI “agents,” like a software developer agent.

Some might argue that even OpenAI’s priciest models will cost well under what a typical human contractor or staffer would command. But as AI researcher Toby Ord pointed out in a post on X, the models may not be as efficient. For example, o3 high needed 1,024 attempts at each task in ARC-AGI to achieve its best score.
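The revised figures can be sanity-checked with simple arithmetic. A rough sketch, using only the numbers reported above and assuming, purely for illustration, that cost scales linearly with compute:

```python
# Back-of-the-envelope check of the Arc Prize Foundation's revised o3 figures.
# Illustrative arithmetic only; the linear cost-per-compute assumption is ours,
# not the Foundation's.

COST_PER_TASK_O3_HIGH = 30_000   # revised estimate, USD per ARC-AGI task
ATTEMPTS_PER_TASK = 1_024        # attempts o3 high made at each task
COMPUTE_RATIO_HIGH_TO_LOW = 172  # o3 high vs. o3 low compute usage

cost_per_attempt = COST_PER_TASK_O3_HIGH / ATTEMPTS_PER_TASK
implied_o3_low_cost = COST_PER_TASK_O3_HIGH / COMPUTE_RATIO_HIGH_TO_LOW

print(f"~${cost_per_attempt:.2f} per attempt")
print(f"~${implied_o3_low_cost:.0f} per task for o3 low, if cost scales with compute")
```

On those assumptions, each of o3 high’s 1,024 attempts works out to roughly $29, and the cheapest configuration would still land in the neighborhood of $174 per task.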

Keep reading the article on Tech Crunch


OpenAI seeks to convene group to advise its nonprofit goals

As it prepares to transition from a nonprofit corporation to a for-profit, OpenAI says it’s convening a group of experts to “help OpenAI’s philanthropy understand the most urgent and intractable problems nonprofits face today.”

This group, which OpenAI says will incorporate feedback from “leaders and communities” in health, science, education, and public services, particularly within OpenAI’s home state of California, will be announced in April and submit insights to OpenAI’s board of directors in the next 90 days.

“[T]he Board will consider these insights in its ongoing work to evolve the OpenAI nonprofit well before the end of 2025,” OpenAI wrote in a blog post. “The Board recognizes the importance of engaging with the philanthropic community and those closest to the work to help inform how OpenAI’s philanthropy can best deploy its potentially historic resources.”

OpenAI was founded in 2015 as a nonprofit research lab. But as its experiments became increasingly capital intensive, it created its current structure, taking on outside investments from VCs and companies, including Microsoft.

OpenAI today has a for-profit org controlled by a nonprofit, with a “capped profit” share for investors and employees. But as alluded to in the blog post, the company’s intention is to transition its existing for-profit into a traditional corporation, with ordinary shares of stock. The nonprofit would receive billions of dollars to cede control.

The stakes are high for OpenAI to complete the conversion expeditiously. If it isn’t successful by the end of the year, at least one of its backers, SoftBank, could claw back billions of dollars in pledged capital.

Keep reading the article on Tech Crunch


Epic Games acquires Loci to introduce automated 3D tagging

Epic Games announced on Wednesday the acquisition of Loci, an AI platform for automated tagging of 3D assets. The deal will help creators with the labor-intensive process of tagging, as well as help detect potential intellectual property (IP) violations.

Loci, which uses computer vision models to understand 3D content, automatically tags 3D assets, making content easier to search, share, and discover. Because creators often have to manage large numbers of assets, and manual tagging is very time-consuming, the new integration is likely to be very helpful.

Not only will Loci’s technology address the challenges associated with manual tagging, but it’ll also help identify possible IP infringements. Fortnite has encountered issues in the past in which numerous players brought in elements from popular IPs, including Mario Kart and Shrek.

The AI technology will be integrated across the Epic ecosystem, including in Unreal Editor for Fortnite (UEFN) and Fab, the marketplace for selling and buying digital assets.

Deal terms were not disclosed.

Keep reading the article on Tech Crunch


DeepMind’s 145-page paper on AGI safety may not convince skeptics

Google DeepMind on Wednesday published an exhaustive paper on its safety approach to AGI, roughly defined as AI that can accomplish any task a human can.

AGI is a bit of a controversial subject in the AI field, with naysayers suggesting that it’s little more than a pipe dream. Others, including major AI labs like Anthropic, warn that it’s around the corner, and could result in catastrophic harms if steps aren’t taken to implement appropriate safeguards.

DeepMind’s 145-page document, which was co-authored by DeepMind co-founder Shane Legg, predicts that AGI could arrive by 2030, and that it may result in what the authors call “severe harm.” The paper doesn’t concretely define this, but gives the alarmist example of “existential risks” that “permanently destroy humanity.”

“[We anticipate] the development of an Exceptional AGI before the end of the current decade,” the authors wrote. “An Exceptional AGI is a system that has a capability matching at least 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks like learning new skills.”

Off the bat, the paper contrasts DeepMind’s treatment of AGI risk mitigation with Anthropic’s and OpenAI’s. Anthropic, it says, places less emphasis on “robust training, monitoring, and security,” while OpenAI is overly bullish on “automating” a form of AI safety research known as alignment research.

The paper also casts doubt on the viability of superintelligent AI — AI that can perform jobs better than any human. (OpenAI recently claimed that it’s turning its aim from AGI to superintelligence.) Absent “significant architectural innovation,” the DeepMind authors aren’t convinced that superintelligent systems will emerge soon — if ever.

The paper does find it plausible, though, that current paradigms will enable “recursive AI improvement”: a positive feedback loop where AI conducts its own AI research to create more sophisticated AI systems. And this could be incredibly dangerous, assert the authors.

At a high level, the paper proposes and advocates for the development of techniques to block bad actors’ access to hypothetical AGI, improve the understanding of AI systems’ actions, and “harden” the environments in which AI can act. It acknowledges that many of the techniques are nascent and have “open research problems,” but cautions against ignoring the safety challenges possibly on the horizon.

“The transformative nature of AGI has the potential for both incredible benefits as well as severe harms,” the authors write. “As a result, to build AGI responsibly, it is critical for frontier AI developers to proactively plan to mitigate severe harms.”

Some experts disagree with the paper’s premises, however.

Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.” Another AI researcher, Matthew Guzdial, an assistant professor at the University of Alberta, said that he doesn’t believe recursive AI improvement is realistic at present.

“[Recursive improvement] is the basis for the intelligence singularity arguments,” Guzdial told TechCrunch, “but we’ve never seen any evidence for it working.”

Sandra Wachter, a researcher studying tech and regulation at Oxford, argues that a more realistic concern is AI reinforcing itself with “inaccurate outputs.”

“With the proliferation of generative AI outputs on the internet and the gradual replacement of authentic data, models are now learning from their own outputs that are riddled with mistruths, or hallucinations,” she told TechCrunch. “At this point, chatbots are predominantly used for search and truth-finding purposes. That means we are constantly at risk of being fed mistruths and believing them because they are presented in very convincing ways.”

Comprehensive as it may be, DeepMind’s paper seems unlikely to settle the debates over just how realistic AGI is — and the areas of AI safety in most urgent need of attention.

Keep reading the article on Tech Crunch


Anthropic launches an AI chatbot plan for colleges and universities

Anthropic announced on Wednesday that it’s launching a new Claude for Education tier, an answer to OpenAI’s ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic’s AI chatbot, Claude, with a few additional capabilities.

One piece of Claude for Education is “Learning Mode,” a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Claude for Education may help Anthropic boost its revenue. The company already reportedly brings in $115 million a month, but it’s looking to double that in 2025 while directly competing with OpenAI in the education space. Anthropic has historically tended to match OpenAI’s offerings, and this launch is no exception.

Anthropic says Claude for Education comes with its standard chat interface, as well as “enterprise-grade” security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot.

To help universities integrate Claude into their systems, Anthropic says it’s partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck “full campus agreements” with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner — Anthropic says it’s working with the institution’s students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks.

Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI “builder” programs, to capitalize on the growing number of students using AI in their studies. A 2024 survey from the Digital Education Council found that 54% of university students use generative AI every week. Claude for Education deals could help Anthropic get more young people familiar with its tools, while well-funded universities pay for it.

It’s not yet clear what sort of impact AI might have on education — or whether it’s a desirable addition to the classroom. Research is mixed, with some studies finding that AI can be a helpful tutor and others suggesting it might harm critical thinking skills.

Keep reading the article on Tech Crunch

