Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
America’s AI war with China is intensifying — or at least, the rhetoric around it is.
On Tuesday, a U.S. congressional commission proposed a “Manhattan Project-style” effort to fund the development of AI systems with human-level — or superhuman — intelligence.
In its annual report, the U.S.-China Economic and Security Review Commission (USCC) recommended that policymakers authorize funding for “leading AI, cloud, and data center companies,” and direct the U.S. secretary of defense to ensure AI development receives “national priority.”
“We’ve seen throughout history that countries that are first to exploit periods of rapid technological change can often cause shifts in the global balance of power,” Reuters quoted USCC commissioner Jacob Helberg as saying. “China is racing towards [AI superintelligence]. … It’s critical that we take them extremely seriously.”
The USCC, established by Congress to provide recommendations on U.S.-China relations, tends to be hawkish in its proposals. But the commission isn’t alone in advocating for more aggressive actions to slow China’s tech ambitions.
Commerce Secretary Gina Raimondo, for example, has suggested the U.S. share AI technology with foreign allies to combat China’s rise. Defense Department officials, meanwhile, have called for safeguards to prevent technology leakage to China through overseas data centers and chip suppliers.
The U.S. has already adopted a number of policies aimed at curbing China’s AI progress, including export bans on AI hardware and restrictions on investments in AI tech in the region. China has circumvented some of these, but the impacts have been palpable and far-reaching. To give one example, China’s access to the most sophisticated chips required to train AI, including next-gen GPUs, has been largely cut off.
And in light of that, the USCC’s pronouncements seem like overkill.
It’s not clear what superintelligent AI would even look like. But assuming for a moment it involves so-called reasoning models, as some people suggest, Chinese labs appear to be lagging, not leading. According to one analysis, top Chinese companies’ models are about six to nine months behind their U.S. counterparts.
We must consider the possibility that the USCC’s recommendations are self-interested. Helberg is a senior adviser to the CEO of Palantir, a company with many AI defense contracts. And, naturally, government funding for AI would benefit U.S. AI companies.
That’s all to say, calls for a Manhattan Project-type program for superintelligent AI seem more alarmist than anything.
AI at Ignite: Microsoft announced a slew of AI products during Microsoft Ignite 2024 on Tuesday, including a voice cloner and an AI dev platform called Azure AI Foundry.
Advanced Voice Mode on the web: OpenAI has expanded ChatGPT’s Advanced Voice Mode feature to the web, letting users talk to the AI chatbot right from their desktop browser.
Indian news agency sues OpenAI: On the subject of OpenAI, one of India’s largest news agencies, Asian News International, has sued the startup in what could be a precedent-setting case over the use of copyrighted news content.
Gemini gets memory: Google’s Gemini chatbot can now remember things like info about your life, work, and personal preferences during conversations.
U.K. green-lights Anthropic investment: The U.K.’s Competition and Markets Authority has okayed Alphabet’s partnership with, and investment in, AI rival Anthropic, concluding that the deal doesn’t qualify for investigation under current merger rules.
Perplexity launches shopping: AI-powered search engine Perplexity debuted a feature that offers e-commerce recommendations, as well as the ability to place an order without navigating to a retailer’s website. It seems like Stripe is doing the heavy lifting here, though.
Altman joins team SF: San Francisco’s mayor-elect, Daniel Lurie, has tapped OpenAI CEO Sam Altman to help run his transition team. Alongside nine other San Francisco leaders, Altman will provide guidance to Lurie’s team on ways the city can innovate.
New Mistral models: French AI startup Mistral released major new products and tools this week, including a “canvas” feature in its chatbot platform that lets users transform and edit content, like web mock-ups.
The U.K. AI Safety Institute, a U.K. government body that studies risks in AI systems, has released its first academic paper, which proposes a way AI developers can demonstrate that their models don’t pose “unacceptable cyber risks.”
In the paper, the AI Safety Institute co-authors note that “safety cases” — structured, substantiated arguments for why risks associated with a model are acceptable — are gaining traction. Yet there isn’t a “readily available” safety case methodology for frontier AI.
The co-authors propose a safety case template focusing on cyber capabilities, which they assert have well-established near-term risks. The template is designed to inform deployment decisions, they say, including whether to start or continue a model’s training run.
“This template serves as a proof of concept,” the co-authors wrote. “It does not guarantee safety; some of the claims in our template could fail to hold true in reality, invalidating the conclusion. Still, we expect that even these imperfect safety cases serve to increase the level of rigor in reasoning about development or deployment decisions.”
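To make the idea a little more concrete, here’s a minimal sketch in Python that treats a safety case as structured data: a top-level goal backed by claims, each of which has to be substantiated by evidence. The class names, fields, and example evaluation are illustrative assumptions, not the institute’s actual template.

```python
# Illustrative only: a toy rendering of a "safety case" as structured data.
# The class names, fields, and example evidence below are assumptions for
# the sake of illustration, not the AI Safety Institute's actual template.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    description: str      # e.g., results of a cyber-capability evaluation
    supports_claim: bool  # whether the evidence actually backs the claim


@dataclass
class Claim:
    statement: str        # e.g., "the model cannot autonomously find exploits"
    evidence: list[Evidence] = field(default_factory=list)

    def is_substantiated(self) -> bool:
        # A claim counts as substantiated only if it has evidence
        # and every piece of evidence supports it.
        return bool(self.evidence) and all(e.supports_claim for e in self.evidence)


@dataclass
class SafetyCase:
    top_level_goal: str   # e.g., "deployment poses no unacceptable cyber risk"
    claims: list[Claim]

    def conclusion_holds(self) -> bool:
        # The overall argument stands only if every supporting claim holds.
        return all(c.is_substantiated() for c in self.claims)


if __name__ == "__main__":
    case = SafetyCase(
        top_level_goal="Deploying the model poses no unacceptable cyber risk",
        claims=[
            Claim(
                statement="The model cannot autonomously discover novel exploits",
                evidence=[Evidence("Scored below expert baseline on capture-the-flag evals", True)],
            ),
        ],
    )
    print("Conclusion holds:", case.conclusion_holds())
```

The all-or-nothing checks are meant to mirror the co-authors’ caveat above: if even one claim in the case fails to hold in reality, the overall conclusion is invalidated.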
Suno, the controversial generative music startup, released its latest music-generating model today, Suno v4.
Suno claims that v4, which is only available to the platform’s paying users, delivers crisper audio, better lyrics, and “more dynamic” song structures than its predecessor, v3. Suno’s v4 now powers the company’s Covers feature, which “reimagines” uploaded audio, and Personas, which captures the vocals, style, and “vibe” of a track and carries it into other creations.
It’s remarkable, in many ways, that Suno is charging ahead given that it’s been sued by three major record labels for alleged copyright infringement. Sony Music Entertainment, Universal Music Group, and Warner Music Group filed a lawsuit against Suno and rival firm Udio this summer, alleging that the pair trained their models on music without permission.
In their responses to the lawsuits, Suno and Udio more or less admitted that their models might’ve ingested copyrighted music during training — but they argued that fair use doctrine under U.S. copyright law shields them.
HarperCollins has inked a three-year data licensing deal with Microsoft to let the tech giant train its AI on the publisher’s nonfiction works.
HarperCollins, whose parent company, News Corp., has a similar agreement in place with OpenAI, says that authors will have to opt in and that the deal only covers “select nonfiction backlist titles.”
Authors aren’t pleased — and it hasn’t helped that the payouts HarperCollins is offering are measly. One author, Daniel Kibblesmith, says he was offered a flat $2,500 per book.
“I’d probably do it for a billion dollars,” Kibblesmith wrote in a post on Tuesday. “I’d do it for an amount of money that wouldn’t require me to work anymore, since that’s the end goal of this technology.”