A new kids’ show will come with a crypto wallet when it debuts this fall

A new animated kids’ series expected to premiere this year won’t be headed for a TV network. Or a streaming service. Instead, the founders of production studio We Ghosted Media plan to launch on a decentralized web platform that uses blockchain technology.

And yes, a crypto wallet will be involved. 

We Ghosted Media — founded by Chris Jammal, an assistant director for “Bob’s Burgers,” and Jaclynn Demas, producer of hit children’s show “Peg + Cat” — is a TV production studio abandoning traditional show release methods in favor of a decentralized approach, commonly referred to as Web3.

The studio announced Friday it was teaming up with Lamina1 to launch the new animated kids’ series, titled “Owen Nowhere.”

Lamina1 was founded by “Snow Crash” author Neal Stephenson and launched in 2022 as a Layer 1 blockchain platform designed to give creators an environment to protect, control, and monetize their intellectual property. Lamina1’s overarching mission, however, is to build an open metaverse. Stephenson’s vision of the metaverse — a concept he coined in his acclaimed 1992 novel — consists of a virtual world where users get their own lifelike 3D avatar.

Blockchain technology and the metaverse are buzzwords in the tech world, but both have been slow to achieve mass adoption. Introducing a kids’ show in this space is particularly bold, considering the production studio will have to figure out how kids will navigate a platform that requires a crypto wallet.

But Jammal and Demas are banking on the freedom of a decentralized platform, which allows the audience to interact and even participate, as a selling point that will win over users.


The new show centers around Owen B. Gloom, a preteen aspiring content creator on a family road trip, documenting their visits to unusual tourist attractions, starting with the world’s largest. The family’s dynamic is funny, sweet, and slightly dysfunctional, featuring Owen’s adoptive vampire parents, a magical transforming vehicle, a pet cat, and a fish in a stroller. 

But as Jammal and Demas told TechCrunch, this is more than a show. It’s really about their mission to set a “new standard for the future of children’s entertainment in the decentralized era.” 

The project will be developed and viewable on Lamina1’s yet-to-be-launched Spaces, an offering that enables creators to build their own virtual worlds. In these worlds, they can create interactive experiences, digital items, and content in various formats, including 2D, 3D, augmented reality (AR), and virtual reality (VR).

Jammal and Demas envision “Owen Nowhere” as an immersive experience that allows fans to engage with the world and contribute their ideas for the series. 

The virtual space will also include exclusive behind-the-scenes content, collectible digital assets, and online community-driven experiences like voting. The studio believes that the most attractive feature is the opportunity for viewers to make key decisions for the story, such as suggesting destinations for the family’s adventures.

“We were thinking [fans could] vote for where the Glooms can travel next. Do you want them to come to your hometown? Maybe they want to buy that souvenir that Owen picked up at the Grand Canyon [as] their own digital asset. Maybe they want to change his outfit. There are so many possibilities of how this can go,” Jammal said. 


While it’s clear that this show has all the ingredients to resonate with viewers and hold their attention, there will be challenges, including convincing parents to manage a crypto wallet for their child.

Parents may worry that introducing kids to this ecosystem, even indirectly, could expose them to financial manipulation or loss, even if the parents are the ones in control of the wallet.

Some parents, however, are more open to the idea; a few have even sent their five-year-olds to crypto summer camps. In 2022, Zigazoo introduced NFTs for several IPs, including CoComelon.

“It’s a big topic of discussion. It’s like, ‘What permissions do we need in place around it?’” Lamina1 CEO Rebecca Barkin said, adding, “I won’t tell you that we have the perfect answer right now…we’re going to learn real fast as this develops, what protections need to be put in place.”

Owen Nowhere’s digital assets are positioned as a way for fans to get involved in the show and contribute financially to its production by owning digital collectibles, including artwork, characters, and outfits, fostering a community of supporters who are invested in its success.

“That token can be used as a loyalty token, it doesn’t have to be about cash and trading and the traditional crypto stuff. It’s about token-gated access and rewarding those who are sharing things, who are making really creative contributions to the community,” Barkin explained. 

While the new series is primarily aimed at kids and pre-teens, it’s also designed to appeal to adults. This is similar to how “Bob’s Burgers” attracts many adult fans through its hilarious storylines about parenting.

“We’re not going after that super young demographic,” said Barkin.

Nonetheless, they may need to approach this with transparency and possibly even parental controls to appeal to their entire audience. 

Lamina1’s Spaces product is slated to launch in the fall. Another virtual world launching on Spaces is “Artefact,” a project by visual effects company Wētā, known for its work on the “Lord of the Rings” film trilogy.

Lamina1 has raised $9 million to date from notable investors and angels, such as LinkedIn co-founder Reid Hoffman and Bloq co-founder Matthew Roszak.


Techstars increases startup funding to $220,000, mirroring YC structure

Techstars, a nearly 20-year-old startup accelerator, announced new terms for startups that enter its three-month program. The organization will now invest $220,000, which is $100,000 more than it offered previously, in companies starting with its fall 2025 batch.

The capital will be divided into two components. The group is offering companies $20,000 in exchange for 5% ownership in the business. Startups will also receive $200,000 in the form of an uncapped SAFE note with a “most favored nation” clause. Put more simply, Techstars’ percentage ownership from the $200,000 SAFE will depend on the company’s subsequent valuation. For example, if the startup’s next financing “prices” it at $10 million, Techstars will receive 2% equity on the SAFE component, for a total of 7% ownership.
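For a rough sense of how that math works out, here is a minimal sketch in Python, assuming the uncapped SAFE simply converts at the next priced round’s valuation and ignoring pre-/post-money mechanics and later dilution:

```python
def techstars_ownership(next_round_valuation: float) -> float:
    """Rough estimate of Techstars' total stake under the new terms.

    Assumes the $200,000 uncapped SAFE converts at the next priced
    round's valuation; real SAFE conversions involve pre-/post-money
    and dilution details this sketch ignores.
    """
    base_equity = 0.05                            # 5% for the $20,000 cash investment
    safe_equity = 200_000 / next_round_valuation  # uncapped SAFE converts at the priced round
    return base_equity + safe_equity


# The example from the article: a $10 million priced round
print(f"{techstars_ownership(10_000_000):.1%}")  # 7.0%
```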

Techstars’ new terms now closely mirror those of Y Combinator.  The famed Silicon Valley accelerator increased its funding to startups three years ago by adding a $375,000 SAFE note to its standard deal of $125,000 for 7% of the startup’s equity.

So, which accelerator is offering a better deal for startups? The answer largely depends on the company’s capital needs. Compared to Techstars, startups going through YC get more than double the funding but give up more equity.


OpenAI’s new reasoning AI models hallucinate more

OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up — in fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, impacting even today’s best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up reasoning models. O3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they “make more claims overall,” they’re often led to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA — hallucinating 48% of the time.

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 has a tendency to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in their coding workflows, and that they’ve found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.

Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting the accuracy of models is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA. Potentially, search could improve reasoning models’ hallucination rates, as well — at least in cases where users are willing to expose prompts to a third-party search provider.

If scaling up reasoning models indeed continues to worsen hallucinations, it’ll make the hunt for a solution all the more urgent.

“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the last year, the broader AI industry has pivoted to focus on reasoning models after techniques to improve traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet it seems reasoning also leads to more hallucinating — presenting a challenge.


Ryan Coogler’s X-Files Reboot Is Apparently Still Happening

Amid all the hubbub surrounding Ryan Coogler’s new vampire thriller Sinners, starring Michael B. Jordan, the director wants to remind folks that his long-teased X-Files reboot is still on his to-do list. It’s at the very top of that list, in fact.

Speaking on a recent episode of Last Podcast on the Left (via Screen Rant), Coogler said he plans to hit the ground running, working on The X-Files once the dust has settled from his Sinners press tour.

“I’m working on X-Files. That’s what’s immediately next,” Coogler said. “So, I’ve been excited about that for a long time, and I’m fired up to get back to it, and that, you know, some of those episodes, if we do our jobs right, will be really fucking scary.”

While not much is known about what Coogler’s take on the iconic sci-fi series will be, show creator Chris Carter previously said the Black Panther director plans to remount The X-Files with a diverse cast.

Since the first major update on the project came in 2021, when Coogler’s production company, Proximity, inked a five-year deal to create television content for Disney’s networks and streaming platforms, it’s understandable that fans might have assumed the reboot had quietly faded away. And if you need another reminder of how fast time flies, the last time The X-Files revival brought Agents Dana Scully and Fox Mulder back to our screens was all the way back in 2018.

While Gillian Anderson has gone on record as not being super interested in yet another round playing the series’ resident skeptic, she has also expressed she wouldn’t say no to returning to a series helmed by Coogler, whom she described as “a bit of a genius” in a 2024 appearance on the Today show.

“Whether I am involved in it is a whole other thing. I’m not saying no. I think he’s really cool and I think if he did it, it would probably be done incredibly well,” Anderson said at the time. “And maybe I’ll pop in for a little something something.”

When asked if X-Files fans could hope for Gillian Anderson’s return, after her character’s less-than-ideal treatment in the revival, Ryan Coogler didn’t make any promises. Instead, he shared his excitement for her upcoming sci-fi role in Tron: Ares. Still, he left fans with a sliver of hope on the front.

“I’ve spoken to the great Gillian,” Coogler told Last Podcast on the Left. “She’s incredible. Fingers crossed there … When I spoke to her, she was finishing [Tron: Ares] up. But, yeah, but we’re gonna try to make something really great … and really be something for the real X-Files fans, you know what I’m saying? And, maybe, find some new ones.”

Regardless of whether Scully returns, Coogler is bound to deliver something compelling with The X-Files. His knack for putting a fresh spin on familiar tales—whether it’s Marvel superheroes or the timeless allure of sexy vampires—speaks for itself.


ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.

OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as a request from Elon Musk for an injunction to halt OpenAI’s transition to a for-profit.

In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.

To see a list of 2024 updates, go here.

Timeline of the most recent ChatGPT updates

April 2025

OpenAI could “adjust” its safeguards if rivals release “high-risk” AI

OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how much pressure commercial AI developers are under to release models quickly amid increased competition.

OpenAI is building its own social media network

OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

OpenAI will remove its largest AI model, GPT-4.5, from the API in July

OpenAI will remove its largest AI model, GPT-4.5, from its API even though it launched only in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.

OpenAI unveils GPT-4.1 AI models that focus on coding capabilities

OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

OpenAI will discontinue ChatGPT’s GPT-4 at the end of April

OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, according to a changelog. The change will take effect on April 30. GPT-4 will remain available via OpenAI’s API.

OpenAI could release GPT-4.1 soon

OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

OpenAI has updated ChatGPT to use information from your previous conversations

OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

OpenAI is working on watermarks for images made with ChatGPT

It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

OpenAI offers ChatGPT Plus for free to U.S., Canadian college students

OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

ChatGPT users have generated over 700M images so far

More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images.

OpenAI’s o3 model could cost more to run than initial estimate

The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to address a single problem. The foundation now thinks the cost could be much higher, possibly around $30,000 per task.

OpenAI CEO says capacity issues will cause product delays

In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

March 2025

OpenAI plans to release a new ‘open’ AI language model

OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT’s restrictions on image generation

OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

OpenAI adopts Anthropic’s standard for linking AI models with data

OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.

The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to $12.7 billion this year

OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

ChatGPT has upgraded its image-generation feature

OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

OpenAI announces leadership updates

Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI’s AI voice assistant now has advanced features

OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

OpenAI, Meta in talks with Reliance in India

OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations

Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

OpenAI upgrades its transcription and voice-generating AI models

OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of what was already there and that they hallucinate less.

OpenAI has launched o1-pro, a more powerful version of its o1

OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1.
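To put that pricing in concrete terms, here is a small, hypothetical cost calculation using only the per-token rates quoted above (actual bills may differ, for example because of cached or reasoning tokens):

```python
def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate an o1-pro API bill from the quoted rates:
    $150 per 1M input tokens, $600 per 1M output tokens."""
    return (input_tokens / 1_000_000) * 150 + (output_tokens / 1_000_000) * 600


# Example: a 50,000-token prompt that produces a 10,000-token answer
print(f"${o1_pro_cost(50_000, 10_000):.2f}")  # $13.50
```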

OpenAI research lead Noam Brown thinks AI “reasoning” models could’ve arrived decades ago

Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

OpenAI says it has trained an AI that’s “really good” at creative writing

OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.

OpenAI launches new tools to help businesses build AI agents

OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.

OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’

OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.

ChatGPT can directly edit your code

The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains IDEs. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases

According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but less than six months to double that number again, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

February 2025

OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release

OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. 

ChatGPT may not be as power-hungry as once assumed

A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.

OpenAI now reveals more of its o3-mini model’s thought process

In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

You can now use ChatGPT web search without logging in

OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

OpenAI unveils a new ChatGPT agent for ‘deep research’

OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

January 2025

OpenAI used a subreddit to test AI persuasion

OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 

OpenAI launches o3-mini, its latest ‘reasoning’ model

OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

ChatGPT’s mobile users are 85% male, report says

A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

OpenAI launches ChatGPT plan for US government agencies

OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.

More teens report using ChatGPT for schoolwork, despite the tech’s faults

Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

OpenAI says it may store deleted Operator data for up to 90 days

OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.

OpenAI launches Operator, an AI agent that performs tasks autonomously

OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

OpenAI may preview its agent tool for users on the $200-per-month Pro plan

Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

OpenAI tests phone number-only ChatGPT signups

OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

ChatGPT now lets you schedule reminders and recurring tasks

ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’

OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

FAQs:

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?

Yes. In addition to the paid ChatGPT Plus plan, there is a free version of ChatGPT that only requires signing in.

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions.

What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard them into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?

Yes.

Can ChatGPT commit libel?

Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using one to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?

Yes, it was released March 1, 2023.

What are some sample everyday uses for ChatGPT?

Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

What are some advanced uses for ChatGPT?

Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. However, OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”

In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

There have also been cases of ChatGPT accusing individuals of false crimes.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.


Bluesky may soon add blue check verification

Bluesky may soon get a new blue checkmark verification system, according to changes to the app’s public GitHub repository spotted Friday by reverse engineer alice.mosphere.at.

The blue checks may have a similar look to the system pioneered by Twitter, now X, but Bluesky’s version seems like it will work quite differently.

Bluesky’s blue check system may rely on multiple organizations to distribute blue checks, according to the codebase changes. That suggests Bluesky will actively verify notable accounts, but also label certain organizations as “trusted verifiers,” and give them the authority to directly issue blue checks themselves.

The changes to Bluesky’s verification system may be announced as soon as Monday, according to a blog post link found in Friday’s pull request titled “verification,” which is dated April 21, 2025.

While Bluesky already lets users verify themselves by tying their accounts to official websites, CEO Jay Graber has hinted the company would try other types of verification. Last year, Graber said Bluesky may experiment with a system where it’s not the only group that can verify users.

The pull request also shows an icon, a blue circle containing a white checkmark, that will appear on verified users’ profiles. Meanwhile, trusted verifiers will have scalloped blue circles containing a white checkmark on their profiles.

A blog post spotted in Bluesky’s GitHub repo by an occasional reverse engineer. (Credit: alice.mosphere.at)

An image spotted in Bluesky’s forthcoming announcement suggests The New York Times, and other trusted news publishers, may soon have the ability to verify users in the blue check system. By tapping on a user’s blue check, other users can see which organizations have granted verification, according to the changes.

How the verification system will work (Credit: alice.mosphere.at)

Bluesky’s approach to verification is a lot different from how X operates its verification services. While X used to distribute blue checks to popular, authentic accounts, Elon Musk overhauled the system so that it only verifies users who pay for a monthly subscription. Musk has since walked back that decision, giving blue checks to some influential users who don’t pay for them, while still allowing other people to pay for verification.

Some have argued that X has diluted the value of a blue check on its platform altogether, even allowing some bot accounts to be verified.

Bluesky did not immediately respond to TechCrunch’s request for comment.

Bluesky seems to be taking a decentralized approach to verification by spreading out the decision-making power to several organizations. That could mean a lot of users on Bluesky end up verified, but it remains to be seen how this approach will work in practice.


Trump’s DOJ Sends Intimidating Letters to Medical Journals for Supposedly Being ‘Partisan’

The U.S. Department of Justice has sent letters to medical journals for supposedly being “partisan,” heavily suggesting that they’re spreading misinformation and are influenced by “funders” rather than medical science. Gizmodo has confirmed that the journal CHEST, a peer-reviewed publication on pulmonary care published by the American College of Chest Physicians, received a letter this week. And MedPage Today reports that at least two other journals have received similar intimidating letters.

One of the DOJ letters, signed by the interim U.S. Attorney for the District of Columbia, Edward R. Martin Jr., first went viral on social media Thursday and is addressed to the editor-in-chief of the journal CHEST, Dr. Peter Mazzone.

“It has been brought to my attention that more and more journals and publications like CHEST Journal are conceding that they are partisans in various scientific debates,” the letter from Martin explains, “that is, that they have a position for which they are advocating either due to advertisement (under postal code) or sponsorship (under relevant fraud regulations). The public has certain expectations and you have certain responsibilities.”

The letter then goes on to list several questions, none of which are the purview of the U.S. Department of Justice, given the free speech protections afforded by the First Amendment of the U.S. Constitution:

  • How do you assess your responsibilities to protect the public from misinformation?
  • How do you clearly articulate to the public when you have certain viewpoints that are influenced by your ongoing relations with supporters, funders, advertisers, and others?
  • Do you accept articles or essays from competing viewpoints?
  • How do you assess the role played by government officials and funding organizations like the National Institutes of Health in the development of submitted articles?
  • How do you handle allegations that authors of works in your journals may have misled their readers?

The letter went on to say, “I am also interested to know if publishers, journals, and organizations with which you work are adjusting their method of acceptance of competing viewpoints. Are there new norms being developed and offered?” while ending with a request for a response by May 2.

Reached for comment over email, a spokesperson for CHEST confirmed the letter’s authenticity, while noting, “its content was posted online without our knowledge.” The journal also posted a public statement online Friday affirming its editorial standards but didn’t address the DOJ letter specifically.

“In its 90-year history, CHEST has published numerous articles that were breakthroughs in scientific research and clinical treatment, advancing the medical profession and improving the health and well-being of patients worldwide,” the journal said in a statement published online Friday.

The journal went on to stress the editorial standards the publication adheres to, including the ICMJE and COPE guidelines.

“CHEST adheres to the International Committee of Medical Journal Editors (ICMJE) and the COPE ethical guidelines for scholarly publishing, applying strict peer review standards to ensure scientific rigor,” the statement continued. “As the publisher, the American College of Chest Physicians respects and supports the journal’s editorial independence.”

The spokesperson told Gizmodo over email that legal counsel is currently reviewing the DOJ request, and the journal has no further comment at this time. It’s not yet clear which other journals cited by MedPage Today may have received the letters. The U.S. Attorney’s Office for the District of Columbia didn’t immediately respond to questions over email Friday.

President Donald Trump has launched a war of retribution against perceived enemies, recently ordering the Department of Justice to investigate Chris Krebs, a cybersecurity expert whom Trump dislikes because he denied false claims by the president that the 2020 election had been rigged or stolen. Trump has also extorted several law firms for over a billion dollars in pro bono work, some of whom really regret their decisions to play ball, according to a recent report from the New York Times. Trump has also threatened to pull funding from major universities if they didn’t bend to his will, with many like Columbia University capitulating. Harvard University has pushed back, and Trump has reportedly told the IRS to strip the institution of its tax-exempt status.

The Society for the Rule of Law, a conservative legal group, has made a request to the Office of Disciplinary Counsel for the D.C. Court of Appeals for an investigation into several of Martin’s actions since he was appointed by President Trump to be the acting U.S. Attorney. Martin has announced investigations against his political opponents, tried to intimidate a law firm that represented Special Counsel Jack Smith, and has demonstrated “a fundamental misunderstanding of the role of a federal prosecutor,” according to the group.

The ACLU of D.C. also sent a letter back in February outlining its concerns about how Martin has threatened people over protected speech, specifically when the attorney promised legal action against anyone who takes actions to impede Elon Musk’s so-called Department of Government Efficiency. Musk has been unlawfully destroying the federal government, unilaterally dismantling entire agencies like USAID and cutting funding for vital services, despite not having the authority to do so. But Martin appeared to threaten anyone who hindered their activities, even by just speaking out against them.

Trump is clearly trying to establish the U.S. as a fascist state, as he breaks vital government services, threatens American allies with invasion, and ships people convicted of no crime to a torture prison in El Salvador without due process. Trump has clearly ignored a Supreme Court order to return one man in particular from the CECOT prison, Kilmar Abrego Garcia, but the regime refuses to comply.

And if history has taught us anything, it’s that the ideology of fascism is all-encompassing.  Yes, they’re coming for the medical journals now, as ridiculous as that sounds. And every Trump supporter who thinks they’ll escape fascism’s wrath is sorely mistaken. In this kind of system, everyone is eventually made to prove their fealty to the leader. Anyone who makes a mistake or steps out of line, no matter how much they think they’ve proved themselves, will still feel the consequences.

TechCrunch Mobility: Lyft buys its way into Europe, Kodiak SPACs, and how China’s new ADAS rules might affect Tesla

Welcome back to TechCrunch Mobility — your central hub for news and insights on the future of transportation. Sign up here for free — just click TechCrunch Mobility!

Enough with my typical small talk. Let’s jump into the news right away this week. And there’s plenty of it, including Lyft’s entry into Europe, AV startup Nuro heading to Japan, the first drive of the Lucid Gravity SUV, a few New York International Auto Show highlights, and a SPAC. 

Yes, the SPAC is back. Or did this financial instrument really ever fade away? 

Let’s go. 

A little bird


A little bird told us that some people are hoping to get ahead of the many, many hurdles eVTOLs need to jump before there can be “highways in the skies.” This includes working with real estate owners in rural areas to set up vertiports and charging infrastructure. The pitch? Adding that infrastructure has the potential to increase your property value in the future.

We’re digging into this to find out more!

Got a tip for us? Email Kirsten Korosec at [email protected] or my Signal at kkorosec.07, Sean O’Kane at [email protected], or Rebecca Bellan at [email protected]. Or check out these instructions to learn how to contact us via encrypted messaging apps or SecureDrop.

Deals!


I expected a few more IPOs in 2025 than the prior year, but SPACs? Say it ain’t so. And yet, here we are with a fresh merger between autonomous vehicle technology startup Kodiak Robotics and special purpose acquisition company Ares Acquisition Corporation II.

The transaction values Kodiak, which has raised around $243 million to date, at about $2.5 billion pre-money. New and existing Kodiak institutional investors, like Soros Fund Management, ARK Investments, and Ares, have funded or committed over $110 million in financing to support the transaction, as well as about $551 million of cash held in trust.

I spoke to Kodiak founder and CEO Don Burnette and asked the obvious question: A SPAC? Now? Why? 

“Kodiak, now that we have launched driverless, we have our vehicles on the road, we have driverless revenue coming in,” Burnette told me. “We think now is the time for growth. We want to take advantage of the tailwinds we’re seeing in the markets.”

Tailwinds? I asked him to explain, and Burnette said he meant tailwinds in the autonomy sector, not in the broader economic markets.

“Obviously, we’re seeing short-term volatility — that’s an understatement,” he said. “But we’re really thinking about this as a long-term thesis of transforming the transportation markets, using AI, using technology, and through automation. It’s something I’ve always believed in.”

Other deals that got my attention …

Conifer, a startup developing electric hub motors that are free of rare earth elements, raised a $20 million seed round from investors, including True Ventures, MaC Ventures, MFV Partners, and others. True Ventures’ Rohit Sharma has joined Conifer’s board.

Kavak, a Mexico-based online used car dealer, raised $127 million in an equity round, cutting its valuation to $2.2 billion from $6.5 billion. The round was co-led by SoftBank Group Corp and General Atlantic. The company also secured $400 million in new debt.

Lyft agreed to acquire FreeNow, a German multi-mobility app with ride-hail at its core, from BMW and Mercedes-Benz Mobility for about $197 million in cash. The acquisition opens up the European market to Lyft for the first time.

Nyobolt, a British EV charging startup, raised $30 million in funding, led by IQ Capital and Latitude. Strategic partners, including Scania Invest and Takasago Industry, also participated.

Notable reads and other tidbits

Image Credits:Bryce Durbin

ADAS

China is cracking down on how automakers advertise driver-assistance features, banning terms like “autonomous driving,” “self-driving,” and “smart driving.” If you immediately thought of Tesla and its “Full Self-Driving” software, it’s worth noting the automaker has changed the branding. But Tesla and others will be affected by rules around over-the-air software updates for ADAS, which now require testing and government approval.

Autonomous vehicles

Nuro will begin mapping and collecting data in Japan using autonomous vehicles (retrofitted Prius cars) from its U.S. fleet. 

Waymo and Uber are preparing to launch their joint robotaxi service in Atlanta this summer. Uber opened up an “interest list” this week to customers in Atlanta. The two companies launched the “Waymo on Uber” service in Austin in March. Data from market analytics firm YipitData shows Waymo robotaxis made up about 20% of rides offered by Uber in Austin in the last week of March.

Zoox has partnered with Stingray Music, which will offer a curated selection of 16 stations via the touchscreen inside the robotaxis. Yet another sign Zoox is getting ready to launch commercially.

Electric vehicles, charging, & batteries

Lime will send batteries used in its scooters and e-bikes to Redwood Materials, which will extract and recycle critical minerals such as lithium, cobalt, nickel, and copper.

Kia debuted its 2026 EV4 sedan at the New York International Auto Show. This is the company’s first global electric sedan and one designed for customers looking for an affordable EV. Will Americans buy it?

TechCrunch contributor Abigail Bassett spent a day driving the new all-electric Lucid Gravity SUV. Read the full review and find out why she wrote that the Gravity is “over-engineering at its best.”

Rivian’s first non-Amazon van customer is HelloFresh.

Subaru unveiled its second EV, a wagon-like SUV called the Trailseeker that, like its predecessor the Solterra, includes a bit of Toyota handiwork.

Future of flight

Archer Aviation unveiled its proposed air taxi network for New York City in partnership with United Airlines, which would allow passengers to tack on an Archer ride to their traditional airline tickets. 

Ride-hailing 

India’s market regulator launched an investigation into Gensol Engineering after finding alleged misuse of electric vehicle loans. BluSmart, a ride-hailing startup connected to Gensol that was once seen as an emerging Uber rival in the South Asian market, has also been swept up into the investigation. And now it seems BluSmart has suspended services in some Indian cities. 

Security

Hertz customers have been notified of a data breach that included their personal information and driver’s licenses. Hertz attributed the breach to a vendor, software maker Cleo, which last year was at the center of a mass-hacking campaign by a prolific Russia-linked ransomware gang.

Keep reading the article on Tech Crunch

White House replaces covid.gov website with ‘lab leak’ theory

The government-run website covid.gov used to host information about COVID-19 vaccines, testing, and treatment. Now, under President Trump’s purview, the page redirects to a White House website espousing the unproven theory that COVID-19 originated in a Chinese laboratory.

The theory, which many virologists dispute, was endorsed in a report by House Republicans last year that concluded the pandemic began with a lab leak in China. House Democrats released a rebuttal at the time, stating the probe failed to determine Covid’s true origins.

The covidtests.gov website, where people could previously order free coronavirus tests, also redirects to this new page.

The White House’s new website also includes medical disinformation about the treatment of the virus, falsely claiming that social distancing, mask mandates, and lockdowns are not effective at mitigating the spread of COVID-19. However, hundreds of studies have shown that these preventative measures do, in fact, reduce the transmission of respiratory infections like COVID-19.

In the months since Trump reassumed the presidency, numerous government websites have been edited to reflect the agenda of his administration. With the help of Elon Musk’s DOGE, the government has attempted to remove hundreds of words related to diversity from government documents, including words like “Black,” “disability,” “diversity,” “gender,” “racism,” “women,” and more. The government has also removed mentions of scientifically proven climate change from environmental websites.

Keep reading the article on Tech Crunch

ChatGPT is referring to users by their names unprompted, and some find it ‘creepy’

Some ChatGPT users have noticed a strange phenomenon recently: Occasionally, the chatbot refers to them by name as it reasons through problems. That wasn’t the default behavior previously, and several users claim ChatGPT is mentioning their names despite never having been told what to call them.

Reviews are mixed. One user, software developer and AI enthusiast Simon Willison, called the feature “creepy and unnecessary.” Another developer, Nick Dobos, said he “hated it.” A cursory search of X turns up scores of users confused by — and wary of — ChatGPT’s first-name basis behavior.

“It’s like a teacher keeps calling my name, LOL,” wrote one user. “Yeah, I don’t like it.”

It’s not clear when, exactly, the change happened, or whether it’s related to ChatGPT’s upgraded “memory” feature that lets the chatbot draw on past chats to personalize its responses. Some users on X say ChatGPT began calling them by their names even though they’d disabled memory and related personalization settings.

OpenAI hasn’t responded to TechCrunch’s request for comment.

In any event, the blowback illustrates the uncanny valley OpenAI might struggle to overcome in its efforts to make ChatGPT more “personal” for the people who use it. Last week, the company’s CEO, Sam Altman, hinted at AI systems that “get to know you over your life” to become “extremely useful and personalized.” But judging by this latest wave of reactions, not everyone’s sold on the idea.

An article published by The Valens Clinic, a psychiatry office in Dubai, may shed some light on the visceral reactions to ChatGPT’s name use. Names convey intimacy. But when a person — or chatbot, as the case may be — uses a name a lot, it comes across as inauthentic.

“Using an individual’s name when addressing them directly is a powerful relationship-developing strategy,” writes Valens. “It denotes acceptance and admiration. However, undesirable or extravagant use can be looked at as fake and invasive.”

In a similar vein, perhaps another reason many people don’t want ChatGPT using their name is that it feels ham-fisted — a clumsy attempt at anthropomorphizing an emotionless bot. In the same way that most folks wouldn’t want their toaster calling them by their name, they don’t want ChatGPT to “pretend” it understands a name’s significance.

This reporter certainly found it disquieting when o3 in ChatGPT earlier this week said it was doing research for “Kyle.” (As of Friday, the change seemingly had been reverted; o3 called me “user.”) It had the opposite of the intended effect — poking holes in the illusion that the underlying models are anything more than programmable, synthetic things.

Keep reading the article on Tech Crunch

New Toxic Avenger Trailer Takes Gory Aim at Health Care

The upcoming Toxic Avenger reboot is months away, and each new glimpse of it has been pretty goofy, pretty gross, or both. Its newest promo falls into that last category—silly and bloody as all hell, and it’ll further fuel your distaste for the healthcare industry this year.

Seeing a doctor and getting the meds that keep you healthy (or at the very least, alive) can be a trying thing, even when the doctor actually wants to help. That’s not the case for Winston Gooze (Peter Dinklage): years of working as a janitor have left him sick with [jackhammer noise], and his only hope is a super expensive treatment he might have to pay for out of pocket.

When he asks his boss (Kevin Bacon) for help, Winston is thrown into toxic waste. Miraculously, he survives, but comes out of it severely mutated. The only thing on his mind is reconnecting with his stepson Wade (Jacob Tremblay) and getting violent payback against his ex-employer, or anyone else who pisses him off.

Directed by Macon Blair, Toxic Avenger is a reboot of the original 1984 Troma classic. It’s taken a while for this to get released, and when we reviewed it back in 2023, we thought it was stupid and delightfully nasty. Two years later, it still looks exactly that, from the Adult Swim energy in the medical infomercial to the way a mutated Winston rips off a guy’s arm and the blood just gushes out. Looking forward to this when it hits theaters on August 29.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Dead Mail Is a Grimy Retro Horror Thriller Well Worth Seeking Out

In a random Midwestern town in a nondescript moment in the 1980s, a man wrapped in chains bursts out of a home and crawls toward a blue mail collection box, barely managing to slip a scrap of paper in before he’s recaptured by a blurry figure behind him. Thus begins Dead Mail, a refreshingly unconventional horror film made in a deliberately downgraded analogue style that perfectly captures both its setting and the quirky mood that runs through it.

Instead of immediately following up on that grabby opening, Dead Mail—which unfolds with a great attention to detail, including retro cinematography and production design that feels completely organic and correct to its world—then introduces us to Jasper (Tomas Boykin), a dead letter investigator who’s the superstar employee of his postal branch. Not that you’d know it by looking at him; he keeps to himself in a back room, methodically tracking down the proper owners of valuables that would otherwise have been lost in the mail.

But his detective skills are CSI-level amazing: you almost wish the entire plot of Dead Mail followed Jasper as he phones the National Weather Service to check precipitation levels and figure out whether a smudged letter passed through a certain location, or dials up a foreign hacker to run car registrations and narrow down lists of potential names. His co-workers Ann and Bess (Micki Jackson, Susan Priver) think he’s a genius, and as soon as we see him work, we understand why. But this isn’t a movie only about Jasper; there’s that blood-stained scrap of paper that eventually winds its way onto his list of mysteries, which Jasper initially tosses aside, insisting “they don’t pay me to be a crime detective.”

Jasper on the job. © Shudder

While Dead Mail is certainly invested in the plight of the chained-up man who sent that desperate letter, it takes its time crafting the series of events that lead up to his written call for help. And much like the offbeat but fascinating Jasper, the characters that emerge in the film’s main drama feel both specific and singular. There’s Josh (Sterling Macer Jr.), a talented synthesizer engineer who isn’t sure how to level up the musical innovations he knows he’s capable of—and Trent (John Fleck), the older loner who slinks up to him at a demo and asks if he’s ever thought about collaborating with a partner.

We already know where this is headed, having seen Josh as a prisoner and Trent’s involvement in some extreme behavior to try to reclaim Josh’s letter. But Dead Mail wants to dig into the dynamics between these two, as we watch Josh tinker on his prototype while Trent buys him cutting-edge equipment and giddily learns to cook his favorite meal. Josh may not realize it, but the audience knows Trent’s interest has already skipped over the line into something very unwholesome, and we must wait as the tension rises ahead of that inevitable mailbox moment—and whatever happens next.

Throughout, Dead Mail makes perfect use of its synthesizer plot, weaving electronic music into both its score and its diegetic soundtrack; this creates a haunting, nearly funereal effect, since Josh’s particular interests include recreating the sounds of pipe organs as well as woodwinds. The longer the two men work together, the heavier the atmosphere of unease grows. But Trent’s self-perpetuating psychodrama doesn’t exist in a vacuum; there’s always the idea that (despite some circumstances getting in the way of Jasper’s usual process) Josh’s small, blood-stained missive has raised an alarm in the outside world.

Trent in his home. © Shudder

While tales of dangerous obsessions are not unfamiliar, Dead Mail places its peril in a setting that could not better illustrate the idea of the “banality of evil.” Sometimes an obsequious stranger might have a creepy stare you don’t notice in time—or a dead letter investigator and his intrepid co-workers might be the best “crime detectives” of all. It’s rare to see a movie with such a carefully considered point of view and style that it doesn’t remind you of anything you’ve seen before—so all hail co-directors Joe DeBoer and Kyle McConaghy for coming up with this one.

Dead Mail arrives today, April 18, on Shudder. Do yourself a favor and check it out.

Judge Limits DOGE’s Grubby Hands From Grabbing Social Security Administration Data

Elon Musk and company’s apparent desire to suck up as much personal information as they can about Americans has hit another snag. U.S. District Judge Ellen Hollander issued a preliminary injunction that will at least temporarily prevent staffers from the Department of Government Efficiency from accessing sensitive data on millions of Americans while working within the Social Security Administration.

The ruling, which comes in response to a lawsuit filed by several unions and groups of retirees in Maryland, follows a previous temporary restraining order in which Hollander questioned why DOGE would need to access personally identifiable information (PII) as part of its work to allegedly identify fraud and waste within the agency. The injunction, issued several weeks after that initial intervention, came because DOGE still has not explained why the SSA should grant it access to PII.

“For some 90 years, SSA has been guided by the foundational principle of an expectation of privacy with respect to its records. This case exposes a wide fissure in the foundation,” she wrote.

Notably, Hollander did not rule that DOGE’s stated goal is a problem. Instead, it’s how the pseudo-agency is going about achieving those ends. “To be sure, rooting out possible fraud, waste, and mismanagement in the SSA is in the public interest,” the judge wrote. “But, that does not mean that the government can flout the law to do so.”

DOGE and the Trump administration more broadly seem to disagree with that notion, operating more closely to the premise that they get to do whatever they want, and it’s beneath them to have to explain their methods. Liz Huston, a White House spokesperson, told NPR, “The American people gave President Trump a clear mandate to uproot waste, fraud, and abuse across the federal government. The Trump Administration will continue to fight to fulfill the mandate.”

That seems to suggest that DOGE will continue to try to stuff its pockets with as much information as it can snag from the stashes held at these government agencies. DOGE has, on multiple occasions, according to the courts, violated privacy laws by allowing staffers to access sensitive data. Musk’s minions have also allegedly attempted to access data related to union members from the Office of Personnel Management. Earlier this month, a whistleblower told NPR that DOGE seemed to be sending out data from the National Labor Relations Board, including information that may relate to ongoing legal cases and sensitive corporate information.

As all of this has happened, Musk’s bunch has not justified their need to access or interact with that information. In fact, Judge Hollander previously stated in a ruling that the Trump administration has “never identified or articulated even a single reason for which the DOGE Team needs unlimited access to SSA’s entire record systems, thereby exposing personal, confidential, sensitive, and private information that millions of Americans entrusted to their government.” The same has been true of its access to data in other agencies. The answer seems to boil down to “Because we can.”

Scientists Learned How to Trick Our Eyes Into Seeing an Entirely New Color

Black Mirror, eat your heart out. Researchers have apparently just figured out how to make people see a color completely new to humanity.

Scientists at the University of California, Berkeley conducted the research, published Friday in Science Advances. Using a technique called Oz, the research team enabled human volunteers to see a color beyond the “natural human gamut.” Oz could allow scientists to conduct experiments that weren’t previously possible, the authors say, and the lessons learned from it might even someday help color-blind people regain their missing color vision.

Our retinas contain certain photoreceptive cells, known as cones, that allow us to see color. There are three cone types that correspond to different wavelengths of light: short-wavelength (S) cones, medium-wavelength (M) cones, and long-wavelength (L) cones.

Typically, when we try to reproduce a color in front of someone’s eyes, we do so by manipulating the spectrum of light seen by the retina’s cones. But because the cones’ sensitivity curves overlap (in particular, any light that excites the M cones also excites the L or S cones), there are theoretically colors out there that our eyes can never truly see. The UC Berkeley researchers, building on their earlier work studying cone cells, say they’ve found a way around this limitation.

Rather than trying to mix and match different wavelengths of light to produce color, their Oz system stimulates individual cone cells using safe microdoses of laser light. By applying these doses in just the right spatial pattern to only activate people’s M cones—something that isn’t naturally possible—they’ve figured out how to produce the perception of a brand new color.
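
To make the distinction concrete, here’s a toy numerical sketch of the idea. The sensitivity values, function names, and data structures below are invented for illustration; this is not the study’s model, just a minimal picture of why spectral mixing can’t isolate M cones while cell-by-cell targeting can.

```python
# Toy illustration (not the study's actual model): why no mix of real light
# can activate M cones alone, but per-cell stimulation can.
# The sensitivity numbers below are made up for illustration only.

SENSITIVITY = {
    "450nm": {"S": 0.9, "M": 0.3, "L": 0.1},
    "530nm": {"S": 0.1, "M": 1.0, "L": 0.8},
    "600nm": {"S": 0.0, "M": 0.4, "L": 1.0},
}

def cone_response(light_mix: dict[str, float]) -> dict[str, float]:
    """Total response of each cone type to a mixture of wavelengths."""
    response = {"S": 0.0, "M": 0.0, "L": 0.0}
    for wavelength, intensity in light_mix.items():
        for cone, sensitivity in SENSITIVITY[wavelength].items():
            response[cone] += intensity * sensitivity
    return response

# Any light that drives M cones also drives L (and often S) cones,
# because their sensitivity curves overlap.
print(cone_response({"530nm": 1.0}))   # M is high, but L is high too

def oz_stimulate(cone_map: list[str], target_type: str = "M") -> list[bool]:
    """Deliver a micro-dose only to cells of the chosen type (toy stand-in)."""
    return [cone_type == target_type for cone_type in cone_map]

print(oz_stimulate(["L", "M", "S", "M", "L"]))  # only the M cells fire
```

In the real system, the per-cell map and the targeting are done optically on a living retina; the boolean list above only stands in for that targeting step.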

They tested the Oz system on five human volunteers with normal vision. Once they activated M-only cones, the volunteers reported seeing a blue-green color of “unprecedented saturation.” The researchers have coined this new color “olo.”

To confirm that olo is a genuine new color, the researchers also had the volunteers perform color matching tests. One of these tests involved a near-monochromatic laser, which produced the most saturated possible colors of the rainbow that can be naturally seen. The volunteers were only able to match olo to the blue-green color of this rainbow by turning down its saturation, showing that olo does indeed exist outside the natural boundaries of our color vision.

Scientists have been able to stimulate a few cone cells at a time before, but the Oz system demonstrates that it’s possible to stimulate thousands of cone cells all at once. And the researchers are hopeful that Oz can have all sorts of potential uses down the line.

“Showing olo is definitely cool, but we’re all looking toward the future for how we can use the technology itself,” co-lead researcher Hannah Doyle, a fourth-year PhD student in electrical engineering at UC Berkeley, told Gizmodo. “I’m actually now working on a project using the same exact system to simulate cone loss, like what happens in retinal disease, in healthy subjects.”

Other members of the research team are studying whether it’s possible to stimulate the retina’s cones, and by extension the brain, in such a way that a person could experience having a fourth type of cone cell. That same approach might also possibly allow people missing a cone type (like those with color blindness) to experience the corresponding missing colors, the researchers speculate.

“Essentially we feel like this is a platform that we can use to do a whole host of new experiments,” Doyle said.

All that sounds wonderful. But personally, I’m hoping that I’ll be able to see olo and otherworldly colors for myself one day.

A Lightless Galaxy: Scientists Discover a Starless, Spinning Ghost

A Lightless Galaxy: Scientists Discover a Starless, Spinning Ghost

Astronomers may have just stumbled across a ghost galaxy hiding in plain sight — a small, starless, fast-moving cloud of gas that checks all the boxes for what’s known as a “dark galaxy.” And if the discovery holds up, it could help plug one of cosmology’s most puzzling holes: the mysterious “missing satellite” problem.

The team’s research, published today in Science Advances, describes AC G185.0–11.5, a compact hydrogen cloud tucked inside a larger high-velocity cloud (HVC) known as AC-I. The cloud was spotted by an international research team using China’s huge FAST radio telescope. While HVCs are known to zoom around at velocities that don’t match the Milky Way’s rotation, most are relatively featureless gas blobs. But the recently spotted gas cloud is different: it spins.

FAST’s ultra-sensitive observations revealed a clear rotational pattern in the cloud, whose gas is arranged in a disk shape — the kind of structure you’d expect from a dwarf galaxy. But something’s amiss: there’s no sign of stars in the cloud, and no molecular gas (the usual star-forming stuff) to be found. AC G185.0–11.5 appears to consist of nothing but hydrogen gas, swirling in space, with nothing within it lighting it up. Ergo, a dark galaxy.

Using galactic motion equations and a cosmic yardstick called the Tully-Fisher relation, the team estimated the cloud’s distance from Earth: about 278,000 light-years. That puts the cloud comfortably within the Local Group, our galactic neighborhood. As for mass, the cloud sits between 30 million and nearly 500 million Solar masses—not huge, but enough to be considered a galaxy in its own right.
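
For the curious, here is a rough sketch of how a distance can be pinned down for a starless cloud like this one, using the standard textbook forms of the relations involved (the paper’s exact calibration may differ):

```latex
% Sketch only (standard textbook forms, not the paper's exact calibration).

% (1) The observed 21 cm (HI) flux fixes the hydrogen mass only up to the distance D:
M_{\mathrm{HI}} \approx 2.36 \times 10^{5}\, D^{2} \!\int\! S\,\mathrm{d}v \;\; M_{\odot}
\qquad (D \text{ in Mpc},\ \textstyle\int S\,\mathrm{d}v \text{ in Jy km s}^{-1})

% (2) The baryonic Tully-Fisher relation ties rotation speed to baryonic mass:
M_{b} \propto v_{\mathrm{rot}}^{4}

% (3) Measure v_rot from the spinning disk, get M_b from (2), set it against the
%     distance-dependent mass in (1), and solve for the D that makes both agree.
```

With the rotation measured from the disk and the flux measured by FAST, the distance is the one unknown left to solve for, which is presumably how the team arrives at a figure of roughly 278,000 light-years.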

But what gives AC G185.0–11.5 its nifty label of “dark galaxy” is its dark matter content. The researchers believe the cloud is held together by a massive dark matter halo, making it an ideal dark galaxy candidate — a theoretical type of galaxy made mostly of dark matter, with few or no visible stars.

This isn’t the first time scientists have suspected that some high-velocity clouds might actually be hidden galaxies, but most other candidates have lacked clear rotation or have been too difficult to distinguish from the Milky Way’s halo. AC G185.0–11.5 looks like the real deal — potentially the best evidence yet for a galaxy that’s all meat and no potatoes. You know, if potatoes were stars.

If the candidate dark galaxy is confirmed as such, it could rewrite how we think about galaxy formation. The cloud offers a tantalizing hint about where all the “missing” small galaxies might be hiding — not missing, just sitting lightless in plain sight.

Trump Turns Covid.gov Into a Lab Leak Theory Fan Page

The White House has changed covid.gov into a website for promoting the so-called lab leak theory for the origins of covid-19. Donald Trump, who was president during the first year of the covid pandemic in 2020, has long sought to claim the virus originated from a laboratory in Wuhan, China, in an effort to suggest it was a weapon intentionally unleashed onto the world. But the best science we have at the moment still suggests covid had natural origins.

The covid.gov website was previously a government-run destination to find information about covid-19 testing, vaccines, and treatment options. The Internet Archive’s Wayback Machine has snapshots saved from what the site looked like as recently as April 10:

Covid.gov as it appeared on April 10, 2025, before the White House redirected the site to a conspiracy theory page. Screenshot: Internet Archive / Wayback Machine

But the Trump regime has turned what used to be a dry, informative, fact-based website into a hype page for Trump’s personal grievances against perceived political enemies. At some point in the past week, covid.gov started to redirect to the White House’s website. And incredibly, it looks like this:

The White House page that users are redirected to from covid.gov. Screenshot: White House

The covid.gov URL redirects to the White House website and now features a list of “facts” that aren’t widely agreed upon by scientists who have studied the origins of covid-19.

Trump’s allies have long claimed that covid-19 was designed in a lab and was either intentionally or accidentally leaked. The CIA even changed its assessment of covid’s origins shortly after he took power again, suddenly claiming it may have been from a lab leak, though admitting “low confidence” in that assessment. But the most recent studies on the topic, looking at genomic data, still suggest natural origins from an animal market in Wuhan, China. And a study earlier this year found that most virologists and other scientists with relevant expertise still don’t think the lab leak theory is the best explanation for how covid-19 came into the world.

The new website presents highly contested claims as facts and prominently features several people, like Anthony Fauci and Joe Biden, who were supposedly instrumental in covering up some big scandal. Disturbingly, the website also names several other people whom the White House suggests conspired to cover up the real origins of covid, including Dr. David Morens, a senior advisor to Fauci. It’s disturbing because President Trump has promised a campaign of retribution against his enemies and has already started to target individuals like cybersecurity expert Chris Krebs for telling the truth about the 2020 election. Trump has also targeted law firms, extorting them for free legal services, and institutions like Harvard University in a push to resegregate American life.

But at least Trump’s new website looks dumb as shit. As one user on Bluesky pointed out, the graphic design of the site makes it look like Trump is the one who was doing the leaking. Another user compared the design to the Pixar lamp logo.

The cartoonish nature of Trump’s redesign would be shocking in any other timeline, but we happen to be living in the timeline where Trump was elected to be president. Twice, in fact. And that means we wake up to new absurdities like this every day.

The response to the covid-19 pandemic by the first Trump administration was arguably one of the worst among wealthy countries. The U.S. had 341 deaths per 100,000 residents, the second worst in the world after Peru, according to Johns Hopkins University data that runs through early 2023. So it makes sense that Trump, who rather famously will never admit when he’s done something wrong, would try to deflect blame.

It wasn’t Trump’s bungled response to testing early in the pandemic that allowed the virus to spread like wildfire. It wasn’t Trump’s inability to provide health care workers with enough PPE. It wasn’t his bald-faced lies told directly to the American people as a way to calm the markets. It was some shadowy forces in China who were just trying to hurt Americans.

The idea that covid-19 was designed in a lab is certainly something that could have happened. It’s just that there’s no strong evidence for that theory. And while there’s nothing wrong with exploring all possible causes of a pandemic, people like Trump and his goons at the White House clearly have a motive for blaming anyone but themselves. This, after all, is the guy who suggested injecting bleach into the body to get rid of the virus. Trump needs there to be some other outside force he can blame, because compared to the rest of the world, he failed spectacularly to keep Americans safe.

The First Him Trailer Tackles You With Ominous Sports Horror

There are different types of horror stories out there, usually involving supernatural monsters of some kind. But what about the more realistic horror of destroying your body or wrecking your life to achieve your goals?

That’s the pitch for the upcoming movie Him, which comes courtesy of director Justin Tipping and horror heavyweight Jordan Peele as producer. In the film, Tyriq Withers (The Game) stars as Cameron Cade, a promising young football star on the verge of becoming the next big thing, until an attack by a fan leaves him with brain trauma that could end his career.

His prospects seem to turn around when he’s approached by Marlon Wayans’ Isaiah White, an aging pro on the verge of retirement who decides to take him under his wing.

While at the isolated compound, things start to get weird for Tyriq and the other players: the training becomes more intense, the company more elite, and Isaiah’s demands about what he’ll sacrifice to be the very best keep escalating. “No days off, no sleep. We grind,” he tells Tyriq early in the trailer.

We’ve all seen a sports movie here and there in our lifetime, so turning well-worn parts of those stories into a horror show, complete with teases of body and cult horror, gives Him some edge—and from those training scenes, Tyriq’s not exactly in for a good time.

Also starring Julia Fox, Tim Heidecker, and Akeem Hayes, Him releases on September 19.

Startups Weekly: Mixed messages from venture capital

Welcome to Startups Weekly — your weekly recap of everything you can’t miss from the world of startups. Want it in your inbox every Friday? Sign up here.

This week brought us mixed messages. A fresh IPO filing, but a bleak outlook for exits overall. New funding rounds, but founders frustrated over a lack of capital. And in the midst of it all, some VCs are still finding ways to create liquidity and raise funding for more bullish times.

Most interesting startup stories from the week

Dylan Field, co-founder and chief executive officer of Figma, speaks during a Bloomberg Technology television interview in San Francisco in June 2021.
Image Credits:David Paul Morris/Bloomberg / Getty Images

In a week of contrasts, startups exhibited both confidence and insecurity, and even second-time founders weren’t spared from struggles.

Fearless or not: Design software company Figma filed its confidential paperwork for an IPO, ignoring the fears that made both Klarna and StubHub pause their IPO plans this month following the stock market crash triggered by tariff announcements.

Figma, however, isn’t worry-free: It sent a cease-and-desist letter to fast-rising “vibe coding” rival Lovable over the term “Dev Mode.” 

Frustrated: U.K. founders expressed frustration at the widening gap between funding raised by British startups and their Silicon Valley peers. According to Dealroom, British startups raised just £16.2 billion (approximately $21.5 billion) last year, compared with the roughly $73.8 billion (£65 billion) raised in the U.S.

Smashed: Smashing, an AI-powered reading curation app launched last June by Goodreads’ founder Otis Chandler, shut down due to disappointing growth.

Suspended: BluSmart, an Indian Uber rival using EVs, apparently suspended service a day after the Securities and Exchange Board of India launched an investigation into Gensol Engineering, which shares its co-founders.

Back: One month after reassuming his role as Bolt’s CEO, Ryan Breslow unveiled a new “super app” that reflects his vision for the fintech company he founded in 2014.

Investigating: Rippling’s efforts to serve Deel CEO Alex Bouaziz have been significantly hindered by the fact that he and his lawyer are now in the UAE, TechCrunch learned. But the company isn’t giving up and is also pushing for Revolut to reveal who paid off Deel’s alleged spy.

Tailwinds: OpenAI is reportedly seeking to buy Windsurf for $3 billion. The startup was previously known as Codeium, whose popular AI coding assistant competes with Cursor and the like.

Most interesting VC and funding news this week

Marshmallow billboard
Image Credits:Marshmallow, under a license.

This week brought us funding news that’s hinting at better days ahead, with increased valuations and bigger funds that may no longer be the exception.

Growing: Marshmallow, a British insurance startup, raised $90 million in equity and debt at a valuation slightly above $2 billion. Focusing on customers left out by traditional insurers, it boasts a million drivers insured and a profitable annual revenue run rate of $500 million.

Hammered win: Hammerspace, a company that helps clients like Meta use their unstructured data, raised $100 million in funding to expand its business. The valuation is above $500 million, according to sources.

New chapter: Chapter, a Medicare advisory startup co-founded by former U.S. Republican presidential candidate Vivek Ramaswamy, raised a $75 million funding round at a $1.5 billion valuation.

Phantom limbs: Austin, Texas-based Phantom Neuro raised $19 million to fund the next stage of development of its product, a subdermal wristband-like device that lets amputees control prosthetic limbs.

Resilient: Conifer, a startup whose electric hub motors don’t require rare earth elements, secured a $20 million seed round from deep tech investors.

Sunny days: Arnergy, a clean tech startup backed by Bill Gates’ Breakthrough Energy Ventures, locked down a $15 million Series B extension to expand solar access in Nigeria.

Bullish: Peter Thiel’s Founders Fund completed the raise of its third growth fund. Closing at $4.6 billion, it is a big step up from its previous $3.4 billion growth fund — which could be another sign that the market has gone from bearish to bullish again.

Last but not least

Hans Swildens of Industry Ventures
Image Credits:Industry Ventures

VCs need liquidity, and they often know how to find it even when there are no IPOs in sight. In the latest episode of StrictlyVC Download, Industry Ventures CEO Hans Swildens broke down the way in which firms are navigating this issue.

Keep reading the article on Tech Crunch

TikToker sues Roblox over her Charli XCX ‘Apple’ dance

TikTok content creator Kelley Heyer sued the video game Roblox for using her dance to Charli XCX’s “Apple” without permission.

Heyer posted the viral dance in June 2024, which fed off of the hype of Charli XCX’s hit summer album Brat. The dance became so popular that Charli XCX incorporated it into her live show and even invited Heyer to perform the dance at her show in New York City.

Fortnite and Roblox, two blockbuster games popular among children, each incorporated the dance into their games, allowing players to purchase “emotes” of the dance for their avatars to perform. But while Fortnite got Heyer’s permission to license her choreography, she alleges that Roblox did not sign an agreement to do so.

According to Polygon, Heyer estimates that Roblox earned over $123,000 from selling over 60,000 emotes of the “Apple” dance.

“As a platform powered by a community of creators, Roblox takes the protection of intellectual property very seriously and is committed to protecting intellectual property rights of independent developers and creators to brands and artists both on and off the platform,” Roblox said in a statement.

Keep reading the article on Tech Crunch

ChatGPT will now use its ‘memory’ to personalize web searches

OpenAI is upgrading ChatGPT’s “memory” again.

In a changelog and support pages on OpenAI’s website Thursday, the company quietly announced “Memory with Search,” a feature that lets ChatGPT draw on memories — details from past conversations, such as your favorite foods — to inform queries when the bot searches the web.

The update comes shortly after OpenAI beefed up ChatGPT’s long-in-the-tooth memory tool with the ability to reference a user’s entire chat history. It’s seemingly a part of OpenAI’s ongoing effort to differentiate ChatGPT from rival chatbots like Anthropic’s Claude and Google’s Gemini, the latter of which also offers a memory feature.

As OpenAI explains in its documentation, when Memory with Search is enabled and a user types in a prompt that requires a web search, ChatGPT will rewrite that prompt into a search query that “may also leverage relevant information from memories” to “make the query better and more useful.” For example, for a user that ChatGPT “knows” from memory is vegan and lives in San Francisco, ChatGPT may rewrite the prompt “what are some restaurants near me that I’d like” as “good vegan restaurants, San Francisco.”
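
To make the mechanics concrete, here’s a toy sketch of what memory-informed query rewriting can look like. The memory format, stopword list, and rules below are invented for illustration; OpenAI hasn’t published how ChatGPT actually builds these queries, and in practice the rewriting is done by the model itself rather than by hand-written rules.

```python
# Toy sketch of memory-informed query rewriting (illustrative only; this is
# not OpenAI's implementation, and the memory format is invented here).

STOPWORDS = {"what", "are", "some", "that", "i'd", "like", "me", "a", "the"}

def rewrite_query(prompt: str, memories: dict[str, str]) -> str:
    """Turn a conversational prompt into a search query enriched with memories."""
    # Keep the content words from the prompt ("restaurants", "near", ...).
    words = [w for w in prompt.lower().replace("?", "").split() if w not in STOPWORDS]
    # Resolve user-relative phrases using remembered facts.
    if "near" in words and "city" in memories:
        words = [w for w in words if w != "near"] + [memories["city"]]
    # Prepend remembered preferences that narrow the search.
    if "diet" in memories:
        words.insert(0, memories["diet"])
    return " ".join(words)

memories = {"diet": "vegan", "city": "San Francisco"}
print(rewrite_query("What are some restaurants near me that I'd like?", memories))
# -> "vegan restaurants San Francisco"
```

The point is simply that remembered facts (“vegan,” “San Francisco”) get folded into the search string, so the results ChatGPT retrieves are already personalized before it composes an answer.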

Memory with Search can be disabled by disabling Memory in the ChatGPT settings menu. It’s not clear which users have it yet — some accounts on X report they began seeing Memory with Search earlier this week.

Keep reading the article on Tech Crunch