In the battle between two “agentic” coding tools — Anthropic’s Claude Code and OpenAI’s Codex CLI — the latter appears to be fostering more developer goodwill than the former. That’s at least partly because Anthropic has issued takedown notices to a developer trying to reverse-engineer Claude Code, which is under a more restrictive usage license than Codex CLI.
Claude Code and Codex CLI are dueling tools that accomplish much of the same thing: allow developers to tap into the power of AI models running in the cloud to complete various coding tasks. Anthropic and OpenAI released them within months of each other — each company racing to capture valuable developer mindshare.
The source code for Codex CLI is available under an Apache 2.0 license that allows for distribution and commercial use. That’s in contrast to Claude Code, which is tied to Anthropic’s commercial license. That limits how it can be modified without explicit permission from the company.
Anthropic also “obfuscated” the source code for Claude Code. In other words, Claude Code’s source code isn’t readily available. When a developer de-obfuscated it and released the source code on GitHub, Anthropic filed a DMCA complaint — a copyright notification requesting the code’s removal.
Developers on social media weren’t pleased by the move, which they said compared unfavorably with OpenAI’s rollout of Codex CLI. In the week or so since Codex CLI’s release, OpenAI has merged dozens of developer suggestions into the tool’s codebase, including one that lets Codex CLI tap AI models from rival providers — including Anthropic.
Anthropic didn’t respond to a request for comment. To be fair to the lab, Claude Code is still in beta (and a bit buggy); it’s possible Anthropic will release the source code under a permissive license in the future. Companies have many reasons for obfuscating code, security considerations being one of them.
It’s a somewhat surprising PR win for OpenAI, which in recent months has shied away from open-source releases in favor of proprietary, locked-down products. It may be emblematic of a broader shift in the lab’s approach; OpenAI CEO Sam Altman earlier this year said he believed that the company has been on the “wrong side of history” when it comes to open source.
Keep reading the article on TechCrunch
ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.
2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities, to the highly anticipated launch of its text-to-video model Sora.
OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.
In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.
Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.
To see a list of 2024 updates, go here.
OpenAI leaders have been discussing allowing the company's upcoming open model to link up with OpenAI's cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.
OpenAI released a new AI model called GPT-4.1 in mid-April without the safety report, known as a system card, that typically accompanies its model launches. The company claimed in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.” Multiple independent tests, meanwhile, indicate that the model is less reliable than previous OpenAI releases.
Questions have been raised about OpenAI’s transparency and model-testing procedures after a discrepancy emerged between first- and third-party benchmark results for its o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 scored approximately 10%, significantly lower than OpenAI’s top reported score.
OpenAI has launched a new API feature called Flex processing that lets developers use AI models at a lower cost in exchange for slower response times and occasional resource unavailability. Flex processing is available in beta for the o3 and o4-mini reasoning models on non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
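Based on the description above, a Flex request is an ordinary API call that opts into the cheaper tier. Here's a minimal sketch of what such a payload might look like; the `service_tier` field name comes from OpenAI's announcement, but treat the exact request shape as an assumption rather than a verified API reference:

```python
# Sketch of a chat-completions-style request opting into Flex processing.
# Field names ("service_tier", etc.) are assumptions based on OpenAI's
# announcement, not a verified API reference.
import json

def build_flex_request(model: str, prompt: str) -> dict:
    """Assemble a request payload that opts into Flex processing."""
    if model not in ("o3", "o4-mini"):
        raise ValueError("Flex processing is described as beta-only on o3 and o4-mini")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Lower cost in exchange for slower responses and possible
        # temporary unavailability -- suited to non-production workloads.
        "service_tier": "flex",
    }

print(json.dumps(build_flex_request("o4-mini", "Summarize these eval results."), indent=2))
```

Because Flex requests may occasionally fail when resources are unavailable, callers would typically wrap them in retry logic or fall back to the standard tier.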
OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system is designed to prevent the models from giving advice that could facilitate harmful attacks, according to OpenAI’s safety report.
OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is its most advanced reasoning model yet, while o4-mini is said to offer a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.
OpenAI introduced a new section called “library” to make it easier for users to find and manage the images they’ve created on mobile and web, per the company’s X post.
OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face mounting pressure to deploy models quickly amid intensifying competition.
OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.
OpenAI will retire GPT-4.5, its largest AI model, from its API, even though the model launched only in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.
OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.
OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.
OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.
OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.
It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”
OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.
More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to COO of OpenAI Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style photos.
The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem. The foundation now thinks the cost could be much higher, possibly around $30,000 per task.
In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.
OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first event will be held in San Francisco, with sessions to follow in Europe and Asia.
OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.
OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
The latest update to the image generator in OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.
OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.
OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.
Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.
OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.
OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.
Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of its existing models and that they hallucinate less.
OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1.
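To put those rates in perspective, here's a quick back-of-the-envelope cost calculation using the reported prices; the token counts in the example are illustrative, not drawn from any real workload:

```python
# Estimated cost of an o1-pro API call at the reported rates:
# $150 per 1M input tokens, $600 per 1M output tokens.

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single o1-pro call."""
    return input_tokens * 150 / 1_000_000 + output_tokens * 600 / 1_000_000

# A request with 10,000 input tokens and 2,000 output tokens:
print(round(o1_pro_cost(10_000, 2_000), 2))  # -> 2.7
```

At those prices, even a modest request costs a few dollars, which helps explain why access is limited to developers who have already spent money on the API.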
Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.
OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.
we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.
PROMPT:
Please write a metafictional literary short story…
— Sam Altman (@sama) March 11, 2025
OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.
The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users like Enterprise, Edu, and free users.
According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to grow from 100 million weekly active users in November 2023 to 200 million in August 2024, but less than six months to double that number again, according to the report. ChatGPT’s weekly active users reached 300 million by December 2024 and 400 million by February 2025. The recent growth has been driven by the launch of new models and features, such as the multimodal GPT-4o. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.
OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.
A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours, a tenth of the commonly cited figure. However, the analysis doesn’t account for additional energy costs from features like image generation or input processing.
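For a sense of scale, here's a small back-of-the-envelope comparison of the two per-query figures; the 15-queries-per-day usage rate is an assumption chosen for illustration, not a number from Epoch AI's analysis:

```python
# Compare the two per-query energy figures cited in the article:
# the commonly cited 3 Wh and Epoch AI's 0.3 Wh estimate for GPT-4o.

QUERIES_PER_DAY = 15   # assumed usage rate, for illustration only
DAYS_PER_YEAR = 365

def annual_kwh(wh_per_query: float) -> float:
    """Annual energy use in kWh for one user at the assumed query rate."""
    return wh_per_query * QUERIES_PER_DAY * DAYS_PER_YEAR / 1000

print(annual_kwh(3.0))   # commonly cited figure -> 16.425 kWh/year
print(annual_kwh(0.3))   # Epoch AI's estimate   -> 1.6425 kWh/year
```

Either way, the per-user total is small; the debate matters mostly because it multiplies across hundreds of millions of weekly users.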
In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.
OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.
OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.
OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post.
OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”
A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.
OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.
Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But given the ways ChatGPT can fall short, the results are possibly cause for alarm.
OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.
OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.
Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.
ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.
OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.
ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text in response to user prompts. The chatbot runs on GPT-4, a large language model that uses deep learning to produce human-like text.
ChatGPT was released for public use on November 30, 2022.
Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.
In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires signing in.
Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.
Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can converse with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help them onboard into the web3 space.
GPT stands for Generative Pre-Trained Transformer.
A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.
ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
Yes.
Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.
We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.
Yes, there is a free ChatGPT mobile app for iOS and Android users.
It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.
Yes, it was released March 1, 2023.
Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.
Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.
It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.
Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.
OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”
The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.”
In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”
Discord recently announced that it had integrated OpenAI’s technology into its bot named Clyde. Shortly after, two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.
An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.
CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. Red Ventures, the private equity company that owns CNET, was accused of using ChatGPT for SEO farming, even when the information was incorrect.
Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.
There have also been cases of ChatGPT accusing individuals of false crimes.
Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.
Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.
No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.
None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.
Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
Kai Chen, a Canadian AI researcher working at OpenAI who’s lived in the U.S. for 12 years, was denied a green card, according to Noam Brown, a leading research scientist at the company. In a post on X, Brown said that Chen learned of the decision Friday and must soon leave the country.
“It’s deeply concerning that one of the best AI researchers I’ve worked with […] was denied a U.S. green card,” wrote Brown. “A Canadian who’s lived and contributed here for 12 years now has to leave. We’re risking America’s AI leadership when we turn away talent like this.”
Another OpenAI employee, Dylan Hunn, said in a post that Chen was “crucial” for GPT-4.5, one of OpenAI’s flagship AI models.
Green cards can be denied for all sorts of reasons, and the decision won’t cost Chen her job. In a follow-up post, Brown said that Chen plans to work remotely from an Airbnb in Vancouver “until [the] mess hopefully gets sorted out.” But it’s the latest example of foreign talent facing high barriers to living, working, and studying in the U.S. under the Trump administration.
OpenAI didn’t immediately respond to a request for comment. However, in a post on X in July 2023, CEO Sam Altman called for changes to make it easier for “high-skill” immigrants to move to and work in the U.S.
Over the past few months, more than 1,700 international students in the U.S., including AI researchers who’ve lived in the country for a number of years, have had their visa statuses challenged as part of an aggressive crackdown. While the government has accused some of these students of supporting Palestinian militant groups or engaging in “antisemitic” activities, others have been targeted for minor legal infractions, like speeding tickets or other traffic violations.
Meanwhile, the Trump administration has turned a skeptical eye toward many green card applicants, reportedly suspending processing of requests for legal permanent residency submitted by immigrants granted refugee or asylum status. It has also taken a hardline approach to green card holders it perceives as “national security” threats, detaining and threatening several with deportation.
AI labs like OpenAI rely heavily on foreign research talent. According to Shaun Ralston, an OpenAI contractor providing support for the company’s API customers, OpenAI filed more than 80 applications for H-1B visas last year alone and has sponsored more than 100 visas since 2022.
H-1B visas, favored by the tech industry, allow U.S. companies to temporarily employ foreign workers in “specialty occupations” that require at least a bachelor’s degree or the equivalent. Recently, immigration officials have begun issuing “requests for evidence” for H-1Bs and other employment-based immigration petitions, asking for home addresses and biometrics, a change some experts worry may lead to an uptick in denied applications.
Immigrants have played a major role in contributing to the growth of the U.S. AI industry.
According to a study from Georgetown’s Center for Security and Emerging Technology, 66% of the 50 “most promising” U.S.-based AI startups on Forbes’ 2019 “AI 50” list had an immigrant founder. A 2023 analysis by the National Foundation for American Policy found that 70% of full-time graduate students in fields related to AI are international students.
Ashish Vaswani, who moved to the U.S. to study computer science in the early 2000s, is one of the co-creators of the transformer, the seminal AI model architecture that underpins chatbots like ChatGPT. One of the co-founders of OpenAI, Wojciech Zaremba, earned his doctorate in AI from NYU on a student visa.
The U.S.’s immigration policies, cutbacks in grant funding, and hostility to certain sciences have many researchers contemplating moving out of the country. Responding to a Nature poll of over 1,600 scientists, 75% said that they were considering leaving for jobs abroad.
Google started testing AI Overviews, its AI-summarized results in Google Search, two years ago, and continues to expand the feature to new regions and languages. By the company’s estimation, it’s been a big success. AI Overviews is now used by more than 1.5 billion users monthly across over 100 countries.
AI Overviews compiles results from around the web to answer certain questions. When you search for something like “What is generative AI?” AI Overviews will show AI-generated text at the top of the Google Search results page. While the feature has dampened traffic to some publishers, Google sees it and other AI-powered search capabilities as potentially meaningful revenue drivers and ways to boost engagement on Search.
Last October, the company launched ads in AI Overviews. More recently, it started testing AI Mode, which lets users ask complex questions and follow-ups in the flow of Google Search. The latter is Google’s attempt to take on chat-based search interfaces like ChatGPT search and Perplexity.
During its Q1 2025 earnings call on Thursday, Google highlighted the growth of its other AI-based search products as well, including Circle to Search. Circle to Search, which lets you highlight something on your smartphone’s screen and ask questions about it, is now available on more than 250 million devices, Google said — up from around 200 million devices as of late last year. Circle to Search usage rose close to 40% quarter-over-quarter, according to the company.
Google also noted in its call that visual searches on its platforms are growing at a steady clip. According to CEO Sundar Pichai, searches through Google Lens, Google’s multimodal AI-powered search technology, have increased by 5 billion since October. The number of people shopping on Lens was up over 10% in Q1, meanwhile.
The growth comes amid intense regulatory scrutiny of Google’s search practices. The U.S. Department of Justice has been pressuring Google to spin off Chrome after a court found that the tech giant holds an illegal monopoly in online search. A federal judge has also ruled that Google has an adtech monopoly, opening the door to a potential breakup.
This is your last chance to put your brand at the center of the AI conversation during TechCrunch Sessions: AI Week — with applications to host a Side Event closing tonight at 11:59 p.m. PT.
From June 1-7, TechCrunch is curating a dynamic weeklong series of Side Events leading up to and following the main event — TC Sessions: AI, taking place June 5 at UC Berkeley’s Zellerbach Hall. These are the gatherings where off-stage magic happens — and you still have a chance to lead one.
Whether it’s a roundtable, workshop, happy hour, or meetup, your Side Event can connect with over 1,000 AI investors, builders, and thought leaders — from the event and the broader Berkeley tech scene — on your terms, in your voice, and on your turf.
Side Event perks include:
There’s no fee to apply or host your Side Event. You’re in charge — from logistics and costs to promotion and everything in between. That said, we do have a few guidelines:
Side Events are a standout way to connect with the AI community and boost your brand visibility. Apply for free and make your mark at TechCrunch Sessions: AI — the deadline is tonight at 11:59 p.m. PT.
Prince Harry, Duke of Sussex, walked into a sunlit hotel conference room in Brooklyn on Thursday to meet with a dozen youth leaders working in tech safety, policy, and innovation.
The young adults chatted away at black circular tables, many unaware of his presence until he plopped down at a table and started talking with them.
After making his way through various tables in the room, he took the stage to talk about the hopes and harms of this era of technological progress.
“Thank God you guys exist, thank God you guys are here,” he said. He spoke about tech platforms having become more powerful than governments, saying these social media spaces were built around community yet carry “no responsibility to ensure the safety of those online communities.”
At one point, he said that there were people in power incentivized only by profit, rather than by safety and well-being. “You have the knowledge and the skillset and the confidence and the bravery and the courage to be able to stand up to these things,” he said to the crowd.
The event yesterday was hosted by the Responsible Tech Youth Power Fund (RTYPF), a grant initiative to support youth organizations working to shape the future of technology. The Duke’s Foundation, Archewell, which he co-founded with his wife, Meghan, Duchess of Sussex, funded the second cohort of RTYPF grantees, alongside names like Pinterest and Melinda French Gates’ Pivotal Ventures.
TechCrunch received exclusive access to the event to chat with attendees, whose average age was around 22, about their work amid the rapidly changing technological landscape.
The young people at the event were cautiously optimistic about the future of artificial intelligence but worried about the impact social media was having on their lives. Everything is moving so fast these days, they said, faster than the law can keep up.
“It’s not that the youth are anti-technology,” said Lydia Burns, 27, who leads youth and community partnerships at the nonprofit Seek Common Grounds. “It’s just that we feel we should have more input and seats at the table to talk about how these things impact our lives.”
Each turn of every conversation at the event led back to social media.
It’s consuming every part of a young person’s life, they said, and the clouds could yet grow darker.
Adam Billen, 23, helps run the organization Encode, which advocates for safe and responsible AI. He’s worked on the Take It Down Act, which seeks to tackle AI-generated porn, and on other pieces of legislation, like California’s SB 53, which would establish whistleblower protections for employees over AI-related issues. Billen, like the other young people at the event, is working fast to help the people in power understand new technology that is evolving even faster.
“As recently as two years ago, it was just not possible for someone without technical expertise to create realistic AI nudes of someone,” he told TechCrunch. “But today, with advances in generative AI, there are apps and websites publicly available for free that are being advertised to kids,” on social media platforms.
He’s heard of cases where young people simply take photos of their classmates, fully clothed, and then upload them to AI image platforms to get realistic nudes of their peers. Doing that is not yet illegal nationwide, he said, and guardrails from Big Tech are loose. On these platforms, he said, it’s all too easy to see advertisements for deepfake porn tools, which makes it all too easy for children to find them as well.
Sneha Dave, 26, the founder of Generation Patient, an organization that advocates for the support of young people with chronic conditions, is also worried about the sharp turn social media has taken. Influencers are doing paid advertisements for prescription medications, and teenagers are being fed pharmaceutical ads on social media, she said.
“We don’t know how the FDA works with these companies to try to flag to make sure there’s not misinformation being spread by influencers advertising these prescription medications,” Dave told TechCrunch, speaking about Big Tech platforms.
Social media in general has fueled a mental health crisis, the young people told us. Yoelle Gulko, 22, is working on a film to help people better understand the dangers of social media. She said that walking through college campuses these days, she hears of numerous people simply deleting their social media accounts, feeling helpless in their relationship to the online world.
“Young people shouldn’t be left to fend for themselves,” Gulko said. “Young people should really be given the tools to succeed online, and that’s something a lot of us are doing.”
Leo Wu, 21, remembers the exact moment that led him to start his nonprofit, AI Consensus.
It was back in 2023 when hype around ChatGPT was becoming widespread. “There was all this press from universities and media outlets about how it was destroying education,” Wu told TechCrunch. “And we just had this feeling that this was not at all the way, the attitude to take.”
So he launched AI Consensus, which works with students, tech companies, and educational institutions to talk about the best ways students can use AI in school.
“Is it a teenager’s fault for being addicted to Instagram?” Wu told us, capturing what many young people felt when asked. “Or is it the fault of a company that is making this technology addictive?”
Wu wants to help students learn how to work with AI while still learning how to think for themselves.
Pushing for regulation was the main way the attendees we spoke to were advocating for themselves. Some, however, were building their own organizations, putting the youth perspective at the forefront.
“I see youth as the bridge between our current government and what the responsible tech future is,” said Jennifer Wang, the founder of Paragon, which connects students with governments looking for perspectives on tech policy issues.
Meanwhile, Generation Patient’s Dave is pushing for more collaboration between the FDA and FTC. She’s also working to help pass a bill through Congress to protect patients from deceptive drug ads online.
Encode’s Billen said he’s considering supporting bills in various states that would require disclosure boxes so people know they are talking to AI and not a human, as well as measures like a California bill that seeks to ban minors from using chatbots. He’s watching the Character.AI lawsuit closely, saying a verdict in that case would be a landmark in shaping future AI regulation.
His organization, Encode, along with others in the tech policy space, filed an amicus brief in support of the mother suing Character.AI over the alleged role it played in her son’s death.
At one point during the event, the Duke sat next to Wu to talk about the opportunities and dangers of AI. They spoke about the need for more accountability and about who had the power to push for change. The answer was clear.
“The people in this room,” Wu said.