
April 18, 2025

OpenAI’s new reasoning AI models hallucinate more

OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up — in fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, impacting even today’s best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up reasoning models. O3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they “make more claims overall,” they’re often led to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA — hallucinating 48% of the time.

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 has a tendency to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in their coding workflows, and that they’ve found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.

Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting the accuracy of models is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA. Potentially, search could improve reasoning models’ hallucination rates, as well — at least in cases where users are willing to expose prompts to a third-party search provider.

If scaling up reasoning models indeed continues to worsen hallucinations, it’ll make the hunt for a solution all the more urgent.

“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the last year, the broader AI industry has pivoted to focus on reasoning models after techniques to improve traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet it seems reasoning also leads to more hallucinating — presenting a challenge.

Keep reading the article on TechCrunch


ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.

OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction sought by Elon Musk to halt OpenAI’s transition to a for-profit.

In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.

To see a list of 2024 updates, go here.

Timeline of the most recent ChatGPT updates

April 2025

OpenAI could “adjust” its safeguards if rivals release “high-risk” AI

OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move reflects the growing pressure on commercial AI developers to ship models quickly amid increasing competition.

OpenAI is building its own social media network

OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

OpenAI will remove its largest AI model, GPT-4.5, from the API in July

OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it launched only in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.

OpenAI unveils GPT-4.1 AI models that focus on coding capabilities

OpenAI has launched three models in the GPT-4.1 family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

OpenAI will discontinue ChatGPT’s GPT-4 at the end of April

OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, according to a changelog update. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.

OpenAI could release GPT-4.1 soon

OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year, and the list of upcoming models reportedly also includes smaller versions, GPT-4.1 mini and GPT-4.1 nano.

OpenAI has updated ChatGPT to use information from your previous conversations

OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

OpenAI is working on watermarks for images made with ChatGPT

It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

OpenAI offers ChatGPT Plus for free to U.S., Canadian college students

OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

ChatGPT users have generated over 700M images so far

More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for its ability to create Ghibli-style images.

OpenAI’s o3 model could cost more to run than initially estimated

The Arc Prize Foundation, which develops and manages the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to address a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

OpenAI CEO says capacity issues will cause product delays

In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

March 2025

OpenAI plans to release a new ‘open’ AI language model

OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT’s restrictions on image generation

OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

OpenAI adopts Anthropic’s standard for linking AI models with data

OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
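
For a rough idea of what exposing a data source to an AI application through MCP involves, here is a minimal sketch in Python. It assumes the official MCP Python SDK (the “mcp” package) and its FastMCP helper; the server name, the get_order_status tool, and the in-memory data source are hypothetical placeholders, not anything announced by OpenAI or Anthropic.

# Hypothetical MCP server exposing one data source as a tool, assuming the
# official MCP Python SDK ("mcp" package) and its FastMCP helper.
from mcp.server.fastmcp import FastMCP

server = FastMCP("order-lookup")  # illustrative server name

# Stand-in data source; in practice this could be a database or internal API.
ORDERS = {"A-1001": {"status": "shipped", "eta": "2025-04-22"}}

@server.tool()
def get_order_status(order_id: str) -> str:
    """Look up an order so a connected chatbot can answer support questions."""
    order = ORDERS.get(order_id)
    if order is None:
        return f"No order found with ID {order_id}."
    return f"Order {order_id} is {order['status']} (ETA {order['eta']})."

if __name__ == "__main__":
    # Serves over stdio so an MCP-capable client (such as a desktop chatbot) can connect.
    server.run()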

The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to $12.7 billion this year

OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

ChatGPT has upgraded its image-generation feature

OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

OpenAI announces leadership updates

Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI’s AI voice assistant now has improved chat capabilities

OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

OpenAI, Meta in talks with Reliance in India

OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations

Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

OpenAI upgrades its transcription and voice-generating AI models

OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of its existing audio models and that they hallucinate less.
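
As a rough illustration, here is how the new audio models might be called through the OpenAI Python SDK; the file names and voice are placeholders, and the exact parameters should be checked against OpenAI’s current documentation.

# Hedged sketch: transcription and speech synthesis with the new audio models.
# File names and the voice are placeholders; verify parameters against OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Speech-to-text with gpt-4o-transcribe.
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)

# Text-to-speech with gpt-4o-mini-tts, saved to an MP3 file.
speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="alloy",
    input="Your meeting summary is ready.",
)
speech.stream_to_file("summary.mp3")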

OpenAI has launched o1-pro, a more powerful version of its o1

OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing power than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) fed into the model and $600 for every million tokens the model produces. That’s twice the price of OpenAI’s GPT-4.5 for input and 10 times the price of regular o1.
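
To put those rates in perspective, here is a quick back-of-the-envelope calculation using the per-million-token prices quoted above; the token counts in the example are made up for illustration.

# o1-pro cost estimate from the quoted rates: $150 per 1M input tokens, $600 per 1M output tokens.
INPUT_RATE_PER_MILLION = 150.0
OUTPUT_RATE_PER_MILLION = 600.0

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_MILLION + (
        output_tokens / 1_000_000
    ) * OUTPUT_RATE_PER_MILLION

# Hypothetical request: a 10,000-token prompt and a 2,000-token answer.
print(f"${o1_pro_cost(10_000, 2_000):.2f}")  # $1.50 + $1.20 = $2.70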

OpenAI research lead Noam Brown thinks AI “reasoning” models could’ve arrived decades ago

Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

OpenAI says it has trained an AI that’s “really good” at creative writing

OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all.

we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.

PROMPT:

Please write a metafictional literary short story…

— Sam Altman (@sama) March 11, 2025

OpenAI launches new tools to help businesses build AI agents

OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
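
As a rough sketch of what building on the Responses API looks like in the OpenAI Python SDK, the example below asks a model to answer a question using web search. The model name, the “web_search_preview” tool identifier, and the prompt are assumptions based on OpenAI’s launch-era documentation and may have changed since.

# Hedged sketch of a simple agent-style call through the Responses API.
# The model name and "web_search_preview" tool type are assumptions; verify
# the current identifiers in OpenAI's documentation before relying on them.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # lets the model issue web searches
    input="Summarize this week's OpenAI product announcements in three bullet points.",
)

# output_text aggregates the text portions of the model's reply.
print(response.output_text)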

OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’

OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.

ChatGPT can directly edit your code

The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains IDEs. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases

According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it took less than six months to double that number again, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as the multimodal GPT-4o. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

February 2025

OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release

OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. 

ChatGPT may not be as power-hungry as once assumed

A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
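
For a rough sense of scale, the calculation below compares the two per-query figures cited above; the 15-queries-per-day usage assumption is hypothetical and chosen only for illustration.

# Rough comparison of the figures above: Epoch AI's 0.3 Wh per query vs. the
# commonly cited 3 Wh. The daily query count is a hypothetical assumption.
QUERIES_PER_DAY = 15

for label, wh_per_query in [("Epoch AI estimate", 0.3), ("commonly cited figure", 3.0)]:
    daily_wh = wh_per_query * QUERIES_PER_DAY
    yearly_kwh = daily_wh * 365 / 1000
    print(f"{label}: {daily_wh:.1f} Wh/day, about {yearly_kwh:.2f} kWh/year")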

OpenAI now reveals more of its o3-mini model’s thought process

In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

You can now use ChatGPT web search without logging in

OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

OpenAI unveils a new ChatGPT agent for ‘deep research’

OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

January 2025

OpenAI used a subreddit to test AI persuasion

OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 

OpenAI launches o3-mini, its latest ‘reasoning’ model

OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

ChatGPT’s mobile users are 85% male, report says

A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

OpenAI launches ChatGPT plan for US government agencies

OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.

More teens report using ChatGPT for schoolwork, despite the tech’s faults

Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

OpenAI says it may store deleted Operator data for up to 90 days

OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.

OpenAI launches Operator, an AI agent that performs tasks autonomously

OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

OpenAI may preview its agent tool for users on the $200-per-month Pro plan

Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

OpenAI tests phone number-only ChatGPT signups

OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

ChatGPT now lets you schedule reminders and recurring tasks

ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’

OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

FAQs:

What is ChatGPT? How does it work?

ChatGPT, developed by tech startup OpenAI, is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt. The chatbot is built on OpenAI’s GPT series of large language models, which use deep learning to produce human-like text.

When did ChatGPT get released?

ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?

Yes. In addition to the paid ChatGPT Plus plan, there is a free version of ChatGPT that only requires signing in.

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are using the chatbot to automate text or quickly answer user questions.

What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can converse with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help onboard them into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?

Yes.

Can ChatGPT commit libel?

Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using one to do your homework, sure, but when a model accuses you of a crime you didn’t commit, that may well be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest-moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

OpenAI doesn’t document a hard character limit for ChatGPT. However, users have noted that responses run into limits after around 500 words.

Does ChatGPT have an API?

Yes, the ChatGPT API was released on March 1, 2023.

What are some sample everyday uses for ChatGPT?

Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

What are some advanced uses for ChatGPT?

Advanced use examples include debugging code, explaining programming languages and scientific concepts, and complex problem solving.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, where they are stored in the sidebar of the screen, and to share conversations via links.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors, such as Together, Google’s Gemini, and Anthropic’s Claude, and developers are creating open-source alternatives.

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you. However, OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

The web form for requesting deletion of your data is titled “OpenAI Personal Data Removal Request.”

In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported that the publication was publishing articles generated entirely by AI under a mysterious byline. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

There have also been cases of ChatGPT accusing individuals of false crimes.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in several lawsuits with implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

Keep reading the article on TechCrunch


ChatGPT is referring to users by their names unprompted, and some find it ‘creepy’

Some ChatGPT users have noticed a strange phenomenon recently: Occasionally, the chatbot refers to them by name as it reasons through problems. That wasn’t the default behavior previously, and several users claim ChatGPT is mentioning their names despite never having been told what to call them.

Reviews are mixed. One user, software developer and AI enthusiast Simon Willison, called the feature “creepy and unnecessary.” Another developer, Nick Dobos, said he “hated it.” A cursory search of X turns up scores of users confused by — and wary of — ChatGPT’s first-name basis behavior.

“It’s like a teacher keeps calling my name, LOL,” wrote one user. “Yeah, I don’t like it.”

Does anyone LIKE the thing where o3 uses your name in its chain of thought, as opposed to finding it creepy and unnecessary? pic.twitter.com/lYRby6BK6J

— Simon Willison (@simonw) April 17, 2025

It’s not clear when, exactly, the change happened, or whether it’s related to ChatGPT’s upgraded “memory” feature that lets the chatbot draw on past chats to personalize its responses. Some users on X say ChatGPT began calling them by their names even though they’d disabled memory and related personalization settings.

OpenAI hasn’t responded to TechCrunch’s request for comment.

It feels weird to see your own name in the model thoughts. Is there any reason to add that? Will it make it better or just make more errors as I did in my github repos? @OpenAI o4-mini-high, is it really using that in the custom prompt? pic.twitter.com/j1Vv7arBx4

— Debasish Pattanayak (@drdebmath) April 16, 2025

In any event, the blowback illustrates the uncanny valley OpenAI might struggle to overcome in its efforts to make ChatGPT more “personal” for the people who use it. Last week, the company’s CEO, Sam Altman, hinted at AI systems that “get to know you over your life” to become “extremely useful and personalized.” But judging by this latest wave of reactions, not everyone’s sold on the idea.

An article published by The Valens Clinic, a psychiatry office in Dubai, may shed some light on the visceral reactions to ChatGPT’s name use. Names convey intimacy. But when a person — or chatbot, as the case may be — uses a name a lot, it comes across as inauthentic.

“Using an individual’s name when addressing them directly is a powerful relationship-developing strategy,” writes Valens. “It denotes acceptance and admiration. However, undesirable or extravagant use can be looked at as fake and invasive.”

In a similar vein, perhaps another reason many people don’t want ChatGPT using their name is that it feels ham-fisted — a clumsy attempt at anthropomorphizing an emotionless bot. In the same way that most folks wouldn’t want their toaster calling them by their name, they don’t want ChatGPT to “pretend” it understands a name’s significance.

This reporter certainly found it disquieting when o3 in ChatGPT earlier this week said it was doing research for “Kyle.” (As of Friday, the change seemingly had been reverted; o3 called me “user.”) It had the opposite of the intended effect — poking holes in the illusion that the underlying models are anything more than programmable, synthetic things.

Keep reading the article on TechCrunch


ChatGPT will now use its ‘memory’ to personalize web searches

OpenAI is upgrading ChatGPT’s “memory” again.

In a changelog and support pages on OpenAI’s website Thursday, the company quietly announced “Memory with Search,” a feature that lets ChatGPT draw on memories — details from past conversations, such as your favorite foods — to inform queries when the bot searches the web.

ChatGPT release notes were updated yesterday with o3 and o4-mini added to ChatGPT on Apr 16, 2025 – but interestingly, they also mention “Memory with Search” (anyone seen this rolling out already? Not for me yet) pic.twitter.com/oVBcJNqf6z

— Tibor Blaho (@btibor91) April 18, 2025

The update comes shortly after OpenAI beefed up ChatGPT’s long-in-the-tooth memory tool with the ability to reference a user’s entire chat history. It’s seemingly a part of OpenAI’s ongoing effort to differentiate ChatGPT from rival chatbots like Anthropic’s Claude and Google’s Gemini, the latter of which also offers a memory feature.

As OpenAI explains in its documentation, when Memory with Search is enabled and a user types in a prompt that requires a web search, ChatGPT will rewrite that prompt into a search query that “may also leverage relevant information from memories” to “make the query better and more useful.” For example, for a user that ChatGPT “knows” from memory is vegan and lives in San Francisco, ChatGPT may rewrite the prompt “what are some restaurants near me that I’d like” as “good vegan restaurants, San Francisco.”

Memory with Search can be disabled by disabling Memory in the ChatGPT settings menu. It’s not clear which users have it yet — some accounts on X report they began seeing Memory with Search earlier this week.

Keep reading the article on TechCrunch


April 17, 2025

The latest viral ChatGPT trend is doing ‘reverse location search’ from photos

There’s a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures.

This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely “reason” through uploaded images. In practice, the models can crop, rotate, and zoom in on photos — even blurry and distorted ones — to thoroughly analyze them.

These image-analyzing capabilities, paired with the models’ ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

Wow, nailed it and not even a tree in sight. pic.twitter.com/bVcoe1fQ0Z

— swax (@swax) April 17, 2025

In many cases, the models don’t appear to be drawing on “memories” of past ChatGPT conversations or on EXIF data, the metadata attached to photos that can reveal details such as where a photo was taken.
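
For readers curious what that metadata looks like, here is a small, hypothetical sketch using the Pillow imaging library to check whether a photo carries GPS coordinates before it’s shared; the file name is a placeholder.

# Check whether a photo's EXIF metadata contains GPS coordinates (requires Pillow).
# The file name is a placeholder; 0x8825 is the standard EXIF GPSInfo tag.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_info(path: str) -> dict:
    """Return the photo's GPS EXIF fields, or an empty dict if none are present."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

info = gps_info("vacation.jpg")
print(info or "No GPS metadata found")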

X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it’s playing “GeoGuessr,” an online game that challenges players to guess locations from Google Street View images.

this is a fun ChatGPT o3 feature. geoguessr! pic.twitter.com/HrcMIxS8yD

— Jason Barnes (@vyrotek) April 17, 2025

It’s an obvious potential privacy issue. There’s nothing preventing a bad actor from screenshotting, say, a person’s Instagram Story and using ChatGPT to try to doxx them.

o3 is insane
I asked a friend of mine to give me a random photo
They gave me a random photo they took in a library
o3 knows it in 20 seconds and it’s right pic.twitter.com/0K8dXiFKOY

— Yumi (@izyuuumi) April 17, 2025

Of course, this could be done even before the launch of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models’ location-guessing skills. Surprisingly, GPT-4o arrived at the same, correct answer as o3 more often than not — and took less time.

There was at least one instance during our brief testing when o3 found a place GPT-4o couldn’t. Given a picture of a purple, mounted rhino head in a dimly-lit bar, o3 correctly answered that it was from a Williamsburg speakeasy — not, as GPT-4o guessed, a U.K. pub.

That’s not to suggest o3 is flawless in this regard. Several of our tests failed — o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident about, or volunteered a wrong location. Users on X noted, too, that o3 can be pretty far off in its location deductions.

But the trend illustrates some of the emerging risks presented by more capable, so-called reasoning AI models. There appear to be few safeguards in place to prevent this sort of “reverse location lookup” in ChatGPT, and OpenAI, the company behind ChatGPT, doesn’t address the issue in its safety report for o3 and o4-mini.

We’ve reached out to OpenAI for comment. We’ll update our piece if they respond.

Keep reading the article on TechCrunch


April 16, 2025

OpenAI’s latest AI models have a new safeguard to prevent biorisks

OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI’s safety report.

O3 and o4-mini represent a meaningful capability increase over OpenAI’s previous models, the company says, and thus pose new risks in the hands of bad actors. According to OpenAI’s internal benchmarks, o3 is more skilled at answering questions around creating certain types of biological threats in particular. For this reason — and to mitigate other risks — OpenAI created the new monitoring system, which the company describes as a “safety-focused reasoning monitor.”

The monitor, custom-trained to reason about OpenAI’s content policies, runs on top of o3 and o4-mini. It’s designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.

To establish a baseline, OpenAI had red teamers spend around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. During a test in which OpenAI simulated the “blocking logic” of its safety monitor, the models declined to respond to risky prompts 98.7% of the time, according to OpenAI.

OpenAI acknowledges that its test didn’t account for people who might try new prompts after getting blocked by the monitor, which is why the company says it’ll continue to rely in part on human monitoring.

O3 and o4-mini don’t cross OpenAI’s “high risk” threshold for biorisks, according to the company. However, compared to o1 and GPT-4, OpenAI says that early versions of o3 and o4-mini proved more helpful at answering questions around developing biological weapons.

Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

The company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats, according to OpenAI’s recently updated Preparedness Framework.

OpenAI is increasingly relying on automated systems to mitigate the risks from its models. For example, to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one the company deployed for o3 and o4-mini.

Yet several researchers have raised concerns that OpenAI isn’t prioritizing safety as much as it should. One of the company’s red-teaming partners, Metr, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for its GPT-4.1 model, which launched earlier this week.

Keep reading the article on TechCrunch

