Blue Diamond Web Services

Your Best Hosting Service Provider!

May 8, 2025

Microsoft employees are banned from using DeepSeek app, president says 

Microsoft employees aren’t allowed to use DeepSeek due to data security and propaganda concerns, Microsoft vice chairman and president Brad Smith said in a Senate hearing today.

“At Microsoft we don’t allow our employees to use the DeepSeek app,” Smith said, referring to DeepSeek’s application service (which is available on both desktop and mobile).

Smith said Microsoft hasn’t put DeepSeek in its app store over those concerns, either. 

Although many organizations and even countries have imposed restrictions on DeepSeek, this is the first time Microsoft has gone public about such a ban.

Smith said the restriction stems from the risk that data will be stored in China and that DeepSeek’s answers could be influenced by “Chinese propaganda.”

DeepSeek’s privacy policy states it stores user data on Chinese servers. Such data is subject to Chinese law, which mandates cooperation with the country’s intelligence agencies. DeepSeek also heavily censors topics considered sensitive by the Chinese government.

Despite Smith’s critical comments about DeepSeek, Microsoft offered up DeepSeek’s R1 model on its Azure cloud service shortly after it went viral earlier this year.


But that’s a bit different from offering DeepSeek’s chatbot app itself. Since DeepSeek is open source, anybody can download the model, store it on their own servers, and offer it to their clients without sending the data back to China. 
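To make the distinction concrete, here is a minimal sketch of what self-hosting an open-weights model looks like with the Hugging Face transformers library. The model ID (a small distilled R1 variant) and generation settings are illustrative assumptions, not a deployment recommendation.

```python
# Minimal sketch: running an open-weights model on your own hardware, so
# prompts and outputs never leave your servers. The model ID below is an
# assumed small distilled R1 variant; the full R1 needs far more GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain what a reverse proxy does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```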

That, however, doesn’t remove other risks like the model spreading propaganda or generating insecure code.

During the Senate hearing, Smith said that Microsoft had managed to go inside DeepSeek’s AI model and “change” it to remove “harmful side effects.” Microsoft did not elaborate on exactly what it did to DeepSeek’s model, referring TechCrunch to Smith’s remarks.

In its initial launch of DeepSeek on Azure, Microsoft wrote that DeepSeek underwent “rigorous red teaming and safety evaluations” before it was put on Azure.

We can’t help pointing out that DeepSeek’s app is a direct competitor to Microsoft’s own Copilot internet search chat app, though Microsoft doesn’t ban all such chat competitors from its Windows app store.

Perplexity is available in the Windows app store, for instance, although apps from Microsoft’s archrival Google (including the Chrome browser and Google’s Gemini chatbot) did not surface in our app store search.

Keep reading the article on TechCrunch


ChatGPT’s deep research tool gets a GitHub connector to answer questions about code

OpenAI is enhancing its AI-powered “deep research” feature with the ability to analyze codebases on GitHub.

On Thursday, OpenAI announced what it’s calling the first “connector” for ChatGPT deep research, the company’s tool that searches across the web and other sources to compile thorough research reports on a topic. Now, ChatGPT deep research can link to GitHub (in beta), allowing developers to ask questions about a codebase and engineering documents.

The connector will be available for ChatGPT Plus, Pro, and Team users over the next few days, with Enterprise and Edu support coming soon, according to an OpenAI spokesperson.

Image: OpenAI’s ChatGPT deep research feature can now connect to GitHub. (Image Credits: OpenAI)

The GitHub connector for ChatGPT deep research arrives as AI companies look to make their AI-powered chatbots more useful by building ways to link them to outside platforms and services. Anthropic, for example, recently debuted Integrations, which gives apps a pipeline into its AI chatbot Claude.

Years ago, OpenAI offered a plug-in capability for ChatGPT but deprecated it in favor of custom chatbots called GPTs.

“I often hear that users find ChatGPT’s deep research agent so valuable that they want it to connect to their internal sources, in addition to the web,” OpenAI Head of Business Products Nate Gonzalez wrote in a blog post on LinkedIn. “[That’s why] today we’re introducing our first connector.”

In addition to answering questions about codebases, the new ChatGPT deep research GitHub connector lets ChatGPT users break down product specs into technical tasks and dependencies, summarize code structure and patterns, and understand how to implement new APIs using real code examples.


There’s a risk that ChatGPT deep research hallucinates, of course; every AI model in existence confidently makes things up sometimes. But OpenAI is pitching the new capability as a potential time saver, not a replacement for experts.

An OpenAI spokesperson said ChatGPT will respect an organization’s settings, so users only see GitHub content they’re already allowed to view and codebases that have been explicitly shared with ChatGPT.

OpenAI has been investing in its tooling for assistive coding, recently unveiling an open source coding tool for terminals called Codex CLI and upgrading the ChatGPT desktop app to read code in a handful of developer-focused coding apps. The company sees programming as a top use case for its models. Case in point, OpenAI has reportedly reached an agreement to buy AI-powered coding assistant Windsurf for $3 billion.

In other OpenAI news on Thursday, the company launched fine-tuning options for developers looking to customize its newer models for particular applications. Devs can now fine-tune OpenAI’s o4-mini “reasoning” model via a technique OpenAI calls reinforcement fine-tuning, which uses task-specific grading to improve the model’s performance. Fine-tuning has also rolled out for the company’s GPT-4.1 nano model.

Only verified organizations can fine-tune o4-mini, according to OpenAI. GPT-4.1 nano fine-tuning, meanwhile, is available for all paying developers.
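For developers curious what kicking off such a job looks like, here is a hedged sketch using OpenAI’s Python SDK. The training file and model name are placeholders, and reinforcement fine-tuning for o4-mini involves grader configuration beyond what this shows.

```python
# Sketch: launching a fine-tuning job with the OpenAI Python SDK. The file
# and model name are placeholders; o4-mini reinforcement fine-tuning adds
# grader configuration not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL training set of example prompts and responses.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the job against a nano-sized base model (assumed identifier).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4.1-nano",
)
print(job.id, job.status)
```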

OpenAI began gating certain models and developer features behind verification, which requires organizations to submit an ID and other identity documents, in April. The company claims that it’s necessary to prevent abuse.

Keep reading the article on TechCrunch


Ex-Synapse CEO reportedly trying to raise $100M for his new humanoid robotics venture

Sankaet Pathak’s last startup, fintech Synapse, filed for bankruptcy in 2024 amid issues with partner Evolve Bank & Trust. Tens of millions of dollars in deposits made by consumers, mostly customers of fintechs that worked with Synapse, remain unaccounted for.

Yet according to The Information, Pathak is moving full steam ahead with fundraising for his new venture, humanoid robotics startup Foundation. He is said to be in the midst of raising $100 million for Foundation at a whopping $1 billion valuation.

The numbers seem particularly ambitious considering the startup only debuted its humanoid robot, Phantom, earlier this year. Foundation raised just $11 million in a pre-seed funding round last August from Tribe Capital and “other angels.”

Foundation’s self-proclaimed mission is to “create advanced humanoid robots that can operate in complex environments” to address the labor shortage.

TechCrunch has reached out to Pathak for comment.

Keep reading the article on TechCrunch


Meta taps former Google DeepMind director to lead its AI research lab

Meta has chosen Robert Fergus to lead its Fundamental AI Research (FAIR) lab, according to Bloomberg.

Fergus had been working at Google DeepMind as a research director for roughly five years, per his LinkedIn. Prior to Google, he worked as a research scientist at Meta.

Meta’s FAIR, which has been around since 2013, has faced challenges in recent years, according to a report from Fortune. FAIR led research on the company’s early AI models, including Llama 1 and Llama 2. However, researchers have reportedly departed the unit en masse for other startups, companies, and even Meta’s newer GenAI group, which led the development of Llama 4.

Meta’s previous VP of AI Research, Joelle Pineau, announced in April she’d be leaving the company for a new opportunity.

Keep reading the article on TechCrunch


Google launches ‘implicit caching’ to make accessing its latest AI models cheaper

Google is rolling out a feature in its Gemini API that the company claims will make its latest AI models cheaper for third-party developers.

Google calls the feature “implicit caching” and says it can deliver 75% savings on “repetitive context” passed to models via the Gemini API. It supports Google’s Gemini 2.5 Pro and 2.5 Flash models.

That’s likely to be welcome news to developers as the cost of using frontier models continues to grow.

Caching, a widely adopted practice in the AI industry, reuses frequently accessed or pre-computed data from models to cut down on computing requirements and cost. For example, caches can store answers to questions users often ask of a model, eliminating the need for the model to re-create answers to the same request.
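The mechanics are simple to illustrate. Below is a toy Python sketch of a response cache keyed on the exact prompt; production inference caches match on token prefixes and handle eviction, which this ignores, and call_model is a stand-in rather than a real API.

```python
# Toy illustration of response caching: identical prompts are answered from
# memory instead of re-running the model. call_model() is a placeholder for
# an actual inference call; real caches match token prefixes and evict entries.
cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    return f"(expensive model output for: {prompt})"  # placeholder

def answer(prompt: str) -> str:
    if prompt not in cache:      # cache miss: pay the full inference cost
        cache[prompt] = call_model(prompt)
    return cache[prompt]         # cache hit: repeat prompts are nearly free

print(answer("What is DNS?"))  # computed by the "model"
print(answer("What is DNS?"))  # served from the cache
```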

Google previously offered model prompt caching, but only explicit prompt caching, meaning devs had to define their highest-frequency prompts. While cost savings were supposed to be guaranteed, explicit prompt caching typically involved a lot of manual work.

Some developers weren’t pleased with how Google’s explicit caching implementation worked for Gemini 2.5 Pro, which they said could cause surprisingly large API bills. Complaints reached a fever pitch in the past week, prompting the Gemini team to apologize and pledge to make changes.

In contrast to explicit caching, implicit caching is automatic. Enabled by default for Gemini 2.5 models, it passes on cost savings if a Gemini API request to a model hits a cache.


“[W]hen you send a request to one of the Gemini 2.5 models, if the request shares a common prefix as one of previous requests, then it’s eligible for a cache hit,” explained Google in a blog post. “We will dynamically pass cost savings back to you.”

The minimum prompt token count for implicit caching is 1,024 for 2.5 Flash and 2,048 for 2.5 Pro, according to Google’s developer documentation. That is not a high bar, so it shouldn’t take much to trigger these automatic savings. Tokens are the raw bits of data models work with; a thousand tokens is equivalent to about 750 words.

Given that Google’s previous claims of cost savings from caching fell short, there are some buyer-beware areas in this new feature. For one, Google recommends that developers keep repetitive context at the beginning of requests to increase the chances of implicit cache hits. Context that might change from request to request should be appended at the end, the company says.

For another, Google didn’t offer any third-party verification that the new implicit caching system would deliver the promised automatic savings. So we’ll have to see what early adopters say.
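In the meantime, the prefix-first structure Google recommends is straightforward to adopt. Here is a hedged sketch using the google-genai Python SDK; the file, questions, and any resulting savings are illustrative assumptions, not a verified recipe for cache hits.

```python
# Sketch: keeping the repetitive context at the start of each request so
# successive calls share a common prefix and become eligible for implicit
# cache hits. The document file and questions are placeholders.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Stable context, identical on every call; it should exceed the documented
# minimum (1,024 tokens for 2.5 Flash, 2,048 for 2.5 Pro) to be cacheable.
LONG_STABLE_CONTEXT = open("product_manual.txt").read()

def ask(question: str) -> str:
    # Stable prefix first, variable question appended at the end.
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=LONG_STABLE_CONTEXT + "\n\nQuestion: " + question,
    )
    return response.text

print(ask("How do I reset the device?"))
print(ask("What is the warranty period?"))  # shares the prefix: cache-eligible
```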

Keep reading the article on TechCrunch


Google rolls out AI tools to protect Chrome users against scams

Google announced on Thursday that it’s rolling out new AI-powered defenses to help combat scams on Chrome. The tech giant is going to start using Gemini Nano, its on-device large language model (LLM), on desktop to protect users against online scams. It’s also launching new AI-powered warnings for Chrome on Android to help users be aware of spammy notifications.

Google notes that the Enhanced Protection mode of Chrome’s Safe Browsing offers the highest level of security, giving users twice the protection against phishing and other online threats compared with the browser’s Standard Protection mode. Now Google will use Gemini Nano to provide Enhanced Protection users with an additional layer of defense against online scams.

Google says this on-device approach will provide immediate insight into risky websites to protect users against scams, including those that haven’t been seen before.

“Gemini Nano’s LLM is perfect for this use because of its ability to distill the varied, complex nature of websites, helping us adapt to new scam tactics more quickly,” Google said in a blog post.

The company is already using this AI-powered defense to protect users from remote tech support scams. Google plans to expand this defense to Android devices and even more types of scams in the future.


As for the new AI-powered warnings, Google notes that the risk from scammy sites can extend beyond the site itself through notifications if you have them enabled. Malicious websites can use notifications to try to scam you, which is why Chrome will now help you be aware of malicious, spammy, or misleading notifications on Android.

Now when Chrome’s on-device machine learning model flags a notification as possibly being a scam, you will receive a warning. You can choose to either unsubscribe or view the content that was blocked. If you think the warning was shown incorrectly, you can allow all future notifications from that site.


As part of today’s announcement, Google shared that it has been using AI to stop scams in Search by detecting and blocking hundreds of millions of scammy results every day. Its AI-powered scam detection systems have helped it catch 20 times as many scammy pages as before, Google says.

For example, Google has seen an increase in bad actors impersonating airline customer service agents and scamming people looking for help. The company says it has reduced these scams by more than 80%, decreasing the risk of users coming across a scammy phone number on Search.

Keep reading the article on TechCrunch

