Blue Diamond Web Services

Your Best Hosting Service Provider!

November 21, 2024

DOJ: Google must sell Chrome to end monopoly

The United States Department of Justice argued Wednesday that Google should divest its Chrome browser as part of a remedy to break up the company’s illegal monopoly in online search, according to a filing in the U.S. District Court for the District of Columbia. Google would not be allowed to re-enter the search market for five years if the DOJ’s proposed remedy is approved.

Ultimately, it will be up to U.S. District Judge Amit Mehta to decide Google’s final punishment, a decision that could fundamentally change one of the world’s largest businesses and alter the structure of the internet as we know it. That phase of the trial is expected to kick off sometime in 2025.

Judge Mehta ruled in August that Google was an illegal monopoly for abusing its power over the search business. The judge also took issue with Google’s control of various gateways to the internet, and the company’s payments to third parties in order to retain its status as a default search engine. 

The Justice Department proposed other remedies to address the search giant’s monopoly, including that Google spin off its Android mobile operating system. Prosecutors also argued the company should be prohibited from entering into exclusionary third-party contracts with browser or phone companies, such as Google’s contract to be the default search engine on all Apple products.

The Wednesday filing confirms earlier reports that prosecutors were considering pushing Google to spin off Chrome, which controls about 61% of the browser market in the U.S., according to web traffic service StatCounter.

Google did not immediately respond to TechCrunch’s request for comment.

This story is developing…

Keep reading the article on TechCrunch


India’s Arzooo, once valued at $310M, sells in distressed deal

Arzooo, an Indian startup founded by former Flipkart executives that sought to bring the “best of e-commerce” to physical stores, has sold its assets in a distressed sale to Moksha Group.

The deal follows Arzooo engaging with several startups, including Bengaluru-headquartered Udaan, for potential merger opportunities, according to people familiar with the matter.

Arzooo had raised approximately $90 million from investors including SBI Investment, Trifecta, Tony Xu, and Celesta Capital, and climbed to a peak valuation of $310 million.

The startup didn’t disclose financial terms of the deal.

Keep reading the article on TechCrunch


Nvidia’s CEO defends his moat as AI labs change how they improve their AI models

Nvidia raked in more than $19 billion in net income during the last quarter, the company reported on Wednesday, but that did little to assure investors that its rapid growth would continue. On its earnings call, analysts prodded CEO Jensen Huang about how Nvidia would fare if tech companies start using new methods to improve their AI models.

The method that underpins OpenAI’s o1 model, or “test-time scaling,” came up quite a lot. It’s the idea that AI models will give better answers if you give them more time and computing power to “think” through questions. Specifically, it adds more compute to the AI inference phase, which is everything that happens after a user hits enter on their prompt.
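To make that concrete, here is a minimal sketch of one simple flavor of test-time compute: sampling several candidate answers to the same prompt and keeping the one a scoring function likes best. It is an illustration only, not how o1 works internally (OpenAI has not published that); generate and score are hypothetical stand-ins for real model calls.

```python
import random

# Hypothetical stand-ins for a real LLM API; swap in actual calls.
def generate(prompt: str, seed: int) -> str:
    """Pretend to sample one candidate answer for the prompt."""
    random.seed(seed)
    return f"candidate answer {random.randint(0, 99)} to: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Pretend to rate how good an answer looks (e.g. via a verifier model)."""
    return random.random()

def answer_with_test_time_compute(prompt: str, n_samples: int = 8) -> str:
    """Spend more inference-time compute by sampling several candidates
    and keeping the highest-scoring one (a simple best-of-N strategy)."""
    candidates = [generate(prompt, seed=i) for i in range(n_samples)]
    return max(candidates, key=lambda ans: score(prompt, ans))

if __name__ == "__main__":
    # More samples = more inference compute spent on the same question.
    print(answer_with_test_time_compute("What is 17 * 24?", n_samples=8))
```

The only knob being turned here is n_samples: more samples means more computing power spent after the user hits enter, which is exactly where test-time scaling concentrates its effort.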

Nvidia’s CEO was asked whether he was seeing AI model developers shift over to these new methods and how Nvidia’s older chips would work for AI inference.

Huang told investors that o1, and test-time scaling more broadly, could play a larger role in Nvidia’s business moving forward, calling it “one of the most exciting developments” and “a new scaling law.” Huang did his best to assure investors that Nvidia is well-positioned for the change.

The Nvidia CEO’s remarks aligned with what Microsoft CEO Satya Nadella said onstage at a Microsoft event on Tuesday: o1 represents a new way for the AI industry to improve its models.

This is a big deal for the chip industry because it places a greater emphasis on AI inference. While Nvidia’s chips are the gold standard for training AI models, there’s a broad set of well-funded startups creating lightning-fast AI inference chips, such as Groq and Cerebras. It could be a more competitive space for Nvidia to operate in.

Despite recent reports that improvements in generative models are slowing, Huang told analysts that AI model developers are still improving their models by adding more compute and data during the pretraining phase.

Anthropic CEO Dario Amodei also said on Wednesday during an onstage interview at the Cerebral Valley summit in San Francisco that he is not seeing a slowdown in model development.

“Foundation model pretraining scaling is intact and it’s continuing,” said Huang on Wednesday. “As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough.”

That’s certainly what Nvidia investors wanted to hear, since the chipmaker’s stock has soared more than 180% in 2024 by selling the AI chips that OpenAI, Google, and Meta train their models on. However, Andreessen Horowitz partners and several other AI executives have previously said that these methods are already starting to show diminishing returns.

Huang noted that most of Nvidia’s computing workloads today are around the pretraining of AI models, not inference, but he attributed that more to where the AI world is today. He said that one day, there will simply be more people running AI models, meaning more AI inference will happen. Huang noted that Nvidia is the largest inference platform in the world today and that the company’s scale and reliability give it a huge advantage compared to startups.

“Our hopes and dreams are that someday, the world does a ton of inference, and that’s when AI has really succeeded,” said Huang. “Everybody knows that if they innovate on top of CUDA and Nvidia’s architecture, they can innovate more quickly, and they know that everything should work.”

Keep reading the article on TechCrunch


November 20, 2024

FTX CTO Gary Wang avoids prison time

Not every former top FTX executive is heading to prison.

Gary Wang, former FTX chief technical officer, was spared prison time by U.S. District Judge Lewis Kaplan today. Judge Kaplan praised Wang’s cooperation with federal authorities. Wang testified against former FTX founder and CEO Sam Bankman-Fried at his trial last fall.

Wang pleaded guilty to four felony counts of fraud and conspiracy.

Former FTX CEO Bankman-Fried was sentenced to 25 years in prison earlier this year, a sentence he filed to appeal in September. Caroline Ellison, the former CEO of FTX-affiliated Alameda Research, was sentenced to two years in prison in September.

FTX and Alameda both filed for Chapter 11 bankruptcy in November 2022 following a run on FTX’s assets by investors and revelations of fraudulent activity at the company.

Keep reading the article on TechCrunch


TV Time points to Apple’s ‘significant power’ over developers after being removed from App Store

TV Time, a popular TV and movie tracking and recommendations app with more than 30 million registered users, disappeared from Apple’s App Store for several weeks, leading to questions about its future from the app’s avid fan base. Considering that 2.5 million people use the app every month to track what they’re watching and to engage in a social network where they can comment on individual episodes, vote for favorite characters, post images and GIFs, and connect with other users, its disappearance didn’t go unnoticed.

On November 1, the company announced via a post on X that it was aware the app had been removed from the App Store and that it was “working with Apple to get it back ASAP.” It offered no other details as to what may have caused the app to be pulled or how soon it could return. Users continued to reply to that post in hopes of an update, but unfortunately for TV Time fans, several weeks passed without a resolution.

After TechCrunch reached out to TV Time and Apple about the app’s removal, the app was reinstated on the App Store.

TV Time has long been operated by entertainment analytics platform Whip Media Group, following its 2016 acquisition of the French startup, which was formerly known as TVShow Time. Similar to other services like Reelgood or JustWatch, the app can direct users to where a show or movie can be streamed and can suggest other series you might like, based on your viewing activity.

During the time of its removal, existing iOS users were still able to access the app on their devices, but anyone trying to install TV Time on a new iPhone or iPad would have been out of luck. In addition, the App Store removal meant TV Time was no longer able to issue updates to its app to its current user base.

TechCrunch reached out to the company to find out why the app was pulled.

According to Whip Media Chief Marketing Officer Jerry Inman, the dispute with Apple had to do with the mishandling of a routine intellectual property (IP) complaint. TV Time users had uploaded some TV and film cover art to the app, leading a company to claim copyright over some of those images and issue a takedown notice via the Digital Millennium Copyright Act (DMCA). While TV Time complies with the DMCA, it asked the complainant to provide proof of ownership — like a copyright registration — which the complainant was unable to do. Despite the lack of evidence, TV Time says it still removed the images from both the TV Time platform and its metadata platform, TheTVDB.

However, the complainant also demanded a financial settlement not consistent with the DMCA, so Whip Media did not agree to pay, Inman claims.

“Despite Whip Media having complied with the DMCA and explaining that to Apple, the complainant notified Apple that its claim was ‘unresolved,’ and Apple decided to remove TV Time from the App Store,” he says. The company has since resolved the matter with the complainant. As of the time of writing, the TV Time app was in the process of returning to the App Store.

However, Inman warns this is another case where Apple has too much power over the companies doing business on its App Store platform.

“Apple holds significant power over app developers by controlling access to a massive market and, in this case, seems to have acted on a complaint without requiring robust evidence from the complainant,” Inman shared with TechCrunch.

Apple was asked for comment and did not respond.

While TV Time was missing from the App Store, fans could use the app on Android and the web at app.tvtime.com. According to data from app intelligence firm Appfigures, TV Time has seen 7.4 million installs on iOS since Appfigures’ tracking began on Jan. 1, 2017. (The app itself first launched in 2012.)

Keep reading the article on TechCrunch


Current AI scaling laws are showing diminishing returns, forcing AI labs to change course

AI labs traveling the road to super-intelligent systems are realizing they might have to take a detour.

“AI scaling laws,” the methods and expectations that labs have used to increase the capabilities of their models for the last five years, are now showing signs of diminishing returns, according to several AI investors, founders, and CEOs who spoke with TechCrunch. Their sentiments echo recent reports that indicate models inside leading AI labs are improving more slowly than they used to.

Everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models, and expect them to turn into some sort of all-knowing digital god. Maybe that sounds obvious, but these scaling laws were a key factor in developing ChatGPT, making it better, and likely influencing many CEOs to make bold predictions about AGI arriving in just a few years.

OpenAI and Safe Superintelligence co-founder Ilya Sutskever told Reuters last week that “everyone is looking for the next thing” to scale their AI models. Earlier this month, a16z co-founder Marc Andreessen said in a podcast that AI models currently seem to be converging at the same ceiling on capabilities.

But now, almost immediately after these concerning trends started to emerge, AI CEOs, researchers, and investors are already declaring we’re in a new era of scaling laws. “Test-time compute,” which gives AI models more time and compute to “think” before answering a question, is an especially promising contender to be the next big thing.

“We are seeing the emergence of a new scaling law,” said Microsoft CEO Satya Nadella onstage at Microsoft Ignite on Tuesday, referring to the test-time compute research underpinning OpenAI’s o1 model.

He’s not the only one now pointing to o1 as the future.

“We’re now in the second era of scaling laws, which is test-time scaling,” said Andreessen Horowitz partner Anjney Midha, who also sits on the board of Mistral and was an angel investor in Anthropic, in a recent interview with TechCrunch.

If the unexpected success of the previous AI scaling laws – and now their sudden slowing – tells us anything, it’s that it is very hard to predict how and when AI models will improve.

Regardless, there seems to be a paradigm shift underway: the ways AI labs try to advance their models for the next five years likely won’t resemble the last five.

What are AI scaling laws?

The rapid AI model improvements that OpenAI, Google, Meta, and Anthropic have achieved since 2020 can largely be attributed to one key insight: use more compute and more data during an AI model’s pretraining phase.

When researchers give machine learning systems abundant resources during this phase – in which AI identifies and stores patterns in large datasets – models have tended to perform better at predicting the next word or phrase.
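To see why that relationship eventually flattens, consider the commonly cited Chinchilla-style form of a pretraining scaling law (from Hoffmann et al.), in which loss falls as a power law in model size and training tokens toward an irreducible floor. The constants below are illustrative placeholders rather than fitted values; the point is simply that each doubling of model size buys a smaller loss reduction than the last.

```python
def pretraining_loss(params: float, tokens: float,
                     E: float = 1.7, A: float = 400.0, B: float = 410.0,
                     alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style empirical scaling law: loss falls as a power law
    in model size (params) and dataset size (tokens) toward a floor E.
    Constants here are illustrative placeholders, not fitted values."""
    return E + A / params**alpha + B / tokens**beta

# Doubling model size repeatedly shows shrinking gains (diminishing returns).
for n in [1e9, 2e9, 4e9, 8e9, 16e9]:
    print(f"{n:.0e} params -> loss {pretraining_loss(n, tokens=1e12):.4f}")
```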

This first generation of AI scaling laws pushed the envelope of what computers could do, as engineers increased the number of GPUs used and the quantity of data they were fed. Even if this particular method has run its course, it has already redrawn the map. Every Big Tech company has basically gone all in on AI, while Nvidia, which supplies the GPUs all these companies train their models on, is now the most valuable publicly traded company in the world.

But these investments were also made with the expectation that scaling would continue as expected.

It’s important to note that scaling laws are not laws of nature, physics, math, or government. They’re not guaranteed by anything, or anyone, to continue at the same pace. Even Moore’s Law, another famous scaling law, eventually petered out — though it certainly had a longer run.

“If you just put in more compute, you put in more data, you make the model bigger – there are diminishing returns,” said Anyscale co-founder and former CEO Robert Nishihara in an interview with TechCrunch. “In order to keep the scaling laws going, in order to keep the rate of progress increasing, we also need new ideas.”

Nishihara is quite familiar with AI scaling laws. Anyscale reached a billion-dollar valuation by developing software that helps OpenAI and other AI model developers scale their AI training workloads to tens of thousands of GPUs. Anyscale has been one of the biggest beneficiaries of pretraining scaling laws around compute, but even its co-founder recognizes that the season is changing.

“When you’ve read a million reviews on Yelp, maybe the next reviews on Yelp don’t give you that much,” said Nishihara, referring to the limitations of scaling data. “But that’s pretraining. The methodology around post-training, I would say, is quite immature and has a lot of room left to improve.”

To be clear, AI model developers will likely continue chasing after larger compute clusters and bigger datasets for pretraining, and there’s probably more improvement to eke out of those methods. Elon Musk recently finished building a supercomputer with 100,000 GPUs, dubbed Colossus, to train xAI’s next models. There will be more, and larger, clusters to come.

But trends suggest exponential growth is not possible by simply using more GPUs with existing strategies, so new methods are suddenly getting more attention.

Test-time compute: the AI industry’s next big bet

When OpenAI released a preview of its o1 model, the startup announced it was part of a new series of models separate from GPT.

OpenAI improved its GPT models largely through traditional scaling laws: more data, more computing power during pretraining. But now that method reportedly isn’t gaining them much. The o1 series of models relies on a new concept, test-time compute, so called because the computing resources are used after a prompt, not before. The technique hasn’t been explored much yet in the context of neural networks, but is already showing promise.

Some are already pointing to test-time compute as the next method to scale AI systems.

“A number of experiments are showing that even though pretraining scaling laws may be slowing, the test-time scaling laws – where you give the model more compute at inference – can give increasing gains in performance,” said a16z’s Midha.

“OpenAI’s new ‘o’ series pushes [chain-of-thought] further, and requires far more computing resources, and therefore energy, to do so,” said famed AI researcher Yoshua Bengio in an op-ed on Tuesday. “We thus see a new form of computational scaling appear. Not just more training data and larger models but more time spent ‘thinking’ about answers.”

Over a period of 10 to 30 seconds, OpenAI’s o1 model re-prompts itself several times, breaking down a large problem into a series of smaller ones. Despite ChatGPT saying it is “thinking,” it isn’t doing what humans do — although our internal problem-solving methods, which benefit from clear restatement of a problem and stepwise solutions, were key inspirations for the method.
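In rough outline, that kind of inference-time decomposition loop can be sketched as follows. This is a toy illustration under the assumption that any chat model can be reached through a single llm function; it is not a description of OpenAI’s actual o1 pipeline, which has not been published.

```python
from typing import Callable, List

def solve_by_decomposition(question: str, llm: Callable[[str], str],
                           max_steps: int = 5) -> str:
    """Re-prompt the same model several times: first ask it to split the
    problem into sub-questions, then answer each one, then combine.
    A toy illustration of inference-time 'thinking', not o1 itself."""
    plan = llm(f"Break this problem into at most {max_steps} smaller steps:\n{question}")
    steps: List[str] = [line for line in plan.splitlines() if line.strip()][:max_steps]

    notes = []
    for step in steps:
        notes.append(llm(f"Question: {question}\nSub-step: {step}\n"
                         f"Work so far: {notes}\nSolve just this sub-step."))

    return llm(f"Question: {question}\nIntermediate work: {notes}\n"
               f"Give the final answer.")

# Usage with a trivial stand-in "model" so the sketch runs end to end:
if __name__ == "__main__":
    fake_llm = lambda prompt: f"[model reply to {len(prompt)} chars of prompt]"
    print(solve_by_decomposition("What is 17% of 240, minus 12?", fake_llm))
```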

A decade or so back, Noam Brown, who now leads OpenAI’s work on o1, was trying to build AI systems that could beat humans at poker. During a recent talk, Brown said he noticed at the time how human poker players took time to consider different scenarios before playing a hand. In 2017, he introduced a method to let a model “think” for 30 seconds before playing. In that time, the AI was playing different subgames, figuring out how different scenarios would play out to determine the best move.

Ultimately, the AI performed seven times better than his past attempts.
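The general pattern behind that result, stripped of any poker specifics, is a time-budgeted search: keep simulating possible continuations until the clock runs out, then pick the option whose simulations did best. The sketch below is a generic, hypothetical illustration of that idea, not Brown’s actual algorithm.

```python
import random
import time

def best_move(legal_moves, simulate, think_seconds: float = 1.0):
    """Spend a fixed 'thinking' budget running random playouts for each
    candidate move and return the move with the best average outcome.
    simulate(move) -> float should return a payoff estimate for one playout."""
    totals = {m: 0.0 for m in legal_moves}
    counts = {m: 0 for m in legal_moves}
    deadline = time.monotonic() + think_seconds

    while time.monotonic() < deadline:
        move = random.choice(legal_moves)   # explore one candidate option
        totals[move] += simulate(move)      # run one simulated continuation
        counts[move] += 1

    return max(legal_moves, key=lambda m: totals[m] / max(counts[m], 1))

# Toy usage: a fake game where "raise" is slightly better on average.
if __name__ == "__main__":
    payoff = {"fold": 0.0, "call": 0.4, "raise": 0.5}
    noisy = lambda move: payoff[move] + random.gauss(0, 1)
    print(best_move(["fold", "call", "raise"], noisy, think_seconds=0.5))
```

Giving the search a longer think_seconds budget is the board-game analogue of giving a language model more test-time compute.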

Granted, Brown’s research in 2017 did not use neural networks, which weren’t as popular at the time. However, MIT researchers released a paper last week showing that test-time compute significantly improves an AI model’s performance on reasoning tasks.

It’s not immediately clear how test-time compute would scale. It could mean that AI systems need a really long time to think about hard questions, maybe hours or even days. Another approach could be letting an AI model “think” through a question on lots of chips simultaneously.

If test-time compute does take off as the next place to scale AI systems, Midha says the demand for AI chips that specialize in high-speed inference could go up dramatically. This could be good news for startups such as Groq and Cerebras, which specialize in fast AI inference chips. If finding the answer is just as compute-heavy as training the model, the “pick and shovel” providers in AI win again.

The AI world is not yet panicking

Most of the AI world doesn’t seem to be losing its cool about these old scaling laws slowing down. Even if test-time compute does not prove to be the next wave of scaling, some feel we’re only scratching the surface of applications for current AI models.

New popular products could buy AI model developers some time to figure out new ways to improve the underlying models.

“I’m completely convinced we’re going to see at least 10 to 20x gains in model performance just through pure application-level work, just allowing the models to shine through intelligent prompting, UX decisions, and passing context at the right time into the models,” said Midha.

For example, ChatGPT’s Advanced Voice Mode is one of the more impressive applications from current AI models. However, that was largely an innovation in user experience, not necessarily the underlying tech. You can see how further UX innovations, such as giving that feature access to the web or applications on your phone, would make the product that much better.

Kian Katanforoosh, the CEO of AI startup Workera and a Stanford adjunct lecturer on deep learning, tells TechCrunch that companies building AI applications, like his, don’t necessarily need exponentially smarter models to build better products. He also says the products around current models have a lot of room to get better.

“Let’s say you build AI applications and your AI hallucinates on a specific task,” said Katanforoosh. “There are two ways that you can avoid that. Either the LLM has to get better and it will stop hallucinating, or the tooling around it has to get better and you’ll have opportunities to fix the issue.”
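One common version of that second route, improving the tooling around the model, is to wrap the model call in a validator and retry loop so unsupported answers are caught before they reach the user. The sketch below shows that generic pattern; llm and is_grounded are hypothetical placeholders, and this is not a description of Workera’s product.

```python
from typing import Callable

def answer_with_checks(question: str,
                       llm: Callable[[str], str],
                       is_grounded: Callable[[str, str], bool],
                       max_retries: int = 3) -> str:
    """Generic guardrail pattern: ask the model, validate the answer
    (e.g. against retrieved documents), and re-ask with feedback if it fails."""
    prompt = question
    for _ in range(max_retries):
        answer = llm(prompt)
        if is_grounded(question, answer):
            return answer
        # Feed the failure back so the model can try to correct itself.
        prompt = (f"{question}\nYour previous answer was not supported by the "
                  f"source material: {answer}\nTry again, citing only known facts.")
    return "I couldn't produce a sufficiently supported answer."

# Toy usage with stand-ins so the sketch runs:
if __name__ == "__main__":
    fake_llm = lambda p: "Paris" if "capital of France" in p else "unsure"
    checker = lambda q, a: a != "unsure"
    print(answer_with_checks("What is the capital of France?", fake_llm, checker))
```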

Whatever the case is for the frontier of AI research, users probably won’t feel the effects of these shifts for some time. That said, AI labs will do whatever is necessary to continue shipping bigger, smarter, and faster models at the same rapid pace. That means several leading tech companies could now pivot how they’re pushing the boundaries of AI.

Keep reading the article on TechCrunch

