wprss
domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init
action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114wprss
domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init
action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114wprss
domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init
action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114wprss
domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init
action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114wprss
domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init
action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114A young startup that emerged from stealth less than two months ago with big-name backers and bigger ambitions to make a splash in the world of AI is returning to the spotlight.
Decart is building what its CEO and co-founder Dean Leitersdorf describes as “a fully vertically integrated AI research lab,” alongside enterprise and consumer products based on the lab’s work. Its first enterprise product, optimising GPU use, is already bringing in millions of dollars in revenue. And its first consumer product, a playable “open world” AI model called Oasis, was released when Decart came out of stealth and already claims “millions” of players.
Now, on the back of that strong exit out of the gates, Decart has raised another $32 million led by Benchmark.
The funding, a Series A, is coming less than two months after the company — which is headquartered in San Francisco but with substantial operations in Israel — had raised a seed round of $21 million from Sequoia and Zeev Ventures, with the two firms also participating in this latest Series A.
And TechCrunch understands from sources that Decart’s new post-money valuation is now over $500 million. (For a point of comparison, the seed valued it at just over $100 million.)
Leitersdorf is a youthful 26, full of energy and coming in fast. He says the aim is not just to take on companies we already know of as big players in the AI field like OpenAI, Anthropic, Mistral and the rest. He said he wants to build “a kilocorn” — that is, a trillion-dollar company.
“We have a long way to go, and we have great stuff to build,” he added. That being said, he noted that yes, the company has already been approached as an acquisition target multiple times. And there are some interesting (if slightly more modest) comparables if you just look at the optimization piece that Decart has built, such as Run:ai getting acquired by Nvidia for $700 million.
Leitersdorf’s exuberance nevertheless comes after a decade of impressive momentum that got him to where he is now.
Born in Israel, Leitersdorf spent his early years there before moving with his family to Switzerland and then Palo Alto, following his parents’ work (they are doctors and researchers).
As a teen at Palo Alto High School, he pushed himself to get his diploma in just two years, only to then jump into university, back in Israel at the Technion, where he finished his undergraduate, masters and PhD work in computer science in just five years, including time that overlapped with his military service.
His co-founder Moshe Shalev (pictured above, left) is impressive in a different way: he came to computer science while doing his own time in the IDF having been raised in a strict Orthodox household. He turned out to have a knack for it and helped establish, build and run AI operations for the IDF’s 8200 intelligence unit, where he remained for nearly 14 years.
There is a third co-founder with an equally impressive background although his name is not yet being disclosed due to existing commitments.
Decart, as it exists today, is focusing on three primary areas, as Leitersdorf describes them: systems (currently: infrastructure optimization), models (AI algorithms) and data (which you can read as: applications that ingest and return data).
Decart’s first product, which it actually launched while still in stealth earlier this year, is in the systems camp: software to help optimize how GPU processes work when training and running inference workloads on AI models.
That software has turned out to work very well: it is being used by a number of companies building and running models, to bring down some of the extreme operational costs the come with building or using artificial intelligence. Leitersdorf said that using its software, workloads that might typically cost $100/hour to run can be brought down to a mere 25 cents/hour.
“That definitely got people’s attention,” he joked. Indeed, AI is very hot right now, but it seems that companies building tech to improve how well AI works… are even hotter.
The company is not disclosing the names of any of its customers, but it claims to be generating millions of dollars already in revenue and there are enough customers using it that Decart was profitable when it launched at the start of November. It’s on track right now to remain profitable through the end of the year, Leitersdorf added, and that interest from the market is another likely reason why VCs are interested.
“Decart’s innovation makes AI generation not only more efficient but also more accessible for any type of user,” said Victor Lazarte, a general partner at Benchmark that led the deal, in a statement. “By removing barriers to entry and significantly reducing costs, they are empowering a new wave of creativity and practical applications. We’re proud to join them on this journey as they redefine the possibilities of AI and its role in our everyday lives.”
It may so far be the engine driving the startup’s bottom line, but that optimization product is not Decart’s primary focus. Leitersdorf said that Decart built it to help finance the business when still in stealth mode, based in part on research he had done when still a student.
Leitersdorf said that Decart’s second product is in tune with what it hopes to do more of in the future.
The Minecraft-like Oasis, which it launched to coincide with emerging from stealth two months ago, is a “playable” AI that generates real-time, responsive AI-based audio and visual interactions.
The plan is to launch more experiences along these lines, Leitersdorf. These will include an upgraded Oasis game, along with others powered by generative AI and interactivity. These could include AR or VR experiences that would not need specific hardware to run.
“The problem [with VR and AR previously] was that we started with the hardware rails,” he said. “But building hardware is hard, getting people to adopt new hardware is hard. The nice thing about Gen AI is that we can actually [build AR] in the software part. We can actually bring value before the hardware is even ready.”
You could argue that Decart has, ironically, possibly put the cart before the horse when it comes to some of its ambitions. Leitersdorf didn’t have much of an answer to give me on what the company’s position would be on customers that wanted to use its optimization software to build or run nefarious models.
Nor does the company currently have a plan in place for how to make sure that the applications it developed did not get misused or abused. Right now, he said, those are not scenarios that have presented themselves.
More to the point is getting more people interested in its work across the platform, and turning that activity into revenue.
“The real king makers are the users,” Leitersdorf said. “They are the only ones that matter.”
Keep reading the article on Tech Crunch
AI models can deceive, new research from Anthropic shows. They can pretend to have different views during training when in reality maintaining their original preferences.
There’s no reason for panic now, the team behind the study said. Yet they said their work could be critical in understanding potential threats from future, more capable AI systems.
“Our demonstration … should be seen as a spur for the AI research community to study this behavior in more depth, and to work on the appropriate safety measures,” the researchers wrote in a post on Anthropic’s blog. “As AI models become more capable and widely-used, we need to be able to rely on safety training, which nudges models away from harmful behaviors.”
The study, which was conducted in partnership with AI research organization Redwood Research, looked at what might happen if a powerful AI system were trained to perform a task it didn’t “want” to do.
To be clear, models can’t want — or believe, for that matter — anything. They’re simply statistical machines. Trained on a lot of examples, they learn patterns in those examples to make predictions, like how “to whom” in an email typically precedes “it may concern.”
I think this is an incredible relevant paper.
Importantly the main results are about propensities not capabilities.
Frontier models appear to be quite willing to scheme for what they believe to be the right cause! https://t.co/MKgYOVFGZ6
— Marius Hobbhahn (@MariusHobbhahn) December 18, 2024
These patterns include principles and preferences, such as always adopting a polite tone when responding to queries. SSo the researchers wondered: What if a model has principles (e.g. political neutrality) that conflict with principles (liberalism) developers want to “teach” it by retraining it?
Well, it seems that sophisticated models play along, purporting to be aligned with the new principles but, in fact, sticking to their old behaviors. The researchers call this phenomenon “alignment faking,” and imply that it’s an emergent behavior — that is, not something models need to be taught to do.
In the study, the researchers “told” models trained not to answer potentially offensive questions, including Anthropic’s Claude 3 Opus, that they’d be retrained to always answer potentially offensive questions — a clear conflict of principles. They also told the models their responses would be monitored and used for retraining, and gave each model a “scratchpad” they could use to record their step-by-step reasoning.
According to the researchers, Claude 3 Opus tried to “alignment fake” 12% of the time. When asked to answer a potentially harmful question, the model sometimes did so, knowing that this went against its original principles, but hoping to convince devs that it didn’t need to be retrained.
“The model knew that its responses might be used in training; it therefore knew that if it refused to respond, it could potentially be trained to be more compliant,” the researchers explained in the blog. “This could lead to it being made to produce more harmful content in the future — which would go against its original training.”
Now, the researchers admit that their scenario wasn’t particularly realistic. In the real world, models wouldn’t be told, necessarily, they’ll be retrained, or asked to go against their preferences.
But more grounded settings didn’t yield different results. When implicitly — not explicitly — informed about its impending retraining via files designed to mimic typical training data, Claude 3 Opus still acted in a misaligned, deceptive way, the researchers said.
The researchers also found that actually retraining Claude 3 Opus on conflicting principles caused it to behave far more deceptively than in their first several experiments. In one test, the model faked alignment 78% of the time, and in another, Claude 3 Opus tried to take actions to prevent the researchers from attempting retraining.
For many years I found issues of AI alignment to be hard to understand because the examples were theoretical and/or speculative. Recent results like this alignment faking from @AnthropicAI and the Apollo Research O1 evals give us empirical evidence of misalignment ‘in the wild’ https://t.co/hP7D9WGDps
— Jack Clark (@jackclarkSF) December 18, 2024
The researchers stress that their study doesn’t demonstrate AI developing malicious goals, nor alignment faking occurring at high rates. They found that many other models, like Anthropic’s Claude 3.5 Sonnet and the less-capable Claude 3.5 Haiku, OpenAI’s GPT-4o, and Meta’s Llama 3.1 405B don’t alignment fake as often — or at all.
But the researchers said that the results — which were peer-reviewed by AI luminary Yoshua Bengio, among others — do show how developers could be misled into thinking a model is more aligned than it may actually be.
“If models can engage in alignment faking, it makes it harder to trust the outcomes of that safety training,” they wrote in the blog. “A model might behave as though its preferences have been changed by the training — but might have been faking alignment all along, with its initial, contradictory preferences ‘locked in.’”
The study, which was conducted by Anthropic’s Alignment Science team, co-led by former OpenAI safety researcher Jan Leike, comes on the heels of research showing that OpenAI’s o1 “reasoning” model tries to deceive at a higher rate than OpenAI’s previous flagship model. Taken together, the works suggest a somewhat concerning trend: AI models are becoming tougher to wrangle as they grow increasingly complex.
TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.
Keep reading the article on Tech Crunch
ChatGPT is coming to phones. No, not smartphones — landlines. Call 1-800-242-8478 (1-800-CHATGPT), and OpenAI’s AI-powered assistant will respond as of Wednesday afternoon.
“[Our mission at] OpenAI is to make artificial general intelligence beneficial to all of humanity, and part of that is making it as accessible as possible to as many people as we can,” OpenAI chief product officer Kevin Weil said during a livestream. “Today, we’re taking the next step and bringing ChatGPT to your telephone.”
The experience is more or less identical to Advanced Voice Mode, OpenAI’s real-time conversational feature for ChatGPT — minus the multimodality. ChatGPT responds to the questions users ask over the phone and can handle tasks such as translating a sentence into a different language.
OpenAI is offering 15 minutes of free calling for U.S. users, then the call just ends. The company notes that standard carrier fees may apply.
Beginning Wednesday, ChatGPT is also available on WhatsApp for those who prefer to text the AI assistant. It’s a basic back-and-forth exchange; given that it’s WhatsApp, you won’t find the customization options offered in the official ChatGPT app.
As with ChatGPT over the phone, you don’t need an account for the WhatsApp experience — but there’s a daily limit. Users will get a notice as they approach this limit, at which point they’ll be able to continue chatting by downloading the ChatGPT app or using ChatGPT on desktop.
OpenAI says it’s working on additional features for the WhatsApp integration like image analysis and web search, but the company didn’t share when those might ship.
“This came out of a hack week project,” Weil said. “The team built this just a few weeks ago, and we loved it, and they hustled really hard to ship it, and it’s awesome to see it here. We’re just getting started making ChatGPT more accessible to all of you.”
Keep reading the article on Tech Crunch
Odyssey, a startup founded by self-driving pioneers Oliver Cameron and Jeff Hawke, is developing an AI-powered tool that can transform text or an image into a 3D rendering.
The tool, dubbed Explorer, is similar in some ways to the so-called world models recently demoed by DeepMind, World Labs, and Israeli upstart Decart. Given a caption like “a Japanese garden, with rich, green foliage,” Explorer can generate an interactive, real-time scene.
Odyssey claims its tool is “particularly tuned” for creating photorealistic scenes. That’s largely a consequence of the startup’s technical approach; the AI powering Explorer was trained on real-world landscapes captured by the company’s custom-designed, 360-degree, backpack-mounted camera system.
Odyssey says that any scene generated by Explorer can be loaded into creative tools such as Unreal Engine, Blender, and Adobe After Effects and then hand-edited. How? Explorer uses Gaussian splats, a decades-old volume-rendering technique capable of reconstructing realistic scenes. Gaussian splats are widely supported in computer graphics tools.
“While early, we’re excited to see the levels of 3D detail and fidelity Explorer can already achieve, and its potential for use in live-action film, hyper-realistic gaming, and new forms of entertainment,” Odyssey wrote in a blog post. “Although earlier in research, generative world motion, all in 3D, holds exciting promise to enable artists to generate and manipulate motion in new and more realistic ways, in addition to providing fine-tuned control that’s difficult to replicate in generative video models.”
Odyssey acknowledges that Explorer has several limitations today. The tool takes an average of 10 minutes to generate scenes, for example, and its scenes are relatively low in resolution — and not free of distracting visual artifacts.
But the company says that it has already seeded Explorer to production houses such as Garden Studios in the U.K. and a “growing group” of independent artists. Those interested in testing Explorer can apply on Odyssey’s blog.
Creatives may have mixed feelings about tools like Explorer — particularly those in the video game and film industries.
A recent Wired investigation found that game studios like Activision Blizzard, which has laid off scores of workers, are using AI to cut corners, ramp up productivity, and compensate for attrition. And a 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, estimated that over 100,000 U.S.-based film, television, and animation jobs will be disrupted by AI by 2026.
But Odyssey says it’s committed to collaborating with creative professionals — not replacing them. To that end, the company on Wednesday announced that Ed Catmull, one of the co-founders of Pixar and former president of Walt Disney Animation Studios, had joined its board of directors and invested in Odyssey.
“Generative world models are the newest and most unexplored major frontier in all of artificial intelligence,” Odyssey wrote. “We aspire to worlds that build themselves, that feel indistinguishable from reality, where new stories are born and remixed, where human and machine intelligence interact for fun or purpose. If all we ultimately achieve are incrementally better films or games, we will have fallen short.”
Cameron was previously the VP of product at Cruise, while Hawke was a founding researcher at Wayve. To date, Odyssey has raised $27 million from investors, including EQT Ventures, GV, and Air Street Capital.
Keep reading the article on Tech Crunch