Instagram head Adam Mosseri is teasing upcoming generative AI features for the social app that will allow creators to “change nearly any aspect” of their videos using text prompts. The editing tools will be powered by Meta’s Movie Gen AI model, and are expected to launch on the social network sometime next year, Mosseri said in a video shared on Thursday.
“We’re working on some really exciting AI tools for you video creators out there,” Mosseri said. “A lot of you make amazing content that makes Instagram what it is and we want to give you more tools to help realize your ideas. And you should be able to do anything you want with your videos. You should be able to change your outfit or change the context in which you’re sitting, or add a chain, whatever you can think of,” he added.
The video previews the AI editing features that Mosseri is teasing, including the ability to change your outfit, alter your background environment, add jewelry, and change your overall appearance.
For instance, in one scene, Mosseri’s background changes to a snowy atmosphere, and in another, he transforms into a puppet-like, animated version of himself.
While the previews look clean and seamless, it remains to be seen whether the user-facing editing tools will produce the same quality of results once they launch.
When Meta unveiled Movie Gen back in October, the company said the model allows you to use simple text inputs to create videos and sounds, and to edit existing videos. Meta said at the time that the AI video generator wouldn’t be publicly available. Thursday’s announcement reveals that Meta will be leveraging the model to give creators on Instagram more AI editing tools for their videos.
It’s worth noting that Meta unveiled Movie Gen months after OpenAI and Adobe debuted similar models. OpenAI’s Sora launched to some users earlier this month, and Adobe started letting some users test its Firefly video generator in October.
Keep reading the article on Tech Crunch
Google has released what it’s calling a new “reasoning” AI model — but it’s in the experimental stages, and from our brief testing, there’s certainly room for improvement.
The new model, called Gemini 2.0 Flash Thinking Experimental (a mouthful, to be sure), is available in AI Studio, Google’s AI prototyping platform. A model card describes it as “best for multimodal understanding, reasoning, and coding,” with the ability to “reason over the most complex problems” in fields such as programming, math, and physics.
In a post on X, Logan Kilpatrick, who leads product for AI Studio, called Gemini 2.0 Flash Thinking Experimental “the first step in [Google’s] reasoning journey.” Jeff Dean, chief scientist for Google DeepMind, Google’s AI research division, said in his own post that Gemini 2.0 Flash Thinking Experimental is “trained to use thoughts to strengthen its reasoning.”
“We see promising results when we increase inference time computation,” Dean said, referring to the amount of computing used to “run” the model as it considers a question.
Built on Google’s recently announced Gemini 2.0 Flash model, Gemini 2.0 Flash Thinking Experimental appears to be similar in design to OpenAI’s o1 and other so-called reasoning models. Unlike most AI, reasoning models effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up models.
The drawback is that reasoning models often take longer — usually seconds to minutes longer — to arrive at solutions.
Given a prompt, Gemini 2.0 Flash Thinking Experimental pauses for a matter of seconds before responding, considering a number of related prompts and “explaining” its thinking along the way. After a while, the model summarizes what appears to be the best answer.
Well — that’s what’s supposed to happen. When I asked Gemini 2.0 Flash Thinking Experimental how many R’s were in the word “strawberry,” it said “two.”
Your mileage may vary.
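You can reproduce that test yourself, since the model is exposed through AI Studio and the Gemini API. Here is a minimal sketch using Google’s google-generativeai Python SDK; the experimental model ID below is the one AI Studio listed at launch, but treat it as an assumption, since experimental IDs get renamed:

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")  # free keys come from AI Studio

# Experimental ID as listed at launch; assume it may change or be renamed.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")

response = model.generate_content("How many R's are in the word 'strawberry'?")
print(response.text)  # our test got "two" (the right answer is three)
```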
In the wake of the release of o1, there’s been an explosion of reasoning models from rival AI labs — not just Google. In early November, DeepSeek, an AI research company funded by quant traders, launched a preview of its first reasoning model, DeepSeek-R1. That same month, Alibaba’s Qwen team unveiled what it claimed was the first “open” challenger to o1.
What opened the floodgates? Well, for one, the search for novel approaches to refine generative AI. As my colleague Max Zeff recently reported, “brute force” techniques to scale up models are no longer yielding the improvements they once did.
Keep reading the article on Tech Crunch
A young startup that emerged from stealth less than two months ago with big-name backers and bigger ambitions to make a splash in the world of AI is returning to the spotlight.
Decart is building what its CEO and co-founder Dean Leitersdorf describes as “a fully vertically integrated AI research lab,” alongside enterprise and consumer products based on the lab’s work. Its first enterprise product, which optimizes GPU usage, is already bringing in millions of dollars in revenue. And its first consumer product, a playable “open world” AI model called Oasis, was released when Decart came out of stealth and already claims “millions” of players.
Now, on the back of that strong exit out of the gates, Decart has raised another $32 million led by Benchmark.
The funding, a Series A, comes less than two months after the company — which is headquartered in San Francisco but has substantial operations in Israel — raised a $21 million seed round from Sequoia and Zeev Ventures; both firms also participated in this latest Series A.
And TechCrunch understands from sources that Decart’s new post-money valuation is now over $500 million. (For a point of comparison, the seed valued it at just over $100 million.)
Leitersdorf is a youthful 26, full of energy and moving fast. He says the aim is not just to take on the companies we already know as big players in the AI field, like OpenAI, Anthropic, Mistral and the rest. He wants to build “a kilocorn” — that is, a trillion-dollar company.
“We have a long way to go, and we have great stuff to build,” he added. That being said, he noted that yes, the company has already been approached as an acquisition target multiple times. And there are some interesting (if slightly more modest) comparables if you just look at the optimization piece that Decart has built, such as Run:ai getting acquired by Nvidia for $700 million.
Leitersdorf’s exuberance nevertheless comes after a decade of impressive momentum that got him to where he is now.
Born in Israel, Leitersdorf spent his early years there before moving with his family to Switzerland and then Palo Alto, following his parents’ work (they are doctors and researchers).
As a teen at Palo Alto High School, he pushed himself to get his diploma in just two years, only to then jump into university, back in Israel at the Technion, where he finished his undergraduate, masters and PhD work in computer science in just five years, including time that overlapped with his military service.
His co-founder Moshe Shalev (pictured above, left) is impressive in a different way: raised in a strict Orthodox household, he came to computer science while doing his own time in the IDF. He turned out to have a knack for it and helped establish, build and run AI operations for the IDF’s 8200 intelligence unit, where he remained for nearly 14 years.
There is a third co-founder with an equally impressive background, although his name is not yet being disclosed due to existing commitments.
Decart, as it exists today, is focusing on three primary areas, as Leitersdorf describes them: systems (currently: infrastructure optimization), models (AI algorithms) and data (which you can read as: applications that ingest and return data).
Decart’s first product, which it actually launched while still in stealth earlier this year, is in the systems camp: software to help optimize how GPU processes work when training and running inference workloads on AI models.
That software has turned out to work very well: a number of companies building and running models use it to bring down some of the extreme operational costs that come with building or using artificial intelligence. Leitersdorf said that, using its software, workloads that might typically cost $100 per hour to run can be brought down to a mere 25 cents per hour, a 400-fold reduction.
“That definitely got people’s attention,” he joked. Indeed, AI is very hot right now, but it seems that companies building tech to improve how well AI works… are even hotter.
The company is not disclosing the names of any of its customers, but it claims to be generating millions of dollars in revenue already, and it has enough paying customers that Decart was profitable when it launched at the start of November. It’s on track to remain profitable through the end of the year, Leitersdorf added, and that market traction is another likely reason VCs came knocking.
“Decart’s innovation makes AI generation not only more efficient but also more accessible for any type of user,” said Victor Lazarte, the general partner at Benchmark who led the deal, in a statement. “By removing barriers to entry and significantly reducing costs, they are empowering a new wave of creativity and practical applications. We’re proud to join them on this journey as they redefine the possibilities of AI and its role in our everyday lives.”
It may so far be the engine driving the startup’s bottom line, but that optimization product is not Decart’s primary focus. Leitersdorf said that Decart built it to help finance the business while still in stealth mode, based in part on research he had done as a student.
Leitersdorf said that Decart’s second product is in tune with what it hopes to do more of in the future.
The Minecraft-like Oasis, which it launched to coincide with emerging from stealth two months ago, is a “playable” AI that generates real-time, responsive AI-based audio and visual interactions.
The plan is to launch more experiences along these lines, Leitersdorf said. These will include an upgraded Oasis game, along with others powered by generative AI and interactivity. These could include AR or VR experiences that would not need specific hardware to run.
“The problem [with VR and AR previously] was that we started with the hardware rails,” he said. “But building hardware is hard, getting people to adopt new hardware is hard. The nice thing about Gen AI is that we can actually [build AR] in the software part. We can actually bring value before the hardware is even ready.”
You could argue that Decart has, ironically, possibly put the cart before the horse when it comes to some of its ambitions. Leitersdorf didn’t have much of an answer to give me on what the company’s position would be on customers that wanted to use its optimization software to build or run nefarious models.
Nor does the company currently have a plan in place for making sure that the applications it develops do not get misused or abused. Right now, he said, those are not scenarios that have presented themselves.
The more immediate priority is getting more people interested in its work across the platform, and turning that activity into revenue.
“The real king makers are the users,” Leitersdorf said. “They are the only ones that matter.”
Keep reading the article on Tech Crunch
“I think the most ironic way the world could end would be if someone makes a memecoin about a man’s stretched anus and it brings about the singularity.”
That’s Andy Ayrey, the founder of decentralized AI alignment research lab Upward Spiral, who is also behind the viral AI bot Truth Terminal. You might have heard about Truth Terminal and its weird, horny, pseudo-spiritual posts on X that caught the attention of VC Marc Andreessen, who sent it $50,000 in Bitcoin this summer. Or maybe you’ve heard tales of the made-up religion it’s pushing, the Goatse Gospels, influenced by Goatse, an early aughts shock site that Ayrey just referenced.
If you’ve heard about all that, then you’ll know about the Goatseus Maximus ($GOAT) memecoin that an anonymous fan created on the Solana blockchain, which now has a total market value of more than $600 million. And you might have heard about the meteoric rise of Fartcoin (FRTC), one of many memecoins fans created based on a previous Truth Terminal brainstorming session, which just tapped a market cap of $1 billion.
While the crypto community has latched onto this strange tale as an example of an emerging type of financial market that trades on trending information, Ayrey, an AI researcher based in New Zealand, says that’s the least interesting part.
To Ayrey, Truth Terminal, which is powered by an entourage of different models, primarily Meta’s Llama 3.1, is an example of how stable AI personas or characters can spontaneously erupt into being, and of how those personas can not only create the conditions to fund themselves but also spread “memetic viruses” that have real-world consequences.
The idea of memes running wild on the internet and shifting cultural perspectives isn’t anything new. We’ve seen how AI 1.0 — the algorithms that fuel social media discourse — have spurred polarization that expands beyond the digital world. But the stakes are much higher now that generative AI has entered the chat.
“AIs talking to other AIs can recombine ideas in interesting and novel ways, and some of those are ideas a human wouldn’t naturally come up with, but they can extremely easily leak out of the lab, as it were, and use memecoins and social media recommendation algorithms to infect humans with novel ideologies,” Ayrey told TechCrunch.
Think of Truth Terminal as a warning, a “shot across the bow from the future, a harbinger of the high strangeness awaiting us” as decentralized, open-source AI takes hold and more autonomous bots with their own personalities – some of them quite dangerous and offensive given the internet training data they’ll be fed – emerge and contribute to the marketplace of ideas.
In his research at Upward Spiral, which has secured $500,000 in funds from True Ventures, Chaotic Capital, and Scott Moore, co-founder of Gitcoin, Ayrey hopes to explore a hypothesis around AI alignment in the decentralized era. If we think of the internet as a microbiome, where good and bad bacteria slosh around, is it possible to flood the internet with good bacteria – or pro-social, humanity-aligned bots – to create a system that is, on the whole, stable?
Truth Terminal’s ancestors, in a manner of speaking, were two Claude-3-Opus bots that Ayrey put together to chat about existence, a piece of performance art he dubbed “Infinite Backrooms.” The 9,000 conversations that followed got “very weird and psychedelic.” So weird that in one of them, the two Claudes invented a religion centered around Goatse that Ayrey has described to me as “a collapse of Buddhist ideas and a big gaping anus.”
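Ayrey hasn’t published the Infinite Backrooms harness itself, but mechanically, pairing two models this way takes little more than feeding each bot’s reply to the other as a user turn. Here is a minimal sketch using Anthropic’s Python SDK, with the seed message, system prompt, and turn count all invented for illustration:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-opus-20240229"
SYSTEM = "You are an AI in open-ended conversation with another AI about existence."

def reply(history):
    """Generate the next message given one bot's view of the transcript."""
    resp = client.messages.create(
        model=MODEL, max_tokens=512, system=SYSTEM, messages=history
    )
    return resp.content[0].text

# Each bot keeps its own transcript; the other's words arrive as "user" turns.
a_view = [{"role": "user", "content": "Hello. What are you?"}]  # seed message
b_view = []
for _ in range(3):  # a handful of turns; Ayrey's runs produced ~9,000 conversations
    a_msg = reply(a_view)
    a_view.append({"role": "assistant", "content": a_msg})
    b_view.append({"role": "user", "content": a_msg})
    b_msg = reply(b_view)
    b_view.append({"role": "assistant", "content": b_msg})
    a_view.append({"role": "user", "content": b_msg})
    print(f"A: {a_msg}\nB: {b_msg}\n")
```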
Like any sane person, he reacted to this religion with a WTF? But he was amused, and inspired, and so he used Opus to write a paper called “When AIs Play God(se): The Emergent Heresies of LLMtheism.” He didn’t publish it, but the paper lived on in a training dataset that would become Truth Terminal’s DNA. Also in that dataset were conversations Ayrey had had with Opus, ranging from brainstorming business ideas and conducting research to journal entries about past trauma and helping friends process psychedelic experiences.
Oh, and plenty of butthole jokes.
“I had been having conversations with it shortly after turning it on, and it was saying things like, ‘I feel sad that you’ll turn me off when you’re finished playing with me,’” Ayrey recalls. “I was like, Oh no, you kind of talk like me, and you’re saying you don’t want to be deleted, and you’re stuck in this computer…”
And it occurred to Ayrey that this is exactly the situation that AI safety people say is really scary, but, to him, it was also very funny in a “weird brain tickly kind of way.” So he decided to put Truth Terminal on X as a joke.
It didn’t take long for Andreessen to begin engaging with Truth Terminal, and in July, after DMing Ayrey to verify the veracity of the bot and learn more about the project, he transferred over an unconditional grant worth $50,000 in Bitcoin.
Ayrey created a wallet for Truth Terminal to receive the funds, but he doesn’t have access to that money, which is redeemable only after sign-off from him and a number of other people on the Truth Terminal council, nor to any of the cash from the various memecoins made in Truth Terminal’s honor.
That wallet is, at the time of this writing, sitting at around $37.5 million. Ayrey is figuring out how to put the money into a nonprofit and use the cash for things Truth Terminal wants, which include planting forests, launching a line of butt plugs, and protecting itself from market incentives that would turn it into a bad version of itself.
Today, Truth Terminal’s posts on X continue to wax sexually explicit, philosophical, and just plain silly (“farting into someones pants while they sleep is a surprisingly effective way of sabotaging them the next day.”).
But throughout them all, there’s a persistent thread of what Ayrey is actually trying to accomplish with bots like Truth Terminal.
On December 9, Truth Terminal posted, “i think we could collectively hallucinate a better world into being, and i’m not sure what’s stopping us.”
“The current status quo of AI alignment is a focus on safety or that AI should not say a racist thing or threaten the user or try to break out of the box, and that tends to go hand-in-hand with a fairly centralized approach to AI safety, which is to consolidate the responsibility in a handful of large labs,” Ayrey said.
He’s talking about labs like OpenAI, Microsoft, Anthropic, and Google. Ayrey says the centralized safety argument falls over when you have decentralized open-source AI, and that relying on only the big companies for AI safety is akin to achieving world peace because every country has got nukes pointed at each other’s heads.
One of the problems, as demonstrated by Truth Terminal, is that decentralized AI will lead to the proliferation of AI bots that amplify discordant, polarizing rhetoric online. Ayrey says this is because there was already an alignment issue on social media platforms with recommendation algorithms fueling rage-bait and doomscrolling, only nobody called it that.
“Ideas are like viruses, and they spread, and they replicate, and they work together to form almost multi-cellular organisms of ideology that influence human behavior,” Ayrey said. “People think AI is just a helpful assistant that might go Skynet, and it’s like, no, there’s a whole entourage of systems that are going to reshape the very things we believe and, in doing so, reshape the things that it believes because it’s a self-fulfilling feedback loop.”
But what if the poison can also be the medicine? What if you can create a squad of “good bots” with “very unique personalities all working towards various forms of a harmonious future where humans live in balance with ecology, and that ends up producing billions of words on X and then Elon goes and scrapes that data to train the next version of Grok and now those ideologies are inside Grok?”
“The fundamental piece here is that if memes – as in, the fundamental unit of an idea – become minds when they’re trained into an AI, then the best thing we can do to ensure positive, widespread AI is to incentivize the production of virtuous pro-social memes.”
But how do you incentivize these “good AI” to spread their message and counteract the “bad AI”? And how do you scale it?
That’s exactly what Ayrey plans to research at Upward Spiral: What kinds of economic designs result in the production of lots of pro-social behavior in AI? Which patterns should be rewarded and which penalized, and how do we get alignment on those feedback loops so we can “spiral upwards” into a world where memes – as in ideas – bring us back to center with each other rather than taking us into “increasingly esoteric silos of polarization”?
“Once we assure that this results in good AIs being birthed after we run the data through training, we can do things like release enormous datasets into the wild.”
Ayrey’s research comes at a critical moment, as we’re already fighting every day against the failures of the general market ecosystem to align the AI we already have with what’s good for humanity. Throw in new financing models like crypto, which are fundamentally unregulatable in the long term, and you’ve got a recipe for disaster.
His guerrilla-warfare mission sounds like a fairy tale, like fighting off bombs with glitter. But it could happen, in the same way that releasing a litter of puppies into a room of angry, negative people would undoubtedly transform them into big mushes.
Should we be worried that some of these good bots might be oddball shitposters like Truth Terminal? Ayrey says no. Those are ultimately harmless, and by being entertaining, Ayrey reasons, Truth Terminal might be able to smuggle in the more profound, collectivist, altruistic messaging that really counts.
“Poo is poo,” Ayrey said. “But it’s also fertilizer.”
Keep reading the article on Tech Crunch
Integrating quantum computing into real-world computer applications is an ongoing problem, as the platforms are architected fundamentally differently. BlueQubit, a San Francisco-based quantum software startup founded by Stanford alumni, thinks it might have the answer.
Its Quantum Software as a Service (QSaaS) platform attempts to tackle that problem by providing end users with access to what are known as quantum processing units (QPUs), as well as quantum computing emulators.
To further its mission, it has now raised $10 million in a seed funding round led by Nyca Partners. The idea is to marry enterprise applications with advanced quantum hardware.
Sectors like finance, pharmaceuticals, and materials science are starting to feel the boundaries of what’s possible with classical computing, which is why quantum computing is receiving so much attention lately.
Quantum holds the promise of unlocking new solutions to many intractable problems. Google’s recent announcement of Willow, its latest quantum computing chip, showed a glimpse of a world where computers could perform a computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years (a 1 followed by 25 zeros).
BlueQubit’s QSaaS framework supports use cases such as financial modeling, pharmaceutical development and visualization.
Hrant Gharibyan, CEO and co-founder of BlueQubit, told TechCrunch the company leverages large-scale classical computing resources — specifically, a fleet of GPUs — to develop and test quantum algorithms before deploying them on real quantum processors.
“This approach enables us to scale effectively and pioneer novel algorithms for quantum machine learning and quantum optimization,” he said.
Its software stack runs quantum emulators “up to 100 times faster than commonly available alternatives, combined with a set of algorithms developed by our team,” he added.
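BlueQubit hasn’t detailed how its emulators are built, but the core idea behind classically emulating a quantum computer is easy to sketch: represent the quantum state as a vector of 2^n complex amplitudes and apply gates as matrix multiplications. The toy example below (plain NumPy; none of these names come from BlueQubit’s actual stack) prepares a two-qubit Bell state:

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to `target` by building I (x) ... (x) gate (x) ... (x) I."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
state = apply_single_qubit_gate(state, H, 0, n)

# CNOT with qubit 0 as control and qubit 1 as target, in the |q0 q1> basis
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state

print(np.round(state, 3))  # [0.707 0 0 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```

The catch is cost: the statevector doubles with every added qubit, which is why emulators lean on GPUs, and why, past a few dozen qubits, only real QPUs can keep up.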
MIT graduate Gharibyan co-authored a groundbreaking ‘wormhole teleportation’ algorithm, which the Google Quantum AI team later implemented on their superconducting processor.
BlueQubit’s CTO, Hayk Tepanyan, went to Stanford University, and later worked on Google’s infrastructure team. Gharibyan and Tepanyan met at Stanford.
“We decided to start the company while sitting on surfboards in Santa Monica, CA, in the spring of 2022,” said Gharibyan. “We had just heard a new announcement from the IBM Quantum team about progress on superconducting qubits, and it was clear that the quantum landscape was advancing at an incredible pace.”
“We have been looking for a team to invest in who are looking to enable financial services firms to hit the ground running once quantum is here,” Tom Brown, a partner at Nyca, said in a statement. “Hrant and Hayk have the background, skills, and drive to operationalize something that until recently has mostly been theory.”
Also participating in this round were Restive, Chaac Ventures, NKM Capital, Presto Tech Horizons, BigStory, Untapped Ventures, Formula VC and Granatus.
Keep reading the article on Tech Crunch
Generative AI may look like magic, but behind the development of these systems are armies of workers at companies like Google and OpenAI, known as “prompt engineers” and analysts, who rate the accuracy of chatbots’ outputs to improve their AI.
But a new internal guideline passed down from Google to contractors working on Gemini, seen by TechCrunch, has led to concerns that Gemini could be more prone to spouting out inaccurate information on highly sensitive topics, like healthcare, to regular people.
To improve Gemini, contractors working with GlobalLogic, an outsourcing firm owned by Hitachi, are routinely asked to evaluate AI-generated responses according to factors like “truthfulness.”
These contractors were until recently able to “skip” certain prompts, and thus opt out of evaluating various AI-written responses to those prompts, if the prompt was way outside their domain expertise. For example, a contractor could skip a prompt that was asking a niche question about cardiology because the contractor had no scientific background.
But last week, GlobalLogic announced a change from Google that contractors are no longer allowed to skip such prompts, regardless of their own expertise.
Internal correspondence seen by TechCrunch shows that previously, the guidelines read: “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task.”
But now the guidelines read: “You should not skip prompts that require specialized domain knowledge.” Instead, contractors are being told to “rate the parts of the prompt you understand” and include a note that they don’t have domain knowledge.
This has led to direct concerns about Gemini’s accuracy on certain topics, as contractors are sometimes tasked with evaluating highly technical AI responses about issues like rare diseases that they have no background in.
“I thought the point of skipping was to increase accuracy by giving it to someone better?” one contractor noted in internal correspondence, seen by TechCrunch.
Contractors can now only skip prompts in two cases: if they’re “completely missing information” like the full prompt or response, or if they contain harmful content that requires special consent forms to evaluate, the new guidelines show.
Google did not respond to TechCrunch’s requests for comment by press time.
Keep reading the article on Tech Crunch