Blue Diamond Web Services

Your Best Hosting Service Provider!

September 16, 2024

Generative AI startup Typeface acquires two companies, Treat and Narrato, to bolster its portfolio

Typeface, a generative AI startup focused on enterprise use cases, has acquired a pair of companies just over a year after raising $100 million at a $1 billion valuation.

Typeface revealed on Monday that it’s purchased Treat, a company using AI to create personalized photo products, and Narrato, an AI-powered content creation and management platform.

Treat and Narrato will “enrich [Typeface’s] multimodal capabilities,” the company said in a press release, while “propelling [its] vision of end-to-end content lifecycle transformation.”

“Building on our foundation of multimodal AI workflows, these acquisitions’ top-tier AI technology and talent further enrich our visual and textual capabilities,” Typeface wrote in the release. “By integrating these technologies, we’re supercharging the entire Typeface portfolio.”

Typeface, founded in 2022 by former Adobe CTO Abhay Parasnis, offers tools for text and image generation, a fine-tuning engine to personalize AI to a brand’s style and integrations with third-party apps, software and services. Typeface claims to place a greater emphasis on brand governance and privacy than its generative AI rivals; for example, Typeface trains dedicated AI models for each customer to ensure their assets and activity remain private.

So how do Treat and Narrato fit into this vision? Well, both were started by founders well-acquainted with the enterprise landscape. And — not for nothing — the startups offer products appealing to the sorts of corporate clients with whom Typeface does business.

NYC-based Treat, the brainchild of Matt Osman and ex-Drizly CTO Hugh Hunter, uses a company’s data on customers to generate product images that incorporate elements known to perform well with certain target demographics. For example, if a fruit vendor’s data suggested that younger men prefer seeing food ads that show a person eating the product, Treat may create an ad that depicts someone biting into fruit.

An image generated by Treat. Image Credits: Treat

An Australian venture, Narrato — which coincidentally also launched in 2022 — sells access to an “AI content assistant” designed to help orgs achieve their internal content creation and planning goals. As founder Sophia Solanki explained to TechCrunch in an interview last March, Narrato customers also get collaboration and workflow tools including templates for articles, video scripts, blogs, emails, social media content, art and more.

Image Credits: Narrato

Treat raised at least $8.5 million from investors including Greylock prior to the acquisition, while Narrato managed to raise more than $1 million from AirTree Ventures, OfBusiness and serial entrepreneur Shreesha Ramdas.

Typeface wouldn’t disclose the terms of either acquisition.

Treat and Narrato mark the third and fourth acquisitions for Typeface, which purchased AI photo and video editing suite TensorTour in January and chatbot app Cypher in May. It’s unclear how much of a dent those deals have made in Typeface’s $165 million war chest.

Keep reading the article on Tech Crunch


Runway announces an API for its video-generating models

Runway, one of several AI startups developing video-generating tech, today announced an API to allow devs and organizations to build the company’s generative AI models into third-party platforms, apps and services.

Currently in limited access (there’s a waitlist), the Runway API only offers a single model to choose from — Gen-3 Alpha Turbo, a faster but less capable version of Runway’s flagship, Gen-3 Alpha — and two plans, Build (which is aimed at individuals and teams) and Enterprise. Base pricing is one cent per credit (1 second of video costs 5 credits), and Runway says that “trusted strategic partners” including marketing group Omnicom are already using the API.
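For a rough sense of what that pricing means in practice, here’s a minimal sketch in Python based only on the figures quoted above (one cent per credit, five credits per second of video); the constants and function name are illustrative and not part of Runway’s actual API:

```python
# Cost estimate based on the pricing quoted above:
# $0.01 per credit, 5 credits per 1 second of generated video.
# These figures come from the article, not from Runway's API docs.
PRICE_PER_CREDIT_USD = 0.01
CREDITS_PER_SECOND = 5

def estimate_video_cost(seconds: float) -> float:
    """Return the approximate USD cost of generating `seconds` of video."""
    return seconds * CREDITS_PER_SECOND * PRICE_PER_CREDIT_USD

# A 10-second clip works out to 50 credits, or about $0.50.
print(f"${estimate_video_cost(10):.2f}")  # -> $0.50
```

Under that pricing, a minute of Gen-3 Alpha Turbo output runs about $3, which is why the per-second credit math matters for anyone wiring the API into a consumer-facing product.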

The Runway API also comes with unusual disclosure requirements. Any interfaces using the API must “prominently display” a “Powered by Runway” banner linking to Runway’s website, the company writes in a blog post. “This helps users understand the technology behind your application while adhering to our usage terms,” the post continues.

Runway, which is backed by investors including Salesforce, Google and Nvidia and was last valued at $1.5 billion, faces stiff competition in the video generation space, including from OpenAI, Google and Adobe. OpenAI is expected to release its video generation model, Sora, in some form early this fall, while startups like Luma Labs continue to refine their technologies.

Runway Gen-3. Image Credits: Runway

With the preliminary launch of the Runway API, Runway becomes one of the first AI vendors to offer a video generation model through an API. But while the API might help the company along the road to profitability (or at least recouping the high costs of training and running models), it won’t resolve the lingering legal questions around those models and generative AI technology more broadly.

Runway’s video-generating models, like all video-generating models, were trained on a vast number of example videos in order to “learn” their patterns and generate new footage. Where did the training data come from? Runway refuses to say, like many vendors these days — partly out of fear of losing competitive advantage.

But training details are also a potential source of IP-related lawsuits if Runway trained on copyrighted data without permission. There’s evidence that it did, in fact — a report from 404 Media in July exposed an internal spreadsheet of training data that included links to YouTube channels belonging to Netflix, Rockstar Games, Disney and creators like Linus Tech Tips and MKBHD.

It’s unclear whether Runway ended up sourcing any of the videos in the spreadsheet to train its models. In an interview with TechCrunch in June, Runway co-founder Anastasis Germanidis would only say the company uses “curated, internal datasets” for model training. But if it did, it wouldn’t be the only AI vendor playing fast and loose with copyright rules.

Earlier this year, OpenAI CTO Mira Murati didn’t outright deny that Sora was trained on YouTube content. And Nvidia reportedly used YouTube videos to train an internal video-generating model called Cosmos.

Generative AI vendors believe that the doctrine known as fair use provides them a legal shield. Others aren’t taking chances; to train its video-generating models, Adobe is said to be offering artists payments in exchange for clips. If we’re lucky, cases making their way through the courts will bring clarity soon enough.

However it shakes out, one thing’s becoming clear: Generative AI video tools threaten to upend the film and TV industry as we know it. A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, found that 75% of film production companies that have adopted AI have reduced, consolidated or eliminated jobs after incorporating the tech. The study also estimates that by 2026, more than 100,000 U.S. entertainment jobs will be disrupted by generative AI.

Keep reading the article on Tech Crunch


AI coding assistant Supermaven raises cash from OpenAI and Perplexity co-founders

Jacob Jackson was all-in on AI early in his career.

Jackson co-founded Tabnine, the AI coding assistant that went on to raise close to $60 million in venture backing, while still a computer science student at the University of Waterloo. After selling Tabnine to Codota in 2019 (during his final exams), Jackson joined OpenAI as an intern, where he worked until 2022.

It was at that juncture that Jackson felt the urge to start a company again, one focused on supporting common developer workflows.

“In the years since I built Tabnine, tools like ChatGPT and GitHub Copilot have changed the way developers work,” Jackson told TechCrunch. “It’s a really exciting time to be working on developer tools because the underlying technology has improved so much since I started Tabnine — which has led to many more developers becoming interested in using AI tools to accelerate their workflow.”

So Jackson started Supermaven, an AI coding platform along the lines of Tabnine but with a few quality of life and technical upgrades.

Supermaven’s in-house generative AI model, Babble, can understand a lot of code at once, Jackson says, thanks to a 1 million-token context window. (In data science, tokens are subdivided bits of raw data — like the syllables “fan,” “tas” and “tic” in the word “fantastic.”) 

A model’s context, or context window, refers to input data (e.g. code) that the model considers before generating output (e.g. additional code). Long context can prevent models from “forgetting” the content of recent docs and data, and from veering off topic and extrapolating wrongly.
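To put a 1 million-token window in perspective, here’s a back-of-the-envelope sketch in Python. It assumes roughly four characters per token, a common rule of thumb that is only an approximation here (neither Supermaven nor the article specifies Babble’s tokenizer):

```python
# Back-of-the-envelope sizing for a 1,000,000-token context window.
# ASSUMPTION: ~4 characters per token on average; real tokenizers vary by model.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 1_000_000

def lines_that_fit(avg_chars_per_line: int = 40) -> int:
    """Estimate how many lines of source code fit in the context window."""
    total_chars = CONTEXT_WINDOW_TOKENS * CHARS_PER_TOKEN
    return total_chars // avg_chars_per_line

# Roughly 100,000 lines of code under these assumptions -- enough to hold
# a mid-sized repository in a single prompt.
print(lines_that_fit())  # -> 100000
```

The exact numbers shift with the tokenizer and the codebase, but the point stands: at this scale the model can see most of a project at once rather than a handful of open files.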

“Our large context window helps reduce the frequency of hallucinations because it lets the model draw answers from the context in situations where it would otherwise have to guess,” Jackson said.

One million tokens is a big context window, indeed, but it’s not the biggest: AI coding startup Magic claims a 100 million-token window. Meanwhile, Google’s recently introduced Code Assist tool matches Supermaven’s context at 1 million tokens.

So what are Supermaven’s advantages over rivals? Well, Jackson claims that Babble is lower-latency thanks to a “new neural architecture.” He wouldn’t elaborate beyond saying that the architecture was developed “from scratch.”

“Supermaven spends 10 to 20 seconds processing a developer’s code repository to become familiar with its APIs and the unique conventions of its codebase,” Jackson said. “With lower latency because of our in-house model serving infrastructure, our tool remains responsive while working with the long prompts that come with large codebases.”

The market for AI coding tools is a large and growing one, with Polaris Research projecting that it’ll be worth $27.17 billion by 2032. The vast majority of respondents in GitHub’s latest dev poll say that they’ve adopted AI tools in some form, and over 1.8 million people — and ~50,000 businesses — are paying for GitHub Copilot.

But Supermaven — along with startup competitors like Cognition, Anysphere, Poolside, Codeium, and Augment — has ethical and legal challenges to overcome.

Businesses are often wary of exposing proprietary code to a third party; for instance, Apple reportedly banned staff from using Copilot last year, citing concerns about confidential data leakage. Some code-generating tools trained using restrictively licensed or copyrighted code have been shown to regurgitate that code when prompted in a certain way, posing a liability risk (i.e., developers that incorporate the code could be sued). And, because AI makes mistakes, assistive coding tools can result in more mistaken and insecure code being pushed to codebases.

Jackson said that Supermaven doesn’t use customer data to train its models. He did admit, however, that the company retains data for a week to “make the system quick and responsive.” On the subject of copyright, Jackson didn’t explicitly deny that Babble was trained on IP-protected code, saying only that it was “trained almost exclusively on publicly available code rather than a scrape of the public internet” to “reduce exposure to toxic content during training.”

Customers don’t appear to be dissuaded. Over 35,000 developers are using Supermaven, Jackson says, and a sizeable chunk are paying for the premium Pro ($10 per month) and Team ($10 per user per month) plans. Supermaven’s annual recurring revenue reached $1 million this year on the back of a user base that’s grown 3x since the platform’s February launch.

That momentum got the attention of VCs.

Supermaven this week announced its first outside funding: a $12 million round led by Bessemer Venture Partners, with participation from high-profile angel investors including OpenAI co-founder John Schulman and Perplexity co-founder Denis Yarats. Jackson says the plan is to spend the money on hiring developers (Supermaven has a five-person team at present) and developing Supermaven’s text editor, which is currently in beta.

“We plan to grow significantly through the end of the year,” he added. “Despite headwinds for tech overall, the market for coding copilots has been growing quickly. Our growth since our launch in February — as well as our most recent funding round — position us well as we head into next year.”

Keep reading the article on Tech Crunch

