Blue Diamond Web Services

Your Best Hosting Service Provider!

October 17, 2024

Elon Musk’s X is changing its privacy policy to allow third parties to train AI on your posts

On Wednesday, social network X (formerly Twitter) updated its Privacy Policy to indicate that it would allow third-party “collaborators” to train their AI models on X data, unless users opt out. Although X owner Elon Musk had already trained xAI’s Grok chatbot on X user data, prompting an investigation by the EU’s lead privacy regulator, the company hadn’t yet amended its policy to indicate that its data may also be used by third parties.

The addition to the policy implies that X, like Reddit and various media organizations, is looking into licensing data to AI companies as a potential new revenue stream.

In Section 3 of the updated Privacy Policy, titled “Sharing Information,” X added a paragraph detailing how X user data can be used and how users can opt out.

It reads:

“Third-party collaborators. Depending on your settings, or if you decide to share your data, we may share or disclose your information with third parties. If you do not opt out, in some instances the recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise.”

The policy points to the settings page on X, but does not specifically indicate where users would go within the settings to toggle off data-sharing. Currently, the “Privacy and safety” section in settings lets users turn on or off data-sharing with xAI’s Grok and with other “business partners,” but the latter is described as those companies that X may work with to “run and improve its products,” not other AI providers.

That may be because the updated privacy policy won’t take effect until Nov. 15, at which point the opt-out option could be added. (We hope.)

In addition, the company removed a paragraph that said it keeps user “profile information and content for the duration of your account,” and that it keeps other “personally identifiable data we collect when you use our products and services for a maximum of 18 months.”

Instead, the new section explains that X will keep “different types of information for different periods of time, depending on how long we need to retain it in order to provide you with our products and services, to comply with our legal requirements and for safety and security reasons.” As an example, it notes that usage information like the “content you post” and your interactions with others’ content will be kept for “the duration of your account or until such content is removed.”

The policy also added a note reminding users that public content can exist elsewhere even after it’s removed from X. This could potentially cover the data’s ingestion by AI providers, as X adds, “search engines and other third parties may retain copies of your posts longer, based upon their own privacy policies, even after they are deleted or expire on X.”

Separately, X has added a new “Liquidated Damages” section to its updated Terms of Service stating that any organization scraping its content will be liable for damages. Specifically, “for requesting, viewing, or accessing more than 1,000,000 posts (including reply posts, video posts, image posts, and any other posts) in any 24-hour period,” X says the organization will be charged $15,000 per 1,000,000 posts.
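As a back-of-the-envelope illustration of how that fee scales (not X’s actual billing logic; the Terms don’t specify how partial millions are billed, so rounding up is an assumption here):

```python
import math

RATE_USD = 15_000      # per 1,000,000 posts, per X's updated Terms of Service
THRESHOLD = 1_000_000  # posts accessed in any 24-hour period before fees apply

def scraping_fee(posts_accessed: int) -> int:
    """Rough estimate of X's liquidated damages for scraping.

    Assumes no charge at or below 1,000,000 posts, and that each
    started million above zero is billed in full once the threshold
    is crossed -- an assumption, since the Terms are silent on this.
    """
    if posts_accessed <= THRESHOLD:
        return 0
    return math.ceil(posts_accessed / THRESHOLD) * RATE_USD

# Scraping 3,000,000 posts in a day would run $45,000 under this reading.
```

Under this reading, even a modest large-scale scraping operation racks up costs quickly, which is presumably the point: the clause is priced to deter AI companies from taking the data without a license.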

The move to monetize X data follows advertiser withdrawals and boycotts, as well as a subscription business that has yet to take off, both of which have left the company in need of new ways to pay its bills.

X did not respond to a request for comment.

Keep reading the article on TechCrunch


Apple Music’s latest feature lets artists create dedicated playlists for concerts and tours

Apple Music introduced a new feature for artists on Thursday to enable them to create playlists tied to their recent concert set lists. 

Dubbed “Set List,” the feature acts as a new promotional tool for musicians to engage with their fans. It allows artists to turn their set list from a single show, residency, or entire tour into a playlist, which can then be shared with fans on Apple Music and displayed on Shazam’s Artist and Concert pages, as well as posted on social media. 

Using the feature is simple. Artists first need to upload an artist image to their profile for the playlist’s cover art. Then, they can add tracks via search or by pasting Apple Music links, and even rearrange them to match the actual show sequence. There’s also an option to schedule the publish date, letting artists pick exactly when fans can listen to the playlist and offering a way to tease a sneak peek of their live performances.

Set List is powered by Bandsintown, the music discovery platform where users are notified about tours and bands playing in their area. Artists are required to connect their Apple Music page with Bandsintown to help increase the discoverability of their shows.

The playlist creation tool is an addition to Apple Music’s Set Lists space, which launched last year to help fans discover upcoming concerts.

Keep reading the article on TechCrunch


Can AI make us feel less alone? The founder of Manifest thinks so

Amy Wu, founder of the AI-based mental health app Manifest, has a bold prediction for the next wave of tech.

“Separately from the AI trend, I think so many people are seeing this loneliness epidemic that’s happening with Gen Z,” she said. “There is no doubt in my mind that there will be unicorns that emerge from those categories to address the loneliness epidemic.”

Manifest isn’t quite a unicorn yet – it’s only in its seed stage, having just raised $3.4 million from a16z Speedrun and a number of other investors. But Wu sees her company as part of a new crop of products trying to mitigate a rise in loneliness.

Wu is in her late twenties, right on the cusp of the murky boundary between Millennials and Gen Z, but she understands the struggles of the younger generation. A report from Cigna found that three out of five adults report that they sometimes or always feel lonely; that number is even higher among respondents aged 18-22, at 73%. Manifest is the app she wishes she had when she was an undergraduate at Stanford, navigating a competitive, intimidating environment while living on her own for the first time.

“I really felt like the real world punched me in the face,” Wu told TechCrunch. “I feel like school teaches you all these things around, here’s how to get a job at Facebook, or Google, or Microsoft, or Goldman Sachs, but it doesn’t teach you how to go build your own emotional toolkit.”

When you open the Manifest app, you’ll see a pastel gradient orb in the center of the screen. You can hold the button to talk, or tap it to type, in response to a number of prompts: “What’s on your mind?,” “What are you worried about?,” or “What would be useful for us to talk about?”

Then, the app’s AI will mirror your language and turn it into an affirmation, which you can turn into a personalized audio meditation.

Image Credits: Manifest on the App Store

For example, if you tell the app that you’re finding it hard to be proud of yourself after running a 5K because you got last place in your age group (totally not pulling from personal experience…!), it will spit out a couple of affirmations, like, “I strive to appreciate my progress, no matter how small,” or, “I trust that my commitment to this process will lead to growth in both my physical and mental health.”

Maybe those words of AI-generated wisdom help. Maybe they don’t. But Manifest isn’t meant to be an end-all-be-all mental health solution or a replacement for actual mental health treatment. Instead, Manifest is designed to be something that you can use for a few minutes every day to feel just a little bit more grounded.

“We are a wellness app that’s really kind of designed to meet Gen Z where they’re already at,” Wu said. “The real core thesis behind Manifest was like, can we make these bite-sized interactions with wellness super easy and super delightful, where it doesn’t feel like a chore to go do Manifest?”

In a time when young people are overwhelmed by the constant noise of social media, it may seem counterintuitive to use technology – let alone something that can feel as impersonal and amorphous as AI – to address loneliness. But Wu thinks that if Gen Z is already sucked into their phones, then wellness needs to happen there, too.

“Gen Z is hanging out way less in person,” she said. “So it’s like, what do you give a generation that we’ve already done this to? Like, the idea that you tell that person to go outside and hang with their friend is an astronomical leap for them, so how do you go and give them something where they’re already at?”

Image Credits: Manifest

Manifest launched in stealth this summer, and so far, users have generated 18.7 million “manifestations” in the app.

As with any app of its nature, Manifest has to navigate the ethical challenges of offering a consumer mental health product with no medical backing. Wu said that safeguards are embedded in Manifest’s AI: for example, the app redirects users to a suicide hotline if they mention self-harm, and it declines to engage with certain sensitive topics like this altogether.

From a risk standpoint, this could be a smart move for Manifest – it’s dangerous to leverage an experimental AI as a tool to help with something as serious as preventing self-harm. But other startups battling loneliness, like chatbot company Nomi AI, take a different approach. When Nomi AI users open up about thoughts of self-harm, the AI companions won’t halt the conversation – instead, they will try to de-escalate the situation by talking the user through their feelings.

Alex Cardinell, the founder of Nomi AI, argues that just stopping a conversation and providing a suicide hotline number could be alienating to someone who’s struggling for connection.

“I want to make those users feel heard in whatever their dark moment is, because that’s how you get someone to open up, how you get someone to reconsider their way of thinking,” Cardinell told TechCrunch in a recent conversation. “I really want to look at what’s aligned with the user, rather than what’s aligned with the strictest attorney’s loss mitigation strategy.”

Wu doesn’t think that Manifest, or any consumer app, is where people should go if they are in a situation where they need legitimate medical help. But young people are turning to these tools when seeking real medical care isn’t accessible. So, if Wu is right about the impending unicorn startups that will combat the loneliness epidemic, those companies – and Manifest – will need to tread thoughtfully.

Keep reading the article on TechCrunch


Instagram rolls out new safety features to protect teens from sextortion

Instagram is introducing a series of new safety features to protect users from sextortion scammers, the company announced on Thursday. Most notably, the company is no longer going to allow people to screenshot or screen record ephemeral images or videos sent in private messages.

Up until now, you have been able to screenshot ephemeral content in Instagram DMs (direct messages), but the other person would be notified that you saved it. With this change, if you send someone a photo or video via DMs using the “view once” or “allow replay” features, the other person won’t be able to save the content. Plus, Instagram won’t let people open “view once” or “allow replay” images or videos on the desktop to ensure that they can’t circumvent the safety measures. 

By preventing users from taking screenshots of ephemeral content, Instagram is taking things a step further than Snapchat when it comes to making sure ephemeral content remains that way. On Snapchat, if you send someone an image, they are allowed to screenshot it. While Snapchat does notify the user that their image was saved, the app doesn’t do anything to prevent users from taking screenshots of ephemeral content in the first place.

Instagram, on the other hand, is now ensuring that content meant to be viewed once can, in fact, only be viewed once.

Image Credits: Instagram

The Meta-owned social network says the new features announced today complement the recent launch of Teen Accounts, which automatically enroll young users into an app experience with built-in protections that limit who can contact them.

With Teen Accounts, young users can’t receive messages from anyone they don’t follow or aren’t connected to, but they can still receive follow requests from anyone. Now, Instagram is making it harder for suspicious accounts, like those that were recently created, to request to follow teens.

Depending on how scammy an account appears, Instagram will either block the follow request entirely or send it to a teen’s spam folder. 

Image Credits: Instagram

The app is also rolling out safety notices in DMs to let teens know when they’re talking to someone who might live in a different country. The company says it’s doing so because sextortion scammers often lie about where they live in order to get teens to trust them. 

Since sextortion scammers often use a teen’s following and follower lists to try to blackmail them, Instagram is going to prevent accounts exhibiting scammy behavior from seeing people’s following and follower lists. These accounts also won’t be able to see who has liked someone’s post or which photos they have been tagged in.

In addition, Instagram is fully rolling out its nudity protection feature globally after first testing it in April. The safety measure automatically blurs images that contain nudity in DMs and will be enabled by default for teen users. As for the people sending such images, Instagram will warn them of the risks involved with sharing private photos.

Image Credits: Instagram

To provide more support within its app, Instagram is partnering with Crisis Text Line in the U.S. Now, when a user reports an issue related to child safety or sextortion, they will see an option to talk to a crisis counselor. 

The changes come ten months after Instagram, alongside other major social networks, was grilled by lawmakers for not doing enough to protect young users on its platform.

As part of its efforts to fight sextortion, Instagram is going to start showing teens in the U.S., U.K., Australia, and Canada an educational video about sextortion scammers. The social network is also partnering with influencers like Bella Poarch and Brent Rivera to create content about spotting sextortion and what to do if it happens.

Keep reading the article on TechCrunch
