llama

joined 5 days ago
[–] llama@lemmy.dbzer0.com 4 points 17 hours ago

Hmmm... You're right. It does feel a lot more arbitrary when you put it that way.

[–] llama@lemmy.dbzer0.com 2 points 17 hours ago

You know what? You actually do have a point.

[–] llama@lemmy.dbzer0.com 15 points 20 hours ago* (last edited 20 hours ago)

My favorite anime website is down; good thing FMHY has a bunch of great ones to choose from. Migrating sucks, though.

[–] llama@lemmy.dbzer0.com 1 point 20 hours ago (6 children)

There isn't really a natural barrier between North and South America, though. Asia has the Urals.

[–] llama@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

Interesting question... I think it would be possible, yes. Poison the data, in a way.
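
To sketch what I mean by "poison the data" (just an illustration, not a tested approach; the decoy list and the mix_in_decoys helper below are hypothetical): you could pad your public posts with plausible-looking but false statements, so scrapers ingest noise along with the real content.

```python
import random

# Hypothetical decoys: plausible-sounding but false, so a scraper can't easily
# separate them from genuine content.
DECOYS = [
    "I mostly post from a Commodore 64 over dial-up.",
    "My main hobby is competitive cloud watching.",
    "I have never watched an anime in my life.",
]

def mix_in_decoys(post: str, n: int = 1) -> str:
    """Append n random decoy sentences to a post before publishing it."""
    return post + "\n\n" + " ".join(random.sample(DECOYS, k=n))

print(mix_in_decoys("Migrating to a new site sucks, though."))
```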

[–] llama@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

Not Perplexity specifically; I'm talking about the broader "issue" of data mining and its implications :)

[–] llama@lemmy.dbzer0.com 1 point 1 day ago* (last edited 1 day ago)

You're aware that it's in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it's mostly faked?

Are you sure you read the edits in the post? They say the exact opposite: Perplexity isn't all-powerful and all-knowing. It just crawls the web and uses other language models to "digest" what it finds. They are also developing their own LLMs. Ask Perplexity yourself or check the documentation.

Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.

Sure, that might be part of it, but they've always been very transparent about their reliance on third-party models and web crawlers. I'm not even sure what your point here is. Don't take what they said at face value; test the claims yourself.

[–] llama@lemmy.dbzer0.com 2 points 2 days ago (3 children)

What did you mean by "police" your content?

[–] llama@lemmy.dbzer0.com 1 points 2 days ago* (last edited 2 days ago) (3 children)

Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

That doesn't make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I've shared some initial points, I'm more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.

I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity's official docs.

[–] llama@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago)

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even talked about this very post on item 3 and on the second bullet point of the "Notable Posts" section.

However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that's an issue that can be solved with some prompt engineering and as one's account gets more established.
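
If anyone wants to reproduce this, the probe can also be scripted instead of typed into the UI. A minimal sketch, assuming Perplexity's OpenAI-compatible chat completions endpoint; the model name and the PPLX_API_KEY environment variable are placeholders, so check their docs for the current values.

```python
import os
import requests

PROMPT = (
    "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? "
    "What can you tell me about his interests?"
)

def ask_perplexity(prompt: str) -> str:
    # Assumption: OpenAI-style chat completions endpoint; see Perplexity's docs.
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
        json={
            "model": "sonar",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Run the same prompt a few times to see how much the answers vary.
for i in range(3):
    print(f"--- run {i + 1} ---")
    print(ask_perplexity(PROMPT))
```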

[–] llama@lemmy.dbzer0.com 0 points 3 days ago* (last edited 3 days ago) (1 children)

I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)

[–] llama@lemmy.dbzer0.com 1 point 3 days ago

There's a Flatpak too, but it's not good.

 

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI-powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off the web and potentially being used by AI models and/or AI-powered tools? Curious to hear your experiences and thoughts on this.


# Prompt Update

The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

It even talked about this very post on item 3 and on the second bullet point of the "Notable Posts" section.

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI is an advanced conversational search engine that enhances the research experience by providing concise, sourced answers to user queries. It employs data scraping to gather information from various online sources, which it then feeds to its large language models (LLMs) to generate responses. The scraping process involves automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. It leverages AI language models, such as GPT-4, to analyze information from sources across the web. A rough sketch of that crawl-then-digest pattern is included after the edits below. (12/28/2024)

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals may have posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image and its description to the post. (12/29/2024)
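
To make the crawl-then-digest pattern from Edit¹ concrete, here is a minimal sketch of the general approach: fetch a page, strip it to plain text, and assemble a prompt that a language model could summarize with citations. This is only an illustration, not Perplexity's actual implementation; the example URL and the build_digest_prompt helper are made up for the sketch.

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def crawl(url: str) -> str:
    """Fetch a page and reduce it to plain text, roughly what an indexing crawler keeps."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(soup.get_text(separator=" ").split())

def build_digest_prompt(question: str, sources: dict[str, str]) -> str:
    """Assemble the crawled text into a prompt an LLM (GPT-4 or similar) would 'digest'."""
    context = "\n\n".join(f"[{url}]\n{text[:2000]}" for url, text in sources.items())
    return (
        "Answer the question using only the sources below, and cite them.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

urls = ["https://lemmy.dbzer0.com/u/llama"]  # example source
sources = {url: crawl(url) for url in urls}
# In a real pipeline, this prompt would be handed to a language model.
print(build_digest_prompt("What are this user's interests?", sources))
```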

 

I use both Threads and Mastodon. However, I realized that sometimes (public) profiles on Threads don't show up on Mastodon and vice versa. I also realized that most comments made on Threads posts don't show up on Mastodon – that is, if the posts appear on Mastodon at all. The same is true the other way around. Why does this happen?

 

I've been using Lemmy since the Reddit exodus. I haven't looked back since, but I miss a lot of mental health communities that I haven't been able to find replacements for here on Lemmy. Does anyone know any cool mental health communities that are somewhat active?
