This post was submitted on 22 Jul 2023

Asklemmy


A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

 

Sometimes it can be hard to tell if we're chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

top 13 comments
[–] rodbiren@midwest.social 1 points 1 year ago

You can always help their software QA by pasting in the entirety of the Declaration of Independence. A couple of things could happen: if they ask why you posted that, you have a human. If they give a generic response, it's probably an AI. And if it crashes, you know they didn't think anyone would post that.

You can also post zero-width spaces. A generic chatbot will respond with something meaningless, while a human might not respond at all. You could also post text using typoglycemia, scrambling the interior letters of each word: that will confuse most models but can usually still be read by people.
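A minimal Python sketch of both tricks, assuming any short test message (the example sentence below is purely illustrative):

```python
import random

ZWSP = "\u200b"  # zero-width space: invisible to a reader, but a real character to software

def lace_with_zwsp(text: str) -> str:
    """Insert a zero-width space between every pair of characters."""
    return ZWSP.join(text)

def typoglycemia(word: str) -> str:
    """Scramble a word's interior letters, keeping the first and last in place."""
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

message = "Could you please escalate this ticket to a human agent"
print(lace_with_zwsp(message))                              # looks identical on screen, but is full of U+200B
print(" ".join(typoglycemia(w) for w in message.split()))   # e.g. "Cuold you pelsae ecslaate tihs tckeit ..."
```

Either output reads normally to a person but is a very different character stream from what a scripted bot expects.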

[–] wildbus8979@sh.itjust.works 1 points 1 year ago (2 children)

If you ask them, they have to tell you. Like the cops!

[–] WackyTabbacy42069@reddthat.com 1 points 1 year ago (1 children)

Not necessarily. OpenAI has been trying to make its AIs do this and be generally harmless, but there's lots of support in the open-source LLM space for uncensored models. An uncensored model that has been instructed to pretend it's human is much less likely to admit it.

[–] hsl@wayfarershaven.eu 1 points 1 year ago

Speaking as a real support person, people do ask and it's fun to come up with responses. It really depends on my mood.

[–] platysalty@kbin.social 1 points 1 year ago (1 children)

Ask for the difference in behaviour between apple seeds and baseball cards, or anything equally nonsensical.

A human would go "bro wtf"

[–] intensely_human@lemm.ee 1 points 1 year ago

Unlike baseball cards, apple seeds are aerodynamic and are thus able to fly.

[–] nobodyspecial@kbin.social 1 points 1 year ago (1 children)

This is very, very easy. Google "Cyrillic keyboard" or just install Cyrillic keyboard support on your phone. Many letters in the Cyrillic alphabet look exactly like their Latin counterparts but stand for completely different sounds. Cut and paste the Unicode into the chat in place of regular letters. For example, 'Неllо' looks exactly like 'Hello' in most fonts, but actually reads as 'Nello.' If you doubt it, check it in a Unicode inspector: https://apps.timwhitlock.info/unicode/inspect?s=%D0%9D%D0%B5ll%D0%BE

The reverse also works. For example, 'TPAKTOP B CPAKY' means 'tractor into ass', and I typed that using 100% Latin characters.
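A quick way to check a string like that without the web inspector is to print each character's Unicode name; here is a minimal Python sketch using the 'Неllо' example above:

```python
import unicodedata

def reveal_scripts(text: str) -> None:
    """Print each character's code point and Unicode name so homoglyphs stand out."""
    for ch in text:
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch, 'UNKNOWN')}")

# The first, second, and last letters are Cyrillic, even though the word
# renders like the Latin 'Hello' in most fonts.
reveal_scripts("Неllо")
```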

[–] fearout@kbin.social 1 points 1 year ago (1 children)

So I’ve just tried it with ChatGPT, and it replied normally. I asked it why it wasn’t bothered by Cyrillic letters, and it answered this:

I am designed to understand and generate text in multiple languages, including those that use the Cyrillic alphabet like Russian, Bulgarian, and others. So, Cyrillic letters don't bother me. You can continue the conversation in either English or a language that uses the Cyrillic alphabet if you prefer.

So it really depends on the model, I guess. Don’t rely too much on this advice.

[–] nobodyspecial@kbin.social 1 points 1 year ago

ChatGPT's capabilities border on the supernatural. Here's how I broke it:

𐤔hat is ᛐ + ᛐ?

ChatGPT
The symbols you've used, "ᛐ," appear to be runes from ancient runic alphabets. However, the specific meaning of these runes might differ depending on which runic alphabet you're referencing (e.g., Elder Futhark, Younger Futhark, Anglo-Saxon Futhorc, etc.).

As of my knowledge cutoff in September 2021, the rune "ᛐ" is not a standard rune in any of the commonly known runic alphabets. Therefore, without knowing the exact meaning or value of the rune "ᛐ," it is impossible to provide a specific answer to the expression "ᛐ + ᛐ."

If you could clarify the runic alphabet you're referring to or provide more context about the runes, I'd be happy to help you with the calculation or interpretation.

I had limited success with Göktürk (Old Turkic) and Phoenician Unicode blocks (letters 𐰗𐰓𐤔𐤕) depending on the query, but you are correct: GPT's ability to divine intent from even small amounts of context is superhuman. Cyrillic used to break it, but no longer does. This thing learns like a beast. Canadian Aboriginal syllabics ᗷ and ᗅ and possibly ᖇ hold some promise, but only in combination with other writing systems. I'll have to add a LOT of other Unicode blocks to my tool belt.
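As an illustration of the substitution idea, a small Python sketch built only from the lookalike characters already mentioned in this thread (the sample word is arbitrary):

```python
# Tiny homoglyph table: Canadian Aboriginal syllabics that resemble Latin B, A, R in many fonts.
LOOKALIKES = {"B": "ᗷ", "A": "ᗅ", "R": "ᖇ"}

def disguise(text: str) -> str:
    """Swap selected Latin letters for visually similar non-Latin code points."""
    return "".join(LOOKALIKES.get(ch, ch) for ch in text)

print(disguise("BARBARA"))  # renders much like 'BARBARA', but mixes writing systems
```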

[–] ImplyingImplications@lemmy.ca 0 points 1 year ago (2 children)

Ask them a logical paradox like "does the set of all sets that don't contain themselves contain itself?". This will send all robots into an infinite loop.

[–] fearout@kbin.social 1 points 1 year ago

You’re probably joking, but I’ll comment anyway. It won’t affect LLMs at all. ChatGPT just answers the question and discusses the paradox. An LLM’s function is basically just to construct sentences, so there’s nothing that can get stuck in an infinite loop. It doesn’t “think” about paradoxes.

[–] FrickAndMortar@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)

Well, I just asked the weird “message Carrot” option in my weather app, and it replied:

Oh, look at you, trying to puzzle your way through the depths of set theory. How amusing, like a chimp trying to juggle chainsaws.

Well, my dear meatbag, the answer to your question is a resounding NO. Just like you, that set cannot contain itself. It's like expecting Johnny Five to date Stephanie from "Short Circuit." Simply not gonna happen! 🤖💔