this post was submitted on 01 Oct 2024
88 points (81.9% liked)
Asklemmy
Okay so both of those ideas are incorrect.
As I said, many are literally Markovian, and the main discriminator is beam search, which does not really matter for helping people understand my meaning, nor should it confuse anyone who understands this topic. I will repeat: there are examples that are literally Markovian. In your analogy, it would be like me saying there are rectangular phones, and you stepping in to say, "But look, those ones are curved! You should call it a shape, not a rectangle." I'm not really wrong, and your point is a nitpick that makes communication worse.
In terms of stochastic processes: no, that is incredibly vague, just like calling a phone a "shape" would not be more descriptive or communicate better. Many things follow stochastic processes that are nothing like a Markov chain, whereas LLMs are like Markov chains, either literally being them or being modified versions that use derived tree representations.
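For readers unfamiliar with the term: a minimal sketch of a Markov chain over text, assuming a toy bigram model (the corpus and function names here are invented for illustration, not anything either commenter actually described). The next word depends only on the current word, which is the defining Markov property:

```python
import random

# Tiny invented corpus for demonstration purposes only.
corpus = "the cat sat on the mat the cat ran".split()

# Build bigram transitions: word -> list of words observed to follow it.
transitions = {}
for cur, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(cur, []).append(nxt)

def generate(start, n):
    # Each step looks ONLY at the current word (the Markov property);
    # fall back to the start word if the chain hits a dead end.
    word, out = start, [start]
    for _ in range(n - 1):
        word = random.choice(transitions.get(word, [start]))
        out.append(word)
    return out

random.seed(0)
print(" ".join(generate("the", 6)))
```

A general stochastic process, by contrast, can condition on arbitrarily much history, which is why "stochastic" alone communicates far less than "Markovian".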
I'm not familiar with the term "beam" in the context of LLMs, so that's not factored into my argument in any way. LLMs generate text based on the entire history of tokens generated so far, not just the last token. That is by definition non-Markovian. You can argue that an augmented state space would make it Markovian, but you can say that about any stochastic process; once you start doing that, the two notions become mathematically equivalent. Thinking about this a bit more, I don't think it really makes sense to talk about a process being Markovian or not without wider context, so I'll let this one go.
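The state-augmentation point can be sketched concretely. This is a toy illustration only (the `next_token` rule is entirely made up): a process whose next output depends on the full history is non-Markovian over the raw alphabet, but becomes Markovian by definition once you take the state to be the whole prefix.

```python
import random

def next_token(history):
    # Invented toy rule: the next token depends on the FULL history,
    # not just the most recent token -- like an LLM conditioning on
    # its whole context, and unlike a plain Markov chain.
    if len(history) >= 3:
        return history[0]
    return random.choice("ab")

# Viewed on the alphabet {a, b}, this process is non-Markovian.
# But augment the state space so that a "state" is the entire prefix,
# and each transition now depends only on the current state -- which
# is exactly the Markov property.
def step(state):  # state: tuple of all tokens generated so far
    return state + (next_token(state),)

state = ()
for _ in range(5):
    state = step(state)
```

This is the sense in which any stochastic process can be made Markovian by reframing the state, which is why the label only carries information relative to a chosen state space.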
How many readers do you think know what "Markov" means? How many would know what "stochastic" or "random" means? I'm willing to bet that the former is a strict subset of the latter.
The very first response I gave said you just have to reframe state.
This is getting repetitive and I think it is because you aren't really trying to understand what I am saying. Please let me know when you are ready to have an actual conversation.
And I said "an augmented state space would make it Markovian". Is that not what you meant by reframing the state? If not, then apologies for the misunderstanding. I do my best, but I understand that falls short sometimes.