I don't think it will. People are treating it like artificial general intelligence and trying to make it do tasks that a purpose-built model would handle much better. But that takes more money, so companies are just trying to make ChatGPT do everything, and people aren't using it because of the error rate and privacy concerns.
People are treating it like artificial general intelligence
Exactly. And people seem to think that we are close to AGI. We ain’t. AGI is many decades away.
People are treating it like artificial general intelligence
Yes, it's definitely been over-hyped (by some) in that regard.
That said, next-gen LLMs that greatly reduce the hallucination problem are on the horizon, and I'd say they will be much more successful.
The medical AI the Microsoft employee talks about here (and other people's versions of it) is almost certain to be a huge global success.
Many use cases amount to a solution in search of a problem. People try to use it to replace human beings and it fails at that, when in reality it's just a tool to speed those same human beings up.
Some people are just now learning what LLMs really are.
It's not as cool as we thought. AI is a misnomer; LLM (aka big pile of spam) seems more apt.