Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries – a conservative estimate based on the paper’s analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.
I think we would just be more careful about how we used the technology. E.g., don't autocomplete code unless the model meets a reasonable certainty threshold.
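As a minimal sketch of what such a gate might look like: suppress a completion unless the model's average per-token probability clears a threshold. The function name, the threshold value, and the use of average log-probability as the confidence signal are all illustrative assumptions, not any real editor's or API's behavior.

```python
import math

def should_show_completion(token_logprobs, min_avg_prob=0.8):
    """Hypothetical confidence gate: return True only when the
    geometric-mean token probability meets the threshold."""
    if not token_logprobs:
        return False  # nothing to show
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob) >= min_avg_prob

# A confident completion (each token ~95% likely) passes,
# while an uncertain one (~50% per token) is suppressed.
confident = [math.log(0.95)] * 5
uncertain = [math.log(0.5)] * 5
print(should_show_completion(confident))  # True
print(should_show_completion(uncertain))  # False
```

In practice the threshold would need tuning per task, but the point stands: the system declines to answer rather than guessing.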
I would argue that a system that says it doesn't know half the time is more useful than a system that's confidently wrong half the time.