People generalize from past interactions to form beliefs about how a large language model will perform. When those beliefs do not match the model's actual capabilities, even an extremely capable model may fail unexpectedly when deployed in a real-world situation.

from Latest Science News -- ScienceDaily https://ift.tt/Qcb9hs0