ChatGPT Is Saying Weird Things in Chinese
Briefly

"One of its go-to tics, the publication reports, is to answer questions with "我会稳稳地接住你," which literally translates to "I will catch you steadily," a phrase signalling willingness to talk about a person's feelings (as Wired's Zeyi Yang notes, a more flowerly translation could be "I'll hold you steadily through whatever comes," though the sentiment is apparently irritating to Chinese speakers either way.)"
"At other times, ChatGPT will tell its Chinese users "砍一刀," which means either "help me cut it once," or "slash the price," an obnoxious bit of ad copy parroted by Chinese eCommerce platform Pinduoduo, Wired notes. The odd mannerisms are so ubiquitous that they've become a meme among Chinese netizens, with some depicting ChatGPT as a giant inflatable airbag placed to break someone's fall - catching them steadily."
"The problem, Wired observes, may come down to a phenomenon called "mode collapse," a fundamental bias tracing back to the people training large language models (LLMs). Basically, the idea goes that when human data annotators comb through text to train AI chatbots, they unknowingly favor familiar turns-of-phrase over more foreign-sounding sentences. After an LLM like ChatGPT is trained, it becomes difficult to force it to "unlearn" certain phrases."
"While AI developers can reinforce the usage of a particular response - "I will catch you steadily" may be a great answer in a particular situation - accounting for range and quantity is a different beast altogether. "We don't know how to say: 'this is good writing, but if we do this good wr"
ChatGPT’s Chinese output can include recurring conversational habits that feel irritating or unnatural to Chinese speakers. Common examples include responses like “我会稳稳地接住你,” meaning “I will catch you steadily,” and phrases such as “砍一刀,” associated with either “help me cut it once” or “slash the price.” These patterns have become memes, with some portraying the system as an inflatable airbag that catches people. The behavior may stem from “mode collapse,” a bias introduced during training when human annotators favor familiar phrasing. After training, it is difficult to make the model unlearn specific repeated responses, even if developers can reinforce certain answers.
Read at Futurism