Study Finds That 52 Percent of ChatGPT Answers to Programming Questions Are Wrong Доброчанька 05/25/2024 (Sat) 15:21 No.4158
https://futurism.com/the-byte/study-chatgpt-answers-wrong
https://dl.acm.org/doi/pdf/10.1145/3613904.3642596

>Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT’s answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.

The caveat, though, is that this apparently concerns the original ChatGPT rather than the latest versions, where this ratio may have improved considerably.