Researchers have found that students fared better on accounting exams than ChatGPT, OpenAI's chatbot.
Despite this, they said ChatGPT's performance was "impressive" and that it was a "game changer that will change the way everyone teaches and learns — for the better". The researchers, from Brigham Young University (BYU) in the US and 186 other universities, wanted to know how OpenAI's technology would fare on accounting exams. They have published their findings in the journal Issues in Accounting Education.
On the researchers' accounting exams, students scored an overall average of 76.7 per cent, compared with ChatGPT's 47.4 per cent.
ChatGPT scored higher than the student average on 11.3 per cent of questions, doing particularly well on accounting information systems (AIS) and auditing, but it performed worse on tax, financial, and managerial assessments. The researchers think this may be because ChatGPT struggled with the mathematical processes those areas require.
The AI bot, which uses machine learning to generate natural language text, also did better on true/false questions (68.7 per cent correct) and multiple-choice questions (59.5 per cent), but struggled with short-answer questions (between 28.7 and 39.1 per cent).
In general, the researchers said that higher-order questions were harder for ChatGPT to answer. Sometimes it provided authoritative-sounding written descriptions for incorrect answers, or answered the same question in different ways.
They also found that ChatGPT often provided explanations for its answers even when those answers were incorrect. Other times, it selected the wrong multiple-choice answer despite providing an accurate description.
Importantly, the researchers noted that ChatGPT sometimes made up facts. When providing a reference, for example, it generated a real-looking citation that was completely fabricated: the cited work, and sometimes even the authors, did not exist.
The bot also made nonsensical mathematical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly.
Wanting to add to the intense ongoing debate about how models like ChatGPT should factor into education, lead study author David Wood, a BYU professor of accounting, decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.
His co-author recruiting pitch on social media took off: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions.
They also recruited undergraduate BYU students to feed another 2,268 textbook test-bank questions to ChatGPT. The questions covered AIS, auditing, financial accounting, managerial accounting, and tax, and varied in difficulty and type (true/false, multiple choice, short answer).