Generative AI chatbots are known to make plenty of mistakes. Let’s hope you didn’t follow Google’s AI suggestion to add glue to your pizza recipe or to eat a rock or two a day for your health.
These mistakes are known as hallucinations: essentially, things the model makes up. Will this technology get better? Even researchers who study AI aren’t optimistic that it will happen soon.
That’s one of the findings of a report released this month by a panel of two dozen artificial intelligence experts convened by the Association for the Advancement of Artificial Intelligence. The group also surveyed more than 400 of the association’s members.
In contrast to the hype you may hear from developers that dramatically better AI is only years (or months, depending on who you ask) away, this panel of academics and industry experts seems more cautious about how quickly these tools will advance. That includes more than just getting facts right and avoiding bizarre errors. The reliability of AI tools would have to improve dramatically before developers can produce a model that meets or surpasses human intelligence, commonly known as artificial general intelligence. Researchers seem to believe improvements on that scale are unlikely to happen anytime soon.
“We tend to be a little bit cautious and not believe something until it actually works,” Vincent Conitzer, a professor of computer science at Carnegie Mellon University and one of the panelists, told me.
Artificial intelligence has developed rapidly in recent years
The aim of the report, AAAI president Francesca Rossi wrote in her introduction, is to support research in artificial intelligence that produces technology that helps people. Issues of trust and reliability are serious, not only in providing accurate information but also in avoiding bias and ensuring that a future AI doesn’t cause serious unintended consequences. “We all need to work together to advance AI in a responsible way, to make sure that technological progress supports the progress of humanity and is aligned with human values,” she wrote.
The acceleration of AI, especially since OpenAI launched ChatGPT in 2022, has been remarkable, Conitzer said. “In some ways that’s been stunning, and many of these techniques work much better than most of us ever thought they would,” he said.
There are some areas of AI research where “the hype does have merit,” John Dickstun, an assistant professor of computer science at Cornell University, told me. That’s especially true in math or science, where users can check a model’s results.
“This technology is amazing,” Dickstun said. “I’ve been working in this field for more than a decade, and it has shocked me how good it has become and how quickly it became good.”
Despite those improvements, there are still important issues that deserve research and consideration, experts said.
Will chatbots get their facts straight?
Despite some progress in improving the reliability of the information that generative AI models provide, much more work needs to be done. A recent report from the Columbia Journalism Review found that chatbots were unlikely to decline to answer questions they couldn’t answer accurately, were confident about the wrong information they provided and made up sources to back up those wrong claims.
Improving reliability and accuracy “is arguably the biggest area of AI research today,” the AAAI report said.
Researchers noted three main ways to boost the accuracy of AI systems: fine-tuning, such as reinforcement learning with human feedback; retrieval-augmented generation, in which the system gathers specific documents and pulls its answer from them; and chain-of-thought prompting, in which prompts break down the question into smaller steps that the AI model can check for hallucinations.
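For readers curious what one of those approaches looks like under the hood, here is a minimal sketch of the retrieval-augmented idea in Python. It is purely illustrative and rests on assumptions: the tiny in-memory document list, the keyword-overlap retrieval and the call_model stub are stand-ins for a real search index and a real language model, not any particular vendor’s API.

# Minimal, illustrative sketch of retrieval-augmented generation (RAG).
# Hypothetical throughout: the in-memory document list, the naive
# keyword-overlap retrieval and call_model() stand in for a real
# search index and a real language model API.

DOCUMENTS = [
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Chain-of-thought prompting breaks a question into smaller steps.",
    "Reinforcement learning from human feedback fine-tunes a model with human ratings.",
]

def retrieve(question, documents, top_k=2):
    # Rank documents by how many words they share with the question.
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def call_model(prompt):
    # Stub: a real system would send the prompt to a language model here.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer_with_rag(question):
    context = retrieve(question, DOCUMENTS)
    prompt = (
        "Answer using only the context below. If the context is not enough, say so.\n"
        "Context:\n- " + "\n- ".join(context) + "\nQuestion: " + question
    )
    return call_model(prompt)

print(answer_with_rag("What does retrieval-augmented generation do?"))

The point of the design is that the model’s answer is tied to specific retrieved text it can be checked against, which is why the report’s authors list retrieval-augmented generation among the approaches for reducing hallucinations.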
Will those techniques make your chatbot’s responses more accurate anytime soon? Unlikely: “Factuality is far from solved,” the report said. About 60% of respondents expressed doubts that reliability problems would be resolved soon.
In the generative AI industry, there is optimism that scaling up existing models will make them more accurate and reduce hallucinations.
“I think that hope was always a little bit overly optimistic,” Dickstun said. “In the last couple of years, I haven’t seen any evidence that really accurate, highly factual language models are around the corner.”
Despite the fallibility of large language models such as Anthropic’s Claude or Meta’s models, users can wrongly assume they’re more accurate because they present answers with confidence, Conitzer said.
“If we see somebody responding confidently or with words that sound confident, we assume that the person really knows what they’re talking about,” he said. “An AI system might just claim to be very confident about something that’s complete nonsense.”
Lessons for the AI user
Awareness of generative AI’s limitations is vital to using it properly. Dickstun’s advice for users of models such as ChatGPT and Google’s Gemini is simple: “You have to check the results.”
General-purpose large language models do a poor job of consistently retrieving factual information, he said. If you ask one something, you should probably follow up by looking up the answer in a search engine (and not relying on the AI summary of the search results). By the time you’ve done that, you might have been better off just doing the search in the first place.
Dickstun said the way he uses AI models most is to automate tasks he could do himself and whose accuracy he can check, such as drafting tables of information or writing code. “The broader principle is that I find these models most useful for automating work that you already know how to do,” he said.
Read more: 5 ways to stay smart when using Gen AI, explained by professors in computer science
Is artificial general intelligence around the corner?
One clear priority of the AI development industry is the race to create what is often called artificial general intelligence, or AGI: a model generally capable of a human level of thinking, or better.
The report’s survey found strong opinions about the race for AGI. Notably, more than three-quarters (76%) of respondents said scaling up current AI techniques such as large language models was unlikely to produce AGI. A significant majority of researchers doubt that the current march toward AGI will work.
A similar majority believes that systems capable of artificial general intelligence should be publicly owned if they’re developed by private entities (82%). That aligns with concerns about the ethics and potential downsides of creating a system that can outthink humans. Most researchers (70%) said they oppose halting AGI research until safety and control mechanisms have been developed. “These answers seem to suggest a preference for continued exploration of the topic, within some safeguards,” the report said.
The conversation around AGI is complicated, Dickstun said. In some sense, we’ve already built systems with a form of general intelligence. Large language models such as OpenAI’s ChatGPT are capable of doing a variety of human activities, in contrast to older AI models that could do only one thing, such as play chess. The question is whether they can do many things consistently at a human level.
“I think we are very far away from this,” said Dickstun.
He said these models lack a built-in concept of truth and the ability to handle truly open-ended creative tasks. “I don’t see the path to making them work robustly in a human environment using current technology,” he said. “I think there are many research breakthroughs standing in the way of getting there.”
Conitzer said defining exactly what AGI is can be tricky: Often, people mean something that can do most tasks better than a human, but some say it’s just something capable of performing a range of tasks. “A stricter definition is something that would really make us completely superfluous,” he said.
While researchers are skeptical that AGI is around the corner, Conitzer cautioned that AI researchers didn’t necessarily expect the dramatic technological improvements we’ve all seen in the past few years either.
“We didn’t see how quickly things would change recently,” he said, “so you might wonder whether we’re going to see it coming if things continue to go faster.”