Expert warns of the danger artificial intelligence poses to traditional school education

by time news

2023-07-07 08:34:18

Artificial intelligence is likely to end the traditional classroom, a leading expert says. Professor Stuart Russell warns that AI technology could lead to “fewer teachers being hired – perhaps none at all.”

One of the world’s leading experts on artificial intelligence predicted in an exclusive interview with The Guardian that recent advances in artificial intelligence will likely lead to the end of traditional schooling.

Professor Stuart Russell, a British computer scientist at the University of California, Berkeley, said ChatGPT-style personalized tutors could greatly enrich education and expand global access by delivering individual instruction to every family with a smartphone. According to him, the technology could realistically deliver “most of the material through to the end of high school.”

“Education is the biggest benefit we can look forward to in the next few years,” Russell said before speaking Friday at the United Nations’ Artificial Intelligence for Good Global Summit in Geneva. “It should be possible within a few years, maybe by the end of this decade, to provide a fairly high quality education to every child in the world. That is potentially transformative.”

However, the expert warned that introducing powerful technology to the education sector also comes with risks, including potential indoctrination.

Stuart Russell cited evidence from studies of human tutors showing that one-to-one teaching can be two to three times more effective than traditional classroom instruction, because children receive individual attention and can be guided by their curiosity.

“Oxford and Cambridge don’t really use traditional classes … they use tutors, probably because it’s more efficient,” the professor said. “But it is literally impossible to do this for every child in the world – there simply aren’t enough adults to go around.”

OpenAI is already exploring educational applications: in March it announced a partnership with the educational nonprofit Khan Academy to test a virtual tutor powered by GPT-4.

The prospect could raise “reasonable fears” among teachers and teacher unions that “fewer teachers will be hired – perhaps none at all,” Russell said. He predicted that human participation would still be important, but could be dramatically different from the traditional role of a teacher, potentially including “playground watcher” duties, fostering more complex collective activities, and providing civic and moral education.

“We haven’t done the experiments, so we don’t know whether an artificial intelligence system will be enough for a child. There is motivation, there is learning to collaborate – it’s not just a matter of ‘Can I count?’” Russell noted. “It will be important to ensure that the social aspects of childhood are preserved and improved.”

The technology also needs careful risk assessment. “Hopefully the system, if properly designed, won’t tell a child how to make a bioweapon. I think that can be dealt with,” says Russell. A more pressing concern, he says, is the potential for the software to be hijacked by authoritarian regimes or other actors. “I am sure the Chinese government hopes [the technology] will be more effective at instilling loyalty to the state,” he said. “I suppose we would expect this technology to be more effective than a book or a teacher.”

According to The Guardian, Professor Russell has spent years highlighting the broader existential risks associated with artificial intelligence, and in March he signed an open letter, along with Elon Musk and others, calling for a pause in the “out of control race” to develop ever more powerful digital minds. According to Russell, the issue has become more pressing with the advent of large language models. “I think of [artificial general intelligence] like a giant magnet in the future,” he said. “The closer we get to it, the stronger the pull becomes. It definitely feels closer than before.”

According to him, politicians have been slow to deal with the issue. “I think governments have woken up… now they’re running around trying to figure out what to do,” he said. “That’s good – at least people are paying attention to it.”

However, keeping AI systems in check poses both regulatory and technical challenges, since even experts don’t know how to quantify the risk of losing control of a system. OpenAI announced on Thursday that it will dedicate 20% of its computing power to finding a way to control potentially super-intelligent AI and “prevent it from going rogue.”

“In particular with large language models, we really have no idea how they work,” Russell said. “We don’t know if they’re capable of reasoning or planning. They may have internal goals that they pursue – we don’t know what they are.”

Even beyond the direct risks, the systems could have other unintended consequences for everything from climate change action to relations with China.

“Hundreds of millions of people, and pretty soon billions, will be in constant contact with these things,” Russell said. “We don’t know in what direction they could change world public opinion and political trends.”

“We could face a massive environmental crisis or a nuclear war and not even understand why it happened,” the scientist added. “It would simply be a consequence of the fact that whichever direction the technology pushes public opinion, it does so in a correlated way across the entire world.”
