Ilya Sutskever, co-founder of OpenAI, the developer of ChatGPT, predicted that training generative artificial intelligence (AI) models through pre-training will become challenging due to data exhaustion. As a result, AI will develop its own reasoning ability, and the results of that reasoning will become unpredictable.
According to Reuters on the 14th (local time), Sutskever predicted in a lecture the previous day at the Neural Information Processing Systems Conference (NeurIPS), held in Vancouver, Canada: “The pre-training of generative AI models as we know it will unquestionably end.”
As background, he noted that the data needed to train AI models is a finite resource, like fossil fuels. “Computing power is improving, but data is not increasing,” he said, adding, “This is because we only have one Internet.” So far, AI has learned mainly from human-generated content on the Internet.
Sutskever said that next-generation AI model development will become agent- and inference-centric. An agent is an autonomous AI system that interacts with software, performs tasks, and makes decisions.
He explained, “If AI has both agent and reasoning capabilities, it will have deeper understanding and even self-awareness.” In the future, AI will be able to reason through problems on its own, like humans, without additional training. He also cited the example of AI making chess moves that even chess players cannot predict, adding, “The more AI infers, the more unpredictable the inference results become.”
Reporter Jong-ho Han [email protected]
How does Ilya Sutskever envision the future role of AI in various industries?
Interview between Time.news Editor and Ilya Sutskever
Time.news Editor: Good day, everyone! Today, we have a special guest, Ilya Sutskever, the co-founder of OpenAI and a pioneering figure in the field of artificial intelligence. Ilya, welcome, and thank you for joining us!
Ilya Sutskever: Thank you for having me! I’m excited to discuss these important topics.
Time.news Editor: In your recent lecture at the Neural Information Processing Systems Conference in Vancouver, you suggested that the pre-training of generative AI models, as we know it, might be nearing its end. Can you elaborate on that?
Ilya Sutskever: Absolutely. The crux of my argument is that the data we use to train these models is a finite resource, much like fossil fuels. Eventually, we will encounter diminishing returns as we exhaust the available data suitable for training. This could lead to significant challenges in developing more advanced models, as they will rely on more limited datasets.
Time.news Editor: That’s a fascinating outlook. You mentioned that AI will develop its own reasoning abilities and that the results may become unpredictable. What does this mean for developers and users of AI technology?
Ilya Sutskever: As AI systems become more autonomous in their reasoning, they will start to exhibit behaviors and outputs that we might not fully understand or anticipate. This unpredictability can have both positive and negative implications. On the one hand, it can lead to novel solutions and insights; on the other hand, it poses significant challenges in terms of safety, reliability, and ethical considerations.
Time.news Editor: It certainly raises some intriguing questions about the future of AI. With the exhaustion of training data, what do you believe is the next step for AI growth?
Ilya Sutskever: We may need to explore new methods of training AI that don’t rely heavily on vast amounts of data. Approaches like reinforcement learning, transfer learning, or even models that can generate their own training data might become more prominent. This shift could redefine how we think about model training and AI capabilities.
Time.news Editor: That’s a compelling approach. How do you envision the role of AI in industries once these changes start to take place?
Ilya Sutskever: I believe AI will become more integrated into various sectors such as healthcare, finance, and education. As these systems evolve to develop independent reasoning, they can assist in making complex decisions or even offer innovative solutions in real time. However, we will need to tread carefully and ensure that appropriate ethical frameworks and regulations are in place.
Time.news Editor: Ethics in AI is indeed a crucial topic. What do you think are the key considerations we must address as AI develops?
Ilya Sutskever: We need to prioritize openness, fairness, and accountability in AI systems. As they gain more autonomy, it’s vital that both developers and users can understand how decisions are made. We should also be vigilant about bias in training data, ensuring that our AI technologies benefit all segments of society and don’t exacerbate existing inequalities.
Time.news Editor: Those are critical points. As we look ahead, how do you see your role and the role of OpenAI in shaping the future of AI?
Ilya Sutskever: At OpenAI, our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. I see my role as both an innovator and a steward—pushing the boundaries of what’s possible while also advocating for responsible AI development. It’s a delicate balance, but one that I believe is essential for a hopeful future.
Time.news Editor: Ilya, thank you for sharing your insights with us today. It’s clear that the road ahead for AI is filled with both challenges and opportunities. We look forward to seeing how these developments unfold.
Ilya Sutskever: Thank you for having me. It’s always a pleasure to discuss these crucial topics!