As technology becomes increasingly intertwined with our daily lives, questions of safety, human interaction, and trust become critical. OpenAI CEO Sam Altman addressed the trajectory of Artificial Intelligence (AI) during a session titled ‘Technology in a Turbulent World’ at the World Economic Forum in Davos, Switzerland, on January 18.
Altman reflected on a year in which generative AI, notably ChatGPT, made headlines. OpenAI, a U.S.-based AI research organisation, is dedicated to developing “safe and beneficial” artificial general intelligence.
Productivity gains with AI usage
Altman emphasised that despite its limitations, people are finding ways to leverage AI tools for significant productivity gains.
"Even with its very limited current capability and its very deep flaws, people are finding ways to use this tool for great productivity gains or other gains and understand the limitations. People understand tools and the limitations of tools more than we often give them credit for. People have found ways to make ChatGPT super useful to them and understand what not to use it for, for the most part.
"AI has been somewhat demystified because people really use it now. And that’s always the best way to pull the world forward with new technology,” Altman said.
Explainable AI: Trust through understanding
Altman envisions a future where AI systems can articulate their reasoning in natural language, allowing users to comprehend the decision-making process. Part of being able to trust technology involves understanding how it works.
But Altman says truly understanding how generative AI operates will be “a little different” than people think now.
"I can’t look in your brain to understand why you’re thinking what you’re thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not. I think our AI systems will also be able to do the same thing. They’ll be able to explain to us in natural language the steps from A to B, and we can decide whether we think those are good steps, even if we’re not looking into it to see each connection.”
For Altman, that ability to explain its steps will be pivotal in building trust in AI.
Human care prevails over AI dominance
When IBM’s chess computer Deep Blue beat world champion Garry Kasparov in 1997, commentators said it would be the end of chess, and that no one would bother to watch or play the game again because a computer had won.
"But chess has never been more popular than it is now,” said Altman, and "almost no one watches two AIs play each other; we’re very interested in what humans do. When I read a book that I love, the first thing I do when I finish is find out everything about the author’s life. I want to feel some connection to that person who made this thing that resonated with me. Humans know what other humans want. Humans are going to have better tools. We've had better tools before, but we’re still very focused on each other.”
Shifting roles: Humans engaging with ideas
Altman anticipates a shift in job roles towards higher-level abstraction, with individuals focusing more on generating ideas and curating decisions.
"When I think about my job, I’m certainly not a great AI researcher. My role is to figure out what we’re going to do, think about that, and then work with other people to coordinate and make it happen. I think everyone’s job will look a little bit more like that. We will all operate at a little bit higher level of abstraction. We will all have access to a lot more capability. We’ll still make decisions. They may trend more towards curation over time, but we’ll make decisions about what should happen in the world.”
Optimism in AI values alignment
Altman expressed optimism about aligning AI values with societal expectations. “The technological direction we’ve been trying to push this in is one we believe we can make safe,” said Altman.
Iterative deployment means that society can get used to the technology and that “our institutions have time to have these discussions to figure out how to regulate this, how to put some guardrails in place.”
Altman said there had been “massive progress” between GPT-3 and GPT-4 in terms of how well it can align itself to a set of values. But the harder question is: “Who gets to decide what those values are and what the defaults are, what the bounds are? How does it work in this country versus that country? What am I allowed to do with it or not? That’s a big societal question.
"From the technological approach, there’s room for optimism,” he said, adding that the current alignment techniques would not scale to much more powerful systems, so "we’re going to need to invent new things.” He welcomed the scrutiny AI technology was receiving.
"I think it’s good that we and others are being held to a high standard. We can draw on lessons from the past about how technology has been made to be safe and how different stakeholders have handled negotiations about what safe means.”
Altman said it was the responsibility of the tech industry to get input from society on decisions such as what the values and safety thresholds should be, so that the benefits outweigh the risks.
"I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us...We have our own nervousness, but we believe that we can manage through it and the only way to do that is to put the technology in the hands of people.
"Let society and the technology co-evolve and sort of step by step with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting safety requirements.”
New economic models for content development
Altman distinguished between displaying content and using it to train AI models. He outlined a future in which content owners are paid when their work is used to train models, and suggested that economic models will have to evolve to ensure that compensation is fair.
"When a user says, ‘Hey, ChatGPT, what happened at Davos today?’ we would like to display content, link out to brands of places like the New York Times or the Wall Street Journal or any other great publication and say, ‘Here’s what happened today,’ and then we’d like to pay for that. We’d like to drive traffic for that,” said Altman, adding it’s not a priority to train models on that data, just display it.
In the future, Altman said, large language models (LLMs) will be able to take in smaller amounts of higher-quality data during training, think harder about it, and learn more.
When content is used for training, Altman said, new economic models will be needed to compensate content owners.
"If we’re going to teach someone else physics using your textbook and using your lesson plans, we’d like to find a way for you to get paid for that. If you teach our models, I’d love to find new models for you to get paid based on the success of that...The current conversation is focused a little bit at the wrong level, and I think what it means to train these models is going to change a lot in the next few years,” Altman said.