Sam Altman Confirms GPT-5.2X Level Intelligence Will Cost 100x Less by 2027

Sam Altman holding a microphone in a wood-paneled room, gesturing expressively. Green plant in the background, relaxed setting.

Sam Altman held a town hall meeting with AI builders and developers to discuss the next generation of AI tools and gather feedback on what OpenAI should build. The event covered software engineering’s transformation, biosecurity risks, education, and the future of human-AI collaboration.

AI Costs Will Drop 100X by End of 2027

When asked about cost reductions for running agents at scale, Altman made a bold prediction.

“I think we should be able to deliver GPT-5.2X-level intelligence by the end of 2027 for, I would say, at least 100x less.”

He also revealed a new priority: speed over cost.

“There’s another dimension which we haven’t thought about as much historically. Now, as these model outputs get so complex, more people are pushing us on the speed we can deliver it at than on the cost,” Altman explained.

“We are really good at driving down the cost curve. We have not thought as much about how we deliver the same output, maybe at a much higher price, but in 1/100th of the time.”

“I think for a lot of things you’re talking about, people are going to really want that,” he said, noting these are “very different problems.”

“Assuming we push on cost, and assuming that’s kind of what you all, the market, want, we can go very far down that curve.”
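
For scale, a 100x cost reduction by the end of 2027 implies roughly a 10x drop per year if we assume a two-year window (Altman gave no explicit start date, so the window is an assumption). A minimal sketch of the compounding math:

```python
# Hedged sketch: the per-year cost-reduction factor implied by a 100x total
# drop over an assumed ~2-year window (end of 2025 -> end of 2027).
def implied_annual_decline(total_factor: float, years: float) -> float:
    """Return the per-year reduction factor that compounds to total_factor."""
    return total_factor ** (1.0 / years)

factor = implied_annual_decline(100.0, 2.0)
print(f"~{factor:.0f}x cheaper per year")  # prints "~10x cheaper per year"
```

The same compounding logic shows why the assumed window matters: over three years, the same 100x total would only require about 4.6x per year.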


Other Key Topics Discussed at OpenAI Town Hall

Software Engineering: More People Will Build, Jobs Will Transform

When asked about the Jevons paradox and whether AI would reduce demand for software engineers, Altman predicted more people will become engineers, not fewer.

“I think what it means to be an engineer is going to super change,” Altman said. “There will be probably far more people creating far more value and capturing more value that are getting computers to do what they want, getting computers to do what other people want, figuring out ways to make these useful experiences for others.”

The shape of the job will change dramatically. Engineers will spend less time typing code or debugging. This pattern has happened before in engineering history, and each time more people have joined the field and become productive.

“Demand for software seems to not be slowing down at all,” Altman said.

He expects a future where “a lot of us use software that was written for one person or a very small number of people and we’re constantly customizing our own software.”

“So I think many more people will get computers to do the things they want to do and it will be a very different way than we do it today,” he explained. “So if you count that as software engineering then I think we’ll see much more of it and I think a greater percentage of the world’s GDP will be created that way and consumed that way too.”

Go-to-Market Remains the Biggest Challenge

An attendee raised concerns about GTM (go-to-market) becoming the new bottleneck now that building with AI tools is easier.

Altman drew on his Y Combinator experience: “The consistent thing you’d hear from startup founders is, ‘I thought the hard part of this was going to be building a product,’ and the hard part is getting anyone to care, or to use it, or, like, to connect with people.”

“I think this has always been extremely hard, but now it’s gotten so much easier to build that you feel the delta even more,” he said.

The fundamental challenge remains: “Even in a world of incredible abundance, human attention remains like this very limited thing.”

While AI can help automate sales and marketing, Altman emphasized: “I just expect this to be hard and you got to come up with creative ideas and build great things.”

OpenAI Won’t Figure Out Everything: Third-Party Builders Are Safe

A solo developer building multi-agent orchestration tools on the Codex SDK asked if OpenAI would compete with third-party builder tools.

“We don’t know what the right interface for all of this is going to be. We don’t know how people are going to want to use it,” Altman responded. “We’re not going to figure this out on our own.”

He explained that different users want different experiences. Some will want 30 computer screens with complex multi-agent setups. Others will want calm voice interactions where they speak to their computer once per hour.

“People will just have to try different approaches and see what they like, and the world will probably converge on a few, but we won’t figure out all of them,” Altman said.

He emphasized a major opportunity: “The overhang of what these models are capable of relative to what most people can figure out how to get out of them is like huge and growing.”

“Someone is going to build a tool to really help you do that. And no one’s gotten it right yet,” he added.

AI Could Close Economic Gaps Through Massive Deflation

When asked how AI can solve economic gaps like the wage gap, Altman predicted AI will be “massively deflationary.”

“By the end of this year, for a hundred or a thousand dollars of inference and a good idea, you’ll be able to create a piece of software that would have taken teams of people, you know, a year to do,” Altman said.

“It is very hard to wrap my head, at least, around the sort of magnitude of this economic change.”

He believes this creates massive empowerment for individuals: “The empowerment of individual people, whether or not society is structured in a way where they’ve naturally had all of the advantages, looks like it’s going to go up and up.”

This should be “an equalizing force in society and a way that people who have not gotten treated that fairly get a really good shot as long as we don’t screw up the policy around it in a big way, which could happen.”

However, Altman warned: “I am worried that you can imagine worlds in which AI really concentrates power and wealth, and it feels like it needs to be one of the main goals of policy for that not to happen.”

GPT-5 Writing Quality: “We Just Screwed That Up”

When asked about GPT-5 writing quality being worse than GPT-4.5, Altman admitted fault directly.

“I think we just screwed that up,” Altman said. “We will make future versions of GPT-5.x hopefully much better at writing than 4.5 was.”

He explained that OpenAI decided “for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing.”

“We have limited bandwidth here, and sometimes we focus on one thing and neglect another,” he said.

But the future will be different: “I believe that the future is mostly going to be about very good general purpose models.”

“Intelligence is a surprisingly fungible thing and we can get really good at all these things in a single model,” Altman added.

“It does seem like this is a particularly important time to push on kind of let’s call it coding intelligence. But we will try to excel and catch up on everything else quickly.”

Software Is No Longer Static: Customization Is the Future

Altman shared his personal experience with Codex changing how he thinks about software.

“This is one that I have noticed in my own use of Codex recently: I no longer think of software as this static thing. If I have a little problem, I expect the computer to, like, write some code right away and get it solved for me.”

“I think this trend is going to go much further. I suspect that the whole way we use computers and operating systems is going to change.”

He doesn’t expect core applications to be rewritten constantly: “I don’t think it’ll be like, oh, every time you need to edit a document, a new version of a word processor is going to be written for you right on the spot because, you know, we get like very used to our interfaces and it’s very important that like that button is in the same place it was last time.”

But customization will increase: “Maybe I want to use the same word processor every time. But I do kind of have a bunch of repeated quirks of how I use it and I would like the software to be increasingly customized.”

“That idea that our kind of tools are constantly evolving and converging just for us, that seems like it’s going to happen,” Altman said.

“Internally at OpenAI, where people have very much adopted Codex for their workflows right now, everybody has their own little custom things and uses things super differently.”

Building Durable Startups: Traditional Rules Still Apply

Addressing concerns about model updates replacing startup features, Altman emphasized business fundamentals haven’t changed.

“It’s so tempting to assume that like the laws of physics for business have totally changed and they haven’t yet,” he said.

“Right now what’s changed is that you can do work faster and you can kind of create new software much much faster. But all the other rules of building a successful startup, you know, you got to figure out a way to get users, you got to figure out a way to like solve the GTM problem. You got to figure out a way to provide something sticky, have some sort of moat, network effect, competitive advantage, whatever you want to call it. None of that has changed.”

“The good news is, like, it hasn’t changed for us either. So there have been many startups that have done things that maybe in a perfect world we would have done sooner, but it was too late, and people built up, you know, a real durable advantage, and that will keep happening.”

Altman provided a framework: “Will your company be happy or sad if GPT-6 is like a wildly impressive update?”

“I encourage people to try to build things where you are so hoping the model gets wildly better. And there’s so many things to build that way.”

Autonomous Agents: It Depends on the Task

An OpenAI team member responded to questions about autonomous agent timelines.

“I think it really depends on the kind of task. So there’s a number of tasks today where, just inside OpenAI, we see people who are, like, prompting Codex in a very special way. Maybe they’re using the SDK, so it’s like a custom harness that keeps prompting it to continue, but they’re basically having it running, you know, forever.”

“So I think this isn’t a question of when, but a question of, like, broadening of the horizon. So if you have a very specific task that you understand very well, try doing it today.”

For open-ended tasks: “If you’re starting to think like, okay, I want to get to the point where I can like prompt the model to build a startup, like that’s a much more open-ended problem with like a much harder verification loop.”

The advice: “Figure out, okay, how can I break that down into a different problem where an agent can, like, verify itself and where I can verify its final output at the end of it, and then over time we can let the agent do more and more complex tasks.”

Building Tools to Improve Idea Quality

An attendee working on GTM automation noted that often “the products actually just aren’t worth their attention” and asked what tools can improve idea quality.

“It’s very hard to come up with good new ideas and I am increasingly a believer that we think at the limits of our tools,” Altman responded.

“I think we should try to build tools that help people come up with good ideas.”

He described a common problem: “The experience of sitting down in front of an AI, you know, like an agentic code writer, and just not being sure what to ask for next is something that a lot of people report.”

“I think we can build tools to help you come up with good ideas, and I believe we could do that. I believe we could, like, look at all your past work and all your past code and try to figure out what might be useful to you or interesting to you, and just continuously suggest things.”

The Paul Graham Bot Idea

Altman shared a specific vision for idea-generation tools.

“There have been like three or four people in my life that I have consistently found every time I hang out with them, I leave with a lot of ideas. They’re people who are just really good at asking questions or giving you seeds to build on. And like Paul Graham is off the charts amazing at this.”

“If we can build like a Paul Graham bot that you can have the same kind of interaction with to help generate new ideas, even if most of them are bad, even if you know you kind of say absolutely not to 95 out of a hundred of them, I think something like that is going to be a very significant contribution to the amount of good stuff that gets built in the world.”

He pointed to current model capabilities: “With 5.2, like a special version of 5.2 we use internally, we’re now for the first time hearing from scientists that the scientific progress from these models is no longer trivial.”

“I just can’t believe that a model that can come up with new scientific insights is not also capable, you know, with a different harness and trained a little bit differently of coming up with new insights about products to build and stuff like that.”

Models Will Adapt to New Technologies Quickly

A developer worried that models might get stuck using old technologies, like having to push them to adopt updates from the past two years.

“I think we really will be very good at getting the models to use new things,” Altman said.

“Fundamentally, if we’re using these models correctly, they’re like a general purpose reasoning engine. The way we have things architected right now, they also have, you know, a huge amount of world knowledge built into them. But I think we are moving in the right direction.”

“I hope that updates and using new things and learning new skills even faster than humans do is like a, you know, next couple of years thing.”

He described a key milestone: “When the model can be presented with something totally new, a new environment, new tools, new technology, whatever, and you can explain it once, or the model can explore it once, and then it can just super reliably use that and get it right. And that doesn’t feel very far away.”

Fully Autonomous Scientific Research: Still a Long Way Off

A scientist asked if models will take over the entire research enterprise.

“I think it’s still a long or reasonably long way away from the models doing truly completely closed loop autonomous research in most areas,” Altman said.

Even in mathematics, which doesn’t need physical labs: “Eventually, even there. For now, the mathematicians who are making the most progress with the models are very heavily involved in looking at intermediate progress and saying, nah, this just doesn’t feel right, you know, I have an intuition that there’s something different on this other path.”

“I’ve gotten to meet a few mathematicians who now say their entire day is collaborating with the latest models. They’re making rapid progress, but they do something very different than the model.”

Altman compared it to chess history: “Deep Blue beat Kasparov. Then there was this period of time where, okay, you know, AI is better than humans, but a human plus an AI, where the human is choosing the best of 10 moves from the AI, is better than that. And then very quickly after that, the AI was again better, and the human was just making it worse.”

“I sort of suspect for something like many kinds of research, something like that should happen over time.”

However: “There seems to be something about creativity, intuition, judgment that we are not close to with the current generation of models.”

“I can’t come up with any principled reason why we won’t get there. So I assume we will. But today, just sort of saying, like, ‘hey GPT-5.x, GPT-6, go solve math,’ is certainly not going to outperform a few very good people doing math with it.”

Scientists Using AI Like “Unlimited Postdocs”

Altman shared insights from scientists using GPT-5.2 aggressively.

“That’s where it’s been very cool to talk to the scientists that are really using this aggressively. I mean, they burn a lot of GPUs in the process.”

“There is a new skill of being able to say, here are the 20 new problems, and I’m going to do a breadth-first search on them. I’m not going to go deep on any one, and I’m going to use the AI to, like, be unlimited grad students, is how someone described it. I actually recently upgraded them to unlimited postdocs.”
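
The breadth-first workflow, fanning a batch of problems out to the model in parallel rather than going deep on any one, could be sketched like this; `ask_model` is a hypothetical stand-in for whatever model call you actually use:

```python
# Hedged sketch of breadth-first exploration across many problems at once.
# ask_model is a hypothetical placeholder, not a real API call.
from concurrent.futures import ThreadPoolExecutor

def ask_model(problem: str) -> str:
    # Placeholder: imagine an expensive LLM call here.
    return f"sketch of an approach to: {problem}"

def breadth_first(problems: list[str], max_workers: int = 8) -> dict[str, str]:
    """Run a shallow first pass on every problem concurrently; go deep later."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(ask_model, problems))
    return dict(zip(problems, results))

answers = breadth_first([f"problem {i}" for i in range(20)])
print(len(answers))  # prints 20
```

The point of the pattern is triage: a cheap first pass over all 20 problems tells you which handful are worth a deep, supervised follow-up.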

On automating physical science with wet labs: “We go back and forth a lot about whether we should be building automated wet labs for every field, which we’re open to doing, or if the world as a whole will figure out great experiments and has a lot of equipment and will happily contribute data back in.”

“It sort of seems like, just watching the scientific community embrace 5.2 and how much they’ve been willing to help, that that might work. And that would clearly be an easier, better, more distributed world, with more smart people and more different kinds of equipment.”

Biosecurity: Moving from Blocking to Resilience

A biosecurity startup founder asked where security falls in OpenAI’s roadmap.

“There are many ways AI can go wrong in 2026. Certainly one of them that we are quite nervous about is bio,” Altman said.

“The models are quite good at bio, and right now most of our, and by ‘our’ I mean not just OpenAI’s but the world’s, strategy is to try to restrict who gets access to them and, you know, put up a bunch of classifiers to not help people make novel pathogens. I don’t think that’s going to work for much longer.”

“The shift that I think the world needs to make, for AI security generally and AI biosecurity in particular, is to move from one of, like, blocking to one of resilience.”

He shared an analogy: “My co-founder Wojciech uses this analogy I really like about fire safety. Fire did all these wonderful things for society. Then it started burning down cities. We tried to do all of these things to restrict fire. I just actually learned this weekend that ‘curfew’ comes from when you weren’t allowed to have fires anymore because they were burning down cities. And then we got better at resilience to fire, and we came up with fire codes and flame-resistant materials and a bunch of other things.”

“I think we need to think about AI the same way. AI is going to be a real problem for bioterrorism. AI is going to be a real problem for cyber security. AI is also a solution to those things. It’s a solution to a lot of other problems as well.”

“I think we need like a society-wide effort to sort of provide the infrastructure for this resilience, not labs that we trust to sort of always block what they’re supposed to block.”

Altman issued a warning: “I am very nervous about where things are, but I don’t see a path other than the sort of resilience-based approach. And it does seem like AI can really help us do that fast. But if something goes really wrong, like visibly really wrong for AI this year, I think bio would be a reasonable bet for what that could be. And then as we get into next year and the following year, you can imagine lots of other things going really wrong, too.”

AI in Education: Change Teaching Methods, Not Block Tools

A Berkeley student asked about AI in education during formative years.

Altman drew parallels to Google’s emergence: “I’m much older than most of you, but I was still, like, I was in middle school when Google came out. And the teachers tried to make the kids promise not to use it, because there was this feeling that if you could look up anything at your fingertips, then why come to history class? Why memorize anything, you know?”

“It seemed to me totally insane. And I was like, actually, I’m gonna be way smarter and I’m gonna learn way more. I’m gonna be able to do way more. And this is the tool that I’m gonna live with as an adult, and it would be crazy to make me learn in a way that didn’t assume it.”

“I feel the same way about AI tools. I understand that in the current way we teach kids, AI tools are a problem. But I think that suggests that we need to change the way we teach people.”

“You still need to learn to think, and writing, learning to write, or the practice of writing, is very important to learning how to think. But probably the way we should teach you to think, and the way we should evaluate your thinking ability, has changed, and we shouldn’t pretend otherwise.”

“We will figure out new ways to teach the curriculum and bring the other students along.”

For young children: “I’m a fan of keeping computers out of kindergarten. And I think kindergarteners should be like running around outside and playing with physical things and trying to learn how to interact with each other. So not only would I not have AI in most kindergartens, most of the time, I wouldn’t put computers either.”

“I think developmentally we still don’t understand all of the impacts of technology. There’s been, like, a lot written about the impact of social media on teenagers, and that seems like it’s been pretty bad. But I have a sense that, unfortunately, a bunch of technology on young children has been even much worse, and it is still talked about relatively little. And I think until we understand that better, probably we don’t need kindergarteners using a ton of AI.”

Human Connection Will Become More Valuable With AI

When asked about human collaboration versus AI collaboration, Altman predicted increased value for human connection.

“I suspect that human connection is going to be more valuable in a world of lots of AI, not less, and that people are going to value getting together with other people and working with other people more.”

“We have started to see people explore interfaces to make that easier. And as we think about making our own hardware, our own devices, we have thought a lot about, maybe we’ve even thought first about, what a collaborative, sort of multiplayer-plus-an-AI experience looks like.”

“My sense is that although no one has cracked it quite yet, we will be surprised at how much this is enabled by AI in a way that no other technology has enabled. So you can have, you know, five people sitting around at the table and a little, you know, kind of robot or something also there and you will be able to be way more productive as a group and you’ll just be used to this all the time. Like every group brainstorm, every time you try to solve a problem, there’ll just be an AI as part of it and it’ll help the group do better.”

The Security Risk: Convenience Over Caution

When asked about underestimated failure modes for agents in production, Altman shared a personal story.

“One of the things that surprised me personally is, when I first started using Codex, I said, look, I don’t know how this is going to go, but for sure I’m not going to give this thing, like, complete unsupervised access to my computer. I was so confident in that, and I lasted about, like, 2 hours. And then I was like, you know what, it seems very reasonable, the agent seems to really do reasonable things. I hate having to approve these commands every time. I’m just going to turn it on for, like, a little bit and see what happens. And I never turned, you know, full access off. And I think other people have had a similar thing.”

His concern: “The general worry I have is that the power and convenience of these are so high, and the failures, when they happen, are maybe catastrophic, but the rates are so low, that we are going to kind of slide into this, like, you know what, YOLO, and hopefully it’ll be okay.”

“As the models get more capable, it gets harder to understand everything they’re doing. If there’s a misalignment in the model, if there’s some sort of complex problem that emerges over weeks or months of usage, you kind of put some security vulnerability into something you’re making.”

“I think what’s going to happen is the pressure to adopt these tools, to use them, not just the pressure, the, like, delight and the power of them, is going to be so great that people get pulled along into sort of not thinking enough about the complexity of how they’re running these things, how sure they are about, you know, whatever sandbox they’ve set up.”

“The general worry I have is that capability is going to rise very steeply. We’re going to get used to how the models work at a certain level and decide we trust them, and without building very good, I’ll call it big-picture, security infrastructure around it, we will sleepwalk into something.”

“I think that would be a great kind of company to build.”

3D Reasoning for Drug Design: Coming But Not in 2026

A biopharma developer asked about 3D reasoning capabilities for drug design.

“We’re going to get that solved. I don’t know if it’s a 2026 thing. But that is a super common request and I think we know how to do it. We just have a lot of other urgent areas to push on, but we will get there,” Altman responded.

University in the Age of AI: Make Your Own Decision

A Y Combinator participant who dropped out of university asked for advice on his parents’ pressure to finish.

“I dropped out of university, and it took my parents 10 years to stop asking when I was going to go back. So I think, like, parents are just going to do that. They love you and they’re trying to give you advice they think is best, and you just sort of keep explaining to them that you can always go back if you want, but the world is in a different place now and is going to keep being in a different place.”

“Everybody’s got to make their own decision, but I think you do need to make your own decision and not just do what society tells you to do.”

“Personally, I think this is a time where, if you are an AI builder, it is probably not the best use of your time to be in university right now. If you’re just, like, a sort of ambitious, high-agency, driven person, this is an unusual period of time. And, you know, you can always go back later. I think just tell your parents: it doesn’t mean that it’s not the right thing for many people, it doesn’t mean that it won’t be the right thing for you sometime, but right now you’ve got to do this thing, and I think they’ll understand eventually.”

On whether he still angel invests: “I respect the hustle, but no, not anymore. I miss it. I just got really busy with OpenAI, and it kind of gets strange if I end up investing in companies that are, like, big OpenAI customers, so I decided it’s easier not to.”

Sign In With OpenAI: Coming Soon

A developer from WorkOS requested “sign in with my ChatGPT account” functionality.

“We are going to do that. People ask me for it all the time,” Altman confirmed.

The developer asked about token budgets and memory access. Altman responded: “So we do want to figure out how to do this. It’s very scary because ChatGPT does know so much about you.”

“If you, like, tell a person that you’re very close to a bunch of secrets, you can be relatively confident they’ll know the exact social nuances of when to share what with whom, and when something overrules something else. Our models are not quite there, although they’re getting, like, pretty good at it.”

“I would, I think, feel uncomfortable if I connected my ChatGPT account to a bunch of sites and said, ‘Just use your judgment about like when to share what you know about me from all of my chat history and everything I’ve connected.'”

“But when we can get there, it will clearly be a cool thing to offer. And in the meantime, I think doing something just with, you know, token budgets and if I pay for the pro model, then I can use it on other services, that seems like a cool thing to do. So I think we will at least do that and we’ll try to figure out a way to get the information sharing right, but like we really don’t want to screw that up.”

OpenAI Hiring: Dramatically Slowing Growth

When asked about software engineering interviews at OpenAI, Altman revealed hiring plans.

“We’re going to keep hiring software developers, but, for the first time, and I know every other company and every other startup is thinking about this too, we are planning to dramatically slow down how quickly we grow, because we think we’ll be able to do so much more with fewer people.”

“A lot of the impediments that we face, or that other companies face, is just that the internal policies that have built up at most companies did not contemplate a majority of AI co-workers. And that’s going to take a while.”

“What I think we shouldn’t do and what I hope other companies won’t do either is hire super aggressively then realize all of a sudden AI can do a lot of stuff and you need fewer people and have to have some sort of very uncomfortable conversation.”

“So I think the right approach for us will be to hire more slowly, but keep hiring. I’m not a believer that, like, eventually, well, maybe someday far in the future, OpenAI has zero employees. But for a long time, I think we’ll just have a gradually increasing number of people doing much more stuff, and this is kind of what I expect the shape of the economy to look like more generally.”

On interviews: “In terms of what the interview looks like, it has not yet changed as much as it should, but I was in a meeting today with people talking about how we want it to change. We basically would like to sit someone down with something that would have been impossible for one person to do in two weeks, you know, this time last year, and watch them do it in 10 minutes or 20 minutes or whatever.”

“You want to see that people are going to be able to work in this new way very effectively.”

Companies Must Adopt AI Aggressively or Face Extinction

Altman warned about the future of companies that don’t adopt AI.

“There’s, like, a more general thing that a few of these questions have hinted at, which is: is the future going to be, you know, companies that don’t hire many people and have a lot of AI co-workers, or is it going to be that the companies that win in the future are entirely AI, you know, like a rack full of GPUs and no people? I really hope it’s the former.”

“There are a bunch of reasons why it seems like it could be something closer to the latter. But if companies don’t adopt AI aggressively, if companies don’t figure out how to hire people that are going to use the tools really effectively, they will eventually just be out-competed by a fully AI company that doesn’t have the sort of silly policies that prevent big companies from using AI, or whatever. And that feels like it’ll be a very destabilizing thing for society.”

“I think it’s very important that companies adopt AI in a big way very quickly.”

Human-Created Art Remains Preferred Over AI Art

A cinematographer asked about the relationship between human creative identity and AI-assisted creation.

“The place that we can study and I think learn the most right now is image generation. It’s been around the longest. The creative community has used it and disliked it and liked it the most.”

Among the lessons he drew: “One of them is that consumers of images report dramatically higher appreciation, satisfaction, whatever, if they are told a person made it versus an AI. And I think this is going to be a deep trend in the coming decades: we care a lot about other people, and we care very little about the machines.”

Altman shared an internet experiment: “They would go to people who said they really hated AI-generated art, like still images. And the people would also say, ‘I can tell for sure which the AI-generated images are, because they’re terrible.’ And they’d show them 10 images and say, ‘Rank your favorite ones.’ Half would be done entirely by a human, half entirely by AI. And, like, fairly consistently, they would rank the AI ones at the top. And then, as soon as they were told that, they would say, ‘Actually, I don’t like it, and, you know, this is not the one I want.’”

On his personal reaction: “When I finish reading a book that I love, the first thing I want to do is look up the author and understand their life and, you know, kind of how it led them to do that, because I felt this connection to this person that I don’t know, and now I want to understand them. And I think if I read a great novel and at the end I learned it was written by an AI, I would sort of be kind of sad and crestfallen.”

“I think this is going to be a deep and durable trend. However, if the art is even a little bit human-directed (and how little, maybe, we’ll have to figure out how people feel over time), people don’t seem to have that same strong emotional reaction. And this has been going on for a long time. You know, if digital artists use Photoshop, people still love their art.”

“My expectation, given the behavior that we’re seeing now from creators and consumers, is that the person and their life story and their editing or curation or whatever goes into that process is going to matter a lot, and we’re not going to want the entirely AI-generated art, broadly speaking, at least from what we can learn from images.”

AI Memory: Full Computer Access Without Manual Grouping

When asked about personalization and whether users should group memories into work versus personal categories, Altman shared his vision.

“We’re going to push super hard on memory and personalization. Clearly, people want it, and it delivers a way better way to use these tools.”

“I have gone through my own evolution here, but at this point, I am ready for ChatGPT to just look at my whole computer and my whole internet and just know everything. The value from it is so high and I don’t feel uncomfortable about it in the way that I used to.”

“I really hope all AI companies take security and privacy super seriously, and I hope that society as a whole does too, because the utility is so great. AI is going to know about my whole life; I’m not going to get in the way of that. I don’t yet feel ready to wear the glasses recording everything. I think that’s still uncomfortable for a bunch of reasons, but I do feel ready to say, ‘Hey, you can just have access to my computer and figure out what’s going on and be useful to me and understand everything and have a perfect representation of my digital life.’”

On manual grouping: “I am lazy. I think most users are lazy too, though. And I don’t want to sit there and have to group: this is a work memory, this is a personal memory. What I want, and what I believe is possible, is for AI to have such a deep understanding of the complex rules and interactions and sort of hierarchy of my life that it knows what to use when and what to expose where.”

“We better figure that out because I think that’s what most users will want too.”

Most Important Skills: High Agency, Idea Generation, Resilience

An international student from Vietnam asked what skills people should learn in the age of AI.

“These are all kind of like soft skills. None of them is like ‘learn to program,’ which was so obviously the right thing over a recent period of time, and now it’s not. But skills like becoming high-agency, getting good at generating ideas, being very resilient, and being very adaptable to a rapidly changing world are going to matter more than any specific skill, and I think these are all learnable.”

“This is one of the surprises to me of having been a startup investor: the degree to which you can take people and, in a three-month boot-camp-style program, make them extremely formidable on all the axes I was just talking about. It was a big update for me, and so I think these are the skills that may matter most, and they’re quite learnable.”

What OpenAI Wants to Build for Developers

Altman closed by asking for developer input.

“We really do want input on what you’d like us to build. Like, assume we will have a model that is 100 times more capable than the current model, with 100 times the context length, 100x the speed, 100x reduced cost, perfect tool calling, extreme coherence… we’re going to get there. Tell us what you’d like us to build.”

“If you’re like, ‘Hey, I just need this API,’ or ‘I just need this kind of primitive,’ or ‘I just need this sort of runtime,’ whatever it is, we’re building it for you and we’d like to get it right.”

Here is a link to the full YouTube video: Link
