
Sam Altman "The Future of Work" and the next 12 months...
AI-Generated Summary
Airdroplet AI v0.2
Here's a summary of the video based on the transcript:
At the Sequoia Capital AI Ascent, Sam Altman shared his thoughts on the current state and near-term future of AI, particularly its impact on work and innovation. He highlighted how younger generations are uniquely using AI as an operating system, the critical role of AI in writing code, and where major value creation is expected in the next 1-3 years, focusing on agents, scientific discovery, and eventually physical robots. Altman also contrasted the agility of startups with the challenges faced by larger companies in adopting this rapidly changing technology.
Here are some key takeaways and details discussed:
How Different Generations Use AI:
- Younger people, particularly those in college, are surprisingly using AI like an "operating system." This means setting it up in complex ways, connecting it to files, and using sophisticated, sometimes memorized, prompts to manage tasks and information.
- It's pretty wild that some younger users are even asking AI for advice on significant life decisions, relying on its memory capabilities to have a full context of their personal lives and relationships.
- Slightly older users (those in their 20s and 30s) tend to use it more as a "life advisor," while older demographics use it primarily as a replacement for basic Google searches.
- The difference in how quickly and deeply younger people are integrating AI compared to older generations is "crazy," reminiscent of how kids adopted smartphones instantly while many adults took years to grasp basic functions. This generational divide is also mirrored in how companies are adopting AI.
AI's Role in Coding:
- OpenAI uses its own AI models extensively internally to write code. It's not just writing simple filler code, but "meaningful code," although exactly how much is hard to quantify (measuring by lines of code is dismissed as a "dumb" metric).
- AI's ability to write code is considered "more central" to the future of OpenAI than just being one of many products offered.
- The ambition is for AI models to go beyond returning text or images and instead be able to generate entire programs or custom-rendered code based on user requests.
- Writing code is viewed as absolutely central to enabling AI models to "make things happen in the world," essentially acting as the interface to call APIs and automate real-world tasks.
- This powerful coding capability will definitely be exposed through OpenAI's API and platform, and making ChatGPT excellent at writing code is a major focus.
- It feels like AI-powered coding assistants and agents are going to be a massive area for creating value.
Where Value Creation Happens Next:
- The core drivers of value creation in AI will continue to be building more and better infrastructure, developing smarter AI models, and creating the necessary "scaffolding" to integrate these AI tools effectively into society.
- Looking specifically at the next year or so (like 2025), the prediction is it will be the "year of agents doing work." Coding is expected to be the "dominant category" within this agent work, but there will be others too. This sounds like the kind of AI impact that will directly change daily tasks.
- The year after that (maybe 2026) could see AIs become more involved in scientific discovery, assisting humans or even making significant findings themselves. This aligns with the idea that scientific progress is key to long-term economic growth.
- Looking further out, maybe around 2027, the expectation is for AI's impact to move from the digital realm into the physical world, with robots evolving from interesting curiosities into serious economic drivers.
- These specific year predictions are just educated guesses made on the spot, but they paint a clear picture of the expected progression.
Why Startups Win Against Big Companies:
- It's clear that smaller companies are currently "beating the crap out of" larger ones when it comes to innovating with AI. This isn't new; it happens during every major technology revolution.
- Big companies struggle because they are incredibly "stuck in their ways" and have painfully slow decision-making processes. Trying to adopt rapidly changing AI technology when you have internal committees that meet once a year feels incredibly inefficient and "painful to watch."
- This dynamic is essentially "creative destruction" – how the industry moves forward by allowing more agile players to overtake established ones.
- It feels both disappointing and completely predictable that large companies are lagging. The prediction is they'll waste a couple more years resisting before a frantic, likely "too late," attempt to catch up, while startups will just "blow past" them.
- This mirrors the earlier point about how much faster younger people adopt new technology like AI compared to older generations; companies are experiencing a similar kind of inertia.
Dealing with Adversity:
- Facing significant challenges or adversity as a founder gets emotionally easier over time. Even though the abstract "stakes" of the problems increase, your ability to deal with them and the resilience you build make each experience less emotionally taxing than the last.
- A really insightful point is that the hardest part isn't necessarily the immediate moment of crisis itself (when adrenaline and support often kick in).
- The real challenge is managing your psychology and "picking up the pieces" during the "fallout" and rebuilding phase after the acute crisis has passed – focusing on "day 60" after things went wrong, not just "day 0."
- There's much less discussion and guidance available on navigating this aftermath or recovery period compared to advice on handling the crisis moment itself.
- Learning how to manage your mindset and rebuild after a major setback is something founders need to practice and get better at. This feels like a critical, overlooked aspect of navigating the startup journey.