Physical AI, part 1: The basics

We explore how AI is moving from screens to machines that can see, learn and act across robotics, mobility and industry.

Vanessa Cook

Lynelle Huskey

February 2026

Key takeaways

  • Physical AI marks the next major phase of AI commercialization, extending intelligence beyond software and into machines that can see, decide and act in the real world — unlocking multi-trillion-dollar opportunities across robotics, autonomous vehicles and drones.
  • Progress is accelerating due to converging advances in models, data, compute and simulation, with multimodal foundation models, synthetic data and world models enabling robots to learn, interact and train more safely and efficiently.
  • With these advances, robotics is shifting from rules-based systems to data-driven, end-to-end AI, with open-source innovation and improved observability paving the way for adaptable, general-purpose robots and more autonomous industrial systems.
  • This is the first publication in a three-part series exploring what happens when AI leaves the chat and shows up in the real world.

Read our full analysis for a more in-depth look at these trends.
