Released April 2026

The Complete Guide to
Physical Intelligence π0.7

The steerable generalist robot foundation model that achieves compositional generalization — tackling new tasks without retraining.

Compositional Generalization · Cross-Embodiment Transfer · Dexterous Manipulation · Language-Guided Control · Zero-Shot Task Transfer · GPT Moment for Robotics
What is π0.7?

Physical Intelligence's π0.7 is a steerable generalist robotic foundation model — designed to handle a wide range of physical tasks across different robot bodies through a single unified model.

🧠

Foundation Model Architecture

Built like a large language model but for physical interaction. π0.7 encodes robot perception, language instructions, and motor outputs into a unified model trained across diverse tasks and embodiments.

🔀

Steerable by Language

You can guide π0.7 through novel tasks using natural language coaching — including operating appliances it has never seen before, like an air fryer, purely through verbal instructions.

🤝

Cross-Embodiment Transfer

π0.7 works across different robot hardware without extensive hardware-specific fine-tuning. Demonstrated on humanoids, arms, and mobile platforms from multiple manufacturers.

Zero-Shot Generalization

Perhaps the headline capability: π0.7 can fold laundry on a robot it was never trained on, with zero task-specific training data — pure compositional transfer.

Key Capabilities

π0.7 has been demonstrated across a range of physically demanding tasks that previously required separate, specialized models for each.

👕

Laundry Folding

Folds laundry on a new robot embodiment with zero embodiment-specific training data. Uses compositional transfer of manipulation skills to handle novel cloth geometries and robot kinematics.

🍳

Appliance Operation

Operates unseen kitchen appliances via language coaching alone. Demonstrated with an air fryer — following verbal instructions to locate controls, adjust settings, and complete tasks.

🔩

Screw Installation

Performs precise dexterous manipulation for screw installation tasks. Requires fine motor control, force sensing, and spatial reasoning — all handled by the unified model.

🌀

Pinwheel Assembly

Assembles small, intricate objects like pinwheels. Demonstrates sub-centimeter manipulation precision critical for electronics assembly and small-parts manufacturing.

🏗️

Multi-Step Tasks

Chains complex, multi-step physical workflows without explicit sub-task programming. The model reasons over task structure and adapts to intermediate states dynamically.

🔄

Recovery & Adaptation

Recovers from errors and unexpected states mid-task. Unlike brittle scripted robots, π0.7 handles deviations from expected conditions in real-world environments.

Compositional Generalization Explained

The defining innovation of π0.7: how it tackles tasks it has never seen before.

How Skills Combine →

grasp cloth + new robot arm → fold laundry
read display + press button → operate air fryer
pinch grip + rotate wrist → install screw
insert piece + align parts → assembly task
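The pairings above can be sketched as a toy skill-composition pipeline. Every name below (`grasp_cloth`, `fold`, `compose`) is illustrative only — this is not the π0.7 interface, just a minimal picture of primitives learned once and recombined into new tasks at inference time:

```python
# Toy sketch of compositional skill reuse. All skill names are hypothetical;
# this illustrates the idea, not Physical Intelligence's actual API.

def grasp_cloth(state):
    """Primitive: pick up the cloth in view."""
    return state | {"holding": "cloth"}

def fold(state):
    """Primitive: fold whatever is currently held."""
    return state | {"folded": state.get("holding")}

def compose(*skills):
    """Chain learned primitives into a new task without retraining."""
    def task(state):
        for skill in skills:
            state = skill(state)
        return state
    return task

# "grasp cloth + new robot arm → fold laundry" as a composition:
fold_laundry = compose(grasp_cloth, fold)
result = fold_laundry({"robot": "new-arm"})
print(result["folded"])  # → cloth
```

The point of the sketch: once the primitives exist, `fold_laundry` is just a new composition, which is why no per-task retraining is needed.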

Why This Is the "GPT Moment"

Prior robot AI required training a separate model for each robot + task combination. The combinatorial explosion made scaling impossible.

π0.7's compositional generalization mirrors how GPT-style language models combine linguistic concepts: once the primitives are learned, new combinations emerge at inference time without retraining.

  • Skills learned on Robot A transfer to Robot B
  • Skills from Task X combine to solve new Task Y
  • Language coaching fills gaps for genuinely novel scenarios
  • Failure recovery emerges from the combination of sub-skills

Supported Robot Embodiments

π0.7's cross-embodiment design means it isn't locked to a single hardware platform. The model adapts its action space to each robot's kinematics.
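One common way to picture "adapting the action space to each robot's kinematics" is a shared latent action projected through a per-embodiment adapter. The sketch below is an assumption about the general technique, not the model's actual architecture — the adapter matrices and dimensions are invented:

```python
import numpy as np

# Illustrative sketch: a shared latent action is projected into each
# embodiment's joint space by a per-robot adapter, so one policy output
# can drive both 6-DoF and 7-DoF arms. Shapes and names are hypothetical.

rng = np.random.default_rng(0)
LATENT_DIM = 32

# Hypothetical per-embodiment projection matrices (joints x latent).
adapters = {
    "6dof_arm": rng.normal(size=(6, LATENT_DIM)),
    "7dof_arm": rng.normal(size=(7, LATENT_DIM)),
}

def decode_action(latent, embodiment):
    """Map a shared latent action to robot-specific joint commands."""
    return adapters[embodiment] @ latent

latent = rng.normal(size=LATENT_DIM)
print(decode_action(latent, "6dof_arm").shape)  # → (6,)
print(decode_action(latent, "7dof_arm").shape)  # → (7,)
```

The same latent command yields a 6-vector for one arm and a 7-vector for the other, which is the essence of a single model serving multiple embodiments.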

🤖

Humanoid Robots

Full-body manipulation with bipedal locomotion. Demonstrated on multiple humanoid platforms.

✓ Demonstrated
🦾

Robotic Arms

Fixed-base manipulation arms, including 6-DoF and 7-DoF configurations for dexterous tasks.

✓ Demonstrated
🛞

Mobile Manipulators

Wheeled or legged platforms with arm attachments for navigation + manipulation workflows.

✓ Supported
✌️

Bimanual Robots

Dual-arm systems for tasks requiring two-handed coordination, like folding and assembly.

✓ Demonstrated
🖐️

Dexterous Hands

Multi-fingered end-effectors for precision grasping and fine manipulation of small objects.

✓ Supported
⚙️

Custom Embodiments

New robot configurations not seen during training — the zero-shot transfer case that defines π0.7's breakthrough.

✓ Zero-shot

π0.7 vs. Other Robot AI Models

How Physical Intelligence's π0.7 compares to the leading alternatives in the robot foundation model landscape.

| Feature | π0.7 (Phys. Intel.) | Google RT-2 | OpenAI Robotics | Tesla Optimus |
| --- | --- | --- | --- | --- |
| Release | April 2026 | 2023 | Discontinued 2019 / Research | 2022–ongoing |
| Compositional Generalization | ✓ Core feature | ~ Limited | — N/A | — N/A |
| Cross-Embodiment Transfer | ✓ Yes | ✗ Single hardware | ✗ Single hardware | ✗ Proprietary only |
| Language Coaching | ✓ Novel task steering | ✓ Language grounding | ~ Research only | ~ Limited |
| Zero-Shot Task Transfer | ✓ Demonstrated | ~ Partial | — Unknown | ✗ Scripted tasks |
| Dexterous Manipulation | ✓ Screws, assembly | ~ Basic grasping | ✓ Dexterous hand research | ~ Industrial tasks |
| API / Model Access | ✓ Enterprise + Research | ~ Research access | ✗ Not available | ✗ Proprietary |
| Deployment Model | Cloud + On-device | Cloud (Google) | Research internal | On-device (Optimus) |

Real-World Use Cases

Where π0.7 is being applied today and where it's heading next.

🏠

Household Robotics

Laundry folding, tidying, and household chores. π0.7's cloth manipulation and cross-embodiment transfer make it the most capable household robot AI to date.

Demonstrated
🏭

Manufacturing & Assembly

Screw installation, pinwheel assembly, and precision part handling. Enables flexible automation without reprogramming for every product variant.

Demonstrated
🍳

Kitchen & Food Service

Operating commercial kitchen appliances via language coaching. Targeted at food service automation where equipment and layouts change frequently.

Demonstrated
📦

Warehouse Logistics

Sorting, picking, and packing across diverse product categories without per-SKU programming. Compositional generalization handles novel items at scale.

In Deployment
🔬

Lab & Research Automation

Handling lab instruments and pipetting tasks. Language-guided control allows scientists to direct robots in natural language without robotics expertise.

Pilot Programs

Assistive Robotics

Helping people with limited mobility complete daily tasks. Cross-embodiment transfer means π0.7 can adapt to different assistive device configurations.

Coming Soon

API Access

Physical Intelligence offers multiple access paths for enterprises and researchers wanting to deploy π0.7.

π0.7 is available through Physical Intelligence's commercial platform. Access options range from cloud API calls to on-device deployment for production robotics systems.

Enterprise API

REST API for integrating π0.7 into existing robotics control stacks. Supports real-time inference for manipulation tasks with sub-100ms latency.

physicalintelligence.company →
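For a sense of what a real-time inference call might look like, here is a minimal payload sketch. The endpoint path, field names, and latency field are all assumptions for illustration — the documented API may differ, so consult the official docs before integrating:

```python
import json

# Hypothetical request body for one manipulation-inference step.
# Field names and structure are invented, not the documented π0.7 API.

def build_inference_request(instruction, camera_frames, robot_id):
    """Assemble the JSON body for a single real-time inference call."""
    return {
        "robot_id": robot_id,
        "instruction": instruction,     # natural-language coaching
        "observations": camera_frames,  # e.g. base64-encoded camera frames
        "max_latency_ms": 100,          # matches the sub-100ms target above
    }

body = build_inference_request(
    instruction="open the air fryer drawer",
    camera_frames=["<frame0>", "<frame1>"],
    robot_id="arm-7dof-01",
)
print(json.dumps(body, indent=2))
# A client would POST this body to the inference endpoint with its API key.
```

Keeping the observation payload small (recent frames only) is what makes a sub-100ms round trip plausible in a control loop.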

Research Access

Academic and research institutions can apply for access to π0.7 weights and evaluation tools through Physical Intelligence's research program.

Apply for research access →

On-Device Deployment

Optimized model variants for edge deployment on robot compute hardware. Enables offline operation without cloud dependency for production environments.

Edge deployment docs →

Custom Fine-Tuning

Fine-tune π0.7 on proprietary task data to specialize for your use case while retaining the base model's generalization capabilities.

Fine-tuning guide →
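A common recipe for "specialize without losing generalization" is to freeze most of the base model and train only a small head. The config below is a sketch of that general technique under invented names — it is not Physical Intelligence's documented fine-tuning procedure:

```python
# Illustrative fine-tuning recipe (hypothetical names and values):
# freeze the pretrained backbone, adapt only a small action head,
# so the base model's general skills are preserved.

finetune_config = {
    "base_checkpoint": "pi-0.7",          # assumed checkpoint name
    "freeze_backbone": True,              # keep pretrained skills intact
    "trainable_modules": ["action_head"],
    "learning_rate": 1e-5,                # small LR limits forgetting
    "epochs": 3,
    "dataset": "proprietary_task_demos",
}

def trainable_fraction(cfg, total_modules=24):
    """Rough share of model modules updated under this recipe."""
    return len(cfg["trainable_modules"]) / total_modules

print(f"{trainable_fraction(finetune_config):.1%} of modules trainable")
```

Updating only a few percent of the model is the standard trade-off: enough capacity to learn the proprietary task, little enough to avoid catastrophic forgetting of the base capabilities.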

FAQ

Frequently asked questions about Physical Intelligence's π0.7 robot foundation model.

What is Physical Intelligence π0.7?
π0.7 (pi-zero-point-seven) is a steerable generalist robotic foundation model released by Physical Intelligence in April 2026. It can handle novel tasks by recombining learned skills — a property called compositional generalization — without requiring task-specific training data. It's widely regarded as a "GPT moment" for robotics.
What makes π0.7 different from previous robot AI models?
π0.7 introduces compositional generalization: it can tackle brand-new tasks by intelligently combining skills it already knows. Unlike prior models that need retraining for each new task, π0.7 transfers across robot embodiments. It can fold laundry on a robot it was never trained on, and operate unseen appliances guided only by natural language coaching.
Which robots does π0.7 support?
π0.7 is designed for cross-embodiment transfer, meaning it works on different robot hardware types — humanoid robots, robotic arms, mobile manipulators, and bimanual systems — without extensive hardware-specific fine-tuning. The model adapts its action space to each robot's kinematics automatically.
Can π0.7 be used for industrial or household tasks?
Yes. Demonstrated use cases include laundry folding (household), screw installation and pinwheel assembly (industrial/dexterous manipulation), and operating kitchen appliances like air fryers via language coaching. The model generalizes broadly across domains without needing task-specific retraining.
How do I access the π0.7 model or API?
Physical Intelligence offers access to π0.7 through enterprise and research programs at physicalintelligence.company. Options include cloud API access, on-device deployment for production systems, and custom fine-tuning programs. Research institutions can apply for academic access to model weights and evaluation tools.
Is π0.7 open source?
π0.7 is not fully open source. Physical Intelligence offers commercial licensing for enterprises and a research access program for academic institutions, but the core model weights are not publicly released under an open-source license. This is consistent with how most frontier AI labs manage their commercial foundation models.
How does π0.7 compare to Tesla Optimus or Google RT-2?
π0.7 uniquely offers compositional generalization and cross-embodiment transfer — neither Tesla Optimus nor Google RT-2 demonstrates these at the same level. Optimus is designed exclusively for Tesla's proprietary hardware and focuses on scripted manufacturing tasks. RT-2 uses language grounding but doesn't demonstrate zero-shot cross-embodiment transfer. π0.7 is the first model to treat this generalization as a core design goal.