The steerable generalist robot foundation model that achieves compositional generalization — tackling new tasks without retraining.
Physical Intelligence's π0.7 is a steerable generalist robotic foundation model — designed to handle a wide range of physical tasks across different robot bodies through a single unified model.
Built like a large language model, but for physical interaction: π0.7 encodes robot perception, language instructions, and motor outputs in a unified model trained across diverse tasks and embodiments.
You can guide π0.7 through novel tasks using natural language coaching — including operating appliances it has never seen before, like an air fryer, purely through verbal instructions.
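From the caller's side, language coaching might look something like the toy sketch below. Everything here is hypothetical: Physical Intelligence has not published π0.7's interface, and the class and method names are illustrative stand-ins. The point is only that a coachable policy conditions each control step on the most recent verbal instruction.

```python
from dataclasses import dataclass, field

@dataclass
class CoachablePolicy:
    """Toy stand-in for a language-conditioned robot policy (hypothetical interface)."""
    instructions: list = field(default_factory=list)

    def coach(self, utterance: str) -> None:
        # A human coach can interject new guidance mid-task.
        self.instructions.append(utterance)

    def act(self, observation: dict) -> str:
        # A real vision-language-action model would fuse camera input and
        # language into motor commands; here we just report which
        # instruction the next action would be conditioned on.
        current = self.instructions[-1] if self.instructions else "explore"
        return f"action conditioned on: {current!r}"

policy = CoachablePolicy()
policy.coach("open the air fryer basket")
print(policy.act({"camera": "rgb_frame"}))
policy.coach("turn the temperature dial to 180")
print(policy.act({"camera": "rgb_frame"}))
```

The coaching step is deliberately decoupled from the control step: the coach can speak at any time, and the policy picks up the new instruction on its next action.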
π0.7 works across different robot hardware without extensive hardware-specific fine-tuning. Demonstrated on humanoids, arms, and mobile platforms from multiple manufacturers.
Perhaps the headline capability: π0.7 can fold laundry on a robot it was never trained on, with zero task-specific training data — pure compositional transfer.
π0.7 has been demonstrated across a range of physically demanding tasks that previously required separate, specialized models for each.
Folds laundry on a new robot embodiment with zero specific training data. Uses compositional transfer of manipulation skills to handle novel cloth geometries and robot kinematics.
Operates unseen kitchen appliances via language coaching alone. Demonstrated with an air fryer — following verbal instructions to locate controls, adjust settings, and complete tasks.
Performs precise dexterous manipulation for screw installation tasks. Requires fine motor control, force sensing, and spatial reasoning — all handled by the unified model.
Assembles small, intricate objects like pinwheels. Demonstrates sub-centimeter manipulation precision critical for electronics assembly and small-parts manufacturing.
Chains complex, multi-step physical workflows without explicit sub-task programming. The model reasons over task structure and adapts to intermediate states dynamically.
Recovers from errors and unexpected states mid-task. Unlike brittle scripted robots, π0.7 handles deviations from expected conditions in real-world environments.
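The recovery behavior described above can be sketched as a generic closed loop (purely illustrative; π0.7's actual control stack is not public): check a postcondition after each step and retry on deviation, rather than following a fixed script.

```python
# Illustrative sketch of closed-loop recovery, not π0.7's architecture:
# every step is paired with a postcondition checked against fresh
# observations, and failed steps are retried before the task aborts.

def run_task(steps, execute, observe, max_retries=2):
    """Run (name, postcondition) steps; retry a step whose check fails."""
    log = []
    for name, postcondition in steps:
        for attempt in range(max_retries + 1):
            execute(name)
            if postcondition(observe()):
                log.append((name, attempt))
                break
        else:
            raise RuntimeError(f"could not recover at step {name!r}")
    return log

# Toy world in which the first grasp slips and the retry succeeds.
world = {"grasped": False, "tries": 0}

def execute(step):
    if step == "grasp shirt":
        world["tries"] += 1
        world["grasped"] = world["tries"] >= 2  # first attempt fails

def observe():
    return dict(world)

steps = [("grasp shirt", lambda s: s["grasped"])]
print(run_task(steps, execute, observe))  # → [('grasp shirt', 1)]
```

A scripted robot would plow ahead after the slipped grasp; the loop above notices the unmet postcondition and tries again before moving on.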
The defining innovation of π0.7: how it tackles tasks it has never seen before.
Prior robot AI required training a separate model for each robot + task combination. The combinatorial explosion made scaling impossible.
π0.7's compositional generalization mirrors how GPT-style language models combine linguistic concepts: once the primitives are learned, new combinations emerge at inference time without retraining.
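The analogy can be made concrete with a toy sketch. This is conceptual only: π0.7's real primitives are learned motor policies, not string-producing functions, and the names below are invented for illustration. Once each primitive exists, a verb-object sequence never executed as a whole still resolves to a plan at inference time, with no retraining.

```python
# Conceptual illustration of compositional generalization (not π0.7's
# architecture): primitives learned once, novel tasks assembled from
# them at inference time.

PRIMITIVES = {
    "grasp": lambda obj: f"grasp({obj})",
    "move":  lambda obj: f"move({obj})",
    "fold":  lambda obj: f"fold({obj})",
    "place": lambda obj: f"place({obj})",
}

def compose(task_spec):
    """Map a (verb, object) sequence to a motor-command plan."""
    return [PRIMITIVES[verb](obj) for verb, obj in task_spec]

# A combination never seen as a whole still resolves, because every
# primitive is individually known:
plan = compose([("grasp", "shirt"), ("fold", "shirt"), ("place", "basket")])
print(plan)  # → ['grasp(shirt)', 'fold(shirt)', 'place(basket)']
```

This is the scaling argument in miniature: the cost of training grows with the number of primitives, while the space of reachable tasks grows with the number of their combinations.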
π0.7's cross-embodiment design means it isn't locked to a single hardware platform. The model adapts its action space to each robot's kinematics.
- ✓ Demonstrated: Humanoids — full-body manipulation with bipedal locomotion, shown on multiple humanoid platforms.
- ✓ Demonstrated: Fixed-base manipulation arms, including 6-DoF and 7-DoF configurations for dexterous tasks.
- ✓ Supported: Wheeled or legged mobile platforms with arm attachments for navigation + manipulation workflows.
- ✓ Demonstrated: Dual-arm systems for tasks requiring two-handed coordination, like folding and assembly.
- ✓ Supported: Multi-fingered end-effectors for precision grasping and fine manipulation of small objects.
- ✓ Zero-shot: New robot configurations not seen during training — the zero-shot transfer case that defines π0.7's breakthrough.

How Physical Intelligence's π0.7 compares to the leading alternatives in the robot foundation model landscape.
| Feature | π0.7 (Phys. Intel.) | Google RT-2 | OpenAI Robotics | Tesla Optimus |
|---|---|---|---|---|
| Release | April 2026 | 2023 | Discontinued 2019 / Research | 2022–ongoing |
| Compositional Generalization | ✓ Core feature | ~ Limited | — N/A | — N/A |
| Cross-Embodiment Transfer | ✓ Yes | ✗ Single hardware | ✗ Single hardware | ✗ Proprietary only |
| Language Coaching | ✓ Novel task steering | ✓ Language grounding | ~ Research only | ~ Limited |
| Zero-Shot Task Transfer | ✓ Demonstrated | ~ Partial | — Unknown | ✗ Scripted tasks |
| Dexterous Manipulation | ✓ Screws, assembly | ~ Basic grasping | ✓ Dexterous hand research | ~ Industrial tasks |
| API / Model Access | ✓ Enterprise + Research | ~ Research access | ✗ Not available | ✗ Proprietary |
| Deployment Model | Cloud + On-device | Cloud (Google) | Research internal | On-device (Optimus) |
Where π0.7 is being applied today and where it's heading next.
- Demonstrated: Laundry folding, tidying, and household chores. π0.7's cloth manipulation and cross-embodiment transfer make it the most capable household robot AI to date.
- Demonstrated: Screw installation, pinwheel assembly, and precision part handling. Enables flexible automation without reprogramming for every product variant.
- Demonstrated: Operating commercial kitchen appliances via language coaching. Targeted at food service automation where equipment and layouts change frequently.
- In Deployment: Sorting, picking, and packing across diverse product categories without per-SKU programming. Compositional generalization handles novel items at scale.
- Pilot Programs: Handling lab instruments and pipetting tasks. Language-guided control allows scientists to direct robots in natural language without robotics expertise.
- Coming Soon: Helping people with limited mobility complete daily tasks. Cross-embodiment transfer means π0.7 can adapt to different assistive device configurations.

Physical Intelligence offers multiple access paths for enterprises and researchers wanting to deploy π0.7.
π0.7 is available through Physical Intelligence's commercial platform. Access options range from cloud API calls to on-device deployment for production robotics systems.
- REST API for integrating π0.7 into existing robotics control stacks. Supports real-time inference for manipulation tasks with sub-100ms latency. physicalintelligence.company →
- Academic and research institutions can apply for access to π0.7 weights and evaluation tools through Physical Intelligence's research program. Apply for research access →
- Optimized model variants for edge deployment on robot compute hardware. Enables offline operation without cloud dependency for production environments. Edge deployment docs →
- Fine-tune π0.7 on proprietary task data to specialize for your use case while retaining the base model's generalization capabilities. Fine-tuning guide →

Frequently asked questions about Physical Intelligence's π0.7 robot foundation model.