Pi0.5 PyTorch Version Deployment

OpenPI Server and Client Quick Start Guide


This tutorial walks you through quickly deploying and running the OpenPI inference server, then verifying the model's behavior through a client.

The full process covers environment configuration, dependency installation, model download, and starting and testing the inference service.


1. Server Quick Start

1.1 Create and activate the environment

conda create -n openpi python=3.11.13 -y
conda activate openpi

1.2 Clone the project code

There is no need to pull submodules recursively:

git clone https://github.com/Physical-Intelligence/openpi.git
cd openpi/

1.3 Install project dependencies

Using the Tsinghua mirror is recommended to speed up dependency downloads:

# Set pip source to Tsinghua Mirror
pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

# Install uv dependency management tool
pip install uv

# Configure uv default mirror source
export UV_DEFAULT_INDEX=https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

# Skip LFS large file download and quickly synchronize dependencies
GIT_LFS_SKIP_SMUDGE=1 uv sync
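
To confirm the sync succeeded, you can run a quick import check inside the uv-managed environment. This is a minimal sketch: the check_env.py filename is illustrative, and the openpi top-level import assumes the repo's package layout.

# check_env.py: sanity-check that the synced environment imports the core packages.
# Run with: uv run python check_env.py
import torch   # PyTorch backend used by the Pi0.5 checkpoint
import openpi  # package installed by uv sync; if this fails, the layout may differ

print("torch", torch.__version__)
print("openpi imported from", openpi.__file__)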

1.4 Download Pi0.5 model and configuration files

This step will pull the PyTorch weights and normalization parameters of Pi0.5.

# Create model directory
mkdir -p pi_models/pytorch_checkpoints/
cd pi_models/pytorch_checkpoints/

# Clone the Pi0.5 model from ModelScope
git clone https://www.modelscope.cn/lerobot/pi05_libero.git

# Create the assets directory and write the normalization statistics
mkdir -p pi05_libero/assets/physical-intelligence/libero
echo '{
  "norm_stats": {
    "state": {
      "mean": [
        -0.04363870248198509,
        0.03525487706065178,
        0.7637033462524414,
        2.9673683643341064,
        -0.2108035385608673,
        -0.1297520250082016,
        0.027788693085312843,
        -0.028010232374072075
      ],
      "std": [
        0.10337679088115692,
        0.15188011527061462,
        0.38154250383377075,
        0.3545231223106384,
        0.929176390171051,
        0.330748051404953,
        0.014128931798040867,
        0.013960899785161018
      ],
      "q01": [
        -0.3524468903720379,
        -0.26824864755272865,
        0.04083745917417109,
        1.5317653684616088,
        -2.7152330031871794,
        -1.076538143157959,
        0.001715825623134151,
        -0.04003722561979666
      ],
      "q99": [
        0.13891278689503672,
        0.3251991607129573,
        1.2568962905768304,
        3.26276856803894,
        2.4437233173847197,
        0.5638469840288161,
        0.04030780866963323,
        -0.0017131616945378486
      ]
    },
    "actions": {
      "mean": [
        0.026827840134501457,
        0.08886060863733292,
        -0.09983397275209427,
        0.00024006747116800398,
        0.0012838079128414392,
        -0.0029443209059536457,
        -0.1305243819952011
      ],
      "std": [
        0.3311910927295685,
        0.37191954255104065,
        0.45225635170936584,
        0.03948824852705002,
        0.06278067082166672,
        0.07317619770765305,
        0.9914451241493225
      ],
      "q01": [
        -0.747375,
        -0.796125,
        -0.9375,
        -0.11580300460159779,
        -0.16942972007393836,
        -0.194502209174633,
        -1.0
      ],
      "q99": [
        0.937125,
        0.8594999999999999,
        0.937125,
        0.1402260055720806,
        0.18103543001413347,
        0.3115457148551941,
        0.9996
      ]
    }
  }
}' > pi05_libero/assets/physical-intelligence/libero/norm_stats.json

💡 Note: These norm stats may change as the official checkpoints iterate; keep this file in sync with the official release.
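
Before starting the server, it is worth sanity-checking the file you just wrote. The sketch below is illustrative only: it confirms the JSON parses and that the state and actions entries have the 8 and 7 dimensions shown above. Run it from pi_models/pytorch_checkpoints/, the directory used in the commands above.

# verify_norm_stats.py: confirm norm_stats.json parses and has the expected shapes.
import json

with open("pi05_libero/assets/physical-intelligence/libero/norm_stats.json") as f:
    stats = json.load(f)["norm_stats"]

# state is 8-dimensional and actions are 7-dimensional in the JSON above
for key, dim in [("state", 8), ("actions", 7)]:
    for field in ("mean", "std", "q01", "q99"):
        assert len(stats[key][field]) == dim, f"{key}.{field} should have {dim} entries"

print("norm_stats.json looks consistent")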


1.5 Start the inference service

By default the server pulls any missing files from Hugging Face; setting HF_ENDPOINT points it at a mirror, which helps when the official endpoint is slow or unreachable.

export HF_ENDPOINT=https://hf-mirror.com
uv run scripts/serve_policy.py \
    --env LIBERO \
    policy:checkpoint \
    --policy.config=pi05_libero \
    --policy.dir=pi_models/pytorch_checkpoints/pi05_libero

After starting, you will see output similar to the following, indicating that the service started successfully:

[INFO] Policy server listening on 0.0.0.0:8000
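
Before wiring up the client, you can confirm the port is actually reachable with a plain TCP probe. This is a minimal sketch assuming the default 0.0.0.0:8000 shown in the log above; adjust the host and port if yours differ.

# probe_server.py: check that the policy server is accepting TCP connections.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    try:
        sock.connect(("127.0.0.1", 8000))  # match your server's host and port
        print("policy server is reachable")
    except OSError as exc:
        print("cannot reach policy server:", exc)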

2. Client Quick Start

The client is based on the LIBERO example from the OpenPI framework, with minimal modifications to make it easy to verify inference results quickly.

2.1 Create environment and install dependencies

conda create -n pi0_demo python=3.10
conda activate pi0_demo

git clone https://github.com/yueduduo/pi0_fast_deploy.git
cd pi0_fast_deploy
pip install -r requirements.txt

2.2 Run the Demo

Edit the host and port in demo.py so they match the server, then run:

python demo.py

If the connection succeeds, entering a prompt will display the actions returned by model inference and the MuJoCo simulator executing them.
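
If you want to bypass the simulator and query the server directly, the openpi_client package bundled with the OpenPI repo (under packages/openpi-client) provides a websocket policy client. The sketch below is an illustration, not a guaranteed interface: the observation keys follow the LIBERO example, and the dummy values, prompt, and expected output shape are placeholder assumptions.

# Query the policy server directly with a dummy LIBERO-style observation.
# Observation keys follow the OpenPI LIBERO example; other configs may differ.
import numpy as np
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

dummy_obs = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),        # main camera
    "observation/wrist_image": np.zeros((224, 224, 3), dtype=np.uint8),  # wrist camera
    "observation/state": np.zeros(8, dtype=np.float32),                  # 8-dim state, as in norm_stats.json
    "prompt": "pick up the bowl and place it on the plate",              # placeholder task
}

result = client.infer(dummy_obs)
print(np.asarray(result["actions"]).shape)  # expected: (action_horizon, 7)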


3. Frequently Asked Questions (FAQ)

Problem: Cannot connect to the server.
Solution: Check whether the port is occupied or blocked by a firewall.

Problem: Model download is too slow.
Solution: Use the Tsinghua mirror source, or download the model manually in advance.

Problem: uv sync failed.
Solution: Delete .venv and re-run the sync command.

Problem: HF connection failed.
Solution: Confirm that HF_ENDPOINT=https://hf-mirror.com is set correctly.