How to Install PaddleOCR on Windows 11 / VS Code
- Conner Drake

- Dec 16, 2025
- 2 min read
1) Verify GPU + decide which Paddle CUDA wheel line to use
In any terminal:
nvidia-smi
Read the header:
Driver Version: ...
CUDA Version: ... (this is the driver's maximum supported CUDA version, not an installed toolkit)
Pick the newest official Paddle Windows wheel line that your driver can run. In this example, CUDA Version: 13.1 ⇒ choose cu129 (CUDA 12.9 runtime), since Paddle’s official Windows wheel indices currently go up through cu129.
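The selection rule above (newest wheel line not exceeding the driver's CUDA capability) can be sketched in a few lines of Python. The list of wheel lines below is illustrative, not authoritative; check Paddle's download page for the indices actually published:

```python
# Pick the newest Paddle wheel line whose CUDA runtime the driver supports.
# WHEEL_LINES is an illustrative assumption -- verify the published indices.
WHEEL_LINES = ["11.8", "12.6", "12.9"]  # published as cu118, cu126, cu129

def pick_wheel_line(driver_cuda: str) -> str:
    """Return the cuXYZ tag for the newest runtime <= the driver's CUDA version."""
    driver = tuple(map(int, driver_cuda.split(".")))
    usable = [v for v in WHEEL_LINES if tuple(map(int, v.split("."))) <= driver]
    if not usable:
        raise RuntimeError(f"driver CUDA {driver_cuda} predates all wheel lines")
    newest = max(usable, key=lambda v: tuple(map(int, v.split("."))))
    return "cu" + newest.replace(".", "")

print(pick_wheel_line("13.1"))  # a 13.1-capable driver can run the cu129 wheels
```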
2) Make VS Code terminals see conda (without editing Windows PATH)
Add this to your VS Code settings.json (adjust username/path if needed):
"terminal.integrated.env.windows": {
"Path": "C:\\Users\\conne\\miniconda3\\condabin;C:\\Users\\conne\\miniconda3\\Scripts;${env:Path}"
}
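For context, a minimal complete settings.json carrying only this key would look like the following (adjust the username and Miniconda path to your machine):

```json
{
  "terminal.integrated.env.windows": {
    "Path": "C:\\Users\\conne\\miniconda3\\condabin;C:\\Users\\conne\\miniconda3\\Scripts;${env:Path}"
  }
}
```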
Restart VS Code completely.
Check in a new terminal:
conda --version
3) Fix PowerShell script execution policy (needed for conda integration)
Open a PowerShell terminal in VS Code and run:
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
Restart VS Code.
(If you have already done this, skip ahead; it’s part of the clean from-scratch path.)
4) One-time: accept conda channel Terms of Service (ToS)
In Git Bash or PowerShell:
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2
5) Initialize conda for Git Bash so conda activate works
In Git Bash:
conda init bash
Then close and re-open the Git Bash terminal (important).
Verify conda is shell-initialized:
type conda
Expected: conda is a function
If it is not a function, use a manual fallback:
source /c/Users/conne/miniconda3/etc/profile.d/conda.sh
6) Create and activate the Python 3.13 environment
In Git Bash (or PowerShell):
conda create -n paddle313 python=3.13 -y
conda activate paddle313
python --version
Expected: Python 3.13.x
Upgrade pip:
python -m pip install --upgrade pip
7) Install PaddlePaddle GPU using the official wheel index
Because you selected cu129:
python -m pip install paddlepaddle-gpu==3.2.2 -i https://www.paddlepaddle.org.cn/packages/stable/cu129/
(You do not need to install a local CUDA 12.9 toolkit for this pip-wheel path; the wheel installs the CUDA 12.9 runtime dependencies it needs.)
8) Verify PaddlePaddle is using the GPU
Run:
python -c "import paddle; print('paddle:', paddle.__version__); print('cuda compiled:', paddle.is_compiled_with_cuda())"
python -c "import paddle; print('device:', paddle.get_device()); print('cuda devices:', paddle.device.cuda.device_count())"
python -c "import paddle; paddle.utils.run_check()"
Expected:
cuda compiled: True
device: gpu:0
cuda devices: 1
run_check() succeeds
9) Install PaddleOCR
Still inside (paddle313):
python -m pip install -U paddleocr
10) Run a clean PaddleOCR GPU smoke test
Copy/paste this exact block in Git Bash:
python - <<'PY'
from paddleocr import PaddleOCR
import paddle
ocr = PaddleOCR(
    lang="en",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False,
)
result = ocr.predict(input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")
print("result:", type(result), "len:", len(result))
print("paddle device:", paddle.get_device())
PY
Expected:
It may download models on first run.
Output includes paddle device: gpu:0
result is a list with one entry per input image (a single image yields length 1)
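Each element of result is an OCRResult that can be read like a dict. Below is a hedged sketch of pulling out the recognized strings, using a stand-in dict with the rec_texts / rec_scores keys that PaddleOCR 3.x exposes; the values are fabricated for illustration, and a real run would use result[0] from ocr.predict(...):

```python
# Stand-in for one entry of `result` -- keys mirror a PaddleOCR 3.x OCRResult;
# the values here are fabricated for illustration only.
res = {"rec_texts": ["General", "OCR"], "rec_scores": [0.99, 0.97]}

# On a real OCRResult the same key access works: result[0]["rec_texts"], etc.
for text, score in zip(res["rec_texts"], res["rec_scores"]):
    print(f"{score:.2f}  {text}")
```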
Optional quality-of-life steps
A) Disable the model-source connectivity check
Git Bash:
export DISABLE_MODEL_SOURCE_CHECK=True
PowerShell:
$env:DISABLE_MODEL_SOURCE_CHECK="True"
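If you drive the smoke test from a Python script rather than a shell, the same switch (variable name taken from the commands above) can be set in-process. Setting it before the paddleocr import is an assumption about when the check runs; exporting it in the shell, as shown above, is the safe path:

```python
import os

# Set before `from paddleocr import PaddleOCR` so the check sees it;
# that import-time ordering is an assumption, not documented behavior.
os.environ["DISABLE_MODEL_SOURCE_CHECK"] = "True"
```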
B) Remove the No ccache found warning (optional)
Only useful if you compile extensions frequently:
conda install -n paddle313 -c conda-forge ccache -y
Summary
You select the wheel line by driver capability (nvidia-smi), picking the newest official Paddle Windows wheel line your driver can run (cu129 here).
You ensure VS Code terminals can run conda (VS Code-only PATH injection), then enable PowerShell script execution, accept conda ToS, and initialize Git Bash for conda activate.
You create paddle313 (Python 3.13), install paddlepaddle-gpu from the official cu129 index, verify GPU with Paddle checks, install paddleocr, and run a minimal predict() test that confirms gpu:0.