Hunyuan3D-2 + Blender MCP Setup Guide

Published by KGP Talkie

Based on: https://youtu.be/ZMlEeW3ygqY


Part 1 — Blender MCP (Windows)

1. Install Blender Addon

Download addon.py from blender-mcp and install it in Blender:

Edit → Preferences → Add-ons → Install → select addon.py

In the 3D Viewport press N → BlenderMCP tab → Start MCP Server (port 9876)
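Before wiring up Claude, it is worth confirming that the addon's server is actually listening on port 9876. A minimal sketch (check_port is an illustrative helper, not part of blender-mcp):

```python
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The addon listens on 9876 by default (see the BlenderMCP panel).
print("MCP server up:", check_port("127.0.0.1", 9876))
```

If this prints False, restart the server from the BlenderMCP tab before launching Claude Desktop.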

2. Create Python 3.12 Conda Env

The default uvx blender-mcp fails on Python 3.14 due to a pyiceberg build issue. Use a dedicated Python 3.12 env:

conda create -n py312 python=3.12 -y
conda activate py312
pip install uv

3. Claude Desktop Config

File: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "blender": {
      "command": "C:\\Users\\laxmi\\anaconda3\\envs\\py312\\Scripts\\uvx.exe",
      "args": [
        "--python",
        "C:\\Users\\laxmi\\anaconda3\\envs\\py312\\python.exe",
        "blender-mcp"
      ]
    }
  }
}
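A quick way to catch JSON typos and wrong paths before restarting Claude is to parse the config and check the referenced executables. A hedged sketch (the file name and structure come from the config above; server_paths is an illustrative helper):

```python
import json
import os

def server_paths(cfg: dict) -> list[str]:
    """Extract the executables that must exist for the blender server:
    the uvx command itself and the --python interpreter it is given."""
    blender = cfg["mcpServers"]["blender"]
    return [blender["command"], blender["args"][1]]

# Usage on Windows:
#   config_path = os.path.expandvars(r"%APPDATA%\Claude\claude_desktop_config.json")
#   with open(config_path) as f:
#       cfg = json.load(f)  # fails loudly on malformed JSON
#   for p in server_paths(cfg):
#       print(p, "OK" if os.path.exists(p) else "MISSING")
```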

4. Startup Order

  1. Open Blender → Start MCP Server
  2. Start Claude Desktop
  3. Open a fresh conversation (not continued from browser)

Browser sessions expose only a subset of tools — execute_blender_code is only available in Claude Desktop fresh conversations.


Part 2 — Hunyuan3D (WSL2)

1. Install Hunyuan3D

conda create -n hunyuan3d_312 python=3.12 -y
conda activate hunyuan3d_312
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130
cd ~
git clone https://github.com/Tencent/Hunyuan3D-2
cd Hunyuan3D-2
pip install -r requirements.txt
pip install -e .
pip install sentencepiece tiktoken pybind11 ninja "pybind11[global]" huggingface_hub
cd hy3dgen/texgen/differentiable_renderer
pip install -e .
cd ~/Hunyuan3D-2

2. Download Weights

hf download tencent/Hunyuan3D-2mini --local-dir ~/Hunyuan3D-2/weights

3. Run the Server (for Blender MCP)

conda activate hunyuan3d_312
cd ~/Hunyuan3D-2
python api_server.py --host 0.0.0.0 --port 8081 --model_path ~/Hunyuan3D-2/weights --device cuda

WSL2 auto-forwards ports to Windows — no extra config needed.

4. Blender MCP Panel Settings

In Blender N-panel → BlenderMCP tab:

| Setting | Value |
| --- | --- |
| Use Tencent Hunyuan | checked |
| Hunyuan API URL | http://localhost:8081 |
| Octree Resolution | 256 |

Octree Resolution vs VRAM

| Resolution | VRAM | Time |
| --- | --- | --- |
| 128 | ~8 GB | ~30s |
| 256 | ~16 GB | ~90s |
| 512 | ~24 GB | ~3 min |

5. Gradio UI (optional — browser testing only)

Without texture:

cd ~/Hunyuan3D-2
python gradio_app.py --model_path ~/Hunyuan3D-2/weights --device cuda --disable_tex --port 8080 --enable_t23d

With texture (requires Part 3 weights):

cd ~/Hunyuan3D-2
LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.12/site-packages/torch/lib:$CONDA_PREFIX/lib \
python gradio_app.py --model_path ~/Hunyuan3D-2/weights --texgen_model_path ~/Hunyuan3D-2/weights --device cuda --port 8080 --enable_t23d

Access at http://localhost:8080. For Blender MCP always use api_server.py on port 8081.


Part 3 — Texture Generation (optional)

1. Install system library

sudo apt-get install -y libopengl0

2. Download texture model weights

hf download tencent/Hunyuan3D-2 --include "hunyuan3d-paint-v2-0-turbo/*" --local-dir ~/Hunyuan3D-2/weights
hf download tencent/Hunyuan3D-2 --include "hunyuan3d-delight-v2-0/*" --local-dir ~/Hunyuan3D-2/weights

Both are required for --enable_tex. hunyuan3d-delight-v2-0 is the shadow/highlight removal model (~681 MB).

3. Build custom_rasterizer CUDA kernel (one time only)

Install the matching CUDA toolkit into the conda env, then build:

conda install -c nvidia/label/cuda-13.0.0 cuda-toolkit -y
cd ~/Hunyuan3D-2/hy3dgen/texgen/custom_rasterizer
CUDA_HOME=$CONDA_PREFIX pip install . --no-build-isolation
cd ~/Hunyuan3D-2

CUDA_HOME=$CONDA_PREFIX is required because conda installs nvcc inside the env (at $CONDA_PREFIX/bin/nvcc), not at /usr/local/cuda.

4. Apply code patch (one time only)

DiffusionPipeline.from_pretrained() in the multiview utility needs trust_remote_code=True to load the local custom pipeline. Edit hy3dgen/texgen/utils/multiview_utils.py line 34:

# Before
pipeline = DiffusionPipeline.from_pretrained(
    multiview_ckpt_path,
    custom_pipeline=custom_pipeline_path, torch_dtype=torch.float16)

# After
pipeline = DiffusionPipeline.from_pretrained(
    multiview_ckpt_path,
    custom_pipeline=custom_pipeline_path, torch_dtype=torch.float16,
    trust_remote_code=True)

5. Run the Server with Texture

cd ~/Hunyuan3D-2
LD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.12/site-packages/torch/lib:$CONDA_PREFIX/lib \
python api_server.py \
  --host 0.0.0.0 --port 8081 \
  --model_path ~/Hunyuan3D-2/weights \
  --tex_model_path ~/Hunyuan3D-2/weights/hunyuan3d-paint-v2-0-turbo \
  --device cuda --enable_tex

--tex_model_path must point directly to the paint model subfolder — Hunyuan3DPaintPipeline.from_pretrained() has no subfolder argument.

LD_LIBRARY_PATH is required — custom_rasterizer links against torch libs not on the default search path.


Reference

Model Variants (in weights/)

| Model | Type | Speed | Quality |
| --- | --- | --- | --- |
| hunyuan3d-dit-v2-mini | Standard | Slowest | Best |
| hunyuan3d-dit-v2-mini-fast | Guidance distillation | ~2x faster | Slightly lower |
| hunyuan3d-dit-v2-mini-turbo | Step distillation | Fastest | Good enough |

Default used: hunyuan3d-dit-v2-mini-turbo

Generation Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| seed | 1234 | Random seed |
| octree_resolution | 128 | Detail level (64 / 128 / 256) |
| num_inference_steps | 5 | Diffusion steps (more = better, slower) |
| guidance_scale | 5.0 | Guidance strength |
| texture | false | Set true only when the server was started with --enable_tex |
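Putting these parameters together, a sync /generate request body could be assembled as below. This is a sketch: the image field name and base64 encoding are assumptions about api_server.py's request schema, not confirmed from its source; check api_server.py for the real contract.

```python
import base64
import json
from urllib import request

def build_payload(image_path: str, *, octree_resolution: int = 128,
                  num_inference_steps: int = 5, guidance_scale: float = 5.0,
                  seed: int = 1234, texture: bool = False) -> dict:
    """Assemble a /generate request body from the parameters above.

    The 'image' field name and base64 encoding are assumed, not taken
    from api_server.py's actual schema.
    """
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "image": img_b64,
        "octree_resolution": octree_resolution,
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,
        "seed": seed,
        "texture": texture,  # true only if server started with --enable_tex
    }

# Usage (requires the api_server.py from Part 2, step 3, to be running):
#   payload = build_payload("chair.png", octree_resolution=256)
#   req = request.Request("http://localhost:8081/generate",
#                         data=json.dumps(payload).encode(),
#                         headers={"Content-Type": "application/json"})
#   with request.urlopen(req) as resp:   # sync: GLB bytes come back directly
#       open("chair.glb", "wb").write(resp.read())
```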

API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /generate | Sync — returns GLB file directly |
| POST | /send | Async — returns {"uid": "..."} |
| GET | /status/{uid} | Poll async job status |
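The async pair works as submit-then-poll. A minimal sketch of that flow; only the endpoints and the {"uid": "..."} response shape come from the table above, while the status-response field names are assumptions about api_server.py:

```python
import json
import time
from urllib import request

def status_url(base: str, uid: str) -> str:
    """Build the poll URL for an async job."""
    return f"{base.rstrip('/')}/status/{uid}"

def submit_and_poll(base: str, payload: dict, interval: float = 5.0) -> dict:
    """POST to /send, then poll /status/{uid} until the job finishes.

    The 'status' field and its 'processing' value in the poll response
    are assumed, not confirmed from api_server.py.
    """
    req = request.Request(f"{base.rstrip('/')}/send",
                          data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        uid = json.load(resp)["uid"]          # /send returns {"uid": "..."}
    while True:
        with request.urlopen(status_url(base, uid)) as resp:
            state = json.load(resp)
        if state.get("status") != "processing":   # assumed field name
            return state
        time.sleep(interval)

# Usage (server must be running):
#   result = submit_and_poll("http://localhost:8081", payload)
```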

