
Particulate: Feed-Forward 3D Object Articulation

arXiv | Project Page

[Teaser figure]

🌟 Overview

Particulate is a feed-forward approach that, given a single static 3D mesh of an everyday object, directly infers all attributes of the underlying articulated structure, including its 3D parts, kinematic structure, and motion constraints.

  • Ultra-fast Inference: our model recovers a fully articulated 3D object with a single forward pass in ~10 seconds.
  • SOTA Performance: our model significantly outperforms prior methods on the task of 3D articulation estimation.
  • GenAI Compatible: our model can also accurately infer the articulated structure of AI-generated 3D assets, enabling full-fledged generation of articulated assets from images or text when combined with an off-the-shelf 3D generator.

🔧 Installation

Our implementation is tested with pytorch==2.4.0 and CUDA 12.4 on Ubuntu 22.04.

conda create -n particulate python=3.10
conda activate particulate
pip install -r requirements.txt
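
Before running inference, you can verify that PyTorch sees your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"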

🚀 Inference

To use our model to predict the articulated structure of a custom 3D model (alternatively, you can try our demo on Hugging Face without local setup):

python infer.py --input_mesh ./hunyuan3d-examples/foldingchair.glb

The script will automatically download the pre-trained checkpoint from Hugging Face.

Extra arguments (an example invocation follows the list):

  • --up_dir: The up direction of the input mesh. Our model is trained on 3D models whose up direction is +Z, so for optimal results make sure the input mesh follows the same convention. Given this argument, the script automatically rotates the input model to be +Z up. You can use the visualization in the demo to determine the up direction.
  • --num_points: The number of points sampled as input to the network. Note that we sample 50% of the points uniformly and the remaining 50% from sharp edges. Make sure the number of uniform points is larger than the number of faces in the input mesh.
  • --min_part_confidence: Increasing this value merges parts with low confidence scores into other parts. Consider raising it if the prediction is over-segmented.
  • --no_strict: By default, the prediction is post-processed to ensure that each articulated part is a union of connected components of the original mesh (i.e., no connected component is split across parts). If the input mesh does not have clean connected components, pass --no_strict.
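
For example, a single invocation combining these flags might look like the following; the flag values here (including the format accepted by --up_dir) are illustrative, so run python infer.py --help to confirm them:

python infer.py \
      --input_mesh ./hunyuan3d-examples/foldingchair.glb \
      --up_dir +Y \
      --num_points 100000 \
      --min_part_confidence 0.5 \
      --no_strict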

💾 Data Preprocessing

Please refer to DATA.md.

🔎 Evaluation

An example evaluation command:

python evaluate.py \
      --gt_dir dataset/Lightwheel_uniform-100k \
      --output_dir eval_result/ \
      --result_dir result_dir/ \
      --result_type particulate

The evaluator expects:

  • gt_dir: directory of ground-truth .npz files named {model_name}.npz.

  • output_dir: per-sample outputs and overall summaries:

    • Per-sample: <sample_name>_pred_eval.json; meshes when enabled: <sample_name>_pred_{original,low,high}.obj.
      • With --save_pcd_gt, also <sample_name>_gt_{original,low,high}.obj.
    • Overall: saved next to output_dir as <basename(output_dir)>_eval_overall.json, giving the metrics averaged over all assets.
  • result_dir: directory of prediction .npz files that follow the meta schema. If you first need to convert meshes to point clouds with articulation metadata, see Resampling below.

  • result_type: either particulate or custom, depending on the prediction format.

For details of evaluation data format, please refer to DATA.md.

We also provide an option to sample point clouds from a given mesh (.obj) using the following command:

python evaluate.py \
      --gt_dir dataset/Lightwheel_uniform-100k \
      --output_dir eval_result/ \
      --result_dir result_dir/ \
      --result_type custom \
      --resample_points \
      --meta_dir dataset/custom_data/

  • meta_dir: directory containing the mesh/point cloud and articulation information per asset.

where meta_dir is structured as follows:

|
|-- meta_dir
|   |-- Asset_name1
|   |   |-- original.obj
|   |   |-- meta.npz
|   |
|   |-- Asset_name2
|   |   |-- original.obj
|   |   |-- meta.npz
|   |
|   ...

  • original.obj: triangle mesh used for uniform surface sampling.
  • meta.npz: contains vert_to_bone of shape (|V|,), which indicates the part id of each vertex of the mesh. The arrays motion_hierarchy, is_part_revolute, is_part_prismatic, revolute_plucker, revolute_range, prismatic_axis, and prismatic_range are also required; they follow the same format as the {Asset_name}.npz files described in DATA.md.
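
As a rough illustration, a meta.npz with the required keys could be written as in the sketch below. The shapes of the per-part arrays are assumptions (only vert_to_bone's shape (|V|,) is stated above); consult DATA.md for the authoritative formats.

import numpy as np
import trimesh

# Hypothetical asset with P parts; the real values come from your annotations.
mesh = trimesh.load("dataset/custom_data/Asset_name1/original.obj", force="mesh")
num_verts = len(mesh.vertices)
P = 3  # assumed number of articulated parts

np.savez(
    "dataset/custom_data/Asset_name1/meta.npz",
    vert_to_bone=np.zeros(num_verts, dtype=np.int64),     # part id per vertex, shape (|V|,)
    motion_hierarchy=np.full(P, -1, dtype=np.int64),      # assumed: parent part index per part
    is_part_revolute=np.zeros(P, dtype=bool),
    is_part_prismatic=np.zeros(P, dtype=bool),
    revolute_plucker=np.zeros((P, 6), dtype=np.float32),  # assumed: Plücker coordinates of the hinge axis
    revolute_range=np.zeros((P, 2), dtype=np.float32),    # assumed: [min, max] joint angle
    prismatic_axis=np.zeros((P, 3), dtype=np.float32),    # assumed: translation axis per part
    prismatic_range=np.zeros((P, 2), dtype=np.float32),   # assumed: [min, max] translation
)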

TODO

  • Release data preprocessing code.
  • Release the Lightwheel benchmark & evaluation code.
  • Release training code.

Citation

@article{li2025particulate,
    title   = {Particulate: Feed-Forward 3D Object Articulation},
    author  = {Ruining Li and Yuxin Yao and Chuanxia Zheng and Christian Rupprecht and Joan Lasenby and Shangzhe Wu and Andrea Vedaldi},
    journal = {arXiv preprint arXiv:2512.11798},
    year    = {2025}
}
