This framework provides a suite of motion imitation methods for training motion controllers. This codebase is designed to be clean and lightweight, with minimal dependencies. A more detailed overview of MimicKit is available in the Starter Guide. For a more feature-rich and modular motion imitation framework, check out ProtoMotions.
Instructions for each method are available here:
- DeepMimic
- AMP - Adversarial Motion Priors
- AWR - Advantage-Weighted Regression
- ASE - Adversarial Skill Embeddings
- LCP - Lipschitz-Constrained Policies
- ADD - Adversarial Differential Discriminator
This framework supports different simulator backends (referred to as Engines). We highly recommend using a package manager, like Conda, to create dedicated Python environments for each simulator.
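For example, a dedicated Conda environment for the Isaac Gym backend could be created as follows. The environment name and Python version here are illustrative; use whatever your chosen simulator requires:

```
conda create -n mimickit-isaacgym python=3.8
conda activate mimickit-isaacgym
```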
- Install the simulator of your choice.
Isaac Gym
Install Isaac Gym.
To use Isaac Gym, specify the argument --engine_config data/engines/isaac_gym_engine.yaml when running the code.
Isaac Lab
Install Isaac Lab. This framework has been tested with Isaac Lab commit `2ed331acfcbb1b96c47b190564476511836c3754`.
To use Isaac Lab, specify the argument --engine_config data/engines/isaac_lab_engine.yaml when running the code.
Newton
Install Newton. This framework has been tested with Newton commit `447d6ed874aa816e993c612e345ed5caa5f52687`.
To use Newton, specify the argument --engine_config data/engines/newton_engine.yaml when running the code.
- Install the requirements.
pip install -r requirements.txt
To train a model, run the following command:
python mimickit/run.py --mode train --num_envs 4096 --engine_config data/engines/isaac_gym_engine.yaml --env_config data/envs/deepmimic_humanoid_env.yaml --agent_config data/agents/deepmimic_humanoid_ppo_agent.yaml --visualize true --out_dir output/
- `--mode` selects either `train` or `test` mode.
- `--num_envs` specifies the number of parallel environments used for simulation. Not all environments support parallel simulation; this is mainly used for Isaac Gym envs. Environments that do not support this feature, such as the DeepMind Control Suite, should use 1 for the number of envs.
- `--engine_config` specifies the configuration file for the engine, which selects between the different simulator backends.
- `--env_config` specifies the configuration file for the environment.
- `--agent_config` specifies the configuration file for the agent.
- `--visualize` enables visualization. Rendering should be disabled for faster training.
- `--out_dir` specifies the output directory where the models and logs will be saved.
- `--logger` specifies the logger used to record training stats. The options are text (`txt`), TensorBoard (`tb`), or `wandb`.
- `--video` can be `true` or `false` to enable headless video recording; the videos are then saved by the logger. Only Isaac Gym and Isaac Lab currently support video logging.
Instead of specifying all arguments through the command line, arguments can also be loaded from an arg_file:
python mimickit/run.py --arg_file args/deepmimic_humanoid_ppo_args.txt --visualize true
The arguments in arg_file are treated the same as command line arguments. Arguments for all algorithms are provided in args/.
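As a sketch, an arg_file is a plain text file listing flags as they would appear on the command line. The exact contents of the files in args/ may differ; this example simply mirrors the training command above:

```
--mode train
--num_envs 4096
--engine_config data/engines/isaac_gym_engine.yaml
--env_config data/envs/deepmimic_humanoid_env.yaml
--agent_config data/agents/deepmimic_humanoid_ppo_agent.yaml
--out_dir output/
```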
To test a model, run the following command:
python mimickit/run.py --arg_file args/deepmimic_humanoid_ppo_args.txt --num_envs 4 --visualize true --mode test --model_file data/models/deepmimic_humanoid_spinkick_model.pt
- `--model_file` specifies the `.pt` file that contains the parameters of the trained model. Pretrained models are available in `data/models/`, and the corresponding training log files are available in `data/logs/`.
To use distributed training with multi-CPU or multi-GPU:
python mimickit/run.py --arg_file args/deepmimic_humanoid_ppo_args.txt --devices cuda:0 cuda:1
- `--devices` specifies the devices used for training, which can be `cpu` or `cuda:{i}`. Multiple devices can be provided to parallelize training across multiple processes.
- Camera control: hold the `Alt` key and drag with the left mouse button to pan the camera. Scroll with the mouse wheel to zoom in/out.
- Pause simulation: the `Enter` key can be used to pause/unpause the simulation.
- Step simulation: the `Space` key can be used to step the simulator one step at a time.
When using the TensorBoard logger during training, a TensorBoard events file will be saved in the same output directory as the log file. The log can be viewed with:
tensorboard --logdir=output/ --port=6006 --samples_per_plugin scalars=999999
The output log.txt file can also be plotted using the plotting script plot_log.py.
Motion data is stored in data/motions/. The motion_file field in the environment configuration file can be used to specify the reference motion clip. In addition to imitating individual motion clips, motion_file can also specify a dataset file, located in data/datasets/, which will train a model to imitate a dataset containing multiple motion clips.
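For illustration, a hypothetical excerpt from an environment configuration file is shown below. Only the `motion_file` field is documented above; the file paths and surrounding structure are assumptions:

```yaml
# Hypothetical excerpt from an environment config in data/envs/.
# Point motion_file at a single clip:
motion_file: data/motions/humanoid_spinkick.pkl
# or at a dataset file containing multiple clips:
# motion_file: data/datasets/humanoid_dataset.yaml
```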
The view_motion environment can be used to visualize motion clips:
python mimickit/run.py --mode test --arg_file args/view_motion_humanoid_args.txt --visualize true
Motion clips are represented by the Motion class implemented in motion.py. Each motion clip is stored in a .pkl file. Each frame in a motion specifies the pose of the character according to
[root position (3D), root rotation (3D), joint rotations]
where 3D rotations are specified using 3D exponential maps. Joint rotations are recorded in the order that the joints are specified in the .xml file (i.e. depth-first traversal of the kinematic tree). For example, in the case of humanoid.xml, each frame is represented as
[root position (3D), root rotation (3D), abdomen (3D), neck (3D), right_shoulder (3D), right_elbow (1D), left_shoulder (3D), left_elbow (1D), right_hip (3D), right_knee (1D), right_ankle (3D), left_hip (3D), left_knee (1D), left_ankle (3D)]
The rotations of 3D joints are represented using 3D exponential maps, and the rotations of 1D joints are represented using 1D rotation angles.
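As a sketch of how a frame can be indexed according to this layout, the snippet below loads a motion `.pkl` and slices out the humanoid pose components. The pickle's internal structure (e.g. a `"frames"` key) and the file path are assumptions for illustration; see the `Motion` class in `motion.py` for the actual format.

```python
import pickle

import numpy as np

# Load a motion clip. NOTE: the "frames" key and overall pickle layout
# are assumptions; only the per-frame pose layout is documented above.
with open("data/motions/humanoid_spinkick.pkl", "rb") as f:
    motion = pickle.load(f)

frames = np.asarray(motion["frames"])  # assumed shape: [num_frames, frame_dim]
frame = frames[0]

root_pos = frame[0:3]         # root position (3D)
root_rot = frame[3:6]         # root rotation, 3D exponential map
abdomen = frame[6:9]          # 3D joint: exponential map
neck = frame[9:12]            # 3D joint: exponential map
right_shoulder = frame[12:15] # 3D joint: exponential map
right_elbow = frame[15:16]    # 1D joint: rotation angle
# ... remaining joints follow in depth-first .xml order
```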
Motion retargeting can be done using GMR. A script to convert GMR files to the MimicKit format is available in tools/gmr_to_mimickit/.
A script to convert SMPL motion files from AMASS to the MimicKit format is available in tools/smpl_to_mimickit/.
If you find this codebase helpful, please cite:
@article{MimicKitPeng2025,
  title={MimicKit: A Reinforcement Learning Framework for Motion Imitation and Control},
  author={Peng, Xue Bin},
  year={2025},
  eprint={2510.13794},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2510.13794},
}


