motion
Table of Contents
- parent: domain
- GenMM: Example-based Motion Synthesis via Generative Motion Matching
- training-free; synthesizes motion in a fraction of a second
- DiffMimic: Efficient Motion Mimicking with Differentiable Physics
- Physics-based Motion Retargeting from Sparse Inputs
- from sparse human sensor data to characters of various morphologies
- TC4D: Trajectory-Conditioned Text-to-4D Generation
- factors motion into global and local components
- synthesis of scenes animated along arbitrary trajectories
1. PAE - PHASE
2. DIFFUSION MOTION
- parent: diffusion INTERACTIONS
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
- retrieves from a semantically augmented database and conditions on retrieved motion feature sequences
- MotionDiffuser: Controllable Multi-Agent Motion Prediction using Diffusion (future trajectories over multiple agents)
- TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis
- adapts the gradual diffusion concept (along the diffusion time-axis) to the temporal axis of the motion sequence
- extends the DDPM framework to temporally varying denoising, entangling the two axes; enables long-term motion
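The entangled-axes idea can be sketched as a sliding window where the noise level grows along the temporal axis: each step denoises the whole window, emits the clean frame at one end, and appends fresh noise at the other. A toy sketch (the denoiser and all names are illustrative stand-ins, not the paper's API):

```python
import numpy as np

def toy_denoiser(frames, noise_levels):
    # Stand-in for a learned network: shrink each frame toward a "clean"
    # state in proportion to its per-frame noise level.
    return frames * (1.0 - 0.1 * noise_levels[:, None])

def tedi_step(buffer, noise_levels, rng):
    """One step: denoise the buffer, emit the cleanest (oldest) frame,
    and append a fresh pure-noise frame at the noisiest end."""
    buffer = toy_denoiser(buffer, noise_levels)
    clean_frame = buffer[0]                       # fully denoised frame leaves
    new_frame = rng.standard_normal(buffer.shape[1])
    buffer = np.vstack([buffer[1:], new_frame])   # shift window along time
    return clean_frame, buffer

rng = np.random.default_rng(0)
T, D = 8, 3                                       # window length, feature dim
levels = np.linspace(0.0, 1.0, T)                 # noise increases along time
buf = rng.standard_normal((T, D)) * levels[:, None]
motion = []
for _ in range(5):                                # arbitrarily long rollout
    frame, buf = tedi_step(buf, levels, rng)
    motion.append(frame)
```

Because generation is a rolling window rather than a fixed-length reverse process, the sequence can be extended indefinitely.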
- Generating Fine-Grained Human Motions Using ChatGPT-Refined Descriptions
- FG-MDM: Fine-Grained Human Motion Diffusion Model
- DreaMo: Articulated 3D Reconstruction From A Single Casual Video
=best=
- diffusion model to hallucinate invisible parts and to enhance the geometry
- CAGE: Controllable Articulation GEneration
- attention modules designed to extract correlations between part attributes; takes the connectivity graph as input
2.1. HUMAN
- MoMask: Generative Masked Modeling of 3D Human Motions
- a hierarchical quantization scheme represents human motion as multi-layer motion tokens
- progressively predicts the next-layer tokens from the current layer's results
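The hierarchical quantization is residual in spirit: each layer quantizes what the previous layers failed to capture. A minimal residual-VQ sketch (toy codebooks, illustrative names):

```python
import numpy as np

def quantize(x, codebook):
    # Nearest-codeword lookup for each row of x.
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

def residual_vq(x, codebooks):
    """Hierarchical (residual) quantization: layer k quantizes the residual
    left by layers 0..k-1; the per-layer indices are the motion tokens."""
    residual = x.copy()
    recon = np.zeros_like(x)
    tokens = []
    for cb in codebooks:
        q, idx = quantize(residual, cb)
        recon = recon + q
        residual = residual - q
        tokens.append(idx)
    return recon, tokens

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4))                  # 10 "frames" of 4-dim features
codebooks = [rng.standard_normal((16, 4)) for _ in range(3)]
recon, tokens = residual_vq(x, codebooks)
```

The base layer carries the coarse motion; deeper layers refine it, which is what makes layer-by-layer generative prediction natural.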
- Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model
- the low-dimensional latent stays consistent with the prompt, while the high-dimensional latent follows a detail-enhancing process
- Realistic Human Motion Generation with Cross-Diffusion Models
- the model reverses either 2D or 3D noise into clean motion during training (leverages 2D motion data)
- HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback
- personalized and style-aware human motion generation
- MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation
- unannotated motions
- Generative Human Motion Stylization in Latent Space
- Self-Correcting Self-Consuming Loops For Generative Model Training [training]
- avoids collapse (where opposite movements nullify each other)
2.1.1. TIMELINE
- Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation
=best=
- using a single text prompt as input lacks the fine-grained control needed by animators
- multi-track timeline of multiple prompts organized in temporal intervals that may overlap
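The multi-track timeline amounts to per-frame blending weights over the active prompts, with overlapping intervals shared rather than one overriding the other. A minimal sketch (names and the uniform-blend rule are my assumptions, not the paper's method):

```python
import numpy as np

def track_weights(tracks, num_frames):
    """tracks: list of (start, end, prompt) intervals on a shared timeline.
    Returns per-frame, per-track weights, normalized so that overlapping
    prompts are blended evenly; frames with no active track get zero weight."""
    w = np.zeros((num_frames, len(tracks)))
    for j, (start, end, _prompt) in enumerate(tracks):
        w[start:end, j] = 1.0
    totals = w.sum(axis=1, keepdims=True)
    return np.divide(w, totals, out=np.zeros_like(w), where=totals > 0)

tracks = [(0, 4, "walk forward"), (2, 6, "wave right hand")]
w = track_weights(tracks, num_frames=8)
```

Frames 2-3 here blend both prompts equally, which is the fine-grained control a single global prompt cannot express.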
- Seamless Human Motion Composition with Blended Positional Encodings
=best=
- diffusion-based model for seamless human motion compositions
- global coherence in the absolute stage, smooth transitions in the relative stage
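The absolute/relative split can be illustrated with two positional encodings: absolute positions span the whole composition (global coherence), relative positions restart per segment (smooth local transitions). A toy blend over denoising steps (the linear schedule and names are illustrative assumptions; the paper's actual blending differs):

```python
import numpy as np

def sinusoidal_pe(positions, dim):
    # Standard sinusoidal encoding over the given float positions.
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))
    ang = positions[:, None] * freqs[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

def blended_pe(seg_lens, step, total_steps, dim=8):
    """Blend absolute (whole-composition) and relative (per-segment)
    positional encodings across denoising steps: early steps lean on
    absolute positions, late steps on relative positions."""
    abs_pos = np.arange(sum(seg_lens), dtype=float)
    rel_pos = np.concatenate([np.arange(n, dtype=float) for n in seg_lens])
    alpha = step / max(total_steps - 1, 1)        # 0 -> absolute, 1 -> relative
    return (1 - alpha) * sinusoidal_pe(abs_pos, dim) + alpha * sinusoidal_pe(rel_pos, dim)

pe_first = blended_pe([30, 30], step=0, total_steps=50)   # purely absolute
pe_last = blended_pe([30, 30], step=49, total_steps=50)   # purely relative
```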
- Large Motion Model for Unified Multi-Modal Motion Generation
- multimodal input: text, speech, music, video
2.2. EDIT
- DNO: Optimizing Diffusion Noise Can Serve As Universal Motion Priors
- preserves motion content while accommodating editing modes: changing trajectory, pose, joint locations, and avoiding obstacles