motion

1. PAE - PHASE

  • PAE paper details: DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds
  • decomposes movement into composable periodic phase components (a learned phase manifold)
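A minimal numpy sketch of the idea behind PAE's bottleneck: each latent channel is forced to be a sinusoid, so its frequency, amplitude, and offset can be read off an FFT. This is only an illustration of the parameterization, not the paper's differentiable implementation (DeepPhase predicts the phase shift with a learned FC layer; here it is taken from the dominant FFT bin).

```python
import numpy as np

def phase_parameters(latent_curve, dt):
    """Recover (frequency, amplitude, offset, phase) of one latent channel
    via FFT -- the sinusoidal parameterization PAE enforces on its bottleneck."""
    n = len(latent_curve)
    spectrum = np.fft.rfft(latent_curve)
    freqs = np.fft.rfftfreq(n, d=dt)
    power = np.abs(spectrum[1:]) ** 2              # skip the DC bin
    f = np.sum(freqs[1:] * power) / np.sum(power)  # power-weighted frequency
    a = 2.0 * np.sqrt(np.sum(power)) / n           # sinusoid amplitude
    b = np.real(spectrum[0]) / n                   # offset = signal mean (DC bin)
    k = 1 + np.argmax(power)                       # dominant frequency bin
    s = np.angle(spectrum[k])                      # phase shift of that bin
    return f, a, b, s
```

On a pure sinusoid latent curve (e.g. 1.5·sin(2π·4t) + 0.3 sampled over one second), this recovers frequency 4, amplitude 1.5, and offset 0.3.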

2. DIFFUSION MOTION

  • parent: diffusion INTERACTIONS
  • ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
    • retrieves semantically similar motions from a database and conditions on their motion-feature sequences
  • MotionDiffuser: Controllable Multi-Agent Motion Prediction using Diffusion (future trajectories over multiple agents)
  • TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis
    • adapts the gradual diffusion concept (a diffusion time-axis) to the temporal axis of the motion sequence
    • extends the DDPM framework to temporally varying denoising, entangling the two axes; enables long-term motion
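A toy sketch of TEDi's entangled sampling loop, assuming a sliding buffer in which frame i sits i+1 denoising steps from clean: each iteration denoises every frame by one step, emits the now-clean front frame, and pushes a fresh pure-noise frame at the back. `denoise_step` is a hypothetical stand-in for the learned network; this is an illustration of the scheduling idea, not the paper's code.

```python
import numpy as np

def tedi_stream(denoise_step, window, n_frames, dim, rng):
    """Entangled diffusion/temporal axes: staggered noise levels per frame.
    Frame 0 is one step from clean, frame window-1 is pure noise."""
    buffer = rng.standard_normal((window, dim))
    levels = np.arange(1, window + 1)           # remaining steps per slot
    out = []
    while len(out) < n_frames:
        buffer = denoise_step(buffer, levels)   # one step on every frame
        out.append(buffer[0])                   # front frame is now clean
        # shift left; a fresh fully-noised frame enters at the back
        buffer = np.vstack([buffer[1:], rng.standard_normal((1, dim))])
    return np.stack(out)
```

Because the stagger is preserved by the shift, arbitrarily long sequences fall out of a fixed-size window.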
  • Generating Fine-Grained Human Motions Using ChatGPT-Refined Descriptions
    • FG-MDM: Fine-Grained Human Motion Diffusion Model
  • DreaMo: Articulated 3D Reconstruction From A Single Casual Video =best=
    • uses a diffusion model to hallucinate unseen parts and enhance the geometry
  • CAGE: Controllable Articulation GEneration
    • attention modules extract correlations between part attributes; takes the connectivity graph as input

2.1. HUMAN

  • MoMask: Generative Masked Modeling of 3D Human Motions
    • progressively predicts the next-layer tokens from the results of the current layer
    • a hierarchical (residual) quantization scheme represents human motion as multi-layer motion tokens
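The hierarchical quantization can be sketched as residual vector quantization: each codebook layer encodes what the previous layers missed, yielding one token sequence per layer. A minimal numpy illustration (codebook shapes and names are assumptions, not MoMask's implementation):

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Quantize frames (n, dim) layer by layer: pick the nearest code,
    subtract it, and let the next codebook refine the residual."""
    residual = x.copy()
    tokens, recon = [], np.zeros_like(x)
    for cb in codebooks:                       # cb: (codebook_size, dim)
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)                 # nearest code per frame
        q = cb[idx]
        tokens.append(idx)                     # one token layer
        recon += q
        residual -= q                          # next layer refines this
    return tokens, recon
```

Coarse layers carry most of the motion; later token layers add fine detail, which is what the masked generator predicts layer by layer.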
  • Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model
    • a low-dimensional latent stays consistent with the prompt, while a high-dimensional latent follows a detail-enhancing process
  • Realistic Human Motion Generation with Cross-Diffusion Models
    • during training, the model reverses either 2D or 3D noise into clean motion (exploits 2D motion data)
  • HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback
    • personalized and style-aware human motion generation
  • MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation
    • unannotated motions
  • Generative Human Motion Stylization in Latent Space
  • Self-Correcting Self-Consuming Loops For Generative Model Training [training]
    • avoids model collapse (generated motions nullifying into opposite movements)

2.1.1. TIMELINE

  • Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation =best=
    • using a single text prompt as input lacks the fine-grained control needed by animators
    • multi-track timeline of multiple prompts organized in temporal intervals that may overlap
  • Seamless Human Motion Composition with Blended Positional Encodings =best=
    • diffusion-based model for seamless human motion compositions
    • global coherence at absolute stage, smooth transitions at relative stage
  • Large Motion Model for Unified Multi-Modal Motion Generation
    • multimodal input: text, speech, music, video

2.2. EDIT

  • DNO: Optimizing Diffusion Noise Can Serve As Universal Motion Priors
    • preserves motion content while accommodating editing modes: changing the trajectory, pose, or joint locations, and avoiding obstacles

Author: Tekakutli

Created: 2024-04-07 Sun 13:56