Browsing by Author "Kim, Nam Hee"
Now showing 1 - 5 of 5
- Advancements in video game graphics
Perustieteiden korkeakoulu | Bachelor's thesis (2021-12-20) Jokila, Jeremias
- Discovering Fatigued Movements for Virtual Character Animation
A4 Article in conference proceedings (2023-12-11) Cheema, Noshaba; Xu, Rui; Kim, Nam Hee; Hämäläinen, Perttu; Golyanik, Vladislav; Habermann, Marc; Theobalt, Christian; Slusallek, Philipp
Virtual character animation and movement synthesis have advanced rapidly in recent years, especially through the combination of extensive motion capture datasets and machine learning. A remaining challenge is interactively simulating characters that fatigue when performing extended motions, which is indispensable for the realism of generated animations. However, capturing such movements is problematic, as performing movements like backflips with fatigued variations up to exhaustion raises both capture cost and risk of injury. Surprisingly, little research has been done on faithful fatigue modeling. To address this, we propose a deep reinforcement learning-based approach which, for the first time in the literature, generates control policies for full-body physically simulated agents that are aware of cumulative fatigue. First, we leverage Generative Adversarial Imitation Learning (GAIL) to learn an expert policy for the skill; second, we learn a fatigue policy by replacing the constant torque bounds derived from endurance time with non-linear, state- and time-dependent limits in the joint-actuation space, using a Three-Compartment Controller (3CC) model. Our results demonstrate that agents can adapt to different fatigue and rest rates interactively, and discover realistic recovery strategies without the need for any captured data of fatigued movement.
- A Hybrid Generator Architecture for Controllable Face Synthesis
A4 Article in conference proceedings (2023-07-23) Mensah, Dann; Kim, Nam Hee; Aittala, Miika; Laine, Samuli; Lehtinen, Jaakko
Modern data-driven image generation models often surpass traditional graphics techniques in quality. However, while traditional modeling and animation tools allow precise control over the image generation process in terms of interpretable quantities (e.g., shapes and reflectances), endowing learned models with such controls is generally difficult. In the context of human faces, we seek a data-driven generator architecture that simultaneously retains the photorealistic quality of modern generative adversarial networks (GANs) and allows explicit, disentangled control over head shape, expression, identity, background, and illumination. While our high-level goal is shared by a large body of previous work, we approach the problem with a different philosophy: we treat it as an unconditional synthesis task, and engineer interpretable inductive biases into the model that make it easy for the desired behavior to emerge. Concretely, our generator is a combination of learned neural networks and fixed-function blocks, such as a 3D morphable head model and a texture-mapping rasterizer, and we leave it up to the training process to figure out how they should be used together. This greatly simplifies the training problem by removing the need for labeled training data; we learn the distributions of the independent variables that drive the model instead of requiring that their values be known for each training image. Furthermore, we need no contrastive or imitation learning for correct behavior. We show that our design successfully encourages the generative model to make use of its internal, interpretable representations in a semantically meaningful manner. This allows sampling different aspects of the image independently, as well as precise control of the results by manipulating the internal state of the interpretable blocks within the generator. This enables, for instance, facial animation using traditional animation tools.
- Learning High-Risk High-Precision Motion Control
A4 Article in conference proceedings (2022-11-03) Kim, Nam Hee; Kirjonen, Markus; Hämäläinen, Perttu
Deep reinforcement learning (DRL) algorithms for movement control are typically evaluated and benchmarked on sequential decision tasks where imprecise actions can be corrected by later actions, thus allowing high returns despite noisy actions. In contrast, we focus on an under-researched class of high-risk, high-precision motion control problems in which actions carry irreversible outcomes, so that sharp peaks and ridges plague the state-action reward landscape. Using computational pool as a representative example of such problems, we propose and evaluate State-Conditioned Shooting (SCOOT), a novel DRL algorithm that builds on advantage-weighted regression (AWR) with three key modifications: 1) performing policy optimization using only elite samples, allowing the policy to better latch on to the rare high-reward action samples; 2) utilizing a mixture-of-experts (MoE) policy to allow switching between reward landscape modes depending on the state; and 3) adding a distance regularization term and a learning curriculum to encourage exploring diverse strategies before adapting to the most advantageous samples. We showcase our features' performance in learning physically-based billiard shots, demonstrating high action precision and discovering multiple shot strategies for a given ball configuration.
- Online Planning and Control of Physics-Based Skateboarding Animation
Perustieteiden korkeakoulu | Master's thesis (2024-01-22) Mikkola, Elias
This thesis introduces a simulated system for skateboarding, integrating a mechanical approximation of a real-life skateboard into a simplified, simulated skateboard model to which we apply online planning-based control techniques. Our system design focuses on replicating realistic skateboarding movements, using the MuJoCo Model Predictive Control (MJPC) framework and well-validated implementations of model-based planning algorithms, such as iterative Linear-Quadratic Gaussian with Sampling (iLQS). We incorporate a humanoid model to enact the skateboarding maneuvers, formulating skateboarding as a robotic control task, and delegate the control task to reference pose tracking, where subtasks like pushing and steering are solved with hand-crafted heuristic poses and carefully engineered reward functions, without the need for recorded motion capture data. We demonstrate the validity of our approximate skateboard mechanics and our system's performance in replicating the pushing and steering movements. Our experiments with the fundamental Ollie trick demonstrate limited success in going beyond locomotion on flat terrain. Our quantitative and qualitative analyses offer insights into the trade-offs between task efficiency and robustness, and into the performance differences between sampling-based and gradient-based planning algorithms. This work serves as a foundation for future research, providing a baseline and a benchmark environment for various forms of assisted locomotion styles and skill sets.
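The thesis above builds on sampling-based model-predictive control as provided by MJPC. As a rough illustration of how such a receding-horizon planner operates, here is a minimal predictive-sampling loop on a toy 1-D point mass; the dynamics, cost terms, and parameters are invented for this sketch and are not taken from the thesis or from MJPC itself.

```python
# Illustrative predictive-sampling MPC on a toy 1-D double integrator.
# All names and numbers here are assumptions made for the sketch.
import numpy as np

def step(state, u, dt=0.05):
    """Toy dynamics: 1-D point mass. state = (position, velocity), u = force."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + u * dt])

def rollout_cost(state, plan, target=1.0):
    """Cost of an action sequence: track the target, damp velocity, penalize effort."""
    cost = 0.0
    for u in plan:
        state = step(state, u)
        cost += (state[0] - target) ** 2 + 0.1 * state[1] ** 2 + 1e-3 * u ** 2
    return cost

def predictive_sampling(state, nominal, rng, n_samples=64, noise=0.5):
    """Sample action sequences around the nominal plan; keep the cheapest rollout."""
    best_plan, best_cost = nominal, rollout_cost(state, nominal)
    for _ in range(n_samples):
        candidate = nominal + rng.normal(0.0, noise, size=nominal.shape)
        cost = rollout_cost(state, candidate)
        if cost < best_cost:
            best_plan, best_cost = candidate, cost
    return best_plan

# Receding-horizon loop: re-plan, execute the first action, warm-start the next plan.
rng = np.random.default_rng(0)
state, plan = np.zeros(2), np.zeros(10)  # start at rest; 10-step planning horizon
for _ in range(200):
    plan = predictive_sampling(state, plan, rng)
    state = step(state, plan[0])
    plan = np.roll(plan, -1)
    plan[-1] = 0.0
print(f"final position: {state[0]:.2f}")  # should settle near the 1.0 target
```

The same structure scales, in spirit, from this toy system to full humanoid control: only the dynamics model, the action dimensionality, and the cost (here a single tracking term; in the thesis, engineered reward functions for pushing, steering, and pose tracking) change.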