shuhong (shu) chen
陈舒鸿

info
email: shuhong[a]terpmail.umd.edu
web: shuhongchen.github.io
github: ShuhongChen
myanimelist: shuchen
linkedin: link
cv: latest

academic
google scholar: TcGJKGwAAAAJ
orcid: 0000-0001-7317-0068
erdős number: ≤4

shu

I’m a computer science PhD student at the University of Maryland, College Park, advised by Prof. Matthias Zwicker; I entered in 2019 and plan to graduate in 2024.

My research uses graphics and computer vision techniques for anime-style content creation, including animation, illustration, and 3D characters.

I also often collaborate outside academia, with both the tech and anime industries (TikTok, Meta, OLM Digital, Arch Inc., etc.).

I just so happen to like anime; my favorite is Nichijou (2011). To trash on my taste, please see my mal profile or my top anime page.

research

AI-assisted animation. Traditional animation is laborious. To create the expressive motions loved by millions, professional and amateur animators alike face the intrinsic cost of ~12 illustrations per second. As the medium rapidly enters the mainstream, the sheer manual line-mileage demanded continues to increase. This raises the question of whether modern data-driven computer vision methods can automate or assist the creative process. While some work exists for colorization, cleanup, in-betweening, etc., we’re still missing the foundational domain-specific infrastructure to train models at scale; models for illustration tagging, pose estimation, sketch extraction, segmentation, etc. are still in their infancy. By studying animation industry practices, scaling data pipelines, bridging domain gaps, leveraging 3d priors, etc., I hope to uncover what AI can do for animation.

3d character modeling. While 3d human priors are crucial for the animation topic above (form and surface anatomy are animator fundamentals), 3d character modeling itself may also benefit from new techniques. As AR/VR apps and virtual creators become more popular, there will soon be major demand for stylized 3d avatars. But current template-based designers are restrictive, and custom assets still require expert software to create. Recent works use implicit reconstruction, differentiable rendering, 3d pose estimation, etc. to create realistic 3d humans, but comparatively little has been done to address the design challenges of non-photorealistic characters. My work tries to democratize 3d character creation, bringing customizable experiences to the next generation of social interaction.

Deep rendering. I firmly believe graphics is the future of computer vision. As such, I’m also interested in new 3d representations and rendering techniques, whether it be novel ways of solving the rendering equation or the implicit stuff everyone’s so hyped about.
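
For reference, the rendering equation mentioned above (in its standard surface form, after Kajiya 1986) says outgoing radiance is emission plus the BRDF-weighted integral of incoming radiance over the hemisphere:

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i$$

Path tracers estimate this integral with Monte Carlo sampling; the “implicit stuff” instead learns a neural approximation of the scene and queries it during rendering.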