shuhong (shu) chen
陈舒鸿


info
email: shuhong[a]cs.umd.edu
web: shuhongchen.github.io
github: ShuhongChen
myanimelist: shuchen
linkedin: link
cv: latest

academic
google scholar: TcGJKGwAAAAJ
orcid: 0000-0001-7317-0068
erdős number: ≤4

shu

I’m shu, a computer science PhD student at the University of Maryland, College Park, advised by Prof. Matthias Zwicker; I entered in 2019 and plan to graduate in 2024.

My research uses graphics and computer vision techniques for non-photorealistic content creation, with emphasis on animated, illustrated, and 3D characters.

I like anime; my favorite is Nichijou (2011). To trash on my taste, please see my mal profile.

This site is incomplete; for more shu, please see the latest cv.

research

[sauce] [model]

AI-assisted animation. Traditional animation is laborious. To create the expressive motions loved by millions, professional and amateur animators alike face the intrinsic cost of ~12 illustrations per second. As the medium rapidly enters the mainstream, the sheer line-mileage demanded has even led to incredibly unfortunate work standards. This raises the question of whether modern data-driven computer vision methods can offer automation without losing creative control. While some work exists for colorization, cleanup, in-betweening, etc., we’re still missing the foundational domain-specific infrastructure to train models at scale: models for illustration tagging, pose estimation, sketch extraction, segmentation, etc. are still in their infancy. By studying animation industry practices, scaling pipelines, collecting data, bridging domain gaps, leveraging 3d priors, etc., I hope to make AI-assisted animation a reality.

[sauce] [sauce]

3d character modeling. While 3d human priors are crucial for the animation topic above (form and surface anatomy are animator fundamentals), 3d character modeling itself may also benefit from new techniques. As AR/VR apps and virtual creators become more popular, there will soon be major demand for stylized 3d avatars. But current template-based designers are restrictive, and custom assets still require expert software to create. Recent works use implicit reconstruction, differentiable rendering, 3d pose estimation, etc. to create realistic 3d humans, but comparatively little has been done to suit the design challenges of non-photorealistic characters. My work tries to democratize 3d character creation, bringing customizable experiences to the next generation of social interaction.

[source]

Deep rendering. I firmly believe graphics is the future of computer vision. As such, I’m also interested in new 3d representations and rendering techniques, whether it be novel ways of solving the rendering equation or the implicit stuff everyone’s so hyped about.
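For context, the equation in question is Kajiya's rendering equation, which both classical solvers (path tracing, radiosity) and newer neural/implicit methods approximate in some form:

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Outgoing radiance $L_o$ at surface point $x$ in direction $\omega_o$ equals emitted radiance $L_e$ plus incoming radiance $L_i$ integrated over the hemisphere $\Omega$, weighted by the BRDF $f_r$ and the cosine foreshortening term $(\omega_i \cdot n)$.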