# Will Wang
**Source**: https://homes.cs.washington.edu/~wwill/
## Huichen Will Wang

I'm a third-year Computer Science PhD student at the University of Washington. My work sits at the intersection of information visualization, human-computer interaction, and generative AI. I am fortunate to be co-advised by [Jeffrey Heer](https://homes.cs.washington.edu/~jheer/bio/) and [Leilani Battle](https://homes.cs.washington.edu/~leibatt/bio.html).

Prior to UW, I did my undergrad at Amherst College, where I had many great mentors, including [Cindy Bearfield](https://cyxiong.com/), [Kate Follette](https://www.amherst.edu/people/facstaff/kfollette), and [Lee Spector](https://lspector.github.io/).

[Email](mailto:wwill@cs.washington.edu) / [Google Scholar](https://scholar.google.com/citations?user=P8MF9AQAAAAJ&hl=en) / [Twitter](https://x.com/will_wang_whc) / [LinkedIn](https://www.linkedin.com/in/huichenwang/)

## Research

My research focuses on developing and evaluating human-centric tools for information visualization and data science.

*\* denotes equal contribution.*

### [ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning](https://thinkmorph.github.io/)

[Jiawei Gu\*](https://scholar.google.com/citations?user=7p8yEHAAAAAJ&hl=zh-CN), [Yunzhuo Hao\*](https://scholar.google.com/citations?user=0SPl0hcAAAAJ&hl=zh-CN&oi=ao), **Huichen Will Wang\***, [Linjie Li\*](https://www.microsoft.com/en-us/research/people/linjli/), [Michael Qizhe Shieh](https://michaelshieh.com/), [Yejin Choi](https://yejinc.github.io/), [Ranjay Krishna](https://www.ranjaykrishna.com/index.html), [Yu Cheng](https://ych133.github.io/)

*International Conference on Learning Representations (ICLR)*, 2026

[Project Page](https://thinkmorph.github.io/) / [PDF](https://arxiv.org/pdf/2510.27492) / [arXiv](https://arxiv.org/abs/2510.27492)

We contribute ThinkMorph, a unified model fine-tuned on approximately 24K high-quality multimodal interleaved reasoning traces. ThinkMorph generates progressive text-image reasoning steps that manipulate visual content while maintaining coherent verbal logic, outperforming strong baselines.

### [FullFront: Benchmarking MLLMs Across the Full Front-End Engineering Workflow](https://arxiv.org/pdf/2505.17399)

[Haoyu Sun](https://scholar.google.com/citations?user=P2h7aTUAAAAJ&hl=zh-CN&oi=sra), **Huichen Will Wang**, [Jiawei Gu](https://scholar.google.com/citations?user=7p8yEHAAAAAJ&hl=zh-CN), [Linjie Li](https://www.microsoft.com/en-us/research/people/linjli/), [Yu Cheng](https://ych133.github.io/)

*arXiv preprint*, 2025

[PDF](https://arxiv.org/pdf/2505.17399) / [arXiv](https://arxiv.org/abs/2505.17399) / [code](https://github.com/Mikivishy/FullFront)

We contribute FullFront, a benchmark spanning the full front-end development pipeline: Webpage Design, Webpage Perception, and Webpage Code Generation.

### [Do You "Trust" This Visualization? An Inventory to Measure Trust in Visualizations](https://arxiv.org/pdf/2503.17670)

**Huichen Will Wang**, [Kylie Lin](https://phogzone.com/), [Andrew Cohen](https://www.umass.edu/psychological-brain-sciences/about/directory/andrew-cohen), [Ryan Kennedy](https://ryanpkennedy.weebly.com/), [Zach Zwald](https://uh.edu/class/political-science/faculty-and-staff/professors/zwald/), [Carolina Nobre](https://carolinanobre.com/), [Cindy Xiong Bearfield](https://cyxiong.com/)

*IEEE Transactions on Visualization and Computer Graphics*, 2026

[PDF](https://homes.cs.washington.edu/~wwill/papers/Trust%20in%20Vis%20Inventory.pdf) / [arXiv](https://arxiv.org/abs/2503.17670)

Through Exploratory Factor Analysis, we derive an operational definition of trust in visualizations and contribute an inventory to measure it.
### [Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations](https://arxiv.org/pdf/2502.13902)

[Minsuk Chang](https://minsukchang.info/), [Yao Wang](https://marcwong.github.io/), **Huichen Will Wang**, [Andreas Bulling](https://www.collaborative-ai.org/people/bulling/), [Cindy Xiong Bearfield](https://cyxiong.com/)

*Eurographics Conference on Visualization (EuroVis Short Paper)*, 2025

[PDF](https://arxiv.org/pdf/2502.13902) / [arXiv](https://arxiv.org/abs/2502.13902) / [code](https://github.com/jangsus1/Grid-Labeling)

We introduce Grid Labeling, a novel annotation method for collecting task-specific "saliency" on visualizations.

### [Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark](https://emma-benchmark.github.io/)

[Yunzhuo Hao\*](https://scholar.google.com/citations?user=0SPl0hcAAAAJ&hl=zh-CN&oi=ao), [Jiawei Gu\*](https://scholar.google.com/citations?user=7p8yEHAAAAAJ&hl=zh-CN), **Huichen Will Wang\***, [Linjie Li\*](https://www.microsoft.com/en-us/research/people/linjli/), [Zhengyuan Yang](https://zyang-ur.github.io/), [Lijuan Wang](https://www.microsoft.com/en-us/research/people/lijuanw/), [Yu Cheng](https://ych133.github.io/)

*International Conference on Machine Learning (ICML)*, 2025 **(Oral Presentation, top 0.9%)**

[Project Page](https://emma-benchmark.github.io/) / [PDF](https://arxiv.org/pdf/2501.05444) / [arXiv](https://arxiv.org/abs/2501.05444)

We contribute EMMA (Enhanced MultiModal reAsoning), a benchmark targeting organic multimodal reasoning across mathematics, physics, chemistry, and coding. State-of-the-art models struggle substantially on it.
### [Jupybara: Operationalizing a Design Space for Actionable Data Analysis and Storytelling with LLMs](https://dl.acm.org/doi/pdf/10.1145/3706598.3713913)

**Huichen Will Wang**, [Larry Birnbaum](https://www.mccormick.northwestern.edu/research-faculty/directory/profiles/birnbaum-larry.html), [Vidya Setlur](https://www.vidyasetlur.com/)

*Conference on Human Factors in Computing Systems (CHI)*, 2025

[PDF](https://dl.acm.org/doi/pdf/10.1145/3706598.3713913) / [code](https://github.com/wwwhhhccc/jupybara)

We synthesize a design space for actionable exploratory data analysis and storytelling, operationalizing it through Jupybara, a Jupyter Notebook plugin featuring a multi-agent system. PSA: **Jupybara** = **Jupy**ter Notebook + Capy**bara**

### [How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts](https://arxiv.org/pdf/2408.06837)

**Huichen Will Wang**, [Jane Hoffswell](https://jhoffswell.github.io/#About), Sao Myat Thazin Thane, [Victor S. Bursztyn](https://research.adobe.com/person/victor-s-bursztyn/), [Cindy Xiong Bearfield](https://cyxiong.com/)

*IEEE Transactions on Visualization and Computer Graphics (Proceedings of VIS)*, 2025

[PDF](https://arxiv.org/pdf/2408.06837) / [arXiv](https://arxiv.org/abs/2408.06837)

Human chart takeaways are sensitive to design choices in a visualization. LLMs struggle to replicate this sensitivity, often generating takeaways that don't match human interpretation patterns.
### [DracoGPT: Extracting Visualization Design Preferences from Large Language Models](https://arxiv.org/pdf/2408.06845)

**Huichen Will Wang**, [Mitchell Gordon](https://mgordon.me/), [Leilani Battle](https://homes.cs.washington.edu/~leibatt/bio.html), [Jeffrey Heer](https://homes.cs.washington.edu/~jheer/bio/)

*IEEE Transactions on Visualization and Computer Graphics (Proceedings of VIS)*, 2025

[PDF](https://arxiv.org/pdf/2408.06845) / [arXiv](https://arxiv.org/abs/2408.06845) / [code](https://github.com/wwwhhhccc/DracoGPT)

We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. The design preferences of LLMs diverge from guidelines drawn from human-subjects experiments.

Website design adapted from [Jon Barron](https://jonbarron.info/).