Hi! I am a 2nd-year Ph.D. student at the University of Massachusetts Amherst, Manning College of Information & Computer Sciences,
advised by Prof. Ali Sarvghad and Prof. Ravi Karkar.
My research is at the intersection of HCI, AI, Health, and Visualization. I design, develop, and evaluate AI-enabled tools that address accessibility challenges for people with disabilities and older adults. I am particularly interested in leveraging large multimodal models and conversational agents to make visualizations and everyday digital tasks more accessible and navigable.
I received a B.S. in Computer Science and Engineering from Sungkyunkwan University and completed an exchange program at the University of Texas at Austin.
I have also worked as a Natural Language Processing (NLP) Researcher at Seoul National University Bundang Hospital
and as a Machine Learning (ML) Engineer at Cipherome, Inc.
Description: Headshot of Jasmine, a Korean woman wearing a white collared shirt under a light gray sweater vest, smiling softly against a plain light pink background.
Under Review
Abstract: Large multimodal models (LMMs) are increasingly capable of interpreting visualizations, yet they continue to struggle with spatial reasoning. One proposed strategy is decomposition, which breaks down complex visualizations into structured components. In this work, we examine the efficacy of scalable vector graphics (SVGs) as a decomposition strategy for improving LMMs' performance on floor plan comprehension.
Floor plans serve as a valuable testbed because they combine geometry, topology, and semantics, and their reliable comprehension has real-world applications, such as accessibility for blind and low-vision individuals. We conducted an exploratory study with three LMMs (GPT-4o, Claude 3.7 Sonnet, and Llama 3.2 11B Vision Instruct) across 75 floor plans. Results show that combining SVG with raster input (SVG+PNG) improves performance on spatial understanding tasks but often hinders spatial reasoning, particularly in pathfinding. These findings highlight both the promise and limitations of decomposition as a strategy for advancing spatial visualization comprehension.
Read more
Under Review
Abstract: We introduce agentic accessibility, a conceptual paradigm for graphics accessibility for blind and low-vision (BLV) individuals. In this paradigm, an AI-enabled conversational agent analyzes a graphic, such as a chart, map, or floor plan, and engages in open-ended dialogue with BLV users to support progressive exploration and information-seeking. We situate agentic accessibility within the broader landscape of graphics accessibility and highlight how it complements and extends existing approaches.
Building on this foundation, we propose an initial research agenda spanning the technological, human, and regulatory fronts of agentic accessibility. We then empirically ground the concept through a user study comparing how BLV and sighted participants engaged with a prototype conversational accessibility agent to seek descriptive and navigational information about a residential floor plan. Our findings reveal differences in information-seeking strategies and in participants’ ability to conversationally construct accurate mental representations of a test floor plan.
Read more
26th International Conference on Artificial Intelligence in Education, 2025
Jaewook Lee, Jeongah Lee, Wanyong Feng, Andrew Lan
Abstract: Advances in large language models (LLMs) offer new possibilities for enhancing math education by automating support for both teachers and students. While prior work has focused on generating math problems and high-quality distractors, the role of visualization in math learning remains under-explored.
Diagrams are essential for mathematical thinking and problem-solving, yet manually creating them is time-consuming and requires domain-specific expertise, limiting scalability. Recent research on using LLMs to generate Scalable Vector Graphics (SVG) presents a promising approach to automating diagram creation. Unlike pixel-based images, SVGs represent geometric figures using XML, allowing seamless scaling and adaptability. Educational platforms such as Khan Academy and IXL already use SVGs to display math problems and hints. In this paper, we explore the use of LLMs to generate math-related diagrams that accompany textual hints via intermediate SVG representations. We address three research questions: (1) how to automatically generate math diagrams in problem-solving hints and evaluate their quality, (2) whether SVG is an effective intermediate representation for math diagrams, and (3) what prompting strategies and formats are required for LLMs to generate accurate SVG-based diagrams. Our contributions include defining the task of automatically generating SVG-based diagrams for math hints, developing an LLM prompting-based pipeline, and identifying key strategies for improving diagram generation. Additionally, we introduce a Visual Question Answering-based evaluation setup and conduct ablation studies to assess different pipeline variations. By automating math diagram creation, we aim to provide students and teachers with accurate, conceptually relevant visual aids that enhance problem-solving and learning experiences.
Read more
PDF
IEEE Network Magazine, 2023
Yoseop Joseph Ahn, Minje Kim, Jeongah Lee, Yiwen Shen, Jaehoon Paul Jeong
Abstract: This paper proposes an Internet-of-Things (IoT) Edge-Empowered Cloud System (called IoT Edge-Cloud) for the visual control of IoT devices on a user's smartphone. This system combines existing technologies (e.g., DNSNA, SALA, SmartPDR, and PF-IPS) for DNS naming and indoor localization to support the visual control of IoT devices.
For the visual control of IoT devices, the IoT devices register their auto-generated DNS names and the corresponding IPv6 addresses with the IoT Edge-Cloud. Each DNS name embeds an IoT device's type (e.g., fire sensor, television, refrigerator, or air conditioner) and its location information, which is obtained through an Indoor Positioning System (IPS). With the DNS name, a user's smartphone can display each IoT device and its location in an indoor space (e.g., home, office, or classroom), so that the IoT device can be located on the smartphone's screen. Through performance evaluation, this paper proposes a localization scheme for a smartphone with an average localization error of 1.08 meters. It also proposes a localization scheme for IoT devices (especially at the center area of a testbed) with an average localization error of 1.11 meters.
Read more
PDF
Preparing participatory design study
Data analysis after user study
Preliminary testing
• [2024 Fall] CICS 110 Foundations of Programming
• [2025 Spring] CS 383 Artificial Intelligence
• [2025 Summer] CS 571 Data Visualization and Exploration
• [2025 Fall] INFO 348 Data Analytics with Python
• As a member of the Digital Health Care Research Team, developed a model that predicts lung cancer TNM stage using an Electronic Health Record (EHR) dataset
• Fine-tuned Large Language Models (LLMs) in resource-restricted settings, optimizing performance while employing prompt engineering techniques
• As a member of the Advanced Research Team, developed the pipeline for the machine learning module within a clinician-focused medical data analysis platform
• Conducted research on patient clustering, leveraging Common Data Model (CDM) data
• Designed wireframes for comprehensive UI/UX enhancements to improve the overall user experience
• Led projects on Semantic Text Similarity, Relation Extraction, Open-Domain Question Answering, and Chatbot Development tasks
• Supervised by Prof. Jaehoon Paul Jeong (Internet-of-Things (IoT) Lab)
• Developed an application that manages smart devices through visualization
• Improved location tracking accuracy by 32–39% by combining Smart Pedestrian Dead-Reckoning (SmartPDR) and a Particle Filter-based Indoor Positioning System (PF-IPS)
• 3rd Place (Grand Prize), Chung-Ang University AI and Humanities Academic Paper Contest | Jan 2023
• 1st Place (Grand Prize), Kookmin University Self-Driving Contest | Nov 2021
• 3rd Place (Grand Prize), Sungkyunkwan University AI x Bookathon Contest | Jan 2021
• Volunteering Excellence Prize, NIA (National Information Society Agency) | Dec 2020
• Academic Excellence Scholarship (top 12%) | 2022
• Creative Scholarship (100% tuition support) | 2021
• Sungkyun Software Scholarship (100% tuition support) | 2019
• MegastudyEdu Scholarship (external) | 2019