Research Projects


Wearables in Neurodegenerative Disease Detection

Date:



While prior research on wearable-based behavioral sensing for Mild Cognitive Impairment (MCI) has primarily focused on analyzing walking patterns in controlled environments, recent efforts in human activity recognition have turned toward quantifying kitchen activities (an instrumental activity of daily living), because visuospatial deficits in MCI undermine functional independence during such tasks. This study investigates the use of wrist-worn and eye-tracking wearable sensors to quantify kitchen activities in individuals with MCI. We collected multimodal datasets from 19 older adults (11 with MCI and 8 with normal cognition) while they prepared a yogurt bowl. Our multimodal analysis model distinguished older adults with MCI from those with normal cognition with a 74% F1 score. Feature importance analysis associated weaker upper-limb motor function and delayed eye movements with cognitive decline, consistent with previous findings in MCI research. This pilot study demonstrates the feasibility of monitoring behavioral markers of MCI in daily living settings and calls for larger-scale validation in individuals' home environments.
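Below is a minimal sketch of what such a classification and feature-importance analysis could look like, assuming hand-crafted wrist-motion and eye-tracking features, a random-forest classifier, and leave-one-subject-out cross-validation. The feature names, model choice, and evaluation protocol are illustrative assumptions rather than the exact pipeline used in the study, and the data is a placeholder.

```python
# Illustrative sketch: MCI vs. normal cognition from multimodal wearable features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import f1_score

# Hypothetical per-participant features from a wrist IMU and an eye tracker.
feature_names = [
    "wrist_accel_energy",          # upper-limb motor activity
    "wrist_movement_smoothness",
    "saccade_latency_ms",          # delayed eye movements
    "fixation_duration_ms",
    "task_completion_time_s",
]

rng = np.random.default_rng(0)
X = rng.normal(size=(19, len(feature_names)))   # 19 participants (placeholder data)
y = np.array([1] * 11 + [0] * 8)                # 1 = MCI, 0 = normal cognition

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Leave-one-subject-out cross-validation, a common choice for small cohorts.
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
print("F1 score:", f1_score(y, y_pred))

# Feature importances from a model fit on all data (illustrative only).
clf.fit(X, y)
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```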


Integrating Multi-Modality in All-round LLM-based Recommender System

Date:

For more details, check out my note on Notion: [link]


I have investigated state-of-the-art LLM-based recommender systems, with a particular focus on my mentor's project, A-LLMRec. I now understand how to construct an efficient LLM framework for downstream recommendation tasks without fine-tuning the LLM, by stably blending pretrained CF-RecSys embeddings with natural language embeddings. I have gained insight into creating joint collaborative item-text embeddings with an autoencoder while avoiding over-smoothed representations. Additionally, I have devised an alignment network that robustly aligns item embeddings from the CF-based RecSys with the token space of the LLM. Furthermore, I have learned how to design LLM prompts that incorporate modality information and integrate collaborative knowledge with recommendation instructions, again without fine-tuning the LLM.
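As a rough illustration of the alignment idea, the sketch below projects frozen CF-RecSys item embeddings into the LLM's token-embedding space so they can be injected into the prompt as soft tokens. The module name, dimensions, and layer sizes are my own illustrative assumptions, not the exact A-LLMRec architecture.

```python
# Illustrative sketch of an alignment network: frozen CF item embeddings are
# projected into the LLM token-embedding space and used as soft prompt tokens.
import torch
import torch.nn as nn

class ItemAlignmentNetwork(nn.Module):
    def __init__(self, cf_dim: int = 64, llm_dim: int = 4096, hidden: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(cf_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, llm_dim),
        )

    def forward(self, cf_item_emb: torch.Tensor) -> torch.Tensor:
        # (batch, num_items, cf_dim) -> (batch, num_items, llm_dim)
        return self.proj(cf_item_emb)

# Usage: align user-history item embeddings, then concatenate them with the
# LLM's text-token embeddings; only the alignment network is trained.
align = ItemAlignmentNetwork()
cf_history = torch.randn(2, 10, 64)   # 2 users, 10 interacted items each
soft_tokens = align(cf_history)       # ready to prepend to prompt embeddings
print(soft_tokens.shape)              # torch.Size([2, 10, 4096])
```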

Building upon the findings of this study, I have enhanced the framework to seamlessly incorporate multi-modal data such as item images. Instead of a single item encoder trained with a matching loss against the item text-description encoder, I implemented a cross-attention mechanism and contrastive learning to effectively integrate multi-modality between item embeddings and metadata embeddings. This new integrated item encoder produces better embeddings for soft prompts in LLM recommendation tasks.
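The sketch below shows one way such a multimodal item encoder could combine cross-attention with an in-batch contrastive (InfoNCE-style) loss. All module names, dimensions, and the exact loss formulation are illustrative assumptions rather than the implemented design.

```python
# Illustrative sketch: item embeddings attend over metadata (e.g., image)
# embeddings via cross-attention; a contrastive loss pulls matched pairs together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalItemEncoder(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, item_emb: torch.Tensor, meta_emb: torch.Tensor) -> torch.Tensor:
        # item_emb: (batch, 1, dim) queries; meta_emb: (batch, num_meta, dim) keys/values
        fused, _ = self.cross_attn(item_emb, meta_emb, meta_emb)
        return self.norm(item_emb + fused).squeeze(1)    # (batch, dim)

def contrastive_loss(item_emb: torch.Tensor, meta_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # In-batch InfoNCE: the matching item/metadata pair is the positive.
    item_emb = F.normalize(item_emb, dim=-1)
    meta_emb = F.normalize(meta_emb, dim=-1)
    logits = item_emb @ meta_emb.t() / temperature
    labels = torch.arange(item_emb.size(0), device=item_emb.device)
    return F.cross_entropy(logits, labels)

encoder = MultimodalItemEncoder()
items = torch.randn(8, 1, 256)     # 8 items, one embedding each
images = torch.randn(8, 3, 256)    # 3 metadata tokens (e.g., image patches) per item
fused = encoder(items, images)
loss = contrastive_loss(fused, images.mean(dim=1))
print(fused.shape, loss.item())
```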


Explainable Deep Clustering for Financial Customer Profiling

Date:

For more details, check out my note on Notion: [link]


This project began as an entry in the NH Investment & Securities Big Data Competition, on the topic of Advanced Customer Profiling and Personalized Investment Portfolio Curation; my teammates and I then extended it into an academic research initiative using high-dimensional cross-sectional data from the Korea Institute of Public Finance.

Effective customer segmentation, and the communication of its findings to non-experts, is a pressing task in the financial services sector with the potential for widespread application. This study employs a three-stage dimension reduction and clustering technique to segment a large, high-dimensional dataset, emphasizing explainability and intuitive visualization. We present the high-dimensional data and feature set using novel network-based visualization methods and identify the optimal configuration of the multi-stage process. Finally, we derive investment portfolios for each segment to demonstrate an expert-system application in financial investment advisory and to underscore the importance of explainable segmentation.
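As a rough sketch of such a multi-stage pipeline, the example below chains a linear compression stage, a non-linear 2-D embedding for visualization, and a clustering stage whose number of segments is chosen by silhouette score. The specific stages (PCA, t-SNE, k-means) and the selection criterion are illustrative assumptions, not the exact method of the paper, and the data is a placeholder.

```python
# Illustrative multi-stage dimension-reduction-and-clustering pipeline
# for customer segmentation.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 120))   # placeholder high-dimensional customer data

# Stage 1: linear compression of the standardized feature space.
X_pca = PCA(n_components=30, random_state=0).fit_transform(
    StandardScaler().fit_transform(X))

# Stage 2: non-linear embedding for intuitive 2-D visualization.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X_pca)

# Stage 3: clustering, with the number of segments chosen by silhouette score.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_2d)
    score = silhouette_score(X_2d, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(f"Selected {best_k} segments (silhouette = {best_score:.3f})")
```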

This paper was published in Engineering Applications of Artificial Intelligence (EAAI), Vol. 128.