Stony Brook University

University Libraries STEM Speaker Series

First Lecture: Vasudha Varadarajan, PhD Candidate, Department of Computer Science

Title: "Modeling Individual Cognitive Patterns and Mental Health from Language"

Language is a powerful means of communicating to the world, and to ourselves, what we are thinking about and why we behave in certain ways. Associating linguistic patterns with specific cognitive patterns and mental states can be an insightful tool for understanding individuals' unique cognitive styles: it can help characterize implicit behavioral traits such as a person's judgment, decision-making, and reasoning skills, and it can also help model complex mental health conditions such as depression, PTSD, anxiety, and mood disorders, supporting diagnosis and therapy that improve well-being. In this talk, I will introduce novel techniques for training precise models on social media discourse to capture individual cognitive styles. I will then present my work on language-based assessments for mental health, demonstrating an improved methodology for eliciting and capturing relevant discourse in mental health assessment.

Biography:

Vasudha Varadarajan is a PhD candidate working in natural language processing and the relationships between cognition, mental health, and language. She works with Prof. H. Andrew Schwartz at the State University of New York at Stony Brook. She applies NLP and data science techniques to problems in computational social science, studying psychological phenomena through language use at scale and improving conversational AI for psychometric surveys. She is also interested in the interpretability and explainability of language models and in NLP for low-resource settings.


Date: Tuesday, February 25, 2025

Time: 1pm-2pm

Location: Special Collections Seminar Room, E-2340, second floor of the Melville Library

Register here!

Second Lecture: Dr. Haibin Ling, Department of Computer Science

Title: "Intelligent Projector Systems for Spatial Augmented Reality" 

The rapid advancement of imaging techniques and artificial intelligence has revolutionized research and applications in visual intelligence (VI). In this talk, I will present our studies covering a broad range of topics in VI, including visual recognition, video understanding, visual enhancement, and relevant machine learning techniques, with applications in virtual/augmented reality, biomedical research, and more.  

I will then present our recent work applying AI to projector systems for spatial augmented reality tasks. In particular, image-based relighting, projector compensation, and depth/normal reconstruction are three important tasks for projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, traditionally they are addressed independently, sometimes with different prerequisites, devices, and sampling images. In practice, addressing them one by one can be cumbersome for SAR applications. In this talk, we propose a novel end-to-end trainable model named DeProCams that explicitly learns the photometric and geometric mappings of ProCams; once trained, DeProCams can be applied simultaneously to all three tasks. DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attribute estimation, rough direct-light estimation, and photorealistic neural rendering. In our experiments, DeProCams shows clear advantages over prior methods, with promising quality while being fully differentiable. Moreover, by solving the three tasks in a unified model, DeProCams obviates the need for additional optical devices, radiometric calibrations, and structured light patterns. This is joint work with Bingyao Huang.
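For readers curious what such a unified pipeline might look like in code, below is a minimal, hedged sketch in PyTorch of chaining three learned subprocesses into one end-to-end trainable model. The class names (ShadingNet, DirectLightNet, NeuralRenderer, DeProCamsSketch) and the tiny convolutional layers are illustrative assumptions for exposition only, not the actual DeProCams architecture.

# Conceptual sketch (not the authors' code): compose three learned
# subprocesses -- shading attribute estimation, rough direct-light
# estimation, and neural rendering -- into one end-to-end trainable model.
# All class names and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class ShadingNet(nn.Module):
    """Estimates per-pixel shading attributes from a camera-captured scene image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, cam_img):
        return self.net(cam_img)

class DirectLightNet(nn.Module):
    """Roughly estimates the direct light contributed by the projector input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, prj_img):
        return self.net(prj_img)

class NeuralRenderer(nn.Module):
    """Fuses shading attributes and direct light into a photorealistic prediction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, shading, direct_light):
        return self.net(torch.cat([shading, direct_light], dim=1))

class DeProCamsSketch(nn.Module):
    """End-to-end composition: scene image + projector image -> predicted camera capture."""
    def __init__(self):
        super().__init__()
        self.shading = ShadingNet()
        self.direct = DirectLightNet()
        self.render = NeuralRenderer()
    def forward(self, cam_img, prj_img):
        return self.render(self.shading(cam_img), self.direct(prj_img))

# Because every stage is differentiable, a single photometric reconstruction
# loss can train all three subprocesses jointly; this is the property that
# lets one trained model serve relighting, compensation, and reconstruction.
model = DeProCamsSketch()
pred = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
loss = nn.functional.mse_loss(pred, torch.rand(1, 3, 64, 64))
loss.backward()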

Biography:

Haibin Ling received the B.S. and M.S. degrees from Peking University in 1997 and 2000, respectively, and the Ph.D. degree from the University of Maryland, College Park, in 2006. From 2000 to 2001, he was an assistant researcher at Microsoft Research Asia. From 2006 to 2007, he was a postdoctoral scientist at the University of California, Los Angeles. In 2007, he joined Siemens Corporate Research as a research scientist; from 2008 to 2019, he was an Assistant Professor and then Associate Professor at Temple University. In fall 2019, he joined Stony Brook University as a SUNY Empire Innovation Professor in the Department of Computer Science. His research interests include computer vision, augmented reality, medical image analysis, machine learning, and human-computer interaction. He received the Best Student Paper Award at ACM UIST (2003), the Best Journal Paper Award at IEEE VR (2021), the NSF CAREER Award (2014), the Yahoo Faculty Research and Engagement Award (2019), and the Amazon Machine Learning Research Award (2019). He serves or has served as an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), IEEE Transactions on Visualization and Computer Graphics (TVCG), Computer Vision and Image Understanding (CVIU), and Pattern Recognition (PR), and has served multiple times as an Area Chair for CVPR, ICCV, ECCV, and WACV. He is a Fellow of the IEEE.


Date: Tuesday, April 8, 2025

Time: 1pm-2pm

Location: Online

Register here!