Friday, March 26, 2021

PhD Dissertation Defense Raniero Lara Garduno March 12, 2021

PhD Dissertation Defense

Friday, March 12, 2021                                                         
Virtual Defense via Zoom

Title: Machine Learning and Digital Sketch Recognition Methods to Support Neuropsychological Diagnosis and Identification of Cognitive Decline

Abstract: With approximately 15 to 20 percent of adults aged 65 and older living with Mild Cognitive Impairment (MCI), researchers in neuropsychology have placed increasing emphasis on early detection to best preserve quality of life. This dissertation presents digital diagnosis tools that adapt existing neuropsychological tests and fully automate what is otherwise a subjective process requiring domain expertise. We present the first fully-automated Rey-Osterrieth Complex Figure grader that can recognize all 18 grading details using a series of agent-based graph traversal algorithms combined with a modified template-matching gesture recognition model. We also present one of the first systems to recognize MCI on digitized Trail-Making tests by combining machine learning methods with digital sketch recognition.
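For readers unfamiliar with template-matching gesture recognition, the sketch below illustrates the general idea behind that family of recognizers (resample a stroke, normalize it, and compare it against stored templates). It is a minimal illustrative example, not the grader described in the dissertation; the function names and the 64-point resampling are assumptions.

```python
import numpy as np

def resample(points, n=64):
    """Resample a stroke (list of (x, y) points) to n evenly spaced points."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    xs = np.interp(targets, cum, pts[:, 0])
    ys = np.interp(targets, cum, pts[:, 1])
    return np.stack([xs, ys], axis=1)

def normalize(pts):
    """Translate to the centroid and scale to a unit bounding box."""
    pts = pts - pts.mean(axis=0)
    extent = pts.max(axis=0) - pts.min(axis=0)
    return pts / max(extent.max(), 1e-9)

def match(stroke, templates):
    """Return the label of the template with the smallest mean point distance."""
    query = normalize(resample(stroke))
    best_label, best_dist = None, float("inf")
    for label, tmpl in templates.items():
        cand = normalize(resample(tmpl))
        dist = float(np.mean(np.linalg.norm(query - cand, axis=1)))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist
```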


Biography:  Raniero specializes in the intersection between neuropsychology and digital sketching, and more broadly in how subjects behave when interacting with digitized examinations. He has studied the effects of cognitive decline on touch tablets, stylus input, digitized paper-and-pencil sketching, and tests integrating augmented reality. He hopes to see technology improve such that Mild Cognitive Impairment can be detected early through the analysis of how people interact with various digital input modalities in testing and everyday life.



Advisor: Dr. Tracy Hammond

Thursday, March 11, 2021

SRL MS Thesis Defense of Duc Hoang March 2, 2021

MS Thesis Defense

Tuesday, March 2, 2021                                                         
Virtual Defense via Zoom
Title: 3M-Pose: Multi-resolution, Multi-path and Multi-output neural architecture search for bottom-up pose prediction


Abstract: Human pose estimation is a challenging computer vision task and often hinges on carefully handcrafted architectures. This thesis aims to be the first to apply Neural Architecture Search (NAS) to automatically design a bottom-up, one-stage human pose estimation model with significantly lower computational cost and smaller model size than existing bottom-up approaches. Our framework, dubbed 3M-Pose, co-searches and co-trains with a novel building block, Early Escape Layers (EELs), producing natively modular architectures that are optimized to support dynamic inference for even lower average computational cost. To flexibly explore the fine-grained spectrum between performance and computational budget, we propose Dynamic Ensemble Gumbel Softmax (Dyn-EGS), a novel approach to sampling micro and macro search spaces that allows varying numbers of operators and inputs to be individually selected for each cell. We additionally enforce a computational constraint with student-teacher guidance to avoid the trivial search collapse caused by the pursuit of lightweight models. Experiments show that 3M-Pose finds models of drastically superior speed and efficiency compared to existing works, reducing computational cost by up to 93% and parameter size by up to 75% at the cost of a minor loss in performance.
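As background on the Gumbel-Softmax sampling that Dyn-EGS builds on, the sketch below shows how a standard, single-operator Gumbel-Softmax choice over candidate operations can be written in PyTorch. Dyn-EGS itself additionally allows a varying number of operators and inputs per cell; that extension is not shown here, and the class and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelOpChoice(nn.Module):
    """Pick one of several candidate operations per forward pass using a
    straight-through Gumbel-Softmax sample over learnable architecture logits."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture logits

    def forward(self, x, tau=1.0):
        # Hard (one-hot) sample that stays differentiable w.r.t. alpha
        # via the straight-through estimator.
        weights = F.gumbel_softmax(self.alpha, tau=tau, hard=True)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```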

Biography:  Duc is an aspiring graduate researcher hailing from Vietnam. He has specialized in the application of Machine Learning to Computer Vision tasks. He is currently pursuing a Master's degree in Computer Science at Texas A&M and will later pursue a Ph.D. in Electrical and Computer Engineering at the University of Texas at Austin.

Advisor: Dr. Tracy Hammond

Monday, January 11, 2021

New Diversity and Inclusion Discussion Groups


Join the Institute for Engineering Education and Innovation (IEEI) every Thursday this semester for two literature discussion groups on improving diversity and inclusion in the classroom. Each week, students and faculty will cover one chapter from Uncomfortable Conversations with a Black Man by Emmanuel Acho and discuss what they think faculty could do to create a more inclusive classroom.
The student group will meet virtually every Thursday at 3:00 PM and is open to undergraduate and graduate students. To join the group, students will need to register at https://tx.ag/studentdiscussion.
The faculty group will meet virtually every Thursday at 4:00 PM and is open to all faculty members. To join the group, faculty will need to register at https://tx.ag/literature.
IEEI and our partners will provide a free physical copy of the book to all participants, and no advance reading is required.
More information can be found at ieei.tamu.edu/programs/discussions.

Friday, November 29, 2019

SRL PhD Dissertation defense Blake Williford. Friday December 13. Title: Exploring Methods for Holistically Improving Drawing Ability with Artificial Intelligence

Dissertation Defense
Friday, December 13, 2019
2:00pm Engineering Activities Building C, Room 112C

Title: Exploring Methods for Holistically Improving Drawing Ability with Artificial Intelligence

Abstract: This dissertation explores various methods for improving drawing ability with intelligent systems. It focuses not only on improvements in motor function and perception related to drawing but also on cultivating a healthy psychology of confidence and motivation around learning to draw by utilizing real-time feedback and game mechanics. This defense will cover various technology probes and their efficacy in improving drawing ability, a framework for motivating students with sketch-based gameplay, a novel perspective-accuracy recognition algorithm and intelligent user interface, an exploration of improving drawing creativity with ambiguous stimuli, and insights into the benefits and limitations of integrating these tools directly into existing curricula and pedagogy.

Biography: Blake Williford is a designer and technologist with a passion for empowering others with technology. After receiving an MS in Human-Computer Interaction from Georgia Tech, he came to Aggieland to pursue a PhD with Dr. Tracy Hammond and continue following his passion for educational technology. His award-winning design and research has explored methods for improving drawing ability with artificial intelligence. As a prolific design and innovation consultant, Blake has worked with more than 40 companies over the past 10 years, including recent stints with Microsoft Research and Adobe Research.

Adviser: Dr. Tracy Hammond

Thursday, June 20, 2019

SRL PhD Dissertation defense Paul Taele. Thursday July 18. Title: A Sketch Recognition-Based Intelligent Tutoring System for Richer Instructor-Like Feedback on Chinese Characters

Dissertation Defense
Friday, September 27, 2019
2:00pm Engineering Activities Building C, Room 112C

Title: A Sketch Recognition-Based Intelligent Tutoring System for Richer Instructor-Like Feedback on Chinese Characters 

Abstract: Students wishing to achieve strong fluency in East Asian languages such as Chinese and Japanese must master various language skills, including reading and writing those languages' non-phonetic Chinese characters. For students with primary fluency in English who are learning an East Asian language as a foreign language, mastering the written component is challenging due to vast linguistic differences in reading and writing. In this dissertation, I developed a sketch recognition-based intelligent tutoring system that provides richer assessment and feedback emulating human language instructors, specifically for novice students' introductory course study of East Asian languages and their written Chinese characters. The system relies on various sketch recognition heuristics to evaluate students' writing technique for introductory Chinese characters through features such as metric scores and visual animations. From evaluating the proposed system with instructor feedback for classroom students and self-study learners, I provide a stylus-driven solution for novice language students to study and practice introductory Chinese characters with deeper levels of assessment, so that they may have richer feedback to improve their writing performance.
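To make the idea of heuristic metric scores concrete, the sketch below shows one simple way such scores could be computed by comparing a student's strokes to a reference character (stroke count and overall stroke direction). It is an assumed illustration, not the heuristics used in the dissertation, and all names are hypothetical.

```python
import numpy as np

def stroke_direction(stroke):
    """Unit vector from the first to the last point of a stroke."""
    v = np.asarray(stroke[-1], float) - np.asarray(stroke[0], float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def technique_scores(student_strokes, reference_strokes):
    """Simple per-character metric scores in [0, 1], assuming strokes are
    drawn in order: how close the stroke count is to the reference, and how
    well each stroke's overall direction matches the reference stroke."""
    direction_scores = [
        max(0.0, float(np.dot(stroke_direction(s), stroke_direction(r))))
        for s, r in zip(student_strokes, reference_strokes)
    ]
    count_score = (min(len(student_strokes), len(reference_strokes))
                   / max(len(student_strokes), len(reference_strokes), 1))
    return {
        "stroke_count": count_score,
        "mean_direction": float(np.mean(direction_scores)) if direction_scores else 0.0,
    }
```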

Biography: Paul Taele is currently a doctoral candidate studying Computer Science at Texas A&M University, and a member of the Sketch Recognition Lab. He received his dual Bachelor of Science in Computer Sciences and in Mathematics at the University of Texas at Austin, and his Master of Science in Computer Science at Texas A&M University. His research interests lie at the intersection of human-computer interaction and artificial intelligence, specifically in intelligent tutoring systems for educational domains in languages, music, and engineering.

Adviser: Dr. Tracy Hammond

SRL MS Thesis Defense of Megha Yadav. Monday, June 3. Title: Mitigating Public Speaking Anxiety Using Virtual Reality and Population-Specific Models

Thesis Defense
Monday, June 3

Title:
Mitigating public speaking anxiety using virtual reality and population-specific models.

Abstract: Public speaking is essential in effectively exchanging ideas, persuading others, and making a tangible impact. Yet public speaking anxiety (PSA) ranks as a top social phobia among many people. This research utilizes wearable technologies and virtual reality (VR) to expose individuals to PSA stimuli and quantify their PSA levels via group-based machine learning models. These machine learning models leverage common information across individuals and fine-tune their decisions based on specific individual and contextual factors. In this way, prediction decisions are made for clusters of people with common individual-specific factors, which benefits overall system accuracy. Findings of this study will enable researchers to better understand the antecedents and causes of PSA, contributing to behavioral interventions using VR.
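One plausible way to realize such group-based models (an assumed illustration, not necessarily the thesis's exact approach) is to cluster participants by their individual factors and then train a separate classifier per cluster, as sketched below with scikit-learn; all function and variable names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def train_group_models(person_profiles, X, y, person_ids, n_groups=3):
    """Cluster participants by individual-specific factors, then train one
    anxiety classifier per cluster on that cluster's wearable-sensor samples."""
    ids = sorted(person_profiles)
    profiles = np.array([person_profiles[i] for i in ids])
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(profiles)
    person_to_group = dict(zip(ids, km.labels_))

    sample_groups = np.array([person_to_group[p] for p in person_ids])
    models = {}
    for g in range(n_groups):
        mask = sample_groups == g
        if mask.any():
            models[g] = RandomForestClassifier(
                n_estimators=100, random_state=0).fit(X[mask], y[mask])
    return km, person_to_group, models
```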

Biography: Megha Yadav is pursuing her Master’s degree in Computer Science in the Department of Computer Science & Engineering. She received her Bachelor’s degree in Computer Science from Manipal Institute of Technology in India in 2013. Megha’s research interest lies in exploring machine learning and deep learning techniques within the health care domain.

Co-Advisor: Dr. Tracy Hammond

SRL MS Thesis Defense of Siddharth Subramaniyam. Monday June 3. Title: Sketch Recognition Based Classification for Eye Movement Biometrics in Virtual Reality

Thesis Defense
Monday, June 3


Title:  Sketch Recognition Based Classification for Eye Movement Biometrics in Virtual Reality

Abstract: 
Biometrics is an active area of research in the HCI, pattern recognition, and machine learning communities. In addition to various physiological features such as fingerprints, DNA, and faces, there has been interest in using behavioral biometric modalities such as gait, eye movement patterns, keystroke dynamics, and signatures. In this work, we explore the effectiveness of using eye movement as a biometric modality by treating it as a sketch and developing features using sketch recognition techniques. To test our methods, we built a system for authentication in virtual reality (VR) that combines eye movement biometrics with passcode-based authentication for an additional layer of security against spoofing attacks.
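As a rough illustration of treating a gaze path as a sketch, the snippet below computes a few global, sketch-style shape features over a sequence of gaze points. It is an assumed example of the general technique, not the feature set developed in the thesis.

```python
import numpy as np

def gaze_sketch_features(points):
    """Treat a gaze path as a sketch stroke and compute a few global shape
    features: path length, bounding-box diagonal, total turning angle, and
    the ratio of path length to diagonal (a rough 'curviness' measure)."""
    pts = np.asarray(points, dtype=float)
    diffs = np.diff(pts, axis=0)
    path_length = float(np.linalg.norm(diffs, axis=1).sum())
    bbox_diag = float(np.linalg.norm(pts.max(axis=0) - pts.min(axis=0)))
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    turning = float(np.abs(np.diff(angles)).sum())
    return np.array([path_length, bbox_diag, turning,
                     path_length / (bbox_diag + 1e-9)])
```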


Bio: Siddharth is currently a Master's student in Computer Science at Texas A&M University, working in the Sketch Recognition Lab. His research interests are in human-computer interaction, especially the application of statistics and machine learning to understanding human perception and cognitive behavior.


Advisor: Dr. Tracy Hammond

SRL MS Thesis Defense of Sharmistha Maity. Thursday May 30. Title: Combining Paper-Pencil Techniques with Immediate Feedback for Learning Chemical Drawings


Thesis Defense
Thursday, May 30

Title:
Combining Paper-Pencil Techniques with Immediate Feedback for Learning Chemical Drawings

Abstract:
Introductory chemistry courses teach the process of drawing basic chemical molecules with the use of Lewis dot diagrams. Many beginner students, however, have difficulty mastering these diagrams. While several computer applications are being developed to help students learn Lewis dot diagrams, there is a potential hidden benefit from paper and pencil that many students may not realize. Sketch recognition has been used to identify advanced chemical diagrams; however, using recognition in an educational setting requires a focus beyond identifying the final drawing. The goal of this research is to infer whether paper-pencil techniques provide educational benefits for learning Lewis dot diagrams. An analysis of pre-post assessments shows how combining sketch recognition of paper-pencil techniques with immediate feedback provides greater benefits for students with a basic chemistry understanding.
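For context on pre-post assessment analysis, one common way to quantify per-student benefit is the normalized learning gain, sketched below; this is an assumed illustration, not necessarily the analysis used in the thesis.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Normalized learning gain: the fraction of the possible improvement
    (from the pre-test score up to the maximum score) that was achieved."""
    if pre >= max_score:
        return 0.0
    return (post - pre) / (max_score - pre)

# Example: a student scoring 40 on the pre-test and 70 on the post-test
# achieves a normalized gain of (70 - 40) / (100 - 40) = 0.5.
```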

Biography:
Sharmistha Maity is currently a Master's student studying Computer Science at Texas A&M University. She received her Bachelor's in Electrical Engineering at The University of Texas at Austin, and her research interests include Human-Centered Computing, Artificial Intelligence, and Cognitive Science. As a graduate student in the Sketch Recognition Lab, she is studying the integration of educational psychology and computer science, and how they can be combined to improve education so that it effectively reaches a wider audience and increases the motivation to learn.

Advisor: Dr. Tracy Hammond

Wednesday, September 20, 2017

SRL MS Thesis Defense of Jung In Koh. Thursday, June 15. Title: Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communication

Thesis Defense
Thursday, June 15

Title: Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communication



Jung In Koh

1 PM Thursday, June 15, 2017

Teague 326

Abstract



Recent trends in computer-mediated communication (CMC) have not only led to expanded instant messaging (IM) through the use of images and videos, but have also expanded traditional text messaging with richer content, so-called visual communication markers (VCMs) such as emoticons, emojis, and stickers. VCMs could prevent a potential loss of the subtle emotional content of conversation in CMC, which is normally delivered by nonverbal cues that convey affective and emotional information. However, as the number of VCMs in the selection set grows, the problem of VCM entry needs to be addressed. Additionally, conventional ways of accessing VCMs continue to rely on input entry methods that are not directly and intimately tied to expressive nonverbal cues. One such form of expressive nonverbal cue that does exist and is well studied is the hand gesture.
In this work, I propose a user-defined hand gesture set that is highly representative of VCMs and a two-stage hand gesture recognition system (feature-based, then shape-based) that distinguishes the user-defined hand gestures. The goal of this research is to enable users to be more immersed, natural, and quick in generating VCMs through gestures. The idea is for users to maintain the lower-bandwidth online communication of text messaging, largely retaining its convenient and discreet properties, while also incorporating the advantages of the higher-bandwidth online communication of video messaging by having users naturally gesture their emotions, which are then closely mapped to VCMs. Results show that user-dependent accuracy is approximately 86% and user-independent accuracy is about 82%.

Biography

Jung In Koh is a Master's student in the Department of Computer Science and Engineering at Texas A&M University and a research assistant in the Sketch Recognition Lab. Before joining Texas A&M, she received her bachelor's degree in Computer Science from Sookmyung Women's University in South Korea. Her research interests include motion detection and data mining.

Advisor: Dr. Tracy Hammond

 

Monday, June 19, 2017

SRL MS Thesis Defense of Seth Polsley. Monday, June 5. Title: Identifying Outcomes of Care from Medical Records to Improve Doctor-Patient Communication

Thesis Defense
Monday, June 5

Title: 
Identifying Outcomes of Care from Medical Records to Improve Doctor-Patient Communication



Seth Polsley

3 PM Monday, June 5, 2017

Teague 326

Abstract

Between appointments, healthcare providers have limited interaction with their patients, but patients have similar patterns of care. Medications have common side effects; injuries have an expected healing time; and so on. By modeling patient interventions together with outcomes, healthcare systems can equip providers with better feedback. In this work, we present a pipeline for analyzing medical records according to an ontology directed at allowing closed-loop feedback between medical encounters. Working with medical data from multiple domains, we use a combination of data processing, machine learning, and clinical expertise to extract knowledge from patient records. While our current focus is on technique, the ultimate goal of this research is to inform the development of a system that uses these models to provide knowledge-driven clinical decision-making.
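To give a rough sense of what one stage of such a pipeline could look like, the sketch below maps free-text encounter notes to a coded outcome label with a standard scikit-learn text classifier. This is an assumed, generic example, not the thesis's actual pipeline or ontology; the label set and function name are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_outcome_classifier():
    """A minimal text-classification pipeline mapping free-text encounter
    notes to a coded outcome label (e.g., improved / unchanged / worsened)."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2, stop_words="english")),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage: fit on labeled encounter notes, then predict outcomes for new records.
# model = build_outcome_classifier()
# model.fit(train_notes, train_outcomes)
# predicted = model.predict(new_notes)
```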


Biography

Seth Polsley is a graduate student in the Department of Computer Science and Engineering at Texas A&M University and a research assistant in both the Sketch Recognition Lab and College of Medicine Biomedical Informatics Research group. Before joining A&M, he received a B.S. in Computer Engineering from the University of Kansas where he worked with the Speech and Applied Neuroscience Lab. His research interests may be broadly described as intelligent systems, which has led to work on multiple learning-based systems in the domains of education and health.
 


Advisor: Dr. Tracy Hammond

Thursday, June 8, 2017

SRL MS Thesis Defense of Josh Cherian. Friday, December 9. Title: Recognition of Everyday Activities through Wearable Sensors and Machine Learning

Thesis Defense
Friday, December 9

Title: Recognition of Everyday Activities through Wearable Sensors and Machine Learning



Josh Cherian

10 AM Friday, December 9, 2016

Teague 326

Abstract
Over the past several years, the use of wearable devices has increased dramatically, largely due to their increasingly smaller and more personal form factors, greater sensor reliability, and increasing utility and affordability. This has helped many people live healthier lives and achieve their personal fitness goals, as they are able to quantifiably and graphically see the results of their efforts every step of the way. While these systems work well within the fitness domain, they have yet to achieve a convincing level of functionality in the larger domain of healthcare. To facilitate the increased use of wearable devices to aid in healthcare, we present a two-tier recognition system for identifying health activities in real time based on accelerometer data. To do this, we run a series of user studies to collect data for six everyday activities: brushing one's teeth, combing one's hair, scratching one's chin, washing one's hands, taking medication, and drinking, achieving an f-measure of 0.85 when identifying these activities in a controlled setting. To evaluate our recognition system's ability to recognize activities in a naturalistic setting, we identify instances of brushing teeth over the course of a day. We initially achieve an f-measure of 0.68; however, we are able to improve this to 0.85 by proposing and extracting several novel features. Through recognition of these activities, we aim to encourage the use of wearable devices for everyday personal health management.
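As a generic illustration of accelerometer-based activity recognition (an assumed sketch, not the thesis's two-tier system or its novel features), the snippet below extracts simple per-window statistics from 3-axis accelerometer data for use with a standard classifier; the sampling rate and window size are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, fs=50, window_s=2.0):
    """Slide a fixed-size, non-overlapping window over 3-axis accelerometer
    data (shape: n_samples x 3) and compute per-axis mean, std, min, and max
    for each window."""
    win = int(fs * window_s)
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# A first tier could separate "activities of interest" from everything else,
# and a second tier could then distinguish the six target activities; a single
# classifier over one label set is shown here for brevity.
# clf = RandomForestClassifier(n_estimators=200).fit(window_features(acc), window_labels)
```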

Biography
Josh Cherian is an MS candidate at Texas A&M University in the Department of Electrical Engineering, working under Dr. Tracy Hammond in the Sketch Recognition Lab. Josh completed his BS in Biomedical Engineering at the Georgia Institute of Technology. His primary research interest is activity recognition, with a specific focus on recognizing daily health activities such as brushing one's teeth, washing one's hands, and taking medication.

Advisor: Dr. Tracy Hammond
    

Tuesday, March 7, 2017

SRL MS Thesis Defense of Nahum Villanueva Luna. Monday, March 6. Title: ARCaching: Using Augmented Reality on Mobile Devices to Improve Geocacher Experience

Thesis Defense
Monday, March 6

Title: ARCaching: Using Augmented Reality on Mobile Devices to Improve Geocacher Experience



Nahum Villanueva Luna

10 AM Monday, March 6, 2017

516 H.R. Bright Building

Abstract


ARCaching is an augmented reality application designed to help Geocachers on their quest to find hidden containers around the world. Geocaching is a popular treasure hunting game that uses GPS coordinates and mobile devices to guide players to hidden objects. ARCaching aims to test the effects that using augmented reality could have on this kind of task and whether it helps improve the user's experience while Geocaching.

Biography
Born in Mérida, Yucatán, Mexico on April 29, 1991. I received my bachelor's degree in Software Engineering in 2013 from the Universidad Autónoma de Yucatán in Mexico. I am currently finishing my master's degree in Computer Engineering at Texas A&M in the Sketch Recognition Lab.

Advisor: Dr. Tracy Hammond