Dr. Hargsoon Yoon
Associate Professor in the Department of Engineering, Director of the Nano-Electronics and Neural Engineering Laboratory at Norfolk State University, and Adjunct Associate Professor in the Department of Anatomy and Pathology at Eastern Virginia Medical School.
He earned his Ph.D. in Engineering Science from Pennsylvania State University in 2003. He has developed several flexible polymer nanoelectronic devices for neural recording and for functional imaging in the brain using electrical impedance tomography. His research has received external funding from the NSF, NASA, the DoD, and the NIH, and has led to publications in numerous journals and conference proceedings. He serves as a Guest Editor for the journal Biosensors, as an editorial board member of the journal The Scientific Pages of Translational Medicine, and as a program committee member of the SPIE International Conference for Nano-, Bio-, Info-Tech Sensors and Systems. He also served on a review panel for the NIH BRAIN Initiative grant program from 2015 to 2017. In addition, he participates in the peer-review processes of several internationally reputed journals and holds memberships in the Society for Neuroscience and the IEEE Engineering in Medicine and Biology Society, where he is a Senior Member.
Sensing and Functional Imaging of Dynamic and Networked Neural Activity in the Brain
Recent neural sensing and imaging technologies have enabled exploration of the brain at the molecular and cellular levels, advancing our understanding of the mechanisms of cognition and brain disease. Collective cellular events are associated with both global brain function and local neurophysiological changes. Despite these advances, linking molecular and cellular events to neural network function remains quite challenging.
To address this challenge, recording and imaging of neural activity at multiple spatial and temporal scales are critically important. Such integrative sensing and functional imaging can lead to radical advances in understanding brain function and can enable quantitative mathematical modeling and analysis of neural systems.
This presentation introduces neurochemical and electrophysiological sensing of dynamic neural activity at the molecular and cellular levels. It also discusses a fast neural imaging technology based on electrical impedance tomography that illustrates functional neuronal networks at the mesoscale of cell populations, especially in deep brain structures. The principle of this imaging technology is the change in electrical impedance caused by ion transport through ion channels in nerve cell membranes.
Stevens Institute of Technology
Samantha Kleinberg is an Assistant Professor of Computer Science at Stevens Institute of Technology. She received her PhD in Computer Science from New York University in 2010 and was a Computing Innovation Fellow in the Department of Biomedical Informatics at Columbia University from 2010 to 2012. She is the recipient of NSF CAREER and JSMF Complex Systems Scholar awards, and her work is also supported by the NIH through an R01. She is the author of Causality, Probability, and Time (Cambridge University Press, 2012) and Why: A Guide to Finding and Using Causes (O’Reilly Media, 2015).
Understanding Human Health and Disease from Data at Multiple Scales
Massive amounts of medical data from electronic health records are now being mined by researchers and can be used to improve care, for example by identifying factors affecting the recovery of patients in intensive care or early risk factors for heart failure. However, patients are highly heterogeneous, the data are noisy, and we lack ground truth for evaluating algorithms. Further, many important events occur outside of medical settings. This is particularly true for chronic diseases such as diabetes and obesity. In this talk I discuss why outpatient data are needed to understand chronic disease, the computational challenges of inferring causality from these data, and why we need automated monitoring of nutrition. In the second part of the talk I discuss our recent work using multiple sensing modalities to automatically identify eating, and how this can ultimately be used to support individuals with chronic disease.
VP at Morgan Stanley, Washington University in St. Louis
Georgetown University Medical Center
Vishakha Sharma is a Postdoctoral Research Fellow at the Innovation Center for Biomedical Informatics at Georgetown University Medical Center. She received her PhD in Computer Science from Stevens Institute of Technology in Hoboken, New Jersey, in 2015. She also holds an MS in Computer Science from Stevens Institute of Technology and a Bachelor of Engineering in Computer Science from Pune University in India.
Her doctoral research focused on programming languages and their applications to computational biology and systems modeling. As part of her doctoral research, with her advisor Adriana Compagnoni, she designed and implemented BioScape, a high-level modeling and simulation language for the stochastic simulation of biological and biomedical systems. She has developed computational models of drug response in breast cancer, the JAK-STAT signal transduction pathway, pH-triggered antibacterial coatings, bifunctional surfaces, intracellular viral transport, and the effects of counterfeit components on the performance of a complex multi-component system.
Her postdoctoral research has been funded by the NIH Big Data to Knowledge (BD2K) initiative. She is a member of the ACM-W scholarships committee and serves as a reviewer for the 2017 AMIA Joint Summits on Translational Science.
An Eye-tracking Study to Enhance Usability of Big Data in Cancer Precision Medicine
Genomic testing is routinely used in cancer precision medicine. It is therefore critical that oncologists correctly interpret molecular diagnostic test results, and the evidence associated with the clinical utility of these biomarkers, to enable personalized cancer treatment. We leverage human factors engineering (HFE) to make the complex molecular and clinical information extracted from the literature more accessible and actionable for clinicians.
We are developing software, MACE2K (Molecular and Clinical Extraction to Knowledge), that uses big-data wrangling approaches such as Natural Language Processing (NLP) and information retrieval to automatically extract and categorize molecular and clinical evidence. Once extracted, this information must be synthesized and summarized for use by clinicians, who generally have little time to sort through the evidence. The science of human factors is focused on understanding human capabilities and developing visualization and other tools and technologies to match those capabilities.
To better understand how clinicians visually process information on the interfaces, and how the interfaces support their cognitive processes (e.g., reasoning and decision-making), we use a methodology combining an eye-tracking device with a talk-aloud verbal protocol. We defined different regions of the user interfaces as areas of interest (AOIs). The AOIs serve as meaningful aspects of the interface for studying eye-movement patterns in conjunction with the verbal interpretations provided by the clinicians. This methodology allows clinicians and clinical researchers to identify more clearly which aspects of the interface support reasoning and decision-making and which are less optimal.
Applying human factors engineering to our evidence-based information on clinically actionable biomarkers will make this information more readily usable in real clinical applications.