We study how man-made and biological sensory systems interact with, learn from, and adapt to their environments.
We develop interface circuits, interfaces for mobile platforms, biosignal-processing algorithms, pattern recognizers, and biofeedback games. We integrate these systems and evaluate them through human studies.
We develop accent-conversion methods for non-native speech, speech therapy tools for children with motor disabilities, and techniques for animating 3D facial models from speech.
We integrate chemosensor systems, develop pattern recognition methods that extract information from sensor signals, and develop machine learning methods that optimize sensor tunings on the fly to adapt to the environment.