Research



Sensor Network for Active Play

While inertial sensors do have some presence in the current literature on motion capture in gaming, few systems involve the full body. Our system maps player motions to those of their on-screen character in real time, operating entirely on quaternion orientation data from body-worn sensors. Based on these quaternions, we present a simple way to calibrate the sensor orientations, along with a hierarchical skeletal model. The skeletal model allows the user to navigate the virtual environment easily and naturally by moving the anchor point to the planted foot, rather than to the torso as in other systems. The game we created highlights the strengths of the system's design as well as its potential for virtual reality applications.
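As a minimal sketch of the hierarchical idea (a simplified two-segment chain with hypothetical names and rest-pose bone lengths, not the full body hierarchy the system uses): each segment's world orientation is the parent's orientation composed with that segment's sensor quaternion, and joint positions accumulate outward from the anchor, here the planted foot.

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * conj(q).
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

# Hypothetical chain: planted foot -> shin -> thigh, each with a
# per-segment sensor quaternion and a rest-pose bone vector (metres).
bones = [
    ("shin",  np.array([1.0, 0, 0, 0]), np.array([0, 0.45, 0])),
    ("thigh", np.array([1.0, 0, 0, 0]), np.array([0, 0.50, 0])),
]

anchor = np.zeros(3)              # the planted foot stays fixed in the world
world_q = np.array([1.0, 0, 0, 0])
pos = anchor
for name, sensor_q, bone_vec in bones:
    world_q = quat_mul(world_q, sensor_q)   # compose down the hierarchy
    pos = pos + rotate(world_q, bone_vec)   # place the next joint
    print(name, "joint at", pos)
```

Anchoring at the planted foot means the whole chain (and hence the character) translates naturally when the user steps, without any separate position tracking for the torso.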


We also examine the system's potential for quaternion-based gesture recognition, with the goal of a unified motion-tracking and gesture-recognition system. In the existing literature, these two concepts are examined only in isolation rather than in a combined context. The quaternion-based recognition algorithms we present are built around Hidden Markov Models and Markov Chains. Despite the widespread adoption of the Hidden Markov Model in gesture recognition, our results show that our modified Markov Chain model outperforms the Hidden Markov Model in both accuracy and computation time.
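A minimal sketch of the Markov Chain approach (gesture names, symbol alphabet, and training sequences are all illustrative; the quantization of quaternions into discrete symbols is assumed to happen upstream, e.g. by snapping each orientation to the nearest of k reference orientations): one transition matrix is estimated per gesture, and a live sequence is classified by log-likelihood.

```python
import numpy as np

def train_chain(sequences, n_symbols):
    # Estimate a transition matrix from symbol sequences (add-one smoothing).
    counts = np.ones((n_symbols, n_symbols))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans):
    # Score a sequence under a chain by summing log transition probabilities.
    return sum(np.log(trans[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Hypothetical training data: symbol sequences from quantized quaternions.
gestures = {
    "wave":  [[0, 1, 2, 1, 0, 1, 2], [0, 1, 2, 2, 1, 0]],
    "punch": [[0, 3, 3, 3, 0], [0, 0, 3, 3, 0]],
}
models = {name: train_chain(seqs, n_symbols=4)
          for name, seqs in gestures.items()}

observed = [0, 1, 2, 1, 0]   # quantized live sensor stream
best = max(models, key=lambda g: log_likelihood(observed, models[g]))
print("recognized:", best)
```

Unlike an HMM, there are no hidden states to infer, so both training and scoring reduce to counting and table lookups, which is consistent with the computation-time advantage noted above.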


The final system is a unified tracking and gesture recognition system for real-time applications. It also scales easily, whether full-body tracking and recognition is needed or only a portion of it. While the focus of this work was gaming, the results extend readily to other areas where body-area sensor networks could be useful.

Human/Robot Interaction

Many studies have explored the effect of companion robots on a patient's well-being, as well as robots intended to physically assist patients. One of the most famous examples is PARO, a robotic seal that has been the subject of many papers and has been shown to help elderly patients cope with dementia and Alzheimer's disease; it may even hold some benefits over real animals in this treatment. These robots, however, tend to focus either on the emotional connection, without physically assisting the patient, or on physical assistance alone, without any emotional connection. They usually serve only one specific need when they could be serving more. The direction this research will take is to explore the practicality of a robot companion that can also assist the patient physically. Merging these two ideas should lead to a more valuable companion than the robots studied in the past.

A human-sized android is currently being built. The build will be extended with human-like sensor inputs, such as haptic, visual, and auditory sensing, which will be the focus of upcoming research. By researching robotic sensors and controls, we will determine which sensors work best for the android to understand its environment, and which work best for affective human-robot interaction. The research will also explore possible methods of interaction and control, ranging from autonomous operation to direct human control, such as using motion controllers to operate the android as a type of avatar for a human being, and the possibility of using electroencephalography (EEG) headsets to control the android through brain waves.

The implications of this research are far-reaching. They touch multiple emerging technologies, such as humanoid robotics and 3D printing, as well as the humanities, including psychology and medicine, particularly geriatric care. Our research on the interaction and control of such a device will also be important: a clearer understanding of the benefits of different control methods will translate into a better quality of life for those in need.

Computer Vision

Computer vision started out in 1966 as a research project for an MIT summer student, led by Marvin Minsky, to “connect a television camera to a computer and describe what it sees.” The student, apparently, never worked on computer vision again! The complexity of the problem has only grown over the past five decades. In general, much of what we study is traditionally considered pattern matching, and extends to one-dimensional data as well as multi-dimensional data such as images.

We tend to work on problems that intend to improve the computer vision process such as:

  • Feature Matching: determining which parts of one image correspond to parts of another image; this is particularly hard when matching historical and modern photos (see the sketch after this list).
  • Object Detection: finding and annotating objects, such as the horizon in an aerial view or objects in a scene.
  • Multi-view Analysis: from a single moving camera or from multiple cameras.
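As an illustration of the feature-matching step, here is a standard ORB-plus-brute-force pipeline in OpenCV (a generic baseline, not the specific method used in our projects; the image file names are placeholders):

```python
import cv2

# Load the two images to be matched (placeholder file names).
img1 = cv2.imread("historical.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("modern.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-checking to drop one-way matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences; best distance:",
      matches[0].distance if matches else None)
```

Pipelines like this tend to break down on historical-to-modern photo pairs, where lighting, film grain, and scene changes defeat local descriptors, which is precisely what makes that matching problem hard.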
