Project Overview

From an application point of view, our research enables robots to assist humans in many scenarios, including nature observation, autonomous vehicles, space exploration, surveillance, infrastructure maintenance, health and elder care, agriculture, construction monitoring, search and rescue, journalism, and entertainment.

From a robotics research perspective, these projects can be classified into the following categories: perception with sensor fusion; vision-based robot navigation; networked teleoperation and crowd-supported robots; recognizing environments, animals, and activities; and networked sensors and robots. It is worth noting that we are particularly interested in using cameras as the primary sensing modality in combination with other sensors such as GPS, laser range finders (a.k.a. lidars), inertial measurement units (IMUs), ultrasonic sensors, ground penetrating radar (GPR), and encoders. We also work extensively with the sensors in mobile devices.

Our research takes a balanced approach between theoretical development and experimentation on real-world physical systems. Our projects therefore range from low-level system design and implementation, such as device design and printed circuit board (PCB) design and fabrication, to high-level system modeling and algorithmic development in artificial intelligence, such as robot perception, scheduling, and planning. Students need to develop strong backgrounds in both systems and theory.

1. PERCEPTION WITH SENSOR FUSION

Optoacoustic Material and Structure Sensing (OMASS) for robotic grasping (Fall 2017-present)

Collaborating with Prof. Jun Zou, a MEMS expert, we are developing a new finger-mounted sensor for robot grasping of unknown objects. When the robot finger is close to an object, the OMASS sensor emits modulated laser beams that "knock" on the material surface and generate acoustic signals through thermal expansion and contraction; this is known as the optoacoustic effect. We analyze signal spectrum, time of flight, and magnitude signatures to obtain material type and close-to-surface internal structure information, which are important for estimating grasping force and friction coefficient. More information can be found here.
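
A minimal sketch of the feature-and-match idea, assuming a pulsed return sampled at 1 MHz and made-up material signatures (this is an illustration, not the lab's actual pipeline):

    import numpy as np

    FS = 1_000_000                              # sample rate in Hz (assumed)

    def extract_features(signal, fs=FS):
        """Return (dominant frequency, time of flight, peak magnitude) of one acoustic return."""
        peak_mag = np.max(np.abs(signal))
        tof = np.argmax(np.abs(signal) > 0.2 * peak_mag) / fs       # first strong arrival
        spectrum = np.abs(np.fft.rfft(signal))
        dom_freq = np.fft.rfftfreq(len(signal), 1.0 / fs)[np.argmax(spectrum)]
        return np.array([dom_freq, tof, peak_mag])

    # Made-up reference signatures: material -> (dominant frequency, time of flight, magnitude).
    SIGNATURES = {
        "wood":  np.array([40e3, 120e-6, 0.6]),
        "metal": np.array([95e3,  80e-6, 1.0]),
        "foam":  np.array([20e3, 200e-6, 0.3]),
    }

    def classify(signal):
        f = extract_features(signal)
        scales = np.array([1e5, 1e-4, 1.0])     # normalize the very different feature ranges
        return min(SIGNATURES, key=lambda m: np.linalg.norm((f - SIGNATURES[m]) / scales))

    t = np.arange(0.0, 5e-4, 1.0 / FS)
    fake_return = np.exp(-((t - 8e-5) ** 2) / 1e-9) * np.sin(2 * np.pi * 95e3 * t)  # synthetic pulse
    print(classify(fake_return))                # likely "metal" for this synthetic return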

Optical and Acoustic Dual Modal Communication and Ranging Devices for Underwater Vehicles (Fall 2017-present)

Collaborating with Prof. Jun Zou, a MEMS expert, we are developing a new dual-modality ranging and communication sensor for underwater robots. It generates co-centered and co-registered ultrasonic and laser signals to facilitate sensor fusion at the device level. More information can be found here.

General Motors (GM) AutoDrive Challenge (Fall 2017-present)

We are honored to be selected to participate in the GM AutoDrive Challenge. With a drive-by-wire-enabled Chevy Bolt donated by GM, our lab is currently leading the development of the TAMU AutoDrive project. We focus on multimodal perception, scene understanding, real-time planning and control, and optimization for the AutoDrive Challenge competition. More information can be found here.

Bridge-MINDER: Minimally Invasive Robotic Non-Destructive Evaluation and Rehabilitation for Bridge Decks (Fall 2014 - Fall 2019)

Collaborating with Rutgers University, we worked on combining state-of-the-art SLAM techniques with ground penetrating radar (GPR) for in-traffic bridge deck inspection. The project website is here.

Navigation for an autonomous motorcycle (2003-2007)

Collaborating with Anthony Levandowski and UC Berkeley, we developed the first autonomous motorcycle [video: mp4], with a focus on its navigation system. The vehicle competed in the DARPA Grand Challenge (DGC) in 2004 and 2005. We developed algorithms that enable vehicles to navigate ill-structured roads by analyzing road surface characteristics and tracing previous vehicle tracks.

2. VISION-BASED ROBOT NAVIGATION

Robonaut Navigation (2017-present)

We are working with NASA Johnson Space Center (JSC) to provide vision-based navigation capability to the Robonaut robot. We overcome nonlinear deformation caused by uneven materials, as well as imprecise timing across cameras, to estimate robot poses. See the news report here.

Motion Vector-based Visual SLAM (Summer 2011 - Fall 2014)

We use motion vectors from MPEG streams to perform SLAM tasks. The resulting algorithm is lightweight and well suited to small robots in urban environments. Moreover, it can perform SLAM in dynamic environments without assuming that the environment is largely stationary.
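
As a hedged illustration of the idea (not the published algorithm), the sketch below separates the dominant camera motion from vectors on independently moving objects, treating the decoder's motion vectors as a given (N, 2) array:

    import numpy as np

    def dominant_motion(mvs, iters=100, tol=1.5, rng=np.random.default_rng(0)):
        """RANSAC-style estimate of the global (camera) translation from block motion vectors."""
        best_inliers = np.zeros(len(mvs), dtype=bool)
        for _ in range(iters):
            candidate = mvs[rng.integers(len(mvs))]              # hypothesize the camera motion
            inliers = np.linalg.norm(mvs - candidate, axis=1) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return mvs[best_inliers].mean(axis=0), ~best_inliers     # camera motion, dynamic-object mask

    background = np.random.default_rng(1).normal([3.0, -1.0], 0.3, (80, 2))   # vectors from camera pan
    moving_car = np.random.default_rng(2).normal([-8.0, 0.0], 0.3, (20, 2))   # vectors on a moving object
    motion, outliers = dominant_motion(np.vstack([background, moving_car]))
    print(motion, outliers.sum())               # ~[3, -1] camera translation, ~20 outlier vectors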

High level landmarks for guiding robots (HIGUR) (Spring 2013)

Collaborating with Kitware and funded by the US Army, this project develops a robot navigation and teleoperation scheme enabled by real-time scene understanding. It can be viewed as simultaneous localization and mapping (SLAM) at the scene structure level. Details of the project will be announced soon.

Multi-layered feature graph for robots in urban areas (2011-present)

The multi-layered feature graph is a data structure enabling quick understanding of 3D scene structure. It consists of vanishing points, primary planes, and line segments in 2D and 3D space, and organizes them by geometric relationships such as adjacency, parallelism, collinearity, and coplanarity. The project website is here.
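
For illustration, here is a minimal sketch of how such a graph could be represented in code; the class and field names are assumptions, not the lab's implementation:

    from dataclasses import dataclass, field

    @dataclass
    class FeatureNode:
        kind: str            # "vanishing_point" | "plane" | "segment2d" | "segment3d"
        geometry: tuple      # parameters, e.g. a direction, a plane equation, or endpoints

    @dataclass
    class MultiLayeredFeatureGraph:
        nodes: list = field(default_factory=list)
        edges: list = field(default_factory=list)    # (node_i, node_j, relation)

        def add_node(self, kind, geometry):
            self.nodes.append(FeatureNode(kind, geometry))
            return len(self.nodes) - 1

        def relate(self, i, j, relation):
            assert relation in {"adjacency", "parallelism", "collinearity", "coplanarity"}
            self.edges.append((i, j, relation))

    # Usage: a vertical vanishing point, a facade plane, and one building edge on that facade.
    g = MultiLayeredFeatureGraph()
    vp = g.add_node("vanishing_point", (0.0, 1.0, 0.0))
    wall = g.add_node("plane", (1.0, 0.0, 0.0, -2.5))             # plane x = 2.5
    edge = g.add_node("segment3d", ((2.5, 0.0, 0.0), (2.5, 3.0, 0.0)))
    g.relate(edge, wall, "coplanarity")
    g.relate(edge, vp, "parallelism")
    print(len(g.nodes), "nodes,", len(g.edges), "relations")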

Detection of mirroring surfaces (2010-present)

Detecting mirror-like surfaces is important for obstacle avoidance by indoor robots and by UAVs that fly close to buildings. Addressing this problem is also critical for surveying building exteriors for energy load. Exploiting scene symmetry and its inherent geometric constraints, we develop algorithms for robust detection of mirror-like surfaces.
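
As a hedged toy sketch of the symmetry cue (not the lab's detector), the code below tests candidate vertical reflection axes against detected 2D feature points and reports an axis when enough points map onto other detected points:

    import numpy as np

    def symmetry_support(points, axis_x, tol=2.0):
        """Count points whose mirror image about the vertical line x = axis_x is also detected."""
        mirrored = points.copy()
        mirrored[:, 0] = 2.0 * axis_x - points[:, 0]
        d = np.linalg.norm(points[:, None, :] - mirrored[None, :, :], axis=2)
        return int((d.min(axis=0) < tol).sum())

    def detect_mirror_axis(points, candidates=np.arange(0.0, 640.0, 2.0), min_support=6):
        support = [symmetry_support(points, x) for x in candidates]
        best = int(np.argmax(support))
        return (float(candidates[best]), support[best]) if support[best] >= min_support else None

    rng = np.random.default_rng(0)
    scene = rng.uniform([0.0, 0.0], [200.0, 480.0], (8, 2))        # real features left of the mirror
    reflected = scene.copy()
    reflected[:, 0] = 2.0 * 300.0 - scene[:, 0]                    # their images in a mirror at x = 300
    print(detect_mirror_axis(np.vstack([scene, reflected])))       # ~(300.0, 16)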

Vertical line-based visual odometry (2010-2011)

The primary purpose is to develop a lightweight localization scheme for small robots in urban areas. The idea employs vertical lines as landmarks due to the abundance of building edges and poles. Vertical lines are easy to extract from images and are insensitive to lighting and shadow conditions, yet they are sensitive to the robot's horizontal movement, which makes them good landmarks for accurate estimation of the robot's ego-motion on the road plane. Details of the project can be found here.
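
A hedged sketch under assumed camera parameters and a made-up landmark map (an illustration, not the project's algorithm): each vertical line reduces to a bearing measurement on the ground plane, and the planar pose follows from least squares over those bearings:

    import numpy as np
    from scipy.optimize import least_squares

    F, CX = 500.0, 320.0                                  # assumed focal length and principal point (px)
    LANDMARKS = np.array([[5.0, 2.0], [6.0, -1.5], [9.0, 0.5], [4.0, -3.0]])  # (x, y) of vertical edges

    def bearings_from_columns(u):
        """Pixel column of a vertical line -> bearing angle in the camera frame."""
        return np.arctan2(u - CX, F)

    def predicted_bearings(pose, landmarks=LANDMARKS):
        x, y, theta = pose
        world = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x)
        return np.arctan2(np.sin(world - theta), np.cos(world - theta))   # wrapped to [-pi, pi]

    def estimate_pose(measured, guess=(0.0, 0.0, 0.0)):
        residual = lambda p: np.arctan2(np.sin(predicted_bearings(p) - measured),
                                        np.cos(predicted_bearings(p) - measured))
        return least_squares(residual, guess).x

    true_pose = np.array([1.0, 0.5, 0.1])
    columns = CX + F * np.tan(predicted_bearings(true_pose))   # synthetic line detections
    print(estimate_pose(bearings_from_columns(columns)))       # ~[1.0, 0.5, 0.1]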

Reduce depth ambiguity via planning (2005-2008)

Small robots are often equipped with a single camera. A monocular vision system has difficulty obtaining depth information along the baseline direction, which often coincides with the direction of motion. Planned lateral movements need to be added to address the problem.
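
A toy numeric example (all values assumed) of why a planned lateral move helps:

    f = 500.0        # focal length in pixels (assumed)
    Z = 10.0         # true depth of a point near the optical axis (m)
    X = 0.1          # small lateral offset of the point (m)
    b = 0.2          # robot displacement between the two views (m)

    u0 = f * X / Z                                   # pixel position in the first view

    # Case 1: forward motion along the viewing/baseline direction.
    u_forward = f * X / (Z - b)
    print("forward-motion disparity:", u_forward - u0)     # ~0.1 px: depth is barely observable

    # Case 2: a planned lateral move of the same magnitude.
    u_lateral = f * (X - b) / Z
    disparity = u0 - u_lateral
    print("lateral-motion disparity:", disparity)          # ~10 px
    print("recovered depth:", f * b / disparity)           # ~10 m, matching Z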

3. NETWORKED TELEOPERATION AND CROWD SUPPORTED ROBOTS

Robotic BioTelemetry (2007-2012)

A successor to the CONE project, this project aimed to develop new algorithms and systems to quantitatively measure natural habitats and animal activities via remotely controlled networked robotic cameras. Collaborating with natural scientists, the project built prototypes and investigated new metrics, mathematical models, algorithms, and architectures in an integrated research and education effort that emphasizes active robotic actuation, automation, collaboration, and optimal system design. Details of the project can be found here.

Collaborative Observatories for Natural Environments (CONE) (2005-2009)

Collaborating with UC Berkeley, the project proposed a new class of hybrid teleoperated/autonomous robotic "observatories" that allow groups of scientists, via the internet, to remotely observe, record, and index detailed animal activity. Such observatories are made possible by emerging advances in robotic cameras, long-range wireless networking, and distributed sensors. The CONE project has been deployed at several sites and has spawned a variety of sub-projects, including assisting the search for the legendary ivory-billed woodpecker in central Arkansas and investigating the potential link between bird range change and climate change in south Texas. For more information, please see the project website.

Observe/Co-Opticon/Demonstrate/ShareCam (2002-2005)

The Observe/Co-opticon/ShareCam is a machine for democratic optics, allowing a network of participants to cooperatively control the viewpoint of a shared video camera. The system combines a networked robotic video camera with a graphical user interface that allows many internet-based viewers to share simultaneous control of the camera by specifying desired viewing frames. Algorithms compute the optimal camera frame based on all requests and position the camera accordingly. It has also been used to observe Berkeley's Sproul Plaza during the 40th anniversary of the Free Speech Movement.
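
A simplified sketch of the frame selection step: each viewer requests a rectangle, and the system searches candidate frames for the one maximizing total viewer satisfaction. The satisfaction metric (covered fraction of each request) and the candidate set below are stand-ins for illustration, not the published formulation:

    from itertools import product

    REQUESTS = [(10, 10, 30, 20), (15, 12, 25, 25), (60, 40, 20, 15)]    # (x, y, w, h) per viewer

    def overlap(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        h = max(0, min(ay + ah, by + bh) - max(ay, by))
        return w * h

    def satisfaction(frame, request):
        return overlap(frame, request) / (request[2] * request[3])       # covered fraction of the request

    def best_frame(requests, pan=range(0, 80, 5), tilt=range(0, 50, 5), sizes=((40, 30), (20, 15))):
        candidates = [(x, y, w, h) for (x, y), (w, h) in product(product(pan, tilt), sizes)]
        return max(candidates, key=lambda f: sum(satisfaction(f, r) for r in requests))

    print(best_frame(REQUESTS))      # a frame covering the cluster of overlapping requests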

Active Panorama and Evolving Panorama (2003-2006)

Our Active Panorama project provides a context + focus interface for applications such as videoconferencing or remote observation with limited bandwidth. We use one pre-calibrated pan-tilt-zoom camera to construct a high-resolution panoramic image, which serves as the context of the remote environment. We superimpose a live video stream on top of the panorama so that the focused activity appears to live within the panorama. We update the background panorama on the fly as the camera moves.

The Tele-Actor (2000-2004)

The Tele-Actor is a skilled human with cameras and microphones connected to a wireless digital network. Live video and audio are broadcast to participants via the Internet or interactive television. Participants not only view, but interact with each other and with the Tele-Actor by voting on what to do next. Our "Spatial Dynamic Voting" (SDV) interface incorporates group dynamics into a variety of online experiences.
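
In the same spirit, a toy aggregation sketch (not the published SDV algorithm): votes are 2D points on the video image, and the densest cluster of votes determines the group's choice:

    import numpy as np

    def consensus(votes, radius=30.0):
        """Return the centroid of the densest neighborhood of vote points."""
        votes = np.asarray(votes, dtype=float)
        dists = np.linalg.norm(votes[:, None, :] - votes[None, :, :], axis=2)
        support = (dists < radius).sum(axis=1)            # how many votes fall near each vote
        best = np.argmax(support)
        return votes[dists[best] < radius].mean(axis=0)

    votes = [(100, 120), (110, 115), (95, 130), (400, 300), (105, 125)]   # pixel positions of votes
    print(consensus(votes))          # ~[102, 122]: the group favors the upper-left region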

For more information please see the project website.

The Tele-Twister (2003-2004)

The Tele-Twister is a game designed for the Internet. As in the original game of Twister, the game is played with human bodies (the twisters), but in this version you get to play along and direct their moves from the comfort of your computer. As a player, you log in and are automatically assigned to either the Red or the Blue team. You view and play from your computer screen. You see two twisters (real humans), one dressed in red, the other in blue. They respond to moves chosen by the Red and Blue online teams. Your team chooses moves for the twisters (e.g., "right hand YELLOW") using a Java-based online interface.

4. RECOGNIZING ENVIRONMENTS, ANIMALS, AND ACTIVITIES

Enabling Bird-Computer Interaction Through an Android Game for Studying Animal Intelligence (Fall 2017-Fall 2018)

Believe it or not, we are developing motion games for birds! It is an Android-based motion game system for parrots. Collaborating with biologists, we are interested in decoding bird intelligence through gaming. More information will come later.

Networked Motion Sensors for Track Monitoring (Fall 2017-present)

Rail track deformation and cracking have been among the most costly items in railway operations. Current detection methods include manual scanning and track-mounted or car-mounted sensing. Track-mounted methods are expensive, and existing car-mounted asset monitoring systems are limited in capability (they only monitor individual car status). Combining networked IMUs and GPS into an array of car-mounted track monitoring systems is cost effective, produces fewer false positives, and is more capable. We will report our discoveries as results emerge.
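
A hedged sketch of the corroboration idea with made-up thresholds and data: each car flags vertical-acceleration spikes at GPS-tagged track positions, and only locations flagged by multiple cars are reported, which suppresses false positives from any single noisy sensor:

    import numpy as np

    def flag_events(positions_m, accel_z, threshold=4.0):
        """Positions (meters along the track) where |vertical acceleration| exceeds threshold."""
        return positions_m[np.abs(accel_z) > threshold]

    def corroborate(event_lists, bin_m=5.0, min_cars=2):
        """Keep only track bins flagged by at least min_cars different cars."""
        bins = [np.unique(np.floor(ev / bin_m)) for ev in event_lists]
        all_bins, counts = np.unique(np.concatenate(bins), return_counts=True)
        return all_bins[counts >= min_cars] * bin_m

    rng = np.random.default_rng(0)
    pos = np.arange(0.0, 1000.0, 1.0)           # 1 km of track at 1 m resolution (assumed)
    cars = []
    for car in range(3):                        # three instrumented cars
        az = rng.normal(0.0, 1.0, pos.size)     # made-up vertical acceleration noise
        az[520:523] += 10.0                     # a real defect near the 520 m mark
        if car == 0:
            az[100] += 10.0                     # one car's spurious spike
        cars.append(flag_events(pos, az))

    print(corroborate(cars))                    # ~[520.]: only the corroborated defect survives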

Automatic bird detection and species recognition via crowd-sourced videos (2010-2014)

To understand nature, biologists need to track birds over large regions and extended periods of time. Crowd-sourced videos are a valuable source of data. Given that bird videos may be taken by untrained amateurs using unknown cameras under different lighting and background conditions, understanding such data is a challenging problem.

5. NETWORKED SENSORS AND ROBOTS

Targeted Observation of Severe Local Storms Using Aerial Robots (2015-2019)

This project addresses the development of self-deploying aerial robotic systems that will enable new in-situ atmospheric science applications. Fixed-wing unmanned aerial vehicles (UAVs) have advanced to the point where platforms fly persistent sampling missions far from remote operators. Likewise, complex atmospheric phenomena can be simulated in near real time with increasing levels of fidelity. Collaborating with researchers from the University of Colorado Boulder, the University of Minnesota, Texas Tech University, and the University of Nebraska-Lincoln, we are developing sensing-aware planning algorithms to guide UAVs in acquiring atmospheric data.
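
A greatly simplified, hypothetical sketch of sensing-aware planning: the UAV maintains an uncertainty map over the region and repeatedly flies to the location offering the most uncertainty reduction per unit travel cost. The update rule and weights are illustrative stand-ins, not the project's planner:

    import numpy as np

    GRID = 20                                   # 20 x 20 sampling grid over the region (assumed)
    ys, xs = np.mgrid[0:GRID, 0:GRID]

    def plan(start, steps=5, sensor_radius=3.0):
        variance = np.ones((GRID, GRID))        # prior uncertainty everywhere
        pos, path = np.array(start, float), []
        for _ in range(steps):
            dist = np.hypot(xs - pos[0], ys - pos[1])
            score = variance / (1.0 + dist)     # uncertainty reduction per unit travel cost
            iy, ix = np.unravel_index(np.argmax(score), score.shape)
            pos = np.array([ix, iy], float)
            path.append((int(ix), int(iy)))
            near = np.hypot(xs - ix, ys - iy) < sensor_radius
            variance[near] *= 0.2               # a sample shrinks nearby uncertainty
        return path

    print(plan(start=(0, 0)))                   # waypoints spread toward unsampled, uncertain areas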

Search of Transient Targets (2010-2017)

Mobile robots are often employed to perform search tasks such as finding a black box in a remote area after an airplane crash, searching for victims after an earthquake or mine collapse, or locating artifacts on the ocean floor. In many cases, the target intermittently emits short-duration signals to assist the search. Searching efficiently for such targets requires detailed analysis of sensing characteristics and robot motion plans. This project studies the fundamental problems behind such search processes. More information is here.
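
A toy sketch of the underlying estimation problem (not the project's method): a grid belief over the target's location is updated with Bayes' rule each time the intermittent signal is, or is not, heard, under an assumed distance-decaying detection model:

    import numpy as np

    GRID = 30
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    belief = np.full((GRID, GRID), 1.0 / GRID**2)      # uniform prior over the target location

    def detection_prob(robot, emit_prob=0.3, sense_range=8.0):
        """P(signal heard in one interval | target in each cell), decaying with distance."""
        dist = np.hypot(xs - robot[0], ys - robot[1])
        return emit_prob * np.exp(-(dist / sense_range) ** 2)

    def update(belief, robot, heard):
        like = detection_prob(robot)
        post = belief * (like if heard else 1.0 - like)
        return post / post.sum()

    rng = np.random.default_rng(3)
    target = (22, 7)                                   # unknown to the searcher
    for robot in [(x, y) for y in range(GRID) for x in range(GRID)]:   # one sweep of the area
        heard = rng.random() < detection_prob(robot)[target[1], target[0]]
        belief = update(belief, robot, heard)

    iy, ix = np.unravel_index(np.argmax(belief), belief.shape)
    print("belief peak near", (int(ix), int(iy)), "; true target", target)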

Localization of Hostile Wireless Sensor Networks (2006-2011)

This project studies the localization of sensor network nodes using mobile robots. Unlike similar projects, we focus on localization in a hostile environment. A typical scenario is detecting and destroying a sensor network deployed by an enemy on a battlefield. In such an environment, we cannot decode the received packets to learn the network information. We are developing a scheme to guide the robot through the hostile environment to search for and locate the sensor nodes based on signal strength and communication patterns. This scheme can also be adapted for applications such as search and rescue.
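
A hedged toy sketch of the signal-strength cue with assumed path-loss parameters: the robot logs RSSI at known positions and grid-searches the node location that best explains the readings (an illustration only, not the project's scheme):

    import numpy as np

    P0, N_EXP = -40.0, 2.0                      # RSSI at 1 m and path-loss exponent (assumed)

    def predicted_rssi(node, robot_positions):
        d = np.linalg.norm(robot_positions - node, axis=1)
        return P0 - 10.0 * N_EXP * np.log10(np.maximum(d, 0.1))

    def locate(robot_positions, rssi, search=np.arange(0.0, 20.0, 0.25)):
        """Grid-search the node position minimizing squared RSSI residuals."""
        best, best_err = None, np.inf
        for x in search:
            for y in search:
                err = np.sum((predicted_rssi(np.array([x, y]), robot_positions) - rssi) ** 2)
                if err < best_err:
                    best, best_err = (float(x), float(y)), err
        return best

    rng = np.random.default_rng(1)
    true_node = np.array([12.0, 7.0])
    path = np.array([[2.0, 2.0], [6.0, 3.0], [10.0, 2.0], [14.0, 4.0], [10.0, 10.0]])  # robot waypoints
    readings = predicted_rssi(true_node, path) + rng.normal(0.0, 1.0, len(path))        # noisy RSSI log
    print(locate(path, readings))               # roughly (12.0, 7.0)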

6. OTHERS

TamuBot (2004-2011)

Through our research, we have accumulated considerable experience working with robotic hardware. Over the years, we have also developed our own four-wheeled skid-steering robots under the TamuBot project. The robot has been a workhorse for our research projects. We have released the design details, including mechanisms, motor control board design, and software source code, on this website. We hope this contributes to the robotics community and helps those who plan to build their own robots.

Respond-R (2009-2013)

This project focuses on the development of a mobile, distributed instrument for response research. Details about the project can be found here.