When the Kinect came out, the community quickly discovered that it could be used for robotics applications. One approach is to exploit gesture recognition to tele-operate robots or send them commands. Video 1 shows an experiment by Taylor Veltrop in which he controls a Nao humanoid robot. On the software side, this experiment relies on ROS.
Video1: Nao Humanoid Teleoperation using the Kinect
Other roboticists use the Kinect as a sensor that lets robots autonomously build maps of their environment. It has been used in commercial wheeled robots such as the Bilibot and the TurtleBot, and in research platforms such as the quadrotor flying robot built by Patrick Bouffard at Berkeley. As shown in Video 2, this aerial robot uses the Kinect for obstacle avoidance. Again, the software for this quadrotor, as well as for the Bilibot and TurtleBot, relies on ROS support for the Kinect.
Video 2: The Kinect Used for Obstacle Avoidance In a Quadrotor Flying Robot
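To give a flavor of how a depth camera like the Kinect feeds into obstacle avoidance, here is a minimal sketch in Python. It is not taken from Bouffard's code or from ROS; the function name and threshold are hypothetical. The idea is simply to check whether any valid depth reading in the central region of the depth image falls below a safety distance:

```python
import numpy as np

# Hypothetical helper (not from the actual quadrotor code): given a depth
# image in meters, decide whether the nearest obstacle straight ahead is
# closer than a safety threshold.
def obstacle_too_close(depth, threshold_m=1.5):
    """Return True if any valid reading in the central third of the
    depth image is closer than threshold_m.

    depth: 2D NumPy array of depths in meters; 0 marks invalid pixels
    (Kinect dropouts, e.g. out-of-range or reflective surfaces, read as 0).
    """
    h, w = depth.shape
    center = depth[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    valid = center[center > 0]          # ignore dropout pixels
    if valid.size == 0:
        return False                    # nothing detected in range
    return float(valid.min()) < threshold_m
```

In a real ROS setup, a check like this would run inside a callback subscribed to the Kinect's depth image topic, with the incoming `sensor_msgs/Image` converted to a NumPy array first; a flying robot would of course need far more than a single threshold test.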
But the time of the Kinect seems to be over, at least for motion tracking. A startup located in San Francisco has showcased the Leap, a high-precision gesture recognition device. It can track several objects in real time with millimeter-scale accuracy. Video 3 gives an overview of the Leap and its capabilities.
Video 3: Introducing the Leap Sensor
The Leap will be available by fall 2012. Compared to the Kinect, it is not only smaller but also cheaper: it can be pre-ordered for $70. The specifications of the Leap are not available yet, though we expect its power consumption to be much lower than the Kinect's. A pending question is its maximum depth for detecting objects. Demos introduce the Leap as a replacement for the keyboard and the mouse, located about 50cm from the user's hands. For robotic applications, it will have to do at least as well as the Kinect, which can track objects at distances between 1.2m and 3.5m.