We designed a new teleoperation interface for a hexapod walking robot. The interface is based on the Kinect sensor and hand-tracking libraries. A set of gestures is used to switch between motion modes of the robot, while the estimated positions of the operator's hands define the reference motion of the robot. We show in experiments that the operator can drive the robot to a goal position and manipulate objects using hand gestures only.
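The gesture-to-mode switching described above can be sketched as a small state machine. The gesture names, motion modes, and velocity gain below are illustrative assumptions, not the interface's actual vocabulary:

```python
# Sketch of gesture-based mode switching for a hexapod teleoperation
# interface. Gesture and mode names are hypothetical placeholders.

class TeleopStateMachine:
    """Maps recognized hand gestures to robot motion modes."""

    # assumed gesture -> mode mapping (not the paper's actual gesture set)
    GESTURE_TO_MODE = {
        "open_palm": "walk",
        "fist": "stop",
        "two_fingers": "manipulate",
    }

    def __init__(self):
        self.mode = "stop"

    def on_gesture(self, gesture):
        """Switch motion mode when a known gesture is recognized;
        unknown gestures leave the current mode unchanged."""
        if gesture in self.GESTURE_TO_MODE:
            self.mode = self.GESTURE_TO_MODE[gesture]
        return self.mode

    def reference_motion(self, hand_pos):
        """Turn the tracked hand position into a reference velocity;
        in 'stop' mode the robot does not move."""
        if self.mode == "stop":
            return (0.0, 0.0, 0.0)
        gain = 0.5  # assumed scaling from hand displacement to velocity
        return tuple(gain * p for p in hand_pos)
```

Separating mode switching (discrete gestures) from reference motion (continuous hand position) keeps accidental hand movements from driving the robot while it is stopped.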

We propose a new environment model for legged robots. The model stores information about the shape of obstacles and the terrain type. We also propose a motion planner which takes terrain types into account when planning the motion of the robot. The planner is based on the A* graph search method, which guides the RRT-Connect method for precise motion planning. With the proposed navigation system the robot is capable of planning its motion over terrain of various types while avoiding potentially risky regions. Finally, we provide experimental results which show the properties of the proposed perception system and motion planner.
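The global guidance layer of such a planner can be sketched as A* over a grid whose cells carry terrain-dependent traversal costs, with risky terrain treated as impassable. The terrain names and cost values below are illustrative assumptions, and the local RRT-Connect refinement is omitted:

```python
import heapq
import itertools

# Assumed terrain-type cost multipliers; "risky" cells are impassable.
TERRAIN_COST = {"flat": 1.0, "rough": 2.5, "risky": float("inf")}

def astar(grid, start, goal):
    """Return the cheapest path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(c):
        # Manhattan distance: admissible because the cheapest step costs 1.0
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    tie = itertools.count()  # tiebreaker so heapq never compares cells' parents
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    best_g = {start: 0.0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            step = TERRAIN_COST[grid[nxt[0]][nxt[1]]]
            if step == float("inf"):
                continue  # risky terrain is never expanded
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```

In a two-level scheme like the one described, each cell of the resulting A* path would serve as a waypoint for the precise, footstep-level RRT-Connect planner.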

Many recent solutions to the RGB-D SLAM problem use the pose-graph optimization approach, which marginalizes out the actual depth measurements. In this paper we employ the same type of factor graph optimization, but we investigate the gains coming from maintaining a map of RGB-D point features and modeling the spatial uncertainty of these features. We demonstrate that RGB-D SLAM accuracy can be increased by employing uncertainty models reflecting the actual errors introduced by measurements and image processing. The new approach is validated in simulations and in experiments involving publicly available data sets to ensure that our results are verifiable.
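The benefit of modeling anisotropic measurement uncertainty can be illustrated with information-form fusion of noisy 3-D feature measurements, where each measurement is weighted by the inverse of its covariance. The covariances below are made-up examples, not the sensor model from the paper, and the full factor-graph optimization is not shown:

```python
import numpy as np

def fuse(measurements, covariances):
    """Maximum-likelihood fusion of 3-D point measurements:
    x = (sum_i S_i^-1)^-1 * sum_i S_i^-1 m_i, so measurements
    with low uncertainty along an axis dominate along that axis."""
    info = np.zeros((3, 3))
    vec = np.zeros(3)
    for m, S in zip(measurements, covariances):
        S_inv = np.linalg.inv(S)
        info += S_inv
        vec += S_inv @ m
    return np.linalg.solve(info, vec)
```

With an isotropic noise model both measurements would be averaged equally; with per-axis covariances, a depth-uncertain measurement barely influences the fused depth, which is the kind of gain the uncertainty-aware approach exploits.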

We present the application of an RGB-D sensor in the navigation system of a six-legged walking robot. The RGB-D sensor is used in the SLAM subsystem to estimate the pose of the robot and to build a dense environment model (an elevation map). The paper presents the navigation system of the robot, which includes the SLAM subsystem, a mapping module, a motion planner and the robot's controller. The results of experiments on the real robot are provided, and the influence of the localization system on the quality of the obtained elevation map is presented.
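The core of elevation mapping can be sketched as projecting 3-D points (already transformed into the world frame using the SLAM pose estimate) onto a 2-D grid and keeping one height per cell. The grid representation and resolution are assumptions for illustration:

```python
def update_elevation_map(grid, points, resolution=0.05):
    """Project world-frame (x, y, z) points onto a sparse 2-D grid,
    keeping the maximum height seen in each cell.

    grid: dict mapping (i, j) cell indices to a height value.
    points: iterable of (x, y, z) tuples from the RGB-D sensor.
    resolution: assumed cell size in meters.
    """
    for x, y, z in points:
        cell = (int(x // resolution), int(y // resolution))
        if cell not in grid or z > grid[cell]:
            grid[cell] = z
    return grid
```

This also illustrates why localization quality matters for the map, as noted in the abstract: a pose error shifts every projected point, so the same obstacle gets smeared across neighboring cells.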

In the paper we present a module which uses a triangle mesh of the robot to detect collisions between parts of the robot. To this end, a triangle-to-triangle intersection test is applied. To speed up the computation, a bounding box test is carried out first. We show the properties and performance of the collision detection module. Then, we propose a method which uses a Gaussian mixture to determine a self-collision model. This method is significantly faster than the triangle-mesh method, but less precise. The collision detection methods can be applied during motion planning, as well as during execution of the planned motion, to detect infeasible configurations of the robot.
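The bounding-box pre-test works as a broad phase: only triangle pairs whose axis-aligned bounding boxes overlap are passed to the exact (and much more expensive) triangle-to-triangle intersection test. A minimal sketch of that broad phase, with the exact narrow-phase test (e.g. Möller's algorithm) deliberately omitted:

```python
def aabb(tri):
    """Axis-aligned bounding box of a triangle given as three (x, y, z) tuples."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(t1, t2):
    """True if the triangles' bounding boxes overlap on every axis."""
    (lo1, hi1), (lo2, hi2) = aabb(t1), aabb(t2)
    return all(lo1[k] <= hi2[k] and lo2[k] <= hi1[k] for k in range(3))

def candidate_pairs(mesh_a, mesh_b):
    """Broad phase between two robot parts: return index pairs of triangles
    whose boxes overlap; only these go to the exact triangle-triangle test."""
    return [(i, j)
            for i, ta in enumerate(mesh_a)
            for j, tb in enumerate(mesh_b)
            if aabb_overlap(ta, tb)]
```

Since most triangle pairs on a walking robot's limbs are far apart, the cheap box test rejects the bulk of them, which is where the reported speed-up comes from.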