Vision-based Navigation and Manipulation


Indoor Navigation

Concept

  • As a map representation, we propose a hybrid map that combines object, spatial-layout, and route information.
  • Global localization is based on object recognition and the pose relationships of the recognized objects, while local localization uses 2D-contour matching against 2D laser scan data.
  • Our map representation is shown below (a data-structure sketch follows the figure):
Map-repres.jpg
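
A minimal data-structure sketch of the three layers of the hybrid map is given below. The class and field names (ObjectNode, LayoutContour, RouteEdge, HybridMap) are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of the hybrid map's three layers:
# object information, spatial layout (2D contours), and routes.

@dataclass
class ObjectNode:
    """An object landmark with its pose in the map frame."""
    name: str
    pose: Tuple[float, float, float]      # (x, y, theta) in the map frame

@dataclass
class LayoutContour:
    """A 2D contour of the local spatial layout, used for laser-based matching."""
    points: List[Tuple[float, float]]     # (x, y) contour points

@dataclass
class RouteEdge:
    """A traversable route segment between two places."""
    start: str
    end: str
    length: float                         # metres

@dataclass
class HybridMap:
    """Hybrid map combining object, spatial-layout, and route information."""
    objects: List[ObjectNode] = field(default_factory=list)
    contours: List[LayoutContour] = field(default_factory=list)
    routes: List[RouteEdge] = field(default_factory=list)
```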

  • Object-based global localization proceeds as follows (a pose-composition sketch follows the figure):
GL.jpg
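
To illustrate the pose relationship used above, the sketch below recovers the robot pose in the map frame from a recognized object's stored map pose and the object pose observed from the robot. The 2D homogeneous-transform helpers and the example poses are assumptions for illustration only.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """3x3 homogeneous transform for a 2D pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def matrix_to_pose(T):
    """Recover (x, y, theta) from a 3x3 homogeneous transform."""
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

def global_localization(object_pose_in_map, object_pose_from_robot):
    """Robot pose in the map frame, from one recognized object.

    object_pose_in_map:     pose of the object stored in the hybrid map
    object_pose_from_robot: object pose estimated by recognition, in the robot frame
    """
    T_map_obj = pose_to_matrix(*object_pose_in_map)
    T_robot_obj = pose_to_matrix(*object_pose_from_robot)
    # T_map_robot = T_map_obj * inv(T_robot_obj)
    T_map_robot = T_map_obj @ np.linalg.inv(T_robot_obj)
    return matrix_to_pose(T_map_robot)

# Example: an object stored at (5, 2, 0) in the map, observed 1 m ahead of the robot,
# yields a robot pose of (4, 2, 0).
print(global_localization((5.0, 2.0, 0.0), (1.0, 0.0, 0.0)))
```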

Related papers

Unknown Objects Grasping

Concept

  • Using stereo vision (a passive 3D sensor) and a jaw-type hand, we studied a method for grasping arbitrary unknown objects.
  • Aiming at practical actions from a single one-shot image, we suggest three graspable directions (lift-up, side, and frontal), as well as an affordance-based grasp for objects with graspable handles.
  • Our grasp directions are as follows:
  • The schema of our whole grasping process is shown below (a direction-selection sketch follows this list):
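
Below is a minimal sketch of how one of the three grasp directions might be selected from an object's point cloud obtained in a single shot. The bounding-box heuristic, the thresholds, and the function name are assumptions for illustration, not the actual decision rule of our system.

```python
import numpy as np

def choose_grasp_direction(points, max_jaw_width=0.08, max_lift_height=0.15):
    """Pick one of the three graspable directions (lift-up, side, frontal)
    for a jaw-type hand, from an object's 3D points (N x 3, in metres).

    The bounding-box heuristic and the thresholds here are illustrative only.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    width, depth, height = maxs - mins        # extents along x, y, z

    if width <= max_jaw_width and height <= max_lift_height:
        return "lift-up"    # grasp from above: object is narrow and low enough
    if depth <= max_jaw_width:
        return "side"       # approach from the side: the jaw can span the depth
    return "frontal"        # otherwise approach from the front

# Example: a tall, narrow object (e.g. a bottle) observed as a point cloud.
rng = np.random.default_rng(0)
bottle = rng.uniform([0.0, 0.0, 0.0], [0.06, 0.06, 0.25], size=(500, 3))
print(choose_grasp_direction(bottle))   # -> "side"
```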

Related papers