Vision-based Navigation and Manipulation

Revision as of 20:43, 26 May 2016

Indoor Navigation

Concept

  • As a map representation, we proposed a hybrid map that combines object, spatial-layout, and route information.
  • Our global localization is based on object recognition and the recognized objects' pose relationships, while our local localization uses 2D contour matching on 2D laser scan data.
  • Our map representation is as follows:
[Figure: Map-repres.jpg (hybrid map representation)]

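The hybrid map above can be sketched as a small data structure holding the three kinds of information together. All class and field names here are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectLandmark:
    name: str    # recognized object class, e.g. "door" (hypothetical example)
    pose: tuple  # (x, y, theta) of the object in the map frame

@dataclass
class HybridMap:
    # object landmarks used for global localization
    objects: list = field(default_factory=list)
    # 2D contour segments ((x1, y1), (x2, y2)) of the spatial layout,
    # matched against laser scans for local localization
    layout: list = field(default_factory=list)
    # route information: undirected adjacency between named waypoints
    routes: dict = field(default_factory=dict)

    def add_route(self, a, b):
        # store the route edge in both directions
        self.routes.setdefault(a, []).append(b)
        self.routes.setdefault(b, []).append(a)

m = HybridMap()
m.objects.append(ObjectLandmark("door", (2.0, 0.5, 1.57)))
m.add_route("hallway", "lab")
```

The three layers stay loosely coupled: objects serve recognition-based global localization, the layout serves scan matching, and the route graph serves path planning.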
  • Our object-based global localization works as follows:
[Figure: GL.jpg (object-based global localization)]

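The core of object-based global localization can be illustrated with plain SE(2) pose composition: given a recognized object's pose stored in the map and its pose as observed from the robot, the robot's pose is recovered by composing the two transforms. This is a generic pose-composition sketch, not the authors' exact formulation:

```python
import math

def compose(a, b):
    # SE(2) composition: express pose b through frame a
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    # inverse of an SE(2) pose (x, y, theta)
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def localize(object_in_map, object_in_robot):
    # robot_in_map = object_in_map composed with inverse(object_in_robot)
    return compose(object_in_map, invert(object_in_robot))

# Hypothetical numbers: the robot sees an object 2 m straight ahead,
# and the map says that object sits at (5, 3) with heading 0.
robot = localize((5.0, 3.0, 0.0), (2.0, 0.0, 0.0))
# robot is (3.0, 3.0, 0.0): two metres behind the object, facing it
```

In practice one recognized object gives one pose hypothesis; several objects and their pose relationships constrain the estimate further, and the local 2D contour matching then refines it.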
Related papers



Unknown Objects Grasping

Concept

  • With stereo vision (a passive 3D sensor) and a jaw-type gripper, we studied a method for grasping arbitrary unknown objects.
  • Since perception is limited to a single-shot 3D image, three graspable directions are suggested: lift-up, side, and frontal. An affordance-based grasp for handle-graspable objects is also proposed.
  • Our experimental movie clip: https://www.youtube.com/watch?v=YVfTltLy2w0
  • Our grasp directions are as follows:
[Figure: Grasp directions.jpg (suggested grasp directions)]

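A minimal way to picture the choice among the three directions is a bounding-box heuristic over the single-shot observation. The rule and the jaw-opening threshold below are illustrative assumptions, not the paper's actual decision criteria:

```python
def choose_grasp_direction(width, depth, height, jaw_opening=0.08):
    """Pick one of the three suggested grasp directions from the
    object's bounding-box extents (metres). Purely illustrative."""
    # lift-up: pinch from above when the object is low and fits the jaw
    if height < jaw_opening and min(width, depth) < jaw_opening:
        return "lift-up"
    # side: approach horizontally when a horizontal extent fits the jaw
    if min(width, depth) < jaw_opening:
        return "side"
    # frontal: approach the facing surface otherwise
    return "frontal"

print(choose_grasp_direction(0.05, 0.05, 0.03))  # -> lift-up
print(choose_grasp_direction(0.05, 0.05, 0.20))  # -> side
```

Objects with a detected handle would bypass this heuristic entirely and use the affordance-based handle grasp instead.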
  • The schema of our whole grasping process is as follows:
[Figure: Grasp-sche.jpg (overall grasping process)]

Related papers