Conference Papers

Object Recognition and Localization While Tracking and Mapping

R O Castle and D W Murray, Proc 8th IEEE/ACM International Symposium on Mixed and Augmented Reality, Orlando, Florida, Oct 19 – 22, 2009. doi:10.1109/ISMAR.2009.5336477.

Abstract

This paper demonstrates how objects can be recognized, reconstructed, and localized within a 3D map, using observations and matching of SIFT features in keyframes. The keyframes arise as part of a frame-rate process of parallel camera tracking and mapping, in which the keyframe camera poses and 3D map points are refined using bundle adjustment. The object reconstruction process runs independently, and in parallel to, the tracking and mapping processes. Detected objects are automatically labelled on the user’s display using predefined annotations. The annotations are also used to highlight areas of interest upon the objects to the user.

Video-rate Localization in Multiple Maps for Wearable Augmented Reality

R O Castle, G Klein, and D W Murray, Proc 12th IEEE International Symposium on Wearable Computers, Pittsburgh, PA, Sept 28 – Oct 1, 2008. doi:10.1109/ISWC.2008.4911577. This paper won the Best Paper award.

Abstract

We show how a system for video-rate parallel camera tracking and 3D map-building can be readily extended to allow one or more cameras to work in several maps, separately or simultaneously. The ability to handle several thousand features per map at video-rate, and for the cameras to switch automatically between maps, allows spatially localized AR workcells to be constructed and used with very little intervention from the user of a wearable vision system. The user can explore an environment in a natural way, acquiring local maps in real-time. When revisiting those areas the camera will select the correct local map from store and continue tracking and structural acquisition, while the user views relevant AR constructs registered to that map.

Video-rate recognition and localization for wearable cameras

R O Castle, D J Gawley, G Klein, and D W Murray, Proc 18th British Machine Vision Conference, Warwick, Sept 2007.

Abstract

Using simultaneous localization and mapping to determine the 3D surroundings and pose of a wearable or hand-held camera provides the geometrical foundation for several capabilities of value to an autonomous wearable vision system. The one explored here is the ability to incorporate recognized objects into the map of the surroundings and refer to them. Established methods for feature cluster recognition are used to identify and localize known planar objects, and their geometry is incorporated into the map of the surroundings using a minimalist representation. Continued measurement of these mapped objects improves both the accuracy of estimated maps and the robustness of the tracking system. In the context of wearable (or hand-held) vision, the system’s ability to enhance generated maps with known objects increases the map’s value to human operators, and also enables meaningful automatic annotation of the user’s surroundings.

Towards simultaneous recognition, localization and mapping for hand-held and wearable cameras

R O Castle, D J Gawley, G Klein, and D W Murray, Proc IEEE International Conference on Robotics and Automation, Rome, April 2007. doi:10.1109/ROBOT.2007.364109.

Abstract

This paper presents a system which combines single-camera SLAM (Simultaneous Localization and Mapping) with established methods for feature recognition. Besides using standard salient image features to build an on-line map of the camera’s environment, this system is capable of identifying and localizing known planar objects in the scene, and incorporating their geometry into the world map. Continued measurement of these mapped objects improves both the accuracy of estimated maps and the robustness of the tracking system. In the context of hand-held or wearable vision, the system’s ability to enhance generated maps with known objects increases the map’s value to human operators, and also enables meaningful automatic annotation of the user’s surroundings. The presented solution lies between the high order enriching of maps such as scene classification, and the efforts to introduce higher geometric primitives such as lines into probabilistic maps.
