Keyframe-based recognition and localization during video-rate parallel tracking and mapping
R O Castle and D W Murray, Image and Vision Computing, Vol 29, No 8, pp 524-532, 2011. doi:10.1016/j.imavis.2011.05.002
Generating situational awareness by augmenting live imagery with collocated scene information has applications from game-playing to military command and control. We propose a method of object recognition, reconstruction, and localization using triangulation of SIFT features from keyframe camera poses in a 3D map. The map and keyframe poses themselves are recovered at video-rate by bundle adjustment of FAST image features in the parallel tracking and mapping algorithm. Detected objects are automatically labelled on the user’s display using predefined annotations. Experimental results are given for laboratory scenes and for more realistic applications.
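The core reconstruction step, recovering a 3D point from a feature matched in two keyframes of known pose, can be sketched with standard linear (DLT) triangulation. This is a minimal illustration of the technique named in the abstract, not the authors' exact implementation; the projection matrices and normalized image coordinates here are assumptions for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature correspondence.

    P1, P2: 3x4 camera projection matrices of two keyframes.
    x1, x2: (u, v) image coordinates of the matched feature.
    Returns the inhomogeneous 3D point in the map frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: u * (P row 3) - (P row 1), etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A: the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Example with normalized (K = I) cameras: one at the origin,
# one translated 1 unit along x, observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.1, 0.2), (0.0, 0.2))
```

In the paper's setting the keyframe poses come from the PTAM map, so the triangulated SIFT features land directly in the map's coordinate frame.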
Wide-area Augmented Reality using Camera Tracking and Mapping in Multiple Regions
R O Castle, G Klein and D W Murray, Computer Vision and Image Understanding, Vol 115, No 6, pp 854-867, June 2011. doi:10.1016/j.cviu.2011.02.007
We show how a system for video-rate parallel camera tracking and 3D map-building can be readily extended to allow one or more cameras to work in several maps, separately or simultaneously. The ability to handle several thousand features per map at video-rate, and for the cameras to switch automatically between maps, allows spatially localized AR workcells to be constructed and used with very little intervention from the user of a wearable vision system. The user can explore an environment in a natural way, acquiring local maps in real-time. When revisiting those areas the camera will select the correct local map from store and continue tracking and structural acquisition, while the user views relevant AR constructs registered to that map. The method is shown working in progressively larger environments, from a desktop to a large building.
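One way to realize the automatic map selection described above is appearance-based relocalization against stored keyframes: each map keeps downsampled, zero-mean thumbnails of its keyframes, and a lost camera is matched to the nearest thumbnail across all maps. The sketch below uses a hypothetical `MapStore` interface and a simple sum-of-squared-differences score; it illustrates the idea rather than reproducing the paper's recovery mechanism.

```python
import numpy as np

class MapStore:
    """Sketch of automatic map selection over several stored maps.

    Each map holds zero-mean thumbnails of its keyframes; a query
    frame is assigned to the map whose closest thumbnail has the
    smallest sum-of-squared-differences (SSD).
    """
    def __init__(self):
        self.maps = {}  # map_id -> list of keyframe thumbnails

    @staticmethod
    def _thumbnail(image):
        # Crude 8x downsample, then remove the mean to reduce
        # sensitivity to global lighting changes.
        t = image[::8, ::8].astype(float)
        return t - t.mean()

    def add_keyframe(self, map_id, image):
        self.maps.setdefault(map_id, []).append(self._thumbnail(image))

    def select_map(self, image):
        """Return the id of the map best matching the query frame."""
        query = self._thumbnail(image)
        best_id, best_ssd = None, np.inf
        for map_id, thumbs in self.maps.items():
            for t in thumbs:
                ssd = np.sum((t - query) ** 2)
                if ssd < best_ssd:
                    best_id, best_ssd = map_id, ssd
        return best_id

# Two maps with visually distinct keyframes.
store = MapStore()
img_desk = np.zeros((64, 64)); img_desk[:32, :] = 255.0
img_hall = np.zeros((64, 64)); img_hall[:, :32] = 255.0
store.add_keyframe("desk", img_desk)
store.add_keyframe("hall", img_hall)
```

Once the best map is chosen, tracking and structural acquisition continue in that map's coordinate frame, so AR annotations registered to it reappear correctly.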
Combining monoSLAM with Object Recognition for Scene Augmentation using a Wearable Camera
R O Castle, G Klein and D W Murray, Image and Vision Computing, Vol 28, No 11, pp 1548-1556, November 2010. doi:10.1016/j.imavis.2010.03.009
In wearable visual computing, maintaining a time-evolving representation of the 3D environment along with the pose of the camera provides the geometrical foundation on which person-centred processing can be built. In this paper, an established method for the recognition of feature clusters is used on live imagery to identify and locate planar objects around the wearer. Objects’ locations are incorporated as additional 3D measurements into a monocular simultaneous localization and mapping process, which routinely uses 2D image measurements to acquire and maintain a map of the surroundings, irrespective of whether objects are present or not. Augmenting the 3D maps with automatically recognized objects enables useful annotations of the surroundings to be presented to the wearer. After demonstrating the geometrical integrity of the method, experiments show its use in two augmented reality applications.