TY - GEN
T1 - Augmented reality using spatially multiplexed structured light
AU - Torres, Michael
AU - Jassel, Richard
AU - Tang, Ying
PY - 2012/12/1
Y1 - 2012/12/1
N2 - Augmented reality (AR) superimposes computer-generated 3-D graphics or 2-D information on the user's view of the surrounding environment in real time, enhancing the user's perception of the real world. Typical AR systems use either fiducial markers or information about an object in the scene for AR tracking. The former approach, marker-based AR, requires that artificial markers be placed in the scene to be tracked by the system; in mobile applications, these markers clutter the small view of the mobile device. The latter approach, markerless AR, often requires some form of offline training, making it unsuitable for unprepared environments. Even methods without this requirement, such as structure-from-motion (SFM) approaches, may still depend on an initial reference object. Motivated by these limitations, this research focuses on markerless AR for mobile applications and proposes a tracking method based on structured light (SL). The proposed architecture consists of three stages. In the first stage, invariant feature tracking methods track real regions between successive video frames. In the second stage, spatially multiplexed SL, in which a known active image is projected onto the scene, is used to extract 3-D range data from a captured image. The final stage uses the captured 3-D data and the tracking information to overlay AR content onto the live video.
AB - Augmented reality (AR) superimposes computer-generated 3-D graphics or 2-D information on the user's view of the surrounding environment in real time, enhancing the user's perception of the real world. Typical AR systems use either fiducial markers or information about an object in the scene for AR tracking. The former approach, marker-based AR, requires that artificial markers be placed in the scene to be tracked by the system; in mobile applications, these markers clutter the small view of the mobile device. The latter approach, markerless AR, often requires some form of offline training, making it unsuitable for unprepared environments. Even methods without this requirement, such as structure-from-motion (SFM) approaches, may still depend on an initial reference object. Motivated by these limitations, this research focuses on markerless AR for mobile applications and proposes a tracking method based on structured light (SL). The proposed architecture consists of three stages. In the first stage, invariant feature tracking methods track real regions between successive video frames. In the second stage, spatially multiplexed SL, in which a known active image is projected onto the scene, is used to extract 3-D range data from a captured image. The final stage uses the captured 3-D data and the tracking information to overlay AR content onto the live video.
UR - http://www.scopus.com/inward/record.url?scp=84876055246&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84876055246&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84876055246
SN - 9780473204853
T3 - 2012 19th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2012
SP - 385
EP - 390
BT - 2012 19th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2012
T2 - 2012 19th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2012
Y2 - 28 November 2012 through 30 November 2012
ER -