Augmented reality (AR) superimposes computer-generated 3D graphics or 2D information on the user's view of the surrounding environment in real time, enhancing the user's perception of the real world. Typical AR systems track either fiducial markers or information about an object in the scene. The former approach, marker-based AR, requires that artificial markers be placed in the scene for the system to track; in mobile applications, these markers clutter the small display of the device. The latter approach, markerless AR, often requires some form of offline training, making it unsuitable for unprepared environments. Even methods without this requirement, such as structure from motion (SFM) approaches, may still depend on an initial reference object in the application. Based on these observations, this research focuses on markerless AR for mobile applications and proposes a tracking method using structured light (SL). The proposed architecture consists of a three-stage process. In the first stage, invariant feature tracking methods are implemented to track real regions between successive video frames. In the second stage, spatially multiplexed SL, which projects a known active pattern onto the scene, is used to extract 3D range data from a captured image. The final stage combines the captured 3D data and the tracking information to overlay AR content onto the live video.
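The 3D range extraction in the second stage rests on projector-camera triangulation: once the projected pattern is decoded, each camera pixel is matched to a known projector column, and depth follows from the disparity between the two. The sketch below illustrates this underlying geometry for a rectified projector-camera pair; the function name, focal length, and baseline values are hypothetical, and real spatially multiplexed SL would first decode the pattern to obtain the projector correspondence.

```python
import numpy as np

def sl_depth(x_cam, x_proj, focal, baseline):
    """Triangulate depth (metres) for a rectified projector-camera pair.

    x_cam    : camera pixel column(s) of the observed pattern feature
    x_proj   : decoded projector column(s) that emitted the feature
    focal    : focal length in pixels (assumed shared by both devices)
    baseline : projector-camera separation in metres
    """
    disparity = np.asarray(x_cam, dtype=float) - np.asarray(x_proj, dtype=float)
    return focal * baseline / disparity

# Synthetic check with hypothetical intrinsics: a scene point at 2 m depth
focal, baseline = 800.0, 0.1          # pixels, metres (assumed values)
z_true = 2.0
disparity = focal * baseline / z_true  # forward model: 40 px shift
x_proj = 320.0                         # decoded projector column
x_cam = x_proj + disparity             # where the camera sees the stripe
z = sl_depth(x_cam, x_proj, focal, baseline)  # recovers z_true
```

In a full system this triangulation would be applied per decoded pixel to build the range image that the final stage fuses with the feature-tracking results.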