Augmented Reality (AR) is a technology that presents an enhanced view of the real world around you — for example, displaying live match scores alongside the football action on the pitch. An AR application does this by rendering the live camera feed, using vision-based recognition algorithms to identify what the camera sees, and overlaying virtual objects on the real-world image shown in the camera preview.
In this blog, we will walk through the development of a sample AR app using the Qualcomm Vuforia SDK, one of the best SDKs available for building vision-based AR apps for mobile phones and tablets. It is currently available for Android, iOS, and Unity. The SDK takes care of the low-level intricacies, letting developers focus entirely on the application.
We will begin with an overview of the key concepts used in the AR space; it is important to understand these before starting AR app development.
The above diagram shows the core components of a Vuforia AR application.
Target Manager – The first step is to create targets with the Vuforia Target Manager, which serve as the device database. For this, you first need to register for free on the Vuforia developer portal; the same account can be used to download resources from Vuforia and to post issues on the Vuforia forums. After login,
Installation and Downloading Sample Code
Developer Workflow
When the user launches the sample app, it begins creating and loading textures by calling loadTextures(). Here, bitmaps from the assets folder are converted into the Texture objects that the Vuforia SDK engine needs to render 3D objects in the AR view.
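The core of that conversion is reordering pixel bytes: Android bitmaps return packed ARGB ints, while OpenGL ES expects RGBA byte order for texture upload. The sketch below (a simplified, hypothetical stand-in for the sample's texture helper, with no Android dependencies) shows the reordering step:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TextureConverter {
    // Convert packed ARGB pixels (as returned by Bitmap.getPixels)
    // into an RGBA byte buffer suitable for glTexImage2D.
    public static ByteBuffer argbToRgba(int[] argb) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(argb.length * 4)
                                      .order(ByteOrder.nativeOrder());
        for (int pixel : argb) {
            buffer.put((byte) ((pixel >> 16) & 0xFF)); // R
            buffer.put((byte) ((pixel >> 8)  & 0xFF)); // G
            buffer.put((byte) ( pixel        & 0xFF)); // B
            buffer.put((byte) ((pixel >> 24) & 0xFF)); // A
        }
        buffer.rewind();
        return buffer;
    }

    public static void main(String[] args) {
        // One opaque red pixel: A=0xFF, R=0xFF, G=0x00, B=0x00
        ByteBuffer rgba = argbToRgba(new int[] { 0xFFFF0000 });
        System.out.printf("%02X %02X %02X %02X%n",
                rgba.get(0) & 0xFF, rgba.get(1) & 0xFF,
                rgba.get(2) & 0xFF, rgba.get(3) & 0xFF);
        // prints "FF 00 00 FF"
    }
}
```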
Simultaneously, the app starts configuring the engine by calling initAR(), which asynchronously initializes the Vuforia SDK engine. After successful initialization, the app asynchronously loads the tracker data: it loads and activates the datasets from the target XML file created with the Target Manager.
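The important constraint here is ordering: dataset loading must only begin after engine initialization succeeds. The sample app enforces this with Android AsyncTasks; the hypothetical sketch below (method names mirror the sample's, but the bodies are placeholders) shows the same ordering with plain CompletableFutures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class InitSequence {
    final List<String> log = new ArrayList<>();

    // Stand-in for the asynchronous Vuforia engine initialization.
    CompletableFuture<Void> initAR() {
        return CompletableFuture.runAsync(() -> log.add("engine initialized"));
    }

    // Stand-in for loading and activating the target datasets.
    CompletableFuture<Void> loadTrackersData() {
        return CompletableFuture.runAsync(() -> {
            log.add("dataset loaded");    // parse the Target Manager XML
            log.add("dataset activated"); // hand the dataset to the tracker
        });
    }

    List<String> start() {
        // Chain the stages: loading only starts once init has completed.
        initAR().thenCompose(v -> loadTrackersData()).join();
        return log;
    }

    public static void main(String[] args) {
        System.out.println(new InitSequence().start());
        // prints "[engine initialized, dataset loaded, dataset activated]"
    }
}
```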
After the engine is configured, the app calls initApplicationAR(), which creates and initializes the OpenGL ES view and attaches a renderer to it; the renderer is needed to display the video on top of the camera view when the image target is found. The attached renderer requests loading of the videos through VideoPlaybackHelper and calls the OpenGL DrawFrame() method to track camera frames. During this process, OpenGL textures are created both from the texture objects loaded at startup and for the video data.
The app then calls startAR() to start the camera. As soon as the camera starts, the renderer begins tracking frames. When the image target is found, the app displays the video, which can be configured to play automatically or to start on tap.
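The per-frame decision the renderer makes can be sketched as a small state machine (a hypothetical simplification, not the sample's actual code): if the target is visible, either start playback automatically or wait for a tap, depending on configuration.

```java
public class VideoController {
    enum State { STOPPED, PLAYING }

    private final boolean autoPlay;
    private State state = State.STOPPED;

    VideoController(boolean autoPlay) { this.autoPlay = autoPlay; }

    // Called once per camera frame from the renderer's draw loop.
    void onFrame(boolean targetVisible) {
        if (targetVisible && autoPlay && state == State.STOPPED) {
            state = State.PLAYING;   // target found: overlay the video
        } else if (!targetVisible) {
            state = State.STOPPED;   // target lost: stop playback
        }
    }

    // Called from the touch handler when the user taps the screen.
    void onTap(boolean targetVisible) {
        if (targetVisible) state = State.PLAYING;
    }

    State state() { return state; }

    public static void main(String[] args) {
        VideoController auto = new VideoController(true);
        auto.onFrame(true);
        System.out.println(auto.state());  // PLAYING

        VideoController tap = new VideoController(false);
        tap.onFrame(true);                 // still STOPPED until a tap
        tap.onTap(true);
        System.out.println(tap.state());   // PLAYING
    }
}
```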
When the user closes the app, all resources are unloaded by calling onDestroy().
In this blog, we have seen how Vuforia can be used to track image targets and display video on top of them. Vuforia can also be used for many other purposes, such as text detection, occlusion management, and runtime target generation from the cloud. To learn more about Vuforia, check developer.vuforia.com.