DEV Community

whatminjacodes [she/they]


Overview of mobile Augmented Reality fall 2020

I have been creating Augmented Reality (AR) applications for most of my career, but during the past year my focus has been on other subjects. I decided to go through the AR SDKs I have used previously, just to keep up with the new features that have been published!

This blog post is not a tutorial on how to use AR, but it can still be useful if you are interested in starting to develop AR products! It gives an overview of the most popular SDKs and their current features, and I have also added a link to the documentation of each of these SDKs. There's a good set of tutorials on their websites, so just check which features are available, go to the documentation and start creating!

I have mostly been using the Unity game engine for creating mixed reality applications, but native mobile development is also an option!

Google ARCore

ARCore is an SDK developed by Google for creating Augmented Reality applications. The newest version right now is v1.20.0, and they have been releasing updates every two months or so.

Current features in each of the supported platforms:

Android (Java) & Android NDK:
  • Instant Placement API (place AR objects instantly, without establishing full tracking and surface detection)
  • Depth API (uses a supported device's camera to create depth maps, allowing virtual objects to appear, for example, behind real-world objects)
  • Lighting Estimation API (depending on the chosen mode, analyzes the camera view in real time and provides information about the lighting for more realistic rendering of virtual objects)
  • Augmented Images API (detect and augment 2D images in the user's environment, such as posters or product packages)
  • Cloud Anchors API (create multiplayer or collaborative AR experiences in a space that can be shared between Android and iOS users)
  • Augmented Faces API (detects the user's face and overlays assets on it, so basically face filters)

iOS:
  • Cloud Anchors API
  • Augmented Faces API

Unity:
  1. ARCore Extensions for AR Foundation (Unity's framework for creating AR experiences):
    • Depth API
    • Augmented Images API
    • Cloud Anchors API
  2. ARCore SDK for Unity:
    • Instant Placement API
    • Depth API
    • Lighting Estimation API
    • Augmented Faces API
    • Augmented Images API
    • Cloud Anchors API

Unreal:
  • Augmented Faces API
  • Augmented Images API
  • Cloud Anchors API
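The Depth API idea can be sketched in a few lines: given a per-pixel depth map of the real scene, a virtual object is drawn only where it is closer to the camera than the real surface. A toy illustration in plain Python (not ARCore code; the function name and data layout are made up for this sketch):

```python
def composite_with_occlusion(depth_map, virtual_depth, virtual_pixels):
    """Return the pixel coordinates where a virtual object should be
    drawn, given a depth map of the real scene (distances in meters).

    depth_map: dict mapping (x, y) -> distance of the real surface
    virtual_depth: distance of the virtual object from the camera
    virtual_pixels: iterable of (x, y) coordinates the object covers
    """
    visible = []
    for xy in virtual_pixels:
        # Draw the virtual pixel only if nothing real is in front of it.
        if virtual_depth < depth_map.get(xy, float("inf")):
            visible.append(xy)
    return visible

# A real object 1.0 m away occludes a virtual cube placed at 1.5 m,
# but not one placed at 0.5 m.
depth = {(0, 0): 1.0, (1, 0): 1.0, (2, 0): 3.0}
print(composite_with_occlusion(depth, 1.5, [(0, 0), (1, 0), (2, 0)]))  # [(2, 0)]
print(composite_with_occlusion(depth, 0.5, [(0, 0), (1, 0), (2, 0)]))  # all three
```

The real API hands you a depth image per camera frame; the per-pixel comparison above is the core of how occlusion falls out of it.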

Apple ARKit

ARKit is an SDK developed by Apple for creating Augmented Reality applications. The newest version is ARKit 4. It is available for iOS devices only.

Currently available features:

  • World Tracking (track surfaces, images, objects, people or user faces)
  • Geotracking (track specific geographic areas of interest and render them in an AR experience)
  • Face Tracking (detect faces and overlay virtual content on top of it)
  • People (react to people that ARKit identifies in the camera feed)
  • Image Tracking (recognize images and track their position and orientation)
  • Object Detection (recognize known objects at run-time by first scanning them with an app)
  • Rendering Effects (for adding more realistic reflections and light to virtual objects)
  • Multiuser (communicate with other devices to create a shared AR experience)
  • Custom Display (create AR experience by implementing your own custom renderer)
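World Tracking in practice means the SDK continuously estimates the device's pose (position and orientation) so that content attached to an anchor stays fixed in the real world. A toy plain-Python sketch (not ARKit API; names and the simplified yaw-only rotation are made up for illustration) of applying an anchor's pose to a point defined in anchor-local space:

```python
import math

def pose_transform(position, yaw_degrees, local_point):
    """Transform a point from anchor-local coordinates to world
    coordinates, given the anchor's world position and a rotation
    about the vertical (y) axis. Axes: x right, y up, z forward."""
    yaw = math.radians(yaw_degrees)
    lx, ly, lz = local_point
    # Rotate around the y axis, then translate by the anchor position.
    wx = math.cos(yaw) * lx + math.sin(yaw) * lz
    wz = -math.sin(yaw) * lx + math.cos(yaw) * lz
    px, py, pz = position
    return (px + wx, py + ly, pz + wz)

# An anchor 2 m in front of the world origin, rotated 90 degrees
# about y: a point 1 m along the anchor's local x axis ends up
# offset along the world's -z axis instead.
print(pose_transform((0.0, 0.0, 2.0), 90.0, (1.0, 0.0, 0.0)))
```

Real SDKs use full 4x4 matrices or quaternions for the rotation part, but the "rotate, then translate" pipeline is the same.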

AR Foundation (Unity)

AR Foundation is Unity's API for creating multi-platform Augmented Reality applications that share the same functionality. AR Foundation doesn't implement any AR features itself, so you also need to download separate Unity XR packages for your target platforms. Currently supported platforms are ARCore, ARKit, Magic Leap and Windows XR (HoloLens). Note that not all features work on all platforms.

Currently available features:

  • Device Tracking (track device's position and orientation)
  • Plane Detection (detect horizontal and vertical surfaces)
  • Point Clouds (feature points: the AR device uses its camera and image analysis to track specific points in the world, and uses these points to build a map of its environment)
  • Anchor (an arbitrary position and orientation that the device tracks)
  • Environmental Probe (for generating a cube map to represent reflections from physical environment)
  • Face Tracking (detect and track faces)
  • 2D Image Tracking (detect and track 2D images)
  • 3D Object Tracking (detect and track 3D objects)
  • Meshing (generate triangle meshes that correspond to the physical space)
  • Body Tracking (recognize people in physical space)
  • Collaborative Participants (track the position and orientation of other devices in a shared AR experience)
  • Human Segmentation and Occlusion (blend physical and virtual objects by applying depth information to rendered 3D content)
  • Raycast (queries the physical surroundings for detected planes and feature points)
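An AR raycast ultimately reduces to intersecting a ray from the camera with geometry the SDK has detected. A minimal plain-Python sketch (not AR Foundation code; the helper name is made up) of hitting a detected horizontal plane:

```python
def raycast_horizontal_plane(origin, direction, plane_y):
    """Intersect a ray with the horizontal plane y == plane_y.
    Returns the hit point, or None when there is no hit in front
    of the ray origin."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:  # Ray parallel to the plane: no single hit point.
        return None
    t = (plane_y - oy) / dy
    if t < 0:    # The plane is behind the ray origin.
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# A camera 1.4 m above the floor, looking forward and downward,
# hits the detected floor plane 2.8 m ahead.
hit = raycast_horizontal_plane((0.0, 1.4, 0.0), (0.0, -0.5, 1.0), 0.0)
print(hit)  # (0.0, 0.0, 2.8)
```

AR Foundation's raycast manager does this against every detected trackable (planes, feature points) and returns the hits sorted by distance; the math per plane is the intersection above.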


Vuforia

Vuforia is an SDK for creating AR applications. The current version is 9.5.

Currently available features:

  • Model Targets (recognize objects by shape using pre-existing 3D models)
  • Area Targets (scan your surroundings using iPad with LiDAR or Matterport 3D camera and augment objects in the environment)
  • Image Targets (attach content onto real world images)
  • Object Targets (scan small objects and recognize them in AR)
  • Multi-Targets (use more than one Image Target simultaneously)
  • Cylinder Targets (recognize images wrapped onto objects that are cylindrical)
  • VuMarks (bar-code-like markers that work as AR targets)
  • External Camera (access video data from a camera other than the phone's built-in one when creating AR experiences)
  • Ground Plane (place content on horizontal surfaces)
  • Vuforia Fusion (framework for creating cross-platform Vuforia AR experiences)
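Cylinder Targets hint at the geometry involved: a flat source image of width w wraps onto a cylinder of radius w / (2π), so a pixel's horizontal position becomes an angle around the axis. A rough plain-Python sketch of that mapping (illustrative only, not Vuforia code; Vuforia additionally supports tapered cylinders):

```python
import math

def wrap_to_cylinder(u, v, image_width):
    """Map a point (u, v) on a flat image (u horizontal, v vertical,
    both in the same length unit) to 3D coordinates on the cylinder
    the image wraps around. The cylinder axis is the y axis."""
    radius = image_width / (2 * math.pi)
    angle = u / radius  # Arc length u corresponds to this angle.
    return (radius * math.cos(angle), v, radius * math.sin(angle))

# Halfway across the image is halfway around the cylinder (180
# degrees), so the point lands on the opposite side: x flips sign.
left = wrap_to_cylinder(0.0, 0.1, 1.0)
half = wrap_to_cylinder(0.5, 0.1, 1.0)
print(left[0] > 0 and half[0] < 0)  # True
```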

Zappar WebAR

Create AR experiences directly in the mobile web browser. No app required (though they also have a mobile app for publishing AR content).

Currently available features:

  • World Tracking (keep position of virtual content constant in the real world environment)
  • Face Tracking (track the user's face and augment content on it)
  • Image Tracking (track an existing image and augment content on it)
  • Lighting (illuminate 3D content in your scene)
  • Add buttons and interactivity

Adobe Aero

A platform for creating AR experiences without needing to code. Supports iOS, Windows and macOS.

Currently available features:

  • Objects can be animated and placed in the real world

Snapchat Lens Studio

Snapchat is a social media platform that lets users share pictures and videos that disappear after a while. It is known for its AR face filters. The current version of Lens Studio is 2.3.

Currently available features:

  • Marker Tracking (attach content to a physical image and track it)
  • Object Tracking (attach 2D images and animations to certain objects found in the scene, currently supports tracking of a Cat, Dog, Hand and Body)
  • Device Tracking (place objects that appear in front of the user; can detect surfaces and rotation)
  • Particles (shader based system for emitting particles)
  • Segmentation (can be used to hide certain areas of the scene)
  • Hand Gestures (hand recognition feature for triggering events that can be used to perform actions)
  • Audio Effects (applied to audio recorded by the microphone, like pitch-shifting the user's voice)
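Segmentation boils down to a per-pixel mask: each pixel is classified (for example, person vs. background), and the lens keeps or replaces it accordingly. A toy plain-Python sketch (not Lens Studio code) of applying such a mask:

```python
def apply_segmentation_mask(frame, mask, replacement):
    """Replace every pixel of `frame` where `mask` is 0 (background)
    with the corresponding pixel from `replacement`, keeping pixels
    where `mask` is 1 (e.g. the segmented person)."""
    return [
        [fg if m else bg for fg, m, bg in zip(frow, mrow, rrow)]
        for frow, mrow, rrow in zip(frame, mask, replacement)
    ]

# A 2x3 "camera frame": keep the person pixels (P), swap the
# background (b) for a sky texture (S).
frame = [["P", "P", "b"], ["b", "P", "b"]]
mask  = [[1, 1, 0], [0, 1, 0]]
sky   = [["S", "S", "S"], ["S", "S", "S"]]
print(apply_segmentation_mask(frame, mask, sky))
# [['P', 'P', 'S'], ['S', 'P', 'S']]
```

In the real thing the mask comes from an ML model and is applied per frame on the GPU, but the hide-or-keep logic is the same.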

Facebook (Instagram) Spark AR

AR effects used on Facebook platforms. The current version is v100, and new versions are released every two weeks.

Currently available features:

  • People Tracking (hand and face tracker)
  • Plane Tracker (track horizontal surfaces)
  • Target Tracker (track images, like posters, in real world)
  • Environment Textures (mimic the light in a real world environment)
  • Retouching Material (add retouching effects like skin smoothing)
  • Particles (emit particle effects)


The idea of this blog post was to have a centralized overview of the features currently available in these different AR SDKs. There are obviously many more SDKs out there, but I wanted to concentrate on the platforms I have used before or that are popular.

Some of the terminology used here might not make much sense if you are not familiar with AR, but the documentation for each SDK has longer and better explanations of all the features.

If you are interested in testing how AR apps work, I recommend just picking something that sounds interesting and looking for beginner tutorials! Back in 2016 when I started with Vuforia, I had my first simple image-recognition AR app done in 10 minutes ^_____^
