ARKit Tutorial Basic 2


DOF(Degree of Freedom)

3DOF: three axes of rotation (roll, yaw and pitch) -> for 360 video
6DOF (requires an A9 chip or newer): the three rotation axes (roll, yaw and pitch) plus position along three translation axes -> for AR

6DOF lets the phone know where a 3D object sits relative to the device, and from that pose it renders a 2D image of the 3D object.
Remember that the only relationship here is between your phone and the 3D object. You can say a cup (3D object) is 5m away from your phone, but you can’t say how far the cup is from a real chair in your room.
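The 6DOF pose described above can be read directly from the session's current frame. A minimal sketch (the properties are from ARKit's ARCamera API; `sceneView` is assumed to be an ARSCNView set up elsewhere):

```swift
import ARKit

// Read the camera's 6DOF pose from the current frame.
// The transform is a 4x4 matrix: rotation in the upper-left 3x3,
// position (translation) in the last column.
if let frame = sceneView.session.currentFrame {
    let transform = frame.camera.transform
    let position = SIMD3<Float>(transform.columns.3.x,
                                transform.columns.3.y,
                                transform.columns.3.z)
    // eulerAngles gives the three rotation axes: pitch, yaw, roll
    let rotation = frame.camera.eulerAngles
    print("camera position: \(position), rotation: \(rotation)")
}
```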

Feature points

Feature points are very important for AR: if you want the user to feel like they are interacting with the real world, you need to know where in the real world they tapped. Feature points can be seen as yellow dots when you do this:

sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]

With feature points, ARKit can detect specific 3D geometry (only horizontal planes are available right now, but I suspect in the future ARKit will detect more complex 3D geometry). Once we detect a plane, we can place virtual 3D objects in the real world.

There are some things you need to know to avoid poor ARKit feature point extraction:

  • Poor lighting — not enough light or too much light with shiny specular highlights. Try and avoid environments with poor lighting.
  • Lack of texture — if you point your camera at a white wall there is really nothing unique to extract, and ARKit won’t be able to find features or track you. Try to avoid looking at areas of solid colors, shiny surfaces etc.
  • Fast movement — this is subjective for ARKit. Normally, if you are only using images to detect and estimate 3D poses, moving the camera too fast produces blurry images that cause tracking to fail. However, ARKit uses something called Visual-Inertial Odometry: as well as the image information, ARKit uses the device’s motion sensors to estimate where the user has turned. This makes ARKit very robust in terms of tracking.
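The problems in the list above surface at runtime as tracking-state changes, which you can observe through the session delegate. A sketch using ARKit's ARSessionObserver callback (the log messages are just illustrative):

```swift
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .normal:
        print("tracking is good")
    case .limited(.insufficientFeatures):
        print("not enough feature points: add light or texture")
    case .limited(.excessiveMotion):
        print("moving too fast: slow down")
    case .limited(let reason):
        print("tracking limited: \(reason)")
    case .notAvailable:
        print("tracking not available")
    }
}
```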

Useful functions


 open func hitTest(_ point: CGPoint, types: ARHitTestResult.ResultType) -> [ARHitTestResult]
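A typical use of hitTest(_:types:) is inside a tap-gesture handler: convert the tap location to a hit test and read the result's world position. A sketch, assuming `sceneView` is an ARSCNView and the gesture recognizer is wired up elsewhere:

```swift
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    // Ask ARKit what the ray through this screen point hits
    let results = sceneView.hitTest(point, types: .featurePoint)
    if let result = results.first {
        // worldTransform's last column holds the 3D hit position
        let t = result.worldTransform.columns.3
        let position = SCNVector3(t.x, t.y, t.z)
        print("tapped real-world position: \(position)")
    }
}
```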

How to detect flat surfaces

ARKit provides two ways to locate real-world flat surfaces in a scene.
Plane detection (existingPlane, existingPlaneUsingExtent, existingPlaneUsingGeometry)
How it works: ARKit continuously analyzes the scene to accurately map the position and extent of any planes in view. This is an ongoing process with many analyzing sub-processes, so it takes time. To sum up, we detect an ARPlaneAnchor (an object containing information about a detected horizontal plane) as frames are rendered.

if let worldSessionConfig = sessionConfig as? ARWorldTrackingConfiguration {
    worldSessionConfig.planeDetection = [.horizontal]
    // run the session with the updated configuration
    sceneView.session.run(worldSessionConfig, options: [])
}

By setting the planeDetection property of ARWorldTrackingConfiguration to .horizontal (or .vertical), we tell ARKit to look for horizontal planes. Once ARKit detects a horizontal plane, an anchor for it is added to the scene view’s session. The protocol method below gets called every time the scene view’s session has a new ARAnchor added:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // called for every anchor added into the scene view's session
    DispatchQueue.main.async {
        if let planeAnchor = anchor as? ARPlaneAnchor {
            // do something with the detected plane
        }
    }
}

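Inside that callback you typically attach geometry sized from the anchor, so the detected plane becomes visible. A minimal sketch (the semi-transparent blue material is just a debugging choice; `extent` and `center` come from ARPlaneAnchor):

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Build a flat plane matching the anchor's estimated extent
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)
    let planeNode = SCNNode(geometry: plane)
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    // SCNPlane is vertical by default; rotate it to lie flat
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)
}
```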
Instead of using plane detection you can fall back to plane estimation, which is faster but less accurate.

Plane estimation (estimatedHorizontalPlane, estimatedVerticalPlane)
How it works: instead of tracking planes across frames, ARKit estimates a plane on demand from the feature points around a hit-test location in the current frame, so results come back immediately but are not refined over time.

Explain ARHitTestResult.ResultType

featurePoint: check whether the ray through a point hits a feature point
estimatedHorizontalPlane: check whether the ray through a point hits an estimated horizontal plane
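The two result types above can be combined, preferring a plane hit and falling back to a raw feature point. A sketch under the same assumption as earlier snippets (`sceneView` is an ARSCNView; the helper name is mine):

```swift
func realWorldPosition(at point: CGPoint) -> SCNVector3? {
    // Prefer an estimated plane; fall back to a feature point
    let types: [ARHitTestResult.ResultType] = [.estimatedHorizontalPlane, .featurePoint]
    for type in types {
        if let result = sceneView.hitTest(point, types: type).first {
            let t = result.worldTransform.columns.3
            return SCNVector3(t.x, t.y, t.z)
        }
    }
    return nil
}
```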
