I would like to place a hairstyle on a face after facial landmark detection. I'm able to render 2D images properly, and now I would like to render a 3D model; I thought of using SceneKit for that. I would like to know how Instagram, Snapchat, and other face-filter apps render 3D models. I notice that the SceneKit coordinate system is different from the UIKit coordinate system, and I have googled but couldn't find how to convert between them. Could anyone help? Thanks.
1 Answer
Look at the worldUp and simdWorldUp instance properties to understand how ARKit constructs a scene coordinate system based on real-world device motion (you can also inspect the ARConfiguration.WorldAlignment enum).
Please look at this SO post: Understand coordinate spaces in ARKit for complete info.
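A minimal sketch of how a face-tracking session could be configured, assuming an ARSCNView outlet named sceneView (the class and outlet names are hypothetical):

```swift
import UIKit
import ARKit
import SceneKit

class FaceFilterViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // worldAlignment controls how ARKit maps the scene's axes onto
        // the real world (.gravity, .gravityAndHeading, .camera).
        let configuration = ARFaceTrackingConfiguration()
        configuration.worldAlignment = .gravity   // Y-axis parallel to gravity

        sceneView.session.run(configuration)

        // worldUp / simdWorldUp report a node's up axis in scene world space.
        print(sceneView.scene.rootNode.worldUp)      // SCNVector3
        print(sceneView.scene.rootNode.simdWorldUp)  // simd_float3
    }
}
```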
And remember, ARAnchor is your best friend when placing 3D objects. Click here for further details.
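For example, with face tracking ARKit adds an ARFaceAnchor for the detected face, and you can attach your hairstyle model to that anchor's node in the ARSCNViewDelegate callback. A sketch continuing the controller above (the "hairstyle.scn" asset name and the position offset are hypothetical, and sceneView.delegate = self must be set before running the session):

```swift
import ARKit
import SceneKit

extension FaceFilterViewController: ARSCNViewDelegate {

    // Called when ARKit adds a node for a new anchor; for face tracking
    // that anchor is an ARFaceAnchor that follows the detected face.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARFaceAnchor,
              let hairScene = SCNScene(named: "hairstyle.scn"),
              let hairNode = hairScene.rootNode.childNodes.first
        else { return }

        // The offset is in the face anchor's local space (metres),
        // so the model stays attached as the head moves.
        hairNode.position = SCNVector3(0, 0.1, 0)
        node.addChildNode(hairNode)
    }
}
```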
Andy Jazz
- Hello, I don't know if I need to recreate the same question, but what would the answer be on macOS (so without ARKit and without the TrueDepth camera)? Thanks – MattOZ Sep 20 '18 at 13:42
- I didn't try it on macOS, so you'd better recreate this question for macOS. – Andy Jazz Sep 20 '18 at 14:06