Code Structure Improvements


I'm having trouble with lifecycle and structuring my RealityKit-based app. I have this code:

```
/// ModelEntity+Extension.swift
import SwiftUI
import RealityKit
import RealityKitContent

extension Entity {
    class func createFingertip() async -> Entity {
        if let entity = try? await Entity(named: "Scene", in: realityKitContentBundle) {

            /* ModelEntity(
             mesh: .generateSphere(radius: 0.01),
             materials: [UnlitMaterial(color: .cyan)],
             collisionShape: .generateSphere(radius: 0.005),
             mass: 0.0
             )*/
            // Model3D(named: "Scene", bundle: realityKitContentBundle)
            /* {
             model in
             model.resizable()
             .scaledToFit()
             .rotation3DEffect(Rotation3D(
             eulerAngles: .init(angles: orientation, order: .xyz)
             ))
             }*/

            entity.components.set(PhysicsBodyComponent(mode: .kinematic))
            entity.components.set(OpacityComponent(opacity: 1.0))
            return entity
        } else {
            // Fallback, should never happen
            return ModelEntity(
                mesh: .generateSphere(radius: 0.01),
                materials: [UnlitMaterial(color: .cyan)],
                collisionShape: .generateSphere(radius: 0.005),
                mass: 0.0
            )
        }
    }
}
```

Does this have to be async?

Your code includes an asynchronous call to Entity(named:in:), which requires createFingertip to be async. The initializer is asynchronous because it performs work that takes time to complete, such as loading an entity from a file or a remote resource.

Avoiding async would require a synchronous loading method; since Entity(named:in:) is inherently asynchronous, the async keyword can't be avoided here. Here's an explanation and a refined version of your code for better structure:

  1. Ensure that createFingertip is properly used in an async context.
  2. Improve readability and structure.

Here's a refined version:

// ModelEntity+Extension.swift
import SwiftUI
import RealityKit
import RealityKitContent

extension Entity {
    class func createFingertip() async -> Entity {
        if let entity = try? await Entity(named: "Scene", in: realityKitContentBundle) {
            entity.components.set(PhysicsBodyComponent(mode: .kinematic))
            entity.components.set(OpacityComponent(opacity: 1.0))
            return entity
        } else {
            // Fallback, should never happen
            return createFallbackFingertip()
        }
    }
    
    private class func createFallbackFingertip() -> Entity {
        let modelEntity = ModelEntity(
            mesh: .generateSphere(radius: 0.01),
            materials: [UnlitMaterial(color: .cyan)]
        )
        modelEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
        modelEntity.components.set(PhysicsBodyComponent(mode: .kinematic))
        modelEntity.components.set(OpacityComponent(opacity: 1.0))
        return modelEntity
    }
}

Usage in SwiftUI Context

When using this async function in your SwiftUI context, ensure that you call it from within an asynchronous context, such as an async task:

struct ContentView: View {
    @State private var fingertipEntity: Entity?
    
    var body: some View {
        VStack {
            // Your UI code here
        }
        .onAppear {
            Task {
                fingertipEntity = await Entity.createFingertip()
            }
        }
    }
}

Explanation

  1. Async Function: The createFingertip method remains async due to the asynchronous call to Entity(named:in:).
  2. Fallback Method: A private helper method createFallbackFingertip is created to handle the fallback entity creation, improving readability.
  3. Collision and Physics Components: Set the collision and physics components properly in both the main method and the fallback method to ensure consistency.

Handling Async in SwiftUI

Ensure that the async method is called within a proper asynchronous context, like the .task modifier or within an @MainActor isolated method if needed:

struct ContentView: View {
    @State private var fingertipEntity: Entity?

    var body: some View {
        VStack {
            // Your RealityKit content here
        }
        .task {
            fingertipEntity = await Entity.createFingertip()
        }
    }
}
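
The .task form is shown above. For the @MainActor-isolated-method option mentioned alongside it, here is a minimal sketch; it assumes the Entity.createFingertip extension from earlier, and the FingertipLoaderView and loadFingertip names are illustrative, not from the original project:

import SwiftUI
import RealityKit

struct FingertipLoaderView: View {
    @State private var fingertipEntity: Entity?

    var body: some View {
        VStack {
            // Your RealityKit content here
        }
        .task { await loadFingertip() }
    }

    // Explicitly MainActor-isolated, so the @State mutation below is
    // guaranteed to run on the main actor.
    @MainActor
    private func loadFingertip() async {
        fingertipEntity = await Entity.createFingertip()
    }
}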

Debugging Lifecycle Issues

If you're encountering lifecycle issues, consider the following tips:

  1. Ensure Proper Task Execution: Verify that the async function calls are made at appropriate lifecycle events (e.g., .onAppear or .task).
  2. Error Handling: Add error handling to gracefully manage any issues during entity loading.
  3. Main Thread Updates: Ensure UI updates occur on the main thread if necessary. You can use DispatchQueue.main.async if you need to update the state from a background thread (see the sketch after this list).
  4. Async/Await Usage: Make sure all async calls are awaited correctly to prevent race conditions or incomplete initializations.
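
A minimal sketch of tip 3, assuming the slow work itself does not need RealityKit; the PlaceholderView name and computeFingertipRadius helper are illustrative, not from the original project:

import SwiftUI
import RealityKit

struct PlaceholderView: View {
    @State private var fingertipEntity: Entity?

    var body: some View {
        Text(fingertipEntity == nil ? "Loading…" : "Ready")
            .onAppear {
                // Do the expensive, non-UI work on a background queue...
                DispatchQueue.global(qos: .userInitiated).async {
                    let radius = computeFingertipRadius()
                    // ...then hop back to the main thread before building the
                    // entity and mutating view state.
                    DispatchQueue.main.async {
                        fingertipEntity = ModelEntity(
                            mesh: .generateSphere(radius: radius),
                            materials: [UnlitMaterial(color: .cyan)]
                        )
                    }
                }
            }
    }
}

// Illustrative stand-in for work that is slow enough to keep off the main thread.
private func computeFingertipRadius() -> Float {
    0.01
}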

Here’s an example incorporating these suggestions:

import SwiftUI
import RealityKit
import RealityKitContent

extension Entity {
    class func createFingertip() async -> Entity {
        do {
            let entity = try await Entity(named: "Scene", in: realityKitContentBundle)
            entity.components.set(PhysicsBodyComponent(mode: .kinematic))
            entity.components.set(OpacityComponent(opacity: 1.0))
            return entity
        } catch {
            // Handle errors if needed
            return createFallbackFingertip()
        }
    }
    
    private class func createFallbackFingertip() -> Entity {
        let modelEntity = ModelEntity(
            mesh: .generateSphere(radius: 0.01),
            materials: [UnlitMaterial(color: .cyan)]
        )
        modelEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
        modelEntity.components.set(PhysicsBodyComponent(mode: .kinematic))
        modelEntity.components.set(OpacityComponent(opacity: 1.0))
        return modelEntity
    }
}

I will show you more of my code; let's work on talking through a better way to structure it.

```
/// HandTrackingViewModel.swift
import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var contentEntity = Entity()
    private var meshEntities = [UUID: Entity]()
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:];
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await .createFingertip(),
            .right: await .createFingertip()
        ]
    }
    
    func setupContentEntity() -> Entity {
        for entity in fingerEntities.values {
            contentEntity.addChild(entity)
        }
        return contentEntity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print ("Failed to start session \(error)")
        }
    }
...
```

and

```
/// TrainerView.swift
import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()

    var body: some View {
        RealityView { content in
            content.add(model.setupContentEntity())
        }.task {
            // run ARKitSession
            await model.runSession()
        }.task {
            // process hand updates
            await model.processHandUpdates()
        }.task {
            // process our world reconstruction
            ...
```


My main issue here is that the models need to be loaded before `model.setupContentEntity()` runs.

To ensure that the models are loaded before `setupContentEntity` runs, we need to manage the sequence of asynchronous operations in the SwiftUI lifecycle. We'll use a combination of `@State` properties and lifecycle modifiers to control when each step occurs. Here's a structured approach:

1. **Initialize Fingertips Before Setup**: Ensure that fingertips are initialized before setting up the content entity.
2. **Sequential Async Tasks**: Use a single `.task` modifier to sequentially run asynchronous operations.

Here's the updated `HandTrackingViewModel` and `TrainerView`:

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var contentEntity = Entity()
    private var meshEntities = [UUID: Entity]()
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() {
        for entity in fingerEntities.values {
            contentEntity.addChild(entity)
        }
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var isContentEntitySetup = false
    
    var body: some View {
        RealityView { content in
            if isContentEntitySetup {
                content.add(model.setupContentEntity())
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            model.setupContentEntity()
            isContentEntitySetup = true
            
            // Run ARKit session
            await model.runSession()
            
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Key Changes:

  1. State Management: Added a @State property isContentEntitySetup to control when the content entity is added to the RealityView.
  2. Sequential Async Tasks: Combined asynchronous tasks into a single .task modifier to ensure proper sequencing:
    • First, initialize the fingertips.
    • Then, set up the content entity.
    • Finally, run the session and process hand updates.

This approach ensures that setupContentEntity is called only after the fingertips are initialized, resolving the lifecycle issue and structuring your RealityKit-based app effectively.

This won't work. Look at the signature of setupContentEntity. We take its output and add it to the content.

Thank you for pointing that out. Let's revise the approach so that setupContentEntity returns the contentEntity after the fingertips have been initialized, and modify TrainerView to wait for that initialization before setting up the content entity.

Here’s the revised code:

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var contentEntity = Entity()
    private var meshEntities = [UUID: Entity]()
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() -> Entity {
        for entity in fingerEntities.values {
            contentEntity.addChild(entity)
        }
        return contentEntity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var contentEntity: Entity?

    var body: some View {
        RealityView { content in
            if let entity = contentEntity {
                content.add(entity)
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            contentEntity = model.setupContentEntity()
            
            // Run ARKit session
            await model.runSession()
            
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Key Changes:

  1. State Management: Added a @State property contentEntity to store the setup content entity.
  2. Single Task Modifier: Combined asynchronous tasks into a single .task modifier to ensure proper sequencing:
    • First, initialize the fingertips.
    • Then, set up the content entity and assign it to contentEntity.
    • Finally, run the session and process hand updates.

By structuring the code this way, the RealityView will only attempt to add the content entity after it has been fully initialized and set up, ensuring proper lifecycle management.

When or how does the content in the RealityView get populated with contentEntity?

The RealityView's content gets populated with the contentEntity when content.add(entity) is called within the RealityView closure. To ensure this happens correctly, we need to wait for the asynchronous initialization of contentEntity to complete before attempting to add it to the RealityView.

Here’s how we can ensure that the contentEntity is correctly added to the RealityView:

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var contentEntity: Entity?

    var body: some View {
        RealityView { content in
            if let entity = contentEntity {
                content.add(entity)
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            contentEntity = model.setupContentEntity()
            
            // Run ARKit session
            await model.runSession()
            
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Explanation

  1. RealityView Content Population: The RealityView has a closure that runs whenever the view is updated. The if let entity = contentEntity check ensures that contentEntity is only added to the RealityView when it is not nil, i.e., after it has been fully initialized and set up.
  2. Sequential Task Execution: Using a single .task modifier ensures that the asynchronous steps are executed sequentially:
    • First, initialize the fingertips asynchronously.
    • Then, set up the contentEntity using the initialized fingertips.
    • Finally, run the ARKit session and process hand updates.

By structuring it this way, the RealityView will only attempt to add the contentEntity after it has been initialized and set up, ensuring proper lifecycle management and avoiding premature access to an uninitialized entity.

What will trigger the view to be updated? Shouldn't we guard against calling content.add more than once?

To ensure the view is updated correctly and to avoid calling content.add more than once, we can use a state variable to track whether the contentEntity has already been added. This will prevent the content.add call from being triggered multiple times.

We can achieve this by adding a flag to track whether the contentEntity has been added, and making sure content.add is only called once.

Here’s the updated code:

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var contentEntity: Entity?
    @State private var isContentEntityAdded = false

    var body: some View {
        RealityView { content in
            if let entity = contentEntity, !isContentEntityAdded {
                content.add(entity)
                isContentEntityAdded = true
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            contentEntity = model.setupContentEntity()
            
            // Run ARKit session
            await model.runSession()
            
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Explanation

  1. State Variable isContentEntityAdded: Added a state variable isContentEntityAdded to track if the contentEntity has been added to the RealityView's content.
  2. Guard Against Multiple Adds: The check if let entity = contentEntity, !isContentEntityAdded ensures that content.add(entity) is only called once. After the entity is added, isContentEntityAdded is set to true, preventing any further calls to content.add(entity).

How the View Update is Triggered

The view will be updated when the state variables contentEntity or isContentEntityAdded change. When contentEntity is set, it triggers a re-render of the view, and the RealityView closure is executed. The guard clause ensures content.add(entity) is only called the first time contentEntity is non-nil.
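
For reference, RealityView on visionOS also accepts a separate update: closure that re-runs whenever observed view state changes, while the trailing make closure runs only once. A minimal sketch of that two-closure form (TwoClosureExample and pendingEntity are illustrative names, not from the original project):

import SwiftUI
import RealityKit

struct TwoClosureExample: View {
    @State private var pendingEntity: Entity?

    var body: some View {
        RealityView { content in
            // The make closure runs once, when the view is created;
            // pendingEntity is usually still nil at that point.
        } update: { content in
            // The update closure re-runs whenever observed state changes,
            // so this is where a late-arriving entity can be added.
            if let entity = pendingEntity, entity.scene == nil {
                content.add(entity)
            }
        }
        .task {
            // Stand-in for an asynchronous load finishing later.
            pendingEntity = ModelEntity(mesh: .generateSphere(radius: 0.01),
                                        materials: [UnlitMaterial(color: .cyan)])
        }
    }
}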

This approach ensures that contentEntity is added only once, preventing multiple additions and managing the lifecycle correctly.

I just ran on device. That view update is never triggered. I put in a print statement; that block of code in RealityView only runs once.

To address this issue, we need to ensure that the RealityView properly responds to changes in state. One approach is to use an explicit binding for the contentEntity within the RealityView, so that the view re-renders when contentEntity is updated.

Here’s a revised approach:

  1. Use a custom ObservableObject for managing the state of the content entity.
  2. Explicitly notify the RealityView of changes using @Published.

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    @Published var contentEntity: Entity?
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() {
        let entity = Entity()
        for fingertip in fingerEntities.values {
            entity.addChild(fingertip)
        }
        contentEntity = entity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()

    var body: some View {
        RealityView { content in
            if let entity = model.contentEntity {
                content.add(entity)
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            model.setupContentEntity()
            
            // Run ARKit session
            await model.runSession()
            
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Explanation

  1. ObservableObject and @Published: The contentEntity is now a @Published property of the HandTrackingViewModel. This ensures that any changes to contentEntity will trigger a view update.
  2. RealityView Binding: The RealityView closure uses the model.contentEntity directly, which will be observed for changes.

Ensuring View Updates

  • By marking contentEntity as @Published, any change to this property will trigger a view update in SwiftUI.
  • The RealityView will automatically respond to changes in model.contentEntity and re-run its closure, ensuring that the entity is added when it's available.

This should resolve the issue of the view not updating when contentEntity is set.

The if let entity = model.contentEntity { content.add(entity) } block only runs once. It skips adding the content, as model.contentEntity is nil.

To ensure that the view updates correctly when model.contentEntity is set, we'll use a more explicit state management approach within SwiftUI, leveraging an @State property to track the loading state and ensure the view updates accordingly.

Here's how we can modify the code:

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    @Published var contentEntity: Entity?
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() {
        let entity = Entity()
        for fingertip in fingerEntities.values {
            entity.addChild(fingertip)
        }
        contentEntity = entity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

TrainerView.swift

import SwiftUI
import RealityKit
import RealityKitContent

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var isContentEntityAdded = false

    var body: some View {
        RealityView { content in
            if let entity = model.contentEntity, !isContentEntityAdded {
                content.add(entity)
                isContentEntityAdded = true
            }
        }
        .task {
            // Initialize fingertips
            await model.initFingertips()
            
            // Setup content entity after fingertips are initialized
            model.setupContentEntity()
        }
        .task {
            // Run ARKit session
            await model.runSession()
        }
        .task {
            // Process hand updates
            await model.processHandUpdates()
        }
    }
}

Explanation

  1. Separate State for Content Entity Addition: The @State variable isContentEntityAdded is used to ensure that content.add(entity) is only called once.
  2. Ensure Sequential Initialization: Separate .task modifiers ensure that each step of the process (initializing fingertips, setting up the content entity, running the ARKit session, and processing hand updates) is handled in sequence.
  3. StateObject Observation: By observing model.contentEntity, the RealityView is updated whenever contentEntity is set. The check for isContentEntityAdded ensures that the entity is added to the content only once.

Potential Adjustments

If the view still doesn’t update as expected, you can use a more granular approach to force a view update when contentEntity is set. One way to do this is by using Combine’s objectWillChange publisher manually to force a view update:

import Combine

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    @Published var contentEntity: Entity?
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    private var lastCubePlacementTime: TimeInterval = 0
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() {
        let entity = Entity()
        for fingertip in fingerEntities.values {
            entity.addChild(fingertip)
        }
        self.contentEntity = entity
        self.objectWillChange.send()  // Force a view update
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

By calling self.objectWillChange.send() when setting contentEntity, you explicitly trigger a view update when contentEntity is set. This should ensure the RealityView updates accordingly.

None of this is working. Let's try another approach. In TrainerView... in a .task, is there any way I can access and modify the content variable that is available in the RealityView block?

In SwiftUI, you can't directly modify a variable that is local to a view's closure from outside that closure (e.g., from a .task modifier), because the closure creates its own local scope. However, we can achieve the desired behavior by creating a custom UIViewRepresentable that wraps the RealityKit content and provides a way to update it from external state.

Here’s how we can achieve this:

  1. Create a UIViewRepresentable wrapper for RealityView.
  2. Use a coordinator to manage the content updates.

RealityKitView.swift

import SwiftUI
import RealityKit

struct RealityKitView: UIViewRepresentable {
    @Binding var contentEntity: Entity?

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        context.coordinator.arView = arView
        if let contentEntity = contentEntity {
            arView.scene.addAnchor(contentEntity)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        if let contentEntity = contentEntity, !context.coordinator.isContentAdded {
            uiView.scene.anchors.append(contentEntity)
            context.coordinator.isContentAdded = true
        }
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator {
        var arView: ARView?
        var isContentAdded = false
    }
}

TrainerView.swift

import SwiftUI
import RealityKit

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var contentEntity: Entity?

    var body: some View {
        RealityKitView(contentEntity: $contentEntity)
            .task {
                // Initialize fingertips
                await model.initFingertips()
                
                // Setup content entity after fingertips are initialized
                contentEntity = model.setupContentEntity()
                
                // Run ARKit session
                await model.runSession()
                
                // Process hand updates
                await model.processHandUpdates()
            }
    }
}

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() -> Entity {
        let entity = Entity()
        for fingertip in fingerEntities.values {
            entity.addChild(fingertip)
        }
        return entity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

Explanation

  1. RealityKitView: A UIViewRepresentable that wraps an ARView and binds a contentEntity.
  2. Coordinator: Manages the ARView and ensures content is added only once.
  3. TrainerView: Uses the RealityKitView with a binding to contentEntity, ensuring the RealityKitView updates when contentEntity is set.
  4. ViewModel: Handles asynchronous initialization and setup, with methods for initializing fingertips, setting up the content entity, running the session, and processing hand updates.

This approach ensures the RealityKit content is managed and updated properly, allowing the contentEntity to be added to the RealityKitView when it is ready.

A different approach: can I start with a simple synchronous ModelEntity and then "upgrade" to a model loaded async later?

Yes, you can start with a simple synchronous ModelEntity and later replace it with a model loaded asynchronously. This approach involves initializing the scene with a basic entity and then updating it once the asynchronous loading completes. Here's how you can structure your code to achieve this:

HandTrackingViewModel.swift

import RealityKit
import SwiftUI
import ARKit
import RealityKitContent

@MainActor class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()
    @Published var contentEntity: Entity?
    @Published var basicEntity: ModelEntity?
    
    private var fingerEntities: [HandAnchor.Chirality: Entity] = [:]
    
    init() {
        setupBasicEntity()
    }
    
    func setupBasicEntity() {
        let basicEntity = ModelEntity(
            mesh: .generateSphere(radius: 0.01),
            materials: [UnlitMaterial(color: .cyan)]
        )
        basicEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
        basicEntity.components.set(PhysicsBodyComponent(mode: .kinematic))
        self.basicEntity = basicEntity
    }
    
    func initFingertips() async {
        fingerEntities = [
            .left: await Entity.createFingertip(),
            .right: await Entity.createFingertip()
        ]
    }
    
    func setupContentEntity() {
        let entity = Entity()
        for fingertip in fingerEntities.values {
            entity.addChild(fingertip)
        }
        contentEntity = entity
    }
    
    func runSession() async {
        do {
            try await session.run([sceneReconstruction, handTracking])
        } catch {
            print("Failed to start session \(error)")
        }
    }

    func processHandUpdates() async {
        // Add your hand update processing logic here
    }
}

RealityKitView.swift

import SwiftUI
import RealityKit

struct RealityKitView: UIViewRepresentable {
    @Binding var contentEntity: Entity?
    @Binding var basicEntity: ModelEntity?

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        context.coordinator.arView = arView
        if let basicEntity = basicEntity {
            arView.scene.anchors.append(basicEntity)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        if let entity = contentEntity, !context.coordinator.isContentAdded {
            uiView.scene.anchors.removeAll { $0 === context.coordinator.basicEntity }
            uiView.scene.anchors.append(entity)
            context.coordinator.isContentAdded = true
        }
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator {
        var arView: ARView?
        var isContentAdded = false
        var basicEntity: Entity?
    }
}

TrainerView.swift

import SwiftUI
import RealityKit

struct TrainerView: View {
    @StateObject var model = HandTrackingViewModel()
    @State private var contentEntity: Entity?
    @State private var basicEntity: ModelEntity?

    var body: some View {
        RealityKitView(contentEntity: $model.contentEntity, basicEntity: $model.basicEntity)
            .task {
                // Initialize fingertips
                await model.initFingertips()
                
                // Setup content entity after fingertips are initialized
                model.setupContentEntity()
                
                // Run ARKit session
                await model.runSession()
                
                // Process hand updates
                await model.processHandUpdates()
            }
    }
}

Explanation

  1. Initial Basic Entity: The HandTrackingViewModel initializes a basic synchronous ModelEntity in its initializer. This entity is used as a placeholder until the asynchronous model is loaded.
  2. Binding to RealityKitView: RealityKitView takes contentEntity and basicEntity as bindings, allowing it to react to changes in these properties.
  3. Updating the ARView: In the updateUIView method of RealityKitView, the basic entity is replaced with the content entity once it is available.
  4. State Management: The TrainerView ensures the model is properly updated and bound to the RealityKitView.

This structure ensures that a simple model is displayed immediately, and later "upgraded" to the asynchronously loaded model, providing a smooth transition and better user experience.
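
If you prefer to stay with RealityView rather than a UIViewRepresentable, the same placeholder-then-upgrade idea can be expressed by keeping a reference to a root entity: add the synchronous placeholder in the make closure, then swap children once the asynchronous load finishes. A minimal sketch under those assumptions (UpgradingFingertipView and the entity names are illustrative, not from the original project):

import SwiftUI
import RealityKit
import RealityKitContent

struct UpgradingFingertipView: View {
    // A stable root we can keep mutating after the RealityView is built.
    @State private var root = Entity()

    var body: some View {
        RealityView { content in
            // Runs once: show the cheap synchronous placeholder immediately.
            let placeholder = ModelEntity(
                mesh: .generateSphere(radius: 0.01),
                materials: [UnlitMaterial(color: .cyan)]
            )
            placeholder.name = "placeholder"
            root.addChild(placeholder)
            content.add(root)
        }
        .task {
            // When the richer asset finishes loading, swap it in.
            if let loaded = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                root.findEntity(named: "placeholder")?.removeFromParent()
                root.addChild(loaded)
            }
        }
    }
}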
