The new Apple Vision Pro headset introduces a captivating feature that transforms your appearance into a lifelike digital avatar during FaceTime calls. Demonstrated at WWDC 2023, the headset employs a “neural network” to scan your face, creating a hyperrealistic virtual representation that appears in video conversations.
Apple Vision Pro’s neural network will make your digital avatar lifelike
Unlike the more cartoonish avatars available in apps like Microsoft Teams and Meta’s Horizon Worlds, Apple strives to generate a virtual rendition of your true likeness. In the showcased video, a user holds the Vision Pro headset in front of their face, allowing the device to scan their features using an advanced encoder-decoder neural network. Apple emphasizes that this network has been trained on a diverse range of thousands of individuals.
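Apple hasn’t published details of its model, but the encoder-decoder idea it names is a standard one: an encoder compresses a high-dimensional input (here, a face scan) into a compact latent code, and a decoder expands that code back into an output (here, the avatar). As a rough, purely illustrative sketch — the shapes, weights, and functions below are invented for the example, not Apple’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w_enc):
    # Compress the high-dimensional "scan" into a small latent code.
    return np.tanh(x @ w_enc)

def decoder(z, w_dec):
    # Expand the latent code back into the output ("avatar") space.
    return z @ w_dec

scan = rng.normal(size=(1, 64))         # stand-in for scanned face features
w_enc = rng.normal(size=(64, 8)) * 0.1  # 64 -> 8 dimensions: compression
w_dec = rng.normal(size=(8, 64)) * 0.1  # 8 -> 64 dimensions: reconstruction

latent = encoder(scan, w_enc)
avatar = decoder(latent, w_dec)
print(latent.shape, avatar.shape)  # (1, 8) (1, 64)
```

In a real system the weights would be trained (Apple says on scans of thousands of people) so that the latent code captures identity and expression well enough to reconstruct a convincing face.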
Once scanned, the Apple Vision Pro headset generates your digital persona, which can accurately track your facial expressions and hand movements during FaceTime interactions. While the video alone doesn’t provide a comprehensive assessment of the feature’s performance, the persona appears remarkably realistic, although it seems to lack a detailed hair texture.
This functionality solves a problem unique to the headset: unlike FaceTime on an iPhone or Mac, where a camera faces you directly, the Vision Pro covers your face, so no camera can capture it. However, the feature’s performance in real-world scenarios and the limitations of face scanning (e.g., changes in appearance such as facial hair or glasses) remain open questions.
Powering the Apple Vision Pro headset is Apple’s new mixed reality operating system, visionOS. Set to be released early next year, the headset will be priced at $3,499. Excitement surrounds the launch, as users eagerly anticipate experiencing the immersive capabilities of the Vision Pro and exploring its diverse features beyond the showcased scenarios.
via The Verge