Revolutionary AI Breakthrough: Apple’s Next-Gen Innovations Set to Transform Your iPhone Experience - Insights From the Latest Research
Apple is taking a deep dive into artificial intelligence technology, according to two recently published research papers showcasing the company’s work. The research shows Apple is developing on-device AI tech, including a groundbreaking method for creating animatable avatars and a novel way to run large language models on an iPhone or iPad.
Also: Do companies have ethical guidelines for AI use? 56% of professionals are unsure, survey says
Aptly named “LLM in a Flash,” Apple’s research into running LLMs efficiently on devices with limited memory describes a technique that could let complex AI applications run smoothly on an iPhone or iPad. It could also enable an on-device, generative-AI-powered Siri that assists with various tasks, generates text, and processes natural language more capably.
The second paper covers HUGS, short for Human Gaussian Splats, a neural rendering framework that creates fully animatable avatars from short video clips captured on an iPhone. Training takes as little as 30 minutes and needs only a few seconds of video to produce a detailed avatar that users can animate however they’d like.
What this means for the iPhone and Vision Pro
There have been reports that Apple is working on its own AI chatbot, used internally and called ‘Apple GPT.’ The new research shows the company is making strides in running LLMs on smaller, less powerful devices like the iPhone by leveraging flash memory. That could put sophisticated generative AI tools on the device itself and could mean a generative-AI-powered Siri.
Also: Microsoft Copilot can write songs for you now. Here’s how to try it
Beyond Siri’s much-needed improvement, having an efficient LLM inference strategy like the one described in LLM in a Flash could lead to more accessible generative AI tools, significant advancements in mobile technology, and improved performance in a wide range of applications on everyday devices.
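To make that concrete, here is a minimal, hypothetical Swift sketch of the general idea: keep the model’s weights in a file on flash storage and memory-map only the slice needed for the current computation, instead of holding the whole model in RAM. The type and function names are illustrative, and Apple’s actual technique goes further (for example, predicting which parameters will be needed and reusing recently loaded ones), which this sketch does not capture.

```swift
import Foundation

// Hypothetical sketch: weights live in a file on flash storage and are memory-mapped,
// so bytes are pulled from flash only when they are actually read.
struct FlashWeightStore {
    let weightsURL: URL

    // Load `count` Float32 parameters starting at `byteOffset` (e.g. one layer's weights).
    func loadSlice(byteOffset: Int, count: Int) throws -> [Float] {
        // .alwaysMapped asks the OS to map the file rather than copy it all into RAM.
        let data = try Data(contentsOf: weightsURL, options: .alwaysMapped)
        let bytes = data.subdata(in: byteOffset..<(byteOffset + count * MemoryLayout<Float>.size))
        return bytes.withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
    }
}

// Usage: during inference, fetch only the parameters the current layer needs.
// let store = FlashWeightStore(weightsURL: URL(fileURLWithPath: "model-weights.bin"))
// let layerWeights = try store.loadSlice(byteOffset: 0, count: 4_096)
```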
Arguably the bigger advancement of the two, HUGS is a method that can create malleable digital avatars from just a few seconds of monocular video, roughly 50 to 100 frames. These avatars can be animated and placed in different scenes, because the method uses a disentangled representation of the human and the scene.
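For a sense of what a “splat” is, here is a small, hypothetical Swift sketch of the kind of data a Gaussian-splat avatar carries and why keeping the human separate from the scene makes animation straightforward. The field and function names are illustrative and are not taken from the HUGS paper.

```swift
import simd

// Illustrative only: each splat is a 3D Gaussian with appearance attributes.
struct GaussianSplat {
    var position: SIMD3<Float>   // center of the Gaussian in 3D space
    var scale: SIMD3<Float>      // per-axis extent
    var rotation: simd_quatf     // orientation
    var color: SIMD3<Float>      // RGB appearance
    var opacity: Float           // blending weight when splats are rasterized
}

// Because the human splats are stored separately from the scene splats, animating the
// avatar means transforming only the human splats (here, by a single joint pose) and
// re-rendering them over any background.
func pose(_ avatar: [GaussianSplat], with jointTransform: simd_float4x4) -> [GaussianSplat] {
    avatar.map { splat in
        var posed = splat
        let p = jointTransform * SIMD4<Float>(splat.position.x, splat.position.y, splat.position.z, 1)
        posed.position = SIMD3<Float>(p.x, p.y, p.z)
        return posed
    }
}
```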
HUGS lets users create avatars of themselves that can be animated and placed in a scene; Apple’s example shows three avatars animated in sync. (Image: Apple)
According to Apple, HUGS outperforms competitors at animating human avatars with rendering speeds 100 times faster than previous methods and with a significantly shorter training time of only 30 minutes.
Creating an avatar by leveraging the iPhone’s camera and processing power could deliver a new level of personalization and realism for iPhone users in social media, gaming, educational, and augmented reality (AR) applications.
HUGS could seriously reduce the creep factor of the Apple Vision Pro’s Digital Persona, showcased at the company’s Worldwide Developers Conference (WWDC) last June. Vision Pro users could wield the power of HUGS to create a highly realistic avatar that renders at 60 frames per second and moves fluidly.
Also: Apple’s Vision Pro may launch in February - with its most sophisticated buying process yet
The speed of HUGS would also allow for real-time rendering, which can be crucial for a smooth AR experience and could enhance social, gaming, and professional applications with realistic, user-controlled avatars.
Apple tends to shy away from buzzwords like ‘AI’ when describing its product features, preferring to talk about machine learning instead. These research papers, however, suggest a deeper involvement in new AI tech. Still, Apple hasn’t publicly acknowledged building generative AI into its products and has yet to officially confirm its work on Apple GPT.