There’s a lot of excitement about the back-to-back releases of ARKit (from Apple) and ARCore (from Google).
They’re very similar in capability (despite separate origins — ARKit grew out of Apple’s Metaio and FlyBy acquisitions, ARCore out of Google’s Tango project), and they both provide the same basic information to an app — the camera’s position in the physical world, a sparse point cloud, and a set of bounded planes.
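To make that concrete, here is a rough sketch of the per-frame data both SDKs expose, using illustrative Python types. The names here are my own stand-ins, not the real classes (ARKit's are `ARFrame` and `ARPlaneAnchor`; ARCore's are `Frame` and `Plane`):

```python
from dataclasses import dataclass

# Illustrative stand-ins for what ARKit/ARCore report each frame.
# All class and field names below are hypothetical.

@dataclass
class CameraPose:
    position: tuple   # (x, y, z) in world space, metres
    rotation: tuple   # orientation as a quaternion (x, y, z, w)

@dataclass
class DetectedPlane:
    center: tuple     # plane centre in world space
    extent: tuple     # (width, length) of the bounded region
    alignment: str    # "horizontal" or "vertical"

@dataclass
class ARFrameData:
    camera: CameraPose
    feature_points: list   # sparse point cloud: [(x, y, z), ...]
    planes: list           # currently tracked DetectedPlane instances

# A single frame: phone held ~1.4 m up, looking at a tabletop plane.
frame = ARFrameData(
    camera=CameraPose((0.0, 1.4, 0.0), (0.0, 0.0, 0.0, 1.0)),
    feature_points=[(0.1, 0.0, -0.5), (0.2, 0.01, -0.6)],
    planes=[DetectedPlane((0.0, 0.0, -0.5), (1.2, 0.8), "horizontal")],
)
```

Note what is absent: no dense depth map, no mesh of the room, no link to real-world coordinates — which is where the limitations below come from.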
That’s sufficient to put virtual objects and animated characters on tabletops, or on the floor, or even sticking out from a wall. Very cool… but also very limited. Here’s a list of the shortcomings of the toolkits:
- No occlusion. Since there’s no depth camera, neither ARKit nor ARCore can provide a true 3D model of the space you’re in. That means that (unless a developer is very, very clever) the 3D models that are displayed will always be rendered in front of the real-world scene. You might have a dancing bear on the tabletop, but it won’t be walking around behind the salt and pepper shakers because those objects don’t exist in the app.
- No stereo depth. Since ARKit and ARCore run only on specific phones, and not in any kind of headset with separate displays for each eye, the images that are added to the world will always be flat. They won’t have the depth or realism that you would get from stereoscopic 3D.
- No connection to the real world. All the toolkits can tell an app is where the points and planes are in the user’s immediate vicinity. There’s no way of connecting that to specific objects in the real world, so there won’t be any augmented street signs or new facades on buildings or anything like that. There are ways of solving this (e.g. fiducial markers), but neither toolkit supports them. Theoretically you might be able to send the point data and GPS coordinates to the cloud to figure out what the user is looking at, but at the moment no such capability exists.
- Without any connection to specific points in the real world, multi-user AR experiences are difficult or impossible to implement: two devices have no way to establish a shared coordinate frame. So everything will be single-user only.
- Holding up your cellphone all the time is really uncomfortable, and the novelty will wear off as soon as your arm gets tired.
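The occlusion point in the first bullet is easy to see in miniature. With a per-pixel depth map of the real scene, a renderer can hide the parts of a virtual object that fall behind real geometry; without one — the ARKit/ARCore case — the virtual object is always composited on top. A toy sketch (not from either SDK):

```python
# Toy per-pixel compositing: should a virtual object at depth
# `virtual_depth` be drawn over the real scene at each pixel?
# With real depth data we can occlude correctly; with no depth
# data (the ARKit/ARCore case) the virtual object always wins.

def composite(real_depth_map, virtual_depth, virtual_mask):
    """Return True at pixels where the virtual object is drawn."""
    visible = []
    for real_d, covered in zip(real_depth_map, virtual_mask):
        if not covered:
            visible.append(False)        # object absent at this pixel
        elif real_d is None:
            visible.append(True)         # no depth data: always in front
        else:
            visible.append(virtual_depth < real_d)  # proper depth test
    return visible

# A salt shaker 0.4 m away covers the middle pixel; the bear is 0.6 m away.
with_depth    = composite([1.0, 0.4, 1.0], 0.6, [True, True, True])
without_depth = composite([None, None, None], 0.6, [True, True, True])

print(with_depth)     # [True, False, True] - bear hidden behind the shaker
print(without_depth)  # [True, True, True]  - bear drawn over everything
```

The second result is exactly the dancing-bear-in-front-of-the-salt-shaker artifact described above.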
Given those limitations, I suspect developers will have a hard time coming up with actual applications for ARKit and ARCore beyond some cool demos of scary clowns and dancing mice.
Devices like the HoloLens and the Meta 2 glasses solve some of the problems listed above, but they’re extremely bulky and insanely expensive.
Impressive though ARKit and ARCore are as technical achievements, they’re only a small step in bringing AR to the consumer market.
AR is definitely coming, but it’s not here yet.