Apple has launched its new mixed reality device, called Apple Vision Pro, introducing a new paradigm they describe as spatial computing. With this, the company aims to distance itself slightly from the previous buzzword — the metaverse, championed by Meta, which many now consider a failure. Yet, both are pieces of the same puzzle.
Neither concept is truly new, but they’ve become far more relevant as technological capabilities have advanced to the point where these ideas no longer seem far-fetched.
Spatial computing refers to using the three dimensions of physical space as the medium for interacting with a device and visualizing its output. In other words, it lets us interact with machines in a far more intuitive way, much as we already interact with the physical world around us. We are talking, then, about creating new human–machine interaction methods, and therefore the focus is on the device itself.
The metaverse, on the other hand, relies on spatial computing — it’s the necessary foundation that makes it possible. Spatial computing is the means of accessing the metaverse.
And although the term has been defined in many ways, breaking down the word itself makes it simpler to understand. It combines meta, meaning "beyond" or "about itself," with verse, from "universe." A metaverse, then, is a universe layered beyond our own: a universe of universes. Think of the movie Inception (2010) with Leonardo DiCaprio, where dreams are nested within dreams. Here, we create alternate realities layered on top of one another.
Examples include augmented reality (AR) and virtual reality (VR). While AR overlays digital elements onto our physical world, making them appear as though they exist around us, VR creates a completely new world. AR might let you see how a new sofa would look in your living room; VR can take you on a trip to another country without leaving your home.
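To make the AR half of that distinction concrete, here is a minimal sketch of how the "sofa in your living room" example could work with Apple's RealityKit and SwiftUI. It is my own illustration, not something from Apple's materials: the names `SofaPreviewView` and `ARSofaContainer` and the box dimensions are assumptions, and a real app would load a scanned sofa model instead of a grey box.

```swift
import SwiftUI
import RealityKit

// Sketch only: a placeholder box stands in for a sofa model and is anchored to
// the first horizontal plane ARKit detects (e.g. the living-room floor).
struct SofaPreviewView: View {
    var body: some View {
        ARSofaContainer().ignoresSafeArea()
    }
}

struct ARSofaContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Anchor digital content to a real-world horizontal surface.
        let anchor = AnchorEntity(plane: .horizontal)

        // Hypothetical placeholder: a grey box with roughly sofa-like
        // dimensions (width, height, depth in metres).
        let sofa = ModelEntity(
            mesh: .generateBox(size: SIMD3<Float>(1.8, 0.8, 0.9)),
            materials: [SimpleMaterial(color: .gray, isMetallic: false)]
        )
        anchor.addChild(sofa)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
```

VR would do the opposite: instead of anchoring content to real surfaces, it would replace the camera view entirely with a rendered scene.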
But where things become truly interesting is when the experience ceases to be individual and becomes social — when two or more people can interact within the same digital environment. In that case, the focus shifts from the device to the experience.
There’s no doubt that these two paradigms — along with artificial intelligence (one of the most promising and fastest-growing technologies, though I won’t delve into it here) — will play a major role in the not-so-distant future we’re already working toward at Sosadiaz.
The Apple Vision Pro: Innovation or Overkill?
The Apple Vision Pro is undeniably an impressive device. It brings important aesthetic design innovations, and its technical specifications surpass competitors in many ways: more pixels than a 4K TV for each eye, 12 cameras, 5 sensors, and 6 microphones. It introduces a new feature called EyeSight, which displays the wearer's eyes externally to "humanize" interactions. And while I haven't tested it firsthand, Apple also promises more natural interaction through voice commands, hand gestures, and eye tracking.
And perhaps one of its biggest contributions is Apple’s validation of this future — a major signal from one of the world’s technology and marketing giants.
However, it’s important to recognize a crucial point: innovation for its own sake does not exist. Using trendy language or adopting new technologies just to say you’re using them isn’t true innovation. Companies that do this aren’t leveraging technology for real benefit — either for themselves or for their customers — but merely using it as a marketing gimmick (which, in the online world, might as well be called clickbait).
Technology is only as valuable as the problems it solves. And therefore, more isn’t always better.
Take photography, for example. For the average person, we've reached a point where having more cameras on a phone isn't just unnecessary; it's detrimental. More cameras consume more energy, take up physical space, and produce larger files. As a result, phones drain their batteries faster and fill up their storage sooner, all for minimal perceptible improvement in photo quality. Most platforms, like Instagram and Facebook, compress and optimize images anyway, negating those gains. For the average user, more can mean less. Of course, for a professional photographer, that equation changes completely.
This analogy helps illustrate why Apple's Vision Pro and Meta's metaverse headsets (Quest 2, Quest Pro, and now Quest 3) fall short as visions of the future: they're solving the wrong problems.
From a future-oriented standpoint, Apple Vision Pro doesn’t add much beyond what Meta proposed years ago. Its only major differentiator is the integration with Apple’s ecosystem, allowing for more intuitive and seamless interaction across devices.
But the vision behind both Apple and Meta is essentially the same: translating our flat-screen world into headsets. Their promotional videos show us working across multiple monitors, watching giant floating screens, and making Zoom-like calls. But do these benefits really justify the product? My answer: no.
These devices still aren't comfortable for extended use, may cause eye strain, are expensive ($3,500 in Apple's case), have limited battery life, and, socially, they remain awkward and isolating (even with EyeSight; nice try, Apple). More practical, affordable alternatives already exist, like Nreal glasses, which act as lightweight external displays, or simply a good-quality monitor at your desk.
The issue isn’t the technology itself — it’s the solutions used to justify it.
Rethinking the Problem
Spatial computing is here to stay, as are new digital universes. But we must ask: how can we truly harness their power? And in what contexts are they the best solution?
Some examples to consider:
- Why replicate two-dimensional experiences in three dimensions? Why rely on screens at all?
- Why not aim for something closer to holograms?
- Why replicate interfaces designed for a mouse or touchscreen? (Adding shadows doesn’t change much.)
- Why have a program icon when you could grab a 3D notebook in the environment and dictate into it, or paint directly on a digital canvas?
We need to rethink user experience and interaction models in these environments.
We have the power to create infinite worlds — so why rebuild the one we already know?
The Real Opportunity
There’s an incredible opportunity to design immersive experiences, with gaming as the most natural entry point — something neither Apple nor Meta has made central. But the potential goes far beyond gaming:
- Education could be transformed — imagine students experiencing different moments in history.
- Professional training could reach new levels through realistic simulations.
- Virtual factory tours could become common practice.
These are situational, short-term experiences — and that’s an advantage, as users wouldn’t be exposed to prolonged headset use.
Our brains naturally organize memories by context — for instance, recalling recipes when we’re in the kitchen. Being able to visualize information in the right place and time is invaluable.
Augmented reality makes this possible with digital information, and devices like those from Apple and Meta make it feel like magic.
For businesses, this could mean superpowered employees — for example, warehouse workers viewing package data just by looking at it, or production engineers seeing real-time telemetry just by glancing at machinery.
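As a rough illustration of the warehouse idea, and purely my own sketch rather than anything Apple or Meta ship, ARKit's image tracking can recognize a known package label and pin a floating text panel above it. The "AR Resources" group name, the `WarehouseARViewController` class, and the `packageInfo(for:)` lookup are all hypothetical; a real deployment would query a warehouse-management backend instead of returning a hard-coded string.

```swift
import UIKit
import SceneKit
import ARKit

// Sketch only: when ARKit recognizes a reference image (a package label),
// a small text node with that package's data is pinned just above it.
final class WarehouseARViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Reference images live in an asset catalog group ("AR Resources" here).
        let config = ARImageTrackingConfiguration()
        config.trackingImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: .main) ?? []
        sceneView.session.run(config)
    }

    // Called once per recognized label; attach the overlay to its anchor node.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        let text = SCNText(string: packageInfo(for: imageAnchor.referenceImage.name ?? ""),
                           extrusionDepth: 0.5)
        text.font = UIFont.systemFont(ofSize: 10)
        let textNode = SCNNode(geometry: text)
        textNode.scale = SCNVector3(0.002, 0.002, 0.002) // shrink to real-world size
        textNode.position = SCNVector3(0, 0.05, 0)       // float just above the label
        node.addChildNode(textNode)
    }

    // Hypothetical lookup; a real system would hit a warehouse-management API.
    private func packageInfo(for label: String) -> String {
        "Order #4821 · 12 kg · Dock 3"
    }
}
```

The same pattern extends to the production-engineer example: swap the reference image for a machine's ID plate and the hard-coded lookup for a live telemetry feed.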
Conclusion
Technology continues to advance, and we must take advantage of the benefits it offers — but always with the goal of solving real problems.
Sometimes, more is less.
At Sosadiaz, we focus on using creativity and the power of technology to create the best possible solutions for your challenges.

