My first day on the job as a VR developer with zero experience could best be described as intimidating. The sheer amount of knowledge and technical groundwork I needed to cover as a fresh graduate seemed like a jagged mountain I was too fragile to climb. With no senior-level guidance to point me in the right direction, I frantically searched for a structured learning path that would give me a jump start and make sense of all the jargon and buzzwords associated with this field. What is XR? How is it any different from MR? Where do VR and AR fit in? What even is the Metaverse? What does it take to build a VR experience from scratch?
Luckily, I came across Unity Learn's free resources, which offer a plethora of up-to-date courses, tutorials, and entire structured pathways to get you on your feet. Most of the concepts and takeaways highlighted in this post, I learned from there.
XR, or Extended Reality, is the umbrella term that encompasses everything VR, AR, and MR. More specifically, XR is most often used by developers, documentation, and technical writing as a placeholder for any VR/AR/MR experience. Toolkits, SDKs, and documentation use XR extensively when referring to functionality that can apply to VR, AR, or MR.
VR, or Virtual Reality, refers to experiences where the environment you are viewing and interacting with is completely virtual (computer-generated). This means that you are completely cut off from the real world, at least in terms of your visual senses.
AR, or Augmented Reality, is where smaller chunks of digital content are layered on top of your actual physical view of an environment. You can still see the real world around you, but it is overlaid with 3D objects and other digital elements such as UIs. Think Pokémon GO!
MR, or Mixed Reality, as the name suggests, is a blend of both VR and AR. MR experiences vary in how far they lean towards VR or AR, and are best expressed along a virtuality continuum.
This is best summarized in a concise blog post by the Interaction Design Foundation, which illustrates the continuum in a simple diagram.
It would be apt to describe the Metaverse as the next significant development in global communications, as highlighted below:
" There will come a day when we look upon what we have accomplished, and know that we have done it. As of now, we do not know exactly what shape the Metaverse will take. That does not matter, either. What matters is that someday, a global network of spatially organized, predominantly 3D content will be available to all without restriction, for use in all human endeavors — a new and profoundly transformational medium, enabled by major innovations in hardware, human-computer interface, network infrastructure, creator tools and digital economies." - Tony Parisi
You can read more in the Seven Rules of The Metaverse by Tony Parisi, Head of XR Ads/E-Commerce at Unity Technologies, where he lays out a thoughtful definition of how to really understand the Metaverse and its implications.
If you're as confused as I am, or just want a quick laugh, check out the more satirical take, The Seven Present Laws of the Metaverse, by The Ghost Howls.
Now that we have all that jargon out of the way, what does it take to make a VR experience from scratch? I use the term experience quite often as a placeholder for applications and games alike because, as I have discovered, the vast majority of the skills, technologies, practices, and functionality carry over between use cases. The bottom line is: if you can make a VR game, you can make a VR app, and vice versa. In fact, a VR game is a VR app.
Because VR thrives on complete virtual environments, a game engine becomes almost a necessity for creating experiences tailored to VR. We need a robust system that handles all the rendering, physics, utilities, and dirty work so that we can focus on the experience itself. We use game engines such as Unity and Unreal to build the 'front-end' (in the case of VR development, this refers to the virtual environment, objects, scenes, interactables and interactions, user interfaces, the client app experience, etc.).
Chances are, if you want to implement a cool feature in VR, there's already an SDK, or Software Development Kit, built for it. Want to add hand-tracked interactions with virtual objects? Use an Interaction SDK. Want to easily integrate avatars into your experience? Use an Avatar SDK.
There are so many SDKs and toolkits that achieve specific functionalities that there is no point in making your own. You are better off using well-documented and highly extensible software kits that have been developed carefully and evolved to perfectly cater to whatever you are looking to achieve.
By extensible we mean that we can extend/customize these kits to meet our exact needs.
Take, for example, the concept of Sockets in Unity's XR Interaction Toolkit. Sockets allow us to hold interactables (objects) in place at a certain predetermined orientation, as if docking them in. We can extend the Socket Interactor class inside the XR Interaction Toolkit to detect the vertical swipe of a card and implement a card reader in VR, as sketched below. This, however, requires creativity and problem-solving, as well as ample experience with C# code and vectors in Unity.
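To make that concrete, here is a rough sketch of what such an extension could look like. It assumes the XR Interaction Toolkit 2.x API; the class name, field names, and swipe threshold are hypothetical, and a real card reader would also need to let the card slide while docked, which is where the creativity and vector math come in.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Hypothetical sketch of a card reader built by extending XRSocketInteractor.
// Assumes XR Interaction Toolkit 2.x; names and thresholds are illustrative only.
public class CardReaderSocket : XRSocketInteractor
{
    [SerializeField] float requiredSwipeDistance = 0.15f; // metres of downward travel

    Transform card;     // the card currently held by this socket
    float startHeight;  // card height (socket-local Y) when it was docked

    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);
        card = args.interactableObject.transform;
        startHeight = transform.InverseTransformPoint(card.position).y;
    }

    protected override void OnSelectExited(SelectExitEventArgs args)
    {
        base.OnSelectExited(args);
        card = null;
    }

    public override void ProcessInteractor(XRInteractionUpdateOrder.UpdatePhase updatePhase)
    {
        base.ProcessInteractor(updatePhase);

        if (card == null || updatePhase != XRInteractionUpdateOrder.UpdatePhase.Dynamic)
            return;

        // How far has the card moved down (in the socket's local space) since docking?
        float travelled = startHeight - transform.InverseTransformPoint(card.position).y;
        if (travelled >= requiredSwipeDistance)
        {
            Debug.Log("Card swiped - reader activated");
            card = null; // fire only once per insertion
        }
    }
}
```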
In a similar vein, a lot of the time we require a backend service, or a group of backend services, to allow us to store and transfer data. This may be for user accounts, multiplayer features, score leaderboards, and other data- or network-oriented services. Many organizations now offer industry-standard backend-as-a-service offerings to holistically take care of everything, such as Photon Engine, which provides backend cloud services as well as networking libraries for easy integration with game engines like Unity.
If you are looking to add multi-user features to your VR experience, these services are your best bet!
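As a taste of how little code it takes to get going, here is a minimal sketch of connecting to Photon and joining a room with the PUN 2 package. The room settings, player count, and log messages are placeholders; a real project would also handle disconnects, matchmaking rules, and spawning networked avatars.

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Minimal sketch of connecting to Photon (PUN 2) and joining or creating a room.
// Room settings and messages are placeholders for illustration.
public class MultiplayerBootstrap : MonoBehaviourPunCallbacks
{
    void Start()
    {
        // Uses the app ID configured in the PhotonServerSettings asset.
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        Debug.Log("Connected to the Photon master server, looking for a room...");
        PhotonNetwork.JoinRandomRoom();
    }

    public override void OnJoinRandomFailed(short returnCode, string message)
    {
        // No open room was found, so create one for others to join.
        PhotonNetwork.CreateRoom(null, new RoomOptions { MaxPlayers = 4 });
    }

    public override void OnJoinedRoom()
    {
        Debug.Log($"Joined a room with {PhotonNetwork.CurrentRoom.PlayerCount} player(s).");
        // From here you would spawn the local player's networked avatar.
    }
}
```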
Because the field of XR is rapidly evolving, the hardware landscape is constantly changing.
My understanding of XR (or simply VR at the time) was that it was still in its primitive stages. However, after experiencing Meta's demo app First Steps, which showcases the capability and potential of VR devices, I was blown away. Hand tracking, eye tracking, 6DoF (six degrees of freedom), positional audio, a 120 Hz refresh rate, and beautifully crafted virtual environments make the experience surreal.
HMDs, or Head-Mounted Displays (the devices that allow us to experience XR), come in two primary types: tethered HMDs and standalone HMDs. The key difference between the two is that a tethered HMD performs all rendering and processing on a PC that is physically connected to the headset, with the device simply displaying the output, whereas a standalone HMD performs all rendering, processing, and calculations on the headset itself, all while displaying the content too.
The obvious trade-off between the two is that tethered devices can cater to more powerful experiences that are graphically and computationally demanding, while standalone devices are more limited in their capabilities. This may change as progress is made towards making HMDs lighter, more powerful, and more economical, and most standalone HMDs are now able to function as hybrids (running standalone or connected to a PC), such as the Meta Quest 2.
Some HMDs also support only VR, only AR, or both (XR). It all depends on the base use cases of the device and what it is tailored for (gaming, business, efficiency, and education).
When building a VR experience, the most fundamental characteristic of the experience to focus on is, inherently, immersion.
Everything revolves around immersion. Immersion in VR can be defined as the perception of being physically present in a non-physical (virtual) environment. Ensuring that experiences are immersive requires us to solve some key challenges. Let's run through some concepts that are critical to immersion:
Getting real-world scale right is essential for immersive experiences in VR. Objects and environments in VR must be the right size and proportion to give a sense of immersion. As a reference, 1 unit in Unity translates to 1 meter in real life. Both programmers and designers must be aware of world scale and its importance to VR immersion.
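As a simple illustration of the 1 unit = 1 metre convention, you can drop a placeholder object with real-world dimensions into a scene and sanity-check your environment against it. The door-like dimensions below are just a common real-world reference I picked for the example, not anything prescribed by Unity.

```csharp
using UnityEngine;

// Spawns a door-sized reference block (roughly 0.9 m wide, 2.1 m tall) so you can
// eyeball whether the rest of the scene is built at a believable real-world scale.
public class ScaleReference : MonoBehaviour
{
    void Start()
    {
        var block = GameObject.CreatePrimitive(PrimitiveType.Cube);
        block.name = "Door-Sized Reference";

        // 1 Unity unit == 1 metre, so localScale is expressed directly in metres.
        block.transform.localScale = new Vector3(0.9f, 2.1f, 0.05f);

        // Sit the block on the floor (a cube's pivot is at its centre).
        block.transform.position = new Vector3(0f, 2.1f / 2f, 1.5f);
    }
}
```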
Locomotion? That's just a fancy word for movement. More specifically, the movement of the user inside the virtual environment. How do we ensure that a user inside of a virtual world can freely move and explore the environment around them?
Locomotion is a major design decision in VR that ties directly to the immersion of the user. Common locomotion techniques in VR include teleportation, continuous (smooth) locomotion, snap or smooth turning, and room-scale movement, where the user physically walks around their play area. A simple sketch of continuous locomotion follows below.
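The XR Interaction Toolkit ships ready-made locomotion providers for these, but to show the core idea, here is a hedged sketch of continuous locomotion driven by a thumbstick. It assumes Unity's Input System package and a CharacterController on the XR rig; the action reference, field names, and speed value are placeholders.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch of continuous (smooth) locomotion: read a thumbstick and move the rig.
// Assumes the Input System package, a CharacterController on the XR rig, and an
// input action bound to the controller's 2D axis; names and values are illustrative.
[RequireComponent(typeof(CharacterController))]
public class SmoothLocomotion : MonoBehaviour
{
    [SerializeField] InputActionProperty moveAction; // thumbstick (Vector2)
    [SerializeField] Transform head;                 // the main camera / HMD transform
    [SerializeField] float speed = 2f;               // metres per second

    CharacterController controller;

    void Awake() => controller = GetComponent<CharacterController>();

    void OnEnable() => moveAction.action.Enable();
    void OnDisable() => moveAction.action.Disable();

    void Update()
    {
        Vector2 input = moveAction.action.ReadValue<Vector2>();

        // Move relative to where the user is looking, flattened onto the ground plane.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;
        Vector3 move = (forward * input.y + right * input.x) * speed;

        controller.SimpleMove(move); // SimpleMove applies gravity and frame timing
    }
}
```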
Interactions in VR are closely tied to gameplay mechanics and player agency in video games, and are hence essential for immersion. Users in a VR experience must have some sort of impact on the virtual environment, and we achieve this through interactions. Interactions in VR can be broken down into two components: interactors (such as the user's hands or controllers) and interactables (the objects being acted upon), as sketched below.
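For instance, with the XR Interaction Toolkit, making an object grabbable is mostly a matter of adding an XRGrabInteractable and reacting to its events when an interactor selects it. The sketch below assumes XRI 2.x; the colour-change feedback and field names are placeholders for real gameplay logic.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch: make this object an interactable and react when an interactor (e.g. a
// ray or direct interactor on a controller) grabs it. Assumes XRI 2.x; the
// colour change is a stand-in for whatever the grab should actually trigger.
[RequireComponent(typeof(XRGrabInteractable))]
public class GrabbableWidget : MonoBehaviour
{
    [SerializeField] Renderer visual;

    XRGrabInteractable interactable;

    void Awake()
    {
        interactable = GetComponent<XRGrabInteractable>();
        interactable.selectEntered.AddListener(OnGrabbed);
        interactable.selectExited.AddListener(OnReleased);
    }

    void OnDestroy()
    {
        interactable.selectEntered.RemoveListener(OnGrabbed);
        interactable.selectExited.RemoveListener(OnReleased);
    }

    void OnGrabbed(SelectEnterEventArgs args) => visual.material.color = Color.green;
    void OnReleased(SelectExitEventArgs args) => visual.material.color = Color.white;
}
```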
Comfort and accessibility in a VR experience generally raise quality-of-life questions: does the experience induce motion sickness, can it be enjoyed while seated, and can players of different heights and physical abilities use it comfortably?
As we talked about previously, HMDs have limited processing power, so performance optimization becomes essential when developing apps for VR. Depending on the type of experience and its target platform, accounting for constraints and navigating around hardware limitations while building an immersive app is critical.
Some metrics to keep in mind include, but are not limited to, frame rate, frame time, draw calls, and polygon count.
To optimize performance, we want to minimize draw calls, minimize poly count, and optimize textures, particles, lighting, and post-processing.
Typical tips and techniques for optimizing performance include static and dynamic batching, occlusion culling, baked lighting, level-of-detail (LOD) groups, and texture compression and atlasing. One of these, LOD groups, is sketched below.
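LOD groups let the engine swap in cheaper meshes as objects get further from the camera. The sketch below wires one up at runtime; the transition percentages and renderer references are arbitrary and would normally be tuned in the editor instead.

```csharp
using UnityEngine;

// Sketch: configure a LOD (level-of-detail) group at runtime so distant objects
// render with cheaper meshes. The renderer references and transition values here
// are illustrative; in practice you would usually set these up in the editor.
public class LodSetupExample : MonoBehaviour
{
    [SerializeField] Renderer highDetail;   // full-poly mesh, used up close
    [SerializeField] Renderer mediumDetail; // reduced mesh
    [SerializeField] Renderer lowDetail;    // very cheap mesh, used far away

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // Each LOD kicks in when the object covers less of the screen.
        var lods = new LOD[]
        {
            new LOD(0.50f, new[] { highDetail }),
            new LOD(0.15f, new[] { mediumDetail }),
            new LOD(0.02f, new[] { lowDetail }),
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```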
I have linked my learning path below and will keep updating it as I progress:
A Structured Learning Path for VR Development with Unity