Getting Started With VR Development

Virtual reality is here and it’s big. Your clients will want their own VR projects, so embrace this new wave and jump on the virtual train before it’s too late. We’ve put together some tips on how to get started with VR development.


First, Let’s Have an Overview of VR Hardware

Oculus Rift, HTC Vive and PlayStation VR are the most advanced consumer VR headsets at the moment, but they are also expensive. Most of the time, your clients will want their apps available to a broader audience, so building apps for those platforms might not be the best solution.

Cheaper solutions are VR glasses like Google Cardboard or Samsung Gear VR. They offer a similar experience, but they’re powered by smartphones (which everyone has in their pockets nowadays). The idea is simple: you take your smartphone, stick it into one of those headsets and you’re ready for some VR action.

One downside of Google Cardboard and similar headsets is the limited controls. With Oculus, Vive and PlayStation VR, you get hand controllers that map your hand movements into the virtual world, so you can actually touch, grab and hold virtual objects. Google Cardboard only has one magnetic or conductive button, which serves as a trigger within the app/game. We will describe later how you can interact with the virtual world using such a limited set of controls. At Google I/O 2016, Google announced a new headset coming this fall that will feature a hand controller, so we are looking forward to it.

What Will You Need to Build VR Apps/Games for Smartphones?

Let’s Have a Look at the Development Side of the Story

Just some quick market research before we start. Google Cardboard and similar glasses can be bought for a few bucks on eBay or similar stores. Samsung’s Gear VR is a bit more expensive, so the cardboard option will surely be the more approachable one. To make development of the VR app/game as easy as possible, we used the Cardboard SDK, which is now available as part of the Google VR SDK. It’s available for both native Android and iOS platforms, but more importantly, it is also available for Unity. The cool thing about Unity is that when you make a project with this engine, you can build it for various platforms, Android and iOS among them. Your clients will appreciate not having to pay double for two platforms, and you will do less work for the same result. The downside is that if you’re not familiar with the Unity engine and the C#/JavaScript programming languages, you’ll have to do some learning and training.

The Unity Editor allows you to model your 3D world as you like, and C#/JavaScript scripts allow you to define game logic, object behaviour, animations, networking, etc.

The Google VR SDK for Unity is a set of C# scripts that contain the rules and behaviours for proper VR rendering. The most important prefab asset in the Google VR SDK is GvrMain. It consists of Head and Stereo Render objects.

Go Stereoscopic

Head holds the MainCamera, which renders your scene stereoscopically: your smartphone screen is divided into two smaller screens, one for each eye. That way you can be sure the SDK will provide the best stereoscopic experience possible for your cardboard glasses.

It also carries the PhysicsRaycaster script, which allows you to trigger actions within the virtual world just by looking at objects. Of course, to know where the center of your view is, you can add the GvrReticle prefab with the GvrReticle script. It adds a small circle to the screen, so you can point at objects and activate them with the help of the EventSystem, which we’ll describe later.

Spatialize Your Audio

Besides that, the MainCamera carries the Gvr Audio Listener, which is used for audio spatialization. If you wear headphones, sounds will be played in a way that approximates the positions of their sources in the scene. Your objects need to carry a Gvr Audio Source to work properly with the Gvr Audio Listener. So if there’s an object to your left in the virtual world, you’ll hear it with greater intensity in the left side of the headphones and less in the right.
Important note: Unity will not enable the GVR Audio Spatializer by default, so you’ll have to turn it on yourself by going to Edit -> Project Settings -> Audio and switching the Spatializer Plugin from None to GVR Audio Spatializer.
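As a small illustration, here is a minimal sketch of a script that plays a spatialized sound from an object placed to the player’s left. It assumes a Gvr Audio Source component (with a clip assigned and loop enabled) has already been added to the object in the Inspector; the SpatializedBuzz name is just for this example.

using UnityEngine;

// Hypothetical example component: place a humming object to the player's left
// and start its sound, so the Gvr Audio Listener on the camera spatializes it.
public class SpatializedBuzz : MonoBehaviour {

    void Start() {
        // Assumes a GvrAudioSource (clip assigned, loop enabled) was added
        // to this object in the Inspector.
        GvrAudioSource source = GetComponent<GvrAudioSource>();

        // Two meters to the left of the origin, roughly at head height.
        transform.position = new Vector3(-2.0f, 1.6f, 0.0f);

        if (source != null) {
            source.Play();
        }
    }
}

With headphones on, the buzz should now come noticeably from the left side.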
Have a look at the example scene from our VR maze game.

This scene has the aforementioned GvrMain prefab which holds the camera, two canvases (one for highscores and one for the game menu), a floor (which is a plane object), one light source and the Degordian logo.

To model your scene, you can just drag and drop those items from your assets folder. You can import assets from other modeling tools (Blender, SketchUp) or find them on the Unity Asset Store.

What About Interactions?

This scene is very minimalistic and you can’t do much with it for now, so let’s add some interaction. As we mentioned before, GvrMain holds a PhysicsRaycaster on its MainCamera object which, well, casts rays that can hit objects. When a ray hits an object, the GvrReticle (the small circle mentioned above) will indicate that we are looking at something interactable.

So, how can we make something interactable? If you look at the previous screenshot, you will notice an EventSystem object in the list on the left. This object carries the EventSystem, GazeInputModule and TouchInputModule scripts; GazeInputModule comes with the SDK, while the other two are standard Unity components, and together they will help us achieve some interaction in the scene. The final step in making something interactable is adding an EventTrigger script to the object itself.

In this scene, the Degordian logo is interactable. When you look towards it, you can trigger an event on it with the magnetic or conductive button on your cardboard glasses. That’s a cute Easter egg in the menu scene: you can point and shoot at the Degordian logo and it will teleport around you as you shoot at it. So what does the EventTrigger script have to do with it?

We added this EventTrigger script to the Degordian logo object. It lets you define which actions will be invoked when certain events occur.

Two of those events are Pointer Enter and Pointer Exit, which fire when you start and stop looking at the object. In this example, the SetGazedAt method from the Teleport script is called with the corresponding parameter (true or false).

The third event is Pointer Click. It occurs when you’re looking at the object and press the button on your cardboard glasses. When this happens, the TeleportRandomly method is called, which teleports the object to a random place in the scene.

It’s that simple: now you have an interactable object. There are more events available to be specified; you can link them to any method you like and react accordingly.
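If you prefer wiring things up in code rather than in the Inspector, here is a minimal sketch of the same setup using Unity’s standard EventTrigger API. The Teleport script and its SetGazedAt and TeleportRandomly methods are the ones described in this post; the LogoEventSetup component is a hypothetical helper added only for illustration.

using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical helper: attach it to the same object as the Teleport script.
public class LogoEventSetup : MonoBehaviour {

    void Start() {
        Teleport teleport = GetComponent<Teleport>();
        EventTrigger trigger = gameObject.AddComponent<EventTrigger>();

        // Highlight the object while it is gazed at, and teleport it on click.
        AddEntry(trigger, EventTriggerType.PointerEnter, data => teleport.SetGazedAt(true));
        AddEntry(trigger, EventTriggerType.PointerExit, data => teleport.SetGazedAt(false));
        AddEntry(trigger, EventTriggerType.PointerClick, data => teleport.TeleportRandomly());
    }

    void AddEntry(EventTrigger trigger, EventTriggerType type,
                  UnityEngine.Events.UnityAction<BaseEventData> action) {
        EventTrigger.Entry entry = new EventTrigger.Entry { eventID = type };
        entry.callback.AddListener(action);
        trigger.triggers.Add(entry);
    }
}

Either way, the result is the same: the EventSystem and GazeInputModule feed pointer events to the EventTrigger, which then calls your methods.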

Those were some simple examples of scene setup in Unity Editor, so let’s now take a look at scripts.

Implement Your Logic

Scripts are written in the C# or JavaScript programming language. If you want to access an object’s properties and change them from a script, you will most likely use and extend the MonoBehaviour class. It provides some methods you can override and make use of. You get Start() and Awake() methods, which are used for initialization.

Let’s look at the Teleport class, which is actually part of the Google VR SDK example.

using UnityEngine;

public class Teleport : MonoBehaviour {

    private Vector3 startingPosition;

    void Start() {
        // Remember where the object started and reset its gazed-at state.
        startingPosition = transform.localPosition;
        SetGazedAt(false);
    }
}

First, there is some initialization: we store the starting position of the object. Then there’s the SetGazedAt method, which gets the object’s Renderer component and changes its color according to the gazedAt value. You can retrieve any of the object’s components by calling GetComponent<ComponentName>() and use their properties.

public void SetGazedAt(bool gazedAt) {
    // Green while being looked at, red otherwise.
    GetComponent<Renderer>().material.color = gazedAt ? Color.green : Color.red;
}
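For completeness, the TeleportRandomly method triggered by the Pointer Click event is just as short. The sketch below follows the idea of the SDK example (the exact numbers are only illustrative): pick a random direction above the horizon and move the object a few units away.

public void TeleportRandomly() {
    // Pick a random direction, keep it pointing above the horizon,
    // and move the object 1.5 to 3.5 units away from the origin.
    Vector3 direction = Random.onUnitSphere;
    direction.y = Mathf.Clamp(direction.y, 0.5f, 1f);
    float distance = 2 * Random.value + 1.5f;
    transform.localPosition = direction * distance;
}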

There are also Update and LateUpdate methods available. Update is called on every rendered frame, and you can use it for continuous work, like changing the position of an object or checking for input events.

For example, if you put this in your Update method, the object will start moving in its “forward” direction. This code is actually used in the game for moving the camera forward and simulating walking through the maze.

void Update() {
    // Push the object forward at a constant speed of 2 units per second.
    GetComponent<Rigidbody>().velocity = transform.forward * 2.0f;
}

Rigidbody is a component that describes the physical properties of an object. That means you can give it a velocity and it will start moving.

LateUpdate can be used for trigger checking. Here are two examples of its usage.

void LateUpdate() {
    // Quit the app when the user presses the phone's back button.
    if (GvrViewer.Instance.BackButtonPressed) {
        Application.Quit();
    }

    // React to the cardboard trigger (the magnetic or conductive button).
    if (GvrViewer.Instance.Triggered) {
        SomeAction();
    }
}

As you can see, the Google VR SDK offers a BackButtonPressed flag which is set to true if the user pressed the back button on their phone. There is also a Triggered flag which is set if the user presses the magnetic or conductive button on their cardboard glasses. The main idea for interactions is simple: look towards an interactable object and trigger an action with the button on your cardboard glasses.

That’s about it for our Unity VR 101 blog post. Be sure to keep up with our blog for a new mini-series in which we’ll describe specific parts and details of Unity VR development.

Be sure to check out our Curious Maze (Android and iOS) and see how we made use of the Google VR SDK to build a cute (as well as scary) adventure game.