360 VR player for the web using Three.js and WebGL
How can a 360 VR player be built on the web? What technology is used to create a VR player that displays 360 content? Let’s explore these questions and see how we can integrate what we learn into our 360 VR product. If you haven’t read last week’s post, where we talked about the API for VR, make sure you read that first.
How to build a 360 VR player for the web
First, let’s note down what we hope to achieve. We had earlier looked into the requirements for a 360 VR player. With those in mind, let’s get into what we need to do to build such a player.
In this post, we’ll focus on the web part of the player. This player will be designed to work with the VR API that we talked about earlier.
Three.js and WebGL in VR
To power our 360 and VR content, we need WebGL. VR demands high performance, so it needs to take advantage of the GPU, and that is exactly what WebGL gives us in the browser.
WebGL provides a JavaScript API for rendering interactive 2D and 3D graphics inside a <canvas> element. You can draw different shapes, manipulate their colors, and use different textures to create anything you want.
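For instance, before Three.js enters the picture, getting a raw WebGL context from a canvas looks something like this (a minimal sketch, just to show where the API lives):

// Grab a raw WebGL rendering context from a <canvas> element.
var canvas = document.createElement('canvas');
document.body.appendChild(canvas);

var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (!gl) {
	console.warn('WebGL is not supported in this browser');
}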
Three.js is a library that abstracts much of WebGL’s functionality to make it easier to create things. For example, Three.js has built-in geometries, textures, materials, and a multitude of other options for creating anything with WebGL. Using Three.js significantly reduces the time it takes to build something with WebGL, and it also gives us the option to write custom shaders when we need something custom.
We’ll be using Three.js to create our player. Why not Aframe or ReactVR? Those frameworks are built on top of Three.js to make building VR applications even easier. But adding another framework increases the size of the application and makes customizations that fall outside the framework’s scope harder later on. For this reason, we’re using Three.js directly.
It should be noted that you need to learn the fundamentals of WebGL and how it works to get the most out of Aframe or ReactVR. Both are great places to start when building a VR app, and as the platform develops, more tools will emerge to make VR development easier. As a side note, I’ll be sharing tutorials on developing with Aframe and ReactVR on Tutorials For VR, so be sure to subscribe. I would still recommend learning Three.js even if you’re using these frameworks, since they use Three.js objects behind the scenes.
A word about image projections
In the earlier article, I talked about different projections and why I’ll be using the cubemap projection. If you haven’t read that, make sure you read it first.
For cubemap projections, Three.js gives us a built-in geometry that handles them nicely: BoxGeometry.
But before getting into geometries, we need to learn some Three.js fundamentals.
Three.js fundamentals
Scene
The first thing to understand is the scene in Three.js. A Scene represents what we see on the screen. Three.js will only render objects that have been added to the scene, and we can add any number of items to it.
Camera
What we see on the screen is determined by the camera. There are different types of cameras available in Three.js, and we can pick the one that creates the experience we need. For VR, we place the user at the camera and move the camera along with the user’s head.
Geometry
Three.js also comes with different geometries, and on top of these we can create our own custom geometries. For displaying 360 images in the cubemap format, we’ll be using BoxGeometry.
Textures and materials
Textures and materials are another important part of Three.js. You can apply different textures based on your requirements, and Three.js ships with built-in helpers for common cases such as CubeTexture and VideoTexture. Textures are applied to materials, and materials determine how a surface interacts with its surroundings. For example, MeshBasicMaterial is not affected by ambient light at all, while MeshLambertMaterial is a material for non-shiny surfaces that does respond to ambient light.
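A quick sketch of the difference (a minimal example, assuming a scene already exists):

// MeshBasicMaterial ignores lighting entirely: a mesh using it is drawn in
// flat red no matter what lights are in the scene.
var unlitMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 });

// MeshLambertMaterial reacts to lights, so it needs at least one light
// (an ambient light, for example) to be visible as anything other than black.
var litMaterial = new THREE.MeshLambertMaterial({ color: 0xff0000 });

var ambient = new THREE.AmbientLight(0x404040);
scene.add(ambient); // only the MeshLambertMaterial mesh will react to this light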
Creating a web 360 VR player with Three.js
We have previously explored why cubemap projections are a better fit for our player due to their more efficient representation of the data. If you’re interested in making a 360 viewer that uses equirectangular images, you can read the previous tutorial I wrote on building a 360 photo viewer.
There are a couple of ways to get the cubemap from the server: the six faces can be combined into a single image, or they can be retrieved individually. To simplify the article, I’m going to show the 360 player with the six cube faces kept separate. Just remember that if we get all the faces together in a single image, we need to write the logic to split it into individual faces, as sketched below.
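For reference, here is a rough sketch of that splitting logic, assuming the server sends one horizontal strip of six square faces. The layout and face order are assumptions; adjust them to match your backend.

// Slice a horizontal strip of six square faces into individual textures.
function splitCubemapStrip(image) {
	var faceSize = image.height;   // each face is assumed to be faceSize x faceSize
	var faces = [];
	for (var i = 0; i < 6; i++) {
		var canvas = document.createElement('canvas');
		canvas.width = canvas.height = faceSize;
		var ctx = canvas.getContext('2d');
		// Copy the i-th face out of the strip into its own canvas.
		ctx.drawImage(image, i * faceSize, 0, faceSize, faceSize, 0, 0, faceSize, faceSize);
		faces.push(new THREE.CanvasTexture(canvas));
	}
	return faces;
}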
Let’s create a new Scene and a PerspectiveCamera.
var scene = new THREE.Scene();

// 60 degree field of view, the player's aspect ratio, near and far clipping planes.
var camera = new THREE.PerspectiveCamera(60, width / height, 0.1, 10000);
The latest versions of Three.js come with a CubeTextureLoader to make loading cubemaps even easier.
var loader = new THREE.CubeTextureLoader();
loader.setPath( 'textures/cube/' );

// Face order: positive x, negative x, positive y, negative y, positive z, negative z.
var textureCube = loader.load( [
	'px.png', 'nx.png',
	'py.png', 'ny.png',
	'pz.png', 'nz.png'
] );

var material = new THREE.MeshBasicMaterial( { envMap: textureCube } );
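As a side note, a common alternative (not the approach used above) is to load the six faces as ordinary textures and paint them on the inside of the box with THREE.BackSide. A rough sketch, assuming the same file names; in recent Three.js versions such a material array can be passed to THREE.Mesh in place of a single material:

// Alternative: one MeshBasicMaterial per face, rendered on the inside of the box.
var faceLoader = new THREE.TextureLoader();
faceLoader.setPath( 'textures/cube/' );

var faceMaterials = [ 'px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png' ].map( function ( file ) {
	return new THREE.MeshBasicMaterial( {
		map: faceLoader.load( file ),
		side: THREE.BackSide   // the camera sits inside the box, so render the back faces
	} );
} );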
We’ll also create a BoxGeometry for the cube itself.
var geometry = new THREE.BoxGeometry(1, 1, 1);
We need to create a Mesh before we can add all this to our scene.
var mesh = new THREE.Mesh(geometry, material);
Now we can add it to our scene.
scene.add(mesh);
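We also need a renderer and a render loop to actually draw the scene. Here is a minimal sketch, assuming width and height hold the dimensions of the player:

// Create the WebGL renderer and attach its canvas to the page.
var renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setSize( width, height );
document.body.appendChild( renderer.domElement );

// Render the scene from the camera's point of view on every frame.
function update() {
	renderer.render( scene, camera );
}

// A plain requestAnimationFrame loop; the WebVR section below swaps this
// out for renderer.animate( update ).
function loop() {
	requestAnimationFrame( loop );
	update();
}
loop();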
Adding VR capabilities to our 360 player
We now have a basic mechanism for viewing our 360 images. However, we can’t use it with WebVR just yet.
There have been great changes in the recent updates to Three.js that make preparing the content even easier. Three.js now ships with WebVR renderer code; before this, we had to include additional scripts like VREffect and VRControls to make it work. As time progresses, more such features will be included in the main Three.js script. Please note that we still need to include the helper WebVR.js file, which checks whether WebVR is available in the browser. This improves the usability of our viewer.
To enable VR on our Three.js renderer, we need to add:
renderer.vr.enabled = true;

// Let Three.js drive the render loop so it can sync with the VR display.
renderer.animate( update );
Then we ask the WebVR helper for the VR display, point the renderer at it, and add the enter-VR button to the page:

WEBVR.getVRDisplay( function ( device ) {

	// Tell the renderer which VR display to present to.
	renderer.vr.setDevice( device );

	// Add the "Enter VR" button to the page.
	document.body.appendChild( WEBVR.getButton( device, renderer.domElement ) );

} );
For this to work, make sure you’re using at least version r86 of Three.js and have included WebVR.js. This will add a button to your page if WebVR is supported by the browser. Including webvr-polyfill is a good idea too, since it acts as a fallback and simulates WebVR when it is not natively available.
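With a recent version of the polyfill loaded on the page, initializing it is roughly this; treat the exact setup as an assumption and check the webvr-polyfill README for the version you ship:

// Provide a simulated VRDisplay on browsers without native WebVR support.
if ( window.WebVRPolyfill ) {
	var polyfill = new WebVRPolyfill();
}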
WebVR is shipping in more browsers much faster than before, and with the recent news that Apple has joined the WebVR community, we can hope that the WebVR standard will become common across all browsers.
Final thoughts on the WebVR player
I have intentionally simplified this post to make it easy for beginners to understand. There are more concepts, like controls and effects in Three.js, that are required to create a truly cross-browser 360 player, and building customizations on top of the player is another challenge in itself. We’ll be covering these advanced topics in a later post. Don’t forget to subscribe to the list if you haven’t done so already, and if you have any feedback, please share it in the comments below.