
Creating 3D Assets for Virtual Reality

  • Software: Blender 2.79
  • Difficulty: Intermediate

Virtual Reality is not going away any time soon! In this video I’ll go over what makes modeling and texturing for VR different from regular games. Any 3D model can technically be used in a virtual reality environment, but there are a lot of limitations specific to the platform that should be taken into account in order to get the most out of each polygon and pixel.

 1. Build to scale 

It is often helpful for artists to use accurate measurements when building game assets in order to maintain consistency, but when working in VR and AR it becomes even more important. Virtual worlds don’t always need to be realistic, but they do need to be believable enough for the player to navigate the experience comfortably. The most important things to scale and position correctly are things that can be gripped by a hand, such as a door handle, weapon, or steering wheel. The second most important objects to get right are those that people are already familiar with in real life. The proper height and width of cars, stairs, light switches, desks, chairs, windows, doors, and ceilings can help make players feel more at home in an otherwise alien universe.

3 to 3.5 feet (around 1 meter) is a good height for things that should be grabbed, and 5 feet (roughly 1.5 meters) is a good height for things that should be at eye level. Those are good rules of thumb based on averages, but also keep in mind that kids and people of all different sizes will be playing your game. 
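
As a quick sanity check while modeling, you can drop reference markers at those heights and eyeball your asset against them. Here’s a minimal Blender Python sketch (using the 2.79-style API; the marker names are arbitrary):

```python
# Sketch: add reference markers at grab height (~1 m) and eye level (~1.5 m)
# so scale can be checked at a glance while modeling.
# Assumes 1 Blender unit = 1 meter.
import bpy

def add_reference(name, height):
    # Plain empties are cheap and won't end up in an export by accident.
    empty = bpy.data.objects.new(name, None)
    empty.empty_draw_type = 'PLAIN_AXES'    # 2.79 API; 2.8+ uses empty_display_type
    empty.location = (0.0, 0.0, height)
    bpy.context.scene.objects.link(empty)   # 2.79 API; 2.8+ links via collections
    return empty

add_reference("Ref_GrabHeight", 1.0)   # door handles, weapons, levers
add_reference("Ref_EyeLevel", 1.5)     # signs, screens, sight lines
```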

If you’ve done that and things are still feeling a bit off, try adding a few smaller details like trim or screws and see how big of a difference it makes. This goes for texturing just as much as modeling, so be extra mindful when placing those brick and tile textures. 

2. Think like a designer 

Most games have a limited number of ways that the player can use objects, and use the same command for each one. You don’t have to know what something is or how it works in order to “Press X to Interact”. In VR, however, that type of mechanic feels stale and extremely limiting, since players want to explore and use their hands to interact with objects like they would in real life. As a result, 3D modelers need to use visual design techniques to communicate how an object is intended to be used. Even if it’s a totally strange tool that does something the player has never seen before, they should be able to look at it and figure out how to use it without a tutorial interrupting them.

A visual cue that helps people understand the function of an object in their environment is called an affordance. Wheels tell you which direction something can move in, knobs look like they are meant to be twisted, handles are built for pulling and picking up, and buttons, bars, and flat areas communicate pushing.

Every object that can be interacted with should look and feel fully functional. Since the player can look at things from any angle, any attempt to take a shortcut on this will look sloppy. As an example, I had to modify the revolver concept shown in the video in order for the handle to fit a hand, the trigger to be the right distance for the finger, the cylinder to fit the bullets, the reloading mechanism to pop the cylinder out at the correct angle for reloading, the hammer strike to line up with the bullet and barrel, the barrel to be the right size for the bullet, and the sights to work realistically. Most of these changes are relatively minor, but some (like the reloading mechanism) require altering the proportions enough to significantly deviate from the original concept (the resulting model needed to be much thinner).

There’s a huge amount to cover when it comes to affordances and how to design things so that the player feels natural interacting with them, but that can be a topic for another day. I suggest you read “The Design of Everyday Things” by Don Norman if you’re interested in learning more. 

3. Pixels over polygons 

Not only do VR games need to run at 60 to 90 frames per second or higher to prevent motion sickness, but everything also needs to be rendered twice, once for each eye. There are a few tricks that can speed up the rendering process, but it’ll never be as fast as rendering a single viewpoint. As a result, we need to be extremely careful with how much memory and rendering time our assets take up. The more optimized our assets are, the more detail we can afford to add.
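
To put that in numbers, here’s a quick back-of-the-envelope comparison. The 60 and 90 fps targets come from above; halving the budget per eye is a simplification, since stereo rendering tricks can share some of the work:

```python
# Rough frame-time budgets: a flat-screen game at 60 fps vs. VR at 90 fps,
# with the VR budget effectively split across two eye renders.
TARGET_FPS_FLAT = 60
TARGET_FPS_VR = 90

budget_flat = 1000.0 / TARGET_FPS_FLAT   # ~16.7 ms per frame
budget_vr = 1000.0 / TARGET_FPS_VR       # ~11.1 ms per frame

# Everything is drawn once per eye; single-pass stereo and similar tricks
# claw some of this back, but it never reaches single-view speed.
per_eye = budget_vr / 2.0                # ~5.6 ms per view

print("Flat-screen budget: %.1f ms" % budget_flat)
print("VR budget:          %.1f ms" % budget_vr)
print("Per-eye budget:     %.1f ms" % per_eye)
```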

The two ways to add detail to a 3D scene are to add more geometry or to increase texture resolution. Adding more geometry always looks more realistic, but since we can’t model everything down to the atoms, at some point we need to use textures to approximate everything that’s going on. Because we need to run everything at a higher framerate and draw things twice, VR assets need to have a much lower polycount than what you would see in a regular PC game. Even though it’s less realistic, we’ll need to rely more on texture detail than modeled detail, since it can be rendered faster. Reducing polycount is a topic for another day, but here are a few tricks you can use to get yours even lower:

Intersect small details instead of connecting them to the surface. The tradeoff is that this will waste the texture space underneath, so I wouldn't do it for huge details, but for smaller things this can save loads of tris.

Try to limit the number of sharp edges or UV seams. This might be a bit out of your control because it depends on the design of the asset in the first place, but be aware that smooth shading and fewer UV islands are slightly faster.

Lastly, any vertex that doesn’t significantly define the object’s silhouette and isn’t needed for a UV seam can be slid to the side and removed.
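
Blender’s Limited Dissolve can automate a first pass of that cleanup. Here’s a small sketch that runs on the active mesh; the 5-degree threshold is just a starting point, and seams and sharp edges are preserved:

```python
# Sketch: dissolve vertices and edges that barely affect the silhouette,
# while preserving UV seams and sharp edges. Run with a mesh object active.
import bpy
import math

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.dissolve_limited(
    angle_limit=math.radians(5.0),   # merge geometry within 5 degrees of planar
    delimit={'SEAM', 'SHARP'},       # don't collapse across UV seams or sharp edges
)
bpy.ops.object.mode_set(mode='OBJECT')
```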

4. Sharing (data) is caring 

While we can focus on getting detail from textures, it’s also best not to have too many of them. It’s a lot better for a game engine to load one really large texture than many smaller ones. Use a texture atlas whenever possible to combine many textures into one. Either author it that way from scratch, or use a tool like Mesh Baker to combine them for you.
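
To see the idea without any add-ons, here’s a minimal sketch using the Pillow library. The filenames, sizes, and layout are placeholders, and you’d still need to offset and scale each object’s UVs into its quadrant of the atlas:

```python
# Sketch: pack four 512x512 textures into one 1024x1024 atlas with Pillow.
from PIL import Image

TILE = 512
paths = ["chair.png", "couch.png", "wall.png", "table.png"]   # placeholder files

atlas = Image.new("RGBA", (TILE * 2, TILE * 2))
offsets = [(0, 0), (TILE, 0), (0, TILE), (TILE, TILE)]        # one quadrant each

for path, (x, y) in zip(paths, offsets):
    tile = Image.open(path).convert("RGBA").resize((TILE, TILE))
    atlas.paste(tile, (x, y))

atlas.save("atlas.png")
```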

What’s so cool about the PBR pipeline is that many different objects can all share the same material as well. A metal chair, a leather couch, painted walls, and a wooden table can all share the same textures and the same material, which is excellent for performance. Each material adds an extra draw call, so the more objects that share the same material, the fewer draw calls you will have.
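
On the Blender side, a quick way to prototype this is to point every selected object at a single material datablock before export. This is only a sketch, and the material name is arbitrary:

```python
# Sketch: make every selected mesh object share one material datablock
# so they can be drawn with fewer material switches after export.
import bpy

shared = bpy.data.materials.get("MAT_AtlasPBR") or bpy.data.materials.new("MAT_AtlasPBR")

for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    # Drop any existing slots, then add the shared material.
    while obj.data.materials:
        obj.data.materials.pop(index=0)
    obj.data.materials.append(shared)
```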

Another way to share object data and speed up your game is with instancing. Having many objects share the same mesh and materials is a great way to add a bit of visual complexity without impacting performance too much. The trick here is to use modular pieces that can be combined with other objects in a variety of ways. You can also set per-instance properties and slightly tweak a few settings for each one so that the effect isn’t quite so obvious.
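
In Blender you can set this up with linked duplicates, which share a single mesh datablock. A small sketch, with arbitrary spacing and count (the scene-link call is the 2.79 API):

```python
# Sketch: scatter linked duplicates of the active object. The mesh data is
# stored once and shared by every instance.
import bpy

source = bpy.context.active_object   # the modular piece to instance

for i in range(10):
    inst = source.copy()             # new object transform...
    inst.data = source.data          # ...explicitly sharing the same mesh datablock
    inst.location.x += (i + 1) * 2.0
    bpy.context.scene.objects.link(inst)   # 2.79 API; 2.8+ links via collections
```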

5. Use even texel density 

In my courses about modeling and texturing first-person weapons, I mentioned how we can get the most out of our textures by sizing UVs in proportion to their distance from the camera. In VR, however, players can stick the camera just about anywhere by moving their heads, so we need to take a slightly different approach. Instead, try to aim for a fairly even texel density. The only areas to optimize, if you need to, are places that the player likely won’t look at, such as the underside of a shelf.
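
To check how even your texel density actually is, you can estimate it from the UV area versus the surface area. This sketch assumes the active object has a UV map, that its scale has been applied, and that it uses a square texture of the placeholder size TEXTURE_SIZE:

```python
# Sketch: estimate average texel density (pixels per meter) of the active mesh
# so different assets can be matched to one another.
import bpy
import bmesh
import math

TEXTURE_SIZE = 1024  # pixels on one side of the (square) texture - placeholder

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)
uv_layer = bm.loops.layers.uv.active   # assumes the mesh is UV-unwrapped

mesh_area = 0.0
uv_area = 0.0
for face in bm.faces:
    mesh_area += face.calc_area()
    uvs = [loop[uv_layer].uv for loop in face.loops]
    # polygon area in UV space via the shoelace formula
    area = 0.0
    for i in range(len(uvs)):
        a, b = uvs[i], uvs[(i + 1) % len(uvs)]
        area += a.x * b.y - b.x * a.y
    uv_area += abs(area) * 0.5

bm.free()

# pixels per meter: how much texture one meter of surface receives
density = TEXTURE_SIZE * math.sqrt(uv_area / mesh_area)
print("~%.0f px/m" % density)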

6. Be careful with normal maps 

Normal maps are my favorite way to add extra detail to game objects, but in VR their effectiveness is essentially cut in half. They still work for fine detail, or for detail that is far away which you can’t inspect up close, but those slick beveled edges that we’ve come to know and love won’t look quite as good. 

I’d still recommend using normal maps anyway if you can afford the extra textures, since it still looks better than nothing, but if you really need the light to glance off an edge, you’ll need to bevel the mesh itself. This is one of the two exceptions to the Pixels over Polygons rule. To compensate, try increasing the edge wear in your textures to mask it a bit and still highlight the edges, just in a different way.
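
In Blender, an angle-limited Bevel modifier is a cheap way to do this without touching the base mesh. The width and segment count below are only starting values:

```python
# Sketch: add a small, angle-limited Bevel modifier so important edges
# actually catch the light instead of relying on the normal map alone.
import bpy
import math

obj = bpy.context.active_object
bevel = obj.modifiers.new(name="EdgeBevel", type='BEVEL')
bevel.width = 0.003                     # ~3 mm, keep it subtle
bevel.segments = 1                      # one extra loop is often enough in VR
bevel.limit_method = 'ANGLE'            # only bevel hard edges
bevel.angle_limit = math.radians(40.0)
```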

7. Don’t count on reflections 

There are three main ways of getting reflections in a game engine: screen space reflections, baked reflection probes, and real time reflection probes. 

Screen space reflections are really hit or miss. They can be quite expensive since you’re rendering two screens, and they don’t look very accurate, especially near the sides of your vision. It could be worth trying them out for super simple scenes as a subtle effect, but for most things it’ll look quite unnatural when you move your head around. 

Baked reflection probes are the best bet for VR, but they’re still going to take precious resources, so only use them if you must. These will only reflect static objects but are much cheaper to render than real time probes.

Real time reflection probes look the best in theory, but they’re extremely resource intensive. If the only way to use them is to turn the quality down so low that they become really flickery and pixelated, it’s probably best not to use them at all. Only use these if you absolutely need to see the player in a mirror or something similar.

A big aspect of great PBR shading is the roughness map: subtle variations in how blurry the reflections of a surface are can really help sell a material as realistic. As you can imagine, blurring things takes resources, so roughness itself can be expensive to use. When possible, try to use mobile-friendly materials that use the old-school diffuse and specular method, and save the PBR roughness for objects that will really benefit from it.

8. Cut it out with the cutouts 

It’s pretty common practice to use alpha-mapped textures to make really low poly objects seem more detailed. It’s especially useful when it comes to grass and trees, but as with everything else in VR, there’s a caveat to how it should be used. In Unity, opaque objects are drawn front to back, but transparent objects are drawn back to front. When a transparent object is drawn on top of another object behind it, we have an overdraw of 1, meaning that those pixels need to be drawn twice. An overdraw of 6 means they will be drawn 7 times, and since we’re in VR, that problem is multiplied by two, so they’re actually drawn 14 times. Just for some grass cards! We can get away with using some transparency in our materials, but if we overdo it then we’ll see a significant performance hit. Some common culprits to be aware of are grass, foliage, trees, sci-fi computer screens, and particles.
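
Here is that grass-card arithmetic spelled out; the six-layer stack is just the figure used above:

```python
# Worst-case shading cost of stacked transparent cutouts in stereo:
# each extra transparent layer behind the front one shades the same
# pixels again, and VR renders everything once per eye.
overdraw = 6       # e.g. six grass cards overlapping along one sight line
eyes = 2

shades_per_pixel = (overdraw + 1) * eyes
print("Covered pixels shaded %d times" % shades_per_pixel)   # 14
```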

9. Bake everything 

Anything dynamic is going to take a lot more resources, so when making assets for VR, try to bake as much as possible. I already mentioned using pre-baked reflection probes, but also try to bake your lighting, with its shadows and global illumination. Batching isn’t quite the same as baking, but the idea of keeping as many things non-dynamic as possible is similar, so I’ll include it here as well. Be sure to set any non-moving objects as static. That way they can all be combined together, or batched, and rendered faster as one large mesh.
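
As a sketch of what a lighting bake can look like in Blender 2.79 with Cycles, the following bakes combined lighting into a new image for the active object. It assumes the object is UV-unwrapped and uses node-based materials; the image name, size, and output path are placeholders:

```python
# Sketch: bake combined lighting (direct + indirect) into a texture with
# Cycles. Cycles bakes into the *active* Image Texture node of each material.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

obj = bpy.context.active_object
obj.select = True                      # 2.79 API; the bake runs on selected objects
image = bpy.data.images.new("LightBake", width=1024, height=1024)

for slot in obj.material_slots:
    slot.material.use_nodes = True
    nodes = slot.material.node_tree.nodes
    tex_node = nodes.new('ShaderNodeTexImage')
    tex_node.image = image
    nodes.active = tex_node            # bake target for this material

bpy.ops.object.bake(type='COMBINED', margin=4, use_clear=True)

image.filepath_raw = "//LightBake.png"  # "//" is relative to the saved .blend
image.file_format = 'PNG'
image.save()
```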

10. Use LODs

Even when batching and instancing, polycount can still be an issue for scenes that have wide open spaces. A common way to minimize the polycount of objects that you’ll see both up close and far away is to use levels of detail, a.k.a. LODs.
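
A quick way to rough out LODs in Blender is to duplicate the object and add a Decimate modifier at decreasing ratios. The ratios and naming convention below are placeholders, and hand-tuned LODs will almost always look better:

```python
# Sketch: generate simple LOD copies of the active object with the
# Decimate modifier. Apply the modifiers (or do so on export) when happy.
import bpy

source = bpy.context.active_object
ratios = [(1, 0.5), (2, 0.2)]   # LOD1 keeps ~50% of the tris, LOD2 ~20%

for level, ratio in ratios:
    lod = source.copy()
    lod.data = source.data.copy()            # each LOD needs its own mesh data
    lod.name = source.name + "_LOD" + str(level)
    bpy.context.scene.objects.link(lod)      # 2.79 API

    mod = lod.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = ratio
```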

Our friends at Phosphor Studios wrote a great guide on how to make nature LODs for VR, and it’s a bit more complex than just reducing the number of polygons.

Remember the Pixels over Polygons rule? The second exception to it is transparency, since overdraw is even more of a problem than a couple of extra polygons. So the LOD closest to the camera has the geometry cut out around most of the transparent areas, the ones in the middle don’t use transparency at all, and the ones farthest away can rely on transparency since they’re unlikely to be drawn in front of opaque objects. They go over it in much more detail, so I recommend checking out their site to learn more.

Recap 

So to recap, build things to scale, think like a designer, use more pixels instead of more polygons except when it comes to transparency, share as many textures, materials, and other data between objects as possible, use fairly even texel density, be careful with normal maps, don’t count too much on reflections, reduce your overdraw, and cleverly use levels of detail. With those in mind, you’re ready to start making assets for VR!

Credits and further resources: 

The Design of Everyday Things by Don Norman

Mesh Baker by Ian Dean

Making Nature LOD's for VR by Phosphor Studios

Rick and Morty: Virtual Rick-ality by Owlchemy Labs, gameplay by Let's STFU and Play

Fallout 4 by Bethesda Softworks

Grav|Lab by Mark Schramm

RUSH by The Binary Mill, gameplay by Notopik Gaming

MIT Explains: How Does Virtual Reality Work? by MITK12Videos

HTC Vive model by Eternal Realm

Virtuix Omni: An Immersive Virtual Reality Gaming Experience by Rackspace Studios, SFO

Virtual Reality at Kitchener Public Library by Kevin Page