Managing 3rd Party Plugins Without Compromising Game Design

Something that was new to me when I started Unity development was the vast quantity of third party plugins. In the years I spent developing games with custom in-house engines, third party code did exist, but generally in the form of either plugins for artist tools such as Maya, or larger scale technology like Havok physics and Scaleform. Between Unity, mobile platforms that are almost always connected to the internet, and the rise of free to play development, the demand for and creation of third party plugins have risen.

Unity plugins come in a variety of types, and are built to address every possible step in the game development process. Some cover gaps in the Unity feature set, such as UI toolkits. Some exist to improve workflow, such as visual scripting tools. There are packages in the Unity Asset Store that are just full of raw content: 3D models, textures, and shaders. There are plugins to help you integrate third party SDKs, for things like in-app purchasing and analytics.

The Unity Asset Store does a great job of emphasising the feature set of these plugins, but provides no information on the performance cost. This is unfortunate, but not an easy problem to solve. Even if a developer were to take the time to measure the increase to build size, the runtime memory costs, and the runtime CPU costs of their plugin, those numbers would exist purely in a vacuum; there's no telling how the plugin would actually interact with your game. It is easy to look at a list of plugins as if you're at a buffet, ask for one of each, and maybe go back for seconds on some, like advertisement plugins. This gets out of control quickly, and you will discover that there is no more room left in available memory for your game. Instead, you've created Frankenstein's Monster out of plugins: lots of parts glued together into something that might resemble a game, but it's not quite human. The village full of gamers is going to recognize that the thing you created isn't actually a game, and will be at your door with torches to burn it down.

How can you achieve a balance of using some of these helpful plugins, but still leaving enough runtime memory and CPU time to actually build a game? Build strict budgets, measure the cost of individual plugins, keep a clear understanding of what each plugin does and why you need it, and build a pipeline that lets you disable and remove plugins on low end platforms. Yes, this is going to be a lot of work, but it is the only way to reach that sweet spot of maximizing the number of plugins you can use while still building the best game you can.

The first step is budgeting. This does not happen often at smaller developers, and even professional developers tend to start slipping on this step. The two primary budgets you need to build are runtime memory during gameplay and app download size. Depending on the design of your game, you might want to build memory budgets for other moments, such as menu flow. Many disciplines contributing to the game are going to want as much memory and app space as possible. Your sound engineers are going to want the best quality audio they can get, and will want a variety of audio. Your environment artists are going to want high resolution textures for their backgrounds. The animation team is not going to want to spend time tweaking compression settings to shrink the size of their animations. Once you implement a plugin, it is going to be difficult to convince management that you need to remove it from your game if you want that memory back. If you are in the full swing of production and have not made a budget, you will quickly find that you are no longer an engineer; you are now a diplomat, arranging trade agreements between the nations of game development. You will go to the environment artists and ask them if they can give you any runtime memory back, because all of their 1024×1024 textures are just so big. They will respond that "It's been like that in the game for months, no problem! Why don't you go ask someone else! I bet it's the audio engineers eating up all the memory!" After a week of back and forth, begging in people's offices, you might be able to arrange a trade: the environment artists will drop textures to 512×512 on even numbered levels, and the sound team will make the audio mono on odd numbered levels. No one is happy with this. It isn't even a realistic solution; what you will usually end up with is 512×512 and mono on all levels.

Hopefully I've scared you into wanting to actually make a budget, but how do you begin? First, you need to figure out what your memory target actually is. On modern platforms, memory is shared between many running apps. I covered some information on targeting platforms and the memory available on iOS here: https://josephstankowicz.wordpress.com/2013/06/04/ios-device-consideration/. Even if you are only targeting iOS for release, you are still making a multiplatform game, and will have to make a set of budgets for each memory level of devices. Because of this range of devices, you will want to figure out what your flexible and fixed memory costs are. You can always use lower resolution textures on lower end devices, which makes texture usage fairly flexible. The memory overhead of the Unity engine is fairly static, as is the memory used by your C# code. You are still going to have to do some diplomatic negotiation at this point to work with content contributors and lock down their limits. UI artists, animators, environment artists, audio engineers, and whoever is asking you to implement plugins are all going to ask for more more more, and you are going to have to work to keep people as happy as possible, while still making a game that can actually run on your target devices.
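
As a rough illustration of how the flexible side of a budget can be wired in, here is a minimal sketch that picks a texture budget tier at startup based on how much memory the device reports. The thresholds and sizes are made-up placeholders, not recommendations:

using UnityEngine;

// Hypothetical sketch: pick a per-device content budget tier at startup.
// The thresholds and budget numbers below are placeholders, not recommendations.
public static class MemoryTier
{
    public enum Tier { Low, Medium, High }

    public static Tier Current
    {
        get
        {
            int ramMB = SystemInfo.systemMemorySize; // total device RAM in MB
            if (ramMB <= 512) return Tier.Low;
            if (ramMB <= 1024) return Tier.Medium;
            return Tier.High;
        }
    }

    // Example of a flexible budget: the maximum texture dimension allowed per tier.
    public static int MaxTextureSize
    {
        get
        {
            switch (Current)
            {
                case Tier.Low: return 512;
                case Tier.Medium: return 1024;
                default: return 2048;
            }
        }
    }
}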

At this point, you have somehow made a budget, but a budget is useless if you have no way to measure and audit everyone's memory usage. Spending engineering time to build performance measuring tools is vital to making a game intended to push the limits of iOS hardware with Unity. With game content this is a bit of a time consuming endeavour, but not very difficult. Unity provides some useful functionality, like the ability to get the memory size of an asset at runtime in development builds: http://docs.unity3d.com/Documentation/ScriptReference/Profiler.GetRuntimeMemorySize.html. Using functionality like this, you can measure the size of the content in your game at runtime, and compare that information against your budgets to make sure each discipline is staying within the boundaries you gave them. It might take some time pasting this data into a spreadsheet and sorting it, but it will keep your game under control.
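
As a rough sketch of the kind of audit tool I mean, the script below dumps the runtime size of every loaded texture so you can paste the output into a spreadsheet. It assumes a development build, where Profiler.GetRuntimeMemorySize returns meaningful numbers (in newer Unity versions the Profiler class lives in the UnityEngine.Profiling namespace):

using UnityEngine;
using System.Text;

// Rough sketch of a content audit: dump the runtime size of every loaded texture.
// Profiler.GetRuntimeMemorySize only returns useful values in development builds.
public class TextureMemoryAudit : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.M))
        {
            StringBuilder report = new StringBuilder();
            Object[] textures = Resources.FindObjectsOfTypeAll(typeof(Texture2D));
            long total = 0;
            foreach (Object tex in textures)
            {
                int bytes = Profiler.GetRuntimeMemorySize(tex);
                total += bytes;
                report.AppendLine(tex.name + "," + bytes);
            }
            report.AppendLine("TOTAL," + total);
            Debug.Log(report.ToString()); // paste into a spreadsheet and sort by size
        }
    }
}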

Unfortunately, tracking and managing the memory of third party plugins is not as easy. These plugins are usually black boxes, and you have no control over their memory and other costs. Sometimes you'll be lucky and there will be configuration options, such as disabling videos for an ad service on lower end platforms. The easiest way to get an idea of the costs of a plugin is to find someone else who has already measured it. Maybe it's another person within your company, or maybe you just have to post online and ask people. You probably will not be so lucky, so the next way to find the cost of a plugin is to profile your game before and after integrating the plugin. You might be able to do this in an empty Unity project, or you might need to do this within your game. Now you know the memory cost of that plugin, and can keep it if it fits, or drop it if it doesn't! Or, what will more likely happen, your producer will tell you the new plugin needs to be put into the game, and you need to get that memory back. If you did not actually build any budgets, you now get to go begging between disciplines, asking people to reduce memory usage, or begging your producer to let you remove a different plugin. If you did build a budget, you have much more leverage: you can hold people to it, share it with your producer and explain that all memory is already accounted for, and work with your team to juggle memory around to fit the new plugin, such as removing a different plugin to make room for it.
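
A minimal sketch of that before-and-after measurement might look like the following, again assuming a development build. The numbers are coarse, but the difference between two snapshots gives you a first estimate of what a plugin costs; the plugin call in the usage comment is hypothetical:

using UnityEngine;

// Sketch: take coarse memory snapshots before and after initializing a plugin.
// The delta between two snapshots gives a rough idea of the plugin's cost.
// (In newer Unity versions these calls live on UnityEngine.Profiling.Profiler.)
public static class MemorySnapshot
{
    public static void Log(string label)
    {
        Debug.Log(label +
            " totalAllocated=" + Profiler.GetTotalAllocatedMemory() +
            " totalReserved=" + Profiler.GetTotalReservedMemory() +
            " monoUsed=" + Profiler.GetMonoUsedSize());
    }
}

// Usage:
//   MemorySnapshot.Log("before plugin init");
//   SomeAdsPlugin.Init();   // hypothetical third party call
//   MemorySnapshot.Log("after plugin init");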

This brings you to the biggest problem with using third party plugins: removal. The ideal Unity plugin would feature some sort of on/off switch, and when off would behave as if it was not in your game in any way. I don't think I have ever seen a plugin that was even close to this ideal. Every Unity plugin that we have tried removing or disabling has been a time consuming endeavour, and we were often still finding files related to plugins we had removed over a year after the initial removals. You might think that you're a clever engineer, and that you will just build a custom wrapper for every plugin you integrate to provide an easy on/off point, so you don't have to search through your codebase for calls into the plugin. Very few plugins are simple enough to wrap up like this. Many plugin developers are going to make use of the power of Unity, and the plugins will come with custom components and editor scripts. This provides many points of entry to a plugin, and a first attempt at a wrapper might end up just as complicated as the plugin was in the first place. Keeping this in mind can help you build a partial wrapper, to at least ease the process of removing a plugin a little bit. Properly documenting how the plugin is implemented in the first place, what all the pieces and files are, and how they fit together is going to be your best tool for plugin removal later. You will also want to let your producers and managers know that removing plugins is going to be a lengthy task, and why. Letting people know this ahead of time helps keep schedules manageable; you don't want to surprise people who assume that removing the plugin will be a quick task, only to be waiting on you for a week or longer.
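
For the plugins that are simple enough to wrap, a partial wrapper sketch could look like this. SomeAdsPlugin and the ENABLE_ADS_PLUGIN define are both hypothetical; the point is that game code only ever talks to the wrapper, so there is exactly one switch to flip when the plugin has to come out:

using UnityEngine;

// Partial wrapper sketch around a hypothetical ads plugin ("SomeAdsPlugin").
// Game code only ever calls AdsWrapper, so disabling or removing the plugin means
// touching this file plus the plugin's own components and editor scripts, instead
// of hunting for calls scattered through the codebase.
public static class AdsWrapper
{
#if ENABLE_ADS_PLUGIN   // hypothetical scripting define set in your build pipeline
    public static void ShowInterstitial()
    {
        SomeAdsPlugin.ShowInterstitial(); // hypothetical third party API
    }
#else
    public static void ShowInterstitial()
    {
        Debug.Log("Ads are disabled in this build");
    }
#endif
}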

The conclusion you might jump to after that is "I will never remove a plugin then. What I put into the game will be what I ship with. If I want to switch ad services or something, I will do that between projects." Unfortunately, there are many valid reasons to remove plugins during game development. The biggest one is Apple's constantly changing requirements and recommendations for iOS development. You might have a plugin you have shipped multiple games with, and find out that the features of the plugin are no longer allowed under Apple's latest iOS development rules. You'll find out that not only can you not ship a new game with that plugin, but you will have to go back and update all of your previous releases to remove the plugin, or they will be removed from the App Store. Another big cause of removal, one you probably have never dealt with if you have never used third party tools before, is factors related to the third party company, or your company's relationship with them. A big cause of these problems is the plugin or SDK provider getting bought out, or going out of business. OpenFeint did not last long after Gree bought it a few years ago, and once Gree shut down OpenFeint, you could not release a game with it, forcing you to remove the plugin before you could ship your game. Another thing you might not realize is that many third party plugins are written by a different party than the one that develops the SDK the plugin wraps, so you need to keep up with the status of both the SDK and the plugin. This becomes a problem if the SDK requires an update before you can ship a game, but the plugin developer is either too slow to update to the latest SDK, or not able to update at all.

Hopefully that is enough information to make working with plugins easier, and to keep your game development under control.


Writing Shaders 3 : Surface Shaders

Last time, I covered how to write a simple vertex lit shader influenced by a single light source. Adding support for more lights with the standard CG shader code can be extremely tedious, and this is where Unity provides a fantastic solution: Surface shaders. Surface shaders automate much of the process so you can implement a complex lighting model without writing a lot of repetitive code.

To get started with surface shaders, create a new shader in your project, and open it. You will have a file that looks very similar to this:

Shader "Custom/SurfaceShaderSample" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D (_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}

The first important change here, which is really easy to miss, is that the shader logic sits directly in the SubShader; there is no Pass. This is a limitation of surface shaders: when Unity compiles this shader file, it will automatically generate multiple passes as necessary. It does make writing a multipass shader that uses surface shaders difficult. The second change in this file is the #pragma surface surf Lambert. In the previous shader posts, we used #pragma to define the vertex and fragment shader programs. The #pragma surface command tells Unity that this is a surface shader, and defines the surface shader program, as well as the lighting model. There are two built in lighting models you can use, Lambert and BlinnPhong. Lambert is the diffuse lighting model we have been working with so far, and BlinnPhong is the specular lighting model that makes things appear shiny. You can also specify some optional parameters at the end of the #pragma surface line to further configure the shader.

The surface shader program functions very similarly to the fragment programs from previous lessons. The surface shader program will contain all of your shader logic except for the lighting behavior. The output is a little different: aside from the obvious change of modifying an inout variable instead of returning a color value, you have probably noticed you are setting a value called "Albedo" instead of one labeled color.

Chances are, if you are reading this shader tutorial, you are not very familiar with lighting terminology. Albedo is also known as the "reflection coefficient", and it is the diffuse reflecting power of a surface. A simpler description, within the context of writing these shaders, is that the albedo is the diffuse color of an object: the color it will be under full, white light.

The final change in this file is the fallback near the end. With modern game development, and multiplatform tools like Unity, very few developers are going to willingly limit their game to a single piece of hardware; often the farthest you might go is limiting your game to a single platform. Even within a single platform, such as iOS, there is a range of hardware and graphics processors. Specifying a fallback tells Unity what shader to substitute on hardware that cannot run this one.

Once you understand these basics of surface shaders, you should be able to understand the wonderful Unity documentation and example code on the subject. I'm going to give you a few links to read, and the rest of this blog post will cover terminology Unity uses but does not define. This is the base page on surface shaders for Unity, and will be your primary reference point: http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaders.html. This page goes through the process of writing custom lighting models for surface shaders: http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderLightingExamples.html. This page covers everything except the lighting models for surface shaders: http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderExamples.html.

The first set of terms to define are those contained in the SurfaceOutput structure on the reference page. I've already defined Albedo. If you have somehow forgotten what Normal means, it is the facing of the vertex / fragment being rendered, and is the core piece of data for computing how much a given light is affecting the fragment. Emission is the color the object will be in the absence of light. Specular and Gloss both modify the specular highlight: Gloss affects the size, and Specular affects the intensity. Specular highlights are the bright spots that appear on shiny objects. The Alpha value contains the transparency of the object, if the object is set up to be transparent.

In the lighting examples page, the concept of subsurface scattering is mentioned in the diffuse wrap example. Subsurface scattering is when light enters the surface of a translucent object, bounces around under the surface, and exits at another point. Human skin is a real world material affected by subsurface scattering, which is why it is something often simulated in shader logic. Wrapped diffuse lighting is one way of faking subsurface scattering. In the example on the Unity page, what they mean by wrapped is shifting the lighting term from the -1 to 1 range into the 0 to 1 range, by halving the dot product between the normal and the light direction and adding 0.5 to it.

The toon ramp example covers a very important shader concept: using textures as a way to pass data into a shader. Shader programs are very performance demanding, and doing any complex calculation can cause a shader to drop the framerate of your game. An important thing to note when pulling a color value out of a texture with a shader is that you index into the texture's horizontal and vertical axes with a 0 to 1 value. This means that any time you have math in your shader logic that maps nicely to a percentage, such as the previously mentioned wrapped diffuse value, you can use this value to look up information in a texture. Generally these data textures are treated by shader logic as one dimensional, even though they are usually going to be something like 256 pixels wide by 8 pixels tall, or a square 256×256, due to texture restrictions. In the toon ramp example, they index into a texture, which is the gradient seen below the two sample pictures. You can get creative with this: by changing the input texture from a smooth gradient to a really blocky gradient, you can create a very cartoony looking effect. Here is an example of this, using the sample toon ramp shader: http://i.imgur.com/BBrERWS.png.

On the surface shaders example page, one example generates rim lighting by setting the emission to some math that generates a rim value. The rim of an object is defined by the fragments that are pointing at a 90 degree angle away from the camera's view direction. The saturate function clamps a value within the range of 0 to 1. Taking the dot product of the camera's view direction and the normal of the fragment and saturating it, we get a 0 to 1 value, with 1 being fragments facing directly at the camera, and 0 being fragments facing 90 degrees away. Flipping this value, by subtracting it from 1, gives us math that defines the rim of an object, with anything at value 1 facing a 90 degree angle away from the camera, and anything at 0 facing towards the camera.

This rim value allows us to simulate Fresnel reflection. Think of a soap bubble, and how, even though you can easily see through the middle of it, you can also clearly see a circular outline for it, no matter what angle you look at it from. When light exits a partially transparent substance, such as water, the direction the light is moving at changes due to refraction. If the angle of the light is past what is called the critical angle, then the light will not exit the medium, and will instead reflect within the medium. Fresnel equations are used to figure out how much light is refracted, versus how much light is reflected. So, for an object like a bubble, you can see the edges more clearly because you are seeing the light that has reflected internally instead of immediately refracting out of the object. We simulate this effect in shader logic by making an object more opaque the closer to the rim, and more transparent the more the fragment is facing the camera.

This is a simple shader that makes a fragment more opaque the closer to the rim it is, and you can see it in action here: http://i.imgur.com/28Cl7aY.png.

Shader "Custom/SurfaceShaderSample" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf Lambert finalcolor:mycolor

        struct Input {
            float2 uv_MainTex;
            float3 viewDir;
        };

        sampler2D _MainTex;

        void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
        {
            half rim = 1 - saturate(dot (normalize(IN.viewDir), o.Normal));
            color.a = rim;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}

The first new thing in this block of shader code is the logic to put this shader into the transparent rendering queue, and to inform the renderer how to blend the pixels from this shader against the pixels it is rendering in front of. The next new thing is the use of a final color modifier. This is where you can do any logic after the lighting is applied. I could have done the rim math within the surface function, but in a later example I am going to also adjust the color here.

Earlier, in the section on the toon shader, I mentioned how powerful using textures as data can be. This Fresnel shader logic we have generates a 0 to 1 value, and my favorite thing to do with percentage values like this is to pull data out of a texture.

Shader "Custom/SurfaceShaderSample" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Ramp ("Shading Ramp", 2D) = "gray" {}
    }
    SubShader {
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf Lambert finalcolor:mycolor

        struct Input {
            float2 uv_MainTex;
            float3 viewDir;
        };

        sampler2D _MainTex;
        sampler2D _Ramp;

        void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
        {
            half rim = 1 - saturate(dot (normalize(IN.viewDir), o.Normal));
            half4 rimCol = tex2D (_Ramp, float2(rim, 0));
            color.a = rimCol.a;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}

Indexing into the texture using the rim value gives us a lot of control over this shader, and can produce many different effects http://i.imgur.com/9cvv0Eg.png, http://i.imgur.com/MZKekMe.png and http://i.imgur.com/NvatwjG.png.

In the next iteration of this shader, I am going to mix the color of the ramp texture with the color of the main texture. This lets us give a really strong glow outline to the edge of the rim, as seen here: http://i.imgur.com/PneclFt.png.

Shader "Custom/SurfaceShaderSample" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Ramp ("Shading Ramp", 2D) = "gray" {}
    }
    SubShader {
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf Lambert finalcolor:mycolor

        struct Input {
            float2 uv_MainTex;
            float3 viewDir;
        };

        sampler2D _MainTex;
        sampler2D _Ramp;

        void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
        {
            half rim = 1 - saturate(dot (normalize(IN.viewDir), o.Normal));
            half4 rimCol = tex2D (_Ramp, float2(rim, 0));
            color.rgb = color.rgb * (1 - rimCol.a) + rimCol.rgb * rimCol.a;
            color.a = rimCol.a;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}

So far we have been working with static, unanimated data. Unity's animation tools allow us to animate any shader property we want. So we're going to add a new property as an offset for the ramp lookup, and animate that, which will give us this effect: http://i.imgur.com/BvOLVng.gif.

Shader "Custom/SurfaceShaderSample" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Ramp ("Shading Ramp", 2D) = "gray" {}
        _RampOffset ("Ramp Offset", Range(0.0,1.0)) = 0.0
    }
    SubShader {
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf Lambert finalcolor:mycolor

        struct Input {
            float2 uv_MainTex;
            float3 viewDir;
        };

        sampler2D _MainTex;
        sampler2D _Ramp;
        float _RampOffset;

        void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
        {
            half rim = 1 - saturate(dot (normalize(IN.viewDir), o.Normal));
            half4 rimCol = tex2D (_Ramp, float2(rim + _RampOffset, 0));
            color.rgb = color.rgb * (1 - rimCol.a) + rimCol.rgb * rimCol.a;
            color.a = rimCol.a;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}

That is the basics of surface shaders. Hopefully you’re ready at this point to start experimenting and constructing your own shaders in Unity.


Writing Shaders 2 : Vertex Lighting


Last week’s post on shaders covered building a simple, unlit shader with a texture. While unlit shaders render extremely quickly, they are boring, and generally artists are going to want some form of lighting available in their shader selection. Unity provides some useful pages on shader lighting models, such as this one http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderLightingExamples.html. This week, I am going to cover the most basic lighting model, diffuse lighting, applied through the vertex program.

To begin, we need to cover a few basics of lighting terminology. For the lights themselves, the ambient light is a scene wide lighting value, applied to everything influenced by lights. Ambient lights can be useful for ensuring that nothing in the scene ends up pure black, and is visible in some fashion. Directional lights are similar to ambient lights in that they are global, but have a directional facing, and cast light based on that direction. Point lights, area lights, spot lights, and other lighting types cast light within specific areas. Here is Unity’s documentation page on lights http://docs.unity3d.com/Documentation/Components/class-Light.html.

The other side of lighting terminology is the light reflection models used by the shaders. Diffuse is the simplest, scattering the light it reflects and appearing very flat. Real world diffuse reflection can be seen on things like paper or fabric. This behavior is caused by the light scattering in many directions when it reflects off the object. The specular lighting model mimics light reflecting in a much more uniform direction. Shiny, reflective objects like metals and smooth plastic are often represented with a specular lighting model. The first lit shaders you write are going to be a combination of one or two simple lighting techniques. As you learn more individual shader techniques, you will eventually be able to piece together a very complex shader that combines diffuse and specular lighting with a few other techniques to emulate real world objects, techniques such as environment maps, normal maps, and subsurface scattering. For now, we are going to keep things simple, and just focus on diffuse lighting.

First we are going to cover vertex lighting. Generally, for any given lit object, you will have fewer vertices than pixels rendered. This results in a fairly fast lighting technique, as the lighting math is done per vertex, and interpolated per pixel rendered. The visual effect is not perfect, as you can see when you compare the vertex lit sphere below to the pixel lit sphere next to it, but it can often be good enough, especially on mobile devices.

Vertex lit sphere: vertexlit_diffuse. Pixel lit sphere: pixellit_diffuse.

This vertex lit shader can only be lit by a single directional light and the ambient light.

Shader "Custom/Vertex_Lit_Texture"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" { }
    }
    SubShader
    {
        Pass
        {
            Tags { "LightMode" = "ForwardBase" }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"
            #include "Lighting.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;

            struct appdata
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 col : COLOR0;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);

                // UnityCG.cginc includes UnityShaderVariables.cginc
                // This file defines _WorldSpaceLightPos0 and _World2Object
                // UNITY_LIGHTMODEL_AMBIENT is a built in variable, defined here
                // http://docs.unity3d.com/Documentation/Components/SL-BuiltinStateInPrograms.html
                // _LightColor0 is defined in Lighting.cginc

                float4x4 modelMatrixInverse = _World2Object;
                float3 normalDirection = normalize(float3(mul(float4(v.normal, 0.0), modelMatrixInverse)));
                float3 lightDirection = normalize(float3(_WorldSpaceLightPos0));
                float3 diffuseReflection = float3(_LightColor0) * max(0.0, dot(normalDirection, lightDirection));

                o.col = float4(diffuseReflection, 1.0) + UNITY_LIGHTMODEL_AMBIENT;

                return o;
            }

            fixed4 frag (v2f i) : COLOR
            {
                return tex2D (_MainTex, i.uv) * i.col;
            }
            ENDCG
        }
    }
}

The first change made is the shader tag marking the light mode as forward base. This tells Unity that this shader only needs a single rendering pass, the base pass, which will only supply it a single light. You can read more information on pass tags here http://docs.unity3d.com/Documentation/Components/SL-PassTags.html and on forward rendering here http://docs.unity3d.com/Documentation/Components/RenderTech-ForwardRendering.html.

The second change is the inclusion of Lighting.cginc. This file contains some helpful variables we can use for building our lighting model. The next change is the inclusion of the vertex normal in the input for the vertex program. The normal is used to determine the facing of the vertex, which we compare against the lighting direction to determine how much that light influences the vertex. There is now a color value passed from the vertex program to the fragment program, which is multiplied against the texture to give a final pixel color.

The first step in the lighting math is to transform the vertex normal from the object's local space into world space, so it can be compared against the world space light direction. I recommend changing that line of code to "float3 normalDirection = v.normal;" to see what happens if you leave the normal in object space. If you apply lighting without accounting for the object's transform, then the rotation and scale of the object will not affect the lighting on the object; it will behave as if it was unrotated at the origin.

If you need a quick refresher on how the dot product works, the dot product of vectors A and B gives you the magnitude of vector A times the magnitude of vector B times the cosine of the angle between the two vectors. By normalizing both input vectors, the dot product gives us the cosine of the angle between them. If the normal of the vertex points directly toward the light, then we get a value of one, which means that vertex is fully influenced by that light, and we can apply the color of that light fully. As the angle between the light and the vertex normal approaches 90 degrees, the influence of the light approaches zero. We clamp the lighting influence to 0, because the light does not subtract color from normals facing away from the light. This lighting influence value is then multiplied against the color of the light, which is then added to the color of the ambient light to give a final color for the vertex.

This is a good stopping point. I know when I first learned shaders I found a lot of this overwhelming; it was easier for me to pick this knowledge up in smaller, focused, bite-sized pieces.


Writing Your First Shader

I know many engineers who are interested in writing shaders, but don't know where to begin. Shaders, especially in Unity, are a lot of fun to write and work on. Generally when you're working on a shader it's for a specific feature, and your goal is to make that thing look awesome. You can iterate quickly on shaders, and Unity does a great job of providing an abstraction layer so you don't have to worry as much about individual platforms. In other engines I have worked with, the pipeline for adding a new shader involved a lot of work. I would write a new shader file to be referenced in Maya, add the new shader to the interface in Maya for selecting shaders, update the code that exports data from Maya to export the new shader, update the per-platform compilation process for the game to push the new shader data through, and then work with the graphics programmers on each platform to load in that data in-game, and write the shader code per-platform. This entire process could take as long as three weeks for a new shader. In Unity, it can be as quick as five minutes. Shaders are assigned within the Unity editor, after the art has been exported from Maya, and Unity handles compiling your shader for each platform.

Unity provides you three ways to write shaders: fixed function ShaderLab shaders, surface shaders, and standard CG vertex and fragment shaders. I'm going to focus on the vertex and fragment shaders written in Nvidia's C for Graphics language, as it's a shader language not exclusive to Unity, so the code is a little more portable. Unity has a lot of great reference available on its site for writing shaders, such as here: http://docs.unity3d.com/Documentation/Components/SL-Reference.html. Even with that information available, developers are still hungry for more info on shaders. I'm going to walk through a simple shader setup here; at the end of this walkthrough you will have an unlit shader that displays a texture.

First, make a new Unity project to work in. Next, right click in your project view, and create your shader. For now, you should just give it a name like "Unlit_Texture." Naming shaders can be pretty difficult, but it is extremely important you provide a strong name for your shaders that makes it obvious what each one does, so the content creators who are browsing through the shader list later have a clear idea of the purpose of each shader. Next, create a material in your project, and assign your new shader to that material. After that, get a texture file and place it in your project. If you have nothing available, do a Google image search for something like "The best texture ever", and use one of those as your test texture. Finally, create a sphere in your Unity scene, and apply the material to it. You now have an object in your scene using the shader you will work in. As you make changes, you can tab from your code editor back to Unity, and see the changes you have made to your shader easily.
Now that you have an environment set up to work on a shader, open that shader up in your code editor of choice. You'll see Unity has pre-populated it with some logic and functionality. What you have by default with a new shader is a diffuse lit surface shader. Today we are focusing on CG shaders, so let's clear the file out down to the following code:

Shader "Custom/Unlit_Texture"
{

}

All this does is define the name of a shader, and provide the scope for writing the shader within the curly braces. I recommend matching the shader name to the name of the file, so it's easy to match the code to the interface for selecting shaders. The "Custom" prefix is the folder path Unity uses when presenting the shader selection interface to content creators. For now this is OK, but as you write more shaders, I recommend devising a strict naming convention for folder paths, so content creators can find the shaders they intend to use.

Next, add a subshader to your shader.

Shader "Custom/Unlit_Texture"
{

SubShader
{

}

}

We're only going to have one subshader in this file; subshaders are generally used to handle different video cards and different platforms. The shader we are building is simple enough that we don't need to worry about this.

After a subshader, add a pass to your shader.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

}

}

}

When rendering an object with a shader, each pass will be rendered. For our shader, we will only have one pass. Multiple passes are used for a variety of reasons, such as rendering a character in an RTS with a solid color if they are obscured by a building.

Next, we need to define this shader as a C for Graphics shader.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

CGPROGRAM
ENDCG

}

}

}

Any code within the “CGPROGRAM” and “ENDCG” tags is handled as CG code.

We are finally getting to the meat of the shader, up next are the function declarations for the vertex and fragment programs.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

CGPROGRAM
#pragma vertex vert
#pragma fragment frag

vert()
{

}

frag() : COLOR0
{

}

ENDCG

}

}

}

The pragmas there tell the compiler what the vertex and fragment programs are called. You will often hear fragment and pixel shader used interchangeably; fragment is technically more accurate, but people seem more familiar with the concept of a pixel shader. Another pattern you'll start seeing a lot in shaders is a colon after a declaration, followed by a word. In the declaration of the fragment program in the above code, you'll notice : COLOR0. This is called a semantic, and it is used to inform the shader compiler of the intended usage of whatever it is next to. See the MSDN page on semantics for more information: http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx. Selecting semantics can sometimes get tricky, and if you select your semantics incorrectly, you might have problems that only occur on certain video card families.

Up next we are going to add some structure declarations.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
struct v2f
{

};

struct appdata
{

};

v2f vert(appdata v)
{

}

frag(v2f i) : COLOR0
{

}

ENDCG

}

}

}

The basic flow of rendering a mesh is: each vertex is run through the vertex program you define in the shader. Then, each pixel between the vertices that form a triangle is rendered with the fragment program. The data input into the fragment program is automatically interpolated from the three vertices in the triangle the pixel is a part of. The appdata structure defines what information we want passed into our vertex program. The v2f structure is the data we want passed from the vertex program to the fragment program.

The next set of code will be the first shader you can actually run. It will render the sphere you assigned this shader to entirely white.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

CGPROGRAM
#pragma vertex vert
#pragma fragment frag

struct v2f
{

float4 pos : SV_POSITION;

};

struct appdata
{

float4 vertex : POSITION;

};

v2f vert (appdata v)
{

v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
return o;

}

fixed4 frag (v2f i) : COLOR0
{

return fixed4(1,1,1,1);

}

ENDCG

}

}

}

A few new things here. The position of the vertex is passed into the vertex program. This position is local to the model you are rendering, so it needs to be transformed before it can be rendered. The vertex position is multiplied by the model * view * projection matrix to place it in the space the GPU expects. You can see other global state data available to you with Unity shaders here: http://docs.unity3d.com/Documentation/Components/SL-BuiltinStateInPrograms.html. Finally, the fragment program returns a color value for the pixel rendered. In this case, the fragment program is returning pure white, an RGBA value of 1,1,1,1. I'm sure you are tempted at this point to get a slightly more interesting color, maybe display that position value as a color. If you change the fragment program to return i.pos, you will notice that your game object turns bright pink in the Unity editor, and you have an error "Shader error in 'Custom/Unlit_Texture': Program 'frag', Shader model ps_4_0_level_9_3 doesn't allow reading from position semantics. (compiling for d3d11_9x) at line 27." The position value is used by the GPU to calculate where to display the fragment, and is not something you can access directly within your fragment program.

Let's get some basic color in there; we'll pass a color value from the vertex program to the fragment program.

Shader "Custom/Unlit_Texture"
{

SubShader
{

Pass
{

CGPROGRAM
#pragma vertex vert
#pragma fragment frag

struct v2f
{

float4 pos : SV_POSITION;
float4 col : COLOR0;

};

struct appdata
{

float4 vertex : POSITION;

};

v2f vert (appdata v)
{

v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.col = o.pos;
return o;

}

fixed4 frag (v2f i) : COLOR0
{

return i.col;

}

ENDCG

}

}

}

You'll notice that the color of the object depends on its position in the scene; move the camera around and notice that the sphere changes colors. What we've added here is a color value to the v2f structure, populated by the position of the vertex in the vertex program, and referenced by the fragment program to return as a color.

We are ready for the final, unlit texture shader, as follows.

Shader "Custom/Unlit_Texture"
{

Properties
{

_MainTex ("Texture", 2D) = "white" { }

}
SubShader
{

Pass
{

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include “UnityCG.cginc”
sampler2D _MainTex;
float4 _MainTex_ST;

struct appdata
{

float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;

};

struct v2f
{

float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;

};

v2f vert(appdata v)
{

v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
return o;

}

fixed4 frag(v2f i) : COLOR
{

return tex2D (_MainTex, i.uv);

}

ENDCG

}

}

}

The first new thing here is the Properties block. This is the data that needs to be set externally from the shader. In this case, we are looking for a texture. If no texture is provided, the default value is white. Next, you'll notice that we defined a sampler2D _MainTex and a float4 _MainTex_ST within the shader pass. This is the collection of information that our vertex and fragment programs can use to access the texture. It includes tiling and offset information, as well as the information to pull a color out of the texture. Within the appdata structure, you will notice we have added a texture coordinate. This is a piece of data stored on every vertex of the mesh, a two dimensional vector generally referenced as "UV" coordinates, used to reference a location within a texture. Generally, the artists will define these texture coordinates within their content creation tool, such as Maya, and build and lay out the texture to be used by the mesh. TRANSFORM_TEX is a macro defined within the file we included, "UnityCG.cginc." These include files contain a lot of really useful tools and shortcuts for building your shaders, and are worth looking through to see what is available to you. What this macro does is apply the offset and tiling information to the texture coordinates. To see the effect, change the tiling and offset information on your material. You'll notice the texture in the scene view changes position on your sphere. If you change the line of code to "o.uv = v.texcoord" you will notice that you lose this tiling and offset information; the texture behaves as if you have a tiling of 1,1 and an offset of 0,0. The final new line of code here is in the fragment program, the call to the tex2D function. This function takes in the texture we want to pull information from, and the UV coordinates, and returns the color value at that position.

Now you have a very simple shader that renders an unlit texture. Next time, I will cover lighting, to bring a little more life to the model displayed.


3D Printing : Unclogging The Hot End

In my last post on 3D printing, I mentioned that the hot end of my 3D printer was clogged up with some red PLA. I got some ideas from asking around on handling this, and I ended up trying the "drill it out" solution first. Well, the first thing I did was place an order for a new hot end; that way I knew I had a replacement on the way in case I destroyed my current hot end. The drilling kind of seemed to work, the drill went through… but I was a little offset and ended up widening the hole in the PEEK insulator that the plastic is meant to travel through. I don't know how much this matters, but it was probably not good.

After drilling through, I was able to use a thin metal tool to pull a lot of red plastic out. After this, I grabbed a strand of black ABS, and started pushing it through the hot end while the hot end was heated up. This seemed to work; I was getting red plastic coming out the bottom of the hot end for a while. I kept at this, trying to keep pushing things out until I was getting black plastic, and this seemed to work. Next, I switched to a white ABS to push through until another color change, because I wanted to make sure I had a clear hot end. While pushing out the remaining black plastic, I was getting spurts of red plastic extruding as well.

After pushing this for a while… the entire thing started clogging up. At this point, I spent a few hours using different tools to push in plastic and pull it back out, trying to clear whatever the jam was. Every time I would think I had made a lot of progress on clearing the clog, having pulled out a ton of plastic, I would try pushing a strand of plastic through and it would always end up stuck again.

While trying to unclog my hot end, I noticed all of a sudden that my hot end was down to room temperature, and the software on my laptop was giving me an error that there was no connection to the hot end. After making sure it was safe to do so, I disconnected my hot end to check it out. One of the cables had become frayed. At this point, I got out my soldering iron, and soldered the wire back together. Luckily this fixed the connection, and I was able to heat up my hot end again.

I was still not able to unclog it, so I decided to try out some new problem solving ideas. I figured at this point any clog would hopefully be ABS and not PLA, since I had managed to extrude a lot of ABS since digging out the PLA. ABS plastic dissolves in acetone; PLA does not. When my hot end was fully cooled off, I used my clamp to hold it vertically over a container of acetone, and poured some acetone into the top of the hot end. Again, if I ruined things I did have a new hot end on its way in the mail, so I was willing to learn through making mistakes on this current hot end. After leaving this for a while, I did notice a strand of partially dissolved plastic in the acetone, which meant that something had come out of the hot end.

I left the hot end outside the acetone for a day to make sure it was fully dried and cleared off, and plugged it in again to heat it up and see if I could extrude some plastic. Again, I was having the same problem, where I was getting no plastic leaving the tip of the hot end. I am wondering if, at this point, the hot end is jammed with plastic from the PEEK insulator that I scraped off with the drill when I originally tried to remove the PLA. I might spend some more time trying to clean out the hot end this weekend, maybe bring it to Metrix to see if anyone there has ideas. Worst case, this thing is broken, and I use the new hot end on its way in the mail.


Short Blogging Break

I’m going to take a short break from my Unity3D blogs for a few days to try and build up a good backlog of topic ideas, as well as keeping myself from burning out. Here’s my current list of topics to cover, anyone have any suggestions to add to the list? I’m mainly focused on performance and pipeline for Unity, especially as it relates to iOS and mobile devices.

Shaders for Content Creators : Selecting Shaders for Your Art (define terminology like cubemaps)
Shaders for Engineers : Making Shaders Go Fast
Animation With Shaders and Materials
Using Textures As Precomputed Data Tables For Shaders
Runtime Lighting (include lighting through spherical harmonics in here)
Lightmaps (would be a description of the idea behind lightmapping, and not a walkthrough of lightmapping)
Level of Detail (Cover runtime performance benefits, and using LoD to handle lower end devices)
Unity Physics


Building a Custom Resource Loading Solution

There is going to be a point in the development of many projects for iOS and Android when you might decide your system for loading content at runtime needs a major upgrade. Maybe you're targeting a 50 MB size for your app so people can download it over a cellular connection. Perhaps your engineers are getting frustrated tracking what content needs to be loaded with which file loading system. You might decide you want to be able to replace content without pushing a full app update. To solve these issues, I recommend writing a custom resource loading system.

A custom resource loading system is going to have multiple parts, working together to give your team a focused pipeline for loading and displaying content at runtime. The solution I recommend works as follows. At the center of the system, the function game code calls should be as simple as "CustomAssetLoader.LoadAssetAsync(assetID)". First, the custom asset loader would check if the asset was already loaded by checking the refcounts it keeps track of, and if so it could return a reference to the loaded asset. If the asset was not already loaded, the custom asset loader would then internally have a list of individual asset loaders it could use to load an asset. The custom asset loader would use a hinting system to decide which asset loader to use for this particular asset. These individual asset loaders would implement different Unity file loading methods: Resources.Load, AssetBundles, the WWW class. Even though not all of Unity's resource loading supports asynchronous loading, I recommend you write your code as if everything is asynchronous; keeping it so all code that interacts with this system assumes all assets are asynchronously loaded will force strong development patterns.
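
To make the shape of this concrete, here is a minimal sketch of the facade and the per-backend loader interface. Every name here (IAssetLoader, CustomAssetLoader, the hint strings) is hypothetical, and reference counting and error handling are left out for brevity:

using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical sketch of the facade described above; every name here is made up,
// and reference counting and error handling are left out for brevity.
public interface IAssetLoader
{
    IEnumerator Load(string path, Action<UnityEngine.Object> onLoaded);
}

public class CustomAssetLoader : MonoBehaviour
{
    private readonly Dictionary<string, UnityEngine.Object> loaded =
        new Dictionary<string, UnityEngine.Object>();   // cache of already loaded assets
    private readonly Dictionary<string, IAssetLoader> loaders =
        new Dictionary<string, IAssetLoader>();          // loader hint -> backend loader
    private readonly Dictionary<string, string> pathForAsset =
        new Dictionary<string, string>();                // asset id -> path or URL
    private readonly Dictionary<string, string> hintForAsset =
        new Dictionary<string, string>();                // asset id -> loader hint

    public void LoadAssetAsync(string assetId, Action<UnityEngine.Object> onLoaded)
    {
        UnityEngine.Object cached;
        if (loaded.TryGetValue(assetId, out cached))
        {
            onLoaded(cached); // already in memory, hand it straight back
            return;
        }

        // Pick the backend loader (Resources, asset bundle, WWW, ...) via the hint.
        IAssetLoader backend = loaders[hintForAsset[assetId]];
        StartCoroutine(backend.Load(pathForAsset[assetId], asset =>
        {
            loaded[assetId] = asset;
            onLoaded(asset);
        }));
    }
}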

Deciding what to use for your hinting system for your custom asset loader can be a bit tricky. I recommend abstracting your asset loading system so game code does not know or care about file paths, and instead uses an asset identification system. This would allow the custom asset loader to take in that asset identification, and check the asset database for the path to load. You could then build other useful information for the asset loading system into this, such as an identifier of which file loading system to use for that asset. You can also include dependency information in your asset database, which becomes very useful if you have a texture you want to make sure is always loaded when a certain model is loaded. While building this, you need to be very conscious of the pipeline for adding and changing assets, and try to provide as much automation as you can. For asset bundles, you can automate this as a step in your asset bundle build process: have the asset bundles add their manifest of assets to the asset identification system, as well as marking those assets as included in asset bundles, and which bundles they are in. For Resources.Load, you can write a function that scans through all Resources folders in your project, and adds them to the asset identification system. This function can then be called as a step in the build process. For assets kept on a server, you might add functionality into the system that you use to put those assets on the server to also update the asset database.
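
Here is a sketch of what one entry in that asset database might carry; the field names are hypothetical, and the table itself would be populated by your asset bundle build step, Resources folder scan, and server upload tool:

using System;
using System.Collections.Generic;

// Hypothetical asset database entry, written out by the build pipeline
// (asset bundle build step, Resources folder scan, server upload tool).
[Serializable]
public class AssetRecord
{
    public string assetId;       // stable identifier game code uses
    public string path;          // path or URL the backend loader needs
    public string loaderHint;    // e.g. "resources", "bundle", "www"
    public string bundleName;    // which asset bundle contains it, if any
    public List<string> dependencies = new List<string>(); // other asset ids
}

public static class AssetDatabaseTable
{
    private static readonly Dictionary<string, AssetRecord> records =
        new Dictionary<string, AssetRecord>();

    public static void Register(AssetRecord record) { records[record.assetId] = record; }
    public static AssetRecord Lookup(string assetId) { return records[assetId]; }
}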

With a smart asset manifest system, implementing the custom asset loader becomes much simpler. At runtime, the location of each asset is stored in the asset database, so the custom asset loader's "LoadAssetAsync" function has a few simple steps. First, check if the asset is already loaded and cached, and if so, just return the cached asset. Second, access the asset database, and check which individual asset loader to use to load that asset. Finally, call that individual asset loader with the information it needs to load the asset. This system is also modular, and you can begin with support for just one back end, such as Resources.Load. At that point you can upgrade all your game code to go through this system, and then you will not have to make any major changes as new asset loading systems come online.

The other big benefit of this system is running the game within the Unity editor. A very common problem with trying to build an asset loading solution, especially for asset bundles, is testing in editor can become very frustrating. Generally, your team will want to be able to iterate quickly in editor, and if they have to wait on asset bundles to build every time they make a change to an asset, you will have a very frustrated team. If your custom asset loader checks if you are running Unity in the editor, then you can have it fall back to loading content outside of asset bundles by using Resources.LoadAssetAtPath, a function that only works when running the game in the Unity Editor.
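
A sketch of that editor escape hatch is below. Resources.LoadAssetAtPath is the editor-only call mentioned above; in later Unity versions the equivalent lives on UnityEditor.AssetDatabase, so treat this as a sketch for the Unity of this era:

using UnityEngine;

// Sketch: when running in the editor, skip asset bundles entirely and load assets
// straight from the project so the team can iterate without rebuilding bundles.
public static class EditorAssetShortcut
{
    public static Object LoadDirect(string projectRelativePath, System.Type type)
    {
#if UNITY_EDITOR
        // Editor-only call; in player builds this path is never taken and the
        // real loaders (Resources, bundles, WWW) handle the request instead.
        return Resources.LoadAssetAtPath(projectRelativePath, type);
#else
        return null;
#endif
    }
}
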
The next step, after you have a function that can load in a single asset asynchronously, is upgrading your custom asset loader to handle groups of assets. Often times, game code will know it needs to load a bunch of assets at once, such as the texture, model, and animation for a character, and the person writing this game code just wants to pass off this list of assets to the asset loading system to load everything at once. The simplest way to implement this is a coroutine that does not consider itself finished until it has loaded every asset in the list.
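
One way to sketch that group load, reusing the hypothetical CustomAssetLoader from earlier, is a coroutine that simply counts completed callbacks:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

// Sketch: load a whole list of assets and only finish once every one has arrived,
// built on the hypothetical CustomAssetLoader facade from earlier.
public static class GroupLoading
{
    public static IEnumerator LoadAll(CustomAssetLoader loader, List<string> assetIds)
    {
        int remaining = assetIds.Count;
        foreach (string id in assetIds)
        {
            loader.LoadAssetAsync(id, asset => { remaining--; });
        }
        while (remaining > 0)
        {
            yield return null; // wait a frame and check again
        }
    }
}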

A custom asset loader also needs to be able to handle asset loading failure gracefully. Maybe the user has no internet connection and your game is trying to download an asset. Maybe the server is taking a long time to respond, or download speeds are extremely slow. You will probably want to build a timeout into the custom asset loader to bail if an asset fails to load. If you are using the asset fallback system I will describe next, then your game code should not even need to care if an asset fails to load, as it will just display one of the fallbacks. If you are using any sort of event reporting system, such as Flurry, for tracking user behavior, then this is a great spot for an event: you can track what assets fail to load, and why.
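
A sketch of a timeout around a WWW download might look like this; the timeout length and the analytics reporting are placeholders you would fill in for your own game:

using UnityEngine;
using System;
using System.Collections;

// Sketch: bail out of a download that takes too long instead of stalling forever.
public static class TimedDownload
{
    public static IEnumerator Download(string url, float timeoutSeconds,
                                       Action<WWW> onDone, Action onTimeout)
    {
        WWW www = new WWW(url);
        float start = Time.realtimeSinceStartup;
        while (!www.isDone)
        {
            if (Time.realtimeSinceStartup - start > timeoutSeconds)
            {
                // Report which asset timed out to your analytics wrapper here.
                onTimeout();
                yield break;
            }
            yield return null;
        }
        onDone(www);
    }
}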

Dependencies are very tricky in Unity. Different back end file loading systems handle the automatic loading of dependencies very differently, and if you have dependencies across systems, by default Unity does not do any system-wide reference tracking, so you can easily accidentally load the same asset multiple times, if it is a dependency of an asset bundle asset as well as of a Resources asset. I recommend handling dependencies by forcing them through your asset database, and trying to keep your in-game assets as free of dependencies as possible. These dependencies are things such as any assets referenced or used by a prefab, or the texture and material associated with a model. I think the cleanest way to handle this is to make your game as factory driven as possible, using your asset database and other in-game databases to construct content made of multiple assets at runtime. This is not always possible, and if it is not, you might need to define clean lines, and treat all content loaded with a prefab as a single content chunk, that prefab. If another asset uses the same texture as the prefab, then you might end up just loading that texture twice to avoid dealing with dependencies across different resource loading systems.

This brings up the concept of unloading assets. The way Unity handles dependencies and unloading is very different based on how the content was loaded. To explain this, think of the basic dependency layer of a model file with an associated material and texture that are loaded with it. With Resources.UnloadAsset, Unity will automatically reload that asset if there is still a loaded dependency. This means that if you try to unload that texture without unloading the model file, your game code might assume the texture is unloaded, whereas Unity will reload it. With AssetBundles, this is not the case, and once you unload an asset Unity will not reload it even if there is a dependency on it. Unity provides some useful editor functionality for checking asset dependencies, such as EditorUtility.CollectDependencies. You can use these functions to figure out asset dependencies at build time, and use that information at runtime to give your custom asset loader information on dependencies that might exist.
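
For example, here is a small editor-side sketch (it belongs in an Editor folder) that dumps the dependency set of whatever is selected; a build step could write the same information into your asset database instead of logging it:

using UnityEngine;
using UnityEditor;

// Editor-only sketch: log the full dependency set of the selected assets.
// A build step could write this information into the asset database instead.
public static class DependencyDump
{
    [MenuItem("Tools/Dump Selected Dependencies")]
    private static void Dump()
    {
        Object[] deps = EditorUtility.CollectDependencies(Selection.objects);
        foreach (Object dep in deps)
        {
            Debug.Log(dep.GetType().Name + " : " + AssetDatabase.GetAssetPath(dep));
        }
    }
}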

To really give this system a punch, I recommend building an asset fallback system that calls into the custom asset loader. The asset fallback system would take in a core asset ID, and then check a database for a list of fallback assets. This list of fallback assets would be organized so the asset at the front of the list is the desired final asset, and as you near the back of the list you have more generic fallback assets. An example of this for a builder game might be a desired model for a high resolution, specific level of a building, which is normally stored on a server and downloaded when needed. The fallback from there is the default first level of that building, which was included in the initial iOS app download, and stored locally on device. The final fallback is a generic swirling cloud graphic used as a fallback everywhere, which is generally always loaded. This fallback system would start the initial load, requesting all three of those assets to load asynchronously. The swirling cloud graphic is always loaded, and the fallback system would set the viewable, in-game art to the swirling cloud. The default first level of the building would be a relatively quick load, and the fallback system would swap that in for the visible art once it finished loading. The first time a request is made to display this building, the high resolution model will not have been downloaded yet, and the default first level building will be displayed for a while, until the high resolution building finishes downloading. Once that finishes downloading and is loaded into memory, the fallback system would swap that in as the visible asset, and the fallback system would be finished for this asset. On subsequent calls to this asset, the high resolution building will be cached locally on device, and will be displayable much more quickly.
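
A sketch of that fallback chain, again leaning on the hypothetical CustomAssetLoader from earlier: the list is ordered from most desired to most generic, every entry is requested at once, and whichever is the best finished load so far gets applied:

using UnityEngine;
using System;
using System.Collections.Generic;

// Sketch of the fallback idea: request every asset in the chain, and whenever a
// higher priority asset finishes loading, swap it in for whatever is showing now.
public class AssetFallbackChain
{
    private readonly List<string> chain;          // index 0 = desired final asset
    private int bestLoadedIndex = int.MaxValue;   // lower index = better asset

    public AssetFallbackChain(List<string> orderedAssetIds)
    {
        chain = orderedAssetIds;
    }

    public void Begin(CustomAssetLoader loader, Action<UnityEngine.Object> apply)
    {
        for (int i = 0; i < chain.Count; i++)
        {
            int index = i; // per-iteration copy so each callback remembers its slot
            loader.LoadAssetAsync(chain[index], asset =>
            {
                if (index < bestLoadedIndex)
                {
                    bestLoadedIndex = index;
                    apply(asset); // e.g. swap the visible model or sprite
                }
            });
        }
    }
}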

Ideally, your user will never see these asset fallbacks. You want your game to start loading assets ahead of time, before they are needed. You can accomplish this by starting asset loads before the asset is immediately needed. An example of this might be the menu flow for a shop for your game. The first menu of the shop probably provides the players some options of what type of thing to shop for, maybe a submenu for weapons, armor, and consumables. Or for a builder type game, maybe resource structures, combat structures, and decorative structures. When the player is at the front end shop menu, the game can begin loading items for sale within the submenus on the next menu. This can even be hinted at based on what the player was doing previously, and the player’s current game state. If the player has recently suffered an attack, they might be headed to the defensive structure menu, and the game can start loading those first, hopefully to have them loaded by the time the player gets there. If the player just leveled up their town and has new buildings available, then they are probably looking for those in the shop, so the game should start loading the new buildings. If there is a lot of content on the server for the game, then whenever possible, the game should be asynchronously loading this data, and caching it locally on device. The priority of these downloads should be based on how soon the content might need to be available. There is no reason to load end-game content if the player is only a few hours into your game, unless everything else is already downloaded and cached on device. The background loading code should prioritize content to be downloaded by how soon the player will need that content for gameplay. For loading content into memory, you have much less room to work with, and will want to keep content loaded that is immediately needed, or may be needed soon. If you have a more traditional gameplay flow, maybe the user selects a level to play, selects some settings for that level such as equipment, and then enters the level, the game can begin loading content for that level when the player is on the equipment select screen. The longer the player takes to set themselves up, the more content that will be loaded and ready to go by the time the player starts the level.

With these systems in place, a custom asset loader, an asset fallback system, and a background downloading and loading system, you will have very fine tuned control over your gameplay experience, and can really focus on things like reducing time spent on loading screens with no player interaction. You will also have separated game code from asset loading code well enough that you can move assets around between asset bundles, an online server, and the Resources folder on device without needing to do any major adjustments to your game code.
