Kiosk Mode

I am working with a Cornish College of the Arts design student on an application for her final project, built in Unity. The application will run on a tablet placed in an installation, which means we have to lock the tablet down on both the physical and the software side. On the software side, the standard name for this seems to be "kiosk mode".

Locking down the software includes a few things: hiding the Android software keys (the back, home, and menu buttons), ensuring that when the tablet starts up it automatically runs our application, and relaunching the application if it closes for any reason. Basically, lock the tablet down so that all a standard user can access is our application.

The first thing we looked at was how other people do this. Most kiosks we could find were running on iOS devices, not Android, and were rarely locked down in any meaningful way; you could usually press the home key to get to the home screen. A few kiosks had locked all of this down, but most that we found, such as the devices food trucks use with a Square payment system, were not. We didn't find many Android tablets set up as kiosks, which mirrored the trouble we were having: almost everything we found about locking down a device was for iOS hardware. Our tech is running on Android tablets.

After that, I looked in the Unity Asset Store for anything relevant. Nothing meaningful showed up in any of my searches. I broadened the search outside of the Asset Store and looked for ideas on how to run an application in kiosk mode. At this point I had a small amount of success: I found some custom theme and manifest settings I could include in my application.

I couldn't quite get these custom settings to work, and they would have only hidden the software keys and set the app to launch at boot. Some more searching led me toward writing a Java-based plugin to override the Unity application and handle commands as necessary. I started down this path and realized it would be very time consuming. This project has a pretty tight deadline, so I took my search in another direction.

Eventually I found that there are apps in the Google Play store that can handle putting a device into kiosk mode. The one that best met our needs, SureLock, was pretty pricey at $50, but it did everything we needed: it hides the software keys, it locks the device down to running a single application, and it automatically boots into that application.

This was where we hit the tough decision on low-budget projects: is the time saved by buying an existing solution worth the cost, or would we be better off building our own? We decided to purchase the kiosk application. I was able to get the evaluation version running in about an hour, whereas it would have taken me many hours to write my own. Those extra hours can now be spent on polish and features for the application.

Posted in Unity3D

Iteration and Polish

A common saying in creative work is "the last 10% takes 90% of the time." This holds especially true when building something in Unity. I'm collaborating with a student at Cornish College of the Arts to build her an alternative interface for a video browsing service like Netflix. Her goal in designing this is to build something that would feel at home on the Wii U: use Miis (or a cartoony model from the asset store) to help guide selection, and put the selection into a 3D space to match being on a gaming platform. The interactive part that I have been building in Unity is a small part of the overall project, and exists as a proof of concept for a particular point in her design flow.

The following four screenshots show some iteration on this project over the last week or so. I spent more time on this between the first screenshot and the last than I had spent on all of the work leading up to the first screenshot. All of this change was also driven directly by the designer. As the engineer, I could have easily called it "done" with the first one here. However, the designer had a very clear vision of what she wanted, and worked with me to get the project to its current state. We still have a few more things to do with this, but this is how the project stands now.

Image

This is a "good enough" moment for me as the engineer. It technically does what it is supposed to do. However, the designer rightly pointed out a bunch of polish that needed to happen: she didn't like the orthographic view of the 3D content, she didn't like the spacing of the movies, and she didn't like how the genres looked.

Image

I spent some time tweaking camera settings and angles. I changed the scene to use two cameras: an orthographic camera for the 2D UI in the background, and a perspective camera for the 3D content. (The movie genres were accidentally disabled for this build.) The designer's feedback was that this was on the right track, but it needed more work: the movies were packed in too tightly, the genres should sit on the sphere between each row, the character was way too big, and the sphere needed more segments. It was either still 16 rows here, or I might have bumped it up to 24, but it needed to be 36.
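For anyone curious, here is roughly what that two-camera setup amounts to if you were to configure it from a script. In the actual project these settings live in the editor, and the field names below are made up for the example.

// Rough sketch of the two-camera setup described above; uiCamera and worldCamera
// are hypothetical names, and in the project these values are set in the editor.
using UnityEngine;

public class TwoCameraSetup : MonoBehaviour
{
    public Camera uiCamera;    // draws the flat 2D background UI
    public Camera worldCamera; // draws the 3D movie sphere

    void Start()
    {
        uiCamera.orthographic = true;
        uiCamera.depth = 0;                               // renders first
        uiCamera.clearFlags = CameraClearFlags.SolidColor;

        worldCamera.orthographic = false;                 // perspective projection
        worldCamera.depth = 1;                            // renders on top of the UI camera
        worldCamera.clearFlags = CameraClearFlags.Depth;  // keep the UI camera's output underneath
    }
}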

Image

 

The sphere now has much wider spacing between each row, the genre titles are back, and the camera angle has been adjusted a bit. The feedback here was: the sphere was showing too many rows at once, making it hard to parse; the character's texture was distracting and out of place (we were using it because it was a free cartoony-looking character from the asset store, the closest thing we could find to a Mii); the kerning on the font for the genres was very poor; and the selected row and selected movie popped up too high.

Image

I switched to a font the designer sent me, and from Unity's built-in text renderer to 2D Toolkit's text renderer. I changed the camera angle significantly to reduce the number of rows shown. The character was given a black-and-white cel-shaded look to make it less distracting. A very slight fog was applied so the backs of the movies fade to a slightly darker red, giving the sphere a better feeling of depth. The height the rows and genres lift up when the player is standing on them was greatly reduced.

This is where we have stopped for now. There is still some more polish to be done, but the designer wants to get some feedback from 3rd parties, such as her teachers, next.

 

Posted in Unity3D

3D Printer Calibration Process

Every time I switch to a different roll of plastic, I have to spend a lot of time adjusting things to get proper extrusion again. The list of things that can be adjusted is fairly extensive; I am going to cover what I adjust, and to what effect.

First up are the hardware settings. There are a few ways the printer allows me to adjust the Z home. The Z axis is the vertical axis on my printer, and Z = 0 is where the first layer will be printed. The printer detects Z home when the build plate bumps against a switch. This can be adjusted through a set of screws on the build plate, or a screw in the back of the printer; generally I use the screw in the back. When the Z home is too far from the nozzle, the plastic will not adhere to the build plate and will instead clump up at the end of the nozzle. When the Z home is a little too close, the first layer will be smushed into the build plate, and the print will be slightly off in size. If the Z home is much too close, the nozzle will crash into the build plate, which is bad. The ideal gap between the nozzle and the build plate at Z home is a little thicker than a sheet of paper. I generally adjust this by twisting the screw in the back of my printer and sliding a piece of paper under the nozzle to see if I'm at the right height.

Image

 

The screw in the back here is what I can use to adjust the Z home.

Another important part of adjusting the Z home is leveling the build plate. The ideal build plate is perfectly level. If it is a little off, the first layer will be a different thickness at different parts of the print. If it is heavily off, the nozzle might crash into the build plate while printing the first layer. This is adjusted on my printer through a series of screws. Ideally I would use a level here, but I don't actually have one, so for now I just use the same paper test I used for setting the Z home, in all four corners of the print bed.

Image

The three screws are what I use to level the build plate. Note the poor condition of the kapton tape under the glass. Kapton tape maintenance was a pain in the ass, which led to me getting glass plates cut.

The next hardware adjustment I need to make is the swing arm. This one is especially tricky. The swing arm pushes a bearing against the plastic, holding it against the gear that feeds the plastic down. I don't think my swing arm should be as tight as it is, but if it's any looser I do not get stable, consistent extrusion. I calibrate the swing arm by marking the plastic with a sharpie and making sure that when I press "extrude 10 millimeters", the plastic goes down 10 millimeters. If the swing arm is extremely loose or extremely tight, it is easy to recognize here, because I will not get stable extrusion at all; unfortunately it can still take a lot of adjustment and time to fix. What is more difficult is getting it absolutely correct. If it's only slightly too loose, or slightly too tight, it might be minutes, or hours, of printing before the plastic slips and a print messes up.

Image

Notice how tight the swing arm is currently. It shouldn’t be this tight, but I can’t get stable extrusion without tightening it this much.

Software settings are another area with a lot of adjustments. The most common setting I fiddle with is print temperature. Ideally, I would have a good way to identify what temperature a given roll of plastic should print at. Different manufacturers of ABS have different ideal print temperatures, and the ideal temperature also varies with the printer itself, because where the nozzle temperature is measured differs from printer to printer. So far, this has been a lot of trial and error. Generally, with ABS, I will print anywhere between 180 and 210 degrees Celsius. Some rolls of plastic work extremely well at the low end of that range, and some work well at the high end. The teal plastic I have is intended for even higher temperatures, but my printer can't go any higher.

Back on the hardware side, the final thing I adjust between prints is the actual build platform. The plastic needs to stick to the build platform as it comes out of the nozzle, and stay stuck to it, otherwise it curls up and the print is ruined. There are a lot of adjustments I can make here. The printer ships with a metal platform coated in kapton tape. Unfortunately this tape is easy to scratch when I pull a print off the platform. I had some glass cut to the size of my build platform and placed on top of it; the plastic sticks much better to the glass. I sometimes spray hairspray down before the print starts, to make it even stickier. I tried using blue painter's tape, which works well in theory, but I am very bad at laying the tape down flat without bubbles, which means my build platform is no longer flat. Getting my print to stay flat on the platform and not curl up is a problem I am always dealing with; I haven't found a solution that works really well yet. This is a big reason why some people prefer PLA to ABS plastic.

Another software setting I adjust is the build plate temperature. ABS plastic likes to stick to itself more than to other things, which means it will naturally curl and lift up as a print goes on. To prevent this, the build plate has a heater and can be set to a certain temperature. As this is a large plate, it is difficult to heat past a certain point. Currently I have mine set to 89 degrees Celsius, which takes over half an hour to heat up before a print can start. If I raise it any higher, it takes much longer. If I drop it any lower, the print is more likely to curl up.

The next software setting I adjust is the speed the nozzle moves at. The default speed my printer shipped with gave me a lot of trouble getting consistent, smooth extrusion; slowing the printer down considerably has given me much more stable results.

The final set of software settings I adjust is the slicer settings. When I go to print something, I start with a standard 3D model, exactly the same as I might use in a video game. I then run this model through software called a "slicer", which builds the sequential series of 2D patterns my printer can use to build the model. It's kind of a deli slicer in reverse: if you took every slice of ham and stacked them back on top of each other, you could re-create the original cut of ham. There are a lot of configuration options in slicer software. The two big ones I adjust between prints are the number of horizontal shells and the number of vertical shells. To save on plastic, the inside of a 3D print is generally mostly hollow, with a criss-cross hash inside to provide support for the layers printed on top later. Adding more shells does consume more plastic, but I have found that it vastly improves the quality of prints that have a very shallow slope, because with more shells the plastic better overlaps the previous layer.
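To make the slicing idea a little more concrete, here is a toy sketch of the core geometric step a slicer performs: finding where each triangle of the model crosses a horizontal plane at a given layer height. It is written in C# with Unity's math types purely because that is what I work in; real slicer software does far more than this (outlines, infill, supports), and none of these names come from any actual slicer.

// Toy illustration only: where does the triangle (a, b, c) cross the plane z = layerZ?
// Collected over the whole mesh, these crossing points form the outline the printer
// traces for that layer.
using System.Collections.Generic;
using UnityEngine;

public static class ToySlicer
{
    // Returns 0 or 2 points where the triangle's edges cross the layer plane.
    public static List<Vector2> SliceTriangle(Vector3 a, Vector3 b, Vector3 c, float layerZ)
    {
        var crossings = new List<Vector2>();
        AddEdgeCrossing(a, b, layerZ, crossings);
        AddEdgeCrossing(b, c, layerZ, crossings);
        AddEdgeCrossing(c, a, layerZ, crossings);
        return crossings; // two points make one line segment of the layer's outline
    }

    static void AddEdgeCrossing(Vector3 p, Vector3 q, float layerZ, List<Vector2> crossings)
    {
        // The edge only crosses the layer plane if its endpoints are on opposite sides of it.
        if ((p.z - layerZ) * (q.z - layerZ) >= 0f)
            return;

        float t = (layerZ - p.z) / (q.z - p.z);
        crossings.Add(new Vector2(Mathf.Lerp(p.x, q.x, t), Mathf.Lerp(p.y, q.y, t)));
    }
}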

Image

This is an example of a part I ended up re-printing with more shells; notice how the layers here do not overlap each other very well.

Where am I at right now? Slowly getting this white plastic configured. Here is a series of prints to give you a visual look at the process. The prints get gradually better, but I'm still not there yet. This is also why I really dislike changing out a roll of plastic: it adds many hours of adjustments to my schedule before I can get a good print out again.

Image

I printed this part in the teal plastic and decided I wanted it in white instead. On the first white print here, the plastic stopped extruding smoothly and went all over. On the next print the plastic failed even sooner, after I adjusted the swing arm.

Image

 

On the next print, a strange checkerboard pattern was visible because the extrusion was stumbling periodically. I believe this happened every time the plastic pulled tight enough that the extruder made the roll of plastic rotate a bit. I thought the swing arm was too loose and tightened it a bit, to slightly better results. The next print shows that a print can go for a while and still eventually fail; it was a gradual failure, too, as you can see the interior grid does not line up very well. On the third print here I saw a strange result: the plastic was just not consistent and didn't laminate well. To me this implied an issue with temperature.

Image

At this point, I spent time adjusting both the temperature and the swing arm. The prints gradually improved, but I'm still not getting prints at the quality I like. I am not sure if this is happening because I'm printing at too high a temperature, or too low a temperature.

 

 

 

Posted in 3D Printing

What you need to develop mobile games with Unity

Figuring out what you need to actually build and run a Unity game on a mobile device can be pretty confusing. If you are working on a tight budget, building games on weekends, it can seem especially expensive to get up and running. I'm going to cover what you need for Android development, and what you need for iOS development.

To develop for a mobile device with Unity, you will need three things: a mobile device, a development computer, and possibly a development license.

The first thing to understand about development is the licensing costs. Android is straightforward: you can begin developing and testing apps on actual hardware without paying a dime. With Android, the developer account setup and fees come when you want to release an app, and are paid separately for each Android app store you release on. Apple will let you run the iOS simulator for free, but to push an app to your own device, you will need to register with Apple's $100-a-year developer program.

The next piece of the puzzle is development hardware. You can only develop for iOS on OS X. The Android development environment runs on Windows, OS X, and Linux, but Unity's editor does not run on Linux, so you're really looking at OS X versus Windows. If you are picking up a new machine for development, getting a Mac will let you develop for both platforms. If you already have a capable Windows box, then maybe iOS development should wait until you build and release an Android app. Even if your personal devices are iOS, it is much cheaper to pick up a new or used Android tablet than to buy a Mac Mini or MacBook.

The final required hardware cost is a development device. Yes, you can do a lot of iteration in the Unity editor, but you will want to actually run on hardware every once in a while. You might be tempted to try things out with just the Android emulators and the iOS simulator, but both perform very poorly, especially for Unity apps, and will only frustrate you. Chances are you already have a smartphone, and might even have a tablet. A fantastic thing about modern game development is that personal hardware can absolutely work as development hardware. It also means you will have your latest build on you at all times, so you can show off your game to friends and family. The challenge in selecting a development device is balancing the other costs against getting a new tablet or phone. Even if you already have an OS X machine and an iPhone or iPad, you might still want to start by developing on Android: you can often get hold of a used Android phone or tablet for around the price of the iOS developer license, or less. If you have an iOS device but no OS X machine, you will almost definitely want to start with Android; a new or used Android tablet will cost you far less than even the cheapest Apple hardware, a Mac Mini.

If you haven't ever bought or used an Android device, it's worth learning something about the Android ecosystem compared to iOS. Android devices are often far less expensive than iOS devices. The flagship development device for Android, the Nexus 7, is just about the same price as an iPod Touch. Something else I learned when I started comparing Android hardware to iOS: Apple hardware holds its value much better than Android and Windows hardware. When we ordered the previous-model Nexus 7, the 2012 model, we were able to get them for a little over $100 each, while the iPad 2, originally released in 2011, is still difficult to find for less than $200. That is great if you own Apple devices as a consumer, but as a developer looking for cheap test hardware it makes Android super attractive. The same applies to getting hold of a Mac to develop on OS X: the entire Mac line keeps its value well, and used Mac Minis are often still pretty pricey.

What about device performance? What about dealing with the huge number of Android devices? Supporting a variety of devices is a daunting task; there are thousands of unique Android devices. If you have a low budget, you might want to start out supporting devices similar to the one you develop on, and expand your reach as your sales go up. Most of my development now is for fun with no intention of releasing anything, so I'm not too familiar with how the Android store operates.

If you plan to develop and release a game on a mobile platform, I highly recommend getting your build running on a device as quickly as possible. It's easy to think you can work in the Unity editor for a while and then get a device when you are nearly done, but I don't think you should do this. Having a build running on physical hardware is a huge motivating factor in development, and it makes it much easier to share builds with friends and family. It also means you will catch device-specific edge cases much faster; the mouse on your computer is far more accurate and behaves very differently from an actual touch interface. Running your game on a device frequently will really help you push up the overall polish of your game.

Posted in Uncategorized

Small Project – Video Selection Interface

I'm working with a design student at the Cornish College of the Arts on some projects. When I work solo, I tend to work directionless and constantly want to move on to a new project. Working with someone who gives me a concrete design to build towards works much better for me, and I actually make meaningful progress. My default solution for anything is "build it in Unity," so all of my help ends up along that path. Unity is a great choice for small projects because it can export to the browser as well as to an Android device very easily.

The project I was helping her with today was a theoretical video selection interface for a service like Netflix. Most of the project is her own work: wireframes, concept art, and mockups. However, she had one piece of complexity in the project that she was struggling to show off in 2D form. She wanted the movie selection to take place on a sphere, with rows of movies along the outside of the sphere. I offered to throw something together really quickly in Unity to help prove out her idea.

I’m going to start with the demo of the project, and then work through what I did to achieve it. Use W A S D keys to rotate the sphere.

https://dl.dropboxusercontent.com/u/39582011/HeatherNetlixBuild/HeatherNetlixBuild.html

Image

First, I created a sphere with 16 subdivisions in Blender. Next, I deleted all faces except for one slice of the sphere. Finally, I set up UVs by selecting the topmost quad and using "follow active quads". I don't know if this was the best way to set up UVs (I'm not an artist), but it accomplished what I needed: each quad was effectively the full square texture. For the texture, I opened an image editing tool and loosely pasted five movie posters next to each other. I did this twice to get enough visual variety to show the basic idea.

Image

After creating the sphere slice, I constructed a full sphere out of these slices in Unity. The slice itself was quick to set up with a material; I just had to set the texture to tile along the X axis at 0.2, which stretches the texture out five times along the X axis and gets the movie posters to the desired size on the sphere. I copied my sphere slice game object 16 times, rotating it 22.5 degrees on the Y axis each time, which gave me a complete sphere. I included the Y angle of each slice in its name so I could look the slices up in code later. Next, I created two parents for these slices: the grandparent lets me physically position the sphere relative to the camera, and the parent makes it easy to apply a local rotation around the Y axis.
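I built the sphere by hand in the editor, but the same setup could just as easily be scripted. Here is a rough sketch of that idea; sphereSlicePrefab and sliceParent are made-up names and not from the actual project.

// Hypothetical sketch: building the sphere of slices from a script instead of by hand.
using UnityEngine;

public class SphereBuilder : MonoBehaviour
{
    public GameObject sphereSlicePrefab; // the single slice modeled in Blender
    public Transform sliceParent;        // the parent used for the local Y rotation

    void Start()
    {
        const int totalSlices = 16;
        const float sliceSize = 360f / totalSlices; // 22.5 degrees

        for (int i = 0; i < totalSlices; i++)
        {
            float angle = i * sliceSize;
            GameObject slice = (GameObject)Instantiate(sphereSlicePrefab);
            slice.transform.parent = sliceParent;
            slice.transform.localPosition = Vector3.zero;
            slice.transform.localRotation = Quaternion.Euler(0f, angle, 0f);

            // Match the naming convention the lookup code below relies on.
            slice.name = string.Format("SphereSlice{0:F1}", angle);
        }
    }
}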

The final step for creating this demo was to write a few lines of code to simulate the movie selection process. The Y axis was really easy: I just rotate the parent object around its Y axis based on the vertical input axis. The X axis was a little more complicated. In this script's Start function, I cached the slices with this code:

// Cache each slice game object by name; sliceSize here is 22.5 (360 / 16 slices).
for (int i = 0; i < totalSlices; i++)
{
    string sliceName = string.Format("SphereSlice{0:F1}", i * sliceSize);
    slices[i] = GameObject.Find(sliceName);
}

To figure out which row to rotate, I need to know the active slice of the sphere. This is accomplished by taking the Y rotation of the parent, dividing it by the slice size, and converting the result to an integer to get a value between 0 and 15. I then offset this by a few slices to line up with where I had placed the character visually. I can then use this value to look up the slice in the cached list, and adjust that slice renderer's material's main texture offset based on the horizontal axis input.
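Putting that together, the per-frame logic looks roughly like the snippet below. This is a reconstruction for illustration rather than the exact project code, and names like characterOffset, rotateSpeed, and scrollSpeed are made up.

// Rough reconstruction of the selection logic; the field names are made up for the example.
using UnityEngine;

public class MovieSphereInput : MonoBehaviour
{
    public Transform sliceParent;     // the parent used for the local Y rotation
    public GameObject[] slices;       // cached by name in Start, as shown above
    public float rotateSpeed = 45f;   // degrees per second
    public float scrollSpeed = 0.2f;  // texture offset per second
    public int characterOffset = 4;   // lines the active row up with the character

    const int totalSlices = 16;
    const float sliceSize = 360f / totalSlices;

    void Update()
    {
        // Vertical input (W/S) spins the whole sphere around the parent's Y axis.
        sliceParent.Rotate(0f, Input.GetAxis("Vertical") * rotateSpeed * Time.deltaTime, 0f);

        // Work out which slice is currently under the character.
        float yRotation = sliceParent.localEulerAngles.y; // 0..360
        int activeSlice = ((int)(yRotation / sliceSize) + characterOffset) % totalSlices;

        // Horizontal input (A/D) scrolls that row by shifting its material's texture offset.
        Material sliceMaterial = slices[activeSlice].GetComponent<Renderer>().material;
        Vector2 offset = sliceMaterial.mainTextureOffset;
        offset.x += Input.GetAxis("Horizontal") * scrollSpeed * Time.deltaTime;
        sliceMaterial.mainTextureOffset = offset;
    }
}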

Once this was all complete, I was able to export this demo: https://dl.dropboxusercontent.com/u/39582011/HeatherNetlixBuild/HeatherNetlixBuild.html

The designer was happy with the progress, and I'll make any final adjustments necessary. I imagine she might want some text hovering over each row describing that row's genre.

 

 

Posted in Uncategorized

A Beginner's Introduction to Programming

On the way to dinner tonight, my fiancée asked me why there are so many different programming languages, and what the differences between them all are. While she has a firm grasp of technology, she is not an engineer, so I tried to figure out a way to explain the differences without getting too deep into the details. Hopefully this will be useful to other people, especially artists, designers, and producers in games who hear language names like C# and Objective C but have no sense of what the differences mean.

A programming language is a set of rules and instructions for a computer processor to follow. To start an analogy, I will be the computer and you will be writing instructions for me. You decide that you want to tell me to go to the grocery store, get you a banana, and bring it home. To accomplish this, you get out a sheet of paper and write down the steps for me to follow. The instructions on this paper are programming code.

In video games, programming starts with a language called "Assembly." With the advent of home video game consoles in the late '70s and early '80s, you had low-cost machines with not very powerful processors running the instructions. You want to tell me to go to the grocery store, but I have a mind as simple as the Atari 2600, so you will need to write very specific and simple instructions:

Step 1: Lift left leg.
Step 2: Move left leg forward.
Step 3: Place left leg down.
Step 4: Lift right leg.
Step 5: Move right leg forward.
Step 6: Place right leg down.
Step 7: Turn to the left slightly.

et cetera

As you can imagine, this will get me somewhere. If you wrote it exactly right, it will get me to the grocery store, get me to buy a banana, and get me home. The problem is that this is very tedious and error prone. What if you miscounted the number of steps to take? When you hand me this paper and the instructions are slightly off, I might walk into a car in the parking lot, attempt to buy a banana, fail, and then walk past my home on the way back. What if you decide you want to send me somewhere else? You have to write down an entirely new set of instructions for walking to the new place.

A few years ago, I started writing Atari 2600 games in my spare time for fun, in Assembly. As you can imagine, I made a lot of mistakes, and if I miscounted the number of instructions I was running my code would explode and I would get garbage on the screen instead of what I intended.

At this point, you are going to want to invent a new way to write these instructions: a new programming language. Up next is C, a procedural programming language. Now you can turn those tedious instructions into something a little more manageable; it might look like this:

Step 1: Joe – Walk To – GroceryStore
Step 2: Joe – Buy Banana
Step 3: Joe – Walk To – Home

Of course, you now need to tell me what it means to walk to a place, and what it means to buy a banana. As I'm now slightly smarter than an Atari 2600, and even a little smarter than an NES, I can follow more complex rules without my brain exploding. You hand me another piece of paper with this set of instructions on it:

"Walk To": Takes a Person and a Place, and moves the Person to that Place.
Turn Person to look at Place if they are not already facing it.
While Person is not at Place, do the following: have Person take a step towards Place.

Of course, now you need to tell me what it means to turn and what it means to take a step, and you will keep breaking the process down into smaller and simpler steps until I understand everything.

Fantastic: now not only is your list of instructions easier to write, you can also send me to new places without re-explaining how to walk. As long as I have the piece of paper that tells me how to walk to a place, you can send me anywhere with another short set of simple instructions.
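If you are curious what this looks like in real code, here is the banana errand written in a procedural style. I have sketched it in C# (the language I write Unity code in) rather than actual C, purely for illustration; Person here is just a bag of data, and all of the behavior lives in standalone functions, which is the procedural idea.

// The procedural version of the errand, sketched in C# purely for illustration.
using System;

class Person
{
    public string Location = "Home";
}

static class Errand
{
    // "Walk To": takes a Person and a Place, and moves the person to that place.
    static void WalkTo(Person person, string place)
    {
        // Keep taking steps towards the place until the person is there
        // (collapsed to a single "step" here for brevity).
        while (person.Location != place)
            person.Location = place;
    }

    static void BuyBanana(Person person)
    {
        Console.WriteLine("Bought a banana at the " + person.Location);
    }

    static void Main()
    {
        Person joe = new Person();
        WalkTo(joe, "GroceryStore");
        BuyBanana(joe);
        WalkTo(joe, "Home");
    }
}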

Unfortunately for the NES, C is a little too slow; games written in C run much slower on the NES than those written in Assembly. In the analogy, imagine that every time I take a step I have to sit down and shuffle through all the papers to remember what taking a step means. I'm walking much slower than I did with the extremely specific Assembly instructions: step, look at instructions, step, look at instructions. This is a big driving force in deciding which language to use; sometimes you're working with a slow idiot who can't remember how to walk. Fortunately, technology marches onwards, and I'm replaceable. You're frustrated with slow NES Joe following the C instructions, and you hate writing the tedious Assembly instructions. Luckily a shipment of Sega Genesis Joes just arrived, and the new Joe is smart enough to use the C instructions and take steps without looking at the paper every time, so he can get you your banana pretty quickly.

Your next door neighbor comes over at this point, impressed with your ability to tell me to go buy bananas. He asks for a copy of your instructions so he can send his friend Steve to go buy apples.

OK, this is great: you've got instructions for telling a person how to walk somewhere, but what happens if you decide to tell your dog to walk somewhere? A dog is not a person, and your dog cannot follow people instructions. If you give him the slip of paper telling him to walk somewhere, he will just slobber on it.

This is where C++ comes in. C++ is an object-based programming language. You are a very astute person, and have probably realized by this point that dogs and people both have legs, and both can walk. What if there were a way to write instructions that anything that can walk could follow?

Joe:

Step 1: Joe – Walk To – Grocery Store
Step 2: Joe – Buy Banana
Step 3: Joe – Walk To – Home

Dog:

Step 1: Dog – Walk To – Mailbox
Step 2: Dog – Bark
Step 3: Dog – Walk To – Home

At this point you realize you have a magic dog that can read; lucky you. Unfortunately the dog will not follow any instructions meant for people, so you can't just give him what you wrote in C. You decide it's time to move on and invent C++. It's like C, but better! Because people and dogs can both walk, you decide to call anything that can walk a "Walker", and write some instructions that anything that can walk can follow.

"Walk To": Takes a Walker and a Place, and moves the Walker to that Place.
Turn Walker to look at Place if they are not already facing it.
While Walker is not at Place, do the following: have Walker take a step towards Place.

That looks pretty familiar, doesn't it? Well, there is one hitch in this plan: when you give this piece of paper to the dog and to me, we are both going to stare at you. I don't know what a Walker is, and the dog does not know what a Walker is. You get out your pen again, and write this down:

Joe, you are a Person. A Person is a Walker, but better. You can buy bananas, and other Walkers cannot.

Dog, you are a Dog. A Dog is a Walker, but better. You can bark, and other Walkers cannot.

Great, we are both on our way now. Look how much our process for writing instructions has progressed, you can tell anything that can walk to go anywhere!
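Here is the same errand reworked with the Walker idea, again sketched in C# purely for illustration; in C++ or Objective C the shapes would be similar, just with different syntax.

// The object-based version: Person and Dog both inherit the ability to walk from Walker.
using System;

abstract class Walker
{
    public string Location = "Home";

    // Anything that is a Walker knows how to walk to a place.
    public void WalkTo(string place)
    {
        // Keep taking steps towards the place until we are there
        // (collapsed to a single "step" here for brevity).
        while (Location != place)
            Location = place;
    }
}

// A Person is a Walker, but better: it can buy bananas, and other Walkers cannot.
class Person : Walker
{
    public void BuyBanana() { Console.WriteLine("Bought a banana at the " + Location); }
}

// A Dog is a Walker, but better: it can bark, and other Walkers cannot.
class Dog : Walker
{
    public void Bark() { Console.WriteLine("Woof! (at the " + Location + ")"); }
}

static class Errands
{
    static void Main()
    {
        Person joe = new Person();
        joe.WalkTo("GroceryStore");
        joe.BuyBanana();
        joe.WalkTo("Home");

        Dog dog = new Dog();
        dog.WalkTo("Mailbox");
        dog.Bark();
        dog.WalkTo("Home");
    }
}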

You're so excited about this that you run over to your neighbor, who has been sending Steve to buy apples, and tell him about C++ and how you can now control anything that walks. He looks at you and says he had to figure out the same thing, and invented Objective C.

So far you've probably figured out that one driving force behind creating new languages is to take advantage of more powerful hardware so you can more easily write complex instructions. Another big reason so many languages exist is that, in a pre-internet age, many people were solving the same problems at the same time. While C++ was being created at Bell Labs starting in 1979, and named C++ in 1983, Objective C was created in 1983 at a company called Stepstone.

Hopefully that is a good start in explaining the differences between programming languages, why so many exist, and how each one can serve a new and different purpose. Dozens of languages exist, and each serves its own purpose.

Posted in Programming

Shader Q&A

Last week, Mike asked me a number of really well-worded and well-thought-out questions on my "Writing Your First Shader" post: https://josephstankowicz.wordpress.com/2013/06/22/writing-your-first-shader/#comment-72. Answering those questions seemed like a great topic for this week's post.

1. As I was going through the steps, I wasn’t initially clear how to assign my shader to the material. I think this is because I looked in the Shader drop-down with the material selected and couldn’t find the one I had just created (the trick is that it was in the Custom menu, but I think I solved it by dragging and dropping). This is actually a really minor issue in retrospect but someone unfamiliar with the process might not know what to do here.

Mike covered the solutions in his question, but it's probably worth covering with an animated gif. Unity's shader selection menu for a material sorts shaders into submenus based on the shader name. When naming your shader, which is generally done on the first line of the shader file, you can use "/" to create subfolders in the path. Here is an example of a shader I named "Path/To/Shader/ExampleShader": http://i.imgur.com/eh37jiA.png. You can also assign a shader by dragging it onto the material's shader field, as seen in this animated gif: http://i.imgur.com/mh1BEeD.gif.

2. Applying the created material to the newly created sphere might need a little bit of explanation, since beginners might not know that you need to assign it to the materials property of the Mesh Renderer component, since a sphere has a Sphere Collider component which also has a Material Property (for physics material). Again in retrospect this seems like a really basic issue but I’m thinking of someone not completely familiar with Unity.

There are two things to cover here: how to actually assign a material to an object, and the difference between physics materials and rendering materials.

You can apply a material to an object that has a "Mesh Renderer" component in a few different ways. The material for the mesh lives in the materials list on the mesh renderer http://i.imgur.com/rjYLC2l.png. This also means you can have more than one material per mesh; Unity handles multiple materials per mesh with multiple passes, rendering the parts of the mesh assigned to the first material in one pass, then rendering the mesh again for the second material. One way to assign a material to a mesh is to press the selection circle to the right of the material name and use Unity's selection window to pick out a material. You can also assign a material by dragging it onto a game object that has a mesh renderer in your hierarchy view, as seen here http://i.imgur.com/jxAWDsI.gif, or by dragging it onto the mesh in your 3D scene view, as seen here http://i.imgur.com/pxM8HD5.gif. Like most fields in Unity, you can also drag the material over the field it is replacing, as seen here http://i.imgur.com/IFgqzZa.gif. The last common way to assign a material is to drag it into the empty space near the bottom of the inspector http://i.imgur.com/uz49fWX.gif. You are probably wondering why the material shows up as if it were a component, while it is also a field in the mesh renderer component; this is just Unity providing a custom inspector to make it easy to edit and customize materials when you have an object selected.
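Although Mike's question was about the editor, it is worth noting you can also assign a material from a script. Here is a minimal sketch, assuming the script sits on the same game object as the Mesh Renderer; the class and field names are made up.

// Minimal sketch: assigning a rendering material from code rather than in the editor.
using UnityEngine;

public class AssignMaterialExample : MonoBehaviour
{
    public Material movieMaterial; // hooked up in the inspector

    void Start()
    {
        // .material gives this object its own material instance; .sharedMaterial edits
        // the material asset itself, affecting every object that uses it.
        GetComponent<MeshRenderer>().material = movieMaterial;
    }
}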

An unfortunate part of software engineering is name collisions. The term "material" we have been using describes just the visual properties of an object. In the real world, "material" describes a lot of other properties of an object besides how it looks. The result is that the physics system in Unity has its own "physics material". This is a different file with a different set of properties, unrelated to how the object looks and instead describing how the object behaves within the physics system. A physics material contains the friction and bounciness information for an object.

3. I was never really clear what a SubShader is or how this differs from a pass. Can I have multiple SubShaders, and what’s the difference between having multiple subshaders versus multiple passes? I guess it just struck me as something unexpected, because I would just think “Hey, we have a shader… why are we defining something called a ‘sub’ shader?”

Generally, multiple passes are used for multipass rendering: rendering an object a few times, with different rules or other behaviour each time. There are many uses for this; the example I will share is often used in RTSes to keep units behind buildings visible. Here is an animated gif showing the two-pass behavior http://i.imgur.com/0HW1oef.gif. The first pass does the opposite of a standard Z test. Normally, a pixel is rejected for rendering if anything is in front of it, but in this case the first pass does a Z test of "GEqual", or greater than or equal to, and will only render pixels that have something between themselves and the camera. This pass renders all pixels in solid red. The second pass is the standard unlit texture from the "Writing Your First Shader" example. Here is the source for this shader:

Shader "RTSShader" {
    Properties {
        _MainTex("Texture", 2D) = "white" { }
    }
    SubShader {
        Pass {
            ZWrite Off
            ZTest GEqual

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata {
                float4 vertex : POSITION;
            };

            struct v2f {
                float4 pos : SV_POSITION;
            };

            v2f vert(appdata v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
                return fixed4(1, 0, 0, 1);
            }
            ENDCG
        }

        Pass {
            ZTest LEqual

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;

            struct appdata {
                float4 vertex : POSITION;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}

Subshaders are generally used to account for different video cards. Very few game developers are in a position where they are making a game for a single target piece of hardware. Even if you are targeting a single platform, such as iOS, Apple releases new iPhones, iPads, and iPod Touches each year, each with slightly different hardware. If you are releasing a PC or Android game, the available graphics hardware is going to be extremely variable. Subshaders account for this by providing a different implementation of the shader for different video cards. The first use of this is probably obvious: you can write subshaders with fewer features and less functionality for weaker video cards. Maybe you write two subshaders for your "Normal Mapped Specular Lit" shader: the first is the full-featured shader, and the second is the fallback, which might use a faster but uglier specular lighting algorithm and drop the normal mapping behavior. The second use of subshaders is to write different implementations of the same shader that do the exact same thing, just optimized for different video cards. All iOS devices use PowerVR GPUs, whereas many of the more powerful Android devices use nVidia's Tegra line of GPUs. If you compare similarly powerful devices from both product lines, each has its own set of strengths and weaknesses, and you can use subshaders to make sure your shader runs as fast as it can on each GPU platform.

4. I really often see shader code that has very simple, short, single-letter variable names like ‘v’, etc.? There’s no harm if I make long variable names right? I doubt it, I just want to make sure there isn’t some convention at work here.

Yeah, there is no problem using longer variable names in shader code. Some engineers just prefer shorter names.

5. I may have missed it, but I didn't see an explanation of the types used and their typical purposes. Seeing a "float4", for example, was pretty self-explanatory, but it threw me off when the fragment shader returned a "fixed4". I would have just expected it to return a "float4" but I didn't see any explanation as to why this was different.

I have not covered this yet; it is a slightly more advanced topic I would probably get to eventually. As you write more advanced shaders, performance can become a big concern. Float, half, and fixed are the three variable types for tracking non-integer values with a decimal point. Floats are the most precise of these types, but are the heaviest and slowest; fixed are the least precise, but take the least amount of space; and half sits in the middle. Fixed4 / half4 / float4 are all ways to represent a set of four values together, and are useful for representing a color with red, green, blue, and alpha values. Unity has a page on shader performance that covers more: http://docs.unity3d.com/Documentation/Components/SL-ShaderPerformance.html. Note that you want to use the lowest-precision variable type you can for a given piece of data, but it is also important to avoid changing data types; it is expensive to convert between float, half, and fixed.

6. I was confused about the Properties block. So, is that just data that gets initialized for the shader before it runs? Is it initialized once for the entire render operation? The _MainTex syntax was confusing also and I wasn't sure what all the components meant. It looks like an odd method call with parameters being passed, or just some properties table or something, and the parentheses at the end didn't make sense to me. It reminds me of Lua script or something. I also see that _MainTex is being referenced elsewhere in the program, so clearly it's important. Is "Texture" some reserved string in that initialization? What does the '2D' mean?

Shader properties are the data inputs to the shader from the material. It is not a function call, but a variable declaration: this is the information you set in the Unity interface, data you can control from outside the shader code. "_MainTex" is the internal variable name you will use in the shader. The value "Texture" in the quotes is the name the property will have in the material view in Unity. As you write more complex shaders, you might need to provide more detailed descriptions here to help those using the shader set the data properly; you might have a data texture you use as input, with a description like "Texture (RGB) Intensity (A)" if you are using the alpha channel as a special data set in the texture file. The "2D" is the type of the property; a "2D" property is a two-dimensional texture. The braces at the end are for any extra options on texture properties. To use the shader property in your shader code, you declare the variable again within your shader code, matching the "_MainTex" name. Generally, people put "_" at the beginning of shader property names to help make it clear that something is a shader property and not a local variable. Here is Unity's page on shader properties, covering the different property types Unity supports: http://docs.unity3d.com/Documentation/Components/SL-Properties.html, and here is a Unity page on using properties in shader programs: http://docs.unity3d.com/Documentation/Components/SL-PropertiesInPrograms.html.

7. Where is _MainTex_ST ever used? I see it declared but never used, which is really confusing.

This is used in the TRANSFORM_TEX macro, which is defined in UnityCG.cginc. On Windows, this file is at C:\Program Files (x86)\Unity411\Editor\Data\CGIncludes, and on Mac it is at Applications/Unity/Contents/CGIncludes. The TRANSFORM_TEX macro is defined as:

#define TRANSFORM_TEX(tex,name) (tex.xy * name##_ST.xy + name##_ST.zw)

If you were not using the macro, the code would look like:

o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;

The x and y values of _MainTex_ST are the tiling (scale), and the z and w values are the offset. When viewing a material in Unity, these are the "Tiling" and "Offset" values, as seen here: http://i.imgur.com/otT8Nb9.png
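If you ever need to set these from a script instead of the material inspector, the same values are exposed through the material API. A quick sketch (assuming the texture property is named "_MainTex", as in the shader above):

// Setting the tiling and offset from C#; the scale maps to _MainTex_ST.xy and the
// offset maps to _MainTex_ST.zw.
using UnityEngine;

public class TilingOffsetExample : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.SetTextureScale("_MainTex", new Vector2(2f, 2f));     // -> _MainTex_ST.xy
        mat.SetTextureOffset("_MainTex", new Vector2(0.25f, 0f)); // -> _MainTex_ST.zw
    }
}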

That covers Mike's questions. If anyone else has questions, go ahead and ask them! You're not going to be the only person wondering the same thing.

Posted in Unity3D, Unity3D Performance