C# is a fantastic programming language. You can get things up and running quickly, and it automates concerns like memory management far more than older languages do. An unfortunate side effect of this is that it’s really easy to accidentally write a block of C# code that runs much slower than you expect. Mono is the framework Unity uses to implement and run C#.
When building a game you will at some point devise and follow some coding standards, or at the very least patterns. It is really easy to forget that the libraries you use do not follow your standards. A common C# convention is to treat properties as simple get/set behavior for accessing private variables in a class. For your first Unity project, you will probably treat properties within the UnityEngine and UnityEditor namespaces as if they follow this convention. Unity’s properties do not behave this way. Accessing something like gameObject.transform actually triggers a lookup, and peppering your code with these accesses can lead to an unexpected performance drop. How can you resolve this issue? Treat properties in Unity namespaces as if they are function calls, cache results locally, and expect side effects from accessing properties.

Another example of a property access that might surprise you is renderer.material. If the renderer is using a material shared with other renderers in the scene, then accessing this property will create a duplicated instance of the material just for the object you access it on. This can easily bump your draw call count up and reduce performance, because these objects can no longer be batched, as well as cause unexpected behavior if you were intending to change a property on the shared material. If you intend to access and change the shared material, do so through renderer.sharedMaterial.
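A minimal sketch of both ideas above: caching a Unity property lookup, and editing the shared material instead of silently duplicating it. The class and method names here are illustrative, not from any particular project.

```csharp
using UnityEngine;

public class CachedAccessExample : MonoBehaviour
{
    // Cache the transform once instead of paying for the property
    // lookup behind gameObject.transform every frame.
    private Transform cachedTransform;

    void Awake()
    {
        cachedTransform = transform;
    }

    void Update()
    {
        // Uses the cached reference rather than repeating the lookup.
        cachedTransform.Translate(Vector3.forward * Time.deltaTime);
    }

    public void TintShared(Color color)
    {
        // Accessing .material here would duplicate the material for
        // this object and break batching; sharedMaterial modifies the
        // material all renderers are using.
        GetComponent<Renderer>().sharedMaterial.color = color;
    }
}
```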
Another important consideration that is easy to forget is that the performance of the devices you intend to release your game on can be very far from the performance of your PC or Mac, especially if you are targeting mobile platforms such as iOS and Android. It is easy to fall into the habit of developing and iterating mostly on your PC or Mac, because making and testing changes there is so quick, while making and running a build on target can take much longer. Many C# performance issues are trivial on your PC or Mac, and only become a problem when you are running on device. It is easy to fall into a habit of not testing your game on platform for weeks at a time, and when you finally do get around to making a build, discovering that your framerate is best measured in seconds per frame, not frames per second. When developing games, it is much easier to tread water on performance than to dive deep like this and hope you can swim back up to a stable frame rate in time to ship. My recommendation for avoiding these issues is to test on device often, and to treat performance drops as a serious development concern, worth addressing as soon as they are discovered.
There are many great libraries within C# that allow you to do some really interesting things when developing your game. The library I am talking about in particular here is reflection. If you aren’t familiar with reflection, it’s a C# library that lets you find out information about your code, giving you access at runtime to class types, variable names, and attributes. This lets you pull off many really interesting tricks; however, most reflection calls are very expensive at runtime, and can bring the performance of your game to a crawl if you overuse them. If you have not done much with reflection yet, it is absolutely a library worth learning, in particular for tools development. One common thing I use it for is iterating through a list of all the public variables in a class to populate an editor window. This allows me to add and remove public variables from the class the window accesses, without needing to change any code for the window itself.
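A sketch of that editor-window trick: using reflection to enumerate a class’s public fields. EnemyStats is a hypothetical data class standing in for whatever your window displays.

```csharp
using System.Reflection;
using UnityEngine;

public class EnemyStats
{
    public int health;
    public float speed;
    public string displayName;
}

public static class FieldLister
{
    public static void LogPublicFields(object target)
    {
        // Reflection calls like GetFields are expensive at runtime;
        // keep them in editor/tools code, not per-frame gameplay logic.
        FieldInfo[] fields = target.GetType()
            .GetFields(BindingFlags.Public | BindingFlags.Instance);
        foreach (FieldInfo field in fields)
        {
            Debug.Log(field.Name + " = " + field.GetValue(target));
        }
    }
}
```

If you add or remove a public field on EnemyStats, the listing code never changes, which is exactly why this pattern is so handy for tools.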
Loops are another place you might find performance much worse than you expect on target. For and foreach loops come with different performance concerns, and it is important to understand the difference between them when you use them. Foreach generates two extra internal variables when you use it, whereas the for loop does exactly what you see: it runs the initialization once, then checks the condition and calls the iterator on every iteration of the loop. I’m sure most engineers at this point are thinking “Yes, I know how for loops and foreach loops work. I’ve used them for years, why are you even covering this?” I have seen many unnecessarily expensive for loops over the last few years. A few paragraphs ago, I mentioned that Unity’s properties behave like you might expect functions to behave, complete with potential performance costs and side effects. If the condition in your for loop is one of these expensive property calls, you might have a for loop that takes far longer to complete than it should, dropping the framerate of your game. Using foreach as your default loop lets you avoid these issues, because it caches the end condition automatically, improving performance over a for loop in these cases. If you prefer for loops to foreach loops, you can instead force yourself into the habit of always caching your loop’s condition. Again, this might all seem like basic stuff, but it was a very common performance issue I would find when helping game teams speed up their games.
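A minimal sketch of the condition-caching habit, assuming the expensive property in the loop condition is Transform.childCount:

```csharp
using UnityEngine;

public class LoopCachingExample : MonoBehaviour
{
    public void HideAllChildren()
    {
        Transform cachedTransform = transform;

        // Risky pattern: the property in the condition runs on every
        // single iteration of the loop.
        // for (int i = 0; i < transform.childCount; i++) { ... }

        // Safer: evaluate the expensive condition once and cache it.
        int childCount = cachedTransform.childCount;
        for (int i = 0; i < childCount; i++)
        {
            cachedTransform.GetChild(i).gameObject.SetActive(false);
        }
    }
}
```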
If you are building a Unity project and intend to push iOS and Android hardware as far as you can, you will have to become intimately familiar with the mono heap and garbage collection. The mono heap is the total memory used by the C# code in your Unity project. The mono heap has a used size and a total allocated size. If you have a build that can run on target, enable the internal profiler following the instructions here http://docs.unity3d.com/Documentation/Manual/iphone-InternalProfiler.html, and watch that bottom line, “mono-memory”, in your device’s output. When the used mono memory approaches the allocated heap, Unity will expand the mono heap automatically, giving mono a little more breathing room. Even once your used memory drops, however, the allocated heap will never shrink. The takeaway is that your peak C# memory load becomes your C# memory load for as long as the game is active; you cannot get memory back from the mono heap for assets or other memory consumers. How can you handle this? Monitor your used mono heap, and look for places where memory peaks and pushes the allocated size up. If you can, address these areas to reduce the memory load of your C# code. What are common consumers of large amounts of mono memory? Data collections, such as lists, arrays, dictionaries, and any other data structures that hold large amounts of data.
The two most common points of maxing out the mono heap are game initialization and late-game gameplay. During initialization, it is common to pull data out of databases, XML files, and other sources, and prepare it for use in the game. During late-game gameplay, the user will probably have the most pieces of the game active at once, with the most objects on screen. The easiest way to handle memory during initialization is to spread the initialization of large data structures over multiple ticks, and to store the minimum of data you need. Often, a first-pass game initialization function will run within a single start tick, creating multiple data collections for holding different types of data, such as enemy character information, item information, and level information. In this example, your peak memory usage is going to be the size of the raw input data plus the data meant for use in-game, added up across all three databases. Even if you release the input data between each data table setup, garbage collection and memory housekeeping are not going to run between each database’s initialization, resulting in more memory used than you expected. If you can use more generic data structures, and can re-use the memory between each table initialization, you can get some memory back. If you spread this example out over three different ticks, in three separate steps, then your peak memory usage will be the size of whichever database is largest. You can take this a step farther, and set up your initialization routine to work toward a maximum amount of memory each tick, waiting until the next tick to do more work. This might be as simple as initializing ten database entries a tick, and then waiting for the next tick to initialize the next ten.
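One common Unity way to spread work across ticks is a coroutine. Below is a hedged sketch of the ten-entries-per-tick idea; GetEntryCount and LoadEntry are hypothetical stand-ins for your own database code.

```csharp
using System.Collections;
using UnityEngine;

public class SpreadInitializer : MonoBehaviour
{
    private const int EntriesPerTick = 10;

    // Unity runs a Start coroutine across frames automatically.
    IEnumerator Start()
    {
        int entryCount = GetEntryCount();
        for (int i = 0; i < entryCount; i++)
        {
            LoadEntry(i);
            // After each batch, yield for a frame so allocations (and
            // any garbage collection) are spread across ticks instead
            // of piling up in one start tick.
            if ((i + 1) % EntriesPerTick == 0)
                yield return null;
        }
    }

    // Hypothetical stubs; replace with your real data loading.
    int GetEntryCount() { return 100; }
    void LoadEntry(int index) { /* parse one database entry */ }
}
```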
Managing memory during peak gameplay is a much more complicated process, and will take a mix of having strict limits on number of unique character types on screen, and carefully managing how much memory each active game object uses.
One of the biggest concerns for engineers really trying to push iOS and Android hardware to the limit with Unity is going to be garbage collection. If you’re not familiar with the concept of garbage collection: when memory is released from use, it is not immediately available for use elsewhere. Garbage collection is the process of preparing that memory for use again. It is a much more complicated process than that, but that simple description should be enough to cover the basics. Garbage collection is a slow process, long enough to cause an obvious framerate stutter whenever it occurs. Garbage collection happens automatically, often when the used memory size approaches the allocated peak size, to clear out memory for use before bumping up that allocated peak size. Garbage collection can also be triggered manually, through System.GC.Collect. Calling garbage collection manually can cause major problems you might not expect. A basic description of what happens when you trigger garbage collection: mono assigns memory allocations to multiple levels of identification: short, medium, and long term memory. When garbage collection occurs, it will attempt to clean out short term memory first. Anything left in short term memory will most likely be promoted to medium term memory at this point; mono assumes that because this memory is still around, it does not need to check it as frequently as short term memory. The same thing happens when promoting medium term memory to long term memory. The result of calling garbage collection too frequently is that memory that should be short term can end up promoted to medium term, and memory that should be medium term can end up promoted to long term. When this occurs, garbage collection calls will take even longer, and may result in the garbage collector not releasing memory promoted to longer term storage. When should you call garbage collection manually?
If you can help it, you should never call it; instead, try to write your code in a way that requires less frequent garbage collection. If you do have natural pauses in gameplay where you can afford to spend a little more frame time, a place that might already have a natural stutter or that might otherwise lead the user to expect a brief pause, that might be a place worth investigating for a manual call to garbage collection. One example might be immediately after a new wave of enemies spawns in a wave-based combat game, especially if you slide a HUD element on screen that the user is expected to read, making it easy for them to miss a framerate stutter.
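As a sketch, the wave example above might look like this; OnWaveSpawned is a hypothetical callback your spawning code would invoke.

```csharp
using UnityEngine;

public class WaveSpawnGc : MonoBehaviour
{
    // Hypothetical hook, called once when a new wave finishes spawning
    // and the wave banner slides on screen.
    public void OnWaveSpawned()
    {
        // The user is reading the banner, so a brief hitch from a
        // manual collection is least likely to be noticed here.
        System.GC.Collect();
    }
}
```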
One other major problem with the mono heap and garbage collection can occur when the used heap is near the size of the allocated heap, and a large allocation happens before garbage collection runs. A common place this might occur is during a scene transition that includes a large amount of memory allocation. If this is a repeatable situation, you might even have a place where users can raise the memory used by the mono heap indefinitely, until the device gives you low memory warnings and shuts down your app. A pattern a builder-style game might fall into here: the user is able to trigger something that allocates and releases memory a few times, such as playing a sound effect or visual effect, and then opens a shop menu that includes a large amount of memory allocations. Each time that effect plays, the memory is allocated and then released, raising the used memory closer to the allocated memory ceiling; the memory will not be fully released until garbage collection occurs. If the shop menu is opened when the used memory is near that allocated memory peak, then the shop might trigger an allocation before any garbage collection can occur, expanding the allocated heap. Close the shop menu and trigger the effects over and over, and a user can repeat this process to crash your app.
This brings me to some more concerns with memory usage. The first big concern is that creating temporary variables in updates can be much more damaging to your project than keeping them as private variables within a class. A temporary variable in an update will get allocated and freed every single time that update function is called, which is generally once a tick. Because that memory is generally not re-usable until garbage collection occurs, this can result in a used mono heap that climbs every tick until it approaches the allocated heap size and triggers a garbage collection. The first major effect of this is that the frequency of garbage collection rises. With a large code base with many updates all performing different logic, this can easily result in a garbage collection every few seconds, producing a very stuttery gameplay experience. The second effect of this slow climb in used memory is that a user with poor timing can trigger a larger memory allocation, such as opening a menu, when the used heap is near the allocated heap size, bumping up the size of the allocated heap. The second big concern is about strings in particular: most string manipulation functions lead to many small allocations. This might seem minor at first; you might think you are not actually doing a lot with strings, but C# does a lot of things internally with strings, especially with reflection. You might find that a function call that seems inexpensive triggers many small memory allocations and deallocations, creating more memory that needs to be garbage collected before you can use it again.
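A minimal sketch of the per-tick allocation problem and a common fix: reuse one buffer as a private field instead of creating a temporary collection in the update. The class is illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class UpdateAllocationExample : MonoBehaviour
{
    // Reused buffer: allocated once, cleared each tick, so the update
    // produces no new garbage for the collector to chase.
    private readonly List<Vector3> childPositions = new List<Vector3>();

    void Update()
    {
        // Risky: "var positions = new List<Vector3>();" here would
        // allocate a fresh list every tick, steadily raising the used
        // mono heap until a collection triggers.
        childPositions.Clear();

        Transform cachedTransform = transform;
        int childCount = cachedTransform.childCount;
        for (int i = 0; i < childCount; i++)
        {
            childPositions.Add(cachedTransform.GetChild(i).position);
        }
    }
}
```

The same reasoning applies to strings: building them piece by piece each tick allocates every time, so reuse buffers (or avoid per-tick string work entirely) where you can.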
For some game developers, this might sound like doom and gloom, but these problems can be a little easier to work around than you might expect. Generally you don’t need to do as much in your update functions as you might think. Much of the time, behavior in update logic could easily be pulled out and re-written to be event based. One common place this happens is the HUD. The first time you write a health meter for your HUD, you might write logic that polls the player’s health and constructs the health meter from that value every tick. The process of updating the health meter is probably expensive, so you will quickly change this to store a “health last update” value, compare it against the player’s health this tick, and only update the health meter when it changes. This might still result in some tiny memory allocations every tick, and this behavior really does not need to be in an update. The best solution here is to have the player fire off an event, which the HUD subscribes to, every time the player’s health changes.
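A hedged sketch of that event-based health meter; Player and HealthMeter are hypothetical classes illustrating the pattern.

```csharp
using System;
using UnityEngine;

public class Player : MonoBehaviour
{
    // Fired only when health actually changes; no per-tick polling.
    public event Action<int> HealthChanged;

    private int health = 100;

    public void TakeDamage(int amount)
    {
        health -= amount;
        if (HealthChanged != null)
            HealthChanged(health);
    }
}

public class HealthMeter : MonoBehaviour
{
    [SerializeField] private Player player;

    void OnEnable()  { player.HealthChanged += Redraw; }
    void OnDisable() { player.HealthChanged -= Redraw; }

    void Redraw(int health)
    {
        // Rebuild the meter only in response to the event, so the
        // expensive work (and its allocations) happens only when the
        // value changes instead of every tick.
    }
}
```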