Unity 4.0.1 – High End Game Development App




What you will get from this page: Graphics optimization tips to ensure your console games run fast. These optimizations were made to an especially difficult scene to ensure smooth 30fps (frames per second) performance. Thanks to Rob Thompson, a console graphics developer at Unity, who presented these tips at Unite.


A focus on GPU optimization

The Book of the Dead (BOTD) was produced by Unity's demo team. It's a real-time rendered animated short that showcases the visual quality possible with the High-Definition Render Pipeline (HDRP).

The HDRP is a high-fidelity Scriptable Render Pipeline built by Unity to target modern (Compute Shader compatible) platforms. The HDRP utilizes Physically-Based lighting techniques, Linear lighting, HDR lighting and a configurable hybrid Tile/Cluster deferred/Forward lighting architecture.

All of the assets and all of the script code for BOTD are available for you in the Asset Store.

The objective of the demo was to offer an interactive experience: people could wander around inside the environment and explore it in a way that felt familiar from a traditional AAA games perspective. Specifically, we wanted to show BOTD running on Xbox One and PS4, with a performance requirement of 1080p at 30fps or better.

As it's a demo, and not a full game, the main focus for optimizations was on the rendering.

Generally, performance for BOTD is fairly consistent as it doesn't have any scenes with thousands of particles suddenly spawning into life, for example, or loads of animated characters appearing.

Rob and the demo team found the view that was performing most poorly in terms of GPU load, shown in the image above.

What's going on in the scene is pretty much constant; what varies is what's within the view of the camera. If they could make savings on this scene, they'd ultimately increase performance throughout the entire demo.

The reason why this scene performed poorly is that it's an exterior view of the level looking into the center of it, so the vast majority of assets in the scene are in the camera frustum. This results in a lot of draw calls.

In brief, here is how this scene was rendered:

  • It was rendered with the HDRP.
  • Most of the artist-authored textures are 1K to 2K maps, with a handful at 4K.
  • It uses baked occlusion and baked GI for indirect lighting, and a single dynamic shadow-casting light for direct lighting from the sun.
  • It issues a few thousand draw calls and compute shader dispatches at any point.
  • At the start of the optimization pass, the view was GPU bound on PS4 Pro at around 45 milliseconds.

Finding the performance bottlenecks

Rob and the team looked at the GPU frame step by step, and saw the following performance:

  • The GBuffer pass was at 11ms
  • Motion Vectors and Screen Space Ambient Occlusion were fast, at 0.25ms and 0.6ms respectively
  • Shadow maps from the directional shadow casting with dynamic lights came in at a whopping 13.9ms
  • Deferred lighting was at 4.9ms
  • Atmospheric scattering was at 6.6ms

The image above shows what their GPU frame looked like, from start to finish:

As you can see, they're at 45 milliseconds and the two vertical orange lines show where they needed to be to hit 30fps and 60fps respectively.

Let's look at 10 things the team did to improve performance for this scene.

1. Keep the batch count low

CPU performance was not a big issue for the team because BOTD is a demo, so it doesn't have the complexities of the script code that goes along with all of the systems necessary for a full game.

However, keeping the batch count low is still a valuable tip for any platform. If your project uses one of the built-in renderers then you can do this by using Occlusion Culling, and, primarily, GPU instancing. Avoid using Dynamic batching on consoles unless you are sure it's providing a performance win.
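As a rough illustration (not code from BOTD), here is how GPU instancing can be driven from script with the built-in renderer. The `mesh` and `material` fields are placeholder assets you'd assign in the Inspector, and the material's shader must support instancing:

```csharp
using UnityEngine;

public class InstancingExample : MonoBehaviour
{
    public Mesh mesh;
    public Material material;   // shader must support GPU instancing

    void Start()
    {
        // Same as ticking "Enable GPU Instancing" on the material asset.
        material.enableInstancing = true;
    }

    void Update()
    {
        // Draw 500 copies of the same mesh in a single instanced draw call.
        var matrices = new Matrix4x4[500];
        for (int i = 0; i < matrices.Length; i++)
            matrices[i] = Matrix4x4.Translate(new Vector3(i % 25, 0f, i / 25));

        Graphics.DrawMeshInstanced(mesh, 0, material, matrices);
    }
}
```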

If you are using one of the SRPs then you can control batch count with the SRP Batcher. The SRP Batcher reduces the GPU setup between DrawCalls by batching a sequence of Bind and Draw GPU commands. To get the maximum performance for your rendering, these batches must be as large as possible. To achieve this, you can use as many different Materials with the same Shader as you want, but you must use as few Shader Variants as possible.
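The SRP Batcher is normally ticked on the render pipeline asset in the Inspector, but it can also be flipped from script. A minimal sketch, assuming an SRP such as HDRP is active:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class EnableSrpBatcher : MonoBehaviour
{
    void Start()
    {
        // Equivalent to enabling "SRP Batcher" on the render pipeline asset.
        GraphicsSettings.useScriptableRenderPipelineBatching = true;
    }
}
```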

Another takeaway: The number of individual assets used to create this scene is actually very small. By using good quality assets, and placing them intelligently, the team created complex scenes that don't look repetitive.


2. Use Graphics Jobs

Both Xbox One and PS4 are multi-core devices, and to get the best CPU performance we need to try to keep those cores busy all of the time.

Unity's new high performance multithreaded system, DOTS, makes it possible for your game to fully utilise the multicore processors available today (and in the future). DOTS comprises three subsystems: the Entity Component System, C# Job System and Burst Compiler.
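DOTS isn't used in BOTD itself; purely as an illustration of the C# Job System and the Burst Compiler working together, a minimal Burst-compiled parallel job might look like this (assumes the Jobs, Collections and Burst packages are installed):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class SquareJobExample : MonoBehaviour
{
    // A tiny Burst-compiled job that squares an array of values on worker threads.
    [BurstCompile]
    struct SquareJob : IJobParallelFor
    {
        public NativeArray<float> Values;

        public void Execute(int index)
        {
            Values[index] = Values[index] * Values[index];
        }
    }

    void Update()
    {
        var values = new NativeArray<float>(1024, Allocator.TempJob);

        // Schedule the work across worker threads in batches of 64 iterations.
        JobHandle handle = new SquareJob { Values = values }.Schedule(values.Length, 64);
        handle.Complete();   // block until the workers have finished

        values.Dispose();
    }
}
```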

Please note that some of the DOTS packages are in Preview, and we therefore do not recommend using them in production.


However, you can make use of multiple cores via the Graphics Jobs mode under Player Settings -> Other Settings.

Graphics Jobs provides a performance optimization in almost all circumstances on console unless you're only drawing a handful of batches. There are two types available:

  • Legacy Jobs, available on PS4, and DirectX 11 for Xbox One
    • Takes pressure off the main thread by distributing work to other cores. Be aware that in very large scenes it can become a bottleneck in the 'Render Thread', the thread Unity uses to talk to the platform holder's graphics API.
  • Native Jobs (the default for new projects in 2019.3), available on PS4, and DirectX 12 for Xbox One
    • Distributes the most work across available cores and is the best option for large scenes.

Learn more about multithreaded rendering and graphics jobs here.
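Graphics Jobs is normally enabled through the Player Settings UI; for completeness, a small hypothetical editor script (it must live in an Editor folder) can set the same flag:

```csharp
using UnityEditor;

public static class GraphicsJobsMenu
{
    [MenuItem("Tools/Enable Graphics Jobs")]
    static void EnableGraphicsJobs()
    {
        // Same switch as Player Settings -> Other Settings -> Graphics Jobs.
        PlayerSettings.graphicsJobs = true;
    }
}
```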

3. Learn the platform-specific profiling tools

Microsoft and Sony provide excellent tools for analyzing your project's performance on both the CPU and the GPU. These tools are available for free if you're developing on console. Learn them early on and keep using them throughout your development cycle. Pix for Xbox One and the Razor Suite for PlayStation are key tools in your arsenal when it comes to optimization on these platforms.

4. Profile your post-processing effects

Post-processing effects can eat up a great deal of the frame time. Often this is caused by downloading post-processing assets from the Asset Store that are authored primarily for PC. They appear to run fine on console but in fact are not optimized to do so.

When applying such effects, profile how long they take on the GPU, and iterate until you find a happy balance between visual quality and performance. And then, leave them alone, because they comprise a static cost in every scene, which means you know how much GPU bandwidth is left over to work with.

5. Avoid using tessellation (unless for a good reason)

In general don't use tessellation in console game graphics. In most cases, you're better off using the equivalent artist authored assets than you are runtime tessellating them on the GPU.


But, in the case of BOTD, there was a good reason for using tessellation: rendering the bark of the trees.

Tessellated displacement allowed them to add the deep recesses and gnarly details into the geometry that will self-shadow correctly in a way that normal mapping won't.

As the trees are 'hero' objects in much of BOTD, it was justified. This was done by using the same mesh on the trees at LOD 0 and LOD 1; the difference is simply that the tessellated displacement is scaled back so that it's no longer in effect by the time they reach LOD 1.

6. Aim for healthy wavefront occupancy at all times on the GPU

You can think of a wave front as a packet of GPU work. When you submit a draw call to the GPU, or a compute shader dispatch, that work is then split into many wave fronts and those wave fronts are distributed throughout all of the SIMDs within all of the compute units that are available on the GPU.

Each SIMD has a maximum number of wave fronts that can be running at any one time and therefore, we have a maximum total number of wave fronts that can be running in parallel on the GPU at any one point. How many of those wave fronts we are using is referred to as wave front occupancy, and it's a useful metric for understanding how well you are using the GPU's potential for parallelism.

Pix and Razor can show wave front occupancy in great detail. The graphs above are from Pix for Xbox One. On the left we have an example of good wave front occupancy. Along the bottom on the green strip we can see some vertex shader wave fronts running and above that in blue we can see some pixel shader wave fronts running.

On the right, though, we can see there's a performance issue: a lot of vertex shader work that's not resulting in much pixel shader activity. This is an underutilization of the GPU's potential, which brings us to the next optimization tip.

7. Use a Depth Prepass

How does this come about? This scenario is typical when we're doing vertex shader work that doesn't result in any pixels being shaded.

Some more analysis on Pix and Razor showed that the team were getting a lot of overdraw during the Gbuffer pass. This is particularly bad on console when looking at alpha-tested objects.

On console, if you issue pixel discard instructions or write directly to depth in your pixel shader, you can't take advantage of early depth rejection. Those pixel shader wave fronts get run anyway even though the work is going to be thrown out at the end.

The solution here was to add a Depth Prepass. A Depth Prepass involves rendering the scene in advance to depth only, using very light shaders, that can then be the basis of more intelligent depth rejection where you've got your heavier Gbuffer shaders bound.

The HDRP includes a Depth Prepass for all alpha-tested objects, but you can also switch on a full Depth Prepass if you want. The settings for controlling HDRP, which render passes are used, and which features are enabled are all made available via the HD Render Pipeline Asset.

If you search in an HDRP project for the HD Render Pipeline Asset, you'll find a great big swath of checkboxes that control everything HDRP is doing.

For BOTD, using a Depth Prepass was a great GPU win, but keep in mind that it does have the overhead of adding more batches to be drawn, which costs CPU time.

8. Reduce the size of your shadow mapping render targets

As mentioned earlier the shadow maps in this scene are generated against a single shadow casting directional light. Four Shadow map splits were used and initially they were rendering to a 4K Shadow map at 32-bit depth, as this is the default for HDRP projects. When rendering to Shadow maps the resolution of the Shadow map is almost always the limiting factor here; this was backed up by analysis in Pix and Razor.

Reducing the resolution of the Shadow map was the obvious solution, even though it could impact quality.

The Shadow map resolution was dropped to 3K, which provided a perfectly acceptable trade-off against performance. The demo team also added an option specifically to allow developers to render to 16-bit depth Shadow maps. If you want to give that a go for yourself, download the project assets.

Finally, by changing the resolution of their Shadow map, they also had to change some settings on the light.
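In BOTD these values are driven through the HD Render Pipeline Asset and the light's HDRP settings. As a rough illustration of the same idea using the built-in renderer's Light API (not the HDRP settings), resizing a light's shadow map and retuning its bias from script looks like this:

```csharp
using UnityEngine;

public class SunShadowSettings : MonoBehaviour
{
    void Start()
    {
        // Attach to the directional (sun) light.
        var sun = GetComponent<Light>();

        // Request a 3K shadow map for this light instead of the
        // quality-settings default (-1 means "use the default").
        sun.shadowCustomResolution = 3072;

        // A smaller shadow map usually means the bias values need revisiting.
        sun.shadowBias = 0.05f;
        sun.shadowNormalBias = 0.4f;
    }
}
```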

At this point, the team had made their shadow map revisions and repositioned their shadow mapping camera to try and get the best utilization out of the newly-reduced resolution they had. So, what did they do next?

9. Only draw the last (most zoomed-out) Shadow map split once on level load

As the shadow mapping camera doesn't move much, they could get away with this. That most zoomed-out split is typically used for rendering the shadows that are furthest from the Player camera.

They did not see a drop in quality. It turned out to be a very clever optimization because it saved them both GPU framerate time and reduced batch numbers on the CPU.

After this series of optimizations, their shadow map creation phase went from 13ms to just under 8ms; lighting pass went from 4.9ms to 4.4ms, and atmospherics pass went from 6.6ms to 4.2ms.

This is where the team was at the end of the shadow mapping optimization. They were now within the boundary where they could run at 30fps on PS4 Pro.

10. Use Async Compute

Async Compute is a method for minimizing periods of underutilization on the GPU by filling them with useful compute shader work. It's supported on PS4 and became available on Xbox One with the 2019 cycle. It's accessible through Unity's Command Buffer interface and is meant to be used mainly, though not exclusively, with the SRPs. Code examples are available in the BOTD assets and the HDRP source.

The depth only phase, which is what you're doing with shadow mapping, is traditionally a point where you're not making full use of the GPU's potential. Async Compute allows you to move your compute shader work to run in parallel with your graphics queue, thereby making use of resources that the graphics queue is underutilizing.

BOTD uses Async Compute for its tiled light list gather, which is part of the deferred lighting (mostly done with compute shaders on console in HDRP). It also uses it for its SSAO calculations. Both of these overlap with the shadow map rendering to fill in the gaps in wavefront utilization.
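A conceptual sketch of kicking compute work onto the async queue through the Command Buffer interface follows. This is not the BOTD code; the compute shader, its "CSMain" kernel and the "_Result" buffer name are placeholders, and on platforms without async compute support the call simply falls back to the graphics queue:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class AsyncComputeExample : MonoBehaviour
{
    public ComputeShader shader;   // placeholder compute shader with a "CSMain" kernel
    ComputeBuffer buffer;

    void Start()
    {
        buffer = new ComputeBuffer(4096, sizeof(float));
    }

    void Update()
    {
        var cmd = new CommandBuffer { name = "Async lighting work" };
        int kernel = shader.FindKernel("CSMain");
        cmd.SetComputeBufferParam(shader, kernel, "_Result", buffer);
        cmd.DispatchCompute(shader, kernel, 4096 / 64, 1, 1);

        // Submit on the async compute queue so the work can overlap graphics
        // work (such as shadow map rendering) instead of serializing behind it.
        Graphics.ExecuteCommandBufferAsync(cmd, ComputeQueueType.Background);
        cmd.Release();
    }

    void OnDestroy()
    {
        buffer.Release();
    }
}
```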

For a run-through of some conceptual code where Async Compute is employed, tune into Rob's Unite session at 35:30.


August 2014

Volume 29 Number 8

As a software architect, I've written many systems, reverse-engineered native code malware, and generally could figure things out on the code side. When it came to making games, though, I was a bit lost as to where to start. I had done some native code graphics programming in the early Windows days, and it wasn't a fun experience. I then started on DirectX development but realized that, although it was extremely powerful, it seemed like too much code for what I wanted to do.

Then, one day, I decided to experiment with Unity, and I saw it could do some amazing things. This is the first article in a four-part series that will cover the basics and architecture of Unity. I'll show how to create 2D and 3D games and, finally, how to build for the Windows platforms.

What Unity Is

Unity is a 2D/3D engine and framework that gives you a system for designing game or app scenes for 2D, 2.5D and 3D. I say games and apps because I've seen not just games, but training simulators, first-responder applications, and other business-focused applications developed with Unity that need to interact with 2D/3D space. Unity allows you to interact with them via not only code, but also visual components, and export them to every major mobile platform and a whole lot more—for free. (There's also a pro version that's very nice, but it isn't free. You can do an impressive amount with the free version.) Unity supports all major 3D applications and many audio formats, and even understands the Photoshop .psd format so you can just drop a .psd file into a Unity project. Unity allows you to import and assemble assets, write code to interact with your objects, create or import animations for use with an advanced animation system, and much more.

As Figure 1 indicates, Unity has done work to ensure cross-platform support, and you can change platforms literally with one click, although to be fair, there's typically some minimal effort required, such as integrating with each store for in-app purchases.


Figure 1 Platforms Supported by Unity

Perhaps the most powerful part of Unity is the Unity Asset Store, arguably the best asset marketplace in the gaming market. In it you can find all of your game component needs, such as artwork, 3D models, animation files for your 3D models (see Mixamo's content in the store for more than 10,000 motions), audio effects and full tracks, plug-ins—including those like the MultiPlatform toolkit that can help with multiple platform support—visual scripting systems such as PlayMaker and Behave, advanced shaders, textures, particle effects, and more. The Unity interface is fully scriptable, allowing many third-party plug-ins to integrate right into the Unity GUI. Most, if not all, professional game developers use a number of packages from the asset store, and if you have something decent to offer, you can publish it there as well.

What Unity Isn't

I hesitate to describe anything Unity isn't as people challenge that all the time. However, Unity by default isn't a system in which to design your 2D assets and 3D models (except for terrains). You can bring a bunch of zombies into a scene and control them, but you wouldn't create zombies in the Unity default tooling. In that sense, Unity isn't an asset-creation tool like Autodesk Maya or 3DSMax, Blender or even Adobe Photoshop. There's at least one third-party modeling plug-in (ProBuilder), though, that allows you to model 3D components right inside of Unity; there are 2D world builder plug-ins such as the 2D Terrain Editor for creating 2D tiled environments, and you can also design terrains from within Unity using their Terrain Tools to create amazing landscapes with trees, grass, mountains, and more. So, again, I hesitate to suggest any limits on what Unity can do.

Where does Microsoft fit into this? Microsoft and Unity work closely together to ensure great platform support across the Microsoft stack. Unity supports Windows standalone executables, Windows Phone, Windows Store applications, Xbox 360 and Xbox One.

Getting Started

Download the latest version of Unity and get yourself a two-button mouse with a clickable scroll wheel. There's a single download that can be licensed for free mode or pro. You can see the differences between the versions at unity3d.com/unity/licenses. The Editor, which is the main Unity interface, runs on Windows (including Surface Pro), Linux and OS X.

I'll get into real game development with Unity in the next article, but, first, I'll explore the Unity interface, project structure and architecture.

Architecture and Compilation

Unity is a native C++-based game engine. You write code in C#, JavaScript (UnityScript) or, less frequently, Boo. Your code, not the Unity engine code, runs on Mono or the Microsoft .NET Framework, which is Just-in-Time (JIT) compiled (except for iOS, which doesn't allow JIT code and is compiled by Mono to native code using Ahead-of-Time [AOT] compilation).

Unity lets you test your game in the IDE without having to perform any kind of export or build. When you run code in Unity, you're using Mono version 3.5, which has API compatibility roughly on par with that of the .NET Framework 3.5/CLR 2.0.

You edit your code in Unity by double-clicking on a code file in the project view, which opens the default cross-platform editor, MonoDevelop. If you prefer, you can configure Visual Studio as your editor.

You debug with MonoDevelop or use a third-party plug-in for Visual Studio, UnityVS. You can't use Visual Studio as a debugger without UnityVS because when you debug your game, you aren't debugging Unity.exe, you're debugging a virtual environment inside of Unity, using a soft debugger that's issued commands and performs actions.

To debug, you launch MonoDevelop from Unity. MonoDevelop has a plug-in that opens a connection back to the Unity debugger and issues commands to it after you Debug | Attach to Process in MonoDevelop. With UnityVS, you connect the Visual Studio debugger back to Unity instead.

When you open Unity for the first time, you see the project dialog shown in Figure 2.


Figure 2 The Unity Project Wizard

In the project dialog, you specify the name and location for your project (1). You can import any packages into your project (2), though you don't have to check anything off here; the list is provided only as a convenience. You can also import a package later. A package is a .unitypackage file that contains prepackaged resources—models, code, scenes, plug-ins—anything in Unity you can package up—and you can reuse or distribute them easily. Don't check something off here if you don't know what it is, though; your project size will grow, sometimes considerably. Finally, you can choose either 2D or 3D (3). This dropdown is relatively new to Unity, which didn't have significant 2D game tooling until fairly recently. When set to 3D, the defaults favor a 3D project—typical Unity behavior as it's been for ages, so it doesn't need any special mention. When 2D is chosen, Unity changes a few seemingly small—but major—things, which I'll cover in the 2D article later in this series.

This list is populated from .unitypackage files in certain locations on your system; Unity provides a handful on install. Anything you download from the Unity asset store also comes as a .unitypackage file and is cached locally on your system in C:\Users\<username>\AppData\Roaming\Unity\Asset Store. As such, it will show up in this list once it exists on your system. You could just double-click on any .unitypackage file and it would be imported into your project.

Continuing with the Unity interface, I'll go forward from clicking Create in the dialog in Figure 2 so a new project is created. The default Unity window layout is shown in Figure 3.


Figure 3 The Default Unity Window

Here's what you'll see:

  1. Project: All the files in your project. You can drag and drop from Explorer into Unity to add files to your project.
  2. Scene: The currently open scene.
  3. Hierarchy: All the game objects in the scene. Note the use of the term GameObjects and the GameObjects dropdown menu.
  4. Inspector: The components (properties) of the selected object in the scene.
  5. Toolbar: To the far left are Pan, Move, Rotate, Scale and in the center Play, Pause, Advance Frame. Clicking Play plays the game near instantly without having to perform separate builds. Pause pauses the game, and advance frame runs it one frame at a time, giving you very tight debugging control.
  6. Console: This window can become somewhat hidden, but it shows output from your compile, errors, warnings and so forth. It also shows debug messages from code; for example, Debug.Log will show its output here.


Of important mention is the Game tab next to the Scene tab. This tab activates when you click play and your game starts to run in this window. This is called play mode and it gives you a playground for testing your game, and even allows you to make live changes to the game by switching back to the Scene tab. Be very careful here, though. While the play button is highlighted, you're in play mode and when you leave it, any changes you made while in play mode will be lost. I, along with just about every Unity developer I've ever spoken with, have lost work this way, so I change my Editor's color to make it obvious when I'm in play mode via Edit | Preferences | Colors | Playmode tint.

About Scenes

Everything that runs in your game exists in a scene. When you package your game for a platform, the resulting game is a collection of one or more scenes, plus any platform-­dependent code you add. You can have as many scenes as you want in a project. A scene can be thought of as a level in a game, though you can have multiple levels in one scene file by just moving the player/camera to different points in the scene. When you download third-party packages or even sample games from the asset store, you typically must look for the scene files in your project to open. A scene file is a single file that contains all sorts of metadata about the resources used in the project for the current scene and its properties. It's important to save a scene often by pressing Ctrl+S during development, just as with any other tool.

Typically, Unity opens the last scene you've been working on, although sometimes when Unity opens a project it creates a new empty scene and you have to go find the scene in your project explorer. This can be pretty confusing for new users, but it's important to remember if you happen to open up your last project and wonder where all your work went! Relax, you'll find the work in a scene file you saved in your project. You can search for all the scenes in your project by clicking the icon indicated in Figure 4 and filtering on Scene.



Figure 4 Filtering Scenes in the Project

In a scene, you can't see anything without a camera and you can't hear anything without an Audio Listener component attached to some GameObject. Notice, however, that in any new scene, Unity always creates a camera that has an Audio Listener component already on it.

Project Structure and Importing Assets

Unity projects aren't like Visual Studio projects. You don't open a project file or even a solution file, because it doesn't exist. You point Unity to a folder structure and it opens the folder as a project. Projects contain Assets, Library, ProjectSettings, and Temp folders, but the only one that shows up in the interface is the Assets folder, which you can see in Figure 4.

The Assets folder contains all your assets—art, code, audio; every single file you bring into your project goes here. This is always the top-level folder in the Unity Editor. But make changes only in the Unity interface, never through the file system.

The Library folder is the local cache for imported assets; it holds all metadata for assets. The ProjectSettings folder stores settings you configure from Edit | Project Settings. The Temp folder is used for temporary files from Mono and Unity during the build process.


I want to stress the importance of making changes only through the Unity interface and not the file system directly. This includes even simple copy and paste. Unity tracks metadata for your objects through the editor, so use the editor to make changes (outside of a few fringe cases). You can drag and drop from your file system into Unity, though; that works just fine.

The All-Important GameObject

Virtually everything in your scene is a GameObject. Think of System.Object in the .NET Framework. Almost all types derive from it. The same concept goes for GameObject. It's the base class for all objects in your Unity scene. All of the objects shown in Figure 5 (and many more) derive from a GameObject.


Figure 5 GameObjects in Unity

A GameObject is pretty simple as it pertains to the Inspector window. You can see in Figure 6 that an empty GameObject was added to the scene; note its properties in the Inspector. GameObjects by default have no visual properties except the widget Unity shows when you highlight the object. At this point, it's simply a fairly empty object.


Figure 6 A Simple GameObject

A GameObject has a Name, a Tag (similar to a text tag you'd assign via a FrameworkElement.Tag in XAML or a tag in Windows Forms), a Layer and the Transform (probably the most important property of all).

The Transform property is simply the position, rotation and scale of any GameObject. Unity uses the left-hand coordinate system, in which you think of the coordinates of your computer screen as X (horizontal), Y (vertical) and Z (depth, that is, coming in or going out of the screen).

In game development, it's quite common to use vectors, which I'll cover a bit more in future articles. For now, it's sufficient to know that Transform.Position and Transform.Scale are both Vector3 objects. A Vector3 is simply a three-dimensional vector; in other words, it's nothing more than three points—just X, Y and Z. Through these three simple values, you can set an object's location and even move an object in the direction of a vector.
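For example, a minimal hypothetical script (not from the article) that nudges its GameObject along a vector every frame and sets its scale:

```csharp
using UnityEngine;

public class MoveAlongVector : MonoBehaviour
{
    void Update()
    {
        // Move two units per second along the world Z axis (into the screen
        // in Unity's left-handed coordinate system).
        transform.position += new Vector3(0f, 0f, 2f) * Time.deltaTime;

        // Scale is also just a Vector3.
        transform.localScale = new Vector3(1f, 2f, 1f);
    }
}
```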

Components

You add functionality to GameObjects by adding Components. Everything you add is a Component, and they all show up in the Inspector window. There are MeshRenderer and SpriteRenderer Components; Components for audio and camera functionality; physics-related Components (colliders and rigidbodies); particle systems; path-finding systems; third-party custom Components; and more. You use a script Component to assign code to an object. Components are what bring your GameObjects to life by adding functionality, akin to the decorator pattern in software development, only much cooler.

I'll assign some code to a new GameObject, in this case a simple cube you can create via GameObject | Create Other | Cube. I renamed the cube Enemy and then created another to have two cubes. You can see in Figure 7 I moved one cube about -15 units away from the other, which you can do by using the move tool on the toolbar or the W key once an object is highlighted.


Figure 7 Current Project with Two Cubes

The code is a simple class that finds a player and moves its owner toward it. You typically do movement operations via one of two approaches: Either you move an object to a new position every frame by changing its Transform.Position properties, or you apply a physics force to it and let Unity take care of the rest.

Doing things per frame involves a slightly different way of thinking than saying 'move to this point.' For this example, I'm going to move the object a little bit every frame so I have exact control over where it moves. If you'd rather not adjust every frame, there are libraries to do single function call movements, such as the freely available iTween library.

The first thing I do is right-click in the Project window to create a new C# script called EnemyAI. To assign this script to an object, I simply drag the script file from the project view to the object in the Scene view or the Hierarchy and the code is assigned to the object. Unity takes care of the rest. It's that easy.

Figure 8 shows the Enemy cube with the script assigned to it.


Figure 8 The Enemy with a Script Assigned to It

Take a look at the code in Figure 9 and note the public variable. If you look in the Editor, you can see that my public variable appears with an option to override the default values at run time. This is pretty cool. You can change defaults in the GUI for primitive types, and you can also expose public variables (not properties, though) of many different object types. If I drag and drop this code onto another GameObject, a completely separate instance of that code component gets instantiated. This is a basic example and it can be made more efficient by, say, adding a RigidBody component to this object, but I'll keep it simple here.

Figure 9 The EnemyAI Script
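The Figure 9 listing isn't reproduced in this copy. A minimal sketch of an EnemyAI script matching the description above (a public speed field, find the player once, then move toward it every frame) could look like this; it assumes the player GameObject is tagged "Player":

```csharp
using UnityEngine;

public class EnemyAI : MonoBehaviour
{
    // Public fields show up in the Inspector, where their defaults can be overridden.
    public float MoveSpeed = 3f;

    private Transform _player;

    void Start()
    {
        // Assumes a GameObject in the scene is tagged "Player".
        _player = GameObject.FindGameObjectWithTag("Player").transform;
    }

    void Update()
    {
        // Step a little closer to the player every frame.
        transform.position = Vector3.MoveTowards(
            transform.position, _player.position, MoveSpeed * Time.deltaTime);
    }
}
```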

In code, I can get a reference to any component exposed in the editor. I can also assign scripts to a GameObject, each with its own Start and Update methods (and many other methods). Assuming a script component containing this code needs a reference to the EnemyAI class (component), I can simply ask for that component:
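The inline snippet is missing from this copy; a sketch of what asking for the component might look like (the class and field names are illustrative, consistent with the EnemyHealth class described below):

```csharp
using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    private EnemyAI _enemyAI;

    void Start()
    {
        // Ask for the EnemyAI component attached to the same GameObject.
        _enemyAI = GetComponent<EnemyAI>();
    }

    void Update()
    {
        // Use _enemyAI here, for example to react to the enemy's movement each frame.
    }
}
```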

After you edit code in MonoDevelop or your code editor of choice and then switch back to Unity, you'll typically notice a short delay. This is because Unity is background compiling your code. You can change your code editor (not debugger) via Edit | Preferences | External Tools | External Script Editor. Any compilation issues will show up at the very bottom status bar of your Unity Editor screen, so keep an eye out for them. If you try to run your game with errors in the code, Unity won't let you continue.

Writing Code

In the prior code example, there are two methods, Start and Update, and the class EnemyHealth inherits from the MonoBehaviour base class, which lets you simply assign that class to a GameObject. There's a lot of functionality in that base class, though you'll typically use only a few methods and properties. The main methods are those Unity will call if they exist in your class. There are a handful of methods that can get called (see bit.ly/1jeA3UM). Though there are many methods, just as with the ASP.NET Web Forms Page Lifecycle, you typically use only a few. Here are the most common code methods to implement in your classes, which relate to the sequence of events for MonoBehaviour-derived classes:

Awake: This method is called once per object when the object is first initialized. Other components may not yet be initialized, so this method is typically used to initialize the current GameObject. You should always use this method to initialize a MonoBehaviour-derived class, not a constructor. And don't try to query for other objects in your scene here, as they may not be initialized yet.

Start: This method is called during the first frame of the object's lifetime but before any Update methods. It may seem very similar to Awake, but with Start, you know the other objects have been initialized via Awake and exist in your scene and, therefore, you can query other objects in code easily, like so:
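The original inline example isn't in this copy; a sketch of the kind of lookup meant here (assumes the player GameObject is tagged "Player"):

```csharp
using UnityEngine;

public class PlayerLookup : MonoBehaviour
{
    private Transform _playerTransform;

    void Start()
    {
        // Safe to query other objects here: every object's Awake has already run.
        _playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }
}
```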

Update: This method is called every frame. How often is that, you ask? Well, it varies. It's completely computation-dependent. Because your system is always changing its load as it renders different things, this frame rate varies every second. You can press the Stats button in the Game tab when you go into play mode to see your current frame rate, as shown in Figure 10.


Figure 10 Getting Stats

FixedUpdate: This method is called a fixed number of times per second, independent of the frame rate. Because Update is called a varying number of times per second and isn't in sync with the physics engine, it's typically best to use FixedUpdate when you want to apply a force or perform some other physics-related function on an object. FixedUpdate is called every 0.02 seconds by default, meaning Unity also performs physics calculations every 0.02 seconds (this interval is called the Fixed Timestep and is developer-adjustable), which, again, is independent of frame rate.
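For instance, a small hypothetical script applying a continuous force from FixedUpdate (requires a Rigidbody component on the same GameObject):

```csharp
using UnityEngine;

public class ThrustExample : MonoBehaviour
{
    public float thrust = 10f;
    private Rigidbody _rigidbody;

    void Awake()
    {
        _rigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Apply physics forces here so they stay in sync with the fixed timestep.
        _rigidbody.AddForce(Vector3.forward * thrust);
    }
}
```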

Unity-Generated Code Projects

Once you have code in your project, Unity creates one or more project files in your root folder (which isn't visible in the Unity interface). These are not the Unity engine binaries, but instead the projects for Visual Studio or MonoDevelop in which you'll edit and compile your code. Unity can create what might seem like a lot of separate projects, as Figure 11 shows, although each one has an important purpose.


Figure 11 Unity-Created Projects

If you have a simple Unity project, you won't see all of these files. They get created only when you have code put into various special folders. The projects shown in Figure 11 are broken out by only three types:

  • Assembly-CSharp.csproj
  • Assembly-CSharp-Editor.csproj
  • Assembly-CSharp-firstpass.csproj

For each of those projects, there's a duplicate project created with -vs appended to it, Assembly-CSharp-vs.csproj, for example. These projects are used if Visual Studio is your code editor and they can be added to your exported project from Unity for platform-specific debugging in your Visual Studio solution.

The other projects serve the same purpose but have CSharp replaced with UnityScript. These are simply the JavaScript (UnityScript) versions of the projects, which will exist only if you use JavaScript in your Unity game and only if you have your scripts in the folders that trigger these projects to be created.

Now that you've seen what projects get created, I'll explore the folders that trigger these projects and show you what their purposes are. Every folder path assumes it's underneath the /Assets root folder in your project view. Assets is always the root folder and contains all of your asset files underneath it. For example, Standard Assets is actually /Assets/Standard Assets. The build process for your scripts runs through four phases to generate assemblies. Objects compiled in Phase 1 can't see those in Phase 2 because they haven't yet been compiled. This is important to know when you're mixing UnityScript and C# in the same project. If you want to reference a C# class from UnityScript, you need to make sure it compiles in an earlier phase.

Phase 1 consists of runtime scripts in the Standard Assets, Pro Standard Assets and Plug-ins folders, all located under /Assets. This phase creates the Assembly-CSharp-firstpass.csproj project.

Phase 2 scripts are in the Standard Assets/Editor, Pro Standard Assets/Editor and Plug-ins/Editor folders. The last folder is meant for scripts that interact with the Unity Editor API for design-time functionality (think of a Visual Studio plug-in and how it enhances the GUI, only this runs in the Unity Editor). This phase creates the Assembly-CSharp-Editor-firstpass.csproj project.

Phase 3 comprises all other scripts, those that aren't inside an Editor folder. This phase creates the Assembly-CSharp.csproj project.

Phase 4 consists of all remaining scripts (those inside any other folder called Editor, such as /Assets/Editor or /Assets/Foo/Editor). This phase creates the Assembly-CSharp-Editor.csproj project.

There are a couple other less-used folders that aren't covered here, such as Resources. And there is the pending question of what the compiler is using. Is it .NET? Is it Mono? Is it .NET for the Windows Runtime (WinRT)? Is it .NET for Windows Phone Runtime? Figure 12 lists the defaults used for compilation. This is important to know, especially for WinRT-based applications because the APIs available per platform vary.

Figure 12 Compilation Variations

Platform | Game Assemblies Generated By | Final Compilation Performed By
Windows Phone 8 | Mono | Visual Studio/.NET
Windows Store | .NET | Visual Studio/.NET (WinRT)
Windows Standalone (.exe) | Mono | Unity - generates .exe + libs
Windows Phone 8.1 | .NET | Visual Studio/.NET (WinRT)

When you perform a build for Windows, Unity is responsible for making the calls to generate the game libraries from your C#/UnityScript/Boo code (DLLs) and to include its native runtime libraries. For Windows Store and Windows Phone 8, it will export a Visual Studio solution, except for Windows standalone, in which Unity generates the .exe and required .dll files. I'll discuss the various build types in the final article in the series, when I cover building for the platform. The graphics rendering at a low level is performed on the Windows platforms by DirectX.

Designing a game in Unity is a fairly straightforward process:

  • Bring in your assets (artwork, audio and so on). Use the asset store. Write your own. Hire an artist. Note that Unity does have native support for Maya, Cheetah3d, Blender and 3dsMax, in some cases requiring that software be installed to work with those native 3D formats, and it works with .obj and .fbx common file formats, as well.
  • Write code in C#, JavaScript/UnityScript, or Boo, to control your objects, scenes, and implement game logic.
  • Test in Unity. Export to a platform.
  • Test on that platform. Deploy.

But Wait, I Want More!

This article serves as an overview of the architecture and process in Unity. I covered the interface, basics of assigning code, GameObjects, components, Mono and .NET, plus more. This sets us up nicely for the next article where I'll dive right into assembling game components for a 2D game. Keep an eye on Microsoft Virtual Academy, as I'll be doing a two-day Unity learning event late summer. And watch for local regional learning events at unity3d.com/pages/windows/events.

Adam Tuliper is a senior technical evangelist with Microsoft living in sunny Southern California. He's an indie game dev, co-admin of the Orange County Unity Meetup, and a pluralsight.com author. He and his wife are about to have their third child, so reach out to him while he still has a spare moment at adamt@microsoft.com or on Twitter at twitter.com/AdamTuliper.

Thanks to the following technical experts for reviewing this article: Matt Newman (Subscience Studios), Jaime Rodriguez (Microsoft) and Tautvydas Žilys (Unity)




