Wednesday, 3 December 2014

Game Engines Blog #5: 2LoC too many?

Alright, so for my final blog I'm going to do something that I have no doubt many of my classmates have already done: share my personal gripes with 2LoC and what I would do to change it. Without further preamble, let's dive headlong into it!

Camera
First things first, setting up the camera is a massive chore to me. It's the kind of thing where you don't need to make a lot of cameras unless you're doing some special effects (i.e. reflections), so setting up a function to make it easier feels like it might mean spending more time building a solution than you would on the actual problem. For an idea of why I might hate the camera, here's sample code to get a perspective camera working:

cameraEnt = pref_gfx::Camera(m_entityMgr.get(), m_cpoolMgr.get())
              .Near(1.0f)
              .Far(1000.0f)
              .VerticalFOV(math_t::Degree(60.0f))
              .Position(math_t::Vec3f(0.0f, 5.0f, 5.0f))
              .Create(m_win.GetDimensions());

{
  // Set camera settings
  math_t::AspectRatio ar(math_t::AspectRatio::width((tl_float)m_win.GetWidth()),
                         math_t::AspectRatio::height((tl_float)m_win.GetHeight()));
  math_t::FOV fov(math_t::Degree(60.0f), ar, math_t::p_FOV::vertical());

  math_proj::FrustumPersp::Params params(fov);
  params.SetFar(1000.0f).SetNear(1.0f);

  math_proj::FrustumPersp fr(params);
  fr.BuildFrustum();

  cameraEnt->GetComponent<gfx_cs::Camera>()->SetFrustum(fr);
}

Now, setting all the variables such as field of view, near and far during the camera's construction only to have to re-declare them when making a frustum feels REALLY redundant. I would expect that setting this stuff up in the camera would automatically handle setting up the frustum during creation. Now you might say, "Oh, you have no indicator to set it as perspective, that's why!" Well, there is a function where you set perspective with a boolean value, but it doesn't seem to do anything. Overall, it makes the whole process of making a camera, one of the most necessary entities in a game, feel like it has six more lines than it should, while simultaneously being unintuitive: I don't think most users would think to create a frustum at this point in building their game.
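If it were up to me, the engine would hide the redundancy behind a helper. Here's a rough sketch of what I mean, written as a member function of the same class as the sample above; anything not shown in the original snippet (the function itself, the m_cameraEnt member) is a guess on my part, NOT real 2LoC API:

// Hypothetical helper: build the camera AND its frustum from one set of
// parameters, so nothing gets declared twice.
void CreatePerspectiveCamera(tl_float nearPlane, tl_float farPlane,
                             math_t::Degree vertFov,
                             const math_t::Vec3f& position)
{
  m_cameraEnt = pref_gfx::Camera(m_entityMgr.get(), m_cpoolMgr.get())
                  .Near(nearPlane)
                  .Far(farPlane)
                  .VerticalFOV(vertFov)
                  .Position(position)
                  .Create(m_win.GetDimensions());

  // Build the frustum from the SAME parameters instead of re-declaring them.
  math_t::AspectRatio ar(math_t::AspectRatio::width((tl_float)m_win.GetWidth()),
                         math_t::AspectRatio::height((tl_float)m_win.GetHeight()));
  math_t::FOV fov(vertFov, ar, math_t::p_FOV::vertical());

  math_proj::FrustumPersp::Params params(fov);
  params.SetFar(farPlane).SetNear(nearPlane);

  math_proj::FrustumPersp fr(params);
  fr.BuildFrustum();

  m_cameraEnt->GetComponent<gfx_cs::Camera>()->SetFrustum(fr);
}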

Shader Limitations
Next up are the limitations on how you can use shaders with 2LoC. Creating materials and such is actually pretty simple and I do like it... But being unable to access the geometry shader (as well as the other lesser-known stages) ends up hampering development. Case in point: dealing with particles. Normally for a particle you could just send in a single vertex and then build a quad from it in the geometry shader, which makes things much more efficient calculation-wise, since you only apply the physics calculations to that single vertex in the shader instead of doing them four times (assuming you have vertex indexing in the engine!), once for each corner of a quad. So this not only becomes a limitation on what one can do with shaders, it also becomes a limitation on how efficient you can make your processes, and that hurts development in the long run!

General user unfriendliness
There are a few other problems I could go into, such as the extensive namespaces which make dealing with things difficult (many times I couldn't copy a material over to another material because, as it turns out, material 1 is in one namespace and material 2 is in a different one... despite both being materials), as well as things that the engine could benefit from but doesn't have for school reasons (i.e. scripting and particles). But above all else is just how obnoxious the engine can be to use. There are a lot of tricks that you need to work out for yourself in order to get some things to work, and it would be easier if we had some rudimentary documentation to go by (i.e. a list of what the different functions do); without it, working out how to do what should be very simple things becomes a real challenge. For example, when I create an entity during runtime, its mesh and material won't appear unless I go to the appropriate systems and call "Initialize(ent)". I would never have guessed that I needed to do this until a friend pointed it out to me.
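For the record, here's roughly what that runtime fix looks like. The system member names are from memory and may not match the real 2LoC API exactly, so treat this as a sketch:

// Spawning a renderable entity mid-game: nothing shows up until the
// relevant systems are explicitly told about the new entity.
// (System names below are my recollection, not verified 2LoC API.)
auto ent = /* ... build entity with mesh + material components ... */;
m_meshRenderSys->Initialize(ent);  // without this, the mesh never appears
m_materialSys->Initialize(ent);    // same story for the material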

Overall, the engine seems to suffer from a serious problem: it's usable once you've become used to it (which isn't really praise, since that can apply to ANYTHING), but it's very unfriendly to new users and would benefit from a bit of documentation to go with the samples provided. Even something as simple as "Here is a list of basic functions you should know, but which aren't immediately discoverable, and what they do" would give users something to reference easily, like they would with Unity's documentation. In the end, the engine has a long way to go before I'd say it would work well as an engine for general use.

That's it! Sorry if I seem a little bit harsh but this engine has bitten me too many times to count and has left me a bit miffed. I hope you enjoyed the read, cheers!

Friday, 28 November 2014

Game Design Blog #3

This week I'm going to talk about one of my favourite games, and one of the reasons I love it: its pacing! The game in question is the lovely, the fantastic...

So first things first, I will go more into depth on the overall pace of the game rather than the pace of specific levels. This is because there isn't a whole lot to write about in terms of the pacing of levels; it's not that it's bad, it's just pretty standard. Now then, why do I love the overall pacing of the game? Well, to put it simply, the game does a good job of keeping a steady but ever-building pace. When the game starts (and after the tutorial), the player only has 2 members on their squad and is given 4 missions which reward them with additional squad members. By doing this, the player is given a chance to experiment with their squad without the number of unused squadmates getting out of hand. After a certain point in the game the player unlocks an additional 4 missions which work much the same way, allowing for a break before the player gets new squad members.

Another interesting thing is the Loyalty Missions. These are missions which require the player to go in with one squadmate preselected for them. This gives the player a chance to interact with characters they might not necessarily pick themselves and get used to working with them. It also serves as a strong way of preparing the player for the finale of the game, in which they go in with every squad member and have to divide them up into different teams while making use of their different skill sets.

In terms of story pacing, the game does an excellent job of providing points that ramp up into action while offering brief interludes where the player isn't under a lot of stress but the game isn't boring; i.e. the missions to acquire new squad members sit in the slower points of the game and serve to build anticipation for the next big missions. The overall pacing of the story does a good job of building the player's anticipation and interest until the big finale, which puts the player in a position where they simply do not want to stop until the end of the game.

Overall, the pacing of Mass Effect 2 is quite good at maintaining the player's interest and allowing them plenty of time to learn the skills and squad members that they gain over the course of the game.

That's it for now!

Game Design Blog #2

Alright, time for Game Design blog numero two! Unlike the last blog I'm not going to talk about a specific game; rather, I'm going to talk about a control scheme that has become prevalent in a certain genre of game and put in some guesses as to how it has persisted for so long in gaming culture. What control scheme might I be talking about? Well...

That's right, I'm going to be talking about the First Person Shooter (and Third Person Shooter) control scheme on the PC! The above picture is from the game Team Fortress 2, so there are some parts of it that won't carry across games, but the main things you need to focus on are the WASD, R, Ctrl and Space keys. Now first of all, what do these keys represent? Well, WASD is movement in the 4 cardinal directions relative to the player, R is for reloading, Ctrl is usually for crouching and Space is used for jumping. This layout has been carried across many, MANY games in the past. Why might this be?

Well, if you look at the layout of the WASD keys, they're positioned in such a way that the player can leave their left hand in the position they normally have it on the keyboard, allowing the fingers to line up in a natural and comfortable way. Another advantage is that the ring, middle and index fingers line up with the 4 keys in three distinct columns, allowing for easy use. Finally, the WASD keys form a sort of compass that the player can easily interpret. This analogy works incredibly well, as the keyboard movement of forward, left, right and backward carries over perfectly between the player's perspective and the perspective of the character in game.

Now let's take a look at the placement of the other keys: the position of the R key keeps it within easy reach of the player's index finger, and it's tied to a mnemonic that players internalize quickly, as they only have to think about pressing R to Reload. The Space key would have been chosen because jumping is an action players use quite frequently, and its size and position relative to the player's thumb allow for easy and frequent use. Lastly, the Ctrl key makes use of the player's remaining finger while also staying within easy reach of a single hand.


Overall this layout allows the player to easily use one hand to perform all their movements and any necessary miscellaneous movements. However, shooters also rely heavily on the player's mouse which acts as a camera control through movement. This allows for rapid yet precise manipulation of the player's perspective. The left and right mouse buttons will also see use for a weapon's primary (left) and alternate (right) attacks. This works well as the left click on the mouse is often used for most primary actions such as selecting icons or pressing buttons, allowing the player to make an easy association between the click and the action. The same can be said for the right click which sees use for additional actions in day-to-day use.


Overall, the control scheme for shooters is one that allows the user to make many easy associations with the use of the keyboard and immediate actions in the game, as well as analogies which allow the user to perform actions with the keyboard without having to put too much thought into them. This scheme is very sensible and will likely be around for many more years to come.

That's it for now!

Game Engines 4: Valve Hammer Editor

Okay, so this blog will be a bit more jumbled since I'm going to be talking primarily about 2 very different topics: Binary Space Partitioning and different methods for culling. The one thing that links these two topics is that I'll be talking about them in the context of Valve's Hammer Editor (a level-editing tool for the Source Engine), and most of the discussion will be in regard to how it works with Team Fortress 2, as this is the area in which I have the most experience using the editor.

First of all, let's talk about what Binary Space Partitioning is. BSP essentially takes a space (such as a level) and recursively subdivides it into nodes until some requirement is met. This allows a level to be broken down into far more manageable chunks. Unsurprisingly, this method was once (and in some cases still is) seen primarily in shooters, as they usually end up having a variety of rooms which can be interpreted as nodes on a tree. In this scenario, using a BSP allows the engine to cut down on the number of checks it needs to make for render culling, as it can easily work out all the rooms attached to the one the player is currently standing in.
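To give a rough idea of the structure, here's a minimal sketch in generic C++ (a toy layout, nothing like how Source actually stores its BSP data):

#include <memory>
#include <vector>

// A splitting plane: ax + by + cz = d.
struct Plane { float a, b, c, d; };

// One node of a BSP tree. Internal nodes split space with a plane; leaves
// hold whatever geometry ended up in that region (a "room", loosely).
struct BspNode {
  Plane split;
  std::unique_ptr<BspNode> front;  // positive side of the plane
  std::unique_ptr<BspNode> back;   // negative side of the plane
  std::vector<int> leafPolygons;   // only populated in leaves
};

// Walk down the tree to find the leaf (room) containing a point.
const BspNode* FindLeaf(const BspNode* node, float x, float y, float z) {
  while (node->front && node->back) {
    float side = node->split.a * x + node->split.b * y +
                 node->split.c * z - node->split.d;
    node = (side >= 0.0f) ? node->front.get() : node->back.get();
  }
  return node;
}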

The practicality of this system for shooters makes it no surprise that it's used in the Source engine, which is primarily used to develop shooters. The system can be further enhanced by using portal-based rendering (another prominent feature of the Source engine). Portal-based rendering is essentially the process of treating every door or window connecting two areas as a 'portal' that the player can see through. When the engine goes to render a scene, it uses frustum culling to remove any objects that are not within view of the player... but it then runs into a problem where it will still try to render objects hidden behind walls. To cut down on this, we only render the objects in one room before casting frustums through each visible portal to figure out which objects in neighbouring rooms we can see; once the engine knows which objects in other rooms are visible, it adds them to the render list. With BSP and portal rendering combined, we can pretty effectively cut down on the demands put on the renderer, since we reduce the number of rooms being checked for rendering and shrink the render list by a significant amount.
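The gathering step might look something like this sketch (heavily simplified: all of these types are made up for illustration, real implementations clip the frustum to the portal polygon, cull back-facing portals, and track visited rooms instead of using a crude depth limit):

#include <vector>

struct Bounds { /* AABB or sphere; details omitted */ };

struct Frustum {
  bool Contains(const Bounds& b) const;         // frustum-vs-bounds test
  Frustum ClipToPortal(const Bounds& b) const;  // shrink to the opening
};

struct Room;
struct Object { Bounds bounds; };
struct Portal { Bounds bounds; Room* destination; };
struct Room {
  std::vector<Object> objects;
  std::vector<Portal> portals;
};

// Gather the player's room, then recurse through any portal that survives
// the frustum test, using a frustum narrowed to that portal's opening.
void GatherVisible(const Room& room, const Frustum& frustum,
                   std::vector<const Object*>& renderList, int depth = 8) {
  if (depth == 0) return;  // toy guard against portal cycles

  for (const Object& obj : room.objects)
    if (frustum.Contains(obj.bounds))
      renderList.push_back(&obj);

  for (const Portal& portal : room.portals) {
    if (!frustum.Contains(portal.bounds))
      continue;  // can't see through this doorway at all
    GatherVisible(*portal.destination,
                  frustum.ClipToPortal(portal.bounds), renderList, depth - 1);
  }
}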

Now, how does the engine know where to find the portals, and how does it figure out ways to make subdividing the level easier? Simple: you leave it up to the level designer (or at least whoever is making the level)! One of the things about the Hammer Editor is that it has a LOT of materials that never get drawn at render time, meant instead to give the engine an easier time or to drive gameplay (i.e. spawn areas will often use a material with a No Draw texture, but have gameplay triggers tied to the box representing the spawn). For some idea of what the editor looks like while you work in it, here's a screenshot:


I'm sorry to have brought up the Source Engine again, but I just find it to be such a great little engine that has led to a lot of interesting games. While it may not be on the same level of flexibility as Unreal or Unity, it's still capable of producing some truly magnificent games, and its level editor can give a lot of unique insights into how the engine as a whole operates (from interpreting game systems to rendering). For an idea of how flexible the level editor is, it allows for something that could be seen as rudimentary scripting of its gameplay systems (i.e. this object will move when this condition is met), and a level designer can VERY EASILY set this all up.

That's about it for my ramble about Source. If you made it this far, then congratulations. Cheers!

(Image of the Hammer Editor gotten from: http://www.moddb.com/engines/source/images/hammer-editor )

Friday, 7 November 2014

Game Engines 3: Scripting

And now for Game Engines blog #3! This week I'll be giving a breakdown of scripting and how it applies to game engines. First of all, what is scripting? Scripting is essentially creating a smaller program that can be executed by the game. Scripts come with a variety of perks which make them invaluable in game development: they can be easily used in an ECS; they compile a lot faster than normal C++ files; and they can be recompiled at runtime. Above all else, they can easily be edited and programmed.

First things first, how do scripts work? A script can be written in Notepad (although you really shouldn't, since Notepad is horrible to work with) like a text file, and is then interpreted by the engine in a way that maps it onto the game's systems. It's pretty straightforward, and writing one is more or less identical to how one would set up a program in C++. In fact, when you compare how scripting is supposed to work with the method by which we create our custom components for 2LoC, it's very similar, with the main difference being that the custom components take far longer to compile than a script.

Now, why do these features make scripts so great? Well, for starters, when it comes to an ECS you can make a script component which only needs a path assigned to it to be created, much like you would with a mesh component. This saves you the trouble of creating custom components and custom component systems like you have to in 2LoC, and cuts down on the number of files that have to be linked during compilation. It also lets you keep the number of different component types in your ECS to a minimum, as you aren't forced to create unique ones for each circumstance and can instead rely on multiple script components to do the same job.
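As a minimal sketch of that idea (generic C++, not 2LoC or any particular engine; the path is a made-up example):

#include <string>

// A generic script component: the ECS only stores *where* the script lives
// plus a handle the scripting runtime fills in. One component type covers
// every scripted behaviour, instead of one custom component per behaviour.
struct ScriptComponent {
  std::string scriptPath;  // e.g. "scripts/enemy_patrol.lua" (hypothetical)
  int runtimeHandle = -1;  // assigned by the script system when loaded
};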

Scripting also has a few perks in terms of hardware, as mentioned earlier. The very nature of scripts eliminates the need to create and link different objects for classes, and it cuts down the overall amount of compiled code drastically, giving developers much faster compile times. The other advantage of working with scripts is that they can easily be recompiled without interrupting the system as a whole. This is actually a huge advantage, as it allows you to change code on the fly without having to rebuild the project: all you have to do is open the script, edit it, save, and recompile the scripts.
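That reload-on-the-fly behaviour is usually just a timestamp check under the hood. Here's a sketch, where ReloadScript is a stand-in for whatever hook your scripting runtime would actually provide:

#include <filesystem>
#include <string>

// Hypothetical runtime hook: recompile/re-interpret the script in place.
void ReloadScript(const std::string& path);

// Re-load a script whenever its file changes on disk, without stopping
// the game.
void HotReloadIfChanged(const std::string& path,
                        std::filesystem::file_time_type& lastLoadTime) {
  const auto stamp = std::filesystem::last_write_time(path);
  if (stamp > lastLoadTime) {
    lastLoadTime = stamp;
    ReloadScript(path);
  }
}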

Lastly, scripts are useful for developers who aren't programmers. They are generally easier to grasp than the underlying code of an engine and are easier to play around with. The ability to edit and reload them without interrupting the game is also very useful for design, as it allows designers to see changes to values immediately rather than having to stop and recompile an entire project or ask a programmer to make the change for them.

Overall, scripting is an invaluable tool in game development. It allows us to save time in various areas from building the final product to quickly testing the game as we go. It also helps by reducing the complexity of developing the game and can help drastically improve a system's versatility: Case in point, Unity is used to create a large range of games in different genres while developers can only add their own code through scripts. The only difficulty comes in setting up a scripting language to work with your engine, but once that's done you've overcome the hardest problem.

Until next time!

Friday, 17 October 2014

Game Engines 2: Components

Time for blog #2 for Game Engines! Today I'm going to be talking about one particular topic that has become very prominent in the Game Industry and in game engines in general: Component based systems!

So first of all, what is a component-based system? Essentially, a component system relies on two things: Entities and Components. The Entity is the container for all the components and is basically an object in the game world. The entity only has to worry about having a tag and an ID (and even then, an ID is usually enough) so that developers can easily track down an entity in the system. On the other hand we have the Components, which are the meat of the object and what make it unique. Components can range from transformations to meshes to scripts, and contain the different values that allow the entity to interact with the game world.

So now that we know what an Entity and a Component are, what does this mean? Basically, we get a huge range of flexibility with our Entities. In older designs that rely heavily on inheritance, we would have to derive from a long chain of classes to create an entity that does what we want (i.e. Entity->Character->Enemy->Flying Enemy->Drone), and we would have a harder time crossing features over between different classes. With components, we only have to worry about adding something such as a Flying component, an Enemy component and any components specific to the Drone enemy. This allows anyone working with the engine to easily make an entity that fits their needs (so long as they can make their own components).
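Here's a quick sketch of that composition idea (generic C++, not tied to any engine; the component fields are just examples):

#include <cstdint>
#include <unordered_map>

using EntityId = std::uint32_t;

struct FlyingComponent { float altitude, liftSpeed; };
struct EnemyComponent  { int health, damage; };
struct DroneComponent  { float scanRadius; };  // drone-specific data

// Instead of Entity->Character->Enemy->Flying Enemy->Drone, a drone is
// just an ID plus whichever components it happens to need.
struct World {
  std::unordered_map<EntityId, FlyingComponent> flying;
  std::unordered_map<EntityId, EnemyComponent>  enemies;
  std::unordered_map<EntityId, DroneComponent>  drones;
};

EntityId MakeDrone(World& w, EntityId id) {
  w.flying[id]  = {10.0f, 2.0f};
  w.enemies[id] = {50, 5};
  w.drones[id]  = {30.0f};
  return id;
}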

That's the general summary of how an Entity Component System works on the surface. In my next blog I'm going to go into one of the less well-known components you can use in an ECS (and one that isn't currently implemented in 2LoC): Scripting!

Friday, 26 September 2014

Game Design Blog #1

And hot-on-the-heels of my Game Engines blog is Game Design! For this one I'm not totally sure what to do, so I'm going to play it safe and talk about the recent topics and how they relate to one of the few games I have played recently: Tales of Xillia 2! The two topics I'm going to focus most prominently on are the controls and the UI.

First a little summary of the game: Tales of Xillia 2 is a role-playing game with instanced, party-based combat. When the player enters combat they are in a circular arena with a character they control and up to 3 AI (or player controlled) characters. Now on to gameplay!

First, the controls, which are on a PS3 controller. The game itself switches between two states (not counting things like menus, and without going into a LOT of detail): the overworld state and the combat state. The overworld state is when the player is able to run around freely (i.e. towns, fields and dungeons); obviously this is simplified, but that's to cut down on length. In this state the player can encounter monsters (in the latter two areas), moves their avatar with the left joystick, and interacts with objects using the X button. The player then transitions to the combat state by colliding with an enemy on-screen, and in this state the controls become drastically more complex. The player moves their character by pushing the left joystick left or right (the player is on a fixed axis for the most part), but can hold L2 to enter a 'free-run' mode where they can move in any direction on the field with the left joystick. The player can attack and guard using the X and square buttons respectively. The player can also use the circle button with the left joystick, or just the right joystick, to activate special attacks. This is more or less the basic gameplay (without going into the depth of the special stuff). Now, a quick breakdown of how intuitive the gameplay feels!

The overworld gameplay doesn't have much that needs to be said about it. It works well enough, but it isn't exactly special. The gameplay in the combat state, though, flows very smoothly: the basic attack handles well, and tying the special attacks to the movement controls in a small-but-unintrusive way actually makes the gameplay much easier to handle without breaking the flow. I don't think there's much that could be done to improve it, as it requires the smallest possible amount of movement on the player's part to change actions.

One combat mechanic I previously omitted is the ability to partner with one other party member and work with them in combat (i.e. the character can block to stop you from being attacked from behind, or flank an enemy you're fighting). The way a player links with another character is by pressing the D-pad in the direction of that character's icon on the HUD. My reason for listing this separately from everything else is to tie into the...

UI! The UI for the overworld state is pretty straightforward, with a mini-map that shows where the player and other things of note are, as well as optional pop-ups like objectives. It's very simple to understand and isn't intrusive, so it gets a pass! The combat state UI is very much geared towards melding with the controls: important meters sit on the left and right, while the names and stats of the 4 party members are laid out in a diamond in the bottom-centre portion of the screen. This last part works very well with the ability to link with other party members, as the icons line up perfectly with the D-pad, making it very intuitive. Overall, the UI is very clean and easy to read (for me, anyway). But I'll let you decide by finally providing a screenshot of the game!


And no, I'm not going to go into super-fine detail on everything (I'm sure you can see that there are quite a few finer details), as that would just mean going into a lot of depth that you only learn over time.

At the end of the day, the game is very well-designed and friendly to both old and new players. Though far more importantly than that: The game is FUN!

And with that, I'm going to call this blog done...Namely because I'm rambling.

Thanks for reading, cheers!

Game Engines 1: The Source

Alright, after a long delay I'm back! Jumping right into it, I'm going to be talking about one of my favourite game engines for this new class, the wonderful engine that has given us so many games by Valve: the Source Engine! I'll be doing a quick review of it, from its beginnings to its future, going through some of the special features implemented in it as time went by. But first, a quick summary of what the engine is!

As mentioned before, the Source Engine is Valve's in-house game engine. It was designed to be used for first-person shooters (a genre which Valve excels in) but has been successfully repurposed to be used for other genres (such as top-down shooters like Alien Swarm).  It evolved from the GoldSource Engine which was built off of the Quake Engine.

Now, there's something rather special about the Source Engine in relation to our class: it is made entirely in C++ (well, C++, with OpenGL and Direct3D used for rendering and shaders). This means there are only two things stopping a UOIT student from making an engine as powerful as Source: experience and manpower. I consider that fantastic motivation to start learning the ins and outs of engines to make my own. Anyway, on to the features that were added!

The Source Engine is constantly evolving, with new games adding new features to it. For example, Left 4 Dead 2 added dynamic 3D wounds to the engine: essentially a unique way of creating wounds on characters that change the appearance of the model in a drastic way (i.e. arms falling off and having the bone sticking out). DOTA 2 introduced keyframed vertex animation, and then there's a much longer list of features (things like blended skeletal animation, inverse kinematics, dynamic water effects, etc.). Valve doesn't track the engine's development with version numbers like you would typically see (i.e. version 1.0.0, version 1.0.1, etc.); it simply updates in smaller or bigger ways as required.

Looking at its design, the Source Engine has the capacity to keep growing and adding new features as time goes on. That approach has its limits, though: new techniques and design patterns are constantly emerging, and the engine's base code can't easily be altered without potentially breaking everything built on top of it. This seems to be something Valve has picked up on, as they are currently developing (or, if rumours are to be believed, have already built) a Source Engine 2. This engine should see an overhaul of the lower-level architecture of the system so that it can run much more efficiently and have greater potential than its predecessor.

On a semi-unrelated note, the Source Engine is also a great example of how powerful a game engine can be in the right hands. Valve has used their engine not only to make games but also to make offline tools such as the Source Filmmaker and the Hammer Editor, which they then use to improve their production pipelines. The Source Filmmaker allows for easy development of in-engine cutscenes by letting them compose a scene in the engine like you would in Maya, but using the engine's own lighting systems; the best example of its potential is the Meet the Team videos for Team Fortress 2. Meanwhile, the Hammer Editor functions as the company's in-house map-editing tool. Not only does it allow users to create the shape of the map, it also lets them set up things such as game triggers which the engine can then use to construct the full level experience.

Overall, the Source Engine is a fantastic piece of technology, in my mostly biased opinion. Unfortunately, I have never actually used it, so I can't comment on how user-friendly it is or on the downsides of developing with it. I hope you've enjoyed reading this rambling blog!

Cheers!

Wednesday, 2 April 2014

Curtain Call

So this is the last blog for the class, 10/10 (probably wouldn't read again), the big finale... I know that in the past I've generally tried to talk about stuff I find really interesting, like Finite State Machines or Valve's water shader, but I'm not going to take that approach with this blog... Nor will I play it safe, pick some topic we were taught in class, and just repeat everything that was said with maybe a few additions. Instead, I plan to do two things in one today: I'm going to do a review of my own, and talk about myself!... By which I mean I'm going to talk about what I've learned, what I plan to do and learn, and, if I feel I have the time, why I plan to do it.

What I learned in class
So let's talk about the fun things I've learned from this class, as well as from the research I did because of it. To start things off, FBOs are a fantastic concept to me. They let you do so much and can lead to some pretty interesting stuff. Effects like bloom, thresholding and tone mapping are fine examples of what you can do with FBOs, and reasons why I'm quite glad to have learned them!

Another topic is normal mapping. The fact that it lets you get much more out of lower-resolution models is fantastic, as it gives you a similar visual quality without putting as much strain on the GPU, and you can load models much faster! The idea behind normal mapping also gave rise to a few of my other ideas early on in the term, so it had the nice side effect of sowing the seeds for them.

Next up is reflection. This one is pretty easy, as it's just rendering the scene from a different point of view, but it allows for quite a bit of flexibility (i.e. mirrors and water surfaces, among other things).

Then there's also Fluid Dynamics, Implicit Surfaces, Shadow Mapping and Deferred Rendering, which are all interesting topics that I've sadly only had the time to read up on but not actually apply... Though that will hopefully change within the next 12 months.

What I plan to do and learn
Some of this has already been alluded to in previous blogs, but I'm going to reiterate anyway. Within the next year I plan to apply most of the stuff I have yet to apply, primarily shadow mapping and deferred rendering. I also hope to perform mesh skinning with dual quaternions, as it seems to be the most efficient method at this point in time and the one with the fewest artifacts. I also hope to learn inverse kinematics, as it can provide a lot of useful features, as evidenced by its use in Uncharted (which we learned about last year). On top of that, I want to apply a few other techniques we learned in class, like depth of field and SSAO... Long story short, I want to be able to say that I completed every single question we were given for homework, even the insane upgrades.

On top of all of this I have two major goals: getting my style of programming up to a more professional level, where I manage memory well and keep things optimized (ideally using DOD where it should be used), and starting work on my engine without it looking like I stopped the moment school started up... Ideally, in a year's time I'd be past my minimum goal of being a quarter of the way into development.

Why am I doing this? (Warning: Some of this stuff may get weird)
I'm sure this is a question that has been asked about me many times, as I often seem to take harder paths than most (i.e. I set out with the intention of writing blog posts about everything BUT what we covered in class, and last term I made a point of doing harder things like skeletal animation before we even got close to touching the concepts), and I've also shown a tendency to bite off more than I can chew. So why would I do all of this? Well, it goes back to two very simple questions, ones that I've seen quite a few students ask and be asked at one point or another: "Why are you here?" and "Why do you want to make games?" They're innocent enough questions, and everyone should be asked them at some point, because they really make you re-evaluate yourself and your goals in life.

So what are my answers to those questions? Well, you see: video games as a whole have done a lot for me in the past. They've picked me up when I was down and they've given me the strength of will to get past the obstacles in my life. They've also brought me close to a lot of people, many of whom have become like close family members despite the distances between us. They've done so much for my life and, knowing that, I can safely say that I want to do the same for someone else. I could probably go on to talk about a lot more, but there's a time and place for everything, and any further statements might look like I'm angling for special treatment.

So why am I saying this? Well, in this situation I feel it's best to voice my personal motivations, as it adds a layer of perspective to all the questions I ask regarding programming and game design. Suddenly, all my questions about engines, and my choice to look into topics beyond the class material, go from idle curiosity to goal-driven research that can help me out when I try to start my own company.

I hope you've enjoyed reading this long, dragged-out post that's really just me ranting incoherently, because I have sincerely enjoyed this class! In the end, it has brought me one step closer to my end goals, given me a lot of wonderful information and experiences to draw on in the future, and really helped me improve drastically as a developer. Thank you for a wonderful term!

Until next term, cheers!
Cameron Nicoll

P.S. Managed to get this in 7 minutes before April 3rd, totally didn't make my closing statements from my last blog look like a lie!

Tuesday, 1 April 2014

Flowing Fluids

THE PENULTIMATE BLOG!

Yeah, I just needed to get that off my chest. I mean, how often do you get to use the word penultimate? So for this penultimate blog I was trying to think of unique and fun topics to discuss. A certain professor recommended Implicit Surfaces, and I won't lie, they made very little sense to me; I have yet to develop the necessary skills for working through the usual jargon you find in academic papers. So instead I'm going to talk about another topic: Fluid Dynamics!

So I had looked into fluid dynamics before, namely because when your prof says you shouldn't do something from scratch, that's either a warning or a challenge... I chose to see it as a challenge (which I, regrettably, could not meet). I've only really looked at some of the basics of fluid dynamics, so my description won't exactly be up to the level you would expect from something like, say, an academic paper. To begin with, a simple way of looking at fluid dynamics is in terms of a grid. Each cell on the grid represents a parcel of the fluid and contains information such as colour, direction and magnitude. As the simulation runs through its updates, we can use the information in each cell to figure out what it should look like (i.e. should the colour become more vivid because more force is accumulating, or is the cell untouched and should therefore have no colour).

One way of simplifying fluid dynamics is to think about it in terms of a scalar field (Field A) and a vector field (Field B). Field A denotes what colour (or density) each point in the simulation should be, and we can use bicubic interpolation between points to figure out the colour on a per-pixel basis during the render process. This results in a smoother colour blend for things such as smoke. Field B is used purely for the calculations and is ignored during the render step: in order to figure out how each cell changes, we need some way of remembering the direction the fluid is travelling in at that point, as well as the force behind it. This is important for things such as diminishing colour as the fluid 'moves'.
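To make the two fields concrete, here's a toy sketch of the grid (generic C++; the advection step is deliberately naive, real solvers trace backwards through the velocity field and interpolate):

#include <vector>

// A toy 2D fluid grid: a scalar field for colour/density (what we draw)
// and a vector field for velocity (what we simulate with).
struct FluidGrid {
  int w, h;
  std::vector<float> density;     // "Field A": one scalar per cell
  std::vector<float> velX, velY;  // "Field B": one vector per cell

  FluidGrid(int width, int height)
      : w(width), h(height),
        density(width * height, 0.0f),
        velX(width * height, 0.0f), velY(width * height, 0.0f) {}

  int idx(int x, int y) const { return y * w + x; }
};

// One naive advection step: push each cell's density along its velocity.
// Only meant to show the density field being moved by the velocity field.
void Advect(FluidGrid& g, float dt) {
  std::vector<float> next(g.density.size(), 0.0f);
  for (int y = 0; y < g.h; ++y)
    for (int x = 0; x < g.w; ++x) {
      int nx = x + static_cast<int>(g.velX[g.idx(x, y)] * dt);
      int ny = y + static_cast<int>(g.velY[g.idx(x, y)] * dt);
      if (nx >= 0 && nx < g.w && ny >= 0 && ny < g.h)
        next[g.idx(nx, ny)] += g.density[g.idx(x, y)];
    }
  g.density.swap(next);
}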

This is more or less my understanding of the very basics of fluid dynamics. There is the option of using more equations to create more realistic dynamics, such as the Navier-Stokes equations, which take into account other factors such as pressure. Unfortunately, I will not be going into those at this point in time, as I feel that I've still got quite a bit to learn in that area.

So yeah, that's more or less my understanding of Fluid Dynamics summed up in what seems to be one of my shorter blogs. I was kind of hoping that this one would come across as bigger and more interesting but I guess I didn't maintain the momentum from the last one. Oh well, they can't all be winners.

You can expect to see my final blog probably tomorrow/later today or tomorrow-tomorrow (as in April 3) but cheers for now!

Friday, 28 March 2014

Start Your Engines!

So I'm going to do my usual thing and go against the grain here by talking about something that's kind of related to our class but at the same time isn't something we'd really talk about at all: game engines! This is a field that's captured my interest, and it can pertain to graphics since some engines are specifically designed for rendering (i.e. OGRE), so I'm not entirely off the rails with this! Now, I get the feeling this is going to be a rather long blog, as I'm going to start with a run-through of what game engines are and what they do, then give some examples of engines and a little bit about their development cycles (Source and Crystal Tools), and finally some of my own plans *insert evil laugh here*. So without further preamble...

Disclaimer: I'm still pretty ignorant when it comes to a lot of this stuff so don't expect grade-A, dead on explanations.

Okay, so what is a game engine? It's pretty much all there in the title: it's the engine for the game, the powerhouse which can get a game moving from the get-go. In a way, it's pretty similar to a framework (which is often used as a jumping-off point for an engine), but it comes with a lot more functionality designed to streamline the development cycle. One of the major things about engines is that they usually come equipped with the basic necessities for games, such as audio, collision and so on. The three basic types of game engines are all-in-one engines, framework-based engines (such as Source) and graphics engines. From what I've understood, the all-in-one engines are the ones like Unity where you can make pretty much everything right off the bat without needing anything extra, while the framework-based ones usually end up spawning tools that can be utilized alongside them (i.e. the Source Engine comes equipped with the Hammer Editor, a tool specifically for building levels that work with the engine, as evidenced by the ever-increasing list of custom TF2 maps). Graphics engines, then, specialize in the rendering process of games and can be used for pre-existing rendering techniques as well as managing scenes and whatnot. That's more or less a quick (if jumbled) breakdown of engines.

So, as I mentioned earlier, I'm going to talk a bit about two game engines: one I know a fair bit about and another I don't know nearly as much about. Let's start with the one I do know: Valve's Source Engine. When you purchase a Source game (such as Team Fortress 2, Half-Life, etc.) you get access to a few of the Source engine tools. These include things like the Hammer Editor, a model viewer, and some tools for making mods and so on. The Source Engine is constantly being updated, and with it every game it supports, as many owners of Valve games on Steam are aware when they get little updates for all their games that generally don't seem to change anything. The engine is in constant development, and because it's an in-house engine, Valve can constantly extend it with their own new tricks, such as when they introduced a new method of rendering water in Left 4 Dead 2! One key difference between the development of the Source Engine and the next engine I'm going to go into is that the Source Engine was initially built with a specific game in mind: Half-Life 2. This gave the team a clear goal when designing many of the features, as they knew they would have to specialize it for first-person shooters; a fact which has held true a decade later, even as the engine has been expanded over time.

The next engine I'm going to talk about is one I don't have a whole lot of info on, and it serves more as a warning about why you should have a single game in mind from the start: Square Enix's Crystal Tools engine. This engine has produced some visually impressive games (the entire Final Fantasy XIII collection) and seems to work pretty well... though reading up on it revealed that it had a somewhat shaky start. The engine was initially built with Final Fantasy XIII in mind as the title it would support, but it quickly started taking on multiple projects within the company, each one demanding different specifications and causing the development team to lose track of what they needed to do. This would eventually result in some games not even being able to include all of their assets! I don't have much more to say on the matter, other than that it does solidify the importance of having a game in mind from the start.

So that's my technical rambling that's probably missing the mark in a few places...If you want to verbally tear apart everything I've said and you happen to be Dr. Hogue or one of the more well-versed students at UOIT then feel free to tweet me and we can arrange a time to chat over coffee or something!

Now then, on to my last part: My own plans with this information!

...It's not that much of a secret; if you haven't figured it out by now then that's genuinely surprising. Obviously, I plan to try to begin my own game engine this coming summer. I don't plan to use it for GDW, but instead for my own purposes outside of school. I've begun working on a game idea for this engine to be built around, but it's still very early in pre-production, and with exams and Level-Up looming on the horizon I don't have much time to properly mull things over, so development is pretty much relegated to when I have spare brainpower. The way things are looking, I'll have to put heavy emphasis on smooth close combat with irregular shapes, which will require a more elaborate collision system along the lines of your standard shooter's (where each limb has its own box and such) while also accounting for different shapes. Ideally I'd want something similar to Unity, where I can toggle between different types of collision but simultaneously add a lot of depth to it. Another heavy emphasis will be on real-time cutscene rendering, so I'd like to have something set up that would allow me to easily create a camera track with speed control. I'd also want to put a lot of work into designing a good base for AI, as I'll need support for friendly combat AI.

I apologize for the large chunk of text above, as that was more me trying to plan things out for myself; I find I develop my ideas better by trying to explain them to others. I'd rather not edit or delete the above, as I think it explains a bit more about me as a person... But to summarize it:

Things I will need to emphasize
-A larger variety of collision with varying depths of complexity
-A method of better controlling cutscenes
-A strong AI-base

I also plan on making my engine a framework-style engine so that I can produce some tools to better work with it. Ideally I'd want to create the following:
-A model viewer/render tester: So that I can experiment with different shaders while making sure they run in-game and produce the intended results
-A custom level editor: Ideally one that can take in OBJs and allow me to set up different types of triggers and such
-A cutscene creator: Something similar to the level editor except that I could create a track for the camera to follow and control its speed as it progresses. I also would like to make it so that I could export custom animations for the scene as well as load in level layouts so that I could experiment with moving characters around and seeing how the scene plays out before exporting it as a file type that's easy for the engine to read in at the necessary time.

I recognize that a few of these are probably very unrealistic given my current skill level and I'll probably have a lot of optimization errors in my first pass. I also know that it won't even be close to being a quarter finished by the end of the summer but that doesn't really matter in the end. At the end of the day, even if it doesn't work out I'll still hopefully have learned something or driven myself so crazy that it won't matter anymore!

On that note I'm going to wrap up this post. Now I just need to think of another topic to write about as I already have my final post planned out to some degree. Cheers for now!

Side-note: I am well aware that we take Game Engines in 3rd year, I'll be picking up the copy of my textbook from home after Level-Up. I also know I could have just waited before starting something like this, but as Dan Buckstein can attest: I very rarely wait until it gets taught in class.

Saturday, 22 March 2014

Ramble Ramble Ramble

So last time on this blog: I mentioned either talking about mo-cap data or portals, and did a lot of ranting on AI! This time: I'm going to talk incoherently about different stuff, and the portals from Portal will randomly be explained somewhere in the middle. So if I'm just going to ramble, what's the point of this blog post, you ask? Simple! I'm going to take this chance to talk about a lot of small things which I don't think could each fill an entire blog post, or which would mean retreading ground I've already covered. So without further ado...

Reflections!

What do I mean by reflections? I mean making something like a mirror that renders in real time. Doing this is a pretty simple 2-part process so I'll just walk through it with some explanations along the way.

Part 1 - Render the scene from the point of view of the mirror. To do this, you first get the vector between the camera and the mirror in question; this gives you your viewing direction, which you then reflect about the normal vector of the mirror to get the reflected viewing direction. The actual equation you want here is: R = V - 2 * (V dot N) * N. This vector is the direction your mirror camera is looking in; once you have it, you simply calculate the Up vector and you can generate a View Matrix for your mirror. Use this view matrix to render the entire scene once, storing it in a Framebuffer Object. Now that you have the scene the mirror sees, move on to the next step!
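In code, that reflection is a one-liner once you have a vector type; here's a minimal sketch (the tiny Vec3 is just for illustration, and N is assumed to be unit length):

#include <cmath>

struct Vec3 {
  float x, y, z;
  Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
  Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float Dot(const Vec3& a, const Vec3& b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// R = V - 2(V dot N)N, where V is the camera-to-mirror view direction and
// N is the mirror's unit surface normal.
Vec3 Reflect(const Vec3& v, const Vec3& n) {
  return v - n * (2.0f * Dot(v, n));
}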

Part 2 - Now that we have the scene that the mirror sees, we render the scene like we do normally and apply the FBO texture from the previous pass to the mirror: It will now display the reflection in real time and it will change based on the position of the player's camera.

Now that we have reflections working, we can apply some fun stuff to them. For example, we could make a rippling surface as if the mirror were on a lake. To do this we have multiple options: we could use a tessellation shader with a ripple texture as a displacement map; we could use an algorithm to change how we sample the image, distorting it in places to simulate a ripple; or we could use surface normals to change how the light reflects off the surface, creating apparent troughs and crests that fool the player into thinking it's rippling.

Portals!

So the portals we did in class are pretty similar to how you would do a reflection: you're just rendering the scene from a different point of view. In this case, you take the direction vector between the viewer's position and the portal they're looking into, and rotate it so that it matches the orientation of the portal they're looking out of. Then you render the scene from the position of the out-portal, with the direction you calculated previously, into an FBO, and apply that FBO's texture to the in-portal.

For added effects such as the rings they have in Portal, make a texture and set it up so that you have a pass-through colour (i.e. just set the alphas to 0) and a discard colour (i.e. black, blue, etc.), and when you go to render the portal, send in the FBO texture as well as the portal texture. While sampling, if the value sampled from the portal texture is the discard colour then nothing is drawn; if it's the pass-through colour, the FBO is rendered; and anything else is blended based on alpha. This results in the fragment shader cropping away any unwanted pieces so that the portal blends in smoothly with whatever is behind it, such as the walls.
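Per fragment, the logic boils down to something like this sketch (written as plain C++ for illustration, though in practice it lives in the fragment shader; black is assumed as the discard colour here):

struct Rgba { float r, g, b, a; };

// Combine the portal's scene FBO with its ring/mask texture for one fragment.
// maskTexel:  sampled from the hand-authored portal texture.
// sceneTexel: sampled from the FBO rendered from the out-portal's view.
// Returns false when the fragment should be discarded entirely.
bool ShadePortalFragment(const Rgba& maskTexel, const Rgba& sceneTexel,
                         Rgba& out) {
  const bool isDiscardColour =
      maskTexel.r < 0.01f && maskTexel.g < 0.01f && maskTexel.b < 0.01f;
  if (isDiscardColour)
    return false;                // crop away the unwanted pieces

  if (maskTexel.a < 0.01f) {     // pass-through colour: show the scene
    out = sceneTexel;
    return true;
  }

  // Anything else (the ring) blends over the scene by alpha.
  const float a = maskTexel.a;
  out = {maskTexel.r * a + sceneTexel.r * (1.0f - a),
         maskTexel.g * a + sceneTexel.g * (1.0f - a),
         maskTexel.b * a + sceneTexel.b * (1.0f - a), 1.0f};
  return true;
}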

Uniquely Shaped Meters!

So this is something I found interesting: a technique for creating functioning meters (i.e. player health) that have unique shapes, such as being circular or just a wavy line. Basically, you have two textures being sampled from: the actual rendered texture and a 'cutoff' texture. The cutoff texture matches up with the rendered texture and is just a smooth gradient from 1.0 to 0.0 in the direction you want the meter to drain. When you render the meter, you pass in a normalized value to use as your cutoff value.
(Images: the cutoff texture; the render texture with its base.)

So say you have a cutoff texture that goes from black to white in a smooth gradient, and it's rendering the player's health; the player currently has 750/1250 HP. This means the player is at 60% health, so you would pass in a value of 0.6. Now, when the shader goes to check whether it should render a fragment of the health meter, it samples the cutoff texture: if the value is 0.6 or less, it proceeds to sample the render texture and output the appropriate colour; if the value is greater, it simply discards the fragment. I like this technique a lot because it gives you so much versatility in how you might want to render things such as timers or health bars or ammo... It gives you and your artists a lot of flexibility!
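The fragment-level decision is tiny; here it is as a sketch (plain C++ standing in for the shader code):

// Decide whether to draw one fragment of a uniquely shaped meter.
// cutoffSample: value sampled from the gradient cutoff texture (0..1).
// fillAmount:   normalized meter value, e.g. 750.0f / 1250.0f == 0.6f.
// Returns true if the fragment should sample the render texture and draw.
bool MeterFragmentVisible(float cutoffSample, float fillAmount) {
  return cutoffSample <= fillAmount;  // past the fill line? discard.
}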

Dual Quaternions!

Yeah...No...Not this time. They warrant a full blog post.

So that's my rambling for the night. I hope you enjoyed watching me blunder my way through some simple graphics stuff that I find kind of interesting and fun...

I actually don't have anything witty to say here so I'm just going to put a picture of Phong just to make this blog look longer than it is. Cheers!


Saturday, 15 March 2014

AI...What's the A stand for?

So I'm starting to run out of topics that interest me for Graphics (and which haven't been covered pretty well in class) so I'm going to take a bit of a break and talk about a programming topic near and dear to my heart: AI!

So last term I talked to Dan Buckstein about some AI stuff and he put me down an interesting path, one filled with weirdly named terms like Fuzzy Logic, and I've since gotten the chance to apply a few of the fun concepts I picked up from him. My absolute favourite is Finite State Machines, which are basically anything that has multiple states which the system transitions between once certain conditions are met. The most basic example of an FSM is a light which switches between on and off when the light switch is put into different positions. Of course, they can get a lot more complex than that (for example, games are FSMs themselves), but for this blog I'll be talking strictly in terms of how they can be applied to AI.

For the enemies in our GDW game, I use a combination of polymorphism and states, allowing me to reduce a large portion of the update function for each enemy to just one simple line:

currentState->Execute(dt, this);

Here, the variable currentState is a pointer to the root State class, dt is the change in time for the update, and this is a pointer to the enemy currently being updated. Now, what does this do? Well, first I'm going to break down the root State class. This class has 3 functions: Enter, Execute and Exit. Enter is called only when a new state is entered, Execute is called during each update, and Exit is called when the current state is being exited. Each state also has a BehaviourTag enumerator so that we can keep track of which state an enemy is in. Now that we have our root class defined, we can derive all other behaviours from it; in this case we have a FollowPath state, a MeleeAttack state and a Death state. Each one has its own unique Execute function (i.e. the FollowPath state will find the path to the enemy's target and follow it, while the Death state will count down the enemy's death timer), which essentially lets us swap out the enemy's update behaviour while recycling the code between multiple enemy types! I tried to make it a bit more efficient by making each derived state a singleton, which relies on the object holding a static pointer to itself that is returned by a GetInstance function. This way, we only ever have one instance of each state and don't have to create new ones when we want to change states.
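Here's a sketch of the root class and one derived singleton state, pieced together from the description above (I've used a function-local static for the singleton rather than a static member pointer, but it's the same idea):

class Enemy;  // forward declaration

enum class BehaviourTag { FollowPath, MeleeAttack, Death };

// Root state class: every behaviour derives from this and overrides the hooks.
class State {
public:
  virtual ~State() = default;
  virtual void Enter(Enemy* e) = 0;              // on transitioning in
  virtual void Execute(float dt, Enemy* e) = 0;  // every update
  virtual void Exit(Enemy* e) = 0;               // on transitioning out
  virtual BehaviourTag Tag() const = 0;
};

// One concrete behaviour as a singleton, so states are never re-allocated
// when an enemy changes behaviour.
class DeathState : public State {
public:
  static DeathState* GetInstance() {
    static DeathState instance;  // the single shared instance
    return &instance;
  }
  void Enter(Enemy* e) override { /* e.g. start the death animation */ }
  void Execute(float dt, Enemy* e) override { /* count down the death timer */ }
  void Exit(Enemy* e) override {}
  BehaviourTag Tag() const override { return BehaviourTag::Death; }
private:
  DeathState() = default;
};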

Now then, for another interesting topic mentioned earlier: Fuzzy Logic! This is effectively the blending of different behaviours of similar natures. For example, say we have three behaviours which dictate how an enemy will move: Seek, Flee and Wander. The Seek behaviour has the enemy heading directly towards a point or entity, Flee has it moving in the completely opposite direction from a point or entity, and Wander has it choosing a random point and moving towards it. These behaviours all return a normalized Vector3 for the enemy to use as a movement direction. By adding weights to each output which sum to a value of 1.0, we can simulate more than these three behaviours. For example: 0.5 Seek and 0.5 Flee will make the enemy stand still, while 0.5 Flee and 0.5 Wander will make the enemy seem like it's trying to avoid the player without actively fleeing. That's the quick and dirty version of Fuzzy Logic. Unfortunately, I have yet to actually apply it in code and only know this bit of theory, but I look forward to getting the opportunity!
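A sketch of what that weighted blend could look like (generic C++ with a minimal Vec3, not our GDW code):

#include <cmath>

struct Vec3 { float x, y, z; };

// Blend three steering behaviours with weights that sum to 1.0. With
// 0.5 Seek + 0.5 Flee the two directions cancel and the enemy stands still.
Vec3 BlendedSteering(Vec3 seek, Vec3 flee, Vec3 wander,
                     float wSeek, float wFlee, float wWander) {
  Vec3 sum = {seek.x * wSeek + flee.x * wFlee + wander.x * wWander,
              seek.y * wSeek + flee.y * wFlee + wander.y * wWander,
              seek.z * wSeek + flee.z * wFlee + wander.z * wWander};
  const float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
  if (len < 1e-5f)
    return Vec3{0.0f, 0.0f, 0.0f};  // behaviours cancelled each other out
  return Vec3{sum.x / len, sum.y / len, sum.z / len};
}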

So yeah, there's my blog on all the fun AI stuff I've learned and applied this year. I've greatly enjoyed using FSMs, as they help clean up the code drastically, they're highly reusable, and they're just so much more flexible than having a pile of if-else statements in the update function of an enemy. I hope you've enjoyed reading me ramble on for a... LONG time. I apologize for the length of this blog, but I didn't want to do another one broken up into two parts.

Next blog: Either BVHs and Mo-cap or Thinking With Portals!

Note:
In regards to the title, if you don't know the reference here it is: http://youtu.be/WnTGaTbR7C0?t=1m33s

Note 2:
Most of this information is what I can recall from Programming Game AI By Example by Mat Buckland, a fantastic book which I was given the chance to borrow and which I keep meaning to purchase for myself.

Friday, 28 February 2014

Go with the flow

A brief warm-up before we get to the actual topic at hand: I love Valve. Their games are just fantastic. Team Fortress 2 was the first game I ever started to get competitive in, Half-Life is a fantastically executed series, and Left 4 Dead is one of the greatest examples of co-op games out there. They just do so much in terms of visual styles and gameplay, as evidenced by the numerous breakdowns of TF2 (including the one we watched in class). So I'm going to overlap my interests with my love here and talk about how Valve flows.

So firstly, let's talk about the two types of water Valve can employ: the expensive and the cheap. Basically, this denotes the overall quality of the water effect, because you need to trade quality for performance in some cases. Really, the main difference between the two is that Expensive water utilizes real-time reflection and refraction, which can result in some splendid effects:
Just look at that beautiful effect
Meanwhile, Cheap water doesn't get these luxuries and instead uses environment mapping to generate a facsimile of a reflection. It still gets the job done and it's much cheaper, but it's not quite as nice.
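For what it's worth, an environment-mapped reflection boils down to something like this. This is generic graphics math, not actual Source code, and the names here are my own:

// Reflect the view direction about the surface normal, then use the result to
// sample a pre-rendered environment (cube) map instead of a real-time reflection.
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// viewDir points from the eye toward the surface; normal is assumed normalized
Vec3 ReflectionLookupDir(const Vec3& viewDir, const Vec3& normal)
{
  // R = V - 2(N.V)N
  float d = 2.0f * Dot(viewDir, normal);
  Vec3 r = { viewDir.x - normal.x * d,
             viewDir.y - normal.y * d,
             viewDir.z - normal.z * d };
  return r;
}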

Next let's talk about the over under...or rather, the above and the under water. The Source Engine separates water into two parts that are rendered differently: the above water, which is the surface you see when looking across it (i.e. in the picture above), and the under water, which affects the view of the game while you're submerged. There's really not much else to say about it.

Now let's talk about one of the bigger additions to the engine (even though it happened just over four years ago): the ability to control the flow of the water. Naturally this is a purely visual effect, as doing proper fluid dynamics at the scale they'd need would be quite the challenge for most processors. The method for the actual flow is pretty similar to how one does a normal map, but rather than storing the normal of the surface in a texture, we store the direction the water is flowing in. Here's the guide they use for extracting the direction of the flow:

So how do we use this reference to make the water seem to move? Well, they use it to figure out how to manipulate the water's normal map at that point: the flow map tells the shader which direction the normal-map lookup should be scrolled in and how it should be skewed to match the flow. The result is quite impressive, as seen in their demo video.
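As a rough sketch of the idea (this is my reading of the technique, not Valve's actual shader code): sample the flow direction for the pixel, scroll the normal-map UVs along it over time, and blend two half-phase-offset scrolls so neither one's reset is ever visible. Vec2 and the helper names are placeholders:

struct Vec2 { float x, y; };

float Fract(float v) { return v - (float)(int)v; } // fractional part, for v >= 0

// flowDir comes from the flow texture, remapped from [0,1] to [-1,1]
Vec2 FlowUV(const Vec2& uv, const Vec2& flowDir, float time, float phaseOffset)
{
  float phase = Fract(time + phaseOffset); // repeats 0..1
  Vec2 r = { uv.x - flowDir.x * phase, uv.y - flowDir.y * phase };
  return r;
}

// In the shader you'd then sample the normal map twice, half a phase apart,
// and cross-fade between the two samples to hide each scroll's snap-back:
//   n0 = SampleNormalMap(FlowUV(uv, flow, t, 0.0f));
//   n1 = SampleNormalMap(FlowUV(uv, flow, t, 0.5f));
//   n  = Lerp(n0, n1, Abs(2.0f * Fract(t) - 1.0f));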

And that's about that on Valve's water. As a final note, here are some images showing how the flow map actually looks on the maps:

And with that I'll call it a night. I hope you enjoyed this splashing blog and this flood of information...I almost made it to the end without making water puns but I just couldn't resist.

For a reference I used the Valve Developer Wiki, an invaluable resource for anyone working with the Source Engine, whether it be coding or level editing.

Friday, 14 February 2014

Particles 2: The Re-Particling

So I'm going to finally continue on from my previous particle post! Last time I talked about how I had set up my own particle system, as well as how we could utilize the effects with an FBO. Today I'm probably just going to ramble on about a few things for improving the systems and other effects before finishing up with a joke. Without further ado...

So one way of improving the system is using point particles as opposed to billboard particles. As mentioned before, billboard particles are just sprites rendered to a plane; point particles, meanwhile, are a single vertex of a certain size with a sprite rendered on top of it. The advantage of a point particle over a billboard is efficiency. For a billboard you can either calculate the position of the particle on the CPU and then translate the vertices during rendering (one calculation on the CPU), or you can use the velocity, acceleration and current life of the particle to calculate the new position of each vertex in a vertex shader (six calculations on the GPU, one per vertex of the quad's two triangles, which still beats doing it on the CPU). With point particles we can do the calculation on the CPU like we do with billboards, or we can calculate the new position in the shader, the difference being that we're only doing one calculation per particle, making it six times cheaper than the billboard! This difference becomes very noticeable when we have to calculate hundreds of particles: a system with 200 particles is either 200 calculations on the CPU, 1200 on the GPU with billboards, or 200 on the GPU with points.
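For reference, the per-vertex math is just basic kinematics. This is a hypothetical CPU-side version of what the vertex shader would evaluate, once per point particle or six times per unindexed billboard quad:

// p = p0 + v*t + 0.5*a*t^2, where t is how long the particle has been alive
struct Vec3 { float x, y, z; };

Vec3 ParticlePosition(const Vec3& p0, const Vec3& vel, const Vec3& accel, float t)
{
  float ht2 = 0.5f * t * t;
  Vec3 p = { p0.x + vel.x * t + accel.x * ht2,
             p0.y + vel.y * t + accel.y * ht2,
             p0.z + vel.z * t + accel.z * ht2 };
  return p;
}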

Knowing now how to speed up the calculations, let's talk about something fun we can do, like pseudo-colliding particles! (For the record, this is taken from a GDC 2011 presentation by Chris Tchou, effects tech on Halo: Reach.) A method for creating pseudo-collision particles is to sample the depth buffer of the current frame and check where the particle sits in relation to it. If the particle is penetrating the scene, we move it towards the camera until it's a suitable distance from any obstacles; then, using the surface normals (which we should have output to a texture), we find the orientation of the face the particle is hitting, which can be combined with the particle's current velocity to figure out how it should bounce. This means storing each particle's current information during every update step, but it gives us the appearance of particles bouncing off surfaces (e.g. sparks flying off a wire and bouncing off the ground). It's also cheaper to process than colliding the particle with the collision or environment mesh on the CPU, which makes it even more interesting!
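Here's roughly how I picture the bounce itself once the depth comparison has detected penetration. This is my own sketch, not code from the presentation; the penetration amount would come from comparing the particle's depth against the sampled depth-buffer value, and all names here are made up:

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 Reflect(const Vec3& v, const Vec3& n) // n assumed normalized
{
  float d = 2.0f * Dot(v, n);
  Vec3 r = { v.x - n.x * d, v.y - n.y * d, v.z - n.z * d };
  return r;
}

// Push the particle out along the view axis, then redirect its velocity
// off the face it hit (the normal comes from the stored normals texture).
void BounceParticle(Vec3& pos, Vec3& vel, const Vec3& viewForward,
                    const Vec3& surfaceNormal, float penetration, float restitution)
{
  // move back toward the camera until clear of the surface
  pos.x -= viewForward.x * penetration;
  pos.y -= viewForward.y * penetration;
  pos.z -= viewForward.z * penetration;

  // reflect the velocity about the surface normal, losing some energy
  Vec3 r = Reflect(vel, surfaceNormal);
  vel.x = r.x * restitution;
  vel.y = r.y * restitution;
  vel.z = r.z * restitution;
}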

I could go on and ramble about a few other things, but I'm not quite feeling up to it right now, so I'm going to call it a night here. But before I do, I'm going to leave you with a horrible joke:

I applied Phong Shading to a pretty crappy model of Phong!...Get it? Phong Shading Phong?...
...
...
As in Phong from ReBoot?
...
...
I did say it was a horrible joke.

Friday, 31 January 2014

Theory Time: The Last Of Us

So the last two weeks have sort of been off weeks for me and I haven't quite been up to trying to get my thoughts to congeal into something useful...Needless to say, Particles 2: The Re-Particling is being saved for another day and will probably be a lot bigger than anticipated (I'll try not to let part 2 have its own two parts). I also don't want to just sit here and type up a blog regurgitating everything we learned in class, because that would literally be me drawing out things like toon shading into three paragraphs when it could be summed up pretty quickly.

Instead, what I'm going to do is take this chance to introduce what will (hopefully) become a regular thing I do: attempting to break down how a game's developers created an effect without actually looking it up! I just think it'd be a fun exercise and a nice way to think about applying what we've learned. So without further preamble...I will now talk about The Last of Us.

In The Last of Us the player can enter a mode where sounds are amplified for the character. To convey the change to the player and simulate the effect, the screen becomes desaturated and sound sources are given a white glow. This is a demonstration of multiple effects working together, as we have a fullscreen effect (the desaturation) and a target-specific effect (the glow around sources).

(Image: the effect in action, with a white glow around sound sources on a desaturated screen.)

I believe the effect is done by first getting a glowing outline of the targets, using a method similar to getting an outline with a toon shader: check whether the dot product of the fragment's normal and the vector from its position to the eye meets a certain condition (e.g. it's close to 0) to determine if it belongs to the outline. Once we have the outline, we render it into a new texture and blur it (to give the fuzzy edge). To keep the clear outline of the body, we use the target again, but instead of finding an outline we generate a silhouette to use as a mask. When combining the two, we only render the parts of the blurred outline that are not blocked by the mask. Once this is done we have a white, fuzzy outline around every sound-producing entity in the game; we can then apply a shader to the FBO to desaturate it before overlaying and blending the glow outlines, resulting in the final product you see above.
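As a sketch of that first step (again, just my theory, with made-up helper names): a fragment counts as part of the silhouette edge when its normal is nearly perpendicular to the direction toward the eye, i.e. when their dot product is close to zero:

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// normal and toEye are assumed normalized
bool IsOutlineFragment(const Vec3& normal, const Vec3& toEye, float threshold)
{
  float facing = Dot(normal, toEye);
  // near zero means the surface is edge-on to the viewer; a threshold
  // of around 0.2 would give a reasonably thick rim
  return (facing < threshold && facing > -threshold);
}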

And so there's my post! In the coming weeks I may spend some time trying to recreate this effect as I just described here to see if I was right or not, but until then it's just fun to theorize. Here's hoping that next week I'll have a lot more to write about.

Cheers,
Cameron Nicoll

Friday, 17 January 2014

Particle Fun

So in this past week my group managed to get particles working in the game. The system used is one I made last term, updated by our lead programmer (Kenny) to work with the rest of the game. In light of this, I'm going to take the chance to talk about the development of the system and any future plans for it, as well as what interesting effects we should be able to attempt later in the term, and particles in general.

Introduction
So to give a bit of an overview of what our system contains: we have a Particle class, used for each individual particle in the system, which contains the force being applied to the particle, its position, colours, etc. The particles I use are simple billboards, that is to say they consist of two triangles with a sprite applied to them. Then there's a Particle System class, which is used to update the list of particles and holds all the information related to spawning new ones (e.g. the spawn deviation, the area around the center of the system in which a particle can spawn, and the spread deviation, the range for a particle's impulse velocity). Finally there's a Particle Manager, a singleton that holds a list of particle systems which are all updated and drawn through the manager. The specifics of each of these will be explained in more detail as I go through the development.
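To make the layout a bit clearer, here's a bare-bones skeleton of the three classes as described. The real system has far more to it, and the member names here are just placeholders:

#include <vector>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

class Particle
{
public:
  Vec3  m_position, m_velocity, m_force;
  Vec4  m_colour;
  float m_life; // counts down; the particle dies at zero
};

class ParticleSystem
{
public:
  void Update(float a_dt) { /* move particles, spawn new ones, kill dead ones */ }
  void Draw()             { /* render each live particle as a billboard */ }
  // spawn deviation, spread deviation and the other settings live here
private:
  std::vector<Particle> m_particles;
};

class ParticleManager // singleton: every system is updated and drawn through here
{
public:
  static ParticleManager* GetInstance()
  {
    static ParticleManager s_instance;
    return &s_instance;
  }

  void Update(float a_dt) { /* call Update on every registered system */ }
  void Draw()             { /* call Draw on every registered system */ }

private:
  ParticleManager() {}
  std::vector<ParticleSystem*> m_systems;
};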

The Particle and Particle System
So I'm going to sort of speed through the first few hours of development. I started off with a single Particle; it wasn't tied to any system (the System class hadn't been made yet) and lacked some of the functionality it has today. The particle could have an impulse force applied to it, giving it an acceleration that would make it move.

Once I had the particle moving and rendering properly, I started work on the System aspect. It started its life as something that would just spawn a bunch of particles and let them loose...It was pretty boring. That's when I began to go a little crazy with it, adding all these little extras as they popped into my head. I think it's easiest to just list them at this point, so here are all the extra features I added as time went on (in order):
Spawn Deviation - This allowed me to specify an area around the center of the system in which a particle would be randomly spawned. So if I specify a minimum spread of [-5,-5,-5] and a maximum of [5,5,5], then the particle will spawn anywhere within a 10x10x10 cube, treating the System's position as the origin.
Spread Deviation - This adds an impulse velocity of a random value to a particle, based on the same min-max idea as the spawn deviation. This is particularly useful as it lets me make particles spread out from a source, fly in random directions along a plane, or look like a steady wind is affecting them.
Decay Values - I decided to spruce up the particles themselves by adding an initial/decay colour and initial/decay transparency. The initial values are the particle's values when it spawns and the decay values are its values when it dies. I then LERP between them using the particle's current life (normalized) to figure out what the current colour should be (see the sketch after this list).
Life Range - The standard particle has a lifetime, which basically says how long the particle will be around before it dies. To make the system appear more dynamic, I made it so you can specify a range that varies how long each particle lives. For example, with a life range of 0.5 and a particle life of 2, each particle's lifetime will be randomly determined to be between 1.5 and 2.5.
Scale Deviation - This lets you randomize the scale of each particle so that some appear smaller and some larger. This also helps with the more dynamic appearance of the system.
Volume Types - This lets me specify different volume types for the systems. It really only impacts the spawn positions of the particles, limiting them to a cylinder or a sphere rather than a regular, boring cube. When coupled with the next feature it can also lead to other shapes such as rings.
Min-Max Radii - This lets you specify a minimum and a maximum radius within which a particle may spawn. Any particle that would spawn outside the specified range is discarded.
Explosion Types - To make things a bit more interesting, I added the ability to set the system to an explosion type. This makes the particles fly away from the center of the system, or towards it, to simulate an explosion or an implosion respectively.
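And here's that little sketch of the decay interpolation mentioned above: a plain LERP driven by the particle's normalized life, 1.0 at spawn down to 0.0 at death. Vec4 and the function name are placeholders, not the system's real code:

struct Vec4 { float r, g, b, a; };

// lifeT = 1.0 returns the initial values; lifeT = 0.0 returns the decay values
Vec4 CurrentColour(const Vec4& initial, const Vec4& decay, float lifeT)
{
  Vec4 c = { decay.r + (initial.r - decay.r) * lifeT,
             decay.g + (initial.g - decay.g) * lifeT,
             decay.b + (initial.b - decay.b) * lifeT,
             decay.a + (initial.a - decay.a) * lifeT };
  return c;
}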

Okay. That's it for the development of my system (the manager was introduced later, but it doesn't add any interesting visuals, and Kenny then updated it to render in newer versions of OpenGL using the GPU). To take a break from the wall of text, I'm going to show you a couple of examples of the system's results. I'll include the sprite beside the system it's used with.

(Images: example particle systems, each shown beside the sprite it uses.)
So these are the kinds of particles we have now. Really simple: just slight modifications to the colours of the actual sprite, letting them blend with each other. Some of the results can be quite impressive, but it's pretty limited in terms of the types of visuals you can pull off, even with shaders...
OR SO I THOUGHT!

Particles with Frame Buffer Objects
So FBOs (Frame Buffer Objects) are just fantastic with shaders. They let you take things up to another level by allowing you to influence the raw final data of the system during each update, rather than only affecting each individual particle. For example, let's break down this ink particle effect from the game Brütal Legend, which can be found here: http://vimeo.com/10082765

So as you can tell right off the bat, each particle is just a semi-transparent black spot moving around (the motion gets into fluid dynamics, which is a tale for another day). We let the particles move according to their dynamics, and when we go to render we start thresholding in the fragment shader: if a value that would be output falls below a certain cutoff, it gets discarded, eliminating the more translucent parts from the render while they can still influence the positions of the other particles. After the values have been discarded, we do another pass over the image and add a blur, which gives the particles their lighter outline. The result is a flowing ink effect.
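The thresholding step itself boils down to something like this, a sketch of the fragment-side logic rather than the actual Brütal Legend shader:

// Pass 1: render all particles into the FBO texture.
// Pass 2: threshold; anything too translucent is thrown away.
// Pass 3: blur what survives to give the ink its soft edge.
struct Vec4 { float r, g, b, a; };

bool KeepInkFragment(const Vec4& texel, float cutoff)
{
  // below the cutoff the fragment is dropped, as with GLSL's discard keyword
  return texel.a >= cutoff;
}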

Naturally there's plenty more we can do now that we can apply effects to the final result of a particle system rather than each individual particle, which should help make things plenty interesting. That doesn't mean every system should render through an FBO, though; in some instances, like the first particle system I showed, we may just want to use it as-is, in which case rendering it straight from a VBO would be better, as it's less stressful on the hardware. But that gets more into optimization than anything else.

Looking back, I may have jumped the gun on this blog a bit. It's starting to drag on, so I think I'll call this part 1 of a series of posts on particles. Next time (which may or may not be next week, we'll find out) I'll go into other interesting things we can do with particles, as well as how we can use the GPU to improve the current system.

Until next time, cheers!

Friday, 10 January 2014

From Humble Beginnings and Other Cliche First Titles

So before I get into talking about the course there is something I should probably get off my chest and out into the open:
FLUIDDYNAMICSANDREFLECTIONANDPARTICLESANDALLTHEFUNSTUFFICAN'TWAITTOGETSTARTED!

Okay, now that I've gotten that out of my system I can move on to the actual blog. This course has been the focus of much of my excitement ever since I began to properly understand the implications of shaders, so I'm coming in with a handful of topics that greatly interest me (aside from the standard ones such as shadow mapping).

One thing I'd really like to cover in some depth is the different effects we can now achieve with particle systems. I worked on one last term which became my pride and joy, as it was quite versatile in terms of the effects that could be pulled off with it and looked quite good. However, being a CPU-based system, it was pretty limited. I know that one effect we'll be able to achieve by the end of the term is the ink particle effect used in Brütal Legend, but a demonstration of how it works, as well as what other interesting particle effects we could do, would be really interesting.

I'd also like to get into fluid dynamics, as it's an interesting topic that produces striking effects (such as the ink particles previously mentioned). I think it could be used for a lot of interesting particle systems, as well as for other applications such as cloth.

Finally, I would love to go into making realistic water with reflection and refraction, as I find it an interesting effect that can add nice little touches to a game's appearance in some cases and can be hugely important in others.

I'm sure that all of these topics will be covered at one point or another and I'm really looking forward to it. That being said, it'd be fantastic if we had a lecture that was just going over interesting particle systems and covering how they were created. 

That's it until the next time. Cheers!