Wednesday, March 12, 2014

First Person "Fly-Over Simulator"


For this project, there were a lot of components that needed to be implemented. The first part that was important to get in quickly was the terrain, which is used twice - once for the terrain itself, and once for the water. In order for everything to work as intended, the planes representing the water and terrain had to consist of many vertices, rather than being one big quadrilateral. This way the heights of the vertices could be adjusted to create the intended effects. To implement this, I generated all of the vertex positions, and then created a list of indices so that, when rendered as a triangle strip, the vertices would be in the right order. In order to make one long triangle strip, degenerate triangles - triangles with an area of 0 - were needed to effectively reset the strip and start over from the beginning, one row down. This creates an effect like this:
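The index generation described above can be sketched like this (a minimal version in C++; the function name and exact degenerate-index scheme are my own illustration, not the project's actual code):

```cpp
#include <cstdint>
#include <vector>

// Build the index list for one long triangle strip covering a grid of
// width x height vertices. Between rows, the last index of the current row
// and the first index of the next row are repeated, producing zero-area
// (degenerate) triangles that "reset" the strip one row down.
std::vector<uint32_t> buildStripIndices(int width, int height) {
    std::vector<uint32_t> indices;
    for (int row = 0; row < height - 1; ++row) {
        for (int col = 0; col < width; ++col) {
            indices.push_back(row * width + col);        // vertex on this row
            indices.push_back((row + 1) * width + col);  // vertex one row down
        }
        if (row < height - 2) {
            // Degenerate triangles: repeat the row's last index and the
            // next row's first index to jump down without a visible triangle.
            indices.push_back((row + 1) * width + (width - 1));
            indices.push_back((row + 1) * width);
        }
    }
    return indices;
}
```

Because each row contributes an even number of indices (including the two degenerate ones), the winding order stays consistent from row to row.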
This is repeated for the width and height required, which is determined by the width and height of the terrain's heightmap - one pixel on the heightmap is equal to one vertex. In this video, the heightmap I used was this one:

Using SDL, I loaded the image and read the RGB value of each pixel, using the average of the three channels (for a greyscale image, all three are the same number) to set the Y value of the corresponding vertex, which produced the terrain. However, for lighting to work properly, it was also important to calculate a new normal for each vertex, since it would no longer point straight up as it did when the plane was flat. To do this, I took the vectors from the vertex to two of its neighbors (in the image of the triangle strip above, vectors 0-1 and 0-4, for example) and computed their cross product, giving a vector perpendicular to both - the normal of that vertex. The last aspect of the terrain that needed to be implemented was the texture blending between the sand, grass, and rock, which I based on the height of the vertex. In the fragment shader for the terrain, I use the height of each vertex and, depending on the value, blend the grass texture with either the sand or rock texture, and calculate lighting based on that interpolated value.
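The neighbor-vector cross product can be sketched like this (names and the height-lookup callback are illustrative; the real code reads heights from the SDL-loaded heightmap):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Normal at grid vertex (x, z): take the vectors from this vertex to its
// +X neighbor and its +Z neighbor, and cross them. heightAt stands in for
// the averaged greyscale heightmap sample described above.
template <typename F>
Vec3 vertexNormal(int x, int z, F heightAt) {
    float h = heightAt(x, z);
    Vec3 toRight = { 1.0f, heightAt(x + 1, z) - h, 0.0f };
    Vec3 toDown  = { 0.0f, heightAt(x, z + 1) - h, 1.0f };
    // Cross order chosen so a flat patch yields +Y (straight up).
    return normalize(cross(toDown, toRight));
}
```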

In order to make the textures look better from a distance, I used OpenGL's built-in mipmap generation. By generating the mipmaps for each of the terrain textures and using GL_LINEAR_MIPMAP_LINEAR as the minification filter, the textures are automatically sampled at an appropriate resolution based on their on-screen size at runtime. This reduces the graininess of the textures when viewed from a distance. For comparison, this is what the world I created looks like without mipmapping:
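For reference, the size of the chain that mipmap generation builds follows directly from the texture dimensions - each level halves the larger dimension until it reaches 1x1 (the GL calls themselves are shown only in comments here, since they need a live context):

```cpp
#include <algorithm>
#include <cmath>

// Number of levels in a full mipmap chain for a width x height texture.
// In OpenGL the chain is built with glGenerateMipmap(GL_TEXTURE_2D) and
// sampled with the GL_LINEAR_MIPMAP_LINEAR minification filter (trilinear
// filtering), as described above.
int mipLevelCount(int width, int height) {
    return 1 + (int)std::floor(std::log2(std::max(width, height)));
}
```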


Once mipmapping is enabled, the same scene looks smoother, like this:


The water was generated similarly, but without a heightmap - the Y values of all of the vertices were set to the same value to create a flat plane, and the vertex shader uses a sinusoidal function to offset them, creating a wave effect. The water sits at a fixed level, and any part of the terrain that dips below that level is covered by water. Animating the heights means the normals have to be recalculated as well. Because this calculation has to happen in the vertex shader, after the heights are changed, it has to be done in a slightly different way: I take the partial derivatives of the height function with respect to X and Z, and take the cross product of the resulting tangent vectors to find the normal.
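That partial-derivative trick can be sketched like this (the wave function and its constants are hypothetical stand-ins - the actual sinusoid in the shader may differ - but the normal construction is the general one: tangents (1, dH/dx, 0) and (0, dH/dz, 1) cross to (-dH/dx, 1, -dH/dz)):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical wave: a single sine travelling along X.
const float kAmplitude = 0.5f, kFrequency = 2.0f, kSpeed = 1.5f;

float waveHeight(float x, float z, float t) {
    (void)z; // this example wave does not vary along Z
    return kAmplitude * std::sin(kFrequency * x + kSpeed * t);
}

// Analytic normal from the partial derivatives of waveHeight. The cross
// product of the X and Z tangent vectors, taken in the order that gives +Y
// for a flat surface, is (-dH/dx, 1, -dH/dz); normalize it.
Vec3 waveNormal(float x, float z, float t) {
    (void)z;
    float dHdx = kAmplitude * kFrequency * std::cos(kFrequency * x + kSpeed * t);
    float dHdz = 0.0f;
    float len = std::sqrt(dHdx * dHdx + 1.0f + dHdz * dHdz);
    return { -dHdx / len, 1.0f / len, -dHdz / len };
}
```

Because the derivative is exact, the normal stays correct as the wave animates, with no need to re-upload per-vertex normals each frame.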


Wednesday, December 4, 2013

Goal-Oriented Action Planning

Goal-oriented action planning (GOAP) is a technique used to enhance the ways that NPCs make decisions, in order to make them seem more lifelike than if they were using a state machine. With a finite state machine (FSM), NPCs are constrained to follow basic rules - "If my health is below x%, then do y." "If I see the player, start chasing it." GOAP allows the AI a bit more freedom in deciding what to do in order to achieve its goals. The AI has a set of goals it is trying to complete, and actions that it can take to affect those goals. Each action will have an effect on one or more goals, and the AI will choose the action that best satisfies the current goals at that moment. To use an example that would apply to a real-life scenario, the AI may have a goal to eat, and a goal to sleep. Each goal has a value that represents how urgent it is, and those values are combined into an overall "discontentment" value. The AI then cycles through each action available to it, figures out which one will lower the overall discontentment the most, and performs that action.
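The decision loop described above can be sketched like this (a minimal version; the goal names, effect values, and the squared-insistence discontentment formula are illustrative choices, not the demo's actual code - squaring is a common choice so that one very urgent goal outweighs several mildly urgent ones):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct Goal {
    std::string name;
    float insistence; // how urgent this goal currently is
};

struct Action {
    std::string name;
    std::vector<float> effects; // change applied to each goal's insistence
};

// Discontentment: sum of squared insistences.
float discontentment(const std::vector<Goal>& goals) {
    float total = 0.0f;
    for (const Goal& g : goals) total += g.insistence * g.insistence;
    return total;
}

// Try every action, predict the resulting discontentment, pick the lowest.
const Action& chooseAction(const std::vector<Goal>& goals,
                           const std::vector<Action>& actions) {
    const Action* best = &actions[0];
    float bestScore = 1e30f;
    for (const Action& a : actions) {
        std::vector<Goal> predicted = goals;
        for (size_t i = 0; i < predicted.size(); ++i) {
            predicted[i].insistence =
                std::max(0.0f, predicted[i].insistence + a.effects[i]);
        }
        float score = discontentment(predicted);
        if (score < bestScore) { bestScore = score; best = &a; }
    }
    return *best;
}
```

With a very hungry, slightly tired agent, eating lowers the predicted discontentment far more than sleeping, so eating is chosen - exactly the behavior the units in the demo exhibit.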

In the tech demo linked below, there are 4 goals that each unit follows: gathering food, wood, and gold, and keeping its energy level up. To accomplish these goals, there are 5 different actions they can take - again, gathering food, wood, and gold, plus eating food or sleeping to recover energy. Running this simulation gives a simple demonstration of how GOAP works in a game. Pressing the + and - buttons will add or delete units, respectively, and clicking on a unit displays the available actions and the estimated discontentment level after each action is performed.

This technique can be applied in a great number of game genres. As the demo shows, real-time strategy games are a prime example of a genre that can greatly benefit from GOAP, but other games can get just as much use out of it. Some of the first games to use GOAP were No One Lives Forever 2 and F.E.A.R., both first-person shooters, and both highly regarded for their innovation in AI. Another fairly obvious example is a game like The Sims, where the goals would be each Sim's needs and the actions would be the available options to affect those needs. Almost any game could find a good use for GOAP as a way to enhance its AI.


Tech Demo


Sources: