Saturday, December 31, 2016

Advanced Collision


Brief Overview
In this tech demo, I have implemented a Ray to Triangle intersection test, a Ray to Sphere intersection test, a Ray to Cylinder intersection test, a Ray to Capsule intersection test, a Moving Sphere to Triangle intersection test, and finally a Moving Sphere to Mesh intersection test.
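As a flavor of these routines, here is a minimal sketch of the ray to sphere test, assuming a normalized ray direction and a simple stand-in Vec3 type (the demo uses its own math library):

#include <cmath>
#include <optional>

// Minimal vector type assumed for illustration; the demo's own math library differs.
struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Ray vs. sphere: solve |origin + t*dir - center|^2 = radius^2 for the smallest t >= 0.
std::optional<float> RaySphere(const Vec3& origin, const Vec3& dir,
                               const Vec3& center, float radius)
{
    Vec3  m = origin - center;
    float b = m.dot(dir);                            // assumes dir is normalized
    float c = m.dot(m) - radius * radius;
    if (c > 0.0f && b > 0.0f) return std::nullopt;   // ray starts outside and points away
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;            // ray misses the sphere
    float t = -b - std::sqrt(disc);
    if (t < 0.0f) t = 0.0f;                          // ray started inside the sphere
    return t;                                        // distance along the ray to the hit
}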

Picking


Brief Overview
In this demo, I have implemented a line segment to triangle intersection test, which helps detect the triangle nearest to the start of the line. I then went ahead with a picking algorithm, which is usually used to select objects in the scene with a mouse or cursor; here I apply the reverse process by taking a point in screen space, translating it to client space, and transforming it by the inverse of the model-view and projection matrices.
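For illustration, here is roughly how that un-projection can be written with a library such as GLM; the helper name, parameters, and y-flip are assumptions for the sketch, not the demo's actual code:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

// Hypothetical helper: turn a mouse position into a world-space picking ray.
// 'view' and 'proj' are the camera's view and projection matrices;
// 'viewport' is (x, y, width, height) of the client area.
void ScreenPointToRay(float mouseX, float mouseY,
                      const glm::mat4& view, const glm::mat4& proj,
                      const glm::vec4& viewport,
                      glm::vec3& outOrigin, glm::vec3& outDir)
{
    // Flip y if the client-space origin is top-left (typical for Windows mouse coordinates),
    // since unProject expects window coordinates with y growing upward.
    float winY = viewport.w - mouseY;

    // Un-project the point on the near plane (depth 0) and the far plane (depth 1).
    glm::vec3 nearPt = glm::unProject(glm::vec3(mouseX, winY, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(mouseX, winY, 1.0f), view, proj, viewport);

    outOrigin = nearPt;
    outDir    = glm::normalize(farPt - nearPt);   // ray direction toward the far plane
}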

Friday, December 30, 2016

Spatial Partitioning



Brief Overview
In this application, I have implemented five tests:
i) testing the frustum object directly against all Kd-tree leaf bounds, without traversing the tree;
ii) testing the frustum object against the Kd-tree leaf bounds by traversing the tree;
iii) testing the frustum object against the Kd-tree leaf bounds by traversing the tree and testing the frustum against the objects contained in the tested leaves;
iv) testing the frustum object against the scene objects in the Kd-tree by traversing the tree;
v) testing a scene object for collision against the other scene objects in the Kd-tree by traversing the tree.
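For reference, here is a hedged sketch of the traversal used in tests ii) and iii); the node layout, the plane test, and the Frustum/AABB types are simplified assumptions, not the demo's actual classes:

#include <vector>

// Simplified stand-ins for the demo's types, for illustration only.
struct Vec3  { float x, y, z; };
struct AABB  { Vec3 min, max; };
struct Plane { Vec3 n; float d; };           // points p with n.p + d >= 0 are "inside"

// Conservative plane test: take the AABB corner farthest along the plane normal;
// if even that corner is behind the plane, the whole box is outside.
static bool BoxOutsidePlane(const AABB& b, const Plane& pl)
{
    Vec3 p{ pl.n.x >= 0 ? b.max.x : b.min.x,
            pl.n.y >= 0 ? b.max.y : b.min.y,
            pl.n.z >= 0 ? b.max.z : b.min.z };
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0.0f;
}

struct Frustum {
    Plane planes[6];
    bool Intersects(const AABB& box) const {
        for (const Plane& pl : planes)
            if (BoxOutsidePlane(box, pl)) return false;
        return true;                          // not culled by any plane
    }
};

struct KdNode {
    AABB    bounds;                           // bounds of this node's region
    KdNode* left  = nullptr;                  // both children null for a leaf
    KdNode* right = nullptr;
    std::vector<int> objects;                 // indices of objects stored in a leaf
};

// Walk the tree, pruning whole subtrees whose bounds miss the frustum,
// and collect the objects stored in every leaf the frustum actually touches.
void FrustumVsTree(const Frustum& fr, const KdNode* node, std::vector<int>& visible)
{
    if (!node || !fr.Intersects(node->bounds))
        return;                               // whole subtree culled
    if (!node->left && !node->right) {        // leaf reached
        visible.insert(visible.end(), node->objects.begin(), node->objects.end());
        return;
    }
    FrustumVsTree(fr, node->left,  visible);
    FrustumVsTree(fr, node->right, visible);
}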

Spatial Partitioning
Uniform Grid - Each red line-segment represents a sphere to sphere collision test.
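As a rough illustration of the idea behind the uniform grid, the sketch below buckets sphere centers into cells and only runs the sphere to sphere test within each bucket; a full implementation would also visit neighboring cells, and the types and hashing here are assumptions, not the demo's code:

#include <cmath>
#include <unordered_map>
#include <vector>

struct Sphere { float x, y, z, r; };

inline bool SpheresOverlap(const Sphere& a, const Sphere& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rr = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rr * rr;
}

// Returns the index pairs that were found overlapping; each tested pair
// corresponds to one red line segment in the demo.
std::vector<std::pair<int, int>> UniformGridPairs(const std::vector<Sphere>& spheres,
                                                  float cellSize)
{
    // Hash each sphere's center into an integer cell key (neighbor cells are
    // deliberately ignored here to keep the sketch short).
    auto key = [cellSize](const Sphere& s) -> unsigned long long {
        auto cell = [cellSize](float v) { return (long long)std::floor(v / cellSize); };
        unsigned long long cx = (unsigned long long)cell(s.x);
        unsigned long long cy = (unsigned long long)cell(s.y);
        unsigned long long cz = (unsigned long long)cell(s.z);
        return (cx * 73856093ULL) ^ (cy * 19349663ULL) ^ (cz * 83492791ULL);
    };

    std::unordered_map<unsigned long long, std::vector<int>> cells;
    for (int i = 0; i < (int)spheres.size(); ++i)
        cells[key(spheres[i])].push_back(i);

    std::vector<std::pair<int, int>> hits;
    for (auto& [k, bucket] : cells)
        for (size_t i = 0; i < bucket.size(); ++i)
            for (size_t j = i + 1; j < bucket.size(); ++j)
                if (SpheresOverlap(spheres[bucket[i]], spheres[bucket[j]]))
                    hits.emplace_back(bucket[i], bucket[j]);
    return hits;
}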

Thursday, December 29, 2016

Bounding Volume Hierarchy



Brief Overview
For this collision detection demo, I have implemented Axis-Aligned Bounding Boxes, which form a box around the object by aligning the faces and normals to the coordinate system axes. Moving on with the application, I tried my hand at a partitioning scheme for the hierarchy that uses the Splitting Along Object Mean method.

Axis Aligned Bounding Boxes
It is one of the quickest and most memory-efficient ways to determine whether two game entities are overlapping. It consists of wrapping game entities in a non-rotated (thus axis-aligned) box and checking the positions of these boxes in 3D coordinate space to see if they're overlapping.

The implementation of an AABB is quite easy: we can either store a min and a max point, or a center point with width/height/depth. For this example, I will be using an AABB created from two points (min and max). The collision check between two AABBs is very simple and fast, since it has many early outs. The bad thing about AABBs is that they can't be rotated; once they're rotated, they stop being AABBs, since they're no longer aligned to the X, Y, and Z axes. For objects that rotate, a better option would be to use spheres, capsules, or even OBBs (oriented bounding boxes).

To check whether two AABBs are colliding, we just need to check, on each axis, that the first AABB's max point is greater than the second one's min point and that the first one's min point is less than the second one's max point.
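In code, that check might look like this (a minimal sketch assuming an AABB stored as min/max points, as described above):

// A minimal AABB stored as two points (min and max).
struct AABB {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Two AABBs overlap only if their intervals overlap on every axis;
// any separated axis lets us exit early.
bool AABBvsAABB(const AABB& a, const AABB& b)
{
    if (a.maxX < b.minX || a.minX > b.maxX) return false;   // separated on X
    if (a.maxY < b.minY || a.minY > b.maxY) return false;   // separated on Y
    if (a.maxZ < b.minZ || a.minZ > b.maxZ) return false;   // separated on Z
    return true;                                            // overlapping on all three axes
}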

Buggy



Brief Overview
This is an engine application specifically designed to showcase collision detection algorithms. I started by working on a Look-At algorithm that helps the buggy and its gun point toward the target and fire. Then I proceeded to implement a Turn-To algorithm so the buggy can move forward and backward while facing the same direction as its gun. I performed a Ray to Sphere intersection test against all targets in the world; if there is a collision with a target, the crosshair is set to point at it using an "orient in the direction the object is moving" algorithm. Finally, once the crosshair has shot the target, the Kill() function in the update removes the child targets and spins the target. This application also lets the buggy move with the crosshair around the terrain, using a Line to Triangle intersection algorithm.

lookAt Matrix/Camera
The above algorithm is used to construct a view matrix where the camera is located at the eye position and looking at (or rotating toward) a target point. The eye position and target are defined in world space. The camera's lookAt transformation consists of two transformations: translating the whole scene inversely from the eye position to the origin, and then rotating the scene with the reverse orientation, so the camera is positioned at the origin and facing down the z-axis.
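Here is a hand-rolled sketch of that construction for a right-handed system, using GLM types for brevity; it should match what glm::lookAt produces, though the engine's own math library differs:

#include <glm/glm.hpp>

// The scene is translated by -eye and then rotated by the inverse camera orientation.
glm::mat4 LookAt(const glm::vec3& eye, const glm::vec3& target, const glm::vec3& up)
{
    glm::vec3 f = glm::normalize(target - eye);      // camera forward
    glm::vec3 s = glm::normalize(glm::cross(f, up)); // camera right
    glm::vec3 u = glm::cross(s, f);                  // camera up (re-orthogonalized)

    glm::mat4 view(1.0f);                            // column-major, as in GLM
    view[0][0] = s.x;  view[1][0] = s.y;  view[2][0] = s.z;   // right-vector row
    view[0][1] = u.x;  view[1][1] = u.y;  view[2][1] = u.z;   // up-vector row
    view[0][2] = -f.x; view[1][2] = -f.y; view[2][2] = -f.z;  // camera looks down -z
    view[3][0] = -glm::dot(s, eye);                  // inverse translation from eye to origin
    view[3][1] = -glm::dot(u, eye);
    view[3][2] =  glm::dot(f, eye);
    return view;
}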

Camera Orientation

Wednesday, December 28, 2016

Search Algorithms - Making of Path Finding API

Breadth First Search (B.F.S.) - This algorithm is a generalization of tree traversal. In addition, its derived searches use a tree structure to search the state space (i.e., the terrain), so a tree node data structure for planning will be necessary. Here's an example of the skeleton of such a node data structure: struct PlannerNode { PlannerNode* parent; SearchNode* vertex; };
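To make that concrete, here is a minimal, self-contained sketch of a breadth-first planner over such nodes; the SearchNode layout and the ownership scheme are assumptions for illustration, not the API's actual classes:

#include <queue>
#include <unordered_map>
#include <vector>

// Assumed search-space node: one per tile, linked to its passable neighbors.
struct SearchNode {
    std::vector<SearchNode*> successors;
};

// Planner node as sketched above: one per visited tile, remembering how we got there.
struct PlannerNode {
    PlannerNode* parent;
    SearchNode*  vertex;
};

// Breadth-first search: the open list is a plain FIFO queue, so tiles are expanded
// in order of how many steps they are from the start.
std::vector<SearchNode*> BreadthFirst(SearchNode* start, SearchNode* goal)
{
    std::queue<PlannerNode*> open;
    std::unordered_map<SearchNode*, PlannerNode*> visited;   // doubles as the closed set
    std::vector<PlannerNode*> all;                           // owns every node for cleanup

    auto push = [&](SearchNode* v, PlannerNode* parent) {
        PlannerNode* p = new PlannerNode{parent, v};
        all.push_back(p);
        visited[v] = p;
        open.push(p);
    };
    push(start, nullptr);

    std::vector<SearchNode*> path;
    while (!open.empty()) {
        PlannerNode* current = open.front();
        open.pop();
        if (current->vertex == goal) {
            for (PlannerNode* n = current; n; n = n->parent)  // walk back through parents
                path.insert(path.begin(), n->vertex);
            break;
        }
        for (SearchNode* next : current->vertex->successors)
            if (!visited.count(next))
                push(next, current);
    }
    for (PlannerNode* p : all) delete p;
    return path;                                              // empty if no path exists
}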

Here's a sketch of how a breadth-first search would work over tiles:

Path Planner Application

Greedy Best First Search (G.B.F.S.) - While the BFS algorithm will find the solution path with the fewest planner nodes, it may create an extremely large number of them before that happens. The main issue here is that the open list is treated like a regular queue. By changing the behavior of the open list so that it orders its nodes in a certain way, we can attempt a more efficient search.

The GBFS orders its nodes so that the ones closest to the goal are nearest to the front of the open list, using a priority queue. This algorithm is informed because its ordering is based on goal information. Using the distance to the goal to estimate which open node is "best" is an example of a heuristic.
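A minimal sketch of that ordering, assuming tiles identified by (x, y) coordinates and a straight-line distance heuristic (the API's own node types differ):

#include <cmath>
#include <queue>
#include <vector>

// Extends the BFS planner node with a heuristic cost, as described above.
struct GreedyNode {
    GreedyNode* parent;
    int         x, y;            // tile coordinates (stand-ins for a SearchNode)
    float       heuristicCost;   // estimated distance from this tile to the goal
};

// Ordering for the priority queue: the node closest to the goal comes out first.
struct CloserToGoal {
    bool operator()(const GreedyNode* a, const GreedyNode* b) const {
        return a->heuristicCost > b->heuristicCost;
    }
};

// Straight-line distance used as the heuristic estimate.
float Heuristic(int x, int y, int goalX, int goalY)
{
    float dx = float(x - goalX), dy = float(y - goalY);
    return std::sqrt(dx * dx + dy * dy);
}

// The open list is now a priority queue instead of a FIFO queue.
using OpenList = std::priority_queue<GreedyNode*, std::vector<GreedyNode*>, CloserToGoal>;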

Here's a sketch of how a greedy search would work over tiles:

Path Planner Application

Uniform Cost Search (U.C.S.) - Both the BFS and GBFS assume that each passable tile incurs the same cost to traverse. The U.C.S. algorithm - actually a variant of Dijkstra's algorithm that searches for a single goal instead of multiple goals - handles the different tile weights by incorporating them into each planner node's given cost, which the algorithm then uses to order its open nodes.

Since the ordering imposed on the open nodes is not based on any goal information, this algorithm, like BFS, is uninformed. It is also optimal in terms of the solution it finds, which always has the lowest cost, whereas the BFS algorithm is optimal with regard to the number of nodes in the path.
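A sketch of the given cost and its ordering, again with assumed stand-in types rather than the API's own:

#include <queue>
#include <vector>

// Planner node for uniform cost search: the given cost is the total weight
// accumulated along the path from the start to this tile.
struct UCSNode {
    UCSNode* parent;
    int      x, y;        // tile coordinates (stand-ins for a SearchNode)
    float    givenCost;   // cost of the best known path from the start
};

// The open list orders nodes by given cost alone; no goal information is used,
// which is why the search is uninformed but still optimal in path cost.
struct CheapestFirst {
    bool operator()(const UCSNode* a, const UCSNode* b) const {
        return a->givenCost > b->givenCost;
    }
};

using OpenList = std::priority_queue<UCSNode*, std::vector<UCSNode*>, CheapestFirst>;

// When expanding a successor tile, its given cost is the parent's given cost
// plus the weight of stepping onto that tile.
inline float SuccessorCost(const UCSNode* parent, float tileWeight)
{
    return parent->givenCost + tileWeight;
}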

Here's a sketch of how uniform cost search would work over tiles:

Path Planner Application

A*(A-Star) - A* combines the look-ahead abilities of the GBFS algorithm with look-behind abilities of the UCS algorithm by taking into account both the heuristic cost and the given cost of each planner node. While the definition of the given cost hasn't changed, the heuristic cost is now the product of the greedy search heuristic cost and the heuristic weight, whose value can be changed to customize the behavior of A*. The sum of the new heuristic cost and the given cost is the final cost, which A* then uses to sort the open nodes.

Because A* favors paths that head more directly toward the goal, it tries to avoid crossing regions with higher weights until there is no other choice.
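Putting the two costs together, the final cost computation can be sketched as follows (the names are illustrative, not the API's own):

// A* planner node combining both costs described above.
struct AStarNode {
    AStarNode* parent;
    int        x, y;           // tile coordinates (stand-ins for a SearchNode)
    float      givenCost;      // cost accumulated from the start (as in UCS)
    float      heuristicCost;  // weighted estimate to the goal (as in GBFS)
    float      finalCost;      // what the open list is sorted by
};

// finalCost = givenCost + heuristicWeight * heuristic(tile, goal).
// A heuristic weight of 0 degenerates to UCS; a very large weight behaves like GBFS.
inline float FinalCost(float givenCost, float heuristic, float heuristicWeight)
{
    return givenCost + heuristicWeight * heuristic;
}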

Here's a sketch of how A* would work over tiles:

Path Planner Application

Path Finding API



Brief Overview
Path Finding is an engine API designed to create an architecture for AI behaviors by analyzing the problem to be solved and customizing the system of data structures based on need. In this application, I started by sketching out the search graph using the tile map, which contains raw data such as X, Y, and weight, and then moved on to create the hex tile adjacency - a terrain map made up of hexagonal tiles, although they are stored in a 2D array and displayed in the console window. Then I proceeded to code the Breadth First Search algorithm and worked my way up to A*. The purpose of building Breadth First Search, Greedy Best First Search, Uniform Cost Search, and A* is to recognize the difference in cost and speed of each in reaching the goal. They differ in which neighboring nodes they traverse on their way to the goal.
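For reference, here is one common way to express hex tile adjacency over a plain 2D array, assuming an odd-row offset layout (odd rows shifted right); the demo's own layout may differ, but the idea of parity-dependent offsets is the same:

#include <utility>
#include <vector>

// Neighbor lookup for hexagonal tiles stored in a 2D (column, row) array.
std::vector<std::pair<int, int>> HexNeighbors(int col, int row, int width, int height)
{
    static const int offsets[2][6][2] = {
        // even rows:  E        NE        NW        W         SW        SE
        { {+1, 0}, {0, -1}, {-1, -1}, {-1, 0}, {-1, +1}, {0, +1} },
        // odd rows
        { {+1, 0}, {+1, -1}, {0, -1}, {-1, 0}, {0, +1}, {+1, +1} },
    };

    std::vector<std::pair<int, int>> result;
    const auto& dirs = offsets[row & 1];     // pick the offset set for this row's parity
    for (int i = 0; i < 6; ++i) {
        int c = col + dirs[i][0];
        int r = row + dirs[i][1];
        if (c >= 0 && c < width && r >= 0 && r < height)   // stay inside the tile map
            result.emplace_back(c, r);
    }
    return result;
}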

Mechanics Of Path Finding API
A path is found using a graph composed of search nodes, each of which stores information about the corresponding tile and its successors, and planner nodes, which store information about the paths and their costs. In the application, the blue circles represent all the planner nodes the search algorithm has visited so far. Nodes with green inner circles are open nodes that the algorithm can use to extend existing paths; the lighter the green interior, the closer the node is to becoming the current node. Note that the current node is not an open node, so it doesn't have a green inner circle; instead, the application draws a red line from this node through each successive parent node until the starting tile is reached.

While the application is paused, depending on the algorithm, either the orange circles will appear around the neighbors of the current node, or orange lines will extend toward those neighbors. The algorithm evaluates only these successor nodes along with the current node at each iteration.

After setting the current node at the start of each iteration, the search algorithm checks whether or not this node is at the goal tile. As long as this is not the case, the size of the current path and the number of iterations so far will appear as red text in the information display.

When the search algorithm finally reaches the goal, the red line will be decorated with large blue squares that indicate the solution path.

GitHub Link - https://github.com/jessjohn123/Artificial_Intelligence_And_Machine_Learning_Algorithms/tree/master/Path_Search