
XNA Tutorial for C# overview - Series 1

This part of the site shows you how easy it is to get an XNA program up and running! This tutorial is aimed at people who haven't done any 3D programming so far and would like to see some results in the shortest possible time. To this end, XNA is an ideal programming environment. XNA uses C# as its programming language, and since the code looks very much like Java, anyone with some notions of Java should be able to start right away. Better still, this tutorial is written in such a way that anyone with any programming experience should be able to understand and complete it! This first series of XNA tutorials gives you a general introduction to XNA. It is divided into several chapters, which you can find listed below. In every chapter you'll find a basic XNA feature:

- Starting a project: setting up and using the Development Environment
- The effect file: effects are needed to draw stuff on the screen
- The first triangle: defining points, displaying them using XNA
- World space: defining points in 3D space, defining the camera position
- Rotation & translation: rotating and moving the scene
- Indices: removing redundant vertex information to decrease AGP/PCIX bandwidth
- Terrain basics: bringing altitude into our program
- Terrain from file: creating a terrain from an image
- Keyboard: reading user input from the keyboard using XNA
- Importing bmp files: change your terrain from within Paint!
- Adding colors: adding simple color to your terrain
- Lighting basics: lighting can be complex to fully understand, so a whole chapter on this subject
- Terrain lighting: using what we learned in the previous chapter to enable lighting over our terrain

Starting your project

Welcome to the first entry of this XNA Tutorial. This tutorial is aimed at people who haven't done any 3D programming so far and would like to see some results in the shortest possible time. Released in December 2006, XNA is a framework built around DirectX, which eases game programming in a lot of ways. With XNA Game Studio installed, you can start it up from the start menu. Next, go to the File menu, and select New Project. As the project template, we of course need Windows Game (XNA). Deploying your game on an Xbox 360 will be discussed later. Fill in XNAtutorial as the name for the project, and hit the OK button! A small project is created, and on the right you see it already contains 2 code files: Game1.cs and Program.cs. You can look at the code in the files by right-clicking on them and selecting View Code. Your program starts in the Program.cs file, in the Main method. The Main method simply calls up the code in the Game1.cs file. There's nothing we need to change in the Program.cs file. Open the code in the Game1.cs file. Although it's littered with comments, we can discover the structure of an XNA game program:

- The constructor method Game1() is called once at startup. It is used to load some variables needed by the XNA framework.
- The Initialize method is also called once at startup. This is the method where we should put our initialization code.
- The (Un)LoadGraphicsContent methods are used for importing media, such as images, objects and audio.
- The Update method is called every frame, before drawing. Here we will put the code that needs to be updated throughout the lifetime of our program, such as the code that reads the keyboard and updates the geometry of our scene.
- As often as your computer (and especially your graphics card) allows, the Draw method is called. This is where we should put the code that actually draws our scene to the screen.

As you can see, no code is needed to open a window, as this is done automatically for us. When you run your project by pressing F5, you will already get a nice blue window.
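The calling order described above can be sketched as a minimal loop. This is an illustrative Python sketch with hypothetical method names mirroring the XNA ones, not how the framework is actually implemented: the constructor-style setup runs once, then Update and Draw alternate every frame.

```python
# Minimal sketch of the XNA program flow described above:
# Initialize and LoadGraphicsContent run once, then Update/Draw repeat every frame.

class GameSketch:
    def __init__(self):
        self.calls = []          # record the call order, for illustration only

    def initialize(self):
        self.calls.append("Initialize")

    def load_graphics_content(self):
        self.calls.append("LoadGraphicsContent")

    def update(self):
        self.calls.append("Update")

    def draw(self):
        self.calls.append("Draw")

    def run(self, frames):
        self.initialize()
        self.load_graphics_content()
        for _ in range(frames):  # the real framework repeats this as fast as it can
            self.update()        # game logic first...
            self.draw()          # ...then rendering

game = GameSketch()
game.run(2)
print(game.calls)
```

Note the invariant the text describes: Update always runs before the Draw of the same frame, and the one-time setup methods never repeat.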

Let's move on and add a link to the graphics device. In short, a device is a direct link to your graphical adapter: an object that gives you direct access to the piece of hardware inside your computer. First, we'll declare this variable by adding this line to the top of your class, just above the Game1() method:

GraphicsDevice device;

Next, we're going to fill this variable. We'll create a small method, SetUpXNADevice, for this. Put it somewhere in the middle of the class; I put it immediately after the Initialize method:

private void SetUpXNADevice()
{
    device = graphics.GraphicsDevice;
    graphics.PreferredBackBufferWidth = 500;
    graphics.PreferredBackBufferHeight = 500;
    graphics.IsFullScreen = false;
    graphics.ApplyChanges();
    Window.Title = "Riemer's XNA Tutorials -- Series 1";
}

The first line stores the link to our graphics device in the device variable. Next, we set the size of our backbuffer, which will contain what will be drawn to the screen. We also indicate we want our program to run in a window, after which we apply the changes. The last line sets the title of our window. If we want this method to be executed on startup, we still need to call this method from within our Initialize method:

SetUpXNADevice();

When you run this code, you should see a window of 500x500 pixels, with the title you set, as shown below.

One of the main differences between DirectX 9 and XNA is that we now need an effect for everything we draw. So what exactly is an effect? In 3D programming, all objects are represented using triangles. Even spheres can be represented using triangles, if you use enough of them. An effect is some code that instructs your hardware (the graphics card) how it should display these triangles. An effect file contains one or more techniques, for example technique A and technique B. Drawing triangles using technique A might draw them semi-transparent, while drawing them using technique B might draw all objects using only blue-gray colors, as seen in some horror movies. Don't worry too much about this, as it is more advanced material which we will handle in Series 3. However, XNA needs an effect file to draw even the simplest triangles, so I've written an effect file that contains some very basic techniques. You can download it here. Right-click on the link and select 'Save As...'. You should put the file in the same folder as your code files. If you don't know where that is, first go back to your code, go to the File menu, and select 'Save [project name] as...'. By default, the code files are saved in the folder 'C:\Documents and Settings\[user name]\My Documents\Visual Studio 2005\Projects\[project name]\[project name]'. Now that you have downloaded the effect file to the same folder as your code files, we will link our program to it. We will declare a new Effect object, so put this at the beginning of your class:

Effect effect;

Now go back to our SetUpXNADevice method, and add these lines to it:

CompiledEffect compiledEffect = Effect.CompileEffectFromFile(@"../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows);
effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);

Looks like advanced stuff, but all it does is load the effect file. The first line loads the HLSL code from my .fx file and compiles it into assembler (the language of the graphics card). We need to put the @"../../../../" in front of our filename because the program is run from the .exe file, which is in the /bin/x86/debug/ folder, so we need to go back up a few folders. The other arguments of this method are beyond the scope of this series, but are explained in Series 3. The second line loads the compiled code into our effect variable, which is now ready to use! With all the necessary variables loaded, we can concentrate on the Draw method. You'll notice the first line starts with graphics.GraphicsDevice. We can replace this by the shortcut we made in the previous chapter: 'device'. The first line clears the buffer of our window to a specified color. Let's make it DarkSlateBlue, just for fun:

device.Clear(Color.DarkSlateBlue);

XNA uses a buffer to draw to, instead of drawing directly to the window. At the end of the Draw method, the contents of the buffer are drawn on the screen in one go. This way, the screen will not flicker as it would if we drew each part of our scene separately to the screen. Running this code will already give you the image you see below, but I would first like to add some additional code. As discussed above, to draw something we first need to specify a technique from an Effect object. So we need to put this code below the call to the Clear method:

effect.CurrentTechnique = effect.Techniques["Pretransformed"];
effect.Begin();

You see we select the Pretransformed technique, which we will learn about in the next chapter. Then we tell the effect to begin. A technique can be made up of multiple passes, so we need to iterate through them. Add this code below the code you just entered:

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    pass.End();
}
effect.End();

You see each pass needs a call to Begin and a call to End. The scene must be drawn between these 2 calls. At the end, we need a call to effect.End to tell our effect that no more objects will be drawn using this technique. Finally, we're through the initialization part! If you don't understand everything about effects and techniques, there's no need to worry, as we will discuss them in detail in Series 3. With all of this code set up, we're finally ready to start drawing things on the screen, which is what we will do in the next chapter.

Last chapter we drew a triangle using 'pretransformed' coordinates. These coordinates are already 'transformed', so you can directly specify their position on the screen. Usually, however, you will use untransformed coordinates, the so-called World space coordinates, which we specify in 3D. These allow you to create a whole scene using simple 3D coordinates and, also very important, to position a camera through which the user will look at the scene. So we'll start by redefining our triangle coordinates in world space. Replace the code in your SetUpVertices method with this code:

vertices = new VertexPositionColor[3];
vertices[0].Position = new Vector3(0f, 0f, 0f);
vertices[0].Color = Color.Red;
vertices[1].Position = new Vector3(5f, 10f, 0f);
vertices[1].Color = Color.Yellow;
vertices[2].Position = new Vector3(10f, 0f, -5f);
vertices[2].Color = Color.Green;

As you can see, from here on we'll be using the Z coordinate. Of course, you're free to try changing the positions at the end of this chapter. Because we are no longer using pretransformed coordinates (where the x and y coordinates should be in the [-1, 1] region), we need to select a different technique from our effect file. I called the technique 'Colored', so this is how we load it:

effect.CurrentTechnique = effect.Techniques["Colored"];

Let's run this code. Very nice, your triangle has disappeared again. Why's that? Easy: because you haven't told DirectX yet where to position the camera and what to look at! To position our camera, we need to define some matrices. Stop ... matrices??

First, a small word about matrices. We define our points in 3D space. Because our screen is only 2D, it is only logical that our 3D points somehow need to be 'transformed' to 2D space. This is done by multiplying our 3D positions with a matrix. In short, you should see a matrix simply as a mathematical element that holds a transformation: if you multiply a 3D position with such a matrix, you get the transformed position. (If you want to know more about matrices, you can find more info in the Extra Reading section of this site.) Because a lot of properties need to be defined when transforming our points from 3D world space to our 2D screen, this transformation is split into 2 steps, so we get 2 matrices. Add this code to our program:

private void SetUpCamera()
{
    Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(0, 0, 40), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
    Matrix projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 1.0f, 50.0f);
}

The first line creates a matrix that stores the position and orientation of the camera, through which we look at the scene. The first parameter defines the position of the camera: we position it 40 units away from our (0,0,0) point, the origin, along the positive Z axis. The next parameter sets the target point the camera is looking at: we will be looking at our origin. At this point, we have defined the viewing axis of our camera, but we can still rotate the camera around this axis, so we still need to define which vector will be considered 'up'. The second line creates a matrix which stores how the camera looks at the scene. The first parameter sets the view angle, 45 degrees (PiOver4) in our case. Then we set the view aspect ratio, which is 1 in our case, but will be different if our window is a rectangle instead of a 500x500 square. The last parameters define the view range: any object closer to the camera than 1f will not be shown, and any object further away than 50f won't be shown either. These distances are called the near and the far clipping planes, since all objects not between these planes will be clipped (= not drawn). Now that we have these matrices, we need to pass them to our technique, where they are combined. This is done by the next lines of code, which we need to add to the bottom of the SetUpCamera method:

effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xWorld"].SetValue(Matrix.Identity);

Although the first 2 lines are explained above, they are discussed in much more detail in Series 3. The third line sets another parameter, which is discussed in the next chapter. We still need to call this method from the Initialize method:

SetUpCamera();
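Before running, a short numeric aside on what "a matrix holds a transformation" means in practice. This is an illustrative Python sketch, not XNA code: multiplying a 3D position, written as a 4-component vector, by a 4x4 translation matrix yields the moved position. (XNA's Matrix type does all of this for you.)

```python
# A 4x4 matrix applied to a position (x, y, z, 1) performs a transformation.
# Here the matrix is a translation by (10, 0, 0), in row-vector convention.

def transform(position, matrix):
    x, y, z, w = position
    return tuple(
        x * matrix[0][c] + y * matrix[1][c] + z * matrix[2][c] + w * matrix[3][c]
        for c in range(4)
    )

translation = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [10, 0, 0, 1],   # the translation lives in the last row (row-vector style)
]

# The point (5, 10, 0) moves 10 units along x:
print(transform((5, 10, 0, 1), translation))  # (15, 10, 0, 1)
```

The view and projection matrices work the same way, just with more interesting numbers: multiplying a world-space point by them carries it toward screen space.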

Now run the code. You should see the image below: a triangle whose top point is not exactly in the middle. This is because the bottom-right corner has a negative Z coordinate. One thing you should notice: the green corner of the triangle is on the right side of the window, which seems normal because you defined it on the positive x-axis. So, if you position your camera on the negative z-axis:

Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(0, 0, -40), new Vector3(0, 0, 0), new Vector3(0, 1, 0));

you would expect to see the green point in the left half of the window. Try to run this now.

This might not be exactly what you expected. Something very important has happened: XNA only draws triangles that are facing the camera. DirectX specifies that triangles facing the camera should be defined clockwise relative to the camera. If you position the camera on the negative z-axis, the corner points of the triangle in our vertices array are defined counter-clockwise relative to the camera, and thus will not be drawn! Culling can greatly improve performance, as it reduces the number of triangles to be drawn. However, while designing an application, it's better to turn culling off by putting this line of code in the Draw method:

device.RenderState.CullMode = CullMode.None;

This will simply draw all triangles, even those not facing the camera. Note that this should never be done in a final product, because it slows down the drawing process: all triangles will be drawn, even those not facing the camera! Now put the camera back on the positive part of the Z axis.
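The winding test that culling performs can be checked with plain numbers. This is an illustrative Python sketch, not XNA code: the sign of the 2D cross product of a triangle's edges, as seen on screen, tells you whether its vertices run clockwise or counter-clockwise.

```python
# Signed-area test: with y pointing up, a negative z component of the edge
# cross product means the vertices run clockwise on screen; positive means
# counter-clockwise. Back-face culling rejects one of the two cases.

def winding(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    cross_z = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
    return "clockwise" if cross_z < 0 else "counter-clockwise"

# Our triangle's x/y corners, as seen from the positive z axis:
print(winding((0, 0), (5, 10), (10, 0)))    # clockwise -> drawn

# Seen from the negative z axis the x coordinates mirror, and the
# same vertex order becomes counter-clockwise -> culled.
print(winding((0, 0), (-5, 10), (-10, 0)))  # counter-clockwise
```

This is exactly why moving the camera to the other side of the triangle made it disappear: the vertex order didn't change, but its winding relative to the camera did.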

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/World_space.php>

This chapter we'll make our triangle spin around. Since we are using world space coordinates, this is very easy. Let's first add a variable 'angle' to our class to store the current rotation angle. Just add this one to your variables.

private float angle = 0f;

Now, we should increase this variable by 0.05f every frame. The Update method is an excellent place to put this code, as it is called 60 times each second:

angle += 0.05f;

With our angle increasing automatically, all we have to do is to rotate the world coordinates. I hope you remember from your math class this is done using transformation matrices ;) Luckily, all you have to do is specify the rotation axis and the rotation angle. All the rest is done by XNA! The rotation is stored in what is called the World matrix. Add this code to your Draw method, before your call to effect.Begin:

Matrix worldMatrix = Matrix.CreateRotationY(3 * angle);
effect.Parameters["xWorld"].SetValue(worldMatrix);

The first line creates our World matrix, which holds a rotation around the Y axis. The second line passes this World matrix to the effect, which needs it to do its job. From now on, everything we draw will be rotated around the Y axis by the amount currently stored in 'angle'! When you run the application, you will see that your triangle is spinning around its (0,0,0) point! This is of course because the Y axis runs through this point, so that is the only point of our triangle that stays in place. Now imagine we would like to spin it around the center of the triangle. One possibility is to redefine the triangle so that (0,0,0) is in its center. The better solution is to first move (= translate) the triangle a bit to the left and down, and then rotate it. To do this, simply multiply your World matrix with a translation matrix first:

Matrix worldMatrix = Matrix.CreateTranslation(-5, -10 * 1 / 3, 0) * Matrix.CreateRotationZ(angle);

This will move the triangle so the (0,0,0) point is positioned at the center of gravity of the triangle. Then our triangle is rotated around this point, along the Z axis, giving us the desired result. Note the order of transformations. Go ahead and place the translation AFTER the rotation: you will see a triangle rotating around one point, moved to the left and below. This is because in matrix multiplication M1*M2 is NOT the same as M2*M1! More on this in the Extra Reading chapters on matrices. You can easily change the code to make the triangle rotate around the X or Y axis. Make sure to try one of them, to get a first feeling of 3D. Remember that one point of our triangle has a Z coordinate of -5, which explains why the triangle won't always seem to rotate symmetrically. A bit more complex is Matrix.CreateFromAxisAngle, where you first specify your own custom rotation axis:

Vector3 rotAxis = new Vector3(3 * angle, angle, 2 * angle);
rotAxis.Normalize();
Matrix worldMatrix = Matrix.CreateTranslation(-5, -10 * 1 / 3, 0) * Matrix.CreateFromAxisAngle(rotAxis, angle);

This will make our triangle spin around an ever-changing axis. The first line defines this axis (which changes every frame, as it depends on the variable angle). The second line normalizes this axis, which is needed to make the CreateFromAxisAngle method work properly (Normalize() changes the coordinates of the vector so that the distance between the vector and the (0, 0, 0) point is exactly 1).
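The claim that M1*M2 differs from M2*M1 is easy to verify numerically. This illustrative Python sketch (not XNA code) applies a translation and a rotation about the origin to a 2D point, in both orders, and gets two different results:

```python
import math

# Apply a rotation about the origin and a translation, in both orders.
# Because matrix multiplication is not commutative, the results differ.

def rotate(point, angle):
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c)

def translate(point, dx, dy):
    x, y = point
    return (x + dx, y + dy)

p = (10.0, 0.0)
quarter_turn = math.pi / 2

translate_then_rotate = rotate(translate(p, -5, 0), quarter_turn)  # (5,0)  -> ~(0, 5)
rotate_then_translate = translate(rotate(p, quarter_turn), -5, 0)  # (0,10) -> (-5, 10)

print(translate_then_rotate)
print(rotate_then_translate)
```

Translating first makes the point rotate around a circle of radius 5; rotating first keeps the radius at 10 and only then shifts the result. That is exactly the difference you see when you swap the translation and rotation matrices in the World matrix above.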

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Rotation_-_translation.php>

The triangle was nice, but what about a lot of triangles? We would need to specify 3 vertices for each triangle. Consider the next example:

Only 4 out of 6 vertices are unique, so the other 2 are simply a waste of bandwidth to your graphics card! It would be better to define the 4 unique vertices in an array, indexed 0 to 3, and to define triangle 1 as vertices 0, 1 and 2 and triangle 2 as vertices 1, 2 and 3. This way, the complex vertex data is not duplicated. This is exactly the idea behind index buffers. Suppose we would like to draw these 2 triangles:

Normally we would have to define 6 vertices, now we will define only 5. So change our SetUpVertices method as follows:

private void SetUpVertices()
{
    vertices = new VertexPositionColor[5];
    vertices[0].Position = new Vector3(0f, 0f, 0f);
    vertices[0].Color = Color.White;
    vertices[1].Position = new Vector3(5f, 0f, 0f);
    vertices[1].Color = Color.White;
    vertices[2].Position = new Vector3(10f, 0f, 0f);
    vertices[2].Color = Color.White;
    vertices[3].Position = new Vector3(5f, 5f, 0f);
    vertices[3].Color = Color.White;
    vertices[4].Position = new Vector3(10f, 5f, 0f);
    vertices[4].Color = Color.White;

    vb = new VertexBuffer(device, sizeof(float) * 4 * 5, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
    vb.SetData(vertices);
}

You already know the first part of the method, where we define the position and color of the 5 vertices. The last 2 lines, however, are new. The first one creates a new VertexBuffer, which will hold the 5 vertices. The second argument defines how big the buffer must be, in bytes: because a position uses up 3 floats and a color 1 float, we end up with 4 floats per vertex. The last line copies our vertex array into the vertex buffer. Of course, we still need to declare vb at the top of our class before this will compile:

private VertexBuffer vb;
private IndexBuffer ib;

The last line declares an IndexBuffer, ib, which we are going to fill next. We'll create another small method for this:

private void SetUpIndices()
{
    short[] indices = new short[6];
    indices[0] = 3;
    indices[1] = 1;
    indices[2] = 0;
    indices[3] = 4;
    indices[4] = 2;
    indices[5] = 1;

    ib = new IndexBuffer(device, typeof(short), 6, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
    ib.SetData(indices);
}

As with our SetUpVertices method, the first line declares the buffer that the triangles will be drawn from. The indices array holds the order in which the vertices from vb will be drawn. As you can see, we now need 6 indices, since 1 triangle is defined by 3 indices. Vertex number 1 is used twice, which was our initial goal. In this case, the profit is rather small, but in bigger applications (as you will see soon ;) this is the way to go. Also note that the triangles have again been defined in a clockwise order, so DirectX will see them as facing the camera. The last 2 lines initialize the index buffer and fill it with the data from the indices array.
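The bandwidth argument can be made concrete outside XNA. This illustrative Python sketch expands the index list above into triangles, showing that 5 stored vertices describe 2 triangles, with vertex 1 referenced twice instead of stored twice:

```python
# 5 shared vertices + 6 indices describe 2 triangles; without indices we
# would need 6 full vertex records (position + color) instead of 5.

vertices = ["v0", "v1", "v2", "v3", "v4"]   # stand-ins for VertexPositionColor
indices = [3, 1, 0, 4, 2, 1]                # same order as the C# code above

# Every group of 3 consecutive indices forms one triangle:
triangles = [indices[i:i + 3] for i in range(0, len(indices), 3)]
print(triangles)                            # [[3, 1, 0], [4, 2, 1]]

# Vertex 1 is shared by both triangles: referenced twice, stored once.
reference_counts = {i: indices.count(i) for i in set(indices)}
print(reference_counts[1])                  # 2
```

With a 4x3 terrain and beyond, the ratio of references to stored vertices grows quickly, which is where the real saving appears.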

Make sure to call this method from our Initialize method:

SetUpIndices();

All that's left for this chapter is to draw the triangles from our buffer! Make the following changes to your Draw method:

device.Vertices[0].SetSource(vb, 0, VertexPositionColor.SizeInBytes);
device.Indices = ib;
device.VertexDeclaration = new VertexDeclaration(device, VertexPositionColor.VertexElements);
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 5, 0, 2);

We first set vb as the active vertex buffer source. We also need to specify which vertex to start from and how many bytes one vertex occupies. Luckily, this size is stored in the VertexPositionColor struct (earlier we calculated that we need 4 floats per vertex). We also need to set the active index buffer, and we still need to specify what kind of information is contained in each vertex. Finally, we call the DrawIndexedPrimitives method. We still offer a list of separate triangles. The first zero indicates at which index to start counting in your index buffer. Then we indicate the minimum number of used indices; we pass 0, which brings no speed optimization. Then come the number of used vertices and the starting point in our vertex buffer. Finally, we indicate how many primitives (= triangles) we want drawn. That's it! When you run the program, you'll see 2 solid white triangles next to each other, but they're still rotating. To stop the rotation, replace your World matrix with this:

Matrix worldMatrix = Matrix.Identity;

The Identity matrix is the unity matrix, so your World space coordinates are left unchanged. Now the triangles have stopped moving, but they're still solid. Try putting this line as the first line of the Draw method:

device.RenderState.FillMode = FillMode.WireFrame;

This will only draw the outlines of our triangles, instead of solid triangles.

At last, we've seen enough topics to start creating our terrain. Let's start small, by connecting 4x3 specified points. However, we will make our engine dynamic, so that next chapter we can easily load a much larger number of points. To do this, we have to create 2 new variables in our class:

private int WIDTH = 4;
private int HEIGHT = 3;

We will suppose the 4x3 points are equidistant, so the only thing we don't know about our points is the Z coordinate. We will use an array to hold this information, so add this line to the top of our class as well:

private int[,] heightData;

For now, use this method to fill the array:

private void LoadHeightData()
{
    heightData = new int[4, 3];
    heightData[0, 0] = 0; heightData[1, 0] = 0; heightData[2, 0] = 0; heightData[3, 0] = 0;
    heightData[0, 1] = 1; heightData[1, 1] = 0; heightData[2, 1] = 2; heightData[3, 1] = 2;
    heightData[0, 2] = 2; heightData[1, 2] = 2; heightData[2, 2] = 4; heightData[3, 2] = 2;
}

Since we only have to load the data once, we will call this method from our Initialize method. Place it as the first call there:

LoadHeightData();

With our height array filled, we can now create our vertices. Since we have a 4x3 terrain, 12 (= WIDTH*HEIGHT) vertices will do. The points are equidistant (the distance between them is the same), so we can easily change our SetUpVertices method. For now, we will not use the Z coordinate yet, so we can see the difference later in this chapter.

private void SetUpVertices()
{
    vertices = new VertexPositionColor[WIDTH * HEIGHT];
    for (int x = 0; x < WIDTH; x++)
    {
        for (int y = 0; y < HEIGHT; y++)
        {
            vertices[x + y * WIDTH].Position = new Vector3(x, y, 0);
            vertices[x + y * WIDTH].Color = Color.White;
        }
    }

    vb = new VertexBuffer(device, sizeof(float) * 4 * WIDTH * HEIGHT, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
    vb.SetData(vertices);
}

Nothing magical going on here: you simply define your 12 points and make them white, after which you load them into the VertexBuffer. Next comes a more difficult part: defining the indices that create the triangles needed to connect the 12 vertices. The best way to do this is by creating two sets of triangles:

We'll start by drawing the set of triangles drawn in solid lines. To do this, change your SetUpIndices method like this:

private void SetUpIndices()
{
    int[] indices = new int[(WIDTH - 1) * (HEIGHT - 1) * 3];
    for (int x = 0; x < WIDTH - 1; x++)
    {
        for (int y = 0; y < HEIGHT - 1; y++)
        {
            indices[(x + y * (WIDTH - 1)) * 3] = (x + 1) + (y + 1) * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 3 + 1] = (x + 1) + y * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 3 + 2] = x + y * WIDTH;
        }
    }

    ib = new IndexBuffer(device, typeof(int), (WIDTH - 1) * (HEIGHT - 1) * 3, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
    ib.SetData(indices);
}

We will need 2 rows of 3 triangles, giving 6 triangles. These require 6 * 3 = 18 indices (= (WIDTH-1)*(HEIGHT-1)*3). You create the needed array for this in the first line. Then you fill it: again you scan the X and Y coordinates, and this time you create your triangles. Every triangle needs 3 indices; that's where the *3 comes from. Remember culling? It requires us to define the points in clockwise order relative to the camera. So first you define the top-right vertex, then the bottom-right vertex and then the bottom-left vertex. That's all there is to it. The only thing left is to draw the triangles by changing this piece of code in your Draw method:

device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH*HEIGHT, 0, (WIDTH-1)*(HEIGHT-1));

We'll be drawing 3*2 triangles from 4*3 vertices. This code should already run, but the triangles will look tiny. So try positioning your camera at (0,0,15) and rerun the program. You should see 6 triangles in the right half of your window, every point of every triangle at the same Z coordinate. Now change the height of your points according to your heightData array:

vertices[x+y*WIDTH].Position = new Vector3(x, y, heightData[x,y]);

Running this, you will notice that some corners of the triangles move slightly upwards. Now it's time to draw the second set of triangles. We need the same vertices, so the only thing we have to change is the SetUpIndices method:

private void SetUpIndices()
{
    int[] indices = new int[(WIDTH - 1) * (HEIGHT - 1) * 6];
    for (int x = 0; x < WIDTH - 1; x++)
    {
        for (int y = 0; y < HEIGHT - 1; y++)
        {
            indices[(x + y * (WIDTH - 1)) * 6] = (x + 1) + (y + 1) * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 6 + 1] = (x + 1) + y * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 6 + 2] = x + y * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 6 + 3] = (x + 1) + (y + 1) * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 6 + 4] = x + y * WIDTH;
            indices[(x + y * (WIDTH - 1)) * 6 + 5] = x + (y + 1) * WIDTH;
        }
    }

    ib = new IndexBuffer(device, typeof(int), (WIDTH - 1) * (HEIGHT - 1) * 6, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
    ib.SetData(indices);
}

We will be drawing twice as many triangles now; that's why the *3 has been replaced by *6 everywhere. You can see the second set of triangles has also been defined clockwise relative to the camera. Remember that we also need to update the Draw method, as we'll be drawing 3*2*2 triangles now:

device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, (WIDTH - 1) * (HEIGHT - 1) * 2);

Running this code will give you a better 3-dimensional view. We've taken special care to use only the variables WIDTH and HEIGHT, so these, together with the contents of the heightData array, are the only things we need to change to increase the size of our map. It would be nice to find a mechanism that fills the array automatically, which is what we'll do in the next chapter.
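The grid index math above can be checked without a graphics card. This illustrative Python sketch runs the same loops as the C# SetUpIndices and verifies the counts the text predicts for the 4x3 example:

```python
# Generate both triangles per grid cell, exactly like the C# loops above:
# (WIDTH-1)*(HEIGHT-1) cells, 2 triangles each, 3 indices per triangle.

WIDTH, HEIGHT = 4, 3

def set_up_indices(width, height):
    indices = [0] * ((width - 1) * (height - 1) * 6)
    for x in range(width - 1):
        for y in range(height - 1):
            base = (x + y * (width - 1)) * 6
            indices[base + 0] = (x + 1) + (y + 1) * width   # first triangle
            indices[base + 1] = (x + 1) + y * width
            indices[base + 2] = x + y * width
            indices[base + 3] = (x + 1) + (y + 1) * width   # second triangle
            indices[base + 4] = x + y * width
            indices[base + 5] = x + (y + 1) * width
    return indices

indices = set_up_indices(WIDTH, HEIGHT)
print(len(indices))        # 36 indices = 6 cells * 2 triangles * 3 indices
print(len(indices) // 3)   # 12 triangles = (WIDTH-1)*(HEIGHT-1)*2
print(max(indices))        # 11: every index stays within the 12 vertices
```

Because the loops only use WIDTH and HEIGHT, growing the grid (as we do next chapter with 64x64) changes nothing in this logic, only the counts.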

It's time to finally create a nice-looking landscape. Instead of manually entering the heightData array, we are going to fill it from a file. To do this, we are going to load a 64x64 black-and-white image, and use the 'white value' of every pixel as the Z coordinate for the corresponding point! You can download my example file here (link). Put the file in the same folder as your .cs files. To open and read files, you need to add the following line to your using block:

using System.IO;

Change your LoadHeightData method to this:

private void LoadHeightData()
{
    heightData = new int[WIDTH, HEIGHT];
    FileStream fs = new FileStream("../../../XNA.raw", FileMode.Open, FileAccess.Read);
    BinaryReader r = new BinaryReader(fs);
    for (int i = 0; i < HEIGHT; i++)
    {
        for (int y = 0; y < WIDTH; y++)
        {
            int height = (int)(r.ReadByte() / 50);
            heightData[WIDTH - 1 - y, HEIGHT - 1 - i] = height;
        }
    }
    r.Close();
}

First we create a heightData array capable of storing the 64x64 Z coordinates. The 2 following lines open the file XNA.raw for binary access; the ../../../ prefix points back from the folder your .exe runs in to your project folder. In a .raw file, the 'white value' of every pixel is stored byte after byte, so the only thing we have to do is load byte after byte into our heightData array! We divide by 50, as otherwise the Z coordinates would be way too high. We have to use the WIDTH-1-y constructs because the data is stored inverted in the .raw format. Now change our width and height variables so we can display the whole terrain:

private int WIDTH = 64;
private int HEIGHT = 64;
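The byte-reading logic can be sketched against an in-memory byte string instead of a real .raw file. This illustrative Python sketch (not XNA code; a tiny 4x3 "image" stands in for the real 64x64 file) mirrors the C# loop above, including the divide-by-50 scaling and the inverted storage:

```python
# A .raw heightmap is just WIDTH*HEIGHT raw bytes, one per pixel.
# Read them in file order, divide by 50, and store mirrored, as in the C# loop.

WIDTH, HEIGHT = 4, 3

raw_bytes = bytes([0, 50, 100, 150,      # first stored pixel row
                   200, 250, 0, 50,
                   100, 150, 200, 250])  # 4 * 3 = 12 pixels in all

height_data = [[0] * HEIGHT for _ in range(WIDTH)]
pos = 0
for i in range(HEIGHT):
    for y in range(WIDTH):
        height = raw_bytes[pos] // 50    # scale down so peaks stay reasonable
        pos += 1
        height_data[WIDTH - 1 - y][HEIGHT - 1 - i] = height  # stored inverted

print(height_data[WIDTH - 1][HEIGHT - 1])  # first byte: 0 // 50 = 0
print(height_data[WIDTH - 2][HEIGHT - 1])  # second byte: 50 // 50 = 1
```

Note how the first byte of the file ends up at the highest array coordinates: that is the inversion the WIDTH-1-y / HEIGHT-1-i indexing performs.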

You can try running this code, but you'll notice that you can't see the whole terrain with our current camera settings. First we'll introduce a translation before drawing the triangles, so the middle of the terrain is in the (0,0,0) position. Change your World matrix to this:

Matrix worldMatrix = Matrix.CreateTranslation(-HEIGHT / 2, -WIDTH / 2, 0); effect.Parameters["xWorld"].SetValue(worldMatrix);

Make sure this is executed before you start drawing your terrain; I placed it in my Draw method. With the terrain in the center of our window, the only thing left to do is reposition our camera!

Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(0, -40, 60), new Vector3(0, -5, 0), new Vector3(0, 1, 0));
Matrix projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 1.0f, 150.0f);

Don't forget to set the far clipping plane to 150f, or points further than 50 units away from the camera won't be drawn! Just set the background color to black for a nicer result. Now run the program and you'll see a nice terrain.

You might already have a rotating terrain, but it definitely would look better filled with some colors instead of just plain white lines. One way to do this is to use natural colors, like the ones we find in the mountains: at the bottom we have blue lakes, then the green trees, the brown mountain and finally snow-topped peaks. To keep this tutorial general, we can't expect every image to have a lake at height 0 and a mountain peak at height 255 (the maximum value for a .bmp pixel). Imagine an image with height values only between 50 and 200: it would probably produce a terrain without any lakes or snow-topped peaks. To remain as general as possible, we first have to detect the minimum and maximum heights in our image. We will store these in the following global variables, which we initialize with the opposite extreme values:

private int MinimumHeight = 255;
private int MaximumHeight = 0;

When you load your image, you are now going to check whether the current pixel's height is below the current MinimumHeight or above the current MaximumHeight:

heightData[WIDTH-1-y,HEIGHT-1-i] = height;
if (height < MinimumHeight)
{
    MinimumHeight = height;
}
if (height > MaximumHeight)
{
    MaximumHeight = height;
}

With these variables filled, you can specify the 4 regions of your colors:

Now, when you declare your vertices and their colors in the SetUpVertices method, you are going to assign the desired colors to the correct height regions as follows:

vertices[x+y*WIDTH].Position = new Vector3(x, y, heightData[x,y]);
if (heightData[x,y] < MinimumHeight + (MaximumHeight - MinimumHeight)/4)
{
    vertices[x+y*WIDTH].Color = Color.Blue.ToArgb();
}
else if (heightData[x,y] < MinimumHeight + (MaximumHeight - MinimumHeight)*2/4)
{
    vertices[x+y*WIDTH].Color = Color.Green.ToArgb();
}
else if (heightData[x,y] < MinimumHeight + (MaximumHeight - MinimumHeight)*3/4)
{
    vertices[x+y*WIDTH].Color = Color.Brown.ToArgb();
}
else
{
    vertices[x+y*WIDTH].Color = Color.White.ToArgb();
}
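The four height regions reduce to a small pure function; here is a Python sketch of the same quartile thresholds (my own illustration, using band names instead of XNA Color values):

```python
def band_for_height(h, min_h, max_h):
    # Quartile thresholds relative to the detected min/max, as in the
    # if/else chain above: lakes, trees, rock, snow.
    span = max_h - min_h
    if h < min_h + span / 4:
        return "blue"
    elif h < min_h + span * 2 / 4:
        return "green"
    elif h < min_h + span * 3 / 4:
        return "brown"
    else:
        return "white"
```

For the earlier example of an image with heights between 50 and 200, the band boundaries land at 87.5, 125 and 162.5, so every image gets all four colors regardless of its absolute height range.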

When you run this code, you will indeed see a nicely colored network of lines. When we want to see the whole colored terrain, we just have to remove this line:

device.RenderState.FillMode = FillMode.WireFrame;

When you execute this, take a few moments to rotate the terrain a couple of times. On some computers, you will see that sometimes the middle peaks get overdrawn by the 'invisible' lake behind them. This is because we have not yet defined a 'Z-buffer'! This Z-buffer is nothing more than an array in which your video card keeps track of the depth of every pixel that should be drawn on your screen (so in our case, a 500x500 matrix!). Every time your card receives a triangle to draw, it checks whether the triangle's pixels are closer to the screen than the pixels already present in the Z-buffer. If they are closer, the Z-buffer's contents are updated with these pixels for that region. Of course, this whole process is fully automated. All we have to do is initialize our Z-buffer with the largest possible distance, so in fact we first have to fill our buffer with ones. To do this automatically on every update of our screen, change this line in the Draw method:

device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

(The | is a bitwise OR operator; in this case it means both the Target (the colors) and the DepthBuffer have to be cleared.) Now everyone should see the terrain rotating as expected.
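The depth test described above is simple enough to sketch in a few lines. This is a toy, one-fragment-at-a-time Python illustration of the principle (my own, not how a GPU actually batches the work): the buffer starts at the farthest value 1.0, and a fragment only replaces what is stored when it is closer.

```python
def draw_fragment(depth_buffer, color_buffer, x, y, depth, color):
    # Keep the fragment only if it is closer than what is already stored there.
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        color_buffer[y][x] = color

# Clearing the depth buffer to 1.0 mirrors the third argument of device.Clear.
depth = [[1.0] * 4 for _ in range(4)]
colors = [["black"] * 4 for _ in range(4)]

draw_fragment(depth, colors, 1, 1, 0.8, "lake")  # far lake drawn first
draw_fragment(depth, colors, 1, 1, 0.3, "peak")  # nearer peak overwrites it
draw_fragment(depth, colors, 1, 1, 0.9, "lake")  # farther lake is rejected
print(colors[1][1])  # peak
```

Note how the result no longer depends on draw order, which is exactly why the peaks stop disappearing behind the lake.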

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Adding_colors.php>

Even when using colors and a Z-buffer, your terrain seems to miss some depth detail when you turn on the Solid FillMode. By adding some lights, it will look much better. In this chapter we will look at the impact of a light on 2 simple triangles, so we can get a better understanding of how lights work in XNA. We will be using the code from the 'World space' chapter, so reload that code now. In this chapter we will be using a directional light. Imagine this as sunlight: the light travels in one particular direction. To calculate the effect of light hitting a triangle, XNA needs another input: the 'normal' in every vertex. Consider the next figure:

If you have a light source a), and you shine it on the 3 surfaces shown, how is XNA supposed to know that surface 1 should be lit more intensely than surface 3? If you look at the thin red lines in figure b), you'll notice that their length is a nice indication of how much light you would want to be reflected (and thus seen) on every surface. So how can we calculate the length of these lines? Actually, XNA does the job for us. All we have to do is give it the blue arrows perpendicular (at an angle of 90 degrees, the thin blue lines) to every surface, and XNA does the rest (a simple cosine projection) for us! Let's write some code to add these normals (the perpendicular blue lines) to our vertex data. The VertexPositionColor struct will no longer do, and unfortunately XNA does not offer a structure that can contain a position, a color and a normal. But that's no problem, we can easily create one of our own. Put this code at the top of our class, immediately above our variable declarations:

public struct VertexPositionNormalColored
{
    public Vector3 Position;
    public Color Color;
    public Vector3 Normal;

    public static int SizeInBytes = 7 * 4;
    public static VertexElement[] VertexElements = new VertexElement[]
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color, VertexElementMethod.Default, VertexElementUsage.Color, 0),
        new VertexElement(0, sizeof(float) * 4, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
    };
}

This might look complicated, but I'm sure you understand the first 3 lines: the struct can hold a position, a color and a normal. The bottom of the struct is a bit more complex, and we'll discuss it in detail in Series 3. For now: in our Draw method, when we set the source of our vertex stream, we need to specify how many bytes are occupied by 1 vertex, which is the value we store in the SizeInBytes member. The last member of the struct, VertexElements, contains a description of which kind of data can be found at which offset; this is needed when we create the VertexDeclaration. This allows us to change our vertex variable declaration:

VertexPositionNormalColored[] vertices;
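The SizeInBytes value and the offsets in the VertexElement array follow directly from the field sizes: 3 floats for Position (12 bytes, offset 0), a 4-byte packed color (offset 12), and 3 floats for Normal (offset 16), 28 bytes in total. As a cross-check of that arithmetic (my own aside, using Python's struct module to model the byte layout):

```python
import struct

# Field layouts mirroring the VertexPositionNormalColored struct above.
POSITION = "3f"  # Vector3 Position: three 32-bit floats
COLOR = "4B"     # packed color: four bytes
NORMAL = "3f"    # Vector3 Normal: three 32-bit floats

offset_color = struct.calcsize("<" + POSITION)                    # 12, i.e. sizeof(float) * 3
offset_normal = struct.calcsize("<" + POSITION + COLOR)           # 16, i.e. sizeof(float) * 4
size_in_bytes = struct.calcsize("<" + POSITION + COLOR + NORMAL)  # 28, i.e. 7 * 4
print(offset_color, offset_normal, size_in_bytes)  # 12 16 28
```

Those three numbers are exactly the `sizeof(float) * 3`, `sizeof(float) * 4` and `7 * 4` constants that appear in the C# struct.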

Now we could start defining triangles with normals. But first, let's have a look at the next picture, where the arrows at the top represent the direction of the light and the color bar below the drawing represents the color of every pixel along our surface:

If we simply define the perpendicular vectors, it is easy to see there will be an 'edge' in the lighting (see the bar directly above a)). This is because the right surface receives (and thus also reflects) 'more' of the light than the left surface. So it will be easy to see that the surface is made of separate triangles. However, if we place in the shared top vertex a 'normal' as shown in figure b), XNA automatically interpolates the lighting in every point of our surface! This gives a much smoother effect, as you can see in the bar above b). This vector is of course half of the sum of the 2 top vectors of a). To demonstrate this, we will first reset the camera position:

Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(0, -40, 100), new Vector3(0, 50, 0), new Vector3(0, 1, 0));
Matrix projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 1.0f, 200.0f);

Next, we will update our SetUpVertices method with 6 vertices that define the 2 triangles of the example above:

private void SetUpVertices()
{
    vertices = new VertexPositionNormalColored[6];

    vertices[0].Position = new Vector3(0f, 0f, 50f);
    vertices[0].Color = Color.Blue;
    vertices[0].Normal = new Vector3(1, 0, 1);
    vertices[1].Position = new Vector3(50f, 0f, 0f);
    vertices[1].Color = Color.Blue;
    vertices[1].Normal = new Vector3(1, 0, 1);
    vertices[2].Position = new Vector3(0f, 50f, 50f);
    vertices[2].Color = Color.Blue;
    vertices[2].Normal = new Vector3(1, 0, 1);

    vertices[3].Position = new Vector3(-50f, 0f, 0f);
    vertices[3].Color = Color.Blue;
    vertices[3].Normal = new Vector3(-1, 0, 1);
    vertices[4].Position = new Vector3(0f, 0f, 50f);
    vertices[4].Color = Color.Blue;
    vertices[4].Normal = new Vector3(-1, 0, 1);
    vertices[5].Position = new Vector3(0f, 50f, 50f);
    vertices[5].Color = Color.Blue;
    vertices[5].Normal = new Vector3(-1, 0, 1);
}

This defines the 2 surfaces of the picture above. By adding a Z coordinate (other than 0), the triangles are now 3D. You can see that I've defined the normal vectors perpendicular to the triangles, to reflect example a) of the image above. All that's left to do is change the section between the Begin and End of each pass in our Draw method:

device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalColored.VertexElements);
device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2);

This draws the 2 triangles defined in our vertices array. When you run this code, you should see an arrow (our 2 triangles), but you won't see any depth because we haven't yet defined the light! We can define it by adding this code immediately before the call to effect.Begin():

effect.CurrentTechnique = effect.Techniques["Colored"];
effect.Parameters["xEnableLighting"].SetValue(true);
effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, 0, -1.0f));

This instructs our technique to enable lighting calculations (the technique now needs the normals), and we set the direction of our light. You might also want to change the background color to black, so you get a better view. Now run this code and you'll see what I mean by 'edged lighting': the light shines brightly on the left panel and the right panel is darker. You can clearly see the difference between the two triangles! This is what was shown in the left part of the example image above. Now it's time to combine the normals on the edge shared by the 2 triangles: averaging (-1,0,1) and (1,0,1) gives ((-1+1)/2, 0, (1+1)/2) = (0,0,1):

vertices[0].Position = new Vector3(0f, 0f, 50f);
vertices[0].Color = Color.Blue;
vertices[0].Normal = new Vector3(0, 0, 1);
vertices[1].Position = new Vector3(50f, 0f, 0f);
vertices[1].Color = Color.Blue;
vertices[1].Normal = new Vector3(1, 0, 1);
vertices[2].Position = new Vector3(0f, 50f, 50f);
vertices[2].Color = Color.Blue;
vertices[2].Normal = new Vector3(0, 0, 1);
vertices[3].Position = new Vector3(-50f, 0f, 0f);
vertices[3].Color = Color.Blue;
vertices[3].Normal = new Vector3(-1, 0, 1);
vertices[4].Position = new Vector3(0f, 0f, 50f);
vertices[4].Color = Color.Blue;
vertices[4].Normal = new Vector3(0, 0, 1);
vertices[5].Position = new Vector3(0f, 50f, 50f);
vertices[5].Color = Color.Blue;
vertices[5].Normal = new Vector3(0, 0, 1);

When you run this code, you'll see that the reflection is nicely distributed from the dark right tip to the brighter left panel. It's not difficult to imagine that this approach will give a much nicer result on a large number of triangles, such as our terrain.
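Both steps of this chapter, averaging the shared-edge normal and the 'cosine projection' that turns a normal into a brightness, can be checked numerically. A small Python sketch (my own illustration of the underlying math, not XNA code):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse_intensity(normal, to_light):
    # The "simple cosine projection": the clamped dot product of the
    # unit normal with the unit direction toward the light.
    return max(0.0, sum(a * b for a, b in zip(normalize(normal), normalize(to_light))))

# The shared-edge normal: average (-1, 0, 1) and (1, 0, 1), then normalize.
shared = normalize(tuple((a + b) / 2 for a, b in zip((-1, 0, 1), (1, 0, 1))))
print(shared)  # (0.0, 0.0, 1.0)

# With a light direction of (0.5, 0, -1.0), the direction *toward*
# the light is the opposite vector, (-0.5, 0, 1.0).
print(round(diffuse_intensity(shared, (-0.5, 0, 1.0)), 3))
```

A surface facing away from the light gets intensity 0 thanks to the clamp, which is why the dark side of the arrow stays dark rather than going negative.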

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Lighting_basics.php>


We'll be adding normal data to all vertices of our terrain, so our graphics card can perform some lighting calculations on it. In this chapter we'll see an approach that is easy to understand and easy to implement. Although not 100% mathematically correct, it gives satisfactory results and requires a lot fewer computations. The mathematically correct approach is presented in the next chapter.

I have made the image below to explain the principle. The blue lines are 2 lines of our terrain grid. What we want to find is the direction of the red line, which is the shared normal in the common point. The black line at the bottom represents the unit distance between the points. The idea: the smaller the difference between the Z coordinates of a point's two neighbors, the more the normal will point upwards. This difference in Z coordinates is given by the vertical green line, and the length of this green line tells us how much the normal will point to the side. To make it mathematically more correct, we must divide the length of the green bar by 2 (because the difference in X coordinates of the 2 neighbors is 2 units). So: the bigger the difference, the bigger the green bar, and the more the normal points to the side. I have made 3 examples of this, so you can verify for yourself how it works:

The only remaining problem is that each vertex of our terrain not only has neighbors on the X axis, but also on the Y axis. This is very easy to solve: we calculate the normal for the X direction as well as the normal for the Y direction, and we simply add them. In the end, we need to normalize this sum, so the length of our normal vector becomes 1. That's quite an explanation, but as promised, it's easy to code. First we have to reload our code from the 'Adding colors' chapter, after which we need to add the struct that allows normal data to be added to our vertices:

public struct VertexPositionNormalColored
{
    public Vector3 Position;
    public Color Color;
    public Vector3 Normal;

    public static int SizeInBytes = 7 * 4;
    public static VertexElement[] VertexElements = new VertexElement[]
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color, VertexElementMethod.Default, VertexElementUsage.Color, 0),
        new VertexElement(0, sizeof(float) * 4, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
    };
}

Don't forget to update your vertices declaration:

VertexPositionNormalColored[] vertices;

Now we're ready to add the normals. Go to the SetUpVertices method, and add this code before we put the vertices in the VertexBuffer:

for (int x = 1; x < WIDTH - 1; x++)
{
    for (int y = 1; y < HEIGHT - 1; y++)
    {
        Vector3 normX = new Vector3((vertices[x - 1 + y * WIDTH].Position.Z - vertices[x + 1 + y * WIDTH].Position.Z) / 2, 0, 1);
        Vector3 normY = new Vector3(0, (vertices[x + (y - 1) * WIDTH].Position.Z - vertices[x + (y + 1) * WIDTH].Position.Z) / 2, 1);
        vertices[x + y * WIDTH].Normal = normX + normY;
        vertices[x + y * WIDTH].Normal.Normalize();
    }
}

For each vertex that is not on the border of our terrain, we first calculate the normal for the X direction, then the normal for the Y direction, after which we add them together and normalize the result. At this moment, the normals of the vertices on the border haven't been defined yet. Let's make them point upwards, by initializing this normal vector for every vertex (in the first for loop of the SetUpVertices method):

vertices[x + y * WIDTH].Position = new Vector3(x, y, heightData[x, y]);
vertices[x + y * WIDTH].Normal = new Vector3(0, 0, 1);
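The central-difference scheme above can be sketched on a tiny height grid in Python (plain lists, no XNA types; my own illustration with a hypothetical helper name), to show the normal tilting away from the uphill side:

```python
import math

def terrain_normal(height, x, y):
    # Central differences along X and Y, Z component fixed at 1, then normalized,
    # mirroring the normX/normY calculation in the C# loop above.
    nx = (height[x - 1][y] - height[x + 1][y]) / 2
    ny = (height[x][y - 1] - height[x][y + 1]) / 2
    n = (nx, ny, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A slope rising along +X: the normal leans back toward -X.
slope = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
n = terrain_normal(slope, 1, 1)
print(round(n[0], 3), round(n[1], 3), round(n[2], 3))  # -0.707 0.0 0.707
```

On flat ground both differences are zero and the normal is exactly (0, 0, 1), which is also why pointing the untouched border normals straight up is a reasonable default.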

That's it! Just a few more adaptations. Because we're using a VertexBuffer, we need to update it so it can store the larger vertices (larger, because they now contain normal data):

vb = new VertexBuffer(device, VertexPositionNormalColored.SizeInBytes * WIDTH * HEIGHT, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
vb.SetData(vertices);

Also, the Draw method needs a final update. First we need to define the light (put these before your call to effect.Begin):

effect.Parameters["xEnableLighting"].SetValue(true);
effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, 0.5f, -1));

And we need to indicate we're using VertexPositionNormalColored vertices:

device.Vertices[0].SetSource(vb, 0, VertexPositionNormalColored.SizeInBytes);
device.Indices = ib;
device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalColored.VertexElements);
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, (WIDTH - 1) * (HEIGHT - 1) * 2);

That's it! When you run the code, you should see the image below.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Terrain_lighting.php>

After all tutorials are ported to XNA, I will add a page that contains the 100% mathematically correct algorithm to calculate normals, but I think my algorithm works OK, and it sure is a lot faster to calculate.

The final code of our project:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAtutorial
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        public struct VertexPositionNormalColored
        {
            public Vector3 Position;
            public Color Color;
            public Vector3 Normal;

            public static int SizeInBytes = 7 * 4;
            public static VertexElement[] VertexElements = new VertexElement[]
            {
                new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
                new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color, VertexElementMethod.Default, VertexElementUsage.Color, 0),
                new VertexElement(0, sizeof(float) * 4, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
            };
        }

        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;
        Effect effect;
        VertexPositionNormalColored[] vertices;
        private float angle = 0f;
        private VertexBuffer vb;
        private IndexBuffer ib;
        private int WIDTH = 64;
        private int HEIGHT = 64;
        private int[,] heightData;
        private int MinimumHeight = 255;
        private int MaximumHeight = 0;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
        }

        protected override void Initialize()
        {
            base.Initialize();
            LoadHeightData();
            SetUpXNADevice();
            SetUpVertices();
            SetUpIndices();
            SetUpCamera();
        }

        private void SetUpXNADevice()
        {
            device = graphics.GraphicsDevice;
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 1";

            CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows);
            effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);
        }

        private void SetUpVertices()
        {
            vertices = new VertexPositionNormalColored[WIDTH * HEIGHT];
            for (int x = 0; x < WIDTH; x++)
            {
                for (int y = 0; y < HEIGHT; y++)
                {
                    vertices[x + y * WIDTH].Position = new Vector3(x, y, heightData[x, y]);
                    vertices[x + y * WIDTH].Normal = new Vector3(0, 0, 1);

                    if (heightData[x, y] < MinimumHeight + (MaximumHeight - MinimumHeight) / 4)
                    {
                        vertices[x + y * WIDTH].Color = Color.Blue;
                    }
                    else if (heightData[x, y] < MinimumHeight + (MaximumHeight - MinimumHeight) * 2 / 4)
                    {
                        vertices[x + y * WIDTH].Color = Color.Green;
                    }
                    else if (heightData[x, y] < MinimumHeight + (MaximumHeight - MinimumHeight) * 3 / 4)
                    {
                        vertices[x + y * WIDTH].Color = Color.Brown;
                    }
                    else
                    {
                        vertices[x + y * WIDTH].Color = Color.White;
                    }
                }
            }

            for (int x = 1; x < WIDTH - 1; x++)
            {
                for (int y = 1; y < HEIGHT - 1; y++)
                {
                    Vector3 normX = new Vector3((vertices[x - 1 + y * WIDTH].Position.Z - vertices[x + 1 + y * WIDTH].Position.Z) / 2, 0, 1);
                    Vector3 normY = new Vector3(0, (vertices[x + (y - 1) * WIDTH].Position.Z - vertices[x + (y + 1) * WIDTH].Position.Z) / 2, 1);
                    vertices[x + y * WIDTH].Normal = normX + normY;
                    vertices[x + y * WIDTH].Normal.Normalize();
                }
            }

            vb = new VertexBuffer(device, VertexPositionNormalColored.SizeInBytes * WIDTH * HEIGHT, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
            vb.SetData(vertices);
        }

        private void SetUpIndices()
        {
            int[] indices = new int[(WIDTH - 1) * (HEIGHT - 1) * 6];
            for (int x = 0; x < WIDTH - 1; x++)
            {
                for (int y = 0; y < HEIGHT - 1; y++)
                {
                    indices[(x + y * (WIDTH - 1)) * 6] = (x + 1) + (y + 1) * WIDTH;
                    indices[(x + y * (WIDTH - 1)) * 6 + 1] = (x + 1) + y * WIDTH;
                    indices[(x + y * (WIDTH - 1)) * 6 + 2] = x + y * WIDTH;
                    indices[(x + y * (WIDTH - 1)) * 6 + 3] = (x + 1) + (y + 1) * WIDTH;
                    indices[(x + y * (WIDTH - 1)) * 6 + 4] = x + y * WIDTH;
                    indices[(x + y * (WIDTH - 1)) * 6 + 5] = x + (y + 1) * WIDTH;
                }
            }
            ib = new IndexBuffer(device, typeof(int), (WIDTH - 1) * (HEIGHT - 1) * 6, ResourceUsage.WriteOnly, ResourceManagementMode.Automatic);
            ib.SetData(indices);
        }

        private void SetUpCamera()
        {
            Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(80, 0, 160), new Vector3(-20, 0, 0), new Vector3(0, 0, 1));
            Matrix projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 1.0f, 250.0f);

            effect.Parameters["xView"].SetValue(viewMatrix);
            effect.Parameters["xProjection"].SetValue(projectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);
        }

        private void LoadHeightData()
        {
            int offset;
            FileStream fs = new FileStream("../../../heightmap.bmp", FileMode.Open, FileAccess.Read);
            BinaryReader r = new BinaryReader(fs);
            r.BaseStream.Seek(10, SeekOrigin.Current);
            offset = (int)r.ReadUInt32();
            r.BaseStream.Seek(4, SeekOrigin.Current);
            WIDTH = (int)r.ReadUInt32();
            HEIGHT = (int)r.ReadUInt32();
            r.BaseStream.Seek(offset - 26, SeekOrigin.Current);

            heightData = new int[WIDTH, HEIGHT];
            for (int i = 0; i < HEIGHT; i++)
            {
                for (int y = 0; y < WIDTH; y++)
                {
                    int height = (int)(r.ReadByte());
                    height += (int)(r.ReadByte());
                    height += (int)(r.ReadByte());
                    height /= 8;
                    heightData[WIDTH - 1 - y, HEIGHT - 1 - i] = height;
                    if (height < MinimumHeight) { MinimumHeight = height; }
                    if (height > MaximumHeight) { MaximumHeight = height; }
                }
            }
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent) { }
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true) { content.Unload(); }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            base.Update(gameTime);
            ProcessKeyboard();

        }

        private void ProcessKeyboard()
        {
            KeyboardState keys = Keyboard.GetState();
            if (keys.IsKeyDown(Keys.Delete)) { angle += 0.05f; }
            if (keys.IsKeyDown(Keys.PageDown)) { angle -= 0.05f; }
        }

        protected override void Draw(GameTime gameTime)
        {
            device.RenderState.FillMode = FillMode.Solid;
            device.RenderState.CullMode = CullMode.None;
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

            effect.CurrentTechnique = effect.Techniques["Colored"];
            Matrix worldMatrix = Matrix.CreateTranslation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.CreateRotationZ(angle);
            effect.Parameters["xWorld"].SetValue(worldMatrix);
            effect.Parameters["xEnableLighting"].SetValue(true);
            effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, 0.5f, -1));

            effect.Begin();
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Begin();

                device.Vertices[0].SetSource(vb, 0, VertexPositionNormalColored.SizeInBytes);
                device.Indices = ib;
                device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalColored.VertexElements);
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, (WIDTH - 1) * (HEIGHT - 1) * 2);

                pass.End();
            }
            effect.End();

            base.Draw(gameTime);
        }
    }
}

Welcome to this second series of XNA for C# Tutorials! In the first series, you learned some basic features of XNA. That list of features will be further expanded in this series, so after completing it you'll have created your own 3D game! In this second series, you'll learn how to create a complete flight simulator. This will include flying your aircraft in a true 3D city and firing bullets at objects! Here's a sample screenshot of what you'll create:

Again, the main goal of this series is to cover XNA features. This means the physical flight model will not include gravity; it just lets you manoeuvre your aircraft.

This is a list of the features you'll learn in this second series of XNA Tutorials:

Adding textures to your triangles
Dynamically generating the 3D city environment
Adding the skybox to get rid of the black background
Basic, but accurate flight modelling
Camera movement
Point sprites, basic billboarding
Alpha blending

Below you can find a list of screenshots:

Hi there! Glad you made it to this second series of XNA Tutorials. We're going to cover some new XNA features, put them together into one project, and end up with a real flight simulator! Again, the main goal is to show you some principles of XNA, so don't expect completely realistic flight physics, such as gravity, the Coriolis effect and others. That would add too much maths and draw our attention away from the XNA part. Once you have finished this series, you can always expand the maths sections as you like. The only purpose of this first chapter is to set up our starting code. The code is based on the World space chapter of the first series of C# tutorials and contains nothing new. This is what the starting code does:

Linking to the device
Loading the effect
Positioning the camera and passing this to the effect
Clearing the window and Z buffer in the Draw method

So open a new Windows Game project as described in chapter one of the first series; I named my project XNAseries2. You're free to give your project a different name, but then you must replace the namespace in my code with your project name. This line is the first line under the using-block in my code. If you haven't done so already, you can download my standard effect file here, which you need to put in the same folder as your code files (the ones ending in a .cs extension). This effect file contains all the techniques we're going to need in this second series. Remember, you'll learn everything you need about coding effect files in the third series. Now simply copy-paste the code below into the Game1.cs file. Compiling and running the code should give you an empty window, cleared to a color of your choice by XNA:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Starting_point.php>

We're ready to start our second project.

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;

        Effect effect;
        Matrix viewMatrix;
        Matrix projectionMatrix;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        private void SetUpXNADevice()
        {
            device = graphics.GraphicsDevice;
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 2";
        }

        private void LoadEffect()
        {
            CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows);
            effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);

            viewMatrix = Matrix.CreateLookAt(new Vector3(0, 0, 30), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f);
            effect.Parameters["xView"].SetValue(viewMatrix);
            effect.Parameters["xProjection"].SetValue(projectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent)
            {
                SetUpXNADevice();
                LoadEffect();
            }
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true)
            {
                content.Unload();
            }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);

            base.Draw(gameTime);
        }
    }

}

Up till now, the only way you've seen to add some color to our scene is to declare separate vertices for every different color. Of course, this is not the way today's great games are made. XNA supports a very efficient way of adding images to the scene: you can simply put an image on a triangle. Such images are called textures. As a first example, we're going to draw one simple triangle and paste a texture over it. You can find a sample texture here (link) (you can download the image as a file by right-clicking on it and selecting Save Image). Store the image in the same folder as your code files. Once again we're going to define 3 vertices, which we'll store in an array. This time, the vertex format will be VertexPositionTexture, so declare this variable at the top of your code:

VertexPositionTexture[] vertices;

Next, we'll define the 3 vertices of our triangle in a new SetUpVertices method:

private void SetUpVertices()
{
    vertices = new VertexPositionTexture[3];

    vertices[0].Position = new Vector3(-10f, 10f, 0f);
    vertices[0].TextureCoordinate.X = 0;
    vertices[0].TextureCoordinate.Y = 0;

    vertices[1].Position = new Vector3(10f, -10f, 0f);
    vertices[1].TextureCoordinate.X = 1;
    vertices[1].TextureCoordinate.Y = 1;

    vertices[2].Position = new Vector3(-10f, -10f, 0f);
    vertices[2].TextureCoordinate.X = 0;
    vertices[2].TextureCoordinate.Y = 1;
}

As you see, for every vertex we first define its position. Notice again that we have defined our vertices in a clockwise way, so XNA will not cull them. The next 2 settings define which point in our texture image we want to correspond with the vertex. These u and v coordinates are simply the 2 coordinates of the texture, with the (0,0) point being the top left point of the texture image, and the (1,1) point the bottom-right. Don't forget to call the SetUpVertices method from your Initialize method:

SetUpVertices();
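As a side note, those u and v values are simply fractions of the texture size. A minimal standalone sketch (not part of the game code, using a hypothetical 256x256 texture) of how a (u,v) pair corresponds to a pixel position in the image:

```csharp
using System;

class UvDemo
{
    static void Main()
    {
        // Hypothetical texture size, just for illustration.
        int textureWidth = 256;
        int textureHeight = 256;

        // (0,0) is the top-left corner of the image, (1,1) the bottom-right.
        float u = 0.5f, v = 1.0f;
        int pixelX = (int)(u * (textureWidth - 1));   // halfway across
        int pixelY = (int)(v * (textureHeight - 1));  // bottom row

        Console.WriteLine("({0}, {1})", pixelX, pixelY);  // (127, 255)
    }
}
```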

Now that we have our vertex data, it's time to load the image into our XNA project. Find the Solution Explorer in the top-right corner of your window, and right-click on your project's name (the line in bold). Next, select Add -> Existing Item, as shown in the image below:

At the bottom of the new window, click on the 'Files of type' options, and select the second entry: Content Pipeline Files. Find the .bmp file you downloaded at the beginning of this chapter, select it and click Add. You can see the image file has been added to your XNA project! (You can also simply drag the image file from Windows Explorer into your Solution Explorer.) When you click on the file in your Solution Explorer, you can see this asset has the name 'riemerstexture'. You can change this to your liking, but leave it as it is for now. In our code, we are going to add a new variable to hold this texture image. Add this line to the top of your code:

Texture2D texture;

Now find the LoadGraphicsContent method in your code, which is created automatically for you when you open a new XNA project. Add this line under the call to LoadEffect:

texture = content.Load<Texture2D> ("riemerstexture");

This line binds the asset we just loaded in our project to the texture variable! OK, we have our vertices set up, and our texture image loaded into a variable. Let's draw the triangle! Go to our Draw method, and add this code after our call to the Clear method:

Matrix worldMatrix = Matrix.Identity;
effect.CurrentTechnique = effect.Techniques["Textured"];
effect.Parameters["xWorld"].SetValue(worldMatrix);
effect.Parameters["xTexture"].SetValue(texture);

effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    device.VertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements);
    device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
    pass.End();
}
effect.End();

First, we need to instruct our graphics card to sample the color of every pixel from the texture image. This is exactly what the Textured technique of my effect file does, so we set it as active technique. We set the Identity matrix as world matrix. Next, we of course need to pass our texture to the technique! Then we actually draw our triangle from our vertices array, as seen in the first series.

Running this should already give you a textured triangle, displaying half of the texture image! To display the whole image, we simply have to expand our SetUpVertices method by adding the second triangle:

private void SetUpVertices()
{
    vertices = new VertexPositionTexture[6];

    vertices[0].Position = new Vector3(-10f, 10f, 0f);
    vertices[0].TextureCoordinate.X = 0;
    vertices[0].TextureCoordinate.Y = 0;
    vertices[1].Position = new Vector3(10f, -10f, 0f);
    vertices[1].TextureCoordinate.X = 1;
    vertices[1].TextureCoordinate.Y = 1;
    vertices[2].Position = new Vector3(-10f, -10f, 0f);
    vertices[2].TextureCoordinate.X = 0;
    vertices[2].TextureCoordinate.Y = 1;

    vertices[3].Position = new Vector3(10.1f, -9.9f, 0f);
    vertices[3].TextureCoordinate.X = 1;
    vertices[3].TextureCoordinate.Y = 1;
    vertices[4].Position = new Vector3(-9.9f, 10.1f, 0f);
    vertices[4].TextureCoordinate.X = 0;
    vertices[4].TextureCoordinate.Y = 0;
    vertices[5].Position = new Vector3(10.1f, 10.1f, 0f);
    vertices[5].TextureCoordinate.X = 1;
    vertices[5].TextureCoordinate.Y = 0;
}

We simply added another set of 3 vertices for a second triangle, to complete the texture image. Don't forget to adjust your Draw method so the 2 triangles will be drawn:

device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2);

Now run this code, and you should see the whole texture image, displayed by 2 triangles!

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Textures.php>


You'll notice the small gap between the two triangles... This is of course because we defined the positions of the vertices that way, so you can actually see the image is made up of two separate triangles. Try to remove the gap between the triangles yourself. Also try playing around with the U and V coordinates; it's worth it! You can choose any value between 0 and 1. The code for displaying this texture:

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2

{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;

        Effect effect;
        Matrix viewMatrix;
        Matrix projectionMatrix;

        VertexPositionTexture[] vertices;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
        }

        protected override void Initialize()
        {
            SetUpVertices();
            base.Initialize();
        }

        private void SetUpXNADevice()
        {
            device = graphics.GraphicsDevice;
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 2";
        }

        private void LoadEffect()
        {
            CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows);
            effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);

            viewMatrix = Matrix.CreateLookAt(new Vector3(0, 0, 30), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f);
            effect.Parameters["xView"].SetValue(viewMatrix);
            effect.Parameters["xProjection"].SetValue(projectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);
        }

        private void SetUpVertices()
        {
            vertices = new VertexPositionTexture[6];

            vertices[0].Position = new Vector3(-10f, 10f, 0f);
            vertices[0].TextureCoordinate.X = 0;
            vertices[0].TextureCoordinate.Y = 0;
            vertices[1].Position = new Vector3(10f, -10f, 0f);
            vertices[1].TextureCoordinate.X = 1;
            vertices[1].TextureCoordinate.Y = 1;
            vertices[2].Position = new Vector3(-10f, -10f, 0f);
            vertices[2].TextureCoordinate.X = 0;
            vertices[2].TextureCoordinate.Y = 1;

            vertices[3].Position = new Vector3(10.1f, -9.9f, 0f);
            vertices[3].TextureCoordinate.X = 1;
            vertices[3].TextureCoordinate.Y = 1;
            vertices[4].Position = new Vector3(-9.9f, 10.1f, 0f);
            vertices[4].TextureCoordinate.X = 0;
            vertices[4].TextureCoordinate.Y = 0;
            vertices[5].Position = new Vector3(10.1f, 10.1f, 0f);
            vertices[5].TextureCoordinate.X = 1;
            vertices[5].TextureCoordinate.Y = 0;
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent)
            {
                SetUpXNADevice();
                LoadEffect();
                texture = content.Load<Texture2D>("riemerstexture");
            }
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true)
            {
                content.Unload();
            }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);

            Matrix worldMatrix = Matrix.Identity;
            effect.CurrentTechnique = effect.Techniques["Textured"];
            effect.Parameters["xWorld"].SetValue(worldMatrix);
            effect.Parameters["xTexture"].SetValue(texture);

            effect.Begin();
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Begin();
                device.VertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements);
                device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 2);
                pass.End();
            }
            effect.End();

            base.Draw(gameTime);
        }
    }
}

Now that we've seen how to import simple images into our XNA project and have XNA display them on triangles, it's not that difficult to draw a large number of images. What matters more is finding a way to have the computer define all of the vertices for us. As a small example, let's create a raster of 3x3 images, with the center image missing. This means 8 images, thus 16 triangles and 48 vertices. Instead of defining all these vertices manually, let's create a variable int_Floorplan at the top of your code:

int[,] int_Floorplan;
int WIDTH;
int HEIGHT;
int differentbuildings = 5;

VertexPositionNormalTexture[] verticesarray;
ArrayList verticeslist = new ArrayList();

Before our program knows about the ArrayList class, we need to place this line in the using-block at the very top of our code:

using System.Collections;

We declare 2 more variables, WIDTH and HEIGHT, to store the size of the floor to be created. Of course, these values have to correspond to the size of the int_Floorplan array. We've also indicated how many different kinds of buildings we will have in our game, stored in the differentbuildings variable. Replace the vertices variable from the previous chapter with the verticesarray variable. Use this name so you won't get confused later in this series, when we'll be using more vertex arrays. Also notice that this time we'll be adding normal data to our vertices, so later on we can easily switch on lights (as you've seen in Series 1). You'll find some info on the ArrayList a few paragraphs below. We'll first need a small method that fills the int_Floorplan array with data:

private void LoadFloorplan()
{
    WIDTH = 3;
    HEIGHT = 3;

    int_Floorplan = new int[,]
    {
        {0,0,0},
        {0,1,0},
        {0,0,0},
    };
}

In this data, a 0 means 'draw a floor texture' and a 1 means 'leave that tile open'. Later in this series, a 1 will mean a building and a 0 'not a building'. This method contains all the flexibility our program needs: simply changing a 0 to a 1 will result in an extra building being drawn! Call this method from within the Initialize method, immediately before the call to the SetUpVertices method (before, because SetUpVertices will need the data in the int_Floorplan array):

LoadFloorplan();
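As a quick sanity check of the vertex count mentioned earlier, here is a standalone sketch (not part of the game code): every 0 in the floorplan above becomes one textured quad, i.e. 2 triangles or 6 vertices, so the 3x3 grid with one 1 in the center gives 8 tiles, 16 triangles and 48 vertices:

```csharp
using System;

class FloorplanCount
{
    static void Main()
    {
        int[,] floorplan = { { 0, 0, 0 }, { 0, 1, 0 }, { 0, 0, 0 } };

        // Count the tiles that will actually be drawn (the zeroes).
        int tiles = 0;
        foreach (int cell in floorplan)
            if (cell == 0)
                tiles++;

        int triangles = tiles * 2;   // 2 triangles per tile
        int vertexCount = tiles * 6; // 3 vertices per triangle

        Console.WriteLine("{0} tiles, {1} triangles, {2} vertices", tiles, triangles, vertexCount);
        // 8 tiles, 16 triangles, 48 vertices
    }
}
```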

Now we'll have to update our SetUpVertices method, so it reads the data in the array and automatically creates the corresponding vertices. In the last chapter you learned how to paste images on triangles. This time, we're going to load one texture image file, which is composed of several images next to each other. The leftmost part of the texture will be the floor tile, followed by a wall and a roofing image for each different type of building. You can download my texture file (link) to see what I mean. In short, it is one image that contains multiple images. In the texture, I've included self-drawn images as well as real-life pictures, so later on you'll be able to see the difference. You can delete the 'riemerstexture' asset from your Solution Explorer by right-clicking on it and selecting Delete. Next, add the image you just downloaded to your XNA project, as you learned last chapter. You should see it in your Solution Explorer. Finally, we're going to rename our 'texture' variable to 'scenerytexture', because later in the game we'll be using more than one texture. Change the name of the variable at the top of the code, and make sure you replace the name of the asset in your LoadGraphicsContent method:

scenerytexture = content.Load<Texture2D> ("texturemap");

We can delete the contents of the SetUpVertices method. We'll start the method by defining how many images are contained in the big texture image: 1 floor image, and 2 images for every building type (have another look at the texturemap.jpg file to better understand this):

float imagesintexture = 1 + differentbuildings * 2;
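To see what this number is used for: with 5 building types the texture strip contains 1 + 5*2 = 11 images side by side, so image number i covers the horizontal range [i/11, (i+1)/11] in texture coordinates. A standalone sketch (not part of the game code) that prints these U ranges:

```csharp
using System;

class AtlasRanges
{
    static void Main()
    {
        int differentbuildings = 5;
        float imagesintexture = 1 + differentbuildings * 2;  // 11 images in the strip

        // The floor tile is the leftmost image: U runs from 0 to 1/11.
        Console.WriteLine("floor: 0.000 - {0:F3}", 1f / imagesintexture);

        // For building type b, the wall image sits at index 2b-1 and the roof at 2b.
        for (int b = 1; b <= differentbuildings; b++)
        {
            float wallLeft = (2 * b - 1) / imagesintexture;
            float roofLeft = 2 * b / imagesintexture;
            Console.WriteLine("building {0}: wall starts at U={1:F3}, roof at U={2:F3}", b, wallLeft, roofLeft);
        }
    }
}
```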

Now the method has to scan the data in int_Floorplan and draw a floor image every time a 0 is found:

private void SetUpVertices()
{
    float imagesintexture = 1 + differentbuildings * 2;

    for (int x = 0; x < WIDTH; x++)
    {
        for (int y = 0; y < HEIGHT; y++)
        {
            if (int_Floorplan[x, y] == 0)
            {
                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, 0), new Vector3(0, 0, 1), new Vector2(0, 0)));
                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0), new Vector3(0, 0, 1), new Vector2(0, 1)));
                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 1)));

                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 1)));
                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 0)));
                verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, 0), new Vector3(0, 0, 1), new Vector2(0, 0)));
            }
        }
    }
}

As you can see, we've been using the ArrayList. This is simply a kind of array for which you do not need to specify the length; you can simply add elements to it using the Add method. Very useful in cases like this, because initially we don't know how many surfaces/triangles we will need to draw (it depends on how many zeroes the int_Floorplan array contains). Another related advantage is that we can get the number of elements stored in the list by reading its Count property. This will come in useful when we need to specify the number of triangles to draw. Every time a 0 is encountered, 2 triangles are defined. The normal vectors point upwards towards the sky (0,0,1), and the correct portion of the texture image is pasted over the triangles: the rectangle between [0,0] and [1/imagesintexture,1]. Have another look at the texture file to fully understand this. When this ArrayList has been filled, we need to convert it to a normal array. This can be done very easily with the ToArray method of the ArrayList; we only need to specify the kind of data stored in the list and typecast the result:

verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture));
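The ArrayList pattern used above can be tried out on its own; a minimal sketch (with plain ints instead of vertices, not part of the game code):

```csharp
using System;
using System.Collections;

class ArrayListDemo
{
    static void Main()
    {
        // An ArrayList grows as you Add to it - no length needed up front.
        ArrayList list = new ArrayList();
        for (int i = 0; i < 6; i++)
            list.Add(i * 10);

        Console.WriteLine(list.Count);  // 6

        // Convert to a strongly typed array, as done with the vertices above.
        int[] array = (int[])list.ToArray(typeof(int));
        Console.WriteLine(array[5]);    // 50
    }
}
```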

Now we have our array containing the vertices of our floorplan! With this method finished, we can move on to the Draw method. We'll still be using the Textured technique, but we need to indicate that this time we'll be using VertexPositionNormalTexture vertices instead of VertexPositionTexture vertices. We'll also be drawing some more triangles:

device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements);
device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3);

Notice we'll be drawing 1 triangle for every 3 vertices in our verticesarray. This code should be runnable! You should see a small square with a hole in the middle, just as you defined in the LoadFloorPlan method. It might be a good idea to reposition our camera a bit:

viewMatrix = Matrix.CreateLookAt(new Vector3(3, -2, 5), new Vector3(2, 1, 0), new Vector3(0, 0, 1));

This should give you the following image:

Try playing around with the contents of the int_Floorplan variable. Remember you need to change the WIDTH and HEIGHT variables when you want to add extra rows or columns! This chapter we've seen how to load parts of a texture and how to use the ArrayList, a useful feature of C#. The code at this point:

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;

        Effect effect;
        Matrix viewMatrix;
        Matrix projectionMatrix;
        Texture2D scenerytexture;

        int[,] int_Floorplan;
        int WIDTH;
        int HEIGHT;
        int differentbuildings = 5;

        VertexPositionNormalTexture[] verticesarray;
        ArrayList verticeslist = new ArrayList();

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
        }

        protected override void Initialize()
        {
            LoadFloorplan();
            SetUpVertices();
            base.Initialize();
        }

        private void LoadFloorplan()
        {
            WIDTH = 3;
            HEIGHT = 3;

            int_Floorplan = new int[,]
            {
                {0,0,0},
                {0,1,0},
                {0,0,0},
            };
        }

        private void SetUpXNADevice()
        {
            device = graphics.GraphicsDevice;
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 2";
        }

        private void LoadEffect()
        {
            CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows);
            effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);

            viewMatrix = Matrix.CreateLookAt(new Vector3(3, -2, 5), new Vector3(2, 1, 0), new Vector3(0, 0, 1));
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f);
            effect.Parameters["xView"].SetValue(viewMatrix);
            effect.Parameters["xProjection"].SetValue(projectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);
        }

        private void SetUpVertices()
        {
            float imagesintexture = 1 + differentbuildings * 2;

            for (int x = 0; x < WIDTH; x++)
            {
                for (int y = 0; y < HEIGHT; y++)
                {
                    if (int_Floorplan[x, y] == 0)
                    {
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, 0), new Vector3(0, 0, 1), new Vector2(0, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0), new Vector3(0, 0, 1), new Vector2(0, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 1)));

                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0), new Vector3(0, 0, 1), new Vector2(1f / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, 0), new Vector3(0, 0, 1), new Vector2(0, 0)));
                    }
                }
            }

            verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture));
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent)
            {
                SetUpXNADevice();
                LoadEffect();
                scenerytexture = content.Load<Texture2D>("texturemap");
            }
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true)
            {
                content.Unload();
            }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);

            Matrix worldMatrix = Matrix.Identity;
            effect.CurrentTechnique = effect.Techniques["Textured"];
            effect.Parameters["xWorld"].SetValue(worldMatrix);
            effect.Parameters["xTexture"].SetValue(scenerytexture);

            effect.Begin();
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Begin();
                device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements);
                device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3);
                pass.End();
            }
            effect.End();

            base.Draw(gameTime);
        }
    }
}

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Loading_the_floorplan.php>

To be honest, there is not much new XNA stuff to learn in this chapter: the code of the last chapter will be expanded, so you also draw buildings on your floorplan, and then roofs on top of your buildings. For this reason, I'll just summarize the adjustments and additions. Because our program will support multiple building types (5 in our case), we'll need an array that contains the heights of these different buildings. So define this array at the top of your code:

private int[] buildingheights = new int[] {0,10,1,3,2,5};

Next, we'll process our int_Floorplan array so the numbers in it reflect the building type. A 0 will remain a 0, but a 1 will be replaced by a random number between 1 and 5, the number of different buildings. To do this, add this code to the bottom of the LoadFloorplan method:

Random random = new Random();
for (int x = 0; x < WIDTH; x++)
{
    for (int y = 0; y < HEIGHT; y++)
    {
        if (int_Floorplan[x, y] == 1)
        {
            int_Floorplan[x, y] = random.Next(differentbuildings) + 1;
        }
    }
}
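As a standalone sanity check (plain C#, not part of the game code): random.Next(n) returns a value in 0..n-1, so adding 1 always lands the result in the range 1..5 here:

```csharp
using System;

class RandomRangeDemo
{
    static void Main()
    {
        int differentbuildings = 5;
        Random random = new Random();

        // Next(n) yields 0..n-1, so +1 shifts the range to 1..n.
        for (int i = 0; i < 1000; i++)
        {
            int building = random.Next(differentbuildings) + 1;
            if (building < 1 || building > differentbuildings)
                throw new Exception("out of range");  // never reached
        }
        Console.WriteLine("all values in 1..5");
    }
}
```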

We first initialize a random number generator, from which we obtain random numbers by calling its Next method. Next returns a random non-negative number lower than the argument passed, so every time a 1 is found in the array, it is replaced by an integer between 1 and 5. Now we'll start expanding the SetUpVertices method. We start by letting the code we have also draw the roofs:

for (int x = 0; x < WIDTH; x++)
{
    for (int y = 0; y < HEIGHT; y++)
    {
        int currentbuilding = int_Floorplan[x, y];

        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1)));

        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1)));
        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0)));
        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
    }
}

Some interesting changes: first, the Z coordinate is taken from the buildingheights array we initialized at the beginning of this chapter. Then you see that the X coordinate in the texture (in fact, the U coordinate, since we're talking about image space here) is calculated dynamically. Since we have 5 different buildings, the U-coord of the left side of the floor tile is 0, of the left side of the first roof 2/11, of the second roof 4/11, ..., of the last roof 10/11. In a formula: currentbuilding * 2 / imagesintexture. The U-coord of the right side of the floor is 1/11, of the first roof 3/11, ..., of the last roof 11/11. In a formula: (currentbuilding * 2 + 1) / imagesintexture. In a nutshell: if a 0 is encountered, the floor image is drawn; if a building number is encountered, the corresponding roof image is drawn at the corresponding height, found in the buildingheights array. You can try running this code; each time you run the program you'll see different roof images at different heights, because the building types are chosen at random. The good news: we've already covered all the new stuff of this chapter. The bad news: drawing the walls of the buildings means copying most of this code, only with different coordinates, normals and texture coordinates. To be complete, I'll list the whole method here and quickly discuss it:

private void SetUpVertices()
{
    float imagesintexture = 1 + differentbuildings * 2;

    for (int x = 0; x < WIDTH; x++)
    {
        for (int y = 0; y < HEIGHT; y++)
        {
            int currentbuilding = int_Floorplan[x, y];

            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1)));

            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1)));
            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0)));
            verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0)));

            if (y > 0)
            {
                if (int_Floorplan[x, y - 1] != int_Floorplan[x, y])
                {
                    if (int_Floorplan[x, y - 1] > 0)
                    {
                        currentbuilding = int_Floorplan[x, y - 1];
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1)));

                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
                    }
                    if (int_Floorplan[x, y] > 0)
                    {
                        currentbuilding = int_Floorplan[x, y];
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));

                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
                    }
                }
            }

            if (x > 0)
            {
                if (int_Floorplan[x - 1, y] != int_Floorplan[x, y])
                {
                    if (int_Floorplan[x - 1, y] > 0)
                    {
                        currentbuilding = int_Floorplan[x - 1, y];
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));

                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                    }
                    if (int_Floorplan[x, y] > 0)
                    {
                        currentbuilding = int_Floorplan[x, y];
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1)));
                        verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0,
0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } } } } } verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture)); }

Whew, THAT's what I call a method! Quite impressive, but it contains nothing new to you. First the floors and roofs are drawn. Then we identify the places where 2 buildings of a different kind share a wall. Wherever that is the case, the wall is drawn. That's about all this method does. I have been sent a more compact solution, but it is quite complex, maybe too complex to put in a tutorial. But if you want, I can post it in the Forum.
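The rule at the heart of the method is easy to lose in all that code, so here it is restated in isolation. Note that WallsBetween is a hypothetical helper of my own, not part of the tutorial's code; it only illustrates the decision SetUpVertices makes for the (x, y-1) and (x-1, y) neighbors of every cell:

```csharp
// Sketch only: given the building numbers of two neighboring cells,
// list which of the two buildings needs a wall along their shared edge.
static List<int> WallsBetween(int nearCell, int farCell)
{
    List<int> wallOwners = new List<int>();
    if (nearCell != farCell)           //only differing cells share a visible wall
    {
        if (farCell > 0) wallOwners.Add(farCell);   //wall of the far building, facing the near cell
        if (nearCell > 0) wallOwners.Add(nearCell); //wall of the near building, facing the far cell
    }
    return wallOwners;
}
```

When both cells belong to the same building (or both are street), the list stays empty, which is exactly why the method skips those edges.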

You can also see that the ArrayList's Add method comes in very handy here: there is no easy, compact way to find out in advance exactly how many walls will have to be drawn, a number we would need if we were initializing a plain array. To make the scene a bit more impressive, let's change the dimensions and contents of our int_Floorplan array:

WIDTH = 20;
HEIGHT = 15;
int_Floorplan = new int[,]
{
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,1,1,0,0,0,1,1,0,0,1,0,1},
    {1,0,0,1,1,0,0,0,1,0,0,0,1,0,1},
    {1,0,0,0,1,1,0,1,1,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,1,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,1,1,0,0,0,1,0,0,0,0,0,0,1},
    {1,0,1,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,1,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,1,0,0,0,1,0,0,0,0,1},
    {1,0,1,0,0,0,0,0,0,1,0,0,0,0,1},
    {1,0,1,1,0,0,0,0,1,1,0,0,0,1,1},
    {1,0,0,0,0,0,0,0,1,1,0,0,0,1,1},
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
};

This should already be runnable, but let's first reposition our camera:

viewMatrix = Matrix.CreateLookAt(new Vector3(20, 5, 13), new Vector3(8, 7, 0), new Vector3(0, 0, 1));
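As a reminder, Matrix.CreateLookAt takes the camera position, the target it looks at, and the up vector. The same call, written out with named variables so the three arguments are easier to tell apart:

```csharp
Vector3 cameraPosition = new Vector3(20, 5, 13); //hovering above one corner of the 20x15 grid
Vector3 cameraTarget = new Vector3(8, 7, 0);     //a point at street level, inside the city
Vector3 upVector = new Vector3(0, 0, 1);         //this Series treats the Z axis as 'up'
viewMatrix = Matrix.CreateLookAt(cameraPosition, cameraTarget, upVector);
```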

Now run this code! This is what you should see:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Creating_the_3D_city.php>



The complete code up to this point:

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; ContentManager content; GraphicsDevice device; Effect effect; Matrix viewMatrix; Matrix projectionMatrix; Texture2D scenerytexture;

int[,] int_Floorplan; int WIDTH; int HEIGHT; int differentbuildings = 5; private int[] buildingheights = new int[] { 0, 10, 1, 3, 2, 5 };

VertexPositionNormalTexture[] verticesarray; ArrayList verticeslist = new ArrayList();

public Game1() { graphics = new GraphicsDeviceManager(this); content = new ContentManager(Services); } protected override void Initialize() { LoadFloorplan(); SetUpVertices(); base.Initialize(); } private void LoadFloorplan() { WIDTH = 20; HEIGHT = 15; int_Floorplan = new int[,] { {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,1,1,0,0,0,1,1,0,0,1,0,1}, {1,0,0,1,1,0,0,0,1,0,0,0,1,0,1}, {1,0,0,0,1,1,0,1,1,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,1,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,1,1,0,0,0,1,0,0,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,1,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,1,0,0,0,0,1}, {1,0,1,1,0,0,0,0,1,1,0,0,0,1,1}, {1,0,0,0,0,0,0,0,1,1,0,0,0,1,1}, {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, }; Random random = new Random(); for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { if (int_Floorplan[x, y] == 1) { int_Floorplan[x, y] = random.Next(differentbuildings) + 1; } } }

} private void SetUpXNADevice() { device = graphics.GraphicsDevice; graphics.PreferredBackBufferWidth = 500; graphics.PreferredBackBufferHeight = 500; graphics.IsFullScreen = false; graphics.ApplyChanges(); Window.Title = "Riemer's XNA Tutorials -- Series 2"; } private void LoadEffect() { CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows); effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null);

viewMatrix = Matrix.CreateLookAt(new Vector3(20, 5, 13), new Vector3(8, 7, 0), new Vector3(0, 0, 1)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xWorld"].SetValue(Matrix.Identity); } private void SetUpVertices() { float imagesintexture = 1 + differentbuildings * 2; for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { int currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); if (y > 0) { if (int_Floorplan[x, y - 1] != int_Floorplan[x, y])

{ if (int_Floorplan[x, y - 1] > 0) { currentbuilding = int_Floorplan[x, y - 1]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new 
Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); } } } if (x > 0) { if (int_Floorplan[x - 1, y] != int_Floorplan[x, y]) { if (int_Floorplan[x - 1, y] > 0) { currentbuilding = int_Floorplan[x - 1, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0)));

} if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } } } } } verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture)); } protected override void LoadGraphicsContent(bool loadAllContent) { if (loadAllContent) { SetUpXNADevice(); LoadEffect();

scenerytexture = content.Load<Texture2D> ("texturemap"); } } protected override void UnloadGraphicsContent(bool unloadAllContent) { if (unloadAllContent == true) { content.Unload(); } } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); Matrix worldMatrix = Matrix.Identity; effect.CurrentTechnique = effect.Techniques["Textured"]; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xTexture"].SetValue(scenerytexture);

effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements); device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3); pass.End(); } effect.End(); base.Draw(gameTime); } } }

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Creating_the_3D_city.php>

The city is nice, but how are we supposed to draw an airplane in our scene? Would we have to program every single vertex of this object? Luckily, we can load models that were saved in a file. In short: a Model is a structure that holds all necessary information to draw an object. It contains the position of all vertices, as well as the normal data, color info, and if needed, the texture coordinates. It stores its geometrical data in vertexbuffers and indexbuffers, which we can simply load from the file.

But that is not all. A Model can contain multiple parts, describing one object. Imagine we would load a tank model from a file. This file could contain one part that describes the hull of the tank, another part that describes the turret, another one for the door and two more parts for each of the caterpillars. The Model stores all vertices in one big vertexbuffer, and each part of the model holds the indices that refer to vertices in the big vertexbuffer. The model stores all indexbuffers for all parts of the model after each other in one big indexbuffer. You can see these large buffers at the top right of the image below.

Each part of the model simply contains the data that indicates which part of the large indexbuffer belongs to the part. Each part also contains an effect, and if applicable, the texture image that should be used for that part of the model. This way, you can let your graphics card draw the turret as a shiny reflective metal, and draw the caterpillars using another effect that uses a texture. Enough theory, let's see how this works in practice. We will be loading a spaceship into our scene, which you can download here. Download this file and save it in the same folder as your code files. Now add this file to your Solution Explorer, the same way as you did with images! You should now have an asset called 'xwing'. Next, we'll be assigning it to a variable, so add this to the top of the code:

Model spacemodel;

Next, we'll be adding a small method, FillModelFromFile, that loads a model asset into a Model variable:

private Model FillModelFromFile(string asset) { Model mod = content.Load<Model> (asset); return mod; }

The method takes in the name of the asset. It loads all Model data from the file into the newly created mod object, and returns this filled Model. This is already nice, but unless your .x files have been created especially for your program, the .x file will not yet contain useful effect information. This means we have to copy our own effect into each part of the model! This is simply done by putting this piece of code before the line that returns the mod object:

foreach (ModelMesh modmesh in mod.Meshes) foreach (ModelMeshPart modmeshpart in modmesh.MeshParts) modmeshpart.Effect = effect.Clone(device);

For each part of each mesh in our model, we place a copy of our effect into the part. Now our model has been loaded and initialized completely! Because we'll be loading a few more meshes in this Series, we'll create another small method, LoadModels, where we will load all of our models:

private void LoadModels() { spacemodel = FillModelFromFile("xwing"); }

Now we still need to call this method from our LoadGraphicsContent method:

LoadModels();

We've seen quite a lot of theory above, but luckily this did not result in a lot of difficult code. With our model loaded into our spacemodel variable, we can move on to our Draw method. We'll be drawing the spacemodel after the code that draws the city. The xwing model contains vertices with color information, so we will be using the 'Colored' technique. Only this time, we need to select this technique for each effect in every part of the model:

foreach (ModelMesh modmesh in spacemodel.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"]; worldMatrix = Matrix.CreateScale(0.0005f, 0.0005f, 0.0005f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateRotationZ((float)Math.PI)*Matrix.CreateTranslation(new Vector3(19,5,12)); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); } modmesh.Draw(); }

For each part of the model, we set the correct technique, as well as the world matrix for our mesh. The model needs to be scaled down a lot, and we rotate it so it isn't drawn upside down. Then we translate (= move) it so it is within view of our camera. Finally, all the parts are drawn to the screen. That should be it! When you run this code, you should see your city and the xwing, as shown below:
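One thing worth stressing about that world matrix: the order of the multiplications matters, because XNA applies the leftmost transform to the model first. The same chain, spread over several lines so the order is easy to read:

```csharp
//Read left to right: first shrink the model around its own origin,
//then rotate it upright, then move it to (19, 5, 12) in world space.
//Reversing the order would rotate and scale around the world origin
//AFTER moving the ship, putting it somewhere else entirely.
Matrix worldMatrix = Matrix.CreateScale(0.0005f)
                   * Matrix.CreateRotationX((float)Math.PI / 2)
                   * Matrix.CreateRotationZ((float)Math.PI)
                   * Matrix.CreateTranslation(new Vector3(19, 5, 12));
```

Matrix.CreateScale with a single float scales uniformly, which is equivalent to the three-argument call used in the Draw method.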

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Loading_a_Model.php>



That doesn't look too realistic yet, because of the blue background and because we're not yet using lighting. More on this in a few chapters!

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; ContentManager content; GraphicsDevice device; Effect effect; Matrix viewMatrix; Matrix projectionMatrix; Texture2D scenerytexture; int[,] int_Floorplan; int WIDTH; int HEIGHT; int differentbuildings = 5; private int[] buildingheights = new int[] { 0, 10, 1, 3, 2, 5 }; VertexPositionNormalTexture[] verticesarray; ArrayList verticeslist = new ArrayList(); Model spacemodel;

public Game1() { graphics = new GraphicsDeviceManager(this); content = new ContentManager(Services); } protected override void Initialize() { LoadFloorplan(); SetUpVertices(); base.Initialize(); } private void LoadFloorplan() { WIDTH = 20; HEIGHT = 15;

int_Floorplan = new int[,] { {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,1,1,0,0,0,1,1,0,0,1,0,1}, {1,0,0,1,1,0,0,0,1,0,0,0,1,0,1}, {1,0,0,0,1,1,0,1,1,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,1,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,1,1,0,0,0,1,0,0,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,1,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,1,0,0,0,0,1}, {1,0,1,1,0,0,0,0,1,1,0,0,0,1,1}, {1,0,0,0,0,0,0,0,1,1,0,0,0,1,1}, {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, }; Random random = new Random(); for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { if (int_Floorplan[x, y] == 1) { int_Floorplan[x, y] = random.Next(differentbuildings) + 1; } } } } private void SetUpXNADevice() { device = graphics.GraphicsDevice; graphics.PreferredBackBufferWidth = 500; graphics.PreferredBackBufferHeight = 500; graphics.IsFullScreen = false; graphics.ApplyChanges(); Window.Title = "Riemer's XNA Tutorials -- Series 2"; } private void LoadEffect() { CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows); effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null); viewMatrix = Matrix.CreateLookAt(new Vector3(20, 5, 13), new Vector3(8, 7, 0), new Vector3(0, 0, 1)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xWorld"].SetValue(Matrix.Identity); }

private void SetUpVertices() { float imagesintexture = 1 + differentbuildings * 2; for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { int currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); if (y > 0) { if (int_Floorplan[x, y - 1] != int_Floorplan[x, y]) { if (int_Floorplan[x, y - 1] > 0) { currentbuilding = int_Floorplan[x, y - 1]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new 
VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 /

imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); } } } if (x > 0) { if (int_Floorplan[x - 1, y] != int_Floorplan[x, y]) { if (int_Floorplan[x - 1, y] > 0) { currentbuilding = int_Floorplan[x - 1, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new 
VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } } } } } verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture)); } protected override void LoadGraphicsContent(bool loadAllContent) { if (loadAllContent) { SetUpXNADevice();

LoadEffect();

scenerytexture = content.Load<Texture2D> ("texturemap");

LoadModels(); } }

private void LoadModels() { spacemodel = FillModelFromFile("xwing"); } private Model FillModelFromFile(string asset) { Model mod = content.Load<Model> (asset); foreach (ModelMesh modmesh in mod.Meshes) foreach (ModelMeshPart modmeshpart in modmesh.MeshParts) modmeshpart.Effect = effect.Clone(device); return mod; }

protected override void UnloadGraphicsContent(bool unloadAllContent) { if (unloadAllContent == true) { content.Unload(); } } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); Matrix worldMatrix = Matrix.Identity; effect.CurrentTechnique = effect.Techniques["Textured"]; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xTexture"].SetValue(scenerytexture); effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements); device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3);

pass.End(); } effect.End();

foreach (ModelMesh modmesh in spacemodel.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"]; worldMatrix = Matrix.CreateScale(0.0005f, 0.0005f, 0.0005f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateRotationZ((float)Math.PI)*Matrix.CreateTranslation(new Vector3(19,5,12)); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); } modmesh.Draw(); }

base.Draw(gameTime); } } }

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Loading_a_Model.php>

This will be a pretty short chapter, as we covered the basics of lighting in the first Series. As a first type of lighting, we're simply going to use a directional light, as we did in Series 1. All vertices of the objects in our scene already contain normal data: we added it explicitly to the vertices of our 3D city, and the .x file we loaded the Model from already contained normal info. So all we have to do is find the LoadEffect method, where we load and initialize our effect, and add this code to the bottom of the method:

effect.Parameters["xEnableLighting"].SetValue(true); effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, -1, -1));

This activates our directional light (once again, you can think of it as light coming from the sun). Try to run this code. First of all, you should notice our airplane is being lit nicely now, and already looks a lot better. The rest of the city, however, could still use some improvement. Because 2 sides of each building are not lit by the sunlight, their lighting factor will be 0, and they will be drawn completely black! In real life, however, even the shadowed sides still have some color, because in real life we also have ambient light: light that is reflected by other objects. In short, when we want to add ambient lighting, we simply have to add some constant amount of lighting to all of our objects. In our code, all we have to do is set the corresponding parameter in our effect. So add this line to the bottom of the LoadEffect method:

effect.Parameters["xAmbient"].SetValue(0.4f);

When you run this code, you should see that there are no more black sides on our buildings. Most sides are still pretty dark, but this is only because we're looking at the shadowed side of our city. You should already see some sides at the left side of your city that are being lit by our sunlight; they should appear brighter. Try changing the intensity of the ambient lighting, the direction of the sunlight, and maybe also the position of the camera to get a feeling of the impact of a directional light on your city.
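The per-vertex computation the effect performs can be sketched outside XNA. The following Python snippet (Python so it runs standalone; the function name is mine, not part of the tutorial) mirrors the diffuse-plus-ambient formula: the dot product of the surface normal with the flipped light direction, clamped at 0, plus the ambient constant.

```python
import math

def lighting_factor(normal, light_dir, ambient):
    """Per-vertex diffuse factor for a directional light, plus ambient."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n = normalize(normal)
    # The light *direction* points from the light into the scene, so we
    # flip it before taking the dot product with the surface normal.
    l = normalize(tuple(-c for c in light_dir))
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, diffuse + ambient)

# A face turned away from the sunlight gets only the ambient term:
print(lighting_factor((0, -1, 0), (0.5, -1, -1), 0.4))  # 0.4
```

With ambient set to 0, that same face would come out at 0 and be drawn completely black, which is exactly what we saw before adding the xAmbient parameter.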

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Ambient_and_diffuse.php>



It is entirely possible to achieve better-looking lighting effects by experimenting with other types of lights. I didn't include this in the effects file, because in the 3rd series you'll learn how to do this yourself.

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; ContentManager content; GraphicsDevice device; Effect effect; Matrix viewMatrix;

Matrix projectionMatrix; Texture2D scenerytexture; int[,] int_Floorplan; int WIDTH; int HEIGHT; int differentbuildings = 5; private int[] buildingheights = new int[] { 0, 10, 1, 3, 2, 5 }; VertexPositionNormalTexture[] verticesarray; ArrayList verticeslist = new ArrayList(); Model spacemodel;

public Game1() { graphics = new GraphicsDeviceManager(this); content = new ContentManager(Services); } protected override void Initialize() { LoadFloorplan(); SetUpVertices(); base.Initialize(); } private void LoadFloorplan() { WIDTH = 20; HEIGHT = 15; int_Floorplan = new int[,] { {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,1,1,0,0,0,1,1,0,0,1,0,1}, {1,0,0,1,1,0,0,0,1,0,0,0,1,0,1}, {1,0,0,0,1,1,0,1,1,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,1,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,1,1,0,0,0,1,0,0,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,1,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,1,0,0,0,0,1}, {1,0,1,1,0,0,0,0,1,1,0,0,0,1,1}, {1,0,0,0,0,0,0,0,1,1,0,0,0,1,1}, {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, }; Random random = new Random(); for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { if (int_Floorplan[x, y] == 1) { int_Floorplan[x, y] = random.Next(differentbuildings) + 1; } }

} } private void SetUpXNADevice() { device = graphics.GraphicsDevice; graphics.PreferredBackBufferWidth = 500; graphics.PreferredBackBufferHeight = 500; graphics.IsFullScreen = false; graphics.ApplyChanges(); Window.Title = "Riemer's XNA Tutorials -- Series 2"; } private void LoadEffect() { CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows); effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null); viewMatrix = Matrix.CreateLookAt(new Vector3(20, 5, 13), new Vector3(8, 7, 0), new Vector3(0, 0, 1)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xWorld"].SetValue(Matrix.Identity);

effect.Parameters["xEnableLighting"].SetValue(true); effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, -1, -1)); effect.Parameters["xAmbient"].SetValue(0.4f); } private void SetUpVertices() { float imagesintexture = 1 + differentbuildings * 2; for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { int currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0)));

if (y > 0) { if (int_Floorplan[x, y - 1] != int_Floorplan[x, y]) { if (int_Floorplan[x, y - 1] > 0) { currentbuilding = int_Floorplan[x, y - 1]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, 
y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); } } } if (x > 0) { if (int_Floorplan[x - 1, y] != int_Floorplan[x, y]) { if (int_Floorplan[x - 1, y] > 0) { currentbuilding = int_Floorplan[x - 1, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0)));

verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } } } } } verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture)); } protected override void LoadGraphicsContent(bool loadAllContent) { if (loadAllContent) { SetUpXNADevice(); LoadEffect();

scenerytexture = content.Load<Texture2D> ("texturemap");

LoadModels(); } }

private void LoadModels() { spacemodel = FillModelFromFile("xwing"); } private Model FillModelFromFile(string asset) {

Model mod = content.Load<Model> (asset); foreach (ModelMesh modmesh in mod.Meshes) foreach (ModelMeshPart modmeshpart in modmesh.MeshParts) modmeshpart.Effect = effect.Clone(device); return mod; }

protected override void UnloadGraphicsContent(bool unloadAllContent) { if (unloadAllContent == true) { content.Unload(); } } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); base.Update(gameTime); } protected override void Draw(GameTime gameTime) { device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); Matrix worldMatrix = Matrix.Identity; effect.CurrentTechnique = effect.Techniques["Textured"]; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xTexture"].SetValue(scenerytexture); effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements); device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3); pass.End(); } effect.End();

foreach (ModelMesh modmesh in spacemodel.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"]; worldMatrix = Matrix.CreateScale(0.0005f, 0.0005f, 0.0005f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateRotationZ((float)Math.PI) * Matrix.CreateTranslation(new Vector3(19, 5, 12)); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); } modmesh.Draw(); }

base.Draw(gameTime); } } }

Imagine our xwing flying through our scene. We would like our camera to always follow it, so the camera would always be positioned behind the xwing, no matter which rotation or translation the xwing has.

First we're going to introduce 2 new variables, spacemeshposition and spacemeshangles, which will contain the position and the angles of our airplane relative to the 3D city. These variables will also determine the position of our camera, as the camera needs to move whenever our xwing moves or rotates. Go ahead and declare these 2 variables at the top of your code:

Vector3 spacemeshposition = new Vector3(8, 2, 1); Vector3 spacemeshangles = new Vector3(0, 0, 0);

As you can see, we've already initialized a starting position and rotation. To accomplish the goal of this chapter, we will draw the 3D city, then draw the airplane at its correct position and rotation, and then reposition the camera immediately behind our airplane. So in our Draw method, we can already change the world matrix for our spacemesh:

worldMatrix = Matrix.CreateScale(0.0005f, 0.0005f, 0.0005f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateRotationZ((float)Math.PI) * Matrix.CreateRotationY(-(float)spacemeshangles.Y) * Matrix.CreateRotationX(-(float)spacemeshangles.X) * Matrix.CreateRotationZ(-(float)spacemeshangles.Z) * Matrix.CreateTranslation(spacemeshposition);

This looks to be a very complex line, but it's not. First the model is scaled down, then it is rotated around the X and Z axes so its nose points in the correct direction to start with. Next, the spaceship is rotated by the angles specified in the spacemeshangles variable. Finally, the mesh is translated to its correct position. Now that we have our xwing at the correct position, it's time to position the camera behind the xwing. We're going to create a method to do this. As a small warning, I would like to note that this method contains some maths. Don't worry if you don't understand everything, but you can read through it and try to get the general picture. It contains the 5 most difficult lines of this series, but hey, it's only 5 lines ;)
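The order of the multiplications matters: scaling and rotating must happen while the model still sits at the origin, before the translation. A small Python sketch (illustrative only; the helper names are mine) shows what goes wrong if you translate first:

```python
import math

def rot_z(p, angle):
    """Rotate point p around the Z axis by angle (radians)."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

def scale(p, f):
    return tuple(c * f for c in p)

def translate(p, t):
    return tuple(a + b for a, b in zip(p, t))

nose = (0.0, 1000.0, 0.0)   # a model-space vertex, far from the origin

# Scale -> rotate -> translate, the order used for the xwing matrix:
a = translate(rot_z(scale(nose, 0.0005), math.pi), (19, 5, 12))

# Translate first instead, and the later rotation swings the model
# around the world origin, putting it in the wrong place entirely:
b = rot_z(translate(scale(nose, 0.0005), (19, 5, 12)), math.pi)

print(a)   # stays near (19, 5, 12)
print(b)   # ends up on the far side of the world origin
```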

So our method will create a new viewMatrix and projectionMatrix that depend on the current position and angles of the xwing. Let's start with this:

private void UpdateCamera() { Vector3 campos = new Vector3(0, -0.6f, 0.1f); }

This vector defines where we want our camera to be, relative to the position of the xwing: we want to position the camera a bit behind and above our xwing, so the vector only has a Y and Z component. This vector needs to be transformed, so it will also be behind the xwing when we rotate our xwing. Let's first create a matrix that corresponds to this rotation:

Matrix camrot = Matrix.CreateRotationY(-(float)spacemeshangles.Y) * Matrix.CreateRotationX((float)spacemeshangles.X) * Matrix.CreateRotationZ(-(float)spacemeshangles.Z);

This matrix holds the rotation of our xwing. Now, we need to apply this rotation to our campos vector:

campos = Vector3.Transform(campos, camrot);

Now the campos variable holds the vector that will always be behind (and a bit above) our xwing no matter what its rotation is, IF the xwing is in the (0,0,0) position. Because the position of our xwing will constantly change, we need to move (=translate) our campos vector to the position of our xwing, which is done using the following line:

campos = Vector3.Transform(campos, Matrix.CreateTranslation(spacemeshposition));

OK, so now we have the vector that will always be a bit behind and a bit above our xwing, no matter which rotation and/or translation our xwing has! Remember, when we create a viewMatrix, we not only need the position and target of our camera, but also the vector that indicates the `up' direction of the camera. We already know the target of our camera (the position of the xwing), but we don't yet have the up-vector. This is found in exactly the same way: we start with the vector that points up, and rotate it with the xwing rotation matrix:

Vector3 camup = new Vector3(0, 0, 1); camup = Vector3.Transform(camup, camrot);

Now we have everything to create our camera matrices: the position, target and up-vector, so we can create our new matrices:

viewMatrix = Matrix.CreateLookAt(campos, spacemeshposition, camup); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f);

Only the first line contains the newly created vectors; the second line has remained the same. Quite a method! Let's make sure we call it from the Update method:

UpdateCamera();
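The whole of UpdateCamera boils down to: rotate a fixed offset and the up vector by the xwing's rotation, then translate the offset to the xwing's position. Here is a Python sketch of that math, simplified to the Z rotation only (the tutorial chains X, Y and Z); all names are mine, not XNA calls:

```python
import math

def rot_z(p, a):
    """Rotate p around the Z axis (the 'up' axis in this scene)."""
    x, y, z = p
    c, s = math.cos(a), math.sin(a)
    return (x * c - y * s, x * s + y * c, z)

def chase_camera(plane_pos, heading):
    """Camera position and up vector for a plane yawed by `heading`.
    Only the Z rotation is shown; the tutorial chains X, Y and Z."""
    offset = (0.0, -0.6, 0.1)                 # behind and above the plane
    rotated = rot_z(offset, -heading)         # same rotation as the plane
    campos = tuple(o + p for o, p in zip(rotated, plane_pos))
    camup = rot_z((0.0, 0.0, 1.0), -heading)  # rotate the up vector too
    return campos, camup

pos, up = chase_camera((8.0, 2.0, 1.0), 0.0)
print(pos)   # approximately (8.0, 1.4, 1.1): 0.6 behind, 0.1 above
```

Rotate the heading and the camera swings around the plane with it, which is exactly why the same rotation is applied to both the offset and the up vector before translating.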

Now that our camera matrices are updated every frame, we need to pass them to our effect every time we draw something. So we'll need these lines in the Draw method, before we draw the 3D city:

effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xTexture"].SetValue(scenerytexture);

And for our xwing:

currenteffect.Parameters["xWorld"].SetValue(worldMatrix); currenteffect.Parameters["xView"].SetValue(viewMatrix); currenteffect.Parameters["xProjection"].SetValue(projectionMatrix);

That's it! When you run this code, you'll see the camera has been positioned behind and above your xwing, as on the image below:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Dynamic_camera.php>


Now that we have our camera centered on the xwing, it's time to make our xwing fly. Of course, this will be done through the Update method. Let's first have a look at how the angles of our airplane are defined (Z is pointing upward, toward the viewer):

Let's explain the meaning of these angles. In an airplane, when you steer to the left, the flaps are adjusted so the plane rolls around the Y axis displayed above. When you pull the joystick towards you, the nose of the plane is lifted up, which corresponds to a rotation around the X axis. If your plane was already tilted around the Y axis, lifting the nose up will also result in a rotation around the Z axis. The ProcessKeyboard method will read keyboard input and change the values of the angles accordingly:

private void ProcessKeyboard(float speed)
{
    KeyboardState keys = Keyboard.GetState();
    if (keys.IsKeyDown(Keys.Right))
    {
        spacemeshangles.Y -= 2.5f * speed;
    }
    if (keys.IsKeyDown(Keys.Left))
    {
        spacemeshangles.Y += 2.5f * speed;
    }
}

All this does is rotate the plane around its Y axis when you press the left or right arrow key, as discussed above. You'll notice that this rotation again depends on a speed variable. Let's try this out by placing a call to this method in the Update method. We need to pass in the amount of movement, which we relate to the amount of time that has passed, so everybody experiences the same amount of rotation no matter how fast or slow their computer is. Put this line before your call to UpdateCamera:

ProcessKeyboard(gameTime.ElapsedGameTime.Milliseconds/500.0f*gamespeed);

You see we also use a variable gamespeed, which we will increase when the player is playing well, and decrease when the player crashes. We still need to define this variable at the top of our code:

float gamespeed = 1.0f;
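The elapsed-time scaling can be checked with a quick Python sketch (hypothetical frame timings, not XNA code): whether a second of game time arrives as 60 small frames or 10 large ones, the accumulated rotation is the same.

```python
def rotation_step(elapsed_ms, gamespeed, rate=2.5):
    """Angle change for one frame: scaled by elapsed time, so a fast
    machine (many small frames) and a slow one (few large frames)
    rotate the plane by the same total amount per second."""
    return rate * (elapsed_ms / 500.0) * gamespeed

# One second of game time, whether it arrives as 60 frames or as 10:
fast = sum(rotation_step(1000 / 60, 1.0) for _ in range(60))
slow = sum(rotation_step(1000 / 10, 1.0) for _ in range(10))
print(round(fast, 6), round(slow, 6))   # both roughly 5.0
```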

Now run the program! When you push the left or right arrow button on your keyboard, the plane will spin around its Y axis. Let's try to pull up the nose of our xwing. So let's add this code to the ProcessKeyboard method:

if (keys.IsKeyDown(Keys.Down))
{
    spacemeshangles.Z -= (float)(speed * Math.Sin(spacemeshangles.Y));
    spacemeshangles.X -= (float)(speed * Math.Cos(spacemeshangles.Y));
}
if (keys.IsKeyDown(Keys.Up))
{
    spacemeshangles.Z += (float)(speed * Math.Sin(spacemeshangles.Y));
    spacemeshangles.X += (float)(speed * Math.Cos(spacemeshangles.Y));
}
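The sin/cos split in these updates can be sketched outside XNA. In this Python rendition (the function name is mine), a nose-up input goes entirely to the X angle when the roll is 0, and entirely to the Z angle at 90 degrees of roll:

```python
import math

def pull_nose_up(angles, speed):
    """Split a nose-up input over the world X and Z axes according to
    the current roll (angles[1], the rotation around Y)."""
    x, y, z = angles
    z += speed * math.sin(y)
    x += speed * math.cos(y)
    return (x, y, z)

print(pull_nose_up((0.0, 0.0, 0.0), 0.1))          # pure pitch: only X changes
print(pull_nose_up((0.0, math.pi / 2, 0.0), 0.1))  # pure turn: only Z changes
```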

Pressing the down arrow will pull up the nose of the airplane. If the rotation around the Y axis is 0, this only changes the rotation around the X axis, because sin(0) = 0. If, however, the rotation around the Y axis is not 0, the rotation around the Z axis will also change, which makes your airplane actually `turn'. Although this math is the basis flight simulators rely on, it's not 100% complete. For example, looping will cause the Math.Sin to switch from positive to negative, making your airplane fly backwards. Although 4 additional if-checks would do the trick, learning maths is not the goal of this series of XNA tutorials; this and the previous chapter already contain more than enough of it. Running this code should enable you to rotate your xwing to any angle you want! Not that it's much fun yet, because our xwing isn't actually moving. So let's create another method, UpdatePosition, which will update the spacemeshposition according to the spacemeshangles:

private void UpdatePosition(ref Vector3 position, Vector3 angles, float speed)
{
    Vector3 addvector = new Vector3();
    addvector.X += (float)(Math.Sin(angles.Z));
    addvector.Y += (float)(Math.Cos(angles.Z));
    addvector.Z -= (float)(Math.Tan(angles.X));
    addvector.Normalize();
    position += addvector * speed;
}

First, we calculate the direction of movement from the angles. Then we normalize this direction (so its length becomes 1), multiply it by a speed variable, and add it to the current position. We need to call this method from our Update method, immediately before the call to UpdateCamera. Once again, we need to pass in the amount of movement:

UpdatePosition(ref spacemeshposition, spacemeshangles, gameTime.ElapsedGameTime.Milliseconds/500.0f*gamespeed);
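The direction math inside UpdatePosition can be sketched in Python (illustrative only; the function name is mine). Heading comes from the Z rotation, climb from the X rotation, and the result is normalized to unit length:

```python
import math

def flight_direction(angles):
    """Direction of travel derived from the plane's angles, mirroring
    UpdatePosition: sin/cos of the Z angle for heading, -tan of the X
    angle for climb, then normalized to unit length."""
    x_angle, _, z_angle = angles
    d = (math.sin(z_angle), math.cos(z_angle), -math.tan(x_angle))
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

print(flight_direction((0.0, 0.0, 0.0)))   # level flight straight along +Y
```

Because the vector is normalized, the plane's speed stays constant no matter how steeply it climbs or dives (within the range where tan is well-behaved).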

And there you have it! When you run this code, you should be able to fly your xwing through the 3D city, as shown in the image below:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Flight_kinematics.php>

As you have probably already guessed, this chapter we're going to detect when our plane has collided with an element of the scene -- so far, our 3D city. To do this, we're going to model our airplane as a sphere, an abstraction that is good enough for this purpose.

The previous 2 chapters contained the difficult maths of this series; from this chapter on, things will go more easily. Using some very basic checks, it is possible to detect collisions. To do this, we're going to create a new method, called CheckCollision. This method returns an integer: 0 means no collision, 1 means a collision with an object of the 3D city, and 2 means the plane has gone too high or too far away. Of course, the method needs the position of our airplane, as well as its radius. Start with this code:

private int CheckCollision(Vector3 position, float radius)
{
    if (position.Z - radius < 0) return 1;
    if (position.Z + radius > 15) return 2;
    if ((position.X < -10) || (position.X > WIDTH + 10)) return 2;
    if ((position.Y < -10) || (position.Y > HEIGHT + 10)) return 2;

    return 0;
}

We've already discussed the interface defined by the first line. Inside the method, we first check whether our plane has crashed into the ground. Position.Z corresponds to the center of our plane, so we need to subtract the radius from it to find the lowest point of our airplane. If this bottom point is lower than 0, our plane has crashed into the floor of the city. Bad flying practice, and the method returns 1. Exactly the same check is used to see whether the plane has gone too high; in that case, however, a 2 is returned instead of a 1. This way, the calling method knows what kind of `collision' has occurred and can subtract points accordingly: you'll be able to subtract more points from the player when he crashes into the floor than when he flies too high. The next two lines perform a similar check to see whether the player hasn't flown too far away from the city (more than 10 units beyond the city limits). If that's the case, a 2 is again returned, indicating that the `collision' was not really a crash. If none of the checks turn out positive, a 0 is returned, meaning no collision of any kind has happened. Now it's time to check for collisions between the plane and the buildings. Add these lines before the `return 0' line:

if ((position.X - radius > 0) && (position.X + radius < WIDTH) && (position.Y - radius > 0) && (position.Y + radius < HEIGHT))
{
    if (position.Z - radius < buildingheights[int_Floorplan[(int)position.X, (int)position.Y]])
        return 1;
}

The middle line simply checks whether the lowest point of the plane isn't lower than the top of the building at that position. Remember the int_Floorplan contains the type of building at that position, while the corresponding height can be found in the buildingheights variable. Notice, however, that we're indexing the int_Floorplan array, which means the X and Y coordinates of our plane must not be smaller than 0 or larger than the dimensions of this array. It is exactly this condition that is checked in the first line. The only thing left to do is react to a collision. So add these lines to our Update method, after our call to UpdateCamera:

if (CheckCollision(spacemeshposition, spacemodel.Meshes[0].BoundingSphere.Radius * 0.0005f) > 0)
{
    spacemeshposition = new Vector3(8, 2, 1);
    spacemeshangles = new Vector3(0, 0, 0);
    gamespeed /= 1.1f;
}

The first line calls the CheckCollision method. The position of our xwing is passed in, and the second argument is the radius: the maximal distance between any vertex of the xwing and its center. Such a value is already contained in the Model itself, so we retrieve it, scale it by the same 0.0005f factor we use to draw the model, and pass it to the method. If the method detects a collision, it returns a value greater than 0. In that case, the position and angles of our plane are reset to the starting conditions. Also, every time the plane crashes, the speed of the game is turned down a notch! That should do the job! Run the program and crash as fast as possible. Your plane should be repositioned to its starting position, and after a few crashes you should notice the game speed has decreased. Have fun!
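The complete set of checks can be replayed outside XNA. This Python sketch (all names are mine) mirrors the order of the checks in CheckCollision, including the guard that keeps the floorplan lookup in bounds:

```python
def check_collision(pos, radius, floorplan, heights, width, height):
    """0 = no collision, 1 = crash (ground or building roof),
    2 = out of bounds or too high."""
    x, y, z = pos
    if z - radius < 0:
        return 1        # lowest point of the sphere hit the ground
    if z + radius > 15:
        return 2        # flown too high
    if x < -10 or x > width + 10 or y < -10 or y > height + 10:
        return 2        # more than 10 units outside the city
    # Only index the floorplan when the whole sphere lies inside it.
    if radius < x < width - radius and radius < y < height - radius:
        if z - radius < heights[floorplan[int(x)][int(y)]]:
            return 1    # clipped the roof of a building
    return 0

plan = [[0, 1], [0, 0]]    # 2x2 floorplan with one building of type 1
heights = [0, 10]          # building type 1 is 10 units tall
print(check_collision((0.5, 1.5, 2.0), 0.3, plan, heights, 2, 2))  # 1
```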

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Collision_detection.php>

With our scene set up and our airplane flying through it, it's time to add some targets. In this example, we'll be using simple spheres. Once again, we will load this model from a file. First we'll define a variable at the top of our code that indicates the maximal number of targets in our city, as well as an ArrayList that will hold all our targets:

int maxtargets = 80; ArrayList targetlist = new ArrayList(); Model targetmodel;

The last variable will hold the model of our targets. Let's first fill this variable. You can download an example target model here: link. It is a simple red sphere. Next, add this .x file to your solution, as seen in the chapter on Textures of this series. With the `target' asset added to our solution, all we need to do is add this line to our LoadModels method:

targetmodel = FillModelFromFile("target");

This calls the FillModelFromFile method, which fills our targetmodel variable and replaces all effects in that model with copies of our own effect. For every target, we will need to keep track of 2 values: the position of the target and its size. We are going to define a new struct that can hold both values. Add this at the very top of your code, immediately before the declaration of your variables:

struct targetstruct { public Vector3 position; public float radius; }

As you can see, we'll be storing the position and radius of each target. These targets will be added to the targetlist in a new method, AddTargets. Add this piece of the method to your code:

private void AddTargets()
{
    Random random = new Random();   // create the Random once, outside the loop
    while (targetlist.Count < maxtargets)
    {
        int x = random.Next(WIDTH);
        int y = random.Next(HEIGHT);
        float z = (float)random.Next(2000) / 1000f + 1;
        float radius = (float)random.Next(1000) / 1000f * 0.2f + 0.01f;

        bool acceptableposition = true;
        if (int_Floorplan[x, y] > 0)
            acceptableposition = false;
        foreach (targetstruct currenttarget in targetlist)
        {
            if (((int)currenttarget.position.X == x) && ((int)currenttarget.position.Y == y))
                acceptableposition = false;
        }
    }
}

Don't run this code yet, as it will loop forever; wait until we've added the final code to the method. This code loops until enough targets have been added. The code block in the middle first generates a random position for a new target, as well as a random size, the radius. Then we check whether the generated position is acceptable. The first condition is that there mustn't be a building at that position, meaning int_Floorplan must contain a 0 there. Then we check that there isn't already an existing target at that position. If neither check fails, meaning it's OK to place a new target at the generated position, the variable acceptableposition is still `true'. To actually add the new target, add this code inside the while-loop:

if (acceptableposition)
{
    targetstruct newtarget = new targetstruct();
    newtarget.position = new Vector3(x + 0.5f, y + 0.5f, z);
    newtarget.radius = radius;
    targetlist.Add(newtarget);
}

If it's OK to add a new target at the generated position, we create a new targetstruct variable. We simply set the position and radius values of this variable.

With all the values of the new target filled in, we add it to the ArrayList. This causes targetlist.Count to increase by 1, so after looping through the code a few times, targetlist.Count will have reached maxtargets and the method will stop. Make sure you don't set maxtargets higher than the number of empty floor tiles in your city, or the method will loop forever because it can't find enough places to put all the targets! Call this method from your Initialize method:

AddTargets();

Running this code shouldn't give any problems, but you won't notice a difference either as we're not yet drawing the targets. Add this code to the bottom of our Draw method:

foreach (targetstruct currenttarget in targetlist)
{
    foreach (ModelMesh modmesh in targetmodel.Meshes)
    {
        foreach (Effect currenteffect in modmesh.Effects)
        {
            currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"];
            worldMatrix = Matrix.CreateScale(currenttarget.radius, currenttarget.radius, currenttarget.radius) * Matrix.CreateTranslation(currenttarget.position);
            currenteffect.Parameters["xWorld"].SetValue(worldMatrix);
            currenteffect.Parameters["xView"].SetValue(viewMatrix);
            currenteffect.Parameters["xProjection"].SetValue(projectionMatrix);
        }
        modmesh.Draw();
    }
}

Quite simple, only more of the same: for each target in our targetlist, we set the correct position and scaling in the world matrix, and draw the target. Running this code should display the targets in your city. For now, we can still fly through our targets, so we'll end this chapter by adding these lines to the CheckCollision method:

for (int i = 0; i < targetlist.Count; i++)
{
    targetstruct currenttarget = (targetstruct)targetlist[i];
    if (Math.Sqrt(Math.Pow(currenttarget.position.X - position.X, 2) + Math.Pow(currenttarget.position.Y - position.Y, 2) + Math.Pow(currenttarget.position.Z - position.Z, 2)) < radius + currenttarget.radius)
    {
        targetlist.RemoveAt(i);
        i--;
        return 3;
    }
}

For every target in our targetlist, we check whether the distance between the center of our plane and the center of the target is smaller than the sum of the radii of both objects. If it is, our plane collides with the target, so a 3 is returned. Remember that this method is already called from within your Update method. That's all we have to do this chapter: running this code will draw your targets, and bumping into them will cause a collision.
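The sphere-versus-sphere test above can be tried outside XNA too. The sketch below uses System.Numerics.Vector3 as a stand-in for XNA's Vector3 (its Distance helper computes the same square root the tutorial spells out with Math.Sqrt and Math.Pow); the class and method names are mine, not from the tutorial:

```csharp
using System;
using System.Numerics; // stand-in for Microsoft.Xna.Framework's Vector3

class SphereCollision
{
    // Two spheres collide when the distance between their centers is
    // smaller than the sum of their radii -- the same test the tutorial
    // performs with Math.Sqrt and Math.Pow.
    public static bool Collides(Vector3 a, float radiusA, Vector3 b, float radiusB)
    {
        return Vector3.Distance(a, b) < radiusA + radiusB;
    }

    static void Main()
    {
        Vector3 plane = new Vector3(8, 2, 1);
        Vector3 target = new Vector3(8.1f, 2, 1);                 // 0.1 units away
        Console.WriteLine(Collides(plane, 0.05f, target, 0.2f)); // radii sum 0.25 > 0.1: True
    }
}
```

XNA also offers BoundingSphere.Intersects for the same purpose; the tutorial keeps the explicit formula so you can see the math.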

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Adding_targets.php>

This chapter we'll be adding bullets to our game. For these bullets we could use real 3D spheres, but that would ask a lot from our graphics card. Instead, we'll use a very simple 2D image of a fireball, which you can download here (link). We will only supply the central point of the image in 3D space to XNA, and XNA will display the image, always facing the viewer and scaled to reflect the distance between the viewer and that point in 3D space. This technique is called billboarding. A 2D image is also called a sprite, and since XNA needs only the center point of the image as 3D location, these 2D sprites used in 3D are called point sprites. Since only 1 point is needed per image, this method consumes very little of the bandwidth between our PC and our graphics card. Once again, we will load our 2D image into a texture variable. As we need to keep track of all the bullets we've fired, we'll need an ArrayList to store them, so put these lines at the top of your code:

Texture2D bullettexture;
ArrayList spritelist = new ArrayList();

Go ahead and load the image into your solution. Next we'll load the 2D image into our texture variable. This is done by adding this line to the LoadGraphicsContent method:

bullettexture = content.Load<Texture2D>("bullet");

You can download my sample image here (link). When fired, the bullets will move forward continuously. Thus, for every bullet, we need to keep track of its position and 3 rotation angles, just as with our plane. So we're going to define a new struct at the top of our code:

struct spritestruct
{
    public Vector3 position;
    public Vector3 angles;
}

Now, every time the user presses the spacebar, we want a new bullet to be created and be added to our ArrayList, so add this code at the bottom of your ReadUserInput method:

if (keys.IsKeyDown(Keys.Space))
{
    spritestruct newsprite = new spritestruct();
    newsprite.position = spacemeshposition;
    newsprite.angles = spacemeshangles;
    spritelist.Add(newsprite);
}

When the spacebar is pressed, we create a new spritestruct and give it the current position and angles of the airplane. This is because we want the bullet to travel in the same direction our plane was flying the moment the bullet was fired. Then the new bullet is added to the ArrayList. Now we want to let the bullets move forward. We've already created a method that does all the calculations: UpdatePosition. We're going to create a new method, UpdateSpritePositions, that will be called from within our Update method. This method will scroll through our spritelist and update the position of every sprite. This is the code:

private void UpdateSpritePositions(float speed)
{
    for (int i = 0; i < spritelist.Count; i++)
    {
        spritestruct currentsprite = (spritestruct)spritelist[i];
        UpdatePosition(ref currentsprite.position, currentsprite.angles, speed * 5);
        spritelist[i] = currentsprite;
    }
}

Pretty straightforward: the position of every bullet in our spritelist is updated. This method also receives the amount of elapsed time, and as you can see, our bullets travel 5 times as fast as our airplane. Call this method from within the Update method:

UpdateSpritePositions(gameTime.ElapsedGameTime.Milliseconds / 500.0f * gamespeed);
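The factor passed in (elapsed milliseconds divided by 500, times gamespeed) is what makes movement frame-rate independent: over one second, the per-frame factors add up to roughly the same total whatever the frame rate. A quick stand-alone check of that arithmetic (plain C#, no XNA types needed):

```csharp
using System;

class SpeedCheck
{
    // The same scale factor the tutorial passes to UpdateSpritePositions.
    public static float SpeedFactor(int elapsedMs, float gamespeed)
    {
        return elapsedMs / 500.0f * gamespeed;
    }

    static void Main()
    {
        // 60 fps: ~16 ms per frame, 60 frames per second
        float perSecondAt60 = 60 * SpeedFactor(16, 1.0f);
        // 30 fps: ~33 ms per frame, 30 frames per second
        float perSecondAt30 = 30 * SpeedFactor(33, 1.0f);
        // Both sums come out close to 2, so the distance covered per
        // second barely depends on the frame rate.
        Console.WriteLine(perSecondAt60 + " " + perSecondAt30);
    }
}
```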

Next, we want to draw our sprites. For each bullet, XNA needs to be passed a vertex definition containing 2 things: the position and the size of the sprite. We need to pass the size because we want the images to become smaller as they move away from our xwing. As in Series 1, we find ourselves at a point where we cannot use one of the predefined vertex formats XNA offers. So we need to define our very own vertex format: a structure that is able to hold the necessary data. Put this struct at the top of your code:

private struct ourspritevertexformat
{
    private Vector3 position;
    private float pointSize;

    public ourspritevertexformat(Vector3 position, float pointSize)
    {
        this.position = position;
        this.pointSize = pointSize;
    }

    public static VertexElement[] Elements =
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Single, VertexElementMethod.Default, VertexElementUsage.PointSize, 0),
    };

    public static int SizeInBytes = sizeof(float) * (3 + 1);
}

This struct can hold the data XNA needs to draw point sprites: the position and the size. We also define the array of VertexElements, which our graphics card needs. More info on this in the 3rd series. Now we will make a method, DrawSprites, that draws the bullets stored in our spritelist. It looks quite difficult, but contains nothing we haven't seen before:

private void DrawSprites()
{
    if (spritelist.Count > 0)
    {
        ourspritevertexformat[] spritecoordsarray = new ourspritevertexformat[spritelist.Count];
        foreach (spritestruct currentsprite in spritelist)
        {
            spritecoordsarray[spritelist.IndexOf(currentsprite)] = new ourspritevertexformat(currentsprite.position, 50.0f);
        }

        effect.CurrentTechnique = effect.Techniques["PointSprites"];
        Matrix worldMatrix = Matrix.Identity;
        effect.Parameters["xWorld"].SetValue(worldMatrix);
        effect.Parameters["xView"].SetValue(viewMatrix);
        effect.Parameters["xProjection"].SetValue(projectionMatrix);
        this.effect.Parameters["xTexture"].SetValue(bullettexture);

        effect.Begin();
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Begin();
            device.VertexDeclaration = new VertexDeclaration(device, ourspritevertexformat.Elements);
            device.DrawUserPrimitives(PrimitiveType.PointList, spritecoordsarray, 0, spritecoordsarray.Length);
            pass.End();
        }
        effect.End();
    }
}

If our spritelist is not empty, an array of ourspritevertexformats is created and filled with one element for each sprite in our spritelist. Each element gets the current position of the sprite, and the maximal size of the image rectangle: if the distance between the camera and the image is 1, the image rectangle will measure 50x50 pixels. Next, we need to let our graphics card know we'll be using point sprites. This is what the `PointSprites' technique in our effect file does. We select this technique, and set its world, view and projection matrices. Because the positions of all bullets are defined relative to the absolute axes (the same as our 3D city), we're using the identity matrix as world matrix. We also need to set the image of the bullet as active texture, as this is where our graphics card needs to find the color for each pixel. We've already seen the last part a few times: for each pass of the effect, we draw a certain number of elements from an array containing vertices. Because every image requires only 1 vertex, the number of elements to draw is the same as the number of elements in our spritecoordsarray. This concludes the method. All we have to do is call it from the bottom of our Draw method:

DrawSprites();

Now try to run this code! You should see a screen like the one below: Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php>
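As noted above, the point size of 50 gives a 50x50 pixel square at distance 1, shrinking as the bullet recedes. The exact scaling is done by the `PointSprites' technique in the effect file; as a rough illustration only (not XNA's internal formula), apparent size falls off with distance:

```csharp
using System;

class SpriteSize
{
    // Illustrative perspective fall-off: size in pixels divided by distance.
    // The real computation lives in the vertex shader of the effect file.
    public static float ApparentSize(float pointSize, float distance)
    {
        return pointSize / distance;
    }

    static void Main()
    {
        Console.WriteLine(ApparentSize(50f, 1f)); // 50 pixels at distance 1
        Console.WriteLine(ApparentSize(50f, 2f)); // 25 pixels: twice as far, half as big
    }
}
```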

As a demanding audience, you were probably not fully pleased with the result of last chapter. My guess is the black borders around our fireballs didn't seem too normal to you. And you're completely right: the borders should be blended with whatever is behind them. Welcome to this chapter on alpha blending.

Up till now, if 2 objects shared the same pixel, the pixel took the color of the object closest to the viewer. In the simplest form of alpha blending, the colors of both objects are added together, and the pixel takes this `sum' as its color. For example, imagine a completely blue background. Adding a red triangle in front of it would, up to now, simply display a red triangle. With alpha blending turned on, the pixel of the triangle would contain blue+red, so the whole triangle would be purple. Black is a special case. In terms of XNA, it's not a color; it's nothing. So black+blue gives simply blue. In general, black + a color results in that color. This is why I've drawn the borders of the fireball black: the more black the image gets, the more of the background is let through.

When do we need to turn on alpha blending? We need our bullets to be blended with the background, but the rest of our scene mustn't change. So we need to turn on alpha blending before drawing our bullets, and turn it off again afterwards. Add the following code to our DrawSprites method, before the call to effect.Begin:

device.RenderState.AlphaBlendEnable = true;
device.RenderState.SourceBlend = Blend.One;
device.RenderState.DestinationBlend = Blend.One;

In short, when 2 objects compete for one pixel, their colors will simply be summed, as both of their weights will be One:

FinalColor = Color1*SourceBlend + Color2*DestinationBlend

Which gives in our case:

FinalColor = Color1*1 + Color2*1
FinalColor = Color1 + Color2

Told you, it's simply the sum. Remember we need to disable alpha blending after we've drawn our bullets, so put this line at the end of the method:

device.RenderState.AlphaBlendEnable = false;
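The additive formula can be checked with plain byte arithmetic. Note that the hardware clamps (saturates) each channel at 255, and that black (0,0,0) contributes nothing, which is exactly why the fireball's black border lets the background shine through. A stand-alone sketch, with names of my choosing:

```csharp
using System;

class AdditiveBlend
{
    // FinalColor = Source*1 + Destination*1, clamped per channel at 255.
    public static (int r, int g, int b) Add((int r, int g, int b) src, (int r, int g, int b) dst)
    {
        return (Math.Min(src.r + dst.r, 255),
                Math.Min(src.g + dst.g, 255),
                Math.Min(src.b + dst.b, 255));
    }

    static void Main()
    {
        var red = (255, 0, 0);
        var blue = (0, 0, 255);
        var black = (0, 0, 0);
        Console.WriteLine(Add(red, blue));   // (255, 0, 255): red + blue = purple
        Console.WriteLine(Add(black, blue)); // (0, 0, 255): black lets blue through
    }
}
```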

Running this should give you a MUCH nicer effect on your bullets! At slow gamespeeds, or on a fast computer, holding the spacebar results in shooting 100 bullets every second. Find the lines where you process the spacebar keyboard input. We're going to build in a check to make sure there's a minimal distance between 2 consecutive bullets:

if (keys.IsKeyDown(Keys.Space))
{
    double distance = 0;
    if (spritelist.Count > 0)
    {
        spritestruct lastsprite = (spritestruct)spritelist[spritelist.Count - 1];
        distance = Math.Sqrt(Math.Pow(lastsprite.position.X - spacemeshposition.X, 2) + Math.Pow(lastsprite.position.Y - spacemeshposition.Y, 2) + Math.Pow(lastsprite.position.Z - spacemeshposition.Z, 2));
    }
    if ((distance > 0.8f) || (spritelist.Count == 0))
    {
        spritestruct newsprite = new spritestruct();
        newsprite.position = spacemeshposition;
        newsprite.angles = spacemeshangles;
        spritelist.Add(newsprite);
    }
}

The first lines calculate the distance between the current position of the plane and the position of the last bullet. The remainder of the code is only executed if this distance is at least 0.8, or if there are no bullets yet. Try running this code. You'll notice the firing mechanism has become a bit more realistic, and it no longer depends on the speed of your computer! We have nice bullets, but what good are bullets if they don't destroy our targets? Luckily, we already have a method that can check whether 2 objects collide. So go to your UpdateSpritePositions method, and add this code to the end of the for-loop:

int collisionkind = CheckCollision(currentsprite.position, 0.05f);
if (collisionkind > 0)
{
    spritelist.RemoveAt(i);
    i--;
    if (collisionkind == 3)
    {
        gamespeed *= 1.02f;
        AddTargets();
    }
}

First the program checks whether the given bullet collides with an object in the scene. If not, a 0 is stored in collisionkind; if there is a collision, a number indicating the kind of collision is stored in that variable. So if there is a collision, the bullet is removed from the spritelist, making sure XNA never draws it again. More importantly, if collisionkind contains 3, this indicates a collision between our bullet and a target. The CheckCollision method already takes care of removing the target; all we have to do is increase the speed of the game a bit and replace the deleted target with a new one. Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Alpha_blending.php>
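The return values of CheckCollision are magic numbers: 0 means no collision, 1 a building or the ground, 2 the edge of the playing field, and 3 a target. The tutorial uses the raw integers; if you prefer self-documenting code, a hypothetical enum (my naming, not the tutorial's) could wrap them:

```csharp
using System;

// Hypothetical names for CheckCollision's integer results;
// the tutorial itself works with the raw ints.
enum CollisionKind
{
    None = 0,     // no collision
    Building = 1, // hit a building or the ground
    Boundary = 2, // left the playing field (or flew too high)
    Target = 3    // hit a target sphere
}

class CollisionDemo
{
    static void Main()
    {
        CollisionKind kind = (CollisionKind)3; // cast the int returned by CheckCollision
        if (kind == CollisionKind.Target)
            Console.WriteLine("target destroyed: speed up and respawn a target");
    }
}
```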

This chapter, we'll do something about the solid background color surrounding our city. Using the methods we've created thus far, we could expect this to be quite easy, as a skybox is nothing more than a mesh. Only this time, we need to load some texture files to accompany the mesh. You can download the mesh file itself (link), as well as the texture files (link). I've put the texture files in a zip file; if you've got problems opening it, you can also find the files here (link). The mesh file itself used to be supplied with the XNA SDK, and you can find lots of skybox texture files on the internet. This time, we will need 2 variables: one to store the model, and an extra one to store the textures (because each subset of the model can have a different texture). So create these variables at the top of your code:

Model skybox;
Texture2D[] skyboxtextures;

Normally, we would use the FillModelFromFile method to load our skybox, but we cannot do this, because the method doesn't load the textures. We'll simply put this code at the end of the LoadModels method:

skybox = content.Load<Model>("skybox2");
int i = 0;
skyboxtextures = new Texture2D[skybox.Meshes.Count];
foreach (ModelMesh mesh in skybox.Meshes)
    foreach (BasicEffect currenteffect in mesh.Effects)
        skyboxtextures[i++] = currenteffect.Texture;

foreach (ModelMesh modmesh in skybox.Meshes)
    foreach (ModelMeshPart modmeshpart in modmesh.MeshParts)
    {
        modmeshpart.Effect = effect.Clone(device);
        effect.Parameters["xEnableLighting"].SetValue(false);
    }

The first line loads the data from file into our variable. Remember all parts of the model can hold a different effect? These effects also hold the name of the texture corresponding to their part of the model. So we cycle through each effect in our model, and save all the textures in the skyboxtextures array! The second part does almost the same as the FillModelFromFile method: for each part of the model, we set our effect as active effect. Only this time, we turn off lighting, as we don't want some parts of the skybox to be shadowed, of course. That's it for loading the skybox. All we have to do now is draw it. What makes a skybox special? When your plane is moving, the city has to move relative to it. Not so for your skybox, which always has to be at a constant distance from your airplane. This makes our skybox look like it's infinitely far away! So when our airplane moves, we move the skybox with it, so our xwing is always in the middle of it. Put this code at the end of our Draw method, but before the call to DrawSprites (at the end of the chapter, see what happens when you put this code BEHIND the call to DrawSprites):

int i = 0;
foreach (ModelMesh modmesh in skybox.Meshes)
{
    foreach (Effect currenteffect in modmesh.Effects)
    {
        currenteffect.CurrentTechnique = currenteffect.Techniques["Textured"];
        worldMatrix = Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateScale(2, 2, 2) * Matrix.CreateTranslation(spacemeshposition);
        currenteffect.Parameters["xWorld"].SetValue(worldMatrix);
        currenteffect.Parameters["xView"].SetValue(viewMatrix);
        currenteffect.Parameters["xProjection"].SetValue(projectionMatrix);
        currenteffect.Parameters["xTexture"].SetValue(skyboxtextures[i++]);
    }
    modmesh.Draw();
}

Once again, we cycle through all parts of our model. We're using the Textured technique because, well, our model has some textures that need to be drawn. Next, we define the World transform for the skybox. As you can see, it is moved along with our xwing. It's also rotated 90° so the top of the mesh points upward, and enlarged by a factor of 2 so our city fits inside it. That's it for this chapter! Running the code should give you an image as displayed below:
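The order of those matrix multiplications matters: with XNA's row-vector convention, the leftmost matrix is applied first, so the skybox is rotated and scaled around its own origin before being translated to the plane's position. System.Numerics (which follows the same convention) can demonstrate this with a simpler scale-and-translate pair:

```csharp
using System;
using System.Numerics; // same row-vector convention as XNA's Matrix

class MatrixOrder
{
    // Scale first, then translate: the point is doubled, then shifted.
    public static float ScaleThenTranslate(float x)
    {
        Matrix4x4 m = Matrix4x4.CreateScale(2f) * Matrix4x4.CreateTranslation(10f, 0f, 0f);
        return Vector3.Transform(new Vector3(x, 0f, 0f), m).X;
    }

    // Translate first, then scale: the shift itself gets doubled too.
    public static float TranslateThenScale(float x)
    {
        Matrix4x4 m = Matrix4x4.CreateTranslation(10f, 0f, 0f) * Matrix4x4.CreateScale(2f);
        return Vector3.Transform(new Vector3(x, 0f, 0f), m).X;
    }

    static void Main()
    {
        Console.WriteLine(ScaleThenTranslate(1f)); // (1*2) + 10 = 12
        Console.WriteLine(TranslateThenScale(1f)); // (1+10) * 2 = 22
    }
}
```

Swap the factors in the skybox's world matrix and you'll see the skybox end up in the wrong place, which is a quick way to convince yourself of the ordering rule.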

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Skybox.php>


This concludes the second series of XNA tutorials. You've come a long way, from drawing a simple triangle, to a real flight simulator! Of course, this is not the endpoint, so after you take a break, there's a 3rd series waiting for you.

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAseries2 { public class Game1 : Microsoft.Xna.Framework.Game { struct targetstruct { public Vector3 position; public float radius; } struct spritestruct { public Vector3 position; public Vector3 angles; } private struct ourspritevertexformat { private Vector3 position;

private float pointSize; public ourspritevertexformat(Vector3 position, float pointSize) { this.position = position; this.pointSize = pointSize; } public static VertexElement[] Elements = { new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0), new VertexElement(0, sizeof(float)*3, VertexElementFormat.Single, VertexElementMethod.Default, VertexElementUsage.PointSize, 0), }; public static int SizeInBytes = sizeof(float) * (3 + 1); } GraphicsDeviceManager graphics; ContentManager content; GraphicsDevice device; Effect effect; Matrix viewMatrix; Matrix projectionMatrix; Texture2D scenerytexture; int[,] int_Floorplan; int WIDTH; int HEIGHT; int differentbuildings = 5; private int[] buildingheights = new int[] { 0, 10, 1, 3, 2, 5 }; int maxtargets = 80; ArrayList targetlist = new ArrayList(); Model targetmodel; Texture2D bullettexture; ArrayList spritelist = new ArrayList(); Model skybox; Texture2D[] skyboxtextures;

VertexPositionNormalTexture[] verticesarray; ArrayList verticeslist = new ArrayList(); Model spacemodel; Vector3 spacemeshposition = new Vector3(8, 2, 1); Vector3 spacemeshangles = new Vector3(0, 0, 0); float gamespeed = 1.0f; public Game1() { graphics = new GraphicsDeviceManager(this); content = new ContentManager(Services); } protected override void Initialize() { LoadFloorplan(); SetUpVertices(); AddTargets(); base.Initialize(); } private void LoadFloorplan() { WIDTH = 20; HEIGHT = 15;

int_Floorplan = new int[,] { {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,1,1,0,0,0,1,1,0,0,1,0,1}, {1,0,0,1,1,0,0,0,1,0,0,0,1,0,1}, {1,0,0,0,1,1,0,1,1,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,1,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,1,1,0,0,0,1,0,0,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,0,0,0,0,0,1}, {1,0,0,0,0,1,0,0,0,1,0,0,0,0,1}, {1,0,1,0,0,0,0,0,0,1,0,0,0,0,1}, {1,0,1,1,0,0,0,0,1,1,0,0,0,1,1}, {1,0,0,0,0,0,0,0,1,1,0,0,0,1,1}, {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, }; Random random = new Random(); for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { if (int_Floorplan[x, y] == 1) { int_Floorplan[x, y] = random.Next(differentbuildings) + 1; } } } } private void SetUpXNADevice() { device = graphics.GraphicsDevice; graphics.PreferredBackBufferWidth = 500; graphics.PreferredBackBufferHeight = 500; graphics.IsFullScreen = false; graphics.ApplyChanges(); Window.Title = "Riemer's XNA Tutorials -- Series 2"; } private void LoadEffect() { CompiledEffect compiledEffect = Effect.CompileEffectFromFile("@/../../../../effects.fx", null, null, CompilerOptions.None, TargetPlatform.Windows); effect = new Effect(graphics.GraphicsDevice, compiledEffect.GetEffectCode(), CompilerOptions.None, null); viewMatrix = Matrix.CreateLookAt(new Vector3(20, 5, 13), new Vector3(8, 7, 0), new Vector3(0, 0, 1)); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xWorld"].SetValue(Matrix.Identity); effect.Parameters["xEnableLighting"].SetValue(true);

effect.Parameters["xLightDirection"].SetValue(new Vector3(0.5f, -1, -1)); effect.Parameters["xAmbient"].SetValue(0.4f); } private void SetUpVertices() { float imagesintexture = 1 + differentbuildings * 2; for (int x = 0; x < WIDTH; x++) { for (int y = 0; y < HEIGHT; y++) { int currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2((currentbuilding * 2 + 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y + 1, buildingheights[currentbuilding]), new Vector3(0, 0, 1), new Vector2(currentbuilding * 2 / imagesintexture, 0))); if (y > 0) { if (int_Floorplan[x, y - 1] != int_Floorplan[x, y]) { if (int_Floorplan[x, y - 1] > 0) { currentbuilding = int_Floorplan[x, y - 1]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 
0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, 1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, 1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, 0f), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f),

new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x + 1, y, buildingheights[currentbuilding]), new Vector3(0, -1, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(0, -1, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); } } } if (x > 0) { if (int_Floorplan[x - 1, y] != int_Floorplan[x, y]) { if (int_Floorplan[x - 1, y] > 0) { currentbuilding = int_Floorplan[x - 1, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } if (int_Floorplan[x, y] > 0) { currentbuilding = int_Floorplan[x, y]; verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / 
imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, 0f), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, 0f), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 1))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y + 1, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2(currentbuilding * 2 / imagesintexture, 0))); verticeslist.Add(new VertexPositionNormalTexture(new Vector3(x, y, buildingheights[currentbuilding]), new Vector3(-1, 0, 0), new Vector2((currentbuilding * 2 - 1) / imagesintexture, 0))); } } } } } verticesarray = (VertexPositionNormalTexture[])verticeslist.ToArray(typeof(VertexPositionNormalTexture)); } private void AddTargets()

{ Random random = new Random(); while (targetlist.Count < maxtargets) { int x = random.Next(WIDTH); int y = random.Next(HEIGHT); float z = (float)random.Next(2000) / 1000f + 1; float radius = (float)random.Next(1000) / 1000f * 0.2f + 0.01f; bool acceptableposition = true; if (int_Floorplan[x, y] > 0) acceptableposition = false; foreach (targetstruct currenttarget in targetlist) { if (((int)currenttarget.position.X == x) && ((int)currenttarget.position.Y == y)) acceptableposition = false; } if (acceptableposition) { targetstruct newtarget = new targetstruct(); newtarget.position = new Vector3(x + 0.5f, y + 0.5f, z); newtarget.radius = radius; targetlist.Add(newtarget); } } } protected override void LoadGraphicsContent(bool loadAllContent) { if (loadAllContent) { SetUpXNADevice(); LoadEffect();

scenerytexture = content.Load<Texture2D> ("texturemap"); bullettexture = content.Load<Texture2D> ("bullet"); LoadModels(); } } private void LoadModels() { spacemodel = FillModelFromFile("xwing"); targetmodel = FillModelFromFile("target");

skybox = content.Load<Model> ("skybox2"); int i = 0; skyboxtextures = new Texture2D[skybox.Meshes.Count]; foreach (ModelMesh mesh in skybox.Meshes) foreach (BasicEffect currenteffect in mesh.Effects) skyboxtextures[i++] = currenteffect.Texture; foreach (ModelMesh modmesh in skybox.Meshes) foreach (ModelMeshPart modmeshpart in modmesh.MeshParts) { modmeshpart.Effect = effect.Clone(device);

effect.Parameters["xEnableLighting"].SetValue(false); }

} private Model FillModelFromFile(string asset) {

Model mod = content.Load<Model> (asset); foreach (ModelMesh modmesh in mod.Meshes) foreach (ModelMeshPart modmeshpart in modmesh.MeshParts) modmeshpart.Effect = effect.Clone(device); return mod; } protected override void UnloadGraphicsContent(bool unloadAllContent) { if (unloadAllContent == true) { content.Unload(); } } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); ProcessKeyboard(gameTime.ElapsedGameTime.Milliseconds / 500.0f * gamespeed); UpdatePosition(ref spacemeshposition, spacemeshangles, gameTime.ElapsedGameTime.Milliseconds / 500.0f * gamespeed); UpdateCamera(); UpdateSpritePositions(gameTime.ElapsedGameTime.Milliseconds / 500.0f * gamespeed); if (CheckCollision(spacemeshposition, spacemodel.Meshes[0].BoundingSphere.Radius * 0.0005f) > 0) { spacemeshposition = new Vector3(8, 2, 1); spacemeshangles = new Vector3(0, 0, 0); gamespeed /= 1.1f; } base.Update(gameTime); } private void ProcessKeyboard(float speed) { KeyboardState keys = Keyboard.GetState(); if (keys.IsKeyDown(Keys.Right)) { spacemeshangles.Y -= 2.5f * speed; } if (keys.IsKeyDown(Keys.Left)) { spacemeshangles.Y += 2.5f * speed; } if (keys.IsKeyDown(Keys.Down)) { spacemeshangles.Z -= (float)(speed * Math.Sin(spacemeshangles.Y)); spacemeshangles.X -= (float)(speed * Math.Cos(spacemeshangles.Y)); } if (keys.IsKeyDown(Keys.Up)) {

spacemeshangles.Z += (float)(speed * Math.Sin(spacemeshangles.Y)); spacemeshangles.X += (float)(speed * Math.Cos(spacemeshangles.Y)); } if (keys.IsKeyDown(Keys.Space)) { double distance = 0; if (spritelist.Count > 0) { spritestruct lastsprite = (spritestruct)spritelist[spritelist.Count - 1]; distance = Math.Sqrt(Math.Pow(lastsprite.position.X - spacemeshposition.X, 2) + Math.Pow(lastsprite.position.Y - spacemeshposition.Y, 2) + Math.Pow(lastsprite.position.Z - spacemeshposition.Z, 2)); } if ((distance > 0.8f) || (spritelist.Count == 0)) { spritestruct newsprite = new spritestruct(); newsprite.position = spacemeshposition; newsprite.angles = spacemeshangles; spritelist.Add(newsprite); } } if (keys.IsKeyDown(Keys.V)) gamespeed = 0; } private void UpdatePosition(ref Vector3 position, Vector3 angles, float speed) { Vector3 addvector = new Vector3(); addvector.X += (float)(Math.Sin(angles.Z)); addvector.Y += (float)(Math.Cos(angles.Z)); addvector.Z -= (float)(Math.Tan(angles.X)); addvector.Normalize(); position += addvector * speed; } private void UpdateCamera() { Vector3 campos = new Vector3(0, -0.6f, 0.1f); Matrix camrot = Matrix.CreateRotationY(-(float)spacemeshangles.Y) * Matrix.CreateRotationX((float)spacemeshangles.X) * Matrix.CreateRotationZ(-(float)spacemeshangles.Z); campos = Vector3.Transform(campos, camrot); campos = Vector3.Transform(campos, Matrix.CreateTranslation(spacemeshposition)); Vector3 camup = new Vector3(0, 0, 1); camup = Vector3.Transform(camup, camrot); viewMatrix = Matrix.CreateLookAt(campos, spacemeshposition, camup); projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 0.2f, 500.0f); } private int CheckCollision(Vector3 position, float radius) { if (position.Z - radius < 0) return 1; if (position.Z + radius > 15) return 2; if ((position.X < -10) || (position.X > WIDTH + 10)) return 2; if ((position.Y < -10) || (position.Y > HEIGHT + 10)) return 2; if
((position.X - radius > 0) && (position.X + radius < WIDTH) && (position.Y - radius > 0) && (position.Y + radius < HEIGHT)) { if (position.Z - radius < buildingheights[int_Floorplan[(int)position.X, (int)position.Y]]) return 1; }

for (int i = 0; i < targetlist.Count; i++) { targetstruct currenttarget = (targetstruct)targetlist[i]; if (Math.Sqrt(Math.Pow(currenttarget.position.X - position.X, 2) + Math.Pow(currenttarget.position.Y - position.Y, 2) + Math.Pow(currenttarget.position.Z - position.Z, 2)) < radius + currenttarget.radius) { targetlist.RemoveAt(i); i--; return 3; } } return 0; } private void UpdateSpritePositions(float speed) { for (int i = 0; i < spritelist.Count; i++) { spritestruct currentsprite = (spritestruct)spritelist[i]; UpdatePosition(ref currentsprite.position, currentsprite.angles, speed * 5); spritelist[i] = currentsprite; int collisionkind = CheckCollision(currentsprite.position, 0.05f); if (collisionkind > 0) { spritelist.RemoveAt(i); i--; if (collisionkind == 3) { gamespeed *= 1.02f; AddTargets(); } } } } private void DrawSprites() { if (spritelist.Count > 0) { ourspritevertexformat[] spritecoordsarray = new ourspritevertexformat[spritelist.Count]; foreach (spritestruct currentsprite in spritelist) { spritecoordsarray[spritelist.IndexOf(currentsprite)] = new ourspritevertexformat(currentsprite.position, 50.0f); } effect.CurrentTechnique = effect.Techniques["PointSprites"]; Matrix worldMatrix = Matrix.Identity; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); this.effect.Parameters["xTexture"].SetValue(bullettexture); device.RenderState.AlphaBlendEnable = true; device.RenderState.SourceBlend = Blend.One; device.RenderState.DestinationBlend = Blend.One; effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin();

device.VertexDeclaration = new VertexDeclaration(device, ourspritevertexformat.Elements); device.DrawUserPrimitives(PrimitiveType.PointList, spritecoordsarray, 0, spritecoordsarray.Length); pass.End(); } effect.End(); } device.RenderState.AlphaBlendEnable = false; } protected override void Draw(GameTime gameTime) { device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); Matrix worldMatrix = Matrix.Identity; effect.CurrentTechnique = effect.Techniques["Textured"]; effect.Parameters["xWorld"].SetValue(worldMatrix); effect.Parameters["xView"].SetValue(viewMatrix); effect.Parameters["xProjection"].SetValue(projectionMatrix); effect.Parameters["xTexture"].SetValue(scenerytexture); effect.Begin(); foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Begin(); device.VertexDeclaration = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements); device.DrawUserPrimitives(PrimitiveType.TriangleList, verticesarray, 0, verticesarray.Length / 3); pass.End(); } effect.End(); foreach (ModelMesh modmesh in spacemodel.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"]; worldMatrix = Matrix.CreateScale(0.0005f, 0.0005f, 0.0005f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateRotationZ((float)Math.PI) * Matrix.CreateRotationY(-(float)spacemeshangles.Y) * Matrix.CreateRotationX(-(float)spacemeshangles.X) * Matrix.CreateRotationZ(-(float)spacemeshangles.Z) * Matrix.CreateTranslation(spacemeshposition); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); currenteffect.Parameters["xView"].SetValue(viewMatrix); currenteffect.Parameters["xProjection"].SetValue(projectionMatrix); } modmesh.Draw(); } foreach (targetstruct currenttarget in targetlist) { foreach (ModelMesh modmesh in targetmodel.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Colored"]; 
worldMatrix = Matrix.CreateScale(currenttarget.radius, currenttarget.radius, currenttarget.radius) * Matrix.CreateTranslation(currenttarget.position); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); currenteffect.Parameters["xView"].SetValue(viewMatrix); currenteffect.Parameters["xProjection"].SetValue(projectionMatrix); } modmesh.Draw(); }

}

int i = 0; foreach (ModelMesh modmesh in skybox.Meshes) { foreach (Effect currenteffect in modmesh.Effects) { currenteffect.CurrentTechnique = currenteffect.Techniques["Textured"]; worldMatrix = Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateScale(2, 2, 2) * Matrix.CreateTranslation(spacemeshposition); currenteffect.Parameters["xWorld"].SetValue(worldMatrix); currenteffect.Parameters["xView"].SetValue(viewMatrix); currenteffect.Parameters["xProjection"].SetValue(projectionMatrix); currenteffect.Parameters["xTexture"].SetValue(skyboxtextures[i++]); } modmesh.Draw(); }

DrawSprites(); base.Draw(gameTime); } } }

Welcome to this 3rd installment of my XNA Tutorials for C#. This Series has as its main objective to introduce HLSL to you, what it is, what you can do with it and, of course, how to use it. This 3rd Series is written to be a complete hands-on HLSL Tutorial. Here's a sample screenshot of what you'll create:

Once again, we'll start very small, by drawing a simple triangle, and move on to more advanced topics, integrating them into our project. This time, we will no longer be using my effects.fx file, as we will code our own. As the main goal of this Series, we will be rendering a scene which is lit by a light source. So what's so difficult about this? You should already have an idea about how to set up a light in a scene, using regular XNA code. In that case, however, all triangles in your scene would be lit by an amount of light that depends on how the triangle is facing your light source. But you would not see any shadows! This is because XNA doesn't know whether there are any objects between the triangle and the light source. So I thought having a light cast its shadows would already be a nice goal for an introductory Series on HLSL. Have a quick look at the screenshots at the bottom of this page. Don't be mistaken - this is already quite an advanced topic, and we'll move quickly through the first pages of this Series, as you already know how to draw triangles. This Series will put its main focus on HLSL. So, what do I expect you to know already? You can click on each item to be taken to the page where the concept was introduced.

Required (concepts that will be expanded on):
- Camera initialization
- Drawing triangles from a vertex buffer (only vertex buffers, no index buffers)
- Adding textures to triangles
- A basic understanding of lighting (dot product)

Optional (code simply copied from previous chapters):
- Effect loading
- Loading a textured model from file

So what are you waiting for? Let's move on to the first chapter!

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/series3.php>

Welcome to this 3rd installment of my Tutorials on XNA. Because this Series will cover a lot of ground, I would like to take a jumpstart by starting from the code presented below. As you can see on the screenshot below, it will only draw a simple triangle. Other than the check for shader 2.0 support (which is explained in the corresponding Short Tut), there is nothing in this code that hasn't been covered yet in the previous 2 series. Once again, the device variable is created in the SetUpXNADevice method, which is called from the LoadGraphicsContent method. I have included an empty LoadModels method, where we will load .x model files later on in this XNA tutorial. My standard effects.fx file is loaded so we are able to render the triangle, but soon my effect file will be replaced by one of your own.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Starting_point.php>


That's about all there is to say about our starting code. If you have been following my tutorials up to this point, simply copy-paste this code into Game Studio Express. Remember, you might have to change my namespace to yours (or vice versa). The only requirement is that both namespaces in the Game1.cs and Program.cs files are the same.

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAtutorialSeries3
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;
        Effect effect;
        Vector3 CameraPos;
        Matrix viewMatrix;
        Matrix projectionMatrix;
        VertexBuffer vb;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
            if (GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware).MaxPixelShaderProfile < ShaderProfile.PS_2_0)
                graphics.PreparingDeviceSettings += new EventHandler<PreparingDeviceSettingsEventArgs>(SetToReference);
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        private void SetUpVertices()
        {
            VertexPositionColor[] vertices = new VertexPositionColor[3];
            vertices[0] = new VertexPositionColor(new Vector3(-2, -2, 2), Color.Red);
            vertices[1] = new VertexPositionColor(new Vector3(0, 2, 0), Color.Yellow);
            vertices[2] = new VertexPositionColor(new Vector3(2, -2, -2), Color.Green);

            vb = new VertexBuffer(device, VertexPositionColor.SizeInBytes * 3, ResourceUsage.WriteOnly);
            vb.SetData(vertices);
        }

        private void SetUpXNADevice()
        {
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 3";
            device = graphics.GraphicsDevice;
        }

        void SetToReference(object sender, PreparingDeviceSettingsEventArgs e)
        {
            e.GraphicsDeviceInformation.CreationOptions = CreateOptions.SoftwareVertexProcessing;
            e.GraphicsDeviceInformation.DeviceType = DeviceType.Reference;
            e.GraphicsDeviceInformation.PresentationParameters.MultiSampleType = MultiSampleType.None;
        }

        private void LoadEffect()
        {
            effect = content.Load<Effect>("effects");

            CameraPos = new Vector3(0, -6, 5);
            viewMatrix = Matrix.CreateLookAt(CameraPos, new Vector3(0, -1, 0), new Vector3(0, 1, 1));
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)this.Window.ClientBounds.Width / (float)this.Window.ClientBounds.Height, 1.0f, 20.0f);
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent)
            {
                SetUpXNADevice();
                LoadEffect();
                LoadModels();
                SetUpVertices();
            }
        }

        private void LoadModels()
        {
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true)
            {
                content.Unload();
            }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);

            effect.CurrentTechnique = effect.Techniques["Colored"];
            effect.Parameters["xView"].SetValue(viewMatrix);
            effect.Parameters["xProjection"].SetValue(projectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);

            effect.Begin();
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Begin();
                device.VertexDeclaration = new VertexDeclaration(device, VertexPositionColor.VertexElements);
                device.Vertices[0].SetSource(vb, 0, VertexPositionColor.SizeInBytes);
                device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
                pass.End();
            }
            effect.End();

            base.Draw(gameTime);
        }
    }
}

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Starting_point.php>

Welcome to this introduction on XNA and HLSL.

- HLSL - What?!?
HLSL is the High Level Shader Language.

- But I don't care about this HLSL, just show me some more XNA code I can copy-paste into my own application!
Indeed, I could go on showing you more and more XNA commands, defining some more renderstates, etc. Looking a bit further, though, it's clear that all of these commands are at some point translated into commands for the hardware: the graphics card in your PC. Before 2002, game programmers could only use the Fixed Function Pipeline, meaning the commands provided by DirectX. Since DirectX 8, a lot of flexibility has been added to the way programmers can control their graphics cards. Since then, it has been possible to directly program the vertex and pixel shaders in the GPU, the Graphical Processing Unit. This way, programmers are able to program every graphical effect they can think of, thus bypassing the limited set of XNA instructions.

- So what you're saying is that I can throw away everything I've learned about XNA programming and start learning HLSL?
By all means, no. We're still going to need a full 100% of what we've seen up till now. The difference is that this time we're going to write our own effects. In a few chapters, you'll see what is happening in there, how vertices are transformed, and so on.

- Why would I care about all these low-level commands? The nice thing about XNA is that it takes care of all the maths for us!
The more you can do manually, the more power you have over what is actually drawn on the screen. Hey, this is the 3rd Series; it's time we move on to something more advanced! HLSL is still a 'high level' language, so you won't be seeing any low-level commands, like assembler.

- So, in a nutshell, why would I want to start using this HLSL?
HLSL is used not to improve the gameplay, but to enhance the quality of the final image. Every vertex that is drawn will pass through your vertex shader, and every pixel drawn will have passed through your pixel shader. The shaders can perform pretty much any manipulation you can think of on their data. HLSL is the only missing link between XNA code and what you see on the screen, so no doubt you'll benefit from this knowledge. It is also incredibly useful when debugging.

To demonstrate the use of HLSL and shaders, I have written this 3rd Series of XNA tutorials. Have a look at the lighting on one of the screenshots: you can see all lights cast shadows. This is a nice example of something that would be quite impossible to achieve without shaders. As with the previous Series, we'll start with the basics and gradually build up our application. In the end, you'll have a complete overview of the meaning of shaders, and a good understanding of what you can do with them! Pretty much what a tutorial should do, I guess...

So much for this introduction to HLSL. You might still be wondering where HLSL fits into the big picture. The image below demonstrates this, and will be explained while writing our first vertex and pixel shaders in the next 2 chapters.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/HLSL_introduction.php>



Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Vertex_format.php>

Defining our own vertex format

Let's have a look at the starting point in our flowchart: the big arrow starting in our XNA app, going to our vertex shader:

It represents the flow of vertex data from our XNA app to the vertex shader on the graphics card. This flow is triggered every time we issue some kind of draw command in XNA. When we pass the vertex data from our XNA app to our vertex shader, we need to pass some information with it, describing what kind of data is contained in the vertex stream. Using shaders, you need to specify exactly what information can be found in the stream, and where.

So what we need is: a structure that can hold the necessary data for each vertex, and a definition of that data, so the vertex shader knows which data is included with every vertex. In the starting code, we've been using the VertexPositionColor struct, which satisfies both requirements. Here, we're going to define a new struct, myownvertexformat, that will be exactly the same as the VertexPositionColor struct, to see what's in there and why it is needed. This will allow us to expand it further on in this Series. To satisfy the 2 requirements, in the example of a simple colored triangle, we need our vertices to hold Position data as well as Color data. So this is how we start our struct (you can put this at the top of your code):

private struct myownvertexformat
{
    private Vector3 position;
    private Color color;

    public myownvertexformat(Vector3 position, Color color)
    {
        this.position = position;
        this.color = color;
    }
}

We simply defined a new structure, and defined it so it can hold a vector3 and a color. We also defined a constructor, so later in our program we can create and fill a new instance of this struct in one line. The first requirement has been satisfied.

Next, we need a way to tell the vertex shader what the data represents. Although we gave the elements straightforward names (position, color), the vertex shader needs to be told explicitly which data it will receive. To this end, we need to create an array of VertexElements, which will contain 1 entry for each type of data accompanying each vertex. Put this code inside, at the bottom of our struct:

public static VertexElement[] Elements =
{
    new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Color, VertexElementMethod.Default, VertexElementUsage.Color, 0),
};
public static int SizeInBytes = sizeof(float) * (3 + 1);

For each type of data, we define how many bytes it occupies, what it is used for, and where it can be found. The first argument is the number of the data stream we'll be describing. Because we'll only be using one, we indicate 0. The second argument is very important: it indicates at which offset IN BYTES the type of data can be found in the vertex stream. So the first type of data, the 3D position, starts at offset 0. Next, we indicate how that kind of data is stored, such as int, float1, short4 and more. A position comes in a Vector3 (which is composed of 3 floats). Concerning the next argument, you'll almost always want to use VertexElementMethod.Default; maybe I can write a small Short Tut later on the other uses. The next argument is the most important one: it describes the kind of information, such as position, color, texture coordinate, tangent, etc. This is needed so XNA can automatically link the right data from our vertex stream to the right variables in our shaders. For the last argument: suppose you would like to add 2 textures to a triangle. This implies you would have to pass 2 sets of texture coordinates along with each vertex. For this, you can use the last argument, which is the index of each kind of info. Since we'll only be passing 1 position and 1 color for each vertex, this will be 0 for both lines.

For our color, we do pretty much the same. It is still part of vertex stream 0, but because it is preceded by the position, it can't be found at offset 0. The position consists of 3 floats, and this is what we need to indicate: a float occupies 4 bytes, so sizeof(float)*3 is 12, which is the offset in bytes to the color information. Although a color is in fact a Vector4 (4 floats: the RGBA values), we need to specify Color, because each value needs to be mapped into the range [0..1] before it can be passed to the vertex shader as a color. This is an exception. We also supply our myownvertexformat with a SizeInBytes member, which stores the number of bytes 1 vertex occupies in memory. Because the position uses 3 floats and the packed color occupies the size of 1 float, it will take up the memory of 4 floats, which is 16 bytes. Now let's change our code so we'll be using our own myownvertexformat instead of the VertexPositionColor struct. Change our SetUpVertices to this:

private void SetUpVertices()
{
    myownvertexformat[] vertices = new myownvertexformat[3];
    vertices[0] = new myownvertexformat(new Vector3(-2, -2, 2), Color.Red);
    vertices[1] = new myownvertexformat(new Vector3(0, 2, 0), Color.Yellow);
    vertices[2] = new myownvertexformat(new Vector3(2, -2, -2), Color.Green);

    vb = new VertexBuffer(device, myownvertexformat.SizeInBytes * 3, ResourceUsage.WriteOnly);
    vb.SetData(vertices);
}
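Since the byte offsets in the VertexElement declaration are easy to get wrong, here is a quick sanity check of the arithmetic in plain C#. This is ordinary console code, not part of the XNA project, and the variable names are only illustrative:

```csharp
using System;

class VertexLayoutCheck
{
    static void Main()
    {
        // Offsets and size of myownvertexformat, mirroring the VertexElement declaration:
        int positionOffset = 0;                    // the Vector3 position starts the vertex
        int colorOffset = sizeof(float) * 3;       // 3 floats of 4 bytes each = 12 bytes
        int sizeInBytes = sizeof(float) * (3 + 1); // the packed color adds 4 bytes, like 1 float

        Console.WriteLine(positionOffset); // prints 0
        Console.WriteLine(colorOffset);    // prints 12
        Console.WriteLine(sizeInBytes);    // prints 16
    }
}
```

If these numbers don't match your VertexElement array, XNA will read the wrong bytes for each vertex component.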

Also, when we instruct XNA to draw our triangle, we need to inform it our vertices are defined as myownvertexformats:

device.VertexDeclaration = new VertexDeclaration(device, myownvertexformat.Elements);
device.Vertices[0].SetSource(vb, 0, myownvertexformat.SizeInBytes);

OK, we have recreated the pre-built VertexPositionColor struct. When you run the code, you should see the same triangle.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Vertex_format.php>

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Vertex_format.php>

Last chapter, we've seen how to create our own vertex format. In this chapter, you will write your first HLSL code, together with your first vertex shader. This would be a nice moment to have another look at our flowchart below. You notice the big arrow from our XNA app toward the vertex shader. At this point, we have the vertex stream (the position and color of our 3 vertices), as well as the metadata, which describes what's in this vertex stream, and the memory size of each vertex. When looking at the image, you'll see it's time to begin coding our vertex shader! Although one of the main goals of my tutorials is to keep all code in 1 file, we cannot get around this one. You'll have to create a new empty effect file and give it a name (I named mine OurHLSLfile.fx). Now you have to choose which program you're going to use to code your .fx file. Of course you can use Visual Studio, but I don't really like the compiler output. To code .fx files, most people use NVIDIA's FX Composer, which you can download here for free. After installation, you can simply double-click on your .fx file (just an empty file you created with the .fx extension), and it'll be opened in FX Composer. If you're presented with an empty screen, find the dark grey block at the left of your screen, and click on the only item in it. You should now see your file.

To edit the file in Visual Studio, go to your Project menu, and find Add existing item. Now select your .fx file, and you'll see that it's been added to the Solution Explorer in Visual Studio. Whether you're using Visual Studio or NVIDIA's FX Composer, you'll be presented with an empty coding page. We'll now start putting some HLSL code in the .fx file. Although HLSL is not 100% the same as C# code, you will have absolutely no problem reading and writing it. I could give you an extremely dry summary of the HLSL grammar, but I prefer to introduce the syntax to you by showing some examples. At the end of this Series, you'll be able to read and write almost any HLSL code you want. As you have experienced throughout the previous Series, an .fx file can describe one or more techniques. One technique can have multiple passes, as you'll see in the next chapters, but let's start by defining a simple technique with only one pass. You can already put this as your first HLSL code in your .fx file:

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_1_1 SimplestVertexShader();
        PixelShader = NULL;
    }
}

This defines a technique, Simplest, which has one pass. This pass has a vertex shader (SimplestVertexShader), but no pixel shader. This indicates our vertex shader will pass its output data to the default pixel shader. In FX Composer, you can test your code by pressing Ctrl+S, which also saves the file. Before we start coding our vertex shader, we had better define a structure to hold the data our vertex shader will send to the default pixel shader. The vertex shader method we will create, SimplestVertexShader, will simply transform the vertices it receives from our XNA app to screen pixel coordinates, and send them together with the color to the pixel shader. So put this code at the top of your .fx file:

struct VertexToPixel
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
};

This again looks very much like C#, except for the :POSITION and :COLOR0. These are called semantics, and they indicate how our GPU should use the data. More on this in the next paragraph. Let's start our SimplestVertexShader method, so you'll understand this better. Place this method between the structure definition and our technique definition:

VertexToPixel SimplestVertexShader(float4 inPos : POSITION)
{
    VertexToPixel Output = (VertexToPixel)0;

    Output.Position = mul(inPos, xViewProjection);
    Output.Color = 1.0f;

    return Output;
}

This again looks a lot like C#. The first line indicates our method (our vertex shader) will return a filled VertexToPixel structure to the pixel shader. It also indicates your vertex shader receives the position from your vertex stream, as indicated by the POSITION semantic. This is very important: it links the data inside your vertex stream (as indicated in your VertexDeclaration) to your HLSL code. Remember this method is called for every vertex in your vertex stream. The first line in the method creates an empty output structure. The second line takes the 3D coordinates of the vertex, and transforms them to 2D screen coordinates by multiplying them by the combination of the View and Projection matrices. For more information on this, you can always have a look at the Matrix sessions in my 'Extra Reading' section, which you can find at the right of every page. Then we fill the Color member of the output structure. When you look at the definition of our output structure, you'll see this has to be a float4: one float for each of the 3 color components, and an extra float for the alpha (transparency) value. You could fill this color by using the following code:

Output.Color.r = 1.0f;
Output.Color.g = 0.0f;
Output.Color.b = 1.0f;
Output.Color.a = 1.0f;

This would indicate purple, as you combine red and blue. The following code specifies exactly the same:

Output.Color.rba = 1.0f;
Output.Color.g = 0.0f;

This is called a swizzle, and it helps you to code faster. Instead of rgba, you can also use xyzw. The rgba swizzle is usually used when working with colors, while the xyzw swizzle is used in combination with coordinates, but they are exactly the same. You can also use indices, which is useful in an algorithm:

Output.Color[0] = 1.0f;
Output.Color[1] = 0.0f;
Output.Color[2] = 1.0f;
Output.Color[3] = 1.0f;

In our example above, our vertex shader simply sets Output.Color = 1.0f, which means the 4 components of the color are all set to 1.0f, which corresponds to white. So our vertex shader will transform our 3D vertices to 2D screen coordinates, and pass them together with the color white to the default pixel shader. This means in our case of 1 triangle, our pixel shader will draw a solid white triangle to the window.

There's still something missing. When you press Ctrl+S (save and compile) in NVIDIA's FX Composer, you will see we still need to define xViewProjection. So put this line just above your vertex shader method:

float4x4 xViewProjection;

This indicates xViewProjection is a matrix with 4 rows and 4 columns, so it can hold a standard XNA matrix. Our XNA app will fill this matrix in the next chapter. That's it for our first HLSL code! Of course, we still need to call the technique from our XNA app, as well as set the xViewProjection matrix. Because this chapter would otherwise become too lengthy, we'll discuss the XNA part in the next chapter.
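The mul(inPos, xViewProjection) call in the vertex shader above is just a row-vector-times-matrix product. To make the operation concrete, here is a sketch of that product in plain C#. This is a simplified illustration only, not XNA or shader code, and the class and method names are made up:

```csharp
using System;

class MulSketch
{
    // Row-vector times 4x4 matrix: the operation HLSL's mul(inPos, xViewProjection)
    // performs for every vertex. m[r, c] is the element at row r, column c.
    public static float[] Mul(float[] v, float[,] m)
    {
        float[] result = new float[4];
        for (int c = 0; c < 4; c++)
            for (int r = 0; r < 4; r++)
                result[c] += v[r] * m[r, c];
        return result;
    }

    static void Main()
    {
        // With the identity matrix, the vertex is returned unchanged.
        float[,] identity =
        {
            { 1, 0, 0, 0 },
            { 0, 1, 0, 0 },
            { 0, 0, 1, 0 },
            { 0, 0, 0, 1 },
        };
        float[] pos = { -2, -2, 2, 1 }; // w = 1 for a position
        float[] result = Mul(pos, identity);
        Console.WriteLine(string.Join(",", result)); // prints -2,-2,2,1
    }
}
```

In the real shader, xViewProjection is of course not the identity: it is the product of the View and Projection matrices, so the multiplication carries each vertex from 3D world space into 2D screen coordinates.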

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Vertex_shader.php> The first Vertex Shader Last chapter we've seen how to create our own vertex format. This chapter you will write your first HLSL code, together with your first vertex shader. This would be a nice moment to have another look at our flowchart below. You notice the big arrow from our XNA app toward the vertex shader. At this point, we have the vertex stream (the position and color of our 3 vertices), as well as the metadata, which describes what's in this vertex stream, and the memory size of each vertex. When looking at the image, you'll see it's time to begin coding on our vertex shader! Although one of the main goals of my tutorials is to keep all code in 1 file, we cannot get around this one. You'll have to create a new empty effect file and give it a name (I named mine OurHLSLfile.fx). Now you have to choose which program you're going to use to code your .fx file. Of course you can use Visual Studio, but I don't really like the compiler output. To code .fx files, most people use NVIDIA's FX Composer, which you can download here for free. After installation, you can simply double-click on your .fx file (just an empty file you created with the .fx extension), and it'll be opened in FX Composer. If you're presented with an empty screen, find the dark grey block at the left of your screen, and click on the only item in it. You should now see your file.

To edit the file in Visual Studio, go to your Project menu, and find Add existing item. Now select your .fx file, and you'll see that it's been added to the solution explorer in Visual Studio. Whether you're using Visual Studio or NVidia's FX Composer, you'll be presented an empty coding page. We'll now start putting some HLSL code in the .fx file. Although HLSL is not 100% the same as C# code, you will have absolutely no problem reading and writing the code. I could give you an extremely dry summary of the HLSL grammar, but I prefer to introduce you the syntax by showing some examples. At the end of this Series, you'll be able to read and write almost any HLSL code you want. As you have experienced throughout the previous series, an .fx file can describe one or more techniques. One technique can have multiple passes as you'll see in the next chapters, but let's start by defining a simple technique with only one pass. You can already put this as your first HLSL code in your .fx file:

technique Simplest { pass Pass0 { VertexShader = compile vs_1_1 SimplestVertexShader(); PixelShader = NULL; } }

This defines a technique, Simplest, which has one pass. This pass has a vertex shader (SimplestVertexShader), but no pixelshader. This indicates our vertex shader will pass its output data to the default pixel shader. In FX Composer, you can test your code by pressing Ctrl+S, which also saves the file. Before we start coding our vertex shader, we would better define a structure to hold the data our vertex shader will send to the default pixel shader. The vertex shader method we will create, SimplestVertexShader, will simply transform the vertices it receives from our XNA app to screen pixel coordinates, and send them together with the color to the pixel shader, so put this code at the top of your .fx file:

struct VertexToPixel {

float4 Position float4 Color };

: POSITION; : COLOR0;

This again looks very much like C#, only for the :POSITION and :COLOR0. These are called semantics and indicate how our GPU should use the data. More on this in the next paragraph. Let's start our SimplestVertexShader method, so you'll better understand this. Place this method between the structure definition and our technique definition:

VertexToPixel SimplestVertexShader(float4 inPos : POSITION) { VertexToPixel Output = (VertexToPixel)0; Output.Position = mul(inPos, xViewProjection); Output.Color = 1.0f; return Output; }

This again looks a lot like C#. The first line indicates our method (our vertex shader) will return a filled VertexToPixel structure to the pixel shader. It also indicates your vertex shader receives the position from your vertex stream, as indicates by the POSITION semantic. This is very important: it links the data inside your vertex stream (as indicated in your VertexDeclaration) to your HLSL code. Remember this method is called for every vertex in your vertex stream. The first line in the method creates an empty output structure. The second line takes the 3D coordinates of the vertex, and transforms them to 2D screen coordinates by multiplying them by the combination of the View and Projection matrix. For more information on this, you can always have a look at the Matrix sessions in my `Extra Reading' section, which you can find at the right of every page. Then we fill the Color member of the output structure. When you look at the definition of our output structure, you'll see this has to be a float4: one float for each of the 3 color components, and an extra float for the alpha (transparency) value. You could fill this color by using the following code:

Output.Color.r = 1.0f;
Output.Color.g = 0.0f;
Output.Color.b = 1.0f;
Output.Color.a = 1.0f;

This would indicate purple, as you combine red and blue. The following code specifies exactly the same:

Output.Color.rba = 1.0f;
Output.Color.g = 0.0f;

This is called a swizzle, and helps you to code faster. Instead of rgba, you can also use xyzw. The rgba swizzle is usually used when working with colors, while the xyzw swizzle is used in combination with coordinates, but they are exactly the same. You can also use indices, which is useful in an algorithm:

Output.Color[0] = 1.0f;
Output.Color[1] = 0.0f;
Output.Color[2] = 1.0f;
Output.Color[3] = 1.0f;
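All of these notations write the same four floats. As a language-neutral illustration (a Python sketch, with a hypothetical Float4 class standing in for HLSL's float4 type, not part of the XNA project), a swizzle assignment simply expands to several component assignments:

```python
# Minimal stand-in for HLSL's float4 color type, for illustration only.
class Float4:
    COMPONENTS = {"r": 0, "g": 1, "b": 2, "a": 3,
                  "x": 0, "y": 1, "z": 2, "w": 3}

    def __init__(self):
        self.v = [0.0, 0.0, 0.0, 0.0]

    def set_swizzle(self, pattern, value):
        # e.g. set_swizzle("rba", 1.0) writes r, b and a at once
        for name in pattern:
            self.v[self.COMPONENTS[name]] = value

# Per-component assignments...
a = Float4()
a.set_swizzle("r", 1.0)
a.set_swizzle("g", 0.0)
a.set_swizzle("b", 1.0)
a.set_swizzle("a", 1.0)

# ...and the equivalent swizzled form:
b = Float4()
b.set_swizzle("rba", 1.0)
b.set_swizzle("g", 0.0)

print(a.v == b.v)  # -> True
```

Both orderings end up with the same (1, 0, 1, 1) color; on the GPU the swizzle is free, it just selects which registers are written.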

In our example above, our vertex shader simply sets Output.Color = 1.0f, which means the 4 components of the color are all set to 1.0f, which corresponds to white. So our vertex shader will transform our 3D vertices to 2D screen coordinates, and pass them together with the color white to the default pixel shader. This means in our case of 1 triangle, our pixel shader will draw a solid white triangle to the window. There's still something missing. When you press Ctrl+S (save and compile) in NVIDIA's FX Composer, you will see we still need to define xViewProjection. So put this line just above your vertex shader method:

float4x4 xViewProjection;

This indicates xViewProjection is a matrix with 4 rows and 4 columns, so it can hold a standard XNA matrix. Our XNA app will fill this matrix in the next chapter. That's it for our first HLSL code! Of course, we still need to call the technique from our XNA app, as well as set the xViewProjection matrix. Because this chapter would otherwise become too lengthy, we'll discuss the XNA part in the next chapter.
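Combining View and Projection into one matrix on the CPU works because matrix multiplication is associative: transforming a row vector by View and then by Projection gives exactly the same result as transforming it once by their product. A small numeric sketch of this (plain Python with made-up matrices, not XNA code):

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vec_mat(v, m):
    # row vector times matrix: the convention HLSL's mul(inPos, matrix) uses
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

# two arbitrary, made-up 4x4 matrices standing in for View and Projection
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 3, -1, 1]]
proj = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 1], [0, 0, -1, 0]]

pos = [1.0, 2.0, 3.0, 1.0]  # a vertex position with w = 1

step_by_step = vec_mat(vec_mat(pos, view), proj)
combined = vec_mat(pos, mat_mul(view, proj))

print(step_by_step == combined)  # -> True
```

This is why the XNA app can pass the single product viewMatrix*projectionMatrix into xViewProjection, and the shader needs only one mul per vertex.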

Here you can find already what you should have as HLSL code:

struct VertexToPixel
{
    float4 Position : POSITION;
    float4 Color    : COLOR0;
};

float4x4 xViewProjection;

VertexToPixel SimplestVertexShader(float4 inPos : POSITION)
{
    VertexToPixel Output = (VertexToPixel)0;

    Output.Position = mul(inPos, xViewProjection);
    Output.Color = 1.0f;

    return Output;
}

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_1_1 SimplestVertexShader();
        PixelShader = NULL;
    }
}

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Vertex_shader.php>

Up to this point, we have a vertex buffer, filled with only 3 vertices defining a single triangle. We have the metadata: the VertexDeclaration, which describes what kind of data is encapsulated in the vertex data, together with the offset to that kind of data. We also have a very simple vertex shader. From our vertex stream, it extracts only the position data. For each vertex, this 3D position is transformed to 2D screen coordinates, and passed on to the pixel shader. To perform this transformation, we multiply each vertex with the matrix which is the combination of the View and Projection matrix, which is at this point not yet being passed on to our HLSL code. In our XNA app, it is time to load the effect file we created, and set the transformation matrix. So import the effect file into the Solution Explorer, like you have done before with images. You should see an additional asset in your Solution Explorer. Let's load this asset into our effect variable, which is done in the LoadEffect method:

effect = content.Load<Effect> ("OurHLSLfile");

We're ready to move on to the Draw method. To draw our triangle using our own technique, we first have to specify our technique and set its parameters. In our case, the only parameter we have to set is xViewProjection, which is the combination of the viewMatrix and the projectionMatrix. So replace the existing code with this code:

effect.CurrentTechnique = effect.Techniques["Simplest"];
effect.Parameters["xViewProjection"].SetValue(viewMatrix * projectionMatrix);

Make sure you remove the lines where you try to set other parameters such as xView, xWorld, etc., because these don't exist in our .fx file. Anyway, our XNA code is ready! However, when you try to run your program, you'll get an error stating `Both a valid vertex shader and pixel shader (or valid effect) must be set on the device before draw operations may be performed'. This is because although our technique contains a vertex shader, it doesn't yet contain a valid pixel shader! So let's go back to our .fx file. The pixel shader receives its input (position and color) from our vertex shader, and needs to output only color. So let's define its output structure at the top of our .fx file:

struct PixelToFrame
{
    float4 Color : COLOR0;
};

Our first pixel shader will be a very simple method; here it is:

PixelToFrame OurFirstPixelShader(VertexToPixel PSIn)
{
    PixelToFrame Output = (PixelToFrame)0;

    Output.Color = PSIn.Color;

    return Output;
}

First, an output structure is created, and the color received from the vertex shader is put in the output structure. That's all it does! Now we still need to set this method as pixel shader for our technique, at the bottom of the file:

PixelShader = compile ps_1_1 OurFirstPixelShader();

That's it! We have 3 colored 3D vertices, pass them to the vertex shader which transforms them into 2D positions and pass them on to the pixel shader, which simply puts the color on the screen! Save your .fx file by hitting Ctrl+S, and run your XNA program! You should see the same as the image below: a white triangle. White, because we coded our vertex shader to draw every vertex white. Not a lot of fun, so let's add some color to it. Go to the vertex shader in your .fx file, and update it to this code:

VertexToPixel SimplestVertexShader(float4 inPos : POSITION, float4 inColor : COLOR0)
{
    VertexToPixel Output = (VertexToPixel)0;

    Output.Position = mul(inPos, xViewProjection);
    Output.Color = inColor;

    return Output;
}

Remember the vertices in our vertex buffer also contain color information? So now we can simply use the color of every vertex as input to our vertex shader. This is what is updated in the first line of the code above. The only change made to the interior of the method is that we route the color we get from our XNA app straight to the output of the vertex shader, instead of plain white. Now, when you run this code, you should see the same colored triangle as before. Our vertex shader simply transforms the 3D coordinates to 2D screen coordinates, and passes these coordinates together with the correct color to the pixel shader. This pixel shader doesn't change the color, and passes it on to the screen.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Pixel_shader.php>

So now our XNA program passes vertices to our vertex shader, which transforms their coordinates to 2D screen coordinates and sends these coordinates together with the color to the pixel shader, which takes the color and draws the pixel on the screen. Now have another look at our flowchart, as last chapter we skipped a few steps to finally get something on the screen. You'll notice the rasterizer between the vertex and the pixel shader. This rasterizer determines which pixels on the screen are occupied by our triangle, and makes sure these pixels are also sent to the pixel shader. Without this rasterizer, only the 3 points corresponding to the vertices would be sent to the pixel shader. And now something important: what would be the color of these extra pixels, which the pixel shader receives at its input? The interpolator next to the rasterizer calculates this value, by interpolating the color value of the corner points. This means that a pixel exactly in the middle between a blue and a red corner point will get the color purple. In the example of our program, this means for all pixels the triangle occupies, the colors are nicely shaded from corner to corner. Our next code will demonstrate this, and how this can cause problems. So you might be asking yourself, 'What's the use of all this?'. Even at this point, you can see we can perform some extra manipulations. For example, you could change the position or color of each vertex. You could also do this in your XNA app, but then your CPU would have to perform those calculations, which would lower your framerate. Now you can have these calculations done by the GPU, which is A LOT faster at it, leaving your CPU free to perform more important calculations. Using the vertex shader, you could also adjust the color, which we've done before (we made our whole triangle white) and which we'll do now again as a little exercise. So for this example, we will throw away the color information provided to us by the vertex stream, and define our own colors. Say we want our vertex shader to make the red color component indicate the y coordinate of each vertex, the green component the x coordinate, and the blue component the z coordinate. To do this, insert this line of code into your vertex shader:

Output.Color.rgb = inPos.yxz;

Note that for any color component, a value of 0 or below means `none', and 1 or above means `full'. You'll notice I have chosen the coordinates of our vertices in the XNA app within the [-2,2] range, which I did especially for this example. So, for every color component, you could expect the color to be absent in the [-2,0] region, then shaded from `none' to full color in the [0,1] region, and to remain at full color in the [1,2] region. For example, the point (0,0,0), which is part of our triangle, should be drawn completely black (0 red component, 0 green component and 0 blue component). (If you have trouble imagining this, you can have a peek at the bottom image of this page, where the correct colors are shown.) Try to run the code with the new line instead of the Output.Color = inColor; line in our vertex shader. You'll see the colors remain nicely shaded over the full size of the triangle, which isn't what we wanted (for example, the (0,0,0) point in the very middle of our triangle should be black). So what is happening? Before the colors are passed to the interpolator, every color value is clipped to the [0,1] region. For example, the (-2,-2,2) vertex should have -2, -2 and 2 as rgb color values, but it gets 0, 0 and 1 instead. Next, the interpolator simply gives every pixel in the triangle the color that is the interpolation of the 3 clipped color values. So, for example, the (0,0,0) point gets a color value that is an interpolation of color values within the [0,1] region, and thus will never be completely (0,0,0) (=black). In short: when only using a vertex shader, the colors of a triangle can only change linearly, and then only between values clipped to the [0,1] range.
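The clip-then-interpolate behaviour can be mimicked in a few lines of arithmetic (a Python sketch, not GPU code): the per-vertex colors are first clipped to [0,1], and only afterwards interpolated, so the interior of the triangle can never reach black.

```python
def clamp01(c):
    # clip every component to the [0,1] region, as the GPU does
    # before handing vertex colors to the interpolator
    return [min(max(x, 0.0), 1.0) for x in c]

def interpolate(values, weights):
    # barycentric interpolation of three per-vertex values
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# rgb values derived from the vertex positions via pos.yxz,
# as in the example; they fall outside [0,1] and get clipped first
vertex_colors = [[-2.0, -2.0, 2.0], [2.0, 0.0, 0.0], [-2.0, 2.0, -2.0]]
clipped = [clamp01(c) for c in vertex_colors]

# a pixel weighted equally between the three corners
center = interpolate(clipped, [1 / 3, 1 / 3, 1 / 3])
print(center)  # every channel is 1/3 -- not black
```

Because interpolation happens between the already-clipped corner values, no interior pixel can land outside the convex hull of those three [0,1] colors.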

When you take a look at the flowchart, you'll notice there are 2 arrows going from the vertex shader to the pixel shader. The left one is necessary, as it is the position of the pixel in 2D screen coordinates. The rasterizer and interpolator, as well as the pixel shader, need it to do their work. One remark: by default you can NOT use this position as input to your pixel shader. The bigger arrow is not necessary, but you'll always want to use it, as it contains the data the pixel shader uses as input. This can be, for example, the interpolated color, interpolated normal, interpolated texture coordinates, etc. I guess by now you noticed the emphasis I put on the word `Interpolated'. If you want, you can also pass a copy of the 2D screen position in this arrow, so your pixel shader can use it as input.

When you look at the flowchart, you'll see the pixel shader eventually sends its output to the frame buffer. This is where multiple renderings get blended in case of alpha blending. Now it's time to show you something specific about pixel shaders: we're going to program the color of each of the pixels separately. We're going to do the same exercise again, only this time using a pixel shader. Remember, we want each of the 3 color channels to take the values of each of the three 3D coordinates of the pixel. Because we're going to derive the color from the 3D position, we need the 3D position as input to our pixel shader. The only coordinate currently being passed to the pixel shader is the 2D screen position (remember we cannot use this value in our pixel shader, as it is the left arrow in our flowchart). So what we need to do is pass the 3D coordinate from our vertex shader to our pixel shader. First redefine the output structure of your vertex shader:

struct VertexToPixel
{
    float4 Position   : POSITION;
    float4 Color      : COLOR0;
    float4 Position3D : TEXCOORD0;
};

You see we added a member to store the 3D position. As semantic, we used TEXCOORD0. You can use TEXCOORD0 to TEXCOORD15 to pass float4 values from your vertex shader to your pixel shader. Next we'll update our vertex shader so it routes the 3D position it receives from the vertex stream to its output. To do this, we simply have to add the following line to our vertex shader:

Output.Position3D = inPos;

This will have our 3D position sent to the interpolator, which will interpolate the correct 3D position of every pixel in the triangle. For each pixel, this interpolated 3D coordinate will be sent to the pixel shader. Now we will have our pixel shader set the color components to the value of this 3D coordinate. In your pixel shader, put this line as color definition:

Output.Color.rgb = PSIn.Position3D.yxz;

By now you should know what this does: it sets the value of the y coordinate as the red color component, and so on. Compile the code by pressing Ctrl+S. If you did everything right, you shouldn't see any error messages. Now go to Game Studio Express, and run the code. You should see the same as the image below: a simple triangle, with pixels displaying colors that aren't linear interpolations of the colors of the corner points. So we have programmed the color of every pixel individually! As a quick check, you can see that now the (0,0,0) middle point of our triangle is black.
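The difference can be checked with a few lines of arithmetic (a Python sketch, not GPU code): here the 3D position is interpolated first, and the color is derived per pixel afterwards, so the pixel at (0,0,0) really does come out black.

```python
def clamp01(c):
    return [min(max(x, 0.0), 1.0) for x in c]

def interpolate(values, weights):
    # barycentric interpolation of three per-vertex values
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# the three vertex positions of our triangle
positions = [[-2.0, -2.0, 2.0], [0.0, 2.0, 0.0], [2.0, -2.0, -2.0]]

# barycentric weights for which the interpolated position is (0,0,0)
weights = [0.25, 0.5, 0.25]
pixel_pos = interpolate(positions, weights)
print(pixel_pos)  # -> [0.0, 0.0, 0.0]

# per-pixel color: rgb = position.yxz, clipped only at the very end
color = clamp01([pixel_pos[1], pixel_pos[0], pixel_pos[2]])
print(color)  # -> [0.0, 0.0, 0.0]: this pixel is black
```

Interpolating the position through TEXCOORD0 and computing the color in the pixel shader moves the clipping after the interpolation, which is exactly what makes the non-linear coloring possible.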

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Per-pixel_colors.php>

When you take another look at our flowchart, you'll see we've already seen most of it. We've covered pretty much everything, starting from our vertex stream to the output of the pixel shader. We've also set a shader constant, xViewProjection, from within our XNA app. This means we have implemented the Colored technique from my default effects.fx file!

A logical next step would be to load a texture from within our XNA app, and have our pixel shader sample the correct color for each pixel. The first part would be to load the texture in our XNA app, and to update the vertex stream as well as the VertexDeclaration, so they also send texture coordinate information to the vertex shader.

We will immediately start by loading our street texture, which you can download here. I got them from this site, it has a lot of very nice textures you can use in your own app. You can already put this line at the top of your XNA code:

Texture2D StreetTexture;

Import the image into your Solution Explorer as seen in the chapter Textures of Series 2. Add this line to the LoadGraphicsContent method:

StreetTexture = content.Load<Texture2D> ("streettexture");

The next thing to do would be to update the myownvertexformat structure at the top of our code, so it can handle texture coordinates. We'll remove the Color entry, as we'll no longer use it:

struct myownvertexformat
{
    private Vector3 position;
    private Vector2 TexCoord;

    public myownvertexformat(Vector3 position, Vector2 TexCoord)
    {
        this.position = position;
        this.TexCoord = TexCoord;
    }
}

Now each vertex can hold a position as well as a texture coordinate, so let's update them in our SetUpVertices method:

vertices[0] = new myownvertexformat(new Vector3(-2, -2, 2), new Vector2(0.0f, 0.0f));
vertices[1] = new myownvertexformat(new Vector3(0, 2, 0), new Vector2(0.125f, 1.0f));
vertices[2] = new myownvertexformat(new Vector3(2, -2, -2), new Vector2(0.25f, 0.0f));

This defines the 3D position as well as the 2D texture coordinate. Remember, to have this correctly connected to your vertex shader, you also need to update your VertexDeclaration accordingly:

public static VertexElement[] Elements =
{
    new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
    new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
};

public static int SizeInBytes = sizeof(float) * (3 + 2);
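The numbers in this declaration are just byte arithmetic: each VertexElement carries the byte offset where its data starts inside a vertex, and SizeInBytes is the total stride. A quick sanity check of those numbers (a Python sketch with the sizes hard-coded, for illustration only):

```python
FLOAT_SIZE = 4  # bytes per 32-bit float

# (element name, number of floats) in the order they appear in the vertex
layout = [("Position", 3), ("TextureCoordinate", 2)]

offsets = {}
offset = 0
for name, floats in layout:
    offsets[name] = offset           # the byte offset passed to VertexElement
    offset += floats * FLOAT_SIZE

stride = offset                      # SizeInBytes for the whole vertex

print(offsets)  # -> {'Position': 0, 'TextureCoordinate': 12}
print(stride)   # -> 20, i.e. sizeof(float) * (3 + 2)
```

The texture coordinate starts at byte 12 because the Vector3 position occupies the first 3 floats; that matches the sizeof(float)*3 offset in the second VertexElement above.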

You see we have replaced the Color entry by this TextureCoordinate entry, which is stored in a Vector2. Also note that each vertex now takes up (3+2) floats: 3 for the position and 2 for the texture coordinate (previously, the color was stored as a single float). So we need to adjust our SizeInBytes for correct operation. So much for the XNA part; let's turn to our HLSL file again. Before moving on to our shaders, let's first add these lines to the top of our code:

Texture xColoredTexture;

sampler ColoredTextureSampler = sampler_state
{
    texture = <xColoredTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = mirror;
    AddressV = mirror;
};

The first line defines a variable that will hold our texture. We'll need to fill this variable from within our XNA app. The second line sets up the sampler. A sampler is linked to a texture, and describes how the texture should be processed. We set the min- and magfilters, together with the mipfilter, to linear, so we'll always get nicely shaded colors, even when the camera is very close to the triangle. See the chapter `Texture filtering' in the Extra Reading section for more info. We set the texture address states to mirror, which means that texture coordinate (2.2f, 1.4f) will be automatically mapped into the [0,1] region and will thus be replaced by (0.2f, 0.6f). Next, we'll instruct our vertex shader to simply route the texture coordinates from its input to its output. Therefore, we first need to adjust its output structure, VertexToPixel:

struct VertexToPixel
{
    float4 Position  : POSITION;
    float2 TexCoords : TEXCOORD0;
};

Once again, we're using the TEXCOORD0 semantic to pass additional data from our vertex shader to our pixel shader. Although we're only passing 2 floats instead of the maximum 4, this is the correct choice. Now update our vertex shader to this:

VertexToPixel SimplestVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
{
    VertexToPixel Output = (VertexToPixel)0;

    Output.Position = mul(inPos, xViewProjection);
    Output.TexCoords = inTexCoords;

    return Output;
}

Here we transform our 3D positions into screen coordinates, and pass them along with the texture coordinates to the output of the vertex shader. Next in line is the pixel shader. Think of what the pixel shader receives from the interpolator: the interpolated 2D screen position and the interpolated 2D texture coordinate. The pixel shader needs to output the pixel color, which will be sampled from our texture at the correct position. So change the correct line in your pixel shader to this:

Output.Color = tex2D(ColoredTextureSampler, PSIn.TexCoords);

This command simply retrieves the color of the pixel in the StreetTexture image, corresponding to the 2D coordinate in PSIn.TexCoords. When you hit Ctrl+S, the HLSL code should compile without problems. There remains only one thing to do: set the xColoredTexture from within our XNA app. So add this line to your Draw method:

effect.Parameters["xColoredTexture"].SetValue(StreetTexture);

This loads the StreetTexture variable of our XNA app into the xColoredTexture variable of our HLSL code.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Textured_triangle.php>

This chapter has nothing to do with HLSL; I've written it so people who are not following the Series can understand how to use triangle strips just by reading this chapter. Up till now, we've only drawn a single triangle. This chapter we'll expand our scene, so it'll look a bit more like a street. I've divided the scene into 5 parts: the road, the 2 sides of the pavement border, the pavement itself and the wall. That makes 5 textured quads, which have to be drawn by 10 textured triangles. To draw these 10 triangles, we could simply define 30 vertices, each holding their 3D position and 2D texture coordinate, which can be presented this way:

I've only put a few vertex numbers on the picture; otherwise it would be a mess. Every triangle has 3 vertices, and the vertices are all declared in a clockwise manner.

By looking at the image, I hope you notice almost all vertices are declared 2 or 3 times! This means a lot of redundancy in the information we send over our PCI Express slot, so there must be a way we can reduce the amount of vertices we send. For cases like this, where every triangle shares 2 vertices with the previous one, the TriangleStrip should be used. The idea is illustrated in the image below:

The idea behind TriangleStrips is that you should define each vertex only once. So for the first triangle, you have to define vertices 0, 1 and 2. Now, for the next triangle, you only have to add vertex 3! XNA will always use the last 3 vertices to draw the triangle, so for the second triangle it uses vertices 1, 2 and 3, which is correct. For the third triangle, you only have to add vertex 4, and XNA will use vertices 2, 3 and 4. In a formula, the n-th triangle is defined by the (n-1)-th, the n-th and the (n+1)-th vertex, with n starting from 1. This way, you see the total amount of vertices has been decreased by a huge amount! Of course, this will yield much higher framerates when working with a larger number of triangles, such as our terrain of Series 1 (which could indeed also be defined using a TriangleStrip!). There's still one problem remaining. When you look at the red arrows, you'll see it's impossible to define all vertices in a clockwise manner around your triangles. This will always be the case, so when using a TriangleStrip, the rule is you switch your vertex definition from clockwise to counterclockwise and back every triangle. Let's change the vertex definitions:

myownvertexformat[] vertices = new myownvertexformat[12];

vertices[0] = new myownvertexformat(new Vector3(-20, -10, 0), new Vector2(-0.25f, 25.0f));
vertices[1] = new myownvertexformat(new Vector3(-20, 100, 0), new Vector2(-0.25f, 0.0f));
vertices[2] = new myownvertexformat(new Vector3(2, -10, 0), new Vector2(0.25f, 25.0f));
vertices[3] = new myownvertexformat(new Vector3(2, 100, 0), new Vector2(0.25f, 0.0f));
vertices[4] = new myownvertexformat(new Vector3(2, -10, 1), new Vector2(0.375f, 25.0f));
vertices[5] = new myownvertexformat(new Vector3(2, 100, 1), new Vector2(0.375f, 0.0f));
vertices[6] = new myownvertexformat(new Vector3(3, -10, 1), new Vector2(0.5f, 25.0f));
vertices[7] = new myownvertexformat(new Vector3(3, 100, 1), new Vector2(0.5f, 0.0f));
vertices[8] = new myownvertexformat(new Vector3(13, -10, 1), new Vector2(0.75f, 25.0f));
vertices[9] = new myownvertexformat(new Vector3(13, 100, 1), new Vector2(0.75f, 0.0f));
vertices[10] = new myownvertexformat(new Vector3(13, -10, 21), new Vector2(1.25f, 25.0f));
vertices[11] = new myownvertexformat(new Vector3(13, 100, 21), new Vector2(1.25f, 0.0f));

As we'll expand the number of triangles we're storing in our vertex buffer, we have to indicate this as well at its initialization:

vb = new VertexBuffer(device, myownvertexformat.SizeInBytes * 12, ResourceUsage.WriteOnly);
vb.SetData(vertices);

As you can see, only 12 vertices are needed to define 10 triangles. You can notice I have used horizontal texture coordinates such as -0.25f and 1.25f. Because in our shader we set the AddressU and AddressV states to Mirror, these points are mapped to 0.25f and 0.75f respectively, which creates a mirrored view, as you can see in the image below. The same trick was used with the vertical coordinates, where the texture was mirrored 25 times! If we hadn't mirrored the texture image, that small image would have been stretched over our whole street. You can find more info on mirrored texture coordinates in this forum thread.
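Mirror addressing folds any coordinate back into [0,1] by reflecting it at every integer boundary, so the pattern repeats every 2 units. A small sketch of that rule (Python, for illustration only, not the GPU's actual implementation):

```python
def mirror(u):
    # reflect a texture coordinate into [0,1],
    # as AddressU/AddressV = mirror does on the GPU
    u = abs(u) % 2.0            # the mirrored pattern repeats every 2 units
    return 2.0 - u if u > 1.0 else u

# the horizontal coordinates used in the vertex definitions above
print(mirror(-0.25))  # -> 0.25
print(mirror(1.25))   # -> 0.75
```

So -0.25 reflects off the left edge to 0.25, and 1.25 reflects off the right edge to 0.75, giving the mirrored look in the image.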

Nothing has changed to the kind of information contained in our vertices: we're still sending position and texture information for each vertex. So there's no need to change the VertexDeclaration. Before drawing, we'll change the camera position, to get a nicer view. So change the contents of the LoadEffect method to this:

CameraPos = new Vector3(-25, -18, 13);
viewMatrix = Matrix.CreateLookAt(CameraPos, new Vector3(0, 12, 2), new Vector3(0, 0, 1));
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)this.Window.ClientBounds.Width / this.Window.ClientBounds.Height, 1.0f, 200.0f);

I've also chosen black as my background color; I guess you know how to change this. All that's left to do is specify in the Draw method the number of triangles we want drawn (10), and that we've defined them as a strip of triangles, instead of a list of triangles:

device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 10);

Where we declare we'll be drawing a TriangleStrip, made out of 10 triangles. That's it! This chapter you've learned another way to help you reduce bandwidth, and thus to increase your framerate.
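The strip rule described above (the n-th triangle uses vertices n-1, n and n+1, with the winding flipped on every other triangle) can be written down in a few lines (a Python sketch, not XNA code):

```python
def strip_to_triangles(vertex_count):
    # expand a triangle strip into explicit triangles, restoring
    # the winding order on every second triangle
    triangles = []
    for n in range(vertex_count - 2):
        a, b, c = n, n + 1, n + 2
        if n % 2 == 1:          # every other triangle in the strip is
            a, b = b, a         # defined counterclockwise; swap to fix it
        triangles.append((a, b, c))
    return triangles

# 12 vertices in a strip are enough for 10 triangles,
# instead of the 30 vertices a plain triangle list would need
print(len(strip_to_triangles(12)))   # -> 10
print(strip_to_triangles(4))         # -> [(0, 1, 2), (2, 1, 3)]
```

In general a strip of V vertices draws V-2 triangles, which is where the savings over a triangle list (3 vertices per triangle) come from.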

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Triangle_strip.php>

In HLSL, you have to perform all transformations yourself. As we've already seen, most of them are performed in the vertex shader. To give all people the possibility of completely understanding what they're doing in this 3rd Series, I've decided to write this extra chapter on the World transform. We'll be drawing the cars and lampposts from 2 model files, so you would like to add these variables:

Model LamppostModel;
Texture2D[] LamppostTextures;
Model CarModel;
Texture2D[] CarTextures;

As you can see, the car model will contain textures, and the lamppost will include color information for each of its vertices. As always, we need to start with loading the models into our Solution Explorer, like you've done before in Series 2. You can download both meshes here and here. Next we need to link our variables to our assets, which we have done before in the chapters Loading a Model and Textured Models of Series 2. This code comes straight from there, so put it in the LoadMeshes method (which was still empty):

LamppostModel = content.Load<Model>("lamppost");
LamppostTextures = new Texture2D[7];
int i = 0;
foreach (ModelMesh mesh in LamppostModel.Meshes)
    foreach (BasicEffect currenteffect in mesh.Effects)
        LamppostTextures[i++] = StreetTexture;
foreach (ModelMesh modmesh in LamppostModel.Meshes)
    foreach (ModelMeshPart modmeshpart in modmesh.MeshParts)
        modmeshpart.Effect = effect.Clone(device);

CarModel = content.Load<Model>("racer");
i = 0;
CarTextures = new Texture2D[7];
foreach (ModelMesh mesh in CarModel.Meshes)
    foreach (BasicEffect currenteffect in mesh.Effects)
        CarTextures[i++] = currenteffect.Texture;
foreach (ModelMesh modmesh in CarModel.Meshes)
    foreach (ModelMeshPart modmeshpart in modmesh.MeshParts)
        modmeshpart.Effect = effect.Clone(device);

We load both models from file, and save their textures. No problem for the car model, but I could only find a lamppost model without textures, only colors. This would mean we would have to duplicate our technique: one for textured vertices and one for colored vertices. Instead of doing this, we're going to cheat a little: for the whole lamppost, we're going to pass the StreetTexture as texture, and sample the color of its upper-left pixel, which is brown. So, in the LamppostTextures array, we store the StreetTexture. (If you can supply me with a textured lamppost model, I would be very thankful ;) ) OK, with both models loaded, it's time to draw them. But don't we first have to set the World transform? Remember when the World transform is used? Whenever you want to draw some triangles (from a vertex buffer or from a mesh), you have to set the world transform first. If you didn't, all triangles would be drawn relative to the origin, the (0,0,0) point. Imagine you want to draw 2 objects from the same mesh, like our 2 lampposts. If you didn't set a different world transform before drawing them, both objects would be drawn at the same place, so you would see only one of them. What you would rather like to do is tell XNA to draw the first object `3 units to the left and 2 units up', and draw the second object `3 units to the right, rotated around the Y axis by 40 degrees and twice as big as the first one'. This `transformation' is called the World transform, and is stored in a matrix. For more info on matrices, what they look like and what you can do with them, you can check out the matrix entries in the Extra Reading section. Something like the example above is displayed in the image below: the big axes represent our World axes, with origin in the World (0,0,0) point. Say you would like to draw the first object from a mesh.
First, you would have to tell XNA to create a new axis, with origin where you would like the center of the object to be drawn. This new location, as well as its rotation and its scaling, are stored in a matrix M1. When you draw your mesh, the mesh will be drawn around the new axis.

The same story for the second object: you first have to set M2 as World transform, so object 2 will be drawn around the correct axis. To obtain this, we have to multiply our vertices with this World matrix in our vertex shader. The correct order of multiplication would be: World, View and then Projection. We will perform this multiplication in our XNA app, so we need to perform only the multiplication of our vertices with this combined matrix in our vertex shader. So go to our .fx file, and change all instances of xViewProjection to xWorldViewProjection:

float4x4 xWorldViewProjection;

And

Output.Position = mul(inPos, xWorldViewProjection);

The first line means HLSL expects the XNA app to fill this matrix, and the second line multiplies every vertex with this combined matrix. The result is that every vertex is first transformed to its proper local axis, and then transformed to 2D screen coordinates! Now we still need to fill this matrix. Go to the Draw method in our XNA code, and replace the line where you fill the xViewProjection matrix by this line:

effect.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * viewMatrix * projectionMatrix);

Because our TriangleStrip containing the street actually needs to be drawn around the World origin, we don't need a World matrix for the street. In that case, you can specify Matrix.Identity, which is the identity element in matrix maths: multiplying matrix M by the identity matrix gives M. When you run this code, you should see exactly the same as last chapter, which is OK. Now we're going to add the first car. This code comes straight from Series 2, and you can put it at the end of our Draw method:

int i = 0;
foreach (ModelMesh modmesh in CarModel.Meshes)
{
    foreach (Effect currenteffect in modmesh.Effects)
    {
        currenteffect.CurrentTechnique = effect.Techniques["Simplest"];
        Matrix worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi) * Matrix.CreateTranslation(-3, 15, 0);
        currenteffect.Parameters["xWorldViewProjection"].SetValue(worldMatrix * viewMatrix * projectionMatrix);
        currenteffect.Parameters["xColoredTexture"].SetValue(CarTextures[i++]);
    }
    modmesh.Draw();
}

The second line defines the World matrix for the car. First, the axis is scaled, so the car model will nicely fit into our scene. Then, it is rotated so it is correctly positioned. Lastly, the axis is translated to its proper spot. Of course we need to pass this matrix to the effect, which is done in the third line. When you run the code, you'll see your first car! Because we'll be adding another car and 2 lampposts, we would have to copy this code 3 more times. This means we'd better factor it out into a method:

private void DrawModel(string technique, Model currentmodel, Matrix worldMatrix, Texture2D[] textures, bool useBrownInsteadOfTextures)
{
    int i = 0;
    foreach (ModelMesh modmesh in currentmodel.Meshes)
    {
        foreach (Effect currenteffect in modmesh.Effects)
        {
            currenteffect.CurrentTechnique = effect.Techniques[technique];
            currenteffect.Parameters["xWorldViewProjection"].SetValue(worldMatrix * viewMatrix * projectionMatrix);
            currenteffect.Parameters["xColoredTexture"].SetValue(textures[i++]);
            currenteffect.Parameters["xUseBrownInsteadOfTextures"].SetValue(useBrownInsteadOfTextures);
        }
        modmesh.Draw();
    }
}

This method takes all parameters it needs, passes them to the effect and draws the model! We need to call this method from within our Draw method, so replace our existing code that draws the car by this:

Matrix worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi) * Matrix.CreateTranslation(-3, 15, 0);
DrawModel("Simplest", CarModel, worldMatrix, CarTextures, false);

I've created the world matrix on a separate line, so you can see it better. Then it is passed to the method, together with the other arguments. The last argument indicates whether the color needs to be sampled from the textures, or whether the model needs to be drawn in brown (the lamppost model doesn't have any textures, as discussed previously). This argument is passed to the xUseBrownInsteadOfTextures variable of the effect, which we still need to define. So go to our effect file, and add this line:

bool xUseBrownInsteadOfTextures;

For our lamppost, which has no textures, this will be true, so we need to pass (0,0) as texture coordinates; this way the first pixel of the street texture is sampled, and the whole lamppost is drawn in brown. Add these lines to the bottom of our vertex shader:

if (xUseBrownInsteadOfTextures)
    Output.TexCoords = float2(0, 0);

Hit Ctrl+S and try to run the code again, you should see the car again. Not much of a change, you would think, but add this code to the end of our Draw method:

worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi * 5.0f / 8.0f) * Matrix.CreateTranslation(-28, -1.9f, 0);
DrawModel("Simplest", CarModel, worldMatrix, CarTextures, false);

worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 35, 1);
DrawModel("Simplest", LamppostModel, worldMatrix, LamppostTextures, true);

worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 5, 1);
DrawModel("Simplest", LamppostModel, worldMatrix, LamppostTextures, true);

So now we have 8 lines that draw our 4 models, which keeps our Draw method tidy. Notice that the xUseBrownInsteadOfTextures variable gets set to true for the lampposts, and when you run the code, you'll see that our lampposts are drawn in solid brown.
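A side note on the multiplication order used in all these world matrices: XNA treats vertices as row vectors, so the leftmost matrix in a product is applied to the vertex first. A quick sketch of what goes wrong if you swap scaling and translation (illustrative values, not part of the tutorial code):

```csharp
// XNA row-vector convention: in A * B, transform A is applied to the vertex first.
Matrix scaleThenTranslate = Matrix.CreateScale(4f) * Matrix.CreateTranslation(10, 0, 0);
Matrix translateThenScale = Matrix.CreateTranslation(10, 0, 0) * Matrix.CreateScale(4f);

Vector3 vertex = new Vector3(1, 0, 0);
Vector3 a = Vector3.Transform(vertex, scaleThenTranslate); // (1*4 + 10, 0, 0) = (14, 0, 0)
Vector3 b = Vector3.Transform(vertex, translateThenScale); // ((1+10) * 4, 0, 0) = (44, 0, 0)
```

This is why the matrices above always scale first, rotate next, and translate last.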

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/World_transform.php>

Before we can start defining our own lights, we need to add normals to every vertex. If you're wondering why, take a quick look at the `Lighting basics' chapter of Series 1. The normal has to be included in the vertex stream, so let's first redefine the myownvertexformat structure:

private struct myownvertexformat
{
    private Vector3 Position;
    private Vector2 TexCoord;
    private Vector3 Normal;

    public myownvertexformat(Vector3 Position, Vector2 TexCoord, Vector3 Normal)
    {
        this.Position = Position;
        this.TexCoord = TexCoord;
        this.Normal = Normal;
    }

    public static VertexElement[] Elements =
    {
        new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
        new VertexElement(0, sizeof(float) * 3, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(0, sizeof(float) * 5, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
    };

    public static int SizeInBytes = sizeof(float) * (3 + 2 + 3);
}

You see we've included an entry for the normal data in the VertexElement array. This normal data follows after the 5 floats already present (3 for the position, 2 for the texture coordinates). A normal is stored as 3 floats, so we also need to adjust our SizeInBytes. Now it's time to expand our vertices with normal data. It's very easy: the normals of the horizontal quads (such as the street) are pointing upward, and the normals of the vertical quads (such as the wall) are pointing to the left, which is the negative X direction in our case. However, because we're using a triangle strip, some vertices are shared by 2 triangles that should have a different normal! This is a problem inherent to triangle strips, and because sooner or later you'll run into problems like this, I'll cover it here. One solution would be to give the shared vertices the interpolated normal. This would however give bad results, as we want to clearly see the edges. A better approach is to add `ghost triangles': we simply add 2 new vertices at each place where 2 triangles with different normals share a side. In the image below, these sides are indicated by a red line, and the extra vertices are also red. Note that vertices 4 and 5 should also be drawn in red, but they didn't fit into the image :). Note that the coordinates of the red vertices are exactly the same as those of the previous blue vertices; only the normals are different.

As you can see, we'll be defining 18 vertices, defining 10 normal triangles and 6 ghost triangles. Even in this almost-worst-case example, we still have to store 12 vertices less than when we would have stored them in a Triangle List. This is the array containing all vertices, with normal data added:

myownvertexformat[] vertices = new myownvertexformat[18];
vertices[0] = new myownvertexformat(new Vector3(-20, -10, 0), new Vector2(-0.25f, 25.0f), new Vector3(0, 0, 1));
vertices[1] = new myownvertexformat(new Vector3(-20, 100, 0), new Vector2(-0.25f, 0.0f), new Vector3(0, 0, 1));
vertices[2] = new myownvertexformat(new Vector3(2, -10, 0), new Vector2(0.25f, 25.0f), new Vector3(0, 0, 1));
vertices[3] = new myownvertexformat(new Vector3(2, 100, 0), new Vector2(0.25f, 0.0f), new Vector3(0, 0, 1));
vertices[4] = new myownvertexformat(new Vector3(2, -10, 0), new Vector2(0.25f, 25.0f), new Vector3(-1, 0, 0));
vertices[5] = new myownvertexformat(new Vector3(2, 100, 0), new Vector2(0.25f, 0.0f), new Vector3(-1, 0, 0));
vertices[6] = new myownvertexformat(new Vector3(2, -10, 1), new Vector2(0.375f, 25.0f), new Vector3(-1, 0, 0));
vertices[7] = new myownvertexformat(new Vector3(2, 100, 1), new Vector2(0.375f, 0.0f), new Vector3(-1, 0, 0));
vertices[8] = new myownvertexformat(new Vector3(2, -10, 1), new Vector2(0.375f, 25.0f), new Vector3(0, 0, 1));
vertices[9] = new myownvertexformat(new Vector3(2, 100, 1), new Vector2(0.375f, 0.0f), new Vector3(0, 0, 1));
vertices[10] = new myownvertexformat(new Vector3(3, -10, 1), new Vector2(0.5f, 25.0f), new Vector3(0, 0, 1));
vertices[11] = new myownvertexformat(new Vector3(3, 100, 1), new Vector2(0.5f, 0.0f), new Vector3(0, 0, 1));
vertices[12] = new myownvertexformat(new Vector3(13, -10, 1), new Vector2(0.75f, 25.0f), new Vector3(0, 0, 1));
vertices[13] = new myownvertexformat(new Vector3(13, 100, 1), new Vector2(0.75f, 0.0f), new Vector3(0, 0, 1));
vertices[14] = new myownvertexformat(new Vector3(13, -10, 1), new Vector2(0.75f, 25.0f), new Vector3(-1, 0, 0));
vertices[15] = new myownvertexformat(new Vector3(13, 100, 1), new Vector2(0.75f, 0.0f), new Vector3(-1, 0, 0));
vertices[16] = new myownvertexformat(new Vector3(13, -10, 21), new Vector2(1.25f, 25.0f), new Vector3(-1, 0, 0));
vertices[17] = new myownvertexformat(new Vector3(13, 100, 21), new Vector2(1.25f, 0.0f), new Vector3(-1, 0, 0));

Let's not forget to indicate that our vertex buffer needs to store a few more vertices:

vb = new VertexBuffer(device, myownvertexformat.SizeInBytes * 18, ResourceUsage.WriteOnly);

In the vertex array above, you should notice the 6 added vertices, as well as the new normal data. Don't forget to update the number of triangles you're about to draw in the Draw method:

device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 16);
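As a sanity check on these numbers, a throwaway sketch (not part of the tutorial code): a triangle strip with N triangles needs N + 2 vertices, while a triangle list needs 3 vertices per real triangle.

```csharp
// Vertex-count arithmetic for our strip (10 real + 6 ghost triangles).
int realTriangles = 10;
int ghostTriangles = 6;
int stripTriangles = realTriangles + ghostTriangles;  // 16 primitives to draw
int stripVertices = stripTriangles + 2;               // 18 vertices in our strip
int listVertices = realTriangles * 3;                 // 30 vertices in a triangle list
int saved = listVertices - stripVertices;             // 12 vertices saved
```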

Our models already have normal data included, in the previous chapters we simply didn't use it. So there's nothing we have to change to the rest of the code. When you run this code, you should see the same image as last chapter. This time, however, our vertex stream includes the correct normal data, so we're finally ready to define our first light!

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/World_normals.php>


In this chapter, we'll create a point light. This is a point in 3D space which shines light in every direction. The amount of light an object catches is the dot product between that object's normal and the direction of the incoming light. If you want a picture to illustrate this, have a look at the `Lighting basics' chapter in Series 1. At this moment we have normal data included in our vertex stream, so we can go straight to the HLSL code. First, let's have a look at how we can light our quads:

In the left quad, the dot product for every vertex is calculated by the vertex shader. Imagine our light is exactly above the center of the quad. In that case, the angle between the direction of the light (the thin blue lines) and the plane is 45 degrees in every vertex, so the dot product between this light direction and the normal is cos(45°), roughly 0.7, for all four vertices. So 0.7 would be the output of the vertex shader for each of the 4 vertices. Now the interpolator comes into play. For each pixel of our quad, it interpolates this value, and sends the interpolated value to the pixel shader. In this case, it's very easy: the interpolated value is 0.7 for each pixel! This means it will seem as if every pixel in our quad is lit the same way, which is wrong. The right quad illustrates what should happen. For each pixel, the dot product has to be calculated separately, so each pixel gets its correct value. For example, the corner points will still get value 0.7, whereas the pixel exactly below the light will get value 1.0, as the direction of the normal is exactly the same as the direction of the light. Now that you're completely convinced the pixel shader is the way to go, let's start by creating a method that calculates the dot product, given the 3D position of the light, the 3D position of the pixel and the normal in that pixel:

float DotProduct(float4 LightPos, float3 Pos3D, float3 Normal)
{
    float3 LightDir = normalize(LightPos - Pos3D);
    return dot(LightDir, Normal);
}

First the direction of the light is calculated: this is the normalized vector from the 3D position of the pixel to the 3D position of the light. Then we calculate the dot product between this light direction and the normal in the pixel, which is what the method returns. When you try to compile this code, FX Composer will warn you that the HLSL normalize method only works in Pixel Shader version 2.0 code. So let's hope your card supports Shader Model 2.0 (if it doesn't, you can always code your own normalization method) and change your technique definition like this:

technique Simplest
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 SimplestVertexShader();
        PixelShader = compile ps_2_0 OurFirstPixelShader();
    }
}
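The per-vertex versus per-pixel argument above is easy to check numerically. A CPU-side sketch using XNA's Vector3 helpers (for illustration only, not part of the tutorial code):

```csharp
// Quad in the XY plane, light 1 unit above its center, normal pointing up.
Vector3 lightPos = new Vector3(0, 0, 1);
Vector3 normal = new Vector3(0, 0, 1);

// Same computation as the HLSL DotProduct method, evaluated at two positions:
Vector3 cornerDir = Vector3.Normalize(lightPos - new Vector3(1, 0, 0));
Vector3 centerDir = Vector3.Normalize(lightPos - new Vector3(0, 0, 0));
float atCorner = Vector3.Dot(cornerDir, normal); // ~0.7: 45-degree angle at the corner
float atCenter = Vector3.Dot(centerDir, normal); // 1.0: light shines straight along the normal
// Interpolating the ~0.7 corner values across the quad can never produce the
// correct 1.0 under the light, which is why the dot product belongs in the pixel shader.
```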

We will call the DotProduct from within our pixel shader, as discussed above. You can see this method requires the 3D position as well as the normal to be available to the pixel shader, so we need to update our VertexToPixel structure:

struct VertexToPixel
{
    float4 Position : POSITION;
    float2 TexCoords : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float3 Position3D : TEXCOORD2;
};

Once again, we're using the TEXCOORDn semantic to pass floatn values from our vertex shader to our pixel shader. Next, let's change our vertex shader method so it reads in normal data from the vertex stream:

VertexToPixel SimplestVertexShader( float4 inPos : POSITION0, float3 inNormal: NORMAL0, float2 inTexCoords : TEXCOORD0)

Now it's time to fill these values in the vertex shader. The normal, however, needs some more explanation. Think of our meshes: they contain normal data, but before we actually draw them, we rotate and translate them. This means we also need to rotate the normals. We mustn't translate them, though: the length of a normal should always be 1, and while rotating a normal of length 1 gives another normal of length 1, translating a normal 10 units to the left would make all of the normals of the model point to the left! Currently, we only pass the WorldViewProjection matrix to our effect. From this matrix, it is impossible to reconstruct the rotation data, so we will also pass the World matrix:

float4x4 xWorld;

This 4x4 matrix contains rotation, translation and scaling information. However, if we cast this 4x4 matrix to a 3x3 matrix, the translation information is dropped and we retain only the rotation (and scaling) information. To see why, see the mathematical chapter on this in the Extra Reading section. The next 2 lines fill the output of the vertex shader:

Output.Normal = normalize(mul(inNormal, (float3x3)xWorld)); Output.Position3D = mul(inPos, xWorld);

You see the normal is multiplied by the 3x3 world matrix (containing the rotation data). Then, it is normalized, so we make sure its length becomes 1 again, which also undoes any uniform scaling picked up from the matrix. I guess the second line requires no explanation, as the 3D position of the vertices is passed straight on to the interpolator, which interpolates the 3D position for each pixel. So far for the vertex shader, but before we move on to the pixel shader, let's define 2 more variables: one that holds the position of the light, and another that allows the XNA application to set the strength of the light:

float4 xLightPos;
float xLightPower;
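Back to the normals for a moment: the same idea exists on the XNA side as Vector3.TransformNormal, which multiplies a vector by the upper-left 3x3 part of a matrix and ignores the translation. A small sketch of the difference (illustrative, not part of the tutorial code):

```csharp
// A world matrix that rotates 90 degrees around Z, then translates 10 units along X.
Matrix world = Matrix.CreateRotationZ(MathHelper.PiOver2) * Matrix.CreateTranslation(10, 0, 0);
Vector3 normal = new Vector3(0, 0, 1);  // a normal pointing up

Vector3 wrong = Vector3.Transform(normal, world);       // (10, 0, 1): translated, no longer unit length!
Vector3 right = Vector3.TransformNormal(normal, world); // (0, 0, 1): only rotated
right.Normalize();                                      // guard against any scaling in the matrix
```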

Let's move on to our pixel shader. This is the contents of the PSIn structure your pixel shader receives from the interpolator:

PSIn.Position : the 2D position of the current pixel in screen coordinates; remember our pixel shader can NOT use this
PSIn.TexCoords : the 2D coordinates indicating the position in the texture image that has to be sampled from
PSIn.Normal : the direction of the normal in the current pixel
PSIn.Position3D : the 3D coordinate of the current pixel

It's time to start updating our pixel shader. These should be your first 2 lines:

PixelToFrame Output = (PixelToFrame)0;
float DiffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);

The new line calls our DotProduct method for the current pixel. As a result, we obtain a value which indicates the amount of light that is caught, and thus reflected by the object the current pixel represents. Because the result of a dot product is always within the [-1, 1] range, we can later on safely multiply this with the color. For now, we will simply display this value as color:

Output.Color = DiffuseLightingFactor;

For now, the color of the pixel is simply set to this dot product; later on, we'll multiply the texture color by it, as well as by a factor xLightPower we can set from within our XNA app. That's it for the HLSL code! When you hit Ctrl+S, you shouldn't get any errors. Of course we still have to set all the xSomething variables from within XNA. So go to our XNA code, where we'll create a new method whose only job is to set this data. The benefit of this is that you can call it from the Update method, to have the position or strength of your light changed every frame. First we'll create 2 variables to hold the data:

Vector4 LightPos;
float LightPower;

And here's the method:

private void UpdateLightData()
{
    LightPos = new Vector4(-10, 0, 4, 1);
    LightPower = 2.0f;
}

That's it, don't forget to call this method from our Update method:

UpdateLightData();
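Because UpdateLightData is called every frame, it is also the natural place to animate the light. A hypothetical variation (the circular path and the time parameter are my own illustration, not part of the tutorial code):

```csharp
// Illustrative only: make the light circle the scene at radius 10, height 4.
// Assumes you accumulate elapsed time in seconds in the Update method.
private void UpdateLightData(float time)
{
    LightPos = new Vector4(10.0f * (float)Math.Cos(time), 10.0f * (float)Math.Sin(time), 4, 1);
    LightPower = 2.0f;
}
```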

Although for now the method sets 2 constant values, feel free to make them change every frame. We of course need to pass these values to our effect file. Let's first add this to our Draw method, for our trianglestrip:

effect.Parameters["xLightPos"].SetValue(LightPos);
effect.Parameters["xLightPower"].SetValue(LightPower);

And also for our models, in the DrawModel method. Notice that here we need to set the parameters of the `currenteffect', instead of the `effect':

currenteffect.Parameters["xLightPos"].SetValue(LightPos);
currenteffect.Parameters["xLightPower"].SetValue(LightPower);

Next, we need to pass the xWorld value for every object we draw using our technique. For the trianglestrip in our Draw method, we simply pass the Identity matrix (the unity matrix, because our street isn't scaled, rotated or translated):

effect.Parameters["xWorld"].SetValue(Matrix.Identity);

And in the DrawModel method:

currenteffect.Parameters["xWorld"].SetValue(worldMatrix);

Now when you run this code, you should see the image below. It indicates how much light is falling on each pixel. You can clearly see where the light is positioned. Notice that we're not yet taking the distance into account; the only reason the end of the wall gets less illuminated is that the angle to the light gets sharper.

Of course we don't want a grayscale image; it's just a debugging image. What we want are colors, and this simple line suffices:

Output.Color = tex2D(ColoredTextureSampler, PSIn.TexCoords)*DiffuseLightingFactor*xLightPower;

We take the color from the texture, and modulate it by the intensity of illumination (the grayscale value), and the strength of the light. When you compile the HLSL code and run your XNA app again, you should see this image:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Per-pixel_lighting.php>

Now we know some HLSL basics and have defined our first light, it's time for something more complex. Let's see how we can add real shadowing to our scene. We want this shadowing algorithm to be completely dynamic: we want to define the position and direction of our light only one time, and every object within its range should automatically cast a shadow.

Let's start with the car at the bottom left corner of our screen. Its headlights are shining light towards the right side of our scene. So the light hits the lampposts and the other car, which have to cast shadows on the wall. The question: how do we know which pixels on the wall are shadowed?

While I explain the Depth Mapping algorithm, you can have a look at the image below, where the 2 major steps are illustrated. The first step is to draw the scene as seen by the headlights. This means we have to move our camera position to the position of the headlights. Using this point of view, the only thing we are interested in is the distance of every pixel to the headlights. For example, the first lamppost would be 4 meters away from the headlights. Very important: the pixels of the wall behind this first lamppost are not seen by the headlights. At the location of these pixels, 4 meters was stored as distance to the headlights. We store this distance information of the whole scene in what is called a shadow map or depth map. During the second phase, we draw the scene the usual way, that is, from our camera's point of view. Only this time, for each pixel we calculate the distance to the headlights, and compare this depth to the depth stored in the depth map. For most objects, both distances will be the same. Our lamppost, for example, will still be 4 meters away from the headlights. However, when we calculate the distance for the pixels of the wall behind the lamppost, we find that their distance to the headlights is 5 meters. This is not the same as the distance of 4 meters that was stored for these pixels. This way, we know these pixels cannot be seen by the headlights, and are in the shadow.
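The comparison at the heart of the second phase can be summarized in a few lines. This is a CPU-side C# sketch with hypothetical names, for illustration only; the real test happens in the pixel shader later in this series:

```csharp
// storedDepth: the distance the headlights "saw" at this pixel's shadow map location.
// realDepth: the current pixel's actual distance to the headlights.
bool IsInShadow(float storedDepth, float realDepth)
{
    const float bias = 0.01f;              // small tolerance against precision errors
    return realDepth > storedDepth + bias; // something nearer already blocked the light
}
```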

This will all be better illustrated as we move on to the first step: drawing the depth map. We'll be defining a new technique for this in our HLSL code, ShadowMap, that renders the depth of the scene to the screen:

technique ShadowMap
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 ShadowMapVertexShader();
        PixelShader = compile ps_2_0 ShadowMapPixelShader();
    }
}

Our vertex shader will be very simple, as this is the only output it needs to generate:

struct SMapVertexToPixel
{
    float4 Position : POSITION;
    float4 Position2D : TEXCOORD0;
};

As always, we need to supply the interpolator and pixel shader with the 2D screen coordinates. Only this time, we need this information also in our pixel shader, thus we need to pass it using one of the TEXCOORDn semantics. We've already covered the biggest part of this, so this is our new vertex shader:

SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
{
    SMapVertexToPixel Output = (SMapVertexToPixel)0;
    Output.Position = mul(inPos, xLightWorldViewProjection);
    Output.Position2D = Output.Position;
    return Output;
}

Something very important to notice here: we're using xLightWorldViewProjection instead of xWorldViewProjection, because this time we need to look at the scene as seen by the headlights, instead of as seen by our camera. This is a new matrix, so we need to declare it at the top of our HLSL code:

float4x4 xLightWorldViewProjection;

Which we'll fill from within our XNA app later on in this chapter. Next, we'll code our pixel shader, which again only has to calculate the color:

struct SMapPixelToFrame
{
    float4 Color : COLOR0;
};

And this will be our pixel shader:

SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
{
    SMapPixelToFrame Output = (SMapPixelToFrame)0;
    Output.Color = PSIn.Position2D.z/xMaxDepth;
    return Output;
}

Remember HLSL color values have to be within the [0, 1] interval. Because of this, we cannot simply use the Z component of the transformed position as the color, as the transformed Z coordinate indicates the distance from the object to the camera in XNA units. We first need to divide this value by the maximum value it could possibly have, something like the far clipping plane. Of course, this value xMaxDepth needs to be set from within our XNA app. This way, the normalized value will be smaller than 1.0f, and can be used as a color. The last thing to do is to declare this xMaxDepth variable:

float xMaxDepth;

So far for the HLSL part. Let's go to our XNA code, where we need to pass all variables to our effect. Let's start with xMaxDepth. This needs to be set to a constant value, for each effect. So let's put this line in our LoadEffect method, immediately after the line that loads the effect from file. This way, if we clone the effect, this value will already be present in the cloned effects of our models.

effect.Parameters["xMaxDepth"].SetValue(60);

This indicates we don't expect any object to be further away from the light than 60 units. Next in line is the xLightWorldViewProjection variable. Because this depends on the position of the light, we'll update it in the UpdateLightData method, and we need to add this variable to our code:

Matrix lightViewProjectionMatrix;

Now we'll change the contents of our UpdateLightData method. We change the position of our light to the front of the car, change its intensity and the 3rd line is new:

LightPos = new Vector4(-18, 2, 5, 1);
LightPower = 0.7f;
lightViewProjectionMatrix = Matrix.CreateLookAt(new Vector3(LightPos.X, LightPos.Y, LightPos.Z), new Vector3(-2, 10, -3), new Vector3(0, 0, 1)) * Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, (float)this.Window.ClientBounds.Width / (float)this.Window.ClientBounds.Height, 1f, 100f);

Because to draw the shadow map we need to draw the scene as seen by the headlights, we need to create a corresponding ViewProjection matrix (just like setting up a camera). We again need to pass this variable before drawing our trianglestrip, and before drawing our models. So add this line to our Draw method:

effect.Parameters["xLightWorldViewProjection"].SetValue(Matrix.Identity * lightViewProjectionMatrix);

And in the DrawModel method:

currenteffect.Parameters["xLightWorldViewProjection"].SetValue(worldMatrix * lightViewProjectionMatrix);

Now we still need to select the proper technique to render our scene. Change all occurrences of `Simplest' to `ShadowMap', like so:

effect.CurrentTechnique = effect.Techniques["ShadowMap"];

You also need to update this in the 4 lines at the bottom of the Draw method which draw our models! Now when you run this code, you should see the image below. It might look pretty strange at first sight, but it's perfect: it is the depth map of the scene as seen by the headlights of the car. The whiter the pixel, the further away it is from the headlights.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php>

This chapter again has nothing to do with HLSL, so anyone only interested in the Render-To-Texture technique will be able to follow.

Except for this paragraph :) As I told you last chapter, the second step of the shadow mapping algorithm involves comparing values in our depth map. To do this, we will save the screen of last chapter (the depth map) into a texture, so we can use this texture during the second step. So we'll be drawing our scene not to the screen, but to a texture instead. This can be useful for example to create a mirror in your scene: first you render the scene as seen by the mirror into the texture, afterwards you draw your scene while texturing two triangles with the rendered texture. Thanks to XNA and some of its helper classes, this rendering of our scene into a texture is made pretty easy. We'll need to add these variables to our code:

RenderTarget2D renderTarget;
Texture2D texturedRenderedTo;

The first line is the rendertarget we'll be drawing to. The default rendertarget is your screen; this is another one ;) The second variable is the texture in which we will store the contents of the rendertarget. We only need to initialize the rendertarget, so put this code in the SetUpXNADevice method:

renderTarget = new RenderTarget2D(device, 512, 512, 1, SurfaceFormat.Color);

We need to specify how large (in pixels) we want our rendertarget to be, and how many mipmap levels we want our target to have. A texture can have multiple mipmap levels. The first level is the most detailed; the second one is half the width and height of the first one, and so on. When the texture is close to the camera, the full-detail first-level map is used to draw from. When the texture is further away from the camera, however, a smaller, lower-detail mipmap is used, which reduces bandwidth. Because a texture can be created from the rendertarget, we need to specify how many levels we want to be created. Now we've got our variables ready, it's time to modify the Draw method a bit. It's very easy: we simply need to specify the active rendertarget before drawing. So put this as the first line in your Draw method:

device.SetRenderTarget(0, renderTarget);

From that line on, the rendertarget will get cleared, and our shadow map will be drawn into it. When you run your code now, whatever happens to be left in your graphics card's memory will be displayed on your screen. When the last thing you saw in your form was the result of last chapter, chances are you'll still see it. But when you run another 3D program, or you reboot your computer, you'll generally see very strange stuff on your screen, simply because you're no longer rendering to your form! At the end of our Draw method, after everything has been drawn, we need to indicate the drawing process has ended:

device.ResolveRenderTarget(0);
texturedRenderedTo = renderTarget.GetTexture();

The last line retrieves the contents of the rendertarget and puts it in our texture! It's really that easy. OK, that's all very nice, but as an audience you of course want proof of all this. I'll show you 2 ways to prove it, although we will remove the code at the end of the chapter. The first one is very straightforward: we will simply save the texture to a file. This is also a way to make a screenshot of your game. Simply put this line at the very end of our Draw method:

texturedRenderedTo.Save("screenshot.bmp", ImageFileFormat.Bmp);

When you run the code, you can find an image file called screenshot.bmp in the debug directory of your project! Because this file is overwritten every frame, remove the line again. The second way is to simply display the texture on the screen, as a 2D image. For this, we first need to set our form as the active rendertarget, and use a SpriteBatch to render the 2D image:

device.SetRenderTarget(0, null);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
using (SpriteBatch sprite = new SpriteBatch(device))
{
    sprite.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.SaveState);
    sprite.Draw(texturedRenderedTo, new Vector2(0, 0), null, Color.White, 0, new Vector2(0, 0), 0.4f, SpriteEffects.None, 1);
    sprite.End();
}

The first line sets our form as the active rendertarget, after which it is cleared. Next, we create a SpriteBatch object, which we use to draw our texture to the screen! Notice we're using the SaveStateMode.SaveState directive, because otherwise our device would remain in the renderstates used by the spritebatch (such as alpha blending on by default). This way, the renderstates are saved before the spritebatch is activated, and are restored after the texture is rendered. To render the texture, we need to supply these arguments: the texture itself, the screen position where we want the texture to be drawn (we specify the top-left corner), which part of the texture to draw (null means the whole image), which color of light to shine on the texture (white means normal colors), the rotation, where to start drawing from the texture, and the scaling factor. The scaling factor (0.4f) is the only argument that really matters for our case. That's it! When you run this code, you should see the texture, scaled by a factor 0.4f, as in the image below.
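To recap, the whole render-to-texture flow from this chapter condensed into one place (using the XNA 1.x API calls introduced above):

```csharp
// Render-to-texture in XNA 1.x, condensed:
device.SetRenderTarget(0, renderTarget);        // 1. redirect all drawing into the rendertarget
// ... draw the scene (our shadow map) here ...
device.ResolveRenderTarget(0);                  // 2. signal that drawing into the target is finished
texturedRenderedTo = renderTarget.GetTexture(); // 3. grab the result as a Texture2D
device.SetRenderTarget(0, null);                // 4. switch back to rendering to the screen
```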

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Render_to_texture.php>

Now we have a Shadow Map, it's time to draw the scene as seen by our camera, and check whether the pixels should be shadowed or lit. This can be decided by sampling our Shadow Map at the correct location. As goal for this chapter, we'll be drawing the scene from our camera's point of view, with the color of each pixel being the color stored in the Shadow Map for that pixel.

First of all, this means we'll be drawing our scene using 2 different techniques: the first one will generate our Shadow Map which we store in a texture, the second one will draw the scene to the screen again. You can already add this second technique to your HLSL file:

technique ShadowedScene
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
        PixelShader = compile ps_2_0 ShadowedScenePixelShader();
    }
}

Because we need 2 brand new shader methods, it's good practice to define new structs that hold the output data of both methods:

struct SSceneVertexToPixel
{
    float4 Position : POSITION;
    float4 ShadowMapSamplingPos : TEXCOORD0;
};

struct SScenePixelToFrame
{
    float4 Color : COLOR0;
};

You can see the vertex shader will pass the (required) 2D position to the pixel shader, as well as the coordinate that will indicate which position of the Shadow Map corresponds to the current pixel that is being drawn. The pixel shader will again only output the pixel's color. Our pixel shader will be using the Shadow Map, which can be treated as a standard texture:

Texture xShadowMap;

sampler ShadowMapSampler = sampler_state
{
    texture = <xShadowMap>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = clamp;
    AddressV = clamp;
};

Let's start by coding the vertex shader. A quick reminder: this second technique has to draw the scene as seen by the camera. When this second technique is called, the first technique has already finished drawing the shadow map. Important: this shadow map was drawn as seen by the headlights, at the bottom left of our scene. So now, for every pixel seen by the camera, we have to find which part of the shadow map corresponds to this pixel. In other words, we need the 2D coordinates in the shadow map that correspond with every pixel being drawn. Because this can be difficult to grasp, I've added a small picture below:

I've indicated the points of interest with a red dot. Say we want to find the color of the dotted pixel of the wall. What we want to know is the 2D coordinate of the corresponding pixel in our Shadow Map. The first step would be to project the 3D coordinates of the red-dotted pixel of the wall to 2D camera space of the light. This can be done by our vertex shader and will be stored in the ShadowMapSamplingPos variable:

SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION)
{
    SSceneVertexToPixel Output = (SSceneVertexToPixel)0;

    Output.Position = mul(inPos, xWorldViewProjection);
    Output.ShadowMapSamplingPos = mul(inPos, xLightWorldViewProjection);

    return Output;
}

Now our pixel shader has access to the homogeneous screen coordinates of the pixel, as seen by the light. The `homogeneous' part means we need to divide the X, Y and Z components by the W component. If you're interested why, you can check out my articles on matrices in the `Extra Reading' section at the left of the site. For now, remember we need to divide by the W component. Because screen coordinates are 2D, we're not using the Z component, so the difficult part reduces to: `divide X and Y by W'. When we do this, we get a 2D coordinate with the X and Y components in the [-1, 1] region: coord (-1, 1) represents the top-left corner of the screen, and coord (1, -1) the bottom-right corner. Remember texture coordinates have to be in the [0, 1] region, so we need a simple remap: point (-1, 1) has to become (0, 0), and point (1, -1) has to become (1, 1).

Most people use the `Projective matrix' to do this, sometimes without knowing what it's doing :) For a change, we'll simply code the formula in our pixel shader:

SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
{
    SScenePixelToFrame Output = (SScenePixelToFrame)0;

    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = PSIn.ShadowMapSamplingPos.x/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;
    ProjectedTexCoords[1] = -PSIn.ShadowMapSamplingPos.y/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;

    Output.Color = tex2D(ShadowMapSampler, ProjectedTexCoords);
    return Output;
}
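The divide-and-remap is easy to verify with plain numbers. Here is a small sketch of the same formula in Python (the helper name is mine, just for illustration):

```python
def project_to_texcoords(x, y, w):
    """Map homogeneous clip-space coords to [0,1] texture coords,
    exactly like the two ProjectedTexCoords lines in the pixel shader."""
    u = x / w / 2.0 + 0.5
    v = -y / w / 2.0 + 0.5
    return u, v

# Corner checks after the perspective divide:
# top-left (-1, 1) maps to texel (0, 0), bottom-right (1, -1) to (1, 1).
print(project_to_texcoords(-2.0, 2.0, 2.0))   # (0.0, 0.0)
print(project_to_texcoords(2.0, -2.0, 2.0))   # (1.0, 1.0)
print(project_to_texcoords(0.0, 0.0, 2.0))    # (0.5, 0.5) -- screen center
```

Note the minus sign in front of Y: screen space has Y pointing up, while texture coordinates have V pointing down.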

You see we receive the homogeneous coordinates as input, and derive the 2D texture coordinates from them. With these coordinates, we sample the color of our Shadow Map at that position and set it as output color for the pixel. Because this all is not that trivial, you can find some extra discussion with sample code in the forum of this chapter. That's it for the HLSL code, now let's switch over to the XNA code. Because we now have 2 techniques, we have to draw our scene twice. Instead of doubling the code in our Draw method, we'll create a new method that takes the name of the technique, and draws the scene using this technique:

private void DrawScene(string technique)
{
    effect.CurrentTechnique = effect.Techniques[technique];
    effect.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * viewMatrix * projectionMatrix);
    effect.Parameters["xColoredTexture"].SetValue(StreetTexture);
    effect.Parameters["xLightPos"].SetValue(LightPos);
    effect.Parameters["xLightPower"].SetValue(LightPower);
    effect.Parameters["xWorld"].SetValue(Matrix.Identity);
    effect.Parameters["xLightWorldViewProjection"].SetValue(Matrix.Identity * lightViewProjectionMatrix);

    effect.Begin();
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Begin();
        device.VertexDeclaration = new VertexDeclaration(device, myownvertexformat.Elements);
        device.Vertices[0].SetSource(vb, 0, myownvertexformat.SizeInBytes);
        device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 18);
        pass.End();
    }
    effect.End();

    Matrix worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi) * Matrix.CreateTranslation(-3, 15, 0);
    DrawModel(technique, CarModel, worldMatrix, CarTextures, false);

    worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi * 5.0f / 8.0f) * Matrix.CreateTranslation(-28, -1.9f, 0);
    DrawModel(technique, CarModel, worldMatrix, CarTextures, false);

    worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 35, 1);
    DrawModel(technique, LamppostModel, worldMatrix, LamppostTextures, true);

    worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 5, 1);
    DrawModel(technique, LamppostModel, worldMatrix, LamppostTextures, true);
}

We've simply taken the code from the Draw method, and replaced every occurrence of "ShadowMap" by the technique argument. Now, to get the same result as last chapter, this code is all we need in our Draw method:

protected override void Draw(GameTime gameTime)
{
    device.SetRenderTarget(0, renderTarget);
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

    DrawScene("ShadowMap");

    device.ResolveRenderTarget(0);
    texturedRenderedTo = renderTarget.GetTexture();

    base.Draw(gameTime);
}

The first lines of this method set up the render target, draw the scene to it, and resolve the render target into a texture. All we have to do now is draw our scene using the ShadowedScene technique, so add this code to the Draw method:

device.SetRenderTarget(0, null);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
DrawScene("ShadowedScene");

That's all we need to do! We first set our window as the active render target, clear it, and render our scene. However, when you run this code, you get a nice black screen, because we haven't yet set the xShadowMap variable of the effect! So for the triangle strip, add this line of code to the DrawScene method:

effect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);

And for our models, in the DrawModel method:

currenteffect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);

Now when you run the code, you should see something like the image below. It looks like our normal scene, except for the colors: these are taken from the Shadow Map. When you look at a lamppost, you see it has the same color as in the Shadow Map. But when you look at the part of the wall behind the lamppost, you can see it has the same color as the lamppost! Because of this, next chapter we'll be able to detect the shadowed areas.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Projective_texturing.php>

Now that we can sample the correct position in our Shadow Map for each pixel in our scene, we have all the ingredients we need to create our shadows. We will be drawing our scene with real colors and lighting like we did before, so our vertex shader will need to pass the texture coordinates, the normal and the 3D position of every vertex to the interpolator and pixel shader. This chapter, we'll also test the real distance between the pixel and the light against the value found in the Shadow Map, so we'll also need this real distance. Thus we add these 4 variables to our SSceneVertexToPixel struct:

float4 RealDistance : TEXCOORD1;
float2 TexCoords : TEXCOORD2;
float3 Normal : TEXCOORD3;
float3 Position3D : TEXCOORD4;

We've already seen how to fill the last 3 variables. We can find the real distance between a vertex and the light by transforming the vertex into the camera space of the light and taking the Z component. To conform with the values stored in the Shadow Map, we need to divide this value by xMaxDepth. Remember we again need all the info sent to us in the vertex stream:

SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
{
    SSceneVertexToPixel Output = (SSceneVertexToPixel)0;

    Output.Position = mul(inPos, xWorldViewProjection);
    Output.ShadowMapSamplingPos = mul(inPos, xLightWorldViewProjection);
    Output.RealDistance = Output.ShadowMapSamplingPos.z/xMaxDepth;
    Output.Normal = normalize(mul(inNormal, (float3x3)xWorld));
    Output.Position3D = mul(inPos, xWorld);
    Output.TexCoords = inTexCoords;
    if (xUseBrownInsteadOfTextures)
        Output.TexCoords = float2(0, 0);

    return Output;
}

This already starts to look like a real vertex shader. Before moving on to the pixel shader, make sure to re-include the DotProduct method we created in one of the previous chapters:

float DotProduct(float4 LightPos, float3 Pos3D, float3 Normal) { float3 LightDir = normalize(LightPos - Pos3D); return dot(LightDir, Normal); }

This chapter, we'll only draw the pixels that are being lit by the headlights. This means we first have to check if the pixels are in view of the headlights, in other words: if the projected x and y coordinates are within the [0, 1] range. For this, we can use the HLSL `saturate' method, which clamps any value to this range. So if the value after clamping is not the same as the value before, we know the value wasn't in the [0, 1] range, and thus the pixel isn't in view of our headlights. In HLSL code:

if ((saturate(ProjectedTexCoords.x) == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords.y) == ProjectedTexCoords.y)) { }
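To see why this comparison detects the [0, 1] range, here is a small Python sketch of the same trick (the helper names are mine, just for illustration):

```python
def saturate(v):
    # HLSL's saturate: clamp a value to the [0, 1] range
    return min(max(v, 0.0), 1.0)

def in_light_view(u, v):
    # A value survives saturate unchanged only if it was already in [0, 1],
    # so this mirrors the if-test in the pixel shader.
    return saturate(u) == u and saturate(v) == v

print(in_light_view(0.3, 0.8))   # True: pixel projects inside the shadow map
print(in_light_view(1.2, 0.5))   # False: outside the headlights' view
```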

So if the pixel successfully enters this if-block, it can be lit by the light. Next, we will check whether the pixel isn't shadowed by another object. For this, we first retrieve the distance between pixel and light, as stored in the Shadow Map:

float StoredDepthInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).x;

Now we check if the real distance isn't bigger than this stored value:

if ((PSIn.RealDistance.x - 1.0f/100.0f) <= StoredDepthInShadowMap) { }

You see we subtracted a small bias of 1/100. Let me show you why. We store depth information as a color in our Shadow Map, and one color component of this map is stored in only 8 bits, so the smallest difference it can hold is 1/256. Now say the maximal distance in our scene is 40.0f; then the smallest representable difference in distance is 40/256 = 0.156f. This means that, even in the best case, all points with distances between 0.156f and 0.312f will be stored as 0.156f. Take for example a point at real distance 0.2f (which is stored as 0.156f in our Shadow Map!). If you checked whether the real and stored distances are exactly the same, if (0.2f == 0.156f) would FAIL, and you would think the point is in the shadow of another object. This is why we subtract a small bias from our real distance while comparing. OK, with that out of the way, we can actually draw our scene, exactly like we did a few chapters ago: by sampling the texture and multiplying that color by the dot-product factor and the power of the light. So we get as pixel shader:

SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
{
    SScenePixelToFrame Output = (SScenePixelToFrame)0;

    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = PSIn.ShadowMapSamplingPos.x/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;
    ProjectedTexCoords[1] = -PSIn.ShadowMapSamplingPos.y/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;

    if ((saturate(ProjectedTexCoords.x) == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords.y) == ProjectedTexCoords.y))
    {
        float StoredDepthInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).x;
        if ((PSIn.RealDistance.x - 1.0f/100.0f) <= StoredDepthInShadowMap)
        {
            float DiffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
            float4 ColorComponent = tex2D(ColoredTextureSampler, PSIn.TexCoords);
            Output.Color = ColorComponent*DiffuseLightingFactor*xLightPower;
        }
    }

    return Output;
}
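The quantization-and-bias reasoning above can be replayed with plain numbers. A small Python sketch (helper names are mine, assuming simple round-down 8-bit storage; the hardware's exact rounding may differ slightly):

```python
def store_depth_8bit(real_distance, max_depth=40.0):
    """Sketch of what an 8-bit shadow-map channel can hold: the normalized
    depth is rounded down to the nearest multiple of 1/256."""
    normalized = real_distance / max_depth
    return int(normalized * 256) / 256.0

def is_lit(real_distance, stored, max_depth=40.0, bias=1.0 / 100.0):
    # The shader's comparison: subtract a small bias before comparing.
    return (real_distance / max_depth - bias) <= stored

stored = store_depth_8bit(0.2)     # a point at 0.2f is stored coarsely
print(stored)                      # 0.00390625, not the exact 0.005
print((0.2 / 40.0) <= stored)      # False: the naive test wrongly shadows it
print(is_lit(0.2, stored))         # True: the bias rescues the pixel
```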

Now we've had the HLSL part for this chapter, let's move on to the XNA code. Because we're currently only working with one light, let's make it a bit more powerful:

LightPower = 2f;

Running this code should give you something like this:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Real_shadow.php>

OK, we've got a square light, casting shadows on our scene. How can we make it look more like a real light? A real light shines a round beam of light, so let's start with that. To determine the pixels that are part of the round beam, we could add some (difficult?) mathematical checks. That would be OK for a normal light, but for the headlights of a car? These should cast 2 beams, so even more maths... In my opinion, a much easier method is to throw in another texture that can be used by our pixel shader:

By looking at the image above I think you get the idea: only the white part will be lit. You can download it by right-clicking on it or by clicking here. We can already add the texture sampler to our HLSL code:

Texture xCarLightTexture;
sampler CarLightSampler = sampler_state { texture = <xCarLightTexture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = clamp; AddressV = clamp; };

Now we can simply sample the color value of this image, which gives us a value between 0 and 1. We simply need to multiply our final color by this value, so this is the core of our pixel shader:

float LightTextureFactor = tex2D(CarLightSampler, ProjectedTexCoords).r;
float DiffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
float4 ColorComponent = tex2D(ColoredTextureSampler, PSIn.TexCoords);
Output.Color = ColorComponent*LightTextureFactor*DiffuseLightingFactor*xLightPower;

That's it for the HLSL code! In our XNA code, we're going to load the texture file, so import it into your project and add this variable to our code:

Texture2D CarLight;

And this line to our LoadGraphicsContent method:

CarLight = content.Load<Texture2D> ("carlight");

Now we still need to pass the texture to our effect file. Because this doesn't change throughout the lifetime of our application, we can set it immediately after loading the effect, in the LoadEffect method:

effect.Parameters["xCarLightTexture"].SetValue(CarLight);

Et voila! That's all there is to it. You should see the image below:

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shaping_the_light.php>

Maybe I should have put this chapter before the other chapters on lighting, as ambient lighting is a lot easier: it is done by adding a constant amount of light to the scene. As a little extra, we'll also be adding the glow around the light bulbs of our lampposts. There is very little we have to do in our XNA code. We'll start by setting the amount of ambient lighting, so add this line to our LoadEffect method:

effect.Parameters["xAmbient"].SetValue(0.4f);

We'll also be needing the 3D position of our camera and both lampposts, as well as the ViewProjection matrix. We already store the position of our camera in a variable, but we still need to add a variable to hold the positions of our lampposts:

Vector4[] LamppostPos;

Which we fill in our UpdateLightData method:

LamppostPos = new Vector4[2];
LamppostPos[0] = new Vector4(4.0f, 5.0f, 11f, 1);
LamppostPos[1] = new Vector4(4.0f, 35.0f, 11f, 1);

Now we need to pass all variables to our effect, so add these lines to the DrawScene method:

effect.Parameters["xLamppostPos"].SetValue(LamppostPos);
effect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));
effect.Parameters["xViewProjection"].SetValue(viewMatrix * projectionMatrix);

We also need to set these variables for each of our models, so add these lines to the DrawModel method:

currenteffect.Parameters["xLamppostPos"].SetValue(LamppostPos);
currenteffect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));
currenteffect.Parameters["xViewProjection"].SetValue(viewMatrix * projectionMatrix);

Of course, we need to receive these variables in our HLSL file:

float4x4 xViewProjection;
float4 xCameraPos;
float4 xLamppostPos[2];
float xAmbient;

Now we will start by extending our last pixel shader, ShadowedScenePixelShader, by adding some ambient lighting to our scene. No matter where the pixel is on the screen, we want to add its correct color, multiplied by the xAmbient variable. So put this line before the last line of the shader, which returns the output structure:

Output.Color += ColorComponent*xAmbient;

Of course, if we want this line to work, we need to move the line where we define ColorComponent in front of the if-structure. So find this line:

float4 ColorComponent = tex2D(ColoredTextureSampler, PSIn.TexCoords);

And put it as one of the first lines of the method. Now when you compile the code and run the program from within Game Studio Express, you should see the same image as last chapter, but with some light added to the whole scene! Let's add the glow around the light bulbs. The concept is quite easy, but it shows calculations in 2D screen space, so it's worth including in a tutorial. We will calculate the 2D screen position of the light. Then we calculate the distance between the 2D screen position of the light and the 2D screen position of the current pixel. If this distance is smaller than a certain amount, we add some white to the final color value for that pixel. First we calculate the 2D screen position. We have done this before: we need our vertex shader to supply the 2D position of each vertex to the pixel shader. Our vertex shader already calculates this as Output.Position, but remember our pixel shader cannot use this as input (it was the small arrow in our flowchart). So add this element to our SSceneVertexToPixel struct:

float4 Position2D : TEXCOORD5;

And this line, which fills this output variable, to our vertex shader:

Output.Position2D = Output.Position;

So now we have the 2D position available in our pixel shader. Let's add this code to the bottom of our pixel shader, so we obtain values in the [0, 1] region, as seen in the chapter `Projective Texturing':

float2 ScreenPos;
ScreenPos[0] = PSIn.Position2D.x/PSIn.Position2D.w/2.0f + 0.5f;
ScreenPos[1] = -PSIn.Position2D.y/PSIn.Position2D.w/2.0f + 0.5f;

For the remainder of the code, we have to iterate through our 2 lampposts:

for (int CurrentLight=0; CurrentLight<2; CurrentLight++) { }

Next we will calculate the 2D position of the light. This is done exactly the same way: first we multiply the 3D position by the ViewProjection matrix (the lamppost positions are already in world space), after which we map the coordinates to the [0, 1] region, as we did in the `Projective Texturing' chapter.

float4 Light2DPos = mul(xLamppostPos[CurrentLight], xViewProjection);
float2 LightScreenPos;
LightScreenPos[0] = Light2DPos.x/Light2DPos.w/2.0f + 0.5f;
LightScreenPos[1] = -Light2DPos.y/Light2DPos.w/2.0f + 0.5f;

Now we have both the 2D positions of the light and the pixel, we can calculate the distance between them. We can use the `distance' method of HLSL:

float dist = distance(ScreenPos, LightScreenPos);

Before we move on, we have to define the maximal radius of the glow around our lights. If the light is further away from our camera, the radius of the glow around this light has to decrease, so it has to be inversely related to the distance between our camera and the light:

float radius = 3.5f/distance(xCameraPos, xLamppostPos[CurrentLight]);

Note that this time the distance between two 3D coordinates is being calculated, while in the previous case we were dealing with two 2D coordinates. Adjust the 3.5f value to resize your glows. Now we have all the data we need! We can check whether the distance between the pixel and the light is smaller than the maximal radius. If this is the case, we add an amount of light related to this distance:

if (dist < radius) { Output.Color.rgb += (radius-dist)*8.0f; }
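The glow test boils down to a few lines of arithmetic. A small Python sketch of the same idea (the helper name and sample values are mine, just for illustration):

```python
import math

def glow_contribution(pixel_screen_pos, light_screen_pos, camera_pos, lamppost_pos):
    """Sketch of the glow math: screen-space distance to the light versus a
    radius that shrinks as the camera moves away from the lamppost."""
    dist = math.dist(pixel_screen_pos, light_screen_pos)   # 2D screen distance
    radius = 3.5 / math.dist(camera_pos, lamppost_pos)     # 3D world distance
    if dist < radius:
        return (radius - dist) * 8.0   # white added to the pixel's rgb
    return 0.0

# A camera 35 units from the lamppost gives a glow radius of 0.1 in screen space:
print(glow_contribution((0.52, 0.5), (0.5, 0.5), (0, 0, 0), (0, 35, 0)))  # ~0.64
# A pixel outside that radius gets nothing:
print(glow_contribution((0.8, 0.5), (0.5, 0.5), (0, 0, 0), (0, 35, 0)))   # 0.0
```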

That's it! Running this code should give you the final image of the series.

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/2D_screen_processing.php>

When you take a look at the XNA code where we set the variables in our effect file, it is a bit disappointing. There is a lot of redundancy in the code that takes care of the interface to our HLSL file. First we set the xWorldViewProjection matrix, then the xWorld matrix, the xLightWorldViewProjection matrix, and finally the xViewProjection matrix. You see we pass the world information more than once. This way, every new transformation we would like to do in HLSL would require a new SetValue call, with the data being passed probably already contained in one of the previous sets of data passed to the HLSL code (which is now the case with the world, view and projection data). All of this data has, in fact, nothing to do with the purpose of the method, which is to position and draw the objects in our scene. If we had to pass a new matrix for each of our following lights, the method would definitely become a mess. That's why it's time we discuss preshaders. We would like to pass in the view and projection matrices of our camera and light, and the world matrix, only once. The question that arises is this: in the end, these matrices have to be multiplied, so where and when should that happen? Up to this point, for each object, the multiplications were done once by our XNA app, thus by the CPU (your Intel or AMD processor). This is shown on the left side of the image below: every frame of a game, the game is updated (responding to user input if there is any), the matrices are multiplied and sent to HLSL. Next, the vertex shader is called for every vertex.

But we don't want to do the multiplication in our XNA method, as this litters our code. If we performed the multiplications in our vertex shader, they would have to be executed by our GPU (your GeForce or Radeon) in the graphics card for every vertex it has to draw! This is shown in the middle column of the image above. This is where preshaders kick in: when the HLSL code is being compiled, the compiler checks for code that will be the same for every vertex. If we put our matrix multiplications inside our vertex shaders, that part of the HLSL code will be stripped away by the compiler and put in a `preshader'. This preshader is executed on the CPU before the vertex shader is actually called, and the resulting constants are passed to the HLSL code. This way, we can code the multiplications in our vertex shader (where they belong, as you'll see), yet they will be processed only once on the CPU. This is represented on the right part of the image. Enough theory, let's move on to the code. We'll split the matrices into pieces: instead of passing the WorldViewProjection matrix, we'll be passing the World matrix and the ViewProjection matrix (OK, we could also send the View and Projection matrices separately, but they are always used together). Let's start with our camera: this delivers the view and projection matrices. Because we'll never need them separated, we'll pass the combined matrix. Replace all lines in our DrawScene method that set effect parameters, and start by adding this one:

effect.Parameters["xCameraViewProjection"].SetValue(viewMatrix * projectionMatrix);

We can do the same for our light:

effect.Parameters["xLightViewProjection"].SetValue(lightViewProjectionMatrix);

Next in line is the World matrix:

effect.Parameters["xWorld"].SetValue(Matrix.Identity);

With these 3 matrices, we can make all necessary combinations in our effect file! Of course, we have some remaining variables we need to pass:

effect.Parameters["xColoredTexture"].SetValue(StreetTexture);
effect.Parameters["xLightPos"].SetValue(LightPos);
effect.Parameters["xLightPower"].SetValue(LightPower);
effect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);
effect.Parameters["xLamppostPos"].SetValue(LamppostPos);
effect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));

To do the same in the DrawModel code, we need to pass the parameters to currenteffect, this time with the appropriate World matrix and textures:

currenteffect.Parameters["xCameraViewProjection"].SetValue(viewMatrix * projectionMatrix);
currenteffect.Parameters["xLightViewProjection"].SetValue(lightViewProjectionMatrix);
currenteffect.Parameters["xWorld"].SetValue(worldMatrix);
currenteffect.Parameters["xColoredTexture"].SetValue(textures[i++]);
currenteffect.Parameters["xLightPos"].SetValue(LightPos);
currenteffect.Parameters["xLightPower"].SetValue(LightPower);
currenteffect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);
currenteffect.Parameters["xLamppostPos"].SetValue(LamppostPos);
currenteffect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));
currenteffect.Parameters["xUseBrownInsteadOfTextures"].SetValue(useBrownInsteadOfTextures);

That's it for the XNA code. Note that no data is sent twice. Now it's time to have a look at the HLSL code, where we have to process the matrix multiplications. First remove all of the old matrix constants in the HLSL code, and replace them with these:

float4x4 xCameraViewProjection;
float4x4 xLightViewProjection;
float4x4 xWorld;

Next in line is our first vertex shader. The ShadowMapVertexShader holds only 2 lines of code, but they use an old variable: xLightWorldViewProjection. Now we have to create this matrix ourselves, starting from the 3 matrices we just defined. Have a look at this code:

float4x4 preLightWorldViewProjection = mul(xWorld, xLightViewProjection);

From the world and lightviewprojection matrices, we can create the lightworldviewprojection matrix. This matrix will be identified as being the same for every vertex of the current object, so it will be extracted by the HLSL compiler to be run on the CPU. That's why I've called the result preLightWorldViewProjection. Each vertex will be multiplied by this preLightWorldViewProjection in the vertex shader. The total vertex shader becomes:

SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
{
    SMapVertexToPixel Output = (SMapVertexToPixel)0;

    float4x4 preLightWorldViewProjection = mul(xWorld, xLightViewProjection);
    Output.Position = mul(inPos, preLightWorldViewProjection);
    Output.Position2D = Output.Position;

    return Output;
}

Since our corresponding pixel shader doesn't use any matrix constants, we can move on to the second vertex shader, ShadowedSceneVertexShader. This one needs the WorldViewProjection matrices of both our camera and our light, which we need to calculate. This is done exactly the same as before:

float4x4 preLightWorldViewProjection = mul(xWorld, xLightViewProjection);
float4x4 preCameraWorldViewProjection = mul(xWorld, xCameraViewProjection);
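The reason the compiler may hoist these products into a preshader is plain matrix associativity: transforming a vertex by the combined matrix gives the same result as transforming it by each matrix in turn. A tiny Python sketch with made-up 2x2 matrices (row vectors, like HLSL's mul(v, M); all names and values are mine):

```python
def mat_mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def vec_mul(v, m):
    # row vector times 2x2 matrix, like HLSL's mul(v, M)
    return [sum(v[k] * m[k][j] for k in range(2)) for j in range(2)]

world = [[2.0, 0.0], [0.0, 3.0]]
view_proj = [[1.0, 1.0], [0.0, 1.0]]
vertices = [[1.0, 0.0], [0.0, 1.0], [2.0, 5.0]]

pre = mat_mul(world, view_proj)   # the "preshader" part, computed once per object
for v in vertices:
    per_vertex = vec_mul(vec_mul(v, world), view_proj)   # two muls per vertex
    hoisted = vec_mul(v, pre)                            # one mul per vertex
    assert per_vertex == hoisted   # (v * W) * VP == v * (W * VP)
```

Because (v * W) * VP == v * (W * VP) for every vertex, the product W * VP can be lifted out of the per-vertex work entirely, which is exactly what the preshader does.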

Now we need to replace the old matrix names by these new ones, for the remainder of the vertex shader, which becomes:

SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
{
    SSceneVertexToPixel Output = (SSceneVertexToPixel)0;

    float4x4 preLightWorldViewProjection = mul(xWorld, xLightViewProjection);
    float4x4 preCameraWorldViewProjection = mul(xWorld, xCameraViewProjection);

    Output.Position = mul(inPos, preCameraWorldViewProjection);
    Output.ShadowMapSamplingPos = mul(inPos, preLightWorldViewProjection);
    Output.RealDistance = Output.ShadowMapSamplingPos.z/xMaxDepth;
    Output.Normal = normalize(mul(inNormal, (float3x3)xWorld));
    Output.Position3D = mul(inPos, xWorld);
    Output.Position2D = Output.Position;
    Output.TexCoords = inTexCoords;
    if (xUseBrownInsteadOfTextures)
        Output.TexCoords = float2(0, 0);

    return Output;
}

The second pixel shader also requires a modification, as it still uses an old-style matrix: xViewProjection instead of xCameraViewProjection. Change that line to this:

float4 Light2DPos = mul(xLamppostPos[CurrentLight],xCameraViewProjection);

When you hit CTRL+S, FX Composer shouldn't give any error messages. When you run this code, you should get the same result as last chapter! Only this time, we've passed only the basic matrices to our effect and combined them in our shaders, which is a much more scalable approach. Before ending this chapter, let me give you some proof of my whole story. Using the command prompt, you can let the compiler show you what assembler code your HLSL file will be transformed into. Go to the directory that holds your .fx file, and type:

fxc /Tfx_2_0 OurHLSLfile.fx /Fc:output.fxc

In case your .fx file is named OurHLSLfile.fx, this will compile your HLSL code and put the assembler code in the file output.fxc. By default, the fxc compiler will enable preshaders. If you want to see the difference, you can tell the compiler to disable preshaders by using the /Od parameter:

fxc /Od /Tfx_2_0 OurHLSLfile.fx /Fc:output.fxc

Below you can find the assembly code of our first vertex shader. Because the total listing would be too large, I snipped out a part of it, but you should get the idea anyway:

In the left part, you will see all matrix multiplications are done in the vertex shader itself, which corresponds to the `No way' column of my first image on this page. This way, up to 67 vertex instructions have to be performed for each vertex, which is quite a lot. In the right part, you'll see these multiplications have been identified by the compiler and extracted into the preshader. In this case, only 6 shader instructions have to be performed for each vertex!

Pasted from <http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Preshaders.php>


This chapter we haven't introduced a new XNA technique, but we have discussed an elegant way to clean up the interface between our XNA code and our HLSL code. The XNA code passes only the basic matrices to the effect, and the HLSL code creates the matrices it needs. This approach is much cleaner and more scalable. Our HLSL code:

struct PixelToFrame
{
    float4 Color : COLOR0;
};

float4x4 xCameraViewProjection; float4x4 xLightViewProjection; float4x4 xWorld;

float4 xLightPos; float4 xCameraPos; float4 xLamppostPos[2]; float xAmbient; float xLightPower; float xMaxDepth; bool xUseBrownInsteadOfTextures;

Texture xColoredTexture; sampler ColoredTextureSampler = sampler_state { texture = <xColoredTexture> ; magfilter = LINEAR; minfilter = LINEAR; mipfilter=LINEAR; AddressU = mirror; AddressV = mirror;}; Texture xShadowMap; sampler ShadowMapSampler = sampler_state { texture = <xShadowMap> ; magfilter = LINEAR; minfilter = LINEAR; mipfilter=LINEAR; AddressU = clamp; AddressV = clamp;}; Texture xCarLightTexture; sampler CarLightSampler = sampler_state { texture = <xCarLightTexture> ; magfilter = LINEAR; minfilter=LINEAR; mipfilter = LINEAR; AddressU = clamp; AddressV = clamp;}; //------- Technique: ShadowMap -------struct SMapVertexToPixel { float4 Position : POSITION; float3 Position2D : TEXCOORD0; }; struct SMapPixelToFrame { float4 Color : COLOR0; }; SMapVertexToPixel ShadowMapVertexShader( float4 inPos : POSITION) { SMapVertexToPixel Output = (SMapVertexToPixel)0;

float4x4 preLightWorldViewProjection = mul (xWorld, xLightViewProjection); Output.Position = mul(inPos, preLightWorldViewProjection);

Output.Position2D = Output.Position; return Output; } SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn) { SMapPixelToFrame Output = (SMapPixelToFrame)0; Output.Color = PSIn.Position2D.z/xMaxDepth; return Output; } technique ShadowMap { pass Pass0 { VertexShader = compile vs_2_0 ShadowMapVertexShader(); PixelShader = compile ps_2_0 ShadowMapPixelShader(); } } //------- Technique: ShadowedScene -------struct SSceneVertexToPixel

{ float4 float4 float4 float2 float3 float3 float4 }; struct SScenePixelToFrame { float4 Color : COLOR0; }; SSceneVertexToPixel ShadowedSceneVertexShader( float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL) { SSceneVertexToPixel Output = (SSceneVertexToPixel)0; Position : POSITION; ShadowMapSamplingPos : TEXCOORD0; RealDistance : TEXCOORD1; TexCoords : TEXCOORD2; Normal : TEXCOORD3; Position3D : TEXCOORD4; Position2D : TEXCOORD5;

    float4x4 preLightWorldViewProjection = mul(xWorld, xLightViewProjection);
    float4x4 preCameraWorldViewProjection = mul(xWorld, xCameraViewProjection);
    Output.Position = mul(inPos, preCameraWorldViewProjection);
    Output.ShadowMapSamplingPos = mul(inPos, preLightWorldViewProjection);

    Output.RealDistance = Output.ShadowMapSamplingPos.z/xMaxDepth;
    Output.Normal = normalize(mul(inNormal, (float3x3)xWorld));
    Output.Position3D = mul(inPos, xWorld);
    Output.Position2D = Output.Position;
    Output.TexCoords = inTexCoords;
    if (xUseBrownInsteadOfTextures)
        Output.TexCoords = float2(0, 0);

    return Output;
}

float DotProduct(float4 LightPos, float3 Pos3D, float3 Normal)
{
    float3 LightDir = normalize(LightPos - Pos3D);
    return dot(LightDir, Normal);
}

SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
{
    SScenePixelToFrame Output = (SScenePixelToFrame)0;

    float2 ProjectedTexCoords;
    ProjectedTexCoords[0] = PSIn.ShadowMapSamplingPos.x/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;
    ProjectedTexCoords[1] = -PSIn.ShadowMapSamplingPos.y/PSIn.ShadowMapSamplingPos.w/2.0f + 0.5f;

    float4 ColorComponent = tex2D(ColoredTextureSampler, PSIn.TexCoords);
    if ((saturate(ProjectedTexCoords.x) == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords.y) == ProjectedTexCoords.y))
    {
        float StoredDepthInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).x;
        if ((PSIn.RealDistance.x - 1.0f/100.0f) <= StoredDepthInShadowMap)
        {
            float LightTextureFactor = tex2D(CarLightSampler, ProjectedTexCoords).r;
            float DiffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
            Output.Color = ColorComponent*LightTextureFactor*DiffuseLightingFactor*xLightPower;

        }
    }

    float2 ScreenPos;
    ScreenPos[0] = PSIn.Position2D.x/PSIn.Position2D.w/2.0f + 0.5f;
    ScreenPos[1] = -PSIn.Position2D.y/PSIn.Position2D.w/2.0f + 0.5f;

    Output.Color += ColorComponent*xAmbient;

    for (int CurrentLight = 0; CurrentLight < 2; CurrentLight++)
    {

        float4 Light2DPos = mul(xLamppostPos[CurrentLight], xCameraViewProjection);

        float2 LightScreenPos;
        LightScreenPos[0] = Light2DPos.x/Light2DPos.w/2.0f + 0.5f;
        LightScreenPos[1] = -Light2DPos.y/Light2DPos.w/2.0f + 0.5f;

        float dist = distance(ScreenPos, LightScreenPos);
        float radius = 3.5f/distance(xCameraPos, xLamppostPos[CurrentLight]);
        if (dist < radius)
        {
            Output.Color.rgb += (radius - dist)*8.0f;
        }
    }

    return Output;
}

technique ShadowedScene
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
        PixelShader = compile ps_2_0 ShadowedScenePixelShader();
    }
}
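The pixel shader uses the same mapping three times (ProjectedTexCoords, ScreenPos, LightScreenPos): divide x and y by w to get normalized device coordinates in [-1,1], then remap with /2 + 0.5 into the [0,1] texture range, flipping y because texture coordinates increase downward while NDC y increases upward. A minimal stand-alone C# sketch of that mapping (the class and method names here are illustrative, not part of the tutorial code):

```csharp
using System;

public static class ClipToTexDemo
{
    // Mirrors the shader math: u = x/w/2 + 0.5, v = -y/w/2 + 0.5
    public static (float u, float v) ClipToTexture(float x, float y, float w)
    {
        return (x / w / 2.0f + 0.5f, -y / w / 2.0f + 0.5f);
    }

    public static void Main()
    {
        // A point at the clip-space center samples the middle of the shadow map.
        Console.WriteLine(ClipToTexture(0f, 0f, 1f));   // (0.5, 0.5)
        // The top-left NDC corner (-1, +1) maps to texel (0, 0).
        Console.WriteLine(ClipToTexture(-1f, 1f, 1f));  // (0, 0)
    }
}
```

The saturate() check in the shader then simply tests whether the resulting u and v stayed inside [0,1], i.e. whether the pixel falls inside the light's view at all.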

And our cleaned XNA code:

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using System.IO;

namespace XNAtutorialSeries3
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        private struct myownvertexformat
        {
            private Vector3 Position;

            private Vector2 TexCoord;
            private Vector3 Normal;

            public myownvertexformat(Vector3 Position, Vector2 TexCoord, Vector3 Normal)
            {
                this.Position = Position;
                this.TexCoord = TexCoord;
                this.Normal = Normal;
            }

            public static VertexElement[] Elements =
            {
                new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
                new VertexElement(0, sizeof(float)*3, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
                new VertexElement(0, sizeof(float)*5, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Normal, 0),
            };
            public static int SizeInBytes = sizeof(float) * (3 + 2 + 3);
        }

        GraphicsDeviceManager graphics;
        ContentManager content;
        GraphicsDevice device;
        Effect effect;
        Vector3 CameraPos;
        Matrix viewMatrix;
        Matrix projectionMatrix;
        VertexBuffer vb;
        Texture2D StreetTexture;
        Model LamppostModel;
        Model CarModel;
        Texture2D[] CarTextures;
        Texture2D[] LamppostTextures;
        Vector4 LightPos;
        float LightPower;
        Matrix lightViewProjectionMatrix;
        RenderTarget2D renderTarget;
        Texture2D texturedRenderedTo;
        Texture2D CarLight;
        Vector4[] LamppostPos;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
            if (GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware).MaxPixelShaderProfile < ShaderProfile.PS_2_0)
                graphics.PreparingDeviceSettings += new EventHandler<PreparingDeviceSettingsEventArgs>(SetToReference);
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        private void SetUpVertices()
        {
            myownvertexformat[] vertices = new myownvertexformat[18];
            vertices[0] = new myownvertexformat(new Vector3(-20, -10, 0), new Vector2(-0.25f, 25.0f),

new Vector3(0, 0, 1));
            vertices[1] = new myownvertexformat(new Vector3(-20, 100, 0), new Vector2(-0.25f, 0.0f), new Vector3(0, 0, 1));
            vertices[2] = new myownvertexformat(new Vector3(2, -10, 0), new Vector2(0.25f, 25.0f), new Vector3(0, 0, 1));
            vertices[3] = new myownvertexformat(new Vector3(2, 100, 0), new Vector2(0.25f, 0.0f), new Vector3(0, 0, 1));
            vertices[4] = new myownvertexformat(new Vector3(2, -10, 0), new Vector2(0.25f, 25.0f), new Vector3(-1, 0, 0));
            vertices[5] = new myownvertexformat(new Vector3(2, 100, 0), new Vector2(0.25f, 0.0f), new Vector3(-1, 0, 0));
            vertices[6] = new myownvertexformat(new Vector3(2, -10, 1), new Vector2(0.375f, 25.0f), new Vector3(-1, 0, 0));
            vertices[7] = new myownvertexformat(new Vector3(2, 100, 1), new Vector2(0.375f, 0.0f), new Vector3(-1, 0, 0));
            vertices[8] = new myownvertexformat(new Vector3(2, -10, 1), new Vector2(0.375f, 25.0f), new Vector3(0, 0, 1));
            vertices[9] = new myownvertexformat(new Vector3(2, 100, 1), new Vector2(0.375f, 0.0f), new Vector3(0, 0, 1));
            vertices[10] = new myownvertexformat(new Vector3(3, -10, 1), new Vector2(0.5f, 25.0f), new Vector3(0, 0, 1));
            vertices[11] = new myownvertexformat(new Vector3(3, 100, 1), new Vector2(0.5f, 0.0f), new Vector3(0, 0, 1));
            vertices[12] = new myownvertexformat(new Vector3(13, -10, 1), new Vector2(0.75f, 25.0f), new Vector3(0, 0, 1));
            vertices[13] = new myownvertexformat(new Vector3(13, 100, 1), new Vector2(0.75f, 0.0f), new Vector3(0, 0, 1));
            vertices[14] = new myownvertexformat(new Vector3(13, -10, 1), new Vector2(0.75f, 25.0f), new Vector3(-1, 0, 0));
            vertices[15] = new myownvertexformat(new Vector3(13, 100, 1), new Vector2(0.75f, 0.0f), new Vector3(-1, 0, 0));
            vertices[16] = new myownvertexformat(new Vector3(13, -10, 21), new Vector2(1.25f, 25.0f), new Vector3(-1, 0, 0));
            vertices[17] = new myownvertexformat(new Vector3(13, 100, 21), new Vector2(1.25f, 0.0f), new Vector3(-1, 0, 0));

            vb = new VertexBuffer(device, myownvertexformat.SizeInBytes * 18, ResourceUsage.WriteOnly);
            vb.SetData(vertices);
        }

        private void SetUpXNADevice()
        {
            graphics.PreferredBackBufferWidth = 500;
            graphics.PreferredBackBufferHeight = 500;
            graphics.IsFullScreen = false;
            graphics.ApplyChanges();
            Window.Title = "Riemer's XNA Tutorials -- Series 3";

            device = graphics.GraphicsDevice;
            renderTarget = new RenderTarget2D(device, 512, 512, 1, SurfaceFormat.Color);
        }

        void SetToReference(object sender, PreparingDeviceSettingsEventArgs e)
        {
            e.GraphicsDeviceInformation.CreationOptions = CreateOptions.SoftwareVertexProcessing;
            e.GraphicsDeviceInformation.DeviceType = DeviceType.Reference;
            e.GraphicsDeviceInformation.PresentationParameters.MultiSampleType = MultiSampleType.None;
        }

        private void LoadEffect()
        {
            effect = content.Load<Effect>("OurHLSLfile");
            effect.Parameters["xMaxDepth"].SetValue(60);
            effect.Parameters["xCarLightTexture"].SetValue(CarLight);
            effect.Parameters["xAmbient"].SetValue(0.4f);

            CameraPos = new Vector3(-25, -18, 13);
            viewMatrix = Matrix.CreateLookAt(CameraPos, new Vector3(0, 12, 2), new Vector3(0, 0, 1));
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)this.Window.ClientBounds.Width / (float)this.Window.ClientBounds.Height, 1.0f, 200.0f);
        }

        protected override void LoadGraphicsContent(bool loadAllContent)
        {
            if (loadAllContent)
            {
                StreetTexture = content.Load<Texture2D>("streettexture");
                CarLight = content.Load<Texture2D>("carlight");
                SetUpXNADevice();
                LoadEffect();
                LoadModels();
                SetUpVertices();
            }
        }

        private void LoadModels()
        {
            LamppostModel = content.Load<Model>("lamppost");
            LamppostTextures = new Texture2D[7];
            int i = 0;
            foreach (ModelMesh mesh in LamppostModel.Meshes)
                foreach (BasicEffect currenteffect in mesh.Effects)
                    LamppostTextures[i++] = StreetTexture;
            foreach (ModelMesh modmesh in LamppostModel.Meshes)
                foreach (ModelMeshPart modmeshpart in modmesh.MeshParts)
                    modmeshpart.Effect = effect.Clone(device);

            CarModel = content.Load<Model>("racer");
            i = 0;
            CarTextures = new Texture2D[7];
            foreach (ModelMesh mesh in CarModel.Meshes)
                foreach (BasicEffect currenteffect in mesh.Effects)
                    CarTextures[i++] = currenteffect.Texture;
            foreach (ModelMesh modmesh in CarModel.Meshes)
                foreach (ModelMeshPart modmeshpart in modmesh.MeshParts)
                    modmeshpart.Effect = effect.Clone(device);
        }

        protected override void UnloadGraphicsContent(bool unloadAllContent)
        {
            if (unloadAllContent == true)
            {
                content.Unload();
            }
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            UpdateLightData();

            base.Update(gameTime);
        }

        private void UpdateLightData()
        {

            LightPos = new Vector4(-18, 2, 5, 1);
            LightPower = 1.7f;
            lightViewProjectionMatrix = Matrix.CreateLookAt(new Vector3(LightPos.X, LightPos.Y, LightPos.Z), new Vector3(-2, 10, -3), new Vector3(0, 0, 1)) * Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, (float)this.Window.ClientBounds.Width / (float)this.Window.ClientBounds.Height, 1f, 100f);

            LamppostPos = new Vector4[2];
            LamppostPos[0] = new Vector4(4.0f, 5.0f, 11.5f, 1);
            LamppostPos[1] = new Vector4(4.0f, 35.0f, 11.5f, 1);
        }

        private void DrawModel(string technique, Model currentmodel, Matrix worldMatrix, Texture2D[] textures, bool useBrownInsteadOfTextures)
        {
            int i = 0;
            foreach (ModelMesh modmesh in currentmodel.Meshes)
            {
                foreach (Effect currenteffect in modmesh.Effects)
                {

                    currenteffect.CurrentTechnique = effect.Techniques[technique];
                    currenteffect.Parameters["xCameraViewProjection"].SetValue(viewMatrix * projectionMatrix);
                    currenteffect.Parameters["xLightViewProjection"].SetValue(lightViewProjectionMatrix);
                    currenteffect.Parameters["xWorld"].SetValue(worldMatrix);
                    currenteffect.Parameters["xColoredTexture"].SetValue(textures[i++]);
                    currenteffect.Parameters["xLightPos"].SetValue(LightPos);
                    currenteffect.Parameters["xLightPower"].SetValue(LightPower);
                    currenteffect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);
                    currenteffect.Parameters["xLamppostPos"].SetValue(LamppostPos);
                    currenteffect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));
                    currenteffect.Parameters["xUseBrownInsteadOfTextures"].SetValue(useBrownInsteadOfTextures);
                }
                modmesh.Draw();
            }
        }

        private void DrawScene(string technique)
        {
            effect.CurrentTechnique = effect.Techniques[technique];
            effect.Parameters["xCameraViewProjection"].SetValue(viewMatrix * projectionMatrix);
            effect.Parameters["xLightViewProjection"].SetValue(lightViewProjectionMatrix);
            effect.Parameters["xWorld"].SetValue(Matrix.Identity);
            effect.Parameters["xColoredTexture"].SetValue(StreetTexture);
            effect.Parameters["xLightPos"].SetValue(LightPos);
            effect.Parameters["xLightPower"].SetValue(LightPower);
            effect.Parameters["xShadowMap"].SetValue(texturedRenderedTo);
            effect.Parameters["xLamppostPos"].SetValue(LamppostPos);
            effect.Parameters["xCameraPos"].SetValue(new Vector4(CameraPos.X, CameraPos.Y, CameraPos.Z, 1));

            effect.Begin();
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Begin();
                device.VertexDeclaration = new VertexDeclaration(device, myownvertexformat.Elements);
                device.Vertices[0].SetSource(vb, 0, myownvertexformat.SizeInBytes);

                device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 16);
                pass.End();
            }
            effect.End();

            Matrix worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi) * Matrix.CreateTranslation(-3, 15, 0);
            DrawModel(technique, CarModel, worldMatrix, CarTextures, false);

            worldMatrix = Matrix.CreateScale(4f, 4f, 4f) * Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi * 5.0f / 8.0f) * Matrix.CreateTranslation(-28, -1.9f, 0);
            DrawModel(technique, CarModel, worldMatrix, CarTextures, false);

            worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 35, 1);
            DrawModel(technique, LamppostModel, worldMatrix, LamppostTextures, true);

            worldMatrix = Matrix.CreateScale(0.05f, 0.05f, 0.05f) * Matrix.CreateRotationX((float)Math.PI / 2) * Matrix.CreateTranslation(4.0f, 5, 1);
            DrawModel(technique, LamppostModel, worldMatrix, LamppostTextures, true);
        }

        protected override void Draw(GameTime gameTime)
        {
            device.SetRenderTarget(0, renderTarget);
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
            DrawScene("ShadowMap");
            device.ResolveRenderTarget(0);
            texturedRenderedTo = renderTarget.GetTexture();

            device.SetRenderTarget(0, null);
            device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
            DrawScene("ShadowedScene");

            base.Draw(gameTime);
        }
    }
}
