
Creating Our First Fragment Shader

From the document OpenGL ES 2 for Android (pages 46-51)

Now that we’ve created a vertex shader, we have a subroutine for generating the final position of each vertex. We still need to create a subroutine for generating the final color of each fragment. Before we do that, let’s take some time to learn more about what a fragment is and how one is generated.

The Art of Rasterization

Your mobile display is composed of thousands to millions of small, individual components known as pixels. Each of these pixels appears to be capable of displaying a single color out of a range of millions of different colors. However, this is actually a visual trick: most displays can’t actually create millions of different colors, so instead each pixel is usually composed of just three individual subcomponents that emit red, green, and blue light, and because each pixel is so small, our eyes blend the red, green, and blue light together to create a huge range of possible colors. Put enough of these individual pixels together and we can show a page of text or the Mona Lisa.

OpenGL creates an image that we can map onto the pixels of our mobile display by breaking down each point, line, and triangle into a bunch of small fragments through a process known as rasterization. These fragments are analogous to the pixels on your mobile display, and each one also consists of a single solid color. To represent this color, each fragment has four components: red, green, and blue for color, and alpha for transparency. We’ll go into more detail about how this color model works in Section 2.6, The OpenGL Color Model, on page 34.

In Figure 11, Rasterization: generating fragments, on page 33, we can see an example of how OpenGL might rasterize a line onto a set of fragments. The display system usually maps these fragments directly to the pixels on the screen so that one fragment corresponds to one pixel. However, this isn’t always true: a super high-res device might want to use bigger fragments so that the GPU has less work to do.

Figure 11—Rasterization: generating fragments

Writing the Code

The main purpose of a fragment shader is to tell the GPU what the final color of each fragment should be. The fragment shader will be called once for every fragment of the primitive, so if a triangle maps onto 10,000 fragments, then the fragment shader will be called 10,000 times.

Let’s go ahead and write our fragment shader. Create a new file in your project, /res/raw/simple_fragment_shader.glsl, and add the following code:

AirHockey1/res/raw/simple_fragment_shader.glsl

precision mediump float;

uniform vec4 u_Color;

void main() {
    gl_FragColor = u_Color;
}
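In the next chapter we’ll read this file back in at runtime; on Android, files under /res/raw/ are served as plain text streams via openRawResource(). As a rough sketch (the class and method names here are illustrative, not the book’s code), the loading step boils down to reading an InputStream into a String:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

class ShaderSource {
    // Reads an entire text stream (e.g. the result of
    // context.getResources().openRawResource(...)) into a String,
    // preserving line breaks so GLSL compile errors keep sane line numbers.
    static String read(InputStream in) throws IOException {
        StringBuilder body = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line).append('\n');
        }
        return body.toString();
    }
}
```

On a device, the stream would come from `openRawResource(R.raw.simple_fragment_shader)`; the reading logic itself is plain Java.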

Precision Qualifiers

The first line at the top of the file defines the default precision for all floating point data types in the fragment shader. This is like choosing between float and double in our Java code.

We can choose between lowp, mediump, and highp, which correspond to low precision, medium precision, and high precision. However, highp is only supported in the fragment shader on some implementations.

Why didn’t we have to do this for the vertex shader? The vertex shader can also have its default precision changed, but because accuracy is more important when it comes to a vertex’s position, the OpenGL designers decided to set vertex shaders to the highest setting, highp, by default.


As you’ve probably guessed, higher precision data types are more accurate, but they come at the cost of decreased performance. For our fragment shader, we’ll select mediump for maximum compatibility and as a good tradeoff between speed and quality.
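The default can also be overridden for individual variables. As a sketch (not code we’ll use in the project, and `v_Depth` is a hypothetical variable), a shader could mix precisions like this:

```glsl
precision mediump float;       // default for all float types below

uniform lowp vec4 u_Color;     // colors rarely need more than lowp
varying highp float v_Depth;   // hypothetical value needing full precision;
                               // remember highp may be unsupported in
                               // fragment shaders on some implementations
```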

Generating the Fragment’s Color

The rest of the fragment shader is similar to the vertex shader we defined earlier. This time, we pass in a uniform called u_Color. Unlike an attribute that is set on each vertex, a uniform keeps the same value for all vertices until we change it again. Like the attribute we were using for position in the vertex shader, u_Color is also a four-component vector, and in the context of a color, its four components correspond to red, green, blue, and alpha.

We then define main(), the main entry point to the shader. It copies the color that we’ve defined in our uniform to the special output variable gl_FragColor. Our shader must write something to gl_FragColor. OpenGL will use this color as the final color for the current fragment.
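To make the uniform-versus-attribute distinction concrete, here is a plain-Java sketch (the names are illustrative; this is not the OpenGL API) of what `gl_FragColor = u_Color` means across a batch of fragments — one uniform value feeds every fragment:

```java
class UniformDemo {
    // Simulates gl_FragColor = u_Color: every fragment in the batch
    // receives the same four-component uniform color.
    static float[][] shadeFragments(float[] uColor, int fragmentCount) {
        float[][] fragColors = new float[fragmentCount][];
        for (int i = 0; i < fragmentCount; i++) {
            fragColors[i] = uColor.clone(); // identical color for all fragments
        }
        return fragColors;
    }
}
```

An attribute, by contrast, would supply a potentially different value per vertex; we’ll see that in Chapter 4.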

2.6 The OpenGL Color Model

OpenGL uses the additive RGB color model, which works with just the three primary colors: red, green, and blue. Many colors can be created by mixing these primary colors together in various proportions. For example, red and green together create yellow, red and blue together create magenta, and blue and green together create cyan. Add red, green, and blue together and you get white (as seen in the following figure).
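In RGB terms, these mixes are just component-wise addition. A minimal sketch (clamping each component to [0, 1], as OpenGL does):

```java
class AdditiveColor {
    // Mixes two RGB colors by adding their components,
    // clamping each result to the [0, 1] range.
    static float[] add(float[] a, float[] b) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = Math.min(1f, a[i] + b[i]);
        }
        return out;
    }
}
```

For example, red {1, 0, 0} plus green {0, 1, 0} gives yellow {1, 1, 0}, and adding blue to that gives white {1, 1, 1}.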

Figure 12—The RGB additive color model


This model works differently than the subtractive paint model you might have learned about in school: in the subtractive paint model, adding blue and yellow makes green, and adding a bunch of colors together creates a dark brown or black. This is because paint does not emit light; it absorbs it. The more colors of paint we use, the more light is absorbed and the darker the paint appears.

The additive RGB model follows the properties of light itself: when two beams of light of different colors mix together, we don’t get a darker color; we get a brighter color. When we observe a rainbow in the sky after a heavy rainfall, we’re actually seeing all of the different colors of the visible light spectrum that can combine to make white.

For the curious, Wikipedia goes into a lot more detail.4

Mapping Colors to the Display

OpenGL assumes that these colors all have a linear relationship with each other: a red value of 0.5 should be twice as bright as a red value of 0.25, and a red value of 1 should be twice as bright as a red value of 0.5. These primary colors are clamped to the range [0, 1], with 0 representing the absence of that particular primary color and 1 representing the maximum strength for that color.
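That linearity means scaling a channel scales its brightness proportionally, with results clamped to [0, 1]. A small sketch:

```java
class LinearColor {
    // Scales a single color channel linearly,
    // clamping the result to the valid [0, 1] range.
    static float scale(float channel, float factor) {
        return Math.max(0f, Math.min(1f, channel * factor));
    }
}
```

Doubling 0.25 gives 0.5 (twice as bright); doubling 0.75 would give 1.5, which clamps to the maximum of 1.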

This color model maps well to the display hardware used by mobile devices and computers (however, in The Nonlinear Nature of Your Display, on page 269, we’ll learn that the mapping isn’t quite one to one). These displays almost always use the three primary colors of red, green, and blue (some may include yellow as an additional primary color, for a “purer yellow”), with 0 mapping to an unlit pixel component and 1 mapping to full brightness for that color.

With this color model, almost every color that our eyes can see can be rendered in OpenGL and displayed on the screen.

We’ll learn more about using colors in Chapter 4, Adding Color and Shade, on page 59.

2.7 A Review

We spent most of the chapter just learning how to define our data and the shaders that will move this data along the OpenGL pipeline. Let’s take a moment to review the key concepts that we learned in this chapter:

4. http://en.wikipedia.org/wiki/RGB_color_model


• We first learned how to define a vertex attribute array and copy this array over to native memory so that OpenGL can access it.

• We then wrote a vertex and a fragment shader. We learned that a shader is just a special type of program that runs on the GPU.

In the next chapter, we’ll continue to build on the work in this chapter; by the end of that chapter, we’ll be able to see our air hockey table and we’ll also be ready to continue with further exercises. We’ll start out by learning how to read in and compile the shaders that we’ve defined. Because vertex and fragment shaders always go together, we’ll also learn how to link these shaders together into an OpenGL program.

Once we’ve compiled and linked our shaders together, we’ll be able to put everything together and tell OpenGL to draw the first version of our air hockey table to the screen.
