# Joey Vries - Learn OpenGL - Graphics Programming (Highlights)

![rw-book-cover|256](https://readwise-assets.s3.amazonaws.com/media/uploaded_book_covers/profile_155788/058bd65c-6530-4f73-bc57-14bf94606295.jpg)

## Metadata
**Review**:: [readwise.io](https://readwise.io/bookreview/50654819)
**Source**:: #from/readwise #from/zotero
**Zettel**:: #zettel/fleeting
**Status**:: #x
**Authors**:: [[Joey Vries]]
**Full Title**:: Learn OpenGL - Graphics Programming
**Category**:: #books #readwise/books
**Category Icon**:: 📚
**Highlighted**:: [[2025-04-18]]
**Created**:: [[2025-04-19]]

## Highlights
- this book is geared at core-profile OpenGL version 3.3 ([Page 11](zotero://open-pdf/library/items/VIJZ3LLB?page=10&annotation=W8TX3SH7)) ^879001828 #key
- OpenGL is by itself a large state machine: a collection of variables that define how OpenGL should currently operate. The state of OpenGL is commonly referred to as the OpenGL context. ([Page 12](zotero://open-pdf/library/items/VIJZ3LLB?page=11&annotation=H38CH3HK)) ^879001829
- When working in OpenGL we will come across several state-changing functions that change the context and several state-using functions that perform some operations based on the current state of OpenGL. ([Page 12](zotero://open-pdf/library/items/VIJZ3LLB?page=11&annotation=DR5R22MC)) ^879001830
- Those libraries save us all the operating-system-specific work and give us a window and an OpenGL context to render in. Some of the more popular libraries are GLUT, SDL, SFML and GLFW. ([Page 14](zotero://open-pdf/library/items/VIJZ3LLB?page=13&annotation=N86NLVY3)) ^879001831
- glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); //glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); ([Page 20](zotero://open-pdf/library/items/VIJZ3LLB?page=19&annotation=LMZECXUC)) ^879001832
- Note that on Mac OS X you need to add glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); to your initialization code for it to work. ([Page 20](zotero://open-pdf/library/items/VIJZ3LLB?page=19&annotation=AAXG9E7P)) ^879001833
- GLFW gives us glfwGetProcAddress that defines the correct function based on which OS we’re compiling for. ([Page 21](zotero://open-pdf/library/items/VIJZ3LLB?page=20&annotation=C39FTC36)) ^879001834
- The graphics pipeline can be divided into two large parts: the first transforms your 3D coordinates into 2D coordinates and the second part transforms the 2D coordinates into actual colored pixels. ([Page 26](zotero://open-pdf/library/items/VIJZ3LLB?page=25&annotation=SPCR7LLM)) ^879001835
- A fragment in OpenGL is all the data required for OpenGL to render a single pixel. ([Page 27](zotero://open-pdf/library/items/VIJZ3LLB?page=26&annotation=M5JSM9R3)) ^879001836
- In modern OpenGL we are required to define at least a vertex and fragment shader of our own (there are no default vertex/fragment shaders on the GPU). ([Page 27](zotero://open-pdf/library/items/VIJZ3LLB?page=26&annotation=5AKHS3RS)) ^879001837
- OpenGL only processes 3D coordinates when they’re in a specific range between -1.0 and 1.0 on all 3 axes (x, y and z). ([Page 28](zotero://open-pdf/library/items/VIJZ3LLB?page=27&annotation=J6D9TCHH)) ^879001838
![](https://blog-1251771406.cos.ap-shanghai.myqcloud.com/uploads/202407/cd2d7b/20240704132550.png)
- OpenGL allows us to bind to several buffers at once as long as they have a different buffer type. ([Page 29](zotero://open-pdf/library/items/VIJZ3LLB?page=28&annotation=PFCWBTL8)) ^879001839
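A minimal sketch of that buffer workflow, assuming a `vertices` float array; two buffers can stay bound simultaneously because they use different targets:
```
unsigned int VBO, EBO;
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);         // vertex-data target
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO); // a second buffer on a different target
// copies `vertices` into the buffer currently bound to GL_ARRAY_BUFFER
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
```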
- With this knowledge we can tell OpenGL how it should interpret the vertex data (per vertex attribute) using glVertexAttribPointer ([Page 34](zotero://open-pdf/library/items/VIJZ3LLB?page=33&annotation=2ZIRGBXX)) ^879001840
- Each vertex attribute takes its data from memory managed by a VBO and which VBO it takes its data from (you can have multiple VBOs) is determined by the VBO currently bound to GL_ARRAY_BUFFER when calling glVertexAttribPointer. ([Page 35](zotero://open-pdf/library/items/VIJZ3LLB?page=34&annotation=UE66DGM2)) ^879001841
State machine
- Now that we specified how OpenGL should interpret the vertex data we should also enable the vertex attribute with glEnableVertexAttribArray, giving the vertex attribute location as its argument; vertex attributes are disabled by default. ([Page 35](zotero://open-pdf/library/items/VIJZ3LLB?page=34&annotation=TCDHQC5R)) ^879001842
- A vertex array object (also known as VAO) can be bound just like a vertex buffer object and any subsequent vertex attribute calls from that point on will be stored inside the VAO. This has the advantage that when configuring vertex attribute pointers you only have to make those calls once and whenever we want to draw the object, we can just bind the corresponding VAO. ([Page 35](zotero://open-pdf/library/items/VIJZ3LLB?page=34&annotation=2MMN2NUD)) ^879001843
- Usually when you have multiple objects you want to draw, you first generate/configure all the VAOs (and thus the required VBO and attribute pointers) and store those for later use. The moment we want to draw one of our objects, we take the corresponding VAO, bind it, then draw the object and unbind the VAO again. ([Page 36](zotero://open-pdf/library/items/VIJZ3LLB?page=35&annotation=GT83AUT3)) ^879001844
- An EBO is a buffer, just like a vertex buffer object, that stores indices that OpenGL uses to decide what vertices to draw. This so-called indexed drawing is exactly the solution to our problem. ([Page 38](zotero://open-pdf/library/items/VIJZ3LLB?page=37&annotation=9SPEPXA2)) ^879001845
- Similar to the VBO we bind the EBO and copy the indices into the buffer with glBufferData. Also, just like the VBO we want to place those calls between a bind and an unbind call, although this time we specify GL_ELEMENT_ARRAY_BUFFER as the buffer type. ([Page 38](zotero://open-pdf/library/items/VIJZ3LLB?page=37&annotation=E27MT7IT)) ^879001846
- Wireframe mode: To draw your triangles in wireframe mode, you can configure how OpenGL draws its primitives via glPolygonMode(GL_FRONT_AND_BACK, GL_LINE). ([Page 40](zotero://open-pdf/library/items/VIJZ3LLB?page=39&annotation=UGAZP3JN)) ^879001847
- OpenGL guarantees there are always at least 16 4-component vertex attributes available ([Page 42](zotero://open-pdf/library/items/VIJZ3LLB?page=41&annotation=9HPAIIY5)) ^879001848
- The vector datatype allows for some interesting and flexible component selection called swizzling. ([Page 43](zotero://open-pdf/library/items/VIJZ3LLB?page=42&annotation=G7Z4ZEQT)) ^879001849
- When the types and the names are equal on both sides OpenGL will link those variables together and then it is possible to send data between shaders (this is done when linking a program object). ([Page 44](zotero://open-pdf/library/items/VIJZ3LLB?page=43&annotation=VDF3L4YW)) ^879001850
- Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU. ([Page 45](zotero://open-pdf/library/items/VIJZ3LLB?page=44&annotation=PY7HAHCV)) ^879001851
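A sketch of the uniform workflow the next highlights detail, assuming a linked `shaderProgram` whose fragment shader declares `uniform vec4 ourColor`:
```
float greenValue = sin(glfwGetTime()) / 2.0f + 0.5f;             // varies over time
int location = glGetUniformLocation(shaderProgram, "ourColor");  // no glUseProgram needed here
glUseProgram(shaderProgram);                                     // but updating requires an active program
glUniform4f(location, 0.0f, greenValue, 0.0f, 1.0f);
```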
- Global, meaning that a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in the shader program. Second, whatever you set the uniform value to, uniforms will keep their values until they’re either reset or updated. ([Page 45](zotero://open-pdf/library/items/VIJZ3LLB?page=44&annotation=PPC88YKA)) ^879001852
- If you declare a uniform that isn’t used anywhere in your GLSL code the compiler will silently remove the variable from the compiled version which is the cause for several frustrating errors; keep this in mind! ([Page 45](zotero://open-pdf/library/items/VIJZ3LLB?page=44&annotation=PBZE2S9D)) ^879001853 #caveat
- Note that finding the uniform location does not require you to use the shader program first, but updating a uniform does require you to first use the program (by calling glUseProgram), because it sets the uniform on the currently active shader program. ([Page 46](zotero://open-pdf/library/items/VIJZ3LLB?page=45&annotation=FMKV9575)) ^879001854
- Fragment interpolation is applied to all the fragment shader’s input attributes. ([Page 50](zotero://open-pdf/library/items/VIJZ3LLB?page=49&annotation=JE8NV6MQ)) ^879001855
- Retrieving the texture color using texture coordinates is called sampling. Texture coordinates start at (0,0) for the lower left corner of a texture image to (1,1) for the upper right corner of a texture image. ([Page 54](zotero://open-pdf/library/items/VIJZ3LLB?page=53&annotation=G3TISZA8)) ^879001856
- Each of the aforementioned options can be set per coordinate axis (s, t (and r if you’re using 3D textures) equivalent to x,y,z) with the glTexParameter* function ([Page 56](zotero://open-pdf/library/items/VIJZ3LLB?page=55&annotation=ZH833L2Z)) ^879001857
- GL_NEAREST (also known as nearest neighbor or point filtering) is the default texture filtering method of OpenGL. ([Page 56](zotero://open-pdf/library/items/VIJZ3LLB?page=55&annotation=B6YBHM9A)) ^879001858
- Texture filtering can be set for magnifying and minifying operations (when scaling up or downwards) so you could for example use nearest neighbor filtering when textures are scaled downwards and linear filtering for upscaled textures. ([Page 57](zotero://open-pdf/library/items/VIJZ3LLB?page=56&annotation=Z9ATS56X)) ^879001859
- Creating a collection of mipmapped textures for each texture image is cumbersome to do manually, but luckily OpenGL is able to do all the work for us with a single call to glGenerateMipmap after we’ve created a texture. ([Page 58](zotero://open-pdf/library/items/VIJZ3LLB?page=57&annotation=LSU9JGFJ)) ^879001860
- A common mistake is to set one of the mipmap filtering options as the magnification filter. This doesn’t have any effect since mipmaps are primarily used for when textures get downscaled: texture magnification doesn’t use mipmaps and giving it a mipmap filtering option will generate an OpenGL GL_INVALID_ENUM error code. ([Page 58](zotero://open-pdf/library/items/VIJZ3LLB?page=57&annotation=KAS2XNGD)) ^879001861
- stb_image.h is a very popular single header image loading library by Sean Barrett that is able to load most popular file formats and is easy to integrate in your project(s). ([Page 59](zotero://open-pdf/library/items/VIJZ3LLB?page=58&annotation=Z5EIFU48)) ^879001862
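Putting these texture highlights together, a sketch of loading an image with stb_image and creating a mipmapped texture (the file name is hypothetical):
```
int width, height, nrChannels;
unsigned char *data = stbi_load("container.jpg", &width, &height, &nrChannels, 0);
unsigned int texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // never a mipmap option here
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D); // generates all mip levels for the bound texture
stbi_image_free(data);
```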
- Textures are generated with glTexImage2D ([Page 59](zotero://open-pdf/library/items/VIJZ3LLB?page=58&annotation=JKH5MC9T)) ^879001863
- The second argument specifies the mipmap level for which we want to create a texture for if you want to set each mipmap level manually, but we’ll leave it at the base level which is 0. ([Page 60](zotero://open-pdf/library/items/VIJZ3LLB?page=59&annotation=K4YWUJ9D)) ^879001864
- However, currently it only has the base-level of the texture image loaded and if we want to use mipmaps we have to specify all the different images manually (by continually incrementing the second argument) or, we could call glGenerateMipmap after generating the texture. This will automatically generate all the required mipmaps for the currently bound texture. ([Page 60](zotero://open-pdf/library/items/VIJZ3LLB?page=59&annotation=NYZP9YPF)) ^879001865
- GLSL has a built-in data-type for texture objects called a sampler that takes as a postfix the texture type we want e.g. sampler1D, sampler3D or in our case sampler2D. We can then add a texture to the fragment shader by simply declaring a uniform sampler2D that we later assign our texture to. ([Page 62](zotero://open-pdf/library/items/VIJZ3LLB?page=61&annotation=JW79Q8XC)) ^879001866
- All that’s left to do now is to bind the texture before calling glDrawElements and it will then automatically assign the texture to the fragment shader’s sampler ([Page 62](zotero://open-pdf/library/items/VIJZ3LLB?page=61&annotation=KLIIB6XL)) ^879001867
- Using glUniform1i we can actually assign a location value to the texture sampler so we can set multiple textures at once in a fragment shader. This location of a texture is more commonly known as a texture unit. ([Page 63](zotero://open-pdf/library/items/VIJZ3LLB?page=62&annotation=5VT5KI5G)) ^879001868
- The default texture unit for a texture is 0 which is the default active texture unit so we didn’t need to assign a location in the previous section; note that not all graphics drivers assign a default texture unit so the previous section may not have rendered for you. ([Page 63](zotero://open-pdf/library/items/VIJZ3LLB?page=62&annotation=LL333X77)) ^879001869
- After activating a texture unit, a subsequent glBindTexture call will bind that texture to the currently active texture unit. ([Page 63](zotero://open-pdf/library/items/VIJZ3LLB?page=62&annotation=Q2RANKVV)) ^879001870
- Texture unit GL_TEXTURE0 is always by default activated, so we didn’t have to activate any texture units in the previous example when using glBindTexture. ([Page 63](zotero://open-pdf/library/items/VIJZ3LLB?page=62&annotation=YDF8Z747)) ^879001871
- OpenGL should have at least 16 texture units for you to use which you can activate using GL_TEXTURE0 to GL_TEXTURE15. ([Page 64](zotero://open-pdf/library/items/VIJZ3LLB?page=63&annotation=X6CI9ZZR)) ^879001872
- We also have to tell OpenGL which texture unit each shader sampler belongs to by setting each sampler using glUniform1i. ([Page 65](zotero://open-pdf/library/items/VIJZ3LLB?page=64&annotation=NYWRG8KH)) ^879001873
- Luckily for us, stb_image.h can flip the y-axis during image loading by adding the following statement before loading any image: stbi_set_flip_vertically_on_load(true); ([Page 65](zotero://open-pdf/library/items/VIJZ3LLB?page=64&annotation=N77VUZ36)) ^879001874
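A sketch of binding two textures to separate texture units, as the highlights above describe (sampler names `texture1`/`texture2` are assumed):
```
glUseProgram(shaderProgram);
glUniform1i(glGetUniformLocation(shaderProgram, "texture1"), 0); // sampler -> unit 0
glUniform1i(glGetUniformLocation(shaderProgram, "texture2"), 1); // sampler -> unit 1

glActiveTexture(GL_TEXTURE0);          // activate a unit, then bind to it
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
```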
- The dot product is a component-wise multiplication where we add the results together. ([Page 70](zotero://open-pdf/library/items/VIJZ3LLB?page=69&annotation=8LT37LQL)) ^879001875
- Also, whenever the homogeneous coordinate is equal to 0, the vector is specifically known as a direction vector since a vector with a w coordinate of 0 cannot be translated. ([Page 75](zotero://open-pdf/library/items/VIJZ3LLB?page=74&annotation=R9CYMGFF)) ^879001876
- To truly prevent gimbal lock we have to represent rotations using quaternions, that are not only safer, but also more computationally friendly. ([Page 77](zotero://open-pdf/library/items/VIJZ3LLB?page=76&annotation=ZC2DA9AU)) ^879001877
- GLM stores its matrices’ data in a way that doesn’t always match OpenGL’s expectations so we first convert the data with GLM’s built-in function value_ptr. ([Page 79](zotero://open-pdf/library/items/VIJZ3LLB?page=78&annotation=ZEXQSQZY)) ^879001878
- Remember that the actual transformation order should be read in reverse: even though in code we first translate and then later rotate, the actual transformations first apply a rotation and then a translation. ([Page 80](zotero://open-pdf/library/items/VIJZ3LLB?page=79&annotation=8MAJRS42)) ^879001879
- There are a total of 5 different coordinate systems that are of importance to us: • Local space (or Object space) • World space • View space (or Eye space) • Clip space • Screen space ([Page 82](zotero://open-pdf/library/items/VIJZ3LLB?page=81&annotation=E2JUK2PM)) ^879001880
- To transform the coordinates from one space to the next coordinate space we’ll use several transformation matrices of which the most important are the model, view and projection matrix. ([Page 82](zotero://open-pdf/library/items/VIJZ3LLB?page=81&annotation=CLN4GAA6)) ^879001881
- The reason we’re transforming our vertices into all these different spaces is that some operations make more sense or are easier to use in certain coordinate systems. ([Page 83](zotero://open-pdf/library/items/VIJZ3LLB?page=82&annotation=89VFLJX6)) ^879001882
- The model matrix is a transformation matrix that translates, scales and/or rotates your object to place it in the world at a location/orientation they belong to. ([Page 83](zotero://open-pdf/library/items/VIJZ3LLB?page=82&annotation=DL7VNX9Y)) ^879001883
- These combined transformations are generally stored inside a view matrix that transforms world coordinates to view space. ([Page 84](zotero://open-pdf/library/items/VIJZ3LLB?page=83&annotation=UHU34IPM)) ^879001884
- To transform vertex coordinates from view to clip-space we define a so-called projection matrix that specifies a range of coordinates e.g. -1000 and 1000 in each dimension. ([Page 84](zotero://open-pdf/library/items/VIJZ3LLB?page=83&annotation=MBTP6DES)) ^879001885
- This viewing box a projection matrix creates is called a frustum and each coordinate that ends up inside this frustum will end up on the user’s screen. ([Page 84](zotero://open-pdf/library/items/VIJZ3LLB?page=83&annotation=SAUJ56NP)) ^879001886
- To create an orthographic projection matrix we make use of GLM’s built-in function glm::ortho ([Page 85](zotero://open-pdf/library/items/VIJZ3LLB?page=84&annotation=9ALTVNHZ)) ^879001887
- What glm::perspective does is again create a large frustum that defines the visible space, anything outside the frustum will not end up in the clip space volume and will thus become clipped. ([Page 86](zotero://open-pdf/library/items/VIJZ3LLB?page=85&annotation=78BPBPIY)) ^879001888
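A sketch of building the three matrices with GLM and uploading one via value_ptr (uniform name assumed; requires the glm/gtc/matrix_transform.hpp and glm/gtc/type_ptr.hpp headers):
```
glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 view  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -3.0f)); // move scene back
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "model"), 1, GL_FALSE, glm::value_ptr(model));
```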
- $V_{clip} = M_{projection} \cdot M_{view} \cdot M_{model} \cdot V_{local}$ ([Page 87](zotero://open-pdf/library/items/VIJZ3LLB?page=86&annotation=LKKF42FY)) ^879001889
- That is exactly what a view matrix does, we move the entire scene around inversed to where we want the camera to move. ([Page 88](zotero://open-pdf/library/items/VIJZ3LLB?page=87&annotation=ST6CK6QV)) ^879001890
- To understand why it’s called right-handed do the following: • Stretch your right-arm along the positive y-axis with your hand up top. • Let your thumb point to the right. • Let your pointing finger point up. • Now bend your middle finger downwards 90 degrees. ([Page 89](zotero://open-pdf/library/items/VIJZ3LLB?page=88&annotation=PJEQ8DIZ)) ^879001891
- OpenGL stores all its depth information in a z-buffer, also known as a depth buffer. GLFW automatically creates such a buffer for you (just like it has a color-buffer that stores the colors of the output image). ([Page 92](zotero://open-pdf/library/items/VIJZ3LLB?page=91&annotation=GQI6HYTL)) ^879001892
- However, if we want to make sure OpenGL actually performs the depth testing we first need to tell OpenGL we want to enable depth testing; it is disabled by default. We can enable depth testing using glEnable. ([Page 92](zotero://open-pdf/library/items/VIJZ3LLB?page=91&annotation=R79HE7FJ)) ^879001893
- Since we’re using a depth buffer we also want to clear the depth buffer before each render iteration (otherwise the depth information of the previous frame stays in the buffer). ([Page 92](zotero://open-pdf/library/items/VIJZ3LLB?page=91&annotation=HMW9JZ2Q)) ^879001894
- OpenGL by itself is not familiar with the concept of a camera, but we can try to simulate one by moving all objects in the scene in the reverse direction, giving the illusion that we are moving. ([Page 95](zotero://open-pdf/library/items/VIJZ3LLB?page=94&annotation=UYTRPMZE)) ^879001895
- To get the right vector we use a little trick by first specifying an up vector that points upwards (in world space). Then we do a cross product on the up vector and the direction vector from step 2. ([Page 96](zotero://open-pdf/library/items/VIJZ3LLB?page=95&annotation=SL9J6DJ5)) ^879001896
- Now that we have both the x-axis vector and the z-axis vector, retrieving the vector that points to the camera’s positive y-axis is relatively easy: we take the cross product of the right and direction vector ([Page 96](zotero://open-pdf/library/items/VIJZ3LLB?page=95&annotation=EIIRTBZT)) ^879001897
- Using these camera vectors we can now create a LookAt matrix that proves very useful for creating a camera. ([Page 96](zotero://open-pdf/library/items/VIJZ3LLB?page=95&annotation=XHM6VT4W)) ^879001898
- First we will tell GLFW that it should hide the cursor and capture it. Capturing a cursor means that, once the application has focus, the mouse cursor stays within the center of the window (unless the application loses focus or quits). ([Page 102](zotero://open-pdf/library/items/VIJZ3LLB?page=101&annotation=B62SXT7K)) ^879001899
- When the scroll_callback function is called we change the content of the globally declared fov variable. ([Page 105](zotero://open-pdf/library/items/VIJZ3LLB?page=104&annotation=QN9EALZ3)) ^879001900
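A sketch tying the camera pieces together; `cameraPos` and `cameraFront` are assumed globals:
```
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED); // hide and capture the cursor

// build the view matrix from position, target and world up
glm::mat4 view = glm::lookAt(cameraPos,                      // camera position
                             cameraPos + cameraFront,        // target point to look at
                             glm::vec3(0.0f, 1.0f, 0.0f));   // world up
```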
- glm::vec3 result = lightColor * toyColor; ([Page 110](zotero://open-pdf/library/items/VIJZ3LLB?page=109&annotation=UM9N3TJJ)) ^879001901
- We can thus define an object’s color as the amount of each color component it reflects from a light source. ([Page 111](zotero://open-pdf/library/items/VIJZ3LLB?page=110&annotation=YTXAJKCV)) ^879001902
- One of those models is called the Phong lighting model. The major building blocks of the Phong lighting model consist of 3 components: ambient, diffuse and specular lighting. ([Page 115](zotero://open-pdf/library/items/VIJZ3LLB?page=114&annotation=6JJNIC7W)) ^879001903
- Ambient lighting: even when it is dark there is usually still some light somewhere in the world (the moon, a distant light) so objects are almost never completely dark. To simulate this we use an ambient lighting constant that always gives the object some color. ([Page 115](zotero://open-pdf/library/items/VIJZ3LLB?page=114&annotation=R8DS3D68)) ^879001904
- Diffuse lighting: simulates the directional impact a light object has on an object. This is the most visually significant component of the lighting model. The more a part of an object faces the light source, the brighter it becomes. ([Page 115](zotero://open-pdf/library/items/VIJZ3LLB?page=114&annotation=FE26URMJ)) ^879001905
- Specular lighting: simulates the bright spot of a light that appears on shiny objects. Specular highlights are more inclined to the color of the light than the color of the object. ([Page 115](zotero://open-pdf/library/items/VIJZ3LLB?page=114&annotation=7EYZ5TDQ)) ^879001906
- Adding ambient lighting to the scene is really easy. We take the light’s color, multiply it with a small constant ambient factor, multiply this with the object’s color, and use that as the fragment’s color in the cube object’s shader: ([Page 115](zotero://open-pdf/library/items/VIJZ3LLB?page=114&annotation=ERHF7P7P)) ^879001907
- vec3 ambient = ambientStrength * lightColor; ([Page 116](zotero://open-pdf/library/items/VIJZ3LLB?page=115&annotation=A96QMBYB)) ^879001908
- Diffuse lighting gives the object more brightness the closer its fragments are aligned to the light rays from a light source. ([Page 116](zotero://open-pdf/library/items/VIJZ3LLB?page=115&annotation=ATSCB5YF)) ^879001909
- To measure the angle between the light ray and the fragment we use something called a normal vector, that is a vector perpendicular to the fragment’s surface (here depicted as a yellow arrow); we’ll get to that later. ([Page 116](zotero://open-pdf/library/items/VIJZ3LLB?page=115&annotation=JYFSYIY5)) ^879001910
- A normal vector is a (unit) vector that is perpendicular to the surface of a vertex. Since a vertex by itself has no surface (it’s just a single point in space) we retrieve a normal vector by using its surrounding vertices to figure out the surface of the vertex. ([Page 117](zotero://open-pdf/library/items/VIJZ3LLB?page=116&annotation=WNUI3NC5)) ^879001911
- Then the last thing we need is the actual fragment’s position. We’re going to do all the lighting calculations in world space so we want a vertex position that is in world space first. We can accomplish this by multiplying the vertex position attribute with the model matrix only (not the view and projection matrix) to transform it to world space coordinates. ([Page 119](zotero://open-pdf/library/items/VIJZ3LLB?page=118&annotation=A5FG468S)) ^879001912
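A fragment-shader sketch of the ambient and diffuse terms described above (input/uniform names such as `Normal`, `FragPos` and `lightPos` are assumed):
```
vec3 norm     = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);
float diff    = max(dot(norm, lightDir), 0.0); // darker as the angle grows
vec3 diffuse  = diff * lightColor;
vec3 ambient  = ambientStrength * lightColor;
vec3 result   = (ambient + diffuse) * objectColor;
FragColor     = vec4(result, 1.0);
```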
- Because we only care about their direction almost all the calculations are done with unit vectors since it simplifies most calculations (like the dot product). ([Page 120](zotero://open-pdf/library/items/VIJZ3LLB?page=119&annotation=SP3RIZI6)) ^879001913
- Next we need to calculate the diffuse impact of the light on the current fragment by taking the dot product between the norm and lightDir vectors. The resulting value is then multiplied with the light’s color to get the diffuse component, resulting in a darker diffuse component the greater the angle between both vectors ([Page 120](zotero://open-pdf/library/items/VIJZ3LLB?page=119&annotation=ZZIHAVRA)) ^879001914
- So if we want to multiply the normal vectors with a model matrix we want to remove the translation part of the matrix by taking the upper-left 3x3 matrix of the model matrix (note that we could also set the w component of a normal vector to 0 and multiply with the 4x4 matrix). ([Page 121](zotero://open-pdf/library/items/VIJZ3LLB?page=120&annotation=QTBDN9BV)) ^879001915
- Second, if the model matrix would perform a non-uniform scale, the vertices would be changed in such a way that the normal vector is not perpendicular to the surface anymore. ([Page 121](zotero://open-pdf/library/items/VIJZ3LLB?page=120&annotation=9849FNAR)) ^879001916
- The trick of fixing this behavior is to use a different model matrix specifically tailored for normal vectors. This matrix is called the normal matrix and uses a few linear algebraic operations to remove the effect of wrongly scaling the normal vectors. If you want to know how this matrix is calculated, I suggest the normal matrix article from LightHouse3D. ([Page 121](zotero://open-pdf/library/items/VIJZ3LLB?page=120&annotation=SM72LL8R)) ^879001917
- The normal matrix is defined as ’the transpose of the inverse of the upper-left 3x3 part of the model matrix’. ([Page 121](zotero://open-pdf/library/items/VIJZ3LLB?page=120&annotation=JQHK4MLE)) ^879001918
- Note that most resources define the normal matrix as derived from the model-view matrix, but since we’re working in world space (and not in view space) we will derive it from the model matrix. ([Page 121](zotero://open-pdf/library/items/VIJZ3LLB?page=120&annotation=4LJ2IM7K)) ^879001919
- Inverting matrices is a costly operation for shaders, so wherever possible try to avoid doing inverse operations since they have to be done on each vertex of your scene. For learning purposes this is fine, but for an efficient application you’ll likely want to calculate the normal matrix on the CPU and send it to the shaders via a uniform before drawing (just like the model matrix). ([Page 122](zotero://open-pdf/library/items/VIJZ3LLB?page=121&annotation=A28U2NW7)) ^879001920
- Similar to diffuse lighting, specular lighting is based on the light’s direction vector and the object’s normal vectors, but this time it is also based on the view direction e.g. from what direction the player is looking at the fragment. ([Page 122](zotero://open-pdf/library/items/VIJZ3LLB?page=121&annotation=4IJIFYA2)) ^879001921
- We calculate a reflection vector by reflecting the light direction around the normal vector. Then we calculate the angular distance between this reflection vector and the view direction. The closer the angle between them, the greater the impact of the specular light. The resulting effect is that we see a bit of a highlight when we’re looking at the light’s direction reflected via the surface. ([Page 122](zotero://open-pdf/library/items/VIJZ3LLB?page=121&annotation=7LF64PPJ)) ^879001922
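The specular term from these highlights as a shader sketch (`viewPos` and `specularStrength` assumed; 32 is the shininess):
```
vec3 viewDir    = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm); // reflect expects a vector pointing toward the surface
float spec      = pow(max(dot(viewDir, reflectDir), 0.0), 32);
vec3 specular   = specularStrength * spec * lightColor;
```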
- We chose to do the lighting calculations in world space, but most people tend to prefer doing lighting in view space. An advantage of view space is that the viewer’s position is always at (0,0,0) so you get the position of the viewer for free. ([Page 123](zotero://open-pdf/library/items/VIJZ3LLB?page=122&annotation=VFS2GAZV)) ^879001923
- This 32 value is the shininess value of the highlight. The higher the shininess value of an object, the more it properly reflects the light instead of scattering it all around and thus the smaller the highlight becomes. ([Page 123](zotero://open-pdf/library/items/VIJZ3LLB?page=122&annotation=IV9DTA3B)) ^879001924
- When the Phong lighting model is implemented in the vertex shader it is called Gouraud shading instead of Phong shading. Note that due to the interpolation the lighting looks somewhat off. The Phong shading gives much smoother lighting results. ([Page 125](zotero://open-pdf/library/items/VIJZ3LLB?page=124&annotation=5NYTMDRC)) ^879001925
- If we want to fill the struct we will have to set the individual uniforms, but prefixed with the struct’s name ([Page 127](zotero://open-pdf/library/items/VIJZ3LLB?page=126&annotation=DDG7K7E3)) ^879001926
- The object is way too bright. The reason for the object being too bright is that the ambient, diffuse and specular colors are reflected with full force from any light source. Light sources also have different intensities for their ambient, diffuse and specular components respectively. ([Page 128](zotero://open-pdf/library/items/VIJZ3LLB?page=127&annotation=YF5XUABJ)) ^879001927
- We’re just using a different name for the same underlying principle: using an image wrapped around an object that we can index for unique color values per fragment. In lit scenes this is usually called a diffuse map (this is generally what 3D artists call them before PBR) since a texture image represents all of the object’s diffuse colors. ([Page 131](zotero://open-pdf/library/items/VIJZ3LLB?page=130&annotation=XB6VS5K6)) ^879001928
- Using tools like Photoshop or Gimp it is relatively easy to transform a diffuse texture to a specular image like this by cutting out some parts, transforming it to black and white and increasing the brightness/contrast. ([Page 134](zotero://open-pdf/library/items/VIJZ3LLB?page=133&annotation=XLF2ZVBA)) ^879001929
- By using a specular map we can specify with enormous detail what parts of an object have shiny properties and we can even control the corresponding intensity. ([Page 135](zotero://open-pdf/library/items/VIJZ3LLB?page=134&annotation=STG42ZPZ)) ^879001930
- When a light source is modeled to be infinitely far away it is called a directional light since all its light rays have the same direction; it is independent of the location of the light source. ([Page 137](zotero://open-pdf/library/items/VIJZ3LLB?page=136&annotation=Y7DJLKW5)) ^879001931
- Reducing the intensity of light over the distance a light ray travels is generally called attenuation. ([Page 140](zotero://open-pdf/library/items/VIJZ3LLB?page=139&annotation=BIYN37CP)) ^879001932
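The attenuation formula referenced in the next highlight is $F_{att} = \frac{1.0}{K_c + K_l \cdot d + K_q \cdot d^2}$; a shader sketch, assuming a `light` struct with constant/linear/quadratic terms:
```
float distance    = length(light.position - FragPos);
float attenuation = 1.0 / (light.constant + light.linear * distance +
                           light.quadratic * (distance * distance));
ambient  *= attenuation; // scale all three components by the same factor
diffuse  *= attenuation;
specular *= attenuation;
```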
- The following formula calculates an attenuation value based on a fragment’s distance to the light source which we later multiply with the light’s intensity vector ([Page 140](zotero://open-pdf/library/items/VIJZ3LLB?page=139&annotation=8S4AQMYG)) ^879001933
- In our environment a distance of 32 to 100 is generally enough for most lights. ([Page 141](zotero://open-pdf/library/items/VIJZ3LLB?page=140&annotation=MG9YPCV9)) ^879001936
- A spotlight is a light source that is located somewhere in the environment that, instead of shooting light rays in all directions, only shoots them in a specific direction. ([Page 142](zotero://open-pdf/library/items/VIJZ3LLB?page=141&annotation=2J8CFAMN)) ^879001937
- Luckily for us, it isn’t too complicated. Setting the uniform values of an array of structs works just like setting the uniforms of a single struct, although this time we also have to define the appropriate index when querying the uniform’s location ([Page 151](zotero://open-pdf/library/items/VIJZ3LLB?page=150&annotation=2NLRERIJ)) ^879001938
- Depth testing is done in screen space after the fragment shader has run (and after the stencil test which we’ll get to in the next chapter). ([Page 174](zotero://open-pdf/library/items/VIJZ3LLB?page=173&annotation=ZD7E5EWY)) ^879001939
- The screen space coordinates relate directly to the viewport defined by OpenGL’s glViewport function and can be accessed via GLSL’s built-in gl_FragCoord variable in the fragment shader. The x and y components of gl_FragCoord represent the fragment’s screen-space coordinates (with (0,0) being the bottom-left corner). ([Page 174](zotero://open-pdf/library/items/VIJZ3LLB?page=173&annotation=Q7JDXHG4)) ^879001940
- If you have depth testing enabled you should also clear the depth buffer before each frame using GL_DEPTH_BUFFER_BIT ([Page 174](zotero://open-pdf/library/items/VIJZ3LLB?page=173&annotation=VMAL26GY)) ^879001941
- OpenGL allows us to disable writing to the depth buffer by setting its depth mask to GL_FALSE ([Page 174](zotero://open-pdf/library/items/VIJZ3LLB?page=173&annotation=GQZ485VJ)) ^879001942
- In practice however, a linear depth buffer like this is almost never used. Because of projection properties a non-linear depth equation is used that is proportional to 1/z. The result is that we get enormous precision when z is small and much less precision when z is far away. ([Page 176](zotero://open-pdf/library/items/VIJZ3LLB?page=175&annotation=MW4VVPS2)) ^879001943
- A stencil buffer (usually) contains 8 bits per stencil value that amounts to a total of 256 different stencil values per pixel. We can set these stencil values to values of our liking and we can discard or keep fragments whenever a particular fragment has a certain stencil value. ([Page 181](zotero://open-pdf/library/items/VIJZ3LLB?page=180&annotation=76HIWVMM)) ^879001944
- glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); ([Page 182](zotero://open-pdf/library/items/VIJZ3LLB?page=181&annotation=TJPAF5IB)) ^879001945 #caveat
The stencil mask also affects how the stencil buffer will be cleared. Remember to reset to the all-on mask: `glStencilMask(0xFF);`.
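A sketch of the object-outlining recipe these stencil highlights build toward; the draw helpers are hypothetical placeholders:
```
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);

// 1st pass: draw the objects normally, writing 1s into the stencil buffer
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilMask(0xFF);
DrawObjects();                        // hypothetical helper

// 2nd pass: draw slightly scaled objects only where the stencil is NOT 1
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilMask(0x00);                  // disable stencil writes
DrawScaledObjectsWithOutlineShader(); // hypothetical helper
glStencilMask(0xFF);                  // re-enable writes so glClear can clear the buffer
```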
- The glStencilOp(GLenum sfail, GLenum dpfail, GLenum dppass) contains three options of which we can specify for each option what action to take ([Page 182](zotero://open-pdf/library/items/VIJZ3LLB?page=181&annotation=MGFD9YW8)) ^879001946
Stencil test comes before the depth test in the pipeline.
- Stencil testing has many more purposes (beside outlining objects) like drawing textures inside a rear-view mirror so it neatly fits into the mirror shape, or rendering real-time shadows with a stencil buffer technique called shadow volumes. ([Page 186](zotero://open-pdf/library/items/VIJZ3LLB?page=185&annotation=PJP4QJE4)) ^879001947
Shadow volume - Wikipedia
- GLSL gives us the discard command that (once called) ensures the fragment will not be further processed and thus not end up into the color buffer. ([Page 189](zotero://open-pdf/library/items/VIJZ3LLB?page=188&annotation=WZ64KFIF)) ^879001948
- To prevent this, set the texture wrapping method to GL_CLAMP_TO_EDGE whenever you use alpha textures that you don’t want to repeat ([Page 190](zotero://open-pdf/library/items/VIJZ3LLB?page=189&annotation=2X5DV5K5)) ^879001949
- To render images with different levels of transparency we have to enable blending. Like most of OpenGL’s functionality we can enable blending by enabling GL_BLEND ([Page 190](zotero://open-pdf/library/items/VIJZ3LLB?page=189&annotation=H57F7PST)) ^879001950
- The glBlendFunc(GLenum sfactor, GLenum dfactor) function expects two parameters that set the option for the source and destination factor. ([Page 191](zotero://open-pdf/library/items/VIJZ3LLB?page=190&annotation=FLRP3LMP)) ^879001951
- Note that the constant color vector $\bar{C}_{constant}$ can be separately set via the glBlendColor function. ([Page 191](zotero://open-pdf/library/items/VIJZ3LLB?page=190&annotation=YTFBTVMS)) ^879001952
- It is also possible to set different options for the RGB and alpha channel individually using glBlendFuncSeparate ([Page 192](zotero://open-pdf/library/items/VIJZ3LLB?page=191&annotation=MDDYJDMM)) ^879001953
- Right now, the source and destination components are added together, but we could also subtract them if we want. glBlendEquation(GLenum mode) allows us to set this operation and has 5 possible options ([Page 192](zotero://open-pdf/library/items/VIJZ3LLB?page=191&annotation=RJ6ZIGC9)) ^879001954
- So we cannot simply render the windows however we want and expect the depth buffer to solve all our issues for us; this is also where blending gets a little nasty. To make sure the windows show the windows behind them, we have to draw the windows in the background first. This means we have to manually sort the windows from furthest to nearest and draw them accordingly ourselves. ([Page 193](zotero://open-pdf/library/items/VIJZ3LLB?page=192&annotation=97DHWPZS)) ^879001955 #caveat
- 1. Draw all opaque objects first. 2. Sort all the transparent objects. 3. Draw all the transparent objects in sorted order. ([Page 194](zotero://open-pdf/library/items/VIJZ3LLB?page=193&annotation=W2DNNTFI)) ^879001956
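A sketch of the sort-then-draw step using a std::map, whose keys are automatically kept in ascending order (`windowPositions` and `DrawWindow` are hypothetical):
```
#include <map>
std::map<float, glm::vec3> sorted;
for (const glm::vec3& pos : windowPositions)       // hypothetical container of window positions
{
    float distance = glm::length(cameraPos - pos); // distance to the viewer as the sort key
    sorted[distance] = pos;
}
// draw furthest to nearest: iterate the map in reverse
for (auto it = sorted.rbegin(); it != sorted.rend(); ++it)
    DrawWindow(it->second);                        // hypothetical draw helper
```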
- By default, triangles defined with counter-clockwise vertices are processed as front-facing triangles. ([Page 197](zotero://open-pdf/library/items/VIJZ3LLB?page=196&annotation=PKLSZ2K5)) ^879001957
- To enable face culling we only have to enable OpenGL’s GL_CULL_FACE option ([Page 198](zotero://open-pdf/library/items/VIJZ3LLB?page=197&annotation=6QIZADQB)) ^879001958
- Do note that this only really works with closed shapes like a cube. We do have to disable face culling again when we draw the grass leaves from the previous chapter, since their front and back faces should be visible. ([Page 198](zotero://open-pdf/library/items/VIJZ3LLB?page=197&annotation=Z7XRTB6A)) ^879001959
- OpenGL allows us to change the type of face we want to cull as well. What if we want to cull front faces and not the back faces? We can define this behavior with glCullFace ([Page 198](zotero://open-pdf/library/items/VIJZ3LLB?page=197&annotation=UUD2TNWV)) ^879001960
- We can also tell OpenGL we’d rather prefer clockwise faces as the front-faces instead of counter-clockwise faces via glFrontFace ([Page 198](zotero://open-pdf/library/items/VIJZ3LLB?page=197&annotation=AQYRYNC9)) ^879001961
- The rendering operations we’ve done so far were all done on top of the render buffers attached to the default framebuffer. The default framebuffer is created and configured when you create your window (GLFW does this for us). By creating our own framebuffer we can get an additional target to render to. ([Page 200](zotero://open-pdf/library/items/VIJZ3LLB?page=199&annotation=LXGDIGVM)) ^879001962
- To bind the framebuffer we use glBindFramebuffer ([Page 200](zotero://open-pdf/library/items/VIJZ3LLB?page=199&annotation=K4N9NWFH)) ^879001963
- It is also possible to bind a framebuffer to a read or write target specifically by binding to GL_READ_FRAMEBUFFER or GL_DRAW_FRAMEBUFFER respectively. ([Page 200](zotero://open-pdf/library/items/VIJZ3LLB?page=199&annotation=S26BGASF)) ^879001964
- After we’ve completed all requirements we can check if we actually successfully completed the framebuffer by calling glCheckFramebufferStatus with GL_FRAMEBUFFER. ([Page 200](zotero://open-pdf/library/items/VIJZ3LLB?page=199&annotation=FZC3EGUB)) ^879001965
- If you want all rendering operations to have a visual impact again on the main window we need to make the default framebuffer active by binding to 0 ([Page 201](zotero://open-pdf/library/items/VIJZ3LLB?page=200&annotation=7UJYRHML)) ^879001966
- When attaching a texture to a framebuffer, all rendering commands will write to the texture as if it was a normal color/depth or stencil buffer. The advantage of using textures is that the render output is stored inside the texture image that we can then easily use in our shaders. ([Page 201](zotero://open-pdf/library/items/VIJZ3LLB?page=200&annotation=AX3XJZFW)) ^879001967
- The main differences here are that we set the dimensions equal to the screen size (although this is not required) and we pass NULL as the texture’s data parameter. ([Page 201](zotero://open-pdf/library/items/VIJZ3LLB?page=200&annotation=DA63DSXW)) ^879001968
- If you want to render your whole screen to a texture of a smaller or larger size you need to call glViewport again (before rendering to your framebuffer) with the new dimensions of your texture, otherwise render commands will only fill part of the texture. ([Page 201](zotero://open-pdf/library/items/VIJZ3LLB?page=200&annotation=MLZKK9MK)) ^879001969
- glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0); ([Page 202](zotero://open-pdf/library/items/VIJZ3LLB?page=201&annotation=6LPENBXK)) ^879001970
- To attach a depth attachment we specify the attachment type as GL_DEPTH_ATTACHMENT. Note that the texture’s format and internalformat type should then become GL_DEPTH_COMPONENT to reflect the depth buffer’s storage format. To attach a stencil buffer you use GL_STENCIL_ATTACHMENT as the second argument and specify the texture’s formats as GL_STENCIL_INDEX. ([Page 202](zotero://open-pdf/library/items/VIJZ3LLB?page=201&annotation=78IV59JA)) ^879001971
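Pulling the framebuffer highlights together, a sketch of a complete framebuffer with a color texture and a depth-stencil renderbuffer (the renderbuffer object is introduced in the next highlights):
```
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// color attachment: an empty texture the size of the screen
unsigned int texColor;
glGenTextures(1, &texColor);
glBindTexture(GL_TEXTURE_2D, texColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texColor, 0);

// depth + stencil attachment as a renderbuffer (we never sample from it)
unsigned int rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 800, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cout << "Framebuffer is not complete!" << std::endl; // requires <iostream>
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
```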
- glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 800, 600, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, texture, 0); ([Page 202](zotero://open-pdf/library/items/VIJZ3LLB?page=201&annotation=X4FI3BHL)) ^879001972
- Just like a texture image, a renderbuffer object is an actual buffer e.g. an array of bytes, integers, pixels or whatever. However, a renderbuffer object can not be directly read from. This gives it the added advantage that OpenGL can do a few memory optimizations that can give it a performance edge over textures for off-screen rendering to a framebuffer. ([Page 202](zotero://open-pdf/library/items/VIJZ3LLB?page=201&annotation=QVMQ8TU6)) ^879001973
- When we’re not sampling from these buffers, a renderbuffer object is generally preferred. ([Page 203](zotero://open-pdf/library/items/VIJZ3LLB?page=202&annotation=3HCGVIQB)) ^879001974
- Here we’ve chosen GL_DEPTH24_STENCIL8 as the internal format, which holds both the depth and stencil buffer with 24 and 8 bits respectively. ([Page 203](zotero://open-pdf/library/items/VIJZ3LLB?page=202&annotation=I3BQWILQ)) ^879001975
- The general rule is that if you never need to sample data from a specific buffer, it is wise to use a renderbuffer object for that specific buffer. If you need to sample data from a specific buffer like colors or depth values, you should use a texture attachment instead. ([Page 203](zotero://open-pdf/library/items/VIJZ3LLB?page=202&annotation=PFCQKCKE)) ^879001976
- We also want to make sure OpenGL is able to do depth testing (and optionally stencil testing) so we have to make sure to add a depth (and stencil) attachment to the framebuffer. Since we’ll only be sampling the color buffer and not the other buffers we can create a renderbuffer object for this purpose. ([Page 204](zotero://open-pdf/library/items/VIJZ3LLB?page=203&annotation=7JYV8W3A)) ^879001977
Attaching a depth (and stencil) buffer is required for depth and stencil testing to work.
- float average = 0.2126 * FragColor.r + 0.7152 * FragColor.g + 0.0722 * FragColor.b; ([Page 208](zotero://open-pdf/library/items/VIJZ3LLB?page=207&annotation=ME2TQHPZ)) ^879001978
- A kernel (or convolution matrix) is a small matrix-like array of values centered on the current pixel that multiplies surrounding pixel values by its kernel values and adds them all together to form a single value. ([Page 208](zotero://open-pdf/library/items/VIJZ3LLB?page=207&annotation=EHFL5VC7)) ^879001979
- float kernel[9] = float[]( 1.0 / 16, 2.0 / 16, 1.0 / 16, 2.0 / 16, 4.0 / 16, 2.0 / 16, 1.0 / 16, 2.0 / 16, 1.0 / 16 ); ([Page 210](zotero://open-pdf/library/items/VIJZ3LLB?page=209&annotation=3LBZRKTK)) ^879001980
Blur kernel
- Below you can find an edge-detection kernel that is similar to the sharpen kernel ([Page 211](zotero://open-pdf/library/items/VIJZ3LLB?page=210&annotation=YV7L4GB7)) ^879001981
↩︎
```
 1  1  1
 1 -8  1
 1  1  1
```
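A fragment-shader sketch of applying a 3x3 kernel around the current texel, as the kernel highlights describe (`kernel`, `screenTexture` and `TexCoords` are assumed; the offset value is hypothetical):
```
const float offset = 1.0 / 300.0; // hypothetical texel offset
vec2 offsets[9] = vec2[](
    vec2(-offset,  offset), vec2(0.0,  offset), vec2(offset,  offset),
    vec2(-offset,  0.0),    vec2(0.0,  0.0),    vec2(offset,  0.0),
    vec2(-offset, -offset), vec2(0.0, -offset), vec2(offset, -offset)
);
vec3 col = vec3(0.0);
for (int i = 0; i < 9; i++) // weight each surrounding sample by its kernel value
    col += kernel[i] * vec3(texture(screenTexture, TexCoords.st + offsets[i]));
FragColor = vec4(col, 1.0);
```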
- Because a cubemap contains 6 textures, one for each face, we have to call glTexImage2D six times with their parameters set similarly to the previous chapters. ([Page 212](zotero://open-pdf/library/items/VIJZ3LLB?page=211&annotation=T92WTK4T)) ^879001982
- Like many of OpenGL’s enums, their behind-the-scenes int value is linearly incremented, so if we were to have an array or vector of texture locations we could loop over them by starting with GL_TEXTURE_CUBE_MAP_POSITIVE_X and incrementing the enum by 1 each iteration, effectively looping through all the texture targets ([Page 213](zotero://open-pdf/library/items/VIJZ3LLB?page=212&annotation=REK6A4KA)) ^879001983
Order: right, left, top, bottom, back, front
- Within the fragment shader we also have to use a different sampler of the type samplerCube that we sample from using the texture function, but this time using a vec3 direction vector instead of a vec2. ([Page 213](zotero://open-pdf/library/items/VIJZ3LLB?page=212&annotation=ZDYINWY5)) ^879001984
- A cubemap used to texture a 3D cube can be sampled using the local positions of the cube as its texture coordinates. ([Page 216](zotero://open-pdf/library/items/VIJZ3LLB?page=215&annotation=YVAKV2YJ)) ^879001985
- To draw the skybox we’re going to draw it as the first object in the scene and disable depth writing. This way the skybox will always be drawn at the background of all the other objects since the unit cube is most likely smaller than the rest of the scene. ([Page 217](zotero://open-pdf/library/items/VIJZ3LLB?page=216&annotation=IYRFXJQ5)) ^879001986
- You may remember from the Basic Lighting chapter that we can remove the translation section of transformation matrices by taking the upper-left 3x3 matrix of the 4x4 matrix. We can achieve this by converting the view matrix to a 3x3 matrix (removing translation) and converting it back to a 4x4 matrix ([Page 217](zotero://open-pdf/library/items/VIJZ3LLB?page=216&annotation=E5RBM7P8)) ^879001987
- The problem is that the skybox will most likely render on top of all other objects since it’s only a 1x1x1 cube, succeeding most depth tests. ([Page 218](zotero://open-pdf/library/items/VIJZ3LLB?page=217&annotation=V8V4ADD8)) ^879001988
The 1x1x1 cube stands in for the whole world.
- We need to trick the depth buffer into believing that the skybox has the maximum depth value of 1.0 so that it fails the depth test wherever there’s a different object in front of it. ([Page 218](zotero://open-pdf/library/items/VIJZ3LLB?page=217&annotation=6JW7889U)) ^879001989
1.0 is the farthest depth value.
- In the Coordinate Systems chapter we said that perspective division is performed after the vertex shader has run, dividing the gl_Position’s xyz coordinates by its w component. We also know from the Depth Testing chapter that the z component of the resulting division is equal to that vertex’s depth value. Using this information we can set the z component of the output position equal to its w component which will result in a z component that is always equal to 1.0, because when the perspective division is applied its z component translates to w / w = 1.0 ([Page 218](zotero://open-pdf/library/items/VIJZ3LLB?page=217&annotation=AV3ZZXVJ)) ^879001990
`gl_Position = pos.xyww;`
- Using a cubemap with an environment, we could give objects reflective or refractive properties. Techniques that use an environment cubemap like this are called environment mapping techniques and the two most popular ones are reflection and refraction. ([Page 219](zotero://open-pdf/library/items/VIJZ3LLB?page=218&annotation=PJZX7BR5)) ^879001991
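A fragment-shader sketch of cubemap reflection, the first of the two environment-mapping techniques (`Position`, `Normal`, `cameraPos` and the `skybox` sampler are assumed):
```
vec3 I = normalize(Position - cameraPos); // view direction, from the camera to the fragment
vec3 R = reflect(I, normalize(Normal));   // reflect it around the surface normal
FragColor = vec4(texture(skybox, R).rgb, 1.0);
```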
- Using framebuffers it is possible to create a texture of the scene for all 6 different angles from the object in question and store those in a cubemap each frame. We can then use this (dynamically generated) cubemap to create realistic reflection and refractive surfaces that include all other objects. This is called dynamic environment mapping, because we dynamically create a cubemap of an object’s surroundings and use that as its environment map. ([Page 223](zotero://open-pdf/library/items/VIJZ3LLB?page=222&annotation=YHMG2S2F)) ^879001992
- Instead of filling the entire buffer with one function call we can also fill specific regions of the buffer by calling glBufferSubData. ([Page 224](zotero://open-pdf/library/items/VIJZ3LLB?page=223&annotation=Q95GK7NW)) ^879001993
- By calling glMapBuffer OpenGL returns a pointer to the currently bound buffer’s memory for us to operate on ([Page 224](zotero://open-pdf/library/items/VIJZ3LLB?page=223&annotation=BNKI6DVH)) ^879001994
- By telling OpenGL we’re finished with the pointer operations via glUnmapBuffer, OpenGL knows you’re done. ([Page 224](zotero://open-pdf/library/items/VIJZ3LLB?page=223&annotation=P3CK8NAC)) ^879001995 #caveat
Make sure to unmap the buffer.
- Using glVertexAttribPointer we were able to specify the attribute layout of the vertex array buffer’s content. Within the vertex array buffer we interleaved the attributes; that is, we placed the position, normal and/or texture coordinates next to each other in memory for each vertex. ([Page 224](zotero://open-pdf/library/items/VIJZ3LLB?page=223&annotation=FPPNBDU7)) ^879001996
- When loading vertex data from file you generally retrieve an array of positions, an array of normals and/or an array of texture coordinates. It may cost some effort to combine these arrays into one large array of interleaved data. Taking the batching approach is then an easier solution that we can easily implement using glBufferSubData ([Page 225](zotero://open-pdf/library/items/VIJZ3LLB?page=224&annotation=T8L5M9PQ)) ^879001997
However, vertices in Wavefront (.obj) files have different indices for position, normal and texture coordinates. It's possible to map the buffer as a uniform and use `gl_VertexID` to fetch the normal and texture coordinates in the vertex shader.
↩︎
```
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)(sizeof(positions)));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)(sizeof(positions) + sizeof(normals)));
```
- However, the interleaved approach is still the recommended approach as the vertex attributes for each vertex shader run are then closely aligned in memory. ([Page 225](zotero://open-pdf/library/items/VIJZ3LLB?page=224&annotation=ZCJ7V99B)) ^879001998
- The function glCopyBufferSubData allows us to copy the data from one buffer to another buffer with relative ease. ([Page 225](zotero://open-pdf/library/items/VIJZ3LLB?page=224&annotation=6LKP7YNM)) ^879001999
- OpenGL gives us two more buffer targets called GL_COPY_READ_BUFFER and GL_COPY_WRITE_BUFFER. We then bind the buffers of our choice to these new buffer targets and set those targets as the readtarget and writetarget argument. ([Page 226](zotero://open-pdf/library/items/VIJZ3LLB?page=225&annotation=QK6MEEMT)) ^879002000
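A sketch of copying between two buffers via those dedicated copy targets (`vbo1` and `vbo2` are assumed buffer handles):
```
glBindBuffer(GL_COPY_READ_BUFFER, vbo1);
glBindBuffer(GL_COPY_WRITE_BUFFER, vbo2);
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                    0, 0, 8 * sizeof(float)); // read offset, write offset, size in bytes
```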
- We’ve already seen two of them in the chapters so far: gl_Position that is the output vector of the vertex shader, and the fragment shader’s gl_FragCoord. ([Page 227](zotero://open-pdf/library/items/VIJZ3LLB?page=226&annotation=EP7JJVR6)) ^879002001
- One output variable defined by GLSL is called gl_PointSize that is a float variable where you can set the point’s width and height in pixels. ([Page 227](zotero://open-pdf/library/items/VIJZ3LLB?page=226&annotation=4TMZVDF5)) ^879002002
- Influencing the point sizes in the vertex shader is disabled by default, but if you want to enable this you’ll have to enable OpenGL’s GL_PROGRAM_POINT_SIZE ([Page 227](zotero://open-pdf/library/items/VIJZ3LLB?page=226&annotation=IKHG2XEC)) ^879002003
- A simple example of influencing point sizes is by setting the point size equal to the clip-space position’s z value which is equal to the vertex’s distance to the viewer. ([Page 227](zotero://open-pdf/library/items/VIJZ3LLB?page=226&annotation=FBSWVT4K)) ^879002004
- The integer variable gl_VertexID holds the current ID of the vertex we’re drawing. When doing indexed rendering (with glDrawElements) this variable holds the current index of the vertex we’re drawing. When drawing without indices (via glDrawArrays) this variable holds the number of the currently processed vertex since the start of the render call. ([Page 228](zotero://open-pdf/library/items/VIJZ3LLB?page=227&annotation=PTX8DR5P)) ^879002005
We can use this to index vertex data in the shader.
- The gl_FragCoord’s x and y component are the window- or screen-space coordinates of the fragment, originating from the bottom-left of the window. We specified a render window of 800x600 with glViewport so the screen-space coordinates of the fragment will have x values between 0 and 800, and y values between 0 and 600. ([Page 228](zotero://open-pdf/library/items/VIJZ3LLB?page=227&annotation=QYL93UBB)) ^879002006
- The gl_FrontFacing variable tells us if the current fragment is part of a front-facing or a back-facing face. ([Page 229](zotero://open-pdf/library/items/VIJZ3LLB?page=228&annotation=669WKCMJ)) ^879002007
- GLSL gives us an output variable called gl_FragDepth that we can use to manually set the depth value of the fragment within the shader. ([Page 230](zotero://open-pdf/library/items/VIJZ3LLB?page=229&annotation=QXS4BESX)) ^879002008
Cannot use this for the skybox trick, since writing this output variable disables early depth testing.
- From OpenGL 4.2 however, we can still sort of mediate between both sides by redeclaring the gl_FragDepth variable at the top of the fragment shader with a depth condition: ([Page 231](zotero://open-pdf/library/items/VIJZ3LLB?page=230&annotation=YTBXC4HC)) ^879002009
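A sketch of that redeclaration; with a condition like depth_greater the driver can keep some early depth testing because we promise only to increase the depth:
```
// fragment shader, GLSL 4.20+
layout (depth_greater) out float gl_FragDepth;

void main()
{
    // only ever make the depth larger than its interpolated value
    gl_FragDepth = gl_FragCoord.z + 0.1;
}
```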
- The block name (VS_OUT) should be the same in the fragment shader, but the instance name (vs_out as used in the vertex shader) can be anything we like - avoiding confusing names like vs_out for a fragment struct containing input values. ([Page 232](zotero://open-pdf/library/items/VIJZ3LLB?page=231&annotation=8VSWADEX)) ^879002010
- OpenGL gives us a tool called uniform buffer objects that allow us to declare a set of global uniform variables that remain the same over any number of shader programs. When using uniform buffer objects we set the relevant uniforms only once in fixed GPU memory. We do still have to manually set the uniforms that are unique per shader. ([Page 232](zotero://open-pdf/library/items/VIJZ3LLB?page=231&annotation=4WEQ3FLU)) ^879002011
- Because a uniform buffer object is a buffer like any other buffer we can create one via glGenBuffers, bind it to the GL_UNIFORM_BUFFER buffer target ([Page 232](zotero://open-pdf/library/items/VIJZ3LLB?page=231&annotation=BPI9XD3V)) ^879002012
- First, we’ll take a simple vertex shader and store our projection and view matrix in a so called uniform block ([Page 233](zotero://open-pdf/library/items/VIJZ3LLB?page=232&annotation=BAXI7J2M)) ^879002013
↩︎
```
layout (std140) uniform Matrix
{
    mat4 projection;
    mat4 view;
};
```
- By default, GLSL uses a uniform memory layout called a shared layout - shared because once the offsets are defined by the hardware, they are consistently shared between multiple programs. With a shared layout GLSL is allowed to reposition the uniform variables for optimization as long as the variables’ order remains intact. ([Page 234](zotero://open-pdf/library/items/VIJZ3LLB?page=233&annotation=4I9PEGQY)) ^879002014
With the shared layout the offsets are implementation-defined, so we should not make any assumptions about them.
- The std140 layout explicitly states the memory layout for each variable type by standardizing their respective offsets governed by a set of rules. ([Page 234](zotero://open-pdf/library/items/VIJZ3LLB?page=233&annotation=FUR3EWBC)) ^879002015
Is it possible to tell a C/C++ compiler to align a struct following the std140 layout?
- With these calculated offset values, based on the rules of the std140 layout, we can fill the buffer with data at the appropriate offsets using functions like glBufferSubData. ([Page 235](zotero://open-pdf/library/items/VIJZ3LLB?page=234&annotation=L8XHIWHX)) ^879002016
- In the OpenGL context there is a number of binding points defined where we can link a uniform buffer to. Once we created a uniform buffer we link it to one of those binding points and we also link the uniform block in the shader to the same binding point, effectively linking them together. ([Page 235](zotero://open-pdf/library/items/VIJZ3LLB?page=234&annotation=YYV8FAPM)) ^879002017
An extra layer of indirection.
- To set a shader uniform block to a specific binding point we call glUniformBlockBinding that takes a program object, a uniform block index, and the binding point to link to. The uniform block index is a location index of the defined uniform block in the shader. This can be retrieved via a call to glGetUniformBlockIndex that accepts a program object and the name of the uniform block. ([Page 236](zotero://open-pdf/library/items/VIJZ3LLB?page=235&annotation=NGTZ3URV)) ^879002018
- From OpenGL version 4.2 and onwards it is also possible to store the binding point of a uniform block explicitly in the shader by adding another layout specifier, saving us the calls to glGetUniformBlockIndex and glUniformBlockBinding. ([Page 236](zotero://open-pdf/library/items/VIJZ3LLB?page=235&annotation=SKDQNKXH)) ^879002019
`layout(std140, binding = 2) uniform Lights { ... };`
- Then we also need to bind the uniform buffer object to the same binding point and this can be accomplished with either glBindBufferBase or glBindBufferRange. ([Page 236](zotero://open-pdf/library/items/VIJZ3LLB?page=235&annotation=LH5DKGCN)) ^879002020
- OpenGL has a limit to how much uniform data it can handle which can be queried with GL_MAX_VERTEX_UNIFORM_COMPONENTS. When using uniform buffer objects, this limit is much higher. So whenever you reach a maximum number of uniforms (when doing skeletal animation for example) there’s always uniform buffer objects. ([Page 239](zotero://open-pdf/library/items/VIJZ3LLB?page=238&annotation=H5CKM4YM)) ^879002021
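Pulling the uniform-buffer highlights together, a sketch that creates a UBO, links the block from the note above ("Matrix") and the buffer to binding point 0, and updates the first matrix:
```
unsigned int uboMatrices;
glGenBuffers(1, &uboMatrices);
glBindBuffer(GL_UNIFORM_BUFFER, uboMatrices);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), NULL, GL_STATIC_DRAW);

// link the shader's uniform block and the buffer to binding point 0
unsigned int blockIndex = glGetUniformBlockIndex(shaderProgram, "Matrix");
glUniformBlockBinding(shaderProgram, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, uboMatrices);

// fill the first mat4 (std140 offset 0) with the projection matrix
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(projection));
```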
- What makes the geometry shader interesting is that it is able to convert the original primitive (set of vertices) to completely different primitives, possibly generating more vertices than were initially given. ([Page 240](zotero://open-pdf/library/items/VIJZ3LLB?page=239&annotation=S7ILW6UQ)) ^879002022 Can this be used to render shadow volumes? Yes: [Chapter 11. Efficient and Robust Shadow Volumes](https://developer.nvidia.com/gpugems/gpugems3/part-ii-light-and-shadows/chapter-11-efficient-and-robust-shadow-volumes-using)
- To generate meaningful results we need some way to retrieve the output from the previous shader stage. GLSL gives us a built-in variable called gl_in that internally (probably) looks something like this ([Page 241](zotero://open-pdf/library/items/VIJZ3LLB?page=240&annotation=TT8JQ93P)) ^879002023 (see the declaration after this list)
- Note that it is declared as an array, because most render primitives contain more than 1 vertex. The geometry shader receives all vertices of a primitive as its input. ([Page 241](zotero://open-pdf/library/items/VIJZ3LLB?page=240&annotation=AKF8ZC5C)) ^879002024
- A geometry shader needs to be compiled and linked to a program just like the vertex and fragment shader, but this time we’ll create the shader using GL_GEOMETRY_SHADER as the shader type ([Page 243](zotero://open-pdf/library/items/VIJZ3LLB?page=242&annotation=RJ9BIKX4)) ^879002025
- If we have a total of 6 vertices that form a triangle strip we’d get the following triangles: (1,2,3), (2,3,4), (3,4,5) and (4,5,6); forming a total of 4 triangles. ([Page 244](zotero://open-pdf/library/items/VIJZ3LLB?page=243&annotation=KTYS6486)) ^879002026 To keep a consistent winding order, even-numbered triangles swap their first two vertices: (1, 2, 3), (3, 2, 4), (3, 4, 5), (5, 4, 6). (**Reference**:: [[Wikipedia Authors - Triangle strip (Highlights)]])
- The resulting primitive is then rasterized and the fragment shader runs on the entire triangle strip, resulting in a green house for each point we’ve rendered ([Page 245](zotero://open-pdf/library/items/VIJZ3LLB?page=244&annotation=VRC5DPUE)) ^879002027
> When no geometry shader is present, the outputs from the vertex or tessellation evaluation shader are interpolated across the primitive being rendered and are fed directly to the fragment shader. When a geometry shader is present, however, the outputs of the vertex or tessellation evaluation shader become the inputs to the geometry shader, and the outputs of the geometry shader are what are interpolated and fed to the fragment shader.
> (**Reference**:: [[Graham Sellers et al. - Primitive Processing in Open GL (Highlights)]])
- Because the geometry shader acts on a set of vertices as its input, its input data from the vertex shader is always represented as arrays of vertex data even though we only have a single vertex right now. ([Page 247](zotero://open-pdf/library/items/VIJZ3LLB?page=246&annotation=CYZ2S2NQ)) ^879002028
- When emitting a vertex, that vertex will store the last stored value in fColor as that vertex’s output value. ([Page 247](zotero://open-pdf/library/items/VIJZ3LLB?page=246&annotation=3HEC79FE)) ^879002029
- When we say exploding an object we’re not actually going to blow up our precious bundled sets of vertices, but we’re going to move each triangle along the direction of its normal vector over a small period of time. ([Page 249](zotero://open-pdf/library/items/VIJZ3LLB?page=248&annotation=SYKMV7WB)) ^879002030
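The built-in gl_in block referenced above ("looks something like this") is declared by GLSL roughly as follows:

```
// Per-vertex input to the geometry shader; an array because the
// geometry shader receives all vertices of the incoming primitive.
in gl_PerVertex
{
    vec4  gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];
```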
- Do note that if we switched a and b in the cross function we’d get a normal vector that points in the opposite direction - order is important here! ([Page 250](zotero://open-pdf/library/items/VIJZ3LLB?page=249&annotation=77DAT2LS)) ^879002031 OpenGL is a right-handed system. ![](https://blog-1251771406.cos.ap-shanghai.myqcloud.com/uploads/202410/a0b759/triangle-normal.png)
- EndPrimitive(); ([Page 252](zotero://open-pdf/library/items/VIJZ3LLB?page=251&annotation=A6N5DYTH)) ^879002032 `EndPrimitive` can be called multiple times to end a `line_strip` or `triangle_strip` early.
- To render using instancing all we need to do is change the render calls glDrawArrays and glDrawElements to glDrawArraysInstanced and glDrawElementsInstanced respectively. ([Page 254](zotero://open-pdf/library/items/VIJZ3LLB?page=253&annotation=HX9KZJ7Q)) ^879002033
- Instanced arrays are defined as a vertex attribute (allowing us to store much more data) that are updated per instance instead of per vertex. ([Page 257](zotero://open-pdf/library/items/VIJZ3LLB?page=256&annotation=3QLA3BWI)) ^879002034
- When defining a vertex attribute as an instanced array however, the vertex shader only updates the content of the vertex attribute per instance. ([Page 257](zotero://open-pdf/library/items/VIJZ3LLB?page=256&annotation=S2SVZLKV)) ^879002035
- What makes this code interesting is the last line where we call glVertexAttribDivisor. This function tells OpenGL when to update the content of a vertex attribute to the next element. ([Page 257](zotero://open-pdf/library/items/VIJZ3LLB?page=256&annotation=W27ZWR2V)) ^879002036
- By default, the attribute divisor is 0 which tells OpenGL to update the content of the vertex attribute each iteration of the vertex shader. By setting this attribute to 1 we’re telling OpenGL that we want to update the content of the vertex attribute when we start to render a new instance. By setting it to 2 we’d update the content every 2 instances and so on. ([Page 258](zotero://open-pdf/library/items/VIJZ3LLB?page=257&annotation=QSIZ3QDL)) ^879002037 (see the sketch after this list)
- At first we had a technique called super sample anti-aliasing (SSAA) that temporarily uses a much higher resolution render buffer to render the scene in (super sampling). Then when the full scene is rendered, the resolution is downsampled back to the normal resolution. ([Page 264](zotero://open-pdf/library/items/VIJZ3LLB?page=263&annotation=34YKGQU8)) ^879002038
- How MSAA really works is that the fragment shader is only run once per pixel (for each primitive) regardless of how many subsamples the triangle covers; the fragment shader runs with the vertex data interpolated to the center of the pixel. ([Page 266](zotero://open-pdf/library/items/VIJZ3LLB?page=265&annotation=QH8N45EZ)) ^879002039
- The number of subsamples covered determines how much the pixel color contributes to the framebuffer. Because only 2 of the 4 samples were covered in the previous image, half of the triangle’s color is mixed with the framebuffer color (in this case the clear color) resulting in a light blue-ish color. ([Page 266](zotero://open-pdf/library/items/VIJZ3LLB?page=265&annotation=MI53MLSY)) ^879002040
- Depth and stencil values are stored per subsample and, even though we only run the fragment shader once, color values are stored per subsample as well for the case of multiple triangles overlapping a single pixel. ([Page 267](zotero://open-pdf/library/items/VIJZ3LLB?page=266&annotation=PH9TWZVL)) ^879002041
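A small sketch of the instanced-array setup the divisor highlights describe, assuming a per-instance vec2 offset already uploaded to instanceVBO (attribute location 2 and the counts are illustrative):

```
// Per-instance offset at attribute location 2.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
// Divisor 1: advance this attribute once per instance, not per vertex.
glVertexAttribDivisor(2, 1);

// Draw 100 instances of a 6-vertex mesh; each instance reads its own offset.
glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100);
```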
- If we want to use MSAA in OpenGL we need to use a buffer that is able to store more than one sample value per pixel. We need a new type of buffer that can store a given amount of multisamples and this is called a multisample buffer. ([Page 267](zotero://open-pdf/library/items/VIJZ3LLB?page=266&annotation=ZPYBKNXG)) ^879002042
- Most windowing systems are able to provide us a multisample buffer instead of a default buffer. GLFW also gives us this functionality and all we need to do is hint GLFW that we’d like to use a multisample buffer with N samples instead of a normal buffer by calling glfwWindowHint before creating the window: ([Page 267](zotero://open-pdf/library/items/VIJZ3LLB?page=266&annotation=R7XQ6SXR)) ^879002043 `glfwWindowHint(GLFW_SAMPLES, 4);`
- Now that we asked GLFW for multisampled buffers we need to enable multisampling by calling glEnable with GL_MULTISAMPLE. ([Page 267](zotero://open-pdf/library/items/VIJZ3LLB?page=266&annotation=H27EMIYN)) ^879002044
- To create a texture that supports storage of multiple sample points we use glTexImage2DMultisample instead of glTexImage2D that accepts GL_TEXTURE_2D_MULTISAMPLE as its texture target ([Page 268](zotero://open-pdf/library/items/VIJZ3LLB?page=267&annotation=NTGTBPE2)) ^879002045
- To attach a multisampled texture to a framebuffer we use glFramebufferTexture2D, but this time with GL_TEXTURE_2D_MULTISAMPLE as the texture type ([Page 268](zotero://open-pdf/library/items/VIJZ3LLB?page=267&annotation=LZT9TRN5)) ^879002046
- Like textures, creating a multisampled renderbuffer object isn’t difficult. It is even quite easy since all we need to change is glRenderbufferStorage to glRenderbufferStorageMultisample when we configure the (currently bound) renderbuffer’s memory storage ([Page 269](zotero://open-pdf/library/items/VIJZ3LLB?page=268&annotation=P697SFQS)) ^879002047
- However, because a multisampled buffer is a bit special, we can’t directly use the buffer for other operations like sampling it in a shader. ([Page 269](zotero://open-pdf/library/items/VIJZ3LLB?page=268&annotation=QXGIB47T)) ^879002048
- Resolving a multisampled framebuffer is generally done through glBlitFramebuffer that copies a region from one framebuffer to the other while also resolving any multisampled buffers. ([Page 269](zotero://open-pdf/library/items/VIJZ3LLB?page=268&annotation=EPDLPHPS)) ^879002049
- We could also bind to those targets individually by binding framebuffers to GL_READ_FRAMEBUFFER and GL_DRAW_FRAMEBUFFER respectively. The glBlitFramebuffer function reads from those two targets to determine which is the source and which is the target framebuffer. ([Page 269](zotero://open-pdf/library/items/VIJZ3LLB?page=268&annotation=JH2GT3AB)) ^879002050
- It is possible to directly pass a multisampled texture image to a fragment shader instead of first resolving it. GLSL gives us the option to sample the texture image per subsample so we can create our own custom anti-aliasing algorithms. ([Page 271](zotero://open-pdf/library/items/VIJZ3LLB?page=270&annotation=S5JKYYIL)) ^879002051
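Putting the multisample pieces above together, a sketch of a 4-sample color attachment and the resolve blit, assuming an 800x600 window (msFBO and resolveFBO are illustrative, already-created framebuffers):

```
// A texture that stores 4 samples per pixel.
unsigned int msTex;
glGenTextures(1, &msTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGB, 800, 600, GL_TRUE);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);

// Attach it as the multisampled framebuffer's color buffer.
glBindFramebuffer(GL_FRAMEBUFFER, msFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);

// After rendering into msFBO: resolve the samples into a normal FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, 800, 600, 0, 0, 800, 600,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```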
- To get a texture value per subsample you’d have to define the texture uniform sampler as a sampler2DMS instead of the usual sampler2D ([Page 271](zotero://open-pdf/library/items/VIJZ3LLB?page=270&annotation=IP6JXRKR)) ^879002052
- Using the texelFetch function it is then possible to retrieve the color value per sample ([Page 271](zotero://open-pdf/library/items/VIJZ3LLB?page=270&annotation=5EZ6RHWZ)) ^879002053
- The Blinn-Phong model is largely similar, but approaches the specular model slightly differently, which as a result overcomes our problem. Instead of relying on a reflection vector we’re using a so-called halfway vector: a unit vector exactly halfway between the view direction and the light direction. The closer this halfway vector aligns with the surface’s normal vector, the higher the specular contribution. ([Page 274](zotero://open-pdf/library/items/VIJZ3LLB?page=273&annotation=SUXF658S)) ^879002054 (see the sketch after this list)
- As a result, to get visuals similar to Phong shading the specular shininess exponent has to be set a bit higher. A general rule of thumb is to set it between 2 and 4 times the Phong shininess exponent. ([Page 275](zotero://open-pdf/library/items/VIJZ3LLB?page=274&annotation=8XB6A4FK)) ^879002055
- You can see that with gamma correction, the (updated) color values work more nicely together and darker areas show more details. ([Page 278](zotero://open-pdf/library/items/VIJZ3LLB?page=277&annotation=3EVAFTQY)) ^879002056
- The idea of gamma correction is to apply the inverse of the monitor’s gamma to the final output color before displaying to the monitor. ([Page 278](zotero://open-pdf/library/items/VIJZ3LLB?page=277&annotation=ZBR5SDDG)) ^879002057
- The color space as a result of this gamma of 2.2 is called the sRGB color space (not 100% exact, but close). ([Page 278](zotero://open-pdf/library/items/VIJZ3LLB?page=277&annotation=IELCGVIF)) ^879002058
- By enabling GL_FRAMEBUFFER_SRGB you tell OpenGL that each subsequent drawing command should first gamma correct colors (from the sRGB color space) before storing them in color buffer(s). ([Page 279](zotero://open-pdf/library/items/VIJZ3LLB?page=278&annotation=U7JAKXCG)) ^879002059
- After enabling GL_FRAMEBUFFER_SRGB, OpenGL automatically performs gamma correction after each fragment shader run to all subsequent framebuffers, including the default framebuffer. ([Page 279](zotero://open-pdf/library/items/VIJZ3LLB?page=278&annotation=ITKUHWDW)) ^879002060
- If you gamma-correct your colors before the final output, all subsequent operations on those colors will operate on incorrect values. For instance, if you use multiple framebuffers you probably want intermediate results passed in between framebuffers to remain in linear-space and only have the last framebuffer apply gamma correction before being sent to the monitor. ([Page 279](zotero://open-pdf/library/items/VIJZ3LLB?page=278&annotation=3ZZRGHNR)) ^879002061 #caveat
- Because monitors display colors with gamma applied, whenever you draw, edit, or paint a picture on your computer you are picking colors based on what you see on the monitor. This effectively means all the pictures you create or edit are not in linear space, but in sRGB space e.g. doubling a dark-red color on your screen based on perceived brightness, does not equal double the red component. ([Page 279](zotero://open-pdf/library/items/VIJZ3LLB?page=278&annotation=WJ4EXBYY)) ^879002062
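A fragment-shader sketch combining two ideas from this run of highlights: Blinn-Phong’s halfway vector for the specular term, and the inverse-gamma step as the very last operation (all uniform and variable names are illustrative):

```
vec3 lightDir   = normalize(lightPos - fragPos);
vec3 viewDir    = normalize(viewPos - fragPos);
// Halfway vector: unit vector exactly between view and light direction.
vec3 halfwayDir = normalize(lightDir + viewDir);
// Specular grows as the halfway vector aligns with the surface normal.
float spec = pow(max(dot(normal, halfwayDir), 0.0), shininess);

vec3 color = ambient + diffuse + spec * lightColor;
// Apply the inverse of the monitor's gamma (~2.2) as the final step.
FragColor = vec4(pow(color, vec3(1.0 / 2.2)), 1.0);
```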
- If we create a texture in OpenGL with any of these two sRGB texture formats, OpenGL will automatically correct the colors to linear-space as soon as we use them, allowing us to properly work in linear space. ([Page 280](zotero://open-pdf/library/items/VIJZ3LLB?page=279&annotation=Z7DZZ9MS)) ^879002063 The two formats: `GL_SRGB` and `GL_SRGB_ALPHA`.
- The linear equivalent gives more plausible results compared to its quadratic variant without gamma correction, but when we enable gamma correction the linear attenuation looks too weak and the physically correct quadratic attenuation suddenly gives the better results. ([Page 281](zotero://open-pdf/library/items/VIJZ3LLB?page=280&annotation=VB6EHWIB)) ^879002064
- Shadows are a bit tricky to implement though, specifically because in current real-time (rasterized graphics) research a perfect shadow algorithm hasn’t been developed yet. There are several good shadow approximation techniques, but they all have their little quirks and annoyances which we have to take into account. ([Page 283](zotero://open-pdf/library/items/VIJZ3LLB?page=282&annotation=A22K9ZTK)) ^879002065
- The idea behind shadow mapping is quite simple: we render the scene from the light’s point of view and everything we see from the light’s perspective is lit and everything we can’t see must be in shadow. ([Page 283](zotero://open-pdf/library/items/VIJZ3LLB?page=282&annotation=U4YECWBJ)) ^879002066
- What if we were to render the scene from the light’s perspective and store the resulting depth values in a texture? This way, we can sample the closest depth values as seen from the light’s perspective. After all, the depth values show the first fragment visible from the light’s perspective. We store all these depth values in a texture that we call a depth map or shadow map. ([Page 284](zotero://open-pdf/library/items/VIJZ3LLB?page=283&annotation=K85H2BNR)) ^879002067 Reusing the light-space matrix, we can recover the depth-map coordinates in the real rendering pass.
- We also give the texture a width and height of 1024: this is the resolution of the depth map. ([Page 285](zotero://open-pdf/library/items/VIJZ3LLB?page=284&annotation=IC4QTTKI)) ^879002068 It's a square because the NDC xy plane is a square.
- A framebuffer object however is not complete without a color buffer so we need to explicitly tell OpenGL we’re not going to render any color data. We do this by setting both the read and draw buffer to GL_NONE with glDrawBuffer and glReadBuffer. ([Page 285](zotero://open-pdf/library/items/VIJZ3LLB?page=284&annotation=4FAYBLEZ)) ^879002069
- Because we’re modelling a directional light source, all its light rays are parallel. For this reason, we’re going to use an orthographic projection matrix for the light source where there is no perspective deform ([Page 286](zotero://open-pdf/library/items/VIJZ3LLB?page=285&annotation=LBLAKKK9)) ^879002070
- Because a projection matrix indirectly determines the range of what is visible (e.g. what is not clipped) you want to make sure the size of the projection frustum correctly contains the objects you want to be in the depth map. When objects or fragments are not in the depth map they will not produce shadows. ([Page 286](zotero://open-pdf/library/items/VIJZ3LLB?page=285&annotation=AZ3QF35C)) ^879002071
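A sketch of the depth-map framebuffer and the directional light’s light-space matrix described above, assuming GLM (depthMapFBO and lightPos are illustrative; the ortho bounds must enclose your scene):

```
// 1024x1024 depth texture: the shadow map itself.
unsigned int depthMap;
glGenTextures(1, &depthMap);
glBindTexture(GL_TEXTURE_2D, depthMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// Depth-only framebuffer: explicitly no color output.
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Parallel rays -> orthographic projection for the light.
glm::mat4 lightProjection  = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 1.0f, 7.5f);
glm::mat4 lightView        = glm::lookAt(lightPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
```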
- The code to check if a fragment is in shadow is (quite obviously) executed in the fragment shader, but we do the light-space transformation in the vertex shader ([Page 288](zotero://open-pdf/library/items/VIJZ3LLB?page=287&annotation=U5V53YWT)) ^879002072
- Within the fragment shader we then calculate a shadow value that is either 1.0 when the fragment is in shadow or 0.0 when not in shadow. The resulting diffuse and specular components are then multiplied by this shadow component. Because shadows are rarely completely dark (due to light scattering) we leave the ambient component out of the shadow multiplications. ([Page 289](zotero://open-pdf/library/items/VIJZ3LLB?page=288&annotation=3T8X6I3V)) ^879002073
- When we output a clip-space vertex position to gl_Position in the vertex shader, OpenGL automatically does a perspective divide e.g. transform clip-space coordinates in the range [-w,w] to [-1,1] by dividing the x, y and z component by the vector’s w component. As the clip-space FragPosLightSpace is not passed to the fragment shader through gl_Position, we have to do this perspective divide ourselves ([Page 290](zotero://open-pdf/library/items/VIJZ3LLB?page=289&annotation=KK8TZWAD)) ^879002074
- When using an orthographic projection matrix the w component of a vertex remains untouched so this step is actually quite meaningless. However, it is necessary when using perspective projection so keeping this line ensures it works with both projection matrices. ([Page 290](zotero://open-pdf/library/items/VIJZ3LLB?page=289&annotation=AWU5IGD2)) ^879002075
- To get the current depth at this fragment we simply retrieve the projected vector’s z coordinate which equals the depth of this fragment from the light’s perspective. ([Page 290](zotero://open-pdf/library/items/VIJZ3LLB?page=289&annotation=C3XHX4M9)) ^879002076
- A shadow bias of 0.005 solves the issues of our scene to a large extent, but you can imagine the bias value is highly dependent on the angle between the light source and the surface. ([Page 293](zotero://open-pdf/library/items/VIJZ3LLB?page=292&annotation=6EZ8YBUY)) ^879002077
- We can use a little trick to solve most of the peter panning issue by using front face culling when rendering the depth map. ([Page 293](zotero://open-pdf/library/items/VIJZ3LLB?page=292&annotation=PGBHFWIY)) ^879002078 Thus we can use a smaller depth bias, or no bias at all, to mitigate peter panning.
- The idea is to sample more than once from the depth map, each time with slightly different texture coordinates. For each individual sample we check whether it is in shadow or not. All the sub-results are then combined and averaged and we get a nice soft looking shadow. ([Page 297](zotero://open-pdf/library/items/VIJZ3LLB?page=296&annotation=A26N94IG)) ^879002079 (see the sketch after this list)
- Perspective projections are most often used with spotlights and point lights, while orthographic projections are used for directional lights. ([Page 298](zotero://open-pdf/library/items/VIJZ3LLB?page=297&annotation=4AFQS3F4)) ^879002080
- The main difference between directional shadow mapping and omnidirectional shadow mapping is the depth map we use. ([Page 300](zotero://open-pdf/library/items/VIJZ3LLB?page=299&annotation=K4VZ8HKB)) ^879002081
- To create a cubemap of a light’s surrounding depth values we have to render the scene 6 times: once for each face. ([Page 300](zotero://open-pdf/library/items/VIJZ3LLB?page=299&annotation=C9JHMJ9U)) ^879002082
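The perspective divide, bias, and PCF from the highlights above combine into one fragment-shader function; a sketch (shadowMap is an illustrative uniform; the 0.05/0.005 bias bounds follow the chapter’s values):

```
float ShadowCalculation(vec4 fragPosLightSpace, vec3 normal, vec3 lightDir)
{
    // Manual perspective divide (a no-op for orthographic projections),
    // then map NDC [-1,1] into the depth map's [0,1] range.
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    projCoords = projCoords * 0.5 + 0.5;

    float currentDepth = projCoords.z;
    // Slope-scaled bias: grazing light angles need a larger bias.
    float bias = max(0.05 * (1.0 - dot(normal, lightDir)), 0.005);

    // PCF: average 9 shadow tests around the fragment for soft edges.
    float shadow = 0.0;
    vec2 texelSize = 1.0 / textureSize(shadowMap, 0);
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
        {
            float pcfDepth = texture(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
            shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;
        }
    return shadow / 9.0;
}
```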
- Since we’re going to use a geometry shader that allows us to render to all faces in a single pass, we can directly attach the cubemap as a framebuffer’s depth attachment with glFramebufferTexture ([Page 301](zotero://open-pdf/library/items/VIJZ3LLB?page=300&annotation=U595YPTZ)) ^879002083
- The geometry shader will be the shader responsible for transforming all world-space vertices to the 6 different light spaces. ([Page 303](zotero://open-pdf/library/items/VIJZ3LLB?page=302&annotation=CMHNHCGP)) ^879002084
- The geometry shader has a built-in variable called gl_Layer that specifies which cubemap face to emit a primitive to. When left alone, the geometry shader just sends its primitives further down the pipeline as usual, but when we update this variable we can control to which cubemap face we render to for each primitive. ([Page 303](zotero://open-pdf/library/items/VIJZ3LLB?page=302&annotation=MQKI2E4S)) ^879002085
- From the lighting technique’s point of view, the only way it determines the shape of an object is by its perpendicular normal vector. ([Page 312](zotero://open-pdf/library/items/VIJZ3LLB?page=311&annotation=S5DCI373)) ^879002086
- The reason for this is that OpenGL reads texture coordinates with the y (or v) coordinate reversed from how textures are generally created. ([Page 314](zotero://open-pdf/library/items/VIJZ3LLB?page=313&annotation=SKTBIQAX)) ^879002087
- Normal vectors in a normal map are expressed in tangent space where normals always point roughly in the positive z direction. Tangent space is a space that’s local to the surface of a triangle: the normals are relative to the local reference frame of the individual triangles. ([Page 315](zotero://open-pdf/library/items/VIJZ3LLB?page=314&annotation=E9SD2F5X)) ^879002088
- When the aiProcess_CalcTangentSpace bit is supplied to Assimp’s ReadFile function, Assimp calculates smooth tangent and bitangent vectors for each of the loaded vertices, similarly to how we did it in this chapter. ([Page 322](zotero://open-pdf/library/items/VIJZ3LLB?page=321&annotation=VDPV8RV3)) ^879002089
- aiTextureType_NORMAL doesn’t load normal maps, while aiTextureType_HEIGHT does ([Page 323](zotero://open-pdf/library/items/VIJZ3LLB?page=322&annotation=N4K4TCUQ)) ^879002090
- Using normal maps is also a great way to boost performance. Before normal mapping, you had to use a large number of vertices to get a high number of detail on a mesh. With normal mapping, we can get the same level of detail on a mesh using a lot less vertices. ([Page 323](zotero://open-pdf/library/items/VIJZ3LLB?page=322&annotation=PVXVNT6D)) ^879002091
- Using a mathematical trick called the Gram-Schmidt process, we can re-orthogonalize the TBN vectors such that each vector is again perpendicular to the other vectors. ([Page 324](zotero://open-pdf/library/items/VIJZ3LLB?page=323&annotation=9JDQHEVP)) ^879002092
- The idea behind parallax mapping is to alter the texture coordinates in such a way that it looks like a fragment’s surface is higher or lower than it actually is, all based on the view direction and a heightmap. ([Page 326](zotero://open-pdf/library/items/VIJZ3LLB?page=325&annotation=FPQCTUYQ)) ^879002093
- Parallax mapping tries to solve this by scaling the fragment-to-view direction vector V̄ by the height at fragment A. ([Page 326](zotero://open-pdf/library/items/VIJZ3LLB?page=325&annotation=C4TPDG9W)) ^879002094
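The V̄-scaling idea from the last highlight, as a sketch in tangent space (depthMap here stores inverted heights, and heightScale is an illustrative tuning uniform):

```
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
    float height = texture(depthMap, texCoords).r;
    // Offset grows with the sampled height and with grazing view angles.
    vec2 p = viewDir.xy / viewDir.z * (height * heightScale);
    return texCoords - p;
}
```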
- High dynamic range rendering works a bit like that. We allow for a much larger range of color values to render to, collecting a large range of dark and bright details of a scene, and at the end we transform all the HDR values back to the low dynamic range (LDR) of [0.0, 1.0]. ([Page 337](zotero://open-pdf/library/items/VIJZ3LLB?page=336&annotation=562PR9AB)) ^879002095
- This process of converting HDR values to LDR values is called tone mapping and a large collection of tone mapping algorithms exist that aim to preserve most HDR details during the conversion process. ([Page 337](zotero://open-pdf/library/items/VIJZ3LLB?page=336&annotation=UUFVER6M)) ^879002096
- When the internal format of a framebuffer’s color buffer is specified as GL_RGB16F, GL_RGBA16F, GL_RGB32F, or GL_RGBA32F the framebuffer is known as a floating point framebuffer that can store floating point values outside the default range of 0.0 and 1.0. This is perfect for rendering in high dynamic range! ([Page 337](zotero://open-pdf/library/items/VIJZ3LLB?page=336&annotation=NCAPVWAJ)) ^879002097
- One of the simpler tone mapping algorithms is Reinhard tone mapping, which maps the entire HDR color range onto LDR by dividing each color by itself plus one. ([Page 339](zotero://open-pdf/library/items/VIJZ3LLB?page=338&annotation=HJJT64NS)) ^879002098 (see the sketch after this list)
- Bright light sources and brightly lit regions are often difficult to convey to the viewer as the intensity range of a monitor is limited. One way to distinguish bright light sources on a monitor is by making them glow; the light then bleeds around the light source. ([Page 342](zotero://open-pdf/library/items/VIJZ3LLB?page=341&annotation=24DTJT3G)) ^879002099
- This light bleeding, or glow effect, is achieved with a post-processing effect called Bloom. Bloom gives all brightly lit regions of a scene a glow-like effect. ([Page 342](zotero://open-pdf/library/items/VIJZ3LLB?page=341&annotation=M4JM5TJX)) ^879002100
- Bloom works best in combination with HDR rendering. ([Page 342](zotero://open-pdf/library/items/VIJZ3LLB?page=341&annotation=VWPF3JWF)) ^879002101
- Bloom by itself isn’t a complicated technique, but it is difficult to get exactly right. Most of its visual quality is determined by the quality and type of blur filter used for blurring the extracted brightness regions. ([Page 343](zotero://open-pdf/library/items/VIJZ3LLB?page=342&annotation=9VLMSSKL)) ^879002102
- we can also use a neat little trick called Multiple Render Targets (MRT) that allows us to specify more than one fragment shader output ([Page 344](zotero://open-pdf/library/items/VIJZ3LLB?page=343&annotation=ULAMVXDG)) ^879002103
- As a requirement for using multiple fragment shader outputs we need multiple color buffers attached to the currently bound framebuffer object. ([Page 344](zotero://open-pdf/library/items/VIJZ3LLB?page=343&annotation=8T7TM6FB)) ^879002104
- We do have to explicitly tell OpenGL we’re rendering to multiple color buffers via glDrawBuffers. ([Page 345](zotero://open-pdf/library/items/VIJZ3LLB?page=344&annotation=MLPZAQDT)) ^879002105
- If we for instance sample a 32x32 box around a fragment, we use progressively smaller weights the larger the distance to the fragment; this gives a better and more realistic blur which is known as a Gaussian blur. ([Page 346](zotero://open-pdf/library/items/VIJZ3LLB?page=345&annotation=SGAGEBFU)) ^879002106
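Reinhard tone mapping from the highlight above, sketched as the final full-screen pass over a floating-point color buffer (hdrBuffer is an illustrative name):

```
uniform sampler2D hdrBuffer;
in vec2 TexCoords;
out vec4 FragColor;

void main()
{
    vec3 hdrColor = texture(hdrBuffer, TexCoords).rgb;
    // Reinhard: divide each color by itself plus one; [0,inf) -> [0,1).
    vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
    // Gamma-correct as the very last step before display.
    FragColor = vec4(pow(mapped, vec3(1.0 / 2.2)), 1.0);
}
```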
- Specifically for the two-pass Gaussian blur we’re going to implement ping-pong framebuffers. That is a pair of framebuffers where, a given number of times, we render the other framebuffer’s color buffer into the current framebuffer with an alternating shader effect. We basically continuously switch the framebuffer to render to and the texture to draw with. ([Page 347](zotero://open-pdf/library/items/VIJZ3LLB?page=346&annotation=JUMW52XW)) ^879002107 (see the sketch after this list)
- Deferred shading is based on the idea that we defer or postpone most of the heavy rendering (like lighting) to a later stage. ([Page 351](zotero://open-pdf/library/items/VIJZ3LLB?page=350&annotation=568Q9CCS)) ^879002108
- The G-buffer is the collective term of all textures used to store lighting-relevant data for the final lighting pass. ([Page 352](zotero://open-pdf/library/items/VIJZ3LLB?page=351&annotation=63N3HCNA)) ^879002109
- We can copy the content of a framebuffer to the content of another framebuffer with the help of glBlitFramebuffer ([Page 358](zotero://open-pdf/library/items/VIJZ3LLB?page=357&annotation=EXP7SPYK)) ^879002110
- What deferred rendering is often praised for, is its ability to render an enormous amount of light sources without a heavy cost on performance. ([Page 359](zotero://open-pdf/library/items/VIJZ3LLB?page=358&annotation=7MLXKDSJ)) ^879002111
- The idea behind light volumes is to calculate the radius, or volume, of a light source i.e. the area where its light is able to reach fragments. ([Page 359](zotero://open-pdf/library/items/VIJZ3LLB?page=358&annotation=BAJ8WYYP)) ^879002112
- This often means that a shader is run that executes all branches of an if statement to ensure the shader runs are the same for that group of threads, making our previous radius check optimization completely useless; we’d still calculate lighting for all light sources! ([Page 361](zotero://open-pdf/library/items/VIJZ3LLB?page=360&annotation=6VEFCXM5)) ^879002113
- The appropriate approach to using light volumes is to render actual spheres, scaled by the light volume radius. The centers of these spheres are positioned at the light source’s position, and as it is scaled by the light volume radius the sphere exactly encompasses the light’s visible volume. This is where the trick comes in: we use the deferred lighting shader for rendering the spheres. As a rendered sphere produces fragment shader invocations that exactly match the pixels the light source affects, we only render the relevant pixels and skip all other pixels. ([Page 361](zotero://open-pdf/library/items/VIJZ3LLB?page=360&annotation=NUWGUGW7)) ^879002114
- In 2007, Crytek published a technique called screen-space ambient occlusion (SSAO) for use in their title Crysis. The technique uses a scene’s depth buffer in screen-space to determine the amount of occlusion instead of real geometrical data. This approach is incredibly fast compared to real ambient occlusion and gives plausible results, making it the de-facto standard for approximating real-time ambient occlusion. ([Page 364](zotero://open-pdf/library/items/VIJZ3LLB?page=363&annotation=2W5F8CTQ)) ^879002115
- Each of the gray depth samples that are inside geometry contribute to the total occlusion factor; the more samples we find inside geometry, the less ambient lighting the fragment should eventually receive. ([Page 365](zotero://open-pdf/library/items/VIJZ3LLB?page=364&annotation=RPEXSH43)) ^879002116 The more concave the surface, the darker it gets.
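A host-side sketch of the ping-pong loop described at the top of this list, assuming two pre-built framebuffers with their color textures, a two-direction blur shader with a setInt helper, and a renderQuad() utility (all names are illustrative):

```
bool horizontal = true, first_iteration = true;
int amount = 10; // number of alternating blur passes
blurShader.use();
for (int i = 0; i < amount; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[horizontal]);
    blurShader.setInt("horizontal", horizontal);
    // First pass reads the extracted brightness texture; afterwards
    // we always read the other framebuffer's color buffer.
    glBindTexture(GL_TEXTURE_2D,
                  first_iteration ? brightTexture : pingpongBuffer[!horizontal]);
    renderQuad();
    horizontal = !horizontal;
    first_iteration = false;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```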
- By randomly rotating the sample kernel for each fragment we can get high quality results with a much smaller amount of samples. ([Page 365](zotero://open-pdf/library/items/VIJZ3LLB?page=364&annotation=4SKQQGZR)) ^879002117
- By sampling around this normal-oriented hemisphere we do not consider the fragment’s underlying geometry to be a contribution to the occlusion factor. This removes the gray-feel of ambient occlusion and generally produces more realistic results. ([Page 366](zotero://open-pdf/library/items/VIJZ3LLB?page=365&annotation=MY2KD5AY)) ^879002118
- However, almost all real-time PBR render pipelines use a BRDF known as the Cook-Torrance BRDF. ([Page 385](zotero://open-pdf/library/items/VIJZ3LLB?page=384&annotation=7CDQJA2M)) ^879002119
- We’re going to pick the same functions used by Epic Games’ Unreal Engine 4, which are the Trowbridge-Reitz GGX for D, the Fresnel-Schlick approximation for F, and Smith’s Schlick-GGX for G. ([Page 386](zotero://open-pdf/library/items/VIJZ3LLB?page=385&annotation=2WAPXVJH)) ^879002120
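The three Cook-Torrance terms named in the last highlight, sketched in GLSL following the UE4-style forms the book adopts (N = surface normal, H = halfway vector, V = view direction, L = light direction):

```
// D: Trowbridge-Reitz GGX normal distribution function.
float DistributionGGX(vec3 N, vec3 H, float roughness)
{
    float a  = roughness * roughness;
    float a2 = a * a;
    float NdotH = max(dot(N, H), 0.0);
    float denom = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265359 * denom * denom);
}

// F: Fresnel-Schlick approximation; F0 is the base reflectivity.
vec3 FresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}

// G: Smith's method, applying Schlick-GGX to view and light separately.
float GeometrySchlickGGX(float NdotV, float roughness)
{
    float r = roughness + 1.0;
    float k = (r * r) / 8.0; // roughness remapping for direct lighting
    return NdotV / (NdotV * (1.0 - k) + k);
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    return GeometrySchlickGGX(max(dot(N, V), 0.0), roughness)
         * GeometrySchlickGGX(max(dot(N, L), 0.0), roughness);
}
```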