diff --git a/README.md b/README.md index da4c7e1..c7bfbc8 100644 --- a/README.md +++ b/README.md @@ -2,57 +2,40 @@ CIS565: Project 6 -- Deferred Shader ------------------------------------------------------------------------------- Fall 2014 +INTRODUCTION: ------------------------------------------------------------------------------- -Due Wed, 11/12/2014 at Noon -------------------------------------------------------------------------------- - +In this project, you will be introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline, such as creating and writing to a G-Buffer. ------------------------------------------------------------------------------- -NOTE: +Results: ------------------------------------------------------------------------------- -This project requires any graphics card with support for a modern OpenGL -pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work -fine, and every machine in the SIG Lab and Moore 100 is capable of running -this project. -This project also requires a WebGL capable browser. The project is known to -have issues with Chrome on windows, but Firefox seems to run it fine. +Live demo: https://cdn.rawgit.com/cyborgyl/Project6-DeferredShader/master/index.html -------------------------------------------------------------------------------- -INTRODUCTION: -------------------------------------------------------------------------------- +Video: http://youtu.be/Q6Dw10o2hGM -In this project, you will get introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline such as creating and writing to a G-Buffer.
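As background for the G-Buffer workflow mentioned above, here is a minimal sketch of the two-stage deferred idea in Python. The pixel tuples, light direction, and function names are hypothetical illustrations, not code from this repository (the real project does this in GLSL):

```python
# Minimal sketch of a deferred pipeline: a geometry pass fills a G-Buffer
# with per-pixel attributes, then a lighting pass shades from that buffer.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def geometry_pass(fragments):
    """Write normal and albedo per pixel -- no lighting happens yet."""
    return {pix: {"normal": n, "albedo": c} for pix, n, c in fragments}

def lighting_pass(gbuffer, light_dir):
    """Shade every pixel from the G-Buffer with simple Lambert lighting."""
    shaded = {}
    for pix, attrs in gbuffer.items():
        lambert = max(dot(attrs["normal"], light_dir), 0.0)
        shaded[pix] = tuple(c * lambert for c in attrs["albedo"])
    return shaded

# Two hypothetical fragments: one facing the light, one facing away.
fragments = [((0, 0), (0.0, 0.0, 1.0), (1.0, 0.5, 0.2)),
             ((1, 0), (0.0, 0.0, -1.0), (1.0, 0.5, 0.2))]
gbuf = geometry_pass(fragments)
image = lighting_pass(gbuf, (0.0, 0.0, 1.0))  # light along +z
```

The point of the split is that geometry cost and lighting cost decouple: lighting reads only the G-Buffer, never the scene meshes.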
+![diffuse](/Results/diffuse.PNG) -------------------------------------------------------------------------------- -CONTENTS: -------------------------------------------------------------------------------- -The Project5 root directory contains the following subdirectories: - -* js/ contains the javascript files, including external libraries, necessary. -* assets/ contains the textures that will be used in the second half of the - assignment. -* resources/ contains the screenshots found in this readme file. +Blinn-Phong - This Readme file edited as described above in the README section. +![toon](/Results/toon.PNG) -------------------------------------------------------------------------------- -OVERVIEW: -------------------------------------------------------------------------------- -The deferred shader you will write will have the following stages: +Toon Shading -Stage 1 renders the scene geometry to the G-Buffer -* pass.vert -* pass.frag +![ssao](/Results/diffuse_ssao.PNG) -Stage 2 renders the lighting passes and accumulates to the P-Buffer -* quad.vert -* diffuse.frag -* diagnostic.frag +SSAO -Stage 3 renders the post processing -* post.vert -* post.frag +![bloom](/Results/bloom_ssao.PNG) +Bloom + SSAO + +![bloom](/Results/toon_bloom_ssao.PNG) + +Toon + Bloom + SSAO + +------------------------------------------------------------------------------- +OVERVIEW: +------------------------------------------------------------------------------- The keyboard controls are as follows: WASDRF - Movement (along w the arrow keys) * W - Zoom in @@ -69,38 +52,21 @@ WASDRF - Movement (along w the arrow keys) * 2 - Normals * 3 - Color * 4 - Depth -* 0 - Full deferred pipeline - +* 0 - Blinn-Phong +* 5 - Toon +* 6 - SSAO toggle +* 7 - Bloom toggle There are also mouse controls for camera rotation. 
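Key 0 above selects Blinn-Phong shading. As a rough reference, the half-vector specular this mode uses (shininess 50, as in diffuse.frag in this diff) can be sketched in Python; the vectors below are hypothetical:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(N, L, E, shininess=50.0):
    """Lambert diffuse term plus half-vector specular term."""
    H = normalize(tuple(e + l for e, l in zip(E, L)))  # half vector
    diffuse = max(dot(N, L), 0.0)
    specular = max(dot(N, H), 0.0) ** shininess
    return diffuse, specular

# Light, eye, and normal all aligned: full diffuse and full specular.
d, s = blinn_phong(N=(0, 0, 1), L=(0, 0, 1), E=(0, 0, 1))
```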
------------------------------------------------------------------------------- REQUIREMENTS: ------------------------------------------------------------------------------- - -In this project, you are given code for: -* Loading .obj file -* Deferred shading pipeline -* GBuffer pass - -You are required to implement: +Features: * Either of the following effects * Bloom * "Toon" Shading (with basic silhouetting) * Screen Space Ambient Occlusion * Diffuse and Blinn-Phong shading - -**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you do performance analysis with a standard one-pass 2D convolution. The overhead of rendering and reading from a texture _may_ offset the extra computations for smaller 2D kernels. - -You must implement two of the following extras: -* The effect you did not choose above -* Compare performance to a normal forward renderer with - * No optimizations - * Coarse sort geometry front-to-back for early-z - * Z-prepass for early-z -* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc. - * Must be accompanied with a performance analysis to count -* Additional lighting and pre/post processing effects! (email first please, if they are good you may add multiple). 
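The "Toon" shading listed above quantizes N·L into discrete bands. A small Python sketch of the three-band quantization this diff implements (thresholds 0.8 and 0.4, each lower band at half the intensity of the one above, per diffuse.frag):

```python
def toon_band(base_color, n_dot_l):
    """Quantize shading into three bands, mirroring the thresholds in
    diffuse.frag: full color, half intensity, then quarter intensity."""
    high = base_color
    mid = tuple(c * 0.5 for c in high)
    low = tuple(c * 0.5 for c in mid)
    n_dot_l = max(0.0, min(1.0, n_dot_l))  # clamp like the shader
    if n_dot_l > 0.8:
        return high
    if n_dot_l > 0.4:
        return mid
    return low

bright = toon_band((1.0, 0.8, 0.6), 0.9)
dark = toon_band((1.0, 0.8, 0.6), 0.1)
```

The silhouette pass is separate: it compares each pixel's linearized depth against the maximum depth in a 7x7 neighborhood and paints black where the difference exceeds a small threshold.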
- ------------------------------------------------------------------------------- RUNNING THE CODE: ------------------------------------------------------------------------------- @@ -130,95 +96,22 @@ machine from the root directory of this repository with the following command: `python -m SimpleHTTPServer` -------------------------------------------------------------------------------- -RESOURCES: -------------------------------------------------------------------------------- - -The following are articles and resources that have been chosen to help give you -a sense of each of the effects: - -* Bloom : [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) -* Screen Space Ambient Occlusion : [Floored - Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html) - -------------------------------------------------------------------------------- -README -------------------------------------------------------------------------------- -All students must replace or augment the contents of this Readme.md in a clear -manner with the following: - -* A brief description of the project and the specific features you implemented. -* At least one screenshot of your project running. -* A 30 second or longer video of your project running. To create the video you - can use [Open Broadcaster Software](http://obsproject.com) -* A performance evaluation (described in detail below). - ------------------------------------------------------------------------------- PERFORMANCE EVALUATION ------------------------------------------------------------------------------- -The performance evaluation is where you will investigate how to make your -program more efficient using the skills you've learned in class. You must have -performed at least one experiment on your code to investigate the positive or -negative effects on performance. - -We encourage you to get creative with your tweaks. 
Consider places in your code -that could be considered bottlenecks and try to improve them. - -Each student should provide no more than a one page summary of their -optimizations along with tables and or graphs to visually explain any -performance differences. - -------------------------------------------------------------------------------- -THIRD PARTY CODE POLICY -------------------------------------------------------------------------------- -* Use of any third-party code must be approved by asking on the Google groups. - If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the ray tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code must be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will result in you - receiving an F for the semester. +The renderer holds a steady 60 fps on the "suzanne" model (tested on a GTX 780 and a GT 650M). -------------------------------------------------------------------------------- -SELF-GRADING -------------------------------------------------------------------------------- -* On the submission date, email your grade, on a scale of 0 to 100, to Harmony, - harmoli+cis565@seas.upenn.edu, with a one paragraph explanation. Be concise and - realistic. Recall that we reserve 30 points as a sanity check to adjust your - grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We - hope to only use this in extreme cases when your grade does not realistically - reflect your work - it is either too high or too low. In most cases, we plan - to give you the exact grade you suggest. -* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as - the path tracer.
We will determine the weighting at the end of the semester - based on the size of each project. - - ---- -SUBMISSION ---- -As with the previous projects, you should fork this project and work inside of -your fork. Upon completion, commit your finished project back to your fork, and -make a pull request to the master repository. You should include a README.md -file in the root directory detailing the following - -* A brief description of the project and specific features you implemented -* At least one screenshot of your project running. -* A link to a video of your project running. -* Instructions for building and running your project if they differ from the - base code. -* A performance writeup as detailed above. -* A list of all third-party code used. -* This Readme file edited as described above in the README section. - ---- ACKNOWLEDGEMENTS --- - Many thanks to Cheng-Tso Lin, whose framework for CIS700 we used for this assignment. This project makes use of [three.js](http://www.threejs.org). 
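The SSAO implementation (following the tutorial linked below) scales its hemisphere samples so they cluster near the shaded point. A Python sketch of that kernel scaling, mirroring the `scale = mix(0.1, 1.0, scale * scale)` line in post.frag (sample count 50, as in the shader):

```python
def mix(a, b, t):
    """GLSL-style linear interpolation."""
    return a * (1.0 - t) + b * t

def kernel_scales(sample_count):
    """Scale factors that cluster SSAO samples toward the kernel origin:
    early samples sit close to the point, later ones reach farther out."""
    scales = []
    for i in range(sample_count):
        t = i / float(sample_count)
        scales.append(mix(0.1, 1.0, t * t))
    return scales

scales = kernel_scales(50)
```

Weighting by t squared (rather than t) is what biases occlusion toward nearby geometry, which tends to matter most visually.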
+ +SSAO tutorial: +http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html + +Fast Bloom: +https://software.intel.com/en-us/blogs/2014/07/15/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms + +http://wp.applesandoranges.eu/?p=14 diff --git a/Results/bloom_ssao.PNG b/Results/bloom_ssao.PNG new file mode 100644 index 0000000..6bdc89d Binary files /dev/null and b/Results/bloom_ssao.PNG differ diff --git a/Results/diffuse.PNG b/Results/diffuse.PNG new file mode 100644 index 0000000..33f9da6 Binary files /dev/null and b/Results/diffuse.PNG differ diff --git a/Results/diffuse_ssao.PNG b/Results/diffuse_ssao.PNG new file mode 100644 index 0000000..e3f7530 Binary files /dev/null and b/Results/diffuse_ssao.PNG differ diff --git a/Results/toon.PNG b/Results/toon.PNG new file mode 100644 index 0000000..5231952 Binary files /dev/null and b/Results/toon.PNG differ diff --git a/Results/toon_bloom_ssao.PNG b/Results/toon_bloom_ssao.PNG new file mode 100644 index 0000000..8fe4811 Binary files /dev/null and b/Results/toon_bloom_ssao.PNG differ diff --git a/Results/toon_ssao.PNG b/Results/toon_ssao.PNG new file mode 100644 index 0000000..81577f4 Binary files /dev/null and b/Results/toon_ssao.PNG differ diff --git a/assets/deferred/colorPass.frag b/assets/deferred/colorPass.frag deleted file mode 100644 index c151235..0000000 --- a/assets/deferred/colorPass.frag +++ /dev/null @@ -1,7 +0,0 @@ -precision highp float; - -uniform sampler2D u_sampler; - -void main(void){ - gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); -} diff --git a/assets/deferred/colorPass.vert b/assets/deferred/colorPass.vert deleted file mode 100644 index 9c3901c..0000000 --- a/assets/deferred/colorPass.vert +++ /dev/null @@ -1,9 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; - -uniform mat4 u_mvp; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); -} diff --git a/assets/deferred/diagnostic.frag b/assets/deferred/diagnostic.frag deleted file mode 100644 index 
d47a5e9..0000000 --- a/assets/deferred/diagnostic.frag +++ /dev/null @@ -1,40 +0,0 @@ -precision highp float; - -#define DISPLAY_POS 1 -#define DISPLAY_NORMAL 2 -#define DISPLAY_COLOR 3 -#define DISPLAY_DEPTH 4 - -uniform sampler2D u_positionTex; -uniform sampler2D u_normalTex; -uniform sampler2D u_colorTex; -uniform sampler2D u_depthTex; - -uniform float u_zFar; -uniform float u_zNear; -uniform int u_displayType; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - vec3 normal = texture2D( u_normalTex, v_texcoord ).xyz; - vec3 position = texture2D( u_positionTex, v_texcoord ).xyz; - vec4 color = texture2D( u_colorTex, v_texcoord ); - float depth = texture2D( u_depthTex, v_texcoord ).x; - - depth = linearizeDepth( depth, u_zNear, u_zFar ); - - if( u_displayType == DISPLAY_DEPTH ) - gl_FragColor = vec4( depth, depth, depth, 1 ); - else if( u_displayType == DISPLAY_COLOR ) - gl_FragColor = color; - else if( u_displayType == DISPLAY_NORMAL ) - gl_FragColor = vec4( normal, 1 ); - else - gl_FragColor = vec4( position, 1 ); -} diff --git a/assets/deferred/diffuse.frag b/assets/deferred/diffuse.frag deleted file mode 100644 index ef0c5fc..0000000 --- a/assets/deferred/diffuse.frag +++ /dev/null @@ -1,23 +0,0 @@ -precision highp float; - -uniform sampler2D u_positionTex; -uniform sampler2D u_normalTex; -uniform sampler2D u_colorTex; -uniform sampler2D u_depthTex; - -uniform float u_zFar; -uniform float u_zNear; -uniform int u_displayType; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - // Write a diffuse shader and a Blinn-Phong shader - // NOTE : You may need to add your own normals to fulfill the second's requirements - gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0); -} diff --git 
a/assets/deferred/normPass.frag b/assets/deferred/normPass.frag deleted file mode 100644 index b41d6ed..0000000 --- a/assets/deferred/normPass.frag +++ /dev/null @@ -1,7 +0,0 @@ -precision highp float; - -varying vec3 v_normal; - -void main(void){ - gl_FragColor = vec4(v_normal, 1.0); -} diff --git a/assets/deferred/normPass.vert b/assets/deferred/normPass.vert deleted file mode 100644 index 9a0b4b4..0000000 --- a/assets/deferred/normPass.vert +++ /dev/null @@ -1,15 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec3 a_normal; - -uniform mat4 u_mvp; -uniform mat4 u_normalMat; - -varying vec3 v_normal; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - - v_normal = vec3( u_normalMat * vec4(a_normal, 0.0) ); -} diff --git a/assets/deferred/pass.frag b/assets/deferred/pass.frag deleted file mode 100644 index 2416199..0000000 --- a/assets/deferred/pass.frag +++ /dev/null @@ -1,16 +0,0 @@ -#extension GL_EXT_draw_buffers: require -precision highp float; - -uniform sampler2D u_sampler; - -varying vec4 v_pos; -varying vec3 v_normal; -varying vec2 v_texcoord; -varying float v_depth; - -void main(void){ - gl_FragData[0] = v_pos; - gl_FragData[1] = vec4( normalize(v_normal), 1.0 ); - gl_FragData[2] = vec4( 1.0, 0.0, 0.0, 1.0 ); - gl_FragData[3] = vec4( v_depth, 0, 0, 0 ); -} diff --git a/assets/deferred/pass.vert b/assets/deferred/pass.vert deleted file mode 100644 index 861cb1a..0000000 --- a/assets/deferred/pass.vert +++ /dev/null @@ -1,26 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec3 a_normal; -attribute vec2 a_texcoord; - -uniform mat4 u_projection; -uniform mat4 u_modelview; -uniform mat4 u_mvp; -uniform mat4 u_normalMat; - -varying vec4 v_pos; -varying vec3 v_normal; -varying vec2 v_texcoord; -varying float v_depth; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - - v_pos = u_modelview * vec4( a_pos, 1.0 ); - v_normal = vec3( u_normalMat * vec4(a_normal,0.0) ); - - v_texcoord = a_texcoord; - 
- v_depth = ( gl_Position.z / gl_Position.w + 1.0 ) / 2.0; -} diff --git a/assets/deferred/posPass.frag b/assets/deferred/posPass.frag deleted file mode 100644 index 645a521..0000000 --- a/assets/deferred/posPass.frag +++ /dev/null @@ -1,8 +0,0 @@ -precision highp float; - -varying vec4 v_pos; -varying float v_depth; - -void main(void){ - gl_FragColor = v_pos; -} diff --git a/assets/deferred/posPass.vert b/assets/deferred/posPass.vert deleted file mode 100644 index ece8cc4..0000000 --- a/assets/deferred/posPass.vert +++ /dev/null @@ -1,15 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; - -uniform mat4 u_modelview; -uniform mat4 u_mvp; - -varying vec4 v_pos; -varying float v_depth; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - v_pos = u_modelview * vec4( a_pos, 1.0 ); - v_depth = ( gl_Position.z / gl_Position.w + 1.0 ) / 2.0; -} diff --git a/assets/deferred/post.frag b/assets/deferred/post.frag deleted file mode 100644 index 52edda2..0000000 --- a/assets/deferred/post.frag +++ /dev/null @@ -1,17 +0,0 @@ -precision highp float; - -uniform sampler2D u_shadeTex; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - // Currently acts as a pass filter that immmediately renders the shaded texture - // Fill in post-processing as necessary HERE - // NOTE : You may choose to use a key-controlled switch system to display one feature at a time - gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0); -} diff --git a/assets/deferred/quad.vert b/assets/deferred/quad.vert deleted file mode 100644 index 8e4662e..0000000 --- a/assets/deferred/quad.vert +++ /dev/null @@ -1,11 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec2 a_texcoord; - -varying vec2 v_texcoord; - -void main(void){ - v_texcoord = a_texcoord; - gl_Position = vec4( a_pos, 1.0 ); -} diff --git 
a/assets/shader/deferred/diffuse.frag b/assets/shader/deferred/diffuse.frag index ef0c5fc..7547074 100644 --- a/assets/shader/deferred/diffuse.frag +++ b/assets/shader/deferred/diffuse.frag @@ -1,5 +1,8 @@ precision highp float; +#define TYPE_TOON 5 +#define TYPE_DIFFUSE 0 + uniform sampler2D u_positionTex; uniform sampler2D u_normalTex; uniform sampler2D u_colorTex; @@ -9,6 +12,15 @@ uniform float u_zFar; uniform float u_zNear; uniform int u_displayType; +//added +uniform mat4 u_View; +uniform vec3 u_lightpos; +uniform vec3 u_lightcolor; +uniform vec3 u_eyepos; +uniform float screenHeight; +uniform float screenWidth; +/////////////// + varying vec2 v_texcoord; float linearizeDepth( float exp_depth, float near, float far ){ @@ -19,5 +31,66 @@ void main() { // Write a diffuse shader and a Blinn-Phong shader // NOTE : You may need to add your own normals to fulfill the second's requirements - gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0); + vec3 position = texture2D( u_positionTex, v_texcoord ).xyz; + vec3 normal = texture2D( u_normalTex, v_texcoord ).xyz; + vec4 color = texture2D( u_colorTex, v_texcoord ); + float depth = texture2D( u_depthTex, v_texcoord ).x; + depth = linearizeDepth(depth, u_zNear, u_zFar); + vec3 lightpos = vec3 (u_View * vec4(vec3(0.0, 15.0, 15.0), 1.0 )); + + vec3 outColor = vec3(0.1, 0.2, 0.4); + vec3 L = normalize(lightpos - position); + vec3 N = normalize(normal); + vec3 E = normalize(vec3(0.0) - position); + vec3 H = normalize(E + L); + if(depth < 0.9) + { + if(u_displayType == TYPE_TOON) + { + vec3 toonColor_h = texture2D(u_colorTex, v_texcoord).rgb; + vec3 toonColor_m = toonColor_h * 0.5; + vec3 toonColor_l = toonColor_m * 0.5; + + float dot = clamp(dot(N,L),0.0,1.0); + if(dot > 0.8) + { + outColor = toonColor_h; + } + else if(dot > 0.4) + { + outColor = toonColor_m; + } + else + { + outColor = toonColor_l; + } + //draw silhouette + float maxDepth = 0.0; + for(int i= -3; i <=3; i++) + { + float offsetY = clamp(float(i) / 
screenHeight + v_texcoord.y, 0.0, 1.0); + for(int j= -3; j <=3; j++) + { + float offsetX = clamp(float(j)/ screenWidth + v_texcoord.x, 0.0, 1.0); + float other_depth = texture2D( u_depthTex, vec2(offsetX, offsetY)).x; + other_depth = linearizeDepth(other_depth, u_zNear, u_zFar); + if(other_depth > maxDepth) + maxDepth = other_depth; + } + } + if(maxDepth - depth > 0.01) + outColor = vec3(0.0); + } + else if(u_displayType == TYPE_DIFFUSE) + { + //blinn-phong + float shininess = 50.0; + float diffuse = clamp(dot(N,L),0.0,1.0); + float specular = pow(max(dot(N,H),0.0),shininess); + + outColor = u_lightcolor * diffuse * texture2D(u_colorTex, v_texcoord).rgb + u_lightcolor * specular; + } + } + gl_FragColor = vec4(outColor, 1.0); + } diff --git a/assets/shader/deferred/post.frag b/assets/shader/deferred/post.frag index 52edda2..29d6f1d 100644 --- a/assets/shader/deferred/post.frag +++ b/assets/shader/deferred/post.frag @@ -1,6 +1,22 @@ precision highp float; +#define SAMPLE_SIZE 50 uniform sampler2D u_shadeTex; +uniform vec3 u_sampleKernels[50]; +uniform float u_blurRadius; +uniform mat4 u_projection; +uniform int u_SSAOToggle; +uniform int u_BloomToggle; + +uniform float u_zFar; +uniform float u_zNear; +uniform int u_displayType; +uniform float u_screenHeight; +uniform float u_screenWidth; + +uniform sampler2D u_positionTex; +uniform sampler2D u_normalTex; +uniform sampler2D u_depthTex; varying vec2 v_texcoord; @@ -8,10 +24,90 @@ float linearizeDepth( float exp_depth, float near, float far ){ return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); } -void main() -{ +float rand(float co){ + return fract(sin(dot(vec2(co,co) ,vec2(12.9898,78.233))) * 43758.5453); +} + +vec4 bloom() +{ + vec4 sum = vec4(0); + vec4 outColor = vec4(0); + //float offsetHeight = 1.0 / u_screenHeight; + //float offsetWidth = 1.0 / u_screenWidth; + for(int i= -3 ;i < 3; i++) + { + for (int j = -3; j < 3; j++) + { + sum += texture2D(u_shadeTex, v_texcoord + vec2(j,i) * 0.002) * 0.25; 
+ } + } + if (length(texture2D(u_shadeTex, v_texcoord).rgb) < 0.3) + { + outColor = sum*sum*0.012 + texture2D(u_shadeTex, v_texcoord); + } + else + { + if (length(texture2D(u_shadeTex, v_texcoord).rgb) < 0.5) + { + outColor = sum*sum*0.009 + texture2D(u_shadeTex, v_texcoord); + } + else + { + outColor = sum*sum*0.0075 + texture2D(u_shadeTex, v_texcoord); + } + } + return outColor; +} + +void main(){ // Currently acts as a pass filter that immmediately renders the shaded texture // Fill in post-processing as necessary HERE // NOTE : You may choose to use a key-controlled switch system to display one feature at a time - gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0); + + //calculate common things + vec3 position = texture2D(u_positionTex, v_texcoord).xyz; + vec3 normal = texture2D(u_normalTex, v_texcoord).xyz; + float depth = texture2D(u_depthTex, v_texcoord).r; + depth = linearizeDepth( depth, u_zNear, u_zFar ); + float ssao = 1.0; + //***************SSAO****************** + //SSAO reference: http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html + if(u_SSAOToggle > 0) + { + float maxDepth = 0.0; + vec3 center = vec3(position.x, position.y, depth); + for(int i = 0; i < SAMPLE_SIZE; ++i) + { + vec3 rvec = normalize(u_sampleKernels[i]); + vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); + vec3 bitangent = cross(normal, tangent); + mat3 tbn = mat3(tangent, bitangent, normal); + + vec3 kernelv = vec3(rand(position.x),rand(position.y),(rand(position.z)+1.0) / 2.0); + kernelv = normalize(kernelv); + float scale = float(i) / float(SAMPLE_SIZE); + scale = mix(0.1, 1.0, scale * scale); + kernelv = kernelv * scale ; + vec3 sample = tbn * kernelv; + float sampleDepth = texture2D(u_depthTex, v_texcoord + vec2(sample.x,sample.y)* u_blurRadius).r; + sampleDepth = linearizeDepth( sampleDepth, u_zNear, u_zFar ); + float samplez = center.z - (sample * u_blurRadius).z / 2.0; + // range check & accumulate: + float rangeCheck = abs(center.z 
- sampleDepth) < u_blurRadius ? 1.0 : 0.0; + if(sampleDepth <= samplez) + maxDepth += 1.0 * rangeCheck; + } + maxDepth = 1.0 - (maxDepth / float(SAMPLE_SIZE)); + ssao = maxDepth; + } + //************************************* + //**************BLOOM****************** + vec3 outColor = texture2D( u_shadeTex, v_texcoord).rgb * ssao; + if(u_BloomToggle > 0) + { + outColor = bloom().rgb * ssao; + } + //************************************ + gl_FragColor = vec4(outColor, 1.0); } + diff --git a/index.html b/index.html index dd0ffef..72d8326 100644 --- a/index.html +++ b/index.html @@ -1,4 +1,4 @@ - + CIS 565 WebGL Deferred Shader @@ -8,13 +8,16 @@ WebGL unavailable. +
+

Controls: 0 -- Blinn-Phong, 5 -- Toon, 6 -- SSAO toggle, 7 -- Bloom toggle.

+

Controls: 1 -- Position Pass, 2 -- Normal Pass, 3 -- Color Pass, 4 -- Depth Pass.

- + diff --git a/js/core/camera.js b/js/core/camera.js index 11a995d..c8c79d8 100644 --- a/js/core/camera.js +++ b/js/core/camera.js @@ -9,116 +9,116 @@ var CIS565WEBGLCORE = CIS565WEBGLCORE || {}; var CAMERA_ORBIT_TYPE = 1; var CAMERA_TRACKING_TYPE = 2; -CIS565WEBGLCORE.createCamera = function(t){ - var matrix = mat4.create(); - var up = vec3.create(); - var right = vec3.create(); - var normal = vec3.create(); - var position = vec3.create(); - var home = vec3.create(); - var azimuth = 0.0; - var elevation = 0.0; - var type = t; - var steps = 0; - - - setType = function(t){ - +CIS565WEBGLCORE.createCamera = function (t) { + var matrix = mat4.create(); + var up = vec3.create(); + var right = vec3.create(); + var normal = vec3.create(); + var position = vec3.create(); + var home = vec3.create(); + var azimuth = 0.0; + var elevation = 0.0; + var type = t; + var steps = 0; + + + setType = function (t) { + type = t; - + if (t != CAMERA_ORBIT_TYPE && t != CAMERA_TRACKING_TYPE) { alert('Wrong Camera Type!. 
Setting Orbitting type by default'); type = CAMERA_ORBIT_TYPE; } }; - update = function(){ - if (type == CAMERA_TRACKING_TYPE){ + update = function () { + if (type == CAMERA_TRACKING_TYPE) { mat4.identity(matrix); - mat4.translate( matrix, matrix, position ); - mat4.rotateY( matrix, matrix, azimuth * Math.PI/180 ); - mat4.rotateX( matrix, matrix, elevation * Math.PI/180 ); + mat4.translate(matrix, matrix, position); + mat4.rotateY(matrix, matrix, azimuth * Math.PI / 180); + mat4.rotateX(matrix, matrix, elevation * Math.PI / 180); } else { - mat4.rotateY( matrix, matrix, azimuth * Math.PI/180 ); - mat4.rotateX( matrix, matrix, elevation * Math.PI/180 ); - mat4.translate( matrix, matrix, position ); + mat4.rotateY(matrix, matrix, azimuth * Math.PI / 180); + mat4.rotateX(matrix, matrix, elevation * Math.PI / 180); + mat4.translate(matrix, matrix, position); } var m = matrix; - vec4.transformMat4( right, [1,0,0,0], m ); - vec4.transformMat4( up, [0,1,0,0], m ); - vec4.transformMat4( normal, [0,0,1,0], m ); - vec3.normalize( normal, normal ); - vec3.normalize( up, up ); - vec3.normalize( right, right ); - - if(type == CAMERA_TRACKING_TYPE){ - vec4.transformMat4( position, [0,0,0,1], m ); - } - }; - setPosition = function(p){ - vec3.set( position, p[0], p[1], p[2] ); + vec4.transformMat4(right, [1, 0, 0, 0], m); + vec4.transformMat4(up, [0, 1, 0, 0], m); + vec4.transformMat4(normal, [0, 0, 1, 0], m); + vec3.normalize(normal, normal); + vec3.normalize(up, up); + vec3.normalize(right, right); + + if (type == CAMERA_TRACKING_TYPE) { + vec4.transformMat4(position, [0, 0, 0, 1], m); + } + }; + setPosition = function (p) { + vec3.set(position, p[0], p[1], p[2]); update(); }; - dolly = function(s){ - - var p = vec3.create(); + dolly = function (s) { + + var p = vec3.create(); var n = vec3.create(); - + p = position; - + var step = s - steps; - - vec3.normalize( n, normal ); - + + vec3.normalize(n, normal); + var newPosition = vec3.create(); - - if(type == CAMERA_TRACKING_TYPE){ 
- newPosition[0] = p[0] - step*n[0]; - newPosition[1] = p[1] - step*n[1]; - newPosition[2] = p[2] - step*n[2]; + + if (type == CAMERA_TRACKING_TYPE) { + newPosition[0] = p[0] - step * n[0]; + newPosition[1] = p[1] - step * n[1]; + newPosition[2] = p[2] - step * n[2]; } - else{ + else { newPosition[0] = p[0]; newPosition[1] = p[1]; - newPosition[2] = p[2] - step; + newPosition[2] = p[2] - step; } - + setPosition(newPosition); steps = s; }; - setAzimuth = function(az){ + setAzimuth = function (az) { changeAzimuth(az - azimuth); }; - changeAzimuth = function(az){ - - azimuth +=az; - - if (azimuth > 360 || azimuth <-360) { - azimuth = azimuth % 360; - } + changeAzimuth = function (az) { + + azimuth += az; + + if (azimuth > 360 || azimuth < -360) { + azimuth = azimuth % 360; + } update(); }; - setElevation = function(el){ + setElevation = function (el) { changeElevation(el - elevation); }; - changeElevation = function(el){ - - elevation +=el; - - if (elevation > 360 || elevation <-360) { - elevation = elevation % 360; - } + changeElevation = function (el) { + + elevation += el; + + if (elevation > 360 || elevation < -360) { + elevation = elevation % 360; + } update(); }; - goHome = function(h){ - if (h != null){ + goHome = function (h) { + if (h != null) { home = h; } setPosition(home); @@ -127,39 +127,42 @@ CIS565WEBGLCORE.createCamera = function(t){ steps = 0; }; - getViewTransform = function(){ + getViewTransform = function () { var m = mat4.create(); - mat4.invert( m, matrix ); + mat4.invert(m, matrix); return m; }; - moveForward = function(){ - vec3.scaleAndAdd( position, position, normal, -1.1 ); + getEyePosition = function () { + return position; + } + moveForward = function () { + vec3.scaleAndAdd(position, position, normal, -1.1); update(); }; - moveBackward = function(){ - vec3.scaleAndAdd( position, position, normal, 1.1 ); + moveBackward = function () { + vec3.scaleAndAdd(position, position, normal, 1.1); update(); }; - moveLeft = function(){ - 
vec3.scaleAndAdd( position, position, right, -1.1 );
+  moveLeft = function () {
+    vec3.scaleAndAdd(position, position, right, -1.1);
     update();
   };

-  moveRight = function(){
-    vec3.scaleAndAdd( position, position, right, 1.1 );
+  moveRight = function () {
+    vec3.scaleAndAdd(position, position, right, 1.1);
     update();
   };

-  moveUp= function(){
-    vec3.scaleAndAdd( position, position, up, 1.1 );
+  moveUp = function () {
+    vec3.scaleAndAdd(position, position, up, 1.1);
     update();
   };

-  moveDown = function(){
-    vec3.scaleAndAdd( position, position, up, -1.1 );
+  moveDown = function () {
+    vec3.scaleAndAdd(position, position, up, -1.1);
     update();
   };

@@ -172,12 +175,13 @@ CIS565WEBGLCORE.createCamera = function(t){
   newObj.changeAzimuth = changeAzimuth;
   newObj.setElevation = setElevation;
   newObj.changeElevation = changeElevation;
-  newObj.update = update;
+  newObj.update = update;
   newObj.getViewTransform = getViewTransform;
-  newObj.moveForward = moveForward;
-  newObj.moveBackward = moveBackward;
+  newObj.getEyePosition = getEyePosition;
+  newObj.moveForward = moveForward;
+  newObj.moveBackward = moveBackward;
   newObj.moveLeft = moveLeft;
-  newObj.moveRight = moveRight;
+  newObj.moveRight = moveRight;
   newObj.moveUp = moveUp;
   newObj.moveDown = moveDown;
diff --git a/js/ext/Stats.js b/js/ext/Stats.js
new file mode 100644
index 0000000..90b2a27
--- /dev/null
+++ b/js/ext/Stats.js
@@ -0,0 +1,149 @@
+/**
+ * @author mrdoob / http://mrdoob.com/
+ */
+
+var Stats = function () {
+
+	var startTime = Date.now(), prevTime = startTime;
+	var ms = 0, msMin = Infinity, msMax = 0;
+	var fps = 0, fpsMin = Infinity, fpsMax = 0;
+	var frames = 0, mode = 0;
+
+	var container = document.createElement( 'div' );
+	container.id = 'stats';
+	container.addEventListener( 'mousedown', function ( event ) { event.preventDefault(); setMode( ++ mode % 2 ) }, false );
+	container.style.cssText = 'width:80px;opacity:0.9;cursor:pointer';
+
+	var fpsDiv = document.createElement( 'div' );
+	fpsDiv.id = 'fps';
+	fpsDiv.style.cssText = 'padding:0 0 3px 3px;text-align:left;background-color:#002';
+	container.appendChild( fpsDiv );
+
+	var fpsText = document.createElement( 'div' );
+	fpsText.id = 'fpsText';
+	fpsText.style.cssText = 'color:#0ff;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px';
+	fpsText.innerHTML = 'FPS';
+	fpsDiv.appendChild( fpsText );
+
+	var fpsGraph = document.createElement( 'div' );
+	fpsGraph.id = 'fpsGraph';
+	fpsGraph.style.cssText = 'position:relative;width:74px;height:30px;background-color:#0ff';
+	fpsDiv.appendChild( fpsGraph );
+
+	while ( fpsGraph.children.length < 74 ) {
+
+		var bar = document.createElement( 'span' );
+		bar.style.cssText = 'width:1px;height:30px;float:left;background-color:#113';
+		fpsGraph.appendChild( bar );
+
+	}
+
+	var msDiv = document.createElement( 'div' );
+	msDiv.id = 'ms';
+	msDiv.style.cssText = 'padding:0 0 3px 3px;text-align:left;background-color:#020;display:none';
+	container.appendChild( msDiv );
+
+	var msText = document.createElement( 'div' );
+	msText.id = 'msText';
+	msText.style.cssText = 'color:#0f0;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px';
+	msText.innerHTML = 'MS';
+	msDiv.appendChild( msText );
+
+	var msGraph = document.createElement( 'div' );
+	msGraph.id = 'msGraph';
+	msGraph.style.cssText = 'position:relative;width:74px;height:30px;background-color:#0f0';
+	msDiv.appendChild( msGraph );
+
+	while ( msGraph.children.length < 74 ) {
+
+		var bar = document.createElement( 'span' );
+		bar.style.cssText = 'width:1px;height:30px;float:left;background-color:#131';
+		msGraph.appendChild( bar );
+
+	}
+
+	var setMode = function ( value ) {
+
+		mode = value;
+
+		switch ( mode ) {
+
+			case 0:
+				fpsDiv.style.display = 'block';
+				msDiv.style.display = 'none';
+				break;
+			case 1:
+				fpsDiv.style.display = 'none';
+				msDiv.style.display = 'block';
+				break;
+		}
+
+	};
+
+	var updateGraph = function ( dom, value ) {
+
+		var child = dom.appendChild( dom.firstChild );
+		child.style.height = value + 'px';
+
+	};
+
+	return {
+
+		REVISION: 12,
+
+		domElement: container,
+
+		setMode: setMode,
+
+		begin: function () {
+
+			startTime = Date.now();
+
+		},
+
+		end: function () {
+
+			var time = Date.now();
+
+			ms = time - startTime;
+			msMin = Math.min( msMin, ms );
+			msMax = Math.max( msMax, ms );
+
+			msText.textContent = ms + ' MS (' + msMin + '-' + msMax + ')';
+			updateGraph( msGraph, Math.min( 30, 30 - ( ms / 200 ) * 30 ) );

+			frames ++;
+
+			if ( time > prevTime + 1000 ) {
+
+				fps = Math.round( ( frames * 1000 ) / ( time - prevTime ) );
+				fpsMin = Math.min( fpsMin, fps );
+				fpsMax = Math.max( fpsMax, fps );
+
+				fpsText.textContent = fps + ' FPS (' + fpsMin + '-' + fpsMax + ')';
+				updateGraph( fpsGraph, Math.min( 30, 30 - ( fps / 100 ) * 30 ) );
+
+				prevTime = time;
+				frames = 0;
+
+			}
+
+			return time;
+
+		},
+
+		update: function () {
+
+			startTime = this.end();
+
+		}
+
+	}
+
+};
+
+if ( typeof module === 'object' ) {
+
+	module.exports = Stats;
+
+}
\ No newline at end of file
diff --git a/js/ext/stats.min.js b/js/ext/stats.min.js
new file mode 100644
index 0000000..52539f4
--- /dev/null
+++ b/js/ext/stats.min.js
@@ -0,0 +1,6 @@
+// stats.js - http://github.com/mrdoob/stats.js
+var Stats=function(){var l=Date.now(),m=l,g=0,n=Infinity,o=0,h=0,p=Infinity,q=0,r=0,s=0,f=document.createElement("div");f.id="stats";f.addEventListener("mousedown",function(b){b.preventDefault();t(++s%2)},!1);f.style.cssText="width:80px;opacity:0.9;cursor:pointer";var a=document.createElement("div");a.id="fps";a.style.cssText="padding:0 0 3px 3px;text-align:left;background-color:#002";f.appendChild(a);var i=document.createElement("div");i.id="fpsText";i.style.cssText="color:#0ff;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px";
+i.innerHTML="FPS";a.appendChild(i);var c=document.createElement("div");c.id="fpsGraph";c.style.cssText="position:relative;width:74px;height:30px;background-color:#0ff";for(a.appendChild(c);74>c.children.length;){var j=document.createElement("span");j.style.cssText="width:1px;height:30px;float:left;background-color:#113";c.appendChild(j)}var d=document.createElement("div");d.id="ms";d.style.cssText="padding:0 0 3px 3px;text-align:left;background-color:#020;display:none";f.appendChild(d);var k=document.createElement("div");
+k.id="msText";k.style.cssText="color:#0f0;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px";k.innerHTML="MS";d.appendChild(k);var e=document.createElement("div");e.id="msGraph";e.style.cssText="position:relative;width:74px;height:30px;background-color:#0f0";for(d.appendChild(e);74>e.children.length;)j=document.createElement("span"),j.style.cssText="width:1px;height:30px;float:left;background-color:#131",e.appendChild(j);var t=function(b){s=b;switch(s){case 0:a.style.display=
+"block";d.style.display="none";break;case 1:a.style.display="none",d.style.display="block"}};return{REVISION:12,domElement:f,setMode:t,begin:function(){l=Date.now()},end:function(){var b=Date.now();g=b-l;n=Math.min(n,g);o=Math.max(o,g);k.textContent=g+" MS ("+n+"-"+o+")";var a=Math.min(30,30-30*(g/200));e.appendChild(e.firstChild).style.height=a+"px";r++;b>m+1E3&&(h=Math.round(1E3*r/(b-m)),p=Math.min(p,h),q=Math.max(q,h),i.textContent=h+" FPS ("+p+"-"+q+")",a=Math.min(30,30-30*(h/100)),c.appendChild(c.firstChild).style.height=
+a+"px",m=b,r=0);return b},update:function(){l=this.end()}}};"object"===typeof module&&(module.exports=Stats);
diff --git a/js/main.js b/js/main.js
index bf701d9..b701451 100644
--- a/js/main.js
+++ b/js/main.js
@@ -27,20 +27,35 @@ var posProg;
 var normProg;
 var colorProg;

-var isDiagnostic = true;
+var isDiagnostic = false;

 var zNear = 20;
 var zFar = 2000;

-var texToDisplay = 1;
+var texToDisplay = 0;
+//SSAO
+var SSAOtoggle = 0;
+var samplekernel = [];
+var kernelSize = 50;
+var BlurRadius = 0.1;
+var ProjectionMat;
+
+//bloom
+var bloomToggle = 0;
+//added for shadeprogram
+var lightpos = vec3.fromValues(15.0, 15.0, 15.0);
+var lightcolor = vec3.fromValues(1.0, 1.0, 1.0);
+
+var stats;

 var main = function (canvasId, messageId) {
   var canvas;
-
+  stats = initStats();
   // Initialize WebGL
   initGL(canvasId, messageId);

   // Set up camera
   initCamera(canvas);
-
+  //setup ssao
+  initSSAO();
   // Set up FBOs
   initFramebuffer();
@@ -57,7 +72,30 @@ var main = function (canvasId, messageId) {
   // Start the rendering loop
   CIS565WEBGLCORE.run(gl);
 };
+//Generate SampleKernel
+var initSSAO = function () {
+  for (var i = 0; i < kernelSize; i++) {
+    var x = Math.random() * 2.0 - 1.0;
+    var y = Math.random() * 2.0 - 1.0;
+    var z = Math.random();
+
+    // flat list of floats, as required by gl.uniform3fv for a vec3 array
+    samplekernel.push(Math.random() * x, Math.random() * y, Math.random() * z);
+  }
+};
+function initStats() {
+  stats = new Stats();
+  stats.setMode(0); // 0: fps, 1: ms
+
+  // Align top-left
+  stats.domElement.style.position = 'absolute';
+  stats.domElement.style.left = '0px';
+  stats.domElement.style.top = '0px';
+
+  document.body.appendChild(stats.domElement);
+
+  return stats;
+}
 var renderLoop = function () {
   window.requestAnimationFrame(renderLoop);
   render();
@@ -76,7 +114,8 @@ var render = function () {
   } else {
     renderDiagnostic();
   }
-
+  if (stats)
+    stats.update();
   gl.useProgram(null);
 };
@@ -217,26 +256,34 @@ var renderShade = function () {
   gl.clear(gl.COLOR_BUFFER_BIT);

   // Bind necessary textures
-  //gl.activeTexture( gl.TEXTURE0 );  //position
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
-  //gl.uniform1i( shadeProg.uPosSamplerLoc, 0 );
+  gl.activeTexture( gl.TEXTURE0 );  //position
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
+  gl.uniform1i( shadeProg.uPosSamplerLoc, 0 );

-  //gl.activeTexture( gl.TEXTURE1 );  //normal
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
-  //gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 );
+  gl.activeTexture( gl.TEXTURE1 );  //normal
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
+  gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 );

   gl.activeTexture( gl.TEXTURE2 );  //color
   gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) );
-  gl.uniform1i( shadeProg.uColorSamplerLoc, 2 );
+  gl.uniform1i(shadeProg.uColorSamplerLoc, 2);
+
+  gl.activeTexture( gl.TEXTURE3 );  //depth
+  gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
+  gl.uniform1i(shadeProg.uDepthSamplerLoc, 3);

-  //gl.activeTexture( gl.TEXTURE3 );  //depth
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
-  //gl.uniform1i( shadeProg.uDepthSamplerLoc, 3 );

   // Bind necessary uniforms
-  //gl.uniform1f( shadeProg.uZNearLoc, zNear );
-  //gl.uniform1f( shadeProg.uZFarLoc, zFar );
-
+  gl.uniform1f( shadeProg.uZNearLoc, zNear );
+  gl.uniform1f( shadeProg.uZFarLoc, zFar );
+  gl.uniformMatrix4fv(shadeProg.u_MVLoc, false, camera.getViewTransform());
+  gl.uniform3fv(shadeProg.uLightPosLoc, lightpos);
+  gl.uniform3fv(shadeProg.uLightColorLoc, lightcolor);
+  gl.uniform3fv(shadeProg.uEyePosLoc, camera.getEyePosition());
+  gl.uniform1f(shadeProg.screenHeightLoc, canvas.height);
+  gl.uniform1f(shadeProg.screenWidthLoc, canvas.width);
+  gl.uniform1i(shadeProg.uDisplayTypeLoc, texToDisplay);
+
   drawQuad(shadeProg);

   // Unbind FBO
@@ -270,7 +317,6 @@ var renderDiagnostic = function () {
   gl.uniform1f( diagProg.uZNearLoc, zNear );
   gl.uniform1f( diagProg.uZFarLoc, zFar );
   gl.uniform1i( diagProg.uDisplayTypeLoc, texToDisplay );
-
   drawQuad(diagProg);
 };
@@ -280,11 +326,38 @@ var renderPost = function () {
   gl.disable(gl.DEPTH_TEST);
   gl.clear(gl.COLOR_BUFFER_BIT);

-  // Bind necessary textures
+  // Bind necessary textures
+  gl.activeTexture(gl.TEXTURE0);  //position
+  gl.bindTexture(gl.TEXTURE_2D, fbo.texture(0));
+  gl.uniform1i(postProg.uPosSamplerLoc, 0);
+
+  gl.activeTexture(gl.TEXTURE1);  //normal
+  gl.bindTexture(gl.TEXTURE_2D, fbo.texture(1));
+  gl.uniform1i(postProg.uNormalSamplerLoc, 1);
+
+  //gl.activeTexture(gl.TEXTURE2);  //color
+  //gl.bindTexture(gl.TEXTURE_2D, fbo.texture(2));
+  //gl.uniform1i(postProg.uColorSamplerLoc, 2);
+
+  gl.activeTexture(gl.TEXTURE3);  //depth
+  gl.bindTexture(gl.TEXTURE_2D, fbo.depthTexture());
+  gl.uniform1i(postProg.uDepthSamplerLoc, 3);
+
   gl.activeTexture( gl.TEXTURE4 );
   gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
   gl.uniform1i(postProg.uShadeSamplerLoc, 4 );

+  //setup uniforms
+  gl.uniformMatrix4fv(postProg.uProjectionMatLoc, false, ProjectionMat);
+  gl.uniform1f(postProg.uZFarLoc, zFar);
+  gl.uniform1f(postProg.uZNearLoc, zNear);
+  gl.uniform1i(postProg.uSSAOToggleLoc, SSAOtoggle);
+  gl.uniform1i(postProg.uBloomToggleLoc, bloomToggle);
+  gl.uniform1f(postProg.screenHeightLoc, canvas.height);
+  gl.uniform1f(postProg.screenWidthLoc, canvas.width);
+  gl.uniform1f(postProg.blurRadiusLoc, BlurRadius);
+  gl.uniform3fv(postProg.uSampleKernelLoc, samplekernel);
+
   drawQuad(postProg);
 };
@@ -312,6 +385,7 @@ var initCamera = function () {
   persp = mat4.create();
   mat4.perspective(persp, todeg(60), canvas.width / canvas.height, 0.1, 2000);

+  ProjectionMat = persp;
   camera = CIS565WEBGLCORE.createCamera(CAMERA_TRACKING_TYPE);
   camera.goHome([0, 0, 4]);
   interactor = CIS565WEBGLCORE.CameraInteractor(camera, canvas);
@@ -321,7 +395,8 @@ var initCamera = function () {
     interactor.onKeyDown(e);
     switch(e.keyCode) {
       case 48:
-        isDiagnostic = false;
+        isDiagnostic = false;
+        texToDisplay = 0;
         break;
       case 49:
         isDiagnostic = true;
@@ -339,6 +414,18 @@ var initCamera = function () {
         isDiagnostic = true;
         texToDisplay = 4;
         break;
+      case 53:
+        isDiagnostic = false;
+        texToDisplay = 5;
+        break;
+      case 54:
+        isDiagnostic = false;
+        SSAOtoggle = 1 - SSAOtoggle;
+        break;
+      case 55:
+        isDiagnostic = false;
+        bloomToggle = 1 - bloomToggle;
+        break;
     }
   }
 };
@@ -348,8 +435,8 @@ var initObjs = function () {
   objloader = CIS565WEBGLCORE.createOBJLoader();

   // Load the OBJ from file
-  objloader.loadFromFile(gl, "assets/models/crytek-sponza/sponza.obj", null);
-
+  objloader.loadFromFile(gl, "assets/models/suzanne.obj", null);
+  //objloader.loadFromFile(gl, "assets/models/crytek-sponza/sponza.obj", null);
   // Add callback to upload the vertices once loaded
   objloader.addCallback(function () {
     model = new Model(gl, objloader);
@@ -463,7 +550,15 @@ var initShaders = function () {
     shadeProg.uPosSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_positionTex");
     shadeProg.uNormalSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_normalTex");
     shadeProg.uColorSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_colorTex");
-    shadeProg.uDepthSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_depthTex");
+    shadeProg.uDepthSamplerLoc = gl.getUniformLocation(shadeProg.ref(), "u_depthTex");
+
+    //added
+    shadeProg.uLightPosLoc = gl.getUniformLocation(shadeProg.ref(), "u_lightpos");
+    shadeProg.uLightColorLoc = gl.getUniformLocation(shadeProg.ref(), "u_lightcolor");
+    shadeProg.uEyePosLoc = gl.getUniformLocation(shadeProg.ref(), "u_eyepos");
+    shadeProg.u_MVLoc = gl.getUniformLocation(shadeProg.ref(), "u_View");
+    shadeProg.screenHeightLoc = gl.getUniformLocation(shadeProg.ref(), "screenHeight");
+    shadeProg.screenWidthLoc = gl.getUniformLocation(shadeProg.ref(), "screenWidth");

     shadeProg.uZNearLoc = gl.getUniformLocation( shadeProg.ref(), "u_zNear" );
     shadeProg.uZFarLoc = gl.getUniformLocation( shadeProg.ref(), "u_zFar" );
@@ -477,8 +572,22 @@ var initShaders = function () {
   postProg.addCallback( function() {
     postProg.aVertexPosLoc = gl.getAttribLocation( postProg.ref(), "a_pos" );
     postProg.aVertexTexcoordLoc = gl.getAttribLocation( postProg.ref(), "a_texcoord" );
-
-    postProg.uShadeSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_shadeTex");
+    postProg.uShadeSamplerLoc = gl.getUniformLocation(postProg.ref(), "u_shadeTex");
+    postProg.uSSAOToggleLoc = gl.getUniformLocation(postProg.ref(), "u_SSAOToggle");
+    postProg.uBloomToggleLoc = gl.getUniformLocation(postProg.ref(), "u_BloomToggle");
+
+    postProg.uPosSamplerLoc = gl.getUniformLocation(postProg.ref(), "u_positionTex");
+    postProg.uNormalSamplerLoc = gl.getUniformLocation(postProg.ref(), "u_normalTex");
+    postProg.uColorSamplerLoc = gl.getUniformLocation(postProg.ref(), "u_colorTex");
+    postProg.uDepthSamplerLoc = gl.getUniformLocation(postProg.ref(), "u_depthTex");
+
+    postProg.screenHeightLoc = gl.getUniformLocation(postProg.ref(), "u_ScreenHeight");
+    postProg.screenWidthLoc = gl.getUniformLocation(postProg.ref(), "u_ScreenWidth");
+    postProg.uZNearLoc = gl.getUniformLocation(postProg.ref(), "u_zNear");
+    postProg.uZFarLoc = gl.getUniformLocation(postProg.ref(), "u_zFar");
+    postProg.uSampleKernelLoc = gl.getUniformLocation(postProg.ref(), "u_sampleKernels");
+    postProg.blurRadiusLoc = gl.getUniformLocation(postProg.ref(), "u_blurRadius");
+    postProg.uProjectionMatLoc = gl.getUniformLocation(postProg.ref(), "u_projection");
  });
  CIS565WEBGLCORE.registerAsyncObj(gl, postProg);
};
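The SSAO sample kernel built in `initSSAO` in the main.js diff above has one subtlety worth calling out: `gl.uniform3fv` expects a flat array of `kernelSize * 3` floats, so storing each sample as a nested `[x, y, z]` triple will not upload correctly. The sketch below is a standalone, testable variant (the function name `generateSSAOKernel` and the single per-sample scale factor are our assumptions, not part of the project code, which scales each component by its own random factor):

```javascript
// Sketch of SSAO hemisphere-kernel generation. Samples are drawn from the
// z >= 0 half-space (the hemisphere around the surface normal after the
// shader rotates them), and the extra scale factor biases samples toward
// the origin so nearby geometry contributes more occlusion.
function generateSSAOKernel(kernelSize) {
  var kernel = [];
  for (var i = 0; i < kernelSize; i++) {
    var x = Math.random() * 2.0 - 1.0; // [-1, 1)
    var y = Math.random() * 2.0 - 1.0; // [-1, 1)
    var z = Math.random();             // [0, 1): hemisphere around +z
    var scale = Math.random();         // bias samples toward the center
    // Push flat components: gl.uniform3fv for `uniform vec3 u_sampleKernels[N]`
    // needs kernelSize * 3 floats, not an array of [x, y, z] triples.
    kernel.push(x * scale, y * scale, z * scale);
  }
  return new Float32Array(kernel);
}
```

At upload time this would pair with the diff's post-process uniform, e.g. `gl.uniform3fv(postProg.uSampleKernelLoc, generateSSAOKernel(50))`. A common refinement (not shown) is to normalize each sample direction first and interpolate the scale quadratically so samples cluster tightly near the origin.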