diff --git a/README.md b/README.md index 31db119..70c4709 100644 --- a/README.md +++ b/README.md @@ -3,336 +3,79 @@ WebGL Deferred Shading **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 6** -* (TODO) YOUR NAME HERE -* Tested on: (TODO) **Google Chrome 222.2** on - Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab) +* Megan Moore +* Tested on: Google Chrome on MacBook Pro 2.6 GHz Intel Core i5 @ 8 GB 1600 MHz DDR3, Intel Iris 1536 MB ### Live Online -[![](img/thumb.png)](http://TODO.github.io/Project6-WebGL-Deferred-Shading) +[![](img/final_noEffects.png)](http://megmo21.github.io/Project6-WebGL-Deferred-Shading/) ### Demo Video -[![](img/video.png)](TODO) +[![](img/Bloom.png)](https://vimeo.com/144067450) -### (TODO: Your README) -*DO NOT* leave the README to the last minute! It is a crucial part of the -project, and we will not be able to grade you without a good README. +In this project, I implemented a deferred +shading pipeline and various lighting and visual effects using WebGL and GLSL. -This assignment has a considerable amount of performance analysis compared -to implementation work. Complete the implementation early to leave time! +**Effects:** -Instructions (delete me) -======================== - -This is due at midnight on the evening of Sunday, October 25. - -**Summary:** In this project, you'll be introduced to the basics of deferred -shading and WebGL. You'll use GLSL and WebGL to implement a deferred shading -pipeline and various lighting and visual effects. - -**Recommendations:** -Take screenshots as you go. Use them to document your progress in your README! - -Read (or at least skim) the full README before you begin, so that you know what -to expect and what to prepare for. - -### Running the code - -If you have Python, you should be able to run `server.py` to start a server. -Then, open [`http://localhost:10565/`](http://localhost:10565/) in your browser. - -This project requires a WebGL-capable web browser with support for -`WEBGL_draw_buffers`. You can check for support on -[WebGL Report](http://webglreport.com/). - -Google Chrome seems to work best on all platforms. If you have problems running -the starter code, use Chrome or Chromium, and make sure you have updated your -browser and video drivers. +* Implement deferred Blinn-Phong shading (diffuse + specular) + * With normal mapping (code provided) + + -In Moore 100C, both Chrome and Firefox work. -See below for notes on profiling/debugging tools. + * The Blinn-Phong shading was the first effect implemented. This is a deferred shader and does not use any post processing. The Blinn-Phong lighting was applied with normal mapping. -Use the screenshot button to save a screenshot. +* Implement Bloom using post-process blur + * Using post-process blur (Gaussian) [1] + + -## Requirements + * Bloom was implemented using a two-pass Gaussian blur. The first pass allows for the blur to be applied horizontally, while the second allows for the blur to be applied vertically. By splitting this process up into two passes, we are able to increase the speed of the effect. Doing this in one pass would require n^2 amount of time, where n is the diameter of the blur. However, in a two-pass process it only requires 2n. Overall the bloom effect slightly decreased the speed of the program. It went from 35 FPS to 29 FPS, causing a drop of 6 FPS. + +* Implement Toon shading + * With ramp shading and simple depth-edge detection + + + + * Toon shading requires just one post process pass. 
In order to implement toon shading, the diffuse and specular terms in the Blinn-Phong shading were turned into step functions rather than continuous functions. If the diffuse term was lower than 0.5, it became 0.2; if it was higher than 0.5, it became 1.0. The same was done for the specular value. This produces the distinct color bands on the models. This was all done in the Blinn-Phong shader. Then, the edge detection was done in the post-process pass. The colors of the neighboring fragments were collected; if the difference between those neighbors exceeded a certain threshold, the color was changing there, meaning the fragment sits on the edge of a material, so its color is set to black. This implementation is cheaper than the bloom implementation; it only slowed the program down by 4 FPS. + * It took a while to get this implementation correct. Below are some of the early-stage toon images. At first, the diffuse term was backwards, causing the lights to be very bright on the edges and dark at the center. Then, I had the correct shading, but no outlines. + + -**Ask on the mailing list for any clarifications.** +* Allow variability in additional material properties -In this project, you are given code for: + -* Loading OBJ files and color/normal map textures -* Camera control -* Partial implementation of deferred shading including many helper functions + * This effect was implemented by adding an extra variable to each model that was loaded into the scene. As the model's data was copied into the GBuffer, the specular exponent that had been loaded with the model was written into the fourth component of the normal vector in the GBuffer, and it was then used in the Blinn-Phong calculations. This did not add any extra run time, as it didn't increase the size of the GBuffers being copied and passed into the shaders. -### Required Tasks +* Screen-space motion blur [3] -**Before doing performance analysis,** you must disable debug mode by changing -`debugMode` to `false` in `framework.js`. Keep it enabled when developing - it -helps find WebGL errors *much* more easily. + -You will need to perform the following tasks: + * This effect was implemented in a post-process pass. The previous frame's camera matrix was kept in a global variable that could be passed into the shader during the next frame. With that, I can compute where each fragment's world position fell on screen in the previous frame. From this, I calculate the velocity of the fragment by taking the difference between the two screen positions. Then, I sample the colors along a vector in the direction of the velocity, and the average of these samples becomes the new fragment color. + * There is also a debug view for this effect. It shows the velocity (in color) at each fragment. -* Complete the deferred shading pipeline so that the Blinn-Phong and Post1 - shaders recieve the correct input. Go through the Starter Code Tour **before - continuing!** -**Effects:** +**Optimizations:** -* Implement deferred Blinn-Phong shading (diffuse + specular) - * With normal mapping (code provided) +* Scissor test optimization, Old and Improved -* Implement one of the following effects: - * Bloom using post-process blur (box or Gaussian) [1] - * Toon shading (with ramp shading + simple depth-edge detection for outlines) + -**Optimizations:** + * This is the original scissor test. When accumulating shading from each light source, I only want to render in a rectangle around the light. 
This greatly reduces the number of fragments shaded per light and improves the performance of the program. In this case, the scissor test allowed my FPS to go from an average of 14 to 31. The images above show the debug view, which draws each light's scissor rectangle, alongside an image of the actual rendered model. -* Scissor test optimization: when accumulating shading from each point - light source, only render in a rectangle around the light. - * Show a debug view for this (showing scissor masks clearly), e.g. by - modifying and using `red.frag.glsl` with additive blending. - * Code is provided to compute this rectangle for you, and there are - comments at the relevant place in `deferredRender.js` with more guidance. + + + * The scissor test was then improved. By calculating a more precise bounding box around the entire light sphere, I was able to increase both the speed and the accuracy of the render. The speed-up was less impressive, but it still raised the average FPS to 35, an improvement of 4 FPS. Also, the lights are no longer cut off: in the image on the right, the light in the upper-right corner is cut off in the original version, but not in the image directly above. * Optimized g-buffer format - reduce the number and size of g-buffers: - * Ideas: - * Pack values together into vec4s - * Use 2-component normals - * Quantize values by using smaller texture types instead of gl.FLOAT - * Reduce number of properties passed via g-buffer, e.g. by: - * Applying the normal map in the `copy` shader pass instead of - copying both geometry normals and normal maps - * Reconstructing world space position using camera matrices and X/Y/depth - * For credit, you must show a good optimization effort and record the - performance of each version you test, in a simple table. - * It is expected that you won't need all 4 provided g-buffers for a basic - pipeline - make sure you disable the unused ones. - * See mainly: `copy.frag.glsl`, `deferred/*.glsl`, `deferredSetup.js` - -### Extra Tasks - -You must do at least **10 points** worth of extra features (effects or -optimizations/analysis). + + -**Effects:** - -* (3pts) The effect you didn't choose above (bloom or toon shading) - -* (3pts) Screen-space motion blur (blur along velocity direction) [3] - -* (2pts) Allow variability in additional material properties - * Include other properties (e.g. specular coeff/exponent) in g-buffers - * Use this to render objects with different material properties - * These may be uniform across one model draw call, but you'll have to show - multiple models - -**Optimizations/Analysis:** - -* (2pts) Improved screen-space AABB for scissor test - (smaller/more accurate than provided - but beware of CPU/GPU tradeoffs) - -* (3pts) Two-pass **Gaussian** blur using separable convolution (using a second - postprocess render pass) to improve bloom or other 2D blur performance - -* (4-6pts) Light proxies - * (4pts) Instead of rendering a scissored full-screen quad for every light, - render some proxy geometry which covers the part of the screen affected by - the light (e.g. a sphere, for an attenuated point light). - * A model called `sphereModel` is provided which can be drawn in the same - way as the code in `drawScene`. (Must be drawn with a vertex shader which - scales it to the light radius and translates it to the light position.) - * (+2pts) To avoid lighting geometry far behind the light, render the proxy - geometry (e.g. 
sphere) using an inverted depth test - (`gl.depthFunc(gl.GREATER)`) with depth writing disabled (`gl.depthMask`). - This test will pass only for parts of the screen for which the backside of - the sphere appears behind parts of the scene. - * Note that the copy pass's depth buffer must be bound to the FBO during - this operation! - * Show a debug view for this (showing light proxies) - * Compare performance of this, naive, and scissoring. - -* (8pts) Tile-based deferred shading with detailed performance comparison - * On the CPU, check which lights overlap which tiles. Then, render each tile - just once for all lights (instead of once for each light), applying only - the overlapping lights. - * The method is described very well in - [Yuqin & Sijie's README](https://github.com/YuqinShao/Tile_Based_WebGL_DeferredShader/blob/master/README.md#algorithm-details). - * This feature requires allocating the global light list and tile light - index lists as shown at this link. These can be implemented as textures. - * Show a debug view for this (number of lights per tile) - -* (6pts) Deferred shading without multiple render targets - (i.e. without WEBGL_draw_buffers). - * Render the scene once for each target g-buffer, each time into a different - framebuffer object. - * Include a detailed performance analysis, comparing with/without - WEBGL_draw_buffers (like in the - [Mozilla blog article](https://hacks.mozilla.org/2014/01/webgl-deferred-shading/)). - -* (2-6pts) Compare performance to equivalently-lit forward-rendering: - * (2pts) With no forward-rendering optimizations - * (+2pts) Coarse, per-object back-to-front sorting of geometry for early-z - * (Of course) must render many objects to test - * (+2pts) Z-prepass for early-z - -This extra feature list is not comprehensive. If you have a particular idea -that you would like to implement, please **contact us first** (preferably on -the mailing list). - -**Where possible, all features should be switchable using the GUI panel in -`ui.js`.** - -### Performance & Analysis - -**Before doing performance analysis,** you must disable debug mode by changing -`debugMode` to `false` in `framework.js`. Keep it enabled when developing - it -helps find WebGL errors *much* more easily. - -Optimize your JavaScript and/or GLSL code. Web Tracing Framework -and Chrome/Firefox's profiling tools (see Resources section) will -be useful for this. For each change -that improves performance, show the before and after render times. - -For each new *effect* feature (required or extra), please -provide the following analysis: - -* Concise overview write-up of the feature. -* Performance change due to adding the feature. - * If applicable, how do parameters (such as number of lights, etc.) - affect performance? Show data with simple graphs. -* If you did something to accelerate the feature, what did you do and why? -* How might this feature be optimized beyond your current implementation? - -For each *performance* feature (required or extra), please provide: - -* Concise overview write-up of the feature. -* Detailed performance improvement analysis of adding the feature - * What is the best case scenario for your performance improvement? What is - the worst? Explain briefly. - * Are there tradeoffs to this performance feature? Explain briefly. - * How do parameters (such as number of lights, tile size, etc.) affect - performance? Show data with graphs. - * Show debug views when possible. - * If the debug view correlates with performance, explain how. 
- -Note: Be aware that stats.js may give 0 millisecond frame timings in Chrome on -occasion - if this happens, you can use the FPS counter. - -### Starter Code Tour - -You'll be working mainly in `deferredRender.js` using raw WebGL. Three.js is -included in the project for various reasons. You won't use it for much, but its -matrix/vector types may come in handy. - -It's highly recommended that you use the browser debugger to inspect variables -to get familiar with the code. At any point, you can also -`console.log(some_var);` to show it in the console and inspect it. - -The setup in `deferredSetup` is already done for you, for many of the features. -If you want to add uniforms (textures or values), you'll change them here. -Therefore, it is recommended that you review the comments to understand the -process, BEFORE starting work in `deferredRender`. - -In `deferredRender`, start at the **START HERE!** comment. -Work through the appropriate **`TODO`s** as you go - most of them are very -small. Test incrementally (after implementing each part, instead of testing -all at once). - -Your first goal should be to get the debug views working. -Add code in `debug.frag.glsl` to examine your g-buffers before trying to -render them. (Set the debugView in the UI to show them.) - -For editing JavaScript, you can use a simple editor with syntax highlighting -such as Sublime, Vim, Emacs, etc., or the editor built into Chrome. - -* `js/`: JavaScript files for this project. - * `main.js`: Handles initialization of other parts of the program. - * `framework.js`: Loads the scene, camera, etc., and calls your setup/render - functions. Hopefully, you won't need to change anything here. - * `deferredSetup.js`: Deferred shading pipeline setup code. - * `createAndBind(Depth/Color)TargetTexture`: Creates empty textures for - binding to frame buffer objects as render targets. - * `deferredRender.js`: Your deferred shading pipeline execution code. - * `renderFullScreenQuad`: Renders a full-screen quad with the given shader - program. - * `ui.js`: Defines the UI using - [dat.GUI](https://workshop.chromeexperiments.com/examples/gui/). - * The global variable `cfg` can be accessed anywhere in the code to read - configuration values. - * `utils.js`: Utilities for JavaScript and WebGL. - * `abort`: Aborts the program and shows an error. - * `loadTexture`: Loads a texture from a URL into WebGL. - * `loadShaderProgram`: Loads shaders from URLs into a WebGL shader program. - * `loadModel`: Loads a model into WebGL buffers. - * `readyModelForDraw`: Configures the WebGL state to draw a model. - * `drawReadyModel`: Draws a model which has been readied. - * `getScissorForLight`: Computes an approximate scissor rectangle for a - light in world space. -* `glsl/`: GLSL code for each part of the pipeline: - * `clear.*.glsl`: Clears each of the `NUM_GBUFFERS` g-buffers. - * `copy.*.glsl`: Performs standard rendering without any fragment shading, - storing all of the resulting values into the `NUM_GBUFFERS` g-buffers. - * `quad.vert.glsl`: Minimal vertex shader for rendering a single quad. - * `deferred.frag.glsl`: Deferred shading pass (for lighting calculations). - Reads from each of the `NUM_GBUFFERS` g-buffers. - * `post1.frag.glsl`: First post-processing pass. -* `lib/`: JavaScript libraries. -* `models/`: OBJ models for testing. Sponza is the default. -* `index.html`: Main HTML page. -* `server.bat` (Windows) or `server.py` (OS X/Linux): - Runs a web server at `localhost:10565`. 
- -### The Deferred Shading Pipeline - -See the comments in `deferredSetup.js`/`deferredRender.js` for low-level guidance. - -In order to enable and disable effects using the GUI, upload a vec4 uniform -where each component is an enable/disable flag. In JavaScript, the state of the -UI is accessible anywhere as `cfg.enableEffect0`, etc. - -**Pass 1:** Renders the scene geometry and its properties to the g-buffers. -* `copy.vert.glsl`, `copy.frag.glsl` -* The framebuffer object `pass_copy.fbo` must be bound during this pass. -* Renders into `pass_copy.depthTex` and `pass_copy.gbufs[i]`, which need to be - attached to the framebuffer. - -**Pass 2:** Performs lighting and shading into the color buffer. -* `quad.vert.glsl`, `deferred/blinnphong-pointlight.frag.glsl` -* Takes the g-buffers `pass_copy.gbufs`/`depthTex` as texture inputs to the - fragment shader, on uniforms `u_gbufs` and `u_depth`. -* `pass_deferred.fbo` must be bound. -* Renders into `pass_deferred.colorTex`. - -**Pass 3:** Performs post-processing. -* `quad.vert.glsl`, `post/one.frag.glsl` -* Takes `pass_BlinnPhong_PointLight.colorTex` as a texture input `u_color`. -* Renders directly to the screen if there are no additional passes. - -More passes may be added for additional effects (e.g. combining bloom with -motion blur) or optimizations (e.g. two-pass Gaussian blur for bloom) - -#### Debugging - -If there is a WebGL error, it will be displayed on the developer console and -the renderer will be aborted. To find out where the error came from, look at -the backtrace of the error (you may need to click the triangle to expand the -message). The line right below `wrapper @ webgl-debug.js` will point to the -WebGL call that failed. - -#### Changing the number of g-buffers - -Note that the g-buffers are just `vec4`s - you can put any values you want into -them. However, if you want to change the total number of g-buffers (add more -for additional effects or remove some for performance), you will need to make -changes in a number of places: - -* `deferredSetup.js`/`deferredRender.js`: search for `NUM_GBUFFERS` -* `copy.frag.glsl` -* `deferred.frag.glsl` -* `clear.frag.glsl` + * In order to reduce the size of the GBuffer, I applied the normal map function during the copy fragment shader, rather than in the ambient, blinn-phong, and debug shaders. This allowed me not to pass through both the geometry normal and normal map. I could just pass the surface normal into the shader. You can see from the graph above, that having a smaller GBuffer size increased the FPS in all cases. This increased performance because the bounding boxes became more accurate, and there was no extra computations that were required which would slow it down. ## Resources @@ -344,82 +87,3 @@ changes in a number of places: * [3] Post-Process Motion Blur: [GPU Gems 3, Ch. 27](http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html) -**Also see:** The articles linked in the course schedule. - -### Profiling and debugging tools - -Built into Firefox: -* Canvas inspector -* Shader Editor -* JavaScript debugger and profiler - -Built into Chrome: -* JavaScript debugger and profiler - -Plug-ins: -* (Chrome/Firefox) [Web Tracing Framework](http://google.github.io/tracing-framework/) -* (Chrome) [Shader Editor](https://chrome.google.com/webstore/detail/shader-editor/ggeaidddejpbakgafapihjbgdlbbbpob) - - -Firefox can also be useful - it has a canvas inspector, WebGL profiling and a -shader editor built in. 
- - -## README - -Replace the contents of this README.md in a clear manner with the following: - -* A brief description of the project and the specific features you implemented. -* At least one screenshot of your project running. -* A 30+ second video of your project running showing all features. - [Open Broadcaster Software](http://obsproject.com) is recommended. - (Even though your demo can be seen online, using multiple render targets - means it won't run on many computers. A video will work everywhere.) -* A performance analysis (described below). - -### Performance Analysis - -See above. - -### GitHub Pages - -Since this assignment is in WebGL, you can make your project easily viewable by -taking advantage of GitHub's project pages feature. - -Once you are done with the assignment, create a new branch: - -`git branch gh-pages` - -Push the branch to GitHub: - -`git push origin gh-pages` - -Now, you can go to `.github.io/` to see your -renderer online from anywhere. Add this link to your README. - -## Submit - -1. Open a GitHub pull request so that we can see that you have finished. - The title should be "Submission: YOUR NAME". - * **ADDITIONALLY:** - In the body of the pull request, include a link to your repository. -2. Send an email to the TA (gmail: kainino1+cis565@) with: - * **Subject**: in the form of `[CIS565] Project N: PENNKEY`. - * Direct link to your pull request on GitHub. - * Estimate the amount of time you spent on the project. - * If there were any outstanding problems, briefly explain. - * **List the extra features you did.** - * Feedback on the project itself, if any. - -### Third-Party Code Policy - -* Use of any third-party code must be approved by asking on our mailing list. -* If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the path tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code **MUST** be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will, at minimum, - result in you receiving an F for the semester. diff --git a/glsl/copy.frag.glsl b/glsl/copy.frag.glsl index 0f5f8f7..aeef93f 100644 --- a/glsl/copy.frag.glsl +++ b/glsl/copy.frag.glsl @@ -5,11 +5,28 @@ precision highp int; uniform sampler2D u_colmap; uniform sampler2D u_normap; +uniform float u_material; varying vec3 v_position; varying vec3 v_normal; varying vec2 v_uv; + +vec3 applyNormalMap(vec3 geomnor, vec3 normap) { + normap = normap * 2.0 - 1.0; + vec3 up = normalize(vec3(0.001, 1, 0.001)); + vec3 surftan = normalize(cross(geomnor, up)); + vec3 surfbinor = cross(geomnor, surftan); + return normap.y * surftan + normap.x * surfbinor + normap.z * geomnor; +} + + void main() { // TODO: copy values into gl_FragData[0], [1], etc. 
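+    // G-buffer layout written below (NUM_GBUFFERS = 3):
+    //   gl_FragData[0].xyz = fragment position (used as the world-space position by the lighting and motion-blur passes)
+    //   gl_FragData[1].xyz = surface normal with the normal map already applied; gl_FragData[1].w = specular exponent (u_material)
+    //   gl_FragData[2]     = diffuse color sampled from the color map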
+ gl_FragData[0] = vec4(v_position, 1.0); + vec4 gnorm = vec4(normalize(v_normal), 0.0); + gl_FragData[2] = texture2D(u_colmap, v_uv); + vec4 norm = texture2D(u_normap, v_uv); + gl_FragData[1].xyz = applyNormalMap(vec3(gnorm), vec3(norm)); + gl_FragData[1].w = u_material; } diff --git a/glsl/deferred/ambient.frag.glsl b/glsl/deferred/ambient.frag.glsl index 1fd4647..94e8ccf 100644 --- a/glsl/deferred/ambient.frag.glsl +++ b/glsl/deferred/ambient.frag.glsl @@ -3,25 +3,29 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 uniform sampler2D u_gbufs[NUM_GBUFFERS]; uniform sampler2D u_depth; varying vec2 v_uv; + void main() { vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); float depth = texture2D(u_depth, v_uv).x; + float amb = .2; // TODO: Extract needed properties from the g-buffers into local variables + vec3 pos = gb0.xyz; + vec3 colmap = gb2.xyz; + vec3 nor = gb1.xyz; if (depth == 1.0) { gl_FragColor = vec4(0, 0, 0, 0); // set alpha to 0 return; } - gl_FragColor = vec4(0.1, 0.1, 0.1, 1); // TODO: replace this + gl_FragColor = vec4(amb * colmap, 1); // TODO: replace this } diff --git a/glsl/deferred/blinnphong-pointlight.frag.glsl b/glsl/deferred/blinnphong-pointlight.frag.glsl index b24a54a..9fe8cee 100644 --- a/glsl/deferred/blinnphong-pointlight.frag.glsl +++ b/glsl/deferred/blinnphong-pointlight.frag.glsl @@ -2,38 +2,86 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 uniform vec3 u_lightCol; uniform vec3 u_lightPos; uniform float u_lightRad; uniform sampler2D u_gbufs[NUM_GBUFFERS]; uniform sampler2D u_depth; +uniform vec3 u_cameraPos; +uniform float u_toon; +//uniform float u_width; +//uniform float u_height; varying vec2 v_uv; -vec3 applyNormalMap(vec3 geomnor, vec3 normap) { - normap = normap * 2.0 - 1.0; - vec3 up = normalize(vec3(0.001, 1, 0.001)); - vec3 surftan = normalize(cross(geomnor, up)); - vec3 surfbinor = cross(geomnor, surftan); - return normap.y * surftan + normap.x * surfbinor + normap.z * geomnor; -} + void main() { vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); + float depth = texture2D(u_depth, v_uv).x; + + // TODO: Extract needed properties from the g-buffers into local variables + vec3 pos = gb0.xyz; + + vec3 colmap = gb2.xyz; + float spec = gb1.w; + + + vec3 normal = normalize(gb1.xyz); + vec3 lightDir = normalize((u_lightPos - pos)); // / length(u_lightPos - pos); - // If nothing was rendered to this pixel, set alpha to 0 so that the - // postprocessing step can render the sky color. 
- if (depth == 1.0) { - gl_FragColor = vec4(0, 0, 0, 0); + if (length(u_lightPos - pos) > u_lightRad) { + gl_FragColor = vec4(0, 0, 0, 1.0); return; + } + float lambertian = min(max(dot(lightDir,normal), 0.0), 1.0); + float specular = 0.0; + + vec3 viewDir = normalize(u_cameraPos - pos); + vec3 halfDir = normalize(lightDir + viewDir); + float specAngle = max(dot(halfDir, normal), 0.0); + + + specular = pow(specAngle, spec); + + + vec3 color = lambertian * colmap * u_lightCol * (u_lightRad - length(u_lightPos - pos)) + specular*vec3(1.0); + + if (u_toon > 0.5) { + //diffuse values + if (lambertian < 0.5) { + lambertian = .2; + } + else { + lambertian = 1.0; + } + + //specular highlight + if (specAngle > .75) { + specular = pow(1.0, spec); + } + else { + specular = 0.0; + } + + //fade from center of light + float fade; + if (u_lightRad - length(u_lightPos - pos) < .5) fade = .25; + else fade = .75; + + //check silhouette + + color = lambertian * colmap * u_lightCol * fade + specular*vec3(1.0); } - gl_FragColor = vec4(0, 0, 1, 1); // TODO: perform lighting calculations + + + gl_FragColor = vec4(color, 1.0); + } diff --git a/glsl/deferred/debug.frag.glsl b/glsl/deferred/debug.frag.glsl index d05638c..d028687 100644 --- a/glsl/deferred/debug.frag.glsl +++ b/glsl/deferred/debug.frag.glsl @@ -2,49 +2,49 @@ precision highp float; precision highp int; -#define NUM_GBUFFERS 4 +#define NUM_GBUFFERS 3 uniform int u_debug; uniform sampler2D u_gbufs[NUM_GBUFFERS]; uniform sampler2D u_depth; - +uniform mat4 u_prevPM; varying vec2 v_uv; +uniform vec3 u_cameraPos; const vec4 SKY_COLOR = vec4(0.66, 0.73, 1.0, 1.0); -vec3 applyNormalMap(vec3 geomnor, vec3 normap) { - normap = normap * 2.0 - 1.0; - vec3 up = normalize(vec3(0.001, 1, 0.001)); - vec3 surftan = normalize(cross(geomnor, up)); - vec3 surfbinor = cross(geomnor, surftan); - return normap.y * surftan + normap.x * surfbinor + normap.z * geomnor; -} void main() { vec4 gb0 = texture2D(u_gbufs[0], v_uv); vec4 gb1 = texture2D(u_gbufs[1], v_uv); vec4 gb2 = texture2D(u_gbufs[2], v_uv); - vec4 gb3 = texture2D(u_gbufs[3], v_uv); + float depth = texture2D(u_depth, v_uv).x; // TODO: Extract needed properties from the g-buffers into local variables - vec3 pos; - vec3 geomnor; - vec3 colmap; - vec3 normap; - vec3 nor; + vec3 pos = gb0.xyz; + + vec3 colmap = gb2.xyz; + + vec3 nor = gb1.xyz; + + + vec4 H = vec4(v_uv.x*2.0 - 1.0, (v_uv.y) * 2.0 - 1.0, depth, 1.0); + vec4 currentPos = H; + vec4 prevPos = u_prevPM * vec4(pos / gb0.w, 1.0); + prevPos /= prevPos.w; + + vec2 velocity = ((currentPos.xy) - prevPos.xy) / 2.0; if (u_debug == 0) { gl_FragColor = vec4(vec3(depth), 1.0); } else if (u_debug == 1) { gl_FragColor = vec4(abs(pos) * 0.1, 1.0); } else if (u_debug == 2) { - gl_FragColor = vec4(abs(geomnor), 1.0); + gl_FragColor = vec4(abs(nor), 1.0); } else if (u_debug == 3) { gl_FragColor = vec4(colmap, 1.0); } else if (u_debug == 4) { - gl_FragColor = vec4(normap, 1.0); - } else if (u_debug == 5) { - gl_FragColor = vec4(abs(nor), 1.0); + gl_FragColor = vec4(abs(velocity), 0.0, 1.0); } else { gl_FragColor = vec4(1, 0, 1, 1); } diff --git a/glsl/post/blend.frag.glsl b/glsl/post/blend.frag.glsl new file mode 100644 index 0000000..c98f530 --- /dev/null +++ b/glsl/post/blend.frag.glsl @@ -0,0 +1,23 @@ +#version 100 +precision highp float; +precision highp int; + + +varying vec2 v_uv; + +uniform sampler2D scene; +uniform sampler2D bloomBlur; +uniform float exposure; + +void main() +{ + const float gamma = 2.2; + vec3 hdrColor = texture(scene, v_uv).rgb; + vec3 bloomColor 
= texture(bloomBlur, v_uv).rgb; + hdrColor += bloomColor; // additive blending + // tone mapping + vec3 result = vec3(1.0) - exp(-hdrColor * exposure); + // also gamma correct while we're at it + result = pow(result, vec3(1.0 / gamma)); + gl_FragColor = vec4(result, 1.0f); +} \ No newline at end of file diff --git a/glsl/post/bloom.frag.glsl b/glsl/post/bloom.frag.glsl new file mode 100644 index 0000000..8f9a668 --- /dev/null +++ b/glsl/post/bloom.frag.glsl @@ -0,0 +1,64 @@ +#version 100 +precision highp float; +precision highp int; + +uniform sampler2D u_color; +uniform float u_bloom; +varying vec2 v_uv; + +uniform vec2 u_resolution; +uniform vec2 u_dir; + +const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); + +void main() { + vec4 color = texture2D(u_color, v_uv); + + if (color.a == 0.0) { + gl_FragColor = SKY_COLOR; + return; + } + + + vec4 sum = vec4(0.0); + + //the amount to blur, i.e. how far off center to sample from + //1.0 -> blur by one pixel + //2.0 -> blur by two pixels, etc. + float blur = 4.0/u_resolution.x; + + //the direction of our blur + //(1.0, 0.0) -> x-axis blur + //(0.0, 1.0) -> y-axis blur + float hstep = u_dir.x; + float vstep = u_dir.y; + + + + //apply blurring, using a 9-tap filter with predefined gaussian weights + + sum += texture2D(u_color, vec2(v_uv.x - 4.0*blur*hstep, v_uv.y - 4.0*blur*vstep)) * 0.0162162162; + sum += texture2D(u_color, vec2(v_uv.x - 3.0*blur*hstep, v_uv.y - 3.0*blur*vstep)) * 0.0540540541; + sum += texture2D(u_color, vec2(v_uv.x - 2.0*blur*hstep, v_uv.y - 2.0*blur*vstep)) * 0.1216216216; + sum += texture2D(u_color, vec2(v_uv.x - 1.0*blur*hstep, v_uv.y - 1.0*blur*vstep)) * 0.1945945946; + + sum += texture2D(u_color, vec2(v_uv.x, v_uv.y)) * 0.2270270270; + + sum += texture2D(u_color, vec2(v_uv.x + 1.0*blur*hstep, v_uv.y + 1.0*blur*vstep)) * 0.1945945946; + sum += texture2D(u_color, vec2(v_uv.x + 2.0*blur*hstep, v_uv.y + 2.0*blur*vstep)) * 0.1216216216; + sum += texture2D(u_color, vec2(v_uv.x + 3.0*blur*hstep, v_uv.y + 3.0*blur*vstep)) * 0.0540540541; + sum += texture2D(u_color, vec2(v_uv.x + 4.0*blur*hstep, v_uv.y + 4.0*blur*vstep)) * 0.0162162162; + + //discard alpha for our simple demo, multiply by vertex color and return + gl_FragColor = color + color * sum; + + + +} + + + + + + + diff --git a/glsl/post/bloom1.frag.glsl b/glsl/post/bloom1.frag.glsl new file mode 100644 index 0000000..eea8f7e --- /dev/null +++ b/glsl/post/bloom1.frag.glsl @@ -0,0 +1,63 @@ +#version 100 +precision highp float; +precision highp int; + +uniform sampler2D u_color; +uniform float u_bloom; +varying vec2 v_uv; + +uniform vec2 u_resolution; +uniform vec2 u_dir; + +const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); + +void main() { + vec4 color = texture2D(u_color, v_uv); + + if (color.a == 0.0) { + gl_FragColor = SKY_COLOR; + return; + } + + + vec4 sum = vec4(0.0); + + //the amount to blur, i.e. how far off center to sample from + //1.0 -> blur by one pixel + //2.0 -> blur by two pixels, etc. 
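+    // Separable Gaussian blur: the nine weights below form a normalized 1D Gaussian kernel.
+    // u_dir selects the blur axis ((1, 0) = horizontal, (0, 1) = vertical), so the full 2D blur
+    // is built from two 1D passes - roughly 2n samples per pixel instead of n^2 for a single pass.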
+ float blur = 4.0/u_resolution.x; + + //the direction of our blur + //(1.0, 0.0) -> x-axis blur + //(0.0, 1.0) -> y-axis blur + float hstep = u_dir.x; + float vstep = u_dir.y; + + //apply blurring, using a 9-tap filter with predefined gaussian weights + + sum += texture2D(u_color, vec2(v_uv.x - 4.0*blur*hstep, v_uv.y - 4.0*blur*vstep)) * 0.0162162162; + sum += texture2D(u_color, vec2(v_uv.x - 3.0*blur*hstep, v_uv.y - 3.0*blur*vstep)) * 0.0540540541; + sum += texture2D(u_color, vec2(v_uv.x - 2.0*blur*hstep, v_uv.y - 2.0*blur*vstep)) * 0.1216216216; + sum += texture2D(u_color, vec2(v_uv.x - 1.0*blur*hstep, v_uv.y - 1.0*blur*vstep)) * 0.1945945946; + + sum += texture2D(u_color, vec2(v_uv.x, v_uv.y)) * 0.2270270270; + + sum += texture2D(u_color, vec2(v_uv.x + 1.0*blur*hstep, v_uv.y + 1.0*blur*vstep)) * 0.1945945946; + sum += texture2D(u_color, vec2(v_uv.x + 2.0*blur*hstep, v_uv.y + 2.0*blur*vstep)) * 0.1216216216; + sum += texture2D(u_color, vec2(v_uv.x + 3.0*blur*hstep, v_uv.y + 3.0*blur*vstep)) * 0.0540540541; + sum += texture2D(u_color, vec2(v_uv.x + 4.0*blur*hstep, v_uv.y + 4.0*blur*vstep)) * 0.0162162162; + + //discard alpha for our simple demo, multiply by vertex color and return + gl_FragColor = color * vec4(sum.rgb, 1.0); + + + + +} + + + + + + + diff --git a/glsl/post/motion.frag.glsl b/glsl/post/motion.frag.glsl new file mode 100644 index 0000000..1755d19 --- /dev/null +++ b/glsl/post/motion.frag.glsl @@ -0,0 +1,63 @@ +#version 100 +precision highp float; +precision highp int; + +uniform sampler2D u_color; + +varying vec2 v_uv; + +#define NUM_GBUFFERS 4 +uniform mat4 u_prevPM; +uniform mat4 u_invMat; +uniform sampler2D u_depth; +uniform sampler2D u_worldPos; + + +const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); + +void main() { + vec4 color = texture2D(u_color, v_uv); + vec2 texCoords = v_uv; + float depth = texture2D(u_depth, v_uv).x; + vec4 gb0 = texture2D(u_worldPos, v_uv); + vec3 pos = gb0.xyz; // / gb0.w; + vec4 H = vec4(v_uv.x*2.0 - 1.0, (v_uv.y) * 2.0 - 1.0, depth, 1.0); + //vec4 D = u_invMat * H; + //D = D / D.w; + vec4 currentPos = H; + + vec4 prevPos = (u_prevPM) * vec4(pos, 1.0); + prevPos /= prevPos.w; + + vec2 velocity = (currentPos.xy - prevPos.xy) / 2.0; + + texCoords += velocity; + + for (int i = 0; i < 5; i++) { + vec4 currentCol = texture2D(u_color, texCoords); + color += currentCol; + texCoords += velocity; + } + + gl_FragColor = (color / 5.0); + + /* + vec4 prevPos = u_prevPM * vec4(pos, 1.0); + prevPos.xyz /= prevPos.w; + prevPos.xy = prevPos.xy * 0.5 + 0.5; + + vec2 blurVec = prevPos.xy - v_uv; + + vec4 result = texture2D(u_color, v_uv); + for (int i = 1; i < 5; ++i) { + + vec2 offset = blurVec * (float(i) / float(5.0 - 1.0) - 0.5); + + + result += texture2D(u_color, v_uv + offset); + } + + result /= 5.0; + + gl_FragColor = result;*/ +} \ No newline at end of file diff --git a/glsl/post/one.frag.glsl b/glsl/post/one.frag.glsl index 94191cd..fb2b181 100644 --- a/glsl/post/one.frag.glsl +++ b/glsl/post/one.frag.glsl @@ -3,8 +3,9 @@ precision highp float; precision highp int; uniform sampler2D u_color; - +uniform float u_bloom; varying vec2 v_uv; +uniform float u_dir; const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); @@ -16,5 +17,6 @@ void main() { return; } + gl_FragColor = color; } diff --git a/glsl/post/toon.frag.glsl b/glsl/post/toon.frag.glsl new file mode 100644 index 0000000..d2d029c --- /dev/null +++ b/glsl/post/toon.frag.glsl @@ -0,0 +1,43 @@ +#version 100 +precision highp float; +precision highp int; + +uniform sampler2D u_color; +uniform float 
u_bloom; +varying vec2 v_uv; + +uniform vec2 u_resolution; + +const vec4 SKY_COLOR = vec4(0.01, 0.14, 0.42, 1.0); + +void main() { + vec4 color = texture2D(u_color, v_uv); + + if (color.a == 0.0) { + gl_FragColor = SKY_COLOR; + return; + } + + + vec4 neighbor1 = texture2D(u_color, vec2(v_uv.x, v_uv.y - 1.0 / u_resolution.y)); + vec4 neighbor2 = texture2D(u_color, vec2(v_uv.x - 1.0 / u_resolution.x, v_uv.y - 1.0 / u_resolution.y)); + vec4 neighbor3 = texture2D(u_color, vec2(v_uv.x + 1.0 / u_resolution.x, v_uv.y - 1.0 / u_resolution.y)); + vec4 neighbor4 = texture2D(u_color, vec2(v_uv.x, v_uv.y + 1.0 / u_resolution.y)); + vec4 neighbor5 = texture2D(u_color, vec2(v_uv.x - 1.0 / u_resolution.x, v_uv.y + 1.0 / u_resolution.y)); + vec4 neighbor6 = texture2D(u_color, vec2(v_uv.x + 1.0 / u_resolution.x, v_uv.y + 1.0 / u_resolution.y)); + + // Blend and perform edge ramping + vec4 neighbors1 = (neighbor1 + neighbor2 + neighbor3) - (neighbor4 + neighbor5 + neighbor6); + neighbors1 = vec4(vec3(max(max(neighbors1.x, neighbors1.y), neighbors1.z)), 1.0); + vec4 neighbors2 = -(neighbor1 + neighbor2 + neighbor3) + (neighbor4 + neighbor5 + neighbor6); + neighbors2 = vec4(vec3(max(max(neighbors2.x, neighbors2.y), neighbors2.z)), 1.0); + vec4 outline; + if(neighbors1.x > 0.2 || neighbors2.x > .2) { + outline = vec4(0.0); + } + else { + outline = vec4(vec3(1.0), 0.0); + } + + gl_FragColor = (outline) * color; +} \ No newline at end of file diff --git a/glsl/red.frag.glsl b/glsl/red.frag.glsl index f8ef1ec..a6188fe 100644 --- a/glsl/red.frag.glsl +++ b/glsl/red.frag.glsl @@ -3,5 +3,5 @@ precision highp float; precision highp int; void main() { - gl_FragColor = vec4(1, 0, 0, 1); + gl_FragColor = vec4(0.1, 0.1, 0.1, 1); } diff --git a/img/Bloom.png b/img/Bloom.png new file mode 100644 index 0000000..22cb021 Binary files /dev/null and b/img/Bloom.png differ diff --git a/img/FinalVideo.mov b/img/FinalVideo.mov new file mode 100644 index 0000000..6a53f70 Binary files /dev/null and b/img/FinalVideo.mov differ diff --git a/img/backwards_toon.png b/img/backwards_toon.png new file mode 100644 index 0000000..5806957 Binary files /dev/null and b/img/backwards_toon.png differ diff --git a/img/badScissor.png b/img/badScissor.png new file mode 100644 index 0000000..1a87ed5 Binary files /dev/null and b/img/badScissor.png differ diff --git a/img/diff_specExp.png b/img/diff_specExp.png new file mode 100644 index 0000000..d5039e6 Binary files /dev/null and b/img/diff_specExp.png differ diff --git a/img/final_noEffects.png b/img/final_noEffects.png new file mode 100644 index 0000000..95f0d9b Binary files /dev/null and b/img/final_noEffects.png differ diff --git a/img/goodScissor.png b/img/goodScissor.png new file mode 100644 index 0000000..ccf0d55 Binary files /dev/null and b/img/goodScissor.png differ diff --git a/img/graph.png b/img/graph.png new file mode 100644 index 0000000..2111aed Binary files /dev/null and b/img/graph.png differ diff --git a/img/light_badScissor.png b/img/light_badScissor.png new file mode 100644 index 0000000..6e2718a Binary files /dev/null and b/img/light_badScissor.png differ diff --git a/img/light_goodScissor.png b/img/light_goodScissor.png new file mode 100644 index 0000000..3435fc6 Binary files /dev/null and b/img/light_goodScissor.png differ diff --git a/img/motionBlur_trim.mov b/img/motionBlur_trim.mov new file mode 100644 index 0000000..b7b30e3 Binary files /dev/null and b/img/motionBlur_trim.mov differ diff --git a/img/motion_blur.gif b/img/motion_blur.gif new file mode 100644 index 
0000000..02013ff Binary files /dev/null and b/img/motion_blur.gif differ diff --git a/img/motion_blur.mov b/img/motion_blur.mov new file mode 100644 index 0000000..fb8ddd2 Binary files /dev/null and b/img/motion_blur.mov differ diff --git a/img/notEvenCloseMotion.png b/img/notEvenCloseMotion.png new file mode 100644 index 0000000..9071071 Binary files /dev/null and b/img/notEvenCloseMotion.png differ diff --git a/img/toon1.png b/img/toon1.png new file mode 100644 index 0000000..8c9cd0b Binary files /dev/null and b/img/toon1.png differ diff --git a/img/toon_noOutline.png b/img/toon_noOutline.png new file mode 100644 index 0000000..3f8d5ad Binary files /dev/null and b/img/toon_noOutline.png differ diff --git a/img/toon_withBox.png b/img/toon_withBox.png new file mode 100644 index 0000000..48d2465 Binary files /dev/null and b/img/toon_withBox.png differ diff --git a/index.html b/index.html index 4a0ec13..ed3c0e5 100644 --- a/index.html +++ b/index.html @@ -79,9 +79,18 @@ height: 100%; } + #msgbox { + font-family: sans-serif; + color: white; + height: 1.4em; + position: fixed; + bottom: 2em; + left: 0; + } +
DEBUG MODE! (Disable before measuring performance.) diff --git a/js/deferredRender.js b/js/deferredRender.js index 3d19a30..93494fb 100644 --- a/js/deferredRender.js +++ b/js/deferredRender.js @@ -1,6 +1,7 @@ (function() { 'use strict'; // deferredSetup.js must be loaded first + var prevMat; R.deferredRender = function(state) { if (!aborted && ( @@ -10,7 +11,10 @@ !R.prog_Ambient || !R.prog_BlinnPhong_PointLight || !R.prog_Debug || - !R.progPost1)) { + !R.progPost1 || + !R.progToon || + !R.progBloom || + !R.progMotion)) { console.log('waiting for programs to load...'); return; } @@ -28,12 +32,12 @@ // CHECKITOUT: START HERE! You can even uncomment this: //debugger; - { // TODO: this block should be removed after testing renderFullScreenQuad + /*{ // TODO: this block should be removed after testing renderFullScreenQuad gl.bindFramebuffer(gl.FRAMEBUFFER, null); // TODO: Implement/test renderFullScreenQuad first renderFullScreenQuad(R.progRed); return; - } + }*/ R.pass_copy.render(state); @@ -41,14 +45,33 @@ // Do a debug render instead of a regular render // Don't do any post-processing in debug mode R.pass_debug.render(state); - } else { + } + else if (cfg && cfg.bloom) { + R.pass_deferred.render(state); + //R.pass_post1.render(state); + R.pass_bloom.render(state); + R.pass_bloom2.render(state); + } + else if (cfg && cfg.toon) { + R.pass_deferred.render(state); + //R.pass_bloom1.render(state); + R.pass_toon.render(state); + + } + else if (cfg && cfg.motion) { + R.pass_deferred.render(state); + R.pass_motion.render(state); + } + else { // * Deferred pass and postprocessing pass(es) // TODO: uncomment these - //R.pass_deferred.render(state); - //R.pass_post1.render(state); + R.pass_deferred.render(state); + R.pass_post1.render(state); // OPTIONAL TODO: call more postprocessing passes, if any } + + }; /** @@ -57,22 +80,24 @@ R.pass_copy.render = function(state) { // * Bind the framebuffer R.pass_copy.fbo // TODO: ^ - + gl.bindFramebuffer(gl.FRAMEBUFFER, R.pass_copy.fbo); // * Clear screen using R.progClear - TODO: renderFullScreenQuad(R.progClear); + renderFullScreenQuad(R.progClear); // * Clear depth buffer to value 1.0 using gl.clearDepth and gl.clear // TODO: ^ + gl.clearDepth(1.0); // TODO: ^ - + gl.clear(gl.DEPTH_BUFFER_BIT); // * "Use" the program R.progCopy.prog // TODO: ^ + gl.useProgram(R.progCopy.prog); // TODO: Write glsl/copy.frag.glsl var m = state.cameraMat.elements; // * Upload the camera matrix m to the uniform R.progCopy.u_cameraMat // using gl.uniformMatrix4fv // TODO: ^ - + gl.uniformMatrix4fv(R.progCopy.u_cameraMat, gl.FALSE, m); // * Draw the scene drawScene(state); }; @@ -97,9 +122,19 @@ // * Tell shader which debug view to use bindTexturesForLightPass(R.prog_Debug); gl.uniform1i(R.prog_Debug.u_debug, cfg.debugView); - + if (prevMat !== undefined) { + gl.uniformMatrix4fv(R.prog_Debug.u_prevPM, gl.FALSE, prevMat.elements); + } + gl.uniform3f(R.prog_Debug.u_cameraPos, state.cameraPos[0], state.cameraPos[1], state.cameraPos[2]); // * Render a fullscreen quad to perform shading on + + + prevMat = state.cameraMat.clone(); + + + renderFullScreenQuad(R.prog_Debug); + }; /** @@ -117,20 +152,64 @@ // * _ADD_ together the result of each lighting pass // Enable blending and use gl.blendFunc to blend with: - // color = 1 * src_color + 1 * dst_color + gl.enable(gl.BLEND); + //color = 1 * src_color + 1 * dst_color // TODO: ^ - + gl.blendFunc(1.0, 1.0); // * Bind/setup the ambient pass, and render using fullscreen quad bindTexturesForLightPass(R.prog_Ambient); 
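+        // The ambient pass below writes a small base term (0.2 x the albedo); with additive
+        // blending set up above (src + dst), each per-light Blinn-Phong pass in the loop that
+        // follows adds its contribution on top, restricted to that light's scissor rectangle.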
renderFullScreenQuad(R.prog_Ambient); + gl.enable(gl.SCISSOR_TEST); // * Bind/setup the Blinn-Phong pass, and render using fullscreen quad bindTexturesForLightPass(R.prog_BlinnPhong_PointLight); // TODO: add a loop here, over the values in R.lights, which sets the // uniforms R.prog_BlinnPhong_PointLight.u_lightPos/Col/Rad etc., // then does renderFullScreenQuad(R.prog_BlinnPhong_PointLight). - + + for(var i = 0; i < R.lights.length; i++) { + + gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightCol, R.lights[i].col); + gl.uniform3fv(R.prog_BlinnPhong_PointLight.u_lightPos, R.lights[i].pos); + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_lightRad, R.lights[i].rad); + gl.uniform3f(R.prog_BlinnPhong_PointLight.u_cameraPos, state.cameraPos[0], state.cameraPos[1], state.cameraPos[2]); + if (cfg.toon) { + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_toon, 1.0); + //gl.unifrom1f(R.prog_BlinnPhong_PointLight.u_width, width); + //gl.uniform1f(R.prog_BlinnPhong_PointLight.u_height, height); + //console.log('toon is true'); + } + else { + gl.uniform1f(R.prog_BlinnPhong_PointLight.u_toon, 0.0); + //console.log('toon is false'); + } + var sci = getImprovedScissorForLight(state.viewMat, state.projMat, R.lights[i]); + var sc = getScissorForLight(state.viewMat, state.projMat, R.lights[i]) + if (sc && sci) { + + if (cfg.debugScissor) { + gl.scissor(sc[0], sc[1], sc[2], sc[3]); + //console.log('scissor is true'); + renderFullScreenQuad(R.progRed); + } + else if (cfg.debugImproveScissor) { + gl.scissor(sci[0], sci[1], sci[2], sci[3]); + renderFullScreenQuad(R.progRed); + } + else { + gl.scissor(sci[0], sci[1], sci[2], sci[3]); + //console.log('scissor is false'); + renderFullScreenQuad(R.prog_BlinnPhong_PointLight); + } + } + /*else { + renderFullScreenQuad(R.prog_BlinnPhong_PointLight); + }*/ + //renderFullScreenQuad(R.prog_BlinnPhong_PointLight); + } + + gl.disable(gl.SCISSOR_TEST); // TODO: In the lighting loop, use the scissor test optimization // Enable gl.SCISSOR_TEST, render all lights, then disable it. 
// @@ -175,15 +254,125 @@ // * Bind the deferred pass's color output as a texture input // Set gl.TEXTURE0 as the gl.activeTexture unit // TODO: ^ + gl.activeTexture(gl.TEXTURE0); // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); // Configure the R.progPost1.u_color uniform to point at texture unit 0 gl.uniform1i(R.progPost1.u_color, 0); - + if (cfg.bloom) { + gl.uniform1f(R.progPost1.u_bloom, 1.0); + gl.uniform1f(R.progPost1.u_dir, 1.0); + + + renderFullScreenQuad(R.progPost1); + + //gl.uniform1f(R.progPost1.u_bloom, 1.0); + + //gl.uniform1f(R.progPost1.u_bloom, 0.0); + //renderFullScreenQuad(R.progPost1); + } + else { + gl.uniform1f(R.progPost1.u_bloom, 0.0); + + } // * Render a fullscreen quad to perform shading on renderFullScreenQuad(R.progPost1); }; + R.pass_toon.render = function(state) { + gl.bindFramebuffer(gl.FRAMEBUFFER, null); + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); + + // * Bind the postprocessing shader program + gl.useProgram(R.progToon.prog); + gl.activeTexture(gl.TEXTURE0); + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit + // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1i(R.progToon.u_color, 0); + gl.uniform2f(R.progToon.u_resolution, state.width, state.height); + + renderFullScreenQuad(R.progToon); + } + + R.pass_bloom.render = function(state) { + //gl.bindFramebuffer(gl.FRAMEBUFFER, null); + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); + + // * Bind the postprocessing shader program + gl.useProgram(R.progBloom.prog); + gl.activeTexture(gl.TEXTURE0); + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit + // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1i(R.progBloom.u_color, 0); + gl.uniform2f(R.progBloom.u_resolution, state.width, state.height); + gl.uniform2f(R.progBloom.u_dir, 1.0, 0.0); + renderFullScreenQuad(R.progBloom); + } + + R.pass_bloom2.render = function(state) { + gl.bindFramebuffer(gl.FRAMEBUFFER, null); + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); + + // * Bind the postprocessing shader program + gl.useProgram(R.progBloom.prog); + gl.activeTexture(gl.TEXTURE0); + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit + // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1i(R.progBloom.u_color, 0); + gl.uniform2f(R.progBloom.u_resolution, state.width, state.height); + gl.uniform2f(R.progBloom.u_dir, 0.0, 1.0); + renderFullScreenQuad(R.progBloom); + } + + R.pass_motion.render = function(state) { + + gl.bindFramebuffer(gl.FRAMEBUFFER, null); + gl.clearDepth(1.0); + gl.clear(gl.DEPTH_BUFFER_BIT); + + // * Bind the postprocessing shader program + gl.useProgram(R.progMotion.prog); + gl.activeTexture(gl.TEXTURE0); + // Bind the TEXTURE_2D, R.pass_deferred.colorTex to the active texture unit + // TODO: ^ + gl.bindTexture(gl.TEXTURE_2D, R.pass_deferred.colorTex); + // Configure the R.progPost1.u_color uniform to point at texture unit 0 + gl.uniform1i(R.progMotion.u_color, 0); + gl.activeTexture(gl.TEXTURE1); + gl.bindTexture(gl.TEXTURE_2D, R.pass_copy.gbufs[0]); + gl.uniform1i(R.progMotion.u_worldPos, 1); + + gl.activeTexture(gl.TEXTURE2); + 
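+        // motion.frag.glsl reprojects each fragment: the depth buffer gives the current
+        // screen-space position, while the world-position g-buffer transformed by prevMat
+        // (last frame's camera matrix, uploaded below as u_prevPM) gives where the fragment
+        // was last frame; the difference is the per-pixel velocity the blur samples along.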
gl.bindTexture(gl.TEXTURE_2D, R.pass_copy.depthTex); + gl.uniform1i(R.progMotion.u_depth, 2); + + //gl.uniformMatrix4fv(R.progMotion.u_prevPM, gl.FALSE, state.projMat); + if (prevMat !== undefined) { + gl.uniformMatrix4fv(R.progMotion.u_prevPM, gl.FALSE, prevMat.elements); + } + var camMat = new THREE.Matrix4(); + camMat = camMat.getInverse(state.cameraMat); + + gl.uniform3f(R.progMotion.u_cameraPos, state.cameraPos[0], state.cameraPos[1], state.cameraPos[2]); + gl.uniformMatrix4fv(R.progMotion.u_invMat, gl.FALSE, camMat.elements); + + prevMat = state.cameraMat.clone(); + + renderFullScreenQuad(R.progMotion); + + + } + var renderFullScreenQuad = (function() { // The variables in this function are private to the implementation of // renderFullScreenQuad. They work like static local variables in C++. @@ -205,12 +394,14 @@ var init = function() { // Create a new buffer with gl.createBuffer, and save it as vbo. // TODO: ^ - + vbo = gl.createBuffer(); // Bind the VBO as the gl.ARRAY_BUFFER // TODO: ^ + gl.bindBuffer(gl.ARRAY_BUFFER, vbo); // Upload the positions array to the currently-bound array buffer // using gl.bufferData in static draw mode. // TODO: ^ + gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW); }; return function(prog) { @@ -224,15 +415,17 @@ // Bind the VBO as the gl.ARRAY_BUFFER // TODO: ^ + gl.bindBuffer(gl.ARRAY_BUFFER, vbo); // Enable the bound buffer as the vertex attrib array for // prog.a_position, using gl.enableVertexAttribArray // TODO: ^ + gl.enableVertexAttribArray(prog.a_position); // Use gl.vertexAttribPointer to tell WebGL the type/layout of the buffer // TODO: ^ - + gl.vertexAttribPointer(prog.a_position, 3, gl.FLOAT, gl.FALSE, 0 , 0); // Use gl.drawArrays (or gl.drawElements) to draw your quad. // TODO: ^ - + gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind the array buffer. gl.bindBuffer(gl.ARRAY_BUFFER, null); }; diff --git a/js/deferredSetup.js b/js/deferredSetup.js index ca6baec..99a5187 100644 --- a/js/deferredSetup.js +++ b/js/deferredSetup.js @@ -6,9 +6,13 @@ R.pass_debug = {}; R.pass_deferred = {}; R.pass_post1 = {}; + R.pass_toon = {}; + R.pass_bloom = {}; + R.pass_bloom2 = {}; + R.pass_motion = {}; R.lights = []; - R.NUM_GBUFFERS = 4; + R.NUM_GBUFFERS = 3; /** * Set up the deferred pipeline framebuffer objects and textures. @@ -21,8 +25,8 @@ }; // TODO: Edit if you want to change the light initial positions - R.light_min = [-6, 0, -14]; - R.light_max = [6, 18, 14]; + R.light_min = [-14, 0, -6]; + R.light_max = [14, 18, 6]; R.light_dt = -0.03; R.LIGHT_RADIUS = 4.0; R.NUM_LIGHTS = 20; // TODO: test with MORE lights! 
@@ -34,14 +38,14 @@ for (var i = 0; i < 3; i++) { var mn = R.light_min[i]; var mx = R.light_max[i]; - r = Math.random() * (mx - mn) + mn; + r[i] = Math.random() * (mx - mn) + mn; } return r; }; for (var i = 0; i < R.NUM_LIGHTS; i++) { R.lights.push({ - pos: [posfn(), posfn(), posfn()], + pos: posfn(), col: [ 1 + Math.random(), 1 + Math.random(), @@ -110,7 +114,7 @@ p.a_position = gl.getAttribLocation(prog, 'a_position'); p.a_normal = gl.getAttribLocation(prog, 'a_normal'); p.a_uv = gl.getAttribLocation(prog, 'a_uv'); - + p.u_material = gl.getUniformLocation(prog, 'u_material'); // Save the object into this variable for access later R.progCopy = p; }); @@ -137,21 +141,49 @@ p.u_lightPos = gl.getUniformLocation(p.prog, 'u_lightPos'); p.u_lightCol = gl.getUniformLocation(p.prog, 'u_lightCol'); p.u_lightRad = gl.getUniformLocation(p.prog, 'u_lightRad'); + p.u_cameraPos = gl.getUniformLocation(p.prog, 'u_cameraPos'); + p.u_toon = gl.getUniformLocation(p.prog, 'u_toon'); R.prog_BlinnPhong_PointLight = p; }); loadDeferredProgram('debug', function(p) { p.u_debug = gl.getUniformLocation(p.prog, 'u_debug'); + p.u_prevPM = gl.getUniformLocation(p.prog, 'u_prevPM'); + p.u_cameraPos = gl.getUniformLocation(p.prog, 'u_cameraPos'); // Save the object into this variable for access later R.prog_Debug = p; }); loadPostProgram('one', function(p) { p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_bloom = gl.getUniformLocation(p.prog, 'u_bloom'); + p.u_dir = gl.getUniformLocation(p.prog, 'u_dir'); // Save the object into this variable for access later R.progPost1 = p; }); + loadPostProgram('toon', function(p) { + p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_resolution = gl.getUniformLocation(p.prog, 'u_resolution'); + R.progToon = p; + }); + + loadPostProgram('bloom', function(p) { + p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_resolution = gl.getUniformLocation(p.prog, 'u_resolution'); + p.u_dir = gl.getUniformLocation(p.prog, 'u_dir'); + R.progBloom = p; + }); + + loadPostProgram('motion', function(p) { + p.u_color = gl.getUniformLocation(p.prog, 'u_color'); + p.u_prevPM = gl.getUniformLocation(p.prog,'u_prevPM'); + p.u_invMat = gl.getUniformLocation(p.prog, 'u_invMat'); + p.u_worldPos = gl.getUniformLocation(p.prog, 'u_worldPos'); + p.u_depth = gl.getUniformLocation(p.prog, 'u_depth'); + p.u_cameraPos = gl.getUniformLocation(p.prog, 'u_cameraPos'); + R.progMotion = p; + }) // TODO: If you add more passes, load and set up their shader programs. 
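+    // The toon, bloom, and motion post passes loaded above are toggled from the dat.GUI
+    // panel defined in ui.js (cfg.toon, cfg.bloom, cfg.motion) and selected per frame in
+    // R.deferredRender (deferredRender.js).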
}; diff --git a/js/framework.js b/js/framework.js index a0653a5..52649a2 100644 --- a/js/framework.js +++ b/js/framework.js @@ -17,16 +17,20 @@ var width, height; cameraMat: cameraMat, projMat: camera.projectionMatrix, viewMat: camera.matrixWorldInverse, - models: models + cameraPos: camera.position, + models: models, + width: width, + height: height }); }; var update = function() { controls.update(); + stats.end(); stats.begin(); render(); - gl.finish(); - stats.end(); + //gl.finish(); + //stats.end(); if (!aborted) { requestAnimationFrame(update); } @@ -88,7 +92,8 @@ var width, height; initExtensions(); stats = new Stats(); - stats.setMode(1); // 0: fps, 1: ms, 2: mb + stats.setMode(0); + //stats.setMode(1); // 0: fps, 1: ms, 2: mb stats.domElement.style.position = 'absolute'; stats.domElement.style.left = '0px'; stats.domElement.style.top = '0px'; @@ -133,6 +138,21 @@ var width, height; loadTexture('models/sponza/normal.png').then(function(tex) { m.normap = tex; }); + m.material = 100000.0; + models.push(m); + }); + }); + loadModel('models/cube.obj', function(o) { + scene.add(o); + uploadModel(o, function(m) { + // CHECKITOUT: load textures + loadTexture('models/cow_color.jpg').then(function(tex) { + m.colmap = tex; + }); + loadTexture('models/cow_norm.png').then(function(tex) { + m.normap = tex; + }); + m.material = 1.0; models.push(m); }); }); @@ -200,7 +220,7 @@ var width, height; elemCount: idx.length, position: gposition, normal: gnormal, - uv: guv + uv: guv, }; if (callback) { diff --git a/js/ui.js b/js/ui.js index 05c1852..cc4eb57 100644 --- a/js/ui.js +++ b/js/ui.js @@ -7,7 +7,10 @@ var cfg; // TODO: Define config fields and defaults here this.debugView = -1; this.debugScissor = false; - this.enableEffect0 = false; + this.debugImproveScissor = false; + this.toon = false; + this.bloom = false; + this.motion = false; }; var init = function() { @@ -19,15 +22,17 @@ var cfg; 'None': -1, '0 Depth': 0, '1 Position': 1, - '2 Geometry normal': 2, + '2 Surface normal': 2, '3 Color map': 3, - '4 Normal map': 4, - '5 Surface normal': 5 + '4 Velocity' : 4, }); gui.add(cfg, 'debugScissor'); + gui.add(cfg, 'debugImproveScissor'); var eff0 = gui.addFolder('EFFECT NAME HERE'); - eff0.add(cfg, 'enableEffect0'); + eff0.add(cfg, 'toon'); + eff0.add(cfg, 'bloom'); + eff0.add(cfg, 'motion'); // TODO: add more effects toggles and parameters here }; diff --git a/js/util.js b/js/util.js index 762fb4a..64c7dd1 100644 --- a/js/util.js +++ b/js/util.js @@ -83,7 +83,9 @@ window.loadModel = function(obj, callback) { var onProgress = function(xhr) { if (xhr.lengthComputable) { var percentComplete = xhr.loaded / xhr.total * 100; - console.log(Math.round(percentComplete, 2) + '% downloaded'); + var msg = obj + ': ' + Math.round(percentComplete, 2) + '% loaded'; + console.log(msg); + $('#msgbox').text(msg); } }; @@ -108,6 +110,9 @@ window.readyModelForDraw = function(prog, m) { gl.bindTexture(gl.TEXTURE_2D, m.normap); gl.uniform1i(prog.u_normap, 1); } + if (m.material !== undefined) { + gl.uniform1f(prog.u_material, m.material); + } gl.enableVertexAttribArray(prog.a_position); gl.bindBuffer(gl.ARRAY_BUFFER, m.position); @@ -179,6 +184,82 @@ window.getScissorForLight = (function() { }; })(); +window.getImprovedScissorForLight = (function() { + // Pre-allocate for performance - avoids additional allocation + var a = new THREE.Vector4(0, 0, 0, 0); + var b = new THREE.Vector4(0, 0, 0, 0); + var c = new THREE.Vector4(0, 0, 0, 0); + var d = new THREE.Vector4(0, 0, 0, 0); + var minpt = new THREE.Vector2(0, 0); + var 
minpt2 = new THREE.Vector2(0, 0); + var maxpt = new THREE.Vector2(0, 0); + var maxpt2 = new THREE.Vector2(0, 0); + var ret = [0, 0, 0, 0]; + + return function(view, proj, l) { + // front bottom-left corner of sphere's bounding cube + a.fromArray(l.pos); + a.w = 1; + a.applyMatrix4(view); + a.x -= l.rad; + a.y -= l.rad; + a.z -= l.rad; + a.applyMatrix4(proj); + a.divideScalar(a.w); + + // front upper-right corner of sphere's bounding cube + b.fromArray(l.pos); + b.w = 1; + b.applyMatrix4(view); + b.x += l.rad; + b.y += l.rad; + b.z += l.rad; + b.applyMatrix4(proj); + b.divideScalar(b.w); + + //back bottom-left corner of sphere's bounding cube + /*c.fromArray(l.pos); + c.w = 1; + c.x -= l.rad; + c.y -= l.rad; + c.z -= l.rad; + c.applyMatrix4(proj); + c.divideScalar(c.w); + + //back upper-right corner of sphere's bounding cube + d.fromArray(l.pos); + d.w = 1; + d.applyMatrix4(view); + d.x += l.rad; + d.y += l.rad; + d.z -= l.rad; + d.applyMatrix4(proj); + d.divideScalar(d.w);*/ + + minpt.set(Math.max(-1, a.x), Math.max(-1, a.y)); + //minpt2.set(Math.max(-1, c.x), Math.max(-1, c.y)); + maxpt.set(Math.min( 1, b.x), Math.min( 1, b.y)); + //maxpt2.set(Math.min(1, d.x), Math.min(1, d.y)); + + //minpt.set(Math.min(a.x, c.x), Math.min(a.y, c.y)); + //maxpt.set(Math.max(b.x, d.x), Math.max(b.y, d.y)); + + if (maxpt.x < -1 || 1 < minpt.x || + maxpt.y < -1 || 1 < minpt.y) { + return null; + } + + minpt.addScalar(1.0); minpt.multiplyScalar(0.5); + maxpt.addScalar(1.0); maxpt.multiplyScalar(0.5); + + ret[0] = Math.round(width * minpt.x); + ret[1] = Math.round(height * minpt.y); + ret[2] = Math.round(width * (maxpt.x - minpt.x)); + ret[3] = Math.round(height * (maxpt.y - minpt.y)); + return ret; + }; +})(); + window.abortIfFramebufferIncomplete = function(fbo) { gl.bindFramebuffer(gl.FRAMEBUFFER, fbo); var fbstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER); diff --git a/lib/three.js b/lib/three.js index fea1dc1..dd98045 100644 --- a/lib/three.js +++ b/lib/three.js @@ -11743,6 +11743,7 @@ THREE.Camera = function () { this.matrixWorldInverse = new THREE.Matrix4(); this.projectionMatrix = new THREE.Matrix4(); + this.prevProjMatrix = new THREE.Matrix4(); }; @@ -11793,7 +11794,7 @@ THREE.Camera.prototype.copy = function ( source ) { this.matrixWorldInverse.copy( source.matrixWorldInverse ); this.projectionMatrix.copy( source.projectionMatrix ); - + this.prevProjMatrix.copy(source.prevProjMatrix); return this; }; @@ -11915,7 +11916,7 @@ THREE.OrthographicCamera.prototype = Object.create( THREE.Camera.prototype ); THREE.OrthographicCamera.prototype.constructor = THREE.OrthographicCamera; THREE.OrthographicCamera.prototype.updateProjectionMatrix = function () { - + this.prevProjMatrix = this.projectionMatrix; var dx = ( this.right - this.left ) / ( 2 * this.zoom ); var dy = ( this.top - this.bottom ) / ( 2 * this.zoom ); var cx = ( this.right + this.left ) / 2; @@ -12058,7 +12059,7 @@ THREE.PerspectiveCamera.prototype.updateProjectionMatrix = function () { var fov = THREE.Math.radToDeg( 2 * Math.atan( Math.tan( THREE.Math.degToRad( this.fov ) * 0.5 ) / this.zoom ) ); if ( this.fullWidth ) { - + this.prevProjMatrix = this.projectionMatrix; var aspect = this.fullWidth / this.fullHeight; var top = Math.tan( THREE.Math.degToRad( fov * 0.5 ) ) * this.near; var bottom = - top; @@ -12077,7 +12078,7 @@ THREE.PerspectiveCamera.prototype.updateProjectionMatrix = function () { ); } else { - + this.prevProjMatrix = this.projectionMatrix; this.projectionMatrix.makePerspective( fov, this.aspect, this.near, this.far ); } diff 
--git a/models/cow.jpg b/models/cow.jpg new file mode 100644 index 0000000..f6a23aa Binary files /dev/null and b/models/cow.jpg differ diff --git a/models/cow_color.jpg b/models/cow_color.jpg new file mode 100644 index 0000000..90cee2c Binary files /dev/null and b/models/cow_color.jpg differ diff --git a/models/cow_norm.png b/models/cow_norm.png new file mode 100644 index 0000000..50e44a2 Binary files /dev/null and b/models/cow_norm.png differ diff --git a/models/green.jpg b/models/green.jpg new file mode 100644 index 0000000..1d8cbda Binary files /dev/null and b/models/green.jpg differ