c++,arrays,opengl,glsl,uniform
GLSL follows the C rule here: array sizes must be known at compile time. GLSL has a preprocessor, but there is no way to supply preprocessor values other than as source code. So you'll need to generate the suitable #define statements as strings and supply them to glShaderSource prior to supplying...
There are some reasons why the gl_PerVertex structure must be redeclared in your shader sources. For example, when using separable shader programs (GL_ARB_separate_shader_objects), you must redeclare the gl_PerVertex blocks (and they must match for all shaders attached to a pipeline). I am unsure why in this case you would be...
opengl,glsl,fragment-shader,vertex-shader
That normal map is in tangent-space, but you are treating it as object-space. You need a bitangent and/or tangent vector per-vertex in addition to your normal in order to form the basis to perform transformation into and out of tangent-space. This matrix is often referred to as simply TBN. You...
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader: out vec4 out_col; out vec2 passed_uv; out vec4 out_vert; out vec4 out_norm; and this in the fragment shader: in vec4 in_col; in...
For the first question, you need only one instance of the texture data. After you have uploaded it to VRAM, it can be deallocated from the application's memory to save space. From the OpenGL perspective, you can simply bind the texture once before you start drawing and then draw every object...
The resolution variable is only used to get a valid uv mapping. Generally, I would recommend adding texture coordinates (uv mapping) to your Square. You will not have to use textures, only texture coordinates. In this case your fragment shader would be: uniform float...
opengl,directx,glsl,shader,hlsl
According to the spec, values are always copied. For in parameters, they are copied at call time; for out parameters, at return time; and for inout parameters, at both call and return time. In the language of the spec (GLSL 4.50, section 6.1.1 "Function Calling Conventions"): All arguments are evaluated...
Instead of generating the quad CPU side I would attach a geometry shader and create the quad there, that should free up the slot for your model-geometry to be passed in. Geometry shader: layout(points) in; layout(triangle_strip, max_vertices = 4) out; out vec2 texcoord; void main() { gl_Position = vec4( 1.0,...
try this:

void main() {
    vec4 texread = texture2D(diffuse, texCoord0);
    vec3 normal = normalize(normal0);
    vec3 material_kd = vec3(1.0, 1.0, 1.0);
    vec3 material_ks = vec3(1.0, 1.0, 1.0);
    vec3 material_ka = vec3(0.2, 0.2, 0.2);
    vec3 material_ke = vec3(0.0, 0.0, 0.0);
    float material_shininess = 60.0; // float literal; plain 60 fails on strict compilers
    vec3 lightpos = vec3(0.0, 10.0, 5.0);
    vec3 lightcolor = vec3(1.0, 1.0, 1.0);
    vec3 lightdir = normalize(lightpos - worldPosition);
    float shade...
opengl-es,glsl,opengl-es-2.0,glsles
There's nothing wrong with the picture... OpenGL is drawing exactly what you asked for, which is a brightness related to the distance from an edge. The distance is not going to be a smooth function, so your eye sees edges where the derivative of the distance function suddenly changes. You...
android,opengl-es,glsl,opengl-es-2.0,glsles
No, you cannot store per-vertex data in a uniform. A uniform is designed to be per draw call, and a vertex attribute is designed to be per vertex. The only way you could use your vertex position with a uniform would be for drawing a point, since...
I keep three matrices in my OpenGL applications: the Model, View and Projection matrices. All of them are sent to the shader as uniforms, and the shader does this: gl_Position = mProj * mView * mModel * vec4(position, 1.0); That is, the position is by default...
Why did you add the f suffix when the compiler told you that it is not supported? These suffixes actually do not mean anything in GLSL, and literals are single precision by default anyway - there is the lf suffix for double precision. It seems like there is some...
c++,glsl,fragment-shader,vertex-shader
Ok, so this appears to be a wrong assumption I made. The fragment shader also needs the flat keyword. (I thought the vertex shader flat attribute will make the uint flat for next level) So, this as fragment shader works: flat in uint frag_MeshID; (with vertex shader: flat out uint...
compression,glsl,bit-shift,glsles,bit-packing
Well, bit shifting can be represented by multiplication (for left shifts) or division (for right shifts) by powers of two. You just have to take into account that the floats will store the fractional parts which would normally be shifted "out" in normal integer bit shifts. So to pack 4 normalized...
opengl,glsl,shader,hlsl,normals
It's pretty simple really: just rotate the normals in the map with the rotation of the map after generating them normally. You don't even need to regenerate them strictly speaking; just adjust your shader.
opengl,glsl,lighting,normals,deferred-rendering
The way you calculate p, p1 and p2 via viewSpacePositionFromDepth is certainly wrong. That function uses the same texcoordFrag for all three points, just with a different depth, so all three points will lie on a line. The cross product in getNormal should just yield 0 for any pixel -...
c++,opengl,opengl-es,textures,glsl
ETC2 and ETC formats are not commonly used by desktop applications. As such, they might not be natively supported by the desktop GPU and/or its driver. However, they are required for GLES 3.0 compatibility, so if your desktop OpenGL driver reports GL_ARB_ES3_compatibility, then it must also support the ETC2 format....
The View Matrix does not need to be transposed (it's in column-major order) whereas the Projection matrix is in row-major order and does need to be transposed into GL's column-major order. You can use the appropriate transpose flag as mentioned in the other answers. The reason you are getting these...
Well, the situation is very clear. You already gave the answer yourself. Shouldn't the location of the second vector be (location = 1)? Yes. Or less specific: it should be something else than 0. Attribute locations must be unique in a single program, for obvious reasons. The code you copied...
c++,opengl,glsl,ssao,deferred-shading
The issue appeared to be linked to the chosen texture types. The texture with handle viewPosTexture needed to be explicitly defined with a float texture format (GL_RGB16F or GL_RGBA32F) instead of just GL_RGB. Interestingly, the separate textures were drawn fine; the issues arose only in combination. // generate screen color...
Each location can hold 4 floats (a single vec4), so a valid option would also be: layout (location = 0) in vec4 vertex; layout (location = 1) in vec4 VertexColor; What dictates where each attribute comes from is the set of glVertexAttribPointer calls. These are the ones I would expect for...
The code for calculating the perspective/frustum matrix looks correct to me. This sets up a perspective matrix that assumes that your eye point is at the origin, and you're looking down the negative z-axis. The near and far values specify the range of distances along the negative z-axis that are...
When storing 0.000013 as a GL_HALF_FLOAT, you're dealing with denormalized numbers. The smallest normalized number that can be represented by a standard half float (IEEE 754-2008) is 2^-14, which is approximately 0.000061 - more than the value you are representing. The OpenGL spec leaves implementations some latitude on how to...
c++,opengl,glsl,shader,vertex-shader
As you indicate in your question, the primary issues here is that of execution time and memory. There are many ways in which rendering objects with skinning (skeletons) takes more of both: Extra vertex data. For the bone weights and indices. Generally these streams are (each) 4 bytes per vertex....
This kind of attribute assignment is Three.js specific. If your values always come in groups of three in your array, you could use the vec3 type. attributes = { epochTimes: { type: 'iv3', value: null } // 'iv3' specifies that the attribute type is ivec3, just as 'f' specifies that it would be float...
Because it needs to be a constant expression, which it is not in your case.
(As already pointed out in the comments): The GL will never interpolate integer types. To quote the GLSL spec (Version 4.5) section 4.3.4 "input variables": Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type must be qualified with the interpolation qualifier flat. This...
The part of your own answer that describes the root cause of the problem makes a lot of sense. The GL_DRAW_INDIRECT_BUFFER binding is indeed not part of the VAO state. This is confirmed by the spec. The corresponding state (DRAW_INDIRECT_BUFFER_BINDING) is listed in table 23.5 captioned "Vertex Array Data (not...
For the input format I'd use GL_RED, since the GL_LUMINANCE format has been deprecated. The internalFormat depends on what you want to do in your shader, although you should always specify a sized internal format, e.g. GL_RGBA8, which gives you 8 bits per channel. Although, with GL_RGBA8, the green, blue and alpha channels...
opengl,glsl,shader,trigonometry,double-precision
My current accurate shader implementation of 'acos()' is a mix of the usual Taylor series and the answer from Bence. With 40 iterations I get an accuracy of 4.44089e-16 compared to the 'acos()' implementation from math.h. Maybe it is not the best, but it works for me: And here it...
I think the problem is with your calculation of the screen coordinates, resulting in tessellation levels that are too small. The key part is this: position_screen[i] = ProjectionMatrix * ModelViewMatrix * gl_in[i].gl_Position; What you're calculating here are clip coordinates, not screen coordinates. To get screen coordinates from clip coordinates,...
After three days I finally figured it out. My only clue was that the lighting was somehow 'reversed' so I spent a lot of time changing signs for - to + and vice versa and changing various orders of multiplication. In the end the error was not in my shaders....
I've gotten is that this syntax is not supported by versions of GLSL earlier than 4.2, is this correct? Yes. layout(binding=...) was introduced in the GL_ARB_shading_language_420pack extension and is core since GL 4.2. If so, how do I rewrite this line to be compatible with GLSL 4.0? You simply...
image-processing,glsl,photoshop
Based on the image and description, it looks like this maps the original grayscale value of the image to the blue-green gradient. There are various ways of converting color to grayscale, depending on the color system used. Many of the simple one are just a weighted sum of the RGB...
You can use texture buffer objects (TBOs). Note that although they are exposed via the texture interface, the data access is totally different from textures, the data is directly fetched from the underlying buffer object, with no sampler overhead. Also note that the guaranteed minimum size for TBOs is only...
I solved it by using premultiplied alpha everywhere. Being used to how Photoshop works, it took me a while to grasp the concept. It felt a bit counterintuitive, but this explanation from Tom Forsyth helped me a lot. All my shaders now multiply the RGB values by its A: gl_FragColor...
So I fixed this now. The problem is that when sampling a texture in HLSL, the range can only go from 0 to 1, but the texture is saved in a range from -1 to 1. Somehow OpenGL works with this, but DirectX doesn't. To fix it, simply transform it...
glsl,lwjgl,slick2d,vertex-shader,blending
a) vertColor = vec4(0.5, 1.0, 1.0, 0.2); b) gl_FragColor = vertColor; the shader does exactly what you asked of it - it sets the color of all fragments to that color. If you want to blend colors, you should add/multiply them in the shader in some fashion (e.g. have a...
google-chrome,firefox,glsl,webgl,shader
I tested your minimal code with win7 Chrome and IE11 and win8 Chrome and IE11. And as you said, it is not working with win8 Chrome (but works with win7 Chrome). I did a few modifications to find out what's wrong. In both vertex and fragment shader I see uniform...
As genpfault said in the comment, only extensions that add features to the GLSL language need to be enabled manually in the shader with the #extension directive. Since GL_ARB_sparse_texture doesn't add GLSL functionality, you don't need to explicitly enable it in your shaders - checking support with glGetIntegerv is enough....
javascript,glsl,webgl,phaser-framework,pixi
I'm not too familiar with Phaser, but we can shed a little light on what that fragment shader is really doing. Load your jsFiddle and replace the GLSL main body with this: void main() { gl_FragColor = vec4(vTextureCoord.x * 2., vTextureCoord.y * 2., 1., 1.); gl_FragColor *= texture2D(uSampler, vTextureCoord) *...
This really is not that difficult if you know how all the built-ins work. fract (...) returns the fractional part of a floating-point number and the dot product is just the component-wise sum of products. The only unusual thing is the yzww swizzle, but that is easy to accomplish: //...
java,android,glsl,opengl-es-2.0,uniform
The reason this happens is that, as I said in my comment, an array of uniforms is reported as only one active uniform. https://www.opengl.org/sdk/docs/man/html/glGetActiveUniform.xhtml "Only one active uniform variable will be reported for a uniform array."...
c#,opengl,animation,glsl,skeletal-animation
So the answer is simple - don't multiply matrix x vector; multiply vector x matrix in the shader.
You may use textureLod, as BDL suggested, but I think that there is no real need for this. Once the required mipmap levels are generated, the right level will be selected automatically depending on the size of the active render buffer. The main idea of the article you have mentioned,...
Your varying float uv_depth logic is not necessary. If you understand the transformations (projection, perspective divide, viewport mapping) OpenGL uses to get to window-space (gl_FragCoord), you will find that you do not need to write uv_depth at all. What the line float depthNormalized = uv_depth * 0.5 + 0.5; in...
I'm not really sure what your issue is, but using GL_ELEMENT_ARRAY_BUFFER for your normals is not going to work. The glVertexAttribPointer() calls will always reference the currently bound GL_ARRAY_BUFFER, so you set up your normals to be identical to your positions. However, glNormalPointer() works the same way, so it is...
You can see the struct is just 4 GLuints one after the other; you will need to imitate that: ByteBuffer cmd = gl.glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0, NUM_DRAWS * 4 * 4, GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT); for (int i = 0; i < NUM_DRAWS; i++) { int first = object.get_sub_object_info_first(i % object.get_sub_object_count()); int count...
opengl,opengl-es,glsl,shader,glsles
So do I (generally speaking) benefit from rewriting the shaders in a more modern GLSL version? I'm not talking about optimizing the shader code itself for mobile/desktop, that's fairly obvious, I will do that of course. I'm talking about using different GLSL language versions on different platforms. Is there...
In your applyPointLight function, you're not using the diff and spec variables, which are presumably the light-dependent changes to diffuse and specular. See if the following works: vec3 diffuse = light.diffuse * surfaceColor * light.color * diff; vec3 specular = light.specular * surfaceSpecular * light.color * spec; ...
It is based on OpenGL ES 2.0, and according to here, it must support version 1.00. In fact that is all it supports. On another note, this has been my general reference for GLSL features: http://www.shaderific.com/glsl/...
You should use a GLfloat and not a glm::vec3, but here it is anyway. Note that the uniform name has to be built per index; the literal string "origins[i]" would never resolve to a location:

for (int i = 0; i != 10; i++) {
    std::string name = "origins[" + std::to_string(i) + "]";
    GLint originsLoc = glGetUniformLocation(programID, name.c_str());
    glUniform3f(originsLoc, origins[i].x, origins[i].y, origins[i].z);
}
Chapter 16 of GPU Gem 2 has nice explanation and illustration for achieving your goal in real time. Basically you need to perform ray casting through the atmosphere layer and evaluate the light scattering....
android,opengl-es,glsl,framebuffer
It's a multi-part solution: call GLES20.glDisable(GLES20.GL_DITHER), set the render buffer format to GLES30.GL_UNSIGNED_SHORT_5_5_5_1, and, as Jerem said, you may also be able to get even better results with alpha disabled - though experimentation on my selection of devices appeared to have no effect; possibly on other devices it would.
opengl,directx,glsl,shader,hlsl
The way you have implemented the subsurface scattering effect is very rough. It is hard to achieve a nice result using such a simple approach. Staying within the selected approach, I would recommend the following things: Take into account the distance to the light source according to the inverse square law. This applies...
This is your problem:

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(0));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(12));
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(24));

This uses the old and deprecated fixed-function style specification; instead you want to use glVertexAttribPointer:

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,...
opengl,attributes,glsl,vertex-shader,pixel-shader
There are two scenarios: If the attribute array is enabled (i.e. glEnableVertexAttribArray() was called for the attribute), but you didn't make a glVertexAttribPointer() call that specifies the data to be in a valid VBO, bad things can happen. I believe it can be implementation dependent what exactly the outcome is....
opengl,textures,glsl,framebuffer
I found the bug, it was unrelated to the shader. In my ShaderProgram class I use glGetActiveUniform to find information about the uniform variables. I used the uniform index as location later, but the location must be queried separately by calling glGetUniformLocation. The bug is described in this blog post....
opengl,glsl,light,deferred-rendering,deferred-shading
In deferred rendering you usually render each light separately and let the GPU blend the result together. This reduces shader complexity. Let's take a closer look at your lighting calculations. partialColor = vec4( mix( partialColor.rgb, tex, 0.5 ), partialColor.a ); Hm... What is this line supposed to do? At that...
c#,opengl,glsl,opentk,deferred-rendering
vec3 distance = toLight * (1.0 / LightRadius); float attenuation = clamp(1 - dot(distance, distance), 0, 1); attenuation = attenuation * attenuation; Using this "formula" it looks like it's working as it should....
opengl-es,glsl,webgl,shader,gpu
The scheduling unit of GPUs is the warp / wavefront. Usually that's consecutive groups of 32 or 64 threads. The execution time of a warp is the maximum of the execution time of all threads within that warp. So if your early exit can make a whole warp terminate sooner...
Many people go step further and create a Vertex POD object of type: struct Vertex{ vec4 position; vec4 normal; vec2 texture; } Then the stride is simply sizeof(Vertex), and the offsets can be extracted using a offsetof macro. This leads to a more robust setup when passing the data. ...
While we're on the topic of costs, you don't need two branches here. You can test the results of a component-wise test instead. So, this could be collapsed into a single test using any (greaterThan (A_min, B_max)). A good compiler will probably figure this out, but it helps if you...
WebGL shaders follow the GLSL ES 1.0.17 spec https://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf That's different from Desktop OpenGL in several ways. One: it's the 1.0 version of GLSL ES, whereas desktop GL is at version 4.2. One big difference between WebGL GLSL and many articles found about shaders on the internet is there's no...
opengl-es,textures,glsl,shader,hlsl
If you switch calls from textureLod to texture2D, you will lose control over which mip-level is being sampled. If the texture being sampled only has a single mip-level, then the two calls are equivalent, regardless of the lod parameter passed to textureLod, because there is only one level that could...
android,opengl-es,glsl,opengl-es-2.0,glsles
I do not know why your GLSL compiler is not returning a compiler info log on failure to compile this shader, but using a struct with an opaque type (e.g. sampler2D) is known to cause issues. Opaque types cannot be assigned values at shader runtime, so passing them as part...
In WebGL/GLES2: Yes, only constants are allowed. However, if your code can be unrolled (either by yourself or by the compiler), then it counts as a constant and you have a workaround. For example, the problem: uniform int i; ... int a[4]; a[2] = 42; // ✓ a constant index,...
opengl,glsl,shader,vertex-shader
What I typed up there is pseudocode, not the actual code. More explanation: let's say I have two textures (2 images bound as textures) overlapping each other. I want to display one texture with X+0.5 displacement while the other remains constant. The problem I am facing is distinguishing...
ios,objective-c,opengl-es,glsl,scenekit
the _light struct is only available at the SCNShaderModifierEntryPointLightingModel entry point. You can take a look at the header file, it's a bit more detailed than the SCNShadable protocol documentation....
Take the dot product of the vertex (after subtracting the origin) in the direction of cubeUp and then scale it and add to the vertex position. This will ensure that the vertices only move in the direction of cubeUp. Something like this: float scaleFactor = 1.5f; transformedPos.x -= cubeOrigin.x; transformedPos.y...
gl_TessCoord represents the same concept with all primitive topologies. This vector is the position within the current primitive, and it is probably best not to think of it as x, y and z. Since it is relative to a surface, u, v and w (optional) are more intuitive. You can...
A simple case of aliasing. Just like with polygon rendering, your fragment shader is run once per pixel. Colour is computed for a single central coordinate only and is not representative of the true colour. You could create a multisample FBO and enable super-sampling. But that's expensive. You could mathematically...
Blurred small-res textures don't seem blurred enough. I think there is a problem somewhere regarding the width of the filter (not the number of samples, but the distance between samples) or the framebuffer size. Let's say that you have a 150x150 original fbo and a 15x15 downscaled version for bloom. And that you use...
When you draw using a Tessellation Control Shader (this is an optional stage), you must use GL_PATCHES as the primitive type. Additionally, GL_PATCHES has no default number of vertices, you must set that: glPatchParameteri (GL_PATCH_VERTICES, 3); glDrawElements (GL_PATCHES, 3, ..., ...); The code listed above will draw a single triangle...
One important consideration that people often forget is that OpenGL is not designed to be a convenient interface for programmers. It is designed to provide an abstraction of graphics hardware. Of course not all GPUs have exactly the same features, but they mostly tend to be fairly similar. And manipulating...
At first glance, I'm not sure why glGetError is returning an error code. But to answer your specific question of 'How can I debug this error further?', I do have a suggestion. Change your draw code to this: // Logging errors before the call to glUseProgram int error; while ((error...
opengl,optimization,glsl,shader
The uvCoord is used in the fragment shader. No it is not. Your output is a constant color. As a result, the texture fetches before are all eliminated by a decent compiler, as is the texCoord varying. This ultimately results in the elimination of the input attribute, which simply...
If you can change to a simple pattern of 5x5, this will calculate it in Javascript: var x = ((25 - positionInTime) % 5) / 5.0; var y = Math.floor((25 - positionInTime) / 5) / 10.0; TextureCoord = vec2(x, y); If not, you can use: if (positionInTime > 22) {...
1. Question: Why is gl_Position a variable that has already been defined? This is because OpenGL/the rendering pipeline has to know which data should be used as the basis for rasterization and interpolation. Since there is always exactly one such variable, OpenGL has the predefined variable gl_Position for this. There are...
Needed to enable blending .. gl2.glEnable(GL.GL_BLEND); gl2.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA); ...
Uniforms: You should at least know what is in that uniform and whether that value can affect the result. The default value is zero, as stated here. Other than that, it can hold any value you stored there last time, and it depends only on your code whether it can affect the result...
As @derhass already pointed out in a comment, some of your terminology is mixed up. A VAO (Vertex Array Object) contains state that defines how vertex data is associated with vertex attributes. It's the VBO (Vertex Buffer Object) that contains the actual vertex data. I doubt that using a SSBO...
The question is quite broad. I'd split it up into separate components and get each working in turn. Hopefully this will help narrow down what those might be, unfortunately I can only offer the higher level discussion you aren't directly after. The wave simulation (geometry and animation): A procedural method...
The thing was that I forgot to enable the Nvidia video card in my laptop, and it was the Intel integrated graphics card, which doesn't support GL shading language version 4.0+, that kept working. Some small bugs were also found: ";" was missing in the line: "color = vec4(0.0,...
fwidth is not exactly generating the lines, mainly fract is. fwidth is only used to keep the line width constant in screen space. If you try to draw 1px lines only using fract, it will work. But if you want wide lines or antialiased lines, you'll need fwidth to make...
There's a few aspects here. If you already have the texture at the full size, I can't really think of a good reason to create a downscaled texture from it. If you really want to sample it at less than the full resolution, you can just use it in a...
ios,opengl-es,glsl,opengl-es-3.0
Similar to float, sampler3D does not have a default precision. Add this at the start of your fragment shader, where you also specify the default float precision: precision mediump sampler3D; Of course you can use lowp instead if that gives you sufficient precision. The only sampler types that have a...
glsl,webgl,normals,render-to-texture
After a brief look it seems like your problem is in storing x/y positions. gl_FragColor = vec4(vec3(p0*0.5+0.5), 1.0); You don't need to store them anyway, because the texel position implicitly gives the x/y value. Just change your normal points to something like this... vec3 p2 = vec3(1, 0, texture2D(rttTexture, vUV...
c++,opengl,glsl,fragment-shader,vertex-shader
The drawing area is -1,-1 to 1,1, so if the quad is larger than that, the entire screen is drawn. To account for the matrix stack (input of glRotate and friends) you need to multiply the position with gl_ModelViewMatrix in the vertex shader. However, this is not available in...
Neither Qt nor OpenGL gives access to the shader version (in the case of Qt, probably because it does not know it and does not need to). You are left with parsing the source code, however since #version needs to be on its own line, you should be able to...