The much better approach here is to not use a pointer at all. glm::vec3 is a fixed-size type that typically occupies 12 or 16 bytes, so there is absolutely no need for a separate dynamic allocation for it. So where you currently declare your class member as: glm::vec3 *position;...
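A minimal sketch of the value-member alternative (the class and member names here are placeholders, not from the original code):

#include <glm/glm.hpp>

class Entity {
public:
    void setPosition(const glm::vec3 &p) { position = p; }
    const glm::vec3 &getPosition() const { return position; }
private:
    glm::vec3 position{0.0f};  // stored by value; no new/delete needed
};

The member lives inside the object, so copying, moving and destruction all work without any manual memory management.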
fwidth is not what generates the lines; fract is. fwidth is only used to keep the line width constant in screen space. If you try to draw 1px lines using only fract, it will work. But if you want wide lines or antialiased lines, you'll need fwidth to make...
With glColorPointer(), you specify an array of colors, one for each vertex. For example, if your cube has 8 vertices, you need an array of 8 colors. Since you have 3 components per color, the total size of the array you need is 24 floats. Or, since you currently have...
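As a rough sketch of that layout (fixed-function client arrays assumed; the array contents are made up):

// 8 vertices * 3 components = 24 floats, one RGB colour per cube vertex
GLfloat cubeColors[24] = {
    1,0,0,  0,1,0,  0,0,1,  1,1,0,
    1,0,1,  0,1,1,  1,1,1,  0,0,0
};
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, 0, cubeColors);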
It is likely that this refers to the configure option that was set when they compiled Qt. This option is explained in detail here: https://blog.qt.io/blog/2014/11/27/qt-weekly-21-dynamic-opengl-implementation-loading-in-qt-5-4/ To summarise, Qt can be compiled to use either the desktop OpenGL (a direct interface to the OpenGL version provided by the graphics driver) or to use...
c,opengl,struct,code-separation
The reason for not getting the window opened is that one has to specify GLFW_CONTEXT_VERSION_MINOR in addition to the other window hints. This could be done, e.g., with: glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); ...
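For example, requesting a 3.3 core context would typically look like the sketch below (the exact version numbers depend on what you need):

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// Required on macOS when asking for a core profile context
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
GLFWwindow *window = glfwCreateWindow(800, 600, "Demo", nullptr, nullptr);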
BufferUtils returns a direct buffer, whereas the others might not. You can check whether the buffers returned by the wrap and allocate methods are direct using the isDirect() method. Wrapped buffers are non-direct....
No, I do not believe this is supported. You can't "merge" multiple textures into a single texture without any copying. I can think of a few options you have: The ideal solution is of course that you place the data into cube map faces in the first place, instead of...
Well, the situation is very clear. You already gave the answer yourself. Shouldn't the location of the second vector be (location = 1)? Yes. Or less specific: it should be something else than 0. Attribute locations must be unique in a single program, for obvious reasons. The code you copied...
Writing a loader for TGA is relatively straightforward, so for an exercise: go for it. PNG on the other hand is a different kind of beast. It has a gazillion features, supports multiple compression schemes and encodings, all of which you have to support to load PNG files generated by...
OpenGL 1.x and 2.x require at least 2 texture units. OpenGL 3.x and 4.x require at least 16. Most current GPUs have 32. You can find those values fairly easily in the OpenGL specification itself, in the "Implementation Dependent Values" table. This specific value is called MAX_TEXTURE_UNITS in 1.x and...
Reto Koradi already mentioned copy semantics. Another thing to keep in mind is that OpenGL allows context sharing, i.e. some objects are shared between OpenGL contexts, and deleting such an object in one context deletes it from all contexts. Objects that are shared across contexts include textures and buffer objects that are bound using glBindBuffer...
First, obvious improvements: pass the vector vertData to processVertex by (const) reference; reuse one vector where possible by clearing it; rewrite split to take a vector by reference as an argument and write into that vector instead of returning it. Second, forget about the first point and use a profiler. For example, gprof on Linux, or...
c++,opengl,textures,nvidia,framebuffer
I'm really out of ideas what I can do to make it work... Your OpenGL implementation is telling you that the configuration you chose is not supported. You have to accept that. The OpenGL specification does not require that particular combination of formats to be supported by all implementations, so...
c++,opengl,visualization,simulation,heatmap
It is definitely feasible, probably even if the calculations are done on the CPU. Ideally you should be using the GPU. The APIs needed are either OpenCL or, since you are rendering the results, you might want to make use of compute shaders. Both techniques allow you to write a...
Your texture is a height map. Not a normal map. You have two options: Replace the height map with a normal map (they are usually blueish). Or calculate the normal from the height map. This can be done like this: float heightLeft = textureOffset(tex_normal, fs_in.UV, ivec2(-1, 0)).r; float heightRight =...
My guess is that you have a memory corruption bug "somewhere" that is overwriting the meshNormals variable on the stack. The fact that swapping the meshNormals and meshVertices declarations leads to meshVertices becoming bad matches that theory. To narrow in on the problem you can do a few things: Comment...
objective-c,opengl,quartz-2d,syphon
The second-to-last parameter in glTexImage2D is: type Specifies the data type of the pixel data. The following symbolic values are accepted: GL_UNSIGNED_BYTE, GL_BYTE, GL_UNSIGNED_SHORT, GL_SHORT, GL_UNSIGNED_INT, GL_INT, GL_FLOAT, GL_UNSIGNED_BYTE_3_3_2, GL_UNSIGNED_BYTE_2_3_3_REV, GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_5_6_5_REV, GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT_4_4_4_4_REV, GL_UNSIGNED_SHORT_5_5_5_1, GL_UNSIGNED_SHORT_1_5_5_5_REV, GL_UNSIGNED_INT_8_8_8_8, GL_UNSIGNED_INT_8_8_8_8_REV, GL_UNSIGNED_INT_10_10_10_2, and...
Neither Qt nor OpenGL gives access to the shader version (in the case of Qt, probably because it does not know it and does not need to). You are left with parsing the source code; however, since #version needs to be on its own line, you should be able to...
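A rough sketch of such a scan, assuming the shader source is already in a std::string (not Qt-specific, and the function name is made up):

#include <sstream>
#include <string>

// Returns the whole directive, e.g. "#version 330 core",
// or an empty string if no #version line is found.
std::string findVersionDirective(const std::string &source) {
    std::istringstream stream(source);
    std::string line;
    while (std::getline(stream, line)) {
        auto pos = line.find("#version");
        if (pos != std::string::npos)
            return line.substr(pos);
    }
    return "";
}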
You're using a mix of C++ (std::string) and C (char*) strings in an invalid way. In the constructor, you're building up the code in a C++ string, which is an object that automatically allocates and re-allocates the memory to hold the string data as the string grows. It will also...
Stretching a texture over a rectangle is actually controlled by the texture coordinates. But if you want the texture to repeat, you have to set: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); ...
c++,osx,opengl,osx-yosemite,glew
The EXT variant will only be defined in glext.h or in the headers that come with, or are generated by, the various GL extension loaders. The actual GL_TEXTURE_BUFFER enum is defined in OpenGL/gl3.h. On OSX, modern GL is part of the OS, and you can directly link the modern GL functions. However,...
You can enable face culling just for some of the objects. Since OpenGL is a state machine, from the moment you call glEnable(GL_CULL_FACE) all objects are culled. When you disable it, nothing is culled anymore. So for your case you will have to do something similar to: glEnable(GL_CULL_FACE); //Draw...
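A sketch of that pattern (drawCulledObject/drawTwoSidedObject are placeholder names for your own draw calls):

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
drawCulledObject();      // back faces are discarded for this object

glDisable(GL_CULL_FACE);
drawTwoSidedObject();    // both faces are rendered for this one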
You're running in a VM. GPUs usually are not passed through to the VM, and all you get is a shim driver supporting only a lower OpenGL version, whose commands are passed through the VM to the host. Solution: run Linux natively on your box....
c++,opengl,geometry-instancing
To me, this seems to be an issue of how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes you set up with the normal glVertexAttribPointer() function work only for float-based attribute...
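A minimal sketch of setting up an integer attribute (attribute location 2, and the stride/offset values, are placeholders for your actual vertex layout):

glEnableVertexAttribArray(2);
// The "I" variant keeps the values as integers; note there is no "normalized" parameter.
glVertexAttribIPointer(2, 1, GL_INT, stride, reinterpret_cast<const void *>(offset));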
You're not reserving enough memory in the buffer: glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 1.0f, 0, GL_STREAM_READ); Since you're using GL_RGBA as the format, you will need 4 bytes per pixel, which also matches what you're using in your memcpy() call: memcpy(myBuffer, data, width * height * 4); So the glBufferData()...
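A sketch of the corrected allocation implied above (4 bytes per RGBA pixel; note the size argument is in bytes, so an integer expression is what you want):

glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);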
You need to call glfwMakeContextCurrent to bind the OpenGL context to your thread. There's a working example on the LWJGL website as well.
Done the way you describe it (checking whether each polygon is within the FOV), it will almost always be slower - the GPU can do it faster. But this idea can be improved by organizing the polygons in some clever data structure which can quickly cut out large numbers of polygons that...
c++,opengl,visual-studio-2013,release
Your problem lies here: file.seekg(0, file.end); GLint len = GLint(file.tellg()); file.seekg(0, file.beg); GLchar* buf = new GLchar[len]; file.read(buf, len); file.close(); This code reads exactly the length of the file and nothing more. And unfortunately the file size doesn't actually tell you how much there really is to read; if the file...
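One common way to sidestep this is to read the whole file into a std::string and let the stream decide how much was actually read; a rough sketch (not the original code, function name made up):

#include <fstream>
#include <iterator>
#include <string>

std::string loadFile(const char *path) {
    std::ifstream file(path, std::ios::binary);
    // Construct the string from the stream's contents; its size() is the byte count actually read.
    return std::string((std::istreambuf_iterator<char>(file)),
                        std::istreambuf_iterator<char>());
}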
c++,opengl,opengl-es,integer,shader
These integers are handles. This is a common idiom used by many APIs to hide resource access behind an opaque level of indirection. OpenGL is effectively preventing you from accessing what lies behind the handle without using the API calls. From Wikipedia: In computer programming, a handle is an abstract...
For the input format I'd use GL_RED, since the GL_LUMINANCE format has been deprecated. The internalFormat depends on what you want to do in your shader, although you should always specify a sized internal format, e.g. GL_RGBA8, which gives you 8 bits per channel. Although, with GL_RGBA8, the green, blue and alpha channels...
The question is quite broad. I'd split it up into separate components and get each working in turn. Hopefully this will help narrow down what those might be; unfortunately I can only offer the higher-level discussion you aren't directly after. The wave simulation (geometry and animation): A procedural method...
A simple case of aliasing. Just like with polygon rendering, your fragment shader is run once per pixel. Colour is computed for a single central coordinate only and is not representative of the true colour. You could create a multisample FBO and enable super-sampling. But that's expensive. You could mathematically...
c++,opengl,glsl,shader,vertex-shader
As you indicate in your question, the primary issues here is that of execution time and memory. There are many ways in which rendering objects with skinning (skeletons) takes more of both: Extra vertex data. For the bone weights and indices. Generally these streams are (each) 4 bytes per vertex....
xcode,performance,opengl,profiling
Yes, the Xcode "capture GPU frame" is only functional when running on iOS (at least in Xcode 6.3 - perhaps this will be enabled in a future version). You can use the OpenGL Profiler provided by Apple, which offers similar functionality.
1. Question: Why is gl_Position a variable that has already been defined? This is because OpenGL/the rendering pipeline has to know which data should be used as the basis for rasterization and interpolation. Since there is always exactly one such variable, OpenGL has the predefined variable gl_Position for this. There are...
That was actually caused by using the wrong graphics processor. My default graphics processor had been changed, so I just had to change it (by right-clicking on a program and selecting Run with graphics processor > Change default graphics processor).
These two methods do not return the same information: glGetString(GLEW_VERSION) returns the version of the GLEW library, not the OpenGL version in use. In contrast, glewIsSupported checks whether an OpenGL version is supported or not. In order to get the OpenGL version used in a context, one can use the glGetIntegerv...
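For instance, a sketch of that query (requires a current context of version 3.0 or higher):

GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
// Alternatively, glGetString(GL_VERSION) returns a version string on any context.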
multithreading,winapi,opengl,c++11
A Win32 window is bound to the thread that creates it. Only that thread can receive and dispatch messages for the window, and only that thread can destroy the window. So, if you re-create the window inside of your worker thread, then that thread must take over responsibility for managing...
The angle argument to glm::rotate() is in radians, not degrees: m: Input matrix multiplied by this rotation matrix. angle: Rotation angle expressed in radians. axis: Rotation axis, recommanded [sic] to be normalized. Use this: void Frame::setRotation(float degrees) { this->rotation = glm::radians( degrees ); this->calcTransform(); } ...
I don't think you can just feed a raw file to glTexImage2D, except if you store texture files in that format (which you probably don't). glTexImage2D expects a huge array of bytes (representing texel colors), but file formats typically don't store images like that. Even BMP has some header information in...
Much of this must have been explained before, but let me try and give an overview that will hopefully make it clearer how all the different pieces fit together. I'll start by explaining each piece separately, and then explain how they are connected. Texture Target This refers to the different...
c++,objective-c,osx,cocoa,opengl
I have managed to get it working by using code from Apple's GLEssentials sample: https://developer.apple.com/library/mac/samplecode/GLEssentials/Introduction/Intro.html After writing a little code to set up the NSOpenGLView (including setting the OpenGL version that the NSOpenGLView will run to 4.1), the triangles.cpp example from the Red Book was running perfectly inside the NSOpenGLView....
Enable textures with glEnable(GL_TEXTURE_2D); ...
Found the problem. The last argument of clSetKernelArg() requires a pointer to the mem object and I forgot to prepend the & operator. So this: cl_stat = clSetKernelArg(cl.kernel, 0, sizeof(cl_mem), (const void*)cl.buffer); becomes this: cl_stat = clSetKernelArg(cl.kernel, 0, sizeof(cl_mem), (const void*)&cl.buffer); Very simple....
c++,performance,opengl,opengl-es
glGetError() does not really have to wait for anything from the GPU. All the errors it reports are from checking arguments of API calls, as well as internal state managed by the driver. So CPU/GPU synchronization does not come into play here. The only error that may appear deferred is...
Your second option is close; you can get at the underlying array of the vector by calling .data(): myConstructor(int num) { std::vector<GLuint> textures(num); glGenTextures(num, textures.data()); } Assuming glGenTextures has a signature like void glGenTextures(int, GLuint*). I don't know much about this function, but be careful who owns that array....
opengl,opentk,raycasting,mouse-picking
You can apply the reverse transformation of each object to the ray instead.
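A rough glm-based sketch of that idea (the variable names are placeholders):

#include <glm/glm.hpp>

// Transform a world-space ray into an object's local space and test it there.
glm::mat4 invModel = glm::inverse(modelMatrix);
glm::vec3 localOrigin    = glm::vec3(invModel * glm::vec4(rayOrigin, 1.0f));
glm::vec3 localDirection = glm::normalize(glm::vec3(invModel * glm::vec4(rayDirection, 0.0f)));
// Now intersect (localOrigin, localDirection) against the object's untransformed geometry.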
I think the problem is with your calculation of the screen coordinates, resulting in tessellation levels that are too small. The key part is this: position_screen[i] = ProjectionMatrix * ModelViewMatrix * gl_in[i].gl_Position; What you're calculating here are clip coordinates, not screen coordinates. To get screen coordinates from clip coordinates,...
You did not initialize GLEW. Without doing that all the entry points provided by GLEW (which is everything beyond OpenGL-1.1) are left uninitialized and calling them crashes your program. Add if( GLEW_OK != glewInit() ) { return 1; } while( GL_NO_ERROR != glGetError() ); /* glewInit may cause some OpenGL...
It seems like you're having trouble understanding orthographic projections. There are several reasons why your scene doesn't look right in an orthographic view: The perspective projection gives the appropriate depth to the floor so it can be seen; objects look bigger when they're closer to you. But the orthographic projection...
The OpenCL interface library installed on your system may pull in a different libGL.so than the libGL.so that shall eventually be loaded by your program. For example if you've got installed the Mesa OpenCL implementation but are using the NVidia driver, then linking against Mesa's OpenCL may pull in Mesa's...
Which man page are you quoting? There are multiple man pages available, not all mapping to the same OpenGL version. Anyways, the idea behind the + 2 (border) is to have 2 multiplied by the value of border, which is in your case 0. So your code is just fine....
c++,opengl,transform,translation,distortion
Matching OpenGL, GLM stores matrices in column major order. The constructors also expect elements to be specified in the same order. However, your translation matrix is specified in row major order: glm::mat4 trans = glm::mat4( 1.0f, 0.0f, 0.0f, translation.x, 0.0f, 1.0f, 0.0f, translation.y, 0.0f, 0.0f, 1.0f, translation.z, 0.0f, 0.0f, 0.0f,...
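A sketch of the two usual fixes: either let glm build the matrix, or list the elements column by column so the translation ends up in the last column.

#include <glm/gtc/matrix_transform.hpp>

// Option 1: let glm build the translation matrix.
glm::mat4 trans = glm::translate(glm::mat4(1.0f), translation);

// Option 2: specify the 16 elements in column-major order;
// the translation occupies the last column (elements 12..14).
glm::mat4 trans2(1.0f, 0.0f, 0.0f, 0.0f,
                 0.0f, 1.0f, 0.0f, 0.0f,
                 0.0f, 0.0f, 1.0f, 0.0f,
                 translation.x, translation.y, translation.z, 1.0f);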
glUseShaderProgramEXT() is part of the EXT_separate_shader_objects extension. This extension was changed significantly in the version that gained ARB status as ARB_separate_shader_objects. The idea is still the same, but the API looks quite different. The extension spec comments on that: This extension builds on the proof-of-concept provided by EXT_separate_shader_objects which demonstrated...
Turns out I was using the Core OpenGL profile, which requires you to use Vertex Array Objects, which I didn't. Up until ~February, the graphics didn't mind, but after a certain driver update, it refused to render the object (Which I believe is the correct behaviour).
Those methods are part of the fixed-function pipeline of OpenGL ES 1. Support for OpenGL ES 1 has been removed since libGDX version 1.0.0. Only the programmable pipeline of OpenGL ES 2 and up is supported. If you really want to use such old methods then you could...
c++,opengl,mapping,textures,glut
You are calling LoadtankTexture() every time drawSelf() is called. LoadtankTexture() creates a new texture when it is called and fills it with the texture data. Calling LoadtankTexture() regularly is creating many copies of the texture in memory. You only need to create a texture once (unless you actively delete it...
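A minimal sketch of loading once and reusing the texture (this assumes LoadtankTexture() is changed to return the generated texture id; the member name is hypothetical):

// Member variable, not a local:
GLuint tankTexture = 0;

// Load once, e.g. in the constructor or an init function:
if (tankTexture == 0)
    tankTexture = LoadtankTexture();   // creates and fills the texture a single time

// In drawSelf(), only bind the already-created texture:
glBindTexture(GL_TEXTURE_2D, tankTexture);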
You can use floating-point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to target; especially older mobile devices lack support for these formats. Example for GL_RGBA16F (not tested): glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth,...
That game appears to be using an orthographic projection - notice how things farther away from the camera are not smaller. That is why the direction of the lines doesn't depend on the camera position. The left half of this picture shows a grid of crates rendered with a perspective...
c,opengl,vbo,vertex-buffer-objects
The first argument to glDrawArrays() here is invalid: glDrawArrays(GL_LINE, i, 2); GL_LINE is not a valid primitive type. You need to use GL_LINES: glDrawArrays(GL_LINES, i, 2); GL_LINE may look very similar, but it's one of the possible arguments for glPolygonMode(). You should always call glGetError() when your OpenGL call does...
Move the following lines outside the while loop. if((fp=freopen("PTRON", "w" ,stdout))==NULL) { printf("Cannot open file.\n"); exit(1); } I don't know what the behavior of freopen is when you use it multiple times in a program. Use: if((fp=freopen("PTRON", "w" ,stdout))==NULL) { printf("Cannot open file.\n"); exit(1); } while(i<p) { t=2*M_PI*((double)i/p+d); x=cos(t)*r; y=sin(t)*r;...
Have a look at glbinding. It includes pre-baked headers for every version and extension of OpenGL which I found most appealing.
Can we use wglMakeCurrent function in more than one thread to use the same opengl context, simultaneously? No: A rendering context can be current to only one thread at a time. You cannot make a rendering context current to multiple threads. I have to create one opengl context per...
GL_TEXTURE_2D needs to be glEnable()'d for rendering. Right now you're glDisable()ing it in initializeGL() and leaving it that way....
I didn't know of the existence of the QOpenGLFunctions and QOpenGLWidget classes, which I both extended into a new class of my own to acquire the newer functions I desired. I used to extend solely the QGLWidget, but by doing what I just mentioned above, I was able to maintain...
class,haskell,opengl,types,uniform
Try adding -XFlexibleContexts and the constraint, literally, to your existing answer: {-# LANGUAGE FlexibleContexts #-} mvpUnif :: ( GL.UniformComponent a , Num a , Epsilon a , Floating a , AsUniform a , AsUniform (V4 (V4 a)) ) => (GLState a) -> ShaderProgram -> IO () Usually this is the...
In OpenGL 3.2 and higher, there are texture sampler objects, which can override the sampling parameters in the textures themselves. You could use them here. It will be particularly convenient if you want all of your textures to use the same sampling parameters. You could then just create a single...
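A rough sketch of that approach (GL 3.3+; texture unit 0 and linear filtering are just example choices):

GLuint sampler = 0;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
// Overrides the sampling state of whatever texture is bound to unit 0.
glBindSampler(0, sampler);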
c++,user-interface,opengl,wxwidgets
The problem ended up being that the AUI manager which you attach to the main Frame DOES consume the wxEVT_PAINT event when it propagates, and it never reaches the Frame's class. The Frame's children however, do receive the events. Instead of just using Bind() from the Frame class, I called...
You can check out TWL's wiki here: "http://wiki.l33tlabs.org/bin/view/TWL/" it has some basic tutorials on how to use it, and here's a "Getting Started" page for niftyGUI: https://github.com/void256/nifty-gui/wiki/Getting-Started
Since you are using an integer format, you will have to use an isampler2D instead of your sampler. So for samplers, floating-point samplers begin with "sampler". Signed integer samplers begin with "isampler", and unsigned integer samplers begin with "usampler". If you attempt to read from a sampler where the texture's...
For the particular uniforms in question, you are using the wrong API call to set them. Those are vectors, rather than matrices. You would use glUniform4fv (...) rather than glUniformMatrix4fv (...). The latter function assumes that you are setting 1 or more 4x4 matrix uniforms (your passed parameters are telling...
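So the call would look something like the sketch below (the location and data names are placeholders):

GLfloat lightPos[4] = { 0.0f, 1.0f, 2.0f, 1.0f };
// count = 1 vec4 uniform, not a 4x4 matrix
glUniform4fv(location, 1, lightPos);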
You should create a direction vector. Every time you move your mouse, you should re-evaluate it according to the angle. The direction vector should be a vector on the unit circle. Suppose you have the direction vector direction = (0.4X, 0.6Z) (the numbers may not be realistic, but take them just as an example); then for moving forward...
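A small sketch of keeping such a unit-length direction and moving along it (the yaw angle is in radians; the variable names are made up for the example):

#include <cmath>

float yaw = 0.0f;                  // updated from mouse movement
float dirX = std::sin(yaw);        // point on the unit circle
float dirZ = std::cos(yaw);

// Moving forward along the current direction:
posX += dirX * speed;
posZ += dirZ * speed;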
To get full control over your fragment processing, the best approach is using the programmable pipeline, where you can implement exactly what you want with GLSL code. But there are some options that could work for this case in the fixed pipeline. The simplest one is using a different GL_TEXTURE_ENV_MODE....
Your C++ matrices are probably stored as row major under the hood. Which means that multiplying them from left to right is an "origin" transformation, while right to left would be a "local" incremental transformation. OpenGL however uses a column major ordering memory layout (The 13th, 14th and 15th elements...
c++,qt,opengl,transform-feedback
It turned out that I never actually extended my class to use QOpenGLFunctions_4_3_Core, and it was instead just QOpenGLFunctions. Changing it to the former solved the problem.
opengl,state-machines,render-to-texture
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)". The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0....
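Which also means the setting has to be applied (at least once) per FBO; a sketch:

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);   // stored as part of this FBO's state, not global state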
opengl,textures,framebuffer,blit,msaa
The filtering options only come into play when you sample from the textures. They play no role while you render to the texture. When sampling from multisample textures, GL_NEAREST is indeed the only supported filter option. You also need to use a special sampler type (sampler2DMS) in the GLSL code,...
If you create a compatibility profile context, it should support all legacy functionality. From the OpenGL 4.3 compatibility spec, section 1.2.4: Older generations of graphics hardware were not programmable using shaders, although they were configurable by setting state controlling specific details of their operation. The compatibility profile of OpenGL continues to...
Xlib is the traditional client side implementation of the X11 protocol. The modern (replacement for Xlib) client side implementation would be Xcb, however OpenGL/GLX are a bit of a hassle to use with Xcb. X11 has nothing to do with Microsoft Windows. Xlib/Xcb are essentially protocol implementations that turn function...
c++,c,opengl,graphics,deprecated
I'd like to extend on the article about deprecation in the OpenGL wiki which was given in the comments already. The current situation is that we can discern 3 "flavours" of OpenGL contexts on desktop platforms: "Legacy" GL. This means the GL from the old days, before there was any...
From the programmer's point of view OpenGL is the worst kind of short-term amnesiac imaginable. Once a drawing call returns, from the programmer's perspective OpenGL has already turned everything into coloured pixels and completely forgotten what it just did. So… Do I still have access to the drawing...
I came up with an interesting solution. Perhaps it was obvious, but I didn't see it at first. I basically created a single static variable of type QOpenGLFunctions_3_3_Core inside a dummy class, GL, and used that throughout the entire code whenever an OpenGL function was needed. For example class...
Ok, after playing around with GLSceneViewer I figured out how to do it: instead of drawing lines in the OnRender event of GLDirectOpenGL1, you should draw lines in the PostRender event of the relevant GLSceneViewer, so the code should look like this: procedure TForm1.GLSceneViewerL(Sender: TObject); var glc : TGLCanvas; begin glc:=TGLCanvas.Create(GLSceneViewerL.Width, GLSceneViewerL.Height); with...
In your applyPointLight function, you're not using the diff and spec variables, which are presumably the light-dependent changes to diffuse and specular. See if the following works: vec3 diffuse = light.diffuse * surfaceColor * light.color * diff; vec3 specular = light.specular * surfaceSpecular * light.color * spec; ...
opengl,rendering,deferred-rendering,ssao
I figured it out by debugging the code step by step. I calculate everything in world space. It's easier to handle everything there. I looked at a tutorial which uses view space and changed everything I needed to world space. The error is here: float rad = g_sample_rad / p.z;...
Do you think that updating the screen faster than the human eye can see is productive? If you really must have your engine 100% independent of the retrace, use a triple-buffer system: one buffer to display, and two buffers to update back and forth until the screen...
No (technically there could be a difference, since OpenGL does not impose any performance requirements on the function call). By the way, you should set the clear color before calling glClear()....
glDeleteRenderbuffersEXT is part of the extension GL_EXT_framebuffer_object. Retrieve the list of available extensions using glGetString(GL_EXTENSIONS) and check for the availability of GL_EXT_framebuffer_object.
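A compatibility-profile sketch of that check (on a core profile you would query GL_NUM_EXTENSIONS and iterate with glGetStringi instead):

#include <cstring>

const char *extensions = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
// Note: a plain strstr can also match a longer extension name that contains this one as a prefix.
bool hasFboExt = extensions && std::strstr(extensions, "GL_EXT_framebuffer_object") != nullptr;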
No. The whole idea of OpenGL is to provide an abstraction of GPU hardware that applies across vendors and architectures. Allowing direct access to VRAM would break this abstraction in a lot of ways: It assumes that the system has VRAM. It assumes that the texture is currently in VRAM....
c++,opengl,graphics,shader,shadow
This is intended. Since your shadow map covers a pyramid-like region in space, your spotlight's cone can be occluded by it. This happens because anything you render that is outside of the shadow camera's view is considered unlit. Therefore the shadow camera's view pyramid will be visible....