Discussion:
[Opensg-users] OpenSG2: Shading problems
Johannes Brunen
2014-05-30 10:31:20 UTC
Permalink
Hello,



after syncing with the current OpenSG 2 master trunk
(51f04ac30d8481dcb2eb1ddb318bff97450d1956) I have rendering problems
with shaders (AMD Radeon HD 5700 series).



I have uploaded a zip file to

http://wikisend.com/download/565136/Error.zip

which contains some png images, the CMakeCache.txt file and some log
files created during my test.





Basically, I have used the following relevant CMake options for my VC 9
SP1 Win64 setup:



OSG_ENABLE_MULTISHADER_VARCHUNK:BOOL=ON

OSG_ENABLE_NEW_GEOHANDLER:BOOL=ON

OSG_ENABLE_OGL2_SHADERFUNCTIONS:BOOL=ON

OSG_ENABLE_OGL3_PROTOS:BOOL=OFF

OSG_ENABLE_OGL4_PROTOS:BOOL=OFF

OSG_ENABLE_OGLES_PROTOS:BOOL=OFF

OSG_ENABLE_OGL_COREONLY:BOOL=OFF

OSG_ENABLE_OGL_ES2:BOOL=OFF

OSG_ENABLE_OGL_ES2_SIMGL3:BOOL=OFF

OSG_ENABLE_OGL_VERTEXATTRIB_FUNCS:BOOL=ON





Ok, now let me describe my problem; I will refer to the PNG images. For
all of the following I use one directional light source.



11_NoShader.png

==============

I start my application with shader support disabled and create some
geometry (a sphere and a cylinder).



12_Shader.png

============

I enable the usage of shaders. I have applied a simple Phong shader with
the ambient light removed. Everything is correct here.



13_Shader.png

============

Now I create new additional geometry the same way as before. The
shading is completely lost. However, I verified that the shader itself
is still working properly.



14_NoShader.png

==============

Now I disable the shader usage. Note the increase in brightness due to
the additional ambient contribution. But the last added geometry still
does not render correctly!



15_NoShader.png

==============

Creating new geometry now works as expected. Rendering of the first and
third geometry is correct; the second geometry is incorrect.



16_Shader.png

============

Shader activated again: first and third geometry correct, second
geometry incorrect.



17_Shader.png

============

New geometry, which renders incorrectly.



Etc...



In order to get a little more information, I modified my Phong shader
so that the diffuse contribution equals the normal direction, i.e. I
used



Diffuse = (n + vec3(1,1,1)) * 0.5; // += Kd * n_dot_l;



The result of this can be seen in the second group of images:



21_NoShader.png

==============

Start of the app with shader support disabled; created some geometry.



22_Shader.png

============

Shader usage activated. The geometry shows the coloring of the normals in eye space.



23_Shader.png

============

New geometry created. All normals are identical.



I do not have any idea what is going wrong here and really need some
help.



Alternatively, I tried to compile the OpenSG code with



OSG_ENABLE_NEW_GEOHANDLER:BOOL=OFF

OSG_ENABLE_OGL_VERTEXATTRIB_FUNCS:BOOL=OFF



but this resulted in other (unsolved) problems, as I have reported in



http://sourceforge.net/p/opensg/mailman/message/32363721/



Besides, that message does not show up in



www.mail-archive.com/opensg-users



Maybe this is the reason I got no response from the community.





Any help is very much appreciated here. I need to solve these two
problems.



Best,

Johannes
Carsten Neumann
2014-05-31 08:03:38 UTC
Permalink
Hello Johannes,
Post by Johannes Brunen
after sync with the current OpenSG 2 master trunk
(51f04ac30d8481dcb2eb1ddb318bff97450d1956) I have rendering problems
(AMD Radeon HD 5700 series) with shaders.
does that mean it is limited to AMD cards and does not happen on Nvidia?
Also, what version were you on before, where this worked?
Post by Johannes Brunen
I have uploaded a zip file to
http://wikisend.com/download/565136/Error.zip
which contains some png images, the CMakeCache.txt file and some log
files created during my test.
Thanks. Looking through the logs I noticed there are a number of windows
(i.e. OpenGL contexts) being created, and all but the first appear to
have all extensions set to "ignore". What type of windows are these
(WIN32Window, PassiveWindow, something else?) and does your code place
any extensions on the ignore list?
Post by Johannes Brunen
Any help is very much appreciated here. I need to solve these two
problems.
I understand this is frustrating for you (doubly so if this is an OpenSG
bug), but I'm also not sure how to go about debugging this here. Perhaps
as a first step it would be useful to know which aspect of the OpenGL
state is incorrect, i.e. whether this is related to shaders/uniform
variables or whether the geometry setup is incorrect. Can you run under
an OpenGL debugger [1] that allows you to look at the state, so we can
narrow the problem area down a bit?

Cheers,
Carsten


[1] gDEBugger http://www.gremedy.com/download.php
apitrace http://apitrace.github.io/
Johannes
2014-06-03 15:30:23 UTC
Permalink
Hello Carsten,
Post by Carsten Neumann
Hello Johannes,
Post by Johannes Brunen
after sync with the current OpenSG 2 master trunk
(51f04ac30d8481dcb2eb1ddb318bff97450d1956) I have rendering problems
(AMD Radeon HD 5700 series) with shaders.
does that mean it is limited to AMD cards and does not happen on Nvidia?
Also, what version where you on before where this worked?
I have the problems with the AMD cards only.

Sorry, but I have to correct myself: the problems I have reported in
this thread are also present in my currently working release branch
(fc7990001035efeeae4458cb582e1a2abf6579a9), in which I use the
following flags

OSG_ENABLE_OGL_VERTEXATTRIB_FUNCS=OFF
OSG_ENABLE_NEW_GEOHANDLER=OFF

If I change these options to ON, I also see the problems there. I
overlooked that when I wrote the report.

However, as I have reported in the other thread
(http://sourceforge.net/p/opensg/mailman/message/32363721/), with the
above settings (OFF, OFF) my last OpenSG checkout
(533aa767fd59a0db446b6639fd9cccbdea577865) does not work on the AMD
hardware. This may be a regression from the
(fc7990001035efeeae4458cb582e1a2abf6579a9) checkout. At the moment this
is the more serious problem, because it stops me from synchronizing.
But let us come back to this in the other thread; otherwise this will
get completely confusing :-)

So here we have:
1. OSG_ENABLE_OGL_VERTEXATTRIB_FUNCS=ON
OSG_ENABLE_NEW_GEOHANDLER=ON

2. Only AMD

3. fc7990001035efeeae4458cb582e1a2abf6579a9
533aa767fd59a0db446b6639fd9cccbdea577865
Post by Carsten Neumann
Thanks. Looking through the logs I noticed there are a number of Windows
(i.e. OpenGL contexts) being created and all but the first appear to
have all extensions set to "ignore". What type of windows are these
(WIN32Window, PassiveWindow, else?) and does your code place any
extension on the ignore list?
I have a number of windows and contexts around, but I haven't placed
anything on an ignore list. Actually, I do not know how to do that at
all.
Post by Carsten Neumann
I understand this is frustrating for you (doubly so if this is an OpenSG
bug), but I'm also not sure how to go about debugging this here.
I would not say that I'm currently frustrated, but the AMD cards do
cause much more work than the NVidia ones. But probably it is a misuse
or misinterpretation of the specs.
Post by Carsten Neumann
Perhaps
as a first step it would be useful to know what aspect of the OpenGL
state is incorrect, i.e. is this related to shaders/uniform variables or
is the geometry setup incorrect.
My geometry setup is pretty simple. I have uploaded three .osb files in
the error.zip file, which is available at

http://www.filedropper.com/error
Post by Carsten Neumann
Can you run under an OpenGL debugger
[1] that allows to look at the state so we can narrow the problem area
down a bit?
I will try to get more information...

Best,
Johannes
Gerrit Voß
2014-06-03 11:44:47 UTC
Permalink
Hi,

could you send me an osg file dump of the scene graph for the 1_x
sequence of images? I'll try to recreate the problems, but it might be
easier having a reference to see the actual setup.

kind regards
gerrit
Johannes
2014-06-03 15:32:39 UTC
Permalink
Hello Gerrit,
Post by Gerrit Voß
could you send me an osg file dump of the scene graph for the 1_x sequence
of images. I'll try to recreate the problems, but it might be easier having a
reference to see the actual setup.
I have uploaded the files to

http://www.filedropper.com/error

See my reply to Carsten in the very same thread.


Best,
Johannes
Gerrit Voß
2014-06-04 09:36:36 UTC
Permalink
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
could you send me an osg file dump of the scene graph for the 1_x sequence
of images. I'll try to recreate the problems, but it might be easier having a
reference to see the actual setup.
I have uploaded the files to
http://www.filedropper.com/error
thanks, with shaders I'm seeing the same flat appearance on ATI
cards and correct shading on NVidia cards.

ok, beware, we are entering the murky waters of vertex attribute
aliasing, which IMHO causes the main problems between NVidia and ATI
hardware/drivers in your case, and in general. As far as I understand
it, when using vertex attributes with shaders in any kind of
compatibility profile (I'm counting #version 120 as one), only
gl_Vertex/location 0 is guaranteed to alias correctly. All the others
might (NVidia) or might not (ATI) alias, where "should not" is the
behavior the spec prescribes. Also, in compat mode you can't bind
anything explicitly to location=0 in your shaders, so here comes the
'ugly' mix.

Could you try the following vertex shader for your setup? It makes use
of gl_Vertex being aliased, while the normal is explicitly specified.

//#version 150 compatibility
//#extension GL_ARB_explicit_attrib_location : enable
// or
#version 330 compatibility

out vec3 vTranfNormal;
out vec4 vPosition;

layout(location = 2) in vec3 glNormal;

//
// Compute the normal
//
vec3 fnormal(void)
{
    vec3 normal = gl_NormalMatrix * glNormal;
    normal = normalize(normal);
    return normal;
}

void main()
{
    gl_Position   = ftransform();
    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
    vTranfNormal  = fnormal();
    vPosition     = gl_Vertex;
}


Please try this one and let me know if it solves your display
problems. If you need more attributes (colors/tex coords) you have
to provide the corresponding layout statements in your shader.


Coming back to another point in the thread:

OSG_ENABLE_NEW_GEOHANDLER=ON

I finally fixed the OSB loader problems (a while ago), so this option
will disappear rather soon and the =ON case will become the only
available option.

kind regards
gerrit
Johannes
2014-06-04 12:45:53 UTC
Permalink
Hello Gerrit,
Post by Gerrit Voß
thanks, with shaders I'm seeing a the same flat appearance on ATI
cards and correct shading on NVidia card.
This is good news, because then it seems to be a general problem :-)
Post by Gerrit Voß
ok, beware, we are entering the murky waters of vertex attribute
aliasing. Which IMHO causes the main problems between NVidia and ATI
hardware/drivers in your case, and in general. As far as I understand
it, when using vertex attributes with shaders in an any kind of
compatibility profile (I'm counting #version 120 as one) only
gl_Vertex/location 0 is guaranteed to alias correctly. All the others
might (NVidia) or might not (ATI), were should not is correct according
to the spec. Also, in compat mode you can't bind anything explicitly to
location=0 in your shaders, so here comes the 'ugly' mix.
Uh, Oh, Ahh...
Post by Gerrit Voß
Could you try the following vertex shader for your setup, which
makes use of gl_Vertex being aliased and the normal being explicit
specified.
I have uploaded result images to:

http://www.filedropper.com/error_1

It definitely has had some impact. But as you can see from the images,
the behavior is now somehow reversed. After activating the shader the
geometry is flat shaded until the shader is deactivated again. Any
newly created geometry is shaded correctly by the shader. If, however,
I switch off the shader again, this geometry becomes flat shaded :-(
Post by Gerrit Voß
OSG_ENABLE_NEW_GEOHANDLER=ON
I finally fixed to OSB loader problems (a while ago), so this option
will disappear rather soon and the =ON case will become the only
available option.
It is good that the space of possible options is being reduced a
little bit :-).

Best,
Johannes
Johannes
2014-07-09 15:20:50 UTC
Permalink
Hello Gerrit,

I definitely need some help with this error. The rendering is still
incorrect on my AMD/ATI graphics adapter.

I have synchronized with the latest git master and enabled the following
switches:

OSG_ENABLE_MULTISHADER_VARCHUNK:BOOL=ON
OSG_ENABLE_OGL2_SHADERFUNCTIONS:BOOL=ON
OSG_ENABLE_OGL_VERTEXATTRIB_FUNCS:BOOL=ON
OSG_ENABLE_NEW_GEOHANDLER:BOOL=ON
OSG_ENABLE_OGL_COREONLY:BOOL=OFF
OSG_ENABLE_OGL_ES2:BOOL=OFF
OSG_ENABLE_OGL_ES2_SIMGL3:BOOL=OFF
OSG_ENABLE_OGL3_PROTOS:BOOL=OFF
OSG_ENABLE_OGL4_PROTOS:BOOL=OFF
OSG_ENABLE_OGLES_PROTOS:BOOL=OFF

I have changed my shader code as follows:

1. Vertex shader
----------------

#version 330 compatibility

varying vec3 vTranfNormal;
varying vec4 vPosition;

layout(location = 2) in vec3 N;

vec3 fnormal(void)
{
    vec3 normal = gl_NormalMatrix * N;
    normal = normalize(normal);
    return normal;
}

void main()
{
    gl_Position   = ftransform();
    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
    vTranfNormal  = fnormal();
    vPosition     = gl_Vertex;
}

2. Fragment shader
------------------

#version 330 compatibility

....rest unchanged.


After starting my application I create some geometry as follows:

MaterialGroupUnrecPtr matCore =
    graphic::createMaterialCore(_spSceneManager, *dspm);

ChunkMaterialUnrecPtr mat = createChunkMaterial(rSDM,
    spSceneManager->rImageData());

ShaderProgramVariableChunkUnrecPtr variableChunk =
    ShaderProgramVariableChunk::create();
mat->addChunk(variableChunk);

variableChunk->addUniformVariable("uIsTwoSidedLighting", false);
...

mat->addChunk(spSceneManager->getMaterialShader(PHONG_MATERIAL_SHADER));

MaterialGroupUnrecPtr mgrp = MaterialGroup::create();
mgrp->setMaterial(mat);

geomPartNode->setCore(matCore);

I.e. nothing fancy, just a chunk material with a shader chunk, a
variable chunk, and some 'classic' GL state chunks.

Ok, rendering is now correct. However, if I switch to classic
non-shader-based rendering by subtracting the shader chunk from my
material, the surface normals are lost, leading to flat shading. But
if I create new geometry without the shader chunk, it is rendered
correctly. Again, if I switch the render mode by adding the shader
chunk to the two materials, the second geometry is only flat shaded
and the first one is correct again.

So switching by adding/subtracting the shader chunk does not work on
the AMD/ATI platform.

Do you have any idea what I can do/check/change/look for in order to get
my setup running again?


Best,
Johannes
Johannes
2014-07-10 09:22:29 UTC
Permalink
Hello Gerrit,

I have created a minimalistic example which shows exactly this behavior
on my AMD Radeon HD 5700 series platform.

Best,
Johannes
Gerrit Voß
2014-07-10 13:40:51 UTC
Permalink
Hello,
Post by Johannes
Hello Gerrit,
I have created a minimalistic example which shows exactly the behavior on my AMD HD Radeon 5700 series platform
thanks, I will have a look at it as soon as I get back to the office.
I'm only a few hours away from a 12-hour plane ride ;) Monday at the
latest I'm back in the office.

kind regards
gerrit
Gerrit Voß
2014-07-14 12:11:09 UTC
Permalink
Hello,
Post by Johannes Brunen
Hello,
Post by Johannes
Hello Gerrit,
I have created a minimalistic example which shows exactly the behavior on my AMD HD Radeon 5700 series platform
thanks, I will have a look at it, as soon as I get back to office. I'm only
a few hours away from a 12 hour plane ride ;) Latest Monday I'm back in
office.
ok, I have an idea what goes wrong. I'll try to figure out a fix.
Currently I'm not sure whether an automatic solution can be found.
Would it be a problem for you to explicitly mark whether a shader
needs vertex attributes (e.g. contains a layout(location = ...)
statement) or not?

kind regards
gerrit
Johannes
2014-07-15 07:37:26 UTC
Permalink
Hello Gerrit,
Post by Gerrit Voß
ok, I have an idea what goes wrong. I'll try to figure out a fix.
Currently I'm not sure if an automatic solution can be found. Would
it be a problem for you to explicitly mark the shader if it needs
vertex attributes (e.g. contains a layout(location = ...)) statement
or not ?
So that means I have to figure out at run time which shader code I
have to use, which implies that I have to change the shader code of
loaded (perhaps foreign) models on the fly?
So, I'm not a big fan of such a solution :-)

Can you elaborate what goes wrong?

Best,
Johannes
Gerrit Voß
2014-07-15 08:40:40 UTC
Permalink
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
ok, I have an idea what goes wrong. I'll try to figure out a fix.
Currently I'm not sure if an automatic solution can be found. Would
it be a problem for you to explicitly mark the shader if it needs
vertex attributes (e.g. contains a layout(location = ...)) statement
or not ?
So that means, I do have to figure out at run time, which shader code I
have to use, which implies that I have to change the shader code of
loaded (perhaps foreign) models on the fly?
So, I'm not a big fan of such a solution :-)
Can you elaborate what goes wrong?
the old problem: on non-NVidia hardware you have to know how the GPU
accesses the vertex attributes (no shader / shader / shader with
attribs) in order to submit them correctly, as there is no aliasing
except for attribute 0.

Currently I check the active attribute names, and if any of them does
not start with "gl_" I assume that custom attributes are used. OpenGL
forbids the use of the gl_ prefix for user-defined names, so that
should be fine (both the ATI and NVidia compilers complain if you do
use it).
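As a plain sketch (this is not the actual OpenSG code; usesCustomAttributes is a hypothetical helper), the heuristic described above amounts to scanning the active attribute names of the linked program (in real code they would come from glGetActiveAttrib) and flagging the program as soon as one name lacks the reserved "gl_" prefix:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical helper, illustrating the heuristic only: treat the program
// as using custom vertex attributes as soon as one active attribute name
// does not carry the reserved "gl_" prefix.
bool usesCustomAttributes(const std::vector<std::string>& activeAttribs)
{
    for (const std::string& name : activeAttribs)
    {
        if (name.compare(0, 3, "gl_") != 0)
            return true;   // e.g. declared via "layout(location = 2) in vec3 N;"
    }
    return false;          // only built-ins like gl_Vertex/gl_Normal
}
```

With such a check the renderer can decide per shader whether to submit properties as generic vertex attributes or through the classic fixed-function entry points.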

That solution seems to work for now; I will commit it soon. With it
your example works correctly.

kind regards
gerrit
Gerrit Voß
2014-07-15 10:33:01 UTC
Permalink
Hello,
Post by Johannes Brunen
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
ok, I have an idea what goes wrong. I'll try to figure out a fix.
Currently I'm not sure if an automatic solution can be found. Would
it be a problem for you to explicitly mark the shader if it needs
vertex attributes (e.g. contains a layout(location = ...)) statement
or not ?
So that means, I do have to figure out at run time, which shader code I
have to use, which implies that I have to change the shader code of
loaded (perhaps foreign) models on the fly?
So, I'm not a big fan of such a solution :-)
Can you elaborate what goes wrong?
the old problem, on non NVidia you have to know how the the GPU accesses
the vertex attributes (no shader/shader | shader with attribs) in order
to submit them correctly as there is no aliasing except for attribute
0.
Currently I check the active attribute names and if any of them does not
start with "gl_" assume that custom attributes are used. OpenGL forbids
the use of gl_ so that should be fine (both the ATI/NVidia compilers
complain if you do)
That solution seem to work for now, I will commit it soon. With it
your example works correctly.
I pushed the commit; could you try it and let me know if it works?

kind regards
gerrit
Johannes
2014-07-16 10:05:19 UTC
Permalink
Hello Gerrit,
Post by Gerrit Voß
I pushed the commit, could you try and let me know if it works.
Good and bad news...

The rendering now works with and without shaders on all of my tested
graphics platforms with reasonably current drivers installed:

ATI Radeon 5700 series
NVidia GTX 560
Intel HD Graphics 4000

It works, however, only for newly created models. My stored scenes do
not work anymore; no graphics are visible at all.


With respect to the Intel HD Graphics 4000 platform I have another
problem: I'm facing a stray OpenGL error which I'm unable to track
down. In my debug sessions this error showed up in Windows::doFrameExit
due to the error checking loop. Unfortunately, the error flag
(GL_INVALID_OPERATION) is not cleared by the glGetError call, and the
application is trapped in this loop.

I have tried hard to find the cause of the error but have not succeeded
so far. I do know that this loop implementation is in accordance with
the GL spec, but it would nevertheless be nice if I could disable the
check for the debug session.
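For illustration only (this is not the OpenSG implementation): the spec-conforming loop calls glGetError until it returns GL_NO_ERROR, so a driver that never clears the flag traps it forever. A defensive variant with an iteration cap might look like the following sketch, where fakeGetError stands in for the misbehaving driver:

```cpp
#include <cassert>
#include <cstdio>

// Minimal stand-ins so the sketch is self-contained (no GL headers).
typedef unsigned int GLenum;
const GLenum GL_NO_ERROR          = 0;
const GLenum GL_INVALID_OPERATION = 0x0502;

// Models the misbehaving driver: the error flag is never cleared.
GLenum fakeGetError() { return GL_INVALID_OPERATION; }

// Defensive frame-exit check: drain the error queue, but bail out after a
// fixed number of iterations so a flag that is never cleared cannot trap
// the application in an endless loop.
int drainGLErrors(GLenum (*getError)(), int maxIterations = 8)
{
    int count = 0;
    GLenum err;
    while ((err = getError()) != GL_NO_ERROR && count < maxIterations)
    {
        std::fprintf(stderr, "GL error 0x%04x\n", err);
        ++count;
    }
    return count;  // number of errors reported before giving up
}
```

A real fix would additionally skip the loop entirely when no GL context is current, which is the condition Gerrit suspects below.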

Best,
Johannes
Gerrit Voß
2014-07-16 11:00:06 UTC
Permalink
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
I pushed the commit, could you try and let me know if it works.
Good and bad news...
The rendering does now work with and without shader on all of my tested
ATI Radeon 5700 series
NVidia GTX 560
Intel HD Graphics 4000
It works, however, only for newly created models. My stored scenes do
not work anymore. No graphic is visible at all.
hmm, as always fascinating ;) Would you have a small example file?
(off-list is fine)
Post by Johannes
With respect to the Intel HD Graphics 4000 platform I have another
problem. I'm facing a stray OpenGL error which I'm unable to track down.
In my debug sessions this error showed up in the Windows::doFrameExit
due to the error checking loop. Unfortunately, the error flag
(GL_INVALID_OPERATION) is not cleared by the glGetError call and the
application is trapped in this loop.
I have tried hard to find the cause for the error but have not succeeded
so far. I do know that this loop implementation is in accordance with
the GL specs but it would nevertheless be fine if I could disable the
check for the debug session.
IIRC that only happens if there is no active context. I'll add that
check. Alternatively the fastest solution would be to add an environment
variable to suppress this check at that point in time.

kind regards
gerrit
Johannes
2014-07-16 11:49:33 UTC
Permalink
Hello,
Post by Gerrit Voß
Post by Johannes
It works, however, only for newly created models. My stored scenes do
not work anymore. No graphic is visible at all.
hmm, as always fascinating ;) Would you have a small example file
(off-list is fine).
Sorry for that. I have uploaded two files to filedropper:

http://www.filedropper.com/error_3

The first one (error_1.osb), saved with a former OpenSG version, does
not load anymore. The second one (error_2.osb), saved in the same
session, has been transformed somehow and (surprisingly) loads
correctly.

I used the 10loading example, but had to disable the default GraphOp
in order to read the model.

I hope that you can load the model, because it does contain special
attachments which we use in our application.
Post by Gerrit Voß
IIRC that only happens if there is no active context. I'll add that
check. Alternatively the fastest solution would be to add an environment
variable to suppress this check at that point in time.
Both are fine for me :-)

Best,
Johannes
Gerrit Voß
2014-07-17 14:55:43 UTC
Permalink
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
I pushed the commit, could you try and let me know if it works.
Good and bad news...
The rendering does now work with and without shader on all of my tested
ATI Radeon 5700 series
NVidia GTX 560
Intel HD Graphics 4000
It works, however, only for newly created models. My stored scenes do
not work anymore. No graphic is visible at all.
ok, what happened (for your example files) is that while the geometry
is tagged as useVAO, individual properties, in particular the vertex
position properties, have their useVBO flag set to false. Hence
nothing is rendered. OpenSG is not very graceful about this right now;
I'm thinking about how to make it so.
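A toy model of the mismatch, using made-up types rather than the real OpenSG classes, shows why nothing is drawn and what a "graceful" repair on load could do, namely forcing useVBO on for every property of a VAO-tagged geometry:

```cpp
#include <cassert>
#include <vector>

// Toy stand-ins (not OpenSG types): a geometry saved by an older version
// can be tagged useVAO while individual vertex properties still have
// useVBO == false, so the VAO records no buffer for those attributes and
// nothing is drawn.
struct Property { bool useVBO; };

struct Geometry
{
    bool useVAO;
    std::vector<Property> properties;
};

// One possible repair at load time: if the geometry wants a VAO, force
// every property to use a VBO so all attribute bindings are valid.
// Returns the number of properties that had to be adjusted.
int repairForVAO(Geometry& geo)
{
    if (!geo.useVAO)
        return 0;

    int fixed = 0;
    for (Property& p : geo.properties)
    {
        if (!p.useVBO) { p.useVBO = true; ++fixed; }
    }
    return fixed;
}
```

Whether OpenSG repairs the flags this way or instead falls back to non-VAO rendering is a design choice; the sketch only illustrates the inconsistency Gerrit describes.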
Post by Johannes
With respect to the Intel HD Graphics 4000 platform I have another
problem. I'm facing a stray OpenGL error which I'm unable to track down.
In my debug sessions this error showed up in the Windows::doFrameExit
due to the error checking loop. Unfortunately, the error flag
(GL_INVALID_OPERATION) is not cleared by the glGetError call and the
application is trapped in this loop.
I have tried hard to find the cause for the error but have not succeeded
so far. I do know that this loop implementation is in accordance with
the GL specs but it would nevertheless be fine if I could disable the
check for the debug session.
ok, this was the easy part. I added both: a check for a valid context
before calling glGetError, and an environment variable
OSG_NO_GL_CHECK_ON_FRAMEEXIT which, when set, makes it skip the test
as well.
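The environment-variable opt-out can be sketched like this (a guess at the semantics: only the variable's presence matters; the real OpenSG check may differ):

```cpp
#include <cassert>
#include <cstdlib>

// Sketch: the frame-exit GL error check stays active unless the
// OSG_NO_GL_CHECK_ON_FRAMEEXIT variable is present in the environment.
bool glCheckOnFrameExitEnabled()
{
    return std::getenv("OSG_NO_GL_CHECK_ON_FRAMEEXIT") == nullptr;
}
```

Reading the variable once at startup instead of per frame would avoid repeated getenv calls, but per-call lookup keeps the sketch trivial.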

kind regards
gerrit
Gerrit Voß
2014-07-21 10:34:20 UTC
Permalink
Hello,
Post by Johannes Brunen
Hello,
Post by Johannes
Hello Gerrit,
Post by Gerrit Voß
I pushed the commit, could you try and let me know if it works.
Good and bad news...
The rendering does now work with and without shader on all of my tested
ATI Radeon 5700 series
NVidia GTX 560
Intel HD Graphics 4000
It works, however, only for newly created models. My stored scenes do
not work anymore. No graphic is visible at all.
ok, what happened (for your example files) is that while the geometry is
tagged as useVAO, individual properties, in particular the vertex
position properties have their useVBO flag set to false. Hence nothing
is rendered. OpenSG is not very graceful right now. I'm thinking about
how to make it so.
ok, I made OpenSG a little more graceful and pushed the changes. With
them, both of your files load and render (correctly for me, at least).

kind regards
gerrit
Johannes
2014-07-25 09:49:49 UTC
Permalink
Hello Gerrit,

finally, I can give you some feedback on the changes you have provided
to the code base. I have tested the changes on my three graphics
platforms, namely

ATI Radeon 5700 series
NVidia GTX 560
Intel HD Graphics 4000

and they all work fine so far. However, in the debug case I still have
to set the OSG_NO_GL_CHECK_ON_FRAMEEXIT environment variable for the
Intel platform in order not to get trapped in the infinite loop.

I would like to thank you for the considerable effort you have invested
to solve this particular shading problem.

Additionally, it is good to see that the number of OpenSG flags has
also been reduced a little.

Best,
Johannes
