Discussion:
[Opensg-users] WG: OpenSG2: DepthPeeling example
Johannes Brunen
2017-03-08 06:36:25 UTC
Hello Carsten,

I'm playing around with the DepthPeelingStage and have written a simple example. For that I have taken the testDepthPeeling.cpp and merged it with the shaderstoragebufferobject_std430.cpp example. I have done this because I would like to have an example that uses transparent and opaque geometry and that uses the more elaborate shader code. The example is working somehow, but what I do not get right so far is that the transparent geometry is correctly blended with the opaque geometry.

In addition to the attached example I also added the following code lines:

geom1_state->setTransparencyMode(OSG::Material::TransparencyForceTransparent);
...
geom6_state->setTransparencyMode(OSG::Material::TransparencyForceTransparent);

This, however, also does not produce the desired correct rendering.

Is it even possible to mix opaque and transparent geometry with the DepthPeelingStage core or is the stage incomplete with respect to that task?
How do I have to handle the differing treatment of transparent and opaque geometry in a stage implementation at all?
How is blending of transparent and opaque rendering performed in the common case?

Could you please take a look into the example and give me some hint what I'm doing wrong here?

Additionally, I have tried to set up the HDR2 stage above the DepthPeeling stage in another example. With that I do not get any rendering. If I replace the HDR2 stage or the DepthPeeling stage by a simple Group core I get the expected rendering for the particular stage. Do you have any idea what is missing in that scenario where both stages are active?

Best,
Johannes

P.S. I have sent you a Cc because I expect the mailing list to have trouble with the example attachment; sorry for that.
Carsten Neumann
2017-03-08 16:02:30 UTC
Hello Johannes,
Post by Johannes Brunen
I'm playing around with the DepthPeelingStage and have written a simple
example. For that I have taken the testDepthPeeling.cpp and merged it
with the shaderstoragebufferobject_std430.cpp example. I have done this
because I would like to have an example that uses transparent and opaque
geometry and that uses the more elaborate shader code. The example is
working somehow, but what I do not get right so far is that the
transparent geometry is correctly blended with the opaque geometry.
Hmm, I haven't written the DepthPeelingStage and don't know its details
well enough - or rather, I don't know it at all.
Post by Johannes Brunen
In addition to the attached example I also added the following code
geom1_state->setTransparencyMode(OSG::Material::TransparencyForceTransparent);
geom6_state->setTransparencyMode(OSG::Material::TransparencyForceTransparent);
This, however, also does not produce the desired correct rendering.
I'm afraid I have no idea what that does; I can't recall ever having
seen a transparency mode enum :(
Post by Johannes Brunen
Is it even possible to mix opaque and transparent geometry with the
DepthPeelingStage core or is the stage incomplete with respect to that task?
I guess that is a possibility.
Post by Johannes Brunen
How do I have to handle the differing treatment of transparent and
opaque geometry in a stage implementation at all?
How is blending of transparent and opaque rendering performed in the common case?
Transparent objects are rendered after opaque ones in back to front
order (IIRC using the bounding box center).
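For illustration, the back-to-front ordering described here can be sketched in plain C++. The Drawable struct and the distance computation below are invented for this sketch and are not OpenSG API; OpenSG would use the bounding-box center of the real geometry:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for a transparent drawable (not an OpenSG type).
struct Drawable
{
    float center[3];   // bounding-box center in world space
};

// Squared distance from the eye point to the drawable's bbox center.
static float distSq(const Drawable& d, const float eye[3])
{
    float dx = d.center[0] - eye[0];
    float dy = d.center[1] - eye[1];
    float dz = d.center[2] - eye[2];
    return dx*dx + dy*dy + dz*dz;
}

// Sort transparent drawables back to front, i.e. farthest first, so
// that blending accumulates in the correct order.
void sortBackToFront(std::vector<Drawable>& v, const float eye[3])
{
    std::sort(v.begin(), v.end(),
              [&](const Drawable& a, const Drawable& b)
              { return distSq(a, eye) > distSq(b, eye); });
}
```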
Post by Johannes Brunen
Could you please take a look into the example and give me some hint what
I'm doing wrong here?
Additionally, I have tried to set up the HDR2 stage above the
DepthPeeling stage in another example. With that I do not get any
rendering. If I replace the HDR2 stage or the DepthPeeling stage by a
simple Group core I get the expected rendering for the particular stage.
Do you have any idea what is missing in that scenario where both stages
are active?
Not specifically, sorry. In general I would suspect it has something to
do with the FBOs/Attachments the stages are rendering into and how they
perform clearing.

Cheers,
Carsten
Johannes
2017-03-09 06:13:18 UTC
Hi Carsten,

first off, my understanding is that the TransparencyForceTransparent,
... values give you the possibility to govern the transparency
detection of a material. If you have a standard material this can be
done automatically, and therefore the default for the attribute is
TransparencyAutoDetection. However, if you have a fancy material, as in
my case, that does not have any known transparency characteristic, you
need a way to tell the system that your material is either
transparent or opaque. At least that is my interpretation :-)

See the ChunkMaterial::isTransparent() implementation:

bool ChunkMaterial::isTransparent(void) const
{
    Int32 tm = getTransparencyMode();

    if(tm != Material::TransparencyAutoDetection)
    {
        return (tm == Material::TransparencyForceTransparent);
    }

    bool returnValue = false;

    MFChunksType::const_iterator chunksIt  = _mfChunks.begin();
    MFChunksType::const_iterator chunksEnd = _mfChunks.end  ();

    for(; chunksIt != chunksEnd && returnValue == false; ++chunksIt)
    {
        returnValue = (*chunksIt)->isTransparent();
    }

    return returnValue;
}
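Reduced to its essentials, the decision logic above boils down to the following self-contained sketch (enum names mirror the ones in the snippet; the free function and the vector of per-chunk flags are inventions of this sketch, not OpenSG code):

```cpp
#include <vector>

// Mirrors the mode names used by the snippet above; the numeric values
// here are arbitrary for this sketch.
enum TransparencyMode
{
    TransparencyAutoDetection,
    TransparencyForceTransparent,
    TransparencyForceOpaque
};

// Same decision structure as ChunkMaterial::isTransparent(): an explicit
// mode wins; only AutoDetection falls back to inspecting the chunks.
bool isTransparent(TransparencyMode tm,
                   const std::vector<bool>& chunkTransparency)
{
    if(tm != TransparencyAutoDetection)
        return tm == TransparencyForceTransparent;

    for(bool t : chunkTransparency)
        if(t)
            return true;

    return false;
}
```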

Next, I know that you haven't written the DepthPeelingStage. What I
hoped to learn is the 'correct' way of writing any Stage with respect
to transparency. I have debugged a little in the stage and see that in
a mixed transparency/opaque scene the rooted stage's renderEnter(Action*
action) method is called exactly once. That means (correct me if I'm
wrong) that the stage is responsible for discriminating the transparent
from the opaque pass. For instance in the case of shadowing, the
transparent geometry should not be part of the shadow map generating
objects at all, because they do not cast any shadows. So part of a
shadow stage must be able to restrict itself to opaque geometries. Or in the
depth peeling case the opaque geometry must be rendered with other
materials and possibly other shader configuration. So there must be a
way to detect the render state in the shader.

I'm looking for explanation of how these things are to be set up
correctly in any Stage implementation. I think that this is a central
point and I would like to learn these details in order to do my things
correctly.
Post by Carsten Neumann
Post by Johannes Brunen
Is it even possible to mix opaque and transparent geometry with the
DepthPeelingStage core or is the stage incomplete with respect to that task?
I guess that is a possibility.
In that case it needs correction to be usable in the common case.
Post by Carsten Neumann
Transparent objects are rendered after opaque ones in back to front
order (IIRC using the bounding box center).
Yes, but that is not enough in my understanding. There has to be a
pattern of how to write Stages with respect to transparency in the case
that different rendering setups are necessary for transparent and opaque
geometries.
Post by Carsten Neumann
Post by Johannes Brunen
Could you please take a look into the example and give me some hint what
I'm doing wrong here?
Not specifically, sorry. In general I would suspect it has something to
do with the FBOs/Attachments the stages are rendering into and how they
perform clearing.
I will have a look into the details.

I really need more explanations for the RenderAction, RenderPartition,
Stage, transparency mix. I have searched the mailing list but did not
get enough information for sorting the issues in my head.

Be patient with me :-)

Best,
Johannes
Carsten Neumann
2017-03-09 16:37:35 UTC
Hello Johannes,
Post by Johannes
first off, my understanding is that the TransparencyForceTransparent,
... values give you the possibility to govern the transparency
detection of a material. If you have a standard material this can be
done automatically, and therefore the default for the attribute is
TransparencyAutoDetection. However, if you have a fancy material, as in
my case, that does not have any known transparency characteristic, you
need a way to tell the system that your material is either
transparent or opaque. At least that is my interpretation :-)
ah, I see, that makes sense.
Post by Johannes
Next, I know that you haven't written the DepthPeelingStage. What I
hoped to learn is the 'correct' way of writing any Stage with respect
to transparency. I have debugged a little in the stage and see that in
a mixed transparency/opaque scene the rooted stage's renderEnter(Action*
action) method is called exactly once. That means (correct me if I'm
wrong) that the stage is responsible for discriminating the transparent
from the opaque pass. For instance in the case of shadowing, the
transparent geometry should not be part of the shadow map generating
objects at all, because they do not cast any shadows. So part of a
shadow stage must be able to restrict itself to opaque geometries. Or in the
depth peeling case the opaque geometry must be rendered with other
materials and possibly other shader configuration. So there must be a
way to detect the render state in the shader.
Hmm, I'm not sure if there is a general way. I believe some stages play
tricks with the traversal mask - which isn't ideal as those bits should
be available to the application.

The separation of opaque and transparent objects happens at the DrawTree
level. Those are constructed by the RenderPartition to decide the order
objects are really being drawn in - allowing to sort by materials (for
opaque objects, to reduce state changes) or depth (for transparent ones).
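The split described here can be sketched roughly as follows. All types below are invented for the sketch; OpenSG's actual DrawTree machinery is considerably more involved:

```cpp
#include <algorithm>
#include <vector>

// Invented stand-in for a collected drawable, not an OpenSG type.
struct DrawEntry
{
    int   materialId;   // groups opaque entries to reduce state changes
    float eyeDepth;     // orders transparent entries back to front
    bool  transparent;
};

// Split the collected drawables the way a DrawTree conceptually does:
// opaque entries sorted by material, transparent ones by decreasing
// distance from the eye.
void buildDrawLists(const std::vector<DrawEntry>& in,
                    std::vector<DrawEntry>&       opaque,
                    std::vector<DrawEntry>&       transparent)
{
    for(const DrawEntry& e : in)
        (e.transparent ? transparent : opaque).push_back(e);

    std::sort(opaque.begin(), opaque.end(),
              [](const DrawEntry& a, const DrawEntry& b)
              { return a.materialId < b.materialId; });

    std::sort(transparent.begin(), transparent.end(),
              [](const DrawEntry& a, const DrawEntry& b)
              { return a.eyeDepth > b.eyeDepth; });
}
```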
Post by Johannes
I'm looking for explanation of how these things are to be set up
correctly in any Stage implementation. I think that this is a central
point and I would like to learn these details in order to do my things
correctly.
Those are valid questions; the thing is that I suspect that there may
not be a straightforward answer to them. Especially with the more
complex stages that essentially implement entire "rendering algorithms",
compositing their results is in general not a problem with a totally
obvious solution.
Some of these methods were not even invented (or widely used at least)
when the Stage abstraction was initially implemented; its initial use
was to have a way to render to texture. If you have an algorithm that
requires finer grained control over the rendering you may have to go
into the bowels of the Render{Action,Partition} and extend what they expose.
Post by Johannes
Post by Carsten Neumann
Post by Johannes Brunen
Is it even possible to mix opaque and transparent geometry with the
DepthPeelingStage core or is the stage incomplete with respect to that task?
I guess that is a possibility.
In that case it needs correction to be usable in the common case.
True, but see above: compositing arbitrary rendering algorithms in all
combinations automatically seems to me like it could turn into a tricky
problem.
Post by Johannes
Post by Carsten Neumann
Transparent objects are rendered after opaque ones in back to front
order (IIRC using the bounding box center).
Yes, but that is not enough in my understanding. There has to be a
pattern of how to write Stages with respect to transparency in the case
that different rendering setups are necessary for transparent and opaque
geometries.
Post by Carsten Neumann
Post by Johannes Brunen
Could you please take a look into the example and give me some hint what
I'm doing wrong here?
Not specifically, sorry. In general I would suspect it has something to
do with the FBOs/Attachments the stages are rendering into and how they
perform clearing.
I will have a look into the details.
I really need more explanations for the RenderAction, RenderPartition,
Stage, transparency mix. I have searched the mailing list but did not
get enough information for sorting the issues in my head.
The RenderAction (RA) traverses the scene tree visiting the NodeCores
along the way. Drawable objects are collected into the active
RenderPartition (RP, there is a default one that renders to the window
backbuffer), which stores them into its DrawTree. The DrawTree is
responsible for organizing objects in the "optimal" drawing order, by
default separating opaque and transparent objects and ordering them
differently (see above). IIRC the DrawTree is processed (i.e. drawing
happens) when its owner RP is finalized.
Stages use API on the RA to create additional RPs that then can target
custom FBOs, traverse the scene below them multiple times, post-process
the FBO attachments, etc.
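As a mock of that control flow: the classes below are NOT the real OpenSG classes, and the pushPartition signature is invented for this illustration; they only show how a render-to-texture stage conceptually adds a partition targeting its own FBO.

```cpp
#include <string>
#include <vector>

// Mock FBO: just a name for this illustration.
struct FBO
{
    std::string name;
};

struct RenderPartition
{
    FBO*                     target;    // null = window backbuffer
    std::vector<std::string> drawTree;  // stand-in for the real DrawTree

    RenderPartition() : target(nullptr) {}
};

struct RenderAction
{
    std::vector<RenderPartition> partitions;

    // The action starts with a default partition that renders to the
    // window backbuffer.
    RenderAction() { partitions.emplace_back(); }

    // A stage asks the action for an additional partition targeting
    // its own FBO (hypothetical signature, invented for this mock).
    RenderPartition& pushPartition(FBO* fbo)
    {
        partitions.emplace_back();
        partitions.back().target = fbo;
        return partitions.back();
    }
};

// What a render-to-texture stage conceptually does in renderEnter():
// create a partition for its FBO and let the subtree traversal fill
// that partition's draw tree.
void stageRenderEnter(RenderAction& ra, FBO& stageFbo)
{
    RenderPartition& rp = ra.pushPartition(&stageFbo);
    rp.drawTree.push_back("drawables collected below the stage");
}
```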
I believe this is how these things fit together. I'll try to answer more
specific questions, but please keep in mind that I'm not working with
the code on a daily basis these days...

Cheers,
Carsten
Johannes
2017-03-20 09:22:32 UTC
Hello Carsten,

sorry for the late answer, but I was quite busy finishing the
MultiLightChunk that I have just sent to the list.
Post by Carsten Neumann
Hello Johannes,
Post by Johannes
Next, I know that you haven't written the DepthPeelingStage. What I
hoped to learn is the 'correct' way of writing any Stage with respect
to transparency. I have debugged a little in the stage and see that in
a mixed transparency/opaque scene the rooted stage's renderEnter(Action*
action) method is called exactly once. That means (correct me if I'm
wrong) that the stage is responsible for discriminating the transparent
from the opaque pass. For instance in the case of shadowing, the
transparent geometry should not be part of the shadow map generating
objects at all, because they do not cast any shadows. So part of a
shadow stage must be able to restrict itself to opaque geometries. Or in the
depth peeling case the opaque geometry must be rendered with other
materials and possibly other shader configuration. So there must be a
way to detect the render state in the shader.
Hmm, I'm not sure if there is a general way. I believe some stages play
tricks with the traversal mask - which isn't ideal as those bits should
be available to the application.
The separation of opaque and transparent objects happens at the DrawTree
level. Those are constructed by the RenderPartition to decide the order
objects are really being drawn in - allowing to sort by materials (for
opaque objects, to reduce state changes) or depth (for transparent ones).
Post by Johannes
I'm looking for explanation of how these things are to be set up
correctly in any Stage implementation. I think that this is a central
point and I would like to learn these details in order to do my things
correctly.
Those are valid questions; the thing is that I suspect that there may
not be a straightforward answer to them. Especially with the more
complex stages that essentially implement entire "rendering algorithms",
compositing their results is in general not a problem with a totally
obvious solution.
Some of these methods were not even invented (or widely used at least)
when the Stage abstraction was initially implemented; its initial use
was to have a way to render to texture. If you have an algorithm that
requires finer grained control over the rendering you may have to go
into the bowels of the Render{Action,Partition} and extend what they expose.
Post by Johannes
Post by Carsten Neumann
Post by Johannes Brunen
Is it even possible to mix opaque and transparent geometry with the
DepthPeelingStage core or is the stage incomplete with respect to that task?
I guess that is a possibility.
In that case it needs correction to be usable in the common case.
True, but see above: compositing arbitrary rendering algorithms in all
combinations automatically seems to me like it could turn into a tricky
problem.
Post by Johannes
Post by Carsten Neumann
Transparent objects are rendered after opaque ones in back to front
order (IIRC using the bounding box center).
Yes, but that is not enough in my understanding. There has to be a
pattern of how to write Stages with respect to transparency in the case
that different rendering setups are necessary for transparent and opaque
geometries.
Post by Carsten Neumann
Post by Johannes Brunen
Could you please take a look into the example and give me some hint what
I'm doing wrong here?
Not specifically, sorry. In general I would suspect it has something to
do with the FBOs/Attachments the stages are rendering into and how they
perform clearing.
I will have a look into the details.
I really need more explanations for the RenderAction, RenderPartition,
Stage, transparency mix. I have searched the mailing list but did not
get enough information for sorting the issues in my head.
The RenderAction (RA) traverses the scene tree visiting the NodeCores
along the way. Drawable objects are collected into the active
RenderPartition (RP, there is a default one that renders to the window
backbuffer), which stores them into its DrawTree. The DrawTree is
responsible for organizing objects in the "optimal" drawing order, by
default separating opaque and transparent objects and ordering them
differently (see above). IIRC the DrawTree is processed (i.e. drawing
happens) when its owner RP is finalized.
Stages use API on the RA to create additional RPs that then can target
custom FBOs, traverse the scene below them multiple times, post-process
the FBO attachments, etc.
I believe this is how these things fit together.
Carsten, thank you very much for these answers. They are really valuable
to me. Often the big picture is not easy to grasp, and some things are
really hard to find without someone nudging one in the right direction.
Post by Carsten Neumann
I'll try to answer more
specific questions, but please keep in mind that I'm not working with
the code on a daily basis these days...
I'm in your debt for your dedication to answering my questions. What I
can give is my coding time, but I still have some way to go before I am
really independent of help.

It is really sad that not more people are interested in OpenSG these
days. A lot of manpower, dedication, and money has gone into this
piece of software.

In any case, thank you for your patience; I know that it is all
volunteer work and I really appreciate that.

Best,
Johannes
