Discussion:
[CsMain] Plans for the engine: speeding things up
Jorrit Tyberghein
2000-03-27 07:05:08 UTC
Permalink
I don't want to see CS branch into some other version. Many of the concerns
people have are already addressed in my mind; you just don't know it yet :-)
So here I present a list of all the things I (or someone else) plan to do in CS
to make it better on today's hardware.

First we need to support hardware accelerated transform as much as possible.
This can be accomplished with the functions DrawTriangleMesh (which is
already there but does not support hardware accelerated transforms yet)
and the upcoming function DrawPolygonMesh.

DrawTriangleMesh is used by sprites (3D triangle meshes) and will also be
used for terrain and curved surfaces. So this means that those important entities
in CS will be able to have optimal speed.

DrawPolygonMesh is a variant which is more similar to DrawPolygon and thus
is able to draw lightmapped polygons (unlike DrawTriangleMesh, which draws
Gouraud-shaded triangles). I plan to use this for detail objects. Those are things
which will enhance the detail of sectors. We already have such things but currently
they are just drawn like sector walls.
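
To give an idea of what such a call hands over (this is only a rough sketch,
not the actual iGraphics3D interface; all names here are made up):

  #include <vector>

  // Illustrative only: NOT the real DrawTriangleMesh/DrawPolygonMesh API,
  // just a sketch of what a mesh-style call batches together.
  struct csVec3 { float x, y, z; };

  struct MeshBatch
  {
    std::vector<csVec3> vertices;   // object-space vertices
    std::vector<int>    triangles;  // three indices per triangle
    float m[3][3];                  // object-to-camera rotation
    csVec3 t;                       // object-to-camera translation
  };

  // A hardware driver would upload the vertex array plus the single matrix
  // and let the card do the work; this per-vertex loop is what a software
  // driver (or the engine today) has to do instead.
  static csVec3 TransformVertex (const MeshBatch& b, const csVec3& v)
  {
    csVec3 r;
    r.x = b.m[0][0] * v.x + b.m[0][1] * v.y + b.m[0][2] * v.z + b.t.x;
    r.y = b.m[1][0] * v.x + b.m[1][1] * v.y + b.m[1][2] * v.z + b.t.y;
    r.z = b.m[2][0] * v.x + b.m[2][1] * v.y + b.m[2][2] * v.z + b.t.z;
    return r;
  }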

So this means that we can easily get hardware accelerated transform for
everything except sector walls. Changing those would be more fundamental.
However, I wonder if this is still a big problem. The sector walls should be few
and will only define the coarse boundaries of the rooms. They will help do
visibility culling and that's the main reason they need to be handled (i.e.
transformed) by the engine itself.

Personally I think that this is the best approach. All detail objects and triangle
meshes can have optimal speed that way while still keeping the c-buffer/clipping
visibility culling for culling large number of objects at once.

The above changes are also possible without fundamental engine changes.

In addition I'm working on a PVS for CS (Potentially Visible Set). This will
avoid having to use the c-buffer for almost everything. If it works well we
might even be able to avoid it altogether. For software rendering or slow hardware
(where overdraw is reasonably expensive) we can still re-enable the c-buffer
after the PVS to do fine culling. But on fast hardware the PVS alone will
be enough, giving even more speed.
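
The idea in short: precompute for every node the set of nodes that can possibly
be seen from it, and at run time only consider that set for the node the camera
is in. A rough sketch of the lookup (the data layout here is purely illustrative,
not the format I will actually use):

  #include <vector>

  // One bit per node: visible[i][j] says node j is potentially visible
  // from node i. Built offline, only looked up at render time.
  struct PVS
  {
    int numNodes;
    std::vector<std::vector<bool> > visible;

    explicit PVS (int n)
      : numNodes (n), visible (n, std::vector<bool> (n, false)) {}

    bool IsPotentiallyVisible (int fromNode, int toNode) const
    { return visible[fromNode][toNode]; }
  };

  // Render-time use: nodes rejected here never get transformed, never reach
  // the c-buffer test and never reach the hardware.
  template <class NodeRenderer>
  void RenderWithPVS (const PVS& pvs, int cameraNode, NodeRenderer render)
  {
    for (int node = 0; node < pvs.numNodes; node++)
      if (pvs.IsPotentiallyVisible (cameraNode, node))
        render (node);   // fine culling (c-buffer or z-buffer) happens inside
  }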

What do people think?

Greetings,

--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Shadwell hated all southerners and, by inference, was standing at the
North Pole.
-- (Terry Pratchett & Neil Gaiman, Good Omens)
==============================================================================
Andrew Zabolotny
2000-03-27 08:10:38 UTC
Permalink
Post by Jorrit Tyberghein
and will only define the coarse boundaries of the rooms. They will help do
visibility culling and that's the main reason they need to be handled (i.e.
transformed) by the engine itself.
How about STATBSP()? Are polygons that are part of the BSP considered "boundaries
of rooms" or are they handled separately? The reason is that I'm going to use
STATBSP() extensively in MazeD-generated worlds (currently MazeD separates the
convex hull of each room into a sector and everything that is "inside" into a
thing called __static__, and adds an STATBSP() keyword).
Post by Jorrit Tyberghein
Personally I think that this is the best approach. All detail objects and
triangle meshes can have optimal speed that way while still keeping the
c-buffer/clipping visibility culling for culling large number of objects
at once.
c-buffer/covtree needs transformed or original vertices?

Another observation: I've built with MazeD a simple room with a relatively
complex thing in the middle (a sphere consisting of 256 triangles and 114
vertices, which was part of STATBSP()); this dropped the framerate from 24fps to
almost 8fps (even when the thing is behind me!). Well, it was in debugging mode,
but hell, it's a Celeron/433! Thus the question: why was it so slow, are the
transforms that non-optimal? Also another question: how optimal is the Z-clipping
algorithm, maybe that is why it is so slow? Yet another question: how is u/v/z
clipping handled in the Z-plane clipping algorithm?

Greetings,
_\***@teamOS/2
Jorrit Tyberghein
2000-03-27 09:17:30 UTC
Permalink
Post by Andrew Zabolotny
Post by Jorrit Tyberghein
and will only define the coarse boundaries of the rooms. They will help do
visibility culling and that's the main reason they need to be handled (i.e.
transformed) by the engine itself.
How about STATBSP()? Are polygons that are part of the BSP considered "boundaries
of rooms" or are they handled separately? The reason is that I'm going to use
STATBSP() extensively in MazeD-generated worlds (currently MazeD separates the
convex hull of each room into a sector and everything that is "inside" into a
thing called __static__, and adds an STATBSP() keyword).
Every polygon in the STATBSP is considered to be world geometry. This means
it will get used for visibility culling. I think that the best approach in the future
will be to mark all large occluders (i.e. sector walls and big things containing
large polygons) so that they go into the STATBSP() tree, while all fine detail objects
(i.e. small but complex things and curved surfaces) should not be included
in that tree.

The reason for this is that the visibility algorithm works best with large
polygons, i.e. it needs large polygons that can cull a lot of things at once.
It does not pay off to add every single small triangle or polygon because that
costs processing power without much expected gain.

So I'm considering a new keyword called 'DETAIL' which could be given to a
thing so that STATBSP ignores it for the BSP/octree. Such a thing will be a
detail object. You could also use MOVEABLE for that but maybe it is
better to use another keyword, as MOVEABLE could in the future also
mean other things.
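
Roughly, the idea is that the loader would route things like this (just a
sketch; the type and flag names are illustrative, not actual loader code):

  #include <cstddef>
  #include <vector>

  struct ThingInfo
  {
    bool isDetail;   // e.g. set by a DETAIL keyword in the world file
    // ... polygons, vertices, etc.
  };

  struct World
  {
    std::vector<const ThingInfo*> staticTreeInput; // large occluders -> STATBSP/octree
    std::vector<const ThingInfo*> detailObjects;   // small complex things -> bbox test only
  };

  void PartitionThings (const std::vector<ThingInfo>& things, World& world)
  {
    for (std::size_t i = 0; i < things.size (); i++)
    {
      if (things[i].isDetail)
        world.detailObjects.push_back (&things[i]);   // skipped by the visibility tree
      else
        world.staticTreeInput.push_back (&things[i]); // feeds the occlusion culler
    }
  }
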
Post by Andrew Zabolotny
Post by Jorrit Tyberghein
Personally I think that this is the best approach. All detail objects and
triangle meshes can have optimal speed that way while still keeping the
c-buffer/clipping visibility culling for culling large number of objects
at once.
c-buffer/covtree needs transformed or original vertices?
Transformed ones. It works on transformed AND perspective-corrected coordinates.
That's why it needs to be done as little as possible and on polygons that are
as large as possible (because a large polygon culls more).
Post by Andrew Zabolotny
Another observation: I've built with MazeD a simple room with a relatively
complex thing in the middle (a sphere consisting of 256 triangles and 114
vertices, which was part of STATBSP()); this dropped the framerate from 24fps to
almost 8fps (even when the thing is behind me!). Well, it was in debugging mode,
but hell, it's a Celeron/433! Thus the question: why was it so slow, are the
transforms that non-optimal?
I'm interested in that level.

One reason for the slowdown could be a huge number of splits caused by the
sphere. I think that in the future we should mark the sphere as a detail
object so that it is not used in the visibility algorithm (but still tested of course).
Post by Andrew Zabolotny
Also another question: how optimal is the Z-clipping
algorithm, maybe that is why it is so slow?
Maybe...
Post by Andrew Zabolotny
Yet another question: how is u/v/z clipping
handled in the Z-plane clipping algorithm?
It isn't. That's the reason that gouraud shaded polygons are limited to
triangles. That's also why I needed your clipper extension.

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Shadwell hated all southerners and, by inference, was standing at the
North Pole.
-- (Terry Pratchett & Neil Gaiman, Good Omens)
==============================================================================
Aaron Drew
2000-03-27 10:48:10 UTC
Permalink
Ok,
I'm guilty of not looking into this myself due to heavy Uni commitments at
the moment, but I'm wondering why your sphere would be translated at
all. Why not just translate the view frustum into world coordinates and clip
to that? The translation of your sphere should never occur. In fact, I don't
understand why you need to translate any static objects at all. Can't you
just send them to the graphics card / software renderer with a
transformation matrix and let it take care of things? Is the culling done on
transformed data?

Apologies for posting this without first checking up on it. Please don't
hesitate to tell me to go read up before posting if I'm missing something.

- Aaron

Jorrit Tyberghein
2000-03-27 11:50:52 UTC
Permalink
Post by Aaron Drew
Ok,
I'm guilty of not looking into this myself due to heavy Uni commitments at
the moment, but I'm wondering why your sphere would be translated at
all. Why not just translate the view frustum into world coordinates and clip
to that? The translation of your sphere should never occur. In fact, I don't
understand why you need to translate any static objects at all. Can't you
just send them to the graphics card / software renderer with a
transformation matrix and let it take care of things? Is the culling done on
transformed data?
CS has no concept of spheres at this moment. So the only thing you can
do is to make a thing (or sprite) using triangles/polygons that make up the sphere.
In the future we'll be able to mark the sphere as a detail object, which means
it will be excluded from transformation/visibility (except for a global test
to see if the entire object is likely visible or not). In that case we can let
the hardware take care of transformation. Right now this is not possible
yet. We're working towards that goal.

Visibility culling in CS happens in transformed coordinate space (even
perspective corrected). So it is important to try to do this as little as possible
and with polygons that are as big as possible (in order to cull a lot with few operations).
This is also what I'm working on. Both the PVS and detail objects will make
culling quicker.


Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Shadwell hated all southerners and, by inference, was standing at the
North Pole.
-- (Terry Pratchett & Neil Gaiman, Good Omens)
==============================================================================
Andrew Zabolotny
2000-03-27 10:51:26 UTC
Permalink
Post by Jorrit Tyberghein
So I'm considering a new keyword called 'DETAIL' which could go to a
thing so that STATBSP will ignore that for the BSP/octree. This will be a
detail object. You could also use MOVEABLE for that but maybe it is
better to use another keyword as MOVEABLE could in the future also
mean other things.
So I will need to create two things: one that will go into static BSP
(__static__) and one which won't (__detail__)?
Post by Jorrit Tyberghein
Post by Andrew Zabolotny
but hell, its a Celeron/433! Thus the question: why it was so slow, are
transforms such non-optimal?
I'm interested in that level.
I will send it to you tomorrow (don't have it handy)
Post by Jorrit Tyberghein
Post by Andrew Zabolotny
Also another question: how optimal the Z-clipping
algorithm is, maybe it is so slow?
Maybe...
Post by Andrew Zabolotny
Yet another question: how u/v/z clipping is
handled in Z-plane clipping algorithm?
It isn't. That's the reason that gouraud shaded polygons are limited to
triangles. That's also why I needed your clipper extension.
But even with triangles it will still clip incorrectly against Znear. I'm inclined
towards implementing yet another clipper - for clipping against a plane. Do you
need to clip against a general plane, or just against the z > zmin plane?

Greetings,
_\***@teamOS/2
Jorrit Tyberghein
2000-03-27 11:58:12 UTC
Permalink
Post by Andrew Zabolotny
Post by Jorrit Tyberghein
So I'm considering a new keyword called 'DETAIL' which could go to a
thing so that STATBSP will ignore that for the BSP/octree. This will be a
detail object. You could also use MOVEABLE for that but maybe it is
better to use another keyword as MOVEABLE could in the future also
mean other things.
So I will need to create two things: one that will go into static BSP
(__static__) and one which won't (__detail__)?
No, more than that. Every detail object should be a separate object. That's because I
plan to do visibility culling on entire detail objects. For example, if you
add two of your spheres to some world then both should be separate detail
objects. Otherwise I can only cull them both or none. I need to be able to
cull them individually.
Post by Andrew Zabolotny
Post by Jorrit Tyberghein
Post by Andrew Zabolotny
Yet another question: how u/v/z clipping is
handled in Z-plane clipping algorithm?
It isn't. That's the reason that gouraud shaded polygons are limited to
triangles. That's also why I needed your clipper extension.
But even with triangles it will still clip incorrectly against Znear. I'm inclined
towards implementing yet another clipper - for clipping against a plane. Do you
need to clip against a general plane, or just against the z > zmin plane?
Well both. The z plane clipper is needed most but I also need a general clipper
in some cases (i.e. floating portals).

This clipper needs to be in 3D however (as you can imagine).
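
For reference, clipping a convex polygon against a single plane is one
Sutherland-Hodgman style pass; the same routine handles the z > zmin case
(normal (0,0,1), distance -zmin) and a general plane such as a floating
portal's. Per-vertex u/v would simply be interpolated with the same t at each
crossing. A self-contained sketch (only to show the idea, not the csgeom code
Andrew will write):

  #include <cstddef>
  #include <vector>

  struct Vec3 { float x, y, z; };

  // Plane: n.x*x + n.y*y + n.z*z + d >= 0 is the "inside" half-space.
  struct Plane { Vec3 n; float d; };

  static float Side (const Plane& p, const Vec3& v)
  { return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d; }

  static Vec3 Intersect (const Vec3& a, const Vec3& b, float sa, float sb)
  {
    float t = sa / (sa - sb);                 // where the edge crosses the plane
    Vec3 r = { a.x + t * (b.x - a.x),
               a.y + t * (b.y - a.y),
               a.z + t * (b.z - a.z) };
    return r;                                 // u/v would use the same t
  }

  // Clip a convex polygon against one plane (single Sutherland-Hodgman pass).
  std::vector<Vec3> ClipPolyPlane (const std::vector<Vec3>& in, const Plane& p)
  {
    std::vector<Vec3> out;
    std::size_t n = in.size ();
    for (std::size_t i = 0; i < n; i++)
    {
      const Vec3& cur = in[i];
      const Vec3& nxt = in[(i + 1) % n];
      float sc = Side (p, cur), sn = Side (p, nxt);
      if (sc >= 0) out.push_back (cur);                 // keep inside vertex
      if ((sc >= 0) != (sn >= 0))                       // edge crosses the plane
        out.push_back (Intersect (cur, nxt, sc, sn));
    }
    return out;   // z-near case: Plane{{0,0,1}, -zmin}
  }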

Greetings,

--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Shadwell hated all southerners and, by inference, was standing at the
North Pole.
-- (Terry Pratchett & Neil Gaiman, Good Omens)
==============================================================================
Aaron Drew
2000-03-27 12:57:20 UTC
Permalink
Ok. I understand a bit better now. Is there any reason why CS transforms to
screen space before culling visible surfaces or is it a historical element
of the engine inherited from the software renderer? (I understand that with
software rendering much of the transformed data can likely be reused in the
2D form to render things.) To me it would be faster to translate the
clipping planes in 3D to world coordinates and clip against them. Is this a
future goal of the engine?

Jorrit Tyberghein
2000-03-27 13:02:21 UTC
Permalink
Post by Aaron Drew
Ok. I understand a bit better now. Is there any reason why CS transforms to
screen space before culling visible surfaces or is it a historical element
of the engine inherited from the software renderer? (I understand that with
software rendering much of the transformed data can likely be reused in the
2D form to render things.)
The c-buffer is the reason. This is a VERY good culler but it only operates
in 2D coordinates (screen space).
Post by Aaron Drew
To me it would be faster to translate the
clipping planes in 3D to world coordinates and clip against them. Is this a
future goal of the engine?
But that's only useful for clipping. The c-buffer does culling, which is much
more than just clipping. CS already has a 3D frustum which can be used.
I plan to use that more in the future.


Greetings,

--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Shadwell hated all southerners and, by inference, was standing at the
North Pole.
-- (Terry Pratchett & Neil Gaiman, Good Omens)
==============================================================================
Aaron Drew
2000-03-27 22:18:12 UTC
Permalink
Post by Jorrit Tyberghein
Post by Aaron Drew
Ok. I understand a bit better now. Is there any reason why CS transforms to
screen space before culling visible surfaces or is it a historical element
of the engine inherited from the software renderer? (I understand that with
software rendering much of the transformed data can likely be reused in the
2D form to render things.)
The c-buffer is the reason. This is a VERY good culler but it only operates
in 2D coordinates (screen space).
Hardware z-buffering is very fast (and c-buffers aren't possible in
hardware as far as I know). Is it possible to move the c-buffer code to the
software renderer? I don't understand why it's not possible to just translate
the 5/6 frustum planes to world space and use those to clip/cull geometry that
is out of view or only partially visible. Translating vertices to view space
is expensive.
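
To make clearer what I mean, roughly something like this (the plane and box
types here are just for illustration, not CS code):

  #include <cstddef>

  struct V3 { float x, y, z; };
  struct PlaneW { V3 n; float d; };   // world-space plane: dot(n,p)+d >= 0 is inside
  struct AABB { V3 min, max; };

  // Conservative box-vs-plane test: pick the box corner furthest along the
  // plane normal; if even that corner is outside, the whole box is outside.
  static bool BoxOutsidePlane (const AABB& b, const PlaneW& p)
  {
    V3 far;
    far.x = (p.n.x >= 0) ? b.max.x : b.min.x;
    far.y = (p.n.y >= 0) ? b.max.y : b.min.y;
    far.z = (p.n.z >= 0) ? b.max.z : b.min.z;
    return p.n.x * far.x + p.n.y * far.y + p.n.z * far.z + p.d < 0;
  }

  // Frustum test in world space: 5 or 6 planes, transformed once per frame,
  // so rejected objects never need a single vertex transformed.
  bool BoxInFrustum (const AABB& box, const PlaneW* planes, std::size_t numPlanes)
  {
    for (std::size_t i = 0; i < numPlanes; i++)
      if (BoxOutsidePlane (box, planes[i]))
        return false;   // fully outside at least one plane -> culled
    return true;        // possibly visible (may still be occluded)
  }
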
Samuel
2000-03-28 02:17:53 UTC
Permalink
Nooo, don't move the c-buffer to the software renderer. I believe Jorrit mentioned a while
back that the aim was to calculate a convex solid hull for a thing/sprite and an overall
bounding box. This minimises the number of verts transformed in software. The bounding box
would be tested against the c-buffer; if visible, its convex hull would be added into the
c-buffer, and potentially its full untransformed geometry passed to the API (depending on
hardware caps). This is definitely the way to go imho, I thought the latest additions to
iGraphics3D were going towards this goal. With large worlds with many objects, even with
lightning-fast hardware you need culling; hardware z-buffers alone are a long way from
ideal in all situations. Simple example: you are in a room with a couple of portals
pointing out to complex worlds. Both portals are within the frustum. But both portals are
obscured by one big object sitting right in front of you. With culling you don't reach the
portals. With only hardware Z-buffering you will have much overdraw and unnecessary
recursion through portals. The trick is to tailor culling and hardware z-buffers to the
capabilities and circumstances at hand.
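
In code, the per-object flow I mean is roughly this (the interfaces below are
made up for illustration; they are not actual CS classes):

  // Illustrative only: not the real c-buffer or iGraphics3D API.
  struct ScreenPoly { /* 2D convex outline in screen space */ };

  struct iCoverageBuffer
  {
    virtual ~iCoverageBuffer () {}
    // True if any part of the outline is not yet covered (i.e. maybe visible).
    virtual bool TestOutline (const ScreenPoly& outline) const = 0;
    // Mark the outline as covered so it occludes objects handled after it.
    virtual void InsertOutline (const ScreenPoly& outline) = 0;
  };

  struct iCullableObject
  {
    virtual ~iCullableObject () {}
    virtual ScreenPoly ProjectedBoundingBox () const = 0; // 8 corners transformed, no more
    virtual ScreenPoly ProjectedConvexHull () const = 0;  // still far cheaper than the mesh
    virtual void SendUntransformedGeometry () const = 0;  // let the hardware transform it
  };

  void CullAndDraw (iCoverageBuffer& cbuf, const iCullableObject& obj)
  {
    if (!cbuf.TestOutline (obj.ProjectedBoundingBox ()))
      return;                                        // fully occluded: skip everything
    cbuf.InsertOutline (obj.ProjectedConvexHull ()); // object now occludes later ones
    obj.SendUntransformedGeometry ();                // full mesh never transformed on the CPU
  }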

If this is way off base I'm sure I'll read about it!
cya
samuel
Jorrit Tyberghein
2000-03-28 06:32:59 UTC
Permalink
Post by Samuel
Nooo, don't move the c-buffer to the software renderer. I believe Jorrit mentioned a while
back that the aim was to calculate a convex solid hull for a thing/sprite and an overall
bounding box. This minimises the number of verts transformed in software. The bounding box
would be tested against the c-buffer; if visible, its convex hull would be added into the
c-buffer, and potentially its full untransformed geometry passed to the API (depending on
hardware caps). This is definitely the way to go imho, I thought the latest additions to
iGraphics3D were going towards this goal. With large worlds with many objects, even with
lightning-fast hardware you need culling; hardware z-buffers alone are a long way from
ideal in all situations. Simple example: you are in a room with a couple of portals
pointing out to complex worlds. Both portals are within the frustum. But both portals are
obscured by one big object sitting right in front of you. With culling you don't reach the
portals. With only hardware Z-buffering you will have much overdraw and unnecessary
recursion through portals. The trick is to tailor culling and hardware z-buffers to the
capabilities and circumstances at hand.
You are mostly right. But you forgot to mention the most important reason of all.
The world is structured in a big octree. An octree node can contain thousands
or more polygons. Using the c-buffer you can also cull entire octree nodes with
one test. This effectively eliminates testing of thousands of polygons at once.

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Jorrit Tyberghein
2000-03-28 06:22:50 UTC
Permalink
Post by Aaron Drew
Post by Jorrit Tyberghein
Post by Aaron Drew
Ok. I understand a bit better now. Is there any reason why CS transforms to
screen space before culling visible surfaces or is it a historical element
of the engine inherited from the software renderer? (I understand that with
software rendering much of the transformed data can likely be reused in the
2D form to render things.)
The c-buffer is the reason. This is a VERY good culler but it only operates
in 2D coordinates (screen space).
Hardware z-buffering is very fast (and c-buffers aren't possible in
hardware as far as I know). Is it possible to move the c-buffer code to the
software renderer? I don't understand why it's not possible to just translate
the 5/6 frustum planes to world space and use those to clip/cull geometry that
is out of view or only partially visible. Translating vertices to view space
is expensive.
A hardware z-buffer is fast but it can never cull thousands of polygons at once.
The c-buffer is useful for very large levels (hundreds of thousands of polygons)
for which even the Z-buffer of very fast cards is not sufficient.

The c-buffer can cull thousands or more polygons with one single test so that
you don't have to send them to the hardware.

View-frustum culling is just not enough when you have large worlds, even
if you have fast hardware. There is always a point at which the world becomes
too big for any kind of hardware. And that's the point at which you need
more advanced culling systems than z-buffering.

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Paul Garceau
2000-03-28 00:44:53 UTC
Permalink
Hi folks,

Starting to get an image of the rendering concepts. Need some
feedback to refine that image...please forgive the length of the
message...I needed to get it all out here in the interest of
accurate feedback.
Post by Jorrit Tyberghein
CS has no concept of spheres at this moment.
This puzzles me. Exactly what are the primitives that CS
understands?

I may be missing something here, but I had assumed that the
primitives that CS could understand were cubes, spheres, cones
and polygons.
Granted, cubes, spheres and cones can all be assembled from
polygons, but is it really the most effective and efficient way
to deal with cubes, spheres and cones?

Jorrit mentioned that CS has no concept of spheres. Ok. The
next thought that came into my mind was: Is there some reason
why there is such a limited availability of primitive
recognition for CS (keeping in mind the amount of free time
folks have to work on CS)?

Comparison of the Radiance Concept with CS Concept:

Here I need to fall back to my experience with the Radiance
application.
Radiance, a ray tracing engine, recognizes a number of
primitives (pre-defined via the use of octree files and pre-
defined header references).
Once the primitive (or collection of primitives) has been
established, Radiance then outputs or modifies it by using a
module specific to the primitive (or collection of primitives)
in question.
The rest of the process has to do with the actual type of image
processing requested for any given scene (ray tracing, etc.).

o Greg Larson (he wrote the original Radiance Synthetic Imaging
Tool). About a year ago he demo-ed a 3d OpenGL version of
Radiance which allows the end-user to actually look around a 3d
AutoCad scene in real-time.

o The most time-consuming aspect of Radiance has to do with the
actual image processing itself.

Bear in mind that the original Radiance windowing system was X-
Windows based. Also, Radiance had been finished/more-or-less
completed well before the OpenGL API was publicly released.

Final questions & comments:

Granted, Radiance is not a rendering engine, per se. It
includes a ray tracing engine. However, primitives are still
primitives.

Is it possible to have a list of pre-defined primitives for CS
which the rendering engine works from when generating an image
or scene?

[...
No, I am not really talking about pre-rendered backgrounds. I
am talking about real-time scene assembly which uses some sort
of database to pull pre-rendered primitives from; primitives
such as spheres, etc., and then renders a new scene based on the
primitives and transforms being invoked from the available
object data.

A quick CS example:

Assumption -- Initial sphere size/dimension(s) has been pre-
defined based on data extracted from a collection or database of
pre-rendered primitives.
Process -- One opaque sphere, large "room". Sphere floats in
3d space and incrementally increases in size by simply
increasing the sphere dimensions in real-time and re-rendering
only the sphere itself when size difference demands a re-render.
Occluded surfaces, in relationship to current frustrum, are not
re-rendered. Non-occluded surfaces are treated as "pre-
rendered" and subsequently may be pulled from the collection or
database of pre-rendered objects and transformed on an as-needed
basis.

...]

Don't we have some sort of collection of pre-defined
primitives for CS? Or, did primitive recognition come after the
original CS engine was built?

Thanks for your feedback.

Peace,

Paul G.


Nothing real can be threatened.
Nothing unreal exists.
Jorrit Tyberghein
2000-03-28 06:30:33 UTC
Permalink
Post by Paul Garceau
Hi folks,
Starting to get an image of the rendering concepts. Need some
feedback to refine that image...please forgive the length of the
message...I needed to get it all out here in the interest of
accurate feedback.
Post by Jorrit Tyberghein
CS has no concept of spheres at this moment.
This puzzles me. Exactly what are the primitives that CS
understands?
Polygons. That's it :-)

There are some higher level constructs:

- Bezier curves. These are tessellated into triangles depending on LOD.
- 3D sprites. Basically a LOD/skeletal/frame-animated triangle mesh.
- 2D sprites: one single polygon facing the camera.
- Thing: a set of 3D polygons.
- Sector: a set of 3D polygons.

So in fact Bezier curves are the only real high-level primitive that CS has.
The rest is just a collection of triangles/polygons. The curve system in CS
is very general, so it would be possible to add spheres as a new primitive
object and let them use the curve system to tessellate into triangles.
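
For example, a sphere primitive would only have to produce a triangle mesh at
whatever LOD is asked for, roughly like this (a self-contained sketch, not CS
code; the names are made up):

  #include <cmath>
  #include <vector>

  struct Vtx { float x, y, z; };

  // Generate a unit UV-sphere as a triangle list. 'stacks' and 'slices'
  // play the role of the LOD the curve system would choose.
  void TessellateSphere (int stacks, int slices,
                         std::vector<Vtx>& verts, std::vector<int>& tris)
  {
    const float PI = 3.14159265f;
    for (int i = 0; i <= stacks; i++)
    {
      float phi = PI * i / stacks;               // 0..pi from pole to pole
      for (int j = 0; j <= slices; j++)
      {
        float theta = 2 * PI * j / slices;       // 0..2pi around the axis
        Vtx v = { std::sin (phi) * std::cos (theta),
                  std::cos (phi),
                  std::sin (phi) * std::sin (theta) };
        verts.push_back (v);
      }
    }
    for (int i = 0; i < stacks; i++)
      for (int j = 0; j < slices; j++)
      {
        int a = i * (slices + 1) + j, b = a + slices + 1;
        // two triangles per quad of the lat/long grid
        tris.push_back (a); tris.push_back (b);     tris.push_back (a + 1);
        tris.push_back (b); tris.push_back (b + 1); tris.push_back (a + 1);
      }
  }
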
Post by Paul Garceau
I may be missing something here, but I had assumed that the
primitives that CS could understand were cubes, spheres, cones
and polygons.
CS does not know about cubes, spheres, and cones. It is not a CSG
library.
Post by Paul Garceau
Granted, cubes, spheres and cones can all be assembled from
polygons, but is it really the most effective and efficient way
to deal with cubes, spheres and cones?
Well, in the end you'll have to send polygons or triangles to the hardware,
so that's the end situation you have to deal with. I can see the use/need
for high-level libraries sitting on top of CS that would support those
kinds of objects. But CS internally cannot really benefit much from
it (except for the curve system).
Post by Paul Garceau
Jorrit mentioned that CS has no concept of spheres. Ok. The
next thought that came into my mind was: Is there some reason
why there is such a limited availability of primitive
recognition for CS (keeping in mind the amount of free time
folks have to work on CS)?
The most important reason is that the entire CS rendering loop is
mostly based on polygons.
Post by Paul Garceau
Is it possible to have a list of pre-defined primitives for CS
which the rendering engine works from when generating an image
or scene?
I think this is possible but I don't see the need to bring the knowledge
of the primitives into CS itself. I don't really see how that can benefit
anything. In the case of detail objects (which can be anything really:
curves, boxes, ...) CS will deal with them as with a normal bounding
box. So visibility testing will be done on the bounding box only.

Greetings,

--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Paul Garceau
2000-03-29 00:15:25 UTC
Permalink
Post by Seth Galbraith
It sounds to me what you are interested in is procedural geometry
- where you "render" the geometry by creating whatever
representation is appropriate for the current view and renderer -
or in other words what we call a "curve" in Crystal Space: A
cylinder primitive class would "render" a triangle mesh
representation of a cylinder because a triangle mesh is what the
renderer knows how to draw.
Is this what you mean?
Yes. Wish I could've been as succinct.

Thanks, Seth.

Peace,

Paul G.


Nothing real can be threatened.
Nothing unreal exists.
Seth Galbraith
2000-03-28 05:12:40 UTC
Permalink
It sounds to me like what you are interested in is procedural geometry - where
you "render" the geometry by creating whatever representation is
appropriate for the current view and renderer - or in other words what we
call a "curve" in Crystal Space: a cylinder primitive class would
"render" a triangle mesh representation of a cylinder because a triangle
mesh is what the renderer knows how to draw.

Is this what you mean?
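
In code terms it could look like this (purely illustrative; no such interface
exists in CS right now, and all names here are invented):

  #include <cmath>
  #include <vector>

  struct Vert { float x, y, z; };
  struct TriMesh { std::vector<Vert> verts; std::vector<int> tris; };

  // A procedural primitive "renders" itself by producing triangles at the
  // level of detail the renderer asks for.
  struct iProceduralPrimitive
  {
    virtual ~iProceduralPrimitive () {}
    virtual void Tessellate (int lod, TriMesh& out) const = 0;
  };

  // Example: a unit cylinder (open-ended) around the y axis.
  struct Cylinder : public iProceduralPrimitive
  {
    virtual void Tessellate (int lod, TriMesh& out) const
    {
      const float PI = 3.14159265f;
      int sides = 4 + lod;                       // more LOD -> more sides
      for (int i = 0; i <= sides; i++)
      {
        float a = 2 * PI * i / sides;
        Vert bottom = { std::cos (a), 0.0f, std::sin (a) };
        Vert top    = { std::cos (a), 1.0f, std::sin (a) };
        out.verts.push_back (bottom);
        out.verts.push_back (top);
      }
      for (int i = 0; i < sides; i++)
      {
        int b0 = 2 * i, t0 = 2 * i + 1, b1 = 2 * i + 2, t1 = 2 * i + 3;
        out.tris.push_back (b0); out.tris.push_back (t0); out.tris.push_back (b1);
        out.tris.push_back (t0); out.tris.push_back (t1); out.tris.push_back (b1);
      }
    }
  };
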
__ __ _ _ __ __
_/ \__/ \__/ Seth Galbraith "The Serpent Lord" \__/ \__/ \_
\__/ \__/ \_ ***@krl.org #2244199 on ICQ _/ \__/ \__/
_/ \__/ \__/ http://www.planetquake.com/simitar \__/ \__/ \_
Aaron Drew
2000-03-28 06:03:57 UTC
Permalink

Ok, I can see now the point in having a c-buffer in the front-end of the
renderer but I'm still a skeptic. :)

It will take a lot of work to convince me that the savings made from
*possibly* culling some polygons/portals that are behind things in the world
warrant translating bounding boxes for every thing in the world and
rendering them to an internal buffer. How does this work for things such as
BSP trees? (I didn't fully grasp Jorrit's explanation.) If each object isn't
treated as a single detail, does that mean that each leaf is? 8 points
transformed for each convex hull which may contain as little as 5 vertices?
The front-end of the renderer IMO should be concerned only with major
culling of non-visible portals and things, as well as organising geometric
data into the most efficient order for rendering (ordering by texture,
triangles in strips, etc.). Again, I welcome enlightenment. I'm new to all
this.

- Aaron Drew
Jorrit Tyberghein
2000-03-28 06:46:51 UTC
Permalink
Post by Aaron Drew
Ok, I can see now the point in having a c-buffer in the front-end of the
renderer but I'm still a skeptic. :)
It will take a lot of work to convince me that the savings made from
*possibly* culling some polygons/portals that are behind things in the world
warrant translating bounding boxes for every thing in the world and
rendering them to an internal buffer. How does this work for things such as
BSP trees? (I didn't fully grasp Jorrit's explanation.) If each object isn't
treated as a single detail, does that mean that each leaf is? 8 points
transformed for each convex hull which may contain as little as 5 vertices?
The front-end of the renderer IMO should be concerned only with major
culling of non-visible portals and things, as well as organising geometric
data into the most efficient order for rendering (ordering by texture,
triangles in strips, etc.). Again, I welcome enlightenment. I'm new to all
this.
Ok. Let's have some explaining here (I really need to make a doc about this).
CS currently supports both portalized worlds and octree worlds (more about
those later). The two approaches can even be mixed but that doesn't happen
much right now.

The c-buffer visibility system is mostly useful for the octree worlds so I'll
keep this explanation to those. In an octree world you have one large sector
containing a general polygon soup (no structure is assumed). On this polygon
soup an octree is built. At some point (when the number of polygons falls below
some threshold) this switches to using BSP trees. So effectively the world
will be subdivided into one large octree with mini-BSP trees at every octree
leaf.

The visibility culling only operates on the octree node level. The BSP trees are
ignored (they are mostly there for polygon based culling and lighting).

Rendering and culling happen as follows. You traverse all nodes/polygons
from front to back. Every polygon is added to the c-buffer. Every node
is tested against the c-buffer (i.e. the convex outline is taken and that
is tested). If the test fails then the node is not visible, which means that
that octree node AND all its children, mini-BSP trees and polygons in
those BSP trees are culled with one test. So you can also stop traversing
there.

This is really very efficient.
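
In pseudo-C++ the traversal looks roughly like this (the interfaces are
invented for illustration and put polygons at every node for simplicity; they
are not the actual octree classes):

  #include <cstddef>
  #include <vector>

  struct Outline2D { /* projected convex outline of a node or polygon */ };

  struct iCBuffer
  {
    virtual ~iCBuffer () {}
    virtual bool TestOutline (const Outline2D&) const = 0;  // anything still uncovered?
    virtual void InsertPolygon (const Outline2D&) = 0;      // polygon now occludes
  };

  struct iOctreeNode
  {
    virtual ~iOctreeNode () {}
    virtual Outline2D ProjectedOutline () const = 0;        // convex outline in screen space
    virtual std::vector<Outline2D> Polygons () const = 0;   // polygons in this node
    virtual std::vector<iOctreeNode*> ChildrenFrontToBack () const = 0;
  };

  void TraverseAndCull (iCBuffer& cbuf, iOctreeNode* node)
  {
    // One test can reject the node, all its children, its mini-BSP trees
    // and every polygon inside them.
    if (!cbuf.TestOutline (node->ProjectedOutline ()))
      return;

    std::vector<Outline2D> polys = node->Polygons ();
    for (std::size_t i = 0; i < polys.size (); i++)
      cbuf.InsertPolygon (polys[i]);      // draw + feed the c-buffer (front to back)

    std::vector<iOctreeNode*> kids = node->ChildrenFrontToBack ();
    for (std::size_t i = 0; i < kids.size (); i++)
      TraverseAndCull (cbuf, kids[i]);
  }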

However, it has its problems. The c-buffer test is very good when you
can cull away an entire octree node at once. But it is still a reasonably
expensive test. I have two plans to avoid this:

1. PVS. Potentially Visible Set. This is a preconstructed set of
all visible nodes/polygons for every other node. Using this set means
that I can avoid having to do the c-buffer test for a lot of polygons
and nodes. Because if a polygon or node is not in the PVS for the node
the camera is in then you don't need to transform/project the polygon
or node in order to do the c-buffer test. This will give a GREAT
performance boost. The question here will be whether or not it
still pays off to use the c-buffer to cull the rest. For the software
renderer this will probably be the case as overdraw is VERY
expensive there. For hardware renderers we will have to see how
this turns out. It may turn out that we simply send all remaining
polygons to the hardware and let the z-buffer handle visibility.

2. Detail objects. In order to prevent having to test every single
small polygon with the c-buffer I propose to define detail objects.
Detail objects are things which do not participate in the visibility
culling except that their bounding box will be tested against the
c-buffer (after the view frustum test of course). It is even possible
that we don't do this at all. If we have a list of all detail objects
for an octree node then we can assume that all detail objects in
a node are visible if the node itself was visible using the c-buffer.
All other non-detail geometry (i.e. sector walls and large polygons)
will be used to feed the c-buffer so that culling can occur.
This overhead will be acceptable because it results in
the culling of all the rest.

So you see that there is some flexibility in this approach. We can
choose whether or not to use the c-buffer on individual polygons
or let the hardware z-buffer handle this. We can choose whether
or not we will test detail objects or just assume they are visible
when their parent node is.

I think this plan will solve a lot of the speed problems for CS in
the future. And it isn't too difficult to implement. It certainly
doesn't require a full rewrite or branch to do. I'm already busy with
the PVS right now. Detail objects will follow after that (Michael
Dale Long also said he might be interested in helping me with this).

Greetings,

--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Andrew Zabolotny
2000-03-28 09:01:39 UTC
Permalink
Post by Jorrit Tyberghein
Post by Andrew Zabolotny
So I will need to create two things: one that will go into static BSP
(__static__) and one which won't (__detail__)?
No, more than that. Every detail object should be a separate object. That's because I
plan to do visibility culling on entire detail objects. For example, if you
add two of your spheres to some world then both should be separate detail
objects. Otherwise I can only cull them both or none. I need to be able to
cull them individually.
Uh-oh. Well, I understand, however it won't be simple - I just have a general
mesh and I will have to somehow separate the polygons into a convex hull, into
"large" polygons that will go into __static__ and finally into groups of
polygons that are "close"... not easy. Only the convex hull is currently
separated from the mesh.
Post by Jorrit Tyberghein
Post by Andrew Zabolotny
need to clip against a general plane, or just against z > zmin plane?
Well both. The z plane clipper is needed most but I also need a general clipper
in some cases (i.e. floating portals).
This clipper needs to be in 3D however (as you can imagine).
Ok. I will put it into csgeom.

Greetings,
_\***@teamOS/2
Aaron Drew
2000-03-28 11:05:53 UTC
Permalink
Post by Jorrit Tyberghein
containing a general polygon soup (no structure is assumed). On this polygon
soup an octree is built. At some point (when the number of
Just out of curiosity, are the octrees calculated at startup each time, or
cached on subsequent loads?
Post by Jorrit Tyberghein
Rendering and culling happen as follows. You traverse all nodes/polygons
from front to back. Every polygon is added to the c-buffer. Every node
is tested against the c-buffer (i.e. the convex outline is taken and that
is tested). If the test fails then the node is not visible, which means that
that octree node AND all its children, mini-BSP trees and polygons in
those BSP trees are culled with one test. So you can also stop traversing
there.
This is really very efficient.
I understand how this can be efficient if you get a lot of hits. What's the
average culling rate on flarge? How well does this perform?
Post by Jorrit Tyberghein
However, it has its problems. The c-buffer test is very good when you
can cull away an entire octree node at once. But it is still a reasonably
1. PVS. Potentially Visible Set. This is a preconstructed set of
This I presume would require more preprocessing at level loading? I am
planning to work on a game that would require dynamic loading and unloading
of partial level data at run time. Preprocessing of data at load time isn't
an option for me. Are these types of requirements possible to fulfil with the
current engine? I realise it's a development platform for new ideas as much
as it is a usable engine, so allowing this may restrict progress in other
areas.
Post by Jorrit Tyberghein
2. Detail objects. In order to prevent having to test every single
small polygon with the c-buffer I propose to define detail objects.
Detail objects are things which do not participate in the visibility
culling except that their bounding box will be tested against the
c-buffer (after the view frustrum test of course). It is
Are the view frustum tests done in world coordinates?? This would save some
translations I'd assume. Do detail objects have a bounding box at all? (And
are detail objects always 3D sprites?) If the detail objects aren't used in
visibility culling then do they have to use the c-buffer at all?? I can see
one situation where a c-buffer would help: a large thing's origin is
in one sector that is around a corner (and this sector is culled) but the
object is long enough to reach past the corner into a visible area. Is
there a way to take such things into consideration using just portals? It
would save a lot of transforms if that is the case.
Post by Jorrit Tyberghein
All other non-detail geometry (i.e. sector walls and large polygons)
will be used to feed the c-buffer in order that culling can occur.
This is probably a good thing for the software renderer. Could you use
clipping planes for each portal polygon in a sector (aligned with the
camera) to cull out unnecessary geometry (and portals) and clip remaining
ones in much the same way the c-buffer works now? I'm not sure of the order
of operations here. If objects are rendered before level data then the
c-buffer has the advantage of culling non-visible polys and portals that
this method couldn't, but if not, then this method may be
a lot faster for hardware rendering, wouldn't it? Is this a hard thing to
implement with the current architecture?

- Aaron Drew
Jorrit Tyberghein
2000-03-28 11:49:35 UTC
Permalink
Post by Aaron Drew
Post by Jorrit Tyberghein
containing a general polygon soup (no structure is assumed). On this polygon
soup an octree is built. At some point (when the number of
Just out of curiosity, are the octrees calculated at startup each time, or
cached on subsequent loads?
Currently at startup time.
Post by Aaron Drew
I understand how this can be efficient if you get a lot of hits. What's the
average culling rate on flarge? How well does this perform?
I think it performs very well. Andrew already answered this part.

Of course, how well it performs depends on the level. This is not a
solution for every kind of level. It works well if you have lots of large
occluders which can cull a lot of other geometry. It doesn't work
very well in a huge open space area containing only a few things
here and there.
Post by Aaron Drew
Post by Jorrit Tyberghein
However, it has its problems. The c-buffer test is very good when you
can cull away an entire octree node at once. But it is still a reasonably
1. PVS. Potentially Visible Set. This is a preconstructed set of
This I presume would require more preprocessing at level loading? I am
planning to work on a game that would require dynamic loading and unloading
of partial level data at run time. Preprocessing of data at load time isn't
an option for me. Are these type of requirements possible to fulfil with the
current engine? I realise its a development platform for new ideas as much
as it is a usable engine so allowing this may restrict progress in other
areas.
We will need to save the octree/PVS information to disk of course. I will
first implement PVS so that we know what format this will be. Then I will
have a look at how to save this information.
Post by Aaron Drew
Post by Jorrit Tyberghein
2. Detail objects. In order to prevent having to test every single
small polygon with the c-buffer I propose to define detail objects.
Detail objects are things which do not participate in the visibility
culling except that their bounding box will be tested against the
c-buffer (after the view frustrum test of course). It is
Are the view frustum tests done in world coordinates?? This would save some
translations I'd assume.
This could be done if possible.
Post by Aaron Drew
Do detail objects have a bounding box at all? (And
are detail objects always 3D sprites?) If the detail objects aren't used in
visibility culling then do they have to use the c-buffer at all??
You only need to test the bounding box of the detail object and see if
that is visible with regards to the c-buffer. Of course it is possible that
even this is not needed on fast hardware. In that case you simply consider
all detail objects in a node visible if the node is visible. This is something
with which we're going to have to experiment to find out what is
really better.
Post by Aaron Drew
I can see
one situation whereby a c-buffer would help where a large 'thing's origin is
in one sector that is around a corner (and this sector is culled) but the
object is long enough to reach past the corner into a visible area). Is
there a way to take such things into consideration using just portals? It
would save a lot of transforms if that is the case.
I think you're missing the scope of the c-buffer. The c-buffer is currently
used in levels which have no portals at all. It is possible to combine
portals and c-buffer but this is currently not done yet (not exactly true,
but I will not spoil the surprise yet :-)

In that kind of level, visibility culling is done ENTIRELY using the c-buffer
as there are no portals to cull with. Why would you do this? Well, in general
you do this with levels which are hard to portalize. Breaking a level into portals
is not always easy and in some cases it can lead to levels with too many
portals/sectors (which is inefficient in itself). So the octree/c-buffer approach
was created for that purpose.

Of course, I think that a combination of both approaches will probably be best.
Post by Aaron Drew
Post by Jorrit Tyberghein
All other non-detail geometry (i.e. sector walls and large polygons)
will be used to feed the c-buffer in order that culling can occur.
This is probabily a good thing for the software renderer. Could you use
clipping planes for each portal polygon in a sector (aligned with the
camera) to cull out unnecessary geometry (and portals) and clip remaining
ones in much the same way the c-buffer works now?
Well this is possible. Currently this clipping/culling happens in 2D but it
would be possible to do this in 3D. This is also in my 'virtual todo' :-)


Note that I'm planning to write a big doc about this very soon now.
I think that too many people have little idea about what CS can do in this
regard and what is planned for the (near) future.

Expect this document in less than a week or so.

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Andrew Zabolotny
2000-03-28 11:29:29 UTC
Permalink
Post by Aaron Drew
Post by Jorrit Tyberghein
soup an octree is built. At some point (when the number of
Just out of curiousity, Are the octrees calculated at startup each time, or
cached on subsequent loads?
Currently it's done each time, but in the future they should of course be cached
on VFS.
Post by Aaron Drew
I understand how this can be efficient if you get a lot of hits. What's the
average culling rate on flarge? How well does this perform?
flarge is a manually portalized level, thus it only loses from culling
algorithms. But levels that contain lots of non-portalized polygons (such
as dmburg) gain a lot. Try loading the dmburg level, then press <c>. You will see
how the framerate jumps from 0.5 fps to 10-20 fps.

Greetings,
_\***@teamOS/2
Jorrit Tyberghein
2000-03-28 11:50:04 UTC
Permalink
Post by Andrew Zabolotny
Post by Aaron Drew
I understand how this can be efficient if you get a lot of hits. What's the
average culling rate on flarge? How well does this perform?
flarge is a manually portalized level, thus it only loses from culling
algorithms. But for levels that contains lots of non-portalized polygons (such
as dmburg) gain a lot. Try to load dmburg level, then press <c>.
Correction: press 'd'.

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Seth Galbraith
2000-03-29 01:30:07 UTC
Permalink
Post by Aaron Drew
This, I presume, would require more preprocessing at level loading? I am
planning to work on a game that would require dynamic loading and
unloading of partial level data at run time. Preprocessing of data at
load time isn't an option for me. Are these types of requirements
possible to fulfil with the current engine? I realise it's a
development platform for new ideas as much as it is a usable engine, so
allowing this may restrict progress in other areas.
What preprocessing is currently done at load time?
How much of this could be cached?
__ __ _ _ __ __
_/ \__/ \__/ Seth Galbraith "The Serpent Lord" \__/ \__/ \_
\__/ \__/ \_ ***@krl.org #2244199 on ICQ _/ \__/ \__/
_/ \__/ \__/ http://www.planetquake.com/simitar \__/ \__/ \_
Jorrit Tyberghein
2000-03-29 05:59:10 UTC
Permalink
Post by Seth Galbraith
Post by Aaron Drew
This, I presume, would require more preprocessing at level loading? I am
planning to work on a game that would require dynamic loading and
unloading of partial level data at run time. Preprocessing of data at
load time isn't an option for me. Are these types of requirements
possible to fulfil with the current engine? I realise it's a
development platform for new ideas as much as it is a usable engine, so
allowing this may restrict progress in other areas.
What preprocessing is currently done at load time?
How much of this could be cached?
Octree/BSP and PVS generation is currently done at startup time
(well, not PVS yet, but it will be once I finish it). I plan to be able to save all
of that into VFS.
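To give an idea of what such caching could look like, here is a very small
sketch. The Node layout and the flat file format are invented for illustration
(a real version would go through VFS, check that the level geometry hasn't
changed, and deal with endianness):

#include <cstdio>
#include <cstdint>
#include <vector>

// An octree stored as a flat array of nodes, so it can be dumped and
// reloaded without any pointer fixups.
struct Node
{
  float min[3], max[3];          // bounding box of this node
  int32_t children[8];           // indices into the node array, -1 = none
  int32_t first_poly, num_poly;  // slice of a separate polygon index list
};

bool SaveOctree (const char* path, const std::vector<Node>& nodes)
{
  FILE* f = std::fopen (path, "wb");
  if (!f) return false;
  const uint32_t magic = 0x4F435431;              // "OCT1"
  uint32_t count = (uint32_t)nodes.size ();
  bool ok = std::fwrite (&magic, sizeof magic, 1, f) == 1
         && std::fwrite (&count, sizeof count, 1, f) == 1
         && std::fwrite (nodes.data (), sizeof (Node), count, f) == count;
  std::fclose (f);
  return ok;
}

// If the cache exists and looks sane, the expensive octree/BSP build at
// startup can be skipped entirely.
bool LoadOctree (const char* path, std::vector<Node>& nodes)
{
  FILE* f = std::fopen (path, "rb");
  if (!f) return false;
  uint32_t magic = 0, count = 0;
  bool ok = std::fread (&magic, sizeof magic, 1, f) == 1 && magic == 0x4F435431
         && std::fread (&count, sizeof count, 1, f) == 1;
  if (ok)
  {
    nodes.resize (count);
    ok = std::fread (nodes.data (), sizeof (Node), count, f) == count;
  }
  std::fclose (f);
  return ok;
}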

Greetings,


--
==============================================================================
***@uz.kuleuven.ac.be, University Hospitals KU Leuven BELGIUM

Five exclamation marks, the sure sign of an insane mind.
-- (Terry Pratchett, Reaper Man)
==============================================================================
Aaron Drew
2000-03-29 06:00:54 UTC
Permalink
Lighting is currently done at load time in CS and I presume PVS and Octree
calculations would also be. There is nothing stopping this from being cached
though (unless there are engine restrictions). I was just wondering how much
is cacheable in the current codebase.
Post by Aaron Drew
-----Original Message-----
From: Seth Galbraith
Sent: Wednesday, March 29, 2000 11:30 AM
Subject: RE: [CsMain] Plans for the engine: speeding things up
Post by Aaron Drew
This, I presume, would require more preprocessing at level loading? I am
planning to work on a game that would require dynamic loading and
unloading of partial level data at run time. Preprocessing of data at
load time isn't an option for me. Are these types of requirements
possible to fulfil with the current engine? I realise it's a
development platform for new ideas as much as it is a usable engine, so
allowing this may restrict progress in other areas.
What preprocessing is currently done at load time?
How much of this could be cached?
Paul Garceau
2000-03-29 23:04:05 UTC
Permalink
Post by Jorrit Tyberghein
Post by Paul Garceau
Hi folks,
Starting to get an image of the rendering concepts. Need some
feedback to refine that image... please forgive the length of the
message... I needed to get it all out here in the interest of
accurate feedback.
On 27 Mar 00, at 13:50, the Illustrious Jorrit Tyberghein
Post by Jorrit Tyberghein
CS has no concept of spheres at this moment.
This puzzles me. Exactly what are the primitives that CS understands?
Polygons. That's it :-)
- Bezier curves: tessellated to triangles depending on LOD.
- 3D sprites: basically a LOD/skeletal/frame-animation triangle mesh.
- 2D sprites: a single polygon facing the camera.
- Thing: a set of 3D polygons.
- Sector: a set of 3D polygons.
So in fact Bezier curves are the only real high-level primitive
that CS has. The rest is just a collection of triangles/polygons.
The curve system in CS is very general, so it would be possible
to add spheres as a new primitive object and let them use the curve
system to tessellate to triangles.
I think this may be what I am driving at; if spheres can be
added as a new primitive object, why can't the other primitives
(i.e. cubes, cones, triangular prisms, tori and cylinders) also
be implemented in a similar manner?
Post by Jorrit Tyberghein
Post by Paul Garceau
I may be missing something here, but I had assumed that the
primitives that CS could understand were cubes, spheres, cones
and polygons.
CS does not know about cubes, spheres, and cones. It is not a CSG
library.
Understood.
Post by Jorrit Tyberghein
Post by Paul Garceau
Granted, cubes, spheres and cones can all be assembled from
polygons, but is it really the most effective and efficient way
to deal with cubes, spheres and cones? In other words, is it
really most efficient to deal with the high-level primitives as a
collection of polygons?

Again, I may be missing something here, but wouldn't judicious
use of high-level primitives increase the fps by reducing the
number of polygons that the engine needs to deal with?
Post by Jorrit Tyberghein
Well, in the end you'll have to send polygons or triangles to the
hardware, so that's the end situation that you have to deal with.
In other words, the engine needs to deal with the high-level
primitives in the form of polygons or triangles in order to get
efficient hardware output, right?
Post by Jorrit Tyberghein
I can see the use/need for high-level libraries sitting on top of
CS that would support those kinds of objects. But CS internally
cannot really benefit much from it (except for the curve system).
The curve system is very general, so it is pretty forgiving.
That is why it might benefit from the use of high-level
primitives, right?
Post by Jorrit Tyberghein
Post by Paul Garceau
Jorrit mentioned that CS has no concept of spheres. Ok. The
next thought that came into my mind was: Is there some reason
why there is such a limited availability of primitive
recognition for CS (keeping in mind the amount of free time
folks have to work on CS)?
The most important reason is that the CS rendering loop is
based mostly on polygons.
Ok... thanks for the clarification, Jorrit.
Post by Jorrit Tyberghein
Post by Paul Garceau
Is it possible to have a list of pre-defined primitives for CS
which the rendering engine works from when generating an image
or scene?
I think this is possible but I don't see the need to bring the
knowledge of the primitives into CS itself.
I understand.
Post by Jorrit Tyberghein
I don't really see how that can benefit anything. In the case of
detail objects (which can be anything really: curves, boxes, ...)
CS will deal with them through a normal bounding box, so
visibility testing will be done on the bounding box only.
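As a rough illustration of what that means in practice (types invented here,
not actual CS code):

// Cull a detail object on its bounding box alone; the polygons or curves
// inside it are never even looked at unless the box passes.
struct Plane { float nx, ny, nz, d; };   // nx*x + ny*y + nz*z + d >= 0 is "inside"
struct Box   { float minx, miny, minz, maxx, maxy, maxz; };

bool BoxVisible (const Plane* planes, int num_planes, const Box& b)
{
  for (int i = 0; i < num_planes; i++)
  {
    const Plane& p = planes[i];
    // Take the corner of the box furthest along the plane normal; if even
    // that corner is outside, the whole box is outside this plane.
    float x = p.nx >= 0 ? b.maxx : b.minx;
    float y = p.ny >= 0 ? b.maxy : b.miny;
    float z = p.nz >= 0 ? b.maxz : b.minz;
    if (p.nx*x + p.ny*y + p.nz*z + p.d < 0) return false;
  }
  return true;   // not rejected: go on and draw the detail object itself
}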
Thanks, Jorrit!

Peace,

Paul G.
Nothing real can be threatened.
Nothing unreal exists.
Seth Galbraith
2000-03-30 04:33:30 UTC
Permalink
I think this may be what I am driving at; if spheres can be added as a
new primitive object, why can't the other primitives (i.e. cubes,
cones, triangular prisms, tori and cylinders) also be implemented in a
similar manner?
They can. The only question is how: how general do we want the system
to be? How will it work? What can it do?

Some types of primitives (prisms, cubes, etc.) can be generated from
existing geometry types - they should not be known to the engine, but
simply be a part of the editor, or possibly a world parser. Our current
world loader has a few features like that. This is not a very important
issue.
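As an illustration of that kind of expansion - a loader or editor turning a
"box" primitive into ordinary polygons, so the engine never has to know about
boxes. All names are invented here, and the winding assumes counter-clockwise
seen from outside in a right-handed system; flip it if the engine's convention
differs:

#include <vector>

struct Vec3 { float x, y, z; };
struct Poly { int v[4]; };   // indices into a vertex table

void MakeBox (const Vec3& mn, const Vec3& mx,
              std::vector<Vec3>& verts, std::vector<Poly>& polys)
{
  int base = (int)verts.size ();
  // 8 corners: bit 0 selects x, bit 1 selects y, bit 2 selects z.
  for (int i = 0; i < 8; i++)
    verts.push_back ({ (i & 1) ? mx.x : mn.x,
                       (i & 2) ? mx.y : mn.y,
                       (i & 4) ? mx.z : mn.z });
  static const int faces[6][4] = {
    { 0, 2, 3, 1 },   // -z
    { 4, 5, 7, 6 },   // +z
    { 0, 1, 5, 4 },   // -y
    { 2, 6, 7, 3 },   // +y
    { 0, 4, 6, 2 },   // -x
    { 1, 3, 7, 5 }    // +x
  };
  for (int j = 0; j < 6; j++)
    polys.push_back ({ { base + faces[j][0], base + faces[j][1],
                         base + faces[j][2], base + faces[j][3] } });
}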

Other primitives - anything based on circles, for example - cannot be
easily represented by any of our current geometry types (polygons or
Beziers). Instead we should create a new curve type - or several curve
types - for these.

How will these curves be represented? What types of curves can be created
with this system? Maybe you can make a cylinder, sphere, cone, or donut.
Can you make an irregular cross section of a cylinder? Can you make an
arbitrary section of a donut? Can you make a partial sphere or cone?
CS does not know about cubes, spheres, and cones.
It is not a CSG library.
Yes, but how can we make it dynamically tessellate the meshes that represent
the surface of a sphere or cone?
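One possible answer, sketched here with invented names rather than the real
curve interface: regenerate the triangle mesh from the sphere's parameters
whenever the LOD changes (the LOD itself would come from distance or
projected size):

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };

// 'lod' is simply the number of latitude bands here; the triangles touching
// the poles degenerate, which a real implementation would special-case.
void TesselateSphere (const Vec3& center, float radius, int lod,
                      std::vector<Vec3>& verts, std::vector<Tri>& tris)
{
  const float PI = 3.14159265f;
  int rings = lod < 2 ? 2 : lod;       // latitude bands
  int segs  = rings * 2;               // longitude segments
  verts.clear (); tris.clear ();
  for (int r = 0; r <= rings; r++)
  {
    float phi = PI * r / rings;        // 0 .. pi, pole to pole
    for (int s = 0; s <= segs; s++)
    {
      float theta = 2 * PI * s / segs; // 0 .. 2*pi around the axis
      verts.push_back ({ center.x + radius * std::sin (phi) * std::cos (theta),
                         center.y + radius * std::cos (phi),
                         center.z + radius * std::sin (phi) * std::sin (theta) });
    }
  }
  // Two triangles per cell of the latitude/longitude grid.
  for (int r = 0; r < rings; r++)
    for (int s = 0; s < segs; s++)
    {
      int i0 = r * (segs + 1) + s, i1 = i0 + 1;
      int i2 = i0 + (segs + 1),    i3 = i2 + 1;
      tris.push_back ({ i0, i2, i1 });
      tris.push_back ({ i1, i2, i3 });
    }
}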
Granted, cubes, spheres and cones can all be assembled from polygons,
but is it really the most effective and efficient way to deal with
cubes, spheres and cones?
Actually you can't assemble a sphere or cone from polygons. Think about
it - there are never enough polygons (even with hardware T&L, you'll just
want more spheres and cones in the long run :-)
In other words, the engine needs to deal with the high level
primitives in the form of polygons or triangles in order to have an
efficient hardware output, right?
No, the engine needs to send polygons and triangles to the hardware. The
engine can "deal with" other sorts of geometry. This is what "curves" are
about. They are rendered with polygons that are generated from the curve
info.
Jorrit mentioned that CS has no concept of spheres. Ok. The
next thought that came into my mind was: Is there some reason
why there is such a limited availability of primitive
recognition for CS (keeping in mind the amount of free time
folks have to work on CS)?
Two possibilities:

1. The technically advanced Crystal Space types probably don't want to
waste their time on childish building block type stuff :-)

2. The existing primitive support is built into the loader. Instead
perhaps it could become a separate small library of its own - if anyone
is interested in the idea.
__ __ _ _ __ __
_/ \__/ \__/ Seth Galbraith "The Serpent Lord" \__/ \__/ \_
\__/ \__/ \_ ***@krl.org #2244199 on ICQ _/ \__/ \__/
_/ \__/ \__/ http://www.planetquake.com/simitar \__/ \__/ \_
Jason Platt
2000-03-31 01:38:57 UTC
Permalink
Ok, you've got me... What's a metaball?
Basically you use balls as control points for a curve. So an egg shape
could be made from two different-sized balls with a loosely attached
curve. A person can be modeled by adding balls representing muscles and
bony or fatty protrusions, and so on. I don't know a lot about it.
Ohh, ahh, sounds too complex to be really useful.
Myself, I would stay away from using conics; it's easy to do really
nasty things with them, although they are very flexible.
Okay, you have my permission to stay a safe distance from conics so you
don't hurt yourself with any of that nasty flexible stuff :-)
But seriously, what sort of things about conics worry you?
Oh, just that I had a great knack for making conic formulas that would
take infinite time to calculate a simple shape.
As I see things, what people want is a primitive that can be passed to
the engine, which will use LOD to either upsample or downsample the
number of points used to create curved surfaces such as cones,
cylinders, spheres, etc.
Yes, a new curve type or types, but it only needs to "upsample" :-)
(BTW - we have 2 "downsampling" types: terrain meshes and sprites.)
__ __ _ _ __ __
_/ \__/ \__/ Seth Galbraith "The Serpent Lord" \__/ \__/ \_
_/ \__/ \__/ http://www.planetquake.com/simitar \__/ \__/ \_
=====
Jason Platt.

"In theory: theory and practice are the same.
In practice: they arn't."

ICQ# 1546328

Peter Ashford
2000-03-31 02:46:55 UTC
Permalink
Post by Jason Platt
Ok, you've got me... What's a metaball?
Basically you use balls as control points for a curve. So an egg shape
could be made from two different-sized balls with a loosely attached
curve. A person can be modeled by adding balls representing muscles and
bony or fatty protrusions, and so on. I don't know a lot about it.
Ohh, ahh, sounds too complex to be really useful.
I've implemented metaballs in a ray-tracer. They're quite good for
modelling, or for photo-realistic rendering when you don't care too much
about speed.

They're not really complex at all - in fact, they're really easy to
program, but creating output for polygonal renderers in real time
would be quite hard - I'm not sure if there are any algorithms out there
that do it fast enough for real-time use.
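For reference, the field function itself really is simple; the expensive part
is turning the isosurface into triangles (e.g. with marching cubes) fast
enough. A tiny sketch with invented names:

#include <vector>

struct Ball { float x, y, z, strength; };

// Each ball contributes a field that falls off with distance; the blobby
// surface is where the summed field crosses a threshold.
float Field (const std::vector<Ball>& balls, float px, float py, float pz)
{
  float f = 0.0f;
  for (size_t i = 0; i < balls.size (); i++)
  {
    float dx = px - balls[i].x, dy = py - balls[i].y, dz = pz - balls[i].z;
    float d2 = dx*dx + dy*dy + dz*dz + 1e-6f;   // avoid division by zero
    f += balls[i].strength / d2;                // classic 1/r^2 falloff
  }
  return f;
}

// A point is "inside" the blobby shape when the field exceeds the threshold.
bool Inside (const std::vector<Ball>& balls, float px, float py, float pz,
             float threshold)
{
  return Field (balls, px, py, pz) >= threshold;
}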

Peter.
