Discussion:
Question about dealing with content creation for large worlds
Juan Linietsky
2012-11-24 20:50:02 UTC
Permalink
This is a semi on-topic, semi off-topic question. I can't find much
information on creating pipelines for producing 3D content for large worlds.
I'm finding that artists modelling the "shape" of such worlds (primitives)
of cities, buildings, or even natural places such as canyons or valleys
is done pretty quickly; they usually do it in a matter of days. Artists
painting textures and detail onto them is also done quickly with today's
tools, but the absolute bottleneck is UV mapping.
The people doing the modelling take a long time UV mapping everything, and artists are
often not happy with the resulting UVs. Unwrapping architectural geometry in
most 3D apps produces suboptimal results. There is also the problem of
texture streaming, for which, to achieve good quality, textures should
somehow be split evenly across world areas like quadrants or octants, I
think.

I looked into Carmack's megatexturing approach, but it seems
unnecessarily complex IMO, and as far as I can tell there are no tools for it that artists
can use commercially.
So, I'm looking at some intermediate step, where a tool (or at least an
algorithm) can take a giant, untextured world (like a city) and either:
a) Let artists paint directly on it (kind of like Ptex?), then flatten and
export to zoned models with textures and UVs.
b) Just slice up the world into zones/octants, generate uniform UVs and
export?

In any case, I'm really ignorant about how this works in large-scale games,
hence my question.

Thanks!

Juan Linietsky
Eric Chadwick
2012-11-24 22:52:14 UTC
Permalink
UV mapping buildings is not the bottleneck, in my experience. Keeping draw
calls within limits and keeping memory from ballooning are the most
difficult parts. Having a clear idea of how the buildings will be seen by the
player, and handled by the engine, is the key determinant of how they should
be created by the artists. Do they need interiors? Does the player climb
them? Etc.

Once that's clear, creating the actual content is fairly straightforward.

Depending on the kind of game, buildings are often done in a modular
fashion. There's a nice visual breakdown here.
http://www.chrisalbeluhn.com/Building_Layout_Guideline_Tutorial.html

Modular UVs are pretty well understood by artists these days, and the tools
are fairly robust. Maybe the artists you're working with aren't very
familiar with UV tools or process?

This tutorial should help artists understand the UV process.
http://www.philipk.net/tutorials/modular_sets/modular_sets.html

This one, although not free, is very good as well, and covers much more.
http://www.3dmotive.com/training/udk/modular-building-workflow/

We have a lot of articles here about modular workflow. This wiki has been
built for game artists in particular.
http://wiki.polycount.com/CategoryEnvironmentModularity

Hope this helps.
Eric


Juan Linietsky
2012-11-24 23:35:34 UTC
Permalink
Hi Eric! Thanks for the answer!

About memory, I believe streaming textures mostly gets rid of that
problem. I did some experiments in modular design with decent results a few
years ago: http://www.youtube.com/watch?v=TKmt1SfCTgM. This was really cool
because I could use instancing for drawing the modules, and performance was
superb even on low-end hardware.

But I'm trying to experiment with less conventional-looking buildings, as
in, more artistic or more apocalyptic. I guess it's in this case that
environments become less modular and UV mapping a huge scene becomes
more difficult. As an example, imagine something like this:

http://www.hdgamewallpaper.com/wallpapers/rage-city-1280x800.jpg, where
no UVs are reused (id's Rage)

Or this, as something more artistic, where artists paint directly over it:

http://th01.deviantart.net/fs71/200H/i/2012/167/8/a/town_from_the_dream___leonid_afremov_by_leonidafremov-d53ojlz.jpg

I know Carmack made a tool for this, but it's a special tool that artists
use for painting. To keep draw calls low, I guess one could collapse far-away
octants into coarser, larger LODs or something like that.



Jon Watte
2012-11-26 21:41:25 UTC
Permalink
>
> I looked into Carmack's megatexturing approach, but it seems
> unnecessarily complex IMO, and as far as I can tell there are no tools for it that artists
> can use commercially.


There exists a commercial implementation; it's called "id Tech 5" if I
remember right :-)

You really have two questions here:
1) Do you want unique texturing of everything?
2) What kind of view distance, level of detail, and memory requirements do
you have?

The problem with game art has always been, not creating "the art," but
making the very best use of available technology. That has always been
about knowing where the player will be, what the player will view, and what
can be re-used versus not. Rust decals, stucco normal maps, trim, rocks,
etc.

By the way, the "brute force" solution of unique texturing everywhere is
something that milsim and Google Earth does, and has done for a long time.
It requires lots and lots and LOTS of storage, and still locks worse than a
game for any particular place on Earth, because it's not highly tuned for a
particular play-through path. Yet, if you're doing, say, flight simulators,
those techniques will work well.


Sincerely,

Jon Watte


--
"I pledge allegiance to the flag of the United States of America, and to
the republic for which it stands, one nation indivisible, with liberty and
justice for all."
~ Adopted by U.S. Congress, June 22, 1942



Sylvain Vignaud
2012-11-27 01:55:29 UTC
Permalink
On Sat, Nov 24, 2012 at 12:50 PM, Juan Linietsky <***@gmail.com> wrote:


I looked into Carmack's megatexturing approach, but it seems unnecessarily complex IMO, and as far as I can tell there are no tools for it that artists can use commercially.


It's actually not that hard, really, if you generate the virtual texture(s) from common artists' data rather than through a special artist tool. It took me a bit more than a week to implement a virtual 3D texture system to display CT scans of a human body that would otherwise not fit in video memory.



Here's a short description of the way I implemented it:

Pixel shader part:
- Instead of giving a shader one albedo texture, you give it two: the first is a map texture, which you read using the normal UVs. It tells you where to read in the second, content texture. The map is much smaller than the original texture; it needs only one texel per piece of geometry (a larger map can be used so that several objects share texture space). Depending on how you share texture space between objects (see texture atlases), some objects may require several texels.
- Then compute: UV in the content texture = (base UV mod slot size) * scaling + offset. You get "scaling" and "offset" from the map texture. (A rough sketch in code follows below.)
- To get correct bilinear filtering: after you mod your UV inside the slot size, clamp it between +epsilon and 1-epsilon before adding the offset, so you never get bilinear filtering blur across slot boundaries. In theory this should create visibly sharp lines in the texture on screen, but in practice I couldn't see any in my 3D virtual texture with 3D bilinear filtering. It might need more effort (manual bilinear filtering, or adding borders around slots) in a more generic case.
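
To make the indirection concrete, here is a rough CPU-side C++ sketch of that lookup. The SlotMapEntry struct and the constants are my own illustrative assumptions, not taken from the actual implementation, and in practice this math lives in the pixel shader:

    #include <algorithm>
    #include <cmath>

    // Hypothetical contents of one texel of the indirection ("map") texture:
    // where the slot lives in the content texture and how far it extends.
    struct SlotMapEntry {
        float offset_u, offset_v; // top-left corner of the slot in the content texture
        float scale_u, scale_v;   // extent of the slot in the content texture
    };

    // Assumed constants for the sketch.
    const float kSlotSize = 1.0f / 16.0f;  // slot size in base-UV space
    const float kEpsilon  = 0.5f / 128.0f; // half a texel of an assumed 128-texel slot

    // Translate a base UV into a UV in the content texture, clamping away from
    // slot borders so bilinear filtering never blends across neighbouring slots.
    void virtual_texture_uv(float u, float v, const SlotMapEntry& e,
                            float& out_u, float& out_v) {
        // "base UV mod slot size", normalised to 0..1 inside the slot
        float lu = std::fmod(u, kSlotSize) / kSlotSize;
        float lv = std::fmod(v, kSlotSize) / kSlotSize;
        // clamp between +epsilon and 1-epsilon to avoid bleeding at slot edges
        lu = std::clamp(lu, kEpsilon, 1.0f - kEpsilon);
        lv = std::clamp(lv, kEpsilon, 1.0f - kEpsilon);
        // scale into the slot and add its offset in the content texture
        out_u = e.offset_u + lu * e.scale_u;
        out_v = e.offset_v + lv * e.scale_v;
    }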

CPU part:
- Stream texture slots from disk to system memory for everything within some distance of the camera. I prefer not to take the camera's direction into account at this point.
- Send the slots required for the current rendering, plus a bit more around it, from system memory to video memory.
- Create map textures in VRAM for each object/mipmap to be displayed.
- When displaying an object, give its shader its map texture and the corresponding content texture.

Offline (or during object loading, if you have enough memory and available CPU; a small sketch follows below):
- For each texture/mipmap, cut it into slots of the size your system uses.
- Store the list of virtual texture slots per object.
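
A minimal sketch of that offline slot-cutting step, under the assumption of a fixed slot size and an RGBA8 layout; the TextureSlot record and the on-disk layout are made up for illustration:

    #include <cstdint>
    #include <vector>

    // Hypothetical record describing one slot cut from a source texture/mip level.
    struct TextureSlot {
        uint32_t texture_id;  // which source texture this slot came from
        uint32_t mip;         // mip level the slot was cut from
        uint32_t x, y;        // slot coordinates within that mip, in slot units
        uint64_t file_offset; // where the slot's texels are stored on disk
    };

    // Cut one mip level (width x height texels) into fixed-size slots and append
    // them to the object's slot list, so the runtime can stream slots near the
    // camera from disk to system memory, and from there to VRAM, on demand.
    void cut_mip_into_slots(uint32_t texture_id, uint32_t mip,
                            uint32_t width, uint32_t height, uint32_t slot_texels,
                            uint64_t& file_cursor,
                            std::vector<TextureSlot>& object_slots) {
        const uint32_t slots_x = (width  + slot_texels - 1) / slot_texels;
        const uint32_t slots_y = (height + slot_texels - 1) / slot_texels;
        const uint64_t slot_bytes = uint64_t(slot_texels) * slot_texels * 4; // RGBA8
        for (uint32_t y = 0; y < slots_y; ++y) {
            for (uint32_t x = 0; x < slots_x; ++x) {
                object_slots.push_back({texture_id, mip, x, y, file_cursor});
                file_cursor += slot_bytes; // the slot's texels would be written here
            }
        }
    }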
Juan Linietsky
2012-11-30 16:52:32 UTC
Permalink
On Mon, Nov 26, 2012 at 10:55 PM, Sylvain Vignaud <***@iit.edu> wrote:

> It's actually not that hard, really, if you generate the virtual texture(s)
> from common artists' data rather than through a special artist tool. It took me
> a bit more than a week to implement a virtual 3D texture system to display
> CT scans of a human body that would otherwise not fit in video memory.
>
>
>
The problem I'm trying to address has more to do with creating tools for
production than with rendering. For rendering, I think the approach I'm using is
the same one Unreal uses: just streaming in higher mipmap levels for objects
that are closer. For megatexturing, I think its biggest advantage is
that artists get a tool that lets them paint directly over
everything, not so much the texture indirection itself. I
guess the added advantage is that they can bake lighting extremely well.


Cheers

Juan Linietsky
Juan Linietsky
2012-11-30 16:44:28 UTC
Permalink
Hi! Thanks for the answer!

1) Do you want unique texturing of everything?


Yes. Ideally, I'd like to have a workflow that works like this (probably
either of these would work):

1) A 3D artist models a town, a section of a city, a valley or any other kind
of scenery, but does not do the UV coordinate generation, nor worry about
how to split the scene into chunks.
2) A tool processes what was modelled, splits the scene into regions
(octants or quadrants?) and generates UV maps for everything automatically.
3) A 2D artist uses an existing tool to paint over the geometry.

2) What kind of view distance, level of detail, and memory requirements do
you have?


The usual console/mobile memory requirements, I guess. To optimize the work,
I thought that in step 1) the level designer could place hint zones where
the player is expected to be, so that in step 2), zones that are too far
away could just use smaller textures and be bundled together to minimize
draw calls and improve occlusion culling.

So, my questions are mainly:

For 2), is there any recommended algorithm for doing the world subdivision
and automatic UV generation (unwrapping)? I suppose it needs to consider
that
a) it should generate more textures if there is a larger polyhedral
surface to cover in that zone






Juan Linietsky
2012-11-30 16:47:01 UTC
Permalink
Oops, I sent the mail accidentally. Continued:

For 2), is there any recommended algorithm for doing the world subdivision
and automatic UV generation (unwrapping)? I suppose it needs to consider
that
a) it should generate more textures if there is a larger polyhedral
surface to cover in that zone
b) it can generate UV coordinates by brute force that are somewhat
acceptable overall or, in the worst case, with the artist
specifying seams

For 3), which is more of a production question: which app is best for painting
over resulting models that are more architectural than organic?

Thanks!

Juan Linietsky
Jon Watte
2012-12-02 08:01:42 UTC
Permalink
Tools like Google Earth (and the military-type tools that came after the
Keyhole system that spawned GE) do something like that. It's not terribly
hard to implement, BUT it will not be as good as hand-generated UV mapping.
This is the general trade-off between large-scale automation, used for
commercial systems, and hand-tuned game art, used to squeeze the last pixel
out of limited consumer hardware.

There are a number of unwrapping algorithms, each with its own drawbacks;
you pick one whose drawbacks you can live with :-) The keywords to look
for in the research literature are "mesh parameterization."
Or you can look at the unwrap tools available in 3ds Max et al. One that
works all right is the following (a rough sketch in code follows the list):
1. Pick the biggest unused triangle. This triangle defines a reference
plane. Mark it "used."
2. Pick all unused neighbors that deviate from this triangle by less than X
degrees (dot-product normal test) and project/map those onto the reference
plane. Mark those as "used."
3. Keep doing this, excluding triangles that would overlap the already
mapped triangles.
4. When you cannot find more neighbors that match the criteria, you have a
"cluster." Repeat from 1 until there are no more unused triangles.

Another is even simpler:
1. Pick the biggest unused triangle. Mark it used.
2. Map it to an unused area of a texture.
3. Goto 1 if there are still unused triangles.

You will have to duplicate texels across the split borders in the generated
textures.

Another place to look is probably in various light map generation tools, as
they also solve a similar problem.

As for the "find cohesive sets of triangles to treat as a single thing,"
again, light mapping tools solve a problem similar to that. The "walk,
collecting triangles that don't deviate too much in normal" is also useful,
although you'll want to also include pieces that are "small" compared to
the full triangle surface set, so that you treat a full facade with
overhangs and dormers and shutters and windowsills as a single "thing."
"thickening" the selected triangles into a volume of X meters surrounding,
and including everything "small" in that volume, also is a reasonable
heuristic.

Again, let me repeat: This will *not* be as good as hand-generated UVs, and
thus your game won't look as good as the best hand-tuned AAA game with 200
artists on it. But perhaps that's not the goal.


Sincerely,

Jon Watte


--
"I pledge allegiance to the flag of the United States of America, and to
the republic for which it stands, one nation indivisible, with liberty and
justice for all."
~ Adopted by U.S. Congress, June 22, 1942


