Texturizer

Discuss the Wing Commander Series and find the latest information on the Wing Commander Universe privateer mod as well as the standalone mod Wasteland Incident project.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Texturizer

Post by chuck_starchaser »

Texturizer:
I started working, about a week ago, on a "little program" to automate a lot of the texturing work. The idea, in a nutshell, is that the texturer will feed the program a number of input textures: bakings and masks --lightmap baking, ambient occlusion and/or PRT's; and masks such as line drawings for armor panel seams, holes, rivets, depressions, boxes, pipes and wires; as well as a color-coded "materials" mask; and then the program will do all the heavy work, such as de-speckling the bakings, making the pipes appear round, computing height map and normal map, adding bumpmap's computed ambient occlusion to the standard ambient occlusion baking, converting material color codes into appropriate colors for diffuse, specular and shininess textures, adding frontal "heat rust" hues, adding dirt trails to grooves and depressions, adding air-marks around raised elements in the bumpmap, plus impacts and scratches, padding the background around UV islands, and outputting high quality reductions of the final images. Unlike using Gimp's xcf's, where at every step you lose precision due to it 8-bit per color channel limitation, this tool would use floating point precision throughout, and only at the end dither back to 8-bit per channel (24-bit). Additionally, the software would first scale up the images, typically 4x, with edge detection, and do all its internal processing at high rez, and only at the end scale back down.

Finally I got a first test to show.

Original image was Roman Lynch:

Image

I'm only processing one channel for now, green, so,

Image

And I'm only scaling up 2x, for now; but it's hard to see at full pixel resolution, so for both the reference and the test I'll do one extra scaling without interpolation...

So, here's a 2x scaling using bicubic (best) interpolation in Gimp.

Image

And here's what my code is producing

Image

Well, actually, the code is not doing even one fifth of what I'm planning for it to do. Half of the color data, right now, is coming from linear interpolation. The reason is that at the center of each original pixel I'm using first- and second-order 3x3 and 5x5 matrices for line and edge detection; but each pixel splits into four, so I need corner values to average with the center, and those corner values right now are linearly interpolated (averaged) from the four original pixels adjacent to that corner. I need to write code for 4x4 line and edge detection, to be consistent with the rest of the processing. Besides, line/edge detection is tricky --very noise-sensitive-- so the plan is to first write the detected angles and intensities to an auxiliary plane, and do a selective gaussian blur of those angle values. Haven't done that yet either. Finally, I'm planning to write a fairly decent dithering routine for when I convert back from floating point color to png format. Right now it's just rounding down.

So, you probably won't see much of a difference yet. If you want a hint for what to look for, what the code is doing (last image) is detecting the directions of thin lines or edges, and smoothly-interpolating along the direction of the line or edge; but sharpening in the direction across it. Where no lines are clearly present, it interpolates linearly (for now), which filters out noise. But like I said, the effect is being watered down 50% for the moment.
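For reference, here is a minimal sketch of the kind of dithering routine being planned for the float-to-8-bit conversion. The post doesn't name a method, so Floyd-Steinberg error diffusion is assumed here purely for illustration:

```cpp
#include <cstdint>
#include <vector>

// Quantize a single-channel float image (values 0..1) down to 8 bits,
// diffusing each pixel's rounding error to its unprocessed neighbors
// (Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16). This preserves
// average brightness instead of just rounding down.
std::vector<uint8_t> ditherTo8Bit(std::vector<float> img, int w, int h)
{
    std::vector<uint8_t> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            int i = y * w + x;
            float old = img[i];
            int q = (int)(old * 255.0f + 0.5f);
            if (q < 0) q = 0;
            if (q > 255) q = 255;
            out[i] = (uint8_t)q;
            float err = old - q / 255.0f;      // what rounding lost
            if (x + 1 < w)         img[i + 1]     += err * 7.0f / 16.0f;
            if (y + 1 < h) {
                if (x > 0)         img[i + w - 1] += err * 3.0f / 16.0f;
                                   img[i + w]     += err * 5.0f / 16.0f;
                if (x + 1 < w)     img[i + w + 1] += err * 1.0f / 16.0f;
            }
        }
    return out;
}
```

On a flat mid-gray input this produces a mix of 127s and 128s whose average stays at the true value, which is exactly what plain truncation loses.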
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Update:
I just ran it twice, for now, to get 4x scaling.

So, here's first using Gimp's cubic interpolation 4x scaling:

Image

And here's my code run twice:

Image

I've intentionally exaggerated the line detection score, so that it's easier to see what it does; but this also produces artifacts, like the sharpened lines penetrating the silhouette. And the effect is still watered down 50%, like I said before. Keep in mind, too, that saving a png at 2x scaling and reading it back in to blow it up 2x again incurs two consecutive quantization stages. Just a quick test... What it will do eventually is scale up the second time straight out of its internal floating point precision buffer.
snow_Cat
Confed Special Operative
Confed Special Operative
Posts: 349
Joined: Thu Jan 05, 2006 12:43 am
Location: /stray/
Contact:

Post by snow_Cat »

^ · . > ... ?

^ - -^ Sorry, are you exclusively using raster interpolation?
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I'm not sure what you mean by "raster"; if you mean "horizontal", the answer is definitely no. I'm applying 8 different 3x3 square matrix filters: 4 for line detection and 4 for edge detection, at angles of 0 degrees (horizontal), 45 degrees, 90 degrees and 135 degrees. Here:

Code: Select all

#include "matrix.h"
#include "aaa.h"

// Thin-line detectors ("Watkins"), one per direction; the positive
// weights lie along the line being detected. 3x3 kernels, 9 floats each.
float Watkins000[9] =   // horizontal line
{
	-0.125f, -0.250f, -0.125f,
	+0.250f, +0.500f, +0.250f,
	-0.125f, -0.250f, -0.125f,
};
float Watkins045[9] =   // 45-degree line
{
	-0.239141f, -0.239141f, +0.239141f,
	-0.239141f, +0.956564f, -0.239141f,
	+0.239141f, -0.239141f, -0.239141f,
};
float Watkins090[9] =   // vertical line
{
	-0.125f, +0.250f, -0.125f,
	-0.250f, +0.500f, -0.250f,
	-0.125f, +0.250f, -0.125f,
};
float Watkins135[9] =   // 135-degree line
{
	+0.239141f, -0.239141f, -0.239141f,
	-0.239141f, +0.956564f, -0.239141f,
	-0.239141f, -0.239141f, +0.239141f,
};
// Edge detectors (Sobel-style gradients), one per direction.
float Sobel000[9] =     // horizontal edge
{
	-0.250f, -0.500f, -0.250f,
	 0.000f,  0.000f,  0.000f,
	+0.250f, +0.500f, +0.250f
};
float Sobel045[9] =     // 45-degree edge
{
	-0.250f, -0.375f,  0.000f,
	-0.375f,  0.000f, +0.375f,
	 0.000f, +0.375f, +0.250f
};
float Sobel090[9] =     // vertical edge
{
	-0.250f,  0.000f, +0.250f,
	-0.500f,  0.000f, +0.500f,
	-0.250f,  0.000f, +0.250f
};
float Sobel135[9] =     // 135-degree edge
{
	 0.000f, +0.375f, +0.250f,
	-0.375f,  0.000f, +0.375f,
	-0.250f, -0.375f,  0.000f
};
By multiplying each of those matrices element-wise with the square area of 3x3 pixels surrounding the pixel I'm processing, and adding together the 9 products for each of them, I get 8 "scores". I interpolate between the two best Watkins and the two best Sobel matrices, in proportion to their line detection scores, to get precise line detection angles. Eventually I use anisotropic interpolation matrices to smooth along the lines but sharpen across them.
The Watkins matrices detect thin lines on a background, whereas the Sobel matrices detect edges. I can get disagreements where both a line and an edge are detected, at cross angles; so I decide which to use based on detection signal strength. Or I can get no detection at all, in which case I interpolate bilinearly. Anyhow, like I said, half the result is actually from linear interpolation; still working on it.
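The scoring and angle-interpolation steps described above could be sketched roughly like this. This is one reading of the description, not the actual code; the blending formula in particular is an assumption:

```cpp
#include <cmath>

// One 3x3 filter response: element-wise multiply the 3x3 pixel
// neighborhood by a detector kernel and sum the 9 products.
inline float score3x3(const float px[9], const float kernel[9])
{
    float s = 0.0f;
    for (int i = 0; i < 9; ++i)
        s += px[i] * kernel[i];
    return s;
}

// Copied from the post: the 0-degree (horizontal) thin-line detector.
const float kWatkins000[9] =
{
    -0.125f, -0.250f, -0.125f,
    +0.250f, +0.500f, +0.250f,
    -0.125f, -0.250f, -0.125f,
};

// Estimate a sub-45-degree angle from the four direction scores
// (assumed non-negative): take the strongest direction and blend it
// toward its stronger neighbor in proportion to that neighbor's score.
inline float interpolateAngle(const float s[4])
{
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (s[i] > s[best]) best = i;
    int left  = (best + 3) % 4;   // neighboring directions, 45 deg apart
    int right = (best + 1) % 4;
    float angle = best * 45.0f;
    if (s[right] >= s[left])
        angle += 45.0f * s[right] / (s[best] + s[right] + 1e-9f);
    else
        angle -= 45.0f * s[left]  / (s[best] + s[left]  + 1e-9f);
    if (angle <   0.0f) angle += 180.0f;   // orientations repeat every 180
    if (angle >= 180.0f) angle -= 180.0f;
    return angle;
}
```

For example, a horizontal one-pixel line scores 1.0 against the 0-degree Watkins kernel, while a vertical one scores 0; and equal 0- and 45-degree scores would land the interpolated angle halfway between.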
snow_Cat
Confed Special Operative
Confed Special Operative
Posts: 349
Joined: Thu Jan 05, 2006 12:43 am
Location: /stray/
Contact:

Post by snow_Cat »

^ - -^ no, by wiki:raster I mean square pixels sitting on fixed regular grids.

^- - ^ I think I understand your approach to the problem: you are using the existing pixels as seeds to generate the 'missing' pixel values, weighted by the existing lines. But aren't you encounter... ^· · > ...
< · .·> *beat*
< ·. · > *beat*
^-. -^ ...
^ - -^ Excuse me.
/\ _ _ /\ *listens silently*
^. . ^
^ ·.·^
^ - -^ I was going to post a link to Horizontal by the Bee Gees, but the record company hijacked my browser to play music, music unexpected.

wiki:vector
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Ah, so "raster" is like the opposite of "vector" images. I see. I always associated the term with television. Yeah, my input is a png, and my output is a png. And in-between I don't do any such thing as trying to find the ends of lines and redrawing them. I thought of doing that but gave it up after some thinking.
Sorry, no new screenshots yet; I'm refactoring and cleaning up for now, trying to set the stage for the next improvement.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Update:

4x scaling with no interpolation:

Image

With Gimp 4x bicubic:

Image

With my WIP scaler:

Image
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Another update:

With Gimp 4x bicubic:

Image

With my WIP scaler:

Image

I think this is the best it will get without adding the 4x4 matrices and selective angle blur pass, so, probably no updates for 2 or 3 days.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Small, incremental improvement: I made the sharpening factor contrast-dependent.

Image

In case you don't see it: it means less banding, as areas of similar brightness aren't so sharply sharpened in-between.
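The shape of that contrast dependence isn't given in the post, but the idea might be sketched like this (the linear ramp is an assumption; the actual curve could be anything monotonic):

```cpp
// Scale the sharpening factor by local contrast, so that areas of
// similar brightness barely get sharpened and banding between them
// is reduced, while strong edges still get the full effect.
inline float sharpenAmount(float localContrast, float maxSharpen)
{
    float t = localContrast;          // e.g. local max-min, in 0..1
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return maxSharpen * t;            // linear ramp; could be smoother
}
```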
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Colorful but inaccurate first try at color...

Image
Shissui
ISO Party Member
ISO Party Member
Posts: 433
Joined: Wed Feb 07, 2007 9:27 pm

Post by Shissui »

chuck_starchaser wrote:Colorful but inaccurate first try at color...
I like the tie clip, even if it *is* an artefact.
I want to live in Theory. Everything works in Theory.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

HAHAHA, an artefact it is. Here's some more artefacts:

Image

What's happening is this:
I thought I might get away with doing all this just once, rather than three times (for RGB), by representing colors in a format that separates luminance from chroma, such as YUV. I ended up coming up with my own color space, which I call "YLC". Y stands for Yellow, where yellow is +0.5 and blue is -0.5. C stands for Cyan, where cyan is +0.5 and red is -0.5. L stands for Luminance, and is the sum R+G+B. Y and C are divided by L, so that they are luminance-independent. Anyhow, as the outline of the pic shows, it doesn't work.

Well, actually, the chroma channels (Y and C) are scaled with no interpolation whatsoever, here (sample nearest), so it's not a really fair test. Only luminance is being selectively filtered.

But, on the other hand, filtering only luminance would imply that a jaggy line between two regions, say, red and green, would not get detected by my edge detection matrices, to the extent that the red and green shades' luminances are similar. So, I guess I'll go back to trusty old RGB space and do it all three times.

This was just an experiment. On the other hand, I needed to separate luminance from chroma for other processings I've planned, so the YLC invention is not wasted. :)
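The post gives the YLC axes but not exact formulas, so the following is a guessed reconstruction, not the actual code. Under this guess, pure yellow and pure cyan land exactly at +0.5 as described, though the negative ends land at -1 rather than -0.5, so the real normalization may differ:

```cpp
// Hypothetical reconstruction of the "YLC" space described above.
// L = R+G+B; Y is a yellow-vs-blue axis and C a cyan-vs-red axis,
// both divided by L to make them luminance-independent.
struct YLC { float y, l, c; };

YLC rgbToYlc(float r, float g, float b)
{
    float l = r + g + b;
    if (l <= 0.0f) { YLC z = {0.0f, 0.0f, 0.0f}; return z; }
    float y = ((r + g) * 0.5f - b) / l;   // pure yellow -> +0.5
    float c = ((g + b) * 0.5f - r) / l;   // pure cyan   -> +0.5
    YLC out = {y, l, c};
    return out;
}

// Exact inverse of the guess above (solving the three linear equations
// L = R+G+B, YL = (R+G)/2 - B, CL = (G+B)/2 - R for R, G, B).
void ylcToRgb(const YLC& p, float& r, float& g, float& b)
{
    r = p.l * (1.0f - 2.0f * p.c) / 3.0f;
    b = p.l * (1.0f - 2.0f * p.y) / 3.0f;
    g = p.l - r - b;
}
```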
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I needed to make one more test to be convinced. I added filtering to the color channels, though based on the luminance-derived adaptive filters --that is, I didn't do per-channel line detection and all that; I did the detection on the luminance only, but applied the same luminance-adapted filters to the chroma channels, and this is what happened...

Image

At least now I understand what the problem is:
The problem is that in my YLC color space, the chroma channels are scaled by (divided by) the luminance. Dark regions, such as the jacket, can still have large Y and C vectors. My guess is that the jacket is slightly greenish, but so dark you don't even notice it. But when the filters apply sharpening along the edge of the jacket, they have no way of knowing that the chroma of the jacket should be scaled down by its darkness, and so sharpening on the side of the whitish background produces a large magenta value, trying to enhance contrast. And that's not good.

What I'm going to try next is to go back to RGB, but rather than edge detect on all 3 channels, I'll leave the detection on the green channel only, but apply it to R and B as well, see what happens. Stay tuned...
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Okay, so here's the situation:

Image

It would seem fairly okay on the surface, but with a magnifying glass, this is what you see:

Image

And what I think is causing the problem is that, as I said before, half the data here comes from linear interpolation, and only the other half is from the line detection deal. So each pixel expands in such a way that the center follows the sophisticated anisotropic smoothing/sharpening filter, but the corners are mere averagings of adjacent pixels, which can pull the center and corner colors apart like a tug of war.
What I need to implement is 4x4 matrices for smoothing the corners anisotropically as well. But to add that to the existing code as-is would be either too inefficient or too complex. What I need to do first is implement an intermediary idea: separate the process into two passes, one that writes the detected edge angles to an internal plane, and a second pass that reads those angles and does the filtering.
Once I do that, I can add a pass in-between that writes line detection angles for the corners to yet another internal plane. And then I can finally have a common way of processing the whole data.
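The two-pass split just described could be sketched something like this; the names and layout are illustrative, not taken from the actual code:

```cpp
#include <vector>

// Pass 1 writes one detection result per source pixel into an
// auxiliary "angles plane"; pass 2 (and the planned in-between corner
// pass) then reads the plane back and applies the filtering, so every
// stage works from the same stored angle data.
struct Detection { float angleDeg; float strength; };

struct AnglesPlane
{
    int w, h;
    std::vector<Detection> data;
    AnglesPlane(int w_, int h_) : w(w_), h(h_), data(w_ * h_) {}
    Detection&       at(int x, int y)       { return data[y * w + x]; }
    const Detection& at(int x, int y) const { return data[y * w + x]; }
};
```

Storing angles once and reading them in later passes is what makes the corner matrices cheap to add: the detection cost is paid once per pixel instead of once per filter.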
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Alright, enough fine-tuning; this is as good as it gets until I do the bigger stuff...

Image

For the record, I did NOT go to full line detection per color channel. It already takes about 20 seconds to blow up the 128x128 Lynch to 512x512; doing this for all 3 channels would take a full minute. What I did was compute a more sophisticated luminance

Code: Select all

gamma_correction_of ( 0.30*R + 0.54*G + 0.16*B )
and stick it into the alpha channel. So my line detection works off the alpha channel, and the anisotropic filters it produces are then applied to all 3 color channels.
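That luminance might look like the following in code. The weights are from the post; the gamma value (1/2.2) is an assumption, since the post only says "gamma_correction_of":

```cpp
#include <cmath>

// Luminance used for line detection, stored in the alpha channel:
// a weighted RGB sum (weights from the post, summing to 1.0) followed
// by gamma correction. The 2.2 exponent here is assumed.
inline float detectionLuma(float r, float g, float b)
{
    float y = 0.30f * r + 0.54f * g + 0.16f * b;
    return std::pow(y, 1.0f / 2.2f);
}
```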
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Done separating line detection from the rest of the scaling routine. Now the line detection routine writes angles to an internal plane. Just for a quick test, I decided to try a blur pass on the angles, to try and reduce the angular noise, and here's the result:

Image

You'll notice the glint on the eye there got stretched vertically... Well, it seems that the larger the blur radius of my angle filter, the more slitted and cat-like the eyes get. I don't know why; but I think I know a way to fix it...

UPDATE: Fixed...

Image

Next is the 4x4 matrices...
No. Next is avoiding recomputing the angles in the second 2x scaling. Angles should only be detected on the first pass and then interpolated...
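As an aside, blurring raw angle values is tricky because orientations wrap around at 180 degrees. One standard trick (purely illustrative -- the post doesn't say what the actual fix was) is to double the angles so the 180-degree period becomes 360, average them as unit vectors, and halve back:

```cpp
#include <cmath>

// Blend two orientations (degrees, period 180) with weights wa, wb.
// Doubling the angles before averaging avoids the wraparound error
// where 179 and 1 would naively average to 90 instead of ~0.
inline float blendOrientation(float a, float b, float wa, float wb)
{
    const float k = 3.14159265358979f / 180.0f;   // degrees -> radians
    float x = wa * std::cos(2.0f * a * k) + wb * std::cos(2.0f * b * k);
    float y = wa * std::sin(2.0f * a * k) + wb * std::sin(2.0f * b * k);
    float r = 0.5f * std::atan2(y, x) / k;        // back to degrees
    if (r < 0.0f) r += 180.0f;
    return r;
}
```

The same doubled-angle representation extends to a full gaussian blur of the angles plane: blur the (cos 2a, sin 2a) vector pair and recover the angle afterward.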
Zeog
ISO Party Member
ISO Party Member
Posts: 453
Joined: Fri Jun 03, 2005 10:30 am
Location: Europe

Some background information

Post by Zeog »

I'd like to supply a couple of links for everybody who is as clueless as I am about what chuck is actually doing.

What is so hard about upscaling an image?
http://www.cambridgeincolour.com/tutori ... lation.htm

Play around and see scaling algorithms in action:
http://www.cambridgeincolour.com/tutori ... gement.htm

@chuck:
It appears that "Genuine Fractals" and the not yet released "SmartEdge" are the best known scaling algorithms so far. Unfortunately they are patented...
( http://www.dyetrans.com/design_software ... actals.php )
Also, I remember that various emulators (the ones that enable you to play old Game Boy games etc. on your box) come with various upscaling filters that do a pretty good job of creating relatively sharp high-res pictures. Perhaps it is worth taking a look at their source code, from http://www.zsnes.com/ for example.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Thanks; good links. Yeah, looks like SmartEdge is some tough competition to go up against. I just compared my program on the same cat pic, from your second link, and I'm nowhere near that quality yet.

SmartEdge's cat:

Image

My cat:

Image

On the other hand, I did try to reduce, and even reverse, sharpening at lower contrasts, because it looked better with Lynch's pic; so maybe I'd get a lot closer to SE's quality by removing some hacks. IOW, maybe my edges are a bit too smart in some way.

But in any case, I'm nowhere nearly done yet; like I've said, half my pic is from linear interpolation, until I implement those 4x4 matrices; so we'll see...
And using fractals to increase detail did cross my mind, too; I just don't know enough about fractals to implement it. Another idea that crossed my mind was to jpeg-ize the data, and then try to edit the jpeg representation to add harmonics... But jpeg is patented, so I'd have to do my own Fourier analysis.

EDIT:
Just to clarify: I'm not working on any kind of real-time scaling related idea. The purpose of this work is to have a means to scale up old game textures and screens to speed up work in WCU, and to serve as front end to my texturizer project. Strictly for off-line work.
hurleybird
Elite
Elite
Posts: 1671
Joined: Fri Jan 03, 2003 12:46 am
Location: Earth, Sol system.
Contact:

Post by hurleybird »

Chuck, keep in mind that image enhancement for photos is different from image enhancement for old games. What looks good on one might not look good on the other, and vice-versa. For example, your scaler doesn't get as much detail on the cat, so it looks worse than SmartEdge on that picture. However, I'd be willing to bet your scaler looks better on Lynch, as that extra 'detail' would come across as noise.

I'm really liking your latest screenshot. Very nice gradient on Lynch's skin. Edges are looking pretty good but could be improved. Notice the discoloration where Lynch's shirt meets his neck.

EDIT: Either something's up with that cat picture or your scaler just sucks at photos (I'm guessing the former), because even the bicubic cat on that site looks better than yours.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Yes, indeed, that bluish creep on the edge of the shirt's collar is a mystery; it comes and goes as I tweak things, but I can never seem to zero in on exactly what affects it. Right now it looks a bit worse (more bluish) than in the last test, and I don't know how to get it back. And yes, the edges are still pretty jaggy, but like I said, half the data here comes from linear interpolation. I need those 4x4 corner matrices to nail that down, but for that I need to implement a few other things first. And you're quite right that this may be better tuned to hand-painted textures than to kitty pics; but to be fair, I'm sure SmartEdge would do a better job on Lynch than my WIP does, as it is for the moment. But that will change... (hopefully today, though I've got laundry to do, and haven't had a bite to eat since yesterday).

EDIT:
There's actually a bit of a longer story with the cat. My scaler algorithm only scales by powers of two, but the scaling shown is 250%, so I scaled up 400% using my WIP, and then scaled down to 62.5% using Gimp's bicubic. Then again, my 400% cat sucks no matter what.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Alright! 4x4 matrices implemented --though in a terribly inefficient way... Each gets computed 4 times over... But I was too curious to see how much of a difference they'd make.

Image

Thing is, though, all the corner matrices follow the same line detection angle and intensity as the central pixel --no interpolation at all for the angles. And needless to say, it needs more sharpening; but my time is running out for laundry...
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Well, I just made another major change: I no longer detect edges on the second 2x pass; instead I scale the angles plane using linear interpolation. But I didn't get the dramatic improvement I was expecting.

Image

But I think part of the reason is bugs. I'm positive I have bugs. At least one I know of is behind the top/bottom edge artifacts: some of the iterator operators wrap around, some are clamped, and some don't work, period; and I lost track of which is which. But there's also the bluish tint on the shirt's edges, which certainly doesn't come from the original...

Image

Image

And while in most places pixels are smoothed, in some places one can see square tiles of 16 pixels, corresponding to one original pixel, all jumping out together... like around the tip of the nose...

Image

Image

Not sure why it's happening, but it must be some bug...

Anyways, I tried scaling my output back down with Gimp using nearest (no interpolation), just to see how close to the original it might be...

Image

Image

Can you tell which is the original, and which was scaled up and then back down?
hurleybird
Elite
Elite
Posts: 1671
Joined: Fri Jan 03, 2003 12:46 am
Location: Earth, Sol system.
Contact:

Post by hurleybird »

Very smooth! And in the last post the second one is the rescaled one.