Lately I’ve been trying to make more of the pigmentation maps for the critter avatar, and in particular I want to make one for the splotchy coloration (which might also be useful for calico and blue giraffe). But hand-drawing this on a disjoint surface is pretty obnoxious.
So, I haven’t implemented anything yet, but I think what I’m going to do is make a few more tools to make procedural pigmentation a bit easier. Namely, I’ll write a thing that takes the UV-mapped polygons and renders out an XYZ-map (where each of the pixels in the texture maps to the XYZ coordinates in 3D — stored in floating point, of course), and then I can write various evaluator functions to map the XYZ map into pigmentation values based on a procedural algorithm.
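As a sketch of what that XYZ-map renderer could look like: for each UV-mapped triangle, rasterize it in texture space and write the barycentrically interpolated 3D position into a floating-point texture. Nothing here is implemented yet, so the function name and data layout (triangles as pairs of UV and XYZ vertex arrays) are my own invention:

```python
import numpy as np

def rasterize_xyz_map(triangles, size):
    """Render an XYZ map: every texel a UV-mapped triangle covers gets the
    interpolated 3D surface position, stored in floating point.

    triangles: iterable of (uv, xyz) pairs, where uv is a (3, 2) array of
    texture coordinates in [0, 1] and xyz is the matching (3, 3) array of
    3D vertex positions. Uncovered texels stay NaN.
    """
    xyz_map = np.full((size, size, 3), np.nan, dtype=np.float32)
    for uv, xyz in triangles:
        a, b, c = np.asarray(uv, dtype=float) * (size - 1)  # pixel space
        den = (b[1]-c[1])*(a[0]-c[0]) + (c[0]-b[0])*(a[1]-c[1])
        if abs(den) < 1e-12:          # degenerate triangle, skip it
            continue
        lo = np.floor(np.minimum(np.minimum(a, b), c)).astype(int)
        hi = np.ceil(np.maximum(np.maximum(a, b), c)).astype(int)
        for y in range(max(lo[1], 0), min(hi[1], size - 1) + 1):
            for x in range(max(lo[0], 0), min(hi[0], size - 1) + 1):
                # barycentric coordinates of this texel
                wa = ((b[1]-c[1])*(x-c[0]) + (c[0]-b[0])*(y-c[1])) / den
                wb = ((c[1]-a[1])*(x-c[0]) + (a[0]-c[0])*(y-c[1])) / den
                wc = 1.0 - wa - wb
                if min(wa, wb, wc) >= -1e-9:  # texel is inside the triangle
                    xyz_map[y, x] = wa*xyz[0] + wb*xyz[1] + wc*xyz[2]
    return xyz_map
```

The per-texel loop is slow in Python, but the same rasterization is trivial to do on the GPU by just rendering the mesh with UVs as vertex positions and world positions as the color output.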
For the splotches I’m thinking it makes sense to do it as a Voronoi domain: basically, generate a whole bunch of random points in space, each with one of the three pigmentation values, and then for each coordinate evaluate which point is closest and use its value for the result (with some slight fudge factors for antialiasing).
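A quick sketch of that evaluator (names are hypothetical, and it brute-forces the nearest-seed search where a KD-tree or octree would be needed at real resolutions; no antialiasing fudge yet):

```python
import numpy as np

def voronoi_pigment(xyz_map, rng, n_seeds=200, n_pigments=3):
    """Assign each texel the pigmentation value of the nearest random seed
    point in 3D, giving Voronoi-cell splotches over the surface."""
    valid = ~np.isnan(xyz_map[..., 0])
    pts = xyz_map[valid]                              # (N, 3) positions
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    seeds = rng.uniform(lo, hi, size=(n_seeds, 3))    # random points in space
    pigments = rng.integers(0, n_pigments, size=n_seeds)
    # for each texel coordinate, find the closest seed (brute force)
    d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
    out = np.zeros(xyz_map.shape[:2], dtype=np.int64)
    out[valid] = pigments[d2.argmin(axis=1)]
    return out
```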
Alternatively, it could be done using three layers of Perlin noise, with the strongest layer winning at each point. That would be a lot easier to make computationally efficient, anyway.
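That version could look something like this; to keep the sketch short I’m using a cheap hashed value noise as a stand-in for proper Perlin noise (same idea, lattice-based smooth noise, just without the gradient vectors), and all the names are made up:

```python
import numpy as np

def value_noise(p, seed):
    """Smooth 3D value noise: hash the lattice corners around each point to
    pseudo-random values in [0, 1] and tri-interpolate with a smoothstep
    fade. A stand-in for Perlin noise. p is an (N, 3) array."""
    i = np.floor(p).astype(np.int64)
    f = p - i
    f = f * f * (3.0 - 2.0 * f)                   # smoothstep fade curve

    def corner(dx, dy, dz):
        ix, iy, iz = i[:, 0] + dx, i[:, 1] + dy, i[:, 2] + dz
        v = ix * 374761393 + iy * 668265263 + iz * 1440662683 + seed * 144665
        v &= 0xFFFFFFFF
        v = ((v ^ (v >> 13)) * 1274126177) & 0xFFFFFFFF
        return (v & 0xFFFF) / 0xFFFF

    def lerp(a, b, t):
        return a + (b - a) * t

    # hash the eight lattice corners, then blend along z, y, x in turn
    c = [[[corner(x, y, z) for z in (0, 1)] for y in (0, 1)] for x in (0, 1)]
    zb = [[lerp(c[x][y][0], c[x][y][1], f[:, 2]) for y in (0, 1)]
          for x in (0, 1)]
    yb = [lerp(zb[x][0], zb[x][1], f[:, 1]) for x in (0, 1)]
    return lerp(yb[0], yb[1], f[:, 0])

def noise_pigment(pts, scale=0.2):
    """Three independently-seeded noise layers over the (N, 3) surface
    coordinates; the strongest layer at each point picks the pigment."""
    layers = np.stack([value_noise(pts * scale, seed=s) for s in (1, 2, 3)])
    return layers.argmax(axis=0)
```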
Another thing to consider is that I’d like the overall density to be based on the local surface curvature, although that gets a lot trickier to work out.
But also, it might be nice to make a simple-ish paint tool where clicking on a point on the screen draws paint not based on where it projects to the texture from the camera (like what Blender and Substance Painter do), but instead draws a solid blob of paint that covers the part of the texture within that sphere. And that would actually be pretty darn nice as a generalized 3D paint tool. It’s kind of a wonder that I haven’t seen any 3D texture tools which work that way; everything tries to be a projective airbrush, which is kind of unwieldy to work with.
The actual implementation can be made pretty efficient, I think. Basically, at startup, for every triangle in the model, you rasterize the triangle in texture (u,v) space, and for each texel it touches you add that texel’s respective coordinate into an octree, so you get an octree which maps \((x,y,z) \rightarrow (u,v)\). Then when you draw on the texture, instead of drawing a circle around the brush, you apply the paint to all of the texels whose originating coordinates fall within the brush’s sphere of influence (using the octree to speed up the lookup, of course).
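A sketch of that lookup structure and brush, substituting a uniform spatial hash for the octree (it answers the same \((x,y,z) \rightarrow (u,v)\) range query in less code); the class and its interface are invented for illustration:

```python
import numpy as np
from collections import defaultdict

class SphereBrush:
    """Paint texels whose originating 3D position falls inside a sphere.
    Built from an XYZ map (texel -> 3D position); a uniform spatial hash
    stands in for the octree."""

    def __init__(self, xyz_map, cell=1.0):
        self.cell = cell
        self.grid = defaultdict(list)
        # bucket every valid texel's 3D position by its grid cell
        for v, u in np.argwhere(~np.isnan(xyz_map[..., 0])):
            p = xyz_map[v, u]
            key = tuple(np.floor(p / cell).astype(int))
            self.grid[key].append((u, v, p))

    def paint(self, texture, center, radius, color):
        """Write `color` into every texel whose 3D position lies within
        `radius` of `center`, regardless of camera or UV seams."""
        center = np.asarray(center, dtype=float)
        lo = np.floor((center - radius) / self.cell).astype(int)
        hi = np.floor((center + radius) / self.cell).astype(int)
        r2 = radius * radius
        # only visit grid cells overlapping the brush's bounding box
        for kx in range(lo[0], hi[0] + 1):
            for ky in range(lo[1], hi[1] + 1):
                for kz in range(lo[2], hi[2] + 1):
                    for u, v, p in self.grid.get((kx, ky, kz), ()):
                        if ((p - center) ** 2).sum() <= r2:
                            texture[v, u] = color
```

One nice property of doing it this way is that the blob automatically paints across UV seams, since the query is purely in 3D.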
Regardless of which of the above I do, hopefully it’d end up looking better than this hand-authored mess:
Update: It occurs to me that this is something Houdini can probably do, and SideFX are very generous with providing licenses to indies. Something to look into before I kill myself with yet another feverish coding project.