Critter avatar progress

I hadn’t made much progress on my avatar, as mentioned previously, because I wasn’t feeling all that up for building stuff for VRChat, for a number of reasons. But I’ve finally gotten the urge to start working on it again, and I’ve made quite a bit of progress.

The main thing is that over the past day or so I’ve set up all of the visemes and some useful configuration shapekeys (namely, being able to adjust the torso configuration), and I’ve also looked at some plugins that will supposedly make my life easier. In particular, my friend Lagos pointed me to the Cats Blender Plugin, which seems to provide some very nice improvements to the VRChat avatar creation workflow.
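For what it’s worth, the viseme setup can also be scripted; here’s a minimal bpy sketch, assuming the avatar mesh is the active object (the vrc.v_* names are, as far as I know, what the VRChat SDK looks for when auto-detecting visemes):

```python
import bpy

obj = bpy.context.active_object  # assumes the avatar mesh is active

# Make sure a Basis key exists before adding viseme keys.
if obj.data.shape_keys is None:
    obj.shape_key_add(name="Basis")

# A few of the fifteen standard VRChat viseme names; the SDK
# auto-detects shape keys named like this.
for viseme in ("vrc.v_aa", "vrc.v_oh", "vrc.v_th"):
    if viseme not in obj.data.shape_keys.key_blocks:
        obj.shape_key_add(name=viseme, from_mix=False)
```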

I’m at a point where I think the next steps are rigging and UVing, and those require destructively applying my mesh modifiers, which is a point of no return. I think the Cats plugin workflow is set up such that instead of doing that to my base model, I’m supposed to export it as a .fbx and then reimport it into a new project (or as a new object in the same project, I suppose) and do the next steps from there. Which makes some amount of sense.
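That export step is scriptable too, if I end up doing this round-trip a lot; a rough sketch, with the path and the exact settings as placeholders:

```python
import bpy

# Export the finished base model so the destructive steps (applying
# modifiers, rigging, UVing) happen on a reimported copy instead.
bpy.ops.export_scene.fbx(
    filepath="/tmp/critter_base.fbx",
    use_selection=True,    # only export the selected avatar mesh
    apply_unit_scale=True,
    add_leaf_bones=False,  # leaf bones just clutter the rig in Unity
)
```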

I do wish you could do more with a 3D model while it’s still a subdivision surface, because that makes it much easier to iterate on geometry and shapekeys after starting on the later parts, but I can see why that would also be difficult to support correctly. Attribute data such as UV coordinates and bone weights and so on could be applied based on blending between the control points, but that’d also be pretty fraught with peril.
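To illustrate what I mean by blending (purely a toy sketch of the idea, not how any real subdivision code works): newly created points could inherit attributes by averaging their parents, which already hints at the peril, since real Catmull-Clark also moves the original points around.

```python
import numpy as np

def subdivide_weights(weights, edges, faces):
    """Toy attribute blending for one subdivision step: new edge and
    face points inherit a bone weight by averaging their parents."""
    edge_w = weights[edges].mean(axis=1)                         # per new edge point
    face_w = np.array([weights[list(f)].mean() for f in faces])  # per new face point
    return np.concatenate([weights, edge_w, face_w])

# e.g. a single quad with weights at the corners:
w = np.array([1.0, 0.0, 0.0, 1.0])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
faces = [(0, 1, 2, 3)]
print(subdivide_weights(w, edges, faces))
```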

The various Blender tutorials I’ve seen do apply those attributes at the subdivision level, but those tutorials are targeting Blender as a cinematic modeler for use with its own built-in renderer, so I think my workflow is going to have to differ pretty substantially from theirs. (I’m not too up on the math involved in Catmull-Clark subdivision surfaces in the first place, honestly.) But I should at least see if I can do UVs in subdivision space, since that would make future iteration so much easier.
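The encouraging part is that UVs made on the control cage should survive: if I unwrap the low-poly mesh, the Subdivision Surface modifier interpolates those UVs across the dense result when it’s applied. A quick sketch of that order of operations, assuming the modifier still has its default name of “Subdivision”:

```python
import bpy

obj = bpy.context.active_object  # assumes the avatar mesh is active

# Unwrap the low-poly control cage first...
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')

# ...then applying the modifier bakes the interpolated UVs into the
# dense mesh, so the unwrap work done on the cage carries over.
bpy.ops.object.modifier_apply(modifier="Subdivision")
```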

The main things I’m not looking forward to figuring out are:

  • How to rig the mittens
  • Really, how to rig in general
  • Setting up the various “expression” modifiers in the VRC SDK
  • Also setting up walk loops, crouch poses, etc.; I’m not sure how much of that VRC takes care of automatically in SDK3 with the new IK engine and how much still has to be done manually

On the other hand, I’m feeling a lot better about targeting VRChat for now. Yeah, I don’t like the idea of doing stuff specific to a single VR platform, but the other major platforms are worse in several key ways (and don’t actually address any of the concerns I have about VRC), and all of my friends are still on VRC. Anything that replaces VRC will probably take a while to get to a point where it’s usable/useful.

Apparently neither of the major competitors (ChilloutVR and NeosVR) makes it too hard to import from VRChat at this point, anyway, so if I need to cut and run, I don’t think it’ll be that big of a problem.

Expression modifiers are of course the most obnoxious thing in VRC, because they’re all tied to the animation system, which is all built around, well, animations. I’m probably just going to use the built-in shaders, with the hope that I can build expression “animations” that swap out normal maps and textures separately (rather than swapping out entire materials). Then I can set up a bunch of color schemes and normal maps to choose from, plus sliders for material parameters, which gives me everything I want.

Most avatars these days use the Poiyomi shaders, but those are built in such a way that they have to be “locked,” which doesn’t let you swap out material parameters in realtime; you basically have to build a separate material for every possible color and material combination you want. The built-in PBR shaders, on the other hand, just take an albedo map, a normal map, and a bunch of lighting parameters, and they’re all modifiable in real time. And they work cross-platform, too, which is a huge bonus. It’s also vaguely easier to get more consistent lighting, although the way most environments are built, I worry that might be “consistently awful.” But we’ll see.

There’s also the possibility of building some custom shaders that would give me more flexibility, but that has its own downsides. I was thinking at one point of making one mega-shader that would encode three different color setups (plaid, stripes, polkadots) into the R/G/B channels and put the tummypatch into A, with shader parameters for choosing the color style and mappings on the fly. That would also look a lot cooler when I shift stuff around, since people could see things fade as I manipulate them. But it would only work on PC, and I think the permissions model would also get in the way of it working as universally as I’d like.
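To sketch what that packing might look like (entirely hypothetical; the filenames and the CPU-side selection function are stand-ins for what the shader would do per-fragment):

```python
import numpy as np
from PIL import Image

# Pack three grayscale pattern masks plus the tummypatch mask into
# one RGBA texture. (Filenames are hypothetical.)
plaid     = np.asarray(Image.open("plaid.png").convert("L"))
stripes   = np.asarray(Image.open("stripes.png").convert("L"))
polkadots = np.asarray(Image.open("polkadots.png").convert("L"))
patch     = np.asarray(Image.open("tummypatch.png").convert("L"))

packed = np.stack([plaid, stripes, polkadots, patch], axis=-1)
Image.fromarray(packed.astype(np.uint8), "RGBA").save("patterns_rgba.png")

# What the shader would do per-fragment, sketched on the CPU: a float
# "style" parameter in [0, 2] blends between the three pattern
# channels, which is what would make the patterns visibly fade into
# each other as I adjust the slider.
def select_pattern(rgba, style):
    lo, hi = int(np.floor(style)), int(np.ceil(style))
    t = style - lo
    return (1 - t) * rgba[..., lo] + t * rgba[..., hi]
```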

Although for Quest I’m pretty sure I’m going to have to limit it to a single texture and normal map anyway, so Quest users would only be able to see me modify my shininess and that’s about it. But Quest users being able to see something is better than the current situation, where all they see is one of the default public fallbacks.

Anyway. It’s nice to make some progress on this stuff. Yay, etc.
