It seems like I chose a very… interesting time to get into VRChat, and also I’m taking a break from it for reasons not related to the current debacle.
I also have a lot of thoughts about VRChat as a platform, and where we can go from here.
So, last night I had a bit of an Incident. After spending so many late nights in VR, I had a bit of a break from reality. I woke up at about 4 AM from an intense dream about the universe collapsing in on itself and the virtual world invading the real one, in the middle of a bad panic attack, which led to me needing to use the bathroom due to the usual aftereffects. While I was sitting on the toilet struggling to collect myself, I could only see myself as my avatar, even when looking at my physical hands, and I had the feeling of the universe swirling around me while so many other VR characters, at macro scale, looked down on me.
It was… somewhat alarming. I think it was a combination of lack of sleep, a jacked-up circadian rhythm (both from hyperfocus and having a bright screen literally strapped to my face well after midnight), and my brain processing a lot of the more interesting experiences I’ve been having over the last few days as if they were real.
I decided that I need to take a few days away from VRChat.
Conveniently enough, my left controller is somewhat defective and Oculus support finally got back to me about that, and it turns out Oculus will only ship out a new one after they receive a defective one, so my setup is out of commission for a few days anyway. Good timing on that.
Anyway, when I do get my VR system working again, my plan is to impose some pretty strict limits on myself:
- No VR after 9 PM
- Limit my daily usage to, say, 3 hours, and only do it a few days a week
- Basically be more intentional about my usage
To that end I’m looking for a software solution, because it’s extremely easy to lose track of time while in VR, for the same reasons that casinos don’t have clocks on the walls. If anyone knows of a good parental control/screen time-like function in Windows or SteamVR or the like, please let me know. Unfortunately the Windows built-in parental controls are designed for helicopter parenting rather than self-care, and self-care is what I need. I did find an app which might work, but it isn’t clear whether it would interact correctly with SteamVR apps, and it’s also a bit expensive for a single-purpose thing like that.
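Honestly, the decision logic I want is simple enough to sketch. Here's a minimal Python version of the budget/cutoff check; the 3-hour budget and 9 PM cutoff are just my own numbers from above, and all of the actual VRChat-watching glue (polling the process list, nagging me, or killing the process) is hypothetical and left out:

```python
from datetime import datetime, time, timedelta

DAILY_BUDGET = timedelta(hours=3)  # my self-imposed daily cap
CUTOFF = time(21, 0)               # no VR after 9 PM

def should_stop(session_start: datetime, now: datetime,
                used_today: timedelta = timedelta(0)) -> bool:
    """True once the session blows the daily budget or runs past the cutoff."""
    over_budget = used_today + (now - session_start) >= DAILY_BUDGET
    past_cutoff = now.time() >= CUTOFF
    return over_budget or past_cutoff
```

The missing piece is the watcher loop itself, which would poll for the VR process every minute or so and warn me (or pull the plug) once this returns True; that part is platform-specific glue I haven't written.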
Of course, what could work really well for that would be a client-side mod for time tracking! Which leads us to…
So, if you have any interest in VRChat you already know that there’s been a rather controversial decision which nullifies all of the user client mods out there. Having just gotten into VRChat I don’t have any client mods yet, and it looks like I never will.
What would have been really great is if VRChat had provided a client modding API so that the useful functionality that many users rely upon could be restored in a supported way, rather than just categorically banning all use of VRChat outside of their narrow view of what they want to directly support.
Many folks are also worried that VRChat is going to move to making accessibility options into paid features, which would be awful, but I think that presupposes a level of malice on their part when it’s really more like ignorance and cluelessness.
I do worry somewhat about their job listings for in-game economy stuff, but like… the company has a huge valuation and very little revenue. They have to make money somehow. I think that there are ways for them to monetize the game without destroying it. I don’t have a lot of confidence in their ability to do it well, though, because they seem to have forgotten a lot of lessons learned by prior efforts in this space.
It’s kind of amazing that VRChat is as good as it is, and I feel like it’s mostly accidental.
VRChat does a lot of stuff that you simply Do Not Do in VR. Best practices in VR say that you should never let your frame rate drop below 72FPS; reprojection should be an emergency feature, not something to rely on. And yet, on my 2080Ti, most environments run at around 35FPS (especially with the various ridiculously complex avatars people have), and they run just fine.
In VR you “need” to build your spaces to be as efficient as possible. Shared materials for everything, texture atlases, single-drawcall objects (ideally everything batchable into as few drawcalls as possible), no postprocessing, no deferred shading, no realtime lighting. And yet, so many spaces revel in pushing GPUs as hard as possible, doing things that Shall Not Be Done in VR, and the resulting experiences are amazing.
In VR, it’s well-known that your avatars should have detached hands and no legs. Even with full-body tracking there’s no way to get the avatar body to line up well enough with the player’s physical body that it’ll be a comfortable experience. And yet, in VRChat, literally every avatar is fully-articulated, using inverse kinematics to fill in whatever movement details aren’t provided by the user’s tracking setup. And that’s a big part of what makes it so compelling!
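As an aside, the core of that IK trick is surprisingly small. Here's a sketch of the standard analytic two-bone solve (the law-of-cosines math commonly used for elbows and knees) — to be clear, this is a generic textbook illustration, not VRChat's actual solver:

```python
import math

def two_bone_ik(l1: float, l2: float, target_dist: float) -> tuple:
    """Solve a two-bone chain (e.g. upper arm + forearm) for a target at
    `target_dist` from the shoulder. Returns (shoulder_offset, elbow_bend)
    in radians; a fully straight arm is (0, 0)."""
    # clamp the target to the reachable range so acos never NaNs out
    d = max(min(target_dist, l1 + l2), abs(l1 - l2) + 1e-9)
    # law of cosines at the elbow and at the shoulder
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    cos_shoulder = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder_offset = math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return (shoulder_offset, elbow_bend)
```

Everything past this (picking the elbow's swivel plane, blending with tracked data) is where the real engineering lives, but the fundamental solve really is just triangles.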
In VR, you must never move the camera independently from the user’s head motion. All motion must be from in-playspace locomotion or using rapid teleports. Using the joystick to move the player around is a huge no-no. You must never do camera shake, you must never allow the user to “fall” from a height, everything must cater to the physical reality of the user’s space and body. AND YET…
Okay, for the last one, VRChat does follow a common practice in putting an optional “comfort tunnel” around the user’s fovea to give them a detachment of the peripheral vision from the motion. But VRChat’s implementation is fairly, um… not good, especially in dark spaces (where the tunnel is usually indistinguishable from the environment you’re in to begin with).
Anyway. While using VRChat I do experience a great deal of simulation sickness, vertigo, dizziness, and nausea, but for some reason it’s never so debilitating that I have to stop. Compare that to when I was working on Westworld VR: the slightest issue with projection lag (which, to be fair, was back before reprojection was a thing), or the camera moving even slightly backwards or sideways while seated, meant I’d need to take off the headset for hours while I recuperated.
Also, the actual act of building in VRChat is an obnoxious mess. You have to go out-of-engine and build stuff as a scene in Unity, using their own proprietary scripting system (which, to be fair, is one of the better things they’ve done: they made the incredibly smart decision to decouple the language frontend from the assembly backend, which is what allows things like UdonSharp to work), with a really obnoxious build-and-test workflow that can’t run from the editor itself.
There’s no in-world building or avatar editing. Avatars themselves can’t take attachments unless they’re specifically built with them, and the attachments are a horrific nightmare to actually add and edit. Even doing things like making multiple color schemes, or editing the boob physics, or making modifications that are outside of how the avatar was directly rigged, is a mess, using hacks on hacks on hacks.
Second Life had a much better setup for all the above, and all of the learnings of Second Life seem to have been completely ignored.
Back in the late 90s I was designing a VR-esque 3D MMO system (called SOLACE, for silly reasons) as a highly-aspirational spare-time thing. I was designing the system to be much like a MUCK, where you have rooms with portals between them, and objects that were carryable by users or could be put into other objects and so on. I also had some pretty ambitious ideas, like all actions being based on semantic constraint-based stuff (“foo walks up to bar” rather than “foo walks to (50,2,75)”) and allowing the clients to figure out the actual spatial locations of everything. That could then also be used to provide a text-based interface, so people could be in and interact with the world without having to be in 3D. (Obviously that would only work for text chat, though, which is what I was targeting. The idea of doing fully-immersive voice chat in 1999 was unthinkable; I wanted this to work on dialup!)
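To make the constraint idea concrete, here's a toy sketch of what I mean. Every name and coordinate here is invented; the point is that the wire protocol carries intent, and each client decides how to realize it:

```python
# Toy sketch: the server broadcasts semantic actions; a 3D client resolves
# them spatially, while a text client just narrates them.
positions = {"bar": (50.0, 2.0, 75.0)}  # world state as this client knows it

PERSONAL_SPACE = 1.5  # stop this far short of the target

def resolve(action: dict):
    """Turn a semantic action into a concrete destination for a 3D client."""
    if action["verb"] == "walk_up_to":
        x, y, z = positions[action["target"]]
        # a real client would pathfind; here we just stop short along x
        return (x - PERSONAL_SPACE, y, z)
    raise ValueError(f"unknown verb {action['verb']!r}")

def narrate(action: dict) -> str:
    """The same action, rendered for a text-only client."""
    return f"{action['actor']} walks up to {action['target']}."
```

The same action record feeds both clients, which is the whole trick: the 3D client gets coordinates, the dialup client gets a sentence.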
Anyway. I don’t think any of my design stuff would really survive an attempt to bring it back for the modern era, and that’s okay. I still think that having in-engine building and in-engine avatar customization would be great. Having an in-engine mesh modeler would be even more fantastic.
SL’s avatar system was really good and apparently it only got better after I stopped using SL. It was also pretty limited, though. What I’d love to see an avatar system do is let you mix-and-match parts with different attachment points, and have it solve all motion using IK. Being able to just, like, choose a head, choose arms/legs/tails (or lack thereof), plus other attachments like horns and hair and glasses and so on, and have a fully-realized avatar that you can apply textures to, either with procedural texturing methods or by downloading a fully-UV'ed mesh that you can paint onto (in an external editor, or in-game using a magic paintbrush or the like).
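Sketching what that mix-and-match part system might look like as data (every name here is hypothetical; "sockets" stand in for attachment points, and the IK solving would happen elsewhere):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class Part:
    name: str
    plugs_into: Optional[str]  # socket this part occupies (None = root part)
    sockets: List[str] = field(default_factory=list)  # sockets it offers

def assemble(parts: List[Part]) -> Dict[str, str]:
    """Attach every part to a free socket, returning part -> socket.
    Raises ValueError if a part has nowhere to plug in."""
    free: Set[str] = {s for p in parts for s in p.sockets}
    placement: Dict[str, str] = {}
    for part in parts:
        if part.plugs_into is None:
            placement[part.name] = "root"
        elif part.plugs_into in free:
            free.remove(part.plugs_into)
            placement[part.name] = part.plugs_into
        else:
            raise ValueError(f"no free {part.plugs_into!r} socket for {part.name}")
    return placement
```

The nice property of declaring parts this way is that "no legs" or "two tails" isn't a special case; it's just whatever set of parts happens to plug together.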
Another thing I’d really like to see in a system like this: self-hostable storage, and federated communications. In SOLACE my plan was to use XMPP as the transport; these days I’d probably just use plain ol' HTTP and then auth and publish mechanisms like IndieAuth and MicroPub.
Basically, I feel like rooms themselves can be webpages with links and embedded scripts. Y'know, like what VRML did.
I feel like VRML was a bunch of great ideas which were very ahead of their time.
Many of the current VR systems are very much held back by being built in Unity. Unity is a great renderer for what it’s built for, but it comes with so many compromises that make it difficult to actually build out fully-fledged collaborative environments.
But you know what exists, is platform-independent, lets you use whatever custom render stuff you want, and lets you pull in arbitrary scripts from wherever? WebXR!
Basically I don’t think there’s any real reason to have a single-stack platform like VRChat or NeosVR or the like. What we need is IndieWeb but for VR. It can’t be quite as simple as IndieWeb (for example, there needs to be some agreement on how avatars and objects and so on work) but the basic concepts of like… here’s a room! here’s a portal to another room! here’s an avatar description (meshes, kinematics, etc.), here’s interactable scripts, here’s the stuff needed to get the current WebRTC session to join in on local voice chat and the spatialization data for each avatar’s voice (disclaimer: I have absolutely no idea how WebRTC works)… I feel like building something truly independent and self-hostable and flexible could be done without a huge investment.
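A room description in that world could really just be a document you fetch over HTTP, the same way a browser fetches a page. Here's a toy sketch of the shape such a thing could take; the field names and URLs are all invented, not any real spec:

```python
import json

# An invented "room as a document" format, fetched like any web page.
LOBBY = json.loads("""
{
  "name": "lobby",
  "mesh": "https://example.com/rooms/lobby.glb",
  "portals": [
    {"label": "garden", "href": "https://example.com/rooms/garden.json",
     "at": [4.0, 0.0, -2.0]}
  ],
  "voice": {"session": "https://example.com/rtc/lobby"}
}
""")

def follow_portal(room: dict, label: str):
    """Walking into a portal is just following a hyperlink: return the URL
    of the next room's document, or None if no such portal exists."""
    for portal in room["portals"]:
        if portal["label"] == label:
            return portal["href"]
    return None
```

That's the IndieWeb-ish part: rooms are addressable documents, portals are links, and anyone who can serve static files can host a room.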
There are plenty of existing renderers that could probably be used for this purpose; three.js is pretty good and has a huge ecosystem of shaders and mesh loaders and so on.
This is such a pie-in-the-sky idea but I feel like it could be the right place and right time for it.
I don’t think we need to be locked in to any siloed, hosted VR system.
Maybe let’s use the VRChat debacle (not to mention the Unity debacle already in progress) as an opportunity to never need to worry about this ever again.