A long-winded IndieWeb ramble I wrote on the train back from Portland

(This is a somewhat-edited version of a disconnected ramble I posted on Twitter/Mastodon while on the train home today. I feel like putting this somewhere that I own it, but am not in a good enough mental state to actually write it properly.)

Yesterday at IndieWeb Summit, someone – Aaron, I believe – mentioned that one of the big differences between IndieWeb initiatives and ActivityPub is that IndieWeb is made up of simple building blocks you can pick and choose, while ActivityPub frontloads a lot of complex work. This is a sentiment I very much agree with, and it’s unfortunate that the main reason Mastodon switched from OStatus (which is very IndieWeb-esque) is that ActivityPub made it slightly less inconvenient to pretend to have private posts. Which aren’t even implemented that well.

Mastodon’s “private” posts really suck from a bunch of standpoints. There’s no way to backfill them, or even to view them on the web, without being on the same instance, and Mastodon’s actual privacy controls go in the wrong direction, so a separate vent account is still necessary. As usual I don’t know if this is a problem with ActivityPub itself, or an artifact of how Mastodon shoehorned its functionality into ActivityPub, but either way, the end result is that Mastodon’s post privacy isn’t really all that useful, nor is it really all that private.

So, right now ActivityPub is the darling of the fediverse, but I’m hoping that the current push toward AutoAuth – trying to use it as a basis for private webmentions, with private feeds and private WebSub as the obvious next steps – will change that. I do worry that IndieAuth/AutoAuth are kind of hard to adopt piecemeal, though (okay, IndieAuth becomes really easy using IndieLogin, but I don’t want to see a single endpoint become something everyone on the Internet relies on). And of course, once you integrate the auth stuff with the content stuff, you also have to worry a lot more about content management and how it all fits together, which seems fundamentally incompatible with static site generation.

At the Summit there was definitely a lot of compromising going on, such as using JavaScript libraries to bring externally-hosted dynamic IndieWeb functionality onto statically generated pages. In a world where SSGs can be supplemented with third-party endpoints driven by client-side JavaScript, some level of privacy could happen via clever use of client-side includes of data at non-public, unguessable URLs. (Although the ideal solution for that is to use the third-party APIs to generate webhooks that then trigger a file change → git commit → commit hook → build/redeploy.)
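As a sketch of that webhook pipeline: the content layout, slug scheme, and the idea that a `git push` is what kicks off the rebuild are all assumptions for illustration, not any particular SSG’s actual API.

```python
# Hypothetical sketch of the webhook → file change → git commit →
# build/redeploy pipeline. CONTENT_DIR and the commit message format are
# made up; the real wiring depends entirely on the host.
import pathlib
import subprocess

CONTENT_DIR = pathlib.Path("content")  # made-up content root

def handle_webhook(slug, body, dry_run=False):
    """Write the updated entry, commit it, and push to trigger a rebuild.

    Returns the git commands as a list, so the plan can be inspected
    (or tested) without actually touching the repository.
    """
    path = CONTENT_DIR / f"{slug}.md"
    commands = [
        ["git", "add", path.as_posix()],
        ["git", "commit", "-m", f"webhook update: {slug}"],
        ["git", "push"],  # a post-receive hook then rebuilds and redeploys
    ]
    if not dry_run:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body)
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

The `dry_run` flag is just there so you can see the plan without a repository handy; in practice the interesting part is the commit hook on the other end, not this glue.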

Non-public unguessable URLs aren’t great for privacy in general (and I mean, Publ has had “privacy through obscurity” since day one and there’s several reasons why I rarely use it anyway) but it’s at least better than nothing.
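For what it’s worth, the “unguessable” part is cheap to do properly; a minimal sketch, with a made-up path scheme:

```python
import secrets

def unguessable_path(slug):
    # token_urlsafe(16) gives ~128 bits of entropy, which is infeasible to
    # guess; but anyone who holds the link can still share it, which is
    # exactly the "privacy through obscurity" caveat above.
    return f"/private/{secrets.token_urlsafe(16)}/{slug}"
```

The entropy isn’t the weak point of this scheme; the social layer (forwarded links, referrer leaks, crawler logs) is.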

Anyway, after the above ramble appeared on Mastodon, wohali mentioned:


i’ve thought a bit about static site integration. best I can come up with is the equivalent of .htaccess or nginx stanzas that get included server-side to enforce. but it’s still leaky into things like indexes, unless you’re comfortable with metadata (titles, tags, etc.) being exposed.

i really want to move to a static site but this is presently holding me back.

This is along the lines of why I decided to make Publ “like a static generator, but dynamic.” I mentally went down the path of making a static site that generates server-interpreted routing rules that attempt to provide privacy at the request level, but I came to the decision that implementing complex auth and routing rules in the HTTP server isn’t any better than just implementing it as an application engine in the first place. You’re just pushing the problem up a layer, and that layer is a lot more fragile and complicated and way easier to get wrong – with potentially catastrophic results if you do.

As far as indexes and feeds go, one could do something silly like generate the site multiple times, once for each possible permission group, and then have the routing layer select the site rendition based on authentication headers. But that’s just implementing a dynamic publishing system the long way around.
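To make the silliness concrete, here’s a minimal sketch of that rendition-selection idea; the group names and build paths are made up:

```python
from typing import Optional

# One pre-built copy of the site per permission group (names are made up).
RENDITIONS = {
    "public": "build/public",
    "friends": "build/friends",
    "family": "build/family",
}

def rendition_root(auth_group: Optional[str]) -> str:
    """Pick which pre-built rendition to serve for this request.

    Anything unauthenticated or unrecognized falls back to the public
    build, which is the only safe default.
    """
    return RENDITIONS.get(auth_group or "public", RENDITIONS["public"])
```

Even in this toy form you can see the problem: every new group multiplies the build time and disk usage, which is why it’s just a dynamic publishing system the long way around.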

Anyway. I don’t think that dynamic sites are necessarily bad, if you’re careful about what makes them “dynamic” anyway. In Publ, the actual routing aspect is pretty lightweight; the heavyweight parts are the image rendition generator (which caches to disk with a default TTL of a month) and the HTML rendering, and the HTML rendering is trivial to cache as well (incidentally, today I switched this site from Flask-Caching’s in-process SimpleCache to the previously-unused memcached instance I’ve had running for years). Page routing only relies on data that’s kept in the database, the database mostly lives in RAM, and so on.

Most of the problems with dynamic sites are really problems with not sufficiently separating out the routing vs. display vs. storage vs. post editing access concerns. For that matter, you can still have the storage and edit access concerns with a static publishing system; older versions of Movable Type were absolutely based on a database-driven static publishing model with what amounted to internal webhooks for triggering rebuilds on comments, for example. (Newer versions of Movable Type are as well, but the templates have been gussied up with PHP. Which is what I did for my various untenable hacks on my old MT-based site. We all know how well that worked out.)

Anyway. This is stuff that’s been on my mind for the last few days. IndieWeb Summit was a great opportunity and I’m still feeling a bit buzzed from having such a great weekend with folks I could talk to about this stuff.

