Thanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you'd like to see for this audio edition!
tl;dr: Kapowski is a simplified tool for finding and posting reaction GIFs to your personal website. It works without a sign-in and gives you HTML to copy-paste into whatever posting interface you use for your website. It's "progressively enhanced" with IndieWeb building blocks, so if your site supports them it becomes faster and easier to use. Search and content are currently powered by Gfycat.
Kapowski supports IndieAuth and Micropub. If your site supports these, you can sign in using your website as your identity and then use Kapowski to post the image to your site directly.
But that's not all! If your site supports sending Webmentions and markup for reply posts, you can use Kapowski to make your post a photo reply!
Still reading? Here's some (too much) history.
I really like Micropub as an IndieWeb building block. As a developer, it's easy to understand on the wire. It's very extensible because the spec provides very few constraints over what you can post with it.
However, that flexibility comes at a coordination cost. I had (have!) a dream that being able to rapidly iterate on special-purpose Micropub clients will let many ways of posting bloom. I also loved (love!) Glitch as a place to build web apps in the open where other folk can see how they work and remix them to make them their own.
I kept Kapowski intentionally simple, hoping that some other IndieWeb folks might use it, give feedback, and iterate on the idea of what a good user experience might be for posting reply GIFs on the IndieWeb.
I ... didn't get much feedback! As far as I know, few people have used it. However, I very much did notice that it kept breaking.
Giphy, the original service Kapowski used, was bought by Facebook with the intention of, I don't know, tracking all the GIFs people posted. I didn't like that, so I switched things over to Gfycat. (Although with Facebook (Meta) being forced by UK regulators to sell off Giphy, maybe it'll be time soon to bring it back.)
The biggest problem, though, was that server-side Javascript bits rot. I want to be able to stand up a hobby project and forget it for months or years at a time. For a project at that pace, especially one that I think of as being very simple, the Javascript stack moves fast. I would get notices every week or so that this or that dependency had a required security update. Sometimes applying what looked like a small point update would cause a breaking change in an API (Axios!). Eventually, it became not-fun to think about keeping up Kapowski.
Multiplying this maintenance across a number of other Micropub clients I had managed to barely knock together on Glitch led to me burning out on the idea. So, I stopped maintaining it and at some point it stopped working.
Reviving the embers
I still want to see a thousand Micropub-powered flowers bloom, but I don't have the personal project bandwidth to build the tool set on Glitch that I thought would make that possible. I'm just not that fluent in server-side Javascript development and project management, and it's too far of a road right now to git gud.
That said, there are styles of web app development I am much more comfortable working in! I think I can take this stuff a lot further by sharpening the knives I already know how to use.
So, I've spent a good chunk of free time this year quietly porting some of my IndieWeb projects to PHP and hosting them on a virtual private server. That's stuff I know how to do! As I've re-built each one, I've also looked to extract the common points of similarity and complexity into a kind of "Micropub kit", with a common-but-extensible engine. That's made each client easier to build and deploy, and that's very exciting.
(This "micropub kit" isn't ready for public consumption at all but it is available for looking-at if you want. Here's the micropubkit source.)
What's next?
Since it's IndieWeb Gift Calendar season, I think I'll spend the next month polishing up and posting more about this work. If you have thoughts about Kapowski, "micropubkit", or posting weird stuff on the IndieWeb in general, I'd love to read them! Just reply to this post on your own site and send me a Webmention.
“This is the final inversion of blogging: not just publishing before selecting, nor researching before knowing your subject, but producing to attract, rather than serve, an audience.”
“The general idea is I have a list that contains lists of books. A list of books can contain books directly, or only be a link to that list of books. A list of books can be one of my own lists on my own domain, or it can be a list published by someone else on a different web address.”
“This is the story of the birth of the web, its loss of innocence, its decline, and what we can do to make it a bit less gross. Or if you prefer, this is the video in which I say the expression ‘barbecue sets’ far too many times.”
“I’m going to be talking about personal data warehouses, what they are, why you want one, how to build them and some of the interesting things you can do once you’ve set one up.”
First, some backstory. But feel free to skip to the good stuff!
With topics ranging from media and social critiques, to making and tech topics that I care about, to death itself, regular content from creators who post on YouTube has been a part of my daily life for the last several years.
This is enabled by three main features:
Subscriptions, to let me check in for new videos from creators I want to follow.
The Watch Later playlist, to let me save videos I wanted to include in my regular watching.
A YouTube app connected to my TV to let me play through my Watch Later list.
Over time, I feel that YouTube has been consistently chipping away at this experience for the sake of engagement.
In 2016, when I found the advertisements to be too invasive, I became a paid "YouTube Red" (now YouTube Premium) subscriber. With ads gone, and with so many content creators posting weekly or more, it was easy to let watching videos through YouTube become a regular habit. Turning off and clearing my YouTube viewing history helped mitigate some of the most creepy aspects of the suggestion system, at the cost of being able to track what I'd seen.
This replaced a lot of idle TV watching time. For several years!
"Progress" marches on, however, and the next thing to go was the experience of accessing the Watch Later playlist. I first noticed this after updating to a 4th generation Apple TV. From the (suggestion-cluttered) main screen of the YouTube app, you must make a series of precise swipes and taps down a narrow side menu to "Library", then to "Watch Later", then to the video that you'd like to start. Not long after, I noticed that the YouTube iOS app and the website itself had similarly moved Watch Later behind a "Library" option that was given the smallest of screen real-estate, overwhelmed by various lists of suggestions of "Recommended for You", "Channels You Might Like", and more.
Most recently, I noticed that YouTube has been changing the definition of a "subscription", where the iOS app will show a timeline of text posts and ephemeral "Moments" in between the actual video content that I am trying to see. Or they'll (experimentally?) try to chunk the subscription display by days or weeks.
All the while, this extra emphasis on recommended videos wore me down. I found myself clicking through to watch stuff that I had not planned to watch when sitting down. Sometimes this would be a fun waste of time. Sometimes I'd get dragged into sensationalized news doom-and-gloom. Regardless, I felt I was being manipulated into giving my time to these suggestions.
And hey, it's #Blocktober, so let's see if we can escape the algorithm a bit more.
A Plan
What I would like to achieve is what I described at the top of my post:
I want a way to check for new videos from creators I follow (no notifications, please).
I want a way to add those to a list for later viewing.
I want to view items from that list on my TV.
I have some tools that can help with each part of that plan.
RSS is (still) not dead
Feeds are already part of my daily life, thanks to an indie social reader setup. I run Aperture, a Microsub server that manages organizing source feeds in various formats, checking them periodically for new content, and processing them into items grouped by channel. I can browse and interact with those items and channels via Microsub clients, like Monocle which runs in the browser and on my mobile devices with an app called Indigenous.
Did you know that YouTube provides an RSS feed for every channel? It's true! In fact, if you visit your Subscription manager page, you'll find a link at the bottom to download a file containing the feed URLs for all of your subscriptions in a format called OPML.
My YouTube subscriptions download had more than 80 feeds (yikes!) so I didn't want to load these into Aperture by hand. Thankfully, there's a command-line tool called ek that could import all of them for me. I had a small issue between ek's expectations and YouTube's subscription file format, but was able to work around it pretty easily. Update 2020-10-04: the issue has already been fixed!
With Aperture taking care of checking these feeds, I can now look at a somewhat minimal listing of new videos from my subscribed channels whenever I want. For any new video I can see the channel it came from, the title of the video, and when it was posted. Importantly, I can click on it to open the video in the YouTube app to watch it right away or save it for later.
This feels like a lot of work to avoid the mildly-annoying experience of opening the YouTube app and browsing the subscriptions page.
We must go further.
Save me (for later)
In addition to fetching and parsing feeds, Aperture also has a bit of a secret feature: each channel has an API, and you can generate a secret token which lets you push content into that channel, via an open protocol called Micropub.
So in theory, I could browse through the list of new videos in my YouTube Subscriptions channel, and, somehow, use Micropub to save one of these posts in a different channel, maybe named Watch Later.
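As a rough sketch, the Micropub side of that save might look like the Python below. The endpoint URL and token are hypothetical placeholders standing in for the ones Aperture generates per channel; the form-encoded `h=entry` / `bookmark-of` properties follow the Micropub convention for creating a bookmark post.

```python
import urllib.parse
import urllib.request

# Hypothetical values: Aperture shows the real endpoint in its settings
# and generates a secret token for each channel.
MICROPUB_ENDPOINT = "https://aperture.example/micropub"
TOKEN = "xxxx"

def build_bookmark(video_url, title):
    """Form-encoded body for a Micropub h-entry bookmark post."""
    return urllib.parse.urlencode({
        "h": "entry",
        "name": title,
        "bookmark-of": video_url,
    })

def save_for_later(video_url, title):
    req = urllib.request.Request(
        MICROPUB_ENDPOINT,
        data=build_bookmark(video_url, title).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    # A conforming Micropub server replies 201 Created with a Location header.
    return urllib.request.urlopen(req)
```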
This is where we introduce a super handy service called Indiepaper. It's a bit of web plumbing which essentially takes in a URL, gets all the nice metadata it can figure out (what's the title of this page? who's the author? etc.), and creates a Micropub post about it, wherever you want.
The real ✨magic✨ of Indiepaper comes in the form of utilities that make adding an item take as few clicks as possible.
For your desktop web browser, Indiepaper can take your channel's Micropub URL and key and generate a bookmarklet which will send the current page you're looking at straight to your Watch Later channel. Add it to your browser's bookmark toolbar, load a YouTube video, click "Watch Later", and you're done!
For an iOS device, Indiepaper also provides a Shortcut that works the same way. Share a YouTube video URL (from the YouTube app, or straight from your reader) to the Shortcut and it adds the item to the channel right away.
So, now I can:
Browse new videos from my subscriptions in my feed reader.
Save videos on demand to a separate Watch Later channel in my feed reader.
However, something is missing. I still want to be able to watch these, distraction-free, on my TV.
The Last (and Longest) Mile
This is where things get ugly, folks. It is also where I admit that this project is not finished.
As far as I'm aware there are no apps for any "smart" TV or media appliance that can browse a Microsub channel. Much less one that can identify a video URL and send it off to the appropriate app for playback.
However, there are some existing ways to save media on your home network and play it back on your TV, such as Plex or Kodi.
So, here are some highlights:
Say you've got a Plex server with a library called "myTube". Your TV (maybe through an appliance) can run a Plex app that lets you browse and play that local media distraction-free.
An automated task on that server could act as a Microsub client, periodically looking in your Watch Later channel for new entries.
For each new entry, the automated task could fetch the video with a tool like youtube-dl and save it to the myTube folder, where Plex will find it.
Little details:
To prevent repeated downloads, the automated task should either delete or mark new entries as "read" once they've been downloaded.
Plex doesn't have an easy way to delete media from the TV interface. Perhaps the automated task can check with Plex to see if a video has been watched and, if so, remove it from myTube. Or maybe save it to a "watched" list somewhere!
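Here's a minimal sketch of what that automated task could look like, assuming a hypothetical Microsub endpoint, token, and channel name, with youtube-dl installed on the server. It polls the Watch Later channel's timeline and hands each entry's URL to youtube-dl, saving into the Plex library folder:

```python
import json
import subprocess
import urllib.request

# Hypothetical values for the sketch.
MICROSUB = "https://aperture.example/microsub"
TOKEN = "xxxx"
CHANNEL = "watch-later"
LIBRARY = "/media/myTube"

def entry_urls(timeline_json):
    """Pull the post URLs out of a Microsub timeline response."""
    data = json.loads(timeline_json)
    return [item["url"] for item in data.get("items", []) if "url" in item]

def fetch_new_videos():
    req = urllib.request.Request(
        f"{MICROSUB}?action=timeline&channel={CHANNEL}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        urls = entry_urls(resp.read().decode())
    for url in urls:
        # youtube-dl drops the file into the Plex library folder,
        # where Plex's periodic scan will pick it up.
        subprocess.run(
            ["youtube-dl", "-o", f"{LIBRARY}/%(title)s.%(ext)s", url],
            check=True,
        )
```

A real version would also mark each entry as read (or delete it) after a successful download, per the first "little detail" above.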
If this feels like a lot of work just to avoid some engagement temptation, that's because it is! It may sound simple to say that someone should build a competitor to YouTube that focuses on creators and viewers. One that doesn't seem to spend all its time pushing ads and pulling on you for engagement and all the other things that go into funding a corporate surveillance-driven behemoth.
But no matter how easy it feels to browse a slickly animated user interface that pushes carefully coached eye-grabbing thumbnails of videos with carefully coached compelling titles, there is a lot about video - even watching video - that is not easy!
It's good to stay mindful of what these services make easy for you, what they make hard, and what they make impossible. Trying to take charge of your own consumption is barely a first step.
What aspects of social media are you shutting down for yourself in #Blocktober?
I have a great fondness for IndieWeb building blocks, and Webmention is a wonderful meta-building-block that connects so many individual websites together.
Obligatory "what is Webmention?": it's a specification that describes a way to "tell" a website that some document out on the web links to one of the pages on that site.
Sound simple? It is! Perhaps even suspiciously simple. Webmention enables whole new kinds of interactions between sites (there are some great examples in this A List Apart piece). Unfortunately, almost all of the coordination to support these interactions happens outside of the "Webmention" spec itself.
So when I see blog post titles like these I am not sure exactly what to expect:
This may sound like a simple feature! We might expect it to look like this:
You see a post on the web that you like. Let's call that "their post".
On your own site, you make a post that links to theirs with some comment like "Nice post!". We'll call that "your post".
Assuming that you both "have Webmention support", you might check their post a little later and see a nice summary of your post as a comment below their content.
However, for a webmention to "succeed", a lot of coordination needs to happen.
On your side:
You publish "your post" which links to "their post". So far, so good. This is the web! You probably publish links on your site all the time.
When your post is live, you can try to send a webmention. How do you do that? It depends.
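To make "it depends" a little more concrete, here's a toy sender sketch in Python. It looks for a `<link rel="webmention">` in the target page's HTML (real endpoint discovery also checks HTTP Link headers and handles attributes in any order) and POSTs `source` and `target` to whatever endpoint it finds:

```python
import re
import urllib.parse
import urllib.request

def find_endpoint(html, base_url):
    """Toy endpoint discovery: first <link> or <a> whose rel contains
    "webmention", assuming rel appears before href in the tag."""
    m = re.search(
        r'<(?:link|a)\b[^>]*rel="[^"]*webmention[^"]*"[^>]*href="([^"]*)"',
        html,
    )
    if m:
        # The endpoint may be relative to the target page.
        return urllib.parse.urljoin(base_url, m.group(1))
    return None

def send_webmention(source, target, target_html):
    endpoint = find_endpoint(target_html, target)
    if endpoint is None:
        return None  # the target doesn't advertise webmention support
    body = urllib.parse.urlencode({"source": source, "target": target})
    return urllib.request.urlopen(endpoint, data=body.encode())
```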
From here, it's pretty much out of your hands. On their side:
Their post must advertise the URL of a service that will accept webmentions for that post.
Their service must check that your post is a real post on the web, and that it contains a link to their post. Then it ... stores it somewhere? Maybe it goes into a moderation queue?
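The verification step itself is small. Here's a simplified sketch that just checks the fetched source document for the target URL as a substring; a real receiver would parse the HTML and compare actual href values:

```python
import urllib.request

def links_to(html, target):
    """Does the source document mention the target URL at all?
    (Substring matching keeps this short; real receivers parse hrefs.)"""
    return target in html

def verify_mention(source, target):
    try:
        with urllib.request.urlopen(source) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False  # the source isn't fetchable: reject the mention
    return links_to(html, target)
```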
So then they have the webmention, but to actually display it, their site must:
Pull your post out of wherever their webmentions are stored.
Somehow understand what your post "is".
Render that into their post. If they want to.
When I see folks posting "I added Webmentions to my site" I want to believe that they have some version of all of the bullet points above.
Unfortunately, there are lots of incompletes.
A list, without references, of partial Webmention support I have seen
The Junk Drawer
Folks in this category are collecting webmentions, probably by signing up for a receiving service like webmention.io. Their posts advertise webmention.io as the place to send mentions, webmention.io dutifully checks and stores them, and ... that's it.
This kind of "Webmention support" is often announced alongside a sentence like "Next up I'll figure out how to display them!" For me, this conjures up images of the warehouse at the end of Raiders of the Lost Ark, or a house filled to the ceiling with stacks of moldy newspapers.
There are, I recognize, lots of good reasons not to display webmentions, beyond some of the technical speedbumps and pitfalls I talk about below. For example there are a lot of unanswered questions and not-yet-built tools and services for dealing with moderation and abuse.
"Why didn't my reply show up on your site?"
Static sites are back and I love it. But if there's one thing that static sites do extremely poorly it is responding dynamically to outside events. Some static sites (including my own!) will save webmentions as they come in, but won't display them until the next time a post is added or modified on the site.
Making sure your posts have the "correct" markup to look like you want can be difficult even for developers writing their own HTML. Tools like indiewebify.me, Monocle's preview, and microformats.io can help if you are getting your hands dirty. It's much harder for folks that just redesigned their site with a new WordPress theme.
Bridgy Over Troubled Waters
Bridgy is an absolutely incredible suite of services provided by Ryan, also for free, for the community.
With the power of Bridgy Backfeed you can use Webmention to feed replies, likes, and reposts from your Twitter tweets to their corresponding post on your own site! This works despite the fact that twitter.com does not link to your website because Bridgy generates little "bridge" pages from which to send webmentions. And it's just a little bit of tweaking to have your Webmention display code handle the quirks.
With the power of Bridgy Publish you can use Webmention to automatically copy posts from your website directly to social media silos like Twitter! You do this by hiding a link to Bridgy in your post, which sends a Webmention to Bridgy, and then Bridgy parses your post to understand it and figure out which bits to tweet. And then Bridgy responds with info about your new tweet. And it's just a little bit of tweaking to have your Webmention sender handle those quirks and update your post with that link.
These are all fantastic things that are built on top of Webmention but that I often feel are conflated with Webmention.
"Just let JavaScript do it!"
This one is a bit... unfair on my part. In fact, I think this setup is the best you can get for the least effort, and I encourage folks to go for it. It looks like this:
Register with webmention.io to receive, verify, and store your webmentions.
I love webmention.io and use it myself. It is an amazing community resource run by Aaron at no charge! Kevin's mention.tech is another great tool, as is VoxPelli's webmention.herokuapp.com. By configuring one of them to accept webmentions on your behalf, you save yourself a lot of self-hosting trouble. Each of these services provides APIs that let you pull out the mentions for pages across your site.
Similarly, webmention.js is a really great tool developed by fluffy (at no charge!) that hides a lot of complexity and forethought about how to display webmentions with a single JavaScript include.
All that said, I have some issues with this particular combo long-term because all the fetching and display of webmentions happens in the browser of the person viewing your post.
If 1,000 people visit your post, that's 1,000 requests to webmention.io, putting load on a service being run by one individual for free.
This setup also means that the webmentions for a post aren't included in the original HTML. So, if your site sends a webmention and wants to check back automatically to see if it's shown up, but their site only displays webmentions via JavaScript, then your site will never see it. Likewise, it becomes much harder to keep track of reply chains, for example.
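One way around both problems is to fetch your mentions server-side at build time and bake them into the HTML. A sketch against webmention.io's jf2 API (the endpoint and `target` parameter match its documented API, but treat the exact shapes here as assumptions to verify against your own account):

```python
import json
import urllib.parse
import urllib.request

API = "https://webmention.io/api/mentions.jf2"

def mentions_url(target):
    """Query URL for all stored mentions of one page."""
    return API + "?" + urllib.parse.urlencode({"target": target})

def fetch_mentions(target):
    # The jf2 response is a feed object whose "children" are the mentions;
    # render these into the page at build time instead of in the browser.
    with urllib.request.urlopen(mentions_url(target)) as resp:
        return json.loads(resp.read().decode()).get("children", [])
```

Running this once per build means one request to webmention.io per deploy, instead of one per visitor.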
Why are you being such a downer about this?
Despite, apparently, being a bit salty today, I really do get excited about Webmention, how it's being used in so many ways to connect independent sites, and new ways it can be used in the future.
I'm worried, a bit, that "Webmention" is starting to lose its meaning in conversation. It's starting to feel like a shorthand that hides important details.
Yesterday I joined in the West Coast Homebrew Website Club via Zoom. I was worried I'd be too tired to join - it was held 9-10:30pm my time - but after a nice dinner and wind-down from the day I was in a pretty good place.
A recurring theme as new folks join the IndieWeb community is that the wiki is an amazing resource but also a source of crippling information overload. In recent years we have discussed ways to (re-)organize content for folks getting their own site started. On his own, Jason has begun writing guided intros for developers on topics like Webmention. But as Sarah pointed out, IndieWeb is much more than just developer documentation or guides for folks setting up their own websites. IndieWeb is also an active community with frequent live events, 24/7 discussions, and a long-term memory of observations and ideas and projects and participants. So, perhaps it's worth thinking about onboarding from a perspective of introducing new folks to this community, what they can expect, and how to successfully engage.
For my part, I continued research into the cursed project that I've been noodling on for years and finally started during IndieWebCamp 2020 West. My goal is to give folks a free way to dip their toes into the IndieWeb without needing to understand the building blocks first, but with on-ramps to understanding, customizing, and improving any part of it. While I am learning a lot from projects like Postr, I think I will end up actually making my own version of building blocks like authorization, token, and micropub endpoints. By way of being careful, I would like to be able to test that each one works properly. So that's how I ended up volunteering to help build out the IndieAuth.rocks test suite.
I really enjoy and appreciate these meetups and I look forward to joining the next one!