Marty McGuire

Posts Tagged reader

2020
Sat Oct 3

Unsubscribing from YouTube's recommender

First, some backstory. But feel free to skip to the good stuff!

With topics ranging from media and social critiques, to making and tech subjects I care about, to death itself, regular content from creators who post on YouTube has been a part of my daily life for the last several years.

This is enabled by three main features:

  • Subscriptions, to let me check in for new videos from creators I want to follow.
  • The Watch Later playlist, to let me save videos I want to include in my regular watching.
  • A YouTube app connected to my TV to let me play through my Watch Later list.

Over time, I feel that YouTube has been consistently chipping away at this experience for the sake of engagement.

In 2016, when I found the advertisements to be too invasive, I became a paid "YouTube Red" (now YouTube Premium) subscriber. With ads gone, and with so many content creators posting weekly or more, it was easy to let watching videos through YouTube become a regular habit. Turning off and clearing my YouTube viewing history helped mitigate some of the creepiest aspects of the suggestion system, at the cost of being able to track what I'd seen.

This replaced a lot of idle TV watching time. For several years!

"Progress" marches on, however, and the next thing to go was the experience of accessing the Watch Later playlist. I first noticed this after updating to a 4th generation Apple TV. From the (suggestion-cluttered) main screen of the YouTube app, you must make a series of precise swipes and taps down a narrow side menu to "Library", then to "Watch Later", then to the video that you'd like to start. Not long after, I noticed that the YouTube iOS app and the website itself had similarly moved Watch Later behind a "Library" option that was given the smallest of screen real-estate, overwhelmed by various lists of suggestions of "Recommended for You", "Channels You Might Like", and more.

Most recently, I noticed that YouTube has been changing the definition of a "subscription": the iOS app now shows a timeline of text posts and ephemeral "Moments" in between the actual videos I am trying to see. Or it will (experimentally?) chunk the subscription display by days or weeks.

All the while, this extra emphasis on recommended videos wore me down. I found myself clicking through to watch stuff that I had not planned to watch when sitting down. Sometimes this would be a fun waste of time. Sometimes I'd get dragged into sensationalized news doom-and-gloom. Regardless, I felt I was being manipulated into giving my time to these suggestions.

And hey, it's #Blocktober, so let's see if we can escape the algorithm a bit more.

A Plan

What I would like to achieve is what I described at the top of my post:

  • I want a way to check for new videos from creators I follow (no notifications, please).
  • I want a way to add those to a list for later viewing.
  • I want to view items from that list on my TV.

I have some tools that can help with each part of that plan.

RSS is (still) not dead

Feeds are already part of my daily life, thanks to an indie social reader setup. I run Aperture, a Microsub server that organizes source feeds in various formats, checks them periodically for new content, and groups the resulting items into channels. I can browse and interact with those items and channels via Microsub clients like Monocle, which runs in the browser, and Indigenous, an app on my mobile devices.
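
For the curious, the protocol behind all of this is small. Here's a rough Python sketch of the two Microsub calls a client like Monocle makes; the endpoint and token are placeholders, and I'm assuming the requests library:

# A rough sketch of what a Microsub client does. The endpoint URL and
# token are placeholders; your Microsub server provides real values.
import requests

MICROSUB = "https://aperture.example/microsub"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder token

# Ask the server which channels it is grouping feeds into.
resp = requests.get(MICROSUB, headers=HEADERS, params={"action": "channels"})
channels = resp.json()["channels"]

# Fetch the timeline of the first channel; items arrive as jf2 entries
# with fields like "name", "url", "published", and "author".
resp = requests.get(
    MICROSUB,
    headers=HEADERS,
    params={"action": "timeline", "channel": channels[0]["uid"]},
)
for item in resp.json()["items"]:
    print(item.get("published"), item.get("name"), item.get("url"))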

Did you know that YouTube provides an RSS feed for every channel? It's true! In fact, if you visit your Subscription manager page, you'll find a link at the bottom to download a file containing the feed URLs for all of your subscriptions in a format called OPML.

Screenshot of an interface listing channel subscriptions. At the bottom is an entry named "Export to RSS readers" with a button labeled "Export subscriptions". The button is highlighted with hand-drawn pink annotations of an arrow and a circle.
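
For reference, every channel's feed lives at a predictable URL based on the channel ID, and playlists get feeds too:

    https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID
    https://www.youtube.com/feeds/videos.xml?playlist_id=PLAYLIST_ID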

My YouTube subscriptions download had more than 80 feeds (yikes!) so I didn't want to load these into Aperture by hand. Thankfully, there's a command-line tool called ek that could import all of them for me. I had a small issue between ek's expectations and YouTube's subscription file format, but was able to work around it pretty easily. Update 2020-10-04: the issue has already been fixed!

A list of feed URLs in Aperture
A list of videos in Monocle, showing channel name and video title.
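
For a sense of what ek is doing under the hood, here's a rough sketch built on the Microsub "follow" action; the endpoint, token, channel uid, and filename are placeholders:

# A rough sketch of an OPML import, assuming the "requests" library.
# Endpoint, token, channel uid, and filename are placeholders.
import xml.etree.ElementTree as ET
import requests

MICROSUB = "https://aperture.example/microsub"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder token
CHANNEL = "youtube-subscriptions"  # uid of the target channel

# YouTube's export lists each subscription as an <outline> element
# with an xmlUrl attribute like:
#   https://www.youtube.com/feeds/videos.xml?channel_id=...
tree = ET.parse("subscription_manager.opml")
feeds = [n.attrib["xmlUrl"] for n in tree.iter("outline") if "xmlUrl" in n.attrib]

for url in feeds:
    # The Microsub "follow" action subscribes the channel to a feed.
    requests.post(
        MICROSUB,
        headers=HEADERS,
        data={"action": "follow", "channel": CHANNEL, "url": url},
    )
    print("followed", url)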

With Aperture taking care of checking these feeds, I can now look at a somewhat minimal listing of new videos from my subscribed channels whenever I want. For any new video I can see the channel it came from, the title of the video, and when it was posted. Importantly, I can click on it to open the video in the YouTube app to watch it right away or save it for later.

This feels like a lot of work to avoid the mildly-annoying experience of opening the YouTube app and browsing the subscriptions page.

We must go further.

Save me (for later)

In addition to fetching and parsing feeds, Aperture also has a bit of a secret feature: each channel has an API, and you can generate a secret token which lets you push content into that channel, via an open protocol called Micropub.

So in theory, I could browse through the list of new videos in my YouTube Subscriptions channel, and — somehow — use Micropub to save one of these posts in a different channel, maybe named Watch Later.
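
The raw request behind that idea is tiny. Here's a minimal sketch, assuming the requests library; the endpoint and token are placeholders for the values Aperture generates:

# A minimal sketch of a Micropub "bookmark" post into a channel.
# Endpoint, token, and video URL are placeholders.
import requests

MICROPUB = "https://aperture.example/micropub"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer WATCH_LATER_TOKEN"}  # channel token

requests.post(
    MICROPUB,
    headers=HEADERS,
    data={
        "h": "entry",
        "name": "Title of the video",
        "bookmark-of": "https://www.youtube.com/watch?v=VIDEO_ID",
    },
)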

This is where we introduce a super handy service called Indiepaper. It's a bit of web plumbing which essentially takes in a URL, gets all the nice metadata it can figure out (what's the title of this page? who's the author? etc.), and creates a Micropub post about it, wherever you want.

The real ✨magic✨ of Indiepaper comes in the form of utilities that make adding an item take as few clicks as possible.

For your desktop web browser, Indiepaper can take your channel's Micropub URL and key and generate a bookmarklet which will send the current page you're looking at straight to your Watch Later channel. Add it to your browser's bookmark toolbar, load a YouTube video, click "Watch Later", and you're done!

For an iOS device, Indiepaper also provides a Shortcut that works the same way. Share a YouTube video URL (from the YouTube app, or straight from your reader) to the Shortcut and it adds the item to the channel right away.

For example, I can load up this YouTube video by Aaron Parecki about making a DIY Streaming Bridge with a Raspberry Pi for the ATEM Mini and OBS in my browser and click the "Watch Later" bookmark in my bookmarks toolbar. After a brief delay, I'll see a notification that it "Saved!", and can check my Watch Later channel (marked with the television emoji 📺) to see that it's there now.

Screenshot of a Watch Later channel in Monocle with the saved video.

At this point I can:

  • Browse new videos from my subscriptions in my feed reader.
  • Save videos on demand to a separate Watch Later channel in my feed reader.

However, something is missing. I still want to be able to watch these, distraction-free, on my TV.

The Last (and Longest) Mile

This is where things get ugly, folks. It is also where I admit that this project is not finished.

As far as I'm aware, there are no apps for any "smart" TV or media appliance that can browse a Microsub channel, much less identify a video URL and send it off to the appropriate app for playback.

However, there are some existing ways to save media on your home network and play it back on your TV, such as Plex or Kodi.

So, here are some highlights:

  • Assume you've got a Plex server with a library called "myTube". Your TV (maybe through an appliance) can run a Plex app that lets you browse and play that local media distraction-free.
  • An automated task on that server could act as a Microsub client, periodically looking in your Watch Later channel for new entries.
  • For each new entry, the automated task could fetch the video with a tool like youtube-dl and save it to the myTube folder, where Plex will find it.

Little details:

  • To prevent repeated downloads, the automated task should either delete new entries or mark them as "read" once they've been downloaded (the sketch below takes the mark-as-read approach).
  • Plex doesn't have an easy way to delete media from the TV interface. Perhaps the automated task can check with Plex to see if a video has been watched and, if so, remove it from myTube. Or maybe save it to a "watched" list somewhere!
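
To make that concrete, here's a rough, untested sketch of the whole loop; the endpoint, token, channel uid, and output path are placeholders, and it leans on youtube-dl's Python API plus the mark_read method from the Microsub spec:

# A rough sketch of the automated task, assuming the "requests" and
# "youtube_dl" libraries. Endpoint, token, channel uid, and library
# path are all placeholders.
import requests
from youtube_dl import YoutubeDL

MICROSUB = "https://aperture.example/microsub"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder token
CHANNEL = "watch-later"  # uid of the Watch Later channel
OUTPUT = "/media/myTube/%(title)s.%(ext)s"  # the Plex library folder

# Check the Watch Later channel for entries we haven't handled yet.
timeline = requests.get(
    MICROSUB,
    headers=HEADERS,
    params={"action": "timeline", "channel": CHANNEL},
).json()

downloader = YoutubeDL({"outtmpl": OUTPUT})
for item in timeline.get("items", []):
    if item.get("_is_read") or not item.get("url"):
        continue  # already handled, or nothing to download
    downloader.download([item["url"]])
    # Mark the entry read so the next run skips it.
    requests.post(
        MICROSUB,
        headers=HEADERS,
        data={
            "action": "timeline",
            "method": "mark_read",
            "channel": CHANNEL,
            "entry": item["_id"],
        },
    )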

If this feels like a lot of work just to avoid some engagement temptation, that's because it is! It may sound simple to say that someone should build a competitor to YouTube that focuses on creators and viewers. One that doesn't spend all its time pushing ads, pulling at you for engagement, and doing all the other things that go into funding a corporate surveillance-driven behemoth.

But no matter how easy it feels to browse a slickly animated user interface that pushes carefully coached, eye-grabbing thumbnails with equally coached, compelling titles, there is a lot about video - even watching video - that is not easy!

It's good to stay mindful of what these services make easy for you, what they make hard, and what they make impossible. Trying to take charge of your own consumption is barely a first step.

What aspects of social media are you shutting down for yourself in #Blocktober?

2017
Fri Aug 11

I (finally) watched the video of this session from IndieWeb Summit 2017:

https://youtu.be/C0Cv10b83RE

One of the most powerful ideas discussed is the “algorithm that works for you”, to avoid information overwhelm.

Time to nudge the folks that started working on Together after the session!

https://cleverdevil.io/2017/richard-its-early-but-at-last-weekends-indieweb-summit-in

From that post:

"Richard, it's early, but at last weekend's IndieWeb Summit in Portland, a small group of us started tinkering on what we hope could be the Timeline of the Open Web."
Tue Jun 13
🔖 Bookmarked Feed reader revolution: it’s time to embrace open & disrupt social media – AltPlatform http://altplatform.org/2017/06/09/feed-reader-revolution/

“Ideally the increased competition will prod current social silos to open up and compete on a more even playing field in which they’re supporting these open protocols as well. Then anyone with a web presence can use it to communicate from one website to another (or one permalink URL to any other permalink URL) on the internet, in a way that’s as simple and easy as any of the methods that’s currently available in the closed social spectrum.”

2016
Mon Nov 28

Self-Hosting kylewm's Woodwind Indie Reader

One of my favorite aspects of the IndieWeb community is that when you get things "right" with your website, you often get a bunch of fun interoperability with other IndieWeb-compatible websites "for free". For example, the Micropub standard lets you use lots of different clients to post to your own site, and the Webmention standard lets sites notify one another of things like comments, event RSVPs, etc.

Fundamental to having these technologies work well together is microformats2 (mf2), a lightweight way of marking up "structural information" in HTML so that a machine can make (some) sense of the information, such as the name, url, and photo of the author, hints on the important pieces of content in a page, etc.

Getting these things "right" on my own website led me to look for a "Reader" that would make use of the mf2 data and attempt to display it in a meaningful way.

One of the popular readers I saw talked about in the #IndieWeb chat was Woodwind. It was easy to get started by logging in with my own website and then subscribing to my own site to get all my h-feeds, h-entrys, h-cards, etc. in a row. Recently, the hosted version of Woodwind at https://woodwind.xyz/ was down for a few days, so I set out to host my own.

Initial Setup

Thankfully, Woodwind is on GitHub and the Installation instructions are pretty good for getting started. Since I already had a server with the expected dependencies (Python3, PostgreSQL, and Redis), I was able to get a test site up and running in a few steps:

  1. clone the git repo
    git clone https://github.com/kylewm/woodwind.git
    cd woodwind
  2. create Python3 virtualenv and activate it
    virtualenv --python=/usr/bin/python3 venv
    source venv/bin/activate
  3. install the required Python libraries with pip
    sudo apt-get install python3-dev
    pip install -r requirements.txt
  4. as the postgres user, create the woodwind database and the database user that would access it

    $ sudo -u postgres createdb woodwind
    $ sudo -u postgres psql woodwind
    woodwind=# create user woodwind_user with password '...';
  5. copy woodwind.cfg.template to woodwind.cfg and edit it:

    SECRET_KEY = '...'
    SERVER_NAME = 'woodwind.yourdomain.com'
    SQLALCHEMY_DATABASE_URI = 'postgres://woodwind_user:DB_PASSWORD@localhost/woodwind'

  6. run the init_db.py script
    python init_db.py
    • At this point I discovered a typo in woodwind/views.py that was throwing errors - a missing parenthesis. Once fixed, this ran fine.
    • I've created a pull request for this, so kylewm can merge it back in eventually.
  7. finally, use uwsgi to run the demo version
    uwsgi woodwind-dev.ini
  8. visit localhost:3000 in my browser to confirm that Woodwind was running!

This had me off to a very good start, but I wanted to be able to visit my copy of Woodwind from anywhere using a public domain name, protect my activity from eavesdroppers on the network with HTTPS, and have Woodwind up and running reliably across server crashes, reboots, etc.

Setting up Woodwind with uwsgi, Upstart, and nginx

Woodwind is an application written in Python. uwsgi is an application server that can run that code on demand, efficiently. It is possible to run uwsgi by hand as we did above, but I wanted the service to be started and managed automatically by the operating system.

I run an Ubuntu server with the upstart process manager. So, I created an upstart configuration for Woodwind at /etc/init/woodwind.conf:

description "woodwind uwsgi instance"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid woodwind_user
setgid woodwind_user
chdir /home/woodwind_user/woodwind
env LC_ALL=C.UTF-8
export LC_ALL
env LANG=C.UTF-8
export LANG
script
  . venv/bin/activate
  uwsgi --ini woodwind.ini
end script

With this, the uwsgi server should start up on boot to serve Woodwind, and I can now manage woodwind from the command line. For example:

$ sudo start woodwind
woodwind start/running, process 14104
$ status woodwind
woodwind start/running, process 14104
$ sudo restart woodwind
woodwind start/running, process 14246
$ sudo stop woodwind
woodwind stop/waiting

Since I wanted to use HTTPS to protect my activity on Woodwind from network eavesdropping, I used Let's Encrypt and their certbot tool to create an SSL certificate for my domain. The steps are:

  1. Create a DNS entry for woodwind.yourdomain.com to point to the public IP address of my server. This may take some time to propagate and certbot won't work until it has taken effect.
  2. Install certbot
  3. Stop nginx from running, temporarily.
  4. Use certbot to issue a certificate:
    ./certbot-auto certonly --standalone \
      --standalone-supported-challenges http-01 \
      -d woodwind.yourdomain.com

This resulted in an SSL certificate and key pair that I could use to encrypt traffic to this domain.

Next up, I needed something to actually handle the HTTPS requests and pass them along to uwsgi. I used nginx for this because I was already using it on this server. In my nginx config directory, I created a woodwind.conf file:

upstream woodwind {
    server unix:/tmp/woodwind.sock;
}

upstream woodwind_wss {
    server localhost:8077;
}

server {
    listen *:443;
    server_name woodwind.yourdomain.com;
    server_tokens off;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/woodwind.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/woodwind.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    root /home/woodwind_user/woodwind;

    access_log /var/log/nginx/woodwind_access.log;
    error_log /var/log/nginx/woodwind_error.log;

    location /_updates {
        uwsgi_pass woodwind_wss;
    }

    location / {
        try_files /woodwind/static/$uri /frontend/$uri @woodwind;
    }

    location @woodwind {
        uwsgi_pass woodwind;
        include uwsgi_params;
        uwsgi_buffering off;
    }
}

This nginx configuration has some things worth noting:

  • In addition to running a process that answers regular HTTP requests on a unix socket at /tmp/woodwind.sock, Woodwind also runs a service that answers WebSocket traffic at localhost:8077 for nifty features like live updating the page in your browser when a feed is updated.
  • Woodwind serves some static files out of its /woodwind/static folder as well as the /frontend folder. I needed to install the dependencies in /frontend using npm:
    $ cd frontend
    $ npm install --production

After all this setup, I restarted nginx and was able to visit Woodwind in my browser!

I am happy with my setup so far. I am not quite sure yet if I did the WebSockets configuration correctly, but in general things seem to be working alright. I hope this information is useful to someone down the road, even if it is just future me.