A slightly messy visit to the decentralized web

Maybe closing some tabs will help with what feels like an unending anxiety?

Here go a few.

At the beginning of December the Internet Archive hosted a Decentralized Web (aka DWeb) Meetup online with lightning talks from 12 different groups / projects.

You can find the full video of the event at archive.org and one of the attendees captured some notes covering their takeaways.

Here are some of the highlights, from my perspective.

Beaker is now 1.0!

The Beaker Browser has been through some major changes and is now at 1.0. It has fully migrated from dat:// URLs (and some related under-the-hood tech) to hyper:// URLs (and new under-the-hood tech). There's a migration tool to move dat:// sites to hyper://, but several of the APIs have changed, so while the tool makes old sites accessible at hyper:// URLs, many of them won't work without some rewriting.

Paul Frazee, who gave the Beaker lightning talk at the DWeb Meetup, spent most of his time talking about what did not ship in 1.0. For a while the Beaker team had been building in social features (profiles, microblogs, and much more), but in the end they decided to rip it all out in order to focus on a simpler experience: being good at creating decentralized websites. The plan seems to be to let those features move into their own apps, possibly at the hands of the community.

One thing that stood out to me was a comment that the team seemed to hit some barriers with the underlying approach they were taking to build these social features: merging files from lots of shared and synced "drives" into a singular experience. I have yet to dip my toes into the waters of building on hyper, but from the little bits I've absorbed, this is one of the few approaches I think I understand, and if its creators are hitting performance problems with it, I don't have high hopes of figuring it out myself. There was mention of a new approach called Hyperbee, but it still feels very Computer Science to me at the moment. I look forward to seeing some new stuff built on it, though!

These details and many more are discussed in the videos from the summer DAT Conference, including an earlier talk about Beaker, and a great interactive workshop on building stuff with Dat-SDK by Mauve.

In general I am excited about stuff that is happening in DAT / hyper, but I think a few things are stopping me from getting into it. Beaker seems, to me, to be a kind of flagship experience for DAT and hyper and the big leap they just made to hyper left an unknown number of projects behind. That's a big filter, and it doesn't give me confidence in the longevity of any new project I might build at the moment.

Hello Agregore!

At both the DWeb Meetup and the summer's DAT Conference, Mauve gave some great introductions to Agregore, "a minimal web browser for the distributed web".

I am infatuated with this project, which I will attempt to explain here, badly. Agregore is a browser that focuses on making it easy to build and use apps based on distributed web technologies like hyper, IPFS, and many, many more. This is made possible through a plugin-based architecture that makes it easy to add new protocols to the browser, and by a set of libraries which abstract away the complexities of each protocol behind an HTTP-like interface.

What I find really fun about this is that it encourages mashing up these different technologies. You can link freely between regular HTTP sites and sites on decentralized protocols. You could build a web app on hyper:// that offloads large media files onto IPFS. Heck, even though IPFS has been getting money and hype for years, I think Agregore was the first app I could just download and immediately use to access IPFS content. It's even got a protocol handler for Gemini, a kind of baffling (to me) alternate-universe version of Markdown blogs on Gopher. And more protocols — and related alternative tech like DNS via .eth domains (maybe someday .bit and .onion?) — are in the works.
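To give a flavor of what that mashing-up looks like in practice: in Agregore, the decentralized protocols are exposed through the same fetch() interface as plain HTTP, so a page can load content from a URL without much caring what scheme it uses. Here's a minimal sketch of that idea. The helper names are mine, the CIDs and URLs are made-up examples, and the ipfs.io gateway fallback is an assumption for browsers that don't speak IPFS natively; only the "in Agregore it's just fetch(url)" part reflects how the browser actually presents these protocols.

```javascript
// Sketch: treating hyper://, ipfs://, and https:// URLs uniformly.
// In Agregore itself, fetch() handles all of these directly; the gateway
// rewrite below is an assumed fallback for ordinary browsers.

// Return the protocol scheme of a URL, e.g. "hyper" or "ipfs".
function schemeOf(url) {
  return new URL(url).protocol.replace(/:$/, '');
}

// Rewrite an ipfs:// URL to a public HTTP gateway URL. Note that the
// WHATWG URL parser preserves case for non-special schemes, so the
// (case-sensitive) CID in the host position survives intact.
function toGatewayUrl(url) {
  const { protocol, hostname, pathname } = new URL(url);
  if (protocol === 'ipfs:') {
    return `https://ipfs.io/ipfs/${hostname}${pathname}`;
  }
  // hyper:// has no widely used public gateway; pass it (and plain
  // HTTP/S URLs) through unchanged and hope the browser understands.
  return url;
}

// In Agregore this whole function collapses to fetch(url).
async function load(url) {
  const res = await fetch(toGatewayUrl(url));
  if (!res.ok) throw new Error(`failed to load ${url}`);
  return res.text();
}
```

The fun part is that load() doesn't branch on where the content lives: a hyper:// site, an IPFS image, and a classic HTTPS page all flow through the same code path.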

I can hardly think of a better web sandbox. I love the focus on "web apps" because HTML, CSS, JavaScript, and media in a browser are super flexible. The ability to make apps that bridge across the classic web, decentralized protocols, and maybe even local files, feels like it opens up new worlds of possibilities.

I still have lots of questions about how to make things stick around on protocols like hyper:// and ipfs:// and ipns:// and I don't think I'll be doing much more than tinkering until I understand those features better.

Speaking of Sticking Around

Through the IndieWeb chat I caught a reference to a blog series on decentralized web tech that is now a few years out of date at decentralized.blog. In a series called "blockchain train journal", the author sets out to build their blog on decentralized tech, evaluates several of the technologies available at the time (in 2017), and discusses some experiments on publishing.

I found this one post, trying out a handful of ways of making human-friendly names for content on IPFS, particularly interesting. In that post, and others, the author makes reference to the URLs of several bits of content that they had published via IPFS and IPNS, including some experiments on resolving IPFS content via regular DNS.
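For context on the DNS experiments: the standard mechanism for pointing a regular domain at IPFS content is DNSLink, where a TXT record on _dnslink.<domain> carries a value like "dnslink=/ipfs/<cid>". I believe that's the kind of record the author was playing with. Here's a tiny sketch of parsing such a record value; the function name and the example CIDs are my own made-up illustrations, not the blog's real records.

```javascript
// DNSLink maps a domain to IPFS/IPNS content via a TXT record on
// _dnslink.<domain> whose value looks like "dnslink=/ipfs/<cid>".
// Parse that value into a content path, or return null if it isn't one.
function parseDnslink(txtValue) {
  const match = /^dnslink=(\/(?:ipfs|ipns)\/\S+)$/.exec(txtValue.trim());
  return match ? match[1] : null;
}

// Example (hypothetical record value):
//   parseDnslink('dnslink=/ipfs/QmExampleHash')  →  '/ipfs/QmExampleHash'
```

A resolver that finds such a record can then fetch the path from any IPFS node or gateway — which is also why, as it turns out below, the content quietly vanishing from the network makes the DNS pointer useless.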

A couple of years later and... that content seems to be gone. I haven't been able to resolve any of the IPFS or IPNS versions of any of these blog posts. There seems to be no DNS entry pointing to any IPFS/IPNS content anymore. One of the main "features" of decentralized networks like IPFS and hyper (and many more) is that they forget content extremely quickly. If you're not paying a service to host it for you, or taking care to host it yourself, it simply fades away.

However, the blog continues to be available on the plain-old web — at https://decentralized.blog/ — with one interesting caveat. When I visited decentralized.blog for the first time my browser warned me that the connection was not secure because the certificate that it uses to encrypt HTTPS traffic and assert its identity has expired. It seems that the site is configured to redirect plain HTTP traffic to HTTPS. Thankfully my browser, for now, allows me to ignore this warning and read the site, despite the author failing to pay this HTTPS admin tax.

And on my own forgetting...

The decentralized.blog writeup on .bit domains and Namecoin reminded me that, at about the same time this blogger was exploring IPFS and more, I was excited about a sort of Beaker competitor called ZeroNet. I had made a simple demo site for myself and played around with making profiles on demo sites that kind of emulated Twitter, Reddit, and more.

I even got around to figuring out how to buy some Namecoin and register and configure my own domain. So you could find my little test site at schmarty.bit.

However, beyond the initial configuration, Namecoin also has some upkeep requirements! Every 5-6 months (ish) I would have to open up my Namecoin wallet, let it sync the intervening months of transactions, and then spend a tiny amount of the coin in my wallet to keep the record up to date.

Of course, my focus eventually moved to other things. I got a new laptop and stopped using the one with my Namecoin wallet, and I eventually let it expire. It took about 8 months before a spammer grabbed it to advertise bitcoin services. About 8 months after that it was updated to note that it was being squatted and available for purchase.

I doubt I'll get around to trying to negotiate for its return. Something about the whole thing feels a little hopeless to me.

But it's on a blockchain, so you can go revisit the story of schmarty.bit any time you like. For as long as people keep mining Namecoin.


Likes

Beko Pharm

Bookmarks

Johan Bové

Mentions

Johan Bové said:

Thanks for sharing your thoughts on the DWeb. You point out some true challenges that this exciting Web Space is facing at the time you wrote the post and is still facing today. About the "phasing out" or disappearing of the data on the IPFS network; I personally think it's kind poetic almost that for as long as someone is interested in content, it will always be there on the IPFS network. You only need one node with the content to have the data to be able to reach it. Unfortunately perhaps, the …