I also really appreciated your other post[0]. It's what allowed me to understand that SSB is not really a social network but rather a p2p protocol for exchanging messages, on which applications can be built.
It's been a while, but I used to be an active SSB user.
I hosted SSB pubs and used to post on patchwork semi-regularly.
I thought it worked pretty well as a social network. I discovered new and interesting ideas from folks that I don't see much on mainstream social media.
I haven't followed the space much recently, and I'm curious about how it has evolved over the last year or so.
My favorite memories on SSB:
Someone promoted a book that they had written, and we arranged for a sales transaction by talking purely over the network. I sent them some amount of Bitcoin, and they sent me the PDF of their book. It felt very personal to work with the author directly, and side-step payment processors.
I loved taking my laptop out on the train or to a coffee shop, and replying to threads and publishing a post to SSB while offline. Something about reading other people's ideas while disconnected, then writing my thoughts and having them automatically sync to the network when I got back on my WiFi at home, gave me a different perspective on ways to use technology.
> I loved taking my laptop out on the train or to a coffee shop, and replying to threads and publishing a post to SSB while offline. Something about reading other people's ideas while disconnected, then writing my thoughts and having them automatically sync to the network when I got back on my WiFi at home, gave me a different perspective on ways to use technology.
You can do this with Usenet and most BBS's. Most native IM apps will also do this for you, plus of course there's email.
One cool feature of Scuttlebot is that if you and your friend are already following each other, you only need a connection to each other P2P to be able to send messages to each other. So if you're on a train with ad-hoc WiFi connected to each other, you can still proceed as usual and sync stuff.
I don't think this feature exists in Usenet and BBSs, where there is a central server that masterminds the sync everyone is doing. Same with email: it requires a server (local or remote) to send and receive, while in SSB the local and remote are usually the same machine.
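To make the idea concrete, here is a toy sketch of the append-only-log sync that makes this serverless exchange possible. This is not the real scuttlebot API — the names and data shapes are made up for illustration — but it shows why two mutually-following peers need nothing but each other: syncing a feed is just "tell me the latest sequence number you have, then send me what I'm missing".

```javascript
// A feed is one author's append-only log; each message carries a
// monotonically increasing sequence number.
function makeFeed(author) {
  return { author, messages: [] };
}

function append(feed, content) {
  feed.messages.push({ seq: feed.messages.length + 1, content });
}

// The only thing a peer needs to ask: "what's the latest seq you hold?"
function latestSeq(feed) {
  return feed.messages.length;
}

// Messages the other peer is missing, given the seq it reported.
function messagesSince(feed, seq) {
  return feed.messages.slice(seq);
}

// A two-way sync of one author's feed between two peers' copies.
// Running it again after it completes is a no-op.
function sync(a, b) {
  for (const msg of messagesSince(a, latestSeq(b))) b.messages.push(msg);
  for (const msg of messagesSince(b, latestSeq(a))) a.messages.push(msg);
}
```

On an ad-hoc train WiFi, two friends would just run this exchange directly against each other's machines; there is no third party whose state matters.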
For BBSs, you're correct. Usenet (and email) used UUCP, which I think is actually much closer in concept.
UUCP is a store-and-forward mechanism, not dependent on a real-time connection to a particular server. I used to run a node, connected to a guy I'd met who worked for an ISP. He had, gasp, a full-time network connection via ISDN; pretty magical in those days of dial-up.
So, Usenet feeds were configured on my own little system, essentially subscribing to the newsgroups I wanted. Periodically, it would dial out to the other gent, upload any new posts from me, and download anything new on those newsgroups. My email came and went the same way. Naturally, what I got was a subset of what he had accessible.
While I never used this functionality, I could have had others call up to me, and I would just be an intermediate link in the chain. RFC 976 (https://tools.ietf.org/html/rfc976) describes how this works for email, including SMTP over UUCP.
The lack of multi-device support was a constraint that gave me a different perspective of interactions on the web.
My ssb keypair was on a work laptop, so when I changed jobs and had to give my laptop back, I lost my keypair. Now, I could have exported the keypair and continued to use my "account" on my new laptop. The network would have synced on my new device, and I'd have gotten all my posts and pictures back. But I decided to embrace the constraint instead.
When I rejoin the network, I'll have a new keypair, and no post history. I think this can have an interesting effect on how we view our attachment to data.
Creating a new account from scratch also means rotating your keys, which is a good practice. As the last few discussions on PGP have shown, the model of having a long-term identity key is more dangerous than it seems, because a single mistake (by you or by the application developer) means a lot of content can be leaked. It's probably easier to let the natural connections between people be the vector of long-term trust, which, ironically, is what SSB emphasizes.
I love SSB, in principle. The protocol itself is very well documented[1]. The community tends to center lofty ideals around accessibility, anti-authoritarianism, and social responsibility[2][3] which I'm all about.
Unfortunately, I've found the software implementations maintained by the SSBC to be "barely working" at best, with scant and out-of-date documentation for most libraries/tools (to the point where code in "Getting Started" sections doesn't actually work). PRs and issues languish for many months without a response, and I've noticed a disappointing tendency among the SSB community and maintainers to be a bit condescending to newcomers and less technical users (not to mention cliqueish), in a way that seems in tension with some of the ideals they pay lip service to.
That said, I'm aware that we're all human, and my experience here is as more of an observer and tinkerer than an active participant, so it should be taken with a grain of salt.
I've been using ssb for about a year and a half, and love it. The ideas behind the dweb don't get the credit they deserve, IMO: p2p data sharing, where your data is sent directly to your friends rather than centralized on a know-it-all server, is what the world needs, for anyone concerned about privacy.
If I had to say what I don't like about ssb, it would probably be that it's not easy to write an application for it. After trying for a while, I turned to Dat and the Beaker browser, which let me write frontend applications the usual way, with just an api to manage the p2p archives.
Both dat and ssb are javascript projects, so if you want to write your dweb applications in another language, you're out of luck. I heard there is a rust implementation of dat being worked on that would expose an ABI, making it possible to write bindings for other languages, if it comes to fruition.
Also, for all dweb stuff, the main current problem IMO is privacy. You can have private discussions on ssb, and you can add an encryption layer on top of dat/beaker (I made myself a library using libsodium, which is good enough), but the main focus is publishing things to the world, the way you publish blog posts.
All in all, in my free time I'm now more interested in working on the dweb than on the web, so I can only encourage people to toy with it.
Isn't the lack of multi-device support too cumbersome?
Also, given the gossip protocol, do you not fear that your private messages may be stored forever by peers, and that one day your private key leaks and all your conversations are publicly exposed?
> Isn't the lack of multi-device support too cumbersome?
For ssb, it's not a problem for me, because I only use it on my laptop. There's a mobile app, Manyverse, but you have to make a separate account for it, so ssb users will usually have a "john_doe" account and a "john_doe_mobile" one. I guess that's good enough.
For dat, yes, it's been a major problem for me for a while, because I mainly use it to make my own "p2p cloud", so I want my data on mobile as well. There is the Bunsen browser on android, quite experimental but able to load dat URLs. Sadly for me, localStorage doesn't work in it, and that's what I use to store my encryption keys. I thought it was hopeless for a while, until I started using Termux (which basically provides a POSIX environment on android). From there, I start dat processes to replicate my data, and I wrote a small server to serve them on 127.0.0.1, which allows me to use the app in any browser on mobile. Completely hackish, and of course I can't recommend that any sane person do this :)
> Also, given the gossip protocol, do you not fear that your private messages may be stored forever by peers, and that one day your private key leaks and all your conversations are publicly exposed?
Yes indeed, this is a real risk for any p2p data. I _think_ it's still better than having the data unencrypted in big databases known for snooping, but we'll have to deal with it at some point. The best solution would be some sort of encryption capable of self-destructing past a given age, I guess? That's a challenge for cryptographers, especially given that it must not be bypassable by simply setting the clock back. Well, I hope the world will surprise me once again :)
On the other hand, when I thought about it, I considered that this may actually be a good thing, depending on how many years it takes to break the encryption or find the keys. If I'm long dead, I'm fine with my data being decrypted; otherwise we'd make the work of future historians impossible, with data that is sparse and so heavily encrypted that they can't access it.
Nope, I used a "pub": bot accounts that give you an invitation and auto-follow you to bootstrap you onto the network. The way to use them is described here: https://scuttlebutt.nz/get-started/
It's a bit cumbersome, but the purpose is to avoid having any central authority, if I understood correctly (well, except the server running the webpage on which pubs are listed :] ).
I've been on SSB for some 3 years (with some breaks when I had enough of npm). Once you're onboarded it works like a charm: exchange of data between peers works swiftly and efficiently, to the extent that you can even use it for realtime chat the way IRC works. The community is colourful and friendly, and the signal-to-noise ratio is high. I've learned a lot about fermentation and growing mushrooms and living off-grid while reading posts on SSB.
My biggest frustration has been that all usable clients are written in nodejs. I recently took a seven-month break to cool off from rage over npm (and yarn, for that matter), but now I'm back again.
Onboarding can be tricky because there are no central servers — it's 100% p2p — but I guess it's easier these days than it was in the beginning. And if you know somebody who is already onboard it shouldn't pose a problem at all.
Isn't the lack of multi-device support too cumbersome?
Also, given the gossip protocol, do you not fear that your private messages may be stored forever by peers, and that one day your private key leaks and all your conversations are publicly exposed?
It would be sweet if we had multi-device support, but it doesn't bother me too much: All my IDs are mutually following each other, and I have my “hops” set to 3, so I see virtually the same timeline on each device. What can be frustrating, though, is that notifications and private messages to a given ID are only visible from the device with that ID.
I am not overly concerned about leaks of my private key, although the risk is certainly there.
One thing many people have to get used to, though, is that the log is append-only: once you've published a message, there is no way to delete or undo it; it is there for "all eternity". The positive side is that you're more conscious about what you post and why, because there is no way of taking it back.
One of my wishes is that it would support a hardware token like the yubikey for storing the private key, to make leaks less likely (although it might not be super performant).
I believe the principal reason is that social networks have accustomed users to being able to interact with the content they absorb. And as we all think that what we say is important...
What I would really love to emerge is a kind of new social network / protocol based only on RSS for following and emails for comments :D
I agree that bitbucket provides a nightmarish experience, but does it take into account that it's a Single Page App, and thus after the initial load the traffic will be much lower?
Based on the data quoted here, it would take a very large amount of browsing after the initial load to cancel out the difference, which is often a difference of 50-100 times the SourceHut data.
Having used BitBucket since 2013, I've seen their performance ebb over the years. The most recent redesign made the biggest difference. I left some comments in their beta period noting that the performance was painful in areas, and it did improve some by the time it hit release, but was still noticeably slower than it had been before the redesign.
I would be interested in some sort of analysis of performance differences across web frameworks. Looking at BitBucket's code in the dev tools, they're using React + Redux currently. BackboneJS's web site lists BitBucket as using Backbone, so I would deduce the old, less slow design was the one that used Backbone. I've worked on both a snappy Backbone SPA and a lethargic AngularJS SPA, as well as in-between sites with Angular and React. But the business domains differ, so I don't have as apples-to-apples of a comparison as BitBucket's switch from Backbone to React would provide.
It would be an interesting experiment to get several teams of developers with similar levels of experience in different frameworks - Angular, React, Backbone, Vue, JQuery, all implementing sites to the same business specifications, and compare the relative performance (in both features delivered and page responsiveness) a year or two in. Practical to perform that experiment, probably not, but I'd certainly read the results.
I'm not sure if it does or not, but either way it seems like initial page load is the important part. SourceHut is not a single page app, so to compare apples to apples you'd want to load each page individually. Also, it still takes forever to switch between things on Bitbucket after the initial page load. In general, for simple things like this with distinct pages, I'm pretty convinced that "single page app" just means you've made your site infinitely more complicated, more likely to have issues, and more likely to have accessibility problems, for both developers and users, with no benefit other than using some trendy new thing.
The idea of an SPA is to avoid sending 'useless' data on every page change (like the HTML layout), sending only the strictly necessary data as a JSON payload, especially when coupled with good caching.
It also offloads computation from the server to clients (templating).
So yes, SPAs are generally heavier, but they are more powerful. It's a matter of tradeoffs.
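The tradeoff described above boils down to this: after the (large) app bundle has shipped once, each navigation sends a small JSON payload and the client does the templating the server used to do. A hypothetical sketch, with a made-up data shape loosely inspired by a commit list:

```javascript
// Client-side templating: turn a JSON payload into the HTML fragment
// the server would otherwise have rendered on every page change.
function renderCommits(commits) {
  const rows = commits
    .map(c => `<tr><td>${c.sha.slice(0, 7)}</td><td>${c.message}</td></tr>`)
    .join('');
  return `<table>${rows}</table>`;
}

// The JSON the server sends on a page change -- a fraction of the
// bytes of a fully rendered page, but only worthwhile if the savings
// ever amortize the up-front bundle cost.
const payload = [
  { sha: 'a1b2c3d4e5', message: 'Fix login redirect' },
  { sha: 'f6e5d4c3b2', message: 'Bump dependencies' },
];
```

Whether this wins depends entirely on how much layout there was to avoid re-sending, which is the crux of the disagreement in this thread.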
As long as they've existed, SPAs have been faster, but only in theory ;)
I don't think I've _ever_ seen one that's faster than the normal webapp it replaced. Except possibly for Google's first attempt on gmail, which was pretty snappy. Fortunately they've since replaced it with a sluggish behemoth, so order has been restored to the universe.
With the possible exception you mentioned (I switched to FastMail long ago to get back to that snappy feeling) my experience with SPAs is that they all add 10 megs of JavaScript up front and several dozen kbs of JSON or similar per click to save a couple tens of bytes per page of HTML. I pulled those numbers out of the air obviously, but I'm not sure that they're even exaggerations…
Or you could just make the HTML and what not simpler and then who cares? The useless data is fine if you don't already have the problem of sending tons of extra crap that nobody needs. Single page stuff can still use templating on the server side.
This is a really interesting technical write-up, but please no.
Adding more streaming to the cloud bullshit only adds dependencies, points of failure, costs and energy consumption. Especially when it's for playing games that can easily be emulated on any phone today.
Humanity's first goal should be to reduce its energy consumption! This is the opposite of the way forward.
In my view, it is to share and pool hardware resources and hence reduce energy and material costs. It's the same way cloud providers optimize resources through virtualization.
That seems interesting. Have there been any studies on user dropoff due to this scheme? Any well known sites using it? Would you recommend doing this exclusively?
Yes (unless you are launching an email provider or an end-to-end encrypted service), because it reduces the number of passwords users have to keep track of (one of the biggest problems of our time; password managers are not a viable or good solution), and it reduces your attack surface (you no longer have to store password hashes, nor can you choose an insecure hash function).
I'm working on it 2 days a week alongside writing the book.
The next major milestone is receiving emails in the inbox, but I think it will take 2-4 months to have it working.