It's not just WebDAV that's an abomination (returning HTTP status codes and messages in XML, WTF were they thinking), but vCard and iCal as well.
The vCard "group construct" (see rfc6350) is one of the dumbest things ever added to a spec. It seems a trivial thing to add, but completely screws up your internal storage and manipulation formats. It's horrible, on top of all the other horribleness that is vCard.
Of course, I'm biased, given we're trying to push alternatives to IMAP, CardDAV and CalDAV - http://jmap.io/ - but please do check it out. Having actually worked on IMAP servers and clients, CalDAV servers and clients and CardDAV servers and clients, we've learned a lot about creating a saner alternative.
>returning HTTP status codes and messages in XML, WTF were they thinking
You're giving me so many ideas right now for a follow-up post.
>but vCard and iCal as well
The group construct is bad, the syntax is bad, but other than that, I never had to implement or parse it properly. Vdirsyncer is mostly passing strings through.
I suspect that this part is much harder to replace than CardDAV and CalDAV, since it's not just a file structure with a ton of bullcrap on top of it. For that reason I currently have those file formats in the remoteStorage folder structure I'm syncing to, but it's still a step forward. I might switch to jCard and jCal, but I don't think it'd be worth the breakage.
So one of your guys just pinged me in private about this, and I responded lengthily. Here's the basic summary:
* I didn't know JMAP was also about calendar and contacts.
* Those parts seem nice.
* However, I'm the kinda guy who self-hosts his data servers. JMAP seems like a powerful protocol. I'm not sure those two things mix well, since JMAP server implementations probably will require a good database, not just a dumb FS like in remoteStorage.
* However, I would like to collaborate wherever possible.
> JMAP server implementations probably will require a good database, not just a dumb FS like in remoteStorage.
To do JMAP well you need to be able to calculate change sets, which does mean a database, though a fairly light one. I think you could do a server without delta updates by always returning a cannotCalculateChanges error in response to getUpdates calls, but it would be very inefficient on both the client and the wire.
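The fallback described here can be sketched as a tiny handler. Everything below is illustrative only — the names and the list-shaped call/response are loosely modelled on JMAP's method-call style, not copied from the spec:

```python
# Hypothetical sketch: a JMAP-style method that keeps no change log,
# so it answers every getUpdates-style call with a
# "cannotCalculateChanges" error, forcing the client to refetch.

def get_contact_updates(request, change_log=None):
    tag = request.get("tag")
    if change_log is None:
        # No deltas available: inefficient, but valid.
        return ["error", {"type": "cannotCalculateChanges"}, tag]
    # A real server would diff request["sinceState"] against its log
    # and return only created/updated/removed ids.
    raise NotImplementedError

resp = get_contact_updates({"sinceState": "abc123", "tag": "#0"})
# resp[1]["type"] == "cannotCalculateChanges"
```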
> However, I would like to collaborate wherever possible.
Sure! Best place to start is probably the jmap-discuss list:
> For anybody who wants to be "careful" when syncing and thinks they want this option, consider putting your collections into git repos and commit after sync.
There's a demo/beta DAV<->JMAP proxy out there by fastmail. You might want to look into that (and keep the current DAV backend in the meantime).
I didn't notice that JMAP includes calendars and contacts. Looks good!
Personally, I'd prefer a dumb storage with a generic API over yet another custom protocol/API just for these data formats alone. Worked great on people's local drives before the Web, and I think it should work similar on the Web.
Dumb storage is fine if you only have a single actor that can maintain a copy of the state at all times. As soon as you've got multiple actors on the data (another client on your desktop or phone, or even something doing mail delivery), then you need a way to determine what changed on the server so you can update your state.
If the server has no ability to tell you what changed, then you're left with having to download and check everything. On a large data set, that's pretty much impossible to do quickly and without a lot of network traffic.
Of course, no server is actually that dumb - even a file listing with file sizes can get you part of the way there. But if you've got 10000 files in a file store, that list can still get pretty heavy. If you're willing to make the server smarter, eventually you can get to the point where the server can give you only what changed since the last time you checked. JMAP isn't unique in this; IMAP has MODSEQs, *DAV has collection synchronisation, etc.
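A rough sketch of the two strategies, with made-up data structures: comparing a full listing costs work proportional to the total number of items, while a change log answers in work proportional to the number of changes since the client's token:

```python
# Illustrative only: "dumb" full-listing compare vs. delta sync.

def full_scan_changes(local, remote_listing):
    """Compare complete listings (etag per file) -- O(total files)."""
    changed = [name for name, etag in remote_listing.items()
               if local.get(name) != etag]
    removed = [name for name in local if name not in remote_listing]
    return changed, removed

def delta_changes(server_log, since_token):
    """Ask the server for changes since a token -- O(changes)."""
    return [entry for token, entry in server_log if token > since_token]

local = {"a.ics": "v1", "b.ics": "v1"}
remote = {"a.ics": "v2", "c.ics": "v1"}
changed, removed = full_scan_changes(local, remote)
# changed == ["a.ics", "c.ics"], removed == ["b.ics"]
```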
JMAP specifically doesn't really care much about the actual format of the data it works with. The only thing it really needs is an immutable ID, so you could use the same model to store all sorts of things (and at FastMail we do, with things like client settings).
> Of course, no server is actually that dumb - even a file listing with file sizes can get you part of the way there. But if you've got 10000 files in a file store, that list can still get pretty heavy.
Exactly. remoteStorage has ETags in folder listings for that. The point is that you can implement a folder structure that enables you to update, say, just the last week of events plus upcoming ones, which is usually nowhere near 10000. Except with CalDAV you can't (according to the article; I haven't actually looked into it myself).
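The folder-structure idea can be sketched like this — the per-week layout and folder names are made up for illustration, not part of the remoteStorage spec:

```python
# Shard events into per-week folders so a client only compares ETags
# for recent and upcoming weeks, not the whole collection.

from datetime import date, timedelta

def folders_to_sync(today, weeks_back=1, weeks_ahead=8):
    """Return the per-week folder names worth checking."""
    monday = today - timedelta(days=today.weekday())
    start = monday - timedelta(weeks=weeks_back)
    return ["events/%s/" % (start + timedelta(weeks=i)).isoformat()
            for i in range(weeks_back + weeks_ahead + 1)]

names = folders_to_sync(date(2015, 6, 15))
# 10 folder listings to compare ETags against, however many
# thousands of events the account holds overall
```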
I guess you can dump a [Maildir](http://cr.yp.to/proto/maildir.html) (or at least a modified version) into a remoteStorage, but this is not really an acceptable protocol for mobile clients with limited space. As mentioned in the blogpost I have the same problems with storing calendars in remoteStorage.
Hey robmueller, I think it's worth moving this paragraph on proxy.jmap.io
The service is not running with all of the security measures employed on a production site such as FastMail, so DO NOT USE THIS PROXY FOR ACCOUNTS WITH SENSITIVE DATA.
above the "Oauth to a gmail account or log in to an IMAP server below".
Of course it stands to reason that one shouldn't grant third parties access to one's gmail, but you may save some heartache from people who are enthusiastic-past-the-point-of-reason if you move that bit up.
I would have just sent you a PR but proxy.jmap.io isn't hosted on github pages.
My pet peeve with vCard is the lack of an encoding and the resulting mess with various implementations. JMAP fixes this by enforcing utf8. You have my vote. :)
To be fair the newer versions of vcard also specify this, and I haven't had a single encoding issue with even the older versions. Every server just seems to send UTF-8, no questions asked.
So far all my phones had a different way to encode Unicode strings. When I wanted to merge the data, I tried a bunch of Python libraries and all failed differently. In the end I wrote my own one-off script.
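A one-off normalisation in that spirit might look like this — the candidate-encoding list is a heuristic guess at common phone exports, not exhaustive:

```python
# Try common encodings and re-emit the vCard as a UTF-8 string.
# Heuristic sketch only; real exports can be messier than this.

CANDIDATES = ("utf-8", "utf-16", "latin-1")

def to_utf8(raw):
    for enc in CANDIDATES:
        try:
            text = raw.decode(enc)
        except UnicodeDecodeError:
            continue
        if "BEGIN:VCARD" in text:
            # Sanity check: a wrong guess usually yields garbage
            # that no longer contains the vCard envelope.
            return text
    raise ValueError("no candidate encoding produced a vCard")

card = "BEGIN:VCARD\nFN:J\u00fcrgen\nEND:VCARD\n"
assert to_utf8(card.encode("utf-16")) == card
assert to_utf8(card.encode("latin-1")) == card
```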
We (rsync.net) supported webDAV from ... about 2006-2012.
It was a total mess and headache the entire time - mostly because the standard implementations that you would attempt to use in a FOSS environment are complete abandonware.
I am speaking of mod_dav on apache. It's abandonware. The original authors cannot be contacted, it lies stagnant ... it's broken.
One of the original reasons that we supported webDAV was that the mac finder, with its "Connect to Server" choice in the "Go" menu, supported webDAV - but their webDAV was really, really quirky and non-standard and did not function well with anything. We had to reverse engineer Apple's weirdness and even then it rarely worked well. MS DAV in office, etc. - also weird. And again, very difficult to even work around the weirdness because mod_dav was abandoned.
Giving up on DAV was a win-win - not only did we stop wasting time on these bizarro home-grown dialects of DAV, but we also removed a ton of attack surface by removing apache entirely from rsync.net storage arrays. Nowadays it's just OpenSSH and I think it will stay that way.
wistful: If only (if only!) Apple would support SFTP in the "connect to server" function of the Finder.
Do you have any insight into why SFTP support was never built into "connect to server" ?
It seems so simple and obvious ... would have saved humanity millions of man hours with people mucking around with sshFS/FUSE on their macs, which barely worked back then ...
No insight whatsoever. Probably nobody ever asked for it, to be honest. "Connect to Server ..." was overwhelmingly used to connect via AFP or, shudder, NFS at the time, both of which expose a filesystem-like interface. How much like a filesystem is an SFTP connection, as opposed to an FTP-like one? That's probably the reason, right there: WebDAV purported to expose filesystem semantics, which meant that in theory adding it was simply defining a communications protocol to the Finder, rather than a complete translation layer between filesystem operations and FTP ones.
I wrote a new one in Perl which was always going to become Net::DAV::Server upstream, but never found the time to push it. This would have been about 2006. Sorry about that :(
Mostly due to a lack of enthusiasm from Filesys::Virtual to add the hooks I needed.
It's still powering the FastMail DAV filestorage service, and running fine.
Full disclosure: I contributed to the remoteStorage specification [0].
Instead of (or in addition to) considering remoteStorage, one could also go back to "normal" WebDAV, if such a thing exists, and completely forget about the CalDAV/CardDAV servers.
I'd say WebDAV is more mature, it has litmus[1] for testing implementations. remoteStorage has as far as I know only one production instance running at 5apps [2]. The benefits of using the RS protocol are mostly due to the CORS headers (which could be implemented easily for WebDAV) and the use of OAuth/Bearer, for which a PR exists for SabreDAV [3]. One thing missing from WebDAV is the (implicit) mapping of OAuth scopes to ACLs, which should not be too difficult to implement... The discovery, by depending on Webfinger, is also not one of my favorites. I'd prefer something like OAuth authorization server discovery [4].
I mean, I am not opposed to using remoteStorage and love JSON as much as the next guy, but it just doesn't bring (in my opinion) many benefits and loses interop with existing WebDAV clients for no good reason...
In theory this all sounds fantastic, until you actually get to implement it and see that only a handful of servers actually managed to implement it correctly. Probably because of its complexity. See also another comment from the Flock author: https://github.com/WhisperSystems/Flock/issues/93#issuecomme...
>I'd say WebDAV is more mature, it has litmus[1] for testing implementations.
Within the first few weeks of writing vdirsyncer's testsuite, I already found a bug in SabreDAV, which is tested with litmus. I tried to report this hole in their testsuite on their mailing list, but at the time their mailing list was down.
>remoteStorage has as far as I know only one production instance running at 5apps [2].
Yes, that's sad. It has yet to be proven that implementations of remoteStorage will actually become more reliable than the ones for WebDAV, but I'm confident that it's easier to get implementations right with remoteStorage.
>The benefits of using the RS protocol are mostly due to the CORS headers (which could be implemented easily for WebDAV) and the use of OAuth/Bearer, for which a PR exists for SabreDAV [3].
SabreDAV implementing it is cool. They're the best FOSS serverside implementation of the -DAV protocols I know.
But still: It's all optional. Servers implementing all of this is a luxury. Right now we're in a situation where even the stuff that isn't optional doesn't work.
Don't trust anything just because it has been around for a long time.
(Co-founder of 5apps and RS core contributor here.)
> remoteStorage has as far as I know only one production instance running at 5apps
That's not true. 5apps is running the only public service for end users at the moment, but there are certainly more production instances running.
> The benefits of using the RS protocol are mostly due to the CORS headers (which could be implemented easily for WebDAV) and the use of OAuth/Bearer, for which a PR exists for SabreDAV [3].
As both of these would be optional additions to WebDAV servers, all of WebDAV's benefits vanish, since most servers don't support these new extensions. That's the very critique in the article, as far as I understand it. WebDAV alone is not good enough, and optional additions lead to a world of incompatibility and pain.
> One thing missing from WebDAV is the (implicit) mapping of OAuth scopes to ACLs, which should not be too difficult to implement
And another addition.
> I'd prefer something like OAuth authorization server discovery
And another one. Counting 4 now. :)
> but it just doesn't bring (in my opinion) many benefits and loses interop with existing WebDAV clients for no good reason
You just mentioned that to get to feature parity with remoteStorage, a WebDAV server needs 4 optional additions, for only one of which an unmerged PR to a single server implementation exists. Maybe I'm missing something, but it doesn't sound like interop is WebDAV's benefit in this scenario.
Ah, my point was, actually, to base the remoteStorage spec on WebDAV instead of a new JSON-based protocol as it currently exists. remoteStorage could define a WebDAV 'profile' for what needs to be supported by the server...
Yes, and then you have another optional version of WebDAV, adding to the existing mess, while you don't actually have the benefit of interop that you say would be the reason for using WebDAV in the first place.
The way I read it, that's basically the point of the article.
The linked proposal for OAuth2 discovery is broken. In most cases the client application will have to authenticate by providing client credentials that are registered with the authentication service.
I'm not sure if you're claiming that the OAuth integration into WebDAV is broken or OAuth itself (because remoteStorage's integration works fine, e.g.).
I'm just saying that the mechanism to provide authorization endpoint and token endpoint to the client as proposed in https://www.tuxed.net/fkooman/blog/as_discovery.html is (in general) not sufficient. The client usually needs to provide valid client credentials in order to authenticate.
OpenID Connect solves that by extending OAuth2 with a layer to discover additional information about the authentication service (like authorization endpoint and token endpoint) and to sign up for client credentials.
There are no general issues with OAuth2 and DAV. We're using it successfully to authenticate at Google's DAV services and with Yahoo! Calendar.
I worked on a document management system 10 years ago; we would have loved to use WebDAV to present a virtual filesystem from our (linux) servers to Office clients for document retrieval/conversion. It was just impossible. As you say, every version of Office & Windows had different, severe bugs, which meant we couldn't get something as simple as browsing a directory or loading & saving files to work across all versions, and it wasn't like recent versions were better - it was completely unreliable.
It turned out to be easier to write plugins for all versions of office than it was to rely on office's own code actually working.
One of the big "sells" for WebDAV was transparent access like a network drive in Windows clients without the need to install additional software. As a power user, accessing WebDAV isn't much of an issue. But as, say, a software developer wanting to provide a WebDAV service, it's another story :-/
Yeah, I was hoping to not need any software in Windows. Having a maintained client that presents a sane UI, with few demands for the back-end ("speaks sane WebDAV") is still quite good -- if you want filesystem like semantics over a secure connection that works reasonably well across the Internet.
Note that, according to some recent searching, in windows 7+, WebDAV over TLS w/Basic Auth should be less painful -- I'm going to have to test it later.
After writing a custom CalDAV server I agree with the author. The number of requests it takes to get the data you need is insane, and most requests will involve a DB hit. My CalDAV server is much more "chatty" than my web server even though they serve essentially the same information.
Plus each CalDAV client does things a little differently, so it's very hard to debug, especially when most clients are black boxes. Permissions, features, how they respond... all potentially different.
I (unfortunately) consider the custom CalDAV server I wrote to be a bit of a technical achievement, even though I'd rather I didn't have to do it. But I had to do it because there's really no other good option for writing calendar services that work with native calendar applications.
I bet it's easier to use SabreDAV (http://sabre.io/dav/) for your server than writing it all by yourself, even though it means writing PHP. I really feel like its author is the only one who understands what the hell is going on.
Ken Murchison from CMU has a pretty good grasp on it these days too, having written the Cal/CardDAV support for the Cyrus IMAP server. I'm slowly catching up, having rewritten parts of it :)
What I loved about WebDAV is that it let me edit a file on a remote server using my personal PC. When I wanted to save to the remote computer, I just saved it.
Really any remote file protocol can do this. The difference is that OS X bundles software to mount WebDAV servers without any fancy configuration. On Linux virtually any protocol for managing remote files has a FUSE implementation that lets you treat them like files on your own system. Most of these should compile just fine on OS X, but no one uses them because people have gotten used to the far worse option of a lot of apps having built-in support for several file-transfer protocols, plus a basic client application for manually transferring files for applications without built-in support.
I agree with you. I also like the idea of a remote file system as a standard protocol.
There are a lot of complaints here that it is too complex or that it should be json rather than xml. How many people are developing their own WebDAV libraries? I like that I can grab a WebDAV library and add remote file support to my software.
I have tried a number of the ones that you and others mention. The security was awful. They would create the remote drive (or whatever) automatically, all the time. eek! I want to have to log in first.
It has been a few years since I have tried. I hope something has improved.
Again, basically, I would prefer to edit in Notepad++ or Sublime instead of vi. Personal preference.
For a self-hosted, wide area network file sharing solution I think WebDAV is a very simple and cheap choice.
My experience with WebDAV has been just that, its simplest form. Setup an Apache web server for project managers to share files with a client for example.
Well, there are just three basic options. Have the WebDAV server run as root, to allow changing uids -- probably a bad idea. Have a group that both the server and the uid are members of, and have that group own the files -- better, but it breaks down when you need more than one user, unless you have dav-uid1, dav-uid2 ... groups for every uidN, and have the server be a member of each -- and even then you're not much better off: the server can still access all files; at least uid1 logged in locally won't have direct access to files owned by dav-uid2. Finally, just run the server as your uid, on an alternate port. That's probably the most sane, optionally with a proxy standing in so you can connect over 443.
The state of dav clients is a mess, FWIW last I checked, getting it to work sanely on Linux, either with Gnome, or davfs -- was easy enough. Trying to get it to work reasonably from Windows was hopeless -- I never tried OS X -- I'd hoped that they actually had a usable client.
Either way, if you basically end up just being able to access it reasonably from Linux, you might as well just go with sshfs. For a handful of users, setting up reasonably secure[1], key-based file sharing -- ssh(fs) is hard to beat.
SAMBA/CIFS isn't fit to be used over the Internet; NFSv4/pNFS w/kerberos and encryption might be, but it, along with AFS, is complex to set up.
The second easiest thing with some hope of being both secure and not a nightmare to get working for most (after sftp/sshfs) is probably Samba/CIFS over a VPN.
Not sure about the current state of the webdav-server/client stuff for python -- that might be a viable way to get a pair of multiplatform client/server things working. Last I looked the WebDAV part of Zope/Plone worked fine with davfs (sadly the rest of the thing, for editing content etc, needs/needed work).
On paper WebDAV has many things going for it: it works over standard TLS ports, should go through most firewalls, is "trivial" to secure using certificates (not sure about 2fa -- would probably depend on clients having a sane UI/UX for "require certificate and OTP/password" etc).
Does anyone have any experience with actually using something derived from Plan9 9p for sharing files? I've only been able to find broken, abandoned implementations, and no recent how-to's or tutorials.
Again, on paper, it would seem that 9p wrapped in some kind of certificate based protocol (either built from nacl primitives, or simply TLS/SSH) should be a reasonable candidate. A nice combination of not reinventing the wheel, and keeping things as simple as possible (as opposed to, NFSv4, CIFS, AFS).
[1] Ssh breaks down when it comes to revoking keys etc. This is in theory solved by using ssh certificates (which can expire, be revoked etc) -- but AFAIK there aren't any sane cli/ui/management tools yet. In that case managing x509 via AD or some other CA tool is probably easier -- but still ridiculously painful.
>The state of dav clients is a mess, FWIW last I checked, getting it to work sanely on Linux, either with Gnome, or davfs -- was easy enough. Trying to get it to work reasonably from Windows was hopeless -- I never tried OS X -- I'd hoped that they actually had a usable client.
Nonsense, the reason I use WebDAV when non-tech people need a file sharing service within my company time and time again is because it works so easily on Windows and Macintosh. Their two favorite OSes.
On windows you mount it as a regular network share, you can even authenticate with AD if it's in your network. On Mac you just Cmd+K in finder and use the same URL as on Windows. On Linux I've recently learned it's equally simple.
This works over TLS1.2? Without any extra software needed in Windows? Note that last I checked, clients were running Windows XP -- maybe this is fixed/improved in 7/8/8.1/10?
[ed: I seem to recall I had some issues with Windows 7 as well, but I may be misremembering things. Does seem that WebDAV is now properly bundled with Windows, and https should work as long as the certificate matches. Apparently Windows will refuse basic auth (but do digest, which really isn't that much of an improvement) over standard http -- but as part of the point is the added security of simple transport encryption over SSL -- I can't imagine why anyone would want to expose WebDAV (other than read-only, perhaps) over anything other than SSL]
As for it working in OS X -- I haven't tried it -- and my impression was that it did work. Apparently others in this thread have different experiences...
The big problem I have with WebDAV is that the specs for the various pieces all do kind of "vertical" slices of functionality, without separation of concerns between the low-level protocol (what does HTTP provide, what is it missing, and how do we generically extend it to provide what the applications we're trying to build need) and content semantics. So where they extended the HTTP protocol (with new methods, etc.), those extensions were tightly bound to content semantics.
I think rethinking the same problem domain with a more RESTful approach, a lot better could be done, with a lot simpler set of generic protocol extensions covering the few real gaps in HTTP/1.1 and a simple set of metadata content types [0] to cover authoring/versioning information, with primary content just using any content type.
[0] And, really, this is two problems that should be addressed somewhat separately, in terms of representation-neutral metadata content models first, and then in terms of various representations for those content models.
Parsing speed does not necessarily correspond to the complexity of the spec. For example a lot of complexity in the XML spec is the DTD part. But if you parse a document without a DTD, this would presumably not have a performance cost.
Parsing was never the bottleneck, I admit. But I can't see how parsing XML is as easy a task as parsing JSON either?
Of course you can show me some benchmarks that show that libxml2 is much faster than the avg. JSON implementation. But you can do that with any format that is verbose, complex and therefore hard to parse, yet has been around for so long that it paid off to write well-optimized C libraries for it.
I recommend that you simply write a JSON parser and an XML parser using a parser generator. Then you will see. They are both trivial. I would be surprised if it took you more than an afternoon to do both.
Ease of parsing was the whole point of XML. It was actually a beautiful concept that because it was a subset of SGML, you could validate documents and build smart editors, etc, etc on file formats in XML. Then when you wanted to simply read the file, the parser was mind blowingly simple to write.
Of course people decided that it would be great for communications protocols, and inter-language object representations and... all the stuff that it really wasn't great for.
And people decided that you would want to write event driven parsers and do DOM traversals and crazy things like that.
Yeah, that stuff is complicated. (and dare I offend people by saying that it wasn't really a good idea after all).
But parsing is dead easy. Personally prefer JSON for most things because it is easier to type :-)
The point I'm trying to make is that JSON is generally easier to parse, therefore theoretically faster to parse, even though XML implementations might be faster at the moment.
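A quick, unscientific stdlib comparison of the two parse paths — the documents are made up, and the timings will vary by machine, so this is a demonstration rather than a real benchmark:

```python
# Parse the same small record as JSON and as XML, and time both.

import json
import timeit
import xml.etree.ElementTree as ET

json_doc = '{"uid": "1", "summary": "Lunch", "dtstart": "2015-06-01"}'
xml_doc = ('<event uid="1"><summary>Lunch</summary>'
           '<dtstart>2015-06-01</dtstart></event>')

t_json = timeit.timeit(lambda: json.loads(json_doc), number=10000)
t_xml = timeit.timeit(lambda: ET.fromstring(xml_doc), number=10000)
# t_json and t_xml will differ by machine and library; neither parse
# is the bottleneck next to a network round trip.
```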
A theoretical microsecond difference that doesn't even exist in practice doesn't seem worth mentioning when there is network traffic involved. The article says:
"Yes, it's slow to fetch all events, but so is parsing XML. "
That's several orders of magnitude difference glossed over.
Meh, I have to agree that it isn't a good point. I guess I was merely trying to point out that both have its performance deficiencies. At least the ones of remoteStorage are avoidable, while you can't just avoid XML in WebDAV.
I tried WebDAV a couple of years back on MacOS. Turned out that MacOS had a bug in its WebDAV implementation, causing files to be lost. Never looked at it since.
> In practice, you can't use a WebDAV client library. In practice, you copy-paste XML from the examples in the RFC into your source code and hope for the best.
I've written CalDAV clients. This is completely true. Being both an extension and a modification of WebDAV, CalDAV cannot be used with common WebDAV libraries.
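That workflow in miniature: the REPORT body below is adapted from the calendar-query examples in RFC 4791, while the server URL-path and the simplified Multi-Status response are made up for illustration, parsed with nothing but the stdlib:

```python
# "Copy-paste XML from the RFC and hope for the best", sketched.

import xml.etree.ElementTree as ET

# Request body, near-verbatim from RFC 4791's calendar-query examples:
REPORT_BODY = """<?xml version="1.0" encoding="utf-8"?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:prop>
    <D:getetag/>
    <C:calendar-data/>
  </D:prop>
  <C:filter>
    <C:comp-filter name="VCALENDAR">
      <C:comp-filter name="VEVENT"/>
    </C:comp-filter>
  </C:filter>
</C:calendar-query>"""

# A simplified, made-up 207 Multi-Status response:
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/calendars/user/default/event1.ics</D:href>
    <D:propstat>
      <D:prop><D:getetag>"abcd"</D:getetag></D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>"""

ns = {"D": "DAV:"}
tree = ET.fromstring(SAMPLE_RESPONSE)
hrefs = [r.find("D:href", ns).text for r in tree.findall("D:response", ns)]
# hrefs now holds the event URLs to fetch (or compare ETags for)
```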
The problem with JMAP is that it doesn't work well with the rest of the world. Unless your only use-case is personal sync, this can be a problem. It's possible to ditch WebDAV, but also breaking compatibility with iCalendar and iTip makes this a no-go for most people.
If they considered using jCal/jCard, compatibility would be possible and then it actually has a chance.
In fact, they could even ignore jCal/jCard and pick a format that can express the same data-model as iCalendar/vCard, and then they would still have my vote.
The way it's currently developed, it's an entirely new format that maps to iCalendar in a lossy way. Any iCalendar/iTip extension that currently exists or is under development would have no way to be expressed in JMAP.
I think the authors of JMAP have a really great understanding of IMAP and what it would take to replace it, but their use-case of what they need from calendaring is narrow and their approach to replacing CalDAV is naive.
It's basically ActiveSync v2. Just like ActiveSync it uses HTTP as a tunnel with a single endpoint and an RPC-like system using POST, except ActiveSync uses WBXML and JMAP uses JSON.
By the way, I see you edited this from the first time I read it - I should point out that FastMail provides a full CardDAV/CalDAV service and uses VCARD under the hood as well (it's the Cyrus IMAPd server)
What we do need to make sure works is extensibility and custom properties, because as you have correctly pointed out - we can't tell what's necessary for the future. I really want to make the extensibility not be at the expense of the simple things working well and reliably though. I see a fetish in standards bodies for edge cases and complexity. We want "making the simple things easy and the hard things possible" not "making the hard things possible and the simple things hard". The whole user principal dance in *DAV at the end of which there is STILL no standard way to share calendars/addressbooks - all the DAV sharing stuff is basically unused or semi-used, and the whole use of collections is pretty much ignoring the capabilities of DAV.... that's crazy. It's layers upon layers and you get crap like Evolution depending on seeing that the PUT action is allowed on the collection (which it's not in Cyrus, because you can only put resources within the collection) and marking the calendar read-only. Really? Even bootstrapping is a fraught nightmare.
That's what we want to avoid with JMAP - a million choices and all of them horrible, with no clear guidance.
I don't know. Compatibility with jCal and jCard would've been nice, yes, but I don't think the kind of properties they support in their datamodel differs too much from what vCard and iCalendar support.
They also have a proxy server that proxies client => JMAP => DAV, so if that works without a massive translation layer, it'll probably be fine.
EDIT: Just saw that you edited your post to address those points.
In that case I'd like to see a list of things that are required from a -DAV successor. JMAP's syntax strikes me as quirky, I agree, but the datamodel and the amount of methods provided seem sensible to me.
Yes, the formats are inspired by standards and similar, but that's not the main point.
Currently CalDAV and CardDAV agents expect servers to be able to store custom properties, and retrieve those again. This is relevant for sync clients such as yours, but it becomes even more important for scheduling systems.
In addition, there are many useful efforts already existing and underway, such as recursion for non-gregorian calendars, consensus-based scheduling (a la doodle), inter-server scheduling (iTip, iMip, iSchedule), Freebusy, Calendar sharing, Availability, etc. that all cannot be expressed in JMAP.
I'm sure there are many more issues that would cause subtle bugs. As long as people are not actually storing iCalendar and vCard, and map to their own inferior data-models (such as JMAP, but Google Calendar is another big offender), we'll continue to have interop issues.
The thing is, they don't really like jCal. It has a bit of an odd structure compared to most json formats. So my thinking is that they don't have to support jCal to work within the wider calendaring world, they just need something that maps to iCalendar in a lossless manner. They can deprecate VTIMEZONE and other frustrating features, they can introduce new properties that are easier for them to use, just don't lose compatibility.
Disclaimer: I'm the main author of sabre/dav and a member of CalConnect. FWIW, I DON'T think that the iCalendar format is great, but a replacement will need actual backwards compatibility, not just loosely model the few things that a simple online calendar such as FastMail needs.
>Currently CalDAV and CardDAV agents expect servers to be able to store custom properties, and retrieve those again. This is relevant for sync clients such as yours, but it becomes even more important for scheduling systems.
What's wrong with storing a file inside the calendar folder? Why do clients need to do this?
Even in CalDAV/CardDAV's case: Flock had massive compat issues because no server actually supported this. So I guess it's not actually necessary?
I'm not knowledgeable enough to respond to the rest of your points. Perhaps somebody from FastMail could respond to them. But I wonder if we should simply ignore some of those feature requests to make a simple protocol for the majority of us possible.
> What's wrong with storing a file inside the calendar folder? Why do clients need to do this?
I'm talking about iCalendar properties, parameters and components, NOT webdav properties. The distinction is important.
> Even in CalDAV/CardDAV's case: Flock had massive compat issues because no server actually supported this. So I guess it's not actually necessary?
Flock used custom 'dumb' WebDAV properties, this is not widely supported, and actually wasn't supported by sabre/dav until 3.0. Flock was the first client that required it.
Not true for iCalendar properties: CalDAV and CardDAV servers really should support them. Servers such as FastMail and Google Calendar don't do this well; they'll usually silently discard user-supplied data. FastMail is part of the interop problem that we have today. They've only done this for a bit over a year, so it's not entirely surprising that their conclusion is a simpler data model. I think most people tend to take that path before they realize the shortcomings.
JMAP and FastMail are bad for interop, and letting FastMail pick their favourite subset of CalDAV and CardDAV and attempt to make that the standard is not really a solution; it will also probably never fly.
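For what it's worth, here's a rough sketch of the pass-through approach that avoids discarding user data (the kind of thing vdirsyncer does by "mostly passing strings through"). The helper names are invented, and real iCalendar parsing has more edge cases (line folding, quoted parameter values), but it shows the idea: keep the raw content lines so unrecognized properties survive a round trip.

```python
# Treat each iCalendar content line as opaque, keyed by its property
# name, so properties the server doesn't recognize are preserved
# instead of being silently discarded on re-serialization.
def parse_preserving(ics: str):
    """Split into (property-name, raw-line) pairs; unknown lines stay verbatim."""
    entries = []
    for line in ics.splitlines():
        # Property name is everything before the first ':' or ';'.
        name = line.split(":", 1)[0].split(";", 1)[0].upper()
        entries.append((name, line))
    return entries

def serialize(entries):
    return "\r\n".join(raw for _, raw in entries)

# A VEVENT containing a vendor extension property a lossy data model
# would typically drop:
ics = ("BEGIN:VEVENT\r\n"
       "UID:42@example.com\r\n"
       "X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC\r\n"
       "SUMMARY:Standup\r\n"
       "END:VEVENT")

roundtripped = serialize(parse_preserving(ics))
assert roundtripped == ics  # nothing was silently discarded
```

A server that maps this onto a fixed internal schema instead would have no slot for the `X-` line, which is exactly how the silent data loss described above happens.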
>I'm talking about iCalendar properties, parameters and components, NOT webdav properties. The distinction is important.
It wasn't clear at all what you meant.
It has caused quite a bit of trouble for CardDAV interoperability, since Apple in particular has defined proprietary extensions for crucial features like groups. This led to other clients adopting the proprietary extension, only for it to be invalidated at the next wiggle of the worm.
On collection properties, Apple has also added the color property to calendar collections. I'm sure you remember the rant on CalConnect by one of FastMail's employees about both the lack of documentation and standardization of such extensions.
Supporting arbitrary properties can be good for interop, but in DAV's case it has brought many proprietary, optional extensions whose support is almost taken for granted by the user.
EDIT: Note that I'm not for discarding unrecognized props, I'm for rejecting the whole item.
> It has caused quite a bit of trouble for CardDAV interoperability, since Apple in particular has defined proprietary extensions for crucial features like groups. This led to other clients adopting the proprietary extension, only for it to be invalidated at the next wiggle of the worm.
The problem here is actually the lack of vCard 4 adoption, but I agree that it's an issue. Apple (and others) have effectively extended vCard 3 to adopt vCard 4 features.
> On collection properties, Apple has also added the color property to calendar collections. I'm sure you remember the rant on CalConnect by one of FastMail's employees about both the lack of documentation and standardization of such extensions.
I also thought that was completely wrong. But it's an extremely minor thing compared to all the things that _have_ been standardized or are currently being standardized. There's a lot of work still to be done, and a lot of work has already been done. Work which JMAP discards.
> Supporting arbitrary properties can be good for interop, but in DAV's case it has brought many proprietary, optional extensions whose support is almost taken for granted by the user.
I'll grant that could be an issue, but creating a standard that simply does not support any of these features out of the box is not really a solution either.
In the end, people will want to implement certain features on top of these servers because the standards don't cover the range of features of non-standard alternatives such as Lotus Notes and MS Exchange.
But I want to reiterate that I agree DAV, iCalendar and vCard each have issues. However you look at it, though, JMAP is a massive step backwards, because it discards and ignores many years of actual standardization and development.
> EDIT: Note that I'm not for discarding unrecognized props, I'm for rejecting the whole item.
HTTP works because browsers, other clients, servers and proxies don't need to be aware of every detail of the protocol. They need to understand the overall structure and certain base rules, but if what the response body had to look like, or which headers are legal, had been restricted, it would have stifled innovation.
HTML, CSS, Atom: name them and they have a well-defined extension system, and I think it's contributed to their success.
The iTip protocol needs to work like a carrier and not care about all its contents. If iSchedule lands in the future, and we get multiple CalDAV servers talking to each other and doing scheduling together, you'll want individual iSchedule nodes to ignore extensions, so that servers and clients can innovate and extend without having to alter the underlying protocol.
Have those optional extensions been that bad? I would say that they've only been bad when people have badly implemented the core protocol.
To extend the CSS analogy, it would be as if Fastmail created something similar to SASS or Less, but instead of having a 1:1 mapping, new syntax is created for every property. Well, most of them... because many CSS properties are not supported. Also, any future CSS extension would need to get explicitly added to this new stylesheet format.
All I ever demanded from my calendaring protocol was to safely and quickly synchronize my calendars between the server and the client. CalDAV has utterly failed at this because it is a complex protocol, and I have no idea how this new protocol improves upon that situation.
I don't give a shit about my CalDAV servers talking to each other and "doing scheduling". I use WhatsApp and email for my scheduling. We don't need more features, we need fewer of them. CalDAV, iTip and iSchedule seem to be designed around the needs of the corporate bureaucratic world, with little to no regard for the average user's needs, and frankly, I'm horrified that a protocol drafted in the last three years still uses XML.
Simplicity is not just some arbitrary property I'm striving for. It's the thing every protocol in the DAV family lacks in order to be sensibly implementable. Have you read this thread? People are struggling to mount a simple WebDAV collection, because either the clients or the servers are so bad. And you're talking about scheduling.
This isn't even in defense of JMAP. While it at least doesn't try to add crap on top of the crap pile, it's still too complex for my use case.
I guess I'll just stick to remoteStorage and some ics files in it.
> I guess I'll just stick to remoteStorage and some ics files in it.
Well that was all you needed to begin with, wasn't it? As far as I can tell your use-case could probably be satisfied by rsync alone... might be another option.
We definitely need the ability to store custom properties on the server in JMAP - I think we've agreed on that. Hopefully you'll be at CalConnect in a few weeks and you and Neil can fight to the death or whatever about it.
It compresses extremely well, especially if you use a predetermined dictionary (which also reduces CPU usage). However, because of CPU-usage considerations it probably won't ever do as well as high-entropy encodings such as Protobuf, because to get truly great compression you simply have to spend the cycles.
If you are using XML for the right applications (i.e. not a data firehose) the compression/CPU characteristics shouldn't matter at all. It's when you start using it for high bandwidth scenarios that things become sketchy: you should be negotiating an out-of-band stream (such as SI, Jingle or just a plain old socket) and using that for the firehose.
That being said, people like AeroFS supposedly used it as a firehose for years[1] (in the form of XMPP) before having to replace it with a simpler protocol: so there does seem to be some elasticity to that assertion about not using it for a firehose.
I think it's pretty clear which is least readable and least concise.
So, what does XML buy one in exchange for its lack of concision? Well, a pre-processing tool which knows nothing about WebDAV or CalDAV could use a schema definition to sanity-check the formatting of a query. It could probably be set up to detect if one used an invalid date format; it may or may not be configurable to detect if one searched for a VCALENDAR component inside a VEVENT. Regardless, there are semantic constraints in a data model which cannot be expressed with, e.g., a DTD or XSD.
How about JSON? What does it buy? Well, it has built-in hash tables (as opposed to faking them with (table (key val) (key2 val2))), which is a huge win, and it has a ton of libraries in every language one can imagine.
How about S-expressions? I think that they win on clarity, readability and concision. But there are not a ton of parsing libraries available for every language (OTOH, a canonical-S-expression parser can be written in a few hours for any language).
So, who wins? I think that for most projects nowadays, JSON's clearly the answer. The pain of XML simply doesn't pay off in practice, while the elegance of S-expressions doesn't outweigh the unfamiliarity of the great mass of enterprise software developers.
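To put the three side by side: here's one hypothetical time-range filter rendered in each encoding. The XML is loosely modeled on CalDAV's calendar-query filter; the JSON and S-expression shapes are invented here for comparison, not taken from any spec:

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical filter, three ways.
xml_q = """<c:filter xmlns:c="urn:ietf:params:xml:ns:caldav">
  <c:comp-filter name="VCALENDAR">
    <c:comp-filter name="VEVENT">
      <c:time-range start="20150601T000000Z" end="20150630T000000Z"/>
    </c:comp-filter>
  </c:comp-filter>
</c:filter>"""

json_q = ('{"filter": {"VCALENDAR": {"VEVENT": '
          '{"start": "20150601T000000Z", "end": "20150630T000000Z"}}}}')

sexp_q = ('(filter (VCALENDAR (VEVENT '
          '(start "20150601T000000Z") (end "20150630T000000Z"))))')

# Both machine-readable forms carry the same information...
ns = "{urn:ietf:params:xml:ns:caldav}"
tr = ET.fromstring(xml_q).find(f"{ns}comp-filter/{ns}comp-filter/{ns}time-range")
parsed = json.loads(json_q)
assert tr.get("start") == parsed["filter"]["VCALENDAR"]["VEVENT"]["start"]

# ...but differ noticeably in byte count:
print(len(xml_q), len(json_q), len(sexp_q))
```

Namespaces alone account for a good chunk of the XML overhead, which is roughly the "pain of XML" being weighed here.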
WebDAV cannot be validated by schema, since an implementation must be able to ignore elements it does not understand.
(RFC 4918: "A recipient of a WebDAV message with an XML body MUST NOT validate the XML document according to any hard-coded or dynamically-declared DTD.")
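In practice that rule means a WebDAV parser cherry-picks the elements it understands and skips the rest. A toy sketch (the `x:frobnicate` extension is made up; it's the kind of element a hard-coded schema would have rejected):

```python
import xml.etree.ElementTree as ET

# A multistatus response containing a vendor extension alongside a
# standard property. Per RFC 4918, the unknown element must simply be
# ignored, not treated as a validation failure.
multistatus = """<d:multistatus xmlns:d="DAV:" xmlns:x="http://example.com/ns">
  <d:response>
    <d:href>/cal/1.ics</d:href>
    <d:propstat>
      <d:prop>
        <d:getetag>"abc123"</d:getetag>
        <x:frobnicate>whatever</x:frobnicate>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>"""

D = "{DAV:}"
KNOWN = {f"{D}getetag", f"{D}getcontenttype"}

found = {}
root = ET.fromstring(multistatus)
for prop in root.iter(f"{D}prop"):
    for child in prop:
        if child.tag in KNOWN:
            found[child.tag] = child.text
        # anything else is silently skipped, per the MUST NOT validate rule

print(found)
```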
I think that's a gross oversimplification. Here you're critiquing XML's use as a substrate for a query language, for which I think you have a good point. For APIs there are reasons why JSON is generally easy enough to work with. But there is a sweet spot for semistructured documents where I would say XML still has the edge, because you don't have to quote text and errors in structure are much easier to identify. The specific problem domain matters with choices like these. See "XML Is Not S-Expressions" for a deeper treatment of the question.
> But there is a sweet spot for semistructured documents where I would say XML still has the edge, because you don't have to quote text and errors in structure are much easier to identify.
No argument: as a markup language, XML is fine (not perfect, but fine); it's a better markup language than either JSON or S-expressions.
But as a data encoding, it's far inferior to JSON and S-expressions.
> See "XML Is Not S-Expressions" for a deeper treatment of the question.
Yeah, I've read it. He's right that XML is better for documents; he's wrong that the benefits of integrating data and document encoding are worth the pain of XML.
Yeah, but it's not only that. In the general use case you only want the content-type and etag, so why not just send no body for that? That's what remoteStorage does; it's the only way to do folder listings there.
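For reference, the narrowest listing WebDAV itself offers is a Depth: 1 PROPFIND asking for exactly those two properties; everything surrounding this body (the 207 Multi-Status wrapper, per-property status codes) is the overhead being complained about. A sketch of the request (the example URL and the `requests` usage in the comment are illustrative):

```python
# Minimal PROPFIND body: request only the two properties a sync client
# usually needs for a folder listing.
propfind_body = """<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop>
    <d:getcontenttype/>
    <d:getetag/>
  </d:prop>
</d:propfind>"""

headers = {
    "Depth": "1",                    # immediate children only, not recursive
    "Content-Type": "application/xml",
}

# e.g. with the requests library:
# requests.request("PROPFIND", "https://dav.example.com/calendars/work/",
#                  data=propfind_body, headers=headers)
```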
If only there were a process whereby you could propose corrections to existing standards, or even propose a new standard to replace the existing one, instead of whining about it on Hacker News... oh wait.
Seriously, if these things bother you and you think you have a better idea, write it up in an RFC.
The process of doing that involves getting a technical community to vet your idea and provide perspectives and related work that you hadn't seen, so that by the time you're approaching writing the RFC, you have rough consensus and running code. If you want to call that "whining about it on Hacker News," you're welcome to your choice of terminology, but I'm not sure why you think that's something separate from the process of proposing a new standard.
Writing a good RFC is hard work, does it not make sense to write informally about your ideas first to see if they have any appeal before investing the time into proposing a new standard?
Writing about it this way also exposes the problems to people who otherwise wouldn't give two fucks about WebDAV. If the author had simply submitted an RFC, only the people working on calendar/contact syncing tech would have taken notice.
I find it sad that you consider pointing out the flaws in something to be "whining".
Isn't that the point of the RFC process? It's right there in the name: "Request for Comments". The whole point of the RFC structure is that it forces you to lay out what someone would need to implement your RFC, which is what you need to even begin the discussion on a new standard.
A lot of the successful early internet standards followed this path. They weren't written by committee, they were written by one person with an idea to make things better.
And I only consider it whining if you point out the flaws and then don't do something about them using the process for correcting them. The internet was built by the RFC process, and it's poorer these days because people feel that following it is "a bit of hard work".
> which is what you need to even begin the discussion on a new standard.
> I only consider it whining if you point out the flaws, then don't do something about the flaws using the process for correcting them.
To even begin discussion? The discussion begins with pointing out the flaws. One person writes an article pointing out some flaws in a technology, then people comment saying how X is actually not a flaw, and how Y is also quite a major flaw. After that you start focusing on brainstorming ideas that address these flaws. Then you refine those ideas and come up with some specification or prototype implementation (followed by a specification based on it). All this has to happen before what you claim to be the beginning of the discussion.
You can do all of these things in private, maybe consulting a couple friends and colleagues, but you don't have to. You can also do it completely out in the open, on a public forum. In doing so you increase the noise, but you also increase the signal. Do you consider this process whining?
The documents may be called “Requests for Comments” but in reality, most of the modern RFCs dealing with Internet protocols simply became de facto standards – until they were superseded by subsequent RFCs.