They are hacks that paper over the lack of good package management. Plus they are too much extra work, and OS-specific. No build system integrates with them as far as I know.
I think what people want is something like go, where you add a single line to your code and it magically downloads and compiles the referenced library. C++ doesn't have anything like that (and I doubt it ever will to be honest).
> I think what people want is something like go, where you add a single line to your code and it magically downloads and compiles the referenced library.
Which is why I use LyX - best of both worlds. You get to use LaTeX equations, the final document is pretty, and the document is readable while editing (i.e. not filled with commands and other crap).
I have a feeling that nobody would really bother with WebP for its compression, but do JPG/PNG have:
* Lossy compression with alpha channels.
* Efficient lossless compression of photo-like images.
* Efficient compression of photo-like and diagram-like images in the same format (and in the same image, e.g. screenshots containing photos).
* Good lossy compression of diagram-like images.
There's a draft for gracefully-degrading JPEG eXTensions that adds all the features you want: http://www.jpeg.org/jpegxt/index.html (it encodes a classic JPEG plus a residual image hidden in JPEG metadata).
WebP is a bit of a hack: it has a JPEG-like algorithm for photos (VP8) and a custom PNG-like algorithm for lossless. Technically it's not much different from having JPEG and PNG and using the same filename extension for both.
JPEG 2000 and JPEG XR have a truly scalable algorithm that supports both lossy and lossless compression.
I did. Last summer I converted all 35K images on my NSFW hobby site (check profile) to WebP, with no JPEG fallback or shabby JavaScript decoder (which doesn't work on very high-res images), and I haven't looked back.
On my journey to 1000ms-to-glass with a site like mine, I'm going to go with the format that gives me dramatic size savings, thank you Google.
That said, I can see how it benefits Firefox users not to be able to render WebP... sigh.
It would be helpful if 4chan followed my lead by at least allowing users to post WebP with something like mod_pagespeed running.
> I'm going to go with the format that gives me dramatic size savings, thank you Google.
As far as I've seen, testing has shown that WebP is not dramatically better than JPEG, as long as you're using a clever encoder (like MozJPEG, which is what we're talking about). If you have evidence to the contrary, I'm sure the MozJPEG guys would appreciate a test-case!
> That said, I can see how it benefits Firefox users not to be able to render WebP... sigh.
Instead of spending energy on dubious WebP, Mozilla spends energy on improving JPEG (which benefits everybody now) and Daala (which will hopefully benefit everybody eventually). I think it's a pretty sensible trade-off.
An nginx redirect, based on user agent, to an apology page with a list of download links to WebP-friendly browsers. I used to include a link to a Firefox fork that supported WebP natively, but no one bothered.
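A minimal sketch of that kind of rule, assuming nginx - the location path, user-agent pattern, and page name here are illustrative guesses, not the actual config:

```nginx
# Hypothetical snippet: send browsers that can't render WebP
# (matched here by a Firefox user-agent string) to an apology page.
location /images/ {
    if ($http_user_agent ~* "Firefox") {
        return 302 /webp-sorry.html;
    }
}
```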
I made a sort of Google+ companion to the site which I'd bump them onto but I still haven't gotten the hang of not getting banned.
Yes, but not when storage, bandwidth, money, a desire to deliver only the best user experience (or nothing) and pushing WebP are concerns.
By the way, it's remarkable, when running an image-heavy site, how much bot/mass-downloader traffic relative to humans vanishes when you turn away Firefox user agents.
Nice to see someone is attempting this. Physical keyboard and resistive touchscreen seem like pretty odd decisions though. I guess this is going to be as enthusiast-only as the previous Neo phones which is a shame.
s/odd/sane/:
This oddity you've identified is precisely the reason people like me are interested or even invested in the project.
Must everyone worship at the altar of smudgy screens and ineffective text input? The N900 is the last phone that was actually usable. And by usable I'm not only referring to the stylus/keyboard input duality, but also to the fact that it's as trivially extendable as any reasonably open Linux system.

In fact, I'd wager that the two are very closely correlated: the need to tinker and the desire to do so with sane input devices.
I hate capacitive screens. They're great for tablets, but overly limiting on phones with small screens - especially "hacker-oriented" ones.
I used to code with the on-screen keyboard, using my fingernails, on my old Neo Freerunner. It maybe wasn't the greatest programming experience ever, but it worked for me and I had some great debugging sessions while on the tram :) I can't imagine doing that if the Freerunner had had a capacitive screen - even if there were no bezel around the screen anymore.
People seem to hate resistive screens based on their experiences with poor ones (which is understandable, as most of them are pretty poor). However, a good one is a real pleasure to use, and I find the one used in N900 a good one.
The N900 supports both a touchscreen and a physical keyboard, so you get the best of both, and I'd wager the Neo900 continues this.
Personally I've always found the physical keyboard to be far more productive and have wondered why smartphone manufacturers didn't keep on making them.
Not in this case - packets can be dropped en route and the game player doesn't care as long as his connection is good enough to play. When it gets really bad the player will likely quit and the packets will stop.
If it is really desired you could implement throttling according to packet loss, but not in the way that TCP does it - by buffering and waiting - instead you'd just send packets every N frames. You can't do that if you're just using TCP since you don't know when packets are dropped.
> If it is really desired you could implement throttling according to packet loss, but not in the way that TCP does it - by buffering and waiting - instead you'd just send packets every N frames. You can't do that if you're just using TCP since you don't know when packets are dropped.
It is really desired (and there are a bunch of ways to do it, and plenty of libraries that already implement them). Please, please implement protocols in this way, and not in the way described in the article.
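One common way to do this (a sketch of the general technique, not anything from the article) is to tag each UDP datagram with a sequence number: the receiver counts gaps in the sequence, and the game can feed that loss signal back to the sender to stretch its send interval to every N frames. The loopback addresses and the simulated drop below are illustrative:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// countLost returns how many sequence numbers are missing from an
// in-order stream of received sequence numbers starting at 0.
func countLost(received []uint32) int {
	lost, expected := 0, uint32(0)
	for _, seq := range received {
		lost += int(seq - expected) // gap = dropped packets
		expected = seq + 1
	}
	return lost
}

func main() {
	// Receiver on loopback (port chosen by the OS).
	recv, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		panic(err)
	}
	defer recv.Close()

	send, err := net.DialUDP("udp", nil, recv.LocalAddr().(*net.UDPAddr))
	if err != nil {
		panic(err)
	}
	defer send.Close()

	// Send five sequence-numbered state updates, "dropping" number 2.
	for seq := uint32(0); seq < 5; seq++ {
		if seq == 2 {
			continue
		}
		buf := make([]byte, 4)
		binary.BigEndian.PutUint32(buf, seq)
		send.Write(buf)
	}

	// Read the four datagrams that arrive and extract their sequence numbers.
	var got []uint32
	buf := make([]byte, 4)
	for i := 0; i < 4; i++ {
		n, _, err := recv.ReadFromUDP(buf)
		if err != nil || n < 4 {
			break
		}
		got = append(got, binary.BigEndian.Uint32(buf[:n]))
	}
	fmt.Println("lost:", countLost(got)) // the signal you'd use to throttle
}
```

With TCP you never see this signal at the application layer - retransmission hides it from you - which is exactly why games that want to adapt their send rate do it over UDP.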
That's actually pretty significant. I doubt anyone would want to drive more than 400 miles in a day. You can get between almost any two points in England with that range.
400 miles wouldn't even get me to my brother's place in the next state over.
More than 400 miles in a day is common in US roadtripping. On the other hand, breaking up those long driving days with a reasonable recharging period would be more than OK.
I think it's easy for many Europeans to forget just how big the USA and Canada are. I've been on vacation (we just call it holiday) where we drove SF -> Yosemite -> Las Vegas -> Arizona -> LA, and on another trip DC -> Virginia -> Charleston -> Savannah -> DC, and both times I'd forgotten just how long it takes to get anywhere. I suspect this is why so many people fly domestically.
I agree. I mean, I can see it being useful for things that you "know" how to do but have just forgotten the syntax for... But the chance that a random JavaScript snippet on SO is right seems pretty damn low to me!
Agreed, especially from a third-party provider of JavaScript! It would be absolutely trivial for them to inject highly invasive tracking code into your site, or worse.
I'm not saying they will, but really it doesn't make sense to take the risk.