And here I was, thinking this would be about statistics.


And I thought this was about tensor analysis :)

https://en.wikipedia.org/wiki/Covariance_and_contravariance_...


And it was generalised in category theory:

> The use of both terms in the modern context of multilinear algebra is a specific example of corresponding notions in category theory.


It's a new meaning for me too. I knew General Relativity before I could program.


> which is going to be peanuts for a company requiring this kind of scale

That depends. Two million connections from people on an e-commerce site is a lot, but two million connections for a side-thingy like some analytics/ads/background-job-whatever doesn't have to be that much, especially if you consider 3rd party code.


Cases will vary, but if you have 2 million active users at a time and can't cover $1500/mo, then I'm not sure what your options will be. Along these lines, I'm really excited about what kinds of creations Phoenix will enable, exactly because of the kind of scale it gives you on current hardware. I think we'll see disruptive applications/services come out because of what you can get out of a single machine, so I find stressing that the price isn't affordable in this context really bizarre.


I understand, I do like the idea of Phoenix, and it is definitely great to be able to keep 2M connections open at all.

I'm just wondering where the per-connection overhead is going. Is there some inherent limitation of the WebSocket protocol that forces the server to keep large buffers or something? Not trying to bash on Phoenix, I'm just genuinely interested in what the lowest possible overhead is that one could achieve while keeping a WS connection open.


Doing some rough math with my limited understanding of Linux network internals, it's about 40KB per connection in this benchmark. I know that cowboy is going to require ~4KB or so per connection. Consulting a local Ubuntu install, the default minimum TCP buffer sizes will be at least 4KB each (times two, for read and write), but by default 16KB each, and by default the max goes to 1.5MB or so each. This is required for TCP retransmits and such. If you have clients on shoddy connections or see packet loss, your memory usage could skyrocket on you. I remember reading of a case where someone had a service die, despite 33% memory headroom, when the TCP packet loss rate went up (still under 1%): it caused their buffer sizes to grow large enough to run out of memory.

So that's 8KB (will be higher with more usage) for TCP buffers in the kernel, 4KB or so for cowboy, and 28KB or so left for various other bits of the system when amortized per connection.
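
For what it's worth, here's that amortization as a quick TypeScript back-of-the-envelope sketch. All the constants are the rough estimates from this thread, not measured values:

    // Back-of-the-envelope version of the breakdown above.
    const totalPerConnectionKB = 40;   // ~40KB/connection in the benchmark
    const kernelTcpBuffersKB = 2 * 4;  // read + write buffers at the ~4KB minimum
    const cowboyKB = 4;                // rough cowboy per-connection cost

    const restKB = totalPerConnectionKB - kernelTcpBuffersKB - cowboyKB;
    console.log(`~${restKB}KB/connection for the rest of the system`); // ~28KB

    // At the 16KB defaults instead of the 4KB minimums, the kernel buffers
    // alone would be 32KB, so real-world numbers can swing wildly.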


That's only about twice as much as in your case (I'm not saying it's not a lot, just pointing it out).

Still, I wonder how low one could get when implementing this at a much lower level. 41kB per connection seems like a lot of bookkeeping for something that's essentially a handle to a socket. Yes, there's a process overhead in BEAM, but based on the Erlang docs, this should be only 309 memory words.
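
A quick sanity check on that number, assuming a 64-bit VM where one memory word is 8 bytes:

    // The bare Erlang process vs. the measured per-connection total.
    const processWords = 309;                    // per the Erlang docs
    const processKB = (processWords * 8) / 1024; // ~2.4KB on a 64-bit VM
    const measuredKB = 41;
    console.log(`process: ~${processKB.toFixed(1)}KB of ~${measuredKB}KB total`);
    // The process itself is only ~2.4KB (~6% of the total); the rest is
    // presumably TCP buffers, cowboy, and the other bits estimated above.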


I wonder how Meteor's performance would hold up for a game like this.


This is interesting. When I opened the link I thought to myself "this might be a nice and faster way to browse HN" ... but wow, this thing is slow. Not just slow, but the back button isn't working either. Or maybe it is, but it reloads the whole content when you click back?


Something I have on my todo list is to use the new Angular router (https://github.com/angular/router/), however it doesn't appear to be currently available for Angular 2. See issue https://github.com/angular/router/issues/233.


I think I've taken it as a given that any website that uses XHR to load parts of pages will necessarily be slow because of all the rest of the bloat (e.g. jQuery + 10 plugins + Angular) required to make the UI actually work while still keeping the code somewhat readable to the programmers. Basic HTML Gmail is also a ton faster than the normal version ...

You can however use CSS transitions effectively to make the user think the page is faster. Just have some action flying around (it's offloaded to the GPU if you're doing it "right") while things are being processed and it won't "count" towards the user's perception of page load speed.

e.g. Your HTML-only page loads in 200ms, your XHR-/Angular/jQueryified version loads in 1000ms, but you want the user to use and like the latter. Make the page dance around for the first 900 ms, and the user will "feel" that your new version loads in 100 ms. You don't want to push this "effective loading speed" all the way to 0 or else the user will become aware of your trick. Keep it at a perceptual minimum and the user will be like "Woah!"
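
A minimal TypeScript sketch of that trick, assuming an element with id "app" and a CSS class "loading-dance" whose animation only touches transform/opacity so it stays on the GPU (both names are made up for this example):

    // Start the distraction immediately, fetch in the background, and stop
    // the animation once the real content has landed.
    const app = document.getElementById('app')!;

    async function loadPage(url: string): Promise<void> {
      app.classList.add('loading-dance');           // the ~900ms of "action"
      const html = await (await fetch(url)).text(); // the actually-slow part
      app.innerHTML = html;
      app.classList.remove('loading-dance');        // content appears "instantly"
    }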


You cannot just paper over everything. If you're having to resort to tricks due to inherent flaws in your application stack, it may be worth reexamining your application stack.

Also, I wondered who creates that sort of annoyance. Now I know. Having half of all mobile webpages sport animations all over the place only makes me go "whoa" in the sense of "whoa, how do I disable this and go back to the (relatively sane) desktop site". It doesn't make it "feel" as though it loads in 100ms instead of 1000ms. 900ms instead of 1000ms, perhaps, and that's stretching it. But loading in 1/10th the time? Nope.


Yeah; unfortunately the "flaws" aren't something that most people can control. HTML5+JS+CSS+[insert JS frameworks] is inherently inefficient because it's a lot of band-aids on top of band-aids. But we as front-end developers just have to live with those band-aids; we don't get to reengineer the user's browser. If jQuery and Angular were implemented in C, things would be a lot faster, but that's not an option we have.

Compare the loading speeds of these two on your own Google account and you'll see.

https://mail.google.com/mail/?ui=html

https://mail.google.com/

Like seriously, even opening a message is faster in the basic HTML version (on my system roughly 300 ms vs. 600 ms) despite it re-loading all the page chrome. I'm 99% positive that's because of all the bloat from the UI framework they used to make it happen in the regular version. The basic HTML version is so fast that it doesn't even need a progress bar on load!

But of course, you as a developer want to develop the full-blown HTML5 experience, because there are tons of features you can't do with basic HTML only. Also, basic HTML makes your site look dated. (Want a nice-looking button? You'll need a bunch of jQuery bloat instead of just a <button> tag. Want a nice-looking text box that maintains a consistent height across browsers? That's a bunch of CSS bloat and putting a text box inside a fake <div> to ensure its height, because different browsers have different interpretations of your CSS. Want a text box with tagging ability? That's going to be a massive, inefficient bloat of JavaScript, because you essentially have to re-invent the text box from the ground up in JS.)
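
To make the tagging example concrete, here's a hypothetical bare-bones start of such a widget in TypeScript. Even this toy (all names made up) skips selection, paste, IME input, focus handling and accessibility, which is exactly where the real bloat piles up:

    // A fake "text box" <div> re-implementing what <input> gives you for free.
    function makeTagInput(container: HTMLElement): void {
      const input = document.createElement('input');
      container.className = 'fake-textbox'; // CSS-styled to look like one input

      input.addEventListener('keydown', (e) => {
        const text = input.value.trim();
        if (e.key === 'Enter' && text !== '') {
          const chip = document.createElement('span');
          chip.className = 'tag';
          chip.textContent = text;
          container.insertBefore(chip, input); // render the tag before the caret
          input.value = '';
        }
      });

      container.appendChild(input);
    }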

In these cases, sometimes using effects to play tricks on the user to make it "feel" faster does help, because there's nothing we can really do about it ...


And that's why I use the "basic" interface on the (two) webmail providers I have email accounts with. Much faster, and actually handles things like open in new tab properly, among other things.

When the fancy chrome gets in the way of basic functionality, I take the functionality every time.

There is a solution. Or rather, a way of mitigating it. Namely, unlike so many developers, when you look at a "feature", consider the drawbacks, not just the positive side. When you're considering adding something to a button that pulls in umpteen billion JS frameworks, consider if the bloat is worth it. When you're starting to reinvent the text box just so you can have tags, consider if the inefficiency is worth it. When you're considering reimplementing scrollbars in JS, consider if the UI problems you'll have are worth it.

And, you know, if/when you run across something that's problematic to do well, consider sending feedback. Among other things, a number of CSS features exist precisely because someone went "there isn't a good way to do <x> currently"...

I'd be interested to see the other effects of effects. I've seen things on perceived time - but that's not the whole story. Does it affect user retention? Clickthrough rates? User mood?


I'd contend that these aren't problems with HTML/CSS/JS, but with their categorical misuse under the guise of simplicity. Things like jQuery are orders of magnitude slower than both alternatives and native implementations, and using many of these bloated libraries will only multiply the inefficiencies.


It's based on the Firebase API; that's probably what's slow.


The Firebase API is pretty quick after the initial connection, here's an un-optimised React version (i.e. no shouldComponentUpdate hooks) which uses the Firebase API, for comparison: http://insin.github.io/react-hn


This is pretty cool, thanks for sharing. I may start using this to browse HN all the time, since it does have the one feature that I miss from the normal site, namely highlighting new comments since the last time I viewed a thread. It also appears to autoload new items? I assume this must be cookie/session based, but I'll have to look at it.


Local storage, full source is linked at the bottom:

https://github.com/insin/react-hn/blob/master/src/utils/stor...


Nice job.


Why do node people feel like they have to re-invent everything UNIX?


Why would I want to be a PHP developer? :O


Because finding a job is incredibly easy. Because it pays well if you know your stuff. Because you don't have to work with people who will judge you based on something as insignificant as what programming language you decide to use.


Because it still pays, and if you are good at it, it pays well.


Thanks for noticing this! It should be fixed now.


Thanks! I actually wrote this out of frustration with the existing tutorials, and kind of as a self-reference, since every time I want to set up a tunnel I spend 15 minutes googling it :)


That's an amazing idea for a plugin :) I'll try to investigate this, because the way I do it now is to just open any *.cljs file, connect it to the Light Table UI, and eval the expression that way. It is however a lengthy process, as it takes about 10-20 seconds to connect and compile the first time.

