
> seems to consider performance to be something of an afterthought

Not an afterthought, but I'm more interested in providing features for my customers. The site is fast enough, and the time to develop new features is more important than making the thing scale for zillions of users when I only have hundreds hitting the site.

I don't ignore performance, but if you're charging enough, it's certainly not the primary consideration.



I think that people building end-user software are less culpable in some regard. My comment is aimed more at people producing the gems and libraries and frameworks that then propagate through the Rails community. When they are slow, they cause a compounding effect that ripples down to the developers building end-user applications, who just assume that 400ms for a page is "normal", because that's how it's always been.

Ideally, in the Rails world, the people building client apps would only have to worry about the algorithmic complexity of their own code, and the libraries should be operating efficiently under the covers. To that end, you're completely right that you shouldn't have to worry about performance - the people writing the software you build on should already have done that. But when they don't, that costs you time and money, and I think that's a shame.


I think the main reason for this is that there's no 'convention' for profiling a gem. No one's built something that's super simple, off the shelf, and easy to use. If someone did, and it got enough traction, it could be as much a part of the gem-building process as bundler or jeweler are right now.


ruby-prof has been around forever. Combine it with kcachegrind and you have a very powerful toolset. I wrote about instrumenting Rails apps here: https://www.coffeepowered.net/2013/08/02/ruby-prof-for-rails...

The same concept applies to gems. You just construct a scenario that stresses your gem internals, run it with ruby-prof, pop open the resulting dump in kcachegrind or similar, and explore to find the hotspots. I did something like this for MongoMapper, and documented the process here: https://www.coffeepowered.net/2013/07/29/mongomapper-perform...
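The workflow above depends on the ruby-prof gem and kcachegrind, but the core idea - exercise the hot path in a loop, then see where the calls pile up - can be sketched with nothing but the standard library's TracePoint. `WorkUnit` here is an illustrative stand-in for your gem's internals; ruby-prof adds timings and call-graph output on top of this.

```ruby
# Stand-in for the gem code you want to stress.
class WorkUnit
  def run
    100.times { helper }
  end

  def helper
    2 + 2
  end
end

# Count every Ruby-level method call that happens while tracing is enabled.
call_counts = Hash.new(0)
tracer = TracePoint.new(:call) do |tp|
  call_counts["#{tp.defined_class}##{tp.method_id}"] += 1
end

# The "stress scenario": hammer the gem internals in a loop.
tracer.enable { 10.times { WorkUnit.new.run } }

# Sort so the hottest methods surface first.
call_counts.sort_by { |_, count| -count }.each do |method, count|
  puts "#{count}\t#{method}"
end
```

Running this prints `WorkUnit#helper` at the top with 1000 calls - the same "find the hotspot" signal a real profiler gives you, minus the timing data.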


It can be less about scalability and more about user experience. I have several apps that I only need one or two dynos to run, but I invested in caching/performance anyway because it's a better experience for the (few, high-paying) clients who are using them.
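That kind of investment usually boils down to a fetch-or-compute pattern. Here's a minimal in-process sketch; in a real Rails app you'd normally reach for Rails.cache or fragment caching instead, and `TtlCache` is just an illustrative name:

```ruby
# Minimal in-process TTL cache: return a cached value if it is still
# fresh, otherwise run the block, store the result, and return it.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @store = {}
  end

  def fetch(key)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > Time.now

    value = yield
    @store[key] = Entry.new(value, Time.now + @ttl)
    value
  end
end

# First call renders; the second is served from the cache.
renders = 0
cache = TtlCache.new(60)
page1 = cache.fetch(:dashboard) { renders += 1; "<rendered page>" }
page2 = cache.fetch(:dashboard) { renders += 1; "<rendered page>" }
```

Even on a one-dyno app, skipping a repeated render like this is what keeps response times snappy for those few high-paying clients.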



