A junior developer is someone who was unable to negotiate a stronger position for any reason: lack of experience, recent unemployment, expertise in an outside specialty, a company that is a tightwad about 'status' and/or pay levels, naivety about negotiation, etc. Conversely, sometimes you'll also see someone at a 'senior' level because the title meant more to them than pay or other benefits. Sometimes a title distinction is engineered by HR and the hiring team at a company to function that way.
It is often thought of as a prediction about what level of contribution is expected, but that's basically a farce. In every organization I've seen that makes distinctions between junior and senior developers, the only thing different between them is that senior developers are paid more. They don't do more, manage more, design with more foresight, or anything like that.
The exception is that developers who are brand new to a job function as junior engineers until they get calibrated and learn some parts of the codebase in which they can be effective. But this is just as true for a brand new senior dev with 10 years experience in your company's primary domain application as it is for a kid straight out of a bachelor's program whose only prior work experience has been theoretical REUs or something.
You're getting downvoted for expressing the reality as opposed to the platonic ideal. Ideally junior/senior implies something about ability to work independently, but there are plenty of crappy places where job titles bear no relation to actual abilities. Acknowledging that the meritocracy doesn't work perfectly seems to piss people off around here.
I would expect a senior engineer to come into a new job and be able to propose the design of a new service very quickly - its API, architecture, datastores (including sharding, replication, active-active multi-master designs, ...), DR strategy, how it fits into the ecosystem, how to scale it to thousands of QPS, etc.
I would too, but I have worked with plenty of people with the word senior in their title who wouldn't be able to do any of that. I've also met "juniors" who do carry the world and build great systems.
The problem is that titles get inflated and can't be relied on as much as we would hope.
You might expect it, but I've encountered plenty of 23-year-old Senior Network/Software/whatever Engineers with just 1 year of experience. Basically as soon as they'd implemented a single feature, they would consider themselves senior.
Personally, I'd say a senior engineer is one who solves problems by the most appropriate means. Maybe that means fixing a "bug"; maybe it means knowing enough about the problem domain, the use cases, the dependencies, and the history to change the documentation instead. Maybe it means telling a junior engineer "figure out how to scale that" and using experience to know what part of the system really needs to scale. And so on and so on. A senior engineer spends little time coding and lots of time mentoring.
Maybe I'm missing something, but why isn't the implication going the other direction?
In my experience, the problem is that interviewers have no idea how to correctly value a candidate's performance. Maybe the candidates are closer to being well-calibrated, but their self-assessments don't match up with the interviewers' because the interviewers don't know how to gauge what they are looking for?
Making the assumption that an interviewer knows how to measure the response of a candidate, even in cases of extremely quantitative questions with well-defined answers, is highly suspect to me. I think virtually no one knows how to do that effectively.
:)) Just the other day a manager said "HR is overwhelmed. They have a student who filters the CVs and she is doing all she can." The senior programmer exploded: "How is a student any good at filtering programmers? She doesn't know anything about programming!"
If you define how well someone does as how likely they are to be hired, then it makes sense that interviewers are the source of "truth", because it's their grade or opinion that decides whether you get hired or not.
We can certainly speculate about whose opinions are more correct or valid, but if we just objectively accept that performing perfectly means you get hired, then what the interviewer thinks matters and what the interviewee thinks doesn't.
I do think you raise an interesting question though. I do wonder how varied the scores would be if many interviewers saw the same interview performance.
After all, most people fail many job interviews before landing one, but is that because of variation in the performance of the interviewee or because of differences between interviewers?
Using the measurement, "how likely are they to be hired" seems like it's precisely the problem. "How likely someone is to be hired" is a totally erratic and unpredictable property of a bureaucratic hiring process. It's exactly not the sort of thing you want to use to define an absolute notion of skill or performance in a job interview.
This is my concern too. Any person who's gone in for many interviews has seen that the interviewer isn't the absolute measure of performance: we've all seen problems that are just trivia questions, are designed to show off the skills of the interviewer, or (worst of all) where a second solution isn't accepted because it's not what the interviewer thought of. All of these issues can result in disagreements on performance, in both directions.
I would like to see a different analysis: for multiple interviewers of the same candidate, how similar are the ratings?
> There's actually a shortage of data-savvy people who can also write production software, and you would nicely complement a more research-inclined data scientist or analyst -- someone with far more experience with research/analysis than development.
I experience the same problem with shortage-at-price-X in the field you describe. I'm a machine learning engineer with experience in MCMC methods, but I also have a lot of low-level Python and Cython experience, some intermediate experience with database internals, and lots of experience writing well-crafted code for production systems.
There are basically zero companies willing to pay what I'm seeking (which is a salary based on my previous job and a few offers I got around the time I took that job). In fact, in some of the more expensive cities, the real wage offered is far lower than other markets.
I've seen reputable, multi-billion dollar companies offering in the $140k range for this type of role in New York. That's wildly below anything reasonable for this sort of thing in New York. I've seen companies in Minneapolis offering $130k for the same kind of job -- and even that is still too low for Minneapolis! The same has been true in San Francisco as well.
Because these companies value you more for simply looking good on paper and looking good as a piece of office ornamentation when investors stroll through, and they view you as an arbitrary work receptacle closer to a software janitor than a statistical specialist, their whole mindset is about how to drive wage down.
Frankly, given the stresses of the job and the risk of burnout, I think it's actually a terrible time to be in the machine learning / computational stats employment field, despite all of the interesting new work and advances being made. The intellectual side is good, but the quality of jobs is through the floor.
"I've seen reputable, multi-billion dollar companies offering in the $140k range for this type of role in New York. That's wildly below anything reasonable for this sort of thing [in NY/SF"]
Man, do I ever agree. This is where the "shortage" argument falls apart.
This is why I'm so uninterested in the abstract arguments happening elsewhere on this topic about whether markets are failing and basic laws of supply and demand no longer apply at theoretical salary levels ($10 million was offered as an example).
Why are we bothering with this debate when it's so far from reality? You say you're trying to hire a very highly skilled and critical tech worker in SF, and you just can't find one no matter how hard you try - and then I find out that you're only offering $140k a year?
In San Francisco and New York (and anywhere else in the US, really), that's nowhere close to the kind of pay where we should start scratching our heads about a shortage and start wondering why the usual laws of supply and demand aren't working anymore.
Yeah, I strongly believe companies haven't figured out (or aren't willing to figure out) the IC track problem for data people the way they've figured it out for engineers. Part of me wonders if it even makes sense for them to figure it out, if they're not an Uber/Netflix/Amazon with a strong need for advanced ML abilities.
It sounds like you're a principal/lead/post-senior ML engineer; at that level, you can easily command more than $140k but you have fewer options to apply those skills at companies that really need them (because few companies actually need them).
I don't know. It's tough. I agree that it might be a terrible time to work in ML/computational stats because of stuff like this.
I suspect the reason is those companies offering $140k frankly don't need that level of expertise. With that kind of background it would be fairly easy to get 200-300k as an infrastructure engineer at a quant shop.
And, as pointed out so many times, that is entirely why nobody wants to work for them. Respect these very bright people and you have a starting negotiation position.
All of the C/C++ experts I know, as well as people who have interviewed me coming from primarily that background, have always been among the most adamant to stress that an application crashing unexpectedly should never happen and is always the wrong outcome.
I imagine they would say that your statement about crashing vs. e.g. launching the missiles is a false dilemma. You don't crash and you don't incorrectly launch the missiles.
I'm not a C++ developer so I can't say it with certainty, and I actually agree more with what you're saying. I'm just relaying that in my experience, out of many different language communities, the C++ community seems adamantly opposed to what you're describing.
I think the industry is on the cusp of settling on the Erlang model which is essentially allowing pieces of a program to crash so that the whole program doesn't have to. It will take time for practices and tools to spread.
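To sketch what I mean (purely hypothetical, using plain POSIX processes rather than Erlang itself): a supervisor stays alive and restarts the piece that's allowed to crash, instead of that piece trying to recover in-process.

    #include <cstdio>
    #include <cstdlib>
    #include <sys/wait.h>
    #include <unistd.h>

    void do_work() {
        // Imagine this hits a contract violation partway through.
        std::abort();
    }

    int main() {
        for (int restarts = 0; restarts < 3; ++restarts) {
            pid_t pid = fork();
            if (pid == 0) {        // child: the piece that is allowed to crash
                do_work();
                _exit(0);
            }
            int status = 0;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                return 0;          // worker finished cleanly
            std::fprintf(stderr, "worker died, restarting (%d)\n", restarts + 1);
        }
        return 1;                  // give up after repeated crashes
    }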
I have occasionally needed to argue with a long-time C dev that crashing is exactly what I want my program to do if the user gives unexpected input. They're used to core dumps instead of pleasant tracebacks.
I'm a big fan of fatal errors and crashing the program with a stack trace:
1. A stack trace at the point of a contract violation tends to capture the most relevant context for debugging -- and the faster it is to discover and debug an issue, the easier it is to fix.
2. Interacting code has to become sufficiently coupled to preserve "sane program state" -- an exception may or may not be recoverable, but a fatal error never is, so there's no point in building code that tries to recover. If the programmer has to design the interaction among program components to avoid fatal errors, then there must be fewer total states in the program than in a program which recovers from errors -- this makes the program easier to reason about.
3. On delivering good user experience -- I'd rather have clear and obvious crashes, which are more likely to include the most relevant debug information, than deliver the user some kind of non-crash but non-working behavior (with possibly unknown security consequences) which may take longer to get noticed and fixed because the error handling mechanism deliberately _tries_ to paper over programming problems.
I've actually modified third-party libraries I've used to remove catch blocks or replace the error handling inside them with fatal errors -- when dealing with unknown code it really can vastly speed up learning and understanding based on observed behavior, especially around edge cases.
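To make the style concrete, here's a minimal hypothetical sketch (the macro name and the request-accounting example are made up, not from any particular codebase):

    #include <cstdio>
    #include <cstdlib>

    // Abort at the point of a contract violation so the core dump / debugger
    // captures the most relevant context, instead of unwinding the stack.
    #define FATAL_CHECK(cond, msg) \
        do { \
            if (!(cond)) { \
                std::fprintf(stderr, "FATAL: %s (%s) at %s:%d\n", \
                             (msg), #cond, __FILE__, __LINE__); \
                std::abort(); /* never caught, never "recovered" from */ \
            } \
        } while (0)

    int received = 0, handled = 0;

    void record_request() {
        ++handled;
        // Invariant: we can never have handled more requests than we received.
        FATAL_CHECK(handled <= received, "request accounting is corrupt");
    }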
In my experience, "You don't crash" means you catch the exception and exit gracefully, reporting a fatal error has occurred. Users don't distinguish between a crash and a fatal error.
Higher level languages are better at reporting uncaught runtime errors than C/C++ is, because they'll automatically do things like print useful stack traces and then exit gracefully even if you don't catch an exception. The interpreter doesn't crash when your code does.
I think you misunderstand the use case. If your container is tracking work done and it thinks 20 requests were handled and only 10 were received, you have an invariant failure. Without more context, this could easily be trashed memory, in which case, you might already be in the middle of undefined behavior. In that case, getting the hell out of the process is the most responsible course of action. Efforts to even log what happened could be counterproductive. You might log inaccurate info or write garbage to the DB.
Also, if you don't catch an exception in C++, most systems will give you a full core, which includes a stack trace for all running threads. Catching an exception and 'exiting cleanly' actually loses that information.
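A rough, hypothetical illustration of that tradeoff (not from a real codebase): built normally, the uncaught exception goes through std::terminate and abort, so you get a core with every thread's stack; built with -DEXIT_CLEANLY, you get a tidy message and lose all of that.

    #include <iostream>
    #include <stdexcept>

    void do_work() {
        throw std::runtime_error("invariant violated");
    }

    int main() {
    #ifdef EXIT_CLEANLY
        try {
            do_work();
        } catch (const std::exception& e) {
            std::cerr << "fatal error: " << e.what() << "\n";
            return 1;  // "graceful", but no core dump, no per-thread stacks
        }
    #else
        do_work();     // uncaught -> std::terminate -> abort -> full core
    #endif
        return 0;
    }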
I've been writing software in C and C++ for a long time. Crashing is never a good user experience, so avoid it. If something unexpected happens, catch and report the error, then carry on, if possible, or gracefully exit otherwise.
As someone who works in support, customers (at least the ones I support) REALLY need a clear and obvious crash to understand something's wrong. That really forces them to "do something differently" and/or look for help. You're correct in that it's not a good user experience. Neither is chaos.
Exactly. "Undefined behavior" includes showing private data to the wrong user and booking ten times more orders than the user originally indicated. I'll take crashing over that.
It really depends on the type of software. Sometimes if something unexpected happens and you just catch and report an error, then you may end up in a state which will result in further errors a few hours later. It is much easier to track the root cause, if the program crashes immediately, than having to analyze several hours of logs. And then crashing (i.e. quitting with a core dump) is as graceful as it can be, since it provides all the necessary information to analyze the problem right when it happened.
The tool is also extremely messy internally, written in a proprietary language that resembles a very early version of Python and has accrued nearly unthinkable technical debt. Delivering usage via webservice lets them better hide the dysfunction on the other end of a service call.
This whole topic is not all that newsworthy. The team within Goldman that had architected and developed this years ago had spun out into a consulting group that essentially reimplemented the same thing in Bank of America (Quartz), JPMorgan (Athena) and many others, now including Morgan Stanley, and even trickling down to smaller banks like PNC.
I consider it one of the biggest ripoffs in modern finance that those organizations have paid untold fortunes to adopt the Goldman-like approach, sometimes even with new or additional proprietary languages brought in on the project. It also adds systemic risk for society because it further correlates these internal banking systems between the largest banks. If something goes systematically wrong with it in one place, there's a comparatively high risk the same sort of thing can or will go wrong in another too.
If we were bearing that risk for a good reason it might be OK. But really we're only bearing it because of the superficial branding of Goldman, and the pressure on banks to hand wave and appear to be doing something in the aftermath of the 2008 crisis. And so they go for what looks politically defensible (e.g. "well, this is what Goldman did and they survived the crash" -- despite it being widely researched and reported that Goldman's position in the crash truly had nothing at all to do with superior risk management systems and was a mixture of political favors and luck) instead of anything sensible from a system design point of view.
Last I heard Quartz and Athena are both failed projects. Quartz's lead left years ago, Netezza DB was having massive issues, and Python was way too slow. They had to reboot the project and it is nowhere close to what they wanted it to be. Athena has similar issues with developers constantly changing, no direction, and still isn't anywhere close to real-time risk. I know Credit Suisse had something working, but I haven't heard where that project was going since they moved it from C# to Java.
Would you have any insight into why other places are perceived to be unable to replicate Goldman's success (since you mentioned failed projects)?
Is it really the tech, and not GS's business practices instead?
I can't speak for the other commenters, but my view is that not even Goldman has really any success to show for it. The whole Slang/SecDB thing is a colossal failure even inside of Goldman. That they nonetheless ratchet pay upwards to entice overqualified engineers to babysit a clearly defunct and ineffectual system is no surprise though, because keeping the lid on its badness is paramount to their marketing efforts, which in turn drives GSAM's ability to get high AUM, and more recently has driven the ability to sell this nonsense to others.
The software is junk software. There's no other secret thing going on -- no misdirection or duplicitous motives. A certain class of high-paying customers responds more to the Goldman brand name -- or at least believes it buys them cachet with regulators or investors. For that class of customers, vetting the reliability and quality of the tech stack is at best an afterthought. Since that pile of money exists as a thing for Goldman to target, they do target it.
I advocate that more people should prioritize vetting the technology. If so, they would see it is not of sufficient quality to justify its use, let alone paying to perpetuate it elsewhere. But I'm not naive -- the political approach will always matter more to a wide range of people than will a more objective assessment.
Besides introducing systemic risk, the sale of this software by Goldman smells fishy. Despite the Blankfein quote about maybe selling for $5 billion back in the day, if the software is what they purport it to be, wouldn't selling it be akin to Amazon licensing their product distribution to Walmart?
I've wondered what these million dollar per month programmers do on Wall Street. This really puts it in perspective.
On that note, it's completely depressing to see many of the best minds of our time working on shit software that adds nothing to society. Another swath of them are working on getting people to click on ads for Facebook and Google.
> Another swath of them are working on getting people to click on ads for Facebook and Google.
Which finances Google's driverless car efforts and untold other businesses (like Gmail). Plus the salaries of thousands of developers and the myriad other people who work for Google, and the subindustries it supports (bus drivers, chefs, real estate, etc). Just because their specific job isn't world-changing doesn't mean it has no positive effect on the world.
Silicon Valley has benefited greatly from the ad industry which is why the popularity of this type of complaint bothers me.
Same with Goldman. They do contribute to the world by facilitating commerce, although they likely contribute far less than SV developers since they siphon so much off the top for ultimately marginal long-term ROI. They also ultimately wouldn't make so much money unless they provided some value to the economy beyond exploitation of byzantine financial systems.
I grant that Google is more of a social good than Facebook.
Your claim that Goldman has contributed is a debated topic. Paul Krugman favorably mentioned a study that purports to demonstrate that Wall Street's endeavors are largely unproductive. I can't find it now unfortunately.
Please see my last comment, this is a debated topic.
However, I've read Mike Milken and, despite his warts, I believe he did radically improve capital allocation. So it certainly has happened over the years.
-- edit: I meant about Mike Milken
Somewhere along the line, CTOs or their juniors with budgeting authority were convinced that Goldman's success was due in some measure to SecDB and Slang, which is pure nonsense.
haha, but of course you mean ctrl+F9 -- you probably edited multiple files...
I completely agree with the parent and grandparent posts! I worked with one of the SecDB clones for three years at a Too Big To Fail bank, and it was criminally bad (imho). It's snake oil.
I agree with jnordwick that "Quartz and Athena are both failed projects." For example, Mike Dubno, Kirat Singh, and three other managing directors on the Quartz project are all gone [1].
I agree with the grandparent post that "Goldman's position in the crash truly had nothing at all to do with superior risk management systems and was a mixture of political favors and luck".
The snake oil in this case is what p4wnc6 (who's spot on) highlighted: "The team within Goldman that had architected and developed this years ago had spun out into a consulting group that essentially reimplemented the same thing" [at other banks].
The product that they sold (a SecDB clone) is pure snake oil, and the projects (which were massively expensive) delivered very little value.
Consider: If SecDB really lives up to the hype, then why would Goldman let all these other banks steal Goldman developers and straight-up copy it?
I think this gets at a bigger philosophical question about depression that few seem interested in engaging in.
Sometimes depression is a correct, reasonable, and even necessary reaction to un-live-with-able life circumstances. It's a way of your body generating warning signals that others might see and then offer you help, not unlike shooting up a flare if you're stuck on a life raft.
If the circumstances truly are un-live-with-able, then medication that simply makes you superficially feel like the circumstances are live-with-able -- while they continue to be just as destructive to your life in every single way apart from your now-masked-with-medication feelings -- can be counter-productive.
Many people with depression don't actually have un-live-with-able circumstances. Instead, they have live-with-able circumstances and either they have medical issues preventing them from doing the actions necessary to manage that living, or else they have what I'll call psychological issues preventing them from it, where 'psychological' here is meant to mean the subset of mental health issues that are not well-treated by medication, but may be treated with counseling or changes to other life habits.
The problem I've always faced when seeking counseling or mental health help is that every single mental health professional I've ever interacted with, every single time across many years and highly varied geographical situations, has always, always, always dismissed completely the possibility that a person can actually have un-live-with-able circumstances within which the depressive reaction makes reasonable sense. Instead, they begin from a point of view fundamentally rooted in the belief that that cannot ever happen to a human.
I mean, if they were pressed to think about, say, a child soldier forcibly addicted to cocaine, maybe they'd agree people really can have circumstances such that depression is the correct reaction. But in general, with first-world people, they just assume it's impossible. They have a pervasive Occam's-Razor sort of filter, applied before ever meeting you or even talking to you the first time: nope, you're wrong about how you view your own problems, there's no way you could be thoughtful enough to have done meaningful introspection before seeing them, and your circumstances are never a justifiable reason for feeling depressed.
And from this attitude, the next step is almost always to suggest medication right away. And any time I've tried to say something like, "well, I'll consider medication, but I'm not just going to jump right into it. I want to speak more about my circumstances and explain why I feel like it's a real-life Catch-22 that truly, utterly is depressing, rather than the depression being a part of me as I respond to it," it's like talking to a brick wall. The counselor / therapist / psychiatrist doesn't want to hear about it. They already know, within one 1-hour visit, that you need drugs, and now that you're saying you won't just rush right out and get them, you've become a further problem because you won't just go get the drugs you need.
And what's crazy is that the underlying rationalizations from the different mental health professionals have been all over the place. One person thinks it's because I have issues about my childhood and my father. Another thinks it's because I grew up in a relatively more religious community. Someone else thinks I have PTSD from an abusive relationship in my late 20s. Yet another thinks that it's related to overwork and job stress.
All of these are important issues and some mixture of all of them is affecting me. But every counselor I see has their own colored opinion about which magic answer it is, all those magic answers are different from each other, and yet, every one of them thinks that drugs, drugs, drugs is what will magically solve the problem.
It really gives me no faith in the mental health treatment infrastructure, and causes me to be even more guarded about when or if I will consider trying anti-depressants.
From some of these statistics, it makes me feel like anti-depressants are way over-prescribed, and that counselors are just extremely lazy. They don't want to hear the whiny, tearful narratives of their hurting patients' lives -- just like friends and family also don't want to hear it. And so shoveling out some drugs is an easy way to focus on something different than the thing they find unpleasant (i.e. actually listening).
There's depression the feeling, which may be caused by un-live-with-able circumstances and serve as a stimulus to make life changes.
Then there is depression the illness, which may or may not be brought on by some circumstances, but which is debilitating.
If it's serious enough, I'm not sure if it matters whether it is spontaneous or caused - you need to be brought out of it before you can deal with the un-live-with-able circumstances. Staying in the depression doesn't help you to take action at all.
But the huge difference is that with truly un-live-with-able circumstance, the most important thing you need is help to alleviate the circumstance itself. Changing your response to it isn't helpful if it doesn't contribute to changing the underlying circumstance, and when the circumstances are things that you don't control which happen to you, especially when they are on-going and it's not so much about how you process them but actually stopping them, it's questionable what role medication really plays.
> Staying in the depression doesn't help you to take action at all.
It definitely can. Precisely when you lack the power to change your own extrinsic circumstances, and it is those circumstances causing the harm that leads to depression, then staying in the depression keeps sending out that distress signal for help. Taking you out of the depression without also addressing the circumstances makes it seem like you're managing it, when really you're still being harmed.