
> Do Java programmers make more money than .NET programmers? Anyone describing themselves as either a Java programmer or .NET programmer has already lost, because a) they’re a programmer (you’re not, see above) and b) they’re making themselves non-hireable for most programming jobs. In the real world, picking up a new language takes a few weeks of effort and after 6 to 12 months nobody will ever notice you haven’t been doing that one for your entire career.

In the real world this is (unfortunately) not how technical people are hired. The stack they are using seems to be the first thing hiring managers look at.

You would think it would be only clueless recruiters who think this way, but in my experience well-respected and successful entrepreneurs also operate along these lines. It is not clear to me whether their success is related to, or in spite of, this way of thinking. To give them the benefit of the doubt: startups often need to move quickly, and the few weeks or months needed for people to become familiar with an unfamiliar stack may simply not be available.



Stack is relevant though. If you need a front-end architect and you hire someone with 10+ years of Java experience, they aren't going to know the first thing about web accessibility, polyfills, bundlers, cross-browser support, etc. Implementing microservices and writing front-end code are domains driven by fundamentally different problems.

The stack an engineer has experience with can be one indicator of the areas they excel in.


> the first thing about web accessibility, polyfills, bundlers, cross-browser support, etc.

Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

Polyfills aren't exactly quantum mechanics. They're not some radically different mental paradigm that takes years to develop fluency in. If you already have a sound basis in the major software engineering concepts, learning polyfills is just a matter of reading the docs to learn the details.
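For the unfamiliar, the core pattern really is tiny. A minimal sketch, using `Array.prototype.includes` purely as an illustration (real polyfills like core-js handle far more edge cases):

```javascript
// Feature-detect, then patch the global prototype only if the native
// implementation is missing -- that's the whole polyfill idea.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (value) {
    return this.indexOf(value) !== -1;
  };
}

console.log([1, 2, 3].includes(2)); // true, whether native or polyfilled
```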

Let's not forget that Jordan Walke was working on back-end infrastructure code just prior to inventing ReactJS, and Miško Hevery was working on Java testing frameworks just before inventing AngularJS. So it's not like there's a history of front-end being such a unique problem domain that it's impossible for outsiders to pick up.

The only major sub-domain of software that I think this may potentially be true of is embedded systems. And even then, that's a big maybe.


>Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

Individually yes, but all of these things would take more than a few weeks - possibly months. With onboarding on an average software project also taking months, you could be looking at paying for a lot of unproductive time if you don't hire somebody with some prior knowledge.

The idea of language fungibility is a joke. Sure, it takes you a day to learn the basics of syntax in another language. Learning the whole ecosystem takes considerably longer.


> Individually yes, but all of these things would take more than a few weeks - possibly months. With onboarding on an average software project also taking months,

So, let's say that is the case. The lesson is that hiring is expensive and risky, even if you find a candidate whose experience exactly matches your stack. Like you mention, just onboarding for the internal technology and project itself takes months.

You're already sinking a big investment into a new hire. So most times you should choose the best engineer even if it means additional O(project onboarding) time. From a business standpoint a great engineer who takes 6 months to get up to speed is a much better investment than a mediocre one who takes 3 months. Not always (maybe you're an ultra-high growth startup who needs bodies on the floor ASAP). But usually.

And the reality is that dropping stack requirements drastically improves the candidate pool. For one, you have way, way more candidates to select from. Two, when a company is hiring for [tech X], it usually means that the market for engineers who know [tech X] is super-tight. It's just the nature of the business cycle. If [X] is in demand at your company, it's probably in demand everywhere, and therefore in short supply.

All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers. If you're willing to hire from anywhere, then usually there's some sub-sector that's in a downturn. That's a huge opportunity to poach talented engineers, who are being mostly overlooked because their stack experience doesn't align with the hot growth sectors.


>And the reality is that dropping stack requirements drastically improves the candidate pool. For one you have way way more candidates to select from.

There's definitely a sweet spot in terms of stack specificity. There's very little issue hiring a Flask developer if you need somebody to work on Django. However, if you're a Ruby shop and you hire a Java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

>All of which means that if you insist on [X] experience, most of the time you're scraping the bottom of the labor market barrel and getting mediocre engineers.

I don't think that's necessarily true. Nonetheless, this could be market-specific. I can imagine that hiring a developer in, say, Ohio might make your approach more worthwhile than hiring in SF, where the talent pool is deeper.


> However, if you're a ruby shop and you hire a java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

I don't think that's true at all. I've met tons of Java devs who moved to Ruby with minimal pain. It's still OO and many of the same patterns apply.


> There's definitely a sweet spot in terms of stack specificity.

The GP's calculation has an important weakness: it does not take retention into account.

The sweet spot will be mostly determined by retention. And the very bad places that have impossibly (sometimes literally) specific requirements are probably correct to require a narrow set of competencies. But of course, they would gain more by improving themselves so that developers don't quit as often.


> However, if you're a ruby shop and you hire a java developer - even if they are great - you've potentially doubled or tripled your onboarding time.

You think? I went the other direction and I don't think it was so hard. The language was the easy part of onboarding, the hard part was all the company specific stuff.


As atomic points, I agree. Polyfills aren't rocket science. If the browser API is not defined, use the polyfill function instead (at least, that's how most work; there are also ponyfills, which don't pollute the global scope).
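The ponyfill variant just skips the global patching. A sketch (the `arrayIncludes` name is made up for illustration):

```javascript
// A ponyfill exports a standalone function instead of mutating
// Array.prototype, so nothing outside the importing module is affected.
function arrayIncludes(arr, value) {
  return typeof arr.includes === "function"
    ? arr.includes(value)          // delegate to the native API if present
    : arr.indexOf(value) !== -1;   // otherwise fall back
}

console.log(arrayIncludes([1, 2, 3], 2)); // true
```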

But front-end development of complex applications isn't done in a vacuum by one person. It's done by multiple teams, possibly distributed across different physical locations.

As an architect, what is your polyfill story? Does each team ship ES Modules that are later compiled by Webpack? Is each team responsible for loading their own? It's probably more efficient to handle it globally --knowing what all teams need at once as part of a build step-- to prevent duplicative loading of the same polyfill by 5 different teams.

It gets nuanced quick. Nobody can master all these concepts --and reasonable ways of managing them in distributed team environments-- in a few weeks.

And if you think JavaScript is "simple", I encourage reading "You Don't Know JS". What does "this" mean in JavaScript? I could ask a handful of questions from the multitude of sections in this book, and people who have been writing JavaScript for years wouldn't be able to answer all of them. React and Angular are such a thin slice of the overall problem. Even simple projects will require a dozen or so more NPM packages before you have something remotely useful.
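To make the "this" question concrete, a toy example:

```javascript
// `this` is bound at call time, not at definition time, which is why the
// same method can behave differently depending on how it is invoked.
const counter = {
  count: 0,
  increment() {
    this.count += 1;
    return this.count;
  },
};

counter.increment();                          // 1 -- `this` is `counter`

const detached = counter.increment;
// detached();  // TypeError in strict mode: `this` is undefined here

const bound = counter.increment.bind(counter);
bound();                                      // 2 -- `this` rebound explicitly
```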


> I could ask a handful of questions from the multitude of sections in this book, and people writing JavaScript for years wouldn't be able to answer all of them.

I think this might prove my point. There are tons of great JavaScript engineers out there, consistently delivering business value, who actually aren't even that knowledgeable about all the details inside JavaScript.

Most of the variance in quality between software engineers has to do with technology-agnostic skills rather than stack-specific knowledge. They architect well-designed, modular systems. They communicate with stakeholders. They write robust, readable, testable code. Their documentation is understandable and comprehensive. They're thoughtful about naming things. They can navigate large codebases and keep complexity contained. They understand performance tradeoffs, and anticipate bottlenecks before they occur.

Very little of that has to do with stack-specific knowledge. A developer who does all of the above, but doesn't know all the ins-and-outs of the JS coercion model, is going to be much more productive on almost all practical business problems.

The most important skill usually isn't knowing every single detail of your underlying ecosystem. It's knowing enough about its overarching landscape to be aware of the things you don't know - how to ask the right questions, and where the limits of your knowledge lie.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

How about average engineers with average fundamentals? The kind most people will on average be hiring and working with.


More career advice: try not to join a company that mostly hires average engineers with average fundamentals.


A lot can be learned working with average engineers. You shut yourself out of a lot of jobs by avoiding average, and you'll probably be screwed if the "above-average" places consider you an average candidate.


Most engineers are average. If you are above average you are likely working at a company that pays above average.


Jordan Walke and Miško Hevery are outliers working at companies that pay a lot. An average company is not going to attract talented engineers with strong fundamentals. It is going to attract engineers with average aptitude and average fundamentals.

An average company, paying average wages, solving average problems is better off getting an average developer specialized in the area they need.


This is only true because average teams get annoyed at having people trying to push the project along in ways they do not understand. In most of my jobs, average people really appreciate stability and do not like change. To fit in, you end up having to do less work and transfer your energy to side projects or hobbies.


> The only major sub-domain of software that I think this may potentially be true of is embedded systems. And even then, that's a big maybe.

The big problem with embedded is that when things go wrong, it has a cross cut with electrical engineering--you have to understand datasheets, read a scope, and understand that digital signals sometimes aren't.

I have never known a good embedded software person who is software-only. Even DSP-types with EE degrees don't often operate on hardware very well.

Generally, the best embedded software types are good hardware EEs and passable software people.

I consider myself a very good embedded engineer, but my software is merely "straightforward". Of course, the best software people I know claim I should take that as a compliment, and they are all happy to work with my code.


Exactly, but embedded systems now look more and more like personal computing did not long ago. We're certainly on par with the mid-to-late 90s in that regard, and even further along if you consider things like the RPi as embedded. Not only in processing power, but also in the skill set needed.

I don't take your skill set lightly at all, but there comes a time when people start using inefficient solutions because they're faster to produce, and you have plenty of hw resources, so... why not? Do it in Python and improve it later... maybe, and then you don't. :-)

I believe you get my point. Not that I personally think micropython is a worthwhile path for professional embedded dev as of now. But there will come a time when it can make sense, as is already the case with the RPi, as I mentioned.

> I consider myself a very good embedded engineer, but my software is merely "straightforward"

And that's where I make my business. I'm merely a straightforward embedded engineer, but I focus on software/hw integration: making the excellent work of people like you talk to the external world, databases, desktop/web UIs and all of that, using software engineering practices. Basically doing the "cloud", "edge" and "IoT" buzzwords.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

The most important 67-80% of front-end work can probably be learned in a few weeks by an experienced engineer who already has a high level understanding of the web. The last 20-33% hides some things that are trickier to wrap your head around, along with more than a few WTFs, and a never-ending stream of subtle cross-browser differences.

It may still be plausible that a talented engineer with strong fundamentals in software design and cs could cover the ground quickly. But the message of the "don't call yourself a programmer" piece might be understood as being as much about how you're going to spend your time as it is about how you're classified professionally (though those are certainly related): are you a stack-whisperer, or are you a badass problem solver with expertise in translating a problem domain into a formal-ish system which expands the capabilities of the human system (and networks of machines) it's embedded in?

My bet as someone who did years of front-end focus is that for all but the truest 10xers, if you focus on front-end and try to be thorough, your risk of being a stack-whisperer jumps dramatically because of how much of your time corner-case ephemeral arcana will chew up. Front-end is certainly not the only place in the industry where we've let that hazard run away with the lives and time of too many talented people, but it's a popular one.


"Polyfill" itself is a word that was invented by some inexperienced people who had never programmed anything outside of the browser that needed per-platform conditional code to isolate itself from missing or different functionality.


> Almost all of these concepts could easily be learned in a few weeks by a talented engineer with strong fundamentals in software design and computer science.

It's web development, and let's face it, many companies don't want long-term talent; they just want someone to finish their current project and move on. Which can make sense in some circumstances. You don't need a software-engineer-level hire to make a dashboard.

People who are beyond the framework-user level also end up moving on from even considering those kinds of jobs.


That's more a question of overall area of past work, though, than particular language or framework. If you've been working on frontend stuff, problems of layout, matching redlines, response latency, etc, should be in your deck regardless of which programming language you've been using.


Is this the right way to hire? Not saying it’s wrong but I’ve taken a different approach which has generally worked out.

Even if I was hiring for a specific role (front end), I care less about familiarity with specific bundlers or frameworks and something more like - “Can this person problem solve and deal with varying levels of ambiguity? Do they seem capable of figuring something out without a ton of direction?”


That is exactly the thing I select for. I recently had to hire 4 developers, and I was not all that interested in how many years of experience they had in javascript or typescript (our main language), and even less if you know Vue or Express (our stack). I mean, it's good to know javascript, and certainly ES6 or typescript, but I'm much more interested in: can I discuss stuff with you, do you have interesting opinions, can you make clear decisions in the face of unclear requirements, are you creative, and can you fix stuff and build features without anyone having to hold your hand? In case of doubt, knowledge of ES6 or Typescript or any relevant front-end framework (Vue, Angular or React) can make a lot of difference, but I'd rather hire someone who knows a dozen languages we don't use than someone who only knows exactly the one language we do use.


And then when you need to ship now, you have a bunch of dead weight while you wait for them to ramp up. You're also taking time from the devs who do know the stack for mentoring and code reviews, because you can't trust the new hires to do good work without messing up your code base. They are now doing "negative work".


That's why I don't hire dead weight, or people who require handholding, but people I can trust, who can pull their own weight, and know how to learn. Someone who merely has X years in technology Y means little to me. Plenty of people manage to still be bad after years of working on the same technology.


But if you're in a secondary employment market, as opposed to startup-heavy SV, you might work with this person for a very long time or have a high chance of working together in the future. So the right fit right now isn't always what people are optimizing for.


After spending 10 years at one company, suffering from not learning anything new but, just as importantly, from salary compression, I learned over the next 8 years and 5 jobs that the best way to make more money is to do Resume Driven Development and change jobs. While your market value goes up tremendously, HR departments insist on giving 3-4% raises to existing employees while being forced to bring in new employees at market rates.

This is the “secondary employment” market, where HR policies aren’t optimized for in-demand fields like software development. The immediate managers are usually helpless to do anything about it and watch employees they spent time training leave for greener pastures. That being the reality, what other course is there but to find people you don’t have to train?


Compared to what, needing to ship now and not being able to find any "qualified candidates"?

It really depends upon the exact situation. We can all make up bullshit examples to make either option look stupid.


It’s “bullshit” to open a req to hire a developer because you need to ship a product within the next six months and you need more bodies? Why are you hiring if you don’t need the extra people? If you didn’t need the extra people for a year until they ramp up, wouldn’t it be better just to take your time hiring until you found someone who didn’t have to spend a year learning on your dime and then leave in two years?


I needed extra people, but not merely warm bodies to fill a seat. I need people who don't need a year to ramp up, but can do so in a few weeks.


It’s not current speed that matters but acceleration and max speed


There is no "right way to hire". All "ways" are flawed and miss things. If hiring could be solved, it would have been solved by now.


I think what he is saying is, if you are hiring a React developer, but someone only has experience in Vue, would you not hire them because of their lack of React experience?


I agree with you, it's not how technical people are hired. But, I think it's entirely necessary in certain contexts, once we realise the software world is bigger than just Web.

Learning languages and libraries is easy. But new paradigms, domains, and entire new ways of thinking? It takes time to master.

So I'd be inclined to add disclaimers to the idea that "after 6 to 12 months nobody will ever notice".

This might be true if you're moving from one web programming gig to another, but the author himself acknowledges the vastness of software out there that underpins every aspect of industry.

So - if you're going to be working on enterprise middleware in investment banks, or embedded systems on airplane simulation kit, or APL, or missile guidance software - but all you've done is web... then tech stack does matter. Insofar as being familiar with the paradigms, standard patterns and unique industry norms.

The problem is that some middlemen (ie recruiters, HR or non-technical hiring managers) do not understand that, for instance, Java and C# are somewhat interchangeable, but C# and APL are not.

A fantastically written piece overall, though - lots of great stuff in there that I wish I had either known (or stopped denying!) earlier in my career. But specifically on this point, I think it's a dangerous idea to imply to undergrads that deep, hard-earned experience in both tech and domain can be overcome on such small time scales.


>>Learning languages and libraries is easy. But new paradigms, domains, and entire new ways of thinking? It takes time to master.

Here's why I think this is sometimes a false dichotomy: going from Ruby to Python will be relatively easy. Going from Ruby to Haskell, on the other hand, will involve not just learning Haskell the language (syntax etc.) but also new paradigms and entire new ways of thinking.


It's the first thing some hiring managers look at.

Speaking for myself as a hiring manager I'm not looking for specific tech stacks. I'm looking for general patterns. Someone who has spent their entire career doing front end work is unlikely to be a great fit for embedded development and vice versa, but I don't care so much about the specific technologies they used in the process.

My experience has been that concepts and fundamentals trump specific tech stacks. However I also take a longer range view of things. I care far less about relative productivity of two candidates a month from now than I do a year from now.


Until your customer needs a feature a month from now and not a year from now or you only have a six month runway....


The percentage of hires for companies in that situation is going to be extremely small, yet the "hire for tech stack" motif is cargo culted throughout the industry.


Why else would most companies have open reqs if they don’t see a need for manpower soon, unless you are one of the four or five big tech companies trying to accumulate smart people (tm)?


Because they project for incoming needs. Hiring for today’s needs is a fool’s errand; it can take a good amount of time to identify a quality candidate. If I need something filled right now or the company goes under, it is already too late.


Doesn’t Brooks say that adding manpower to an already late project makes it later?


Who said the project was late? If you get a new customer or want to get a new feature to market fast, you still need new people.

On the other hand, you have a few “full stack developers” on hand and you need someone to concentrate on one part of the stack so your current developers can concentrate on other parts.

I’ve found it remarkably easy to hire a front-end developer who knows the stack we want to use and doesn’t need to know anything about the business beyond: here is what we want from the website, give us some mockups, and we’ll come up with the APIs you need.


Why would a company choose someone who doesn’t know the stack they are using over someone who does? Let alone 6-12 months.

Knowing the ecosystem, best practices, frameworks, etc takes longer than a few weeks.

Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?


Because someone is smarter or better at the job in general than the lower-quality specialists they can find. It's very hard to find a good generalist. If you can be a good generalist, you're exponentially more valuable than a generic specialist. I have done mostly backend work in Python, but in my current role I've worked in the frontend with Angular, I've debugged critical issues on our iOS app, I've fixed our AWS infrastructure, I've moved us away from dragging files into windows boxes to CI/CD, I've helped architect solutions, I've worked with our customers, I've led efforts across multiple teams, etc.

Being a "Python Programmer" in no way measures my value to the company, in fact, it necessarily limits what I'm actually capable of contributing to the company. If you can show that you can add value outside of a specific tech stack, you're worth the few weeks to learn a new technology. And yeah, it is basically a few weeks to get up to speed and be a contributing member of a code base if you don't know the technology. 6-12 months to know all of the details if you're a solid engineer.


I would say just the opposite. When it comes to the entire “web developer” tech stack, sure I can put together a website good enough for internal use, but I’m not very good at the front end.

I consider myself to be very good at C#, okay at backend JS development and passable at Python.

I have enough experience from doing a lot of ETL in a previous life to know how to optimize queries and schemas for speed and to not lock up a database.

I’ve set up CI/CD solutions from scratch with what is now called “Azure DevOps”, AWS’s CodeBuild/CodeDeploy/CodePipeline, OctopusDeploy and Jenkins.

I could just as easily and competitively apply for jobs as an “AWS Architect” who knows most of the popular AWS offerings for developers, Devops, netops, and system administrators and I have experience with them.

But in many of those areas - especially the front end and the netops/system administration stuff at any scale - it wouldn’t make any sense to hire someone who “kind of” knows what they are doing over a specialist.

If I need something now, I’m not going to want to wait a year for you to get up to speed.

Do you really think you’re as good at any of those areas as a specialist? AWS alone announces dozens of new things every month on their podcast.


I think we might be talking in different time scales, and maybe different responsibilities. If you're looking to bring on someone to solve an urgent problem this month, you're going to need a specialist in the area. If you're bringing on an early hire to build a team, or another engineer to bolster your team with some specialists, you may want a solid generalist. There might also be some significant difference between organizational structures, if you're joining a team centered around building and supporting a product, or maybe a startup with only one or two products, a generalist might be much more effective. If you're joining a massive organization where there are highly specialized teams dealing with specific problem areas, a specialist makes a lot more sense.

As a sweeping statement, I'm better at solving "a problem" than a specialist. If you define the problem area tightly, they may be the right person for the job, but if the role you're hiring for has uncertainty and flexibility, the front end specialist probably isn't the right person to figure out why your database is slow, your load balancer isn't working, your build and deploy process is stuck, etc.

There are definitely roles that are much more fit to one or the other, but the generalist can handle a lot of things pretty well. All of that being said, we can probably agree that the best setting is having both.


> Why would a company choose someone who doesn’t know the stack they are using over someone who does? Let alone 6-12 months.
> Knowing the ecosystem, best practices, frameworks, etc takes longer than a few weeks.
> Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?

I would separate this into two separate categories - big tech companies (or others with similarly large shared internal infra) and others. For big tech companies, for most positions, the tech stack is proprietary internal stuff such that knowing the language and best practices only get you about 10% of the way. For other types of companies, it's generally more important that you hire people with the right business context than the exact tech stack.

With that said, native mobile development isn't just a stack - it's more of a different, though overlapping, career path. The main reason not to hire non-mobile developers into a mobile role isn't that the stack is different and takes time to learn, but that the workflow is so different that they may not know what it is they are even signing up for. Hypothetically, you'd rather hire someone with a Xamarin background but no Java experience for an Android Java role than someone with no mobile dev experience but lots of Java backend experience.

My first mobile development experience was a moderately complex Android project on an app that was used by tens of millions of people daily. There was no ramp-up - I had zero prior experience before signing up for this project, never even played around with any mobile development before and I had never professionally programmed in Java - and I was the sole engineer working on both mobile and backend. It was a little painful but everything shipped on time.


> It was a little painful but everything shipped on time.

Your last sentence would seem to contradict the entire body of your comment. You describe mobile development as a fundamentally different thing, but then your first gig was to work on a large project, the result of which was shipping on time at the cost of a “little” pain.


That's fair - what I meant by it being a different career isn't that skills aren't transferable at all but that you solve different kinds of problems entirely, as distinct from a difference in stack. And if you hired me then as a mobile developer, I'd have quit, so you don't want to hire non mobile engineers into mobile roles. Language/stack is a red herring here in the sense that Java backend development is a lot closer to Ruby backend development than it is to Android development.


> And if you hired me then as a mobile developer, I'd have quit, so you don't want to hire non mobile engineers into mobile roles.

That is a great point. I’ve also had the experience of working for a short time on a system, knowing that I would hate for it to become a regular part of my work. So hiring someone with experience is one way to mitigate staff turnover from undesirable tasks.


I don’t know Ruby. But I can tell you that having the guardrails of a compiler and a type-safe language, and then having those rails taken away in languages like JavaScript and Python, caused a lot of heartache early on.


This is fair too but I think most experienced engineers for whom stack is a valid consideration have some experience with at least one dynamically typed language and one statically typed language and can self-select out of roles if they have a strong preference either way. And I don't know anyone for whom this was an actual blocker as opposed to an annoyance (in either direction) that they got used to after a while. On the whole, I think it's less likely for this to be a serious issue than, say, the details wrt how the system is architected and how the organization is run, some of which you won't really get to know until you start.


> Why would a company choose someone who doesn’t know the stack they are using over someone who does?

If the two candidates are perfectly identical on every other criterion, sure. But that's not the case; the point is that most other criteria are more important than specific tech experience when dealing with competent people.

> Sure I could learn Java or Swift in a few weeks, but does that mean I would be a competent Android or iOS developer?

If you have web experience, I'm not too worried about your productivity on Android or iOS. If I'm hiring for Django and you have experience with any of Symfony, Rails, Spring, .Net or NestJS, the tech expertise is the least of my concerns.


You’re not worried about their mobile experience until the app fails in a tunnel because they expected an always-on connection, or they have half their records on the device and half on the server and need to figure out which to use, or they never had to design a syncing algorithm because on the web you rarely need one; you never expect long periods of being disconnected.

Edit: and I forgot to mention the biggest UI failure I made when I did mobile development a decade ago on WinCE ruggedized devices. I didn’t even think about actually taking the device out into the sun, where all of the field service techs would be working, and seeing how the screen, colors, and contrast looked.


I would argue that this isn’t solely the responsibility of the programmer. This is where testing and QA come in. Also, these are pretty basic considerations when building mobile apps, and I would argue that they aren’t language-specific.


So now we are going to spend more on development, and probably rework, because the developer decided to just wrap the website in a webview and call it a “mobile app” without thinking through these scenarios. Any time you have to go through the development -> QA cycle more than once, it costs money and time.

That’s kind of my point. Learning a language is easy and for the most part useless without knowing the frameworks and architectural best practices.

I don’t know what Android has, but iOS has built in frameworks for handling syncing. If an Android developer didn’t know all of the built in frameworks available to them and re-invented the wheel, that would also be a waste of money.


The argument made here is that a good generalist tech guy would think of these things beforehand just as well as an Android or iOS guy would. That is, language knowledge !== problem solving.


I know all of this because I’ve architected two solutions on Windows CE devices and built one on top of a proprietary mobile/web form builder that let you do real coding in JavaScript.

Any company that would hire me as a modern “mobile developer” would be absolutely foolish. I have only written 30 lines of Java my entire 20 year professional career, never written a line of Swift or Objective C. Why would they hire me over someone with relevant experience as a developer? If they want to hire me as a team lead/architect and then find mobile developers, I would know what to look for.


While it's not solely the responsibility of the programmer, most of the responsibility should fall on the programmer.

It's a much more efficient system for the programmer to own UI/UX and for testing and QA to simply verify, in contrast to the programmer doing whatever and leaving full responsibility for UI/UX to the testing & QA cycle.


I notice this a lot with apps in the subway (who doesn't?): the connection drops and the app just dies.

A lot of problems in life can simply be avoided, and this is no different. If you're getting that tiny last bit of differentiation because you have 90% market share and can afford to hire people to solve that exact problem, then great. But that isn't most places. Most apps would do better to avoid the problem entirely and just say "No Internet Connection" when there's no WiFi or 3G. On top of that, you get mobile developers who swore to God they solved this problem, but guess what: they have no idea what they are doing. They think it works, but it doesn't, because it's actually a database concurrency control problem. MVCC (https://www.postgresql.org/docs/current/mvcc-intro.html) is what I expect from someone who claims to have "solved the problem," not some "algorithm" they invented.
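
Version-checked (optimistic) updates are one standard expression of the concurrency control the MVCC link above describes in database terms. Here is a minimal in-memory sketch; the class and names are invented for illustration, not taken from any library:

```python
# Minimal sketch of optimistic concurrency control: every record carries a
# version number, and a write is accepted only if the writer read the version
# it is trying to replace. A stale writer gets an error instead of silently
# clobbering someone else's update.

class StaleWriteError(Exception):
    pass

class VersionedStore:
    def __init__(self):
        self._rows = {}  # key -> (version, value)

    def read(self, key):
        # Unknown keys read as version 0 with no value.
        return self._rows.get(key, (0, None))

    def write(self, key, value, expected_version):
        current_version, _ = self._rows.get(key, (0, None))
        if current_version != expected_version:
            raise StaleWriteError(
                f"{key}: expected v{expected_version}, found v{current_version}")
        self._rows[key] = (current_version + 1, value)
        return current_version + 1

store = VersionedStore()
v, _ = store.read("job-42")           # both clients read version 0
store.write("job-42", "done", v)      # first writer succeeds, bumps to v1
try:
    store.write("job-42", "cancelled", v)  # second writer is stale
except StaleWriteError as e:
    print("conflict:", e)             # prints: conflict: job-42: expected v0, found v1
```

A real database does far more than this, but the core idea (detect the conflict rather than pretend it cannot happen) is the same.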

So I don't buy it, and I don't buy the business need for it unless it's an app specifically made for disconnected use. Unfortunately it sounds like one of those things people do to make themselves feel important or smart (no nice way to put it; I see it as bad as someone who invents their own encryption "algorithm" without realizing how ridiculous that is).

In short I would say don't do it. And if someone does it better be a real business requirement.


And you’re kind of demonstrating my point - the difference between people who think it’s an easy problem and the proportion who have experience.

Enterprise mobile apps aren’t about the apps you download from the App Store. Usually they are distributed using an on-site mobile device management system.

1st use case: I worked for a company that wrote field service applications for ruggedized Windows mobile devices. Some had cellular, some had WiFi, and some had neither; you had to actually dock the device. The field service techs had to have all of the information they needed on the device to do service calls, including routes, whether or not they had connectivity. They would record the information and it would sync back to the server whenever they had a connection.

2nd use case: worked for a company that wrote software for railroad car repair billing. Repairs are governed by Raillinc (https://www.railinc.com/rportal/documents/18/260737/CRB_Proc...) all of the rules and audits had to be on the device and the record of the repair had to be available whether or not they had connectivity. It had to sync back with the server whenever a connection was available.

3rd case: software for doctors. Hospitals are notorious for having poor connections.

4th case: home health care nurses had to record lots of information for Medicare billing. Again you can’t count on having mobile connections.


It's not an easy problem. It's a database concurrency issue, well beyond the skill set of most mobile devs. Databases aren't even a required course for a comp-sci degree; it's usually an elective.

My point is that the problem you think you solved, you didn't, and it will break under dozens of scenarios. Maybe the clients are happy and they think it works, but you just haven't encountered the case where data goes missing or gets overwritten.

In other words, what I am saying is that it is wrong, and it's hard to know it is wrong unless you attack it directly. The word "enterprise" is a euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an enterprise mobile developer over a B2C mobile developer just because of the word "enterprise."

Not everything the world asks for should exist; that kind of thinking leads to a 737 Max. The word "sync" has implications way beyond the concerns of a mobile developer.

So I call bullshit. The fact that the industry does it, that everyone does it, that you consider "real" mobile devs to require it, that customers want it, doesn't mean it is a good idea or that it's mathematically or scientifically sound. It may cover most cases, and nobody may notice the problems except once in a blue moon, but that doesn't make it right, because operational systems need full data integrity.

The correct way to handle such a request is not to "sync" but to collect data, push it to the backend, and let the backend sort out the mess. Not "sync" by any stretch of the imagination, no matter what cottage industry or cult beliefs have been born of it.
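
The "collect and push" approach described here can be sketched as an append-only log of client observations that the backend replays in order. This is a toy illustration: the record shape is invented, and last-writer-wins by timestamp is just one possible policy (a real system might flag conflicts for a human instead):

```python
# Toy sketch of "push everything to the backend and let it sort out the mess":
# clients append raw, timestamped observations; the server replays them in
# timestamp order and keeps the latest value per field.

def reconcile(observations):
    """observations: iterable of (timestamp, record_id, field, value) tuples."""
    state = {}
    for ts, record_id, field, value in sorted(observations):
        state.setdefault(record_id, {})[field] = value
    return state

# Two field techs report on the same repair while offline; the push order
# doesn't matter because the server sorts by timestamp before applying:
pushed = [
    (1002, "car-17", "status", "repaired"),
    (1001, "car-17", "status", "in_progress"),
    (1003, "car-17", "parts_used", "axle-bearing"),
]
print(reconcile(pushed))
# {'car-17': {'status': 'repaired', 'parts_used': 'axle-bearing'}}
```

The point of the sketch is that all conflict-resolution logic lives in one place on the server, rather than being duplicated in every client's "sync" code.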


So you’re saying “it’s not a good idea” to have software that actually fulfills the need? The mobile app that doesn’t work in the majority of use cases to solve the problem that it was meant to solve is useless.

And yes “the problem” we solved, a mobile app that could route field technicians dynamically at a level of quality we needed we did solve.

> The word "enterprise" is an euphemism for low cost and potentially low quality. It's a buzzword. I wouldn't take an Enterprise mobile developer over a B2C mobile developer just because of the word enterprise.

Again, this comes from someone who thinks they have experience versus someone who does. Did you read the link I posted about the industry-required rules for repairing railway cars? That isn’t even the entire regulation. If a typical B2C app doesn’t work, oh well. In the railroad industry, if you don’t submit your railcar repair just right, it gets rejected either by the interchange or the customer, and you can only submit your invoices and rebuttals once per month.

> The correct way to handle such a request is not to "sync" but to collect data push it to the backend and let the backend sort out the mess. Not "sync" by whatever stretch of the imagination no matter what cottage industry or cult beliefs have been born of it.

How well does “one-way server syncing” work when you’re a field tech doing routes and the customer calls customer service and cancels one of your routes while you’re in the truck? How well does it work when your back-end system needs to calculate where each truck is on the road and re-assign routes on the fly? How well does it work when one tech needs a part and they need to know where the parts are, based on which other techs have already been to the warehouse and now have the part? But wait, they went to the customer’s house and found that they don’t need the part at all, and it’s available on a truck a mile away? All of this involves dynamic two-way syncing.

Again, the difference between someone who has real world experience and someone who thinks that because their Twitter app doesn’t need to work in the subway nothing does.


I value experience a lot, over almost anything.

What you mention is very dangerous to the data. Take the medical app example. Suppose there's an app to update a chart that doctors carry around, and there are five doctors and/or nurses working on the patient. Whose prescription or orders do you take? It gets worse: there might be dependencies between the orders; orders might countermand other orders, or be in response to others that may or may not exist. It is not a problem that any algorithm or programming can solve, because the whole point is to use the experience and skill of the doctors, which is being blindly ignored in favor of some process that the doctors submitting the information may or may not be aware of. Similar problems would appear in any of the examples you mentioned if you dug hard enough.

As for submission, you can simply ban it unless there's an active Internet connection. The 737 Max is also "real world experience": Boeing panicked at Airbus, and instead of going through a ten-year, ten-billion-dollar design process for a new plane, they surrendered to market realities at the cost of lives. The fact that "enterprise" has onerous business or even legal requirements demanding technical sacrifice doesn't make it any less technically wrong. If asked to build a sync on the client side, I would make it as simple and straightforward as possible and assume nothing.

I suppose as long as it doesn't cost lives or ruin people, I don't particularly care if you value handling data on the client this way as a qualification for "enterprise" mobile developer. As long as it's "good enough" to meet the requirement, great. But that doesn't mean I like it, and it doesn't mean one should ignore the technical flaws. Unless it's ACID, you don't guarantee anything; it's just a feel-good (and possibly doable in a much simpler way). For all the scenarios you mentioned, I can mention another half dozen, or even a very simple one: two people with the same seniority making exactly the same change to the same record. Then your system tosses one or the other, or even merges them; in other words, you dive into expert systems, not anything to do with "syncing".

Experience is important but there's a theoretical foundation to everything and it's wrong to expect an offline node in a distributed network to act as a source of truth for any period of time. Sorry.


> And yes “the problem” we solved, a mobile app that could route field technicians dynamically at a level of quality we needed we did solve.

Just look for people who are good at handling split data updates and ownership; there are a lot of people working on those kinds of issues on the backend. I really doubt you'll find more mobile devs with those skills than backend devs.


What makes you think that the two are distinct?


What two are distinct? Yes you sometimes have conflicting data on two machines when you work on mobile apps, but that is the core of a lot of distributed computing problems. So if you only look for those skills in mobile devs you will miss out on a lot of people with relevant experience.


> Most apps would do better to 100% avoid the problem and just say "No Internet Connection" if there's no WiFi or 3g.

Lol we are in such a bubble.


If the bubble is large enough, it's the others who are in a bubble, and we're just "in the world"...


I recently came across a company similar to Udemy offering cheap “on sale” video courses like Udemy does.

This company has an app and in it you can download the videos and watch them offline, just like Udemy.

That’s great, I have a limited amount of data on my plan.

And better yet, I can watch these videos when I am completely offline, for example on the 30 hour connecting series of flights I went on recently. Except... whereas the Udemy app actually works completely offline, this other app needs internet access in order for the “my courses” tab to work.

You still have access to the videos through the “downloads” tab. But there they are not organized neatly. So I decided to do other things than to look at any of the videos.

Also, a lot of apps are bad at properly syncing data. For example I think neither Udemy nor this other one properly syncs the course progress data. Even when they do have a connection.


> But that's not the case, the point is that most other criteria are more important that the specific tech experience when dealing with competent people

Unfortunately, the more important criteria are harder to assess, and hiring in the real world very heavily weights the easy-to-assess bits, whether or not they are actually important.


It’s like saying “well, we’re an agile company, so we prefer to hire people who know standups and sprint planning”. In principle they will ramp up faster, sure. But it’s just not a useful thing to select on, and if you do select on it you’ll end up with a weird monoculture.


Knowing an ecosystem well on a senior level won’t happen in a few weeks.

But to your example, I’ve seen developers who couldn’t adjust to a rapid release cycle because they were used to big design up front. Developing software when you don’t have all of the requirements for the next year is a completely different mindset.

Even in the comments on this post, I see people who aren’t willing to actually talk to the customer to decide what they should work on.


Really senior level is about talking to people; “Knowing the ecosystem well” (if by that you mean a specific tech stack) is almost irrelevant. Senior is more about solving the right problem than solving the problem “right”


And then you end up reinventing the wheel, setting up servers, and spending more on maintenance and development because, even though your entire company is already on AWS, you didn’t know you could just click a button and make that entire part of the product someone else’s problem.


Come on. Even high school kids know that AWS exists; that doesn't make you "senior". How did you go from "knowing an ecosystem well at a senior level won’t happen in a few weeks" to an example of someone reinventing the wheel? That's absolutely an example of non-senior behavior. But I was talking about, e.g., someone who doesn't know Spring or Angular; it seemed to me you were saying "you can't hire a non-Spring expert as a senior if we're using Spring and he doesn't know it".


There is a difference between “knowing it exists” and knowing what it can do. No, despite what a bunch of old-school netops guys think after watching an ACloudGuru video, now able to click around and duplicate their on-prem infrastructure on AWS (at greater cost), it’s much more than just hosting VMs.

And you kind of proved my point....

If you don’t know the wheel exists, you don’t know you’re reinventing it.


Look, I don't even know what your point is. I thought you were claiming "you can't/shouldn't hire someone who doesn't know well the technologies that you currently use". If it's not that, then don't mind me; I was debating something else and misunderstood your point.


That’s exactly what I’m saying. In my example, the old-school netops guys who didn’t know anything about the “tech stack,” in this case AWS, ended up designing horrible, inefficient solutions because they were learning on the job. They could have been “senior network engineers” because they spent years managing a colo, but they definitely weren’t as efficient as someone who had built real-world solutions on AWS.


The linked article actually mentions that ("there are people with title 'senior' that can't do fizzbuzz"). So the fact that some "senior network engineers" did something stupid doesn't prove your point or invalidate the article in any way.

There's a big difference between not having experience with the <framework_du_jour>, and not knowing foundational technologies. E.g. there's a good chance Jeff Dean doesn't have much experience with most of AWS technologies, but there's no reason to believe it would take him more than a few weeks to get up-to-speed with them if he'd really need to. Not on the "expert" level, mind you - but enough to not make big mistakes.


I agree the problems you're describing are real and important to select for, but just requiring agile experience won't help you there. Lots of people practice a weird form of agile development, where they go through all the motions of fast iteration, but almost all tasks consist of non-negotiable internal dependencies and not value delivered to a customer.

I'd argue a similar thing is true for tech stacks. If there's some correlation between knowledge of all the fiddly bits of C++ and ability to write clean, performant systems-level code, I've yet to see it.


> almost all tasks consist of non-negotiable internal dependencies and not value delivered to a customer

It’s especially nice when you realize that as soon as you’ve completed all the non-negotiable features, a bunch of other things magically become non-negotiable.


I've hired for the stack specifically and it really narrowed our applicant pool. Lately I've relaxed this filter. We're a Rails app, but what if there was a really great engineer that uses Python or Node?

They might not know the right libraries to use, but would probably know there are libraries, have a good concept of application design in general, and know how to ask the right questions. And we're hoping to hire someone for years.


I have made the jump from one language to another several times on the job. It’s considered lower risk to use someone on the team than to hire someone new. It’s also going to take a while to hire someone and get them up to speed. After a year you might not be as experienced as someone using the same stack for 5+ years, but the difference is not that extreme.

Further stacks tend to share quite a bit. Knowing SQL, HTML, JavaScript, CSS, jQuery etc takes a long time and has little to do with Java vs .Net.


Languages are easy. But do you think you could jump from systems programming in C to mobile programming on Android “easily”.

I’ve seen people jump from Java to C#. You can see the difference in their coding style, them not taking advantage of the features of the language, reinventing the wheel because they didn’t know there were popular packages that would do it for them, creating horrible inefficient queries and practices using EF etc.


Sure, jumps like that are not as bad as you might think. Style wise the code is often poor, but users don’t really care about the code.

I started with Object Pascal on Mac OS 8/9 which forced you to do a lot of low level tasks like deal with Handles and suffer through a cooperative multithreaded OS. Rewriting the network layer from AppleTalk to TCP/IP was closer to systems programming in C than you might think.

I have also been paid to write C programs, Windows application pre .Net in Visual C++ and post in C# and VB, written Java and C# websites, and most recently Angular SPA’s. Add to that a few random oddities like XSLT.

PS: I even turned down working on Android, but could have made that jump.


The jump is huge. Web development has a completely different model than mobile development. In web development, losing a connection is a failure case. Developing for mobile it is expected. Especially for enterprise apps. Logic is usually distributed, syncing is required, knowing which is the source of truth etc.

This is really an example of hiring someone who has experience and hiring someone who thinks “it’s easy and just like what I did before”.

Then I could get into all of the old netops guys who think AWS is just like what they did on prem and end up costing the company more...


They are different, but I don’t think networking is one of the major differences.

Website backends often have significant dependence on other services that can be down. Further standalone apps can have zero network dependence or a lot. So having written both, and similar code for each, I don’t feel the networking side is all that different.

If anything, networking is probably the largest similarity between them.

PS: I did some iOS development in my spare time even worked with some old J2ME, so I assume Android is fairly similar.


>In web development, losing a connection is a failure case.

That's changing with the advent of progressive web apps. It's possible to write web apps now that are robust in the face of network problems, e.g., the web app renders and is functional even without a connection.


I’m not questioning whether you can do it with web technology. But you still have to design it so that both your business logic and your data can live on the device, and so it can queue submitted data until a connection is available. And if multiple people are updating the same record, you have to know how to merge the information, or which one takes precedence. I’ve even seen cases where the logic depended not just on the record, but on knowing which fields were updated by the device while it was disconnected, and the prior value of each field, to make sure the user had seen the most recent value before they updated it. If not, someone on the back end had to do a “manual merge conflict resolution” by calling both people.
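
That field-level check, accepting an offline edit only if the user had actually seen the latest value of that field, can be sketched roughly like this. The record shape and names are invented for illustration:

```python
# Sketch of field-level conflict detection for offline edits: the device
# records, per field, the value it last showed the user ("base") alongside
# the new value. On sync, the server applies an edit only if the base still
# matches its current value; otherwise it queues the field for manual merge.

def apply_offline_edits(server_record, edits):
    """edits: {field: (base_value_user_saw, new_value)}; returns conflicts."""
    conflicts = {}
    for field, (base, new) in edits.items():
        if server_record.get(field) == base:
            server_record[field] = new   # user saw the latest value: apply
        else:
            # Someone else changed this field in the meantime; record both
            # the current server value and the attempted edit for a human.
            conflicts[field] = (server_record.get(field), new)
    return conflicts

record = {"route": "A-7", "priority": "normal"}
edits = {
    "route": ("A-7", "B-2"),        # base matches -> applied
    "priority": ("low", "urgent"),  # base is stale -> conflict
}
print(apply_offline_edits(record, edits))  # {'priority': ('normal', 'urgent')}
print(record)                              # {'route': 'B-2', 'priority': 'normal'}
```

The conflicts dict is exactly the input that the "call both people" step needs: who saw what, and what each side tried to write.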


You frequently encounter similar issues between multiple web servers. Either due to scaling to multiple data centers or when multiple independent services all update the same information. This can get really complex when clients start talking to multiple different services.


> You can see the difference in their coding style, them not taking advantage of the features of the language, reinventing the wheel because they didn’t know there were popular packages that would do it for them, creating horrible inefficient queries and practices using EF etc.

That's why code reviews are a mentoring opportunity. Help your colleague level up.

(Also there should be team dialogue about how to solve some problems more efficiently, I mean, if I was unsure of something I would go and ask colleagues for some guidance, or you would go and do some research on your own)


Now not only do you have a less effective developer who takes time to ramp up, you’re also taking time from senior developers.

I’m not saying this is always the right course and of course this doesn’t scale to larger companies, but my first project at my current company a week in was to develop a feature from scratch which ended up involving coding an API for the front end developers, writing an ETL process that used both Redshift (AWS OLAP database), and MySQL designing the schemas, configuring the AWS resources with CloudFormation, setting up queues, messages, lambdas, dealing with the vendor we were integrating with and learning the business vertical. How much longer would it have taken if instead of already knowing their stack - C# WebAPI, MySQL, and AWS - I came from a Java/Mongo/GCP background?

They needed someone useful now to get a feature out that they wanted to charge customers for.


>Why would a company choose someone who doesn’t know the stack they are using over someone who does?

Because a stack defined programmer is a limited programmer...


As a person who was responsible for hiring in a previous role and I still sit on a lot of technical interviews, I didn’t care if you were “limited” to the stack we were using as long as you could get the job done without six months to a year of training.

If you started your career learning Java 20 years ago and you kept up with the latest trends of Java, you could still find a job now. The same is true for C# around 15 years ago.


Agreed. The job doesn’t exist to train you.


It’s more nuanced than that. Most job reqs have “must haves” and “nice to haves”.

My current job had one must have - C#/MVC/Web API and some Javascript experience - their current tech stack. Nice to haves were React, the fiddly bits of AWS, and Python. I was immediately useful because I was strong in the must haves, had a little AWS experience, and knew nothing about Python or React.

I’ve since leveled up on the nice to haves except for React, I refuse to jump on the $cool_kids bandwagon of front end development. Especially seeing that everything else I listed, pays more, and doesn’t change as often.


I find React's relatively simple architecture makes it easy to get into for JS devs; I've always found it extremely productive and maintainable. My sickness is to the point that it’s hard for me to imagine an immediate future where it hasn’t entirely consumed the front-end web environment like jQuery did 15 or so years ago. Maybe you're onto something, but I'm loving the new digs so far.


I don’t necessarily have anything against React as a technology, but as the old saying goes, a man with one ass can’t dance at two weddings. Almost everyone else at the company are better at the front end than I am but most haven’t taken the time to learn the backend. Why would I learn something, that I still wouldn’t be as good at in a year and couldn’t negotiate a higher pay based on bringing an above average amount of value to the company than other employees? I also wouldn’t be as competitive in the overall job market.

This isn’t directed toward you, just a general comment.


I run a small company and I'm looking for web developers. We primarily use PHP and are getting into Golang, but I really don't care about the language, since what matters is concepts and experience with HTML/CSS/JS, HTTP request/response, client-server architecture, databases/CRUD, REST APIs, etc. So I'm not one of those employers :)


Same here, hire the person, treat them well, and the language doesn’t matter as long as they have experience in similar languages.


This is entirely company dependent. My team is primarily Ruby and we hire folks with no ruby experience all the time.


I need a job and can write code in nearly any language. Hire me?


Yeah downvote a guy looking for work. Thanks.


Don't take it personally. It's not you that's being downvoted; it's your comment.


Because Ruby programmers are rare nowadays and your company still needs people to build software. At least, that's my assumption.


They're not rare. I know a ton and many still write Ruby. Willingness to work on brownfield Ruby projects is rare; as with other similar systems like Django or roll-your-own Express stuff, such systems strongly tend towards chaos and a lack of maintainability after a certain point if they haven’t already had strong leadership end-to-end. In my neck of the woods, the folks who you want doing that work don't want to do that work anymore. Unless you pay a lot.


> Willingness to work on brownfield Ruby projects is rare

After working in a dev agency that took on outside Rails projects several times while I was there, my conclusion is that Rails (not the only way to write even Web-focused Ruby, and not the only one I've used, but the only one that'll score you any points for hiring) is a pretty bad framework for any project that will have multiple teams on it over its lifetime, absent heroic technical direction & testing efforts that I've never actually seen in the wild—probably because young or outsourced teams picking Rails are doing it to move fast, over all other concerns.

You can take on or resurrect an average outside or old Rails project. It's just slow and expensive.

Too much magic, too much room for doing things some way that the next person will never have seen before, too little grep-ability, too hard for your tools to help you.


Agreed in full. I don't love Rails, either; I always used https://github.com/modern-project/modern-ruby (which I almost definitionally like, 'cause I wrote it). Rails is a framework that depends on developer continuity. I view Ruby as a tool for writing very specific things for experts, more than anything else. Web dev in general seems to be against that principle; Ruby could probably benefit a lot from more effort spent on ways to make it hard to do the Wrong Thing at this point.


Honestly, I find Rails projects to be unupgradeable.

We found it to be easier to just rewrite the project when moving between major versions (Rails 4 to Rails 5 for example). Shoehorning old functionality into a newer Rails just made this inconsistent.


Good call, coming on to a Rails project that's been upgraded—or worse upgraded twice—is... usually very unpleasant.


> They're not rare

Depends on how you define rare. Exceedingly uncommon at the very least. *stares at a pile of 150 resumes* Ruby is a language languishing from being embedded in a few niches, having poor performance, and otherwise being unremarkable, imo. Ruby was made with the idea that picking it up would be easy (which it is), leading to less incentive to learn it.

Putting Ruby in the job description gives you applicants with Ruby experience, which is why the blog post was (and is) impractical. Signaling works.


Calling Ruby "unremarkable" is a significant understatement, I think. For my money nothing in common usage approaches the same kinds of metaprogramming and solving-a-category-of-problems-at-once nature of a really comfortable and dynamic Lisp as does Ruby. (Clojure isn't in common use.)

I get that that's not what you want, you want somebody to close tickets, but that's not Ruby's fault. And most of the people I know still working in Ruby are at a level where that kind of grunt work just isn't worth their time; most others have, of course, moved on.


Point taken, but the language is not the thing I would hire for anyway. It's the environment and skill-set beyond the language. A mobile developer even requires a completely different set of instincts to be successful than an engineer who's got work experience building scalable backend systems. C++ running across 1000 inter-connected systems is not the same C++ compiled down to run on a gaming console with limited CPU cycles to spare. C++ shared across iOS and Android is also different because Android requires that you bridge the library across Java and iOS may require an Obj-C layer if you're in Swift.

If you spent your whole career writing code in Python and you wanted your next job on iOS, my concern wouldn't be, can you learn Swift in a short time. My concern would start at something as simple as how you would cache data when the network connection times out. Or can you manage your data models in a way that can maintain 60fps scrolling, something iOS engineers pride themselves in achieving. Of course all of this can be learned, but my point is that it's more than the language.


It's also not true with respect to people noticing what you've been doing with your career. A programming language isn't just a language, it's also a culture. I've been writing Java professionally for almost 20 years. I can definitely tell when a person isn't a "java programmer." I am struggling to describe exactly what I mean, but the best metaphor I can come up with is it's kind of like hearing someone speak with a heavy accent. It can all be technically correct, the grammar is perfect, but you can still tell it's not native.

Sometimes this is good. It can bring in new/interesting/helpful ways of thinking about problems to come up with better solutions. Sometimes it's bad because anything that's a little jarring or unusual impedes support and maintenance by "native programmers" in the same way a heavy accent can impede communication.
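The "accent" metaphor can be made concrete with a hypothetical Ruby snippet (invented for illustration, not anyone's real code): both versions below are correct and produce the same result, but the first reads like a Java programmer writing Ruby, while the second reads like a native.

```ruby
# "Accented" Ruby: index-based loop, explicit accumulator, explicit return.
def total_accented(prices)
  total = 0
  for i in 0..(prices.length - 1)
    total = total + prices[i]
  end
  return total
end

# Idiomatic Ruby: the same behavior, expressed through Enumerable.
def total_native(prices)
  prices.sum
end
```

Nothing in the first version is wrong, which is exactly the point: the grammar is perfect, but a maintainer can still tell it isn't native.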


I would argue that it is the first thing a manager looking for a cost-center, cog-type engineer looks at. They are not looking for an engineer who actually understands the business; they are looking for a “plumbing engineer” who just keeps their head down and grinds out code. There is little upside in this kind of position, and this hiring-manager mentality should filter those jobs out for you.

I do agree that most startups will probably restrict themselves to hiring in the same stack, as their tolerance for learning curves will be extremely short.

However, I feel that many employers are used to engineers who don’t “get business” and so they just want somebody to plunk down code. This leads them to “what kind of dumb cog/stack are you?” questions as they are already assuming they won’t be able to hire a truly “full stack” engineer who understands the revenues and costs associated with their work and how they impact corporate strategy etc.


The point is to acknowledge that the way most technical people are hired isn't the way that maximizes outcomes for technical people. Patio11's point is that the way to maximize outcomes as a technical employee is to _break out_ of this paradigm. It might reduce some of your top of funnel (random recruiter outreach) options, but it will increase the value of the opportunities that do arise.


> In the real world, picking up a new language takes a few weeks of effort and after 6 to 12 months nobody will ever notice you haven’t been doing that one for your entire career.

I think this is only true if what you're working on is simple, like CRUD apps.

It takes longer to become intimately familiar with a language's concurrency model and the pros and cons of its major libraries and frameworks, and to gain the ability to make such decisions quickly and without learning hurdles.


In my experience folks can be reasonably productive essentially immediately in a language they've never touched if they're diving into an existing code base. Give them a machine that's already set up like the other developers', point them at a couple of easy bugs or features, do some code reviews, and yes, they'll be slightly slower through this process than someone who's already familiar with the stack, but not by much.


I'm not in the usual hiring circuit, but I went to one or two interviews to get a feel for that world. I always had the attitude of being the guy who will learn whatever is needed to successfully finish the task at hand, and I have work to show for it. It's amazing how little that was worth to the people who interviewed me, compared with how many years of React experience I had.


Sort of the same kind of experience. I presented myself as a problem solver, with algorithmic skills and a deep understanding of programming-language principles, as this is my research area...

"Sure, but can you do python"

Well, I can definitely learn it. But can I do it right there, right now? I've never really worked in Python, so no.

And so... no.


It's not even just startups. Established companies work the same way. Getting pigeonholed is a very real and common phenomenon.

It's also kind of silly in a lot of cases too. It always feels like "Oh I see you do Angular... but could you _really_ be capable of developing in React? I don't know..."

Ridiculous. Yet this attitude seems to dominate.


> The stack they are using seems to be the first thing hiring managers look at.

I try to avoid companies that hire in this manner; it shows a lack of vision and planning. It's unlikely that a given developer will be working on the same stack in five years; the stack will have evolved or the developer will have moved on.


That’s kind of why you do want people who know the stack. If you, as the manager, expect the developer to move on within three years, why waste a year getting them up to speed without having anything to show for it?


First, developers rarely become tech fluent in 6-12 months.

Second, startups don't have 6-12 months.


Exactly what I thought when I read this paragraph. But still some CTOs are willing to let the team take time to learn a new stack and that in the end helps the organization (was part of one).



