Hacker News | sp1982's comments

My attempt at a super simple explanation of the intuition behind LLM attention, written with the help of ChatGPT/Gemini. As a sanity check, I asked Claude to verify it.
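For anyone who wants the intuition in code form, here is a minimal NumPy sketch of scaled dot-product attention (the shapes and variable names are my own, for illustration): each query is scored against every key, the scores are softmaxed into weights, and the output is a weighted mix of the values.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # how similar each query is to each key
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each row now sums to 1
    return weights @ V                             # weighted average of the value vectors

# Toy example: 3 tokens, embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed value vector per token
```

Because the softmax rows sum to 1, each output row is a convex combination of the value rows, which is the "each token looks at the others and takes a weighted average" intuition.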

It just didn't make the cutoff for the top 10 categories I am tracking; I will update the report. iOS/Android/C# are all in the same range, around 2%.

I got ~5K if I include the Bay Area, though my data only covers jobs active in the past 7 days, and I'm quite sure I have room to improve crawl coverage. My hope is that this report is a representative sample of trends.

Quickly checking the DB, the SF Bay Area has quite a bit more than NYC. There are clearly a lot of .NET jobs too, but they didn't make the cutoff. I will see if I can include metro areas when I get a chance.

Unfortunately I don't have it, because I only started working on this last year, but I am curious to see how AI skills surface as the year progresses.

Data is ex-China. Good luck to everyone looking for a new role in the new year.

What does “ex-china” mean, excluding China?

Edit: I did a quick find-in-page on mobile for "china" and it appears 0 times. Notably, though, China is missing from the geographic charts.


Yes, excluding China. I don't currently have many China-based companies in my crawl data.

That makes sense; they have their own ecosystem for posting jobs there.

Got it, thanks. Good write up and presentation.

Do they even hire engineers from abroad, or at least from the West?

Why not? TikTok certainly needed engineers.

I've been hacking on jobswithgpt.com; you don't need to sign up either.


If you are using Cloudflare, add a rule that issues a managed JS challenge. Your backend shouldn't see the requests unless they pass the challenge.
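For reference, a custom WAF rule along these lines (in the current dashboard this lives roughly under Security → WAF → Custom rules; the path below is a placeholder you'd adjust to your own endpoints):

```
# Hypothetical rule expression (Cloudflare Rules language); adjust the path filter.
(http.request.uri.path contains "/api/" and not cf.client.bot)
# Action: Managed Challenge
```

The `cf.client.bot` clause exempts verified crawlers so you don't challenge legitimate bots; everything else matching the path has to pass the JS challenge before Cloudflare forwards the request.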


This makes sense. I recently ran an experiment testing GPT-5 for hallucinations on cricket data, where there is a lot of statistical pressure. It is far better to say "I don't know" than to give a wrong answer, and most current benchmarks don't test for that. https://kaamvaam.com/machine-learning-ai/llm-eval-hallucinat...


I did a similar experiment and found that GPT-5 hallucinates up to 20% of the time in domains like cricket stats, where there is too much information to memorize. Interestingly, though, the mini version refuses to answer most of the time, which is a better approach IMHO. https://kaamvaam.com/machine-learning-ai/llm-eval-hallucinat...
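The scoring idea behind "refusing is better than guessing" can be sketched like this (a toy grader, not the one from the linked post; the abstention phrases and penalty value are illustrative): reward correct answers, give zero for an explicit "I don't know", and penalize confident wrong answers, so an abstaining model outscores a guessing one.

```python
def score_answer(answer: str, gold: str, wrong_penalty: float = 1.0) -> float:
    """+1 for a correct answer, 0 for an explicit abstention, -penalty for a wrong answer."""
    normalized = answer.strip().lower()
    if normalized in {"idk", "i don't know", "not sure"}:
        return 0.0              # abstaining costs nothing
    if normalized == gold.strip().lower():
        return 1.0              # correct answer
    return -wrong_penalty       # confident-but-wrong is penalized

# Two models on the same two questions: one guesses, one abstains when unsure.
guesser = [score_answer(a, g) for a, g in [("Bradman", "bradman"), ("Tendulkar", "lara")]]
abstainer = [score_answer(a, g) for a, g in [("Bradman", "bradman"), ("idk", "lara")]]
print(sum(guesser), sum(abstainer))  # 0.0 1.0
```

Under plain accuracy both models tie at 50%, but with a wrong-answer penalty the abstainer wins, which is exactly the behavior most benchmarks fail to reward.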


