
I wish they had a range that was affordable for a hobbyist. You can buy a cheap Nvidia card to "get your feet wet".

I would like to play with these things.



A recent multicore i7 (e.g. a 4-core Haswell, with 8-way SIMD for single precision = 32 lanes) is enough to prototype OpenCL code, which you can then run on larger CPUs or GPUs.
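As a rough sketch of the lane math above (assuming AVX2's 256-bit registers on Haswell), here is the arithmetic plus a SAXPY kernel, a common first OpenCL prototype, emulated in pure Python for illustration:

```python
# On a 4-core Haswell, an OpenCL runtime can map work-items across
# 4 cores x 8 single-precision AVX2 lanes.
CORES = 4
LANES = 256 // 32          # one 256-bit register holds 8 x 32-bit floats
print(CORES * LANES)       # 32 -- the figure cited above

def saxpy(a, x, y):
    # One "work-item" per element, as the OpenCL kernel would express it:
    # y[i] = a * x[i] + y[i]
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
```

The same kernel body, ported to OpenCL C, would then run unchanged on a GPU.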


Intel was selling the Xeon Phi 31S1P for under $200 for a limited time (it's back to $500 now). They will likely have cheap versions and promotions this time around too.


Keep an eye on Colfax - they've had some nice deals in the past:

http://www.colfax-intl.com/nd/


Why not use Google's platform, which runs on their new custom chips (Tensor Processing Unit)?


Because it's not made to accelerate training, just inference. The TPU is an 8-bit fixed-point processor that is less power hungry than GPUs, so it won't help research, only deployment for large projects running in the cloud.
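A minimal sketch of why 8-bit fixed point suits inference but not training (this is an illustrative quantization scheme, not the TPU's actual one): weights are scaled into the signed 8-bit range, which is accurate enough for a forward pass but far too coarse for the small gradient updates training needs.

```python
# Quantize float weights to signed 8-bit integers with a shared scale,
# in the style of fixed-point inference accelerators.
def quantize(weights, bits=8):
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

q, s = quantize([0.5, -1.0, 0.25])
print(q)                  # integers in [-127, 127]
print(dequantize(q, s))   # close to the original weights, but any update
                          # smaller than the scale step is lost entirely
```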


Why use it, if it locks you in?


If you're a hobbyist, why would you care about lock in?


I care about being able to carry on doing my hobby stuff 5 or 10 years from now. With NVidia I can at least be confident that as long as my graphics card keeps working (which feels like something under my control, unlike Google shutting down their products) I can keep running my code on it.


Because you don't have control over what Google does. They may kill the TPU altogether, leaving your work irrelevant.


You don't have control over what Nvidia, Intel or the rest of them do either.

If you want to get your 'feet wet', then why bother?


Because they have client facing products and backward compatibility is important to clients.


Hobbies come, hobbies go.





