Except that, up to now at least, CUDA IMO remains the best abstraction for programming multi-core hardware: it folds multiple threads, SIMD width, and multiple cores into the language definition itself. AMBER (http://www.ambermd.org) literally "recompiled and ran" on every succeeding GPU generation since the GTX 280 in 2009, and 3-5 days of refactoring afterwards unlocked 80% of the attainable performance gains on each new GPU. DSSTNE (https://github.com/amznlabs/amazon-dsstne) just ran as well, but it only targets Kepler and up because the code relies heavily on the __shfl instruction.
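For anyone who hasn't run into __shfl: it's the Kepler-and-up warp shuffle intrinsic that lets the 32 threads of a warp exchange registers directly, without a round trip through shared memory. A minimal sketch of the kind of warp-level sum reduction it enables (illustrative only, not DSSTNE's actual code):

    // Warp-level sum reduction using __shfl_down (Kepler+, sm_30 and up).
    // On CUDA 9+ you'd use __shfl_down_sync(0xffffffff, v, offset) instead.
    #include <cstdio>

    __global__ void warpSum(const float* in, float* out)
    {
        float v = in[threadIdx.x];
        // Tree reduction across the 32 lanes of the warp.
        for (int offset = 16; offset > 0; offset >>= 1)
            v += __shfl_down(v, offset);
        if (threadIdx.x == 0)
            *out = v;   // lane 0 ends up holding the warp's sum
    }

    int main()
    {
        float h_in[32], h_out, *d_in, *d_out;
        for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;
        cudaMalloc((void**)&d_in, 32 * sizeof(float));
        cudaMalloc((void**)&d_out, sizeof(float));
        cudaMemcpy(d_in, h_in, 32 * sizeof(float), cudaMemcpyHostToDevice);
        warpSum<<<1, 32>>>(d_in, d_out);
        cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("warp sum = %f\n", h_out);   // expect 32.0
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }

Shared-memory reductions work on pre-Kepler parts too, which is exactly why leaning on __shfl is what pins DSSTNE to Kepler and newer.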
So I honestly don't get the Google clang CUDA compiler right now. It's really really cool work, but I don't get why they didn't just lobby NVDA heavily to improve nvcc. With the number of GPUs they buy, I suspect they could have anything they want from the CUDA software teams.
However, if it could compile CUDA for other architectures, sign me up; you'd be my heroes.
Because I'd love to see CUDA on Xeon Phi and on AMD GPUs (I know, they're trying). And if Intel poured the same passion and budget into building that as they're pouring into fake^H^H^H^Hdeceptive benchmark data and magical powerpoint processors we won't see for at least a year or two (and which IMO will probably disappoint, just like the first two did), they'd be quite the competitor to NVIDIA, no?
That said, the Intel marketing machine seems to have succeeded in punching NVDA's stock in the nose over the past few days and in grabbing coverage in Forbes (http://www.forbes.com/sites/aarontilley/2016/08/17/intel-tak...), so maybe they know a thing or two I don't.