Fuck... that's a hard question. I'm almost always trying to push at least one of three boundaries: voxel engine techniques, engine performance, and my mechanical programming skill. Pushing those boundaries, often in tandem or tridem, is always hard. Different jobs are hard for different reasons, but overall it's been a difficult project for most of its existence. That said, doing hard projects is what I enjoy, and it's a great feeling when you sit down to, for example, optimize something, and end up making it 20x faster!
> What motivated you to attempt creating this?
It started out as a learning exercise; a safe space where I could just 'fuck around and find out'. When I started, I never expected to spend nearly as much time on it as I have.
Sorry if I'm misunderstanding, but isn't FEM used in physics engines because it is a good approximation of the underlying physics? For example, I believe the Drake physics engine uses FEM to model deformable materials in vehicle-crash work at Toyota.
FEM is just a numerical technique for solving certain kinds of differential equations. It doesn't automatically make your results accurate (or inaccurate), any more than any other stable solver does.
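For what it's worth, here's a minimal sketch of what "just a numerical technique" means, using the textbook 1D Poisson problem (my example, nothing to do with Drake specifically):

```latex
% Model problem: -u''(x) = f(x) on (0,1), with u(0) = u(1) = 0.
% Multiply by a test function v and integrate by parts to get the weak form:
\[
  \int_0^1 u'(x)\, v'(x)\, dx \;=\; \int_0^1 f(x)\, v(x)\, dx
  \quad \text{for all test functions } v .
\]
% FEM expands u in local basis functions (e.g. hat functions \phi_j),
% u_h = \sum_j u_j \phi_j, which turns the PDE into the linear system
% K u = f with
\[
  K_{ij} = \int_0^1 \phi_i'(x)\, \phi_j'(x)\, dx ,
  \qquad
  f_i = \int_0^1 f(x)\, \phi_i(x)\, dx .
\]
% Accuracy comes from the model, the mesh, and the basis you choose;
% the method itself only gives you a stable way to discretize the equations.
```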
> If you look at existing GPU applications, their software implementations aren't truly GPU-native. Instead, they are architected as traditional CPU software with a GPU add-on. For example, pytorch uses the CPU by default and GPU acceleration is opt-in. Even after opting in, the CPU is in control and orchestrates work on the GPU. Furthermore, if you look at the software kernels that run on the GPU they are simplistic with low cyclomatic complexity. This is not unique to pytorch. Most software is CPU-only, a small subset is GPU-aware, an even smaller subset is GPU-only, and no software is GPU-native.
> We are building software that is GPU-native. We intend to put the GPU in control. This does not happen today due to the difficulty of programming GPUs, the immaturity of GPU software and abstractions, and the relatively few developers targeting GPUs.
Really feels like fad engineering. The CPU works better as a control structure, and GPUs are not designed for orchestration the way CPUs are. What really worries me is their mention of GPU abstractions, which is completely the wrong way to think about hardware designed for HPC. Their point about PyTorch and kernels having low cyclomatic complexity is confusing to me. GPUs aren't optimized for control flow. SIMD/SIMT execution values throughput, and the hardware design forgoes things like branch prediction. Having many independent paths a GPU kernel could take would make it perform much worse. You could very well end up with kernels that are slower than their optimized CPU counterparts.
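To make the divergence point concrete, here's a minimal sketch (hypothetical kernels, nothing from their codebase): in SIMT execution the 32 threads of a warp share one instruction stream, so a data-dependent branch forces the warp to run both paths back to back.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Divergent version: adjacent threads take different branches, so within
// every 32-thread warp the hardware serializes the two paths.
__global__ void divergent_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)
        out[i] = sqrtf(in[i]);   // even lanes take this path...
    else
        out[i] = in[i] * in[i];  // ...odd lanes take this one
}

// Uniform version: every thread in a warp follows the same path,
// so the warp never serializes on a branch.
__global__ void uniform_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    out[i] = sqrtf(in[i]);
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f + i;

    const int block = 256, grid = (n + block - 1) / block;
    divergent_kernel<<<grid, block>>>(in, out, n);  // pays the divergence cost
    uniform_kernel<<<grid, block>>>(in, out, n);    // single path per warp
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

With many independent paths (i.e. high cyclomatic complexity), that serialization compounds, which is exactly why real kernels are kept branch-light.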
I'm sure the people behind this are talented and know what they're doing, but these statements don't make sense to me. GPU algorithms are harder to reason about and implement. You often need to do extra work just to realize the parallelism benefit. There aren't actually that many use cases where the GPU being the primary compute platform is the better choice. My cynical view is that people like the GPU because they compare unoptimized, slow CPU code with decent GPU/tensorized code. They never see how much a modern CPU can actually do, and how fast it can be.
Absence of evidence doesn't mean you can conclude that there was no hierarchy or social organization, though. For that you would need evidence that there wasn't.
There seems to be an issue with how archeology is taught in schools, where these conclusions are stated as facts, or as well-supported theories, when in reality they have very little evidence behind them.
Why are we assuming that the top of a hierarchy must dress better and eat better than everyone else? They could just be leaders who command everyone while still keeping equality when it comes to goods.
Also, hierarchy doesn't mean that there is a single individual at the top. There could be a series of caste hierarchies, where some groups of people were considered "better" than others.
We don't know whether everyone from the tribe was in the common grave, or only a select group of people.
You're building a narrative based on your knowledge of other civilizations' histories and your understanding of modern-day social structures, and assuming those traits must apply to this civilization. The truth is, we don't, and likely won't ever, know how their social hierarchy was built. Anything said is speculation. This isn't a case where you can be 80% confident something is true; it's more like 2% confident, based on the discovered evidence.
This is my main problem with archeology. They take a small amount of evidence and use it to build confident conclusions. There are so many other possible explanations for the findings, but those are ignored in favor of whatever theory appeals to them.
The above comment also illustrates this bias: it maps knowledge from history that came much later onto the Tepe sites. Stating that it "was likely only the craftsmen or shamans" is a prime example of that.
What clicked for me was drawing the chain rule as a graph. When I was in school I just applied the chain rule without thinking about it. I really didn't mean this to be some deep insight or anything. Just an anecdotal comment.
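For anyone curious what the picture looks like, here's roughly the graph view (my sketch, nothing deeper than the comment above):

```latex
% Nodes are intermediate values, edges carry local derivatives, and the
% derivative along a path is the product of its edge derivatives:
%   x --h'(x)--> v --g'(v)--> u --f'(u)--> y,  with v = h(x), u = g(v), y = f(u)
\[
  \frac{dy}{dx} = f'(u)\, g'(v)\, h'(x)
\]
% When a node fans out into several paths, sum the per-path products:
%   y = f(u_1, u_2) with u_1 = g_1(x) and u_2 = g_2(x) gives
\[
  \frac{dy}{dx} = \frac{\partial f}{\partial u_1}\, g_1'(x)
                + \frac{\partial f}{\partial u_2}\, g_2'(x)
\]
% "Multiply along edges, sum over paths" is the whole rule; it's also what
% backpropagation evaluates on a computational graph.
```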
I’m confused why confirming important predictions is considered less impactful than ML in physics. Isn’t experimental confirmation exactly what’s required for a Nobel Prize?
Experimental confirmation of X makes X great physics and worthy of a Nobel prize; it doesn't make the engineering setup needed for the confirmation prize-worthy.
The setup by itself can also be a general technique that is useful beyond confirming one thing (LIGO, for example). But then, ML itself is a more general technique that has enabled a lot more new physics than one new experiment.
I would couple the experiment and the theory together and treat them both as deserving of the prize, though I'm not sure how it works in practice. As for the general technique of ML: sure, it's important, but it seems to me that it's a tool that can be used in physics, and the specific implementation/use case is the actual thing that's noteworthy, not the general tool. I wouldn't consider a new mathematical theorem by itself to be physics and deserving of a physics prize; I view general ML the same way.
You should listen to the racist recording. It wasn't nebulous; it was clearly and explicitly racist. Most people think he went way too far. This isn't a case of subtle racism where people might be overreaching; what he said was awful.
Sorry you can't outgrow your childhood, but you should come to terms with the fact that the man you idolized was a shitty person.
This take is nonsensical. People don’t bring up George Floyd’s past when discussing him because it isn’t relevant to the circumstances of his death. When people protest or talk about him, they’re focusing on the way he died and how cruel it was.
In contrast, Hulk Hogan was a racist. What he said on that tape was blatantly racist. When discussing his life and legacy, it’s relevant to bring up a racist rant he made as an adult, especially when he was already famous.
No one is suggesting George Floyd was a role model. That’s not the point.