Former AMD employee here (2007-2012)
AMD 'dropped the ball' BADLY when, in 2012, then-VP Ben Bar-Haim decided to do a software purge and focused on retaining the over-bureaucratic folks at ATI/Markham.
Net result: NVidia was able to (and did) pick up a lot of very smart researchers and developers from AMD (I know of a couple who were thoroughly disgusted with AMD management at that time)
He also trashed a lot of good and useful software projects for seemingly protectionist reasons (if it wasn't ATI/Markham, it was dumped)
Wasn't there a point at which AMD was actually looking at buying Nvidia, but Jensen wanted to be something like CEO? Jensen actually worked at AMD, so there was already a connection there.
Instead AMD bought ATI, which if I remember was barely hanging on. Not saying it was a bad purchase, just interesting that they made a bet on ATI (which always had buggy drivers in my experience) even though it hadn't really demonstrated success ... how decisions ripple for a while.
Seems to me that the goal is to build a funding model.
There CANNOT be such a thing as "Safe Superintelligence". An ML system can ALWAYS (by definition of ML) be exploited to do things which are detrimental to consumers.
Unaligned pointer accesses are for 80386 bozos.
Period
End of story.
If you want to play in 64-bit land, live by the architectural rules. If you do not, your code will likely die. And you need to "Lurn" a lot.
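For illustration, a minimal C sketch of what "living by the rules" looks like in practice (the buffer and offset are made up for the example): dereferencing a cast, misaligned pointer is undefined behavior in C even where the hardware tolerates it, whereas going through memcpy lets the compiler emit whatever alignment-safe sequence the target needs.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical buffer with a 64-bit value stored at an odd offset. */
    static unsigned char buf[16];

    int main(void) {
        uint64_t value = 0x1122334455667788ULL;
        memcpy(buf + 1, &value, sizeof value);      /* store at a misaligned offset */

        /* Risky: casting a misaligned pointer and dereferencing it is
           undefined behavior, even if it happens to "work" on x86:
           uint64_t bad = *(const uint64_t *)(buf + 1); */

        /* Portable: let memcpy handle the possibly-unaligned load. */
        uint64_t good;
        memcpy(&good, buf + 1, sizeof good);
        printf("%llx\n", (unsigned long long)good);
        return 0;
    }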
Silly Rabbit - absolutely accurate times are a security problem. CPU designers (even way back in Alpha @ DEC) intentionally introduced clock jitter, just to prevent total predictability. For x86, I think if you performed 3-4 timer reads back to back, saved the values into registers, and then upon completion reported those values, you would find that the time deltas are NOT exactly the same.
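A minimal sketch of that back-to-back measurement, assuming an x86 machine and GCC/Clang's __rdtsc intrinsic (the read count of 4 is arbitrary); whether the deltas vary, and why, is exactly what's being debated here:

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc() on GCC/Clang for x86 */

    int main(void) {
        enum { N = 4 };
        uint64_t t[N];

        /* Read the timestamp counter several times in a row,
           saving each value before doing anything else. */
        for (int i = 0; i < N; i++)
            t[i] = __rdtsc();

        /* Report the deltas afterwards. */
        for (int i = 1; i < N; i++)
            printf("delta %d: %llu cycles\n", i,
                   (unsigned long long)(t[i] - t[i - 1]));
        return 0;
    }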
Do you have any sources for this? My googling skills are failing me. I'm surprised early x86 designers (which I assume you're including) were aware of security issues with accurate clocks; I certainly wasn't until this millennium :D I would rather guess observed clock jitter would be explained by interrupts or some such. Not saying you're wrong, I'd just like to learn more.
Very true, but if you consider things like "The ATLAS Collaboration" to be one (very composite) author, I think GP's reasoning has some merit. There is a lot of homogeneity in a collaboration author like that, so it's a reasonable thing to do. (Also, speaking from experience, most people in the collaboration paid no attention to the paper, unless it was a Big One.)