Hacker News
MIT researchers uncover ‘unpatchable’ flaw in Apple M1 chips (techcrunch.com)
885 points by markus_zhang on June 10, 2022 | hide | past | favorite | 194 comments



The author is here and ought to clear things up, but if you google the title of the article you can already download the paper, despite everyone being coy about it and the ACM not having published it yet. It's kind of ridiculous that it's getting this kind of press before the paper is officially published and available. If the paper had been published and security experts had been able to analyze it before the tech press went nuts, the stories would probably have a different tone.

This is interesting theoretically, but the amount of access required is pretty high; this is hardly an exploitable zero-day.

Having read the paper, in a nutshell it requires:

- Login to the Mac in question

- Ability to install a custom kext to make the PACMAN Gadget work. The exploit requires access to undocumented registers on the M1 that are apparently not accessible from user space, but this is a bit unclear.

- You also need an exploitable kernel buffer overflow against macOS (they made a custom kext with a buffer overflow)

- Run the buffer overflow + PACMAN Gadget together to do the final elevation

If someone can find a way to do this without installing kexts, it becomes way more serious. As is, it's certainly a super interesting paper and presents a bunch of work for chip designers.


Hi! I think I can clear a few things up here.

Our goal is to demonstrate that we can learn the PAC for a kernel pointer from userspace. Just demonstrating that this is even possible is a big step in understanding how mitigations like pointer authentication should be thought of in the Spectre era.

We do not aim to be a zero day, but instead aim to be a way of thinking about attacks / an attack methodology.

The timer used in the attack does not require a kext (we just use the kext for reverse engineering); the attack itself never uses the kext timer. All of the attack logic lives in userspace.

Provided the attacker finds a suitable PACMAN Gadget in the kernel (and the requisite memory corruption bug), they can conduct our entire attack from userspace with our multithread timer. You are correct that the PACMAN Gadget we demonstrate in the paper does live in a kext we created, however, we believe PACMAN Gadgets are readily available for a determined attacker (our static analysis tool found 55,159 potential spots that could be turned into PACMAN Gadgets inside the 12.2.1 kernel).

Our paper is available at our website: https://pacmanattack.com/paper.pdf


Something definitely went wrong here, though, in that more guidance was not provided to the tech journalists.

Most of the mainstream articles make it seem like they a) did not read the paper, b) are incapable of understanding the paper, or c) were not provided any guidance about what any of this actually means in the real world.

Which is all scary as the paper is well written and very accessible IMO.


Based on the article, I think the journalist basically understands the situation (and if they don't, they should investigate further, that's the job). The headline is just intentionally over-dramatic to get clicks. This shouldn't be treated as a good-faith error, more guidance isn't required and wouldn't help.


It's sad that we reached a point where assuming bad faith from public informers is acceptable and, worse, reasonable.


Worth noting that someone else usually writes the headline, not the journalist who wrote the article.


OK, but that doesn't excuse things. There's a problem with journalism, and it's mostly about how journalists are incentivized and compensated. I don't know what the fix is, but trust is so low, and rightfully so, that journalism has largely failed as an industry at its job.


Journalism is paid for by ads, mostly. For online journalism, unless people click there is no money to pay the producers. Hence clickbait. This is a problem but there are worse problems.


In my opinion the requirement that HN submissions match the article's title is quite absurd because of this phenomenon.


Uh, that isn't the rule, for exactly that reason.

> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

https://news.ycombinator.com/newsguidelines.html


I've seen many cases where mods revert an informative title in favor of a less informative one. Also, the idea that the title of a submission should match the title of the resulting article is quite silly, since often the article is written for a different audience and the informed HN submitter can sometimes craft a title that better summarizes to HN readers why the story is interesting / worth reading / controversial.


Expected, yes. Reasonable or acceptable, no.

But what can you do about it?


>Most of the mainstream articles make it seem like they a) did not read the paper b) are incapable of understanding the paper

This seems pretty par for the course in terms of science/tech journalism to be honest.


> It's kind of ridiculous it's getting this kind of press

> Something definitely went wrong here though

Both of these have the same reason: it's about Apple. I know someone that avoids telling people that they work at Apple, to avoid similar drama.


I mean, it's been months of random press about the M1 despite there not being anything special about it other than it being _Apple_, so that's basically what you get when your marketing is super effective.


a) did not read the paper

Yeah, that's a given. Journalists do not have time for optional tasks.

b) are incapable of understanding the paper

That's a safe bet.

c) were not provided any guidance

Asking for guidance or clarifications is another one of those optional tasks.


Clearly "did not read the article" applies to some internet commentators.

The article is pretty good and shows good understanding of what the attack is.


Welcome to tech journalism.



Welcome to journalism.


It’s not just a problem with journalism but with humans in general. People are more imprecise with their comprehension of things than they are willing to admit.


How many synonyms did you try before you landed on “imprecise”?

It’s a very diplomatic choice. Moreso than I’d have gone with. I admire your restraint!


Thank you. Even here on HN, plenty of folks will comment on all kinds of research they know little about.


> Even here on HN, plenty of folks will comment on all kinds of research they know little about.

I don't see anything wrong with that, because all opinions are not equal.

I think there is some level of pressure to get one's word in quickly, otherwise the nebulous cloud of commenters moves on to the next story, and your well-thought-out comment that took hours to write is seen by no-one. If you're responding to someone hoping to get into a nice conversation, you're out of luck since they have no idea you just responded to them.


Any thread about nutrition is so painful here.


Anything related to medicine/biochemistry can get cringey pretty quickly here. I think the problem is that the crowd here is generally pretty intelligent, but they know it, and it acts as a coefficient > 1 on the Dunning-Kruger effect.


Welcome to advertising driven journalism.


If the information had been given solely to public security experts with a blog presence (Matthew Green, Bruce Schneier, and a plethora of others), we could've linked to them and either ignored the 'clickbait middleman' or done most of the work for them, allowing them an easier time writing up something half decent.


Matthew Green once publicly criticized something I created by simply parroting what someone else had said without bothering to do his own investigation. The original criticism turned out to be hogwash, and Matthew failed to recognize an obvious real crypto problem with the first version of my feature because he was too busy trying to just quickly stick his name into someone else's feature announcement while it was still "hot off the press."

I would take anything Matthew Green blogs about with a grain of salt. It's not clear how much of what he says is just cheap amplification of what others claim.


MG shows here that even a small dose of fame can ruin the biggest of nerds.


I can imagine some of the Tech YouTube channel headlines this week:

"Apple is DOOMED!"

"Turn off your iPhone, Apple's security BUSTED!"


YouTube titles with CAPITALIZED words make me sad. I don't want to click on any video with a title like that, but creators are incentivized to use those titles because they get more views. Some fine videos end up with those titles, and I would miss out on some good stuff if I refused to click on clickbait titles.


The Techcrunch article is well written and I thought summarises this rather well.

The headline is pretty reasonable too. Apple can't patch this, and as other commentators point out, subsequent attack techniques are only going to make this flaw worse.


Everything you listed is at the very end of the list of things that matter to modern-day journalists. If they even make the list.

But reading your comment does put a smile on my face. It's what others would call a non-cynical world view.


d) were given about 90 minutes to write the article


95% of journalism is at this level of understanding for anything outside the liberal arts that is commonly offered as a 4-year degree, and has been for as long as journalism has existed.


I really sympathise with how your research is being misunderstood, with the reporting and responses to the press missing the main point. And everyone equating modern ARM with "M1"... Anyway, awesome work! Let's hope pointer authentication gets a thorough treatment from the research community and you and other people can build further exciting results on your work!


Given there are "55,159 potential spots that could be turned into PACMAN Gadgets", do you think it is highly probable this attack is now part of a zero-day kill-chain?


100% chance.


Yep, 100% chance. (Despite the above downvotes, and this is coming from a JavaScript engine researcher).


Yeah, I work on Linux security for a big Red I mean Blue, company. Wrote poc for spectre/meltdown and more, but what do I know. ;)


You didn't clear up anything about publicizing this heavily in the mainstream press before it's been reviewed by your peers.


It has been reviewed by the author’s peers — it was accepted to ISCA ‘22.


What are you hoping for here? Look at the facts as written.


Also important to mention that PAC is a new ARMv8.2 feature and previous versions of ARM have no PAC system at all. The only ARM chips with PAC are from Apple and AWS Graviton3 - any other chip has no hardware protections against this.

So ultimately, right now, it's just downgrading a very new protection to as if it didn't exist, which is exactly how 98% of ARM chips in the world operate right now. Not great, not terrible, A for effort, was worth a shot, speculative execution breaks everything.


Besides the Apple CPUs and Graviton 3 (Armv8.4-A), the cores introduced by ARM in 2021 and present in some 2022 smartphones (Cortex-X2, Cortex-A710, Cortex-A510), which implement Armv9.0-A, also support PAC.


The doc in the kernel tree is dated 2017, so it's percolated to products quite slowly: https://www.kernel.org/doc/html/latest/arm64/pointer-authent...


Ah thanks, this finally makes sense in context.


Edit: Correction, ARMv8.3, not ARMv8.2.


From the home page: " For our demo, we add our own bug to the kernel using a kext. In a real world attack, you would just find an existing kernel bug."

This attacks a memory-safety exploitation mitigation, so there needs to be a memory-safety bug. Otherwise there's no need for PAC in the first place.


> "PACMAN is an exploitation technique- on its own it cannot compromise your system. While the hardware mechanisms used by PACMAN cannot be patched with software features, memory corruption bugs can be."

Reading the "official" site of the researchers gives more clues than the headline of the article.


Once you can get someone to install your kext aren’t you basically at game over anyway?

Or do they have strong limits as well?


This was just to expose the gadget more easily; there are other gadgets in the kernel that you can abuse too.

A full fix will require a complete recompile of the kernel/toolchain, unless they can do some kind of microcode update (I have not checked) that allows for mitigation.


> - Ability to install a custom kext to make the PACMAN Gadget work. The exploit requires access to undocumented registers on the M1 that are apparently not accessible from user space, but this is a bit unclear.

^ This is not the case; you can trigger these from userspace in valid syscalls. The same behaviour existed in Spectre.


Yeah, the tech news just runs with it because fear sells. This is the “intel backdoor” all over again.


> - Login to the Mac in question

Really? Usually malicious stuff is installed by the user themselves, unaware of it.


Installing a kext is a password prompt away, I believe, so all that's needed technically seems to be "install something not reviewed by Apple, fill in a genuine OS password prompt when asked, and run it" which strikes me as a very common scenario.


Enabling 3rd-party extensions is much more involved on AS Macs: https://support.apple.com/guide/mac-help/change-security-set...

Then the extension needs to be allowed in System Preferences > Security (this step has been required on Intel Macs too)


Additionally, if you can find a way to trick a user into installing a malicious kext, why even bother with PACMAN? You already have arbitrary kernel code execution!


Perhaps the kext with the overflow may not necessarily look malicious? It can serve as an actually useful kext and pass review.


These days all kexts look malicious.


Yeah. The bundled ones in the main OS image are the worst. Who the hell knows what nefarious acts lie behind IOPCIFamily.kext?!

/s, though not entirely, moving more stuff to unprivileged contexts would be nice


Yeah, but if you can trick the user into doing that, you can already trick them into doing more.


First you need to trick Apple into signing that kext (which is getting more difficult by the day even for legitimate uses), or get the user to disable SIP first.


Didn't many tools require disabling SIP, like Homebrew? Is this no longer true?


This hasn't been true for at least 2 major macOS releases.

But yes, there was a time when editing even /etc/sudoers required disabling SIP. That time is long gone.


I downloaded Brave after googling it and got a .pkg instead of a .dmg because someone wanted to hack my new laptop, and didn't want to risk a hack that leveraged an exploit only nation states know about. They are sMArt. Like me! I still installed it because I am gullible and lazy.


> Login to the Mac in question

right, ok? as Snowden thankfully heroically warned us, the NSA already has root-level access to most devices.

> Remember the Spectre vulnerability. Now we have more than 7 variants of Spectre. No one guarantees that new, more destructive versions of this vulnerability will not be discovered in the future.

> If someone can find a way to do this without installing kexts then it becomes way more serious.

right, yeah, you’re onto the right trail now.


As someone with a bit of experience in this area, IMO, the Techcrunch article is more confusing than it should be.

Here's a link to the actual abstract. The work will be presented at ISCA, which will start on June 18. https://dl.acm.org/doi/10.1145/3470496.3527429

Here's a link to MIT's press release. https://www.csail.mit.edu/news/researchers-discover-new-hard...

Here's a link to the vulnerability's website, as is tradition now. (Plus the paper) https://pacmanattack.com/


Having grokked the abstract, I feel like I can speculate a bit as to what is going on. Take this with a grain of salt; I have no clue what has actually been discovered.

I believe that the researchers have found a way to remove PAC as a barrier to exploitation by disclosing PAC verification results via speculative execution. This is only useful to attackers going after a target that uses PAC, and those attackers will need to have another vulnerability that enables them to hijack control-flow through modifying pointers to code that are located in memory.

The attackers can use this new Pacman vulnerability as a crash-free oracle that says whether their forged pointer worked, and once they find a working one, they can use that to hijack control flow.

PAC (or Pointer Authentication) is a security feature found in recent iPhones, the Apple Silicon Macs, and the Graviton3. It is intended as a defense against control-flow hijacks. It works by signing pointers found in memory with one of five keys that are known only to the processor. Before the pointer is used, the processor should be instructed to "authenticate" the pointer by checking the pointer's signature using those keys. To prevent simple reuse of a pointer authenticated in one place as a pointer used in another place in the program, code can provide a "context" value to be used during authentication.
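As a toy illustration of that sign-then-authenticate flow (my own sketch: real hardware embeds the PAC in unused upper address bits and computes it with a block cipher such as QARMA, not an HMAC, and the constants below are invented):

```python
import hmac
import hashlib

PAC_BITS = 16  # stored in otherwise-unused upper virtual-address bits

def _pac(ptr: int, context: int, key: bytes) -> int:
    # Stand-in MAC; real hardware computes this with a cipher, not HMAC
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "little")

def sign(ptr: int, context: int, key: bytes) -> int:
    # PACIA-style: fold the signature into the pointer's top bits
    return ptr | (_pac(ptr, context, key) << 48)

def auth(signed_ptr: int, context: int, key: bytes) -> int:
    # AUTIA-style: strip and verify; on failure, any later dereference faults
    ptr = signed_ptr & ((1 << 48) - 1)
    if (signed_ptr >> 48) != _pac(ptr, context, key):
        raise ValueError("authentication failure")
    return ptr

key = b"processor-secret"               # stands in for a processor-held key
p = sign(0x1234_5678, context=7, key=key)
assert auth(p, 7, key) == 0x1234_5678   # matching context: pointer restored
# auth(p, 8, key) would almost surely raise: reuse under another context fails
```

The small output size of `_pac` is the crux: 16 bits of signature is plenty when a wrong guess crashes the kernel, and far too little when it doesn't.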

A great resource for learning about PAC and its usage in the Apple platforms is at [1] (it links to other resources) and if you want to play with a PAC enabled binary, check out [2]

[1]: https://googleprojectzero.blogspot.com/2019/02/examining-poi...

[2]: https://blog.ret2.io/2021/06/16/intro-to-pac-arm64/

EDIT: The attack works by:

1) Place your guess such that it is used as the pointer input to an authentication instruction

2) Cause a branch misprediction. On the not-taken side of the branch, code needs to perform a pointer authentication and a use of the pointer. On the taken side of the branch, code should not crash.

3) CPU speculatively executes down the not-taken side of the branch (misprediction) and speculatively executes the authentication instruction.

4) If your guess is correct, the authentication instruction will return a valid pointer. If your guess is incorrect, the authentication instruction returns a pointer that, if dereferenced, will cause an exception.

5) CPU speculatively executes a load (in the case of a data pointer) or an instruction fetch (in the case of a code pointer) on the pointer value.

6) If the pointer is valid, the address translation for that pointer will appear in the TLB. If the pointer is not valid, it will not (because of the exception).

7) All of the effects from this mispredicted branch get squashed when the CPU realizes that the branch is not taken. No exception is actually thrown!

8) Measure the TLB entries to determine whether the speculative address translation made it in. If it is present, you know that the guess is correct.

9) Repeat, up to 2^16 times.
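Stripped of microarchitectural detail, the loop above is just a crash-free brute force. A toy model (purely illustrative; `speculative_oracle` here stands in for steps 2 through 8, where the real answer comes from probing the TLB):

```python
import secrets

PAC_BITS = 16
SECRET_PAC = secrets.randbelow(1 << PAC_BITS)  # the PAC the attacker wants

def speculative_oracle(guess: int) -> bool:
    # Models steps 2-8: the guess is only ever authenticated under
    # speculation, so a wrong guess never crashes (the faulting access is
    # squashed), and the result leaks via whether a TLB entry was filled.
    return guess == SECRET_PAC

def recover_pac() -> int:
    # Step 9: at most 2^16 guesses for a 16-bit PAC
    for guess in range(1 << PAC_BITS):
        if speculative_oracle(guess):
            return guess
    raise RuntimeError("unreachable: the full key space was covered")

assert recover_pac() == SECRET_PAC
```

The whole value of the speculation trick is turning what should be a one-shot, crash-on-failure check into an oracle that can be queried 65,536 times.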


You hit the nail right on the head! That's exactly what we did :)


Apparently they haven’t fixed it yet, so a hardware solution may in fact not be possible, but is there any reason to believe it couldn’t be patched in “microcode”?

Who can guess at the performance impact, but one could imagine a configurable mechanism capable of disabling speculation past a PAC authentication.


Does the M1 even have microcode?


I’d be shocked if a modern CPU didn’t have some kind of “firmware” to respond to errata.


Amazingly articulate writing. I think you could have a second career as a tech writer if you ever wanted one.


Hahaha thank you. I am humbled by your praise


And this really can't be fixed in any way? Not trolling; I'm happy to admit I barely understand this in the first place.


Until it has been fixed in hardware, I think it could be mitigated in software a bit, but at a cost. A PAC signature can also include a 64-bit "context" value, which you could make unique per pointer (like a nonce).

However, context values are not something that is supported by any C ABI: the PAC extension contains also instructions that hardcode the context value to zero, and I would guess that those are what the kernel is using currently. To make use of context values, I suppose you would have to use a new ABI that stores effectively 128-bit pointers, and which also creates the random nonces/keys/whatyoucallthem to store in them.
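To make this concrete, here is a hypothetical sketch of such an ABI (the nonce-carrying "wide pointer" and the HMAC stand-in for the real PAC cipher are inventions for illustration only, not anything shipping):

```python
import secrets
import hmac
import hashlib

KEY = b"cpu-secret"  # stands in for the processor-held PAC key

def pac(ptr: int, context: int) -> int:
    # Toy 16-bit MAC; real PAC uses a hardware cipher
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    return int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")

def make_wide_pointer(ptr: int) -> tuple[int, int]:
    # An effectively 128-bit pointer: the signed pointer plus a random
    # per-pointer nonce used as the PAC context
    nonce = secrets.randbits(64)
    return ptr | (pac(ptr, nonce) << 48), nonce

def auth_wide_pointer(signed: int, nonce: int) -> int:
    ptr = signed & ((1 << 48) - 1)
    if (signed >> 48) != pac(ptr, nonce):
        raise ValueError("authentication failure")
    return ptr

signed, nonce = make_wide_pointer(0xDEADBEEF)
assert auth_wide_pointer(signed, nonce) == 0xDEADBEEF
```

The intuition is that a PAC brute-forced under one pointer's nonce would say nothing about a pointer signed under a different nonce, though whether that actually helps against PACMAN is debatable, since the gadget in the victim's code supplies the context itself.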


I do not understand how you want to mitigate this issue by using the "context", given that the attack demonstration is done with source code that makes use of the "context". The attack is fully context-agnostic, since the "PACMAN gadget" in the victim's code injects the "context" by itself.

The root of the problem is the small hash size and the fact that you can "suppress" the effects of a failed hash check to brute-force the hash. (It's expected that a failed hash check will cause a crash, which was intended to prevent brute-forcing.)


Speculative exec ruins everything again.

I get the performance gainz, but when are we going to accept that executing any instruction we don't need to, based on actual control flow, is de facto a violation of user expectations and therefore unsafe to do?

Every lay person I explain speculative execution to seems able to recognize that a pipeline stall to figure out what a value actually is, is just the way to go.

Hell, my personal sanity check with computing is that there must exist a humans-only implementation that corresponds to a good computing primitive.

Nowhere on Earth will you find an organization that will execute both sides of a conditional process requiring humans to do the work, just to throw away the result not taken.

Oh wait... Finance does it with Hedges...

Frigging finance. Ruins everything for everyone.


> Nowhere on Earth will you find an organization that will execute both sides of a conditional process requiring humans to do the work, just to throw away the result not taken.

Speculative execution today (within modern high-end processor) does not execute both sides of a conditional branch.

It would indeed be a waste of power and it would be a much more complex micro-architecture.

Modern speculative OoO processors execute a single path and simply rely on the branch predictor's accuracy. And they are pretty accurate, on the order of 3 mispredictions every 1000 instructions. The power consumed on unnecessary work due to a misprediction is quite low.

Modern processors consume much of their power in Out-of-Order instruction scheduling.


Looks like I need to go through Patterson and Hennessy again.

As I was fairly sure that as many computations as possible were done in the same cycle, with the ditching of "not the case" results on subsequent cycles. But I'll be the first to admit I haven't synced with the bleeding-edge literature recently. And sipping power has become much more of a concern in recent years, so I may be due for a refresh anyway.

My favorite pastime, if the statistics are to be believed.


C pointers don’t typically use the context very much, but C++ uses it heavily.


Personally, I believe it can be fixed via key rotation (e.g. there are 3 inputs to the PAC algorithm. The pointer, a "modifier", and a "key" e.g. APIAKey_EL1).

I would have added that as a potential mitigation in the mitigations section. I think, say, changing the key every so often would be a reasonable task for the kernel, especially given the timeframe over which this was exploited (about 3 minutes).


Hi! This is an interesting idea. However, there is a problem that arises: if you rotate the key, then old pointers become invalid. And since the kernel is always alive and servicing requests (and contains structures with very long lifetimes), we don't believe this to be a practical solution.


Hi! Joseph (one of the authors) here. You can read more about our attack here: https://pacmanattack.com


Hi Joseph! Go Illini! I didn't see you my last semester but I'm glad to see Chris's members doing well in the world. Also always love Mengjia's work.

2 questions.

1) It's relatively well known that PAC is brute-forceable given its relatively small key space (16 bits, sometimes 8 if TBI is enabled). How does your attack differ from general brute forces? (My impression is that your leveraging of the BTB/iTLB is a bit more stealthy.) Similarly, in your opinion, would a fix be more ISA-level, or do you think it's more specific to the M1 (given brute-forcing in general is a PtrAuth problem)?

2) You mention in section 8 that this took 3 minutes for a 16-bit key and tons of syscalls. Wouldn't another proper mitigation be to limit the number of signatures per key? 3 minutes is definitely a long time, and some form of temporal separation may be quite helpful.
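For scale, the rough rate implied by those numbers (my own back-of-envelope arithmetic, not a figure from the paper):

```python
# A 16-bit PAC gives 65,536 candidates; covering all of them in ~3 minutes
# implies a few hundred speculative authentication attempts per second.
candidates = 2 ** 16
seconds = 3 * 60
rate = candidates / seconds
print(round(rate))  # 364 guesses/second, worst case over the full key space
```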


ILL-INI!!!

1) Our attack does apply a brute force technique with the twist that crashes are suppressed via speculative execution. If you tried to brute force a PAC against the kernel, you'd instantly panic your device and have to reboot.

2) Given that we never sign anything (only try to verify a signed pointer), and that every authentication attempt happens under speculation, I'm not sure how you would rate limit this without absolutely destroying performance. Keep in mind the kernel is doing a whole lot more with PAC than just our attack (for example, every function's return address is also signed with PAC) so distinguishing valid uses from a PACMAN attack might be challenging.

I suppose you could track how many speculative PAC exceptions you got, but it's a little late to add that now isn't it? And it could also raise lots of false positives due to type confusion style mechanisms on valid mispredicted paths.


Thanks for the quick answers.

Third Q-- What's your opinion on BTI as a possible mitigation? Given it's an v8.5 feature meant for JOPs, and this attack is essentially a speculative JOP, maybe we could use BTI to mitigate and heavily reduce the number of gadgets, speculative or not.


Would it be possible instead to mitigate this by removing the side-channel: either don't leave any trace in the TLB of the speculative execution, or deny access to the TLB for user mode software?


Unwinding changes to the TLB on every mispredict would have a significant overhead and hurt overall performance. Removing valid data you just cached (speculatively or otherwise) is generally a bad idea.

User mode software requires a TLB (unless you want to do a page walk for every single instruction!)

Even if you could remove the TLB entirely from the CPU somehow, the attacker could just use the cache or some other microarchitectural structure.


Well that's disappointing. Thanks for the explanation!

I never have to worry about such low level details in my day-to-day work, so this is all new to me.


Really amazing work here!

A colleague pointed out that FPAC[1] in ARMV8.6-A likely prevents this attack, is that right?

I haven't fully digested the paper, but the gadgets seem to rely on AUT, and "Implementations with FPAC generate an exception on an AUT* instruction where the PAC is incorrect"

[1] https://community.arm.com/arm-community-blogs/b/architecture...


Same problem. Speculative failed authentication speculatively traps, speculative successful authentication accesses data.


You were in touch with them since 2021. Did they manage to fix it for the M2? Or is it also vulnerable?


That's too short of a timeframe in the silicon world. If it's considered important we'll see something in M3, but more likely in M4.


Is PAC something like the old GCC stackguard canary mechanism done in hardware?


You can think of it a lot like that! PAC is more advanced as you can describe what a pointer "should" do on access (aka is this a data or code pointer?).


Hey Joseph,

How does one prove that a hardware exploit is actually 'unpatchable'?

Thanks


This is a great question! What this means is that a software patch cannot fix the speculative execution behavior that causes the PACMAN issue since it is built directly into how the hardware operates.


So there is no possible set of instructions that could block the particular behavior in the exploit?


You could maybe do it with lots of fences or just a ridiculous chain of NOPs after each branch such that the ROB is cleared before you have time to try to load a pointer speculatively.

In practice, both of these would probably kill performance, so I don't think either of these are great solutions. Recall we are targeting the kernel where everything needs to be as fast as possible.


This gets into the turing completeness tarpit. Yes, it's possible to make a vulnerable implementation emulate a chip that is not vulnerable. And maybe even detect when you don't need to emulate and run natively some of the time.


(Will read the paper later)

How lawyer-y do you think Bandai Namco will be?


They probably won't care about this, although I do find it weird when researchers make a whole website with a custom domain just to publish something like this. Personally, it comes off as less trustworthy, since it enters the same realm of bullshit as those market-manipulation attacks on AMD a few years back. [1]

Not saying that's what this is (I'm sure these are legitimate findings), but this tactic raises some red flags for me.

1: https://www.gamersnexus.net/industry/3260-assassination-atte...


Yeah, I hate this trend of naming vulnerabilities and pandering to the tech press. The CTS Labs FUD was just beyond the pale. Most tech journalism just ate up those claims, which were clearly B.S. and not even self-consistent. They claimed it was impossible for AMD to patch with firmware or microcode, but in the same sentence claimed an attacker could use it to create a rootkit that couldn't be removed. Nobody took two seconds to think critically about what they were publishing and realize they were claiming that it was, in essence, somehow possible for an attacker to "pull up the ladder behind them" but not for AMD.

Maybe this "unpatchable flaw" in the M1 has more legitimacy than the "critical AMD vulnerabilities" back in 2018, but please, stop with the stupid trendy names for vulnerabilities. Let's discuss this on the technical merits and skip the marketing.


>Yeah I hate this trend of naming vulnerabilities and pandering to the tech press.

It is not a trend. It's a tradition:

Back Orifice. Ping of Death. Smurf Attack. Computer Viruses. Computer Worms. (Hello Robert Morris!)


Actively marketing yourself and your ideas is one of the most important things you can do. Without, most people simply won’t know about it or will dismiss it. Just because you market it, doesn’t mean it’ll be successful - things still have to prove their worth regardless and will otherwise fizzle out.

How many important security vulnerabilities that had just a technical white paper and no marketing have gotten wider coverage? Very, very few. It's also very useful for humans to have a short, memorable name when talking about something.


The Heartbleed bug was a great name for this purpose: it motivated more people to fix it.


If they are: Joseph and MIT, please stand up to them. The standard for infringement is confusing similarity. Researchers aren't marketing goods, and there's no risk of confusion.


I would have thought Arch Linux would have more of a case for their package manager (pacman), seeing as it could now be confused with an exploit.


Skimmed this really fast, but is this just bypassing PAC by brute-forcing the code with speculative execution?


Pretty much!

(There are a few aspects that make this challenging in practice, but that's the idea).


cool, thanks!


Seems like if this were successful, it would weaken the extra security provided by pointer authentication, at worst weakening it to the level of a CPU without pointer authentication, like the x86_64 chips they used to use. So not great, but not catastrophic. Or am I missing something?


Correct. And, in general, I don't think this worsens it to the level of a CPU without PAC. It still takes effort to do this, since, like Spectre, you need the ability to measure time, and you need to spend a significant amount of CPU power to get a useful signal.

Definitely an impressive result, and it certainly reduces the usefulness of PAC, but I'd guess PAC is still going to prevent a significant number of attacks.


You're right, that's exactly what this is. Just a way to defeat a defense in depth measure.

This vulnerability is useless by itself.


This is a dangerous position to hold. This vulnerability reduces the overall security posture of the OS, so it's quite valuable. It improves an adversary's opportunities and, however limited, still advances the potential for system compromise.

Infosec isn't just home runs; it is iterative, cumulative progress toward a shared goal. Things like this are what ultimately led to Xbox and PlayStation jailbreaks.


This is correct, but at the same time it's important to not overhype every single vulnerability as the end of the world. Unfortunately some security people seem incentivized to do that, and it causes a "crying wolf" problem. Serious end of the world announcements should be reserved for serious end of the world vulnerabilities.

This is a vulnerability that reduces the security of the Apple Silicon platform to being closer to on par with the immediate prior platform that Apple is actually still selling.


...in certain, very restricted circumstances.


So not the "last line of defense"?


It is. It's just that compromising the last line of defense, without compromising the ones which come before it, is not the end of the world.

It's like if I could wave a magnet over your encrypted backup tapes, ruining your restore capability, but without having any ability to affect your production and DR sites. You'd rather it didn't happen, but you are still up and even have redundancy.


>is not the end of the world.

Today. But how many years did it take for the theory to be applied in the "real world" with other CPU vulns?


I don't think you understand this vulnerability.

This will not become Spectre-like in 10 years from now.

The impact will be the same as it is today.


Sorry, I meant chaining vulns, once they do appear


lol. apparently the article title is "technically the truth"


It can reasonably be described as such. The headline implies that previous lines of defense have also been broken, which isn't the case.


>https://pacmanattack.com

>does PACMAN have a logo?

>Yes!

Great, answering the hard questions.

The trend of creating a marketing website for every horrible exploit is so strange. Who are these people selling to, and what?

Fear to media outlets is my only guess.


Imagine that you had come up with a delightfully clever way to break pointer authentication using wacky speculative execution side-channel shenanigans. Wouldn't you be proud? Wouldn't you want to give it a cute name and an explanatory web site and a picture of Pac-man in a Venn diagram? I sure would.


Researchers write papers. They want their papers to be read by many people as they depend on them for their livelihood. Wouldn’t you market yours?


No. It’s patently unethical to equate this to something like Heartbleed. It doesn’t warrant a meme.


Guys, chill out. This is like halfway down their FAQ and obviously tongue-in-cheek. This is not a whitepaper, it's a press kit.


So an attacker who needs to bypass PAC can already sometimes find a nonspeculative PAC bypass, though it’s hard. Assuming they can’t, maybe they can use this, if they can find and weaponize an appropriate gadget in the kernel. Sounds plausible but hard; maybe harder than finding a nonspeculative PAC bypass. Any speculative PAC bypass will also suffer from nondeterminism, so it’s not as practical for an attacker as a nonspeculative bypass.

It’s not a given that the speculative PAC gadgets in any kernel are exploitable as effectively as the synthetic kext gadget in the paper.

Even if you find a weaponizable PAC gadget in the kernel, and it actually gives you what you want, it’s not clear how reliable it’ll be in practice.

So, this is kinda scary but it’s also a bit of theatre. The tech press will have something to write about though.
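The nondeterminism point can be made concrete: speculative side channels are noisy, so an attacker has to repeat each measurement and take a majority vote, which costs time and CPU. A toy model (the flip probability and trial count are made-up illustration values, not figures from the paper):

```python
import random

def noisy_probe(truth, p_flip=0.15):
    """One timing measurement: returns the true hit/miss signal, but flipped
    with probability p_flip to model microarchitectural noise."""
    return truth if random.random() > p_flip else not truth

def reliable_probe(truth, trials=101):
    """Majority-vote over repeated measurements to recover the signal."""
    hits = sum(noisy_probe(truth, ) for _ in range(trials))
    return hits > trials // 2
```

With ~15% per-measurement noise, 101 trials make a wrong verdict astronomically unlikely, but that required repetition is exactly why a speculative bypass is slower and less reliable than a nonspeculative one.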


One “feature” this has is that, assuming no retpoline-like mitigation is constructed to prevent against it, it’s usable for a long time if you can get it to work: at least as long as the gadget remains and conditions remain favorable. Your favorite non-speculative PAC bypass might get patched in the next iOS release and you’re back at square one.


Agreed.


Isn't pointer authentication a feature of the ARMv8.3 instruction set and not an Apple-specific thing?


Yes, it’s defined by ARM. Though the unsafe implementation of speculative execution is obviously by Apple.


Is Apple's implementation the only vulnerable implementation?

Prior speculative execution issues applied to more than one vendor's implementation.


Except for the Apple CPUs, only the ARM cores introduced in 2021 implement this.

It remains to be seen whether their implementation is better.

Until now, only a few people had access to such recent CPUs, which can be found only in the 2022 models of some smartphones and in the new Graviton 3 servers, so they have not received much scrutiny.


My reading implies you need actual code execution already? As you need to be able to lay down the actual auth instruction that you want to force? (E.g. nothing so horrific as simply running JS.)

Hahah, ok now I have a much better understanding.

It requires an existing path to arbitrary code execution, and a buffer overflow or some such in kernel space.

So yes this does defeat one part of the M1 defensive system, which is clearly suboptimal, but the way the article portrays it is absurd.


> but the way the article portrays it is absurd.

Spreading paranoia is how the security industry has always operated. It is its incentive, after all.


I know. I think it's this new* habit of giving literally every issue a marketing name is particularly grating, but such is the way of the world :D

* A few years old now I guess


The researchers use a kext, afaict, solely to provide an exploitable gadget - i.e. an attacker would need to find one themselves, and finding a kernel bug is not something that gets a journal publication, so it isn't relevant to the research. In that case using a kext seems reasonable.

The attack itself, however, seems fairly close to requiring true arbitrary code execution (the earlier speculative execution bugs could be exploited from "correct" JS).


Sounds like it requires physical access to a device, is that right? At least that's less of a concern than a software-only attack, though still not good.


As the article puts it, it's a breach in “last line” security defences. Specifically, it's a mitigation that applies when you already have code execution. It's nothing for most people to worry about. PACs are a new mitigation that Intel Macs didn't have, so it's not like it puts the M1 in a worse position than what it replaced.


This is a new security feature, which few computers have, so the fact that it does not work obviously has little practical importance. At worst it makes the new computers with this feature only as secure as any old computer.

Nevertheless, the article is very important, because it shows that this supposedly security-improving feature has been implemented in a way that makes it useless (like it has also happened with some Intel security features, e.g. Software Guard Extensions).

Everybody must become aware that this "pointer authentication" feature is currently unreliable, so it must not be used, and the designers of future CPUs must take care to not repeat these mistakes.


What are you basing “so it must not be used” on?

I would think it can’t harm, ever.


Like you say, it does not do much harm if used.

Nevertheless, there is some small loss of performance, because instructions to compute the Pointer Authentication Code (PAC) must be inserted in the program, and possibly also instructions to authenticate the PAC, though the latter may be omitted if function returns and indirect jumps through pointers are replaced with instructions that combine the authentication with the jump.

I have no idea whether on the Apple CPUs the jumps/returns with authentication run at the same speed as the simple jumps/returns.

I suppose that the Apple compiler automatically adds the PAC computation and authentication instructions, so using this option does not increase the complexity of the source code.

Nevertheless, even if the performance impact of using PAC is not important, whenever a security feature that does not work is used, there is the danger that some people will not know that it does not work, so they will believe that exploits are impossible and that there is no need to prevent an adversary from modifying a pointer, e.g. by crafting a special input to the application.

In general all the variants of pointer validation that have been recently introduced in all CPU architectures are just workarounds against the bugs introduced by programmers, because in a well written program there should be no way for an adversary to modify an internal pointer.


"All programs must be well written" with nothing in the ISA to help you write them properly isn't a good way to do software. PAC helps you reduce bugs because it checksums your pointers.


The most important helping feature of an ISA is to provide an efficient way to compare indices and pointers against limits, so that the poor performance of limits checking does not discourage compiler writers and compiler users from always making such checks.

Any decent C/C++ compiler has compile options to check all array accesses against limits, which, coupled with the good programming practice of always using indices rather than pointers for memory accesses, can catch all programming errors of the buffer-overflow type.

However, compilers do not enable this option by default, nor do most programmers take care to enable it, because of usually baseless worries about degrading performance.

Checking accesses against limits catches any such bugs at their origin, not much later, when a corrupt pointer is identified by PAC without knowing how it became corrupt.

Intel made an attempt with Skylake to introduce instructions for easier limits checking (MPX), but unfortunately the instructions they conceived were just dumb and not helpful at all, so they were eventually deprecated.

Better support exists in the IBM POWER ISA, which has conditional trap instructions.


Security techniques that have known workarounds cause harm by engendering a false sense of security.


Nearly every security feature in existence has known (situation-dependent) workarounds. That does not mean you should turn them off.


> it's a mitigation that applies when you already have code execution

Sounds like a huge deal if you want to run untrusted code inside a sandbox.


You still need a sandbox escape before this mitigation kicks in.


I wonder at what point we will finally give up on trying to make a stable implementation of speculative execution.


The moment someone wants to take the economic hit of a decade+ of performance progress.

IOW, not gonna happen.

I could see coprocessors becoming more popular though. We already have AES-NI; what if the sensitive keys never ended up in the CPU cache because the CPU never had to see them? Specialized HW need not implement speculative execution. Granted, that doesn't prevent seeing the plaintext of something that's decrypted. It's all about the threat model and what tradeoffs you're willing to make.

And that's why Intel et al haven't completely abandoned speculative execution. For the vast vast vast majority of people, the security issues they're much more likely to deal with are straight up getting scammed. Not 0 days, and especially not insane stuff like this. Unless you can turn it into a zero-click iMessage bug (or similar), meh.


I wonder at what point we will finally give up on all these "mitigations" which are otherwise pure bloat without the presence of an actual attack, and seem like they don't make things all that much harder even when there is one.


I just want my computer to go into a special "nonspeculative" mode when I open my bank's website.


PAC is a good mitigation with a good threat model: preventing ROP/JOP.


Any reason they aren't using formal verification for this kind of thing? It would seem like a very worthy investment.


We basically have zero ability to do formal verification against information leakage through cache timing attacks and the like.


The growing pains of silicon development date back to the Pentium FDIV bug in Intel's processor line. I'm not surprised this occurred, just surprised it took so long to come to fruition. I can only think it's the lack of hardware engineers savvy enough to exploit such an issue, since the hardware is so far abstracted away from us ordinary software developers.

Any thoughts on the above?


For the uninitiated, are such ‘unpatchable’ hardware flaws prevalent and/or debilitating to a greater or lesser degree in other processors (Intel, AMD, Apple AX processors)? Or has Apple "dropped the ball" compared with other chip designers?


All modern CPUs have a long list of at least several dozen design defects, most of which are never corrected.

Such lists were initially published as "Errata Lists", but now many manufacturers use less honest names, e.g. Intel uses "Specification Update" and AMD uses "Revision Guide".

Some defects may become manifest only when the hardware is used in certain ways, which may be avoided by the motherboard manufacturers, maybe at the price of reduced performance.

Many uncorrected defects affect only various testing or performance monitoring features. Many other uncorrected defects affect only privileged programs, so the resolution "Won't Fix" is justified by saying that all the popular operating systems have been tested and they have not been seen to trigger the bug. (For someone who develops their own operating system it is mandatory to read all the errata lists, because obviously their OS will not be used by Intel and AMD for testing the new CPUs.)

When the defects can affect user programs, in many cases it is possible to implement workarounds with microcode updates included in BIOS or operating systems, possibly with the price of reduced performance. Only when no microcode workaround is possible and the bug can be triggered by user programs, leading to crashes or incorrect results, then the defect is scheduled for being corrected in a new revision of the CPU. Most of these defects that are corrected are discovered during the testing of the engineering samples, before the official launch of the CPU, which uses the latest revision, with only the known defects that either do not affect non-privileged programs or have microcode workarounds.


> then the defect is scheduled for being corrected in a new revision of the CPU

Note that Intel very rarely issues new revisions of existing models nowadays. Probably creating new masks is too expensive. They simply include some fixes in new models. Probably likewise for all vendors in high-perf but non-critical applications. And probably they all try to let the microcode cover more things via chicken bits or other methods, to avoid the risk of having to do catastrophic recalls.


You can look up some other major events such as spectre/meltdown which also used hardware side channels and speculative execution, or rowhammer which affects RAM.


Interesting! Were the unit testing procedures used in the hardware design and simulation processes themselves flawed? Reading up on these I have not yet been able to elucidate any forensic insight into the original chip design.


Unit testing isn't really the issue here, spectre / meltdown / rowhammer are pretty fundamental design problems.


Meltdown and Rowhammer are pretty fundamental design problems; Spectre is an even more fundamental logic problem. It hasn't been comprehensively fixed in HW and probably never will be, because a fix directly contradicts the need for reasonable performance on multicore CPUs (it sits at the intersection of speculative/OOO execution and cache coherency, and both are needed for reasonable performance). Rowhammer is also hard to comprehensively fix, for physics reasons, but hopefully some dedicated mitigations plus ECC are good enough for not-ultra-critical applications; some people think more could be done that could practically fix it, but I don't know if they have managed to convince the industry (and get some value from their patents in the process). So of the three, only Meltdown could really be fixed by a quick iteration of processor design (and it is also worked around by the OS on old affected models, at a performance cost).


This is a great reminder that software and hardware attacks can be combined to construct better exploits. It reminds me of the attacks that used microarchitectural attacks to break ASLR/KASLR.


Since the M1 is not being used in servers, how big of a deal is this?


The answer, as always, is that speculative execution is the devil.

I’m not convinced you can write a “for” loop safely on a modern processor.


Probably an M2 vulnerability too?


Given the disclosure timeline, it’s very likely.


I guess that was a reason why the M2 was recently announced?

Anyway, it seems those Apple chips can be used not only for Apple systems, which makes them a competitor to both AMD and Intel?


I find it so frustrating and disgusting that articles like this don't link to the original research paper/publication.


MIT has some of the smartest tech people in the world. I wonder if some of the smartest malware comes out of MIT somehow?


This is the second hardware security flaw in the M1. Was this known to Apple in time for M2?


Just taking a stab in the dark: I guess we won't know until the M2s are out. On the researchers' site, they say they have been in talks with Apple about this since 2021. So there may be a fix in the M2. However, we won't know for sure until the M2s are out in public and in the hands of researchers. I don't know what the turnaround time is for silicon design from an engineer's desk to TSMC to being placed on a board in a Mac. It could be that it was too late in 2021 to make a change.


Second security flaw that is nowhere near as significant as Meltdown/Spectre and their endless derivatives. Still a good showing.


ummm. Could this flaw be attacked via the web? And Apple cannot fix it?


No, not without several other 0-days to exploit. See the other comments here.

Essentially, the attacker must have access to another buffer overflow of some sort.

This is only a bypass of a specific line of defense, called PAC - a security feature introduced recently in ARMv8.3. By itself, it is not enough to attack a system at all.


OT, but is anyone here also redirected to "https://guce.advertising.com/collectIdentifiers?sessionId=3_...", which gets blocked by uBlock Origin? It's an HTTP redirect.

This only happens with my IPv6 landline internet connection (German carrier Telekom); via IPv4 mobile internet (T-Mobile) it loads fine. It happens with two different devices, so it shouldn't be a compromised device, and the TechCrunch TLS certificate is valid, so it also shouldn't be a compromised router or my ISP. Are they A/B testing?


Not on my side, but it wouldn't be the first time a third-party advertising server was serving malware. Malvertising will restrict itself to only some visitors to make sure it's not detected and blocked too quickly.

The massive cookie wall I'm met with when opening this site makes it clear that it's probably impossible to determine which third party is responsible this time.

You can read the article safely here: https://12ft.io/proxy?q=https%3A%2F%2Fweb.archive.org%2Fweb%...

The web archive also seems to be redirected for some reason, though that might well be intentional.

Edit: thinking about it, this might also be an attempt to use first party tracking to bypass third party cookie restrictions. Maybe they're A/B testing a new advertising tool?


A domain called "advertising.com" is basically the definition of malware.

It's Yahoo/Oath's spyware domain, and as the URL suggests, it "collects identifiers": presumably it takes a browser fingerprint and sets tracking cookies.


This is TechCrunch's first-party native tracking. If it can't set a third-party tracking and fingerprinting cookie, it will instead forcefully HTTP-redirect you through that domain to drop a "first party" cookie.

NB: TechCrunch is basically advertising.com (via AOL/Yahoo/Oath/Verizon Media/whatever else)


Yeah, I'm getting this too. TechCrunch somehow managed to get even worse.

edit: https://spectrum.ieee.org/pacman-hack-can-break-apple-m1s-la... seems to cover the same thing with less spyware


Your browser shouldn't even be capable of resolving advertising.com

You can use https://github.com/StevenBlack/hosts as your hosts file, but even better is TLD and wildcard domain blocking with dnsmasq or dnscrypt-proxy.


I've had these errors on TechCrunch for years. I don't remember the last time I read something on that site.


Adlink ahoy


The Apple fanboy army is present in this conversation


Considering all we have learned over the years, it is not unreasonable to wonder whether this “flaw” isn’t there by design to meet some secret American government vulnerability requirement.


The NSA is holding a portfolio of undiscovered vulnerabilities, whether they've been planted by its operatives or discovered by its researchers. Old ones get discovered independently and patched, new ones get created, all the time.

Sending men in black or a top secret letter to a company and demanding a back door has to be the clumsiest possible way to go about introducing a vulnerability. It creates way too many people in the know, anyone could disclose it to researchers like OP who could then claim to have found it independently.

It's way more effective to have moles on your payroll.



