
That's putting a lot of trust in Apple. What else can they do when the phone is off?


Whatever they've wanted to since they started shipping the product, the same as every other manufacturer. If "Apple is secretly after me and others" or "I can't trust that Apple is sufficiently secured against a third party turning their devices against me" is your threat model, then owning an Apple (or anyone else's) device has always been problematic for you, and that does not change with plans of them updating the firmware in store.


Being able to remotely turn it on and push updates is a new problem, and requires strong measures to lock it down.


Why is it fair to assume this is only a concern when Apple publicly announces it if the whole concern is they won't publicly announce misuse? The phones already do things when off now e.g. Find My.


If by "remotely" you mean physically touching the device, then sure, one could say it's remotely turning the device on.


I mean by remotely, what the article means by remotely:

"Consisting of a "pad-like device," store employees place unopened iPhone boxes onto it to trigger an update. The pad wirelessly turns on the iPhone, runs the software update, then turns it off again."

As in without contact with the device.


But they can already push updates OTA, and presumably this is subject to the same signing


I think your worry is this: this capability means Apple can turn on and install arbitrary software whenever they want.

Would you be ok if this capability could only be activated at super-close range (wireless charging range) and only worked on iPhones that have not been activated?

We don’t know if that is how this works, but I would be comfortable with this feature if that’s how it works.


Yes

And that's obviously how it works. Maybe it then enables wifi instead of sending data through wireless charging, but that's it. And then it would turn itself off. Still, whatever it can do is what Apple could do before putting it into the box. Again, obviously.

I fail to see how this is concerning (to put it politely)


That is how you would like it to work, but we have learned time and time again that "obvious" does not mean all these safeguards are actually implemented or bug free.

So what you get is a mechanism that wirelessly, without any confirmation, modifies the software on your device and "only" requires proximity. That seems like a great angle of attack for malware, surveillance, cracking a stolen phone,... . You are free to believe that apple implemented this perfectly bug free, but I would not be surprised if we get a CVE related to this in a few years.


Your worries are valid, but thankfully Apple's security architecture is better than your average left-pad enjoyer's. Sure, it is possible there will be security implications.

> that wirelessly, without any confirmation, modifies the software on your device and "only" requires proximity

Running an update is different from "modifying the software". It is possible that such an update just wipes everything on the phone as well.


How would it wipe everything on the phone anyway? There’s nothing on the phone. It’s brand new and in the box.

Apple clearly takes security very seriously. There’s no way this would be left enabled after someone sets up the iPhone.


Just.. how it is.. with any other device and OEM service..?

I don't get it. What do you think Samsung does when you send over your device for repair? Ever got asked for a passcode by them?


Luckily the post explicitly stated how it is different and you can even read it again.


Yes, and it was moronic. Has nobody here ever heard of DFU mode?


This seems really easy to implement, IMO. Here's a world where this basically already works:

* When an iPhone comes from the factory, it's not really off-off, it's just in standby mode. Probably true to some extent in any case?

* If battery is above 50%, try to connect to a special Apple Store wifi network with a known signature. (Wifi has to have this kind of feature, right?)

* If you're connected, check for updates and install any that are available.

* Go back to sleep.

Easy peasy. Bonus points if you can wirelessly charge them through the box, but honestly apple stores probably go through enough product it doesn't even matter.
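The steps above can be sketched as a tiny decision function. Everything here is an assumption for illustration (the SSID name, the 50% threshold, the action names); nothing is known about how Apple actually implements this.

```python
# Hypothetical sketch of the standby update flow described above.
# KNOWN_STORE_SSID and the threshold are illustrative assumptions,
# not Apple's actual implementation.

KNOWN_STORE_SSID = "apple-store-provisioning"  # assumed network identifier
BATTERY_THRESHOLD = 50  # percent, per the sketch above


def standby_update(battery_pct, visible_ssids, update_pending):
    """Return the action a boxed phone would take on one standby wake."""
    if battery_pct <= BATTERY_THRESHOLD:
        return "sleep"                    # not enough charge: stay dormant
    if KNOWN_STORE_SSID not in visible_ssids:
        return "sleep"                    # no trusted store network nearby
    if update_pending:
        return "apply-update-then-sleep"  # fetch, verify, install, power down
    return "sleep"                        # already up to date
```

For example, `standby_update(80, {"apple-store-provisioning"}, True)` would come back as `"apply-update-then-sleep"`, while a phone sitting in a warehouse with no store network in range always just sleeps.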


AFAIK it's off-off. Any kind of standby mode that was monitoring WiFi would drain the battery after some number of weeks, and Apple doesn't want consumers to open their box to a phone that won't turn on until it's charged.

And phones can definitely sit around for many months, at the warehouse and the store.
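The drain claim above can be roughed out with a back-of-envelope calculation. Both figures are assumptions (a roughly 3,350 mAh battery and ~10 mA average draw for a radio that periodically wakes to scan for Wi-Fi), not measured values:

```python
# Back-of-envelope estimate of standby battery life while monitoring Wi-Fi.
# Both inputs are assumed round numbers, not specs for any real iPhone.

battery_mah = 3350      # assumed typical iPhone battery capacity
standby_draw_ma = 10    # assumed average current for periodic Wi-Fi scans

hours = battery_mah / standby_draw_ma   # 335 hours
weeks = hours / (24 * 7)

print(round(weeks, 1))  # -> 2.0
```

So under these assumptions the battery is flat in about two weeks, which is consistent with the "some number of weeks" point, and nowhere near the months that phones sit in warehouses.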


I walk into an Apple store with my device masquerading as the Apple update endpoint, and update the phones with some arbitrary payload to pwn your device before you even open the box? idk if it's possible, but seems plausible.


Apple might not be willing to give you their private key to sign your updates with.
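The gatekeeping step can be illustrated with a much-simplified integrity check. Real iOS updates are verified with asymmetric signatures personalized by Apple's signing servers and checked by the boot chain; the pinned SHA-256 digest below is only a stand-in to show why a payload from a spoofed endpoint gets rejected.

```python
import hashlib

# Simplified illustration: the device only accepts payloads matching a
# digest it already trusts. Real iOS updates use Apple-signed manifests
# and asymmetric crypto, not a pinned hash; this shows the shape only.

TRUSTED_DIGEST = hashlib.sha256(b"genuine-apple-update").hexdigest()


def accepts_payload(payload: bytes) -> bool:
    """Reject any payload that doesn't match the trusted digest."""
    return hashlib.sha256(payload).hexdigest() == TRUSTED_DIGEST
```

A masquerading "update endpoint" can deliver bytes over the air, but without Apple's signing key its payload fails the check, so `accepts_payload(b"arbitrary-malicious-payload")` is `False`.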


Surely you have to put a lot of trust in Apple as soon as you buy their product?


I mean you do need to trust them a bit more when they have the capability to remotely install software.


You mean like on almost every laptop or phone sold today with their OTA update support? Whether Apple installs the updates before or after you first start the device doesn't change the risk.

Maybe you don't install any updates over the entire device lifetime? Okay but what if the firmware runs a keylogger? You always need to put a ton of trust into the company.


Well, maybe they had that capability all along, hmm. I guess you trusted that they didn't?


Well no, I use a phone that I could install an OS on myself.


If your phone vendor can install firmware blobs, you have basically the same problem. But I agree that it’s a problem I wish we didn’t have.


Apple already has the capability to remotely install software on your phone.


[flagged]


You're starting to read things into my comment that I very much didn't put in, but for the sake of argument sure, let me strengthen my statement a bit.

People should have agency over which software gets installed on the devices they own. This starts at the moment they buy the device.

So a device installing software while it is in the custody of the shop, sure that's fine. However merely having the capability to install software without physical access or any interaction from the owner of the device is already a threat.


Perhaps I over-interpreted. If I came on too strong, my apologies.

This seems to me like a solution driven by Apple switching to ocean based shipping. I’d be really surprised if a mechanism like this would work after activation.


Literally everything already, they're authoring and flashing the firmware.


I disagree. The same could be said for client-side scanning of child pornography. This is about the potential capacity for abuse where there was no capability previously.


Apple owns the firmware.

They could enable 24/7 recording, lie to the OS about what is happening and trickle the data back server-side within harmless iCloud requests. It would be pretty much undetectable.

And the point of CSAM client-side is that it is more secure and private than them doing it server-side.


The difference is that there is no opting out of client-side scanning. I could choose not to use Apple's cloud services for photos and therefore not participate in server-side scanning. Essentially, client-side scanning assumes everybody is guilty until proven innocent. It's a slippery slope for scanning local content: today the admirable goal of eliminating child pornography, maybe tomorrow the FBI most wanted list, then pretty much anybody in the criminal system. Not to mention the political implications.

You see this as simply trusting Apple because they have complete control over the OS/hardware. I see this as a culture shift that goes beyond trusting any particular company and fundamentally erodes privacy.


Apple's plan was to scan only those photos going into iCloud on the client, so that they can send them to the server e2e encrypted. That is strictly better for privacy than scanning them in the cloud, as other providers do.

Of course, they could lie and scan everything, but as others pointed out, they could do that already anyway.


They _do_ do that anyway for iCloud. The E2EE support was inherently better, as it significantly reduced the decrypted content exposure to Apple. The uproar over the feature and Apple’s relenting on the implementation resulted in no E2EE as the default. Allegedly advanced data protection does do E2EE for iCloud Photos though. I can understand the good intentions playing out poorly and being ripe for abuse.


Apple confirmed they do scan iCloud for unwanted content, and it's pretty obvious doing that is a huge privacy risk; it's very debatable whether that actual risk is worse than the theoretical risk of scope creep in on-device scanning.


> CSAM client-side is that it is more secure and private than them doing it server-side.

What could be less secure and less private than scanning user files using the user's own computing resources? And server-side scanning is impossible with E2EE.


While generally true, battery, storage, and cellular data are not unlimited, so 24/7 surveillance couldn't go undetected for very long — at least not yet.


Wait, what? Client-side scanning is offensive and objectionable, but it has much less potential for abuse. None of us have any idea if and how cloud scanning responds to ad hoc requests from anyone.

As much as I oppose client-side scanning, having a single global hash database makes it much harder for a company to do special favors without anyone knowing.
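The "single global database" check amounts to hash-set membership. A toy illustration: real systems (e.g. Apple's proposed NeuralHash) use perceptual hashes and blinded matching so neither side learns the other's set directly; plain SHA-256 here is only a stand-in for the shape of the check.

```python
import hashlib

# Toy illustration of matching against one global hash database, as
# discussed above. The database contents and the use of SHA-256 are
# assumptions for demonstration; real matching is perceptual + blinded.

KNOWN_FLAGGED_HASHES = {
    hashlib.sha256(b"known-flagged-image-bytes").hexdigest(),
}


def matches_database(image_bytes: bytes) -> bool:
    """True iff the image's hash is in the shared global database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_FLAGGED_HASHES
```

The point made above is that because every client checks against the same published set, quietly adding a "special favor" entry is more visible than an ad hoc server-side query nobody can audit.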


Well, no one has any idea of how ad hoc requests are made to turn on your phone when it's off. Logically, if someone's intentionally turned off their phone they desire it to be off.


You're going to be really unhappy when you hear about the Intel Management Engine (IME) and AMD's Platform Security Processor (PSP).


Or, on the topic of cell phones: baseband firmware.


It's really surprising how people end up making absolutist claims about privacy while demonstrating very little understanding of the threat model posed by modern smartphones.


> It's really surprising how people end up making absolutist claims about privacy while demonstrating very little understanding of the threat model posed by modern smartphones.

Would you mind expanding upon that?


Consider it this way: your smartphone is an advanced collection of multiple, highly sensitive microphones, cameras, location sensors, accelerometers, etc.

The fact that you have a phone in and of itself is traceable, as cell phone towers maintain records of who, when, and where. At this point, usage of such records is so commonplace by LEO that not having your phone with you when you do something is considered suspicious in and of itself.


Applies to all phones, dumb or not: law enforcement can send you an SMS that is never shown to the user. It answers with data that leads to your precise location. Has been there since forever.

Advocating against Apple force-updating a phone that you haven't bought yet seems... silly in comparison? Especially as only with an up-to-date OS can you be sort of safe against attackers, be they state-sponsored or regular ones.


If it's actually SMS it wouldn't work in the USA on a phone that doesn't support SMS over LTE (3G shutdown). Such a phone is quite usable with data only.


Does that apply when your phone is off?


"potential for abuse"

As opposed to opening the box, plugging in a cable, doing whatever you want, and shrink-wrapping it again? (which is easy)

"Oh but it's wireless" It's through the wireless charging mechanism. That makes the whole difference


My argument is a bit different than others for why Apple would never do this.

The iPhone is one of the most popular devices on the planet, and Apple is in a very high profile position because of it. The last time they tried doing anything remotely close to sneaky was the slowing down of phones with older batteries which eventually resulted in a class action lawsuit.

Sure, Apple could install backdoor software, crapware, or whatever else in an update, but their exposure to a class action lawsuit would be insane. Whatever they did would be found out pretty quickly by security researchers, and... you have a class action on your hands that will probably cost the org $100M-$500M.

Apple isn't in the business of being nefarious; they're in the business of selling you an iPhone. It's in their best interest to sell you the most secure and best phone possible, because all they want is to sell you an iPhone.


Ah, but you forget the license agreement or definitely the privacy policy that changes with every OS upgrade. By doing forced OS upgrades, Apple is essentially forcing new ToS and privacy policy updates on you too.


They’re doing this BEFORE you buy the device.

They’re not forcing anything on you.


They announce the OS version with every new hardware release. So it is indeed a criterion that consumers use when deciding to purchase. And not everybody automatically upgrades to a new OS version.


> It's in their best interest to sell you the most secure and best phone possible, because all they want is to sell you an iPhone.

No. Apple's best interest is to make money. Apple is a business; they care about neither you nor me.

The iPhone is a side-product, a cash cow that can be milked over and over.


Just imagine what they can do when the power is on!


I mean, unless you author the software for your own devices, we all put trust in all of the companies we purchase from.


Apple also writes iOS, so you already have to trust them to buy and use their phones. I don't really see the difference here.


It's not Apple you have to worry about. It's supply chain attackers whose lives just got easier.


Seems no one's worried by the possibility of fast, low-cost, mass device compromise then.


Buying a computer and using it for daily activities is inherently trusting whoever made that computer.



