Why can’t Apple just add “itms-services” as a forbidden URL scheme at the sandbox level? I don’t see why the App Sandbox can’t block (and isn’t already blocking) certain protocols.
Heck, what if I have a malicious web frame inside my app that tries to invoke “itms-services”, similar to the Polyfill.io debacle?
I’m not sure what the big deal with the URL handler is, but I can’t imagine it causing remote code execution or other actually malicious behaviour.
At this point Apple seems to be using simple substring matches, so if there is any exploit vector here, malware authors can circumvent the check using "itms" + "-services" or something more sophisticated like ROT13.
Which is also why the App Store review process claiming this is a problem doesn’t seem to make any sense.
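To make the evasion concrete, here's a minimal sketch; the manifest URL is made up, and any string-splitting trick works just as well:

```swift
import UIKit

// Sketch: assemble the scheme at runtime so the full string never appears
// contiguously in the binary; a naive substring scan over the executable
// finds nothing. The manifest URL here is hypothetical.
let scheme = ["itms", "-", "services"].joined()
let manifest = "https://example.com/app.plist"

if let url = URL(string: "\(scheme)://?action=download-manifest&url=\(manifest)") {
    UIApplication.shared.open(url)  // the OS only sees the scheme at runtime
}
```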
Imagine your app embeds a WebView at myapp.com/terms. It’s your Terms of Service; you show it to everyone when they sign up, and everyone clicks OK.
After it’s on the App Store, you modify the page behind that WebView to include `itms-services` for some reason. You’ve just completely bypassed App Store review and gotten that URL handler into your app. The sandbox should stop you - but clearly the review process doesn’t consider this possibility and enforces the ban only before you publish. Why?
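A sketch of the loader side of that scenario, assuming WKWebView and the hypothetical myapp.com/terms page:

```swift
import WebKit

// Sketch: the binary Apple reviews contains only this loader. Everything it
// renders is fetched from the server at runtime, after review has finished.
let webView = WKWebView(frame: .zero)
webView.load(URLRequest(url: URL(string: "https://myapp.com/terms")!))

// Post-approval, the server can start returning
//   <a href="itms-services://?action=download-manifest&url=...">
// and review never saw it. Only a runtime policy - the sandbox, or this
// app's own WKNavigationDelegate decidePolicyFor callback - could catch
// the navigation when it actually happens.
```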
My point is that scanning for this handler, if Apple doesn’t want to allow it, seems misplaced if they want the ban to actually be effective.
But it has been known for over a decade now that Apple searches binaries for strings. They've never done runtime execution checks, which would pick up an app actually opening such a URL.
My baseline would be similar to what you get from modern compilers and automated tests. Something like:
```
Lib/urllib/parse.py contains disallowed string "itms-services" at line 62 column 25
Reason: Apps may not install or launch executable code, such as through the itms-services URI scheme.
```
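A check that emits a diagnostic in that shape is only a few lines of code; a minimal sketch (the rule list and the scanned path are illustrative, not Apple's actual tooling):

```swift
import Foundation

// Sketch: scan a file for disallowed strings and report compiler-style
// file/line/column diagnostics. Everything here is illustrative.
let disallowed = ["itms-services"]

func scan(file path: String) throws {
    let text = try String(contentsOfFile: path, encoding: .utf8)
    for (i, line) in text.components(separatedBy: "\n").enumerated() {
        for needle in disallowed {
            if let range = line.range(of: needle) {
                let col = line.distance(from: line.startIndex, to: range.lowerBound) + 1
                print("\(path) contains disallowed string \"\(needle)\" at line \(i + 1) column \(col)")
            }
        }
    }
}

try scan(file: "Lib/urllib/parse.py")
```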
To get credit for "giving you a clear direction", I'd want them to make a fix available and suggest it. In general that could be "if this is a false positive, click here to request that a human reviewer grant an exemption", but ideally they'd monitor for this kind of issue in the first place (a sudden increase in identical rejections) and fix the broken check.
Instead, Apple seem to omit the information on the first line that they likely already get from their internal tool, and not only don't suggest a fix but make the proper fix so unavailable that it's easier to get the language itself changed than to get a check in their review framework fixed.
> But it has been known for over a decade now that Apple searches binaries for strings. They've never done runtime execution checks, which would pick up an app actually opening such a URL.
It's true that someone with folk knowledge of the way Apple does checks, gleaned from other frustrated users, could likely infer the way in which Apple's test is broken and so eventually deduce the first line from the second. That's not Apple giving a clear direction - that's developers managing to work around an inscrutable system.
That would require that:
1. Apple received a copy of your source code to do source analysis, and
2. Apple actually supported running Python code as an option for writing iOS apps and wrote source code analysis tools for Python for reporting compliance issues.
I'm not suggesting any language-specific features like telling you which function it's in - just the filename, position, and matched string. This is information already available to Apple, and can be useful even for compiled/object code.
From the linked Github issue:
> After lots of 'we can provide you with no further information' I finally submitted an appeal for the rejection which at last resulted in Apple telling me that parse.py and its .pyc were the offending files
The original post discussed whether obfuscation could be an acceptable workaround, but it came up that Apple really doesn’t like obfuscation techniques. The workaround for now is a compiler flag that excludes the problematic code from iOS builds.
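The fix in the linked issue is on the Python build side, but the idea translates directly; the Swift equivalent would be conditional compilation, so the string never reaches the iOS binary at all:

```swift
// Sketch: compile the offending code out of iOS builds entirely, so the
// scanned binary never contains the string. The constant name is made up.
#if !os(iOS)
let installScheme = "itms-services"  // kept for platforms where it's legitimate
#endif
```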
I’m still asking, though: if Apple doesn’t like apps which use this protocol, why isn’t the sandbox intervening? Surely an approved app could have a malicious web view.
Sandboxed apps are allowed to use itms-services:// links, it's just not allowed in the App Store - iOS enterprise apps using in-house deployments can use it for installs and updates, and sandboxed Mac apps deployed outside of the App Store can use it as well.
However, App Review Guidelines forbid App Store apps from installing other apps, so that scheme gets scanned during review.
That makes some sense, but then I have a new question: why doesn’t Apple have different certificate schemes for in-house versus App Store (if they don’t already)? In that case the iOS sandbox should be smart enough to delineate allowed functionality, and probably already does, based on whether an app comes via the App Store or via a private deployment.
> Why doesn’t Apple have different certificate schemes for in-house versus App Store (if they don’t already)?
Enterprise distribution allows you to deploy applications to corporate-managed devices with no App Store review whatsoever. The restrictions are mostly in the business agreement: who you can provide enterprise distribution to (e.g. employees and contractors) and what your apps can do. The justification is that the enterprise has a relationship with the employee/contractor and is ultimately on the hook for abuses/harms it does via MDM on employee devices.
This is the situation that led to both Facebook and Google having their enterprise accounts banned temporarily a few years ago, as they were each using them as part of a market analysis program - they offered to install VPN software onto consumer devices that monitored web and third-party app usage. Such monitoring is not allowed even for employees per the enterprise developer account agreement.
This can be handled by granting the privilege to open that scheme to enterprise apps and withholding it from regular App Store apps. Relying on string scanning is simply not secure.
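Something like the following hypothetical OS-side gate - none of this is real API; it just illustrates keying the decision off signing provenance rather than strings in the binary:

```swift
import Foundation

// Hypothetical sketch - not a real Apple API. The OS already knows how an
// app was signed/provisioned, so it could gate the scheme at open time
// instead of grepping binaries during review.
enum Provenance { case appStore, enterprise, development }

func mayOpen(_ url: URL, from provenance: Provenance) -> Bool {
    guard url.scheme == "itms-services" else { return true }
    switch provenance {
    case .enterprise, .development:
        return true   // in-house apps may self-install and update
    case .appStore:
        return false  // guideline enforced at runtime, no string scan needed
    }
}
```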
More seriously, I'm sure they also withhold the privilege to open that URI scheme. This is likely part of some ill-thought-out defense-in-depth approach, the same way they search for the names of private symbols in the executable even when the linker will outright refuse to give you those. I absolutely detest this pervasiveness of useless layers of security that add almost nothing. But since almost nothing is not nothing, no one can remove any of them. Like cockroach papers, I'm going to call them "cockroach security". Practically everything is infested with them these days.
The scheme is likely used from within Apple frameworks for various purposes, so it's possible for an app process to open such a URL without the app itself even knowing about it.