DEV Community

Indigotime
Proposal on Play Store security measures (alternative to Google's mandatory "developer verification")

What Google claims to be doing for the sake of security is, in fact, not really related to security. Let me explain why.

Before diving into details, we should first clarify whom Google's measures are supposed to protect, and from whom, according to Google's own statements. Summing up all the official statements and publications by Google on this topic: security is provided for end users of applications against malicious actors who create and publish harmful or phishing applications. For a moment, let's pretend this is indeed true. What follows from this?

The primary goal of most malicious applications is to collect data that may be of interest to various attackers. These interests may be commercial or economic (fraud, extortion, blackmail, industrial espionage) or political (assisting censorship, identifying unfriendly individuals for subsequent shaming or reprisal). User data of interest in these contexts includes passwords and keys, private correspondence, geolocation information, browser history, and application usage patterns.

Now let's ask ourselves: what would we do if we were Google and genuinely wanted to protect users according to these statements? I would answer this question as follows:

  1. Introduce mandatory declaration of the public keys and certificates that are automatically trusted by the application. Without this, users may have a false sense of security, believing that as long as their system store contains only trusted CA certificates, no application can use embedded MITM (man-in-the-middle) certificates or keys to intercept their data and route it through third-party servers.

  2. Introduce mandatory declaration of the hardcoded addresses of web services used by the application. Currently, when we examine Android permissions, we only see an abstract, non-specific permission for internet access. We cannot selectively grant permission for only certain addresses used by the application. In some cases this might be justified, but at the very least Google should require developers to provide a list of the hardcoded addresses an application uses before publishing it to the Google Play Store, so they can be checked through Google Safe Browsing.
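
As a point of reference, Android's existing Network Security Config already expresses trust pins as Base64-encoded SHA-256 digests of a certificate's SubjectPublicKeyInfo. A minimal sketch of how a declared-pin check works, in Python for brevity; the declared-pin set and the key bytes here are invented placeholders, not real data:

```python
import base64
import hashlib

# Hypothetical declared pins, as a publisher might list them for review.
# Each entry is a Base64-encoded SHA-256 digest of a key's
# SubjectPublicKeyInfo, the same scheme Android's Network Security Config
# uses for <pin digest="SHA-256"> entries.
DECLARED_PINS = {
    "pin-sha256:"
    + base64.b64encode(hashlib.sha256(b"example-spki-der").digest()).decode(),
}

def spki_pin(spki_der: bytes) -> str:
    """Compute the pin string for a DER-encoded SubjectPublicKeyInfo."""
    digest = hashlib.sha256(spki_der).digest()
    return "pin-sha256:" + base64.b64encode(digest).decode()

def is_declared(spki_der: bytes) -> bool:
    """True if the key the app trusts was declared up front."""
    return spki_pin(spki_der) in DECLARED_PINS

# A key that was declared passes; an undeclared (e.g. embedded MITM) key fails.
print(is_declared(b"example-spki-der"))   # True
print(is_declared(b"attacker-mitm-key"))  # False
```

The point is that the check is cheap and mechanical: once the pins are declared, anyone (the store, a scanner, or the user) can verify that a key baked into the app was disclosed in advance.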

As we can see, no such measures are being taken at the moment. Maybe I have misunderstood or misinterpreted Google's statements, and in reality they are more focused on protecting the developers of these applications? This is unlikely, however, because the verification process itself would require developers to hand over exactly the data most sought after by those same fraudsters and other malicious actors.

Perhaps Google is simply trying to enforce the terms of use of the services that applications rely on? That is likely not the case either: requiring developers to publicly declare the hardcoded addresses of websites and services would address this issue as well, at least for applications on the Google Play Store. If Google were truly aiming to remove applications that violate service terms, including those prohibiting alternative client applications, they would undoubtedly have done precisely that. However, as we can see, no such action is being taken.

Below, I will give an illustrative example of why Google's so-called "security measures" would not improve the situation in any way.

There is an app called Telega that positions itself as an alternative Telegram client capable of functioning even under conditions where Telegram is blocked in Russia.

According to the source, this is achieved by routing traffic through a proxy server that speaks the MTProto protocol. The address of this proxy server is hardcoded into the application itself. The public keys used to connect to this proxy server are also hardcoded, and they differ from those used by the official Telegram client.

This means that whoever controls this proxy server can read the chats of the app's users. Note that I am not talking about some obscure third-party application hosted on a fly-by-night website, spread by scammers through sophisticated and hard-to-trace schemes. I am talking about an app that is still available on the Google Play Store.

Even though I don't know how reliable the source's information is, the mere existence of such accusations or suspicions shows that users have no way to verify them, and that is a problem in itself.

If the information from the source is false, it clearly demonstrates how easy it is to accuse a legitimate application of malicious actions.

But if the information from the source is true, it shows the complete helplessness of the Play Store against such threats. Imagine that these so-called "Google security measures" had already been introduced. What would have prevented this application from being published on the Google Play Store? Were the developers lacking the funds to pay a fee? They could have managed it. Would there have been any issue with providing government identification? None existed then, as the developer account already stated the official name of a legal entity. Who can guarantee that this is an isolated case, and that similar situations do not already exist among popular client applications for social networks, messaging services, and other platforms on the Google Play Store? The answer is: nobody, and even after these so-called "security measures" are implemented, the answer will not change.

Now let's try to imagine what would have changed if Google had indeed required developers to publicly declare the hardcoded web addresses used by their applications, and to explicitly declare the trusted public keys and certificates they use. Or even better: in the declaration of public keys and certificates, it would be mandatory to specify which web addresses they correspond to (with wildcard entries allowed).
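
To make the idea concrete, such a declaration could look roughly like the sketch below. To be clear, every element and attribute name here is hypothetical; nothing like this exists in the Android manifest today, and the host names are invented placeholders:

```xml
<!-- Hypothetical manifest fragment; these elements do not exist in Android. -->
<application>
    <declared-endpoints>
        <!-- Hardcoded web addresses the app may contact; wildcards allowed. -->
        <endpoint host="api.example-messenger.com" />
        <endpoint host="*.cdn.example-messenger.com" />
        <!-- A trusted key, pinned to the specific hosts it is valid for:
             pin = Base64(SHA-256(SubjectPublicKeyInfo)) -->
        <trusted-key digest="SHA-256"
                     pin="(base64 SHA-256 digest of the key)"
                     hosts="api.example-messenger.com" />
    </declared-endpoints>
</application>
```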

Firstly, attempting to publish this app would have raised many suspicions and questions, and it would likely not have been published on the Google Play Store at all.

Secondly, if this data were declared in the application manifest, antivirus programs, security scanners, and even Google Play Protect could quickly detect such patterns, or at least flag them as suspicious, regardless of where the app was downloaded from.
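
For instance, a scanner could compare the hosts found in an app's extracted strings against the declared list and flag anything undeclared. A toy sketch in Python; the declared host, the extracted strings, and the proxy address are all invented for illustration:

```python
import re

# Hypothetical manifest declaration for an app under review.
DECLARED_HOSTS = {"api.example-messenger.com"}

# Strings a scanner might have extracted from the app package (illustrative).
EXTRACTED_STRINGS = [
    "https://api.example-messenger.com/v1/messages",
    "https://proxy.unknown-operator.example/mtproto",
    "version 2.4.1",
]

URL_RE = re.compile(r"https?://([^/\s]+)")

def undeclared_hosts(strings, declared):
    """Hosts found in the binary but missing from the declaration."""
    found = {m.group(1) for s in strings for m in URL_RE.finditer(s)}
    return sorted(found - declared)

print(undeclared_hosts(EXTRACTED_STRINGS, DECLARED_HOSTS))
# ['proxy.unknown-operator.example']
```

An undeclared hardcoded proxy, like the one described in the Telega example above, would surface immediately in such a diff, even on a fully automated pass.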

Finally, in future versions of Android this approach could enable more flexible permission handling: first requesting access to the declared list of addresses at install time, and only afterward, at runtime, prompting the user whenever the application attempts to access a non-declared resource.
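
That permission flow might be sketched like this (Python pseudocode of the idea; the function names, the declaration format, and the host names are my own invention, not an existing Android API):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Hypothetical list of hosts declared in the app's manifest; wildcards allowed.
DECLARED_HOSTS = ["api.example-messenger.com", "*.cdn.example-messenger.com"]

def host_is_declared(url: str) -> bool:
    """Check the request's host against the declared list."""
    host = urlparse(url).hostname or ""
    return any(fnmatch(host, pattern) for pattern in DECLARED_HOSTS)

def request_allowed(url: str, user_grants_undeclared: bool) -> bool:
    """Declared hosts are covered by the install-time grant;
    anything else would trigger a separate runtime prompt."""
    if host_is_declared(url):
        return True
    # In a real system, this branch would show a runtime permission dialog.
    return user_grants_undeclared

print(request_allowed("https://api.example-messenger.com/send", False))     # True
print(request_allowed("https://eu1.cdn.example-messenger.com/img", False))  # True
print(request_allowed("https://tracker.evil.example/collect", False))       # False
```

The design choice worth noting is that the declared list covers the app's legitimate traffic silently, so the user is only interrupted by the genuinely suspicious case: a destination the developer never disclosed.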

Unlike the "security measures" Google is currently introducing, this would come very close to a complete solution for user control over the data streams sent by mobile applications.

Based on the public discussions I have observed on this topic, many experts, users, and other stakeholders who oppose Google's so-called "security measures" simply call on Google not to implement them, or urge that pressure be applied to Google to abandon them.

However, I believe that for such pressure to succeed, it is not enough to simply oppose these measures; there must also be an alternative proposal offering a different, more effective solution to the same problem Google claims to be addressing. Moreover, this alternative must be compelling enough that Google's measures become indefensible by comparison. The aim is not to immediately convince Google to adopt the proposal, but to sow doubt about their arguments. If a viable solution exists that genuinely protects user safety, yet Google chooses another course of action that runs against the interests of developers and users, the "we do this for your safety" argument becomes invalid. Even to those not deeply familiar with the issue, it begins to look like "we do this simply because we can, and we invoke security to deceive you".

In fact, my "alternative proposal" has already been outlined above: add the technical capability to declare hardcoded public keys, certificates, and web addresses. If this cannot currently be implemented directly in the application manifest, it should at least be imposed as a publishing requirement for the Google Play Store. If it is feasible, however, it would be advisable to introduce corresponding manifest fields in future versions of Android, so that this security mechanism works regardless of where the app was downloaded from.

Even in its current form, this proposal undermines the argument that "malicious and fraudulent applications somehow only arrive from external sources, while everything on the Google Play Store is safe". If Google has reasons to oppose implementing such a mechanism in its own Play Store, yet continues to insist on "security measures" that require government identification, that would already look odd and somewhat dishonest.
