Secure Software Systems Group

(Federated) Machine Learning for Risk Detection on Mobile Platforms

23.09.2020

The importance of security and the protection of private information on mobile devices has increased in recent years due to the widespread use of these devices. This has led to the intensive use of mobile platforms for security-critical tasks, such as online banking, mobile payments, healthcare applications, and business-related activities. At the same time, mobile platforms have become attractive attack targets, since they store and process a significant amount of security-critical information, such as authentication credentials, payment information, and access tokens.

While the protection of security-critical information in mobile apps is paramount to the security of mobile services, the widely used BYOD (Bring Your Own Device) paradigm makes this protection challenging. With BYOD, users may install arbitrary apps on their mobile devices, including malicious apps that can interfere with security-sensitive logic. At the same time, mobile service providers are limited in the defense mechanisms they can deploy: for interoperability reasons, no additional requirements on the underlying hardware can be assumed, and OS-level protections are not applicable either, since they would void the warranty of mobile platform vendors. Hence, defenders need to resort to lightweight application-level defense strategies, such as application hardening, app-level monitoring, and intrusion detection. These strategies, however, typically rely on data collection and fingerprinting of platform features, which is associated with privacy risks for users.

Project Goal. The goal of this project is to build a lightweight framework for risk detection on mobile platforms that applies machine learning (ML) and artificial intelligence (AI) methods while remaining privacy-friendly towards end users. Examples of risks the project aims to address are co-installed malicious apps, jailbreaks, code injection, clickjacking/UI-redressing attacks, and device theft, to name a few. Privacy friendliness is achieved through the concept of Federated Machine Learning (FML), which allows predictive ML models to be trained directly on the devices, thereby eliminating the need for centralized collection and processing of user data. The central aggregation service in FML is only responsible for collecting the locally trained models and integrating them into a global model, but not for collecting end-user data. Once aggregated, the global model can be re-distributed among the clients, thus improving the precision of the local models through knowledge obtained on other platforms. This approach enables devices to learn models collaboratively while keeping all training data local.
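To make the aggregation cycle described above concrete, the following sketch illustrates one possible round of federated averaging (FedAvg) in Python with NumPy. It is only an illustration of the general FML idea, not the project's implementation: the logistic-regression model, the synthetic client data, and the function names (local_train, aggregate) are hypothetical placeholders for the actual on-device risk-detection models.

    import numpy as np

    # Hypothetical sketch of one federated averaging (FedAvg) round:
    # each client trains locally on its own data; only the resulting
    # model weights are sent to the aggregator, never the raw data.

    def local_train(weights, X, y, lr=0.1, epochs=5):
        """Train a simple logistic-regression model on local (on-device) data."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
            grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
            w -= lr * grad
        return w

    def aggregate(client_weights, client_sizes):
        """Server-side step: weighted average of the locally trained models."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Simulated federated rounds with synthetic placeholder data
    rng = np.random.default_rng(0)
    global_model = np.zeros(4)                     # 4 hypothetical risk features
    clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]

    for _ in range(10):
        local_models = [local_train(global_model, X, y) for X, y in clients]
        global_model = aggregate(local_models, [len(y) for _, y in clients])

    print("Aggregated global model:", global_model)

In an actual deployment, local_train would run on the mobile devices themselves, and only the resulting weight vectors would be transmitted to the aggregation service before the updated global model is re-distributed.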

 

Overall, the FML-based risk detection method provides the following advantages:

    1. Secure: Since user data is never collected by the service provider, server-side security breaches cannot expose it.
    2. Privacy-preserving: The solution is GDPR (General Data Protection Regulation) friendly, since the data collected on the device is never sent to the service provider.
    3. Precise: Privacy-preserving treatment of data allows higher volumes of data to be used for training, which results in larger effective datasets and more precise detection models.
    4. Adaptive: The model continues to evolve through repeated aggregation and re-distribution cycles.

 

The project is funded by KOBIL Security Systems GmbH and is executed in cooperation with KOBIL and TU Darmstadt.

People involved: Prof. A. Dmitrienko, Christoph Sendner, Filip Roos, Lukas Nothhelfer
