
Mobile App Penetration Testing: A Complete Guide for iOS and Android Security


Pentestas Team

Security Analyst

4/20/2026

Mobile Security · iOS & Android · April 2026

Your mobile app runs on devices you do not control, connects to APIs over networks you cannot trust, and stores data on file systems that users can inspect. Mobile penetration testing goes beyond web testing to examine the unique attack surface of iOS and Android applications — from binary analysis to local data storage to backend API security.

💫 Key Takeaways

  • Mobile apps have a fundamentally different attack surface than web apps: the binary runs on the attacker’s device, local storage is inspectable, and network traffic can be intercepted even with TLS
  • The OWASP Mobile Top 10 identifies the most critical mobile security risks, including insecure data storage, insecure communication, and insufficient binary protections
  • 60% of mobile apps we test store sensitive data (tokens, credentials, PII) in insecure local storage accessible to any app on a rooted/jailbroken device
  • Testing must cover both the mobile client and its backend API — many mobile app vulnerabilities are actually API authorization flaws that the app UI masks
  • iOS and Android have different security models requiring platform-specific testing: Keychain vs. SharedPreferences, App Transport Security vs. Network Security Config, different binary formats
  • Professional mobile pentests cost $8,000–$25,000 per platform and take 2–3 weeks including API testing
Two smartphones with exposed internal security architectures being scanned by penetration testing probes

A banking application we tested in early 2026 had been through three web application penetration tests and two API assessments over the previous four years. Every report came back clean with only low-severity findings. The development team was confident in their security posture. When they engaged us for their first dedicated mobile application penetration test, we found that the iOS app stored the user’s full authentication token in NSUserDefaults — a plaintext, unencrypted storage mechanism that persists across app sessions and is easily accessible on jailbroken devices. The token had no expiration. An attacker with temporary physical access to an unlocked device could extract the token and maintain persistent access to the user’s bank account indefinitely.

This vulnerability exists entirely within the mobile client. The backend API was secure. The web application was secure. But the mobile app’s implementation of local data storage created a risk that no amount of server-side testing could discover. This is the fundamental reason mobile applications need dedicated penetration testing: the mobile client is an attack surface that exists independently of the server infrastructure.

Mobile applications operate in an environment that web applications do not face. The application binary is distributed to devices controlled by end users (and attackers). Network connections traverse untrusted Wi-Fi networks. The device file system is accessible on rooted or jailbroken devices. Sensitive data cached locally can persist even after the user logs out. And the backend API trusts that the mobile client is behaving as expected — but an attacker can replace the client with direct API calls that bypass all client-side controls.

This guide covers everything you need to know about mobile application penetration testing: the OWASP Mobile Top 10, platform-specific differences between iOS and Android, what testers actually do during an engagement, and how to prepare your application for testing.

📱

The Mobile Threat Model

Why Mobile Applications Are Uniquely Vulnerable

The security model for mobile applications is fundamentally different from web applications, and this difference creates attack vectors that do not exist in a browser-based context:

The binary is in the attacker’s hands. When a user installs your mobile app, they download the compiled binary to a device they fully control. An attacker can decompile, reverse-engineer, and modify this binary. They can extract hardcoded secrets, API keys, and encryption keys embedded in the code. They can patch the binary to disable security checks, certificate pinning, or jailbreak detection. Web applications run on your servers — mobile applications run on the attacker’s hardware.
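The "extract hardcoded secrets" step is often nothing more exotic than pulling printable byte runs out of the compiled binary. A minimal Python stand-in for the Unix `strings` workflow — the blob and the embedded key are invented for illustration:

```python
import re

def extract_strings(data: bytes, min_len: int = 8) -> list[str]:
    """Mimic the Unix `strings` tool: pull runs of printable ASCII
    out of a compiled binary. Hardcoded secrets survive compilation
    as exactly these kinds of runs."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# A toy "binary": machine-code-like bytes with an embedded API key.
blob = b"\x00\x8f\x1c" + b"api_key=sk_live_51HxEXAMPLEonly" + b"\xff\x02"
found = extract_strings(blob)
# found contains the full embedded key string
```

In practice testers run `strings` or a decompiler against the real binary; the point is that compilation hides nothing from this kind of search.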

Local data storage is an attack surface. Mobile apps store data locally for offline access, session persistence, and performance. This includes authentication tokens, cached API responses, user preferences, and sometimes sensitive business data. On rooted (Android) or jailbroken (iOS) devices, this local storage is fully accessible. Even on non-rooted devices, data stored in insecure locations (SharedPreferences on Android, NSUserDefaults on iOS) can be extracted through device backups.
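On a rooted Android device, inspecting this storage amounts to pulling an XML file from `shared_prefs/` and reading it. A small sketch of the audit a tester performs — the file contents and key names are illustrative, not from a real app:

```python
import re
import xml.etree.ElementTree as ET

# A SharedPreferences file as pulled from the app's shared_prefs/
# directory on a rooted device (contents are illustrative).
PREFS_XML = """<?xml version="1.0" encoding="utf-8"?>
<map>
    <string name="theme">dark</string>
    <string name="auth_token">eyJhbGciOiJIUzI1NiJ9.fake.sig</string>
    <string name="refresh_token">rt_0123456789abcdef</string>
</map>"""

SENSITIVE_KEY = re.compile(r"token|secret|password|credential|session", re.I)

def audit_prefs(xml_text: str) -> list[str]:
    """Return preference keys that suggest stored secrets --
    anything listed here sits on disk in plaintext."""
    root = ET.fromstring(xml_text)
    return [el.get("name") for el in root
            if SENSITIVE_KEY.search(el.get("name", ""))]

flagged = audit_prefs(PREFS_XML)
```

Anything flagged this way belongs in the Android Keystore (or the iOS Keychain for the NSUserDefaults equivalent), not in plaintext preferences.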

Network communication is exposed. While TLS encrypts traffic in transit, mobile apps are susceptible to man-in-the-middle attacks if certificate validation is improperly implemented. Attackers on the same network can intercept traffic using proxy tools like Burp Suite if the app does not implement certificate pinning, or if the pinning implementation is bypassable. Many mobile apps disable certificate validation in debug builds and accidentally ship this configuration to production.

Client-side controls are bypassable. Business logic enforced only in the mobile client (not on the server) can be trivially bypassed. We commonly find: payment amount validation only on the client, feature restrictions enforced by hiding UI elements rather than server-side checks, and biometric authentication that can be bypassed by hooking into the authentication API at runtime.

Third-party SDKs expand the attack surface. Mobile apps typically include numerous third-party SDKs for analytics, crash reporting, advertising, and payment processing. Each SDK has access to the same permissions and data as the host application. A vulnerable or malicious SDK can exfiltrate data, track users, or introduce vulnerabilities. We regularly find that analytics SDKs log sensitive data to remote servers, including URLs containing authentication tokens.

Security vulnerability icons arranged in mobile device shape representing OWASP Mobile Top 10
📜

OWASP Mobile Top 10

The OWASP Mobile Top 10: What We Actually Find

The OWASP Mobile Top 10 provides a framework for the most critical mobile security risks. Here is each category with the real-world findings we encounter most frequently in our engagements:

M1: Improper Credential Usage. Hardcoded API keys, client secrets, and encryption keys in the application binary. We find these in approximately 35% of mobile apps we test. Developers embed secrets for convenience during development and forget to externalize them before release. The strings utility and standard decompilers can extract these in seconds.
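Beyond raw string extraction, many credential formats have recognizable signatures that can be matched directly in decompiled source. A minimal sketch of this pattern matching — the three patterns shown are a tiny illustrative subset; real scanners such as trufflehog or gitleaks ship hundreds:

```python
import re

# Signature patterns for common credential formats (illustrative subset).
KEY_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Google API key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "Generic bearer": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
}

def scan_source(text: str) -> list[str]:
    """Scan decompiled source (e.g. jadx output) for known
    credential formats and name each pattern that matched."""
    return [name for name, pat in KEY_PATTERNS.items() if pat.search(text)]

# AWS's own documentation example key, embedded the way we find them:
decompiled = 'String key = "AKIAIOSFODNN7EXAMPLE";'
hits = scan_source(decompiled)
```

Running a matcher like this across a jadx output tree takes seconds, which is why embedded secrets should be treated as already compromised.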

M2: Inadequate Supply Chain Security. Third-party SDKs with known vulnerabilities, outdated libraries, and SDKs that collect and transmit data without the app developer’s knowledge. We regularly find analytics SDKs sending device identifiers and user behavior data to third-party servers, creating both security and privacy risks.

M3: Insecure Authentication/Authorization. Authentication logic implemented client-side rather than server-side, biometric auth bypass, weak session management, and API endpoints that trust client-provided user identity. The most common finding: the mobile app restricts certain actions based on the user role stored in the JWT payload, but the backend API does not independently verify the role.

M4: Insufficient Input/Output Validation. SQL injection, XSS through WebViews, path traversal in file handling, and insecure deep link handling. Deep links are a particularly mobile-specific concern: malicious deep links can trigger actions within the app, and if input validation is missing, they can be exploited for phishing or unauthorized actions.

M5: Insecure Communication. Missing certificate pinning, accepting self-signed certificates, sending sensitive data over HTTP, and TLS configuration issues. About 40% of the mobile apps we test either lack certificate pinning entirely or implement it in a way that can be bypassed with standard tools like Frida or Objection.

M6: Inadequate Privacy Controls. PII logged to system logs, excessive data collection by third-party SDKs, and personal data stored in unencrypted local storage. On Android, system logs (logcat) are accessible to other apps on the device, and we frequently find authentication tokens, email addresses, and API responses logged in plaintext.
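The remediation pattern is to scrub credential-shaped substrings before any log sink sees them. A sketch of that idea using Python's logging pipeline — the regex and logger names are illustrative; on Android the equivalent is a wrapper around android.util.Log applying the same rewrite:

```python
import io
import logging
import re

TOKEN_RE = re.compile(r"(?i)(bearer\s+|token=)[a-z0-9._\-]+")

class RedactingFilter(logging.Filter):
    """Rewrite credential-shaped substrings before the record
    reaches any handler (the logcat analogue wraps Log calls)."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = TOKEN_RE.sub(r"\1[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("demo.redact")
logger.propagate = False
logger.addFilter(RedactingFilter())
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

logger.warning("auth header was Bearer eyJhbGciOi.secret_part")
logged = buf.getvalue()
# The token value never reaches the log sink
```

The filter runs before every handler, so the redaction holds regardless of where the log line ends up.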

M7: Insufficient Binary Protections. Missing code obfuscation, no tampering detection, absence of debugger detection, and lack of root/jailbreak detection. While these are defense-in-depth measures rather than security controls, their absence makes reverse engineering and runtime manipulation significantly easier.

M8: Security Misconfiguration. Debug mode enabled in release builds, backup allowed (exposing app data through device backups), exported Android components (Activities, Services, Broadcast Receivers) that should be private, and iOS App Transport Security exceptions that disable TLS requirements.
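Several of these Android misconfigurations are visible directly in the decoded AndroidManifest.xml. A minimal audit sketch — the manifest content is invented to show one instance of each flag:

```python
import xml.etree.ElementTree as ET

ANDROID = "{http://schemas.android.com/apk/res/android}"

# A decoded AndroidManifest.xml (contents are illustrative).
MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application android:debuggable="true" android:allowBackup="true">
    <activity android:name=".AdminActivity" android:exported="true"/>
    <activity android:name=".MainActivity" android:exported="false"/>
  </application>
</manifest>"""

def audit_manifest(xml_text: str) -> list[str]:
    """Flag the M8-style misconfigurations described above."""
    app = ET.fromstring(xml_text).find("application")
    issues = []
    if app.get(ANDROID + "debuggable") == "true":
        issues.append("debuggable release build")
    if app.get(ANDROID + "allowBackup") == "true":
        issues.append("app data included in device backups")
    for comp in app:
        if comp.get(ANDROID + "exported") == "true":
            issues.append("exported component: " + comp.get(ANDROID + "name"))
    return issues

issues = audit_manifest(MANIFEST)
```

A real engagement runs this kind of check against the manifest extracted from the release APK, since build flavors frequently diverge.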

M9: Insecure Data Storage. Sensitive data in plaintext files, SQLite databases without encryption, SharedPreferences (Android) or NSUserDefaults (iOS) containing tokens and credentials, and clipboard data exposure. This is the most consistently found category in our testing, affecting roughly 60% of applications.

M10: Insufficient Cryptography. Weak encryption algorithms, hardcoded encryption keys, improper key storage (keys stored alongside encrypted data), and custom cryptographic implementations instead of platform-provided APIs. A common pattern: developers encrypt sensitive local data but store the encryption key in SharedPreferences on the same device, rendering the encryption meaningless.
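Why co-locating the key renders encryption meaningless can be shown in a few lines. The cipher below is a deliberately simple stand-in (XOR against a SHA-256-derived keystream, chosen only to keep the demo dependency-free — it is not real cryptography); the flaw demonstrated is independent of cipher strength, since the attacker reads the key from the same directory as the ciphertext:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in cipher: XOR with a SHA-256-derived keystream.
    Used only so the demo needs no crypto library; the point
    holds for AES or any other algorithm."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# What we find in the app's sandbox: ciphertext AND its key, side by side.
prefs = {
    "db_key": b"0123456789abcdef",
    "card_number_enc": keystream_xor(b"0123456789abcdef", b"4111111111111111"),
}

# The "attacker" needs nothing beyond the same directory listing.
recovered = keystream_xor(prefs["db_key"], prefs["card_number_enc"])
```

The fix is to keep the key out of the readable file system entirely: Android Keystore or the iOS Keychain/Secure Enclave, where key material never leaves hardware-backed storage.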

🔄

Platform Comparison

iOS vs. Android: Platform-Specific Security Differences

iOS and Android have fundamentally different security architectures, which means testing methodologies differ by platform. Understanding these differences is important for scoping and interpreting test results.

| Security Area | iOS | Android |
| --- | --- | --- |
| Secure storage | Keychain (hardware-backed on devices with Secure Enclave) | Android Keystore (hardware-backed on most modern devices) |
| Insecure storage | NSUserDefaults, plist files, CoreData without encryption | SharedPreferences, SQLite, internal/external storage files |
| Network security defaults | App Transport Security (ATS) enforces HTTPS by default | Network Security Config (cleartext blocked by default since API 28) |
| Binary analysis | Mach-O binary; class-dump for Objective-C, Hopper/IDA for Swift | APK (DEX format), easily decompiled with jadx to near-source Java/Kotlin |
| Runtime manipulation | Frida, Objection, Cycript (requires jailbreak for full access) | Frida, Objection, Xposed (root helpful but not always required) |
| Inter-process communication | URL schemes, Universal Links, App Groups | Intents, Content Providers, Broadcast Receivers, deep links |
| App distribution | App Store only (requires special provisioning for testing) | Play Store + sideloading (APK easily extracted and analyzed) |
| Reverse engineering difficulty | Moderate (Swift is harder to decompile than Objective-C) | Low (DEX bytecode decompiles cleanly to readable Java/Kotlin) |

Android is generally easier to test because the APK format decompiles to readable source code, sideloading is straightforward, and the platform provides more flexibility for inspection tools. iOS testing requires a jailbroken device or specialized provisioning for testing builds, and the compiled binary is harder to reverse-engineer. However, iOS apps are not inherently more secure — they simply require different tools and techniques. We find roughly equivalent vulnerability rates across both platforms.

If your application is available on both platforms, both should be tested. The iOS and Android versions of the same app frequently have different implementations, different third-party SDK versions, and different security configurations. We regularly find vulnerabilities in one platform version that do not exist in the other, because the codebases diverged or because platform-specific APIs were used differently.

Mobile application binary being reverse-engineered with decompiled layers revealed
🔧

Testing Methodology

What Mobile Penetration Testers Actually Do

A comprehensive mobile penetration test covers four distinct testing phases, each targeting different aspects of the mobile application’s attack surface:

Phase 1: Static analysis (binary review). Before running the application, we analyze the compiled binary. For Android, this means decompiling the APK with jadx to review Java/Kotlin source code. For iOS, we use class-dump, Hopper, and otool to inspect the Mach-O binary. We look for: hardcoded secrets and API keys, embedded URLs and internal endpoints, insecure cryptographic implementations, debug flags and logging configurations, third-party SDK versions with known vulnerabilities, and the overall obfuscation posture. Static analysis frequently reveals secrets that developers intended to keep hidden, including backend API keys, Firebase configuration details, and third-party service credentials.

Phase 2: Dynamic analysis (runtime testing). With the app installed on a test device, we use runtime manipulation tools (Frida, Objection) to inspect the application’s behavior during operation. We examine: local data storage contents (what the app writes to disk), clipboard interactions, screenshot caching behavior, background state handling, biometric authentication implementation, root/jailbreak detection effectiveness, and certificate pinning implementation. Dynamic analysis often reveals that sensitive data persists on disk long after the user logs out, that screenshots of sensitive screens are cached by the OS, and that authentication can be bypassed through runtime method hooking.

Phase 3: Network analysis (traffic interception). We configure a proxy (Burp Suite or mitmproxy) to intercept all network communication between the app and its backend. If certificate pinning is implemented, we bypass it using Frida scripts to inspect the underlying traffic. We analyze: all API endpoints the app communicates with, authentication token handling and transmission, sensitive data in request/response bodies, API authorization enforcement (does the server trust client-side controls?), and certificate validation behavior. This phase bridges mobile testing and API testing — the mobile app is the client, and every API call it makes is subject to the same BOLA, IDOR, and authorization testing we perform in dedicated API pentests.
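One concrete output of this phase is a map of which hosts receive the auth token versus which hosts the app actually pins. A sketch of that audit over a proxy capture — the hosts, paths, and token are invented for illustration:

```python
# Requests as reconstructed from an intercepting proxy's capture log
# (hosts and token are illustrative).
CAPTURE = [
    {"host": "api.bank.example", "path": "/v1/accounts",
     "headers": {"Authorization": "Bearer tok_abc123"}},
    {"host": "analytics.vendor.example", "path": "/track?uid=42",
     "headers": {"Authorization": "Bearer tok_abc123"}},
]

PINNED_HOSTS = {"api.bank.example"}

def find_token_leaks(capture, pinned):
    """Return hosts that receive the auth token over connections the
    app does not pin -- interceptable despite pinning elsewhere."""
    return sorted({req["host"] for req in capture
                   if req["host"] not in pinned
                   and "Authorization" in req["headers"]})

leaks = find_token_leaks(CAPTURE, PINNED_HOSTS)
```

A non-empty result means pinning the main API bought nothing: the token travels over an interceptable channel anyway, which is exactly the pattern in Finding 2 of the case study below.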

Phase 4: Reverse engineering and exploitation. For high-security applications (banking, healthcare, government), we conduct deeper reverse engineering: patching the binary to disable security checks, analyzing custom cryptographic implementations, mapping the app’s internal architecture, and attempting to extract proprietary business logic or algorithms. This phase also includes testing inter-process communication (IPC) mechanisms — can other apps on the device interact with your app through exported components, deep links, or URL schemes in unintended ways?

🔍

Case Study

Banking App: 6 Critical Findings Hidden Behind a Clean Server-Side Report

A regional bank we’ll call SecureBank had invested significantly in server-side security: annual penetration tests of their web application and API, a mature vulnerability management program, and real-time monitoring. Their mobile banking app served approximately 150,000 active users. It had never been independently tested because the backend had always been the security focus.

When a regulatory examination recommended mobile application security testing, SecureBank engaged us for a comprehensive assessment of both their iOS and Android apps. Our findings illustrated why mobile testing is a distinct discipline:

Finding 1: Authentication token in plaintext storage (Critical). The iOS app stored the OAuth access token in NSUserDefaults and the Android app stored it in SharedPreferences without encryption. Both locations are accessible on rooted/jailbroken devices and through device backups. The token did not expire until the user explicitly logged out, meaning an extracted token provided persistent account access.
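On the iOS side, confirming this finding means reading the app's user-defaults plist from the sandbox. Python's stdlib plistlib makes the check trivial — the key names and token below are illustrative stand-ins for what a tester pulls from a jailbroken device:

```python
import plistlib
import re

# An NSUserDefaults plist as pulled from the app sandbox on a
# jailbroken device (contents are illustrative).
defaults = {
    "appTheme": "dark",
    "oauth_access_token": "eyJhbGciOiJFUzI1NiJ9.fake.sig",
}
plist_bytes = plistlib.dumps(defaults)

SENSITIVE = re.compile(r"token|secret|password|credential", re.I)

def audit_plist(raw: bytes) -> list[str]:
    """List keys in a user-defaults plist that look like credentials;
    NSUserDefaults offers no encryption, so matches are plaintext."""
    return sorted(k for k in plistlib.loads(raw) if SENSITIVE.search(k))

flagged = audit_plist(plist_bytes)
```

The remediation is the platform secure store: Keychain on iOS, Keystore-backed encrypted storage on Android, plus server-side token expiration so an extracted token has a bounded lifetime.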

Finding 2: Certificate pinning bypass (High). The app implemented certificate pinning, but only for the main API domain. Requests to the analytics and logging endpoints were not pinned, and these requests included the authentication token in the headers. An attacker on the same network could intercept the analytics requests to steal the token without needing to bypass the main API’s certificate pinning.

Finding 3: Account balance cached in plaintext (High). The app cached the user’s account balance and recent transactions in an unencrypted SQLite database for offline viewing. This data persisted on disk even after the user logged out, accessible to any process with file system access on a compromised device.

Finding 4: Biometric bypass via method hooking (High). The app’s biometric authentication (Touch ID/Face ID on iOS, fingerprint on Android) relied on a client-side boolean check. Using Frida, we hooked the authentication callback and forced it to return true, bypassing biometric authentication entirely. The server accepted the subsequent API calls without additional verification.

Finding 5: Hardcoded API key for payment gateway (Critical). Static analysis of the Android APK revealed a payment gateway API key embedded in the application code. This key was a test environment key with limited permissions, but it was the same key used in the build system, and the naming convention suggested production keys might follow a predictable pattern.

Finding 6: Screenshot caching of sensitive screens (Medium). When the app was backgrounded, iOS automatically captured a screenshot for the task switcher. Screens showing account balances, transaction details, and account numbers were cached as images in the app’s sandbox, persisting until overwritten by new screenshots. The app did not implement a privacy screen or blur the view when backgrounding.

Key insight: None of these six findings would have been discovered through web application or API penetration testing. They exist entirely within the mobile client layer: local storage, binary contents, certificate pinning scope, biometric implementation, and OS-level caching behavior. A mobile application is not just a user interface for your API — it is an independent attack surface that requires dedicated security testing.

Man-in-the-middle attack visualization showing mobile data interception and certificate pinning defense
💻

Testing Environment

Emulator vs. Real Device: Why Physical Devices Matter

A question we frequently hear: “Can you test using emulators?” The answer is that emulators are useful for some testing but cannot replace real devices for a thorough assessment.

Emulators are suitable for: Static analysis and binary review, basic dynamic analysis, network traffic interception, and API testing through the app. Many apps work identically on emulators and real devices for these test categories.

Real devices are required for: Hardware-backed keystore and Secure Enclave testing, biometric authentication assessment, Bluetooth and NFC functionality testing, push notification security, and accurate performance and timing analysis. Many apps also include emulator detection and refuse to run on emulated environments, requiring a real (rooted/jailbroken) device for testing.

Our approach: We use both. Emulators for initial static analysis and API traffic mapping, then transition to real devices (jailbroken iOS and rooted Android) for comprehensive dynamic analysis, runtime manipulation, and hardware security feature testing. This hybrid approach provides thorough coverage without the overhead of running every test on physical hardware. If your app includes hardware-specific features (NFC payments, Bluetooth peripherals, hardware security keys), testing on appropriate physical devices is essential.

📋

Preparation

How to Prepare Your Mobile App for a Penetration Test

Proper preparation significantly improves the efficiency and thoroughness of a mobile penetration test. Here is what to provide your testing team:

Application builds. Provide the testing builds directly rather than requiring testers to download from app stores. For iOS, provide an IPA file with a provisioning profile that allows installation on test devices, or provide TestFlight access. For Android, provide the APK or AAB file. If your app has debug and release variants, provide the release build (what users actually run) for security testing and the debug build as a supplement if helpful.

Test accounts. Provide at least two accounts for each user role. Testers need pairs of accounts to test horizontal privilege escalation (can User A access User B’s data?). If your app has premium features, provide accounts with different subscription tiers. Include any required MFA setup codes or recovery codes.

API documentation. The mobile app’s backend API is a critical part of the test. Provide OpenAPI/Swagger specs, Postman collections, or other API documentation. Document any API endpoints that the mobile app calls but that are not used by the web application — these mobile-specific endpoints are frequently less mature from a security perspective.

Architecture overview. Share how the app handles authentication, where sensitive data is stored locally, what third-party SDKs are included, and how push notifications are implemented. This context helps testers focus on the areas most likely to have vulnerabilities.

Certificate pinning documentation. If your app implements certificate pinning, inform the testers. They may request a testing build with pinning disabled, or they may prefer to test pinning bypass as part of the assessment. Either approach is valid, but communication upfront avoids wasted time troubleshooting network interception.

📄

Deliverables

What to Expect in a Mobile Penetration Test Report

A comprehensive mobile pentest report should cover each phase of testing with specific, actionable findings. Here is what a professional report includes:

Executive summary. A non-technical overview of the assessment, the overall risk posture, key statistics (total findings by severity), and strategic recommendations. This section is written for leadership and compliance teams, not developers.

Detailed findings. Each finding should include: a clear description, the affected platform (iOS, Android, or both), severity rating with CVSS score, step-by-step reproduction instructions with screenshots, proof-of-concept evidence, business impact analysis, and specific remediation guidance with code examples where applicable. Mobile-specific findings should reference OWASP Mobile Top 10 categories and MASVS (Mobile Application Security Verification Standard) requirements.

Platform-specific sections. Findings should be organized by platform, since remediation teams for iOS and Android may be different. Each platform section should cover static analysis results, dynamic analysis findings, data storage assessment, network communication security, and binary protection analysis.

Remediation priority matrix. A prioritized remediation roadmap that balances severity with implementation effort. Some critical findings (like moving tokens to Keychain/Keystore) are straightforward to fix. Others (like implementing proper certificate pinning) require more engineering effort and should be phased appropriately. The best reports help your development team plan their remediation sprint rather than simply listing everything that is wrong.

Array of physical mobile devices on testing bench connected to central security analysis hub

Ready to Test Your Mobile Application?

We provide comprehensive mobile application penetration testing for iOS and Android, covering static analysis, dynamic testing, network security, binary reverse engineering, and backend API assessment. Every engagement includes platform-specific remediation guidance and complimentary retesting.

Get a Mobile Pentest Proposal
Alexander Sverdlov

Founder of Pentestas. Author of two information security books, speaker at the largest cybersecurity conferences in Asia, and a United Nations conference panelist. Former member of Microsoft's security consulting team and external cybersecurity consultant at the Emirates Nuclear Energy Corporation.