How to Choose a Mobile App Penetration Testing Company: 10 Questions to Ask Before You Sign
Pentestas Team
Security Analyst

💫 Key Takeaways
- Over 60% of mobile pentest reports we review from other firms contain only automated scanner output with no manual testing evidence
- A quality mobile pentest must cover both iOS and Android on real devices — emulator-only testing misses entire vulnerability classes
- Providers who skip binary analysis, reverse engineering, and API testing leave critical attack surfaces untouched
- The cheapest bid often means you are buying a MobSF scan with a logo — not an expert assessment
- Always ask for a sample report before signing — the report quality reveals the testing quality
- Retesting should be included in the engagement — firms that charge full price for retests are optimizing for revenue, not your security
Mobile applications are now the primary interface between your business and your customers. They handle authentication, process payments, store sensitive data, and connect to critical backend infrastructure. A vulnerability in your mobile app is not just a technical problem — it is a direct threat to customer trust, regulatory compliance, and business continuity.
This makes choosing the right mobile app penetration testing company one of the most consequential security decisions you will make. Unfortunately, the mobile security testing market is flooded with providers who run automated scanning tools like MobSF or QARK, wrap the output in a branded PDF, and call it a penetration test. The result is a report that looks professional but misses the vulnerabilities that actually matter — the ones that require a skilled human to find.
We have reviewed hundreds of mobile pentest reports from other providers over the years, both as part of our own retesting engagements and when clients bring us previous reports for comparison. The quality gap is staggering. Some reports contain 80 pages of scanner output listing informational findings that pose no real risk. Others miss critical vulnerabilities like hardcoded API keys in the binary, insecure data storage in shared preferences, or broken certificate pinning implementations that allow full traffic interception.
This guide exists to help you ask the right questions before you sign. Whether you are a CISO evaluating vendors, a product manager responsible for app security, or a procurement lead comparing proposals, these 10 questions will help you distinguish genuine expertise from scanner-repackaging operations.
The Stakes
Why a Bad Pentest Is Worse Than No Pentest
A bad penetration test creates something more dangerous than ignorance: false confidence. When your team receives a report with a handful of low-severity findings and a clean summary, they believe the app is secure. Leadership checks the compliance box. Developers move on to the next sprint. Meanwhile, the critical vulnerabilities that a thorough test would have uncovered remain in production, waiting for an attacker who will not use the same automated scanner your testing provider used.
We have seen this pattern repeatedly. A fintech company hired a mobile pentesting firm that tested their iOS app on a simulator only. The report came back clean. Six months later, a security researcher discovered that the Android version stored OAuth tokens in plaintext in external storage, accessible to any app on the device. The firm had never tested Android at all — but the original report's scope section was vaguely worded enough that the client assumed both platforms were covered.
Another healthcare client received a mobile pentest report that was 45 pages long. Impressive, until you looked closely. Thirty-eight of those pages were informational findings from an automated scan: the app uses a WebView (informational), the app does not implement root detection (low), the app binary is not obfuscated (informational). Not a single finding addressed the fact that the app transmitted patient data over HTTP to a staging API endpoint that was still active in production, or that the app's GraphQL API had no authorization checks on patient record queries.
The cost of a shallow test is not just the fee you paid. It is the cost of the breach that follows when real vulnerabilities go undiscovered, plus the regulatory penalties, customer notification expenses, and reputational damage. A thorough pentest from a qualified provider typically costs $15,000–$40,000 for a complex mobile application. A data breach involving a mobile app averages $4.5 million according to IBM's 2025 Cost of a Data Breach report. The math is straightforward.
Evaluation Framework
10 Critical Questions to Ask Every Mobile Pentest Provider
Ask these questions during the proposal and scoping phase. A quality provider will answer confidently and specifically. Vague or evasive answers are a red flag.
1. Do you test on real physical devices or only emulators and simulators?
This is the first question for a reason. Emulators and simulators do not replicate the full security behavior of physical devices. Keychain behavior on iOS simulators differs from real hardware. Android emulators do not accurately reflect file system permissions, biometric authentication flows, or hardware-backed keystore operations. A provider that tests exclusively on emulators is missing vulnerabilities related to data storage, biometric bypass, and device-level security controls. The right answer is: we test on real, jailbroken/rooted devices for security analysis, and also verify behavior on stock devices to understand the default user experience.
2. Do you test both iOS and Android platforms independently?
iOS and Android have fundamentally different security architectures. Sandboxing models, data storage mechanisms, inter-process communication, and binary formats are all different. A vulnerability on one platform may not exist on the other, and vice versa. Some providers test one platform and assume the findings apply to both. Others only test the platform they have expertise in and skip the other entirely. Insist that both platforms receive independent, platform-specific testing with dedicated time allocated to each.
3. Does your methodology include binary reverse engineering and static analysis?
A mobile app's compiled binary contains secrets, logic, and configuration that are invisible during dynamic testing alone. Reverse engineering the IPA (iOS) and APK/AAB (Android) using tools like Ghidra, Hopper, jadx, and apktool reveals hardcoded credentials, API endpoints, encryption keys, hidden debug functionality, and business logic that can be manipulated. If a provider's methodology does not include decompilation and binary analysis, they are testing the surface while ignoring the internals. Ask them to describe their static analysis toolchain specifically.
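As a concrete illustration, here is a minimal Python sketch of the kind of secret sweep a tester might run over decompiled output (e.g. from jadx or apktool). The regex patterns and the `find_secrets` helper are illustrative assumptions, not a production ruleset — real engagements use much larger pattern sets and manually triage every hit.

```python
import re

# Illustrative patterns only -- a real assessment uses a far larger,
# maintained ruleset and a human triages every match.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "Generic assignment": re.compile(
        r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*[\"']([^\"']{8,})[\"']"
    ),
}

def find_secrets(source_text: str) -> list[tuple[str, str]]:
    """Scan decompiled source text for strings that look like hardcoded secrets."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source_text):
            hits.append((label, match.group(0)))
    return hits
```

A scan like this is only the starting point: the interesting work is confirming which hits are live credentials and what they grant access to.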
4. Do you test the mobile app's backend APIs as part of the engagement?
The mobile app is a client. The real business logic, data, and authentication decisions happen on the backend API. Testing the app without testing the API is like testing a bank's front door while ignoring the vault. Ask whether the engagement includes API authentication bypass testing, broken object-level authorization (BOLA/IDOR) testing, rate limiting validation, and server-side input validation. The provider should intercept traffic from the app using a proxy (like Burp Suite) and test every API endpoint the app communicates with.
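The BOLA/IDOR portion of that testing is simple to describe: authenticate as one user, request objects that belong to another, and flag any that the server returns anyway. Below is a minimal Python sketch of the idea; `probe_bola` and the toy `vulnerable_fetch` backend are hypothetical stand-ins for real HTTP calls routed through an intercepting proxy, and the "200 means exposed" convention is an assumption for illustration.

```python
from typing import Callable

def probe_bola(
    fetch: Callable[[str, str], int],  # (auth_token, object_id) -> HTTP status
    attacker_token: str,
    victim_object_ids: list[str],
) -> list[str]:
    """Return object IDs the attacker's token can read but should not."""
    exposed = []
    for object_id in victim_object_ids:
        if fetch(attacker_token, object_id) == 200:
            exposed.append(object_id)  # server served another user's object
    return exposed

# Toy backend: checks authentication but never ownership -- the classic BOLA bug.
records = {"r1": "alice", "r2": "bob"}

def vulnerable_fetch(token: str, object_id: str) -> int:
    if token not in ("alice-token", "bob-token"):
        return 401
    return 200 if object_id in records else 404

leaked = probe_bola(vulnerable_fetch, "bob-token", ["r1"])  # flags alice's "r1"
```

In a live engagement the `fetch` callable would be a real HTTP request captured and replayed through the proxy, and every object-bearing endpoint the app touches gets this treatment.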
5. Do you test against the OWASP Mobile Application Security Verification Standard (MASVS)?
OWASP MASVS provides a comprehensive framework for mobile security verification with two levels: MASVS-L1 (standard security) and MASVS-L2 (defense-in-depth for apps handling sensitive data). A provider who tests against MASVS will systematically cover data storage, cryptography, authentication, network communication, platform interaction, code quality, and resilience against reverse engineering. If the provider is unfamiliar with MASVS or only references the OWASP Mobile Top 10 (which is a risk awareness document, not a testing standard), their coverage will have gaps.
6. Can you provide a sample report from a previous engagement?
The report is the primary deliverable. You need to evaluate its quality before committing. A quality sample report should include: an executive summary with risk ratings, detailed findings with CVSS scores, step-by-step reproduction instructions with annotated screenshots, proof-of-concept code or demonstrations, platform-specific remediation guidance with code examples, and MASVS mapping for each finding. If a provider cannot share a redacted sample, or if the sample is thin on technical detail, expect the actual report to be equally superficial.
7. Does the engagement include retesting after remediation?
Retesting verifies that your developers actually fixed the vulnerabilities correctly. Some fixes introduce new issues. Others address the symptom but not the root cause. A quality provider includes at least one retest cycle in the engagement fee. Providers who charge the full engagement price for retesting are creating a financial disincentive for you to verify fixes — which is misaligned with the goal of actually improving your security. Ask about the retest window (typically 60–90 days) and scope (all findings or critical/high only).
8. What is the experience level of the testers who will work on our engagement?
Some firms assign junior analysts to mobile engagements while their senior testers focus on more complex work. Ask who will personally conduct the testing, how many years of mobile-specific experience they have, and whether they hold relevant certifications (GWAPT, GMOB, OSCP, eMAPT). Ask if you can speak with the lead tester during the scoping call. A firm that won't let you talk to the actual tester before the engagement is often one where the salesperson's polish and the technician's skill are at very different levels.
9. What is your realistic timeline for a thorough mobile pentest?
A thorough mobile penetration test of a moderately complex application on both platforms requires 2–3 weeks of active testing plus time for reporting. A provider who promises comprehensive results in 3–5 days is not conducting manual testing — they are running automated scans. Beware proposals that promise fast turnaround without qualifying the scope or complexity. The right answer describes how timeline scales with app complexity: number of screens, authentication flows, API endpoints, and platform-specific features.
10. How do you structure pricing, and what exactly is included?
Pricing transparency matters. A quality proposal should clearly state: the number of tester-days allocated, which platforms are covered, whether API testing is included or separate, whether retesting is included, and what deliverables you receive. Be wary of providers who quote a flat rate without understanding your app's complexity. Also be wary of providers who are significantly below market rate — for a complex app tested on both platforms with API testing included, expect $15,000–$40,000. A $5,000 quote for the same scope is a scan, not a pentest.
Warning Signs
Red Flags in Mobile Pentest Proposals
Beyond the 10 questions, watch for these warning signs during the proposal and sales process:
The proposal is generic. If the scope of work could apply to any mobile app without modification, the provider did not invest time understanding your application. A quality proposal references your specific app, its functionality, its user roles, and the testing approach tailored to your architecture.
They can't name their tools and methodology. Ask which tools they use for static analysis, dynamic analysis, and API testing. A competent mobile tester will name specific tools: Frida for runtime instrumentation, objection for iOS/Android security testing, Burp Suite for API interception, jadx or Ghidra for reverse engineering, and platform-specific tools like Xcode Instruments or Android Studio Profiler. If they cannot name tools or describe a methodology beyond "we follow OWASP," their testing depth is likely shallow.
The timeline is suspiciously short. A provider who claims they can thoroughly test a complex mobile app on two platforms in three days is either staffing a large team (unlikely at their price point) or cutting corners. Manual testing takes time — reversing the binary, instrumenting the runtime, tracing data flows, and testing each API endpoint cannot be meaningfully rushed.
They only mention automated scanning tools. If the proposal references MobSF, QARK, or AndroBugs as primary testing tools without mentioning manual testing techniques, you are buying an automated scan report. These tools are useful for initial reconnaissance, but they cannot discover business logic flaws, complex authentication bypasses, or insecure server-side implementations.
No scoping call or questions about your app. A provider who quotes a price without asking detailed questions about your application — how many user roles, what sensitive data it handles, which third-party SDKs are integrated, whether it uses biometric authentication, what backend frameworks power the API — is pricing a commodity product, not a professional service. Scoping is a critical part of a quality engagement.
Deliverable Quality
What a Quality Mobile Pentest Report Looks Like
The report is the tangible output of the engagement. Here is what distinguishes a professional report from a scanner dump:
| Report Element | Scanner Report | Expert Manual Report |
|---|---|---|
| Executive Summary | Generic boilerplate with pie charts | Tailored risk narrative with business context and strategic recommendations |
| Findings Detail | Tool output with generic descriptions | Step-by-step reproduction, annotated screenshots, PoC code, CVSS scoring |
| Remediation Guidance | "Fix the vulnerability" or generic OWASP links | Platform-specific code examples (Swift/Kotlin) with implementation guidance |
| MASVS Mapping | Not included or superficial | Each finding mapped to specific MASVS requirements with gap analysis |
| API Testing Coverage | Not included | Full endpoint inventory, authentication testing, authorization bypass attempts |
| Prioritization | Sorted by severity only | Remediation roadmap balancing severity, exploitability, and engineering effort |
Case Study
The Company That Switched Providers After a Shallow Test
A digital banking startup — we will call them NeoBank — launched their mobile app in late 2025. Before launch, they hired a mobile penetration testing firm that advertised "comprehensive mobile security assessments" at a competitive price. The engagement lasted five days and cost $8,000. The report came back with 12 findings, all rated low or informational. NeoBank's leadership felt confident in their security posture and launched the app.
Three months later, a security researcher reported a critical vulnerability through NeoBank's bug bounty program. The researcher had decompiled the Android APK using jadx and found hardcoded API keys for their payment processor embedded in the source code. Worse, the keys had production-level permissions that could initiate refunds and access transaction histories. The researcher also discovered that the app stored user session tokens in Android SharedPreferences without encryption, making them accessible to any app with root access or through local backup extraction.
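A finding like the unencrypted SharedPreferences issue is quick to confirm once the preferences XML is pulled off a rooted test device. Here is a minimal Python sketch of that check; the key-name heuristic and the `plaintext_secrets` helper are illustrative assumptions, not a standard tool.

```python
import re
import xml.etree.ElementTree as ET

# Key names that suggest credential material -- a heuristic, not exhaustive.
SENSITIVE_KEY = re.compile(r"(?i)(token|session|password|secret|auth)")

def plaintext_secrets(prefs_xml: str) -> dict[str, str]:
    """Flag <string> entries in a shared_prefs file whose key looks sensitive.

    An encrypted store would hold only ciphertext here; human-readable
    values under these keys are a finding worth reporting.
    """
    root = ET.fromstring(prefs_xml)
    return {
        elem.get("name"): elem.text
        for elem in root.iter("string")
        if elem.get("name") and SENSITIVE_KEY.search(elem.get("name"))
    }

sample = """<map>
    <string name="oauth_token">eyJhbGciOiJIUzI1NiJ9.payload.sig</string>
    <string name="theme">dark</string>
</map>"""
findings = plaintext_secrets(sample)
```

The point of the sketch is that this check takes minutes on a real device — which is exactly why a provider that never pulls files off real hardware will miss it.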
NeoBank contacted us for a full retest. Our assessment of both platforms over three weeks uncovered 34 findings — including 6 critical and 11 high severity. The original provider's report had missed every single one of the critical findings. When we reviewed their methodology, the reason was clear: they had run MobSF against the APK, performed basic dynamic testing on an emulator, and never tested the iOS version at all. No binary reverse engineering. No runtime instrumentation. No API testing. No real device testing.
The lesson: NeoBank spent $8,000 on a test that gave them false confidence, then $28,000 on remediation after a public vulnerability disclosure, plus $32,000 on our comprehensive retest and a second remediation cycle — $68,000 in total. The "savings" from choosing the cheaper provider cost them roughly $52,000 more than a thorough test would have cost in the first place, plus the reputational cost of a public vulnerability disclosure during their first quarter of operation.
Provider Comparison
Comparing Mobile Pentest Provider Types
The mobile pentest market broadly divides into four provider types. Understanding where a provider falls helps set expectations for depth and deliverable quality.
| Provider Type | Typical Price | Testing Depth | Best For |
|---|---|---|---|
| Automated Scan Providers | $1,000–$5,000 | Surface-level. Tool output only. No manual testing. | Initial baseline if budget is severely limited |
| Generalist Pentest Firms | $8,000–$15,000 | Moderate. Some manual testing but limited mobile-specific expertise. | Low-complexity apps with minimal sensitive data |
| Mobile-Focused Specialists | $15,000–$40,000 | Deep. Manual testing, reverse engineering, API testing, both platforms. | Apps handling sensitive data, financial, healthcare |
| Big 4 / Enterprise Firms | $40,000–$100,000+ | Variable. Quality depends heavily on individual analyst assigned. | Organizations requiring Big 4 brand for compliance or board reporting |
The sweet spot for most organizations is a mobile-focused specialist firm. These providers have dedicated mobile security expertise, invest in real device labs, and their testers work on mobile engagements full-time rather than switching between web, network, and mobile projects. The Big 4 firms can deliver quality results, but the price premium reflects their brand overhead, and the actual testing quality varies significantly depending on which individual analyst is assigned to your project.
Regardless of the provider type you choose, use the 10 questions above to evaluate the actual testing depth. A mobile-focused specialist who cannot answer question 3 about binary reverse engineering is not as specialized as they claim. A generalist firm with a senior mobile security expert on staff may deliver better results than a specialist firm that is growing faster than their talent pipeline.
Looking for a Mobile App Penetration Testing Partner?
We test on real devices, cover both iOS and Android independently, include binary reverse engineering and full API testing, and provide complimentary retesting. Ask us any of the 10 questions above — we welcome the scrutiny.
Request a Mobile Pentest Proposal
Alexander Sverdlov
Founder of Pentestas. Author of 2 information security books, cybersecurity speaker at the largest cybersecurity conferences in Asia and a United Nations conference panelist. Former Microsoft security consulting team member, external cybersecurity consultant at the Emirates Nuclear Energy Corporation.