Software vulnerabilities that reach production are not theoretical risks. They are the direct result of organizations treating security testing as an afterthought, bolted onto the end of a release cycle instead of embedded throughout it. When acceptance testing checks only whether an application works, not whether it can be exploited, the gap between “functional” and “secure” becomes the gap attackers walk through.
What 8.29 requires
ISO 27001 Control 8.29 requires organizations to define and implement security testing processes throughout the software development lifecycle, ensuring that security requirements are validated before code reaches production. This means going beyond functional Quality Assurance (QA) to include techniques such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), penetration testing, and secure code review at defined stages of development and acceptance.
The obligation is not to run a single test before a release. Organizations must establish a structured testing strategy that matches the risk profile of the system under development. High-risk applications handling sensitive data or exposed to the internet require more rigorous and frequent testing than internal tools with limited access. Testing must cover the full stack, including application code, infrastructure configuration, third-party libraries, and the interfaces between them.
Critically, 8.29 also requires that test results drive action. Identified vulnerabilities must be tracked, prioritized, and remediated according to defined criteria before deployment is approved. Without formal security acceptance gates, testing becomes a documentation exercise rather than a control. The output of every test must feed into a decision about whether the software is fit for production.
Why 8.29 matters
A financial services company deploys a customer-facing application after months of development. The QA team confirms all user stories pass. Acceptance testing covers performance benchmarks and business logic. No one runs a DAST scan against the staging environment. Three weeks after launch, an attacker discovers an unvalidated input field, exploits a SQL injection flaw, and exfiltrates 40,000 customer records. The vulnerability was present in the code from Sprint 2. Functional testing never looked for it.
This scenario is not unusual. Verizon’s 2025 Data Breach Investigations Report found that vulnerability exploitation surged 34% year-over-year to become the second most common breach vector, accounting for 20% of all breaches. Organizations that rely solely on functional QA and skip dedicated security testing create a predictable opening for attackers who specialize in finding exactly the flaws that developers did not test for.
The consequences extend beyond the immediate breach. Regulatory penalties, lost customer trust, incident response costs, and increased cyber insurance premiums compound over months. For organizations subject to ISO 27001 certification, a missing or ineffective 8.29 implementation creates a nonconformity that auditors will flag, potentially jeopardizing the certification itself.
What attackers exploit
When security testing is absent or inadequate, specific failure modes become attack surfaces:
- Untested code paths with injection flaws: OWASP Top 10 flaws such as SQL injection, Cross-Site Scripting (XSS), and command injection remain hidden in production code when no automated SAST or DAST scan ever looks for them.
- Third-party libraries with known CVEs: Dependencies with publicly disclosed vulnerabilities that were never scanned before deployment give attackers a roadmap to exploitation.
- Missing acceptance criteria: Without defined security gates, insecure configurations and weak access controls pass through to production unchallenged.
- Test environments with production data: Environments containing real customer data become secondary attack surfaces when they lack the same protections as production.
- Weak authentication logic: Authentication and session management flaws that were never subjected to adversarial testing allow unauthorized access at scale.
How to implement 8.29
Effective implementation requires security testing in two directions: within your own development processes and across your vendor and supplier ecosystem. Each demands a different approach.
For your organization (first-party)
1. Define a security testing strategy tied to risk classification. Not every application needs the same level of scrutiny. Classify systems by data sensitivity, exposure level, and regulatory scope as part of a cybersecurity risk assessment. Map testing requirements to each tier. A public-facing payment application requires penetration testing before every major release. An internal wiki may only need periodic SAST scans.
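The tier mapping can be sketched in code. This is an illustrative sketch, not a prescribed scheme: the tier names, classification attributes, and required testing activities are assumptions that an organization would replace with its own risk model.

```python
# Illustrative mapping from coarse risk attributes to a testing tier.
# Tier names and activities are assumptions, not mandated by 8.29.

REQUIRED_TESTING = {
    "high": ["SAST per commit", "DAST per release", "penetration test per major release"],
    "medium": ["SAST per commit", "DAST quarterly"],
    "low": ["SAST per release"],
}

def classify(handles_sensitive_data: bool, internet_facing: bool,
             in_regulatory_scope: bool) -> str:
    """Assign a risk tier from three coarse attributes."""
    if internet_facing and (handles_sensitive_data or in_regulatory_scope):
        return "high"
    if handles_sensitive_data or internet_facing or in_regulatory_scope:
        return "medium"
    return "low"

# A public-facing payment application lands in the top tier...
payment_app = classify(handles_sensitive_data=True, internet_facing=True,
                       in_regulatory_scope=True)
# ...while an internal wiki needs only periodic static scans.
internal_wiki = classify(handles_sensitive_data=False, internet_facing=False,
                         in_regulatory_scope=False)
```

Real classification schemes usually weigh more factors (user population, availability requirements, data residency), but the principle is the same: the tier, not ad hoc judgment, determines which tests are mandatory.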
2. Integrate SAST into CI/CD pipelines. Static analysis tools should run automatically on every code commit or pull request. This catches common vulnerabilities, including injection flaws, hardcoded credentials, and insecure cryptographic usage, before code merges into the main branch. Tools like SonarQube, Checkmarx, or Semgrep integrate directly into most CI/CD platforms.
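A pipeline step that consumes SAST output might gate merges as sketched below. The finding schema (severity, rule, path fields) is an assumption for illustration; real tools such as Semgrep or SonarQube emit their own report formats that a wrapper like this would parse.

```python
# Illustrative CI gate over SAST findings. The field names are assumed
# for illustration; adapt them to your scanner's actual JSON output.

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def sast_gate(findings: list[dict]) -> bool:
    """Return True if the build may proceed (no blocking-severity findings)."""
    blocking = [f for f in findings
                if f.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCK: {f['rule']} at {f['path']}:{f['line']} ({f['severity']})")
    return not blocking

findings = [
    {"rule": "sql-injection", "path": "app/db.py", "line": 42, "severity": "HIGH"},
    {"rule": "weak-hash", "path": "app/auth.py", "line": 7, "severity": "MEDIUM"},
]
# In a pipeline, a False result would be converted to a nonzero exit
# code so the merge or deployment is blocked automatically.
```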
3. Conduct DAST against staging environments. Dynamic testing simulates real-world attacks against running applications. Schedule DAST scans against staging or pre-production environments using tools like OWASP ZAP or Burp Suite to identify runtime vulnerabilities that static analysis cannot detect, such as authentication bypass and server misconfiguration.
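As a minimal illustration of one class of runtime check a DAST scan performs, the sketch below flags missing HTTP security headers on a staging response. A real scanner such as OWASP ZAP covers far more (authentication bypass, injection probing, session handling); the header names are standard, but the check itself is only a sketch.

```python
# Illustrative runtime check: flag standard security headers absent
# from an HTTP response. A real DAST scan does far more than this.

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(response_headers: dict) -> list[str]:
    """Return expected security headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

staging_response = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
# For this response, CSP, HSTS, and X-Content-Type-Options are flagged
# as missing; X-Frame-Options is present and passes.
```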
4. Conduct penetration testing on high-risk systems before major releases. Engage qualified penetration testers, either internal red team members or third-party firms following standards like the Penetration Testing Execution Standard (PTES), for systems that handle regulated data or face internet exposure. Pen tests should be scoped, time-bound, and produce actionable findings with risk ratings.
5. Enforce security acceptance gates. No deployment should proceed without documented sign-off confirming that security testing was completed and critical findings were remediated. This gate must be a formal step in the release process, not an informal checkbox.
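The gate decision itself reduces to a simple check over the release record. The fields below (required tests, open critical findings, sign-off) are illustrative assumptions showing the decision logic, not a prescribed schema.

```python
# Illustrative security acceptance gate. Record fields are assumptions
# for illustration; map them to your own release-management data.

from dataclasses import dataclass

@dataclass
class ReleaseRecord:
    required_tests: set
    completed_tests: set
    open_critical_findings: int
    signed_off_by: str = ""

def acceptance_gate(record: ReleaseRecord) -> tuple[bool, list[str]]:
    """Return (approved, reasons-for-rejection)."""
    reasons = []
    missing = record.required_tests - record.completed_tests
    if missing:
        reasons.append(f"tests not completed: {sorted(missing)}")
    if record.open_critical_findings:
        reasons.append(f"{record.open_critical_findings} critical finding(s) unremediated")
    if not record.signed_off_by:
        reasons.append("no documented security sign-off")
    return (not reasons, reasons)

release = ReleaseRecord(
    required_tests={"SAST", "DAST", "pen test"},
    completed_tests={"SAST", "DAST"},
    open_critical_findings=2,
)
approved, reasons = acceptance_gate(release)
# approved is False: the pen test is missing, two critical findings are
# open, and no one has signed off.
```

The point of encoding the gate is that rejection reasons are recorded automatically, which doubles as the audit evidence 8.29 expects.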
6. Protect test environments. Isolate test environments from production networks. Mask or synthesize sensitive data using tools like Delphix or Tonic.ai rather than copying production databases. Restrict access to test environments with the same rigor applied to production.
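A minimal sketch of field-level masking, assuming a simple record schema. Dedicated tools like Delphix or Tonic.ai additionally preserve data formats and referential integrity across tables, which this sketch does not attempt.

```python
# Illustrative masking of sensitive fields before a production record
# enters a test environment. Field names are assumptions; a real tool
# would also salt tokens and preserve value formats.

import hashlib

SENSITIVE_FIELDS = {"email", "name", "card_number"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a stable token; leave other fields usable."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            token = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            masked[key] = f"masked-{token}"
        else:
            masked[key] = value
    return masked

customer = {"id": 1001, "name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
# The masked copy keeps non-sensitive fields intact for testing while
# the original values never leave production.
```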
The urgency of building these practices into the development lifecycle is growing. Veracode’s 2025 State of Software Security report found that the average time to fix a security flaw has increased 47% since 2020, reaching 252 days, with 70% of critical security debt originating in third-party code. Organizations that defer testing to the end of the cycle inherit a remediation backlog that compounds with every release.
Common mistakes:
- Running security tests only before annual audits instead of continuously throughout development
- Testing only the application layer while ignoring infrastructure configuration, container images, and deployment scripts
- Using production data in test environments without masking, creating compliance and breach exposure
- Treating penetration testing as a substitute for integrated SDLC testing rather than a complement to it
- Running tests with no defined acceptance criteria, so results are generated but no one acts on the findings
For your vendors (third-party assessment)
When outsourcing development or procuring software, 8.29 obligations extend to your supply chain. Vendors building or maintaining systems that process your data must demonstrate equivalent security testing rigor. A structured third-party risk management program ensures these obligations are met consistently.
Questions to ask in vendor questionnaires:
- What types of security testing do you perform during development (SAST, DAST, pen testing, code review)?
- At what stages of the SDLC are security tests executed?
- How are vulnerabilities prioritized and tracked through remediation?
- Do you maintain formal security acceptance criteria before production deployment?
Evidence to request: Use a structured vendor risk assessment to collect SAST and DAST scan summaries, penetration test executive reports, vulnerability remediation logs with time-to-fix metrics, and security acceptance sign-off records. Request these as artifacts, not just attestations.
Red flags in vendor responses:
- “We perform security testing annually” with no evidence of integration into the development process
- Inability to produce scan reports or penetration test summaries
- No defined vulnerability remediation SLAs
- Test environments that use production data without controls
Verification beyond self-attestation: Cross-reference vendor claims against their external security posture. Continuous monitoring of a vendor’s attack surface can reveal exposed test environments, unpatched public-facing assets, or leaked credentials that contradict self-reported testing maturity.
Audit evidence for 8.29
Auditors assessing 8.29 will look for documented processes and verifiable artifacts demonstrating that security testing is embedded in development and acceptance workflows.
| Evidence Type | Example Artifact |
|---|---|
| Security Testing Policy | Document defining testing types, frequency, scope, and responsibilities across the SDLC |
| SAST Scan Reports | Automated static analysis results per release with severity classifications and remediation status |
| DAST Scan Reports | Dynamic testing results from staging environment runs showing identified vulnerabilities and fixes |
| Penetration Test Report | Annual or release-triggered pen test executive summary with findings, risk ratings, and remediation timelines |
| Security Acceptance Records | Sign-off documentation confirming security criteria were met before production deployment |
| Vulnerability Remediation Log | Tracking system showing time-to-fix for identified vulnerabilities by severity |
| Test Environment Access Controls | Evidence of separation between test and production environments with access restrictions |
| Third-Party Code Review Records | Documentation of security testing performed on outsourced or vendor-supplied code before acceptance |
Cross-framework mapping
Organizations operating under multiple compliance frameworks can map 8.29 controls to equivalent requirements, reducing duplication of effort across audits.
| Framework | Equivalent Control(s) | Coverage |
|---|---|---|
| NIST 800-53 | CA-02 | Full |
| NIST 800-53 | SA-04 | Full |
| NIST 800-53 | SA-11 | Full |
| NIST 800-53 | SR-05(02) | Partial |
| SOC 2 | CC8.1 | Partial |
| CIS Controls v8.1 | 16.12 | Full |
| NIST CSF 2.0 | PR.DS | Partial |
Related ISO 27001 controls
Control 8.29 does not operate in isolation. It connects to a network of controls that collectively govern secure development and operational integrity.
| Control ID | Control Name | Relationship |
|---|---|---|
| 8.25 | Secure development lifecycle | Parent process that 8.29 testing validates |
| 8.26 | Application security requirements | Defines the security requirements that 8.29 testing verifies |
| 8.28 | Secure coding | Coding practices tested through SAST and code review under 8.29 |
| 8.31 | Separation of development, test and production environments | Provides the isolated environments where 8.29 testing occurs |
| 8.8 | Management of technical vulnerabilities | Receives vulnerability findings from 8.29 testing for remediation tracking |
| 8.27 | Secure system architecture and engineering principles | Architecture decisions validated through 8.29 security testing |
| 5.23 | Information security for use of cloud services | Cloud deployments require 8.29 testing before acceptance |
| 5.20 | Addressing information security within supplier agreements | Contractual basis for requiring vendors to perform 8.29 testing |
Frequently asked questions
What is ISO 27001 8.29?
ISO 27001 Control 8.29 requires organizations to implement security testing processes throughout the software development lifecycle to validate that security requirements are met before code is deployed to production. It covers techniques including SAST, DAST, penetration testing, and secure code review. The control applies to both internally developed software and code acquired from third parties.
What happens if 8.29 is not implemented?
Without structured security testing, vulnerabilities in application code, third-party libraries, and infrastructure configurations reach production undetected. This directly increases the likelihood of exploitation, data breaches, and regulatory penalties. During an ISO 27001 audit, the absence of security testing evidence constitutes a nonconformity that can result in certification failure or conditional findings requiring remediation within a defined timeframe.
How do you audit 8.29?
Auditors verify 8.29 by reviewing the security testing policy, examining SAST and DAST scan reports from recent releases, confirming that penetration testing was conducted for high-risk systems, and checking that formal security acceptance records exist for production deployments. They also assess whether vulnerability findings were tracked through remediation and whether test environments are properly isolated from production.
How UpGuard helps
Organizations implementing 8.29 need visibility into whether their vendors and suppliers maintain the same security testing rigor they apply internally. The UpGuard platform helps security teams assess, monitor, and verify third-party security practices at scale.
- Vendor Risk: Streamline vendor security assessments with questionnaire management covering SDLC security testing practices, automated risk scoring, and continuous monitoring of vendor security posture. Track whether vendors maintain security testing evidence and identify gaps in their development security controls.
- Breach Risk: Continuously monitor external attack surfaces to detect exposed test environments, unpatched assets, and misconfigurations that indicate inadequate security testing, both in your own infrastructure and across your vendor ecosystem.
Start a free trial to experience the UpGuard cybersecurity platform.