Application security requirements | ISO 27001 control 8.26

Applications ship with security assumptions baked in, whether those assumptions were intentional or not. When no one defines what “secure” means before development starts, teams inherit whatever defaults the framework, library, or developer happened to choose. ISO 27001 control 8.26 exists to close that gap before it becomes a vulnerability.

What 8.26 requires

ISO 27001 control 8.26 requires organizations to identify, document, and formally approve information security requirements during the requirements-gathering phase for every new application or significant enhancement. Rather than treating security as something validated after code is written, this control mandates that security expectations are defined upfront and tracked through the full development lifecycle. It sits within Annex A’s technology controls and applies to both custom-built applications and procured software.

In practice, this means you establish explicit requirements for authentication, authorization, input validation, encryption, session management, and logging before a single line of code is written. These requirements apply equally to applications built in-house and to software procured from third parties. Meeting the third-party risk requirements of ISO 27001 means assessing vendor security posture against your documented requirements before onboarding any new Software as a Service (SaaS) tool or procured application. The same applies to significant enhancements of existing systems, not just net-new applications.
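The requirement areas listed above can be captured as a structured, machine-checkable baseline. The sketch below is illustrative only (the area names, requirement wording, and `missing_areas` helper are assumptions, not an official ISO 27001 artifact); it shows how a team might verify that every mandated area is covered before design sign-off.

```python
# Minimal sketch of a security requirements baseline (illustrative names,
# not an official ISO 27001 artifact). Each area named in the control gets
# at least one specific, testable requirement before code is written.

BASELINE = {
    "authentication": ["All API endpoints must enforce OAuth 2.0 bearer token authentication."],
    "authorization": ["Access decisions must be enforced server-side per role."],
    "input_validation": ["All user-supplied input must be validated server-side against an allowlist."],
    "encryption": ["Data in transit must use TLS 1.2 or later."],
    "session_management": ["Sessions must expire after 15 minutes of inactivity."],
    "logging": ["Authentication failures must be logged with timestamp and source IP."],
}

def missing_areas(app_requirements: dict) -> list[str]:
    """Return baseline areas that an application's requirements fail to cover."""
    return [area for area in BASELINE if area not in app_requirements]

# An application spec that forgot session management and logging:
spec = {k: BASELINE[k] for k in ("authentication", "authorization", "input_validation", "encryption")}
print(missing_areas(spec))  # ['session_management', 'logging']
```

A real program would keep this baseline in a policy repository and run the coverage check as a design-phase gate rather than an ad-hoc script.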

The control also requires formal sign-off. Requirements cannot exist as informal tribal knowledge or assumptions buried in a Confluence page no one reads. You must document them in a way that auditors can trace from requirement to implementation to test evidence, and stakeholders with appropriate authority must approve them. This traceability requirement is what separates 8.26 from general secure development guidance. You need a clear chain from documented requirement to verified implementation.

Why 8.26 matters

Most organizations discover their application security gaps after an incident, not before. A financial services firm rolls out a customer-facing portal without specifying input validation requirements. Six months later, an external penetration test reveals the portal is vulnerable to SQL injection (SQLi) because the development team relied on default framework sanitization that didn’t cover stored procedures. Remediation requires rewriting the data access layer, retesting the entire application, and disclosing the vulnerability to regulators. An often-cited industry estimate, attributed to IBM Systems Sciences Institute research, puts the cost of fixing a security defect in production at up to 100x the cost of fixing it during design, which is exactly why 8.26 pushes requirements to the earliest possible phase.

The pattern repeats across industries. When application security requirements don’t exist, developers make their own security decisions. Some of those decisions are good. Many are inconsistent, and effective vendor risk management becomes impossible when there are no baseline requirements to measure against. And in a compliance audit, “it depends on who built it” is not a defensible answer.

Certification bodies expect to see a systematic, repeatable process, not ad-hoc judgment calls. Beyond audit findings, the operational cost compounds over time. Every application deployed without defined security requirements becomes technical debt that your security team must eventually address through retroactive assessments, compensating controls, or incident response. Organizations that define requirements upfront build security programs that compound in effectiveness. Organizations that don’t build backlogs that compound in risk.

What attackers exploit

When 8.26 is absent, attackers target the predictable gaps that result:

  • No authentication or authorization requirements: Applications default to overly permissive access models, allowing privilege escalation or unauthorized data access.
  • Missing input validation specifications: Without explicit requirements for server-side validation, applications remain vulnerable to SQLi, Cross-Site Scripting (XSS), and other injection attacks.
  • No Transport Layer Security (TLS) enforcement: Data transits unencrypted between clients and servers, exposing credentials and sensitive payloads to interception.
  • Absent session management requirements: Sessions lack proper timeout, rotation, and invalidation controls, enabling session hijacking and fixation attacks.
  • No third-party API security requirements: Integrations with external services lack mutual authentication, rate limiting, or payload validation, creating trust boundary violations.
  • SaaS procured without security vetting: Business units adopt cloud applications without evaluating encryption standards, access controls, or data residency, creating unmanaged risk.
  • No cryptographic requirements for data at rest: Sensitive data stored without encryption or with weak algorithms exposes the organization to data breach and regulatory penalties.

How to implement 8.26

Effective implementation requires addressing both applications you build and applications you procure. The distinction matters because your control over security requirements differs significantly between first-party development and third-party software.

For your organization (first-party)

Five steps move you from ad-hoc security assumptions to a documented, auditable requirements process:

  1. Create a security requirements baseline. Define a minimum set of security requirements that apply to every application regardless of risk level. The Open Worldwide Application Security Project (OWASP) Application Security Verification Standard (ASVS) provides a structured, tiered framework for this baseline. ASVS Level 1 covers foundational security controls that every application should meet, while Levels 2 and 3 add progressively stricter requirements for applications handling sensitive data. Map your requirements to ASVS levels so you have a recognized reference point auditors will accept.
  2. Integrate requirements into your Software Development Lifecycle (SDLC). Security requirements should be a mandatory input to the design phase, not a checklist applied after development. Add security requirements as acceptance criteria in user stories or as a required section in design documents. Gate progression from design to development on documented security requirements. An ISO 27001 implementation checklist can help ensure these gates are consistently applied. Teams using agile methodologies should include security requirements review in sprint planning, while waterfall teams should include them in formal design reviews.
  3. Risk-classify applications. Not every application needs the same level of scrutiny. Classify applications based on the data they process, their exposure to external networks, and their regulatory scope. A public-facing application processing payment card data needs more rigorous requirements than an internal scheduling tool. Use your classification to determine which ASVS level applies. Document the classification criteria so it is repeatable and auditable, not dependent on subjective judgment.
  4. Require formal sign-off. Designate who approves security requirements for each risk tier. For high-risk applications, this should include a security architect or the Information Security Officer. For lower-risk internal tools, a team lead with security awareness may suffice. Document approvals with timestamps and reviewer identity. This creates the audit trail 8.26 demands.
  5. Maintain traceability. Link each security requirement to its implementation in code and its verification through testing. A traceability matrix that maps requirement to code commit to test case gives auditors exactly what they need and gives your team visibility into coverage gaps. Tools like Jira, Azure DevOps, or dedicated Governance, Risk, and Compliance (GRC) platforms can automate this linkage, reducing manual tracking effort.
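The traceability matrix in step 5 can be sketched as a small data structure with an automated gap check. The requirement IDs, commit hashes, and test names below are hypothetical; in practice these links would be pulled from your issue tracker and CI system rather than hard-coded.

```python
from dataclasses import dataclass, field

# Sketch of a requirements traceability matrix (step 5). All IDs, commits,
# and test cases are hypothetical placeholders.

@dataclass
class TraceEntry:
    requirement: str                                  # e.g. "SEC-001 ..."
    commits: list = field(default_factory=list)       # implementing commits
    test_cases: list = field(default_factory=list)    # verifying tests

def coverage_gaps(matrix: list[TraceEntry]) -> list[str]:
    """Flag requirements with no linked implementation or no linked verification."""
    gaps = []
    for entry in matrix:
        if not entry.commits:
            gaps.append(f"{entry.requirement}: no implementation linked")
        if not entry.test_cases:
            gaps.append(f"{entry.requirement}: no test evidence linked")
    return gaps

matrix = [
    TraceEntry("SEC-001 OAuth 2.0 on API endpoints", ["a1b2c3d"], ["test_auth_required"]),
    TraceEntry("SEC-002 TLS 1.2+ in transit", ["e4f5a6b"], []),
    TraceEntry("SEC-003 session timeout 15 min", [], []),
]
for gap in coverage_gaps(matrix):
    print(gap)
```

Running the gap check in CI gives auditors the requirement-to-test chain 8.26 demands and gives the team an early warning when a requirement has been implemented but never verified.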

Common mistakes:

  • Treating requirements as a one-time exercise: Security requirements need review when the threat landscape changes, when the application’s scope expands, or at minimum annually.
  • Writing requirements too generically: “The application must be secure” is not a requirement. Specify the mechanism, such as “All API endpoints must enforce OAuth 2.0 bearer token authentication.”
  • Excluding third-party components: Open-source libraries and commercial SDKs must meet your requirements. A vulnerability in a dependency is your vulnerability.
  • No formal approval process: Requirements documented but never approved create audit findings. Sign-off must be explicit and recorded.

For your vendors (third-party assessment)

When you procure software rather than build it, you cannot dictate the codebase, but you can define what you require and verify compliance. Your vendor risk assessment process should mirror the rigor you apply internally, adapted for the reality that you are evaluating outputs rather than controlling inputs. Start with targeted questionnaire questions:

  • How are security requirements defined and tracked during development?
  • What authentication and authorization frameworks does the application support?
  • How is data encrypted in transit and at rest, and which algorithms are used?
  • What is the vulnerability remediation Service-Level Agreement (SLA) for critical, high, and medium findings?
  • Has the application undergone independent penetration testing in the last 12 months?

Request concrete evidence rather than relying on vendor self-attestation:

  • SOC 2 Type II report: Confirms controls were operating effectively over a defined period, not just designed.
  • Penetration test summary: Look for scope, methodology, critical finding count, and remediation timelines. A clean summary with no findings may indicate a narrow scope.
  • Secure development policy: Confirms the vendor has a documented SDLC with security gates.
  • Vulnerability management SLA documentation: Verifies the vendor commits to specific remediation timelines.

Red flags:

  • Vendor refuses to share a SOC 2 report or penetration test summary. Legitimate vendors with mature security programs share these routinely under a Non-Disclosure Agreement (NDA).
  • No documented SDLC or secure coding standards. The vendor is likely making ad-hoc security decisions.
  • Vulnerability remediation SLAs exceed 90 days for critical findings. This signals under-investment in security engineering.
  • Self-attestation only, with no independent validation. Without third-party verification, you have no assurance beyond the vendor’s word.
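The red flags above translate naturally into an automated screen over a vendor assessment record. In this sketch the field names are hypothetical; adapt them to however your questionnaire responses are actually stored.

```python
# Sketch of the red-flag checks as an automated screen over a vendor
# assessment record. Field names are hypothetical placeholders.

def red_flags(vendor: dict) -> list[str]:
    """Return the red flags triggered by a vendor assessment record."""
    flags = []
    if not vendor.get("soc2_or_pentest_shared"):
        flags.append("refuses to share SOC 2 report or penetration test summary")
    if not vendor.get("documented_sdlc"):
        flags.append("no documented SDLC or secure coding standards")
    if vendor.get("critical_remediation_sla_days", 0) > 90:
        flags.append("critical-finding remediation SLA exceeds 90 days")
    if vendor.get("self_attestation_only"):
        flags.append("self-attestation only, no independent validation")
    return flags

vendor = {
    "soc2_or_pentest_shared": True,
    "documented_sdlc": False,
    "critical_remediation_sla_days": 120,
    "self_attestation_only": False,
}
print(red_flags(vendor))
```

A screen like this does not replace reviewer judgment, but it makes the pass/fail criteria explicit and repeatable, which is what an auditor sampling your vendor assessments will look for.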

Verification should go beyond questionnaires. A vendor’s security posture can change significantly between annual review cycles. Continuous monitoring through security ratings and external risk assessments provides ongoing assurance that vendors continue to meet your application security requirements after onboarding. The UpGuard platform provides this kind of continuous visibility across your vendor ecosystem, surfacing changes in posture that questionnaire snapshots miss.

Audit evidence for 8.26

The most common audit finding for 8.26 is not a missing policy but a missing connection between policy and practice. Auditors expect documented artifacts that demonstrate the control is operationally embedded, not just a policy on paper. Prepare for auditors to sample specific applications and trace requirements end-to-end, from definition through sign-off to test evidence. The following table maps evidence types to specific artifacts you should maintain:

| Evidence type | Example artifact |
| --- | --- |
| Policy | Application security requirements policy defining scope, roles, and the process for establishing requirements |
| Requirements specification | Completed security requirements document for a specific application, with requirements mapped to OWASP ASVS levels |
| SDLC integration | SDLC process documentation showing security requirements as a mandatory gate in the design phase |
| Risk classification | Application risk classification matrix with criteria, tiers, and assigned ASVS levels per tier |
| Testing traceability | Requirements traceability matrix linking each security requirement to test cases and test results |
| Vendor assessment | Completed vendor security assessment with evidence collection records (SOC 2 reports, penetration test summaries received) |
| Approval records | Signed-off requirements documents with reviewer name, role, date, and approval decision |
| Review cadence | Records of periodic requirements reviews with change logs showing updates made after each review cycle |

Cross-framework mapping

If your organization operates under multiple compliance frameworks, mapping 8.26 to equivalent controls reduces duplicated effort. A single, well-implemented application security requirements process can satisfy overlapping obligations across frameworks. The NIST SP 800-53 Rev. 5 mappings below come from the NIST OLIR crosswalk and represent direct equivalences.

| Framework | Equivalent control(s) | Coverage |
| --- | --- | --- |
| NIST 800-53 | AC-03 | Full |
| NIST 800-53 | SC-08 | Full |
| NIST 800-53 | SC-13 | Full |
| SOC 2 | CC6.1 | Partial |
| CIS Controls v8.1 | Control 16 — Application Software Security | Full |
| NIST CSF 2.0 | PR.DS, PR.PS | Partial |
| DORA (EU) | Article 8 | Partial |

Control 8.26 does not operate in isolation. It connects to a cluster of application security, development lifecycle, and governance controls that together form a comprehensive approach to secure software. Understanding these relationships helps you design implementation programs that satisfy multiple controls simultaneously, reducing duplicated effort and strengthening your overall security posture.

| Control ID | Control name | Relationship |
| --- | --- | --- |
| 8.25 | Secure development lifecycle | Provides the SDLC framework into which 8.26 requirements are integrated |
| 8.27 | Secure system architecture and engineering principles | Translates 8.26 requirements into architectural design decisions |
| 8.28 | Secure coding | Implements 8.26 requirements at the code level through secure coding practices |
| 8.29 | Security testing in development and acceptance | Verifies that 8.26 requirements are met through testing |
| 8.30 | Outsourced development | Extends 8.26 requirements to third-party developers and outsourced development |
| 8.31 | Separation of development, test, and production environments | Supports 8.26 by ensuring requirements are validated in isolated environments before production |
| 5.8 | Information security in project management | Ensures 8.26 requirements are considered at the project planning level |
| 5.23 | Information security for use of cloud services | Applies 8.26 requirements to cloud-hosted applications and services |
| 8.9 | Configuration management | Maintains the secure configuration baselines that 8.26 requirements define |

Frequently asked questions

What is ISO 27001 8.26?

ISO 27001 control 8.26 requires organizations to identify, document, and formally approve information security requirements before building or procuring applications. It ensures that security expectations for authentication, encryption, input validation, and other controls are defined during the requirements phase rather than retrofitted after deployment.

What happens if 8.26 is not implemented?

Without 8.26, applications launch with inconsistent security controls that depend on individual developer judgment rather than organizational standards. This creates exploitable vulnerabilities such as missing input validation, weak session management, and unencrypted data. During certification audits, the absence of documented security requirements results in nonconformities that can prevent or delay ISO 27001 certification.

How do you audit 8.26?

Auditors verify 8.26 by sampling specific applications and tracing their security requirements from documentation through sign-off to implementation and test evidence. They check that requirements are integrated into the SDLC as a formal gate, not applied retroactively. They also assess whether vendor-supplied applications were evaluated against documented security criteria before procurement, reviewing questionnaire responses and evidence collection records.

How UpGuard helps

Defining application security requirements is only half the challenge. For third-party applications and vendor-supplied software, you need continuous visibility into whether those requirements are actually met. The UpGuard platform delivers automated, continuous vendor security assessments and security ratings that go beyond point-in-time questionnaires, giving you real-time insight into your vendors’ security posture.

  • Vendor Risk: Automates vendor security assessments with industry-standard and custom questionnaires, evidence collection workflows, and continuous monitoring of third-party security posture. Track whether vendors meet your documented application security requirements and receive alerts when their posture changes.
  • Breach Risk: Provides external attack surface management visibility that identifies exposed application assets, misconfigurations, and vulnerabilities across your own infrastructure. Prioritize findings using Exploit Prediction Scoring System (EPSS) and Known Exploited Vulnerabilities (KEV) data rather than relying solely on Common Vulnerability Scoring System (CVSS) severity.

See how the UpGuard platform strengthens your application security requirements program →
