
Next-generation API and application security testing:
What you should know

The web has undergone an earth-shattering shift in recent years. We are no longer just browsing collections of HTML files that security tools can easily crawl to find things to test. Instead, APIs have eaten the web, and security has been struggling to keep up.

With the introduction of modern web frameworks and APIs, Dynamic Application Security Testing (DAST) tools have been unable to find vulnerabilities the way they used to. Static Application Security Testing (SAST) tools, meanwhile, have always had issues with false positives, and with finding vulnerabilities that only become apparent once the browser is involved.

Even a glance at the DAST industry over the last few years shows that many vendors have rebranded as Attack Surface Management (ASM) or API security companies. But these historic tools with new branding still have a major problem finding the vulnerabilities present in these new environments.

We have a lot to learn about this area, but we want to talk about the innovation NightVision has been hard at work developing: combining SAST and DAST to take the benefits of both while minimizing the drawbacks of each. It turns out they go together like peanut butter and jelly.

What is next-generation API security testing?

The shift in API security that’s happening now is a shift in focus from the low-level network established by the infrastructure supporting applications, to the higher-level network established by the applications as they’re deployed.  For a modern, containerized, microservices-based application, the API forms a network.  Its endpoints are not IP addresses but interfaces (the “I” in “API”).  First-generation DAST does not perceive the entire API network, missing as much as 20 percent of active endpoints.  Developers often believe they’ve deprecated earlier versions of their APIs, when in fact they’re still operational.

Beyond the perimeter: Where is app security failing?

An April 2025 academic study by researchers from Pakistan’s National University and the UK’s University of York [PDF] examined the performance of four conventional SAST tools and five conventional DAST tools, working both separately and together, in spotting known vulnerabilities.  The team discovered none of the tools they tested were capable of detecting eight known and very common classes of CVE-catalogued vulnerabilities, as well as one of OWASP’s Top 10 categories of API security flaws for 2021.  “This highlights a critical limitation of popular tools,” the team wrote.

DAST also has discoverability problems with modern web frameworks and APIs: it simply can't test what it can't find. And DAST has always struggled to identify business logic flaws that are complicated, multi-step, involve the DOM, or require human interaction.

Dynamic Application Security Testing (DAST), Built for API-First Apps

Modern apps are API-heavy, stateful, and dynamic. Legacy DASTs that “crawl and guess” miss large parts of the attack surface, struggle with auth and state, and bury teams in noise. This section covers what modern DAST must do and how to operationalize it next to API discovery and code-aware testing.

Why Legacy DAST Misses Modern Apps
  • SPA & dynamic routing: Client-rendered routes hide content from naive crawlers; form flows and JS events gate critical paths.
  • Auth & state drift: Sessions expire; CSRF/anti-automation defenses and MFA flows break scans; test data isn’t persisted between steps.
  • API blind spots: Endpoints outside an OpenAPI file (shadow/legacy/versioned) go untested.
  • Throughput vs. safety: Aggressive fuzzing trips WAF/rate limits and pollutes logs; timid scans under-cover.
  • Alert fatigue: Pattern-only findings without exploit proof (and no code or request/response evidence) waste triage cycles.
What “Next-Gen DAST” Looks Like
  1. API-Aware Discovery First. Pull endpoints from multiple sources—gateway logs, traffic, service catalogs, OpenAPI/GraphQL, and code metadata—then feed that inventory into DAST so you test what actually exists, not just what’s documented. 
  2. Auth & Stateful Crawling that Actually Works. Store and refresh tokens/cookies, replay login journeys, and handle multi-step forms. LLM-assisted form handling/crawling improves path coverage on real apps. 
  3. Smart Test Generation. Combine semantic checks (OWASP API/Web Top 10 classes, authz bypass, object-level access, mass assignment) with context from discovery to avoid blind fuzzing.
  4. Exploit Verification & Evidence. Attempt lightweight, safe proof steps (e.g., controlled data reflection or unauthorized resource access) and capture request/response evidence. Prefer “code-traced” linkage when available to slash false positives.
  5. Coverage Tracking & Ratcheting. Track endpoint & route coverage across builds; fail PRs only when new gaps or criticals appear.
  6. Pipeline-Native Controls. Choose “fast PR check,” “balanced CI,” or “deep nightly” modes; honor rate limits and change windows.
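
Item 5 above, coverage ratcheting, can be made concrete with a small CI check. This is an illustrative sketch, not a prescribed implementation; it assumes you can export each build's covered endpoints as a set of route strings:

```python
def coverage_ratchet(baseline, current, new_criticals=0):
    """Ratchet check: fail only when endpoints covered in the baseline
    build lose coverage, or new critical findings appear. New endpoints
    merely widen the baseline for the next run."""
    regressions = set(baseline) - set(current)  # endpoints that lost coverage
    passed = not regressions and new_criticals == 0
    return passed, regressions

# A new endpoint appearing is fine; a previously covered one vanishing fails the PR.
ok, gaps = coverage_ratchet({"GET /users", "POST /orders"},
                            {"GET /users", "POST /orders", "GET /health"})
print(ok, gaps)  # True set()
```

Wiring this into a PR gate means the check never punishes teams for historic gaps, only for moving backwards.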
NightVision’s Approach to DAST (How It Fits Your Stack)
  • Discovery → Test: NightVision’s API discovery (API Envy) enumerates real endpoints—including shadow and versioned routes—and feeds that inventory directly into DAST so testing starts from reality, not a stale spec.
  • Crawl That Keeps Up: Session-aware scanning with LLM-assisted form handling reduces stalls on login and multi-step flows, improving route coverage on SPAs and authenticated areas. 
  • Impact-First Findings: Evidence-backed results (request/response artifacts and reproduction steps) cut triage time; optional code-trace mapping (when integrated) helps you send fixes to the right owners quickly.
  • Tunable for CI/CD: Run lightweight checks in PRs, balanced scans on main, and deeper scheduled scans off-hours. Rate-limit awareness and safe defaults keep production happy.

Federal API Security Requirements (U.S.) and How NightVision Helps

Public-sector teams have to secure modern, API-heavy apps and show alignment to Federal guidance. This section maps the major requirements to what NightVision does out-of-the-box.

NightVision at a glance for regulated teams

  • API eNVy (static): Automatically generates OpenAPI specs by scanning source code. No run/compile needed; uncovers shadow/undocumented endpoints and version drift; syncs with GitHub/GitLab/Bitbucket for continuous coverage.
  • Language coverage: Java, Python, JS/TS, .NET, Go, Ruby (PHP in progress).
  • Gray-box testing: Discovery-driven DAST with auth/state handling for accurate findings.
  • Auth gaps: Flags missing/weak authentication and can auto-open PRs to remediate.
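
To illustrate the static-discovery technique in the first bullet (this is a deliberately naive sketch, not NightVision's actual implementation), an extractor can pull route decorators out of source code without ever running the application and emit a minimal OpenAPI `paths` skeleton:

```python
import re

# Matches hypothetical Flask/FastAPI-style decorators like @app.get("/users/{id}")
ROUTE_RE = re.compile(r"""@app\.(get|post|put|delete)\(\s*['"]([^'"]+)['"]""")

def extract_paths(source: str) -> dict:
    """Build a minimal OpenAPI 'paths' object from route decorators
    found in source text. No run or compile step needed."""
    paths = {}
    for method, path in ROUTE_RE.findall(source):
        paths.setdefault(path, {})[method] = {"responses": {"200": {"description": "OK"}}}
    return paths

src = '''
@app.get("/users/{id}")
def get_user(id): ...

@app.post("/orders")   # absent from the published spec: a shadow endpoint
def create_order(): ...
'''
print(sorted(extract_paths(src)))  # ['/orders', '/users/{id}']
```

A real tool would parse the AST per language rather than regex-match, but the principle is the same: the source code is the ground truth for what endpoints exist.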

Requirement-by-requirement mapping

NIST SP 800-204 Series (microservices/API security)
  • What it asks for: Secure microservices with gateways/meshes, policy enforcement, and strong authz (ABAC), and integrate DevSecOps practices across code, infra, policy, and observability. (NIST Computer Security Resource Center)
  • How NightVision helps:
    • API eNVy extracts real endpoints from code to seed gateway/mesh policy and generate OpenAPI specs, improving least-privilege routing and policy scope.
    • DAST exercises those endpoints (including auth paths) and provides evidence for secure-by-default gateway rules and ABAC policy tuning (e.g., object-level access checks).
CISA Zero Trust Maturity Model (ZTMM v2.0)
  • What it asks for: Mature across Identity, Devices, Networks, Applications & Workloads, and Data; treat apps as internet-accessible, enforce strong auth, encrypt traffic, and continuously monitor/test. (CISA)

  • How NightVision helps:
    • Applications & Workloads: Discovery-led DAST provides “rigorous empirical testing” inputs and coverage reporting.
    • Data & Visibility: Request/response evidence and logs plug into SIEM/SOAR for continuous monitoring and ATO packages.
FedRAMP (NIST 800-53 Rev. 5–based)
  • What it asks for: Meet 800-53 Rev.5 control baselines and ConMon requirements (e.g., AC, AU, CM, SC, SI families); maintain logging/monitoring, vulnerability management, and secure engineering practices. (help.fedramp.gov)
  • How NightVision helps:
    • Controls alignment: Evidence-backed findings and exportable artifacts support AU/SI control narratives; static+dynamic coverage supports AC (authz), CM (change), and SC (secure comms) discussions in SSPs and annual assessments.
Executive Order 14028 (Improving the Nation’s Cybersecurity)
  • What it asks for: Secure software development, automated vulnerability discovery/remediation, EDR, robust logging, and SBOM practices; greater real-world testing and coordinated disclosure.
  • How NightVision helps:
    • DAST produces reproducible exploit evidence and aligns with the order’s “test like an adversary” mandate.
    • API eNVy helps maintain accurate API inventories/specs that support SBOM and dependency provenance across services.
OMB M-22-09 (Federal Zero Trust Strategy)
  • What it asks for (by FY24 and beyond): Phishing-resistant MFA, encrypt internal DNS/HTTP, treat apps as internet-accessible, and routinely subject applications to rigorous empirical testing while welcoming external reports. (The White House)

  • How NightVision helps:
    • Gray-box DAST operationalizes the memo’s “empirical testing” requirement and produces artifacts for POA&Ms and governance.
    • API eNVy keeps the application inventory and current endpoints in sync with gateway policies as apps move to internet-accessible postures.

Shadow APIs and Zombie APIs

When an automated documentation process such as Swagger visualizes the API endpoints in an application, it can miss a surprisingly large number of active endpoints, for a number of shockingly understandable reasons:

  • Developers may intentionally build secret APIs into a system for testing or development purposes, then leave them in the source code (shadow APIs).
  • Organizations may have deprecated previously legitimate APIs, or replaced them with new versions when updating their applications, without removing the older versions. Sometimes this is intentional, for instance to preserve backward compatibility with older applications still installed in the field (zombie APIs).

Both of these categories are often also referred to as rogue APIs.  A SAST tool that scans for API endpoints from source code, and a DAST tool whose map of all endpoints in an application comes from Swagger documentation, will naturally fail to uncover rogue APIs.  As a result, the most vulnerable endpoints in an application — the ones a malicious actor wants to concentrate on anyway — will go unrecognized.
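
The gap is easy to make concrete: diff the endpoints a spec documents against the endpoints actually observed responding (from gateway logs, traffic capture, or live probing). A hypothetical helper, assuming both inventories are available as sets of route strings:

```python
def find_rogue_endpoints(documented, observed):
    """Split the endpoint universe: routes observed live but absent
    from the spec are rogue (shadow or zombie APIs); routes documented
    but never seen responding may be stale spec entries worth pruning."""
    return {
        "rogue": set(observed) - set(documented),
        "stale_spec": set(documented) - set(observed),
    }

documented = {"GET /v2/users", "POST /v2/orders"}
observed = {"GET /v2/users", "POST /v2/orders", "GET /v1/users"}  # v1 lives on
print(find_rogue_endpoints(documented, observed)["rogue"])  # {'GET /v1/users'}
```

The set difference is trivial; the hard part, and the reason discovery tooling exists, is producing the `observed` set in the first place.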

Learn More from NightVision

NightVision’s eNVy API Discovery identifies, catalogs, and documents active, inactive, shadow, and zombie APIs.  By finding every endpoint, eNVy’s DAST effectively maps the entire attack surface of your application.

Denial-of-Wallet (DoW) attacks

One of OWASP’s Top 10 API security vulnerabilities is catalogued as Unrestricted Resource Consumption, referring to the ability of code — malicious or benign — to overtax endpoints with deluges of requests.  One pattern for a request that may be used in such a deluge is a deeply nested GraphQL resource request, such as for alternating properties of an object, like this:

query {
  titles {
    authors {
      titles {
        authors {
          titles {
            authors {
              titles {
                authors {
                  name
                }
              }
            }
          }
        }
      }
    }
  }
}

An API gateway whose query depth limit is manually set to 3 would thwart the execution of the above GraphQL query.  But such a gateway is often deployed as a serverless service, so each time it is invoked, it could incur a charge.  It’s like a robotic security guard that sends you a bill each time it gets kicked.  A SAST scanner should be able to spot an egregious example like the code fragment above.  However, if the GraphQL query were constructed dynamically — for instance, through the insertion of conditional directives such as @include — you’d need to execute the code to see exactly how much depth the query reached.
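
The depth check described here can be sketched by counting brace nesting in the raw query text. This is a crude stand-in for what a real gateway does (parsing the query and walking its selection sets), offered only to make the mechanism concrete:

```python
def query_depth(query: str) -> int:
    """Crude depth estimate: deepest '{' nesting in the raw query text."""
    depth = best = 0
    for ch in query:
        if ch == "{":
            depth += 1
            best = max(best, depth)
        elif ch == "}":
            depth -= 1
    return best

def allow(query: str, max_depth: int = 3) -> bool:
    """Reject queries nested more deeply than the configured limit."""
    return query_depth(query) <= max_depth

nested = "query { titles { authors { titles { authors { name } } } } }"
print(allow(nested))  # False: depth 5 exceeds the limit of 3
```

A production gateway would enforce this on the parsed AST, so that braces inside string literals or comments cannot fool the counter.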

As a result, a denial-of-wallet (DoW) attack could easily be constructed using an unrestricted number of simultaneous queries, each of which kicks the gateway, with each kick incurring a charge.  Another layer of response is to set a rate limiter or throttler to a reasonable cap, such as 100 requests per IP address per minute.  The way around that obstruction is to make the DoW attack distributed (DDoW) and deploy it from multiple addresses.  And if the rate limiter and throttler are themselves serverless functions, they too incur charges.
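
The 100-requests-per-IP-per-minute cap mentioned above is typically implemented as a sliding window. A minimal in-memory sketch (a real deployment would back this with a shared store such as Redis, precisely so the counting itself doesn't become another per-request cost center):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per key within the trailing window.
    In-memory sketch: state lives in one process and uses a monotonic clock."""
    def __init__(self, limit=100, window_s=60.0):
        self.limit, self.window_s = limit, window_s
        self.hits = defaultdict(deque)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window_s:  # evict hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the cap: reject without recording the attempt
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=2, window_s=60)
print([limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2)])  # [True, True, False]
```

Each IP gets its own budget, which is exactly why a distributed attack from many addresses slips past this defense, as the paragraph above notes.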

What makes modern API security “next-generation?”

API development has always been about validating inputs, authenticating users, encrypting sensitive information, and observing and adhering to permissions.  In the early 2000s, the prevailing model of networked applications was called web services, to distinguish them from what was still being considered “real software.”  At that time, the prevailing methodology for web services communication was SOAP, a messaging protocol based around XML.

When the WS-Security model for SOAP was first conceived, its champions initially touted their proposed strategy as “comprehensive.” [PDF]  When the model was first published, however, its initial practitioners conceded, “It is not feasible to provide a comprehensive list of security considerations for such an extensible set of mechanisms.”

Modern API security is more comprehensive because it pays closer attention to the then-unforeseen deficiencies introduced to the network by SOAP, and that could easily have been continued by REST and GraphQL.  Here’s how some of the significant evolutionary changes address the elements that API security of the past missed:

  • Direct fault observation — SOAP set a precedent for producing error messages (“Fault Response Messages”) that were, in their own way, comprehensive.  They were so descriptive of what went wrong that a malicious actor could infer quite a bit about the profile of the entire network.  GraphQL recognized this open window of exploitability right away.  But its developers’ suggestion that detailed error information only be revealed to properly authenticated roles has inadvertently led to reliance upon logging and event management, which at the outset were rather haphazard.  A multi-billion dollar industry emerged from managing security event logs.  Next-generation API security recognizes that bad behavior incidents will often fail to make the log, no matter how comprehensive the log management system may be.  Faults need to be observed, not just recorded, so that protection may be delivered at runtime.  DAST is built around direct observability, flagging faults and bad behavior as they happen, and spotting them in action before they are released to production.
  • Context-aware protection — REST made it feasible for a malicious actor with two or more active accounts to retrieve an object’s parameters with one account, then supplement a request using those parameters on another account to feign privileges to make higher-order, restricted requests.  The phenomenon of insecure direct object references (IDOR) has historically been one of OWASP’s more common API security quirks.  The logic that leads to successful authentication bypass exploits can be masked so effectively that SAST can miss it altogether.  Modern API security not only spots authentication bypass attempts in progress, but moves one step further back in time to observe and flag the parameter disclosures that lead to bypassed authentication in the first place.  Benign looking logic can lead to malicious incursion, and next-gen API security addresses this by implementing context-aware protection.  With this method, an observation tool can investigate whether a process, based on its recent behavior, appears to actually need a parameter that it’s requested.
  • Perception past the perimeter — While web application firewalls (WAF) were originally marketed to offer deep visibility into API traffic over the network, many WAF components utilize packet sampling to monitor traffic behavior, rather than examining every packet.  When traffic bottlenecks become an issue, DevSecOps personnel have been known to reconfigure their WAFs to allow some encrypted traffic to pass through unchecked, especially if it has a large packet size.  Malicious actors exploit this tendency by generating synthetic requests with oversized headers.  Modern API security enables an additional layer of application-level security, including by enabling signals and monitoring to be embedded in the application code itself.
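
The IDOR pattern in the second bullet can be probed automatically: fetch an object as its owner, then replay the same object ID under a second, unprivileged session and compare outcomes. A hedged sketch against a hypothetical client hook (`fetch(session, object_id)` returning an HTTP status code), with a toy backend standing in for a real API:

```python
def check_idor(fetch, object_id, owner_session, other_session):
    """Flag a potential IDOR: the same object ID is readable both by its
    owner and by an unrelated session."""
    return (fetch(owner_session, object_id) == 200
            and fetch(other_session, object_id) == 200)

# Toy backend that never checks WHO is asking (object-level authz missing):
acl = {"doc-42": "alice"}
def broken_fetch(session, object_id):
    return 200 if object_id in acl else 404

# The same backend with the ownership check in place:
def fixed_fetch(session, object_id):
    return 200 if acl.get(object_id) == session else 403

print(check_idor(broken_fetch, "doc-42", "alice", "mallory"))  # True: IDOR
print(check_idor(fixed_fetch, "doc-42", "alice", "mallory"))   # False
```

A real scanner would also vary the object IDs themselves (sequential, predictable, or harvested from earlier responses), which is the parameter-disclosure step the bullet describes.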

In recent years, API development has incorporated the notions of mapping HTTP routes to secure methods, pre-evaluating data for the presence of commands to thwart injections, and assessing output data before it’s rendered to ensure its adherence to compliance frameworks.  The key principle of next-generation API security that distinguishes it from previous incarnations is the enablement of fuller visibility into security incorporation and validation processes.  Rather than having faith that everything should come out reasonably okay at the end, next-gen API security illuminates defects and bad behavior at their inception.

With every new generation of API security, its practitioners assert themselves by declaring that now they embrace greater visibility, whereas they didn’t before.  This may not be the last time this happens.  The visibility that’s being incorporated with this latest round embraces a new awareness that the interface of the application — the “I” part of “API” — may cover a far wider span of functions and methods than its developers and maintainers believe it does, and more than they would assert they designed it to have.  Once an application becomes distributed and networked, it morphs into its own beast.  Clever individuals may craft permutations of requests that no developer ever intended to design or activate.  An application’s API is the entirety of the addressable network that it generates and supports, including the parts that were deployed intentionally and the other parts that happened to find their way there organically.

Learn More from NightVision

NightVision eNVy API Discovery is a quick, simple, and inexpensive solution that enables initial onboarding in just minutes, and soon produces results for developers and engineers in just seconds.

The key tenets of next-generation API security testing

There are two aspects of cybersecurity that a renewed attention to the integrity of the API will directly benefit.  The first, obviously, is API security.  In this respect, we mean the ability of the interface of a networked, usually distributed, application to facilitate access to data and functionality without endangering the integrity of the system, and without providing unauthorized access to privileged data and functionality to an unverified or non-authentic party.  Keep in mind, an API does not have to be defective to be insecure.

The second is the broader domain of application security, which refers to the integrity and efficiency of the programs, along with the platforms and infrastructure that support them.  An application’s security may be threatened when its API is vulnerable.  So there is a separate realm of application security testing, covered later in this article, that embodies applying first principles to strengthen application code, protecting it from exposure to vulnerability by insecure APIs.

API discovery and inventory management

From the perspective of anything that could breach the integrity of an API, whether maliciously or incidentally, an API presents itself as an attack surface.  It is not so much a wall that may or may not be penetrated as a programmatic structure where information transactions take place.  Comprehending the structure and dynamics of this surface requires us to adopt a rational, relatable set of metrics.

An API inventory is not just a list of function calls or the output from a Swagger scan.  The sensitivity of each API function to being probed or prodded directly maps to the level of risk the organization assumes by exposing that function.  An API need not be vulnerable to be sensitive.  Risk occurs whenever an API is capable of divulging information that could conceivably be weaponized against the organization, or misused somehow to its detriment.  This is true even if the API performs precisely as designed and maintains, by any measure, the highest integrity.

So a proper API inventory process is an evaluation of how much the API’s existence exposes the organization to risk.  The presence of any rogue API function — any endpoint that communicates information in a way that is not monitored, traceable, or auditable — is a threat to the organization even if that function technically does not constitute a vulnerability.  It’s like having a headquarters building with doors you don’t know are there, so you never think to lock them.

What makes the task of API discovery so confusing for a great many people is the ongoing disagreement over the fundamental question of whether an API is a singularity or a collective.  When the concept was first introduced to network computing in the late 1980s, an API was a library of accessible functions, each of which was accessed via a remote procedure call.  In common conversation among developers, an individual endpoint that responds to HTTP POST and GET requests could be an API, and the collection of all such endpoints constitutes all of the organization’s APIs.  Either definition is perfectly sensible, although if there were only one, we’d be a lot better off.  Although this may seem like nitpicking, security engineers have stated for the record that the issue of API nomenclature is the first and foremost sticking point preventing organizations from finally getting a handle on the API discovery process.

To make sense to both groups of API practitioners, the concept of API discovery must embody a complete reconnaissance of the communications structure of the application.  This investigation must turn up all of the following:

  • The complete route with which each function is addressed, including the fully-qualified domain name (FQDN), the method name, and subsequent paths
  • The identities of the physical host servers and virtual containers that support the complete API
  • All the integrations made by vendors, suppliers, and the organization’s own developers that have shifted the functionality of the API library beyond its original profile (and which may, in turn, have led to older, defunct, though still operable endpoints accessible as rogue APIs)
  • The inclusion, location, and protection of all active, machine-readable specifications using OpenAPI (Swagger), RAML, API Blueprint, or other formats, that once acquired and parsed may serve as maps for the entire API structure, including active and rogue endpoints
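
The first and fourth bullets combine naturally: once a machine-readable specification is acquired, the complete route list falls out of its `paths` object plus the server URL. A minimal sketch over an OpenAPI document already parsed into a dict (the spec values here are invented for illustration):

```python
def enumerate_routes(spec: dict) -> list[str]:
    """List every method+URL route an OpenAPI document declares,
    combining the first server URL with each path and operation."""
    base = spec.get("servers", [{"url": ""}])[0]["url"].rstrip("/")
    routes = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                routes.append(f"{method.upper()} {base}{path}")
    return sorted(routes)

spec = {
    "servers": [{"url": "https://api.example.com/v2"}],
    "paths": {"/users/{id}": {"get": {}}, "/orders": {"post": {}}},
}
print(enumerate_routes(spec))
# ['GET https://api.example.com/v2/users/{id}', 'POST https://api.example.com/v2/orders']
```

Of course, this enumerates only what the spec admits to; the rogue endpoints the bullet list warns about are exactly the ones such a walk will never surface.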

A complete API inventory management regimen that follows from the discovery process should document the following:

  • The identities of applications to which API requests are mapped
  • The identity, structure, format, and sensitivity levels of the data which API functions return
  • The identities of the roles and/or individuals who have responsibility for managing the data that API functions transact
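
Those three documentation requirements map naturally onto a structured inventory record. The field names below are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class ApiInventoryRecord:
    route: str               # e.g. "GET https://api.example.com/v2/users/{id}"
    application: str         # application the API requests map to
    data_sensitivity: str    # e.g. "public", "internal", "pii"
    data_format: str = "json"
    owners: list = field(default_factory=list)  # roles/individuals accountable

rec = ApiInventoryRecord("GET /v2/users/{id}", "user-service", "pii",
                         owners=["data-platform-team"])
print(rec.data_sensitivity)  # pii
```

Keeping records like this in version control alongside the code makes drift between the inventory and reality visible in review, rather than discovered during an audit.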

Learn More from NightVision

NightVision’s API discovery method is a blend of static and dynamic analysis.  Through static analysis, which generates an OpenAPI inventory file, NightVision eNVy obtains a complete site map of all API endpoints and the domains in which they operate, including rogue API functions that may have been missed.  Then using dynamic analysis, eNVy conducts tests on all discovered endpoints to evaluate their sensitivity to, for instance, divulging unprivileged data.  
You can sign up for NightVision eNVy now for free.

Advanced API security testing

Penetration testing (“pentesting”) of modern applications does not necessarily mean exploiting them in production under black-hat scenarios.  NightVision eNVy implements a form of penetration testing as its approach to DAST: by deploying applications under development within a secure, publicly inaccessible staging environment, eNVy can interact directly with the API in a live setting, scanning for compromised functions and observing all logging and outputs, without endangering or compromising the application in production.

No DAST tool, however, is a complete replacement for a professional offensive security test.  NightVision partner Sidekick Security offers a comprehensive suite of offensive security testing services, including staged API vulnerability examinations.

Inside organizations, DevSecOps personnel may stage live penetration tests of API-accessible applications using a variety of tools, including:

  • Burp Suite, whose Community Edition includes a proxy server for simulated attacks that intercepts HTTPS traffic and facilitates inspection and live modification of requests and responses, plus whose commercial edition includes an automated vulnerability scanner that audits requests for common exploits such as cross-site scripting (XSS) and SQL injection
  • Zed Attack Proxy (ZAP), which implements a manipulator-in-the-middle between the application and browser, intercepting and modifying messages for experimentation purposes before forwarding them towards their intended destination
  • Fuzz Faster U Fool (ffuf), which is a fuzzing tool that utilizes brute-force methods to identify hidden, exploitable directories, paths, routes, and files, producing verbose reports of its findings
  • Wfuzz, which is another fuzzing tool that analyzes HTTP requests and systematically replaces all references to the word FUZZ with permutations of parameters, authentication tokens, common names of files and directories, and information often found in request headers, to determine whether susceptible APIs will act on those mangled requests by attempting to parse them
  • NightVision, which we believe takes many of the strengths of the point-and-shoot tools above and strips away their weaknesses, thanks to our state-of-the-art pairing of DAST and SAST that works faster, more accurately, and fits seamlessly into the CI/CD pipeline

API Security Posture Management (ASPM)

APIs are not exclusively usable the way they were designed or intended.  API discovery applied on a continual basis enables an organization to maintain a complete assessment of how its application functionality is accessible, and through what means.

The information collected through API discovery should not only be kept and used by developers.  Security and DevSecOps personnel charged with oversight of risk management will also benefit from what discovery reveals about the integrity of the application network.  API Security Posture Management (ASPM) refers to a method for making API discovery information actionable in the risk management and compliance processes.

NIST CSF 2.0

The US National Institute of Standards and Technology’s Cybersecurity Framework (CSF 2.0) isn’t a law or set of regulations, but rather a means for organizations to visualize cybersecurity goals in a common manner, and implement a common methodology to address them.  CSF 2.0’s six core functions — Govern, Identify, Protect, Detect, Respond, and Recover — map directly to best practices for implementing and maintaining API security through discovery.

SOC 2

In the US, organizations often hire independent auditors to conduct investigations into their cybersecurity posture.  The System and Organization Controls framework (SOC 2, built on the Trust Services Criteria) enables those auditors to attest to the security posture of an organization’s systems.  The criteria cover five principal categories: Security, Availability (which also pertains to the accessibility of functionality through APIs), Processing Integrity, Confidentiality, and Privacy.

The figures obtained through API discovery apply directly to the areas of insight covered by the SOC 2 questionnaire: for example, authentication methods, error handling procedures, versioning protocols, rate limiting policies, and data format specifications.  For an organization to maintain its security posture through an ASPM regime, it needs a robust and continual API discovery policy in place.

PCI DSS

The world’s leading payment card companies jointly developed the Payment Card Industry Data Security Standard (PCI DSS) as a means of mandating and verifying that retailers and other payment processors adhere to security standards, on pain of stiff fines.  Requirement 6.2.3 of PCI DSS applies to the testing and validation of custom software code before it is released to production.

The need for API security is amplified greatly by the fact that developers frequently customize their API endpoints, facilitating data exchanges and transactions particular to their organizations’ needs without altering the core source code.  When organizations that process financial transactions fully comply with PCI DSS, they establish a working environment that essentially necessitates the creation and adoption of API coding best practices.  A programmatic approach to API security requires direct observation of API behavior in action.  SAST alone won’t cut it.

Learn More from NightVision

NightVision Has Completed a SOC 2 Exam. Here’s What It Means for You.

The key tenets of next-generation application security testing

If you apply next-generation API security at the proper stage of the application lifecycle — a stage we argue is further “left” in the process — then the real payoff comes from the ability to inform the application’s architecture early in its development.  A human code architect has the mandate to imbue an application with fundamental coding principles for security and integrity.  Modern API security, applied during the design and staging phases of development (not just the production phase), can and should guide decisions about how better to apply these principles.  This way, once the application is fully built and deployed into production, its security becomes tighter, observable, predictable, measurable, and extensible.

What is a secure application?

First and foremost, a secure application avoids misusing data.  By “misuse” in this context, we mean enabling any entity (user or agent) to utilize data for purposes for which that entity is not authorized, or to do damage to the application, or to harm the application’s underlying network and infrastructure.  There’s a variety of potential disruptions that can occur: denial of service, unauthorized access, disruption of infrastructure, even conceivably the collapse of the entire system.  All of these potential outcomes begin with misuse of data.

There is a circle of thought which maintains that, once the developers of an application have achieved a desired state of API security, developers can rest assured the application has access solely to the data to which it is already entitled.  Type checking would no longer be something the application needs to do, for example, because the API would have already taken care of that.  In fact, this assertion has been offered more than once as a definition of shift-left: catching vulnerabilities at the API before they reach the application core.  An extended form of this argument is something you may have read before elsewhere, and on first read it does seem persuasive:  Integrating automated security tools and policies into the CI/CD workflow, including adding SAST and DAST, enables developers to focus on crafting business logic without being “dragged down” with security issues.

Sensing there may be some trouble with that interpretation, competitors have put forth an alternative approach:  SAST, they argue, has been the poster child for shift-left; therefore, because SAST is ineffective by itself, shift-left is ineffective as an ideal.  As this line of reasoning continues, since DAST is capable of detecting a greater number of vulnerabilities — both known and unknown — at runtime (a.k.a. in production), a so-called "DAST-first" approach would actually shift security right in the SDLC model, waiting until the code is completely built and staged, then observing production behavior very closely in keeping with a "fail fast" mindset.  This reasoning is often cited by practitioners of the "build early, build often" approach to so-called accelerated DevOps, and has on occasion been co-opted by subscribers to the Agile framework.

One of the classic texts on this topic — API Security in Action by Neil Madden — defines API security as the intersection of application security, network security, and information security.  Although this is not technically wrong, a more updated interpretation should perhaps take into account an emerging reality:  While API security is a common factor of all three practices, it is not a subset of any of them.  Rather, API security should inform the application, including giving it the tools and resources it requires to ensure for itself that it is fulfilling its first mission: avoiding misuse of the data.

Integrating API security into the SDLC

Almost since the term "software" was first coined, there has been a professed ideal that true software security and integrity begin at the design phase.  Yet developers have not always had the tools and resources that would enable them to envision what code integrity should look like from the start, or to observe bad behavior for themselves before their code is ever deployed to production.

A holistic approach to security, in the context of modern API security, enables a developer to make adaptations and corrections to their code much closer to their “first drafts.”

The traditional SDLC is, ironically, not depicted as a cycle at all but rather as a six-phase linear itinerary.  Planning is at the beginning of this line, and deployment at the end.  Whenever technologists are forced to consider the SDLC as genuinely cyclical, they will begrudgingly concede that software development is indeed continuous, as the word "cycle" suggests.  From there, they will eventually accept that security is ideally integrated into each stage of the cycle, and when they've finally achieved peak honesty, they'll admit that security truly encompasses the cycle in its entirety.

In an environment where security is only given any consideration whatsoever after software has been deployed into production, security actually plays no part in the SDLC.  Once security is given the serious attention it requires to positively influence the architecture of software, then security becomes a factor in every stage of the SDLC.  Yes, the true degree of security of any amount of code may only be gauged when that code is executed.  Yet that execution can be engineered to take place much earlier in the development process, even before the typical testing and staging phases in DevOps, in a limited and secure environment only the developer needs to see.  In this sense, where the developer is finally allowed to think about security without risk of offending other people in the organization paid to be “the security guys,” we can say the organization thinks about security earlier than before.  For this reason — again, for the left-to-right readers among us — you will hear that this ideal is responsible for shifting security left.

Perhaps more accurately, a holistic approach to security shifts this mindset left, right, up, and down.  It acknowledges that security is integral to thinking about code, as opposed to an afterthought that comes up once code is deployed.  It only makes sense that a security tool should help developers to think about their code better.

Learn More from NightVision

Make meaningful security improvements throughout the SDLC.

Empower your DevSecOps with NightVision.

Where API security enriches application coding principles

The ideal to which the shift-left concept points is to enable security principles to be baked into code at its inception.  The effectiveness of this capability cannot be estimated or assumed.  It may only be measured directly, which is why next-gen API security is crucial to this effort.

Principle of Least Privilege (POLP)

When articulated properly, the Principle of Least Privilege asserts that no entity is ever given more authority or access than is necessary for it to perform its task, and no authorized human user is granted more access than is required for that user to fulfill their designated role.

The implications of this principle are too often unspoken.  What POLP means is that there is no entitlement in a properly secured system.  This contradicts the assertion made by proponents of role-based access control (RBAC) that the institution of roles makes it easy to designate access to users or entities based on what their anticipated level of usage entitles them to.  “Traditionally,” writes the AWS DevOps & Developer Productivity Blog, “customers have used RBAC to manage entitlements within their applications.  The application controls what user can do, based on the roles they are assigned.”

No application can simultaneously adhere both to the idea that entitlements can be delegated to users or authenticated entities in advance using three or four general accessibility levels of RBAC, and to the Principle of Least Privilege.  It might not be possible or practical for an API to acknowledge hundreds of individual access rights, or separate Access Control Lists (ACLs), for each entity granted access to a system.  RBAC is a necessary thing, at least in the beginning.  Next-gen API security makes it feasible for a developer to design roles that are more granular and a tighter fit for different classes of user, especially as newer classes become necessary over time.
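
A rough illustration of the kind of granular role design described above, with entirely hypothetical role and permission names:

```python
# Hypothetical sketch: refining a coarse RBAC scheme into tighter-fitting
# roles without re-architecting the permission check itself.

ROLES = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    # A newer, narrower class added as usage patterns emerged:
    # auditors can read and export logs, but never write.
    "auditor": {"read", "export_logs"},
}

def allowed(role: str, action: str) -> bool:
    """Grant only what the role explicitly includes -- no entitlement."""
    return action in ROLES.get(role, set())
```

Note that an unknown role yields an empty permission set, so the default is denial — consistent with the no-entitlement reading of POLP above.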

It may mean there are more classes of user to consider.  Yet a dynamic permissions system, used in conjunction with an identity and access management (IAM) system for microservices applications, can be made to scale across service domains.  This makes it possible for the IAM to tailor new, more refined, and more applicable roles dynamically as the application deployment evolves organically, without the application needing to be re-architected, and without having to re-engineer API security.

The preceding paragraph may actually have described policy-based access control (PBAC), in which the concept of the role is replaced by a kind of working, dynamic contract between the user or entity and the application.  So be it.  PBAC may be what RBAC must evolve into, to ensure that a growing number of classes are always assigned the minimum privilege possible.
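
A PBAC arrangement of this kind can be sketched as a set of policies, each a predicate evaluated per request.  The departments, hours, and fields below are invented for illustration only:

```python
# Hypothetical PBAC sketch: instead of static roles, access is decided by
# dynamic policies evaluated against the user, resource, and context of
# each individual request.

from dataclasses import dataclass

@dataclass
class Request:
    user: str            # authenticated identity
    department: str      # attribute of the user
    resource_owner: str  # attribute of the resource
    hour: int            # context: hour of day, 0-23

POLICIES = [
    # Owners may always access their own resources.
    lambda r: r.user == r.resource_owner,
    # Finance staff may access only during business hours.
    lambda r: r.department == "finance" and 9 <= r.hour < 17,
]

def permitted(req: Request) -> bool:
    """Access is granted only while some active policy allows it."""
    return any(policy(req) for policy in POLICIES)
```

Because `POLICIES` is just data, new contracts can be added or retired at runtime as the deployment evolves, without re-engineering the check itself.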

Next-gen API security gives developers a way to observe application behavior at a very deep level, informing them as to the extent to which certain pre-defined security roles may be too loose for a new and emerging class of user.  Networked, distributed applications have a tendency to evolve as they scale up and scale out.  The architectural models that depicted their operations while they were being prototyped, may no longer be applicable once their deployments have become mature.  This is one of a multitude of reasons why continual API behavior observation has become absolutely critical for modern web applications.

Learn More from NightVision

NightVision AppSec simulates real-world attacks to determine which parts of your application are exploitable, tracing its findings directly back to code.  Capabilities include API identification, comprehensive API scanning, shadow API discovery, pinpointing and highlighting vulnerable code, seamless integration, and plug-and-play testing.
Zero-Trust Security Model

You may have read an argument made by security vendors in recent years that the zero-trust ideal and the API security ideal are incompatible.  The reason, they assert, is because zero-trust is about denying access as opposed to restricting access, while API security is about granting access.

This is a misinterpretation.  Anyone reading NIST's seven tenets of zero-trust [PDF] could reasonably draw the following conclusions:

  • All access is ascertained and delegated only at the time it is required
  • Any entity may only be granted access to resources when that access is required, after which time that grant is revoked (once again, there is no entitlement)
  • All policies granting and denying access are dynamic and continually reviewed, as a result of all transactions and network activity being continuously monitored
  • Systems are capable of improving their own capability to provide and deny access and authentication based on the activity they are monitoring
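
The second of these conclusions — time-bounded grants with automatic revocation — can be sketched in a few lines.  The class name and 60-second TTL are illustrative assumptions, not drawn from NIST's tenets:

```python
# Hedged sketch of one zero-trust conclusion: access is granted per
# request, for a bounded time, and revoked afterward (no standing
# entitlement). Timestamps are passed in explicitly for clarity; a real
# system would use a monotonic clock such as time.monotonic().

class GrantStore:
    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._grants = {}  # (entity, resource) -> expiry timestamp

    def grant(self, entity: str, resource: str, now: float) -> None:
        """Delegate access only at the moment it is required."""
        self._grants[(entity, resource)] = now + self.ttl

    def is_allowed(self, entity: str, resource: str, now: float) -> bool:
        """Check a grant; expired grants are revoked on sight."""
        expiry = self._grants.get((entity, resource))
        if expiry is None or now > expiry:
            self._grants.pop((entity, resource), None)  # revoke stale grant
            return False
        return True
```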

That last conclusion might sound as though zero-trust requires AI to be practicable, and indeed it seems AI would certainly help.  What is clearly necessary in order to bring about this more accurately interpreted ideal of zero-trust is that API security must be proactive, constant, and vigilant.  Put another way, it can't be set up once and left to run unattended.  It is an "eyes-on" mission.

Defense in Depth (DiD)

The non-profit Center for Internet Security (CIS) embraces the concept of Defense in Depth, acknowledging that no single security mechanism or methodology ensures absolute safety, but that all of them together (or at least, as many as are compatible with one another) are worth whatever redundancy may result.  So while a Google or YouTube search may uncover a multitude of "showdown" comparisons between SAST and DAST, or DAST and WAF, asking the perennial question, "Which should you choose?", DiD maintains that these technologies are not alternatives to choose between, but necessities to combine.

SAST and DAST both have significant roles to play in application security testing.  SAST does not scan live applications but rather source code, matching application code fragments against recorded patterns associated with known, catalogued vulnerabilities.  That's essentially the definition of code-level security.  Because this code is not executed, SAST is not looking for behavior patterns, but for suspicious semantics.  DAST, by contrast, exposes suspicious behaviors.  A Defense in Depth approach to architecture maintains that, since SAST and DAST each have a separate, useful role to play, both have a place in an application's overall security.
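
To make the contrast concrete, here is a deliberately toy version of the SAST idea: scanning source text — not a running application — for semantics associated with a known vulnerability class, in this case SQL built by string concatenation.  Real SAST tools parse code into abstract syntax trees; this single regex is only a sketch:

```python
# Toy SAST sketch: flag source text where a SQL keyword appears inside a
# string literal that is then concatenated with "+", a classic SQL
# injection smell. This is pattern matching over code, not execution.

import re

CONCAT_SQL = re.compile(
    r"(SELECT|INSERT|UPDATE|DELETE)[^\"']*[\"']\s*\+",
    re.IGNORECASE,
)

def scan(source: str) -> bool:
    """Return True if the source matches the suspicious pattern."""
    return bool(CONCAT_SQL.search(source))
```

A DAST tool, by contrast, would never read this text at all; it would send crafted inputs to the deployed endpoint and watch how it behaves.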

Microsegmentation

One of the security practices that Defense in Depth legitimizes and promotes is microsegmentation.  At its inception, this methodology was about maximizing the modularity of applications, and mediating the interaction between those modules with a web application firewall.  This way, access to functions could be granted or denied at a very granular level.  Arguably, if an application is finely segmented, a less granular RBAC scheme may be effective enough at granting and denying access to individual functions without having to update policy each time.

Microsegmentation is typically introduced to newcomers to the topic by explaining it reduces the attack surface of an application.  This is about as sensible as arguing that a semi-trailer truck, by virtue of being constructed using a greater number of parts, is far less likely to be struck on the highway by another vehicle.

As mentioned earlier, the insecure direct object reference (IDOR) is one of OWASP's ten most common means of bypassing API security.  An IDOR attack targets more than one API endpoint at a time, using the information gleaned through one interaction — often through brute force — as parameters (direct object references) in one or more other interactions.  The endpoint is prone to assuming that the values for these parameters were obtained legitimately, and thus presumes the agent in the transaction was legitimately authorized.  Obviously, the means of penetrating a microsegmented application is to attack a plurality of segments simultaneously and leverage the confusion generated in the process.  Making the targets smaller, as it turned out, only made the attack surface larger.
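
The IDOR failure mode, and its remedy, can be shown in a few lines.  The order data and handler names here are hypothetical:

```python
# Sketch of the IDOR failure mode with invented data: the vulnerable
# handler trusts a client-supplied object ID outright; the safe handler
# also verifies that the authenticated user owns the referenced object.

ORDERS = {
    101: {"owner": "alice", "total": 40},
    102: {"owner": "bob", "total": 95},
}

def get_order_vulnerable(order_id: int) -> dict:
    # Assumes the ID was obtained legitimately -- classic IDOR:
    # any caller who guesses an ID can read anyone's order.
    return ORDERS[order_id]

def get_order_safe(order_id: int, authenticated_user: str) -> dict:
    # Validate the object reference against the caller's identity.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != authenticated_user:
        raise PermissionError("not authorized for this object")
    return order
```

The fix is exactly the kind of per-segment, per-request validation the next paragraph argues for.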

Granular permission and authentication checking is still a good idea — but not for the reason originally concocted.  When you adopt a Defense in Depth approach and supplement your WAFs with next-gen API security, you have the means for every segment of your application, no matter how macro or micro it may be, to continually validate permissions and authorities for users and entities as traffic flows between segments.

The imperative of next-gen AppSec

We can talk about the high cost organizations pay when their applications are discovered to be insecure.  Let’s state the unstated fact that should be most obvious:  Deferring the act of building secure applications until a later and later stage is a loan.  That time you buy to keep security from “dragging down” your developers is repaid with interest.  And the interest rate, as history shows us time and again, is huge.

What we’re calling next-gen application security should, in retrospect, have been first-generation.  We grew accustomed to a development environment where security was shoved to one side or the other — not so much left or right as down.  Developers and security engineers rarely met one another.  Even their tech conferences were separate events.  Now is the opportunity to integrate security into DevOps at a deeper level than just incorporating “Sec” into the portmanteau.

Now is your opportunity to secure the future in the present, rather than waiting for the future to become the past.  Schedule a call with a NightVision security expert now to learn how your organization can align its security development with its application architecture development.  Start remediating vulnerabilities at the root.

Contact Us