When standards meet patents: The economics of SEPs and FRAND (Part II)
Standards solve a coordination problem—and then create a new one. Once an industry converges on a technical interface, compatibility becomes a requirement, not a preference. If implementing that interface requires patented technology, the patents become standard-essential, and licensing shifts into a high-stakes environment shaped by lock-in and complements: missing one license can block an entire product. Part I used this logic to explain hold-up, a recurring source of dispute in SEP markets. Part II asks what keeps this system workable in practice: how FRAND tries to pin royalties to ex ante value, why injunction leverage is the pressure point, and how thickets, stacking, and pools complicate the governance problem. The literature is quite complex and a bit jargon-heavy, so buckle up.
FRAND commitments: the governance response to SEP bargaining power
SSOs’ “FRAND” policies can be understood as an attempt to manage this transformed bargaining problem. FRAND stands for “Fair, Reasonable, and Non-Discriminatory.” The goal of FRAND is to create a neutral zone in which royalties reflect the technology’s ex-ante incremental value, rather than the value created by lock-in. Daniel Swanson and William Baumol explain that FRAND does this by pushing the negotiation back toward an ex-ante benchmark—the value of the technology before the standard was chosen and before implementers became locked in. The relevant comparison is not “how valuable is this feature inside today’s ubiquitous standard?”, but rather “how much better was this technology than the next-best alternative that could have been standardized?” This idea aims to strip out the portion of value stemming from the industry’s coordination on the standard itself (and from implementers’ sunk investments), leaving the patent holder rewarded for genuine technical merit.
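The contrast between the two benchmarks can be made concrete with a toy calculation. Every number below is a hypothetical per-unit figure invented for illustration, not an estimate from the literature; the point is only the gap between the two quantities.

```python
# Swanson-Baumol ex ante benchmark in miniature. All dollar values are
# hypothetical, per-unit figures chosen purely for illustration.
value_chosen = 10.0     # value added by the technology that was standardized
value_next_best = 9.0   # value of the runner-up technology, assessed ex ante
switching_cost = 50.0   # cost of redesigning away from the standard ex post

# What FRAND targets: the technology's edge over the next-best alternative
# at the time the standard was chosen.
ex_ante_increment = value_chosen - value_next_best

# What unconstrained ex post bargaining could extract: up to the
# implementer's cost of abandoning the standard.
ex_post_leverage = switching_cost

print(f"Ex ante incremental value (FRAND anchor): ${ex_ante_increment:.2f}/unit")
print(f"Ex post hold-up leverage (lock-in value):  ${ex_post_leverage:.2f}/unit")
```

The technology is only one dollar per unit better than its rival, yet once the industry is locked in, a licensor bargaining against a fifty-dollar redesign cost could demand far more. FRAND asks the negotiation to track the first number, not the second.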
If that sounds like a clean solution, the complication is that FRAND is intentionally incomplete. SSOs typically require a commitment to license on FRAND terms, but they generally avoid specifying numbers: they do not set a royalty rate, define the precise royalty base, or provide a formula that mechanically maps a patent portfolio into a fee. In practice, FRAND functions less like a posted price and more like a contractual framework—a set of constraints on bargaining behavior—within which firms negotiate bilaterally (and, when negotiations fail, ask courts or arbitrators to fill in the missing terms). Mark Lemley tells us that this incompleteness is not accidental. If SSOs tried to dictate prices centrally, they would risk becoming de facto price-setting bodies, raising governance and competition-law concerns, and would still face severe information problems: the value of a technology depends on context, alternatives, and portfolio interactions that are hard to observe ex ante.
The most concrete way to see why FRAND matters is through injunction leverage. An injunction means that a court can order an implementer to stop making, using, or selling the standard-compliant product, so the SEP holder can credibly threaten to shut down sales unless a license is agreed. Mark Lemley and Carl Shapiro explain that the essence of hold-up is not merely that implementers are locked in; it is that a patent holder may credibly threaten exclusion—an injunction—unless the implementer agrees to pay. That threat can shift bargaining outcomes even when the patent’s contribution is modest, because the implementer’s downside from disruption is so large. FRAND commitments are meant to attenuate this dynamic: they signal that licensing should be available on reasonable terms, and, in many settings, they are invoked to argue that exclusionary threats should be limited when an implementer is willing to take a license.
At the same time, limiting exclusion can create a different concern: when enforcement is slow and outcomes are uncertain, some implementers may have incentives to delay taking a license if the discounted expected cost of litigation is lower than the cost of reaching an agreement early—feeding the “hold-out” narrative. [1] However, evidence that post-judgment ongoing royalties are often higher than pre-judgment rates implies that delay can be costly when the implementer ultimately loses, strengthening incentives to settle in many cases. [2]
More generally, FRAND is a compromise between two risks: too much exclusionary leverage can let licensors capture value created by lock-in, but too little leverage can weaken payment discipline and reduce incentives to contribute technology to open standards. A vivid illustration is IEEE Standards Association’s 2015 patent-policy revision discussed by Michela Bonani, which sought to reduce FRAND uncertainty by offering more guidance on “reasonable” royalties (including a recommendation to anchor royalties to the smallest salable compliant implementation) and by tightening the conditions under which SEP holders could seek prohibitive orders (injunctions). In the aftermath, several major contributors—including Qualcomm, Ericsson, and InterDigital—reportedly refused to submit Letters of Assurance under the revised policy.
FRAND is therefore best understood as institutional plumbing rather than a magic number. It is an attempt to preserve the gains from standardization—interoperability, network effects, scale—while preventing the bargaining environment created by essentiality from turning into either opportunistic extraction by SEP holders or strategic delay by implementers.
From one SEP to many: patent thickets and royalty stacking
So far, the discussion has treated licensing mainly as if there were a single essential patent holder and a single implementer. In reality, major standards are typically covered by many patents owned by many entities. This creates what the literature calls a patent thicket: a dense landscape of overlapping and potentially blocking rights that an implementer must clear to bring a standard-compliant product to market. The thicket is not just “a lot of patents.” It is fragmented ownership plus overlap, which turns licensing from a bilateral negotiation into a multi-party coordination problem. In a previous post, we discussed one consequence of this problem: the tragedy of the anti-commons.
A visible symptom of this fragmentation is “royalty stacking.” When each SEP holder sets its royalty (or negotiates its share) independently, the sum of royalties can become large—even if each individual request appears “reasonable” when viewed in isolation. A simple illustration makes the point. Suppose a product needs licenses from ten SEP holders, and each asks for a 1–2% royalty. None of these numbers sounds outrageous on its own. But in aggregate, the implied licensing burden can quickly reach double digits, before even accounting for non-SEP IP, compliance costs, or manufacturing margins. The implementer experiences the total, not the components.
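The stacking arithmetic is worth spelling out. The ten royalty rates below are invented for illustration, tracking the 1–2% range in the example.

```python
# Royalty stacking illustration: ten SEP holders, each asking 1-2% of the
# product price. All rates are hypothetical.
rates = [0.01, 0.02, 0.015, 0.01, 0.02, 0.012, 0.018, 0.01, 0.015, 0.02]

print(f"Largest single ask: {max(rates):.1%}")   # no one asks more than 2%
print(f"Aggregate burden:   {sum(rates):.1%}")   # yet the stack reaches 15%
```

Each individual ask passes a casual reasonableness test; only the sum reveals the problem.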
This aggregation problem has a clean economic structure. Carl Shapiro emphasizes that when multiple firms control complementary (blocking) rights, they face a “Cournot complements” problem: each licensor sets its terms to maximize its own return while ignoring the effect on total output (and thus on other licensors’ royalty revenues)—but higher aggregate royalties depress implementation and shrink the pie. The result is an inefficiently high total royalty burden relative to what a coordinated set of licensors would choose.

The thicket problem is compounded by uncertainty about what is truly essential. As noted earlier, many SSOs rely on self-declaration, and essentiality is rarely verified at scale. Mathias Dewatripont and Patrick Legros describe a related strategic response as “padding”: firms have incentives to declare (or contribute) patents that are not strictly essential in order to increase their perceived share of the licensing revenue associated with the standard. Padding makes the thicket denser and increases the number of claims that must be assessed, negotiated, or litigated—further intensifying the Cournot complements dynamic.
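Shapiro's Cournot-complements logic can be put in numbers with a stylized model. The linear demand curve (Q = 1 − P, where P is the sum of per-unit royalties) and the zero-cost assumption below are illustrative modeling choices, not drawn from any empirical setting.

```python
# Cournot complements in a linear-demand toy model (Q = 1 - P, zero costs).
# Each licensor i maximizes r_i * (1 - sum of all royalties); the symmetric
# Nash equilibrium is r_i = 1 / (n + 1). All functional forms are assumptions.

def independent_total_royalty(n: int) -> float:
    """Aggregate royalty when n licensors price independently: n / (n + 1)."""
    return n / (n + 1)

def coordinated_total_royalty() -> float:
    """A single coordinated licensor sets the monopoly price 1/2 for the
    whole bundle, whatever the number of patents inside it."""
    return 0.5

for n in (1, 2, 5, 10):
    total = independent_total_royalty(n)
    print(f"n={n:2d}: independent total = {total:.3f}, "
          f"coordinated total = {coordinated_total_royalty():.3f}, "
          f"output = {1 - total:.3f}")
```

As n grows, the uncoordinated total climbs toward the full demand intercept while output collapses, whereas a coordinated licensor would charge less in total and sell more. This is the sense in which stacking is inefficient even for the licensors themselves.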
Finally, thickets impose transaction costs that go beyond the royalty rate itself. Each additional licensor means additional search and verification, negotiation, and enforcement risk. These frictions create delays and uncertainty that can be especially burdensome for smaller entrants, who find it harder to amortize legal and licensing costs. Consistent with this view, Iain Cockburn and Megan MacGarvie find that software start-ups in thicketed markets experienced delayed initial VC funding and that the negative effects of thickets were largely driven by their impact on new entrants (small, specialized firms) rather than established incumbents. Thus, patent thickets can function as an entry tax on standards-based innovation—one that is paid not only in money, but also in time and managerial attention.
Patent pools: when bundling solves (some) coordination failures
If the problem with SEP licensing is “too many doors to knock on,” patent pools are the canonical attempt to create a one-stop shop. A patent pool is a licensing arrangement in which multiple patent holders aggregate (some of) their patents—often those claimed to be essential to a standard—and offer a single portfolio license through a common administrator. Instead of negotiating coverage separately with each SEP owner, an implementer can obtain coverage for a bundle in a single transaction.
Economically, pools can address two distinct frictions created by patent thickets. First, they reduce transaction costs: fewer counterparties to identify, fewer bilateral negotiations, and a clearer licensing pathway for entrants who lack the scale to negotiate dozens (or hundreds) of agreements. Second, pools can mitigate royalty stacking by internalizing the complements pricing externality discussed above: when complementary inputs are priced independently, each supplier adds its own markup without accounting for its effect on total adoption, leading to an overall price that is too high and output that is too low. Bundling complementary rights can reduce that inefficiency by aligning incentives around the total royalty burden rather than each individual slice, consistent with Josh Lerner and Jean Tirole’s finding that pools enhance welfare by lowering prices when they aggregate complementary technologies. Historical evidence by Ryan Lampe and Petra Moser on the 19th-century sewing machine industry also suggests that pools can function as a coordination technology in their own right—sometimes facilitating diffusion by clearing fragmented rights—though the welfare effect depends on whether the pool aggregates complements or suppresses rivalry.
But pools are not a free lunch, and the controversies around them track the same economic logic. Pools are most defensible when they bundle complementary patents—rights that are jointly needed to implement the standard. If a pool starts bundling substitute patents (covering alternative ways of doing the same thing), it can become a vehicle for suppressing competition between technologies or for raising rivals’ costs, as Richard Gilbert discusses extensively. This is why essentiality screening is central. In principle, if a pool license covers only truly essential patents, the bundle behaves like a set of complements. In practice, essentiality is hard to verify, and “padding” incentives do not disappear simply because a pool exists. The credibility and rigor of the pool’s inclusion rules—what qualifies, how it is checked, and how disputes are handled—therefore matter greatly for whether the pool reduces frictions or exacerbates them.
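The complements-versus-substitutes distinction drives opposite price effects, which a minimal sketch makes visible. It reuses a linear-demand toy model (Q = 1 − P, zero marginal cost); all functional forms and numbers are illustrative assumptions.

```python
# Pooling complements vs. pooling substitutes, in a toy linear-demand
# model (Q = 1 - P, zero marginal cost). All numbers are illustrative.

def unpooled_complements(n: int) -> float:
    """n blocking complements priced independently: the Cournot-complements
    Nash total royalty is n / (n + 1), above the monopoly level for n > 1."""
    return n / (n + 1)

def unpooled_substitutes() -> float:
    """Perfect-substitute licenses compete Bertrand-style: the price is
    driven down to (zero) marginal cost."""
    return 0.0

POOLED = 0.5  # a pool prices the bundle like a single monopolist

n = 5
print(f"Complements: {unpooled_complements(n):.2f} -> {POOLED:.2f} (pool LOWERS price)")
print(f"Substitutes: {unpooled_substitutes():.2f} -> {POOLED:.2f} (pool RAISES price)")
```

Which case a real pool resembles turns on the essentiality screening discussed above: a bundle of genuine complements behaves like the first line, while a bundle that absorbs rival technologies behaves like the second.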
A final wrinkle is that markets do not always converge on one pool, and not all major SEP holders join. Sometimes there are competing pools for the same standard, offering different coverage, different rates, or different governance. For example, licensing for the HEVC/H.265 video-compression standard has been offered through multiple rival pools (including MPEG LA, Access Advance/HEVC Advance, and Velos Media). This situation can preserve some competitive pressure among licensing platforms, but it can also recreate coordination costs: implementers must compare bundles, assess gaps in coverage, and potentially take multiple pool licenses plus additional bilateral licenses. In other words, pools can simplify the “too many doors” problem—but with rival pools and partial membership, some of the fragmentation simply reappears in a new form.
Regarding the incentives of SEP holders to join a pool, Anne Layne-Farrar and Josh Lerner show that participation is higher when portfolios are more symmetric, while pools with more founders can deter later participation because rents are shared among more claimants. Antonio Tesoriere offers a complementary mechanism: when a pool shares revenue proportional to patent counts, it is stable against opportunistic reshuffling of patents (a SEP holder strategically splitting its patent portfolio into smaller portfolios). But this stability can come at the cost of participation, because small portfolio holders may prefer to remain outside and compete—highlighting a trade-off between stable sharing rules and broad membership.
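Tesoriere's split-proofness point is mechanical once the sharing rule is written down. The revenue figure and portfolio sizes below are hypothetical.

```python
# Count-proportional revenue sharing is split-proof: a holder's total take
# depends only on how many patents it contributes, not on how they are
# packaged into separate entities. All numbers are hypothetical.

def shares(portfolio_sizes, pool_revenue=100.0):
    """Each member's payout is proportional to its patent count."""
    total = sum(portfolio_sizes)
    return [pool_revenue * k / total for k in portfolio_sizes]

# A holder with 10 patents out of 100 earns 10...
before = shares([10, 90])[0]
# ...and splitting into two 5-patent shells leaves the combined take at 10.
split = shares([5, 5, 90])
after = split[0] + split[1]
print(before, after)  # 10.0 10.0
```

Under this rule, reshuffling portfolios changes nothing. The flip side, as the text notes, is that holders of small portfolios may still prefer to stay outside and compete, so split-proofness does not guarantee broad membership.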
Seen through this lens, patent pools are best understood as a market-design response to the same underlying issue: once standardization turns patents into complementary bottlenecks, the licensing problem becomes as much about coordination as about pricing. Pools can improve coordination, but their effectiveness depends on governance—especially on whether the bundle truly consists of complements, and whether inclusion is disciplined by credible essentiality checks—and on the breadth of their coverage.
Declared vs. true essentiality: a measurement and governance problem
At this point, a subtle but crucial distinction comes back into focus: declared essentiality is not the same thing as true essentiality. This measurement-and-verification problem is not a footnote: pool credibility, stacking estimates, and even FRAND disputes depend on knowing which patents are truly essential. In most SSOs, firms are asked (or required) to declare patents that may be essential to a standard. [3] As explained earlier, the declaration is typically based on the firm’s own assessment and is rarely verified ex ante by the SSO. The resulting SEP landscape is therefore noisy: it contains patents that are truly indispensable, patents that are plausibly relevant but not strictly required, and patents that are strategically “in the mix” even though they are inessential. A manual check by David Goodman and Robert Myers of patents declared essential to 3G cellular standards (3GPP and 3GPP2) found that only about 21% of these patents were actually essential. Lorenz Brachtendorf and colleagues confirm this noise using automated text analysis, finding that while truly essential patents show high semantic similarity to standards, a significant portion of declared SEPs exhibit low similarity, indicative of over-declaration.
This gap matters because it directly amplifies the coordination failures discussed above. Over-declaration—whether framed as caution, strategic positioning, or “padding”—inflates the apparent thicket by increasing the number of claimed rights that implementers must evaluate. That, in turn, raises transaction costs (search, legal review, and negotiation) and can worsen royalty stacking dynamics by expanding the set of parties with a claim to a share of licensing revenue. It also complicates patent pools: the more inessential patents are presented as essential, the harder it becomes for a pool to credibly claim that its bundle consists of complements rather than a mixture that includes irrelevant or even substitute technologies. Put differently, the economics of pools depends heavily on bundling genuine complements; over-declaration makes that classification problem harder and makes outcomes more contested.
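A back-of-the-envelope calculation shows how over-declaration translates into implementer costs. The 21% rate echoes the Goodman–Myers 3G estimate quoted above; the declared-patent count and per-patent review cost are invented assumptions.

```python
# How over-declaration inflates the evaluation burden. The essentiality
# rate follows the Goodman-Myers 3G figure; the other numbers are
# hypothetical assumptions for illustration.
declared = 1000          # patents declared essential (hypothetical)
true_rate = 0.21         # share actually essential (Goodman & Myers, 3G)
review_cost = 10_000     # cost to assess one declared patent, USD (hypothetical)

truly_essential = round(declared * true_rate)
wasted = (declared - truly_essential) * review_cost
print(f"Truly essential: ~{truly_essential} of {declared} declared")
print(f"Review spend on inessential declarations: ~${wasted:,}")
```

Most of the review budget in this sketch goes to patents that turn out not to matter, which is precisely the transaction-cost channel described above.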
Because of these costs, essentiality checks have emerged as a partial institutional response. The basic idea is straightforward: rather than relying solely on self-declarations, an independent technical review assesses whether a given patent claim is actually required to implement the standard. A credible checking mechanism can improve market functioning in three ways. First, it reduces uncertainty for implementers by clarifying what truly needs to be licensed. Second, it strengthens the informational foundations of pooling by helping ensure that pools bundle complements (and by making inclusion rules more defensible). Third, it can discipline incentives to “pad” declarations, since the expected payoff from declaring weakly related patents falls when those patents are likely to be screened out. Florian Schuett and Chayanin Wipusanawan formally model this dynamic, showing that essentiality checks reduce wasteful litigation by eliminating information asymmetries between patent holders and implementers.
A final implication is that SSO design matters. As Benjamin Chiao and colleagues empirically show, SSOs vary widely in their disclosure requirements (e.g., the specificity of required declarations) and licensing rules, often reflecting the organization’s orientation toward technology sponsors rather than users. Those governance choices shape not only the size of the declared SEP universe, but also the severity of the downstream coordination problems—thickets, stacking, and the effectiveness (or limits) of patent pools as a remedy.
Strategic behavior around standardization: positioning IP when the stakes are high
Standardization not only coordinates technologies; it also reshapes incentives. Once a standard becomes the focal point for an industry, being “inside” the standard can create a durable revenue stream through SEP licensing. It is, therefore, unsurprising that firms invest not only in R&D but also in strategic IP positioning around the standardization process.
One recurring pattern is “just-in-time patenting”: Byeongwoo Kang and Rudi Bekkers document the opportunistic filing of patent applications shortly before or during key standardization milestones, when the technical direction of the standard is becoming clearer and when contributions can be aligned with the specification. A related tactic involves using continuations (and other claim-drafting strategies) to adjust patent claims over time so that they “read on” the evolving standard. Cesare Righi and Tim Simcoe (2023) provide strong evidence of this practice, finding that standardization leads to an 80–121% increase in continuation filings, with the vast majority of SEP continuations filed after the standard is published. The point is not necessarily that the underlying engineering is trivial; rather, patent prosecution offers flexibility in how an invention is claimed, and that flexibility becomes more valuable when a particular technical pathway is about to be widely adopted. Consistent with this narrative, Florian Berger and colleagues provide empirical evidence showing that essential patents have more claims and amendments than a set of control patents.
This strategic behavior has two economic implications. First, it reinforces why declared essentiality can be noisy: as the standard crystallizes, the set of patents that plausibly cover it can expand through claim amendments and late-stage filings, increasing uncertainty for implementers. Second, it highlights why SSO governance choices—timing rules, disclosure expectations, and treatment of late disclosures—can matter for market outcomes. In short, once standards shape rents, firms respond by investing in both the technology and the IP boundary around it.
So what? Innovation, diffusion, and policy trade-offs
Standards are a powerful institution for diffusing innovation. By creating interoperability, they expand markets, lower adoption frictions, and enable cumulative innovation: firms can build new products and complementary technologies on a shared platform rather than reinventing interfaces from scratch. In that sense, standardization is often a force multiplier—turning dispersed inventions into an ecosystem.
SEPs sit at the center of this system in an ambivalent way: they can reward genuine technical contributions that enable interoperability, but they can also create bottlenecks when licensing becomes fragmented, noisy, or used as bargaining leverage. When standardization makes certain patents unavoidable, licensing shifts from a competitive setting to a bargaining environment shaped by lock-in, imperfect information, and multi-party coordination problems. Patent thickets, royalty stacking, and litigation risk can act as a tax on entry—paid not only in money, but also in time, uncertainty, and managerial attention. The result is a genuine trade-off: the industry benefits of interoperability can coexist with frictions that slow implementation and skew participation.
The policy question, therefore, is how to design institutions so that rewards for innovation do not choke diffusion. The broader lesson is simple. Standards create economic value by coordinating markets. When patents sit at the core of that coordination, the challenge is to keep the system in the neutral zone: innovators are compensated for genuine technical contributions, and implementers can bring standard-compliant products to market without paying for lock-in. Getting there is less about slogans than about institutional details—and those details, as this literature shows, are where the real economics lives.

If you enjoy evidence-based takes on patents and innovation, join hundreds of readers who receive The Patentist directly in their inbox.
Please cite this post as follows:
de Rassenfosse, G. (2026). When standards meet patents: The economics of SEPs and FRAND. The Patentist Living Literature Review 11: 1–11. DOI: TBC.

