IPValueLabs

Featured · Patent Prosecution · 12 min read

How to Respond to a USPTO Section 101 Rejection for AI Software Patents

A § 101 rejection can feel like a dead end—but it rarely has to be. According to AI Working Group 2120 data, 77% of AI-related Office Actions included a § 101 rejection in 2024, up from 38.5% in 2022. Yet generative AI applications still achieved a 79% allowance rate overall. The gap between those numbers represents opportunity: these rejections are common, predictable, and—with the right strategy—surmountable. This guide explains the legal framework, the specific pitfalls AI inventors face, and the concrete strategies practitioners use to secure allowance in 2025 and beyond.

1. Understanding Section 101 and the Alice/Mayo Framework

35 U.S.C. § 101 defines the categories of patentable subject matter: any new and useful process, machine, manufacture, or composition of matter. On its face the statute is broad, but judicial exceptions carve out three categories of ineligible subject matter—laws of nature, natural phenomena, and abstract ideas.

The modern eligibility analysis flows from two Supreme Court decisions. Mayo Collaborative Services v. Prometheus Laboratories (2012) addressed laws of nature, while Alice Corp. v. CLS Bank International (2014) addressed abstract ideas. Together, they established a two-step test that the USPTO applies to every patent application:

  1. Step One: Are the claims directed to a judicial exception—an abstract idea, a law of nature, or a natural phenomenon? The USPTO applies this as Step 2A, Prong 1: does the claim recite a judicial exception?
  2. Step Two: If so, do the claims recite additional elements that integrate the exception into a "practical application" (Step 2A, Prong 2)? If not, do the additional elements nonetheless amount to "significantly more" than the exception itself (Step 2B)?

AI and machine learning inventions are particularly vulnerable to § 101 rejections because their core innovations often involve mathematical operations, data classification, and pattern recognition—activities that examiners readily categorize as abstract ideas or mental processes. The scale of the problem is significant: in 2024, the Federal Circuit decided 22 § 101 cases and found claims eligible in only one of them—a 95.5% invalidity rate. At the PTAB, affirmance rates for § 101 rejections historically hovered between 88% and 91%, though that figure shifted notably under Director Squires, with the reversal rate tripling to 29% by November 2025.

Understanding this challenging but evolving landscape is the first step toward crafting a successful response.

2. USPTO Guidance Updates: 2024–2025

The USPTO has issued a series of updates that directly reshape how examiners evaluate AI patent eligibility. Practitioners who cite these developments in their responses gain measurable traction with examiners.

July 2024: New AI-Specific Examples

The USPTO released Examples 47 and 48 as part of its updated Patent Eligibility Guidance. Example 47 addresses a neural network-based intrusion detection system, and Example 48 covers AI-driven speech separation—both found eligible. These examples establish that AI claims demonstrating specific improvements to the functioning of a computer or to another technology satisfy the practical application test at Step 2A, Prong 2. The improvement must be evident from the claims themselves, not merely stated in the specification.

August 2025: The Kim Memo

The Kim Memo introduced several critical distinctions that practitioners should cite directly. First, it draws a line between claims that "recite" a judicial exception versus those that merely "involve" one—a distinction that narrows the scope of Step 2A, Prong 1. Second, it clarifies that the mental process exception applies only when the claimed steps are practically performable by a human, not merely theoretically conceivable. Third, it reinforces that the USPTO uses the preponderance of evidence standard and explicitly discourages examiners from issuing § 101 rejections in close cases.

September 2025: Ex parte Desjardins

In this precedential decision from the Appeals Review Panel (ARP), the PTAB reversed a § 101 rejection of a DeepMind continual-learning patent. The panel held that "software can effect non-abstract improvements to computer technology" and that claims directed to modifications in the AI backbone architecture itself—rather than merely applying AI to a new domain—qualify as patent-eligible improvements. This decision is significant because it provides a PTAB-level precedent specifically addressing AI architecture innovations.

December 2025: MPEP Updates and SMEDs

The MPEP was updated to incorporate the Desjardins holding, and the USPTO introduced Subject Matter Eligibility Determinations (SMEDs)—standardized evidentiary frameworks that give applicants a structured vehicle for presenting objective evidence of technical improvement. Practitioners should use SMEDs to submit benchmark data, ablation studies, and computational complexity analyses as part of their § 101 responses.

3. Common 101 Rejection Patterns for AI Inventions

Most § 101 rejections for AI patent applications fall into one of four recurring patterns. Recognizing the specific category the examiner has invoked is essential to framing an effective response—particularly after the Kim Memo narrowed the circumstances under which each category should apply.

“Mathematical Concepts”

Machine learning algorithms are frequently characterized as mathematical formulas or relationships. An examiner may assert that a convolutional neural network is simply a series of matrix multiplications and nonlinear activation functions. This characterization ignores the practical context in which the algorithm operates. After Desjardins, practitioners can argue that modifications to the AI backbone architecture—not just the math—constitute non-abstract improvements to computer technology.

“Mental Processes”

Examiners often argue that a classification or prediction task could theoretically be performed by a human using pen and paper. The Kim Memo now limits this category: a process qualifies as a mental process only if it is practically performable by a human, not merely theoretically conceivable. A claim involving billions of parameter updates across a distributed training pipeline, for instance, falls outside this category by definition. Cite the Kim Memo’s “recites vs. involves” distinction to push back on overbroad mental process characterizations.
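The impracticability point can be made concrete with back-of-the-envelope arithmetic. The sketch below uses hypothetical figures (model size and step count are illustrative assumptions, not drawn from any real case) to show why a distributed training run falls outside anything a human could practically perform:

```python
# Illustrative arithmetic with hypothetical figures: estimate how many
# individual parameter updates occur in a modern training run, to show why
# the process is not "practically performable" by a human with pen and paper.

def total_parameter_updates(num_parameters: int, optimizer_steps: int) -> int:
    """Each optimizer step updates every trainable parameter once."""
    return num_parameters * optimizer_steps

# Hypothetical mid-size model: 1 billion parameters, 100,000 training steps.
updates = total_parameter_updates(1_000_000_000, 100_000)

# At one hand calculation per second, working nonstop:
years = updates / (60 * 60 * 24 * 365)
print(f"{updates:.2e} updates ~= {years:.1e} years of nonstop human effort")
```

Even these deliberately modest assumptions yield a figure in the millions of years, which is the kind of concrete framing an examiner interview or response can use when invoking the Kim Memo's practicality limit.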

“Methods of Organizing Human Activity”

AI-driven business method patents are especially vulnerable. An invention that uses a recommendation engine to optimize pricing or a reinforcement learning agent to manage supply chains may be characterized as a fundamental economic practice merely automated with generic computing. The Federal Circuit confirmed this risk in Recentive Analytics v. Fox Corp. (April 2025), holding that "applying generic ML to new data environments without improving the models themselves" is not eligible. Certiorari was denied in December 2025, cementing this as binding precedent.

“Abstract Idea of Collecting and Analyzing Data”

Many AI inventions follow a pipeline pattern: collect data, process it, output a result. Examiners frequently cite Electric Power Group v. Alstom and, more recently, Broadband iTV v. Amazon (2024) and Mobile Acuity v. Blippar (2024)—both ineligible—to argue that the collection, organization, and display of information is abstract regardless of how sophisticated the analysis may be. This pattern is particularly problematic for claims directed to predictive analytics, anomaly detection, and data-driven diagnostics.

4. Strategies for Overcoming 101 Rejections

Responding to a § 101 rejection requires a combination of claim amendments and persuasive argumentation. The following strategies reflect current best practices informed by the 2025 guidance updates and recent case law.

Lead with Step 2A Prong Two, Not Prong One

The most effective responses focus on demonstrating a practical application at Step 2A, Prong 2 rather than arguing that the claims do not recite an abstract idea at Prong 1. Prong 1 arguments ask the examiner to reverse their initial characterization—an uphill battle. Prong 2 arguments accept the premise and show that the claims integrate the exception into something more. After the Kim Memo, examiners are instructed to give close calls to the applicant, making Prong 2 the highest-probability path to allowance.

Show Improvements to the AI Itself, Not Just Its Application

Recentive Analytics v. Fox Corp. drew a bright line: applying generic ML to a new data environment is not enough. The claimed improvement must be to the AI model, training process, or inference architecture itself. Ex parte Desjardins reinforces this—the eligible claims there involved modifications to the continual-learning backbone design. Amend claims to specify what changed in the model architecture, the loss function, or the training protocol, and why that change produces a measurable technical benefit.

Amend Claims Rather Than Argue Alone

Pure argument without claim amendment rarely succeeds. The PTAB historically affirmed § 101 rejections at an 88–91% rate. Amending claims to incorporate specific technical limitations from dependent claims—particular model architectures, hardware integration, or quantified performance improvements—gives the examiner a concrete basis for withdrawal. Draft a layered claim set where dependent claims progressively add technical specificity, so amendments never require new matter.

Specify Hardware Elements

Claims that integrate the AI model with specific hardware components—ASICs, GPUs, FPGA implementations, edge computing devices with defined memory constraints, or sensor arrays—are harder to characterize as abstract. Reciting the interaction between software and hardware demonstrates that the invention is a particular machine configured to achieve a technical result, not merely a mathematical concept. The eligible claims in Contour IP v. GoPro (September 2024) were directed to specific means that improved the underlying technology.

Use SMEDs for Objective Evidence

Since December 2025, the USPTO’s Subject Matter Eligibility Determinations framework provides a structured vehicle for submitting objective evidence of technical improvement. Include benchmark comparisons, ablation studies, or computational complexity analyses showing that the claimed invention produces measurable improvements over the prior art. When arguing practical application, being able to point to specific performance data through a SMED submission strengthens the argument that the improvement is real, not theoretical.
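The core of any such evidence submission is a baseline-versus-claimed comparison. A minimal sketch of how that comparison might be tabulated is below; the metric names and numbers are hypothetical placeholders, and nothing here reflects an official SMED format:

```python
# Sketch of tabulating objective evidence for a SMED-style submission.
# All metric names and figures are hypothetical placeholders; this is not
# an official USPTO format, just the underlying arithmetic.

def relative_improvement(baseline: float, claimed: float,
                         higher_is_better: bool = True) -> float:
    """Percent improvement of the claimed invention over a prior-art baseline."""
    delta = (claimed - baseline) if higher_is_better else (baseline - claimed)
    return 100.0 * delta / baseline

# Hypothetical benchmark rows: (metric, baseline, claimed, higher_is_better)
benchmarks = [
    ("detection F1 score", 0.81, 0.93, True),
    ("inference latency (ms)", 42.0, 9.6, False),
]

for metric, base, ours, hib in benchmarks:
    print(f"{metric}: {relative_improvement(base, ours, hib):+.1f}% vs. baseline")
```

Pairing each claimed limitation with a quantified delta of this kind (plus the ablation showing the delta disappears when the limitation is removed) is what turns "improvement" from attorney argument into evidence.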

Request Examiner Interviews Before Formal Response

An examiner interview is one of the most underutilized tools in patent prosecution. Before filing a formal response, request a telephone or video interview to walk through the technical details, identify which specific claim amendments would satisfy the examiner’s concerns, and reach an agreement that avoids further office actions. This is especially valuable in AI cases, where examiners may lack deep domain expertise in machine learning architectures.

5. Drafting AI Patent Claims That Survive 101 Scrutiny

The difference between a claim that fails § 101 and one that survives often comes down to drafting technique. Below is a comparison illustrating how the same underlying invention can be claimed in ways that produce vastly different prosecution outcomes.

Weak Claim (Likely Rejected)

1. A method for detecting anomalies in a dataset,
   comprising:
     receiving a plurality of data points;
     applying a machine learning model to the
       data points to generate anomaly scores; and
     identifying data points with anomaly scores
       exceeding a threshold as anomalous.

This claim is functionally described and technology-agnostic. An examiner will argue that “applying a machine learning model” is a mathematical concept, that “generating anomaly scores” is a mental process, and that the entire method amounts to the abstract idea of collecting and analyzing data—precisely the pattern condemned in Broadband iTV v. Amazon and Mobile Acuity v. Blippar. The claim does not specify the model architecture, the nature of the data, or any technical improvement over existing approaches.

Strong Claim (Eligible)

1. A computer-implemented method for real-time
   detection of anomalous sensor readings in an
   industrial control system, comprising:
     receiving, by a processing unit coupled to a
       sensor array, a time-series stream of sensor
       readings at a sampling rate of at least 1 kHz;
     extracting, using a temporal convolutional
       network comprising a plurality of dilated
       causal convolution layers, a feature vector
       from a sliding window of the sensor readings,
       wherein each dilated causal convolution layer
       applies a dilation factor that increases
       exponentially across successive layers to
       capture multi-scale temporal dependencies
       without increasing computational complexity
       beyond O(n log n);
     computing an anomaly score for the feature
       vector using a learned threshold function
       trained on labeled normal-operation data; and
     transmitting, to a supervisory controller
       within a latency of less than 10 milliseconds,
       a control signal responsive to the anomaly
       score exceeding the learned threshold,
       thereby enabling the supervisory controller to
       initiate a safety protocol before a fault
       propagates through the control system.

This claim is anchored in a specific technical environment (industrial control system with a sensor array), recites a particular model architecture (temporal convolutional network with dilated causal convolution layers), quantifies the computational constraint (O(n log n) complexity), and identifies a concrete technical result (sub-10-millisecond latency enabling preemptive safety action). Under Desjardins, these modifications to the AI backbone qualify as non-abstract improvements to computer technology.
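The architectural limitation doing the eligibility work here—exponentially increasing dilation factors—can be illustrated in a few lines of code. The sketch below is a minimal NumPy model of a dilated causal convolution stack; the kernel size, depth, and weights are illustrative assumptions, not taken from any real filing:

```python
# Minimal NumPy sketch of a dilated causal convolution stack, the structure
# recited in the "strong" claim above. Kernel size (2), depth (4), and the
# averaging weights are illustrative assumptions only.
import numpy as np

def dilated_causal_conv(x: np.ndarray, w: np.ndarray, dilation: int) -> np.ndarray:
    """1-D causal convolution: output[t] depends only on x[t], x[t-d], ..."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad: no future leakage
    return sum(w[j] * xp[pad - j * dilation : pad - j * dilation + len(x)]
               for j in range(k))

def receptive_field(kernel_size: int, num_layers: int) -> int:
    """Receptive field when the dilation doubles each layer: 1, 2, 4, ..."""
    return 1 + (kernel_size - 1) * sum(2 ** i for i in range(num_layers))

x = np.arange(16, dtype=float)
y = x
for layer in range(4):                       # dilations 1, 2, 4, 8
    y = dilated_causal_conv(y, np.array([0.5, 0.5]), 2 ** layer)

print(receptive_field(2, 4))  # each output now sees the last 16 samples
```

The point the claim is making shows up directly: the receptive field grows exponentially with depth while each layer's work stays linear in the input length, so covering a window of n samples takes on the order of log n layers—the O(n log n) bound recited in the claim.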

Functional vs. Structural Claiming

The weak claim above is an example of functional claiming—it describes what the method achieves without specifying how. Functional claims are not inherently invalid, but under § 101 they are far more likely to be treated as abstract. Structural claiming, by contrast, specifies the architecture, data flow, and system components that produce the result. For AI inventions, structural claims should identify the model type, the nature of the input data, the processing pipeline, and the technical effect on the system in which the model operates.

A practical middle ground is to include both types of claims: structural independent claims for prosecution strength, and functional dependent claims for broader enforcement flexibility if the patent is ultimately granted.

6. Recent Federal Circuit Decisions Shaping AI Patent Eligibility

Federal Circuit case law continues to evolve on the question of when AI and software inventions satisfy § 101. In 2024 the court decided 22 eligibility cases and found claims eligible in only one—Contour IP v. GoPro—its first reversal since 2021. The following decisions define the current boundaries.

Recentive Analytics, Inc. v. Fox Corp.

Ineligible

Federal Circuit, April 2025 • Certiorari denied December 2025

The court held that claims directed to scheduling live television programming using AI were ineligible under § 101. The claims recited collecting viewership data, applying a machine learning model to generate schedule recommendations, and outputting an optimized schedule. The court ruled that “applying generic ML to new data environments without improving the models themselves” does not supply an inventive concept. The Supreme Court’s denial of certiorari in December 2025 cemented this as controlling precedent.

Key takeaway: The improvement must be to the AI itself—its architecture, training process, or inference mechanism—not merely to the domain in which it operates.

Contour IP Holding v. GoPro, Inc.

Eligible

Federal Circuit, September 2024 • First reversal since 2021

The court reversed a district court finding of ineligibility, holding that claims directed to specific means for improving the underlying technology—rather than using a computer as a generic tool—satisfied § 101. This was the Federal Circuit’s first finding of software-related eligibility since 2021, making it a critical data point for practitioners. The decision reinforces that claims anchored to specific technical means and concrete improvements to how technology functions can survive Step 2A analysis.

Key takeaway: Claims directed to specific technical means that improve the functioning of technology—not just its application—remain the clearest path to eligibility.

Ex parte Desjardins (Precedential ARP Decision)

Eligible

PTAB Appeals Review Panel, September 2025 • MPEP updated December 2025

The PTAB reversed a § 101 rejection of a DeepMind continual-learning patent, holding that claims focused on modifications to the AI backbone architecture constitute non-abstract improvements to computer technology. The panel stated that “software can effect non-abstract improvements to computer technology”—language now incorporated into the MPEP. This is the most AI-specific precedential decision from the PTAB to date and is directly citable in prosecution responses.

Key takeaway: Focus claims on AI backbone design and modification. Improvements to the model architecture itself—not just its deployment context—are patent-eligible under current PTAB precedent.

7. When to Consider Design-Arounds or Alternative IP Protection

Not every AI innovation is best protected through utility patents. When § 101 poses a persistent barrier—or when the nature of the invention makes patent protection strategically suboptimal—practitioners should consider the full spectrum of intellectual property tools. The global landscape adds urgency: China filed approximately 300,000 AI patent applications in 2024, representing roughly 70% of the world’s cumulative AI filings. The U.S. filed approximately 67,800, while GenAI patents specifically grew from 733 in 2014 to over 14,000 in 2023.

Trade Secrets for Training Data and Model Internals

Trade secret litigation has surged 25% since the Defend Trade Secrets Act (DTSA) was enacted in 2016, with over 1,200 cases filed annually. For AI companies, trade secrets are often the best protection for training data, model weights, hyperparameter configurations, and system prompts—elements that are valuable precisely because they are not publicly detectable. Patent disclosure requirements would force the inventor to reveal them. Trade secret protection has no expiration date and does not require demonstrating patent eligibility, but it offers no protection against independent discovery or reverse engineering. Patents, by contrast, are better suited for detectable innovations and novel architectures that competitors could independently develop.

Copyright for Model Weights and Code

While copyright does not protect functional aspects of software, it does protect the specific expression of code and—in evolving case law—may protect trained model weights as a form of creative compilation. Copyright registration is inexpensive, automatic upon creation, and provides statutory damages and attorney’s fees in infringement actions. For organizations that open-source their model architectures but want to control commercial use of their trained weights, copyright-based licensing is an increasingly common strategy.

Legislative Developments and Continuation Strategies

The legal landscape for AI patents is shifting rapidly. The Patent Eligibility Restoration Act (PERA), introduced in May 2025, would eliminate all judicially created exceptions to § 101—effectively overruling Alice and Mayo. While PERA’s passage is uncertain, its existence underscores the instability of current doctrine. Filing continuation applications preserves the priority date while allowing applicants to present new claim sets, respond to evolving case law, or take advantage of updated USPTO guidance. Claims that are rejected today under current precedent may become allowable if PERA passes or if the Federal Circuit further clarifies the eligibility standard. Maintaining a pending continuation ensures the applicant retains the ability to secure protection as the law develops.

Why Prosecution Investment Matters

Understanding the potential damages a patent can secure makes the investment in overcoming § 101 rejections worthwhile. Use our Patent Damages Estimator to model royalty scenarios and quantify the value of a successfully prosecuted AI patent.



Disclaimer: This article is for educational and informational purposes only and does not constitute legal advice. Patent eligibility analysis involves complex legal questions that require qualified professional guidance. Consult a licensed patent attorney for advice on specific matters.