PRE-SEED · APRIL 2026

The Validation Layer for AI-Generated Mobile Apps.

Run one command before you submit. ~30 seconds. Four layers. The independent check between code generation and the app store.

Raising $2–3M · invest@aucert.ai · Vivek Soneja & Rajesh Kumar
02 / 11 PROBLEM

Mobile QA was already broken.

Hours lost per release to manual QA — even teams with automation lose 6–10 hrs.
Runway 2025, n=300[2]
Share of mobile teams hitting release-blocking incidents on a regular cadence.
Runway 2025[2]
1.93M
Apple App Store submissions rejected in 2024 — performance, legal, design, business, safety.
Apple Transparency Report[3]
2.36M
Google Play apps blocked in 2024 — 92% AI-assisted enforcement, 158K dev accounts banned.
Infosecurity Magazine[4]

Apple Guideline 5.1.2(i) — effective Nov 13, 2025

Any third-party AI call now requires explicit user consent and the AI provider must be named. Most AI-coded apps ship in violation on day one. No LLM with a pre-Nov-2025 training cutoff knows it exists. Compliance moves faster than retraining cycles.[5]

03 / 11 SOLUTION

One command. ~30 seconds. Release-ready.

Code Generation
Cursor · Copilot · Rork · Claude Code
RELEASE-READY CHECK
AuCert Checks
~30s · 4 layers · ready signal + fix prompt
App Store
Apple · Google
~/projects/my-app
$ aucert validate my-app.apk
→ scanning ... 4 layers ... ~28s
static compliance . . . . . . . . . pass
crash & performance . . . . . . . . fail (Samsung A14, Android 11)
security . . . . . . . . . . . . . . 2 warnings
functional behavior . . . . . . . . pass
────────────────────────────────────────────
Result: FAIL — 1 blocker, 2 warnings
→ fix prompt ready: aucert fix --paste-to=cursor
LAYER 01

Static Compliance

Manifest/Info.plist parsing · permissions vs. usage · SDK violations · 5.1.2(i) AI disclosure · content rating.

Free tier
LAYER 02

Crash & Performance

Top 20 device + OS combos · cold-start, ANR, OOM · catches the 34% of bugs that only reproduce on real devices.[15]

Paid
LAYER 03

Security

Static + dynamic · hardcoded secrets · insecure storage · vulnerable deps · OWASP MASVS baseline.

Paid
LAYER 04

Functional Behavior

AI-powered exploratory testing · walks the app like a user · flags visual regressions and broken flows.

Paid
Everyone else hands you tools. We hand you a release-ready signal.
04 / 11 TEAM · FOUNDER–PROBLEM FIT

The team that's lived this problem.

CO-FOUNDER · CEO

Vivek Soneja

San Francisco Bay Area · MS CS, Georgia Tech

Engineering leader at PhonePe — one of India's largest fintech apps, hundreds of millions of users, billions of transactions. Previously shipped core Android at Flipkart. Brings mobile depth at regulated scale, the PhonePe design-partner relationship, and a Bay Area investor network.

CO-FOUNDER · CTO

Rajesh Kumar

Bengaluru · University of Wisconsin-Whitewater

Engineering leader at Multiplier. Previously at PayPal — Settlement Hackathon winner, 2015. Brings payments and compliance-heavy systems experience, plus the ability to run engineering in India — lower burn, deep mobile talent pool.

01
Shipped mobile to hundreds of millions of users in regulated, high-stakes environments (fintech, e-commerce).
02
Personally owned the release pipeline through Apple & Google review cycles — felt every rejection, every hotfix, every 24-hour ANR storm.
03
Watched AI codegen flood the same pipeline from inside a high-stakes mobile org. PhonePe's $150K verbal commit is the evidence.

"Which is why we know exactly what's about to break →"

05 / 11 HOW IT WORKS · VISION

Today: a release-ready signal. Year 3: the validation graph.

Three runtime layers · LLM cascade · 80% of findings resolve on cheap models.

01

Compliance Parsing

Manifests, plists, SDK versions, permissions. ~10K policy rules across Apple, Google, Samsung, Amazon, Huawei.
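
A minimal sketch of what one static-compliance rule could look like, assuming an unpacked app with AndroidManifest.xml and Kotlin sources on disk; the rule table, file layout, and "permission declared but never used" heuristic are illustrative, not the shipped rule engine.

# Illustrative static-compliance check: flag declared Android permissions the
# code never appears to use. Rule table and usage heuristic are assumptions.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Tiny illustrative rule pack: permission -> API symbol we expect to see in code.
RULES = {
    "android.permission.CAMERA": "android.hardware.camera2",
    "android.permission.ACCESS_FINE_LOCATION": "FusedLocationProviderClient",
    "android.permission.RECORD_AUDIO": "MediaRecorder",
}

def declared_permissions(manifest_path: str) -> set[str]:
    root = ET.parse(manifest_path).getroot()
    return {p.get(f"{ANDROID_NS}name", "") for p in root.iter("uses-permission")}

def check(manifest_path: str, source_dir: str) -> list[str]:
    code = "\n".join(p.read_text(errors="ignore") for p in Path(source_dir).rglob("*.kt"))
    findings = []
    for perm in declared_permissions(manifest_path):
        symbol = RULES.get(perm)
        if symbol and symbol not in code:
            findings.append(f"{perm} declared but {symbol} never referenced")
    return findings

if __name__ == "__main__":
    for finding in check(sys.argv[1], sys.argv[2]):
        print("WARN:", finding)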

02

Device Matrix

Cloud farm runs the binary across the top 20 device + OS combinations. Cold-start, ANR, OOM, battery, network.
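
A rough illustration of how per-device runs could roll up into the single release-ready signal; the device names, thresholds, and result format below are assumptions for the sketch, not published acceptance criteria.

# Aggregate per-device results against assumed release thresholds.
from dataclasses import dataclass

@dataclass
class DeviceRun:
    device: str
    os: str
    cold_start_ms: int
    anr_count: int
    oom_count: int

COLD_START_BUDGET_MS = 5000  # assumed cold-start budget
MAX_ANR = 0
MAX_OOM = 0

def blockers(runs: list[DeviceRun]) -> list[str]:
    out = []
    for r in runs:
        if r.cold_start_ms > COLD_START_BUDGET_MS:
            out.append(f"cold start {r.cold_start_ms}ms on {r.device}, {r.os}")
        if r.anr_count > MAX_ANR:
            out.append(f"{r.anr_count} ANR(s) on {r.device}, {r.os}")
        if r.oom_count > MAX_OOM:
            out.append(f"{r.oom_count} OOM kill(s) on {r.device}, {r.os}")
    return out

if __name__ == "__main__":
    runs = [
        DeviceRun("Pixel 8", "Android 15", 1900, 0, 0),
        DeviceRun("Samsung A14", "Android 11", 4100, 1, 0),  # budget device
    ]
    found = blockers(runs)
    print("FAIL:" if found else "PASS", *found, sep="\n  ")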

03

AI Behavioral

Walks the app like a user. Visual regressions, broken flows, and a paste-ready fix prompt for Cursor/Claude Code/Copilot.

Verification Cascade — the cost moat

80% of scans resolve on cheap models · $0.20–$0.40 blended cost
Tier 1
Haiku · Flash
80% · $0.01–0.05
Tier 2 · escalate
Sonnet
15% · $0.10–0.40
Tier 3 · edge cases
Opus
5% · $0.50–2.00
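
To make the blended-cost claim concrete, a back-of-envelope sketch: the tier shares and per-call cost ranges are taken from this slide, while the calls-per-scan figure is an assumption.

# Expected cost per scan = calls_per_scan * sum(tier share * tier cost).
# Tier shares and costs from the slide above; calls_per_scan is assumed.
TIERS = [
    ("tier1 Haiku/Flash", 0.80, 0.01, 0.05),
    ("tier2 Sonnet",      0.15, 0.10, 0.40),
    ("tier3 Opus",        0.05, 0.50, 2.00),
]

def blended_cost(calls_per_scan: float = 2.0) -> tuple[float, float]:
    low = calls_per_scan * sum(share * lo for _, share, lo, _ in TIERS)
    high = calls_per_scan * sum(share * hi for _, share, _, hi in TIERS)
    return low, high

lo, hi = blended_cost()
print(f"blended cost per scan ≈ ${lo:.2f}–${hi:.2f}")  # ≈ $0.10–$0.40 with these assumptions
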
3-YEAR ARC · LAUNCH → SCALE → LOCK-IN
YEAR 1
LAUNCH
$150–180K
ARR run rate

Own the pre-submission moment

OSS CLI · GitHub Action · 10K free users · 5 design partners · PhonePe live.

YEAR 2
SCALE
$1M+
ARR · 5+ logos

PLG bottoms-up to enterprise

First enterprise AE · CI/CD integrations · 500 paid teams · multi-SDK compliance packs.

YEAR 3
LOCK-IN
Network
effects compound

The validation graph

Cross-customer failure intelligence · SDK + device fingerprints · "CrowdStrike for app quality."

06 / 11 MARKET

$1B SOM. Bottom-up. Conservative.

Bottom-up funnel: global mobile devs[16] → devs who ship to stores[17] → devs using AI coding tools[1] → 500K paying teams as the 5-yr target (~20% capture of that segment).
500K teams × $2K blended ACV = $1.0B SOM
Team $1,788/yr + Growth $4,788/yr + Enterprise $50K+ weighted by tier mix. BrowserStack averages $7.6K — our $2K leaves enterprise headroom.
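
A quick sanity check on the blended-ACV arithmetic; the tier prices come from the pricing slide, and the tier mix below is a hypothetical PLG-heavy split chosen purely for illustration.

# Weighted-average ACV under an assumed tier mix (hypothetical numbers).
TIER_ACV = {"Team": 1_788, "Growth": 4_788, "Enterprise": 50_000}
TIER_MIX = {"Team": 0.96, "Growth": 0.037, "Enterprise": 0.003}  # assumed split

blended_acv = sum(TIER_ACV[t] * TIER_MIX[t] for t in TIER_ACV)
som = 500_000 * blended_acv

print(f"blended ACV ≈ ${blended_acv:,.0f}")  # ≈ $2K with this mix
print(f"SOM ≈ ${som / 1e9:.1f}B")            # ≈ $1.0B
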
Comparable moat

BrowserStack

$381M revenue · $4B valuation · ~50K customers — built on cross-browser + device farm alone.[8]
Excluded from TAM: 750K+ monthly web vibe-coders on Lovable — they deploy to Vercel, not stores. Wearables, automotive (CarPlay/Android Auto), and TV (tvOS, Fire TV) come later as cross-store extensions.
07 / 11 WHY NOW

Five convergent shifts. All inside 18 months.

46% of code is AI-generated

AI codegen tipping point

80% of new GitHub devs use Copilot week-1. Acceptance rate 27–30% — more code, more bugs.[1]

85% MoM mobile codegen traffic growth

Mobile codegen mainstream

Rork, RapidNative, Replit Mobile, Cursor mobile flows — all <12 mo old · 743K monthly visits and accelerating.[11]

4.29M store rejections in 2024

Store enforcement tightened

Apple 1.93M[3] + Google 2.36M[4] + the new 5.1.2(i) AI rule, Nov 13 2025.[5]

Q4 2025 capability unlock

Model capability threshold

Opus 4.5 and GPT-5.1 reason about compliance, SDKs, crash traces at a quality that didn't exist 18 months ago.

3 labs publish on it

Maker/checker is doctrine

Anthropic · OpenAI · DeepMind all publish on independent verification. We're productizing it for one specific domain.

"The system that produces a decision is never the system that validates it."

08 / 11 COMPETITION · MOAT

Everyone else hands you tools. We hand you a release-ready signal.

Player · What they sell · Why they lose to us
BrowserStack ($381M rev / $4B) · Cross-browser + device farm · Infrastructure, not a release-ready signal. You still write the tests. No compliance intelligence.
Sauce Labs · Selenium / Appium cloud · Same — IaaS. Built pre-AI. Manual test authoring required.
Firebase Test Lab (Google) · Free Robo crawler + device access · Android only. No compliance. No fix prompts. No incentive to block submissions to its own store.
Sofy.ai ($9.6M raised) · No-code mobile test automation · No round since 2022. 2019-era positioning. Vibe-coders don't author tests — they want a one-shot ready check.
Spur (YC S24, $4.5M) · AI QA for web e-commerce · Web vertical. Different buyer. Validates the category, not a competitor.
AuCert · A release-ready signal + fix prompt · Cross-model · pre-submission · 10K+ store rules · open-source CLI hook · enterprise compliance moat.
COST MOAT

Verification Cascade

80% / 15% / 5% · Haiku · Sonnet · Opus

$0.20–$0.40 blended cost per scan. ~75–85% gross margin. A naive wrapper burns cash.

DATA MOAT

Validation Graph

10K+ customers · shared failure intelligence

SDK + permission + device fingerprints mapped to failure modes. Leaving us means flying blind. CrowdStrike's exact moat.

INFRA MOAT

Device + Rule Library

10K+ policies · Apple · Google · Samsung · Amazon · Huawei

Kept current by a dedicated compliance research function. A team, not a prompt.

What about Cursor / Copilot building this? The maker can't credibly check itself — same logic that keeps auditors independent of accounting. Plus the training signals oppose each other: generation optimizes for plausibility; validation optimizes for catching low-probability violations. Cursor could buy us. Rork could partner. Neither will build us — incentives don't line up.
09 / 11 BUSINESS MODEL · GO-TO-MARKET

How we charge. How we acquire.

Credits-based · unlimited seats · release-driven. Clay's exact model: $0 → $100M+ ARR.

Free
$0/mo
500 credits/mo
CAC engine · indie wedge
Team
$149/mo
5K credits/mo
Gross margin ~75%
Enterprise
$50K+/yr
Custom · SSO · SOC 2 · audit
Gross margin ~85%
Cascade economics: 80% on cheap models. A typical scan burns 5–20 credits.
MOTION · FROM CLI TO ENTERPRISE
OSS CLI
npx aucert validate · Product Hunt + HN launch
10K free
Users in 6 months · partnerships + content + GitHub Action
~5%
Free → paid conversion (OpenView PLG benchmark)[19]
~$1M ARR
500 paid teams × Team-tier · run-rate exit Y2
Channels: GitHub + Product Hunt · Rork / RapidNative / Replit Mobile · GitHub Action marketplace · /r/androiddev · /r/iOSProgramming · /r/reactnative · YouTube demos · droidcon · MobileDevWorld · AppDevCon
Year 1: 100% founder-led, bottoms-up. No outbound. No BDRs. First enterprise AE in Year 2 once 5+ logos signed.
OSS → ENTERPRISE · PROVEN PLAYBOOK · $30B+ aggregate value
Vercel
$9.3B
ClickHouse
$15B
Supabase
$2B → $10B
Sentry
$3B
PostHog
$920M

Free OSS CLI builds the install base. Paid cloud + enterprise compliance is the moat.

10 / 11 TRACTION · YEAR-1 BRIDGE

Pre-launch. Soft commits. $150–180K run rate.

APRIL 2026
BUILT
MAY 2026
LAUNCH
H2 2026
PIPELINE
BUILT
  • Working CLI: aucert validate
  • LLM cascade — Haiku → Sonnet → Opus, end-to-end
  • Rule pack: 5.1.2(i), iOS 17/18, top-20 Play flags
SOFT-COMMITTED
  • PhonePe · $150K/yr verbal
  • Beans.ai · Multiplier — design-partner convos
  • 7 warm VC intros · Nishant Mittal's network
SHIPPING — MAY
  • Cloud device farm — top 20 Android + iOS
  • AI behavioral testing layer
  • GitHub Action · OSS CLI · Web dashboard

Year-1 ARR bridge — $150–180K run rate

PhonePe — $150K/yr commit, half-year ramp
~$75K
3 design partners — $20K avg
~$60K
Inbound Growth tier — 5–10 logos × $399/mo
$24–48K
Total run-rate by month 12
$150–180K
Risks we're naming: PhonePe single-name concentration · device-farm execution by May 2026 launch. Both retired by this round.
11 / 11 THE ASK
$2–3M Pre-Seed
12-MONTH RUNWAY · MAY 2026 LAUNCH

USE OF FUNDS

Product 45%
Team 30%
GTM 20%
DP (design partners) 5%
Product: ship v1 by May, cloud device farm, OSS CLI, GitHub Action, dashboard
Team: core eng (device infra + AI/ML), DevRel by month 6
GTM: Product Hunt, dev content, Rork/RapidNative integrations
DP: convert PhonePe contract, onboard 3–5 more

OSS → ENTERPRISE COMPARABLES

Vercel
$9.3B[6]
ClickHouse
$15B[7]
BrowserStack
$4B / $381M[8]
Sentry
$3B / $100M[22]
Supabase
$2B → $10B[12]
PostHog
$920M[13]

These OSS-to-enterprise comparables represent tens of billions in aggregate value. We're playing in a proven category — with an AI-native wedge, a regulated gate, and a network that compounds.

invest@aucert.ai  ·  Vivek Soneja & Rajesh Kumar
