Review.AI
About · Est. 2025 · Bengaluru & remote

A trusted layer between you and the AI gold rush.

Company Core Layer Labs Private Limited
Founded Sep 2025
HQ Bengaluru, IN
Team humans + AI army
Backed by NSRCEL, IIM Bangalore
Every week, hundreds of new AI tools ship. Most directories rank by who paid the most. We ship a different deal: hand-tested reviews, a transparent scoring framework, and a reasoning layer that actually reads the evidence before it answers your question.
The shape of the work

Growing the catalog, one honest week at a time.

Testing pace · relative index (Sep '25 – Apr '26)
Our monthly testing output has roughly tripled since we started. Updated weekly; the index is normalised against our first full testing month. Exact counts, and raw logs, live on each review's page.
Category mix · share of catalog
Coding is the largest category, followed by Writing, Design, Research, Voice / Video, and Other.
Scores revised in public · ~15%
Roughly one in seven reviews has moved since first publish. Each change will land on the per-tool changelog, rolling out soon.

We built this because we were tired of guessing.

Review.AI started as a spreadsheet. One founder, an evenings-and-weekends habit of testing every AI tool for real work, and a growing pile of arguments about which ones actually held up. The existing directories were full of affiliate links, outdated benchmarks, and write-ups from people who had clearly never opened the product.

So we made a list. Then a framework. Then a pipeline. Then we realised a lot of people were doing the same thing in the same week, and all of them were wrong about something.

We think AI tools deserve the same rigour the Wirecutter brought to kitchen gear and Consumer Reports brought to dishwashers. That means testing by hand. Admitting when we're wrong. Publishing raw logs. Leaving affiliate money on the table when it conflicts with the verdict.

We're not neutral about everything, though. We are deeply opinionated about which tool we'd hand to a friend starting tomorrow. That's the product.

What we believe

Six principles, stuck to a wall — and, so far, to the product.

Principle 01

Test it, or don't rank it.

No tool enters the catalog without a real human running the standardised prompt battery and real workflows. Vendor claims don't count.

✓ Hands-on testing    ✗ Marketing-copy directories
Principle 02

Show the receipts.

Every score is logged — prompts, outputs, temperatures, graders — and tied to a replayable battery run. We're opening those logs up to readers category by category as each one stabilises.

✓ Replayable logs · rolling out    ✗ “Trust us, it's good”
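The "receipts" above can be pictured as a structured log record. Here is a minimal sketch in Python; the field names (`battery_version`, `grader`, and so on) are illustrative assumptions, not Review.AI's actual schema.

```python
# Hypothetical sketch of one logged battery run.
# Field names are illustrative, not Review.AI's actual format.
battery_run = {
    "tool": "example-coding-assistant",
    "battery_version": "2026-04",
    "prompts": [
        {
            "id": "code-01",
            "prompt": "Refactor this function to remove the global state.",
            "temperature": 0.2,          # sampling settings logged per prompt
            "output": "...model response captured verbatim...",
            "grader": "human:editor-1",  # every score has a human hand on it
            "score": 4,                  # rubric score for this prompt
        },
    ],
}

def replay_inputs(run):
    """Extract just the inputs needed to re-run the battery later."""
    return [(p["id"], p["prompt"], p["temperature"]) for p in run["prompts"]]
```

A run is "replayable" in the sense that the logged prompts and settings can be re-sent later and the fresh outputs compared against the originals.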
Principle 03

Affiliate links don't decide rank.

We do take referral deals, because paying humans to test tools costs money. But the RAI score is computed before we look at the referral rate. Ever.

✓ Editorial firewall    ✗ Pay-to-win placement
Principle 04

Be willing to be wrong in public.

When a tool ships a meaningful update or we misjudge it, we revise the score and say exactly why. A per-tool changelog will publish alongside each review, capturing every edit and the reason behind it.

✓ Public changelog · shipping soon    ✗ Silent re-edits
Principle 05

AI reasons over human work, not instead of it.

AI turns our evidence into personalised answers. But the ranking, the prompts, and the graders are human. Reasoning is the interface, not the source of truth.

✓ Human-graded + AI-reasoned    ✗ Pure model leaderboard
Principle 06

One recommendation beats ten lists.

If we can't tell you which tool to start with, we haven't done the work. “Top 47 tools for X” posts are the enemy — they substitute volume for judgment.

✓ A clear editor's pick    ✗ Endless listicles
Timeline

How a spreadsheet became a startup.

From one shared sheet to the pipeline behind every answer.

We didn't plan to build a company. We just couldn't find a review we trusted, so we wrote one. Then another. Then kept going.

August 2025
The spreadsheet.

A shared Google Sheet ranking every AI tool the founder tried for real work. Out of the arguments in the comments came the first draft of the RAI framework — six dimensions, none of them neutral.

September 2025
Pen, paper, and papers filed.

Review.AI went from a spreadsheet to a plan: what the product should do, who it should serve, why a reasoning layer belonged next to the catalog. Core Layer Labs Private Limited was incorporated in the same month.

October 2025
Campus Founders Program.

Selected into the Campus Founders Program at NSRCEL, IIM Bangalore — the first formal stamp on a plan that, until then, had mostly lived in one person's notebook.

Dec 2025 – Jan 2026
The reasoning layer lands.

Ask opened to a private group of four hundred users — all asking real questions, all catching edge cases we hadn't seen. Every one of those conversations went back into the system.

March 2026
Incubated at NSRCEL.

Formally incubated at NSRCEL, IIMB — the graduation from the Campus Founders Program into the full incubation track.

April 2026
A growing catalog. And counting.

What you're reading now: one founder, a team behind him, and an AI army running the pipeline, shipping one honest recommendation at a time.

The humans

A founder. A team. An AI army.

Founder

Aayush Holkar

Started Review.AI after burning too many evenings trying to figure out which AI tool to actually use. Drives the RAI framework, the editorial rigour, and the decision that every score needs a human hand on it.

“If we can't tell you which tool to start with, we haven't done the work.”
+ Research analysts · Engineering · Editorial · An AI army that never sleeps
Mission · Why we exist

Cut the noise out of AI buying.

Every week, more AI tools ship than any one person can track. Most directories sort by who paid the most; most listicles by who launched the loudest. We hand-test every tool against a framework that's accountable, then reason over the evidence to give you the one answer you came for — specific, cited, and yours.

Vision · Where we're going

One honest recommendation. Every time.

The default trust layer for AI buyers — where the question “which tool should I use?” has one honest answer, tuned to your use case, your weights, your constraints. Scores that are earned, not bought. Receipts on the record. No one picking the wrong tool because a listicle ranked it first.
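One way to picture "your weights": a reader re-ranks the same human-graded scores by what matters to them. This is a minimal sketch assuming per-dimension rubric scores; the dimension names and numbers are hypothetical, not the actual RAI rubric.

```python
# Minimal sketch: re-ranking human-graded tools by a reader's own weights.
# Dimension names and scores are hypothetical, not the actual RAI rubric.
def weighted_rai(scores, weights):
    """Weighted average of per-dimension scores (weights needn't sum to 1)."""
    total = sum(weights.values())
    return sum(scores[d] * w for d, w in weights.items()) / total

tools = {
    "tool-a": {"accuracy": 4.5, "speed": 3.0, "price": 2.5},
    "tool-b": {"accuracy": 3.5, "speed": 4.5, "price": 4.0},
}

# A budget-conscious reader weights price heavily.
weights = {"accuracy": 1.0, "speed": 1.0, "price": 3.0}
pick = max(tools, key=lambda t: weighted_rai(tools[t], weights))
```

The scores stay fixed and human-graded; only the weighting, and therefore the single recommendation, changes per reader.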

Help us raise the bar for AI tool reviews.

Writing for us, submitting a tool, or just disagreeing with a verdict — we take all of it seriously. The best reviews we've published started as reader complaints.

Read: how we review → · Write to the editors
© 2026 Review.AI · reviewai.in · The trusted layer for AI tools