Open Source Framework

From Idea to Publication
Automated.

Amplify turns your AI coding assistant into an autonomous research agent that conducts the full scientific workflow — literature review, experiment design, execution, and paper writing — with built-in rigor and human oversight.

24 Skills
7 Phases
4 Gates
3 Layers

AI Assistants Are Powerful.
But They Can't Do Research.

Ask any AI coding assistant to "help me write a paper" and you'll get a model trained for 1 epoch, cherry-picked metrics, fabricated references, and a report no reviewer would accept. The raw intelligence is there — what's missing is methodology.

Jumps to Code

Skips literature review, problem validation, and experimental design — going straight to implementation.

Cherry-Picks Results

Reports only best-case metrics from a single seed, hiding failures and variance behind selective reporting.

Fabricates Citations

Invents plausible-sounding references that don't exist, undermining scientific integrity.

No Iteration

Produces one-shot outputs without the diagnose-hypothesize-fix-measure cycles real research requires.

Amplify Bridges This Gap

Instead of building another standalone agent, Amplify is a skills-based framework that teaches your existing AI assistant how to do research properly. If you have a coding assistant and enough tokens, you have a co-scientist.

Seven Phases. Four Gates.
Full Scientific Rigor.

Every research project follows a structured 7-phase pipeline with mandatory human checkpoints. The AI proposes, analyzes, and enforces. You decide.

Phase 0: Domain Anchoring

Identify the field and research type, generate an expert persona, assess resources. (AI anchors context)

Gate G1: Topic & Venue Lock

Phase 1: Direction Exploration

Review 15–30 papers, apply 6 deep-thinking strategies, and brainstorm 5+ ideas with multi-agent panels. (3 agents debate ideas)

Phase 2: Problem Validation

Adversarial questioning by 3 agent personas: novelty litmus test, data screening. (Must survive scrutiny)

Gate G2: Plan Freeze

Phase 3: Method Design

Type-specific design, locked evaluation protocol, ablation plan, 3-agent review. (Metrics frozen here)

Phase 4: Experiment Execution

4a: exploratory probe → 4b: full execution. Baseline-first, 5 seeds, mandatory iteration. (Run to completion)

Gate G3: Execution Readiness

Phase 5: Results Integration

3-agent story deliberation, claim-evidence alignment, publishability check. (Narrative blueprint)

Gate G4: Write-Ready

Phase 6: Paper Writing

Modular LaTeX, per-section 3-agent polishing, full-paper review, reference verification. (Publication quality)

Three Layers of
Intelligent Control

24 skills organized in three layers — each with a distinct role in ensuring your research meets the highest standards.

Meta-Control Layer (4 skills)

"Is this still on track?"

Novelty Classifier · Scope Control · Pivot or Kill · Venue Alignment

Discipline Layer (7 skills)

"What rules must I follow?"

Metric Lock · Anti-Cherry-Pick · Claim-Evidence · Figure Quality · Alt. Hypothesis · Reproducibility · Verification

Workflow Layer (13 skills)

"What do I do next?"

Domain Anchoring · Direction Exploration · Problem Validation · Method Design · Evaluation Protocol · Analysis Storyboard · Experiment Execution · Results Integration · Paper Writing · Multi-Round Deliberation · Git Worktrees · Parallel Agents

Built for
Real Science

Multi-Agent Deliberation

Specialized agent panels — not single-agent reasoning — debate at every stage where judgment matters. Up to 5 rounds of iterative evaluation until consensus.

Continuous Literature Search

Literature retrieval runs throughout all phases — not just once. From broad discovery in Phase 1 to targeted citation verification in Phase 6.

Discipline Enforcement

Metrics frozen after plan freeze. All seeds reported. Every claim maps to evidence. Fabricated references blocked. No shortcuts allowed.

Deep Idea Generation

6 thinking strategies — contradiction mining, assumption challenging, cross-domain transfer, and more — go beyond "what papers say is missing."

Publication-Quality Figures

Venue-specific styles enforced automatically — CNS, CS Conference, IEEE, Life Science, Physical Sciences. ≥300 DPI, vector format, colorblind-safe.

Four Research Types

Method (M), Discovery (D), Tool (C), or Hybrid (H) — each type triggers specialized workflows, evaluation protocols, and paper structures.

Before & After
Amplify

Bare AI Assistant → With Amplify

Starting point: Jumps straight to code → Literature review, deep thinking, multi-agent brainstorming first
Research question: Whatever the user said → Refined through adversarial 3-agent deliberation
Metrics: Chosen after seeing results → Locked before any experiment runs
Experiments: Single seed, best-case reported → 5 seeds, all results including failures
Baselines: None or weak → Baseline-first execution; skipping forbidden
Paper: Written in one shot → Per-section multi-agent review + full-paper review
References: Unverified, often fabricated → Every citation checked; fabrication blocked
Human control: Black-box process → 15+ decision points; 4 mandatory gates

You Decide
How Much to Drive

Amplify enforces human checkpoints at every critical juncture. But how you engage is entirely up to you.

Hands-on

Debate research questions, suggest method modifications, guide analysis, edit paper sections.

Best for: Shaping every decision

Supervisory

Review deliverables at each gate, give high-level feedback, approve or redirect.

Best for: Quality control without micro-managing

Approve & Go

Say "approved" at each gate and let the AI handle details autonomously.

Best for: Maximum automation, minimal overhead

Up and Running
In 3 Steps

1. Install the Plugin

terminal
# Copy or symlink to your IDE skills folder
mkdir -p ~/.cursor/skills
cp -r /path/to/amplify ~/.cursor/skills/amplify

2. Add the Bootstrap Rule

terminal
# User-level (all projects)
mkdir -p ~/.cursor/rules
cp ~/.cursor/skills/amplify/install/amplify-bootstrap.mdc \
   ~/.cursor/rules/

3. Start Researching

chat
# Type in a new chat session:
I want to develop a new method for
multi-modal data integration and apply it
to disease subtype discovery.

Prerequisites

Cursor IDE · Git · LaTeX (TeX Live / MiKTeX) · Python 3 + pyyaml

Also compatible with Claude Code, OpenClaw, and other LLM-powered coding assistants.
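Before installing, you can sanity-check the prerequisites from a terminal. The sketch below is a minimal, hedged example: it assumes `git`, `pdflatex`, and `python3` are the binaries your setup provides (the prerequisites list names distributions, not exact binary names; MiKTeX users may have a different LaTeX entry point), and probes pyyaml via its Python import name, `yaml`.

```shell
# Hypothetical prerequisite check -- adjust tool names to your setup
missing=0
for tool in git pdflatex python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
# pyyaml is a Python package, so probe it with an import rather than PATH lookup
if python3 -c "import yaml" >/dev/null 2>&1; then
  echo "ok: pyyaml"
else
  echo "missing: pyyaml"
  missing=$((missing + 1))
fi
echo "$missing prerequisite(s) missing"
```

If everything reports `ok`, the three install steps above should work as written.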