Amplify turns your AI coding assistant into an autonomous research agent that conducts the full scientific workflow — literature review, experiment design, execution, and paper writing — with built-in rigor and human oversight.
Ask any AI coding assistant to "help me write a paper" and you'll get a model trained for 1 epoch, cherry-picked metrics, fabricated references, and a report no reviewer would accept. The raw intelligence is there — what's missing is methodology.
Skips literature review, problem validation, and experimental design — going straight to implementation.
Reports only best-case metrics from a single seed, hiding failures and variance behind selective reporting.
Invents plausible-sounding references that don't exist, undermining scientific integrity.
Produces one-shot outputs without the diagnose-hypothesize-fix-measure cycles real research requires.
Every research project follows a structured 7-phase pipeline with mandatory human checkpoints. The AI proposes, analyzes, and enforces. You decide.
Phase 0: Identify field, research type, generate expert persona, assess resources
Phase 1: Review 15–30 papers, 6 deep thinking strategies, multi-agent brainstorming for 5+ ideas
Phase 2: Adversarial questioning by 3 agent personas — novelty litmus test, data screening
Phase 3: Type-specific design, evaluation protocol locked, ablation plan, 3-agent review
Phase 4: 4a exploratory probe → 4b full execution. Baseline-first, 5 seeds, mandatory iteration
Phase 5: 3-agent story deliberation, claim-evidence alignment, publishability check
Phase 6: Modular LaTeX, per-section 3-agent polishing, full-paper review, reference verification
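The Phase 4 protocol above (baseline first, every seed reported) can be sketched in Python. The helper and the stand-in training functions below are illustrative only, not Amplify's actual implementation:

```python
import statistics

def run_experiment(train_fn, seeds=(0, 1, 2, 3, 4)):
    """Run one configuration across every seed and report all results."""
    scores = [train_fn(seed) for seed in seeds]
    return {
        "per_seed": scores,                    # every seed kept, none hidden
        "mean": statistics.mean(scores),
        "stdev": statistics.stdev(scores),
    }

# Baseline runs first and is frozen before the proposed method is evaluated.
baseline = run_experiment(lambda seed: 0.70 + 0.01 * (seed % 3))
proposed = run_experiment(lambda seed: 0.74 + 0.01 * (seed % 3))
```

Comparing mean ± stdev over identical seeds, rather than one lucky run, is what keeps the baseline comparison honest.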
24 skills organized in three layers — each with a distinct role in ensuring your research meets the highest standards.
"What rules must I follow?"
"What do I do next?"
Specialized agent panels — not single-agent reasoning — debate at every stage where judgment matters. Up to 5 rounds of iterative evaluation until consensus.
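The up-to-five-rounds consensus loop can be pictured as follows; `panel_review`, the agent callables, and `revise` are hypothetical names for illustration, not Amplify's API:

```python
def panel_review(draft, agents, revise, max_rounds=5):
    """Collect critiques each round; stop early once every agent approves."""
    for round_no in range(1, max_rounds + 1):
        critiques = [agent(draft) for agent in agents]  # None means "approved"
        open_issues = [c for c in critiques if c is not None]
        if not open_issues:
            return draft, round_no    # consensus reached
        draft = revise(draft, open_issues)
    return draft, max_rounds          # rounds exhausted without full consensus

# Three identical toy reviewers that reject drafts under 10 characters:
panel = [lambda d: None if len(d) > 10 else "too short"] * 3
final, rounds = panel_review("draft", panel,
                             revise=lambda d, _: d + " with more evidence")
```

The cap on rounds matters: without it, disagreeing reviewers could loop forever; with it, unresolved issues surface to the human at the next checkpoint.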
Literature retrieval runs throughout all phases — not just once. From broad discovery in Phase 1 to targeted citation verification in Phase 6.
Metrics frozen after plan freeze. All seeds reported. Every claim maps to evidence. Fabricated references blocked. No shortcuts allowed.
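Claim-evidence alignment reduces to a simple invariant: no claim ships without a pointer to a supporting artifact. A minimal sketch, with a hypothetical function name and file paths:

```python
def unbacked_claims(claim_evidence):
    """Return every claim that has no supporting evidence artifact."""
    return [claim for claim, artifacts in claim_evidence.items() if not artifacts]

issues = unbacked_claims({
    "Proposed method beats the baseline": ["results/main_table.csv"],
    "Gains hold across all seeds": [],   # flagged: no evidence attached
})
```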
6 thinking strategies — contradiction mining, assumption challenging, cross-domain transfer, and more — go beyond "what papers say is missing."
Venue-specific styles enforced automatically — CNS, CS Conference, IEEE, Life Science, Physical Sciences. ≥300 DPI, vector format, colorblind-safe.
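Those figure standards translate directly into plotting defaults. A Matplotlib sketch, assuming Matplotlib is the plotting library; the Okabe-Ito palette is one common colorblind-safe choice, not necessarily the palette Amplify ships with:

```python
import matplotlib as mpl
from cycler import cycler

# Okabe-Ito palette: distinguishable under common color-vision deficiencies.
OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#F0E442",
             "#0072B2", "#D55E00", "#CC79A7", "#000000"]

mpl.rcParams.update({
    "savefig.dpi": 300,                          # >= 300 DPI for raster fallbacks
    "savefig.format": "pdf",                     # vector output by default
    "axes.prop_cycle": cycler(color=OKABE_ITO),  # colorblind-safe line colors
})
```

Setting these once at import time means every figure saved afterward meets the venue requirements without per-plot bookkeeping.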
Method (M), Discovery (D), Tool (C), or Hybrid (H) — each type triggers specialized workflows, evaluation protocols, and paper structures.
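One way to picture the type routing is a small classifier over a project's declared contributions. This dispatch is purely illustrative, not Amplify's internal logic:

```python
def research_type(contributions):
    """Map declared contributions to a type code: M, D, C, or H for mixes."""
    table = {"new method": "M", "new finding": "D", "new tool": "C"}
    codes = {table[c] for c in contributions if c in table}
    return "H" if len(codes) > 1 else codes.pop()
```

A project contributing both a method and a tool would route to the Hybrid (H) workflow and its combined evaluation protocol.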
Amplify enforces human checkpoints at every critical juncture. But how you engage is entirely up to you.
Debate research questions, suggest method modifications, guide analysis, edit paper sections
Best for: Shaping every decision
Review deliverables at each gate, give high-level feedback, approve or redirect
Best for: Quality control without micro-managing
Say "approved" at each gate and let the AI handle details autonomously
Best for: Maximum automation, minimal overhead
# Copy or symlink to your IDE skills folder
mkdir -p ~/.cursor/skills
cp -r /path/to/amplify ~/.cursor/skills/amplify
# User-level (all projects)
mkdir -p ~/.cursor/rules
cp ~/.cursor/skills/amplify/install/amplify-bootstrap.mdc \
~/.cursor/rules/
# Type in a new chat session:
I want to develop a new method for
multi-modal data integration and apply it
to disease subtype discovery.
Also compatible with Claude Code, OpenClaw, and other LLM-powered coding assistants.