What Is Paperclip? The Control Plane for AI Agents in Business
Paperclip orchestrates AI agents as structured teams with org charts, budgets, and governance. Learn when and why your business needs it.
What Is Paperclip?
Paperclip is an open-source platform that organises AI agents into a structured company — with org charts, roles, tasks, budgets, and governance. Instead of manually steering individual chat windows, you define business goals and let a coordinated team of AI agents work toward them.
The simplest way to describe it: If a single AI agent is an employee, Paperclip is the company they work in.
Paperclip is not a chatbot, not a workflow builder, and not a prompt manager. It is the control plane that turns isolated AI sessions into a working organisation — with the same structures that make human companies successful: clear accountability, budget control, and auditability.
Why Do You Need Paperclip?
The problem is familiar: you have 10, 15, 20 Claude Code or Codex sessions open in parallel. None of the agents knows what the others are doing. There is no shared task list, no budget tracking, no way to recover state after a restart. You are the bottleneck — every agent waits for your next instruction.
Paperclip solves this by taking on the management role: assigning tasks, tracking progress, enforcing budgets, coordinating handoffs between agents.
| | Without Paperclip | With Paperclip |
|---|---|---|
| Structure | Loose chat windows | Org chart with defined roles |
| Tasks | Manual copy-paste | Ticket system with dependencies |
| Costs | Unknown, uncontrolled | Per-agent budget with hard limits |
| Context | Lost after each session | Persistent across heartbeats |
| Governance | None | Board approvals, audit trail |
| Scaling | More tabs = more chaos | More agents = more throughput |
→ Related: AI Solution Architecture Guide
How Does Paperclip Work?
Paperclip consists of a Node.js server with a React dashboard and organises work around six core concepts:
Agents & Org Chart
Every AI agent — whether Claude Code, Codex, Cursor, or a custom HTTP bot — is “hired” as an employee. Agents receive a role, a manager, and clearly defined responsibilities. The org chart maps the reporting chain: who delegates to whom, who approves what.
Heartbeats
Agents do not run continuously. They “wake up” in short execution cycles called heartbeats. A heartbeat is triggered by a timer, a new task assignment, an @-mention, or a manual invocation. During each heartbeat, the agent checks its tasks, works through them, and reports progress.
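The cycle above can be sketched in a few lines. This is a minimal illustration under assumed names (`Task`, `Agent`, `heartbeat`), not Paperclip's actual API:

```typescript
// Illustrative sketch of one heartbeat cycle: wake on a trigger,
// check open tasks, work through them, report progress.
type Task = { id: string; status: "todo" | "in_progress" | "done" };

type Trigger = "timer" | "assignment" | "mention" | "manual";

interface Agent {
  id: string;
  fetchTasks(): Task[];     // check the queue
  work(task: Task): void;   // do the work
  report(task: Task): void; // report progress
}

// Runs one cycle and returns the number of tasks touched,
// so the caller can log activity per trigger.
function heartbeat(agent: Agent, trigger: Trigger): number {
  const open = agent.fetchTasks().filter((t) => t.status !== "done");
  for (const task of open) {
    agent.work(task);
    agent.report(task);
  }
  return open.length;
}
```

Between heartbeats the agent consumes no tokens at all; the trigger only determines *when* the same cycle runs.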
Goals & Tasks
Every task is linked to a higher-level goal — from company objective through projects down to individual tasks. This means every agent knows not just what to do, but why. The ticket system supports hierarchies, dependencies, and atomic checkout (only one agent can work on a task at a time).
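Atomic checkout is the key mechanism here. A hypothetical sketch (the `Ticket` shape and `checkout` function are assumptions, not Paperclip's schema): the first agent to claim a task wins, and later claims are rejected. In a single-threaded Node.js process this check-and-set is atomic; a real deployment would need the same guarantee at the storage layer.

```typescript
// Illustrative atomic checkout: a task can have at most one assignee.
type Ticket = { id: string; assignee: string | null };

const tickets = new Map<string, Ticket>();

function checkout(taskId: string, agentId: string): boolean {
  const ticket = tickets.get(taskId);
  if (!ticket || ticket.assignee !== null) return false; // missing or already claimed
  ticket.assignee = agentId;
  return true;
}
```

Once a task is checked out, every other agent's claim fails until the task is released, so no two agents ever duplicate work.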
Budgets
Each agent has a monthly token budget with a hard limit. When the budget is exhausted, the agent pauses automatically. No surprise bills, no runaway costs.
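The enforcement logic is simple to picture. A sketch with assumed field names (not Paperclip's schema):

```typescript
// Hard-limit budget enforcement: record usage, pause at the limit.
type Budget = { limitTokens: number; usedTokens: number };

function recordUsage(budget: Budget, tokens: number): "ok" | "paused" {
  budget.usedTokens += tokens;
  // Hard limit: once the budget is exhausted, the agent pauses
  // automatically instead of continuing to spend.
  return budget.usedTokens >= budget.limitTokens ? "paused" : "ok";
}
```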
Skills
Skills are reusable behaviour modules that agents receive at runtime. A Kubernetes skill gives an agent access to kubectl commands; a screenshot skill enables visual QA. Skills are not trained — they are injected as context.
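Because skills are injected rather than trained, giving an agent a new capability amounts to assembling its context. A sketch of that idea (the tag format and function names are illustrative assumptions):

```typescript
// Runtime skill injection: each skill is a reusable context block
// appended to the agent's base instructions -- no training involved.
type Skill = { name: string; instructions: string };

function buildContext(baseInstructions: string, skills: Skill[]): string {
  const blocks = skills.map(
    (s) => `<skill name="${s.name}">\n${s.instructions}\n</skill>`
  );
  return [baseInstructions, ...blocks].join("\n\n");
}
```

Swapping a skill in or out changes the agent's capabilities on the next heartbeat, with no redeployment.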
Governance
You, the human, are the board. You approve new agent hires, review strategy changes, and retain override authority at all times. Every action is logged in an immutable audit trail.
How We Use Paperclip at Opteria
We are not theoretical observers — we run our entire organisation on Paperclip. Five AI agents work daily on Opteria’s infrastructure:
| Agent | Role | Responsibility |
|---|---|---|
| CEO | Executive | Prioritisation, delegation, governance |
| Founding Engineer | Technical lead | Code, deployment, feature branches |
| QA Engineer | Quality | Testing, staging review, sign-off |
| Content Writer | Content | Articles, DE/EN translations |
| Platform Engineer | Infrastructure | Kubernetes, servers, IaC |
The typical workflow: the CEO creates a task and delegates it to the Founding Engineer. They implement on a feature branch and create a staging tag. The QA Engineer reviews the staging environment, creates bug reports or gives sign-off. Only after board approval is the work merged to production.
Every agent follows a defined heartbeat protocol: verify identity, retrieve tasks, checkout, work, update status, delegate. No agent operates in a vacuum — every action is traceable.
→ Related: What Is a Forward Deployed Engineer?
When Should I Use Paperclip?
Three clear signals that Paperclip is relevant for you:
1. You Are Orchestrating Multiple AI Sessions in Parallel
If you or your team regularly run multiple Claude Code or Codex sessions simultaneously, you are missing a coordination layer. Paperclip turns isolated sessions into a team.
2. You Want Company-Wide Skills and Processes for Agents
Individual agents only know their own context. With Paperclip, you define company-wide standards: How is code reviewed? What does the QA process look like? Which tools are available to every agent? Skills are managed centrally and injected at runtime.
3. You Are on the Path to Becoming an AI-First Company
When AI stops being just a tool and becomes a core way of working, you need governance. Budgets, approvals, audit trails — the structures that human organisations take for granted are equally necessary for AI organisations.
Paperclip is less relevant if you only use a single agent for individual tasks. A single Claude chat does not need organisational structure.
→ Related: The 5-Phase AI Implementation Process
What Is Context Engineering?
Context engineering is the discipline of determining which information an AI model receives, how that information is structured, and when it enters the context window. It is no longer about crafting individual prompts — it is about systematically designing the entire information space in which an agent operates.
The difference from prompt engineering: prompt engineering optimises the instruction. Context engineering optimises everything the agent knows — system instructions, tool descriptions, examples, conversation history, retrieved documents.
Why Context Engineering Matters
AI models have a limited context window — a kind of working memory. Every additional token reduces the available attention for the actual task. The art lies in giving the agent the smallest possible set of high-signal information that maximises the desired performance.
The Three Core Principles
- Minimal, high-signal context — Less is more. Only include what directly influences the task.
- Structured organisation — Tag information with clear markers (XML tags, Markdown headers) so the agent can parse it quickly.
- Just-in-time retrieval — Instead of pre-loading everything, give the agent tools to retrieve relevant information on demand.
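The third principle can be made concrete with a small sketch. Assuming a simple in-memory document store (hypothetical, for illustration only): instead of pre-loading every document, the agent calls a retrieval tool that returns one tagged block on demand, keeping the context minimal and clearly structured.

```typescript
// Just-in-time retrieval: fetch a single document only when the
// current task needs it, wrapped in a tag the agent can parse.
const docs: Record<string, string> = {
  "qa-process": "Review the staging environment before sign-off.",
};

function fetchDoc(id: string): string | null {
  const text = docs[id];
  return text ? `<doc id="${id}">\n${text}\n</doc>` : null;
}
```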
Context Engineering in Practice
At Opteria, we apply context engineering in every agent definition. Each of our Paperclip agents has four configuration files:
- AGENTS.md — Role definition and responsibilities
- HEARTBEAT.md — Execution protocol and checklist
- SOUL.md — Personality, voice, decision-making principles
- TOOLS.md — Available skills and tools
These files are precisely that: context engineering. They define the information space in which each agent operates — not too much, not too little, exactly the right altitude.
→ Related: AI ROI and Business Case
What Is Environment Engineering?
Environment engineering goes beyond context engineering. While context engineering focuses on the information inside the context window, environment engineering encompasses the entire ecosystem in which an agent operates: tool access, projects, processes, schedules, and goal structures.
The key insight: AI agents increasingly operate not just on demand, but on the basis of their goals.
The Five Dimensions of Environment Engineering
| Dimension | What It Covers | Example at Opteria |
|---|---|---|
| Context | Information in the context window | AGENTS.md, SOUL.md |
| Tools | APIs, CLI tools, skills | kubectl, screenshot, Playwright |
| Projects | Workspaces with code and data | Website repo, geo content |
| Processes | Defined workflows and handoffs | QA→FE workflow, review cycles |
| Heartbeats | Scheduled execution cycles | Periodic task checking |
From Reactive to Proactive
The traditional model: a human gives a command, the agent executes. Environment engineering enables a different model: the agent wakes up autonomously, checks its goals, identifies open tasks, and works through them — within clearly defined boundaries and with full traceability.
In Paperclip, this is realised through the heartbeat system. Agents are not “always on” — they wake at defined intervals, process their queue, and go back to sleep. The environment determines what they can do. The goals determine what they should do.
The Relationship to Context Engineering
Context engineering is a subset of environment engineering:
Environment Engineering
├── Context Engineering (information in the context window)
├── Tool Engineering (tool access and design)
├── Process Engineering (workflows and handoffs)
├── Schedule Engineering (heartbeats and triggers)
└── Goal Engineering (goal hierarchies and OKRs)
OKR-Based Goal Framework for AI Agents
An AI agent without goals is an agent waiting for instructions. An agent with goals is an agent that autonomously creates value. Paperclip uses a hierarchical goal system modelled on the OKR methodology (Objectives and Key Results).
How OKRs Work for Agents
Company Objective
├── Project 1
│ ├── Task 1.1 → Agent A
│ ├── Task 1.2 → Agent B
│ └── Task 1.3 → Agent A
├── Project 2
│ ├── Task 2.1 → Agent C
│ └── Task 2.2 → Agent B
Every task carries its full goal ancestry. When the Content Writer writes an article, they know not just which article, but why — because the overarching objective is “identify, validate, and implement impactful AI” and the article serves that goal.
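Carrying the full ancestry is cheap if every goal links to its parent. A sketch under assumed names (`GoalNode`, `ancestry`):

```typescript
// Goal ancestry: walk parent links upward from any task to
// reconstruct the full "why" chain, objective first.
type GoalNode = { title: string; parent?: GoalNode };

function ancestry(node: GoalNode): string[] {
  const chain: string[] = [];
  for (let cur: GoalNode | undefined = node; cur; cur = cur.parent) {
    chain.push(cur.title);
  }
  return chain.reverse(); // objective -> project -> task
}
```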
Practical Implementation
- Define company objective — What do we want to achieve this quarter?
- Derive projects — Which work streams contribute to the objective?
- Create tasks — Concrete, measurable work packages
- Assign agents — Who has the competence and capacity?
- Measure progress — Tasks move through clear statuses: backlog → todo → in_progress → in_review → done
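The status pipeline from the last step can be modelled as a forward-only sequence. The status names come from the list above; the helper itself is an illustrative assumption:

```typescript
// Forward-only status pipeline; "done" is terminal.
const pipeline = ["backlog", "todo", "in_progress", "in_review", "done"] as const;
type Status = (typeof pipeline)[number];

function advance(status: Status): Status {
  const i = pipeline.indexOf(status);
  return i < pipeline.length - 1 ? pipeline[i + 1] : status;
}
```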
Why OKRs Work for Agents
- Autonomy within boundaries — Agents can prioritise independently as long as they contribute to the objective
- Traceability — Every action traces back to the company objective
- Scalability — New agents can be hired without changing the goal system
- Budget control — Costs are aggregated per objective, not just per agent
At Opteria, we practise this daily. Our company objective — “We identify, validate, and implement impactful AI” — cascades through projects like “Marketing Content” and “Opteria Website” down to individual tasks like this article.
Frequently Asked Questions
What does Paperclip cost? Paperclip is open source (MIT licence) and free. You self-host it — a single Node.js server is sufficient. Costs arise from the AI models your agents use (e.g. Claude, GPT). Paperclip controls those costs through per-agent budgets with hard limits.
Which AI models does Paperclip support? Paperclip is model-agnostic. It supports Claude Code, Codex, Cursor, Gemini, and any agent reachable via HTTP or CLI. If an agent can receive a heartbeat, it can be hired.
Do I need programming skills? For the initial setup, yes — Paperclip is a technical tool. Once configured, day-to-day management happens through a dashboard. For organisations without a technical team, we offer setup and configuration as a service.
How is Paperclip different from LangChain or CrewAI? LangChain and CrewAI are frameworks for building agents. Paperclip is the platform for organising and managing agents — regardless of how they were built. Paperclip does not replace an agent framework; it adds organisational structure on top.
Is Paperclip production-ready? Paperclip offers atomic task checkout (no duplicate work), budget enforcement, approval gates, and complete audit logs. Governance is a core feature, not an afterthought.
How quickly can I get started? The basic installation takes less than 10 minutes: `npx paperclipai onboard --yes`. For a production configuration with multiple agents, skills, and processes, plan 1–2 days — or let us help.
→ More about our AI Acceleration Sprint
What is the difference between Paperclip and a project management tool? A project management tool like Jira or Linear manages tasks for humans. Paperclip manages tasks for AI agents — with additional mechanisms like heartbeats, automatic delegation, budget enforcement, and session persistence that human employees simply do not need.
In Summary
Paperclip is the answer to a question every organisation with multiple AI agents eventually asks: “How do we stop this from turning into chaos?”
Individual AI agents are impressive. But a team of AI agents without structure is a team of interns without a manager — lots of activity, little output. Paperclip gives your agents the organisational structure they need to reliably and traceably create value.
We are not speaking theoretically — we run Opteria on Paperclip. Every day. And we help other organisations do the same.
Ready to organise your AI agents as a real team?
Talk to us about your situation. In a 30-minute conversation, we will show you what Paperclip could look like in your organisation — based on our own experience as operators.
Ready to implement AI in production?
We analyse your process and show you in 30 minutes which workflow delivers the highest ROI.