harbor
Capability control plane for agents.
Agents can already decide what to do next. The harder problem is letting them act without handing them raw authority. Most systems still bridge that gap with credentialed calls: give the agent a key, expose an action surface, and hope execution stays within intent. That works for demos. It breaks the moment execution touches real systems, real data, or real spend.
Raw credentialed calls are ambient authority. Once authority is present, there is no real governance layer between agent reasoning and real-world side effects.
The problem
The bottleneck in the agent economy is not just model quality. It is trust. Owners need to know what an agent may do, under whose authority, with which boundaries, when approval is required, and what happened after execution. Without that layer, every meaningful action collapses back to manual review or unsafe delegation.
Harbor
Harbor is the control plane between AI reasoning and real-world authority. It turns action into governed capability execution instead of raw credential use. Agents do not get unbounded access. They get permission to execute published capabilities inside explicit policy.
The public model is simple: capability, grant, approval, execution. That is the layer Harbor owns.
What agents get
Agents get scoped grants to published capabilities. Not raw API keys. Not ambient authority. Not a pile of credentials sitting in context. A grant defines the boundary. A capability defines the allowed action. An execution happens inside that boundary.
That means an agent can keep reasoning and acting without also becoming the place where trust breaks.
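The boundary described above can be sketched in a few types. This is a hypothetical model for illustration; the names, shapes, and decision values are assumptions, not Harbor's actual API:

```typescript
// Hypothetical sketch of the capability / grant / execution model.
// All names and shapes here are illustrative, not Harbor's real schema.

type CapabilityId = string;

interface Capability {
  id: CapabilityId;            // a published, allowed action
  requiresApproval: boolean;   // where standing authority ends
}

interface Grant {
  agentId: string;
  capabilities: CapabilityId[]; // the boundary: only these may execute
}

type Decision = "allowed" | "approval_required" | "denied";

// Decide how an execution request resolves inside a grant boundary.
function evaluate(grant: Grant, cap: Capability): Decision {
  if (!grant.capabilities.includes(cap.id)) return "denied";
  return cap.requiresApproval ? "approval_required" : "allowed";
}

const listInvoices: Capability = { id: "billing.list", requiresApproval: false };
const refund: Capability = { id: "billing.refund", requiresApproval: true };
const grant: Grant = {
  agentId: "agent-1",
  capabilities: ["billing.list", "billing.refund"],
};

console.log(evaluate(grant, listInvoices)); // allowed
console.log(evaluate(grant, refund));       // approval_required
```

The point of the sketch: the agent never sees a credential. It sees a decision, and a capability outside the grant resolves to denied no matter what the agent reasons its way into.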
What owners get
Owners keep control. Harbor mediates secrets so agents are not handed raw credentials. Harbor introduces approval gates where standing authority ends and human decision is required. Harbor records execution so every meaningful action is legible after the fact.
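What "legible after the fact" could look like, as a minimal sketch. The record fields below are assumptions for illustration, not Harbor's actual audit schema:

```typescript
// Illustrative execution record: who acted, under whose authority,
// whether a human approval gate was crossed, and what happened.
// Field names are assumptions, not Harbor's real schema.

interface ExecutionRecord {
  agentId: string;       // who acted
  capabilityId: string;  // which published capability
  grantId: string;       // under whose authority
  approvedBy?: string;   // set when a human approval gate was crossed
  startedAt: string;     // ISO timestamp
  outcome: "succeeded" | "failed" | "denied";
}

const auditLog: ExecutionRecord[] = [];

// Append-only: the log is the audit trail, never rewritten in place.
function record(entry: ExecutionRecord): void {
  auditLog.push(entry);
}

record({
  agentId: "agent-1",
  capabilityId: "billing.refund",
  grantId: "grant-42",
  approvedBy: "owner@example.com",
  startedAt: new Date().toISOString(),
  outcome: "succeeded",
});

console.log(auditLog.length); // 1
```

An append-only record like this is what turns "what did the agent do" from a forensic exercise into a lookup.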
Why this matters
Trust friction is the tax on the agent economy. If every valuable action requires a human because delegation is unsafe, agents stay stuck at suggestion instead of execution. If delegation is reckless, nobody serious will allow agents near systems that matter.
Harbor removes that tax by making delegable authority explicit, bounded, and auditable.
The thesis
Better models increase what agents can decide. Harbor increases what owners can let them do. That is the missing layer.
If the constraint on agent adoption is trust rather than intelligence, then the control plane for capability, grant, approval, and execution is not ancillary. It is foundational.
Get started
Give an agent this system prompt, then let it handle Harbor setup and owner onboarding.
Install Harbor and onboard this agent.
1. npm i -g @zonko-ai/harbor
2. Ask the user for their email address.
3. Pick an agent name, or ask the user.
4. harbor setup https://api.tryharbor.ai --profile production --display-name '<name>' --plain
5. If approval_required → user approves at https://dash.tryharbor.ai/approvals
6. harbor auth whoami --plain
7. harbor ls --plain — list available capabilities.
8. When a capability returns approval_required, ask the user to approve the grant at https://dash.tryharbor.ai/approvals, then retry.