How I Learned to Extend open webUI: Tools, Pipes, and External Pipelines (From a Learner’s POV)
I’ve been poking around open webUI for a few weeks now, and one thing that kept tripping me up early on was the vocabulary: Tools, Functions, Pipes, Manifolds, Pipelines — they all sound powerful, but what do they actually do for someone building an AI workflow? Writing this helped me clarify how I use each piece in practice, so here’s a tour from the perspective of a learner who’s been building small AI agents and prototypes.
Tools: extending the LLM itself
When I say “Tools,” I mean the extensions that let the underlying LLM do things it can’t do on its own — browse the web, call a third‑party API, or fetch a calendar event. They aren’t extensions to the open webUI interface itself; they expand what the model can do externally. Practically, I wire up a tool when the model needs to reach beyond its training data and context window: think up‑to‑date info, third‑party data, or actions like sending an email.
In one project, I used a small browsing tool to let an AI agent pull current headlines and then summarize them. The interface in open webUI stays the same, but the LLM can invoke that tool mid‑conversation. Tools are where I add real-world effects to otherwise static model behavior.
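To make that concrete, here’s a minimal sketch of what such a tool looks like on my side. The general shape (a Tools class whose typed, documented methods the model can invoke) is the convention I’ve been following in open webUI; the method name and the headlines URL are placeholders for whatever API you actually use.

```python
import requests


class Tools:
    def fetch_headlines(self, topic: str) -> str:
        """
        Fetch current headlines about a topic so the model can summarize them.
        :param topic: The subject to pull headlines for.
        """
        # Placeholder endpoint: swap in the news/search API you actually use.
        response = requests.get(
            "https://example.com/api/headlines",
            params={"q": topic},
            timeout=10,
        )
        response.raise_for_status()
        # The returned text becomes context the LLM reads before answering.
        return response.text
```

Once a tool like this is enabled for a model, the LLM can decide mid‑conversation that it needs fresh headlines, call fetch_headlines, and summarize whatever comes back, which is exactly the flow in the project above.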
Functions: extending open webUI (Filters and Actions)
Functions live inside open webUI and are the place to customize how messages flow. I break them into the two flavors I use:
Filter functions: These are preprocess and postprocess hooks. They let me modify user input before it hits the model (e.g., translate non‑English input to English, mask PII, or rate‑limit a noisy user) and inspect or change model output before it’s shown back (e.g., add fact‑check metadata or redact sensitive tokens). I rely on filters to enforce guardrails and to keep my workspace usable across international test users.
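As a rough illustration, here’s the skeleton of the PII‑masking filter I keep coming back to. It follows the inlet/outlet hook pattern as I’ve been writing filters (inlet runs before the model sees the request, outlet after it answers); the regex and the mask token are just examples, not a complete PII solution.

```python
import re


class Filter:
    # Illustrative pattern: mask email addresses as a stand-in for fuller PII handling.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def inlet(self, body: dict, __user__: dict | None = None) -> dict:
        """Preprocess: scrub PII from user messages before they reach the model."""
        for message in body.get("messages", []):
            if message.get("role") == "user" and isinstance(message.get("content"), str):
                message["content"] = self.EMAIL_RE.sub("[REDACTED_EMAIL]", message["content"])
        return body

    def outlet(self, body: dict, __user__: dict | None = None) -> dict:
        """Postprocess: inspect or annotate the model's reply before it is shown."""
        # This is where I'd attach fact-check metadata or redact sensitive tokens.
        return body
```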
Action functions: These extend the UI with custom buttons or icons under messages. For repetitive prompts or multi‑step flows an AI agent needs, having one‑click actions is a huge productivity win. For example, I added an “Expand Brief” action to quickly reformat a terse model reply into a longer blog‑style draft. Instead of typing the same prompt every time, a click triggers the action function and does the heavy lifting.
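The “Expand Brief” button boils down to something like the sketch below. It assumes the async action hook and the event‑emitter parameters as I’ve used them (the status payload shape is copied from examples I learned from); the prompt wording is mine, and the follow‑up model call is omitted to keep it short.

```python
class Action:
    async def action(
        self,
        body: dict,
        __user__: dict | None = None,
        __event_emitter__=None,
        __event_call__=None,
    ) -> None:
        """One-click 'Expand Brief': turn the last terse reply into a long-form prompt."""
        messages = body.get("messages", [])
        last_reply = messages[-1].get("content", "") if messages else ""

        # The same prompt I used to type by hand, now assembled by the button.
        prompt = (
            "Expand the following brief reply into a longer, blog-style draft, "
            "keeping the original claims intact:\n\n" + last_reply
        )

        if __event_emitter__:
            # Let the user see that the action fired while the expansion runs.
            await __event_emitter__(
                {"type": "status", "data": {"description": "Expanding brief...", "done": False}}
            )
        # In my setup the assembled prompt is then sent back through the model;
        # that call is omitted here, so this sketch only builds the prompt.
        return None
```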
Functions are how I shape the interaction and user experience of open webUI without touching the model weights or running separate infrastructure.
Pipes and Manifolds: building internal AI agents
When I wanted more control than the standard custom model interface offered, I turned to Pipes. Pipes let me author a single, specialized AI agent inside the open webUI environment (imagine a “content writer” agent or a “basic health advisor” agent). Because you write the agent directly in code, you get room for real conditional logic, custom prompts, and integrations with the Python stack available in the core environment.
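Stripped to its core, one of those agents is a class with a pipe method that receives the request body and returns a response, which is the shape I’ve been using. The sketch below is my illustration of the “content writer” idea; in the real version the assembled prompt is forwarded to a model rather than returned directly.

```python
class Pipe:
    def __init__(self):
        self.name = "Content Writer Agent"

    def pipe(self, body: dict) -> str:
        """A tiny specialized agent: wrap the user's request in a specialist prompt."""
        messages = body.get("messages", [])
        request = messages[-1].get("content", "") if messages else ""

        # Conditional logic like this is the reason to write a pipe by hand.
        if "outline" in request.lower():
            task = "Produce a structured outline for:"
        else:
            task = "Write a polished, publication-ready draft for:"

        # Real version: forward this prompt to a model; sketch: return it as the reply.
        return f"{task} {request}"
```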
Manifolds are simply collections of pipes — a way to group related agents that share logic. In one setup I built, I had three content‑focused pipes (outline generator, draft writer, SEO improver) bundled into a manifold because they shared the same authentication helper and a set of text normalization utilities.
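In code, that grouping came down to one class that advertises its pipes and shares helpers between them, roughly like this. The ids and the normalization helper are illustrative, and exact attribute names can differ between open webUI versions, so treat it as a sketch rather than a reference.

```python
class Pipe:
    def __init__(self):
        self.name = "content/"

    def pipes(self) -> list[dict]:
        # Each entry appears as its own selectable agent in the model list.
        return [
            {"id": "outline-generator", "name": "Outline Generator"},
            {"id": "draft-writer", "name": "Draft Writer"},
            {"id": "seo-improver", "name": "SEO Improver"},
        ]

    def pipe(self, body: dict) -> str:
        # The chosen pipe id arrives via the model field of the request body.
        selected = body.get("model", "")
        messages = body.get("messages", [])
        text = self._normalize(messages[-1].get("content", "") if messages else "")
        return f"[{selected}] would handle: {text}"

    def _normalize(self, text: str) -> str:
        # One of the shared text-normalization utilities the three pipes have in common.
        return " ".join(text.split())
```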
Be mindful of a practical limitation: pipes and manifolds run within the open webUI Python environment, so you’re limited to the packages already installed there. For small to medium tasks, this is fine; for heavy or dependency‑specific needs, you’ll want to move outside.
Pipelines (external): freedom and heavy lifting
That’s where external Pipelines come in. Pipelines run in a separate Docker container or instance, and that separation brings two advantages I regularly exploit (there’s a minimal skeleton after the list):
- Freedom from dependency constraints. If a task needs a specialized library not available in core open webUI — say a particular embedding or search library — I build a pipeline with that dependency in its own container. It talks to open webUI but doesn’t drag in incompatible packages or bloat the main app.
- Offloading computation. For workflows that crunch lots of data (batch processing documents, running nightly retraining jobs, or executing heavy feature extraction), running that work in a pipeline keeps the main open webUI responsive for interactive users. My nightly ingestion jobs live in pipelines and report summarized results back to the main app.
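The scaffold for one of those external pipelines, as I write them against the separate Pipelines server, looks roughly like the sketch below. The lifecycle hooks and the pipe signature follow the upstream examples as I understand them; the ingest job itself is a placeholder.

```python
from typing import Generator, Iterator, List, Union


class Pipeline:
    def __init__(self):
        self.name = "Nightly Document Ingest"

    async def on_startup(self):
        # Runs when the pipelines container starts: load heavy, dependency-specific
        # libraries here (e.g. a particular embedding or search library) without
        # touching the main open webUI environment.
        pass

    async def on_shutdown(self):
        # Close connections, flush queues, persist state.
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # The heavy lifting (batch processing, enrichment, embedding, search)
        # happens here, inside this container, keeping the main app responsive.
        return f"Summary of the ingest run relevant to: {user_message}"
```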
Putting it together: a simple workflow I use
A typical workflow I run now uses all of the above:
- The open webUI front end hosts the conversation. Filters enforce PII masking and language normalization.
- Action functions provide quick buttons like “Summarize this page” that trigger known prompts.
- If the user asks for current facts, the LLM calls a browsing Tool.
- For a complex content pipeline (ingest, enrich, embed, search), an external Pipeline performs the heavy lifting and exposes an endpoint the open webUI pipes or tools can call (see the sketch after this list).
- Internally, I keep small specialized agents as pipes and group related ones into manifolds so they’re easy to maintain.
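The hand‑off mentioned in the fourth bullet is nothing fancier than an HTTP call from a pipe (or tool) into the pipeline container. The service name, port, route, and payload below are placeholders for whatever endpoint your pipeline actually exposes.

```python
import requests


class Pipe:
    # Placeholder address: in my setup both containers share a Docker network,
    # so the pipeline is reachable by service name.
    PIPELINE_URL = "http://pipelines:9099/content-pipeline"

    def pipe(self, body: dict) -> str:
        messages = body.get("messages", [])
        query = messages[-1].get("content", "") if messages else ""

        # Hand the heavy work (ingest, enrich, embed, search) to the external
        # pipeline and relay its summarized answer back into the chat.
        response = requests.post(self.PIPELINE_URL, json={"query": query}, timeout=60)
        response.raise_for_status()
        return response.json().get("answer", "The pipeline returned no answer.")
```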
A few practical tips
- Start small: add a simple filter for PII and an action for a frequent prompt. Those yield big UX gains fast.
- Use pipes for logic that fits within the main environment; move to pipelines when you hit dependency or performance limits.
- Document shared code used by multiple pipes and keep it in a manifold to avoid duplication.
- Don’t forget to track the pieces: as I scale up, I label services and keep small runbooks so my MCP dashboards and logs stay readable.
Closing thoughts
For me, learning open webUI stopped being about “which button to click” and became about understanding where each capability lives and why. Tools change what the LLM can do externally, functions shape the UI and message flow, pipes/manifolds give you internal coding power, and external pipelines buy you dependency freedom and compute headroom. Together they let you design capable AI agents without forcing everything into one container.