Build Data Pipelines Visually, Compile to Real SQL

Drag nodes onto a canvas, connect them, and Plotono compiles your entire pipeline to optimized SQL. No boilerplate. No YAML. No guessing what your pipeline actually does under the hood.

The Problem with Code-Only Pipeline Tools

Most data pipeline tools require you to express transformation logic entirely in code. That means writing hundreds of lines of SQL, Python, or YAML to describe what should be a straightforward sequence of operations: pull data from a source, filter out irrelevant rows, join with another table, aggregate the results, and output a clean dataset.

The barrier to entry is high. Junior analysts cannot contribute without learning a programming language first. Senior engineers spend time reviewing boilerplate instead of logic. Debugging means reading through nested CTEs or subqueries without any visual indication of where data flows or where a filter is applied incorrectly. When pipeline dependencies get complex, it becomes nearly impossible to see the full picture from code alone.

Plotono takes a different approach. The visual canvas shows your entire pipeline as a directed graph. Each node represents one transformation step, and the connections between nodes show exactly how data flows from source to output. You can still read and edit the generated SQL at any time, but you never have to start there.

How the Pipeline Builder Works

The pipeline builder uses a drag-and-drop canvas powered by ReactFlow. You pick a node type from the palette, drop it onto the canvas, configure its properties, and connect it to other nodes. Plotono validates the graph in real time and compiles the entire pipeline to SQL as you build.

Node Types

Source

Define where your data comes from. Point it at a table, a data lake, or the output of another pipeline.

Filter

Apply WHERE conditions to remove rows that do not match your criteria. Chain multiple filters or combine conditions.

Join

Combine data from two sources. Supports INNER, LEFT, RIGHT, FULL, and CROSS join types with configurable join conditions.

Aggregate

Group rows and compute aggregations like SUM, COUNT, AVG, MIN, and MAX. Specify GROUP BY columns and aggregation expressions.

Select

Choose which columns to keep in your output. Rename columns and reorder them for downstream consumers.

Extend

Add computed columns using SQL expressions. Calculate ratios, apply conditional logic, or derive new fields from existing data.

Order By

Sort results by one or more columns, ascending or descending. Control the final output order of your pipeline.

Limit

Restrict the number of rows returned. Useful for sampling, pagination, and top-N queries.

SQL

Write custom SQL for operations that go beyond the built-in node types. Full DuckDB-compatible SQL with security validation.

Pipeline

Reference another pipeline by name or ID. Enables composition, reuse, and parameter passing across nested pipelines.
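
To make the node types concrete, here is a sketch of the kind of SQL a small pipeline might compile to: a Source reading an orders table, a Filter on status, a Join to customers, an Aggregate by region, an Order By, and a Limit. Every name below (orders, customers, region, total) is illustrative rather than taken from a real project, and the exact CTE structure Plotono emits may differ:

```sql
-- Hypothetical compiled output for a six-node pipeline.
WITH filtered_orders AS (
    SELECT *
    FROM orders
    WHERE status = 'completed'        -- Filter node
),
joined AS (
    SELECT o.*, c.region
    FROM filtered_orders AS o
    INNER JOIN customers AS c         -- Join node
        ON o.customer_id = c.id
)
SELECT
    region,
    SUM(total) AS revenue             -- Aggregate node
FROM joined
GROUP BY region
ORDER BY revenue DESC                 -- Order By node
LIMIT 10;                            -- Limit node
```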

Pipeline Composition

Real data workflows are not flat sequences. They involve shared transformations that multiple downstream pipelines depend on. Plotono supports pipeline composition natively: any pipeline can reference another pipeline as a node in its own graph.

You can reference a pipeline by its unique ID, by its human-readable name, or by embedding it inline. When you reference a pipeline, its parameters become available in the parent. Parameter renaming lets you map a child pipeline's @minimum_age parameter to the parent's @min_age, keeping interfaces clean without modifying the original pipeline.

Composition works at multiple levels. A pipeline can reference a pipeline that itself references another pipeline, and parameter renaming propagates recursively through every layer. The compiler detects circular dependencies at compile time and reports a clear error rather than entering an infinite loop.
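
As a sketch of how parameter renaming might surface in compiled output, suppose a child pipeline filters users on @minimum_age and a parent references it under the renamed parameter @min_age. Assuming referenced pipelines are inlined as CTEs (an assumption about the compiler, not a documented guarantee), the result could look like this:

```sql
-- Child pipeline on its own might compile to:
--   SELECT * FROM users WHERE age >= @minimum_age
-- Inlined into a parent that renames the parameter:
WITH adults AS (
    SELECT *
    FROM users
    WHERE age >= @min_age  -- the child's @minimum_age, renamed by the parent
)
SELECT country, COUNT(*) AS adult_count
FROM adults
GROUP BY country;
```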

Macro Nodes

Macro nodes are higher-level building blocks that expand into multiple lower-level operations at compile time. They simplify common data preparation tasks that would otherwise require several nodes and careful configuration.

Anonymize

Replace sensitive column values with hashed or masked versions. Configure which columns to anonymize and which masking strategy to use. Useful for sharing datasets with external teams without exposing personal data.

Deduplicate

Remove duplicate rows based on one or more key columns. Specify which row to keep when duplicates are found, using ordering criteria like most recent timestamp or highest priority value.
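
As an example of macro expansion, a Deduplicate keyed on user_id that keeps the most recent row per key might expand to a window-function pattern like the one below. The table and column names are hypothetical, and the macro's actual output may be shaped differently:

```sql
-- Keep the newest row per user_id, then drop the helper column.
SELECT * EXCLUDE (row_num)             -- DuckDB's EXCLUDE removes the ranking column
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (
            PARTITION BY user_id       -- key column(s)
            ORDER BY updated_at DESC   -- "keep most recent" criterion
        ) AS row_num
    FROM events
) AS ranked
WHERE row_num = 1;
```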

Rename Columns

Batch rename columns using a mapping configuration. Standardize column names across sources without adding multiple Select nodes or writing repetitive SQL aliases.

Cast Columns

Convert column data types in bulk. Cast strings to integers, timestamps to dates, or numbers to formatted text. Type mismatches between sources are one of the most common pipeline errors, and this node prevents them early.

Fill Nulls

Replace NULL values in specified columns with default values. Use static defaults, column-specific fallbacks, or expression-based fills. Downstream aggregations and charts behave predictably when nulls are handled at the source.
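
A Fill Nulls node might expand to a handful of COALESCE calls, roughly like this (column names and defaults are illustrative):

```sql
SELECT
    id,
    COALESCE(status, 'unknown')      AS status,      -- static default
    COALESCE(discount, 0.0)          AS discount,    -- column-specific fallback
    COALESCE(shipped_at, ordered_at) AS shipped_at   -- expression-based fill
FROM orders;
```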

SQL Compiler Under the Hood

Every visual pipeline compiles to standard SQL. Plotono supports both DuckDB and BigQuery as compilation targets, so the same pipeline definition can run against a local analytics database or a cloud data warehouse. The compiler follows a four-stage process: lexing (tokenization), parsing (AST generation), optimization, and code generation.

Plotono also provides a pipe syntax that sits between raw SQL and the visual builder. Pipe syntax lets you write queries as a chain of operations separated by the |> operator. The compiler transforms pipe syntax into the same internal AST that the visual builder produces, so optimizations apply equally regardless of which interface you use.
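
Plotono's exact pipe-syntax keywords are not spelled out here, so the sketch below is modeled on the pipe-syntax style popularized by BigQuery, which uses the same |> operator; treat the specific verbs as an assumption:

```sql
-- A pipe-syntax sketch: each |> step maps to one pipeline operation.
FROM orders
|> WHERE status = 'completed'
|> JOIN customers ON orders.customer_id = customers.id
|> AGGREGATE SUM(total) AS revenue GROUP BY region
|> ORDER BY revenue DESC
|> LIMIT 10;
```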

Twelve Query Optimizers

Before code generation, the AST passes through up to twelve optimization stages. Each optimizer targets a specific class of inefficiency:

ConstantFolder

Evaluates constant expressions at compile time instead of at query time.
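
For example, with an illustrative table:

```sql
-- Before: 60 * 60 * 24 would be evaluated for every row.
SELECT * FROM events WHERE duration_seconds > 60 * 60 * 24;

-- After folding: the constant is computed once at compile time.
SELECT * FROM events WHERE duration_seconds > 86400;
```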

BooleanSimplifier

Reduces complex boolean logic to simpler equivalent forms.

DeadBranchEliminator

Removes code branches that can never be reached based on constant conditions.

RedundantOpEliminator

Strips operations that have no effect on the query result.

PredicatePushdown

Moves filter conditions closer to the data source to reduce intermediate result sizes.
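
A sketch of the transformation, with hypothetical tables:

```sql
-- Before: every row of orders flows into the join, then gets filtered.
SELECT *
FROM orders AS o
JOIN customers AS c ON o.customer_id = c.id
WHERE o.status = 'completed';

-- After pushdown: the filter runs first, shrinking the join's input.
SELECT *
FROM (SELECT * FROM orders WHERE status = 'completed') AS o
JOIN customers AS c ON o.customer_id = c.id;
```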

WhereMerger

Combines adjacent WHERE clauses into a single filter for fewer query stages.

LimitPushdown

Pushes LIMIT operations closer to the source to avoid processing unnecessary rows.

TypeAwareFolder

Uses schema type information to fold expressions that a generic folder cannot.

ProjectionPushdown

Removes unused columns early in the pipeline to reduce memory and I/O.

JoinReorderAdvisor

Suggests more efficient join orderings based on schema statistics.

CommonSubexprEliminator

Detects repeated subexpressions and factors them out to avoid redundant computation.

SubqueryUnnest

Converts correlated subqueries into equivalent joins for better execution plans.
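
A sketch of the rewrite, again with hypothetical tables:

```sql
-- Before: the correlated subquery re-runs for each customer row.
SELECT name
FROM customers AS c
WHERE (SELECT SUM(total) FROM orders AS o
       WHERE o.customer_id = c.id) > 1000;

-- After unnesting: aggregate once, then join.
SELECT c.name
FROM customers AS c
JOIN (
    SELECT customer_id, SUM(total) AS total_spend
    FROM orders
    GROUP BY customer_id
) AS totals ON totals.customer_id = c.id
WHERE totals.total_spend > 1000;
```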

SQL Editor Toggle

The pipeline editor supports three modes that you can switch between at any time. Visual mode gives you the drag-and-drop canvas. Pipe syntax mode shows the pipeline as a chain of operations using the |> operator. Raw SQL mode shows the compiled SQL output, which you can also edit directly.

All three modes produce the same underlying representation. A pipeline built visually generates the same SQL as one written in pipe syntax or raw SQL. This means you can start with the visual builder to prototype quickly, switch to pipe syntax to fine-tune the logic, and then review the raw SQL to verify what will actually execute against your database.

Multi-Tenant Workspaces

Plotono's workspace system is designed for teams of any size. Each organization gets a tenant with strict data isolation. Within a tenant, workspaces form a hierarchical tree up to ten levels deep, letting you mirror your organizational structure as precisely as needed.

Role-based access control assigns every workspace member one of three roles: admin, editor, or guest. Admins can manage workspace settings and members. Editors can create and modify pipelines, visualizations, and dashboards. Guests can view shared resources but cannot modify them. Permissions can propagate to child workspaces automatically, or be scoped to a single workspace.

For access patterns that do not fit neatly into the workspace hierarchy, tag-based access control lets you create cross-cutting permission groups. Apply a tag to a pipeline, connector, or visualization, and grant access to that tag independently of workspace membership.

From Pipeline to Dashboard

Once your pipeline is producing clean, structured output, you can map its columns directly to chart axes. The column mapping editor lets you assign columns to X and Y axes, color dimensions, size encodings, and labels. Choose from over twenty chart types and drop the resulting visualization onto a dashboard.

Because dashboards are connected to pipelines rather than static data exports, your charts update whenever the underlying data changes. Global dashboard filters bind to pipeline parameters, so a single date range picker or category dropdown can filter every chart on the page simultaneously.

This tight integration between pipeline building and visualization eliminates the disconnect that plagues traditional BI setups, where the transformation layer and the visualization layer are separate products communicating through database tables.

Start Building Pipelines Today

Contact sales to get started with the visual pipeline builder. No credit card required.