Here is a clean, economical articulation of the experience you’re describing, framed at the level of product behaviour rather than pitch.

Substack-for-data: imagined user flow

The core promise is radical ease: the minimum-friction path from “I have a dataset / link / visual” to “it’s published, discoverable, subscribable, and usable.”

1. Entry point
   A new post begins exactly as on Substack:
   - title
   - short description
   - a single freeform editor block

2. Drop-in inputs
   The editor accepts three primitives:
   - a data file (CSV/JSON/Parquet)
   - a URL (auto-scraped: screenshot, title, metadata, canonical link)
   - text/Markdown

   Each primitive renders immediately: dataset previews, URL cards, basic charts inferred from the file.
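A minimal sketch of how the editor might classify a pasted primitive and build a dataset preview. The detection rules, function names, and five-row sample size are all assumptions for illustration, not part of the product description above:

```python
import csv
import io
import json

def detect_primitive(content: str) -> str:
    """Classify a pasted block as 'url', 'data', or 'text' (hypothetical rules)."""
    stripped = content.strip()
    if stripped.startswith(("http://", "https://")):
        return "url"
    try:
        json.loads(stripped)  # valid JSON counts as data
        return "data"
    except ValueError:
        pass
    # Heuristic: several lines with a consistent comma count looks like CSV.
    lines = stripped.splitlines()
    comma_counts = {line.count(",") for line in lines[:5]}
    if len(lines) >= 2 and len(comma_counts) == 1 and "," in lines[0]:
        return "data"
    return "text"

def preview_csv(text: str, rows: int = 5) -> dict:
    """Infer column names and a small sample for the inline dataset preview."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    sample = [row for _, row in zip(range(rows), reader)]
    return {"columns": header, "sample": sample, "n_preview_rows": len(sample)}

print(detect_primitive("https://example.com/report"))  # → url
print(preview_csv("city,price\nOslo,4200\nBergen,3900")["columns"])  # → ['city', 'price']
```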

3. Publish unit: the “data post”
   A data post can be:
   - a dataset with a short note
   - a visual with a sentence or two
   - a curated link with auto-generated context
   - any mix of the above

   The constraints (title + description + one main block) keep the object small and uniform: analogous to Substack’s standard post, but data-centred.
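The constrained publish unit could be modelled as a small value object. The `DataPost` name, the 280-character description limit, and the validation rules are illustrative assumptions, sketched to show how the “title + description + one main block” shape stays uniform:

```python
from dataclasses import dataclass

MAX_DESCRIPTION = 280  # illustrative limit, not from the product description

@dataclass
class DataPost:
    """Uniform publish unit: title + description + exactly one main block."""
    title: str
    description: str
    block: dict  # the single block: a dataset, a URL card, a chart, or text

    def __post_init__(self):
        if not self.title:
            raise ValueError("a data post needs a title")
        if len(self.description) > MAX_DESCRIPTION:
            raise ValueError("description should stay short")

post = DataPost(
    title="Oslo rents",
    description="Monthly asking rents scraped from listings.",
    block={"kind": "dataset", "file": "rents.csv", "note": "Prices in NOK."},
)
```

Keeping the block a single tagged payload (rather than a list of sections) is what makes every post the same small, previewable object.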

4. Distribution
   Subscribers receive posts by email, exactly as on Substack. Posts live at clean, permanent URLs under a subdomain (e.g. username.flower.show). Users can follow authors or tags.
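The clean, permanent URL could be derived from the username and title with a simple slug rule. The `/p/` path segment and the slug logic here are assumptions, sketched in Python:

```python
import re
import unicodedata

def permalink(username: str, title: str, base: str = "flower.show") -> str:
    """Build the permanent URL for a post (slug rules are an assumption)."""
    # Fold accents to ASCII, lowercase, and collapse everything else to hyphens.
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")
    return f"https://{username}.{base}/p/{slug}"

print(permalink("ada", "Oslo Rents, 2015-2024"))
# → https://ada.flower.show/p/oslo-rents-2015-2024
```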

5. Auto-packaging
   Every dataset uploaded or linked can be exported as a lightweight data package:
   - a basic schema
   - a downloadable file
   - an endpoint usable by DuckDB or similar

   This mimics the NPM dream: instant packaging without ceremony.
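The lightweight data package might be a small manifest generated at export time. This sketch loosely follows the shape of a Frictionless-style `datapackage.json`; the exact field names are assumptions:

```python
import csv
import io
import json

def build_package(name: str, csv_text: str, base_url: str) -> str:
    """Emit a minimal datapackage-style manifest: schema + downloadable file."""
    reader = csv.reader(io.StringIO(csv_text))
    columns = next(reader)
    manifest = {
        "name": name,
        "resources": [{
            "path": f"{base_url}/{name}.csv",  # the downloadable file / endpoint
            "schema": {"fields": [{"name": c} for c in columns]},  # basic schema
        }],
    }
    # A DuckDB user could then consume the endpoint directly, e.g.:
    #   SELECT * FROM read_csv_auto('https://ada.flower.show/p/oslo-rents/rents.csv');
    return json.dumps(manifest, indent=2)
```

No upload ceremony: the manifest falls out of the CSV header, which is the “instant packaging” the NPM comparison points at.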

6. Aggregation
   Collections of posts can form “series” or “projects” (e.g. an ongoing housing-data investigation). A user’s profile lists all datasets they’ve published or linked.

7. Syndication / outputs
   Analogous to Substack’s “publish to YouTube” or “publish to podcast feed”, but for data:
   - push a dataset to a public data registry
   - generate a static CSV/JSON endpoint
   - generate an embeddable chart snippet
   - provide a DuckDB-ready query link
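Two of these outputs are cheap to sketch: the embeddable chart snippet and the DuckDB-ready query link. The markup, URL shapes, and function names are assumptions (`read_csv_auto` is a real DuckDB function; the `/embed` path is hypothetical):

```python
def embed_snippet(post_url: str, height: int = 420) -> str:
    """Copy-paste iframe embed for a post's chart (markup is an assumption)."""
    return (
        f'<iframe src="{post_url}/embed" width="100%" height="{height}" '
        f'frameborder="0" loading="lazy"></iframe>'
    )

def duckdb_query_link(csv_url: str) -> str:
    """A ready-to-run DuckDB query over the post's static CSV endpoint."""
    return f"SELECT * FROM read_csv_auto('{csv_url}');"

print(duckdb_query_link("https://ada.flower.show/p/oslo-rents/data.csv"))
# → SELECT * FROM read_csv_auto('https://ada.flower.show/p/oslo-rents/data.csv');
```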

8. Social layer
   Lightweight interaction: likes, follows, simple comments. The main value is that datasets become social objects rather than static files.

In effect

It combines the frictionless posting workflow of Substack with the file-first ethos of data journalism and the accessibility of lightweight data packaging. The emphasis is always on the lowest mental overhead: a place where dropping a file or link is already 80% of a publishable post.