Metrics + mappings

Bind a Findry metric to PostHog data. Five mapping kinds, one library, automatic outcome verdicts. Pick the right kind for your shape of question.


Two halves of every mapping

[Diagram: "Two halves of a metric mapping." A Findry metric (e.g. "Mobile checkout completion", unit: percent) — the stable canonical name your team commits to — is bound via a metric mapping to a PostHog data source. The mapping picks one binding kind (Event, Insight, Template, Funnel, or Raw HogQL) and can be swapped freely as your analytics stack evolves.]
A mapping binds two sides — Findry metric ↔ PostHog data source

A mapping binds a Findry metric (the canonical name your PMs track) to a PostHog data source (what the adapter queries). Both sides have to exist before you can save the binding.

This split exists for a reason. The metric stays the same as your analytics stack evolves; the mapping changes. "Mobile checkout completion" is the canonical name your team commits to. Whether it's measured against PostHog today or Amplitude tomorrow is the mapping's job, not the metric's.

The metric library

Open Settings → Metrics. You'll see one section per project, each with its current metrics and a + New metric button.

Each metric carries a name (free text, max 80 chars), an optional description, and a unit — either % (percent) or # (absolute). The unit affects how the variance reads in outcomes: a +10% lift on a percent metric vs a +10 absolute lift on a count metric.
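That percent-vs-absolute distinction reads roughly like this (a minimal sketch — `format_lift` is a hypothetical helper, not a Findry API):

```python
def format_lift(delta: float, unit: str) -> str:
    """Render a lift the way an outcome view would read it.

    unit is "%" (percent metric) or "#" (absolute count metric).
    """
    sign = "+" if delta >= 0 else ""
    if unit == "%":
        return f"{sign}{delta:g}% lift"        # e.g. conversion moved 10 points
    return f"{sign}{delta:g} absolute lift"    # e.g. 10 more events per window

print(format_lift(10, "%"))  # +10% lift
print(format_lift(10, "#"))  # +10 absolute lift
```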

The 10 seed metrics (Activation rate, W2 retention, NPS, etc.) are flagged with a Seed badge. They're not special — you can rename, edit, or archive them like any other metric.

Two ways to create a metric:

  • From Settings → Metrics — best when you're setting up several at once or want to write a description
  • Inline in the Add mapping form — best when you're already in the integrations tile binding to PostHog and just need a metric to point at. The "+ New metric" toggle inside the modal opens a smaller form that auto-selects the new metric when it's done.

The five mapping kinds

Open Settings → Integrations → PostHog → Add mapping. Pick the project, pick (or create) the metric, then choose one of five binding kinds based on the shape of your question.

Quick decision tree:

  • Counting one event with optional aggregation? → Event
  • Already maintaining the metric as a saved insight in PostHog? → Insight
  • Common shape (activation rate, retention, NPS, feature adoption)? → Template
  • Multi-step conversion (most PM hypotheses)? → Funnel
  • None of the above? → Raw HogQL

Kind A · Event

The most common shape. Pick a single PostHog event + an aggregation and Findry counts (or sums / averages) it over the outcome's measurement window.

Form fields:

  • Event name — must match the PostHog event exactly (case-sensitive). Examples: signup_completed, $pageview, order_placed.
  • Aggregation — Event count (raw count of the event), Unique users (distinct distinct_id), or Sum / Avg / Percentile on a numeric property.
  • Filters (optional) — any number of property = value filters that narrow the count (e.g. plan = pro, $browser = Safari).
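Put together, an Event mapping carries something like the following. The field names here are illustrative — this is not Findry's actual storage schema, just the shape of what the form collects:

```python
# Hypothetical serialization of an Event mapping — field names are
# illustrative, not Findry's actual schema.
event_mapping = {
    "kind": "event",
    "event_name": "order_placed",    # must match the PostHog event exactly (case-sensitive)
    "aggregation": "unique_users",   # or "event_count", "sum", "avg", "percentile"
    "filters": [
        {"property": "plan", "value": "pro"},         # narrows the count
        {"property": "$browser", "value": "Safari"},
    ],
}

print(event_mapping["event_name"])  # order_placed
```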

The Validate button runs a 24-hour count() and tells you "Event recognized, X in last 24h." Zero counts get a yellow "double-check the name is current" nudge — most often a typo or stale event name. You can still save the mapping anyway (PostHog dev instances often have no recent traffic).

Kind B · Insight

Bind to a saved PostHog Insight by its numeric ID. Best when you already maintain the metric in PostHog and don't want to re-define it in Findry.

Form fields:

  • Insight ID — from the PostHog URL. PostHog insight URLs look like https://us.posthog.com/project/12345/insights/67890 — the 67890 is the insight ID.
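Extracting the ID from the URL is mechanical — a quick sketch (the Add mapping form just asks for the number; this helper is illustrative):

```python
import re

def insight_id_from_url(url: str) -> int:
    """Pull the numeric insight ID out of a PostHog insight URL."""
    m = re.search(r"/insights/(\d+)", url)
    if not m:
        raise ValueError(f"not an insight URL: {url}")
    return int(m.group(1))

print(insight_id_from_url("https://us.posthog.com/project/12345/insights/67890"))  # 67890
```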

Validation hits GET /api/projects/N/insights/M/ to confirm the insight exists and to return its display name. Outcome measurements pull the insight's primary numeric result.

Caveat: insights with multiple series, complex breakdowns, or dashboard-time-range overrides get reduced to a single number. The reduction is the first numeric value in the response. If your insight has multiple meaningful numbers, prefer the Raw HogQL mapping where you can write the exact query Findry runs.

Kind C · Template

Pre-shaped queries for common metrics. Four ship in the box; each has its own free params:

  • activation_rate — % of users who triggered a chosen activationEvent within N days of a chosen signupEvent
  • week_2_retention — % of signups still active in week 2 (events in days 7–14 after first event). Aliases: w2_retention, two_week_retention, day_14_retention
  • feature_adoption_rate — % of users who used a chosen featureEvent in the last windowDays (default 30)
  • nps — promoters minus detractors over total respondents, from survey sent events with a numeric $survey_response

The autocomplete on the metric_name field matches template aliases — typing "Day 14 Retention" or "two_week_retention" resolves to the week_2_retention template if the PM picks the template kind.
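Alias resolution could work along these lines — a sketch using the aliases listed above; the normalization and resolver function are assumptions, not Findry's actual code:

```python
# Aliases from this page; the resolver itself is an illustrative sketch.
TEMPLATE_ALIASES = {
    "w2_retention": "week_2_retention",
    "two_week_retention": "week_2_retention",
    "day_14_retention": "week_2_retention",
}

def resolve_template(name: str) -> str:
    """Normalize a typed metric name ("Day 14 Retention") to a template key."""
    key = name.strip().lower().replace(" ", "_")
    return TEMPLATE_ALIASES.get(key, key)

print(resolve_template("Day 14 Retention"))    # week_2_retention
print(resolve_template("two_week_retention"))  # week_2_retention
```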

Kind D · Funnel

The shape most PM hypotheses are about. Inline funnels — you type the ordered event list directly into the mapping form, no saved Funnel insight needed in PostHog.

Form fields:

  • Funnel events — comma-separated, ordered, 2–10 steps. Example: land, signup_started, signup_completed, first_action
  • Target step (optional) — which step's conversion to measure. Omit to measure end-to-end (step 0 → last event). Provide an integer ≥ 1 to measure to an intermediate step.

Conversion rate = step[targetStep].count / step[0].count × 100, returned as a percent value. Sample size = the step 0 entry count.
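The formula above can be sketched directly (a minimal illustration — `funnel_conversion` is not a Findry API):

```python
def funnel_conversion(step_counts, target_step=None):
    """Conversion rate (percent) and sample size, per the formula above.

    step_counts[0] is the entry step; target_step defaults to the last step.
    """
    target = target_step if target_step is not None else len(step_counts) - 1
    rate = step_counts[target] / step_counts[0] * 100
    return rate, step_counts[0]  # sample size = step 0 entry count

# land → signup_started → signup_completed → first_action
rate, n = funnel_conversion([1000, 520, 300, 240])
print(rate, n)  # 24.0 1000
```

Passing `target_step=2` instead measures conversion to the intermediate signup_completed step (30.0%).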

Kind E · Raw HogQL

The escape hatch. Paste any HogQL query that returns a single numeric column called value (and optionally sample_size). Findry runs your query verbatim; nothing else is allowed to interpret it.

SELECT
  100.0 * countIf(properties.tier = 'pro') / count() AS value,
  count() AS sample_size
FROM events
WHERE event = 'subscription_changed'
  AND timestamp > now() - INTERVAL 30 DAY

Validation wraps your query in SELECT * FROM (...) LIMIT 1 and runs it; PostHog validates the HogQL syntax server-side and returns the resulting row.
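The wrapping step is simple string composition — sketched here as a hypothetical helper:

```python
def validation_query(user_hogql: str) -> str:
    """Wrap a raw HogQL mapping query the way validation does before running it."""
    return f"SELECT * FROM ({user_hogql}) LIMIT 1"

print(validation_query("SELECT count() AS value FROM events"))
# SELECT * FROM (SELECT count() AS value FROM events) LIMIT 1
```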

Same caveat as funnels: raw HogQL queries don't support per-bucket time series. The baseline engine falls back to manual mode for raw-backed outcomes.

Use this when none of the other four kinds fit — usually for cross-event ratios, custom percentile aggregations, or anything that joins events to person properties in a non-trivial way.

How outcomes get computed

Once you've mapped a metric, the chain is automatic. Here's what happens in order:

  1. You promote a hypothesis to a Bet. The promotion form's predicted-impact field references the metric by ID and snapshots the metric_name into the bet's predicted_impact JSONB column.
  2. The bet ships (manually or auto-detected via tracker webhook). The associated outcome moves from awaiting_ship to measuring with a real ship date.
  3. Once ship_date + timeframe_days <= now, the outcome-sweep cron picks it up (runs daily at 07:00 UTC).
  4. The cron looks up the metric's PostHog mapping, resolves which adapter call to make (event / insight / template / funnel / raw), and pulls the post-ship value over the measurement window.
  5. The baseline engine pulls the same metric over the 60-day pre-ship window and computes a baseline mean + std-dev. (For funnel + raw mappings, the baseline is whatever the PM manually entered on the bet.)
  6. The significance engine runs Welch's t-test on (post-ship mean, baseline mean), computes the effect size + 95% CI, and classifies the verdict: hit if the effect lands inside the predicted band, near_hit if close, miss if the wrong direction or far below the band, inconclusive if the sample is too small.
  7. The outcome row is updated with the verdict + actual value + CI bounds + p-value + diagnostics. PostHog gets a "● Outcome verdict" annotation at the measurement timestamp.
  8. You can override the verdict from the outcome detail page with a reason — common reason is "narrow CI plus sustained trend bumps this from near_hit to hit." The override flag is stored so the meta-analysis distinguishes auto vs manual verdicts.
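Steps 6's statistics can be sketched as follows. The Welch's t formula is standard; the verdict thresholds (what counts as "close", how small is "too small") are assumptions for illustration — Findry's actual cutoffs aren't documented here:

```python
import math

def welch_t(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Welch's t statistic for two samples with unequal variances."""
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

def classify(effect, band_lo, band_hi, significant, big_enough_sample):
    """Illustrative verdict rules — thresholds here are assumed, not Findry's."""
    if not big_enough_sample:
        return "inconclusive"
    if band_lo <= effect <= band_hi and significant:
        return "hit"
    if effect > 0 and effect >= 0.8 * band_lo:  # "close" — assumed margin
        return "near_hit"
    return "miss"

# Post-ship mean 0.34 vs baseline 0.30, variance 0.02, n=500 each side
t = welch_t(0.34, 0.02, 500, 0.30, 0.02, 500)
print(round(t, 2))  # 4.47
```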

Rename + archive semantics

Renames flow through automatically — bets reference the metric by ID, so the new name appears everywhere the metric is used. Archive is more nuanced:

  • Archiving a metric soft-deletes it (deleted_at set) and clears the FK column on every bet that referenced it.
  • The bet's predicted_impact.metric_name snapshot stays — old bets still display the original name. They just can't be re-measured against the archived metric.
  • Outcome-sweep skips bets whose metric was archived; their verdict stays at whatever was last computed.
  • Archived metrics disappear from the picker in the bet promotion form. Re-creating a metric with the same name produces a new row (new ID); old bets don't auto-relink.
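The archive rules above reduce to a small amount of bookkeeping — sketched here with plain dicts standing in for database rows (not Findry's actual code):

```python
from datetime import datetime, timezone

def archive_metric(metric: dict, bets: list) -> None:
    """Soft-delete the metric and clear the FK on bets that referenced it."""
    metric["deleted_at"] = datetime.now(timezone.utc)
    for bet in bets:
        if bet.get("metric_id") == metric["id"]:
            bet["metric_id"] = None  # the predicted_impact.metric_name snapshot stays

metric = {"id": 7, "deleted_at": None}
bet = {"metric_id": 7, "predicted_impact": {"metric_name": "Activation rate"}}
archive_metric(metric, [bet])
print(bet["metric_id"], bet["predicted_impact"]["metric_name"])  # None Activation rate
```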

Archive is reversible at the data layer (just unset deleted_at) but not from the UI today. If you need to undo an archive, contact paulo@findry.io.

Previous: Connect PostHog · Up next: Surveys + annotations