for product · continuous discovery
Eight discovery interviews a week without the PM running every conversation themselves.
Lacudelph is the interview surface for PMs trying to actually run the continuous discovery their playbook prescribes — including the feature-killing conversations nobody volunteers for. Send the link to a known cohort; the AI conductor handles the Socratic work; you watch the cohort picture form turn by turn, while the conversations are still running.
Why “continuous” discovery isn’t
Every product textbook prescribes 3–5 customer calls a week. Almost no PM actually runs them: five 30-minute Zooms is two and a half hours of the week, plus another two hours on synthesis, and you can’t justify spending half a day on calls when there’s a roadmap to ship. So discovery degrades into “we asked four people in #product-feedback Slack.”
- The bottleneck is the PM’s calendar, not the customer’s willingness — most users will give 12 thoughtful minutes but not 30 scheduled ones.
- Slack-channel feedback selects for the loudest 3% of users; the silent majority — the ones who quietly ignore a feature — never enter the data set.
- Feature-killing is even harder. Proving a feature should be cut requires interviewing people who don’t use it; nobody volunteers to argue against features the team built.
- Survey tools (Maze, Hotjar, in-app NPS) capture what users will type into a box — not what they discover when probed Socratically across two or three follow-up turns.
How Lacudelph changes it
1
PM writes one brief per discovery cycle
Five fields: what decision you’re trying to make, who you want to learn from this week, what hypothesis you’re testing, what counts as a useful answer, and what users get back. Optionally refine it in conversation: the platform interviews you about your own brief and surfaces the assumptions you didn’t name.
2
Drop the link into your existing user comms
Email a known cohort, share in the customer Slack, or route from a feature-flag in-product (a sketch of flag-based routing follows these steps). Each user opens the URL when it suits them; the AI conductor adapts per respondent, drilling into specifics and moving past generic answers.
3
Each user gets a private reflection
At session close they receive their own structured reflection — sections picked from what they actually said: what they named, where their thinking shifted, and one question they didn’t resolve. Turns ‘give us feedback’ from extraction into mutual exchange — response rates go up because the artifact is theirs.
4
Watch the cohort form turn by turn
The cohort aggregate builds while interviews are still running — convergent themes, divergent framings, recurring hedge shapes (‘all eight users said the feature was useful but none of them named a recent specific use’), and routing recommendations for which segment to dig deeper on. Pro tier.
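If you take the feature-flag route from step 2, here is a minimal sketch of flag-gated targeting, assuming a generic flag set and a segment label on the user record. The flag name, segment value, and URL are illustrative placeholders, not Lacudelph’s actual integration:

```ts
// Minimal sketch of flag-gated routing: show the interview link only to the
// cohort the brief targets. The flag name, segment label, and URL are all
// hypothetical placeholders, not real Lacudelph identifiers.
const INTERVIEW_URL = "https://lacudelph.example/i/your-brief-id";

function interviewPrompt(
  user: { id: string; segment: string },
  flags: Set<string>,
): string | null {
  if (!flags.has("discovery-brief-live")) return null;   // brief not running
  if (user.segment !== "silent-non-users") return null;  // wrong cohort
  return `Got 12 minutes? We'd love your take: ${INTERVIEW_URL}`;
}
```

Gating on both the flag and the segment puts the link in front of exactly the cohort the brief names, including the silent users who would never post in #product-feedback.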
What a feature-discovery brief looks like
A worked example for a PM trying to decide whether to invest in or kill a six-month-old feature with low usage. Plug in your own product’s context.
- Goal
- Decide whether the bulk-import feature shipped six months ago is misunderstood, mis-marketed, or genuinely unwanted. Distinguish polite engagement from real conviction across users in our two main segments.
- Audience
- 20 active users — 10 who have used bulk-import at least once, 10 who haven’t opened the feature despite hitting it in the UI. Both segments needed; the silent half is the signal.
- Hypotheses to check
- (a) Non-users have a workflow we didn't model — they batch differently, so 'bulk' isn't the right shape; (b) users who use it once and don't come back hit a specific failure mode at the parsing step; (c) the feature solves a problem we marketed wrong, and the right ICP doesn't know it exists.
- What users get back
- A reflection tailored to their answer — sections picked from what they actually said: what they would actually want this feature to do for them, what they’re currently doing instead, and where their account converged or diverged from the cohort. Useful for them, which is why response rates hold up.
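The same brief, sketched as structured data for concreteness. The shape and field names below are illustrative assumptions rather than Lacudelph’s actual schema; the content is lifted from the worked example above.

```ts
// Illustrative brief shape; field names are assumptions, not the platform's schema.
interface DiscoveryBrief {
  goal: string;          // the decision the cohort should inform
  audience: string;      // who you want to learn from this cycle
  hypotheses: string[];  // what the conductor should probe for
  usefulAnswer: string;  // what counts as signal rather than politeness
  usersGetBack: string;  // the reflection promised to each participant
}

const bulkImportBrief: DiscoveryBrief = {
  goal: "Decide whether bulk-import is misunderstood, mis-marketed, or genuinely unwanted.",
  audience:
    "20 active users: 10 who have used bulk-import at least once, 10 who hit it in the UI but never opened it.",
  hypotheses: [
    "Non-users batch differently, so 'bulk' isn't the right shape.",
    "One-time users hit a specific failure mode at the parsing step.",
    "The feature solves a real problem, but the right ICP doesn't know it exists.",
  ],
  usefulAnswer:
    "Distinguishes polite engagement from real conviction across the two segments.",
  usersGetBack:
    "A reflection from their own answers: what they'd want the feature to do, what they do instead, where they converged or diverged from the cohort.",
};
```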
Common questions
How is this different from Dovetail, Maze, or Hotjar?
Dovetail is a synthesis layer — it organises transcripts and clips you've already collected. Maze and Hotjar collect what users will type into a survey or click-test box. Lacudelph runs the actual conversation: a multi-turn adaptive AI conductor that probes Socratically across N participants in parallel and produces a cohort report (convergent themes, divergent framings, signal-strength bars per objective, routing recommendations) without you running each call.
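For a concrete picture of that artifact, the report’s components might map onto a shape like the one below; an illustrative sketch, not a published schema.

```ts
// Illustrative shape for the cohort report named above; field names are
// assumptions, not Lacudelph's published schema.
interface CohortReport {
  convergentThemes: string[];              // what most participants agreed on
  divergentFramings: string[];             // where segments framed it differently
  signalStrength: Record<string, number>;  // per-objective strength, e.g. 0-1
  routingRecommendations: string[];        // which segment to dig deeper on next
}
```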
Can I run it weekly, or just for big studies?
Designed for weekly cadence. Solo tier ($29/mo) gives you 5 active briefs and 20 sessions — fits one cycle a week. Pro ($99/mo) is unlimited briefs + 40 sessions + cross-cohort aggregation when you want quarterly synthesis on top of the weekly drumbeat.
What tier do I need for cross-cohort aggregation?
Pro tier ($99/mo). Free and Solo tiers can run individual interviews and produce per-user takeaways; Pro is the tier for the cohort report.
How do I kill a feature with this?
Send the brief to two segments — users who actively use the feature and users who don't. The cohort report distinguishes real conviction from polite engagement, surfaces what non-users are doing instead, and flags where the feature solves a problem you marketed wrong vs a problem nobody has. The 'inherited framing' check (is the participant just repeating what they heard?) is the single most useful signal here.
Is participant data sent to Anthropic?
Yes — interview turns are processed through Anthropic's API (US region). Sub-processors are listed in the DPA. Participants see the consent statement before starting and the AI-authorship disclosure on the takeaway.
Make discovery actually weekly
Free tier: 1 brief, 5 sessions/mo. Solo $29/mo: 5 briefs, 20 sessions — fits one cycle a week. Pro $99/mo: unlimited briefs, 40 sessions, plus cross-cohort aggregation for the quarterly review.