We get this question a lot! We work hand-in-hand with obs tools like Langfuse.
Langfuse is great for debugging technical issues on individual traces, like timing conditions that result in failed API calls.
Voker focuses on product, business, and user outcomes - like which intents users bring to your agent that you might not expect. We're built for the whole product team, whereas Langfuse focuses specifically on engineers.
One way to think about it: a PM notices in Voker that a new intent category is coming up frequently and the agent isn't handling it well. The PM can dig into the data with visualizations or our conversation reconstructions. Once they confirm it's a real issue worth addressing, they can link their investigation to the AI engineer - who can use Voker AND Langfuse to debug and implement a fix/improvement.
Do you have experience as PMs? Looking at the website, it looks like you just use LLMs to guess what the categories are? Seems like a trap for garbage in, garbage out. Otherwise you'd need someone technical to figure out how to set up proper KPI monitoring...
We do! We have combined experience as PMs, ML engineers, and data scientists across many verticals. We also have experience helping PMs and AI eng teams build agents across over 100 customers of our first product.
You're totally right - the analytics annotation primitives we detect (intents, corrections, resolutions) are the cornerstone of all the other analysis in our platform. It's critical that we get those right, or all the data and insights in the world are useless.
LLMs are a core part of that detection, but we also do things like hierarchical classification (https://voker.ai/blog/hierarchical-text-classification-with-...), and we'll eventually add other ML methods where applicable. On top of our automated detections, we're building ways for the annotations to improve and adapt to your specific agent product, your data, and your feedback on our annotations.
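To give a rough feel for the hierarchical part (a toy sketch with made-up labels and data, not our production pipeline - the blog post covers the real approach): a coarse classifier routes each message to a branch, then a per-branch classifier picks the fine-grained intent:

    # Toy two-stage hierarchical intent classification. Labels and training
    # data are invented for illustration; not the actual Voker pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # (user message, coarse intent, fine-grained intent)
    examples = [
        ("where is my package", "orders", "orders/track"),
        ("cancel my order", "orders", "orders/cancel"),
        ("this answer is wrong", "feedback", "feedback/correction"),
        ("thanks, that solved it", "feedback", "feedback/resolution"),
    ]
    texts = [t for t, _, _ in examples]

    # Stage 1: route the message to a coarse branch.
    coarse = make_pipeline(TfidfVectorizer(), LogisticRegression())
    coarse.fit(texts, [c for _, c, _ in examples])

    # Stage 2: one classifier per branch, trained only on that branch's
    # examples, so each stage sees a much smaller label space.
    fine = {}
    for branch in {c for _, c, _ in examples}:
        subset = [(t, f) for t, c, f in examples if c == branch]
        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit([t for t, _ in subset], [f for _, f in subset])
        fine[branch] = clf

    def classify(message: str) -> str:
        branch = coarse.predict([message])[0]
        return fine[branch].predict([message])[0]

    print(classify("I want to cancel the thing I bought"))  # e.g. "orders/cancel"

The appeal is that each stage works with a much smaller label space, and branch models can be retrained independently as the taxonomy evolves.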
Our SDK is architected to eventually accept any type of event you want to send as additional information, like add-to-carts or other conversion metrics that are valuable for analyzing agent performance.
You're definitely right that we don't expect a PM to instrument this all themselves - similar to web or product analytics tools, the engineering team instruments and maintains the integration, and then our app makes the insights and data accessible not just to the engineer but to the whole product team.
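For a feel of what that integration might look like, here's a hypothetical sketch - the `voker` module and the `Client`/`log_message`/`track_event` names are illustrative stand-ins made up for this comment, not a published SDK surface:

    # Hypothetical instrumentation sketch; module, client, and method
    # names are illustrative stand-ins, not a real SDK surface.
    import voker

    client = voker.Client(api_key="...")  # the engineering team sets this up once

    # Log each agent<>user turn so intents/corrections/resolutions can be
    # detected from the conversation data.
    client.log_message(
        session_id="sess_123",
        role="user",
        content="Can you find me a cheaper flight?",
    )

    # Attach business events to the same session so agent behavior can be
    # analyzed against outcomes, not just tokens and latency.
    client.track_event(
        session_id="sess_123",
        name="add_to_cart",
        properties={"sku": "FL-482", "value_usd": 219.0},
    )

Once that's wired in, the rest of the team works entirely in the app, the same way PMs use web analytics without touching the tracking code.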
Yeah, the wording on this one is confusing. TLDR: Amplitude is analytics for your web/product data; Voker is analytics for your agent data.
We call Amplitude's feature an "AI Analyst". Essentially Amplitude is layering an LLM copilot on top of their own product - so you don't have to click the buttons or write reports to get insights.
We're an analytics platform built for tracking your agents. Different products, solving different problems.
Not sure if this helps, but essentially Amplitude could use Voker to track how well their AI Analyst agent product is actually working!
Thanks for clarifying - yes, this is a much closer analog to what we're building. That said, we haven't heard from anyone using it or tried it ourselves yet, so we can't speak to a quality comparison.
From what I can tell in this video, it still seems like Amplitude is focusing on the obs trace details (latency, tokens, etc).
They don't seem to go as deep into (or at least don't highlight) the semantic data processing and detection we're doing (intents, corrections, resolutions), or the higher-level classifications and insights built on top of those. We're purpose-built for monitoring agent products, so we're striving to do more than visualizations: we intend to be best-in-category at the actual automated annotation and analysis of agent<>user interaction data.
What's the data model that lets you compare agents that differ a lot in tools/policies? Curious if you normalize on the "what did the user actually accomplish" layer or on raw token/turn metrics, because the two paint completely different pictures of "is this agent working." We struggle with this on the eval side of our own product (email pipeline outcomes, not agents, but same shape).
For "is this agent working," we focus on the user outcome. Raw usage, number of turns, and function calls are useful operationally, but we think of those as observability rather than the core evaluation target. We do show some of these stats in our conversation view, but we don't aggregate them to compare agents. Longer term we'll look to add more of these features so you can compare quality vs. cost, for example.
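To sketch what normalizing on the outcome layer could look like (field names and helpers here are illustrative assumptions, not our actual data model): every session, whatever tools or policies the agent used, reduces to a small outcome record, and comparisons aggregate over that rather than over token/turn counts:

    # Sketch of an outcome-level record that stays comparable across agents
    # with very different tools/policies. Names are illustrative, not an
    # actual schema.
    from dataclasses import dataclass
    from enum import Enum

    class Resolution(Enum):
        RESOLVED = "resolved"    # user accomplished what they came for
        PARTIAL = "partial"      # some progress, goal not met
        ABANDONED = "abandoned"  # user gave up mid-session

    @dataclass
    class SessionOutcome:
        agent_id: str
        intent: str              # detected user intent, e.g. "orders/cancel"
        resolution: Resolution   # the "what did the user accomplish" layer
        corrections: int         # times the user had to correct the agent
        # Raw operational stats ride along for quality-vs-cost analysis,
        # but they aren't the comparison axis themselves:
        turns: int
        tokens: int

    def resolution_rate(sessions: list[SessionOutcome], agent_id: str) -> float:
        mine = [s for s in sessions if s.agent_id == agent_id]
        done = [s for s in mine if s.resolution is Resolution.RESOLVED]
        return len(done) / len(mine) if mine else 0.0

Two agents with wildly different tool use then compare on resolution rate per intent, while turns/tokens stay available for the cost side of the picture.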
> High interaction volume (1k+ chat sessions per month)
I don't mean to be that typical HN commenter but you did lose me a bit there.
I know a lot of people are just getting started with agents but even for a lot of scrappy startups usage is a lot higher than that!
If I may suggest: focus on explaining how you can add value even when usage is super low, all the way to controlling costs when usage gets super high.
I can validate that it is a true problem - one that's solved at large companies but that you have to hand-roll yourself at startups (via Airflow or queues, etc.). But unfortunately it's also one where I'm not sure a lot of stakeholders understand the benefits (yet!). I think the value has to be shown a bit more clearly here, sadly.
Thanks for the honest insight! It's very helpful to hear that you feel the problem is real - that's what's most important to us. We'll keep working on getting the solution and messaging right!
- We say >1K for two reasons:
1. It's still feasible (although tedious) to put the full burden of analyzing agent performance and usage on your engineers equipped with obs tools and logs (not ideal, but it's what most AI teams we see do).
2. You're spot on - it actually surprised us too how few companies (even the really big public ones that promised to build 200+ agents back in 2024) still have barely one or two agents in prod, with only hundreds of convos.
1K convos was our best first guess at the cutoff where digging through traces manually stops being practical and the insights start to be worth a dedicated tool. We're definitely planning to tune that number as time goes on!
- We definitely don't have pricing figured out yet; we plan to keep iterating on event volumes etc. to make sure our product gives clear positive ROI for the teams using it at every level. In general, we look at other analytics products as our early barometer. Yes, the higher your event volume, the higher the cost - but in my experience (back in ecomm using Heap analytics) it was so incredibly worth it to pay more as our business and data volumes scaled, because we ran on that data. I think this is the challenge with all analytics products: they're not useful if you only have 5 site visitors. We see the progression as obs/logs -> evals -> analytics as your usage scales.
Curious to hear: what's the session volume of the agent products you run? Every datapoint helps us tune our tiers, pricing, and most importantly our product!
Guess it's human-sloppy :(
Congrats on the launch!