
AI Isn't Here to Replace Your Dashboard… Yet

Non-deterministic UIs are the future, but they’re not here yet. So until then, we’re stuck with conversational interfaces.

December 2, 2025

Non-deterministic UIs are the future and will replace your dashboards, but they’re not here yet. So until then, we’re stuck with conversational interfaces. In an effort to describe what I think the future of UIs looks like, I wrote about how you (and I) have been designing dashboards wrong. The core insight was that we've been designing for static representations of data that sit on a TV in the office, when the actual use case is someone at a desk using them to debug an issue.

My conclusion was that we need to stop thinking about designing "dashboards" and instead build "launchpads" to accelerate debugging experiences. But working on Honeycomb Canvas has shifted my thinking even further away from dashboards—and launchpads, too. The “fuzzy automation” that AI brings to the table is enabling a use case I didn’t think was possible. This is what I’m calling non-deterministic UIs, which is a nod to the fact that when we interact with an LLM, we need to embrace that the response is going to be non-deterministic, or more accurately, that the outcome is not knowable in advance.

AI-native or ChatGPT-native?

Integrating AI into applications means much more than a textbox in the middle of your screen, never mind clearing the bar for what I would consider “native.” What I’m seeing in the “AI for observability” space right now is not actually the best use of AI. It is, however, the familiar interface: the one that people associate with GenAI tools like ChatGPT or Claude.

ChatGPT has become synonymous with AI integration. But it’s not really ChatGPT: it’s the idea that the interface you use to interact with an AI is human language in text form. This appears to stem from the assumption that because humans speak to each other in language, freeform text must be the “best” way to communicate with an LLM. What we’re seeing, as a result, is the term “AI-native” coming to mean “we have a prompt textbox that you can type into on every screen, and our AI will give you some text back.” At that stage, you might as well be “ChatGPT-native.” Text interaction, to reuse a ubiquitous phrase, is the worst interaction interface, except for all the others.

For an application to be considered “AI-native,” there needs to be much more to the integration. This doesn’t mean more textboxes across the site. It means getting the UI, and the application in general, to perform tasks that can only be achieved with the kind of “fuzzy automation” a GenAI integration provides.

Automatically adaptable dashboards

There’s a scene in a film called Antitrust where the characters walk through a hallway and the pictures on the walls detect their mood and change to reflect it. I want that for dashboards, but driven by the current system state instead of mood. I really wish dashboards could be dynamic and adaptive like that, reflecting the reality of a system that’s in a constant state of change. That’s the sort of invisible AI integration that would lean towards being “AI-native.”

Confession time: I love building UIs for the systems I build, from choosing the visualizations to curating the queries, all with the goal of creating a single window with all the information I’ll ever need. But what I keep finding is that when the proverbial shit hits the fan, they’re not all that useful.

That’s why I want dashboards that are automatically responsive to the needs of the moment, something beyond the dynamic/adaptive paradigms. A dynamic UI is something you can code: parts of the UI change based on the day of the week, like a calendar that shows different designs for weekdays versus weekends. An adaptive UI is one where the layout changes based on how you interact with it. But what if there were a third option, where the UI is dynamic and automatically adapts to the environment around it, drawing on both the data and the user’s interactions?
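
To make that distinction concrete, here’s a minimal sketch in TypeScript. Everything in it is hypothetical and invented for illustration, not any real dashboarding API; the only point is where each layout function gets its inputs from.

```typescript
// Illustrative sketch only: all types and panel lists here are hypothetical.
type Viz = "timeseries" | "heatmap" | "table";
type Panel = { title: string; visualization: Viz };
type SystemSignal = { service: string; anomalous: boolean; suggestedViz: Viz };

const weekdayPanels: Panel[] = [{ title: "Checkout latency", visualization: "heatmap" }];
const weekendPanels: Panel[] = [{ title: "Error rate", visualization: "timeseries" }];
const defaultPanels: Panel[] = weekdayPanels;

// Dynamic: the layout is a pure function of something you can code against,
// like the day of the week.
function dynamicLayout(date: Date): Panel[] {
  const weekend = date.getDay() === 0 || date.getDay() === 6;
  return weekend ? weekendPanels : weekdayPanels;
}

// Adaptive: the layout responds to how the user has interacted with it.
function adaptiveLayout(recentlyViewed: Panel[]): Panel[] {
  return [...recentlyViewed, ...defaultPanels].slice(0, 6);
}

// The third option: the layout is derived from the live state of the system
// and what the user is focused on, with no hand-curated panel list at all.
function autoAdaptiveLayout(signals: SystemSignal[], focus: string): Panel[] {
  return signals
    .filter((s) => s.anomalous && s.service.includes(focus))
    .map((s) => ({ title: `${s.service} (anomalous)`, visualization: s.suggestedViz }));
}
```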

Conversational interfaces aren't the end game

With the rise in conversational interfaces, it feels like we’re getting closer to the goal of fully dynamic dashboards. I want to be honest here: I don't think the current incarnation of conversational interfaces is the final form that will change the world. It is, however, a stepping stone on our way to that utopia.

Why? Because I fundamentally don't believe that typing into a textbox is the best way to interact with a system as an engineer. Asking ChatGPT or Claude what's wrong with a system is long-winded, inefficient, full of false positives, and generally too untrustworthy given how critical our production systems are. That doesn't mean it’s bad, or that we can’t use it as a stop on the journey to something amazing.

I don't think a pure text interface, or even a fully automated interaction, is where we're heading, at least not in the short to medium term. Where we’re heading is for the system to understand the context we’re working in and present us with dashboards, or at least correlated data, in relevant visualizations. We need the system to decide that we should be shown a heatmap of the latencies for checkout alongside the CPU of the pods for checkout, the Redis service, and the queue length in Kafka.
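
As a rough illustration, the output of that decision could be as simple as a machine-generated view specification. The shape below is entirely made up (the field names and pseudo-queries are not a real Honeycomb API); it’s only meant to show how little the human would need to specify up front.

```typescript
// Hypothetical output only: invented field names and pseudo-query strings.
const suggestedView = {
  trigger: "elevated latency on /checkout",
  panels: [
    { visualization: "heatmap", query: "HEATMAP(duration_ms) WHERE endpoint = '/checkout'" },
    { visualization: "timeseries", query: "AVG(cpu_utilization) WHERE service IN ('checkout', 'redis')" },
    { visualization: "timeseries", query: "MAX(kafka_consumer_lag) WHERE topic = 'orders'" },
  ],
};
```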

This is where Canvas changed my mind. Instead of just providing a question-and-answer-style interface, it’s able to string together multiple queries. This ability for the AI to walk through your system, use the understanding gained at each stage, and show you how it got there by showing the results of each query is unique. That makes it more of a replacement for the debugging flows I describe in my Launchpads post than for the traditional dashboarding flow, and much more than a pure text conversation. Debugging requires evidence in order to build confidence in the outcomes, and this is where I think Canvas shines beyond a pure “conversation”-style interface.
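
I don’t have visibility into how Canvas is actually implemented, but the shape of that flow, as a very rough sketch, could be a loop where each query’s result becomes evidence for the next step. The askModel and runQuery functions below are placeholders, not real APIs.

```typescript
// A hypothetical sketch of the flow, not Honeycomb's actual implementation.
// askModel and runQuery are placeholder signatures for an LLM call and a query engine.
type Evidence = { query: string; result: unknown };

declare function askModel(input: {
  question: string;
  evidence: Evidence[];
}): Promise<{ nextQuery?: string; done: boolean }>;
declare function runQuery(query: string): Promise<unknown>;

async function investigate(question: string, maxSteps = 5): Promise<Evidence[]> {
  const evidence: Evidence[] = [];
  for (let step = 0; step < maxSteps; step++) {
    // Ask the model what to look at next, given everything gathered so far.
    const next = await askModel({ question, evidence });
    if (next.done || !next.nextQuery) break;

    // Run the query and keep the result, so the user can see how we got here.
    const result = await runQuery(next.nextQuery);
    evidence.push({ query: next.nextQuery, result });
  }
  return evidence; // every step is shown to the user, not just a final answer
}
```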

What I really want is J.A.R.V.I.S.

Where are my AR glasses with a heads-up display that reacts to me saying, "Show me what's going on with checkout" by providing dynamically-created interfaces with relevant visualizations? And where’s the voice that responds, “I think Redis is having an issue. Look at the graph on your visor”? This is the future I’m reaching for, and I honestly don’t think it’s that far off.

Right now, the “agents” in this scenario are mostly running queries, looking at the results, and providing narratives in text. We then ask more questions and try to articulate, in natural language, the queries we might have built ourselves.

This still requires you, the human, to give it the context of what you're looking for. But in the future, imagine that your UI is more proactive.

Imagine this… You log into your observability platform and the UI speaks to you: "Hi Martin, customers are reporting issues with checkout. Here are some relevant visualizations of response times, grouped by the discount code used, and the CPU graphs for all the relevant pods for the dependent services."

Essentially, what I really want is J.A.R.V.I.S. from Iron Man. I want it to give me a heads-up display that adapts to the current situation and gives me everything I need on a four-inch display right in front of my face.

Is that where we're headed? Maybe not tomorrow. But it's where I think we should go, not to replace engineers, but to make them more effective at what they already do best: understanding complex systems and making them better. If we accept that debugging issues in production isn’t what engineers want to do, we also accept that building dashboards isn’t the thing that engineers get excited about.

Conclusion

AI integrations within products should be felt but not seen. As soon as we have to make the AI integration obvious, we’re selling AI and not how the functionality it enables will make your life better. Use AI to augment existing journeys, to make them better, to remove the minutiae of the day-to-day interactions people have with systems.

Tools like Canvas can’t replace a static dashboard that’s used as a wallboard or as a report. They can, however, help in a debugging workflow by gathering evidence and presenting that context to you, the human, so you can make decisions.

Don’t look for the AI in the product; look for how the product’s features help you get to resolutions more quickly and more confidently.
