Will Smidlein's Blog

Where The Linear Chat Paradigm Starts To Fall Apart

Expanding on this comment from my last post:

I have yet to find a UI that lets me tag a chunk of a response (e.g. a specific bullet point) to come back to, or what I really want, which is a waterfall of the different conversations that split out of a long response. Typically I want to respond and ask more (or provide more detail) about 2-5 bullet points, but in a purely linear structure I’m constantly scrolling back up and trying to remember the things I want to loop back and ask about.

I put together a (totally fake) example diagram to illustrate what I mean:

Human: How can I improve this code?

AI:
├─ Consistent Error Handling and Wrapping
│    Human: Duplicate; ignore
│
├─ Context-Aware Concurrency
│    Human: Why go beyond Go's built-in primitives?
│    AI: Explanation
│    Human: Hm, makes sense—apply
│    AI: Client in client.go (code)
│    Human: Try pseudocode?
│    AI: {code}
│    Human: Great, apply it
│
├─ More Idiomatic Data Structures
│    Human: Which data structures?
│    AI: bindingManager in internal/client/binding.go
│    Human: Great, apply it. Any others?
│    AI: TransactionMap in internal/client/transaction_manager.go
│    Human: Great, apply it
│
├─ Reduce Nesting & Improve Legibility
│    Human: Intentional; idiomatic Go
│    (No changes needed)
│
├─ Centralized Logging
│    Human: How do you suggest we solve?
│    AI: Creates its own logger
│    Human: Can you just use slog?
│    AI: Uses slog
│    Human: Okay, apply & can you check the whole codebase?
│    AI: Finds ~30%
│    Human: Okay fix; any more?
│    AI: Finds another ~20%
│    Human: Okay fix, I'll do the rest with find-and-replace
│
├─ Test Coverage & Benchmarks
│    Human: Expand on the benchmarks
│    AI: Big verbose response
│    Human: It's fine, we use an external test suite
│
│    Human: Expand on test coverage
│    AI: Transaction Testing in internal/client/transaction.go
│    Human: Is that the right place for the test?
│    AI: No, (fix)
│    Human: Great, apply it
│
│    AI: Binding Manager Testing in internal/client/binding_test.go
│    Human: Can we collocate these?
│    AI: Yes, {code}
│    Human: Great, apply it
│
└─ Optional: Configurable Retries / Intervals
     Human: Are these retries/intervals RFC-defined or other standard?
     AI: RFC 5766, 8656, 6062, 8489
     Human: Okay, they're standardized, no need to make them configurable

As we get further into the “reasoning era” I think this problem will only become more pronounced. It’s surprising to me that none of the major LLM providers have explored branching conversation UIs (at least to my knowledge).

The current solution of “scroll back up and reference earlier in the conversation” falls apart as soon as you get past a few messages. You almost need a mechanism that says “pick back up with my state from here”.

More to come…