Hi, I'm Dean
I'm a cofounder at a stealth co. building agentic AI infra.
Our interface has processed 25B+ tokens since launch in July 2025, and we collaborate with university labs.
Outside AI, I run a crypto prop trading desk (since 2022) focused on arbitrage and news.
We deploy $15M+ in firm capital and are among the fastest on token listing alerts.
Previously: UIUC CS. NCSA research assistant in distributed systems. Helped maintain Blue Waters.
Writing
I did not set out to write about infrastructure. I kept seeing the same failure pattern in different places, so I decided to trace it all the way down. It also shares what my friends and I have been building over the past year at our new startup.
In news trading you learn fast that intelligence is not enough. A call can be right and the trade still goes wrong because the system around the call is fragile. A single retry does not fire, a parser chokes once in a hundred, a fallback eats an error, a feed stalls for a moment. You do not lose because you did not know. You lose because the sequence that turns knowing into doing falls apart.
When I started building with large models I saw the same shape. Most real applications were a clever prompt, a model call, a few tool calls, some glue code, and hope. It worked until it didn't. When it broke there was no steady way to replay a run, compare two outcomes, reason about state across steps, or contain side effects. It felt like trading again, except the uncertain component sits at the center by design.
That is the personal context. Now the part that convinced me this is not just my taste in systems but a real gap in the stack.
I still remember the years before Kubernetes took off. Containers were already useful. The problem was not the container itself. The problem was everything around it. People were writing ad hoc scripts to start and stop processes, wire networks, mount storage, restart after failure, and keep desired and actual state in line. Every team solved it a little differently, which meant nobody really solved it. The system had no shared model of the world.
Kubernetes won because it split the problem into two clean layers and let those layers meet through a simple contract.
The abstraction layer gave everyone the same nouns. Pod, Service, Namespace, Config, Secret, Job. You could point at the system and say what exists and how it ought to look. It was not a bag of flags. It was a small vocabulary that captured intent.
The orchestration layer did the hard work of making reality drift toward that intent. The control loop read your desired state, watched actual state in the cluster, and kept reconciling the two. Controllers handled rollouts, restarts, rescheduling, resource pressure, and failure. You did not tell it how to fix things step by step. You told it what you wanted the world to look like, and it kept working until the world matched.
Looking back, what Kubernetes really solved was not container scheduling. It solved something deeper: coordination. It gave distributed systems a shared model of the world and a mechanism that continuously reconciled local actions with global intent. Once coordination became explicit, everything else (deployment, scaling, failover) became a solvable engineering problem instead of a pile of scripts. Execution improved because coordination became a first class concept.
I think AI is now at the same kind of inflection, only the moving parts are different.
At the abstraction layer we do not yet share the nouns. We still talk about prompts and chains and agents as if that were a model of work. The things we actually need to name are closer to Task, Tool, Input, Output, Memory, State, Policy, Budget, Transcript. A task is a goal with constraints. A tool is a capability with side effects and a permission scope. State is the trace of what happened and the current plan. Policy is how to trade off speed, cost, and certainty. Budget is how far we allow the system to go. Transcript is the evidence to replay and compare.
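To make the vocabulary concrete, here is a minimal sketch of those nouns as plain data types. The names come straight from the paragraph above; everything else (field names, types) is my own illustrative assumption, not an existing framework's API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the shared nouns: Task, Tool, Policy, Budget,
# Transcript. Field names are illustrative assumptions, not a real API.

@dataclass
class Tool:
    name: str
    scopes: list[str]            # permission scope: which side effects it may cause
    call: Callable[..., object]  # the capability itself

@dataclass
class Policy:
    max_model_calls: int         # how to trade off speed, cost, and certainty
    prefer_cheap_models: bool = True

@dataclass
class Budget:
    max_cost_usd: float          # how far we allow the system to go
    max_seconds: float

@dataclass
class Task:
    goal: str                    # a goal with constraints
    inputs: dict
    tools: list[Tool]
    policy: Policy
    budget: Budget
    transcript: list[dict] = field(default_factory=list)  # evidence to replay
```

The point is not these exact fields; it is that once the nouns exist as values, a run can be stored, replayed, and compared, which a bag of prompt strings cannot be.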
At the orchestration layer we do not yet have a control loop. We have code that issues steps and hopes for the best. What would a control loop look like here? It would observe actual behavior, check it against an intended plan, and keep adjusting until the plan is satisfied or a clear boundary is reached. It would keep a durable log so that any run can be replayed line by line. It would retry with backoff, fall back to alternate tools or models, and degrade gracefully when policies say so. It would isolate side effects so a bad call cannot email the world or drop a table. It would treat multi step work as a first class thing, not a pile of ad hoc if statements.
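A minimal sketch of that loop, under stated assumptions: `execute` and `observe` are hypothetical callables standing in for tool calls and state checks, and the "durable log" is just an in-memory list here.

```python
import time

def reconcile(desired_steps, execute, observe, max_retries=3):
    """Minimal control-loop sketch (hypothetical API): for each intended
    step, act, observe actual behavior, and retry with backoff until the
    observation matches intent or a clear boundary is reached."""
    transcript = []  # a real system would persist this for replay
    for step in desired_steps:
        for attempt in range(max_retries):
            result = execute(step)
            transcript.append({"step": step, "attempt": attempt, "result": result})
            if observe(step, result):          # does actual match intended?
                break
            time.sleep(2 ** attempt * 0.01)    # backoff before reconciling again
        else:
            transcript.append({"step": step, "status": "boundary_reached"})
            break  # degrade gracefully instead of pressing on blindly
    return transcript
```

The shape is the same as a Kubernetes controller: desired state in, observed state compared, drift corrected, every attempt logged.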
When I look at AI systems today, the failures look superficially like execution bugs, but the real failure mode is almost always coordination. The model thinks one thing, a tool call assumes another, a policy expects something else, and the system has no mechanism to align them. Without a coordinator that tracks intent, observes behavior, and reconciles the drift between them, the whole workflow becomes a probabilistic graph instead of a system.
The hard part is not coordinating tools with deterministic side effects. The hard part is coordinating multiple reasoning processes where agents make decisions, models generate plans, and policies set boundaries, where each component carries uncertainty and its own view of state. This is a systems level multi agent control problem, not just an orchestration problem.
OpenRouter is one of the clearest signs that fragmentation is becoming a real tax on developers. They unified model access not because inference was the hard part, but because coordination across many model endpoints became too inconsistent. Their success suggests something deeper: once enough components exist, developers gravitate toward shared surfaces that reduce coordination overhead.
But inference unification is only the beginning. The harder problem, and the one I am personally drawn to, is how multiple reasoning processes coordinate over time to achieve a goal. Not just routing to the right model, but reconciling the behavior of multiple agents, tools, and decision points that drift apart as they execute. This is where execution breaks down today not because models are weak, but because the system around them cannot coordinate their behavior.
Here is a concrete picture that helps me test whether I am hand waving or not. Imagine a very standard support workflow. A customer writes in about a wrong charge. The system should read the ticket, pull the order, apply policy, decide the refund, write a reply in the right tone, call the payment API, and record the action.
The abstraction layer would express this as one task with a goal and constraints. The goal is resolve a charge dispute. Inputs are the ticket and user context. Tools include the order database, the refund API, and the email system. Policy states who qualifies for what. Budget limits model calls and time. The expected outputs are a refund transaction and a sent email. The transcript is the log that ties the steps together.
The orchestration layer would run this plan. It would choose a model according to policy, call tools with least privilege, and retry or switch tools when a call fails. It would write every step to a durable trace, including inputs and outputs of each call. If a human steps in it would fold that decision back into the plan. If something goes wrong it would be possible to replay the entire run deterministically to see where behavior diverged. If a new model becomes cheaper tomorrow the plan can be rerun with an alternate choice and the two runs compared. That is not a dream. It is the same reconciliation mindset that made container orchestration boring in the best way.
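One way to picture the abstraction-layer half of that refund workflow is as a single declarative spec. Every field name and value below is made up for illustration; the point is only that the whole plan fits in one small, inspectable object.

```python
# Hypothetical declarative spec for the refund workflow described above.
# All names and IDs are illustrative, not from any real framework.
refund_task = {
    "goal": "resolve a charge dispute",
    "inputs": {"ticket_id": "T-1042", "user_id": "u-77"},
    "tools": ["order_db.read", "refund_api.execute", "email.send"],
    "policy": {"model": "cheapest_that_passes_eval", "refund_limit_usd": 200},
    "budget": {"max_model_calls": 10, "max_seconds": 120},
    "expected_outputs": ["refund_transaction", "sent_email"],
}
```

Swapping the model tomorrow means changing one policy field and rerunning the same spec, then diffing the two transcripts.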
I do not think this layer emerges because someone says the word runtime. It emerges when we agree on a small set of nouns and accept that a control loop, not a long set of scripts, should own the drift between intent and reality. In other words, the same two moves that made Kubernetes useful can be copied here. Keep the abstraction small and the contract clear. Put the hard work into reconciliation, not templates.
There is a second reason I believe the timing is right. We already see islands of unification where pressure is highest. Unified model access, shared function calling shapes, shared evaluation harnesses. These islands do not solve coordination, but they reduce variance at the edges, which makes the next layer possible. If you can swap models without rewriting code, a coordination system can choose between them as conditions change. If tools expose consistent contracts, the system can reason about their interactions instead of treating every call as a special case.
But the real frontier is coordinating the reasoning processes themselves. When multiple agents or models are working toward a shared goal, how do you keep their plans aligned? How do you reconcile diverging beliefs about state? How do you handle drift between what one component assumes and what another component does? These are the questions that will determine whether AI systems can execute reliably at scale.
None of this makes the work easy. It is quiet work. Most days are spent on the parts no one sees. It is also work with a clear falsification path, which I like. If the nouns never settle, the abstraction layer will not hold. If the control loop cannot produce trustworthy replay and steady success rates on long tasks, the orchestration layer will not hold. If the system cannot move a real workflow between two model stacks in a day while holding quality steady, then the promise of flexibility is not real. These are measurable outcomes, not vibes. They are the kind of things I used to measure in trading and the kind of things I want to measure here.
I care about this not because it is glamorous. I care because the ceiling for AI is set by the structure that turns thinking into doing. Intelligence is here in some form. What we do not yet have is a clean way to carry it through a sequence of actions with the steadiness real work demands. The history of computing shows that a small shared abstraction and a patient control loop can turn chaos into a platform. I think that move can repeat. I also know it only works if we keep it small, make it testable, and let reality push back on the design every day.
That is the part of the system I enjoy working on. It is the same instinct that made me build guardrails in trading instead of new signals. It is the same instinct that made me like distributed systems more than demos. It is a taste for the part that holds when everything shakes. If AI is going to take the step from answers to actions, that part needs care. I want to put in the hours there.
Was just reviewing the market today, and it's wild how completely the crypto game has changed in just a few years. The meta shifts so incredibly fast.
Think back to 2021 and before. It was easy mode. The market was flooded with retail longs, which meant you could just sit back and farm those juicy funding fees. Opening a position was a non-issue; I was comfortably running 5x leverage on alts as a default play.
Then came 2022. Retail got absolutely farmed and finally learned how to short. So, the smart money adapted. They started pumping obscure, low-cap coins to bait all the new shorts in. Once the trap was set, they'd just keep buying up spot. The perps would trade at a huge discount to spot, and the negative funding would bleed you dry. You simply couldn't hold on.
The 2025 Meta: Hunting Arbitrageurs
Fast forward to now, let's call it '25. Retail is no longer falling for that trick; they know shorting illiquid alts is a death sentence. This created a new problem for the market makers: they lost their counterparty. They had no one left to trade against. So, the game evolved again. The new meta is hunting the arbitrageurs.
We've seen it play out. That $PUMPBTC incident was almost certainly a big fund getting blown up for tens of millions on a basis trade. The price doubled in three minutes, and in the next minute, their position was down $40 million. A textbook case of an arb trade getting liquidated, triggering an ADL cascade that lets the house clean up. The moves on $HYPE and $XPL were the same playbook, pure manipulation to blow up arbs. We saw it again last night with $SERAPH on Bybit.
I run these small-cap arb trades too, but my risk is managed on a different level. My liquidation price is usually set for a 100x move. It means even if a coin pulls a 10x in a single day, I have more than enough room to breathe.
The Whale Trap
For small capital, it's manageable. If a trade starts to look bad, you just cut it. Simple. But for big money, it's a trap. You can't get out. They are the ones hunting you.
Imagine this: you have a $20 million margin account and you open a $1 million arb trade on a small-cap. Your liquidation price seems miles away, maybe at a 20x move. But then they orchestrate one of those violent, 10x pumps in a single day. All of a sudden, the notional value of your position has ballooned to $10 million. In a market this thin, you can't exit a position that size. There's no liquidity to absorb it. The moment you try to de-risk, you'll cause massive slippage against yourself, which only accelerates your own liquidation. You're trapped. And the market makers know it. They are pushing the price with the sole intention of blowing you up.
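The arithmetic above is worth writing out. A simplified sketch of the numbers (ignoring funding, fees, and the exact margin formula, which varies by venue):

```python
# Worked version of the scenario above: $20M margin account, $1M short leg
# of an arb position on a small-cap that pumps 10x. Simplified: ignores
# funding, fees, and venue-specific margin math.
margin = 20_000_000
entry_notional = 1_000_000

pump = 10                                      # 10x move against the short
notional_now = entry_notional * pump           # position balloons to $10M
unrealized_loss = entry_notional * (pump - 1)  # $9M mark-to-market loss
equity_left = margin - unrealized_loss         # $11M of breathing room...

# ...but the position is now 10x its original size, so each further 1x of
# the move burns another $1M, and exiting $10M of notional in a thin book
# causes slippage that accelerates the squeeze.
```

The liquidation price still looks far away; what changed is the size you have to exit through a book with no liquidity.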
The Basis Trade is Dead: Welcome to the ADL Meta.
Amidst the market's recent turbulence, many have asked for my current outlook. My core thesis hasn't fundamentally shifted, but I believe this is an opportune moment to articulate my framework for why the vast majority of crypto assets are valued at levels fundamentally disconnected from reality.
The principle is remarkably straightforward, yet widely ignored.
The Monetary Premium Reality
Crypto's primary, and arguably its only, widespread product market fit by dollar value has been as an alternative, speculative store of value (SoV). This is the foundational reason why Bitcoin, even after 15 years, maintains its hegemony. Ethereum, since circa 2020, has also cultivated its own potent monetary premium. Combined, these two assets command the lion's share of the market (over 75% excluding stablecoins) and an even greater portion of its liquidity. Legacy tokens like XRP and ADA also persist due to a similar, albeit smaller, demand profile.
Over the years, we have been inundated with grand theses for crypto, with doomer theories of imminent global economic collapse being a perennial favorite. Ironically, the global economy has proven exceptionally resilient, continuing its march of growth and productivity. This prosperity, in turn, has fueled greater demand for nonsovereign, alternative stores of value like BTC and ETH. Regardless of the narrative du jour, crypto has consistently ridden the powerful wave of monetary demand.
An Economy Without Productivity
This has birthed a new digital economy, anchored by BTC and ETH. The glaring problem, however, is that this economy is largely devoid of genuine productivity. This is the natural consequence when the overwhelming majority of an asset's value is derived from passive holding and speculation, rather than from generating economic output.
And here, speculation becomes the engine of the entire system.
You will find a staggering number of tokens (over 70 at last count) boasting valuations north of $1 billion. Many of these are zombie projects that have been around for years, pivoting endlessly without ever achieving meaningful product market fit. New tokens, with patently limited potential for any foreseeable use case, are inflated to multibillion dollar valuations overnight. The result is a market where assets that should, by any fundamental metric, be worth a few million at best are priced in the billions on the infinitesimal chance they might one day be worth something. Hundreds of other tokens, which are self evidently worthless, continue to trade for millions. This is all sustained by a massive speculative premium, a derivative of the industry's SoV backbone.
The Outliers
To be clear, I am not a nihilist. Pockets of genuine innovation and productivity do exist, but they are profound outliers. For instance, projects like Pendle and Curve are personal favorites of mine. Why? Because they have sustainable, real yield revenue models and are designed with positive value accrual incentives for their token holders. Hilariously, these productive assets are often undervalued, even when benchmarked against traditional value stocks, which speaks volumes about the market's priorities.
The Inevitable Fade
So, what is the solution? There is no magic bullet. This is the intrinsic nature of the industry. The dominant strategy has been, and continues to be, to gamble on speculative narratives and rotate profits back into an asset you deem to be a reliable store of value.
Of course, every asset has a demand ceiling. We witnessed the end of Bitcoin's exponential growth phase in 2017; its performance since has been modest, barely outpacing the NASDAQ. Diminishing returns are an inevitability. As the market for alternative monetary properties approaches saturation, the momentum will fade. We see this play out in micro cycles constantly. Take the recent narrative around Digital Asset Treasuries (DAT). As the hype cycle fatigues and the story becomes less compelling, you can expect buying pressure to wane significantly. Let's wait and see, but it is a classic pattern. This will eventually trigger a long, slow bleed to near zero for the legion of grotesquely overvalued tokens, a process that will likely take many years of sideways market action to fully play out.
In the here and now, however, the crypto market remains what it is: the most extraordinary and utterly unhinged casino the world has ever witnessed, completely detached from reality. And it will likely persist in this state for far longer than any rational mind might expect. The key is to see the game for what it is and play it accordingly.
In today's DeFi era, perp DEXs have become a prominent narrative. These platforms let users transition away from CEXs, replicating the familiar trading experience while adding some degree of decentralization.
However, most perp DEXs rely on off-chain order books, primarily due to smart contract computational limits. These implementations range from fully centralized matching engines to multiple trusted parties maintaining off-chain books. While this seems like a sensible trade-off (keeping settlement on-chain) it opens up exploitation through MEV.
The Matching Problem
Order matching is fundamental to any exchange. If left in the hands of privileged parties, it's susceptible to exploitation. An off-chain scenario could violate price-time priority, matching an incoming order against a later offer at a worse price. An on-chain order book would inherently match against the best and earliest liquidity.
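To pin down what price-time priority means, here is a toy matcher for a buy order against resting asks. This is an illustration of the rule, not any exchange's actual engine; a privileged off-chain operator could simply skip the sort and fill a colluding maker's worse-priced or later order first.

```python
# Toy price-time priority matcher (illustrative, not a real engine).
# Each resting ask is (price, timestamp, quantity).
def match_buy(asks, qty):
    """Fill an incoming buy against the best-priced, earliest-posted
    liquidity first; returns a list of (price, filled_qty)."""
    fills = []
    # Price-time priority: lowest price first, then earliest timestamp.
    for price, ts, available in sorted(asks, key=lambda a: (a[0], a[1])):
        if qty <= 0:
            break
        take = min(qty, available)
        fills.append((price, take))
        qty -= take
    return fills
```

An on-chain book enforces this ordering by construction; an off-chain one enforces it only as long as the operator chooses to.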
Proposed off-chain solutions often involve social consensus with punishment mechanisms. But these systems are fallible and can be manipulated. The only reliable way to guarantee a fair order book is to maintain its entire state on-chain. This might sound theoretical, but the issue is significant in practice.
MEV Beyond Transaction Ordering
Off-chain order books are vulnerable to MEV at the matching level, not just transaction ordering. This creates an environment where market makers can't quote reasonably, and good liquidity gets monopolized by traders colluding with order book operators. User liquidity suffers.
The alternative is running a centralized matching engine with a reputation for fairness. But this contradicts decentralization. If an exchange is inclined to this method, they may as well operate a CEX, sacrificing transparency for most operations.
The Path Forward
Order books must exist on-chain. This mandates infrastructure innovation, but it's necessary to ensure MEV exists only at transaction ordering, not order matching.
Building a successful exchange is complex in its own right, from user acquisition to liquidity maintenance. Layering decentralization on top makes it unsurprising that no perp DEX has achieved mass adoption yet. But the landscape could shift within a year.
On-chain order books are vital to protect end-users. Until they are practical, the most mature compromise is off-chain matching with on-chain settlement; for perp DEXs, dYdX is a well-established model that balances these trade-offs.
Many people lump STEPN with P2E and a wave of rushed "Move to Earn" clones. That framing misses what makes STEPN distinct.
STEPN vs. GameFi
Is STEPN a game? In Sid Meier's words, "a game is a series of interesting choices." Allocating scarce GST between repairs and upgrades, deciding whether to fuse gems, opening or selling mystery boxes (these are all choices).
But STEPN does not need to be a game in the traditional sense. It is strategic without immersion, cut-scenes, or high-fidelity art. That is a feature, not a bug. Games compete on intrinsic fun. Fitness products sell external incentives and accountability.
In P2E, players often show up to make money, not for joy. When novelty fades, intrinsic motivation collapses and churn spikes. STEPN inverts this: external incentives help users build a durable exercise habit. Daily output caps and a long arc of goals nudge consistent behavior rather than bingeing.
I love Ring Fit Adventure and Just Dance, but once "finished," my motivation dropped. With STEPN I have kept going for three months and counting, because the system layers short, mid, and long-term objectives. As Sid Meier's Civilization design philosophy puts it, the best systems always leave you with something to anticipate.
If you still compare STEPN to Axie Infinity, look at the user structure. There are no guilds, no live rental market; NFT holders are the actual users. The team even reintroduced invite codes to throttle growth and protect the economy. In many P2E ecosystems, asset owners and players are split: surveys have shown low overlap between Axie players and AXS holders, and titles like Pegaxy concentrated ownership in guilds. Not everyone likes to play P2E. Nearly everyone can go for a walk.
STEPN vs. SocialFi
SocialFi has struggled: porting social graphs is hard and token incentives are often misaligned or unnecessary. STEPN sidesteps this. The app plays like a single-player loop, but strategy talk, randomness (opening boxes), and meta discussions create organic social energy outside the client. Think of esports metas or the Elden Ring discourse (community forms around systems, not feed mechanics).
It is also mainstream-friendly. STEPN ties to real-world relationships and health, so clusters form naturally among friends and family. Even in the worst case, it is easier to recommend a green fitness app than a collectible monster game.
STEPN vs. Move to Earn
Comparable fitness apps split into two groups:
- Apps that require external hardware
- Apps that do not
Hardware limits adoption. Ring Fit needs both a Switch and a Ring-Con; Just Dance leverages a phone gyroscope and goes cross-platform, which helps explain the sales gap.
Among hardware-free apps, STEPN's edge is verification. GPS and motion signals let the system cross-check movement, user state, and outcomes. Without credible proof of movement, fair "earn" is impossible.
Anti-botting is a moving target. The durable moat is not any single rule, but a fast, decisive team that treats integrity as a product surface.
STEPN is less "GameFi with steps" and more a habit engine with incentives and proof of movement. That positioning creates a different user structure, a healthier economy, and a clearer path to mainstream retention.
TL;DR. The public Desmos graph many people use for Curve leaves out an \(N^N\) factor in the constant compared to the whitepaper. If you read exchange rates from that graph, you will underestimate curvature and pick an A that is too small. On-chain, get_y is solving the invariant (Newton iteration); it is not redefining the constant.
Motivation (why it matters)
- Wrong A leads to wrong slippage expectations under imbalance, misplaced alerts/hedges, and bad liquidity incentives.
- When \(N>2\), the gap grows quickly because the missing factor is \(N^N\).
What the contract actually solves
Let \(N\) be the number of coins and \(x_1,\dots,x_N\) their balances. The invariant \(D\) is computed via get_D(x, A). When swapping into coin \(j\), the contract solves for the new balance \(y\) of coin \(j\) so that the invariant holds for \((x_1,\dots,x_{j-1},\,y,\,x_{j+1},\dots,x_N)\).
Define
$$ S' = \sum_{k\ne j} x_k,\qquad P' = \prod_{k\ne j} x_k. $$
Keeping D fixed, the update in the unknown y reduces at each Newton step to a quadratic
$$ y^2 + b\,y = c, $$
where \(b\) and \(c\) are deterministic functions of \(D, A, N, S', P'\) (as in get_y). The positive root gives the next iterate for \(y\). This is why the implementation line you saw comes from a solver, not a symbolic invariant.
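A float sketch of those two Newton iterations makes the structure visible. Assumptions: `ann` stands for \(A \cdot N^N\) (Curve stores A premultiplied in newer pools), and the real contracts use integer math with fixed iteration counts; this mirrors the logic in spirit, not line for line.

```python
# Float sketch of the get_D / get_y Newton iterations. `ann` = A * N^N.
# Real contracts use integer math and fixed iteration counts.

def get_D(xp, ann):
    """Solve the StableSwap invariant for D given balances xp."""
    n = len(xp)
    s = sum(xp)
    d = s
    for _ in range(255):
        d_p = d
        for x in xp:
            d_p = d_p * d / (n * x)     # builds D^(N+1) / (N^N * prod(x))
        d_prev = d
        d = (ann * s + d_p * n) * d / ((ann - 1) * d + (n + 1) * d_p)
        if abs(d - d_prev) < 1e-10:
            break
    return d

def get_y(i, j, x_new, xp, ann):
    """New balance of coin j after coin i moves to x_new, holding D fixed.
    Reduces to y^2 + (b - D) y = c, solved by Newton steps."""
    n = len(xp)
    d = get_D(xp, ann)
    c = d
    s_ = 0.0
    for k in range(n):
        if k == j:
            continue
        xk = x_new if k == i else xp[k]
        s_ += xk                        # S' = sum of the other balances
        c = c * d / (xk * n)
    c = c * d / (ann * n)               # c = D^(N+1) / (N^N * P' * ann)
    b = s_ + d / ann                    # the "- D" shows up in the step below
    y = d
    for _ in range(255):
        y_prev = y
        y = (y * y + c) / (2 * y + b - d)   # Newton step for y^2 + (b-D)y - c = 0
        if abs(y - y_prev) < 1e-10:
            break
    return y
```

For a balanced two-coin pool of 1M/1M with A = 100 (so ann = 400), pushing one side to 1.1M returns a \(y\) just above 900,000, i.e. just under 100,000 comes out: sub-0.1% slippage, which is exactly the curvature the flattened Desmos graph understates.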
Why the Desmos curve is off
The Desmos setup drops \(N^N\) in the constant term (effectively a rescaling that flattens the curve). If you try to match on-chain behavior with that graph, you must multiply the A you read off (roughly) by more than N. Example (N=2): with \(K = \tfrac{2Axy}{D^2} = \tfrac{200xy}{D^2}\), the local slope crosses about \(-1.008\) near \(x=0.33D\).
Quick calibration recipe
- Pick the target imbalance point (e.g., \(x=0.33D\)) and your pool size N.
- Compute on-chain (or in your sim) using get_D then get_y to get the slope at that point.
- In Desmos, either multiply the constant by \(N^N\), or (if you can only tweak A) scale A upwards until the slope matches the on-chain slope.