kill the scoreboard

Feb 2026 | 7 min read

Why prediction markets are quietly destroying our ability to imagine a different future.

observers become players

In 2008, IndyMac Bank was functioning well enough. Not thriving, but functioning. Then Senator Chuck Schumer released a letter expressing concern about the bank's solvency. Within eleven days, depositors withdrew $1.3 billion and IndyMac collapsed. The letter didn't reveal hidden insolvency. It created the conditions for insolvency by making the possibility visible and legible. The observation became the outcome. This is the self-fulfilling bank run: a public signal about probability collapses into a guaranteed result because the signal itself changes behavior. The prediction didn't just measure risk. It manufactured it.
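The cascade has a simple structure that a toy threshold model makes visible. In the sketch below, every depositor has a private panic threshold, perceived risk is the public signal plus feedback from visible withdrawals, and the bank fails if withdrawals exceed its liquid reserves. Every number is illustrative; nothing is calibrated to IndyMac.

```python
import random

def bank_run(signal, n=10_000, feedback=0.8, liquidity=0.40, rounds=20, seed=0):
    """Toy self-fulfilling run: depositors withdraw once perceived risk
    exceeds their private panic threshold. Perceived risk = public signal
    + feedback from withdrawals already visible. All parameters are
    hypothetical, chosen only to show the shape of the dynamic."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]  # private panic points
    withdrawn = 0.0  # fraction of deposits gone so far
    for _ in range(rounds):
        perceived = signal + feedback * withdrawn
        withdrawn = sum(t < perceived for t in thresholds) / n
    return "fails" if withdrawn > liquidity else "survives"

print(bank_run(signal=0.05))  # quiet concern: the run stalls well below reserves
print(bank_run(signal=0.25))  # a public letter: same bank, runaway cascade
```

The bank's fundamentals are identical in both calls. The only difference is how loud the signal is, which is the point: past a threshold, the announcement of risk is the risk.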

Wikipedia has a quieter version of the same problem. The platform's notability standards create a binary: either a topic is "notable enough" to warrant an article or it isn't. The intent is to passively document what matters, but the relationship runs both ways. New Wikipedia articles causally influence the language and framing of subsequently published scientific papers, even though virtually no scientists cite Wikipedia directly. And citogenesis runs the loop the other way: unsourced claims on Wikipedia get picked up by external writers, become "reliable sources," and get cited back into Wikipedia to validate the original entry. The encyclopedia built to reflect the state of knowledge is quietly reshaping it.

These are instances of a general principle: the observation tool reshapes what it observes. And the more visible, legible, and authoritative the tool, the stronger the reshaping effect.

This brings us to markets.

the stock market already plays this game

The Chicago Mercantile Exchange (CME) was created to solve a genuine coordination problem. Before standardized futures contracts, commodity pricing was bilateral, opaque, and inefficient. The CME gave farmers and buyers a shared mechanism for agreeing on fair prices months in advance, along with a way to hedge against a rough year. It rationalized a process that genuinely needed rationalizing.

But the CME didn't just observe commodity markets. It fundamentally restructured them. Farmers began to plant based on futures prices rather than local demand. Storage, logistics, and production timelines reshaped themselves around the exchange's calendar. The tool built to passively reflect a market became an active participant in it.

Stock exchanges followed the same arc at a much larger scale. Equities were designed to price business value, giving investors a way to allocate capital toward productive companies. And they do, mostly. But somewhere along the way, the scoreboard became the game. CEOs optimize quarterly earnings because the market punishes anything else. Politicians reverse policy positions within hours of equity drops. When Trump walked back tariff escalations after market selloffs, he wasn't responding to economic fundamentals, which had never supported tariffs that broad and chaotic in the first place. He was responding to the numbers on the screen.

This is the pattern: markets begin as observers and mature into participants in the systems they measure. The CME restructured commodity production. Equity markets restructured corporate governance and, increasingly, public policy.

It's worth acknowledging that this gray zone, the area where we can't tell whether a market is mostly reflecting reality or producing it, isn't inherently catastrophic. Stock prices do encode real consequences for real people. The performance of the US healthcare system, the quality of public education, employment rates, retirement savings. These are deeply human outcomes that get priced into equities every day. The feedback loops are concerning, but the system is noisy and multidimensional enough that it doesn't reduce any single human question to a single number. A president can look at a market selloff and still exercise judgment about whether it reflects genuine risk or short-term panic. The signal is loud but not simple. The gray zone is dangerous but manageable.

But we should fight like hell to stop expanding it.

prediction markets cross the line

Polymarket and Kalshi represent something categorically different. They don't rationalize resource allocation or business performance. They rationalize decisions and their consequences directly. Will a ceasefire hold? Will a specific country escalate military action? Will enough people die to categorize a situation as a famine?

These questions are not coordination problems. There is no farmer who benefits from a better price signal here. There is no capital allocation that improves when we efficiently price the probability of violence against civilians. What these markets do is take a moral and political question and convert it into a spectator sport with a number attached, a ticker, a volume bar, and a portfolio position.

the real cost is what we stop being able to imagine

Every breakthrough in human history has come from someone who refused to accept the prevailing probability distribution. The Wright brothers didn't run a prediction market on powered flight. The civil rights movement didn't wait for favorable odds. Neither was a rational bet; by any sober assessment at the time, each was improbable. They happened because enough people held unjustified conviction and acted on it anyway.

I don't mean this sentimentally — it's structural. Imagination is upstream of everything. Before any system changes, someone has to be able to imagine it changing. Before any injustice ends, someone has to believe it can end despite the evidence. The capacity to look at the world as it is and envision it as it could be is not a bug in human cognition. It is the mechanism by which humans actually reshape the world.

Psychologists have understood a version of this for decades through the lens of locus of control. On the individual level, when you convince someone that outcomes are determined by external forces rather than their own actions, they stop trying. If you tell a student that their background makes success statistically improbable, they're less likely to put in the work. Not because the statistics are wrong but because internalizing them changes behavior. External locus of control is self-reinforcing. The belief that you can't change things becomes the reason you don't.

Prediction markets do this at the macro level. When millions of people can see a live, market-weighted, dollar-staked probability that a geopolitical atrocity will occur, the cognitive shift is subtle but devastating. The event moves from the category of "something we might prevent" into the category of "something with a number attached." You stop asking "how do I help change this?" and start asking "what are the odds?" You relate to the event as a bettor, not as a citizen.

This is the crystallization effect. Once a probability becomes legible, visible, and staked, it doesn't just reflect the likelihood of an outcome. It hardens it. The bank run dynamic plays out in slow motion: people see the number, the number shapes their expectations, their expectations shape their actions (or more precisely, their inaction), and the inaction makes the outcome more likely. The market that was supposed to merely predict starts to produce.
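That loop can be sketched as a fixed-point iteration, with entirely hypothetical numbers: the displayed probability suppresses mobilization, lower mobilization raises the true probability, and the display updates to track it.

```python
def crystallized_odds(base_risk=0.9, prevention=0.6, steps=100):
    """Toy model of the crystallization loop. Effort against the outcome
    is 1 - displayed (people mobilize against outcomes they see as open);
    true risk = base_risk - prevention * effort; the display then tracks
    the new true risk. All numbers are hypothetical."""
    displayed = base_risk - prevention  # market opens at the fully-mobilized risk
    for _ in range(steps):
        effort = 1 - displayed                       # high odds -> less mobilization
        displayed = base_risk - prevention * effort  # less mobilization -> higher odds
    return displayed

print(round(crystallized_odds(), 2))
```

With these toy parameters, an outcome that full mobilization would hold at 0.30 hardens to a 0.75 fixed point once the display itself feeds back into effort. The underlying situation never changed; only the visibility of the number did.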

And unlike a bank run, which is dramatic and visible, this crystallization is ambient. It's not a single catastrophic feedback loop. It's a slow, pervasive narrowing of what people consider possible. When Polymarket shows an 87% probability on some outcome, that number doesn't just inform. It constrains. It tells millions of people, implicitly, "this is basically settled." And settled things don't get fought over. Settled things don't get reimagined. Settled things get priced in.

The Wikipedia parallel is instructive here. When a topic doesn't have a Wikipedia page, it becomes harder for anyone to take it seriously, study it, build on it. The absence of documentation becomes a self-fulfilling judgment about importance. Prediction markets do something analogous to possibilities themselves. When a possible future doesn't have favorable odds on a prediction market, it becomes harder for anyone to organize around it, fund it, fight for it. The unfavorable odds become a self-fulfilling judgment about feasibility.

We are, in effect, building a machine that systematically kills long shots. And long shots are where all the change comes from.

the rationalization trap

There's a deeper philosophy embedded in the push toward prediction markets, which is the assumption that more information always produces better outcomes. If we can just measure probabilities accurately enough, the thinking goes, decisions will improve across the board. Rationality maximized. Uncertainty minimized. Progress.

But this only holds if the goal is optimization. And for large swaths of human life, optimization is the wrong objective function.

Humans are both rational and irrational, and this isn't a flaw to be corrected. It's a feature to be preserved. The rational side builds infrastructure, allocates capital, coordinates production. The irrational side starts movements, challenges systems, imagines impossible things into existence. Our institutions should reflect that duality. The CME serves the rational side beautifully. Stock markets serve it adequately, with some concerning feedback effects. But we do not need, and should not want, a mechanism that efficiently prices every dimension of human experience.

Some things need to stay illegible. Some probabilities need to stay unpriced — the inefficiency is load-bearing, and it's what preserves the space for people to act against the odds rather than accept them.

The same tension applies to artificial intelligence. The development of AI systems, particularly the push toward superintelligence, is itself a rationalization project: build systems that are maximally rational, better at prediction and optimization than humans could ever be. AI alignment, in this framing, is fundamentally a question about where to draw the line on rationalization. How much of human life do we want subjected to optimization? If we can't even agree that we shouldn't build liquid markets around the probability of children dying in conflicts, how are we going to navigate the far harder questions about what constraints to place on systems orders of magnitude more capable of rationalizing the world around them?

Prediction markets are a test case for the broader question. And right now, we're failing it.

draw the line before it draws itself

I don't have a clean answer for where the line should sit. The distinction isn't as simple as "rationalizing inputs is fine, rationalizing outcomes is not," because stock markets already rationalize outcomes and we've decided, on balance, that the tradeoffs are worth it.

But direction matters more than precision here. The gray zone, the space where observation tools become participation tools, has expanded with every new class of rationalization market. Each expansion has brought benefits, but each has also eroded something: the capacity of people and institutions to act on conviction rather than consensus, to imagine futures that the current data doesn't support, to treat the world as something to be shaped rather than something to be priced.

Prediction markets push that expansion into territory where the erosion accelerates and the benefits vanish. There is no coordination benefit to efficiently pricing human suffering. There is only the slow, ambient narrowing of what we collectively consider possible.

The pattern from IndyMac, from Wikipedia, from the stock market itself, is consistent. Observation tools reshape what they observe. The more legible and authoritative the tool, the stronger the effect. Prediction markets are the most legible, most authoritative version of this tool we've ever built, applied to the domains where the reshaping effect is most dangerous.

Kill the scoreboard. Or at least stop building new ones in places where the score is the last thing we should be keeping.

Jonathan Wen