OP, I built this chart as a way to stress-test AI narratives using a simple structural framework.
What I’m actually showing is the mapping itself: it separates two questions that often get conflated:
(1) how structurally aligned a public entity is with long-horizon AI value creation, and
(2) how much of that story already appears to be priced in.
The x-axis (M.I.N.D.) is a composite structural-alignment score across four dimensions: Material, Intelligence, Network, and Diversification (inspired by the “Last Economy” framing). Scores are synthesized per entity from a skills/assets/capabilities analysis and a review of analyst research, using an LLM as a structured aggregation tool rather than as an oracle.
Roughly speaking: Material captures control over scarce physical inputs, Intelligence reflects leverage over computation and models, Network captures ecosystem and data flywheels, and Diversification reflects exposure across multiple AI value paths.
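To make the scoring step concrete, here’s a minimal sketch of the per-entity schema and composite. The field names and the 0–1 scales are illustrative assumptions on my part; only the multiplicative combination reflects what I describe below.

```python
from dataclasses import dataclass
from math import prod

@dataclass
class MindScores:
    """Per-entity M.I.N.D. scores, each assumed normalized to [0, 1]."""
    material: float         # control over scarce physical inputs
    intelligence: float     # leverage over computation and models
    network: float          # ecosystem and data flywheels
    diversification: float  # exposure across multiple AI value paths

    def composite(self) -> float:
        # Multiplicative combination: any weak leg drags the whole score down.
        return prod((self.material, self.intelligence,
                     self.network, self.diversification))
```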
The y-axis (valuation tension) is a rough proxy for expectation saturation. I’m treating it as a secondary signal; the primary thing I’m testing is whether structural alignment and narrative intensity decouple in interesting ways.
The part I’m least confident about is the M.I.N.D. formulation itself. Multiplying the four dimensions heavily penalizes any missing leg, which may or may not reflect how value actually compounds in AI systems. If that assumption is wrong, the framework will mislead systematically.
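A toy example of what that penalty looks like (all numbers made up):

```python
from math import prod

def multiplicative(dims):
    """Composite as the product of the dimension scores."""
    return prod(dims)

def additive(dims):
    """Alternative aggregation: a simple arithmetic mean of the same scores."""
    return sum(dims) / len(dims)

balanced = [0.6, 0.6, 0.6, 0.6]   # decent on every leg
lopsided = [0.9, 0.9, 0.9, 0.1]   # strong on three legs, nearly missing one

print(multiplicative(balanced), additive(balanced))   # ~0.130 vs 0.600
print(multiplicative(lopsided), additive(lopsided))   # ~0.073 vs 0.700
```

Under multiplication the lopsided entity ranks below the balanced one; under an arithmetic mean the ranking flips. Whether that flip is a feature or a bug is exactly what I’d like pushback on.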
I’m especially interested in:
- whether these four dimensions are the right ones
- whether multiplication is the right way to combine them
- where this framework would clearly fail
Happy to answer questions or clarify assumptions.