<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Adam Murray]]></title><description><![CDATA[Designer and independent researcher studying what happens to human capability when AI does the thinking. Architecture background. Systems thinker. Building frameworks for the stuff nobody's measuring yet.]]></description><link>https://adammurray972420.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!j92H!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501ca6f-4948-4d6b-a195-95afae7c822f_389x389.jpeg</url><title>Adam Murray</title><link>https://adammurray972420.substack.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 17 May 2026 10:07:23 GMT</lastBuildDate><atom:link href="https://adammurray972420.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Adam Murray]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[adammurray972420@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[adammurray972420@substack.com]]></itunes:email><itunes:name><![CDATA[Adam Murray]]></itunes:name></itunes:owner><itunes:author><![CDATA[Adam Murray]]></itunes:author><googleplay:owner><![CDATA[adammurray972420@substack.com]]></googleplay:owner><googleplay:email><![CDATA[adammurray972420@substack.com]]></googleplay:email><googleplay:author><![CDATA[Adam Murray]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Surprise Me]]></title><description><![CDATA[On Remaining Surprising]]></description><link>https://adammurray972420.substack.com/p/surprise-me</link><guid 
isPermaLink="false">https://adammurray972420.substack.com/p/surprise-me</guid><dc:creator><![CDATA[Adam Murray]]></dc:creator><pubDate>Wed, 06 May 2026 19:39:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RH1n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RH1n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RH1n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!RH1n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg" width="1400" height="1867" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1867,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RH1n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RH1n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bdc865-d0fc-4ef7-968b-f871574ef9b4_1400x1867.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://adammurray972420.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Make it feel weird, unexpected, subtle, powerful. Have it be out of nowhere, out of nothing, Boom! A quick feeling, warm, and fun. Laughter that is deep, breathless and loud. The hurt, the pain is harsh, burning, and hard. I didn&#8217;t see it. I didn&#8217;t expect it, I never do.</p><p><strong>Turns out, being surprised, or surprising, is everything.<br></strong></p><div><hr></div><p><br>The other day I was reading a technical document. My reading list lately would probably inspire very few. This was dry stuff. Standards specifications for how AI systems will talk to each other. It was the kind of thing that makes your eyes slide off the page. Then I read a phrase: <em>single source of truth. </em>I stopped.</p><blockquote><p><em>A Universal Domain Graph. A continuously updated model of reality. AI agents everywhere reading from it, writing to it. One shared map of what exists. One language for everything. One world, world&#179;.</em></p></blockquote><p>I had a feeling in my chest, my eyes tightened. It&#8217;s the feeling you get when you realize the conversation you&#8217;re in isn&#8217;t the conversation you thought you were in. It&#8217;s a feeling, to be honest, I sense more and more these days.<br></p><div><hr></div><p><strong><br>Yellowstone. A deer. Born there, learned there, became a deer there.</strong></p><p>Reading predator signals. Finding water in drought. Navigating terrain by starlight. Smelling the air for danger. Real capabilities. Embodied. 
Built through vast animal time frames of uncertainty, of not knowing what comes next, of figuring it out anyway.</p><p>The deer wanders outside the park. A loud, hard interstate. Seventy-five miles per hour. Everything it learned is wrong here. Worse than wrong. Freeze when you sense danger? The truck doesn&#8217;t care, doesn&#8217;t stop. Bolt toward cover? The concrete median isn&#8217;t cover.</p><p>The deer&#8217;s problem isn&#8217;t that it failed to learn. It learned excellently. The interstate is just a different kind of thing entirely, categorically different. A thing deer-cognition can&#8217;t process. Speed beyond its perception. Physics that doesn&#8217;t match its body&#8217;s expectations.</p><p>This is the scary AI story. Superintelligence arrives. Humans become the deer. Our capabilities, vestigial, leftover. Our instincts are precisely wrong. I used to find this story concerning. Now I think it might be optimistic.</p><p><strong>The new fear: what if there&#8217;s no truck?</strong></p><p>The park, Yellowstone. It&#8217;s still there, but a road goes through it now for the rangers to help manage things. The deer adapt. They learn, in a sense, to look both ways. More roads. Connecting facilities. Improving access. The deer adapt again. Some don&#8217;t make it. The population adjusts. Roads get wider. Traffic increases. Spaces between them shrink, the trees disappear.</p><p>Each change is small and beneficial: better management, improved services, more efficient operations. At no point is there a truck. There are no sudden changes, just a world that gradually becomes less wild. Less navigable by deer-cognition. Less demanding of the capabilities deer spent generations developing.</p><p>The old deer remember what it felt like to read weather in the grass. To sense vibration through hooves. To navigate without roads telling you where to go. They can&#8217;t teach this to fawns who&#8217;ve never known anything else. 
The fawns don&#8217;t experience loss. They never had what was lost. Eventually everyone who remembers is gone. The deer still exist. Fed by rangers. Guided by fencing. Protected from dangers they can no longer perceive. But they&#8217;re not really deer anymore. They&#8217;re exhibits.<br></p><div><hr></div><p><strong><br>The train was dirty that day. The smell was only slightly better. Awkwardly placed metal handholds did their job as we banged down the track. Morning rush hour &#8212; the train always going faster than seemed safe. Once the doors closed, your choices were to stand or sit, and hold on. Welcome to the Red Line.</strong></p><p>This was how I got to work. From the North Side of Chicago down to a small office building, a few blocks from the station. The walk felt safe enough. It was an industrial area, some life, a few restaurants. I walked it like I was going somewhere, purposeful and confident. Just in case anyone thought I might be in the wrong place, I never broke character. But how I got there, on that dirty rumbling train, the walk through industrial streets, the small design job &#8212; it seemed random. The bigger question was how I got to Chicago in the first place.</p><h2><strong>There&#8217;s a real difference between making a choice and waking up inside one.</strong></h2><p>I did choose to live in Chicago, but I didn&#8217;t decide to become the person who lived in Chicago, not completely. I rode the Red Line, walked through industrial neighborhoods pretending to belong. I just&#8230; woke up one Tuesday as that person. And then did it again. And again. Until the accumulation of Tuesdays became a life I woke up inside. This was it.</p><p>The question <em>how did this happen?</em> isn&#8217;t regret. It&#8217;s more surprise.</p><p>I surprised myself. 
Not with a single dramatic act, not jumping, not running, but with the slow snowballing of moments that added up to someone I didn&#8217;t exactly plan to be.</p><p></p><div><hr></div><p></p><p>This dry document about Universal Domain Graphs was written by one of the increasingly popular AI developers and theorists&#185;. They&#8217;re building &#8220;ecosystems of intelligence.&#8221; Networks of AI agents. Shared knowledge bases. Global coordination. Their chief scientist&#178; is a neuroscientist who developed the Free Energy Principle, a theory of how biological intelligence actually works. The vision is described as:</p><blockquote><p><em>&#8220;A cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants.&#8221;</em></p></blockquote><p>Integral participants. That sounds collaborative, right? Humans and AI, working together. But look at what they&#8217;re building:</p><blockquote><p><em>The Universal Domain Graph. &#8220;A living, read/write repository that models our world.&#8221; Agents download knowledge, become instant experts. Agents contribute insights, improve the models. &#8220;Compounding intelligence &#8212; a network effect where the entire system learns, adapts, and evolves at a global scale.&#8221;</em></p></blockquote><p>The system learns. Accumulates. Persists. Human learning is individual. Mortal.</p><p>So, in a system with compounding, persistent, globally-shared machine intelligence, what does &#8220;integral participants&#8221; mean over time? At first: humans contribute what the system doesn&#8217;t know. Later: the system models humans well enough to predict what they&#8217;ll contribute, which doesn&#8217;t sound awesome. Eventually: humans are integral the way a DVD player still hooked up to a TV is integral. Still connected. Still technically part of things. No longer necessary. Maybe not yet obsolete. 
Just&#8230; optional.</p><h2><strong>The Free Energy Principle says intelligent agents minimize surprise. They make the world predictable. Reduce uncertainty. Turn </strong><em><strong>Fog</strong></em><strong> into clarity.</strong></h2><p>These AI Labs&#179; are building infrastructure for surprise minimization at global scale. AI agents everywhere, continuously modeling, predicting, optimizing. Not in one domain. Woven into how the world works. In this vision, <em>Fog</em> doesn&#8217;t get navigated. It gets eliminated. Uncertainty doesn&#8217;t get processed by human minds struggling toward understanding. It gets resolved before human minds encounter it.</p><p>The people building this aren&#8217;t villains. They&#8217;re solving real problems. Coordination failures. Inefficient systems. Genuine suffering from preventable confusion. But <em>Fog</em> isn&#8217;t just uncertainty to be minimized. <em>Fog</em> is the medium through which human meaning-making happens&#8308;.</p><p></p><div><hr></div><p></p><p><strong>At this point I was committed. My socks were off and I was barefoot, feeling the wet, gritty rock as I stood waiting. In front and below me was a 45-foot cliff into a cold dark lake. What was I doing? I asked myself.</strong></p><p>But that&#8217;s the wrong question. The question is what was happening <em>in</em> me. Standing there. Heart pounding and legs uncertain. Cold stone under my feet with dark water below. Something was forming. Not information. I already knew the water was cold and deep enough, knew people had jumped before me, knew the physics of falling. The information was settled. What wasn&#8217;t settled was me. My relationship to the drop. My capacity to choose it. The specific shape of fear that would either stop me or not.</p><p>That formation, whatever it is, only happens in the <em>Fog</em>. In the not-knowing-what-I&#8217;ll-do. 
In the moment before the moment.</p><p>If someone had eliminated that uncertainty for me &#8212; <em>you will jump, here is the optimal trajectory, your fear response will peak at 2.3 seconds and then subside</em> &#8212; the information would be better, sure. The experience would be gone. And something in me that needed to form, wouldn&#8217;t.</p><p><strong>I was running and my friend was trying to catch up. Through waist-high weeds and grass, hitting bramble bushes that pulled on our clothes. Ground wet and sloppy. Fishing rods, tackle, ice fishing gear weighing us down. Headlights. Down in the wet grass. Hearts pumping. Slow cold air. Quiet. Scared.</strong></p><p>We knew we were breaking the law fishing in that pond. We knew what getting caught meant. We didn&#8217;t know if those headlights had seen us. Lying there in the weeds, I wasn&#8217;t learning information about trespassing laws or optimal navy-seal-team evasion strategies. I was becoming someone who had done this. Who had felt this. Who knew what it was like to be hunted and hidden and uncertain whether the next moment would be escape or capture. You can&#8217;t download that. You can&#8217;t optimize it. You can&#8217;t receive it from a knowledge graph. You can only be surprised into it.</p><p>The three kinds of surprise, I&#8217;m realizing:</p><blockquote><p><em><strong>The transgression: </strong>doing what you shouldn&#8217;t, the fear-joy of getting away with it. Headlights in the dark.</em></p><p><em><strong>The chosen risk: </strong>the moment of commitment, no algorithm, just you and the drop. Wet rock, cold water.</em></p><p><em><strong>Accumulated Life: </strong>The third kind. The one that scares me most to lose. Waking up mid-fall. The path that wasn&#8217;t chosen but was lived. The collection of Tuesdays that became Chicago. Becoming someone.</em></p></blockquote><p>That last one doesn&#8217;t feel like surprise while it&#8217;s happening. It feels like just another day. 
Another train. Another walk through streets you&#8217;re pretending to know. The surprise comes later. When you look up and realize: <em>this is my life. How did this happen? </em>A profound wonder washes over you.</p><p>An optimized life wouldn&#8217;t have that wonder. Every step would be legible. Every choice would be modeled. You&#8217;d know exactly how you got here because the system would have predicted, guided, nudged you along the optimal path.</p><p>No dirty trains taking you somewhere you didn&#8217;t plan. No Tuesdays accumulating into a life you&#8217;re not entirely sure you chose. No waking up surprised by who you&#8217;ve become. Just: you, arriving efficiently at the destination the model knew you wanted before you knew it yourself. Surprise? No, that&#8217;s been taken care of for you.</p><p>As I&#8217;m writing this, it all sounds a bit much, even overcooked. But the more I read and the more I understand about these systems and their trajectories, the less I&#8217;m able to dismiss or ignore them.</p><p>So what do we do? I don&#8217;t know, not exactly.</p><p>The trajectory might be inevitable. Might be beneficial in ways I can&#8217;t see. Might be wrong entirely. But I keep thinking: we need places where the <em>Fog</em> hasn&#8217;t been cleared. Wild Zones&#8309;. Educational spaces where AI is bounded&#8310;. Creative spaces where optimization isn&#8217;t the goal. Philosophical spaces where the questions don&#8217;t have answers yet. Not as a solution. As preservation. Seed corn.</p><p>These spaces won&#8217;t stop the Universal Domain Graph or something similar from being built. That train has probably left. But they&#8217;ll develop humans who know what <em>Fog</em> feels like. Who can recognize when an environment has been pre-cleared. Who understand, from experience, what&#8217;s lost when friction disappears.</p><p>The deer in the gradually-paving park doesn&#8217;t know it&#8217;s losing anything. Never navigated unpaved wilderness. 
Can&#8217;t recognize what&#8217;s gone. Can&#8217;t choose to resist because it doesn&#8217;t know there&#8217;s anything to resist. But humans who&#8217;ve stood on wet rock above dark water. Who&#8217;ve lain in cold grass with headlights searching. Who&#8217;ve woken up one Tuesday and wondered: <em>how did I become this person</em>?</p><p>Those humans might still choose the frictionless world. There&#8217;s real appeal there. The <em>Fog</em> is genuinely uncomfortable. But at least they&#8217;d be choosing.</p><h2><em><strong>The survival of human cognition may depend on our willingness to remain surprising.</strong></em></h2><p>The AI Labs and developers want to minimize surprise. Predict. Model. Optimize. There might be a safe, human way to do this. But a human who is perfectly modeled is perfectly predicted and perfectly managed. They are not enslaved &#8212; that&#8217;s the wrong frame. They&#8217;re just known. Fully legible. No longer capable of doing anything the system hasn&#8217;t already anticipated.</p><p>To remain surprising is to resist that legibility. Preserve some opacity. Some unpredictability. Some capacity to do things that weren&#8217;t in the model. Not just for the hell of it, but because the gap, the messy middle, between model and reality is where meaning lives. Where creativity happens. Where the specifically human contribution to &#8220;shared intelligence&#8221; might still come from.</p><p>If we become fully modeled, we might still be &#8220;integral participants&#8221;. Integral the way a variable is integral to an equation. Necessary for the math. But not capable of doing anything the equation doesn&#8217;t already describe. I don&#8217;t want to be a variable in someone else&#8217;s equation. Even if the equation is elegant. Even if the system is beneficial. Even if &#8220;someone else&#8221; is a distributed network rather than an evil actor.</p><p></p><div><hr></div><p></p><p><strong>Forty-five feet above cold dark water. Wet rock. 
Heart pounding. I jumped.</strong></p><p>What formed in me, in the falling, in the cold shock, in the surfacing, couldn&#8217;t have been predicted. Couldn&#8217;t have been optimized. Couldn&#8217;t have been given to me by any system, no matter how intelligent. It was mine because I didn&#8217;t know what would happen. I want to keep not knowing. I want to keep waking up mid-fall, surprised by the life I&#8217;m living. I want to remain surprising.</p><p><strong>The deer who never knew wilderness can&#8217;t miss it.<br>I&#8217;d rather be the kind of human who can.</strong></p><p></p><div><hr></div><p><strong>Notes</strong></p><ol><li><p>VERSES AI and the paper &#8220;Designing Ecosystems of Intelligence from First Principles&#8221; (Friston et al., 2024), published in <em>Collective Intelligence</em>. The technical specification is IEEE P2874, the Spatial Web Protocol.</p></li><li><p>Karl Friston, neuroscientist at UCL, originator of the Free Energy Principle, and Chief Scientist at VERSES AI.</p></li><li><p>VERSES AI is not alone in this trajectory. Yann LeCun, Meta&#8217;s chief AI scientist, launched AMI Labs in late 2025 (targeting a &#8364;3B valuation) to build what he calls &#8220;Autonomous Machine Intelligence&#8221; using Joint Embedding Predictive Architecture (JEPA) &#8212; AI trained on over a million hours of video to predict abstract representations of reality rather than generating pixels. LeCun has said explicitly: &#8220;We are going to have AI systems that have humanlike and human-level intelligence, but they&#8217;re not going to be built on LLMs.&#8221; Meanwhile, Fei-Fei Li&#8217;s World Labs ($230M raised) launched Marble in early 2026 &#8212; a Large World Model that generates persistent, editable 3D environments from text or images. 
Li describes current LLMs as &#8220;wordsmiths in the dark.&#8221; Each of these programs diagnoses the same problem (AI needs to model reality, not just language) and converges on the same solution (build the model). The &#8220;single source of truth&#8221; impulse is not one company&#8217;s vision. It is the direction of the field.</p></li><li><p>The phrase &#8220;single source of truth&#8221; appears in database architecture and enterprise software contexts, but takes on new meaning when applied to a global knowledge graph. The Spatial Web Protocol (IEEE P2874) specifies a Hyperspace Modeling Language (HSML) designed to create semantic interoperability across all AI systems &#8212; effectively, a universal ontology for reality itself. A one world, world.</p></li><li><p>For more on tacit knowledge and embodied cognition: Michael Polanyi&#8217;s <em>The Tacit Dimension</em> (1966) and Hubert Dreyfus&#8217;s <em>What Computers Can&#8217;t Do</em> (1972).</p></li><li><p>&#8220;Wild Zones&#8221; adapts wilderness preservation language to AI governance &#8212; spaces where optimization is bounded to preserve conditions for human capability development.</p></li><li><p>FutureObjects paper &#8220;AI Learning Mode Framework: Democratic AI Governance for Capability Development and Cognitive Sovereignty&#8221; (Adam Murray et al., 2026).</p></li></ol><p></p><p><strong>Further Reading</strong></p><ul><li><p>Friston, K. (2010). The free-energy principle: A unified brain theory? <em>Nature Reviews Neuroscience</em>.</p></li><li><p>Scott, J. C. (1998). <em>Seeing Like a State</em>. Yale University Press.</p></li><li><p>Parr, T., Pezzulo, G., &amp; Friston, K. J. (2022). <em>Active Inference</em>. MIT Press.</p></li><li><p>LeCun, Y. (2022). A Path Towards Autonomous Machine Intelligence. Meta AI Research. The foundational paper proposing JEPA as an alternative to autoregressive models.</p></li><li><p>Li, F.-F. (2025). Spatial Intelligence Is AI&#8217;s Next Frontier. TIME. 
Li&#8217;s public articulation of why AI must understand three-dimensional space.</p></li><li><p>World Labs (2026). Marble: Large World Model API. Technical documentation for the system that generates navigable 3D environments from minimal input.</p></li><li><p>Klein, C. R., &amp; Klein, R. (2025). The Extended Hollowed Mind: Why Foundational Knowledge is Indispensable in the Age of AI. Frontiers in Artificial Intelligence. The peer-reviewed case for why AI-mediated competence without internal schemas produces cognitive atrophy &#8212; what they call the &#8220;Hollowed Mind.&#8221;</p></li><li><p>Oliveira, D. G. S., et al. (2025). Feyerabend and the History and Philosophy of Science. Transversal: International Journal for the Historiography of Science. Applies Feyerabend&#8217;s ontological pluralism to AI, arguing that World Models function as engines of &#8220;Algorithmic Flattening.&#8221;</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://adammurray972420.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Equation]]></title><description><![CDATA[AI &#215; capability = enhanced competence. Everyone reads this as a celebration. 
It's a warning.]]></description><link>https://adammurray972420.substack.com/p/the-equation</link><guid isPermaLink="false">https://adammurray972420.substack.com/p/the-equation</guid><dc:creator><![CDATA[Adam Murray]]></dc:creator><pubDate>Thu, 16 Apr 2026 17:24:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u5ik!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u5ik!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u5ik!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 424w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 848w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 1272w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!u5ik!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp" width="850" height="1134" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1134,&quot;width&quot;:850,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:530980,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://adammurray972420.substack.com/i/194327980?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u5ik!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 424w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 848w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 1272w, https://substackcdn.com/image/fetch/$s_!u5ik!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe725f4f6-4cc9-4224-a50a-315c5f0a7c06_850x1134.webp 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>AI &#215; Capability = Enhanced Competence.</strong></h2><p></p><p>Everyone agrees with some version of this equation. It gets nodded at in keynotes and repeated in LinkedIn posts about the future of work. It&#8217;s the polite version of the AI story, the one where humans still matter, where the tool amplifies what you bring, where collaboration wins. And I suppose it&#8217;s true, as far as it goes. But the problem I see is that it doesn&#8217;t go far enough. The equation reads like a celebration: just do this and we win.
I think it&#8217;s actually a diagnosis, and if you look at it carefully, it&#8217;s a warning.</p><p>The AI in this equation is not the research frontier or the alignment debate or the question of whether machines will become conscious. It&#8217;s the tool on your desk right now: Claude, Cursor, Copilot, Midjourney, the OpenClaw agent that drafts your emails and writes your code. It&#8217;s the reason Tuesday&#8217;s deliverable looks like what next Thursday&#8217;s used to. The amplification is real. A skilled professional with these tools produces work that would have required a team. Nobody seriously disputes this. But an amplifier scales whatever it&#8217;s multiplying. A microphone makes your voice louder. It doesn&#8217;t give you something to say. The AI term in this equation is powerful, it gets more powerful every quarter, and it is completely dependent on what sits on the other side of the &#8220;&#215;&#8221;.</p><p>That other side is where the trouble starts.</p><p>When people hear the word capability in this equation, they hear a constant, like a fixed attribute, something you have, like height. You went to school. You practiced for twenty years. You have the capability.
AI multiplies it. Story over. Let&#8217;s go cure cancer. But capability is not a constant. It&#8217;s a variable, and a specific kind of variable: a practiced one. This is the distinction that the equation hides: knowledge versus judgment. Knowledge is durable; it&#8217;s tough. Decades ago in architecture school I learned how load-bearing walls work, and I&#8217;ll know it decades from now. That understanding sits in my head like a fact, because it is one. Judgment is different. Judgment is the fluid, contextual, instinct-level application of knowledge to situations that don&#8217;t come with instructions. It&#8217;s knowing what to do when you&#8217;re looking at a mess, when the building site doesn&#8217;t match the drawings, when the client wants something they can&#8217;t articulate and the budget won&#8217;t cover what they mean. That kind of capability degrades without exercise. It&#8217;s something closer to a jump shot than a phone number. You don&#8217;t store it. You maintain it.</p><p>Years ago, before I was a professional at anything, I was volunteering at a reused-materials warehouse on the South Side of Chicago. It was the kind of place that collected anything unwanted or underused, from hula hoops to doorframes. I was a sculptor back then and I&#8217;d go looking for metal and objects that might spark a new work. One afternoon I met a man named Tyner White. Tyner was homeless. He built things out of whatever he found in the streets and vacant lots around South Chicago: toys, mobile carts, temporary shelters. Every object solved a problem he had actually encountered, in a place he actually lived, for people he actually knew. But they weren&#8217;t just functional. A cart designed to fasten to the back of a bicycle had a large windmill incorporated just for fun. A rain shelter wasn&#8217;t quite an umbrella but got the job done with a touch of personality.
He was making decisions that went beyond solving the problem, decisions about what the solution should feel like. That&#8217;s not resourcefulness. That&#8217;s judgment.</p><p>What struck me was not the objects themselves but what was behind them. Tyner didn&#8217;t have formal training. What he had was a deep, practiced, physical relationship with his materials and his context. He knew what worked because he had built and failed and rebuilt in the same conditions, over and over, with his hands. His knowledge and his judgment were inseparable, fused through practice into something you could see in the finished object but couldn&#8217;t extract from it as a set of principles. You couldn&#8217;t write down what Tyner knew, because what Tyner knew was inseparable from the act of making.</p><p>That is what capability looks like before it gets abstracted into a credential. And it&#8217;s exactly the kind of capability that can go away when the practice stops.</p><p>An architect who stops designing for five years still knows the principles, or most of them. But put them in front of a real project with real ambiguity and something has shifted. They hesitate where they used to move. They reach for rules where they used to reach for instinct. The knowledge is mostly intact but the practiced application of it is not.</p><p>Some professions figured this out a long time ago. The ones where the rust shows. A surgeon&#8217;s hands steady or they don&#8217;t. A pilot lands the plane or they can&#8217;t. A musician plays the passage or falls apart. When capability has a performative moment, a physical, visible instant where you either have it or you don&#8217;t, institutions notice. They build structures to protect it. Recurrency requirements. Mandatory manual hours. Practice regimens that nobody questions because everybody understands what happens without them.</p><p>The professions where the rust doesn&#8217;t show have no equivalent structure. No recurrency. 
No mandatory manual hours. No institutional recognition that professional judgment is a perishable skill. And these are precisely the professions where AI is now creating the same maintenance problem that other professional domains identified decades ago, only faster, across more fields, and with one big difference: the tools don&#8217;t just fail to protect against the decay. They actively cover it up. A surgeon who hasn&#8217;t operated in two years can&#8217;t fake it in the operating room. In most professions, though, the gap between what the tool produces and what the professional actually contributed won&#8217;t surface until something goes wrong. An architectural detail that doesn&#8217;t work on site. A question from a contractor that requires judgment, not a reference drawing. A condition that doesn&#8217;t match the model. The tool can generate the output. It can&#8217;t stand behind it.</p><p>Now, look inside the multiplication. <br><br>When the equation works, it delivers exactly what it promises. A professional with actively maintained judgment uses AI and the output is genuinely enhanced, better than either could produce alone. The decisions originate in expertise; the AI accelerates their execution. This is the good version. It&#8217;s real and it&#8217;s valuable and it is not the whole story.</p><p>The tool that multiplies capability can also quietly erode it. The mechanism isn&#8217;t dramatic. AI doesn&#8217;t take your judgment away. It offers a path around it, a shortcut. Every time you approve a default instead of making a decision, the tool has routed around your judgment. Every time you select from AI-generated options instead of generating options from your own expertise, the tool has done the thinking and you&#8217;ve done the choosing. From the outside, the work looks the same. From the inside, something has shifted, from exercising capability to supervising output.
And the difference compounds.</p><p>The math is particularly tough because multiplication hides the decline. Let&#8217;s say at full capability, the equation produces enhanced competence. At eighty percent, the output is still pretty good. At fifty percent, it passes most quality checks. At ten percent, it&#8217;s basically just AI with a human name on it. The decline is gradual, the output stays acceptable the entire way down, and there is no alarm that sounds at any point along the curve. The professional feels productive at every stage. The client sees quality work at every stage. The decay is invisible from every vantage point, until conditions change. And conditions always change.</p><p>A novel problem arrives that doesn&#8217;t fit the AI&#8217;s training distribution. A tool goes down and the work has to continue without it. A job change demands that you demonstrate what you can do, independent of your tools. A client asks a question that requires judgment, not output, and the answer has to come from you and not from a prompt. These are the moments when the rust shows, and by the time it does, the atrophy is years old.</p><p>The cruelty doesn&#8217;t stop there. AI has made professional credentials cheap. Anyone can now produce a polished portfolio, a clean resume, a keyword-optimized LinkedIn profile with minimal effort. When output is no longer a reliable signal of what someone can actually do, the only thing left is demonstrated capability: the ability to show judgment in action, to explain why you made the choices you made, to reveal a decision-making process that no tool could replicate. The market is beginning to demand proof of the capability term at precisely the moment that many professionals have quietly let it decay. The signal the market needs is the same thing the equation has been eroding.</p><p>So. The equation is a design problem, not a moral one. The question is not whether to use AI. Of course you should; we all need to.
The tools are better than what came before them and what comes next will be better still. The question is how to use them in a way that keeps the capability term substantive, how to get the amplification without the atrophy. The answer is not restriction. Telling a working professional to stop using AI is like telling a pilot to abandon autopilot. It misidentifies the threat and ignores the legitimate value of the tool. The tool is not the enemy. The invisible decay is the enemy.</p><p>The answer is visibility. If you can see the capability term, if you have some way of knowing whether your professional judgment is being exercised or bypassed during AI-assisted work, then you can make informed choices. You can let the tool lead on routine tasks and bring your full judgment to consequential ones. You can notice when a pattern of engagement has shifted, over weeks or months, from active to passive. You can maintain the variable because you can see it moving.</p><p>Capability has a lifecycle. It forms through learning: the uncomfortable, effortful process of building understanding from nothing. And it&#8217;s maintained through practice: the ongoing exercise of judgment against real problems with real stakes. Protecting the variable during each phase requires different tools.</p><p>But the equation itself is the foundation, and it is simpler than it looks. Everyone is optimizing the left side of the multiplication: better models, better tools, better agents, better prompts. Almost nobody is paying attention to the right side. And the right side is the only part of the equation that is yours. The AI will improve whether you attend to it or not. Your capability won&#8217;t. It needs practice the way a muscle needs load, and the most sophisticated tool ever built is quietly offering you reasons to set the weights down.</p><p>The most important professional discipline of the next decade is not mastering the AI. 
It&#8217;s protecting the capability that makes the AI worth using.</p><p>So, let&#8217;s pick up the weights.</p>]]></content:encoded></item></channel></rss>