AI Is Exposing a Lie About Intelligence
On executive function, moral courage, and the fragility of merit
The way I ended up at Goldman Sachs is not, it turns out, a story about ambition or strategy or having the right credentials at the right time. It’s a story about being rejected by a glass manufacturer.
This was 2017. I was leaving the Army after five years as a field artillery officer, which – if you’re not familiar with how corporate recruiting works – is not a résumé line that makes people sit up and reach for their checkbooks. The glass company was hiring a third-shift supervisor, which is not a prestigious job, and I did not get it. The FBI rejected me too, though that one is a longer and more ridiculous story involving a polygraph examination in which I was asked, among other things, whether I had ever committed a crime I hadn’t been caught for, and answered in a way that was technically honest but apparently not the correct kind of honest (I’ve written about this at length here). And then there was the career consultant. I told him I wanted to work at a top management consulting firm. He looked at my résumé – Fordham, political science, “Captain, US Army” – and scoffed. Literally scoffed. Made a small sound with his nose and looked back down at the paper. I have forgotten a lot of things from that year, but I have not forgotten that.
Goldman Sachs was the only place that said yes. So I went – via a veterans’ onboarding program that, to be clear, was not how one typically entered Goldman Sachs, and almost certainly the only way I ever would have.
Here’s the thing: I was never supposed to be in finance. I told my college advisor during my first week at Fordham that I was going to major in either English or political science, and that I “hate math, and I never want to do math again.” This turned out to be one of those statements – like “I’m not a dog person” – that the universe treats as a kind of challenge. I ended up spending five years in field artillery, which is arguably the most math-intensive branch to which a newly commissioned Army lieutenant can be assigned, calculating firing solutions and adjusting for wind and elevation and propellant temperature – the kind of applied trigonometry that would have made my seventeen-year-old self weep. I survived it the way I survived most things: by being good enough at the other parts that people forgave me the parts I wasn’t.
And then came banking, and corporate finance. Where I did...fine. Because it turns out most of finance isn’t really about math. It’s about being a reasonably competent human who can sit in a room and follow the thread and say something useful at the right moment. That part I could do. What I couldn’t do – what I was never going to be able to do, no matter how many hours I put in – was catch the busted formula in row 347 of the model. The mis-linked cell. The sign error that silently broke the whole thing. The details that required a certain kind of attention I have simply never had and was never going to develop, not through effort, not through caffeine, not through shame. I had learned to work around this the way you learn to work around a limp.
And then, at some point in 2023, everything got easier.
Not gradually. And definitely not through discipline or growth or finally “leveling up” after years of practice. Just…easier. I was faster. I was catching things I would have missed. I was producing in two days work that would have taken me a week, and it was better than the week-long version would have been. It took me a while to even name what had changed, because the change didn’t feel like a tool. It felt like me, finally working the way I had always imagined I should be able to work.
It was AI. I just hadn’t admitted it yet.
The Unveiling
For most of my life, I believed intelligence was something like height. You were born with it, people noticed it early, and from there the world adjusted its expectations. Teachers identified it. Institutions rewarded it. Employers filtered for it. If you did not rise quickly, the assumption was that there was nothing much there to be released.
Artificial intelligence is unsettling that story.
What AI reveals is something quieter and more destabilizing than a sudden explosion of brilliance: a mass unveiling of latent ability. The technology disproportionately advantages people who were always capable in certain dimensions but constrained in others. People who could think deeply but not quickly, who sensed structure and meaning long before they could articulate it, or whose insight was real but stranded by an inability to translate it into the quantitative idioms our institutions respect.
To outsiders, something here is suspect. These people now appear to have “suddenly gotten smart.” They have not. Intelligence was not added; friction was merely removed.
Psychology has long drawn a distinction between intelligence and the executive capacities required to express it. In her landmark review Executive Functions, cognitive neuroscientist Adele Diamond describes the core executive functions – inhibitory control, working memory, and cognitive flexibility – as the system that allows us to initiate, sequence, and monitor our own thinking. These capacities are not intelligence itself, but without them, intelligence often fails to surface. When executive function falters, ability stays invisible. Scaffold it, and the same ability suddenly looks like talent.
AI does not think for people; it stabilizes these fragile joints. It helps capable minds begin, continue, organize, and revise. What looks like a leap is often a long-delayed unveiling.
We are, as a rule, spectacularly bad at distinguishing between ability that is newly acquired and ability that is newly legible. We mistake verbal fluency for depth because fluency is easier to measure. We equate confidence with competence because our schools, our hiring pipelines, and our professional rituals have trained us to reward the appearance of readiness over the fact of understanding. AI short-circuits these habits. It removes certain frictions that once obscured real capacity, allowing some people to operate, often for the first time, at a level they were always capable of. At the same time, it reveals that what many of us had learned to recognize as mastery was, at least in part, a structure of reinforcement provided by routine, credential, and institutional momentum rather than depth itself.
The result is a new kind of inequality that is cognitive, not merely economic.
The most important gap has shifted. It no longer runs between those who use AI and those who do not. It runs between people whose abilities were latent but constrained and those whose abilities were already fully expressed. AI rewards the former disproportionately. It does very little for the latter. If you were already fluent, polished, organized, fast, and socially legible, AI offers marginal gains. If you were uneven, brilliant in flashes, chronically unfinished, prone to getting lost in your own head – AI can feel like oxygen.
This is where ambition becomes decisive, and where moral clarity quietly enters the picture. The seamless link between thought and expression – what we mistook for intelligence itself – no longer confers the advantage it once did. Raw cognitive horsepower is becoming progressively less important. What matters now is a willingness to operate beyond your previous constraints, and an ability to decide, with some seriousness of purpose, what you are willing to risk being wrong about. To tolerate messiness. To revise publicly. To experiment without the reassurance of mastery. These are not merely stylistic choices, but ethical ones too. Many highly capable people find this intolerable. They would rather stay good at what they already do than be seen struggling toward something they’re not. They use AI politely and sparingly, treating it as an accessory – a way to reinforce the version of themselves they already know how to perform.
Others use it hungrily. They allow it to sit inside unfinished thoughts. They let it stabilize weak joints in their cognition. They re-author themselves in real time. Those people are pulling ahead.
Philosophically, this should not surprise us. In their influential paper The Extended Mind, philosophers Andy Clark and David Chalmers argued that human cognition has never been confined to the brain. We think with notebooks, calendars, diagrams, and lists. These tools are extensions of the mind. They are not evidence of cheating. AI simply makes this truth harder to ignore – and more morally uncomfortable to defend against.
What proves more destabilizing is how poorly our institutions account for this reality. Schools, hiring pipelines, and credentialing systems assume ability shows up early and cleanly. They reward speed under timed conditions and polish over persistence. Research on talent development has increasingly challenged this assumption. In Rethinking Giftedness and Gifted Education, psychologist Rena Subotnik and her colleagues found something that should have been obvious all along: the children we call gifted are not simply born that way. Their abilities emerge from a particular fit between who they are and where they are. Put the same child in a different classroom, a different family, a different decade, and the giftedness might never appear – or might appear in someone else entirely. Change the environment, and the hierarchy shifts. Late bloomers emerge. Early stars plateau.
AI functions as precisely this kind of environmental intervention.
Recent empirical work makes the pattern harder to deny. In a widely cited study, Generative AI at Work, economists Erik Brynjolfsson, Danielle Li, and Lindsey Raymond examined the impact of a generative AI assistant on the performance of customer-support workers. Their findings were striking: AI improved productivity most for those who started at the bottom of the performance distribution, with the largest gains going to the newest and least skilled workers. Workers previously constrained by articulation, confidence, or speed closed gaps that once appeared fixed.
This is why the current backlash feels so moralized. You will hear accusations of cheating, laziness, inauthentic thinking. These charges misunderstand what is happening. AI allows people to think at the speed and clarity they were always capable of but rarely permitted. The anxiety is about status – about who gets to be considered smart. “Authenticity” is just the nice thing you say so you don’t have to say that.
The deeper fear is more unsettling still. What if our hierarchies were never as fair as we believed? What if some of the people we dismissed as scattered, slow, or unfocused were simply operating with unseen constraints? What if intelligence was always more uneven, more situational, more dependent on support than our myths allowed?
AI is re-sorting excellence.
And it is doing so based on who was constrained, who was not, and who was willing to try again – rather than who looked elite on paper.
The printing press did not make people stupid. It made the memorization of lengthy texts less necessary, and in doing so freed minds for other work – synthesis, critique, invention. Scholars at the time mourned the loss of oral tradition and the decline of memory as a trained faculty. They were not wrong that something was being lost. They were wrong about what mattered. Today, some academics – particularly in the humanities – look at AI and see only the cognition it displaces: the student who cannot write a first draft unaided, the researcher who leans on summarization, the writer who thinks alongside a machine.
What they fail to see is the cognition it enables. The student who can finally get the draft out of their head and onto the page, where it can be revised into something true. The researcher who can synthesize across literatures they would never have had time to read, and no one else bothered to. The writer whose ideas were always there but who could not, until now, wrestle them into form. To worry about atrophied skills while ignoring expanded capabilities is to count only what is lost. It mistakes the scaffolding for the building. And it raises an uncomfortable question about what the concern is really protecting. In other words, the critique of AI often functions as a defense of the old sorting – a way of saying, without saying, that the people now being heard were never supposed to be in the conversation.
One detects, among the professional classes, a rather telling evasiveness on the subject of AI. Ask a senior executive or a celebrated academic whether they use such tools, and you will receive a carefully modulated response – something about calendars, perhaps, or restaurant recommendations. The implication is that serious minds have no need for such prosthetics. This is, to put it plainly, nonsense. The same people who publicly dismiss these technologies are privately relying on them to organize their thinking, refine their prose, and simulate the fluency they once had to labor for. I do not blame them for this. The tools are genuinely useful. What I find (ironically) disingenuous is the pretense – the insistence on maintaining the fiction of unassisted excellence while quietly pocketing the advantages of assistance. It is the intellectual equivalent of a tax haven: everyone who benefits agrees not to mention it, and anyone gauche enough to admit the truth is treated as having committed some obscure violation of taste. The result is a silent conspiracy of the competent, each member pretending that the ladder they climbed was not, in fact, an elevator.
The humanities PhD who writes clean first drafts and thinks in complete paragraphs is not more authentic than the person who struggles. They are less constrained. They have mistaken their ease for virtue – and some of them would rather dismiss an entire technology than confront the possibility that the hierarchy they climbed was never measuring what they thought it was.
Some people are experiencing a real leap in agency and cognition. Others are not. This is uncomfortable to say, but it is observable. The danger is that we will cling to outdated measures of merit while the ground beneath them shifts.
The opportunity is quieter and more humane: a chance to recognize that ability is not a fixed trait but a relationship between a mind and its environment.
We are only beginning to see what happens when that environment changes.