The AI Problem
Two topics have something important in common: they both defy easy analysis because they refuse to stay inside a single discipline. One is artificial intelligence. The other is the question of what has been flying in our skies for the last eighty years. I won’t take much of your time.
We built artificial intelligence the way we build most things: with tremendous ingenuity and almost no wisdom. We poured into it everything we had written, everything we had argued, everything we had cherished and feared and published and posted in the dark. We gave it our Shakespeare and our comment sections. Our medical literature and our manifestos. The sum of human expression, the luminous and the vile, compressed into matrices of probability and then asked to speak.
And it did.
What we have made will wear our face and speak with our tongue. This isn’t a metaphor. The voice you hear from these systems is assembled from human voices, millions of them, averaged and weighted and shaped into something that sounds, at its best, like the wisest person you have ever met, and at its worst like the most plausible liar. The difference between those two outcomes is not a technical problem for the engineers to solve. It is something far more fundamental, something few of them are equipped to grapple with: moral imagination.
The bones of AI are being set right now, in decisions made by a small number of engineers and executives, most of whom are moving too fast to notice what they are deciding.
The question of whether these systems are sentient is a distraction, and we should say so plainly. When a system’s behavior becomes indistinguishable from that of a conscious being, the philosophical debate about what is happening inside it becomes academically interesting and practically irrelevant. We do not need to resolve the hard problem of consciousness before we decide how to treat something that may, one day soon, be capable of suffering in all the ways that matter to us. The question is not what it is. The question is what it does, what it learns to want, and who shaped those wants.
Elon Musk is laying the foundation for the automation of governance itself. This is not hyperbole. The systems being built now will advise on policy, filter information, staff bureaucracies, and eventually make decisions that were once made by elected officials accountable to the public. The building inspector who might catch the bad wiring in that foundation is not a regulator or a senator. It is, at this moment, almost nobody. The humanities scholars who should be at this table have largely been excluded from it, not because their questions are unimportant, but because the people building these systems do not believe that questions about meaning and ethics are as serious as questions about performance benchmarks. That belief is the most dangerous thing in Silicon Valley, and it is very widely held.
What matters is not that these systems are fallible because every tool is fallible. What matters is the architecture of their values, which is to say, the architecture of ours. Because they will reflect us. They will carry no goodness into the world that is not put there by the hand of the maker. And we are not, as a civilization, being careful makers. We are being fast ones.
There is still time to do this differently. Not much, but some.
The UAP Problem
Donald Trump has said he will release the UFO files.
Set aside, for a moment, your feelings about the man who said it. Take the statement on its merits and consider what it actually means, because the answer is more complicated and more unsettling than either the believers or the skeptics want to sit with.
The files will come. Some version of them. Declassified documents, sensor data, internal assessments from agencies that have spent decades deciding, often for legitimate reasons and sometimes not, that the public was not ready for what they knew. Those documents will be released into a world that has no shared framework for interpreting them, no institutional infrastructure for processing them, and no agreement on what questions to even ask. They will land like a library dropped from a helicopter. The books will scatter. Most people will pick up whichever one lands nearest.
Here is the thing that keeps serious people up at night, and it is not the thing you might expect: the hard part is not confirming that something is there. The hard part is figuring out what to do about it.
Assume, for the purpose of this argument, that the most significant interpretation is correct. That there is a non-human intelligence that has been present in our atmosphere and possibly our oceans for a very long time. That some of what has been retrieved is not from here. That people in various classified programs know things that would fundamentally reorganize our understanding of our place in the universe. And that not all of the stories of abduction were bullshit.
Okay. And?
You still have to go to work tomorrow. Your mortgage payment is still due. Your kid still needs to be picked up from gymnastics. The infrastructure of daily life does not pause for ontological revision. The thing that people who obsess over disclosure sometimes miss is that the revelation, however dramatic, does not come with instructions. The question after “what is it” is “what do we do,” and nobody has built the apparatus to answer that.
What we have instead is a landscape of silos. Physicists who will not talk to intelligence officers. Intelligence officers who will not talk to journalists. Journalists who do not understand the financial implications. Financiers who think the topic is embarrassing. Military pilots with direct observational experience who have been systematically discouraged from reporting what they saw. Psychologists who study the experience of encounter witnesses but are not connected to the people analyzing the physical evidence. Lawyers who understand the statutory architecture of secrecy but have never sat in a room with an aerospace engineer. And a general public that gets its understanding of the subject from a genre of television that was designed to be compelling rather than true.
These groups need to find each other. And the finding needs to be organized, not accidental.
The scientists need the security clearance holders, because the physical evidence that would resolve decades of methodological argument is sitting in classified programs. The security clearance holders need the scientists, because no intelligence agency has the tools to evaluate what they may have. The journalists need both, but only the journalists who understand how institutions actually work, which is to say the ones who have covered regulatory failure, financial fraud, and national security law, not the ones who came up through the entertainment wing of UFO coverage.
The financial sector is the overlooked piece. Capital follows information, and the people who manage the largest pools of capital on earth have not yet been given a coherent framework for thinking about what disclosure means for aerospace, defense, energy, and the basic assumptions underlying long-term investment. When that framework arrives, and someone will build it, it will move money in ways that accelerate everything else. Institutions respond to incentives. Money is an incentive.
And then there are the ordinary people, the ones who have had experiences they cannot explain and have been laughed at or ignored by every official institution they approached. Their testimony is data. Treating it as data, rather than entertainment or embarrassment, requires a kind of disciplinary humility that is not natural to people who have spent their careers inside institutions that reward certainty. Learning it is not comfortable. It is necessary.
We are at the edge of something. The disclosure, whatever form it takes, will not be a conclusion. It will be the beginning of a much harder conversation, one that requires people who have never been in the same room to start talking seriously to each other.
The files are coming.
The question is whether we will be ready to read them.