
AI answers questions, but it doesn’t ask them.
Guest post by Robert Gore at Straight Line Logic
Never has humanity expended so much on an endeavor for which it will receive so little as the Artificial Intelligence (AI) project. Its design rests on the assumption that the human intelligence (HI) it is attempting to mimic and surpass is analogous to its own operating protocols. In other words, humans take in data and process it in definable ways that lead to understandable outputs, and that is the essence of HI.
AI designers reverse the scientific process of exploring reality and then defining, modeling, and perhaps deriving something useful from it, instead assuming that the reality of HI conforms to the AI model they’re building. It’s like expecting a clock to reveal the nature of time. This may seem surprising because among AI designers are some of the brightest people in the world. However, they demonstrate a profound lack of those qualities that might lead them to further understanding of HI: self-awareness, introspection, humility, wisdom, and appreciation of the fact that much of HI remains quite mysterious and may always remain so. Alas, some of them are just plain evil.
AI looks backward. It’s fed and assimilates vast amounts of existing data and slices and dices it in myriad ways. Large language models (LLMs) can respond to human queries and produce answers based on assimilated and manipulated data. AI can be incorporated into processes and systems in which procedures and outcomes are dependent on data and logically defined protocols for evaluating it. Within those parameters, it has demonstrated abilities to solve problems (playing complex games, medical diagnosis, professional qualification exams, improving existing processes) that surpass HI. There is, of course, value in such uses of LLMs and AI, but that value derives from making some of the more mundane aspects of HI—data assimilation, manipulation, and optimization for use—better. Does that value justify the trillions of dollars and megawatts being devoted to AI? Undoubtedly not.
What AI can’t and won’t touch are the most interesting, important, and forward-facing aspects of HI, because no one has yet figured out how those aspects actually work. They are captured by the question: How do the human mind and soul generate the new? How do curiosity, theorization, imagination, creativity, inspiration, experimentation, improvisation, development, revision, and persistence come together to produce innovation? It’s ludicrous to suggest that we have even a rudimentary understanding of where the new comes from. Ask innovators and creators how they generated a new idea and you’re liable to get answers such as: an inspiration awakened them at three in the morning, or it came to them while they were sitting on the toilet. Model that! At root, the problem is that although AI can answer a seemingly infinite number of questions, it can’t ask a single one. It can be programmed to spot and attempt to resolve conflicts within data, but it doesn’t autonomously ask questions. From birth, the human mind is an autonomous question generator; it’s how we learn. That’s not confined to our species. Anyone who’s ever watched puppies or kittens can see that they have something akin to human curiosity. They explore their environments and are interested in anything new (if they’re not afraid of it). Curiosity and questions are the foundation of learning and intelligence. Reading even a page of something interesting or provocative will generate questions. Generative AI “reads” trillions of pages without an iota of curiosity. No one who either hails or warns of AI surpassing HI has explained how it will do so while bypassing the foundation of HI.
Generative AI is supposedly going to generate something new by unquestioningly manipulating existing data. Even within that ambit, AI is encountering perhaps insoluble problems. Model collapse refers to the degradation of AI models that are trained on AI-generated output. Charles Hugh Smith illustrates the problem in:

“Model Collapse: The Entire Bubble Economy Is a Hallucination,” Charles Hugh Smith, December 3, 2025
HI generally gets better at something the more often it tries. AI degradation causes generative AI to generate hallucinations—nonsense. That means one or more humans have to oversee AI’s output to catch them. How many minor, non-obvious hallucinations fall through the cracks? No one knows.
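To make the dynamic concrete, here is a minimal toy sketch in Python. It is not any real training pipeline: the “model” is just a Gaussian fit to its training data, and a truncation step stands in for generative models’ bias toward their most typical output. Train each generation only on the previous generation’s output, and the distribution’s tails vanish within a few generations:

```python
import random
import statistics

# A minimal sketch of model collapse. The "model" here is just a Gaussian
# fit (mean, stdev) to its training data. Each new generation is trained
# only on the previous generation's output, and, to mimic generative
# models' tendency to over-produce typical content, we keep only the
# samples nearest the mean. The tails vanish and the variance collapses.

random.seed(0)

def fit(data):
    """'Training': estimate mean and standard deviation from the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, stdev, n):
    """'Inference': the model emits synthetic data."""
    return [random.gauss(mean, stdev) for _ in range(n)]

# Generation 0 is trained on real, human-produced data.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for generation in range(8):
    mean, stdev = fit(data)
    print(f"generation {generation}: mean={mean:+.3f}  stdev={stdev:.3f}")
    synthetic = generate(mean, stdev, 2000)
    # Bias toward "typical" output: keep only the half of the samples
    # closest to the mean, and train the next generation on that.
    synthetic.sort(key=lambda x: abs(x - mean))
    data = synthetic[:1000]
```

Run it and the standard deviation drops from roughly 1.0 toward zero within a handful of generations: the “model” ends up confidently emitting an ever-narrower sliver of what the original data contained.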
AI has been presented as a labor-saving miracle. But many businesses report a different experience: “work slop” — AI-generated content that looks polished but must be painstakingly corrected by humans. Time is not saved — it is quietly relocated.
Studies point to the same paradox:
• According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.
• MIT Sloan research indicates that AI adoption can lead to initial productivity losses — and that any potential gains depend on major organizational and human adaptation.
• Even McKinsey — one of AI’s greatest evangelists — warns that AI only produces value after major human and organizational change. “Piloting gen AI is easy, but creating value is hard.”
This suggests that AI has not yet removed human labor. It has hidden it — behind algorithms, interfaces, and automated output that still requires correction.
“AI, GDP, and the Public Risk Few Are Talking About,” Mark Keenan, December 1, 2025
A frequently cited figure from S&P Global Market Intelligence is that 42 percent of companies have already scrapped their AI initiatives. The more dependent humans become on AI, the greater the danger that AI degradation leads to HI degradation. Heavy usage of AI may make humanity net stupider.
When AI works as envisioned, with no detectable degradation, it processes vast amounts of often conflicting data. How does it resolve the conflicts? The resolution is primarily statistical—that which is most prevalent becomes what AI “learns.”
From the vast data that serves as its training input, the LLM learns associations and correlations between various statistical and distributional elements of language: specific words relative to each other, their relationships, ordering, frequencies, and so forth. These statistical associations are based on the patterns of word usage, context, syntax, and semantics found within the training dataset. The model develops an “understanding” of how words and phrases tend to co-occur in varied contexts. The model does not just learn associations but also understands correlations between different linguistic elements. In other words, it discerns that certain words are more likely to appear in specific contexts.
“Theory Is All You Need: AI, Human Cognition, and Causal Reasoning,” Teppo Felin and Matthias Holweg, Strategy Science, December 3, 2024
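The point can be made concrete with a toy next-word predictor. The sketch below is mine, not Felin and Holweg’s, and the corpus is invented; a real LLM is a neural network over token distributions, not a bigram counter, but it illustrates the same consensus-by-prevalence dynamic the quote describes: whatever continuation is most frequent in the training data wins.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": predict the next word by counting which
# word most often follows the current one in the training corpus. The
# corpus below is invented for illustration: nine sources assert the
# consensus view, one dissents.
corpus = (
    "heavier than air flight is impossible . " * 9 +
    "heavier than air flight is achievable . "
).split()

# Tally, for each word, how often every other word follows it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Consensus by counting: return the most frequent continuation."""
    return follows[word].most_common(1)[0][0]

print(follows["is"])   # Counter({'impossible': 9, 'achievable': 1})
print(predict("is"))   # 'impossible' -- the minority view never surfaces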
AI output essentially represents consensus “knowledge” as measured by AI’s data surveying and statistical capabilities. What is defined as consensus may be an average weighted by the credentials and output of the various propagators of the data. It may, when it’s spitting out “answers,” note that the data conflicts and list alternative interpretations. However, aside from the fact that consensus, even weighted average consensus, is often wrong, there is a graver danger. Consensus wisdom is frequently the sworn enemy of innovation. Consensus-based AI may, on balance, retard more than it promotes innovation.
Felin and Holweg use the example of heavier-than-air, powered, controlled human flight in the late 1800s and early 1900s. Imagine if AI had been around in 1902, and the query was made: Is heavier-than-air human flight possible? The seemingly confident answer would have been: Definitely not! That was the overwhelming consensus of the experts, and AI would have reflected it. Had AI been guiding decision making—one of its touted abilities—it would have “saved” humanity from taking flight. Fortunately, Orville and Wilbur had abundant HI and they disregarded the so-called experts, an often intelligent strategy.
So, why is AI being pushed so hard? Why are all the “right” people in government, business, academia, and mainstream media so devoted to it? Why are trillions being spent as the stock market bubbles?
If the last few decades have taught us anything, it’s that when an official agenda doesn’t make sense, especially when it has an element of “official science,” start looking for the real reasons, the hidden agenda. The COVID response wasn’t about health and safety. The manufactured virus, lockdowns, closing businesses, masking, social distancing, discouraging or banning effective remedies, overwhelming pressure for vaccine uptake, ignoring adverse vaccine consequences up to and including death, and proposed vaccine passports enabled totalitarianism.
Climate change has served the same purpose. Like AI, climate change “scientists” reverse the scientific process, insisting that reality conforms to their models. Operating in a protective bubble sustained by academia, the media, business, NGOs, governments, and multinational organizations, they’re hostile to the contrary evidence, questions, and criticism of their models that are essential to true science.
And like climate change and COVID, AI has the totalitarians and would-be totalitarians drooling. Collecting, assimilating, and manipulating data is the technological foundation of a surveillance state. That’s all the technototalitarians (See “Technototalitarianism,” Parts One, Two and Three) require of AI—all-encompassing data that can be sorted by every available metric, including ones for which citizens might pose a threat, rhetorically or otherwise, to the government. Some of them must know AI will never get close to HI, but that’s a useful claim, a selling point, to attract massive amounts of capital from Wall Street and support from the technototalitarian Trump administration.
Totalitarian empowerment is probably the main thing Trump understands about AI. Here he shares common ground with the Chinese government (although it undoubtedly knows far more about AI than Trump). The president has embraced AI, touting the Stargate project the day after he was inaugurated and now throwing the full weight of the government, its scientific laboratories, and its private sector technology “partners” behind the Genesis Mission, an effort, supposedly on a Manhattan Project scale, to incorporate AI into virtually everything. And lest the states, with their pesky concerns about AI’s huge requirements for land, water, and energy, try to intervene, Trump has promulgated an executive order to federalize AI regulation.
It’s a Wall Street truism that governments jump on market trends when they’re about to end. AI hype has propelled AI stocks to dizzying heights. While few pundits and seers have questioned the flawed basic premise—that AI will completely surpass HI—some are starting to express concern about its staggering monetary and energy requirements and the circular nature of many of its financing arrangements. It would follow a long list of precedents if Trump’s Genesis Mission top-ticked AI. Perhaps it should have been named the Revelation Mission, after the last rather than the first book of the Bible.
An epic, AI-led stock market crash with a concomitant debt implosion would wipe out most of what’s reckoned wealth in America, plunging the nation into a depression. If the Genesis Mission makes the government a financial partner of the AI industry, or the industry is deemed “too big to fail,” taxpayers would be stuck with the tab. Many of AI’s promoters are on board with the you’ll-own-nothing-and-be-happy world our rulers envision. A crash would fit right in with their beyond-Orwellian agenda to impoverish and enslave America. Thus, they might regard this bubble that must inevitably pop as an AI feature, not a bug.
If you query AI about AI, it will, reflecting the consensus of the experts, assure you that AI is only for the good. Human intelligence says disregard the experts. Never has it been more important to think for yourself.
