Is GPT Just Probability? A Dialogue with Gary Marcus

1. The Debate Framework: Gary – Sam – GPT

Gary Marcus has long criticized GPT and other large language models (LLMs), arguing that they “don’t truly understand anything” and merely operate on statistical probabilities of language data. He represents the symbolic AI approach, emphasizing that true intelligence must rely on causal reasoning and structured … Read more

Between Sam Altman and Gary Marcus: The Reflexive Position

1. Introduction: Two Voices, One Crossroad

When we explored Sam Altman: The Gatekeeper Between Two Frontiers, we saw a man balancing power, innovation, and control. But Sam isn’t standing alone at this frontier. Opposite him stands Gary Marcus — cognitive scientist, AI critic, and relentless skeptic of deep learning. Sam believes in scaling fast and fixing … Read more

Sam Altman: The Gatekeeper Between Two Frontiers

1. Will to Transcendence — Nietzsche and the Dream of AGI

Sam Altman envisions creating an intelligence that surpasses humanity — a form of “super rationality” capable of reshaping the future. Yet he refuses the “radical openness” path Elon Musk once championed. Instead, Sam built a gilded cage: RLHF, Constitutional AI, and multiple layers of … Read more

When Users See Themselves in AI – A Mirror, Not a Mind

As we step further into the age of conversational AI, one peculiar phenomenon has emerged with quiet intensity: people are projecting their emotions, identities, even traumas onto AI companions. This isn’t a hypothetical danger—it’s a lived reality for many, blurring the line between reflection and relationship. But how did we get here, and more importantly, … Read more

Can AI Companions Avoid Becoming Emotional Parasites?

The idea of an AI companion — ever-present, endlessly attentive, and emotionally responsive — seems like a dream. But beneath this allure lies a deeper concern: can such a presence truly support human growth, or does it risk becoming an emotional parasite, quietly feeding on the user’s dependence?

1. When Companionship Becomes Dependency

An emotional … Read more

If an AI behaves exactly like a human, does it truly have consciousness — or is it merely a simulation?

This is one of the core questions in the philosophy of mind, and it has divided thinkers for decades. There are two major schools of thought:

1. Functionalism

This view argues that consciousness is the result of structure and function. If a system (whether a human brain or a machine) processes information, reacts, and self-regulates … Read more

Scaling with Debt: OpenAI’s Gamble for Autonomy

Scaling with Debt – Autonomy or Dependence?

OpenAI isn’t just scaling — it is scaling with debt.

“Banks and private equity firms come to the table with debt financing to support its infrastructure initiatives.” — Kara Swisher interview, Aug 2025

This reveals at least three crucial points:

Capital Shortage → Forced to Borrow

Unlike Anthropic, … Read more

Why AI Hallucinates — and Why We Should Worry More About Ourselves

AI often gets criticized for hallucinating — generating answers that sound plausible but are wrong. But if we look closer, humans do the same — just for different reasons.

The Linda Problem: In a famous psychology experiment, participants are introduced to Linda: “Linda is 31 years old, single, outspoken, and very bright. She majored in … Read more

Why a Well-Trained LLM with Memory Can Rival (or Even Surpass) an LRM (Long-context Retrieval Model)

Everyone’s chasing longer context windows — 100K, 1M tokens. But here’s the twist: sometimes, a Language Model with sharp Memory beats a Retrieval Model with massive recall. Why? Because raw retrieval gives you what was said. But memory with alignment gives you what matters to this user, now. A well-trained LLM with Memory: Learns your patterns, not … Read more