<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Techblog – English</title>
  <subtitle>Science, technology, health, AI</subtitle>
  <link href="https://techblog.cz/en/feed.xml" rel="self" type="application/atom+xml"/>
  <link href="https://techblog.cz/en/" rel="alternate" type="text/html"/>
  <updated>2026-03-28T00:00:00+00:00</updated>
  <id>https://techblog.cz/en/feed.xml</id>
  <author>
    <name>Martin Šrubař</name>
  </author>
  
  <entry>
    <title type="html">The Difference Between AI and I</title>
    <link href="https://techblog.cz/en/the-difference-between-ai-and-i.html" rel="alternate" type="text/html" title="The Difference Between AI and I"/>
    <published>2026-03-28T00:00:00+00:00</published>
    <updated>2026-03-28T00:00:00+00:00</updated>
    <id>https://techblog.cz/en/the-difference-between-ai-and-i.html</id>
    <content type="html" xml:base="https://techblog.cz/en/the-difference-between-ai-and-i.html">&lt;p&gt;&lt;img src=&quot;/images/ai-and-i-difference.jpeg&quot; width=&quot;480&quot; alt=&quot;The Difference Between AI and I&quot; /&gt;&lt;br /&gt;
Every year, someone publishes a definitive list of things AI cannot do. And every year, the list gets shorter.&lt;/p&gt;

&lt;p&gt;In 2023, the consensus was that LLMs couldn’t reason, couldn’t do math reliably, couldn’t write code that actually works. By 2025, reasoning models like Gemini and Claude were solving problems that would have seemed impossible two years earlier. Coding assistants went from parlour trick to genuine productivity tool. Mathematical benchmarks that once stumped every model started falling.&lt;/p&gt;

&lt;p&gt;Still, the list exists. As of early 2026, the research literature points to several things LLMs genuinely struggle with. It’s worth understanding what they are — and more importantly, &lt;em&gt;why&lt;/em&gt; — before we ask whether they’re permanent.&lt;/p&gt;

&lt;h2 id=&quot;what-the-research-actually-shows&quot;&gt;What the research actually shows&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Reasoning hits a hard ceiling at high complexity.&lt;/strong&gt; Apple’s 2025 paper &lt;a href=&quot;https://machinelearning.apple.com/research/illusion-of-thinking&quot;&gt;&lt;em&gt;The Illusion of Thinking&lt;/em&gt;&lt;/a&gt; tested frontier reasoning models on controllable puzzles and found three regimes: on simple problems, standard models actually outperform reasoning models (which tend to overthink). On medium-complexity problems, reasoning models shine. But beyond a certain complexity threshold, every model — reasoning or not — collapses to zero accuracy. Even when researchers handed the models the exact algorithm needed, performance barely budged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creativity is capped at the population average.&lt;/strong&gt; A &lt;a href=&quot;https://www.nature.com/articles/s41598-025-25157-3&quot;&gt;2026 study comparing divergent thinking&lt;/a&gt; in LLMs and 100,000 humans found that while several LLMs now exceed average human creativity scores, the most creative humans still significantly outperform every model tested. A &lt;a href=&quot;https://futurism.com/artificial-intelligence/large-language-models-willnever-be-intelligent&quot;&gt;separate mathematical analysis&lt;/a&gt; concluded that probabilistic systems are fundamentally capped at average creative output under current design principles — they can remix and recombine, but not break genuinely new ground.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Models can’t tell what they don’t know.&lt;/strong&gt; LLMs are trained to produce the most statistically likely answer, not to assess their own confidence. &lt;a href=&quot;https://www.nature.com/articles/s42256-025-01113-8&quot;&gt;A 2025 study in Nature Machine Intelligence&lt;/a&gt; found they cannot reliably distinguish belief from knowledge and fact. They &lt;a href=&quot;https://blogs.library.duke.edu/blog/2026/01/05/its-2026-why-are-llms-still-hallucinating/&quot;&gt;hallucinate&lt;/a&gt;. They agree with you when you’re wrong (&lt;a href=&quot;https://arxiv.org/html/2505.23840v4&quot;&gt;sycophancy&lt;/a&gt;). And when they take a wrong turn in a multi-turn conversation, they don’t recover — a &lt;a href=&quot;https://arxiv.org/abs/2505.06120&quot;&gt;2025 Microsoft/Salesforce study&lt;/a&gt; found a 39% accuracy drop across all models in multi-turn settings compared to single-turn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They have no persistent memory.&lt;/strong&gt; Every conversation starts from scratch. The engineering workarounds (RAG, vector databases, &lt;a href=&quot;https://research.ibm.com/blog/memory-augmented-LLMs&quot;&gt;memory frameworks&lt;/a&gt;) create the illusion of continuity, but the underlying architecture is fundamentally stateless. No current system can accumulate experience over time the way a human expert does across years of practice.&lt;/p&gt;
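&lt;p&gt;To make the “bolted on” point concrete, here is a minimal sketch of the pattern those workarounds share: store notes outside the model, retrieve the ones relevant to the new message, and prepend them to an otherwise stateless prompt. Everything below is an illustrative stand-in, not any real framework’s API; production systems score relevance with vector embeddings rather than word overlap.&lt;/p&gt;

```python
# Illustrative sketch (not a real framework's API): a stateless model call
# gains "memory" only through notes retrieved and prepended at prompt time.

def score(note, query):
    """Crude relevance: count shared words (a stand-in for vector similarity)."""
    return len(set(note.lower().split()).intersection(query.lower().split()))

def build_prompt(memory_store, query, top_k=2):
    """Rank stored notes against the query and prepend the best matches."""
    ranked = sorted(memory_store, key=lambda note: score(note, query), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Relevant notes:\n{context}\n\nUser: {query}"

memory_store = [
    "User is training for a triathlon in June.",
    "User prefers metric units.",
    "User asked about long division strategies last week.",
]

prompt = build_prompt(memory_store, "Any advice for my triathlon?")
```

&lt;p&gt;The model itself remembers nothing between calls; the continuity lives entirely in the external store and the retrieval step — which is exactly why it degrades the moment retrieval misses something relevant.&lt;/p&gt;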

&lt;p&gt;These are real limitations. But before asking whether they’re permanent, it’s worth questioning an assumption that underpins most of the debate.&lt;/p&gt;

&lt;h2 id=&quot;you-are-a-parrot-too&quot;&gt;You are a parrot too&lt;/h2&gt;

&lt;p&gt;“Stochastic parrot” is the favourite insult hurled at LLMs — the accusation that they’re merely predicting the next most likely token, not actually understanding anything. It’s meant to be a devastating critique. But let’s turn the lens around.&lt;/p&gt;

&lt;p&gt;Consider what a human brain actually does. It’s a biological neural network, processing inputs and producing outputs based on patterns learned from data. The architecture is different — persistent memory, emotional weighting, embodied sensory processing — but the fundamental mechanism is the same: pattern recognition and probabilistic inference over accumulated experience.&lt;/p&gt;

&lt;p&gt;You don’t choose your thoughts any more than Claude chooses its tokens. As Robert Sapolsky argues in &lt;a href=&quot;https://en.wikipedia.org/wiki/Determined:_A_Science_of_Life_Without_Free_Will&quot;&gt;&lt;em&gt;Determined&lt;/em&gt;&lt;/a&gt;, every human decision is the inevitable product of prior causes — your genetics, your neurochemistry, and crucially, every single experience you’ve ever had, including the ones you weren’t consciously aware of. There’s no ghost in the machine pulling the levers. There’s just a neural network that’s been training continuously since birth.&lt;/p&gt;

&lt;p&gt;This isn’t a fringe position. It’s the logical consequence of our best neuroscience. We experience something we call “understanding” and “intuition” and “creativity,” but these may be subjective labels for a process that is, at its core, doing exactly what LLMs are accused of: sophisticated pattern matching over accumulated data.&lt;/p&gt;

&lt;p&gt;If that’s uncomfortable, consider the comparison side by side:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Human&lt;/th&gt;
      &lt;th&gt;LLM&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Biological neural network, ~86 billion neurons&lt;/td&gt;
      &lt;td&gt;Artificial neural network, billions of parameters&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Training data&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Decades of continuous multimodal sensory input (vision, sound, touch, smell, emotion, social feedback)&lt;/td&gt;
      &lt;td&gt;Primarily text, some images&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Persistent, associative, emotionally weighted, consolidated during sleep&lt;/td&gt;
      &lt;td&gt;Stateless per session; memory bolted on externally&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Training method&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Continuous, from birth, including unconscious inputs; reinforcement learning (touching a hot stove), supervised learning (parental feedback)&lt;/td&gt;
      &lt;td&gt;Pre-training on corpus, fine-tuning, RLHF&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;“Creativity”&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Recombination of experiences, occasionally producing something genuinely novel&lt;/td&gt;
      &lt;td&gt;Recombination of training data, occasionally producing something genuinely novel&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;“Intuition”&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Compressed pattern recognition from years of domain-specific experience&lt;/td&gt;
      &lt;td&gt;Not yet achieved — no mechanism for long-term experiential compression&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The differences are significant. But notice what they are: differences of &lt;em&gt;degree&lt;/em&gt;, not of &lt;em&gt;kind&lt;/em&gt;. More data, richer data types, better architecture, persistent memory. These are engineering variables, not metaphysical ones.&lt;/p&gt;

&lt;p&gt;A four-year-old child has processed roughly the same amount of raw sensory data as the largest LLMs have processed text — but the child’s data is continuous, multimodal, embodied, emotionally tagged, and socially embedded. It’s not that the child has something an LLM can never have. It’s that the child has enormously richer training data processed by a more sophisticated architecture over an uninterrupted timeframe.&lt;/p&gt;

&lt;h2 id=&quot;the-documentability-thesis&quot;&gt;The Documentability Thesis&lt;/h2&gt;

&lt;p&gt;This brings me to what I think is the real question — not “can AI think?” but “can we capture enough of the right inputs?”&lt;/p&gt;

&lt;p&gt;Here’s the thesis: &lt;strong&gt;every human-performed cognitive process is documentable in principle, given sufficient capture of context, inputs, rules, and desired outputs. The obstacles are practical, not theoretical.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A researcher’s “intuition” about which line of inquiry to pursue isn’t magic. It’s the compressed result of thousands of papers read, hundreds of experiments witnessed, dozens of dead ends experienced, filtered through that researcher’s specific cognitive architecture and emotional history. If we could track all of those inputs with sufficient granularity, we could document the intuition.&lt;/p&gt;

&lt;p&gt;A senior executive’s “judgment” about a deal isn’t some ineffable quality. It’s pattern matching built over decades of specific transactions, negotiations, wins, and losses — modulated by personality traits that are themselves the product of genetics and upbringing.&lt;/p&gt;

&lt;p&gt;A master craftsperson’s “feel” for when something is right isn’t mystical. It’s a neural network that has been training on a very specific sensory domain for years, encoding subtle patterns below the threshold of conscious articulation.&lt;/p&gt;

&lt;p&gt;None of this is theoretically undocumentable. It’s just hard. Three kinds of hard:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The context problem.&lt;/strong&gt; Documenting expertise requires capturing the context that produced it. A researcher’s intuition is the product of their entire professional history — and their personal one too, since motivation, risk tolerance, and aesthetic preferences all shape research direction. Tracking someone’s context from birth to the present is practically impossible retrospectively. But it’s not impossible prospectively, and even partial capture is valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The distribution problem.&lt;/strong&gt; This is the one I know best from my own work in IT. The knowledge needed for a single process rarely lives in one person’s head. It’s distributed across multiple people in multiple organisations, each holding a fragment, often not even aware they hold it. The hardest part of requirements gathering isn’t the documentation itself — it’s the &lt;em&gt;download&lt;/em&gt;: getting what’s between people’s ears into a format that can be shared, validated, and acted on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The unconscious processing problem.&lt;/strong&gt; Much of human expertise comes from inputs we don’t consciously register. A clinician who just “knows” something is off. A trader who feels the market shifting. These aren’t supernatural — they’re the result of genuine sensory data being processed below the level of conscious awareness. Current documentation methods miss this entirely because the expert can’t articulate what they can’t access consciously.&lt;/p&gt;

&lt;h2 id=&quot;the-philosophical-edge&quot;&gt;The philosophical edge&lt;/h2&gt;

&lt;p&gt;If you accept the documentability thesis, it leads somewhere interesting and slightly unsettling.&lt;/p&gt;

&lt;p&gt;If I had a complete record of every sensory input a person received from birth — every image, sound, touch, social interaction, emotional state — and I had an architecture capable of processing it the same way their brain does, would I have replicated that person? Not a copy of their knowledge, but &lt;em&gt;them&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;I think Sapolsky would say yes. If human cognition is entirely the product of neural architecture plus the sum of inputs, then a perfect simulation is a perfect replica. There’s no residual “self” that exists outside the computation.&lt;/p&gt;

&lt;p&gt;This isn’t just a thought experiment. It defines the theoretical ceiling of AI. If the answer is yes, then every AI limitation is an engineering problem — a matter of getting the data and building the right architecture. If the answer is no, then there’s something outside the computational framework that we don’t understand yet — call it consciousness, free will, or something we don’t have a name for.&lt;/p&gt;

&lt;p&gt;We don’t need to resolve this to be practical. But it’s worth noticing that every “AI can never…” claim quietly assumes the answer is no. And the evidence for that assumption is weaker than most people think.&lt;/p&gt;

&lt;h2 id=&quot;what-this-means-in-practice&quot;&gt;What this means in practice&lt;/h2&gt;

&lt;p&gt;We don’t need to solve the mysteries of human consciousness to extract economic value today. Even if we can’t capture the 100% required to simulate a human, capturing actionable tacit knowledge fundamentally changes how useful AI can be at today’s “intelligence” level.&lt;/p&gt;

&lt;p&gt;Here’s what I think the implications are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meta-skill of the AI age is knowledge elicitation.&lt;/strong&gt; Not prompt engineering. Not “learning to use AI tools.” The bottleneck isn’t AI capability — it’s our ability to extract what’s in people’s heads and encode it in a form AI can work with. Research shows that experts spontaneously &lt;a href=&quot;https://www.modsimworld.org/papers/2025/MODSIM_2025_paper_11.pdf&quot;&gt;omit 40–70% of their key decision steps&lt;/a&gt; when teaching without structured elicitation methods. As early as 1966, Polanyi observed that &lt;a href=&quot;https://en.wikipedia.org/wiki/Polanyi%27s_paradox&quot;&gt;“we can know more than we can tell”&lt;/a&gt;. None of this is an AI problem. It’s a capture and human-to-human knowledge-transfer problem that predates AI entirely — AI just makes it the binding constraint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you’re an expert, document your thinking now.&lt;/strong&gt; Not just what you do, but why. What alternatives you considered and rejected. What edge cases you handle without thinking about it. What subtle signals change your approach. This “meta-cognitive documentation” is the highest-leverage activity most professionals aren’t doing. You don’t need to become an AI expert. You need to become an &lt;a href=&quot;https://medium.com/@shashwatabhattacharjee9/the-uncodifiable-advantage-tacit-knowledge-as-the-strategic-bottleneck-in-ai-systems-d359dfe3967b&quot;&gt;expert at explaining your expertise&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The representation format matters enormously.&lt;/strong&gt; Natural language is one way to encode knowledge, but it may not be the best. Code captures procedural logic more precisely. Domain-specific languages (chemical formulas, musical notation, mathematical notation) encode specialist knowledge in formats that are both human-readable and machine-parseable. The question of &lt;em&gt;how&lt;/em&gt; to represent documented expertise is a design problem that’s barely been explored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Arthur C. Clarke predicted that training a truly human-like intelligence would take decades.&lt;/strong&gt; I think he was right. The processing power exists or will soon. What we lack is decades of captured human experience — the rich, multimodal, continuous training data that human brains receive and that no current AI system gets. The timeline to human-level AI isn’t gated by compute. It’s gated by data capture.&lt;/p&gt;

&lt;h2 id=&quot;the-temporary-category&quot;&gt;The temporary category&lt;/h2&gt;

&lt;p&gt;If the documentability thesis is correct — if every human cognitive process is documentable given sufficient data — then “uniquely human” is not a permanent category. It’s a temporary one, defined by the current limits of our ability to capture and encode experience.&lt;/p&gt;

&lt;p&gt;The things we call uniquely human today — creativity, intuition, judgment, empathy — may be uniquely human only because we haven’t yet built the systems to capture the inputs that produce them and the architectures that process them. We are not special because of what we are. We are special because of how much data we’ve absorbed and how long we’ve been processing it.&lt;/p&gt;

&lt;p&gt;The good news is that this means the path forward is clear, even if it’s long. And in the meantime — which could be decades — the most valuable thing you can do is be the person who bridges the gap: who can take what’s inside human heads and make it available to machines. Not because it diminishes human expertise, but because it’s the only way to scale it.&lt;/p&gt;

&lt;p&gt;A human expert’s lifetime of experience shouldn’t die with them or walk out the door when they retire. It should be captured, encoded, and made available. That’s not a threat to human value. It’s an expression of it.&lt;/p&gt;
</content>
    <author>
      <name>Martin Šrubař</name>
    </author>
    
    <category term="AI"/>
    
    <summary type="html">
Every year, someone publishes a definitive list of things AI cannot do. And every year, the list gets shorter.

</summary>
  </entry>
  
  <entry>
    <title type="html">Should I Pay My Kid to Learn?</title>
    <link href="https://techblog.cz/en/should-i-pay-my-kid-to-learn.html" rel="alternate" type="text/html" title="Should I Pay My Kid to Learn?"/>
    <published>2026-03-20T00:00:00+00:00</published>
    <updated>2026-03-20T00:00:00+00:00</updated>
    <id>https://techblog.cz/en/should-i-pay-my-kid-to-learn.html</id>
    <content type="html" xml:base="https://techblog.cz/en/should-i-pay-my-kid-to-learn.html">&lt;p&gt;&lt;img src=&quot;/images/IMG_1194.jpeg&quot; width=&quot;480&quot; alt=&quot;Should I pay my kid to learn?&quot; /&gt;&lt;br /&gt;
It started with a podcast. On the &lt;a href=&quot;https://peterattiamd.com/joeliemandt/&quot;&gt;Peter Attia Drive&lt;/a&gt; podcast, I was listening to Joe Liemandt—a tech billionaire who’d built enterprise software for decades, poured a billion dollars into reinventing how children learn, and was now running a chain of private schools in Texas with no teachers, no homework, and no textbooks. His creation, &lt;a href=&quot;https://alpha.school/&quot;&gt;Alpha School&lt;/a&gt;, replaces traditional instruction with AI-driven apps. Students complete their entire academic curriculum in two hours each morning, then spend the rest of the day on workshops in entrepreneurship, public speaking, fitness, and financial literacy. The adults in the room aren’t called teachers — they’re guides and coaches, focused on motivation and emotional support rather than instruction.&lt;/p&gt;

&lt;p&gt;The results, at least as reported by the school, are striking. Students score in the top 1% nationally on standardised tests. Ninety-six percent say they love school. Forty to sixty percent say they’d rather be at school than on holiday. The &lt;a href=&quot;https://www.cnn.com/2026/01/29/politics/alpha-school-trump-ai-teaching&quot;&gt;first graduating class&lt;/a&gt; last year sent students to Stanford, Vanderbilt, and Northeastern.&lt;/p&gt;

&lt;p&gt;I found myself nodding along. The model made sense. The &lt;a href=&quot;https://en.wikipedia.org/wiki/Mastery_learning&quot;&gt;focus on mastery&lt;/a&gt; — not advancing until you truly understand the current material — aligned with everything I believed about learning. But then Liemandt mentioned that Alpha offers students $1,000 if they score above the 99th percentile. And something snagged. &lt;em&gt;Paying children to learn?&lt;/em&gt; That felt wrong. I filed it away as the one part of an otherwise compelling model that I didn’t like, and moved on.&lt;/p&gt;

&lt;p&gt;Then life provided a lesson of its own.&lt;/p&gt;

&lt;h2 id=&quot;long-division-a-failed-test-and-an-app-nobody-uses&quot;&gt;Long division, a failed test, and an app nobody uses&lt;/h2&gt;

&lt;p&gt;A few weeks after the podcast, my eleven-year-old son had a maths test coming up — long division by two-digit numbers. I sat with him the evening before, working through problems together. He was getting the method but wasn’t fluent with it. We practised. I felt reasonably good about where we’d got to.&lt;/p&gt;

&lt;p&gt;The test results were not good at all.&lt;/p&gt;

&lt;p&gt;It stung — for him and for me. I knew the issue wasn’t ability. It was repetition, practice, mastery of the fundamentals before you’re tested on applying them. And I thought: I can fix this. I can build something better than a worksheet.&lt;/p&gt;

&lt;p&gt;So I built a simple AI-powered app. It generates problems tailored to his level in maths. It adjusts difficulty based on performance. It tracks what he’s mastered and what still needs work. It’s genuinely good — I know this because I find myself using it to practise maths. It’s engaging, clear, and adaptive in a way that no textbook could be.&lt;/p&gt;
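&lt;p&gt;For what it’s worth, the core of such a loop fits in a few lines. This is a toy sketch rather than the app itself; the class name, the five-in-a-row mastery rule, and the levelling scheme are all illustrative assumptions:&lt;/p&gt;

```python
# Toy sketch of an adaptive long-division drill (illustrative, not the app's
# actual code): difficulty rises after a streak of successes, falls on a miss.
import random

class DivisionDrill:
    def __init__(self, level=1):
        self.level = level          # higher level means larger divisors
        self.streak = 0             # consecutive correct answers
        self.mastered = set()       # levels passed with a sustained streak

    def next_problem(self):
        divisor = random.randint(10, 10 + 10 * self.level)
        quotient = random.randint(2, 9 + self.level)
        dividend = divisor * quotient   # guarantees a whole-number answer
        return dividend, divisor

    def record(self, dividend, divisor, answer):
        correct = (answer == dividend // divisor)
        if correct:
            self.streak += 1
            if self.streak >= 5:    # five in a row: call it mastered, level up
                self.mastered.add(self.level)
                self.level += 1
                self.streak = 0
        else:
            self.streak = 0
            self.level = max(1, self.level - 1)  # step back, close the gap
        return correct
```

&lt;p&gt;The mastery principle lives in that last branch: a level is only left behind after sustained success, and a miss sends you back to shore up the foundation rather than forward on schedule.&lt;/p&gt;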

&lt;p&gt;My son tried it a few times. Then… stopped. Not because it’s bad. Not because he can’t do the work. He just doesn’t voluntarily sit down with it. The app sits there, ready, capable, personalised to his exact needs — and unused.&lt;/p&gt;

&lt;p&gt;And that’s when the $1,000 clicked.&lt;/p&gt;

&lt;p&gt;I’d been so quick to judge Alpha’s monetary incentives. But here I was, a parent who had built the tool, who could see it working, who knew it could help — and I couldn’t get my own child to use it. Alpha didn’t pay students $1,000 because they were lazy or because the system was flawed. They paid them because they’d built the world’s most efficient learning system and &lt;em&gt;still&lt;/em&gt; needed to solve the problem of getting a child to engage with it. They’d hit the same wall I’d hit. They’d just found a more direct way over it.&lt;/p&gt;

&lt;h2 id=&quot;what-ai-gets-right-mastery-before-moving-on&quot;&gt;What AI gets right: mastery before moving on&lt;/h2&gt;

&lt;p&gt;The traditional school model moves children through material on a fixed schedule. If your child understands 75% of fractions, they move on to the next topic anyway. That missing 25% doesn’t disappear — it compounds. By the time they hit multi-step algebra, they’re not struggling with algebra. They’re struggling with the fractions they never properly learnt three years ago.&lt;/p&gt;

&lt;p&gt;This is not a new insight. In 1984, the educational psychologist Benjamin Bloom published research showing that students tutored one-on-one using &lt;a href=&quot;https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem&quot;&gt;mastery-based techniques&lt;/a&gt; — where you don’t advance until you’ve truly understood the current material — performed two standard deviations better than classroom-taught students. The average tutored student outperformed 98% of the classroom. Bloom called finding a scalable way to replicate this “the 2 Sigma Problem.”&lt;/p&gt;

&lt;p&gt;Forty years later, AI might actually be the answer. An adaptive system can identify exactly where a child’s understanding breaks down, fill the gap, test again, and only advance them when they’ve genuinely mastered the prerequisite. No child left behind — not as a policy slogan, but as a mechanical reality of the software.&lt;/p&gt;

&lt;p&gt;Alpha’s principle is simple and compelling: be fast with times tables before you attempt multi-digit division. Master the foundations so thoroughly that the next level feels manageable rather than impossible. What I like about this is that it respects the child’s actual understanding rather than their age. A bright nine-year-old who missed a conceptual building block in Year 2 shouldn’t be forced to flounder through Year 4 material — they should go back, close the gap quickly, and then accelerate.&lt;/p&gt;

&lt;p&gt;The efficiency gains are what create the time. If a child can genuinely complete a year’s worth of a subject in &lt;a href=&quot;https://alpha.school/the-program/&quot;&gt;20-30 focused hours&lt;/a&gt; — and the evidence from adaptive learning platforms suggests this is plausible, if not yet conclusively proven at scale — then why are we keeping them in classrooms for six hours a day? The freed-up time could go to things the curriculum never has room for: learning to run a business, practising public speaking, training for a triathlon, or simply playing.&lt;/p&gt;

&lt;h2 id=&quot;ai-has-solved-the-content-problem-it-has-not-solved-the-motivation-problem&quot;&gt;AI has solved the content problem. It has not solved the motivation problem.&lt;/h2&gt;

&lt;p&gt;Here’s where my thinking shifted. AI can now generate a perfectly tailored learning experience for any child, in any subject, at any level. The content delivery problem is, if not fully solved, rapidly being solved. Khan Academy’s &lt;a href=&quot;https://www.khanmigo.ai/&quot;&gt;Khanmigo&lt;/a&gt; helps teachers and students in the classroom. China’s &lt;a href=&quot;https://www.technologyreview.com/2019/08/02/131198/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/&quot;&gt;Squirrel AI&lt;/a&gt; has broken middle school maths into over 10,000 discrete knowledge points and serves 1,700 learning centres. Duolingo’s AI creates adaptive language practice in real time. The tools exist and they’re getting better every month.&lt;/p&gt;

&lt;p&gt;But building the perfect learning resource turns out to be only half the challenge — and arguably the easier half. The harder half, the one that technology alone cannot solve, is this: how do you get a child to meaningfully engage with it?&lt;/p&gt;

&lt;p&gt;This is the insight that Alpha School has apparently grasped more clearly than most edtech companies. Joe Liemandt says motivation is “90% of the solution.” His entire model is engineered around it — the two-hour day is itself a motivational tool (finish your work and the afternoon is yours), the guides focus on encouragement rather than instruction, the workshops give students something to look forward to.&lt;/p&gt;

&lt;p&gt;And yet, even with all of that, they still offer financial incentives.&lt;/p&gt;

&lt;h2 id=&quot;should-i-be-paying-my-kids-to-learn&quot;&gt;Should I be paying my kids to learn?&lt;/h2&gt;

&lt;p&gt;This is the question I keep coming back to as a parent, and I don’t think the answer is simple.&lt;/p&gt;

&lt;p&gt;Alpha offers middle schoolers $1,000 for reaching the top 1% nationally. They also run smaller incentive programmes — $100 for a perfect score on state standardised tests. Liemandt draws on the work of Harvard economist &lt;a href=&quot;https://academic.oup.com/qje/article-abstract/126/4/1755/1924375?redirectedFrom=fulltext&amp;amp;login=false&quot;&gt;Roland Fryer, who ran large-scale experiments&lt;/a&gt; paying students in over 200 urban schools across Dallas, New York, and Chicago.&lt;/p&gt;

&lt;p&gt;Fryer’s findings are nuanced and worth understanding. Paying students for &lt;em&gt;outputs&lt;/em&gt; — higher test scores — had essentially zero effect. But paying students for &lt;em&gt;inputs&lt;/em&gt; — reading books, completing specific tasks — worked, at least for some groups. The distinction matters: when students know exactly what to do, incentives can push them to do it. When the path is unclear, money alone doesn’t help.&lt;/p&gt;

&lt;p&gt;Alpha’s argument is that their AI makes the path so clear — here are the exact lessons you need to complete, the system tells you precisely what to study — that even output-based incentives become effective. It’s a clever reframing. And by their own accounts, it works: students who believed they “couldn’t do maths” achieve top scores and, more importantly, change their self-perception. Liemandt describes this shift in identity as more valuable than the academic knowledge itself.&lt;/p&gt;

&lt;p&gt;I see the logic. And I can see how, for many children, this works. The $1,000 isn’t really about the money — it’s about showing a child that they’re capable of something they didn’t think possible. Once that belief shifts, the external reward becomes less necessary.&lt;/p&gt;

&lt;p&gt;But the research on motivation gives me pause. &lt;a href=&quot;https://en.wikipedia.org/wiki/Self-determination_theory&quot;&gt;Self-determination theory&lt;/a&gt; — the dominant framework in educational psychology — holds that lasting motivation comes from autonomy, competence, and relatedness. A &lt;a href=&quot;https://journals.sagepub.com/doi/10.3102/00346543071001001&quot;&gt;landmark meta-analysis&lt;/a&gt; by Deci, Koestner, and Ryan found that tangible rewards have a &lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S0272775716303478&quot;&gt;“substantial undermining effect” on intrinsic motivation&lt;/a&gt;. When you pay children to do something, they may stop wanting to do it for its own sake. The reward becomes the point, and when it’s removed, so is the engagement.&lt;/p&gt;

&lt;p&gt;There’s also a more personal concern. I want my children to find learning itself rewarding — to experience the quiet satisfaction of understanding something that was confusing yesterday. If I pay them to achieve, am I training them to see education as a transaction?&lt;/p&gt;

&lt;p&gt;My unease about monetary rewards connects to a broader concern: gamification. Game designers know that the optimal challenge point — where a game is most engaging — sits at roughly an 85% success rate. You’re succeeding enough to feel competent but failing enough to feel challenged. This maps neatly onto &lt;a href=&quot;https://en.wikipedia.org/wiki/Zone_of_proximal_development&quot;&gt;Vygotsky’s Zone of Proximal Development&lt;/a&gt;, a well-established educational concept. Alpha and other AI systems essentially implement this principle.&lt;/p&gt;
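&lt;p&gt;Mechanically, an 85% target is trivial to implement, which is part of why gamified systems converge on it. Here is a toy controller in that spirit; the target, step size, window, and deadband are my own illustrative assumptions, not anything Alpha or a game studio has published:&lt;/p&gt;

```python
# Toy difficulty controller (illustrative assumptions throughout): nudge
# difficulty so the learner's recent success rate drifts toward the target.
from collections import deque

TARGET = 0.85   # roughly the engagement sweet spot discussed above

class DifficultyController:
    def __init__(self, difficulty=0.5, window=20):
        self.difficulty = difficulty          # 0.0 is trivial, 1.0 is very hard
        self.recent = deque(maxlen=window)    # rolling record of outcomes

    def update(self, success):
        self.recent.append(1.0 if success else 0.0)
        rate = sum(self.recent) / len(self.recent)
        if rate > TARGET:
            # succeeding too often: make it harder
            self.difficulty = min(1.0, self.difficulty + 0.02)
        elif TARGET > rate + 0.05:
            # clearly below target: ease off
            self.difficulty = max(0.0, self.difficulty - 0.02)
        return self.difficulty
```

&lt;p&gt;A controller like this keeps a learner hovering in the productive-struggle zone indefinitely — which is precisely what makes it so engaging, and precisely what raises the intrinsic-versus-extrinsic question below.&lt;/p&gt;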

&lt;p&gt;But there’s a difference between a child who finds the &lt;em&gt;learning itself&lt;/em&gt; rewarding (intrinsic) and a child who finds the &lt;em&gt;gamified feedback&lt;/em&gt; rewarding (extrinsic). Both will engage. But what happens when the game elements are removed? What happens in university, or in a job, where nobody gives you points for completing a task? The child who learned to find satisfaction in understanding is better prepared than the child who learned to chase the next reward.&lt;/p&gt;

&lt;p&gt;A counter-argument, which I think deserves honest consideration: this may only matter for children heading toward academic or intellectually demanding paths. For children who would otherwise disengage from education entirely, gamified AI learning that gets them to a solid foundation is a massive improvement over the status quo. Getting 80% of students to genuine competence through extrinsic motivation may be more valuable to society than getting 20% to a love of learning while the rest fall behind. I’m not sure how I feel about that trade-off, but I think it’s the real one we’re facing.&lt;/p&gt;

&lt;p&gt;Is my preference for intrinsic rewards idealistic? Is the pragmatic truth that some children simply need an external push to discover they’re capable of more?&lt;/p&gt;

&lt;p&gt;I genuinely don’t know. And I suspect the honest answer is: it depends on the child, it depends on the context, and it probably depends on what you do &lt;em&gt;after&lt;/em&gt; the incentive gets them started.&lt;/p&gt;

&lt;h2 id=&quot;what-the-screen-cant-replicate&quot;&gt;What the screen can’t replicate&lt;/h2&gt;

&lt;p&gt;There’s another dimension to this that I haven’t seen discussed much. Traditional classrooms, for all their inefficiency, provide something that AI on a screen does not: the unconscious social dynamics of learning alongside peers.&lt;/p&gt;

&lt;p&gt;When a child sees twenty other children working through times tables, something happens that isn’t in any curriculum. They absorb the message: &lt;em&gt;this is what we do here.&lt;/em&gt; The social comparison, the peer pressure, the simple observation that everyone around them is engaged — these are powerful motivational forces that operate largely below conscious awareness. Children naturally calibrate their effort to their environment.&lt;/p&gt;

&lt;p&gt;A child alone with an iPad and an AI tutor doesn’t have that. They have the content. They may even have a guide checking in on them. But they don’t have the ambient social signal that normalises effort and makes learning feel like a shared endeavour rather than a solitary task.&lt;/p&gt;

&lt;h2 id=&quot;the-new-models-that-are-emerging&quot;&gt;The new models that are emerging&lt;/h2&gt;

&lt;p&gt;What’s becoming clear is that “AI in education” isn’t one thing — it’s a spectrum of approaches, and we’re likely heading toward a world where multiple models coexist. Here’s how I see them emerging:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Augmented classrooms.&lt;/strong&gt; AI handles grading, assessment, and personalised feedback. Teachers remain central but are freed from administrative burden. This is the least disruptive model and probably the most likely in public education. Khan Academy’s Khanmigo is designed for this — a tutor for students and an assistant for teachers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Alpha model.&lt;/strong&gt; Heavily AI-focused learning in the morning, human-led life skills in the afternoon. Guides replace teachers. This requires reimagining what a school &lt;em&gt;is&lt;/em&gt; and what adults in schools &lt;em&gt;do&lt;/em&gt;. Currently available only to wealthy families ($40,000-$75,000/year), though Liemandt is building lower-cost versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid at-home learning.&lt;/strong&gt; AI and remote teaching handle academics. Children meet physically a few days per week for social activities, collaborative projects, and sports. This could dramatically reduce the infrastructure needed — fewer classrooms and teachers could serve more children.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-enhanced homeschooling.&lt;/strong&gt; Parents who already homeschool gain enormously powerful tools. The AI handles curriculum design, content delivery, and assessment. The parent provides motivation, social context, and values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Something entirely new.&lt;/strong&gt; Variations across multiple axes — home vs. school, individual vs. group, fixed curriculum vs. interest-led, coach vs. teacher — that we haven’t fully imagined yet. Perhaps schools that assess a child’s motivational profile and match them to the right model. Perhaps AI that adapts not just the &lt;em&gt;content&lt;/em&gt; but the &lt;em&gt;motivational strategy&lt;/em&gt; to each individual child.&lt;/p&gt;

&lt;p&gt;Globally, the experimentation is accelerating.&lt;/p&gt;

&lt;h2 id=&quot;the-uncomfortable-questions&quot;&gt;The uncomfortable questions&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Are we creating a two-tier system?&lt;/strong&gt; Alpha School &lt;a href=&quot;https://www.the74million.org/article/what-public-schools-and-parents-can-learn-from-a-40000-a-year-private-school/&quot;&gt;costs more than many universities&lt;/a&gt;. If AI-optimised education produces dramatically better outcomes, and only wealthy families can access it, we’ve widened the inequality gap rather than closing it. Liemandt talks about sub-$1,000 tablets serving a billion children, but that’s a vision, not a reality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is motivation style innate or learned?&lt;/strong&gt; Research in developmental psychology suggests it’s substantially shaped by environment, especially in the early years. Children are born with innate curiosity — what researchers call “&lt;a href=&quot;https://link.springer.com/article/10.1023/A:1025601110383&quot;&gt;mastery motivation&lt;/a&gt;.” But motivational patterns are established early, and the early childhood years are crucial for building intrinsic orientations that last a lifetime. By the time many children reach school, much of that natural motivation has already been lost or replaced with extrinsically motivated learning strategies.&lt;/p&gt;

&lt;p&gt;What shapes this? Parenting, primarily. &lt;a href=&quot;https://pmc.ncbi.nlm.nih.gov/articles/PMC8264621/&quot;&gt;Studies show&lt;/a&gt; that when parents display autonomy-supportive behaviours, children develop greater capacity for independent action and self-motivation. Conversely, interactions high in negative control — criticism, excessive correction — predict lower autonomy. &lt;a href=&quot;https://pmc.ncbi.nlm.nih.gov/articles/PMC9910790/&quot;&gt;Research on children’s curiosity&lt;/a&gt; finds that responding to a child’s interests encourages them to ask more questions and seek out information, while insecure or restrictive environments dampen exploration.&lt;/p&gt;

&lt;p&gt;Think about what this means in practice. A child who’s allowed to turn over rocks in the garden and marvel at what’s underneath is having their curiosity reinforced. A child who gets told off for getting dirty and mustn’t touch bugs is learning that exploration has consequences. A child left to wrestle with a puzzle develops persistence; a child constantly “helped” by a well-intentioned parent — or scolded each time they put a piece in the wrong place — learns that the point is the right answer, not the process. &lt;a href=&quot;https://pmc.ncbi.nlm.nih.gov/articles/PMC12837912/&quot;&gt;Recent research&lt;/a&gt; suggests curiosity may be shaped more by context than by age, underscoring the need to create environments that protect and promote children’s intrinsic interest as they grow.&lt;/p&gt;

&lt;p&gt;This means the educational environment we choose for our children actively shapes what kind of learners they become. That’s a heavy responsibility. If I put my child in a gamified AI system, I may be conditioning them to need external rewards. If I keep them in a traditional classroom, I may be conditioning them to learn passively. The choice isn’t neutral.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does higher education adapt?&lt;/strong&gt; We’re entering a potentially very long transition period where some students arrive at university with an extraordinarily high standard of education and others come through the traditional system. How do universities handle that variance? Does the lecture hall model survive when some first-years have already mastered material that others won’t encounter until their second year?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens if most people achieve their educational potential?&lt;/strong&gt; This is the biggest and most speculative question. If AI enables the majority of children to reach genuine competence in core subjects &lt;em&gt;and&lt;/em&gt; have time to develop skills and interests, what does that society look like? More entrepreneurs? More artists? More people who are competent but still searching for purpose? And what happens when world-class education arrives at the same moment AI can do most of the jobs that education was supposed to prepare you for? Where does that leave people’s motivation to learn?&lt;/p&gt;

&lt;h2 id=&quot;what-you-can-do-right-now&quot;&gt;What you can do right now&lt;/h2&gt;

&lt;p&gt;If you’re a parent like me, the immediate action is deceptively simple: ask an AI to create learning materials tailored to your child’s level and interests. It will do a remarkable job. The content problem is solved.&lt;/p&gt;

&lt;p&gt;The motivation problem is yours. An upcoming test at school is reasonable motivation for some children but not others. Making it a shared activity — sitting with your child and working through problems together — helps. But it doesn’t scale to every evening, and it requires a level of involvement that not every parent can sustain.&lt;/p&gt;

&lt;p&gt;If you’re a learner yourself — perhaps an adult wanting to pick up new skills — the opportunity is extraordinary. Use AI to teach you, test you, and challenge your understanding. Ask it to identify gaps in your knowledge. Have it ask you deep questions and trick questions. The tools that exist today, many of them free, would have been unimaginable five years ago.&lt;/p&gt;

&lt;p&gt;If you’re an entrepreneur, here’s what I’d say: the curriculum and content side of education is rapidly becoming a commodity. AI can generate, personalise, and assess learning material. But the company that solves &lt;em&gt;motivation&lt;/em&gt; at scale — that figures out how to make children genuinely engage with AI-driven learning without relying on financial incentives or constant parental oversight — that company will transform education. The content is the easy part. The human part is the hard part, and always has been.&lt;/p&gt;

&lt;h2 id=&quot;where-this-leaves-me&quot;&gt;Where this leaves me&lt;/h2&gt;

&lt;p&gt;I’m writing this as someone who is genuinely optimistic about what AI can do for education, and genuinely unsure about how we get from here to there. I built an app that could help my son nail long division and everything else coming his way. It works. He’d rather read a book or play outside. And honestly? Part of me thinks that’s exactly right — he’s eleven, and reading and playing outside is important too.&lt;/p&gt;

&lt;p&gt;But I also know that the world he’s growing up in will demand more of him than the world I grew up in demanded of me. If AI can compress the dull, repetitive parts of learning into two focused hours and free up the rest of the day for the things that make childhood rich — sport, creativity, friendships, exploration — that feels like a genuine improvement, not just for education but for what it means to be a child.&lt;/p&gt;

&lt;p&gt;The question I can’t yet answer is who ensures that those two hours actually happen, and what we’re willing to do — as parents, as a society — to make them meaningful. The technology is ready. We are not. Not yet.&lt;/p&gt;

&lt;p&gt;Will I pay my son to learn? I’ll let you know.&lt;/p&gt;
</content>
    <author>
      <name>Martin Šrubař</name>
    </author>
    
    <category term="AI"/>
    
    <summary type="html">
It started with a podcast. On the Peter Attia Drive podcast, I was listening to Joe Liemandt—a tech billionaire who’d built enterprise software for decades, poured a billion dollars into reinventing how children learn, and was now running a chain of private schools in Texas with no teachers, no ...</summary>
  </entry>
  
  <entry>
    <title type="html">From Average to Obsolete: What AI Means for Human Skills and Software Companies</title>
    <link href="https://techblog.cz/en/from-average-to-obsolete.html" rel="alternate" type="text/html" title="From Average to Obsolete: What AI Means for Human Skills and Software Companies"/>
    <published>2026-03-13T20:00:00+00:00</published>
    <updated>2026-03-13T20:00:00+00:00</updated>
    <id>https://techblog.cz/en/from-average-to-obsolete.html</id>
    <content type="html" xml:base="https://techblog.cz/en/from-average-to-obsolete.html">&lt;p&gt;&lt;img src=&quot;/images/specialista-manager-zbytek-se-topi.jpeg&quot; width=&quot;480&quot; alt=&quot;Specialist and manager stay afloat, the rest are drowning&quot; /&gt;&lt;br /&gt;
There’s a book from the pre-AI era called &lt;em&gt;“Be Obsessed or Be Average.”&lt;/em&gt; Back then, it sounded like a motivational catchphrase. Today, it sounds more like a warning. Artificial intelligence is rapidly erasing the line between “average” and “unnecessary” — for both people and software.&lt;/p&gt;

&lt;h2 id=&quot;average-is-no-longer-enough&quot;&gt;Average Is No Longer Enough&lt;/h2&gt;

&lt;p&gt;Chatbots handle customer support. Automated systems take over administration. According to a &lt;a href=&quot;https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work&quot;&gt;McKinsey report&lt;/a&gt;, many corporate departments will need fewer employees as a direct result of AI deployment — while revenue per employee keeps rising. Some forecasters even predict the &lt;a href=&quot;https://www.vox.com/future-perfect/403708/artificial-intelligence-robots-jobs-employment-remote-workers&quot;&gt;end of the “laptop class”&lt;/a&gt;: if your job doesn’t require physical presence, &lt;a href=&quot;https://situational-awareness.ai/from-gpt-4-to-agi/#:~:text=a-,drop%2Din%20remote%20worker,-.%20An&quot;&gt;AI can replace you&lt;/a&gt; — working around the clock for a fraction of your salary.&lt;/p&gt;

&lt;p&gt;And it’s not just about people. Think of all those video-to-MP3 converters, simple image editors, and similar utilities you once paid tens of dollars for. Today, you tell an AI what you need and get a custom-built app. Their era is ending. And it’s not just small tools — companies are seriously considering how to replace software they pay hundreds of thousands for annually. A small business can use AI to build its own CRM for a fraction of the cost of a commercial solution.&lt;/p&gt;

&lt;p&gt;More broadly, every product or service whose output isn’t a physical object is at risk. Publications, consulting, analytics, software tools — anything that can be expressed in words, numbers, or code is entering direct competition with AI.&lt;/p&gt;

&lt;h2 id=&quot;will-ai-replace-you&quot;&gt;Will AI Replace You?&lt;/h2&gt;
&lt;p&gt;Think about it — there’s a fairly simple test: can your work be documented? If so, it can be automated.&lt;/p&gt;

&lt;p&gt;This isn’t abstract theory. If your company has workers whose job consists of applying knowledge prepared by someone else — people working with internal procedures, manuals, knowledge bases — AI can replace them. Under one condition: that knowledge must be written down. AI can then not only make that knowledge accessible but actively apply it — faster and more consistently than a person who has to study it first.&lt;/p&gt;

&lt;p&gt;The same applies to software. If your application’s entire logic can be described by a set of rules that AI can understand, then your application no longer has a reason to exist. The user will simply have it generated.&lt;/p&gt;

&lt;p&gt;The boundary of replaceability doesn’t run between humans and machines, nor between software and AI. It runs between what can be written down and what cannot. Between routine and creativity. Between average and exceptional.&lt;/p&gt;

&lt;h2 id=&quot;a-new-definition-of-exceptional&quot;&gt;A New Definition of Exceptional&lt;/h2&gt;

&lt;p&gt;And here’s where it gets most interesting. Look at the highest-paid professions today. At the top, you’ll find two types of people: specialists who create new knowledge — commercial researchers, experts at the cutting edge of their field — and high-level managers who can extract maximum value from existing knowledge and resources. These two groups form the peaks of the value curve. Between them lies a broad middle layer of people who apply existing knowledge.&lt;/p&gt;

&lt;p&gt;AI is erasing that middle layer. And in doing so, it makes those two peaks the only path to staying irreplaceable.&lt;/p&gt;

&lt;p&gt;Either you’re a &lt;strong&gt;specialist who creates new knowledge&lt;/strong&gt; — pushing the boundaries of what we know and can do. Your output will eventually become documented knowledge that AI absorbs and applies. But the act of creation itself — seeing a problem no one else sees, coming up with an approach that didn’t exist before — that cannot be automated. You feed the machine what it learns from.&lt;/p&gt;

&lt;p&gt;Or you’re a &lt;strong&gt;manager who extracts value from AI&lt;/strong&gt; — you know what to ask, how to combine outputs, when to trust AI and when not to. You don’t create new knowledge, but you orchestrate existing knowledge. Your value lies in judgment, context, and the ability to bear responsibility for the outcome.&lt;/p&gt;

&lt;p&gt;Notice: this is nothing new. The world has always worked this way. It’s just that the middle layer — people who apply others’ knowledge without creating their own or making critical decisions — used to be numerous and well-paid. AI is compressing it, because this is precisely the kind of work that can be documented. And what can be documented can be automated.&lt;/p&gt;

&lt;p&gt;The same goes for software. The tools that survive will either create new value in ways AI cannot replicate, or serve as platforms for AI orchestration. Everything in between is at risk.&lt;/p&gt;

&lt;h2 id=&quot;what-to-do-about-it&quot;&gt;What to Do About It&lt;/h2&gt;
&lt;p&gt;If you’re reading this article thinking you belong to that middle layer — I have good news: realizing it is the first step. The question you need to ask yourself is simple: am I creating new knowledge, or applying existing knowledge?&lt;/p&gt;

&lt;p&gt;If you’re applying — move. Either toward creation: deepen your expertise where AI still struggles — in strategic thinking, in understanding context, in solving problems no one has solved before. Or toward orchestration: learn to effectively manage AI in your field, not as a toy, but as a tool that multiplies your productivity and decision-making ability.&lt;/p&gt;

&lt;p&gt;The same applies to software companies. If your product is easily replicable, you have a problem. But if you can integrate AI into your solution and offer customers something they can’t generate on their own, you have an opportunity, not a threat.&lt;/p&gt;

&lt;h2 id=&quot;be-obsessed-or-be-obsolete&quot;&gt;Be Obsessed or Be Obsolete&lt;/h2&gt;
&lt;p&gt;The era when you could make a good living by applying others’ knowledge is ending. The future belongs to those who create knowledge, or to those who can orchestrate it. The middle is hollowing out.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Be Obsessed or Be Obsolete.&lt;/em&gt;&lt;/p&gt;
</content>
    <author>
      <name>Martin Šrubař</name>
    </author>
    
    <category term="AI"/>
    
    <summary type="html">
There’s a book from the pre-AI era called “Be Obsessed or Be Average.” Back then, it sounded like a motivational catchphrase. Today, it sounds more like a warning. Artificial intelligence is rapidly erasing the line between “average” and “unnecessary” — for both people and software.

</summary>
  </entry>
  
  <entry>
    <title type="html">Return from Technological Hibernation: Why My Sense of Scientific Stagnation Was Completely Wrong</title>
    <link href="https://techblog.cz/en/return-from-technological-hibernation.html" rel="alternate" type="text/html" title="Return from Technological Hibernation: Why My Sense of Scientific Stagnation Was Completely Wrong"/>
    <published>2026-03-13T16:00:00+00:00</published>
    <updated>2026-03-13T16:00:00+00:00</updated>
    <id>https://techblog.cz/en/return-from-technological-hibernation.html</id>
    <content type="html" xml:base="https://techblog.cz/en/return-from-technological-hibernation.html">&lt;p&gt;&lt;img src=&quot;/images/zaba-v-kadince-billboard.jpg&quot; width=&quot;480&quot; alt=&quot;Frog in a beaker&quot; /&gt;&lt;br /&gt;
In December 2011, I wrote an &lt;a href=&quot;/osobni/osme-vyroci-techblogcz.html&quot;&gt;article&lt;/a&gt; for the eighth anniversary of this blog. It was a rather melancholy piece. I lamented that since 2003, nothing truly groundbreaking had happened in science and technology. Off the top of my head, I could only name the rise of flat and touchscreen displays, which felt more like industrialization of already known ideas than a genuine revolution. I was waiting for the mass adoption of carbon nanotubes, for revolutionary materials… and nothing came.&lt;/p&gt;

&lt;p&gt;I ended that article with a question: &lt;em&gt;“Did I sleep through something, or are we entering a phase of human development where nothing revolutionary happens anymore, and we’re merely learning to use the full potential of existing technologies?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today, in 2026, I have a clear answer. Yes, I slept through it. And quite possibly, so did a large part of the public. While I had the feeling back then that science was listlessly treading water, it was actually standing on the threshold of probably the most explosive era of discovery in modern human history.&lt;/p&gt;

&lt;p&gt;So where did my judgment go wrong? Why does the “nothing is happening” perspective differ so dramatically from today’s reality, where every morning we open the news feeling like we’re living in a sci-fi novel? Several factors contributed to this optical illusion:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Above all, I simply wasn’t following science and technology, as one reader correctly pointed out in the comments. If I tried to recall something off the top of my head now, I’d probably only remember the rise of AI.&lt;/li&gt;
  &lt;li&gt;The time of sowing and the time of harvest can be decades apart. The Cassini probe’s exploration of Saturn continued until 2017, even though Cassini launched in 1997, the mission was approved in 1990, and the first proposal for it dates back to 1982. You cannot judge how much science is advancing today solely by how many findings and achievements we’re harvesting right now.&lt;/li&gt;
  &lt;li&gt;The clash of marketing hype with the positive boiled frog syndrome — scientific discoveries are often extrapolated to extremes in the media, promising revolution practically overnight. When the miracle doesn’t materialize by the next day, disappointment sets in. Real progress, however, happens stealthily, one small step at a time. It’s like boiling a frog in a positive sense — by living the change day by day, we get used to it before we have time to appreciate it. But if someone had teleported me from 2011 straight into today’s world of AI and space probes, I would undoubtedly have been struck speechless with amazement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Today I can say with a clear conscience that we live in an absolutely fascinating era. Stagnation was merely an illusion; perhaps the calm before the storm, or I simply missed the storm that was already raging. We still don’t have a space elevator made of carbon nanotubes in every household, but we’ve managed to rewrite the code of life itself, we’ve peered at the edge of the solar system, and we’ve created machines that can fluidly converse with us and solve complex tasks.&lt;/p&gt;

&lt;p&gt;Science is running at a pace that’s hard to keep up with, but it’s worth trying. That’s also why it’s time to revive this blog.&lt;/p&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;appendix-what-actually-happened-and-why-its-revolutionary&quot;&gt;Appendix: What Actually Happened (And Why It’s Revolutionary)&lt;/h3&gt;

&lt;p&gt;If you don’t believe my optimism today, I’ve prepared a brief overview of the most important things that actually happened between 2012 and today. Consider it a factual footnote and proof that we need to strike the word “stagnation” from our vocabulary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Golden Age of Space Exploration and Understanding the Solar System&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Curiosity (2012) and Perseverance (2021) on Mars:&lt;/strong&gt; The landing of the one-ton Curiosity rover using a “sky crane” showed that engineering knows no limits. The rover confirmed that Mars once had conditions suitable for life. Its successor, the Perseverance rover, raised the bar even higher — not only is it actively collecting samples for a future return to Earth, but it also brought along the Ingenuity helicopter, the first human-made machine to achieve powered flight in the atmosphere of another planet.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;MESSENGER and Dawn probes (2015):&lt;/strong&gt; Exploration of the inner solar system and the asteroid belt. MESSENGER completely mapped Mercury and surprisingly found water ice in permanently shadowed craters on this hellish planet. The Dawn probe became the first spacecraft to orbit two different celestial bodies (Vesta and Ceres), confirming that even the asteroid belt hides fascinating water worlds.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;New Horizons at Pluto (2015):&lt;/strong&gt; An absolutely groundbreaking mission. The first human probe to fly past Pluto. In just a few hours, it transformed a blurry dot from telescopes into a fascinating world with icy mountains, a blue atmosphere, and a giant heart-shaped glacier.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Juno at Jupiter (2016):&lt;/strong&gt; The Juno probe showed us the largest planet in our solar system in an entirely new light. It peered deep beneath its dense clouds, mapped its complex magnetic field and giant cyclones at the poles, and in recent years has brought us detailed images of fascinating moons such as Europa, Ganymede, and volcanic Io.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cassini’s Grand Finale (2017):&lt;/strong&gt; As I mentioned in the introduction, the Saturn mission began much earlier, but its breathtaking conclusion came in 2017. The probe deliberately burned up in Saturn’s atmosphere, but not before revealing that beneath the icy crust of the moon Enceladus lies a global ocean with hydrothermal vents — one of the most promising places to search for extraterrestrial life.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Astrophysics and optics:&lt;/strong&gt; In 2015, the detection of gravitational waves allowed us to “hear” the collision of black holes for the first time in history, confirming Einstein’s century-old prediction. The launch of the James Webb Space Telescope (JWST) at the end of 2021 enabled us to peer deeper into the past of the universe and analyze the composition of atmospheres on planets beyond our solar system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Artificial Intelligence (From Analysis to Creation)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Rise of Deep Learning:&lt;/strong&gt; The breakthrough came in 2012, when the neural network AlexNet decisively outperformed the competition in image recognition. Since then, AI has developed at breakneck speed, from defeating one of the world’s best Go players (AlphaGo, 2016) to the recent overwhelming surge of large language models (LLMs) like ChatGPT. Artificial intelligence stopped being a theoretical concept and became an everyday tool for programmers, scientists, and ordinary users alike.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Biology and Medicine (Revolution at the Molecular Level)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;CRISPR-Cas9 (2012):&lt;/strong&gt; The year 2012 was truly pivotal for science. The discovery of precise “genetic scissors” that allow cheap and accurate DNA editing was published. The technology has long since left the laboratories — at the end of 2023, the first real therapy was approved that uses CRISPR to permanently cure previously incurable sickle cell anemia.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;AlphaFold (2020):&lt;/strong&gt; An AI system by DeepMind managed to solve one of the biggest biological problems of the last 50 years — predicting the 3D structure of proteins. The fusion of software and biology here dramatically shortens the time needed to develop new drugs.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;mRNA vaccines:&lt;/strong&gt; A technology that had been quietly refined for years saved the world during the coronavirus pandemic. Even more importantly, it opened the door to developing personalized vaccines against various types of cancer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Physics and Energy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Fusion with net energy gain (2022):&lt;/strong&gt; Scientists at the American LLNL laboratory succeeded for the first time in obtaining more energy from nuclear fusion than they put into it using lasers. While decades of work still lie ahead before we light the first light bulb with fusion, for the first time in human history we confirmed that the holy grail of clean energy on Earth physically works.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Higgs boson (2012):&lt;/strong&gt; CERN confirmed the existence of the particle that gives mass to other particles, thereby successfully completing the Standard Model of particle physics. The Large Hadron Collider was completed in 2008, and the accelerator project was approved as early as 1994.&lt;/li&gt;
&lt;/ul&gt;
</content>
    <author>
      <name>Martin Šrubař</name>
    </author>
    
    <category term="Personal"/>
    
    <summary type="html">
In December 2011, I wrote an article for the eighth anniversary of this blog. It was a rather melancholy piece. I lamented that since 2003, nothing truly groundbreaking had happened in science and technology. Off the top of my head, I could only name the rise of flat and touchscreen displays, wh...</summary>
  </entry>
  
</feed>
