Should I Pay My Kid to Learn?

Martin Šrubař · March 20, 2026

It started with a podcast. On The Peter Attia Drive, I was listening to Joe Liemandt—a tech billionaire who’d built enterprise software for decades, poured a billion dollars into reinventing how children learn, and was now running a chain of private schools in Texas with no teachers, no homework, and no textbooks. His creation, Alpha School, replaces traditional instruction with AI-driven apps. Students complete their entire academic curriculum in two hours each morning, then spend the rest of the day on workshops in entrepreneurship, public speaking, fitness, and financial literacy. The adults in the room aren’t called teachers — they’re guides and coaches, focused on motivation and emotional support rather than instruction.

The results, at least as reported by the school, are striking. Students score in the top 1% nationally on standardised tests. Ninety-six percent say they love school. Forty to sixty percent say they’d rather be at school than on holiday. The first graduating class last year sent students to Stanford, Vanderbilt, and Northeastern.

I found myself nodding along. The model made sense. The focus on mastery — not advancing until you truly understand the current material — aligned with everything I believed about learning. But then Liemandt mentioned that Alpha offers students $1,000 if they score above the 99th percentile. And something snagged. Paying children to learn? That felt wrong. I filed it away as the one part of an otherwise compelling model that I didn’t like, and moved on.

Then life provided a lesson of its own.

Long division, a failed test, and an app nobody uses

A few weeks after the podcast, my eleven-year-old son had a maths test coming up — long division by two-digit numbers. I sat with him the evening before, working through problems together. He was getting the method but wasn’t fluent with it. We practised. I felt reasonably good about where we’d got to.

The test results were not good at all.

It stung — for him and for me. I knew the issue wasn’t ability. It was repetition, practice, mastery of the fundamentals before you’re tested on applying them. And I thought: I can fix this. I can build something better than a worksheet.

So I built a simple AI-powered app. It generates problems tailored to his level in maths. It adjusts difficulty based on performance. It tracks what he’s mastered and what still needs work. It’s genuinely good — I know this because I find myself using it to practise maths. It’s engaging, clear, and adaptive in a way that no textbook could be.
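The app itself leans on an AI backend for generating and explaining problems, but stripped to its bones the loop is simple: generate a problem at the current level, record the result, and only move up once recent performance shows mastery. A toy sketch — not the actual code, with thresholds invented for illustration:

```python
import random

# Illustrative sketch of the adaptive practice loop, stripped of the AI
# parts. Difficulty is the number of digits in the divisor; mastery is
# judged on a rolling window of recent results.

def make_problem(difficulty):
    """Generate a long-division problem at the given difficulty."""
    divisor = random.randint(10 ** (difficulty - 1), 10 ** difficulty - 1)
    quotient = random.randint(10, 99)
    remainder = random.randint(0, divisor - 1)
    dividend = divisor * quotient + remainder
    return dividend, divisor, quotient, remainder

def update_difficulty(difficulty, recent_results, window=5, up=0.9, down=0.5):
    """Advance only once the recent success rate shows mastery."""
    if len(recent_results) < window:
        return difficulty
    rate = sum(recent_results[-window:]) / window
    if rate >= up:
        return difficulty + 1          # mastered: move on
    if rate <= down:
        return max(1, difficulty - 1)  # struggling: step back
    return difficulty                  # keep practising at this level
```

A session loop just alternates the two: pose a problem, append a 1 or 0 to the results, and let `update_difficulty` decide whether the child has earned the next level. The point of the mastery gate is that advancement depends on demonstrated understanding, not on how many evenings have passed.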

My son tried it a few times. Then… stopped. Not because it’s bad. Not because he can’t do the work. He just doesn’t voluntarily sit down with it. The app sits there, ready, capable, personalised to his exact needs — and unused.

And that’s when the $1,000 clicked.

I’d been so quick to judge Alpha’s monetary incentives. But here I was, a parent who had built the tool, who could see it working, who knew it could help — and I couldn’t get my own child to use it. Alpha didn’t pay students $1,000 because they were lazy or because the system was flawed. They paid them because they’d built the world’s most efficient learning system and still needed to solve the problem of getting a child to engage with it. They’d hit the same wall I’d hit. They’d just found a more direct way over it.

What AI gets right: mastery before moving on

The traditional school model moves children through material on a fixed schedule. If your child understands 75% of fractions, they move on to the next topic anyway. That missing 25% doesn’t disappear — it compounds. By the time they hit multi-step algebra, they’re not struggling with algebra. They’re struggling with the fractions they never properly learnt three years ago.

This is not a new insight. In 1984, the educational psychologist Benjamin Bloom published research showing that students tutored one-on-one using mastery-based techniques — where you don’t advance until you’ve truly understood the current material — performed two standard deviations better than classroom-taught students. The average tutored student outperformed 98% of the classroom. Bloom called finding a scalable way to replicate this “the 2 Sigma Problem.”
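The 98% figure is just the normal curve at work: a student two standard deviations above the classroom mean sits at roughly the 97.7th percentile. A one-line check:

```python
from statistics import NormalDist

# Bloom's "2 sigma": a tutored student scoring 2 standard deviations
# above the classroom mean outperforms ~98% of classroom-taught peers.
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.1%}")  # → 97.7%
```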

Forty years later, AI might actually be the answer. An adaptive system can identify exactly where a child’s understanding breaks down, fill the gap, test again, and only advance them when they’ve genuinely mastered the prerequisite. No child left behind — not as a policy slogan, but as a mechanical reality of the software.

Alpha’s principle is simple and compelling: be fast with times tables before you attempt multi-digit division. Master the foundations so thoroughly that the next level feels manageable rather than impossible. What I like about this is that it respects the child’s actual understanding rather than their age. A bright nine-year-old who missed a conceptual building block in Year 2 shouldn’t be forced to flounder through Year 4 material — they should go back, close the gap quickly, and then accelerate.

The efficiency gains are what create the time. If a child can genuinely complete a year’s worth of a subject in 20-30 focused hours — and the evidence from adaptive learning platforms suggests this is plausible, if not yet conclusively proven at scale — then why are we keeping them in classrooms for six hours a day? The freed-up time could go to things the curriculum never has room for: learning to run a business, practising public speaking, training for a triathlon, or simply playing.

AI has solved the content problem. It has not solved the motivation problem.

Here’s where my thinking shifted. AI can now generate a perfectly tailored learning experience for any child, in any subject, at any level. The content delivery problem is, if not fully solved, rapidly being solved. Khan Academy’s Khanmigo helps teachers and students in the classroom. China’s Squirrel AI has broken middle school maths into over 10,000 discrete knowledge points and serves 1,700 learning centres. Duolingo’s AI creates adaptive language practice in real time. The tools exist and they’re getting better every month.

But building the perfect learning resource turns out to be only half the challenge — and arguably the easier half. The harder half, the one that technology alone cannot solve, is this: how do you get a child to meaningfully engage with it?

This is the insight that Alpha School has apparently grasped more clearly than most edtech companies. Joe Liemandt says motivation is “90% of the solution.” His entire model is engineered around it — the two-hour day is itself a motivational tool (finish your work and the afternoon is yours), the guides focus on encouragement rather than instruction, the workshops give students something to look forward to.

And yet, even with all of that, they still offer financial incentives.

Should I be paying my kids to learn?

This is the question I keep coming back to as a parent, and I don’t think the answer is simple.

Alpha offers middle schoolers $1,000 for reaching the top 1% nationally. They also run smaller incentive programmes — $100 for a perfect score on state standardised tests. Liemandt draws on the work of Harvard economist Roland Fryer, who ran large-scale experiments paying students in over 200 urban schools across Dallas, New York, and Chicago.

Fryer’s findings are nuanced and worth understanding. Paying students for outputs — higher test scores — had essentially zero effect. But paying students for inputs — reading books, completing specific tasks — worked, at least for some groups. The distinction matters: when students know exactly what to do, incentives can push them to do it. When the path is unclear, money alone doesn’t help.

Alpha’s argument is that their AI makes the path so clear — here are the exact lessons you need to complete, the system tells you precisely what to study — that even output-based incentives become effective. It’s a clever reframing. And by their own accounts, it works: students who believed they “couldn’t do maths” achieve top scores and, more importantly, change their self-perception. Liemandt describes this shift in identity as more valuable than the academic knowledge itself.

I see the logic. And I can see how, for many children, this works. The $1,000 isn’t really about the money — it’s about showing a child that they’re capable of something they didn’t think possible. Once that belief shifts, the external reward becomes less necessary.

But the research on motivation gives me pause. Self-determination theory — the dominant framework in educational psychology — holds that lasting motivation comes from autonomy, competence, and relatedness. A landmark meta-analysis by Deci, Koestner, and Ryan found that tangible rewards have a “substantial undermining effect” on intrinsic motivation. When you pay children to do something, they may stop wanting to do it for its own sake. The reward becomes the point, and when it’s removed, so is the engagement.

There’s also a more personal concern. I want my children to find learning itself rewarding — to experience the quiet satisfaction of understanding something that was confusing yesterday. If I pay them to achieve, am I training them to see education as a transaction?

Monetary rewards connect to my broader concern about gamification. Game designers know that the optimal challenge point — where a game is most engaging — sits at roughly an 85% success rate. You’re succeeding enough to feel competent but failing enough to feel challenged. This maps neatly onto Vygotsky’s Zone of Proximal Development, a well-established educational concept. Alpha and other AI systems essentially implement this principle.
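You can think of that 85% sweet spot as a small feedback controller: estimate the learner’s recent success rate, then nudge difficulty up when they’re cruising and down when they’re drowning. A toy sketch — only the 0.85 target comes from the research; the smoothing and gain are invented for illustration:

```python
# Toy feedback controller holding a learner near the ~85% success rate
# that game designers treat as the engagement sweet spot.

TARGET = 0.85   # desired success rate (from the research)
ALPHA = 0.3     # smoothing factor for the running success estimate
GAIN = 2.0      # how aggressively difficulty tracks the error

def step(difficulty, success_rate, correct):
    """Update the running success estimate, then nudge difficulty.

    Succeeding above the target makes problems harder; failing below
    it makes them easier, keeping the learner in the challenge zone.
    """
    success_rate = (1 - ALPHA) * success_rate + ALPHA * (1.0 if correct else 0.0)
    difficulty += GAIN * (success_rate - TARGET)   # too easy -> harder
    return max(0.0, difficulty), success_rate
```

This is, in effect, a mechanical implementation of keeping a child inside the Zone of Proximal Development — which is exactly why it engages both the child who loves the maths and the child who loves the game.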

But there’s a difference between a child who finds the learning itself rewarding (intrinsic) and a child who finds the gamified feedback rewarding (extrinsic). Both will engage. But what happens when the game elements are removed? What happens in university, or in a job, where nobody gives you points for completing a task? The child who learned to find satisfaction in understanding is better prepared than the child who learned to chase the next reward.

A counter-argument, which I think deserves honest consideration: this may only matter for children heading toward academic or intellectually demanding paths. For children who would otherwise disengage from education entirely, gamified AI learning that gets them to a solid foundation is a massive improvement over the status quo. Getting 80% of students to genuine competence through extrinsic motivation may be more valuable to society than getting 20% to a love of learning while the rest fall behind. I’m not sure how I feel about that trade-off, but I think it’s the real one we’re facing.

Is my preference for intrinsic rewards idealistic? Is the pragmatic truth that some children simply need an external push to discover they’re capable of more?

I genuinely don’t know. And I suspect the honest answer is: it depends on the child, it depends on the context, and it probably depends on what you do after the incentive gets them started.

What the screen can’t replicate

There’s another dimension to this that I haven’t seen discussed much. Traditional classrooms, for all their inefficiency, provide something that AI on a screen does not: the unconscious social dynamics of learning alongside peers.

When a child sees twenty other children working through times tables, something happens that isn’t in any curriculum. They absorb the message: this is what we do here. The social comparison, the peer pressure, the simple observation that everyone around them is engaged — these are powerful motivational forces that operate largely below conscious awareness. Children naturally calibrate their effort to their environment.

A child alone with an iPad and an AI tutor doesn’t have that. They have the content. They may even have a guide checking in on them. But they don’t have the ambient social signal that normalises effort and makes learning feel like a shared endeavour rather than a solitary task.

The new models that are emerging

What’s becoming clear is that “AI in education” isn’t one thing — it’s a spectrum of approaches, and we’re likely heading toward a world where multiple models coexist. Here’s how I see them emerging:

Augmented classrooms. AI handles grading, assessment, and personalised feedback. Teachers remain central but are freed from administrative burden. This is the least disruptive model and probably the most likely in public education. Khan Academy’s Khanmigo is designed for this — a tutor for students and an assistant for teachers.

The Alpha model. Heavily AI-focused learning in the morning, human-led life skills in the afternoon. Guides replace teachers. This requires reimagining what a school is and what adults in schools do. Currently available only to wealthy families ($40,000-$75,000/year), though Liemandt is building lower-cost versions.

Hybrid at-home learning. AI and remote teaching handle academics. Children meet physically a few days per week for social activities, collaborative projects, and sports. This could dramatically reduce the infrastructure needed — fewer classrooms and teachers could serve more children.

AI-enhanced homeschooling. Parents who already homeschool gain enormously powerful tools. The AI handles curriculum design, content delivery, and assessment. The parent provides motivation, social context, and values.

Something entirely new. Variations across multiple axes — home vs. school, individual vs. group, fixed curriculum vs. interest-led, coach vs. teacher — that we haven’t fully imagined yet. Perhaps schools that assess a child’s motivational profile and match them to the right model. Perhaps AI that adapts not just the content but the motivational strategy to each individual child.

Globally, the experimentation is accelerating.

The uncomfortable questions

Are we creating a two-tier system? Alpha School costs more than many universities. If AI-optimised education produces dramatically better outcomes, and only wealthy families can access it, we’ve widened the inequality gap rather than closing it. Liemandt talks about sub-$1,000 tablets serving a billion children, but that’s a vision, not a reality.

Is motivation style innate or learned? Research in developmental psychology suggests it’s substantially shaped by environment, especially in the early years. Children are born with innate curiosity — what researchers call “mastery motivation.” But motivational patterns are established early, and the early childhood years are crucial for building intrinsic orientations that last a lifetime. By the time many children reach school, much of that natural motivation has already been lost or replaced with extrinsically motivated learning strategies.

What shapes this? Parenting, primarily. Studies show that when parents display autonomy-supportive behaviours, children develop greater capacity for independent action and self-motivation. Conversely, interactions high in negative control — criticism, excessive correction — predict lower autonomy. Research on children’s curiosity finds that responding to a child’s interests encourages them to ask more questions and seek out information, while insecure or restrictive environments dampen exploration.

Think about what this means in practice. A child who’s allowed to turn over rocks in the garden and marvel at what’s underneath is having their curiosity reinforced. A child who is told off for getting dirty and forbidden from touching bugs learns that exploration has consequences. A child left to wrestle with a puzzle develops persistence; a child constantly “helped” by a well-intentioned parent — or scolded each time they put a piece in the wrong place — learns that the point is the right answer, not the process. Recent research suggests curiosity may be shaped more by context than by age, underscoring the need to create environments that protect and promote children’s intrinsic interest as they grow.

This means the educational environment we choose for our children actively shapes what kind of learners they become. That’s a heavy responsibility. If I put my child in a gamified AI system, I may be conditioning them to need external rewards. If I keep them in a traditional classroom, I may be conditioning them to learn passively. The choice isn’t neutral.

How does higher education adapt? We’re entering a potentially very long transition period where some students arrive at university with an extraordinarily high standard of education and others come through the traditional system. How do universities handle that variance? Does the lecture hall model survive when some first-years have already mastered material that others won’t encounter until their second year?

What happens if most people achieve their educational potential? This is the biggest and most speculative question. If AI enables the majority of children to reach genuine competence in core subjects and have time to develop skills and interests, what does that society look like? More entrepreneurs? More artists? More people who are competent but still searching for purpose? And what happens when world-class education arrives at the same moment AI can do most of the jobs that education was supposed to prepare you for? Where does that leave people’s motivation to learn?

What you can do right now

If you’re a parent like me, the immediate action is deceptively simple: ask an AI to create learning materials tailored to your child’s level and interests. It will do a remarkable job. The content problem is solved.

The motivation problem is yours. An upcoming test at school is reasonable motivation for some children but not others. Making it a shared activity — sitting with your child and working through problems together — helps. But it doesn’t scale to every evening, and it requires a level of involvement that not every parent can sustain.

If you’re a learner yourself — perhaps an adult wanting to pick up new skills — the opportunity is extraordinary. Use AI to teach you, test you, and challenge your understanding. Ask it to identify gaps in your knowledge. Have it ask you deep questions and trick questions. The tools that exist today, many of them free, would have been unimaginable five years ago.

If you’re an entrepreneur, here’s what I’d say: the curriculum and content side of education is rapidly becoming a commodity. AI can generate, personalise, and assess learning material. But the company that solves motivation at scale — that figures out how to make children genuinely engage with AI-driven learning without relying on financial incentives or constant parental oversight — that company will transform education. The content is the easy part. The human part is the hard part, and always has been.

Where this leaves me

I’m writing this as someone who is genuinely optimistic about what AI can do for education, and genuinely unsure about how we get from here to there. I built an app that could help my son nail long division and everything else coming his way. It works. He’d rather read a book or play outside. And honestly? Part of me thinks that’s exactly right — he’s eleven, and reading and playing outside matter too.

But I also know that the world he’s growing up in will demand more of him than the world I grew up in demanded of me. If AI can compress the dull, repetitive parts of learning into two focused hours and free up the rest of the day for the things that make childhood rich — sport, creativity, friendships, exploration — that feels like a genuine improvement, not just for education but for what it means to be a child.

The question I can’t yet answer is who ensures that those two hours actually happen, and what we’re willing to do — as parents, as a society — to make them meaningful. The technology is ready. We are not. Not yet.

Will I pay my son to learn? I’ll let you know.

You can send comments to this email.
