
Fluency or Dependency

Learning AI has never looked so easy. Open an interactive notebook, load a large language model (LLM), type a prompt, and marvel at the output. Or lean on AI-assisted coding environments that assemble projects for you. Or let an AI generate entire blocks of code on demand.

All of it looks like fluency.

But too often it is dependency.

Interactive notebooks, AI-assisted coding, and AI-generated code create a polished surface that hides the underlying complexity. Students grow comfortable using someone else's scaffolding, clicking and pasting through demos, yet they are often unable to explain, debug, or extend the systems behind them. What looks like fluency is reliance on tools they cannot control. The test is simple: when the tool breaks or behaves unexpectedly, can you diagnose why? When requirements change, can you adapt the system? If not, you're dependent.

Running a toy project in class is one thing. Running a model in production is another. A notebook may make it seem simple to load an LLM. An AI coding assistant can produce a prototype instantly. AI-generated code can stitch pipelines together. But real systems bring costs, scaling challenges, and performance trade-offs. What happens when the first bug appears? When latency spikes? When concurrency collapses? Shortcuts hide these problems, leaving students unprepared.

The same pattern appears with Retrieval-Augmented Generation, or RAG. An interactive notebook might make it a single command. AI-assisted coding can assemble a neat interface. But real RAG requires cleaning data, designing search that finds the right context, and keeping responses fast under load. Shortcuts rarely teach why irrelevant results appear or why synchronous handling collapses under pressure.
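
To make that concrete, here is a deliberately naive, hypothetical retrieval step. The names score_chunk and retrieve are illustrative, not a library API, and the keyword-overlap scoring stands in for whatever similarity measure a polished demo might hide. With noisy data and a crude score, the "top" result is simply the wrong one.

```python
# Hypothetical sketch of the retrieval step in RAG, under naive assumptions.
# score_chunk and retrieve are illustrative names, not a real library.

def score_chunk(query: str, chunk: str) -> float:
    """Naive keyword overlap: counts shared words, ignoring meaning."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    return len(query_words & chunk_words) / max(len(query_words), 1)

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score_chunk(query, c), reverse=True)[:k]

chunks = [
    "Refund policy: refunds are issued within 14 days of purchase.",
    "Our purchase team purchases office supplies for every purchase order.",  # noisy, repetitive data
]

# The noisy chunk repeats "purchase" and outranks the relevant one.
print(retrieve("How do I get a refund for my purchase?", chunks))
```

Nothing in a one-command demo forces a student to confront why that happens, or to clean the data and redesign the scoring so it stops happening.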

This pattern extends across all the technical decisions that matter in practice. Even distinctions that seem abstract, like synchronous versus asynchronous processing, become unavoidable. Interactive notebooks hide it. AI coding demos rarely show it. Generated scripts rarely anticipate it. Only those who have worked through the complexity themselves understand the trade-offs and can design resilient solutions.
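
As a toy illustration of that trade-off, consider a hypothetical service where each request spends about a second waiting on a model call. The functions below are stand-ins, not a real serving framework: handled synchronously, five requests take roughly five seconds; handled with asyncio, the waits overlap.

```python
# Toy sketch of synchronous vs asynchronous request handling.
# The 1-second sleep is a hypothetical stand-in for a slow model call.

import asyncio
import time

def handle_request_sync(i: int) -> str:
    time.sleep(1)           # blocks the whole service while waiting
    return f"response {i}"

async def handle_request_async(i: int) -> str:
    await asyncio.sleep(1)  # same wait, but yields so other requests can run
    return f"response {i}"

def serve_sync(n: int) -> float:
    start = time.perf_counter()
    for i in range(n):
        handle_request_sync(i)  # requests are served one at a time
    return time.perf_counter() - start

async def serve_async(n: int) -> float:
    start = time.perf_counter()
    await asyncio.gather(*(handle_request_async(i) for i in range(n)))
    return time.perf_counter() - start

print(f"sync:  {serve_sync(5):.1f}s")                # ~5 seconds: latency grows with load
print(f"async: {asyncio.run(serve_async(5)):.1f}s")  # ~1 second: waits overlap
```

The difference is invisible with one user in a notebook and decisive with fifty users in production.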

These difficult parts are not distractions. They are the work. When RAG fails, when concurrency falters, when generated code breaks silently, students learn why data quality, debugging, and design matter. Grappling with these problems turns dependency into fluency.

Learning cannot stop at polished interactive notebooks, quick AI-assisted demos, or AI-generated scripts. The parts removed to make learning "easier" are often the ones that matter most once systems face real workloads.

Easy starts help, but fluency built only on permanent abstractions is dependency by another name. The question is not whether to use abstractions, but whether students understand the systems they rely on and when those abstractions might not suffice. Students ready to deploy systems, debug failures, or optimize performance need more than surface fluency. Others may work effectively within abstracted environments throughout their careers, and that's legitimate. Well-tested abstractions are often the right choice. The issue is not abstraction itself, but mistaking familiarity with interfaces for understanding of systems.

The future will not be shaped by those who can only run code in interactive notebooks, remix AI-assisted prototypes, or copy AI-generated pipelines. It will be shaped by those who can diagnose failures, adapt when the defaults fall short, and design systems that endure. This does not mean beginners should start with low-level complexity. But it does mean that education must eventually move beyond convenience to capability. Students who never progress from using tools to understanding systems remain perpetual beginners. This is the fundamental choice in AI education: dependency that looks like fluency, or fluency that grows from moving beyond dependency.

Real builders are made when knowledge moves from the interface to the engine.