-
AI programming tools are likely to change what it means to “be a developer”, as well as what it means to “learn to be a developer”.
-
Existing developers learned and worked without AI tools, and thus have blind spots about the possibilities and utility of these new tools:
- we value certain things strongly
- (but some of those things may turn out to be less valuable);
- we are used to doing things a certain way
- (but different ways may end up being better by many measures);
- we find certain things easy
- (but the skill to make things “easy” was hard-earned, and even “easy” can be made easier)
-
Analogy to the calculator and learning math
- Can calculators help with learning math? Yes
- Are calculators always the most effective way? No
- Can calculators be a crutch that lets students get away without a “true understanding”? Yes, but a “true understanding” might not always be necessary. Math is made up, and more tautological than elementary school makes it out to be (“Why does 3 + 2 = 5? By definition of +.”). One can make effective use of integration without a deep understanding of “why” it works (see the sketch after this list).
- Did calculators change how math is taught? Yes (though the change is still in progress)
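To make the integration point concrete, here is a minimal sketch in Python (the choice of SciPy’s `quad` is an assumption for illustration; the analogy names no particular tool): a learner can get a correct definite integral without knowing how the underlying quadrature algorithm works, much as a calculator user need not know how `sin` is evaluated.

```python
# A learner can compute this integral correctly without understanding the
# quadrature algorithm that quad() uses internally.
import numpy as np
from scipy.integrate import quad

# Integrate e^(-x^2) from 0 to infinity; the exact value is sqrt(pi)/2.
value, estimated_error = quad(lambda x: np.exp(-x**2), 0, np.inf)

print(value)               # ~0.8862269254527579
print(np.sqrt(np.pi) / 2)  # 0.8862269254527579, matches within the error
```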
-
AI programming tools are a highly scalable educational resource (cheap, fast, always available, customized) that provides “answers” to move a student forward (even if those answers are sometimes not 100% correct)
-
Isn’t it cheating?
- The point of programming is not “to be able to answer code questions unaided”; it’s to solve problems with code.
-
Won’t immediate code generation just hurt a student’s learning and understanding?
- ChatGPT does let a learner go from “word problem” to “code that works but they don’t understand” (see the sketch after this list), but learning is not a linear process
- It’s better than the student being stuck
- Much of a student’s early code is just as inscrutable to them, even when they wrote it themselves
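As a hypothetical illustration (the word problem and the code below are invented for this sketch, not taken from any real session): asked to “find the most common word in a text file”, a tool might hand back compact, working code that a beginner could run successfully long before they could explain `Counter` or the generator expression inside it.

```python
# Word problem: "Find the most common word in a text file."
# Working code of the kind a tool might generate: correct, but dense for a
# beginner who has not yet met Counter, generator expressions, or split().
from collections import Counter

def most_common_word(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        words = (word.lower() for line in f for word in line.split())
        return Counter(words).most_common(1)[0][0]

print(most_common_word("essay.txt"))  # "essay.txt" is a placeholder file name
```

The code works on day one; the understanding of each piece can come later, as the learner modifies it and watches what breaks.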
-
Won’t wrong answers hurt a student?
- Even without AI tools, students create their own wrong answers as part of the learning process.
- Learning involves a student building up an increasingly complex mental model. Often, this model is wrong, and much of learning is making corrections to the model.
- Bonus: wrong answers instill a habit of not trusting any code, and instead “proving” correctness (by testing, reasoning, etc.; see the sketch below)
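A minimal sketch of that habit (the buggy function is a made-up example of the kind of subtly wrong code a tool might return): rather than trusting the answer, the student checks it against cases worked out by hand, and the failed check points straight at the fix.

```python
# Hypothetical AI-generated function that "looks right" but has an
# off-by-one bug: range(2, n) excludes n itself, so primes_up_to(7) drops 7.
def primes_up_to(n):
    return [p for p in range(2, n) if all(p % d != 0 for d in range(2, p))]

# Rather than trusting the code, check it against hand-worked cases.
assert primes_up_to(10) == [2, 3, 5, 7]  # passes
print(primes_up_to(7))                   # [2, 3, 5] -- 7 is missing

# The failed check leads to the fix: include n in the range.
def primes_up_to_fixed(n):
    return [p for p in range(2, n + 1) if all(p % d != 0 for d in range(2, p))]

assert primes_up_to_fixed(7) == [2, 3, 5, 7]  # now passes
```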