Archive for January 14, 2019

Do we know how to teach secure programming to K-12 students and end-user programmers?

I wrote my CACM Blog post this month, “The ethical responsibilities of the student or end-user programmer,” about the terrific discussion that Shriram started in my recent post inspired by Annette Vee’s book (see original post here). I asked several others, besides the participants in the comment thread, what responsibility they thought students and end-user programmers bore for their code.

Here is one more issue to consider, one that is more specific to computing education than the general issue in the CACM Blog post. If we decided that K-12 students and end-user programmers need to know how to write secure programs, could we teach them? Do we know how? We could tell students, “You’re responsible,” but that alone doesn’t do any good.

Simply teaching about security is unlikely to do much good. I wrote a blog post back in 2013 about the failings of financial literacy education (see post here), which is still useful to me when thinking about computing education. We can teach people not to make mistakes, or we can try to make it impossible to make mistakes. The latter tends to be more effective and cheaper than the former.

What would it take to get students to use best practices for writing secure programs and to test their programs for security vulnerabilities? In other words, how could you change the practice of K-12 student programmers and end-user programmers? This is a much harder problem than setting a learning objective like “Students should be able to sum all the elements in an array.” Security is a meta-learning objective. It’s about changing practice in all aspects of other learning objectives.

What would it take to get CS teachers to teach in ways that improve security practices? Consider, for example, an idea generally accepted to be good practice: we could teach students to write and use unit tests. Will they do so when not required to? Will they write good unit tests and understand why they’re good? In most introductory courses for CS majors, students don’t write unit tests. That’s not because it’s not a good idea. It’s because we can’t convince all the CS teachers that it’s a good idea, so they don’t require it. How much harder will it be to teach K-12 CS teachers (or even science or mathematics teachers who might be integrating CS) to use unit tests, or to teach secure programming practices?
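For concreteness, here is a minimal sketch of what that unit-testing practice could look like for the array-summing objective mentioned above, using Python’s built-in unittest module (the function and the test cases are my illustration, not from the post):

```python
import unittest

def sum_elements(values):
    """Sum all the elements in a list (the 'array' from the learning objective)."""
    total = 0
    for v in values:
        total += v
    return total

class TestSumElements(unittest.TestCase):
    def test_typical_list(self):
        self.assertEqual(sum_elements([1, 2, 3]), 6)

    def test_empty_list(self):
        # A good unit test also covers the edge case students tend to skip.
        self.assertEqual(sum_elements([]), 0)

if __name__ == "__main__":
    unittest.main()
```

The empty-list case is exactly the kind of test that nothing in a typical assignment forces students to write; getting them to write it anyway is the change in practice this paragraph is asking about.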

I have often wondered: why don’t introductory students use debuggers, or use visualization tools effectively (see Juha Sorva’s excellent dissertation for a description of how students use visualizers)? My hypothesis is that debuggers and visualizers presume that the user has an adequate mental model of the notional machine. The debugging options Step In and Step Over only make sense if you have some understanding of what a function or method call does. If you don’t, then those options are completely foreign to you. You don’t use something that you don’t understand, at least not when your goal is to develop your understanding.
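To make the Step In/Step Over point concrete, here is a small illustrative Python snippet (my example); the comments describe how typical debuggers treat those two commands at the marked call:

```python
def average(values):
    # Step In at the call below pauses here, on the first line inside the function.
    return sum(values) / len(values)

def report(scores):
    avg = average(scores)  # Step Over runs average() to completion and stops on the next line;
                           # Step In pauses inside average() instead.
    print("Average score:", avg)

report([90, 82, 77])
```

Choosing between the two commands only makes sense if you already have some model of what the call to average() does, which is the notional-machine point above.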

Secure programming is similar. You can only write secure programs when you can envision alternative worlds in which users type the wrong input, or are explicitly trying to break your program, or worse, are trying to do harm to your users (what security people sometimes call adversarial thinking). Most K-12 and end-user programmers are just trying to get their programs to work in a perfect world. They simply don’t have a model of a world where any of those other things can happen. Writing secure programs is a meta-objective, and I don’t think we know how to achieve it for programmers other than professional software developers.
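As a small illustration of that gap (my example, not the post’s), compare a “perfect world” way of reading a quiz score in Python with a version written by someone who can imagine wrong or hostile input:

```python
# "Perfect world" version: assumes the user always types a valid number.
def read_score_naive():
    return int(input("Enter your quiz score (0-100): "))

# Version written with wrong or hostile input in mind.
def read_score_defensive():
    while True:
        text = input("Enter your quiz score (0-100): ")
        try:
            score = int(text)
        except ValueError:
            print("Please type a whole number.")
            continue
        if 0 <= score <= 100:
            return score
        print("Scores must be between 0 and 100.")

if __name__ == "__main__":
    print("You scored", read_score_defensive())
```

The defensive version isn’t much harder to type; the hard part is imagining that inputs like “ninety”, -5, or a 500-character string are possible at all, which is the adversarial thinking described above.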

January 14, 2019 at 7:00 am 16 comments

