When do we know that a programming course is not working for non-CS majors?

November 9, 2018 at 7:00 am

There’s a good discussion going on in Facebook that I wanted to make more public and raise as a question here.  The crush of undergraduates in CS today is making it difficult to offer tailored introductory CS courses to different majors.  The problem is particularly acute when designing instruction for future CS teachers.  If you put the CS teachers in the same course as the CS majors, it’s cheaper and easier — you just teach one big course, rather than multiple smaller courses. But is it as effective?

Some of my colleagues suggest that we can design undergraduate introductory computing courses that are effective for both non-CS and CS majors.  Here’s my question: How do you know when it’s not working?  At what point would you admit that the one-course option isn’t working? It’s an interesting empirical question.

Here are some possible measures:

  • Learning is a tricky measure.  For any discipline, the majors in that discipline are typically more motivated to learn than the outside-the-discipline majors. You can’t expect the non-CS majors to learn more than the CS majors.  Then again, the non-CS majors probably come in knowing less.  If you do pre- and post-tests on CS knowledge, do non-CS majors have as large a gain as the CS majors (see the sketch after this list for one way to compute that)?  I don’t know, but in any case, it’s not a great measure for deciding if a class is succeeding for the non-CS majors.
  • Taking more CS courses may be an effective measure, but only if you have more than one course that’s useful to non-CS majors.  If the rest of the classes are about software development, then non-CS majors will probably not want to go on, even if the intro course was effective and well-designed.
  • Retention is a reasonable measure.  If more of the non-CS majors are dropping out from the course than the CS majors, you may not be meeting their needs.
  • My favorite measure is relevance. I argued in my blog post on Monday that programming is a practice that is relevant to many communities. Do the non-CS majors see the relevance of computing for them and their community after the introductory course?  If not, I don’t think it’s meeting their needs.
  • Another tricky measure is use. Should non-CS majors be able (after their first course) to build some program that they find useful?  Certainly, if you achieve that goal, you have also achieved relevance.  How do you judge useful?  CS faculty may not be good judges of what a non-CS major would find useful, and CS faculty are most likely going to assess in terms of code quality (e.g., modularization, appropriate variable and function/module names, commenting, code style, etc.), which I consider pretty unimportant as a measure of the non-CS students’ experience in the first course.
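
To make the pre/post comparison concrete, here is a minimal sketch (in Python, with made-up scores) of one way to compute it, using normalized gain: the gain as a fraction of the room each student had left to improve. It’s only an illustration, not a claim that this is the right measure.

    # Minimal sketch with made-up scores: compare mean normalized gain,
    # (post - pre) / (max - pre), for CS majors vs. non-CS majors.

    def normalized_gain(pre, post, max_score=100.0):
        # Fraction of the available room for improvement that was actually gained.
        return (post - pre) / (max_score - pre)

    def mean_gain(scores):
        gains = [normalized_gain(pre, post) for pre, post in scores]
        return sum(gains) / len(gains)

    # Hypothetical (pre, post) test scores, as percent correct.
    cs_majors = [(40, 75), (55, 85), (30, 70)]
    non_cs_majors = [(20, 55), (25, 60), (15, 50)]

    print("CS majors:    ", round(mean_gain(cs_majors), 2))
    print("Non-CS majors:", round(mean_gain(non_cs_majors), 2))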

What do you think?  How would you know if your intro course was meeting non-CS students’ needs?


12 Comments

  • 1. gregoryvwilson  |  November 9, 2018 at 8:27 am

    One of the major measurable outcomes for Software Carpentry workshops is an increase in learner confidence and self-efficacy (https://carpentries.org/assessment/ and https://zenodo.org/record/1325464).

  • 2. Kat Brandenburg  |  November 9, 2018 at 9:30 am

    Hi Mark! I’d be interested to know your take on low-code platforms. I’m part of the University Team at Mendix, and we provide the Mendix low-code application development platform completely free for classroom use. The coding has been abstracted away in favor of model-driven development. To date, we’ve partnered primarily with Information Systems curricula, but we see an emerging trend of IS and CS departments overlapping more frequently. IS programs value low-code because it gives tech-savvy business students a skill set to develop a real app without needing to code. Would something like this help bridge the gap you describe for non-CS majors taking CS courses?

    Thanks,
    Kat

    • 3. Mark Guzdial  |  November 9, 2018 at 10:22 am

      Hi Kat. I’m thinking a lot about this question these days. We have lots of evidence that block-based programming is easier for intro students to learn and use than text-based languages, and still leads to learning about programming that transfers to traditional text-based languages. How far can we push that boundary? Can we make the user interface look even less like programming (e.g., does it have to be Turing-complete?) to make it easier to start, and still be “programming” and still lead to transferable learning?

  • 4. Jeffrey A Graham  |  November 9, 2018 at 10:20 am

    I could see continued non-major enrollment being a relevant measure. Word gets out to future students. If the current students find the course irrelevant, it might affect future enrollments; in other words, students vote with their feet. This might take longer to notice than withdrawal rates and such.

    • 5. Mark Guzdial  |  November 9, 2018 at 10:23 am

      I’d love to see studies of how much this happens: How much do new students get “word” from former students? We as faculty think it happens a lot. I have my doubts that it’s actually that pervasive.

      • 6. Jeffrey A Graham  |  November 9, 2018 at 11:37 am

        I’ve just got anecdotal evidence and not much of that. I’ve talked with students who’ve told me that this sort of thing was going on, but no hard evidence. How would one go about studying something like this? Just registrar’s data?

        • 7. Jeffrey A Graham  |  November 9, 2018 at 11:39 am

          Also, I teach at a small college; the dynamic here might be very different than at GT or UMich or other big schools.

      • 8. gasstationwithoutpumps  |  November 9, 2018 at 1:09 pm

        We don’t have studies of how often students follow advice from other students, but we do have a formal peer advising system in the Baskin School of Engineering. Anecdotally, students rely more heavily on advice from other students than on faculty or staff advice when choosing classes.

        I don’t know how much advice from students affects intro CS course enrollments, though, as those courses are generally either mandatory or chosen for their perceived job value independent of how well or how badly they are taught. If different instructors teach the course in different quarters, student advice could affect when students try to take the course—but full classes and long waiting lists would hide even that effect, as students would take it when they can get in, rather than when they want to.

  • 9. orcmid  |  November 9, 2018 at 11:03 am

    This might not be relevant in an undergraduate setting, but I wonder about pre-course and post-course surveys/questionnaires. That is, finding out what the student is looking for on starting the course and then finding out from them what they got and how it fit with what they said at the beginning. If a student withdraws, it would be good to find out what that is about also.

    Too much instructor effort? It should not get lost in survey design, though. You need to know who the student is, what year they are in, what their major is (if any), and then just their answer to what the most important thing is that they want to take away from the course.

    Can’t be anonymous. So should not be part of any kind of post-course assessment asked of students. Hmm, now I’m over-thinking this.

    • 10. orcmid  |  November 9, 2018 at 11:53 am

      There are ways to preserve student anonymity and deal with the longitudinal before/after aspect. In the distant past there were 3-part forms that could be adapted to this matter without any computer at all. The first email-like network application I constructed in the 1970s was modeled on that. Nowadays, it makes for an interesting small application design, with little computing, and possibly no automation at all, in the loop with the student. (I had to add this so I can now stop thinking about it.)

      • 11. orcmid  |  November 9, 2018 at 12:08 pm

        Still thinking. The common form that the student uses to enter their data, expectation, and personal outcome needs to be numbered. (This is easier with a sheet of paper used as a turn-around form, more difficult on-line.) The number is just to match the pre- and post-submission copies of the form. The forms can be randomized and dealt out. The students can randomize them again by swapping with other students. Two rounds of that should be good enough.

        The instructor wants a copy of the form when just the pre-course information is on it, and the completed one back when the student ends their participation in the course.

        Two design challenges. 1. Keeping this low-tech and low-friction. 2. Finding a satisfactory computer-mediated version that is as simple as possible and simplifies things for the student also, especially in a distance-learning situation. A systems-analysis problem. Failure mode: asking questions of the student just because you can.
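
        As a minimal sketch of the computer-mediated version (hypothetical file and column names, and assuming each pre/post pair shares the same random form number), the matching step is only a few lines of Python:

            # Minimal sketch, hypothetical file and column names: pair pre- and
            # post-course forms by their shared form number, with no student
            # identity anywhere in the data.
            import csv

            def load_forms(path):
                # Each row: form_number, year, major, answer (free text).
                with open(path, newline="") as f:
                    return {row["form_number"]: row for row in csv.DictReader(f)}

            pre = load_forms("pre_course.csv")
            post = load_forms("post_course.csv")

            # Pair each post-course form with its pre-course counterpart.
            matched = [(pre[n], post[n]) for n in post if n in pre]

            # Pre-course forms with no post copy may flag withdrawals worth following up on.
            unreturned = [n for n in pre if n not in post]

            print(len(matched), "matched pairs;", len(unreturned), "pre-only forms")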

  • 12. gasstationwithoutpumps  |  November 9, 2018 at 12:27 pm

    One slow but powerful technique is to see whether other departments are creating their own intro courses. Because creating courses is a lot of work and expensive, if other departments are doing it anyway, you can be sure that your intro courses are not working for them.

    At UCSC, the Biomolecular Engineering Department (which houses the bioinformatics program) created a programming course for the life sciences, because the intro Java course was not suitable—no interesting programs were written in the two-quarter sequence and Java is not a particularly good language for biologists. The BME 160 course gets to biologically interesting projects within the first quarter and is taught in Python, which is of more use to biologists than Java. The pace of the course is also substantially faster than the lower-division CS courses, despite having the same number of units. The class is much smaller than the CS courses (only 56 students in Winter and 56 in Spring), which helps with handling the intensity of the course.

    Computer science at UCSC is completely revamping their curriculum for next year (starting in Python rather than Java), and it looks like they will accept the BME 160 course in place of their first two Python courses. Negotiations are still ongoing for letting bioinformatics students into the algorithms course without having had assembly language (a course that seems to be just a filter prereq to keep the size down).

