Posts tagged ‘CS1’
The ITICSE’14 paper referenced below is getting discussed a good bit in the CS Education community. Is it really the case that enhancing error messages doesn’t help students?
Yes, if you do an ineffective job of enhancing the error messages. I’m disappointed that the paper doesn’t even consider the prior work on how to enhance error messages in a useful way — and more importantly, what has been established as a better process. To start, the best-paper award at SIGCSE’11 went to an empirical process for analyzing the effectiveness of error messages and a rubric for understanding student problems with them — a paper that isn’t even referenced in the ITICSE paper, let alone its rubric applied. That work, and the work of Lewis Johnson on Proust, points to the importance of bringing more knowledge to bear in creating useful error messages: studying student intentionality, and figuring out what information students need to be successful. Andy Ko got it right when he said “Programming languages are the least usable, but most powerful human-computer interfaces ever invented.” We make them more usable by doing careful empirical work, not just by tossing a bunch of data into a machine-learning clustering algorithm.
I worry that titles like “Enhancing syntax error messages appears ineffectual” can stifle useful research. I already spoke to one researcher working on error messages who asked whether new work is even worthwhile, given this result. But the result just comes from doing a bad job of enhancing error messages. Perhaps a better title would have been “An approach to enhancing syntax error messages that isn’t effective.”
Debugging is an important skill for novice programmers to acquire. Error messages help novices to locate and correct errors, but compiler messages are frequently inadequate. We have developed a system that provides enhanced error messages, including concrete examples that illustrate the kind of error that has occurred and how that kind of error could be corrected. We evaluate the effectiveness of the enhanced error messages with a controlled empirical study and find no significant effect.
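To make concrete what “enhanced error messages, including concrete examples” can mean, here is a hypothetical sketch (not the system evaluated in the paper, and the message table is invented for illustration) of one way a tool might wrap Python’s own compiler:

```python
# Hypothetical sketch: pair a raw Python syntax error with an explanation
# and a concrete broken/fixed example. The ENHANCED table is invented here
# for illustration; a real system would cover many error categories.
ENHANCED = {
    # key: fragment of the raw compiler message -> (explanation, worked example)
    "was never closed": (
        "You opened a parenthesis or bracket that was never closed.",
        "Broken:  print(max(3, 5)\nFixed:   print(max(3, 5))",
    ),
}

def enhance(source):
    """Compile student code; on a syntax error, add a concrete example if we have one."""
    try:
        compile(source, "<student>", "exec")
        return "No syntax errors."
    except SyntaxError as e:
        msg = e.msg or str(e)
        for fragment, (why, example) in ENHANCED.items():
            if fragment in msg:
                return f"{msg}\n  What this means: {why}\n  Example:\n{example}"
        return msg  # no enhancement known: fall back to the raw message

print(enhance("print(max(3, 5)"))
```

The exact raw message varies by Python version (“'(' was never closed” in newer releases, “unexpected EOF while parsing” in older ones), which is itself a reminder of why matching on message text is a fragile basis for enhancement.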
The blog post linked below felt close to home, though I measure it differently than lines of code. The base point is that we tend to start introductory programming courses assuming way more knowledge than students actually bring with them. My experience this semester is that we tend to expect students to gain more knowledge more quickly than they do (and maybe, than they can).
I’m teaching Python Media Computation this semester, on campus (for the first time in 7 years). As readers know, I’ve become fascinated with worked examples as a way of learning programming, so I’m using a lot of those in this class. In Ray Lister terms, I’m teaching program reading more than program writing. In Bloom’s taxonomy terms, I’m teaching comprehension before synthesis.
As is common in our large courses at Georgia Tech (I’m teaching a lecture of 155 students, and there’s a parallel section of just over 100), the course is run by a group of undergraduate TAs. Our head TA took the course, and has been TA-ing it for six semesters. The TAs create all the homework and quizzes. I get to critique (which I do), and they respond reasonably. I’ve realized that all the TAs expect that the first thing to measure in programming is writing code. All the homework is programming from a blank sheet of paper. Even the first quiz is “Write a function to…”. The TAs aren’t trying to be difficult. They’re doing as they were taught.
One of the big focal research areas in the new NSF STEM-C solicitation is “learning progressions.” Where can we reasonably expect students to start in learning computer science? How fast can we reasonably expect them to learn? What is a reasonable order of topics and events? We clearly need to learn a lot more about these to construct effective CS education.
I’m not going to articulate the next few orders of magnitude, both because they are not relevant to beginner or intermediate programmers, and because I’m climbing the 1K → 10K transition myself, so I’m not able to articulate it well. But they have to do with elegance, abstraction, performance, scalability, collaboration, best practices, code as craft.
The 3am realization is that many, many “introduction” to programming materials start at the 1 → 10 transition. But learners start at the 0 → 1 transition — and a 10-line program has the approachability of Everest at that point.
This is from Jennie Kay, who was one of the organizers of the SIGCSE Robot Rodeo a few years ago, and is a leader in the use of robotics in CS education in the SIGCSE community.
Educational Robots for Absolute Beginners:
A Free On-Line Course that teaches the basics of LEGO NXT Robot Programming
Got a LEGO NXT robot kit but don’t know where to begin? Come learn the basics of LEGO NXT Robot Programming and discover a new way to bring math, science, and computer science content to your students both in and out of the classroom. By the end of this class, you (YES YOU!) will have built your own robot and programmed it to dance around the room.
This course, developed by the Rowan University Laboratory for Educational Robotics and supported by a generous grant from Google CS4HS, is specifically designed for K-12 teachers, but is free and open to anyone who is interested in learning about LEGO NXT robotics. The course will start at the end of October. Preregister now and we’ll send you an email when we open up the course. To preregister, as well as to see our video “trailer” and get answers to frequently asked questions, please visit: http://cs4hsrobots.appspot.com/
Interesting claim below. Do we believe that being able to build a JIT compiler will be a critical threshold for programming in 2040? Or will programming become so much of a literacy that there will be people who can just write grocery lists and letters to Grandma, and there will be Shakespeares? I’m predicting a broader spread, not a higher bar.
The FizzBuzz problem described below is pretty interesting, a modern day version of the Rainfall problem. I will bet that the results claimed for FizzBuzz are true, but I haven’t seen any actual studies of it yet.
While that may be true today, what will matter far more in the future is the quality of programmers, not the quantity. Any programmer who can’t hack together a JIT compiler in 2040 will be as useless as a programmer who can’t solve FizzBuzz today.
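For readers who haven’t met it, FizzBuzz asks you to print the numbers from 1 to 100, replacing multiples of 3 with “Fizz”, multiples of 5 with “Buzz”, and multiples of both with “FizzBuzz”. A minimal Python solution shows how small the bar is:

```python
def fizzbuzz(n):
    """Classic FizzBuzz: return the lines for the numbers 1..n."""
    lines = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # multiple of both 3 and 5
            lines.append("FizzBuzz")
        elif i % 3 == 0:
            lines.append("Fizz")
        elif i % 5 == 0:
            lines.append("Buzz")
        else:
            lines.append(str(i))
    return lines

print("\n".join(fizzbuzz(15)))
```

The interest of the problem isn’t the code; it’s the claim that a surprising fraction of applicants who call themselves programmers can’t produce it.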
Leo Porter, Charlie McDowell, Beth Simon, and I collaborated on a paper on how to make introductory programming work, now available in CACM. It’s a shorter, more accessible version of Leo and Beth’s best-paper-award-winning SIGCSE 2013 paper, with history and kibitzing from Charlie and me:
Many Communications readers have been in faculty meetings where we have reviewed and bemoaned statistics about how bad attrition is in our introductory programming courses for computer science majors (CS1). Failure rates of 30%–50% are not uncommon worldwide. There are usually as many suggestions for how to improve the course as there are faculty in the meeting. But do we know anything that really works?
We do, and we have research evidence to back it up. Pair programming, peer instruction, and media computation are three approaches to reforming CS1 that have shown positive, measurable impacts. Each of them is successful separately at improving retention or helping students learn, and combined, they have a dramatic effect.
Definitely the most interesting MOOC experiment I’ve seen in the latest batches — an edX CS1 aimed at community college students, and offered in a blended format. I very much hope that they do good assessment here. If MOOCs are going to serve as an alternative to face-to-face classes for the majority of students, they have to work at the community college level, where retention and completion rates are already too low. If MOOCs are going to be part of a solution, part of making education better, then they need better completion rates than face-to-face classes achieve.
The fast-moving world of online education, where anyone can take classes at a world-famous university, is making a new foray into the community college system, with a personal twist.
In a partnership billed as the first of its kind, the online education provider edX plans to announce Monday that it has teamed up with two Massachusetts community colleges to offer computer science classes that will combine virtual and classroom instruction.
Beginning next term, Bunker Hill and MassBay community colleges will offer versions of an online MIT course that will be supplemented with on-campus classes. Those classes, to be taught by instructors at the two-year schools, will give students a chance to review the online material and receive personal help.
“This allows for more one-to-one faculty mentoring” than exclusively online courses, said John O’Donnell, president of MassBay Community College in Wellesley. O’Donnell added that the schools’ involvement allows edX “to test its course content on a broader range of students.”
Students will pay the same amount they would for a standard class.
From Leigh Ann Sudol-DeLyser (firstname.lastname@example.org):
I am looking for faculty who are able to help me find subjects for my final study of my PhD thesis. I have built an online pedagogical IDE which uses problem knowledge to give students feedback about algorithmic components as they are writing code for simple array algorithms.
I am looking for faculty who are willing to assign a 5-problem sequence as part of a homework assignment or final exam review in a CS1 course in Java. The problems consist of writing code to find the sum of an array of integers, find the maximum value in an array of integers, count the number of values within a given range, and complete an indexOf method for an array of integers. These problems are similar to ones you might find in a system like CodingBat, where students are given a method header and asked to implement the body of a single method.
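For a sense of scale, the four array problems named above are all single-loop exercises. Reference solutions might look like this (sketched in Python for brevity; the study itself uses Java, and these names are mine, not the study’s):

```python
# Reference solutions for the four array exercises: sum, max,
# count-in-range, and indexOf. Names and signatures are illustrative.

def array_sum(xs):
    """Sum of an array of integers."""
    total = 0
    for x in xs:
        total += x
    return total

def array_max(xs):
    """Maximum value in a non-empty array of integers."""
    biggest = xs[0]
    for x in xs:
        if x > biggest:
            biggest = x
    return biggest

def count_in_range(xs, lo, hi):
    """Count the values that fall within [lo, hi]."""
    count = 0
    for x in xs:
        if lo <= x <= hi:
            count += 1
    return count

def index_of(xs, target):
    """Index of the first occurrence of target, or -1 if absent."""
    for i in range(len(xs)):
        if xs[i] == target:
            return i
    return -1
```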
If you are willing to help me graduate (please!) send me your name, the university you teach at, and the number of students in your class and I will contact you with login codes for the students and further directions. I am looking for classes of all sizes from all types of colleges and universities. Please forward to your CS1 instructors where applicable.
This class sounds cool and similar to our “Computational Freakonomics” course, but at the data-analysis stage rather than the statistics stage. I found that Allen Downey has taught another similar course, “Think Stats,” which dives into the algorithms behind the statistics. It’s an interesting set of classes that focus on relevance, introducing computing through a real-world data context.
The most unique feature of our class is that every assignment (after the first, which introduces Python basics) uses real-world data: DNA files straight out of a sequencer, measurements of ocean characteristics (salinity, chemical concentrations) and plankton biodiversity, social networking connections and messages, election returns, economic reports, etc. Whereas many classes explain that programming will be useful in the real world or give simplistic problems with a flavor of scientific analysis, we are not aware of other classes taught from a computer science perspective that use real-world datasets. (But, perhaps such exist; we would be happy to learn about them.)
Blockly is a web-based, graphical programming language. Users can drag blocks together to build an application. No typing required.
Check out the demos:
Maze – Use Blockly to solve a maze.
RTL – See what Blockly looks like in right-to-left mode (for Arabic and Hebrew).
Blockly is currently a technology preview. We want developers to be able to play with Blockly, give feedback, and think of novel uses for it. All the code is free and open source. Join the mailing list and let us know what you think.
CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts
Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!
The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.
This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.
I’ve raised this question before, but since I just saw Nora Newcombe speak at NCWIT, I thought it was worth raising the issue again. Here’s my picture of one of her slides — could definitely have used jitter-removal on my camera, but I hope it’s clear enough to make the point.
This is from a longitudinal study, testing students’ visual ability, then tracking what fields they go into later. Having significant visual ability most strongly predicts an Engineering career, but in second place (and really close) is “Mathematics and Computer Science.” That score at the bottom is worth noting: Having significant visual ability is negatively correlated with going into Education. Nora points out that this is a significant problem. Visual skills are not fixed. Training in visual skills improves those skills, and the effect is durable and transferable. But, the researchers at SILC found that teachers with low visual skills had more anxiety about teaching visual skills, and those teachers depressed the impact on their students. A key part of Nora’s talk was showing how the gender gap in visual skills can be easily reduced with training (relating to the earlier discussion about intelligence), such that women perform just as well as men.
The Spatial Intelligence and Learning Center (SILC) is now in its sixth year of a ten-year program. I don’t think that they’re going to get to computer science before the 10th year, but I hope that someone does. The results in mathematics alone are fascinating and suggest some significant interventions for computer science. For example, Nora mentioned an in-press paper by Sheryl Sorby showing that teaching students to improve their spatial skills improved their performance in Calculus, and I have heard that she has similar results for computer science. Could we improve learning in computer science (especially data structures) by teaching spatial skills first?
A common question I get about contextualized approaches to CS1 is: “How can we possibly offer more than one introductory course with our few teachers?” Valerie Barr has a nice paper in the recent Journal of Computing Sciences in Schools where she explains how her small department was able to offer multiple CS1’s, and the positive impact it had on their enrollment.
The department currently has six full-time faculty members and a six-course-per-year teaching load. Each introductory course is taught studio-style, with integrated lecture and hands-on work. The old CS1 had a separate lab session and counted as 1.5 courses of teaching load. The introductory courses (except Programming for Engineers) now continue this model, meeting for the additional time and counting as 1.5 courses for the faculty member, which allows substantial time for hands-on activities. Each section is capped at 18 students and taught in a computer lab in order to facilitate the transition between lecture and hands-on work.
In order to make room in the course schedule for the increased number of CS1 offerings, the department eliminated the old CS0 course. A number of additional changes were made to accommodate the new approach to the introductory CS curriculum: the number of prescribed courses for the major was reduced from 8 (out of 10) to 5 (which, by increasing the number of electives, has the added benefit of giving students more flexibility and choice within the general guidelines of the major); elective courses were put on a rotation schedule so that each one is taught every second or third year; and a four-year schedule of offerings was made available to students so that they can plan around the course rotation.
The key interesting phrase in Rich’s quote below is “higher-quality.” Could we get higher-quality on-line for CS1? And by what measure?
There’s been a huge thread in the SIGCSE members’ list asking, “Is Python better than Java for introductory computer science?” I thought that the best insight in that thread came from Doug Blank when he pointed out that we don’t have one outcome for CS1. How do you define “better” when we don’t agree on where we’re trying to get? Is “better” getting more majors? Getting more retention in CS1? Getting more retention into the Sophomore year? Getting more students into internships in the summer? Getting more access? Or getting more learning — and then, about CS knowledge or CS skills? And which knowledge and skills?
It’s an open research question (or maybe an engineering challenge) whether we can create an on-line introductory computer science course that is better than a face-to-face CS1. But a bigger challenge is whether we can agree on what “better” means.
Mr. DeMillo: All you have to do is add up the amount of money spent on courses. Just take an introduction to computer science. Add up the amount of money that’s spent nationwide on introductory programming courses. It’s a big number, I’ll bet. What is the value received for that spend? If, in fact, there’s a large student population that can be served by a higher-quality course, what’s the argument for spending all that money on 6,000 introduction to programming courses?
A YouTube video of my talk (with Alan’s introduction) at C5 is now available.
Interesting finding that supporting older adults in learning better problem-solving skills seems to lead to a change in a personality trait called “openness.” I find this interesting for two reasons. First, it’s wonderful to see continuing evidence about the plasticity of the human mind. Surprisingly little is “fixed” or “innate.” Second, I wonder how “openness” relates to “self-efficacy.” We heard at ICER 2011 how self-efficacy plays a significant role in student ability to succeed in introductory computing. Is there an implication here that if we could improve students’ understanding of computer science, before programming, we could enhance their openness or self-efficacy, possibly leading to more success? That hypothesis is related to what we aim for in CSLearning4U (that studying programming in the small, worksheet-style, will make programming sessions more effective — more learning, less time, less pain), and I’d love to see more evidence for it.
Personality psychologists describe openness as one of five major personality traits. Studies suggest that the other four traits (agreeableness, conscientiousness, neuroticism and extraversion) operate independently of a person’s cognitive abilities. But openness — being flexible and creative, embracing new ideas and taking on challenging intellectual or cultural pursuits — does appear to be correlated with cognitive abilities.