Archive for March, 2016

Starting to track CS classes in Georgia: Few all-CS teachers

As I noted in my Blog@CACM post about the ECEP Cohort (see post here), some states are starting to track enrollments and sections of CS classes offered. Georgia is one of those, and I got to see the first presentation of these data, given by Dr. Caitlin McMunn Dooley at a CS Task Force meeting. Slides from the presentation are here.

There are five courses that currently count towards high school graduation in Georgia, and these are the ones being most closely tracked: AP CS Level A, CS Principles, IB CS Year 1, IB CS Year 2, and a Georgia-specific course, Programming Games, Apps, and Society. If you look at the counts of how many teachers are teaching each of these courses, it looks like good growth. There are just over 440 high schools in Georgia, so if each of these teachers were at a different high school, 25-50% of Georgia high schools would have a CS teacher. What we don’t know is how many of these teachers are appearing in multiple categories. These are unlikely to be all unique teachers. How many AP CS teachers, for example, are also teaching CS Principles?

[Chart: Counts of teachers per CS course]

The data below were the most surprising to me. Georgia’s state education system is broken into 16 Regional Education Service Agencies (RESAs). We have the counts of the number of teachers of CS classes in each of those 16 RESAs, and we have the count of the number of sections of CS classes offered in each RESA. Notice that the numbers are pretty similar. For the most part, high school CS teachers in Georgia are only teaching one or two sections of CS. We have very few high school teachers who teach CS full-time. There are a few: some teachers who worked with us in “Georgia Computes!” taught five sections of CS each day. Most Georgia CS teachers are likely teaching some other discipline most of the time, and offering just a couple sections of CS.

[Chart: Teachers and sections of CS by Georgia RESA]

If our goal is for all high school students to have access to CS education, we need to have more than one section of 30-40 students per teacher or per school (which is roughly the same thing right now). We need more teachers offering a section of CS and/or we need each teacher to offer more sections of CS. Right now, too few teachers offer CS to too few students.

I don’t know how common these trends are nationwide. Few states are tracking CS classes yet. Georgia is one of the leading states measuring progress in CS education. We need more information to know what’s going on.

March 31, 2016 at 7:18 am 10 comments

Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation

Lauren Margulieux successfully defended her dissertation Using Subgoal Learning and Self-Explanation to Improve Programming Education in March. Lauren’s been exploring subgoal labeling for improving programming education in a series of fascinating and influential papers. Subgoal labels are inserted into the steps of a worked example to explain the purpose for a set of steps.

  • At ICER 2012 (see post here), her paper showed that subgoal labels inserted into App Inventor videos led to improved learning, retention (a week later), and even transfer to new App building problems, all compared to the exact same videos without the subgoal labels. This paper was cited by Rob Miller and his students at MIT in their work developing crowdsourced subgoal labels for videos (see post here).
  • At ICER 2015 (see post here), Lauren and Briana Morrison showed that subgoal labels also improved learning for textual programming languages, but the high cognitive load of textual programming languages made some forms of subgoal labeling less successful than studies in other disciplines would predict. That paper won the Chairs’ Award at ICER.
  • At SIGCSE 2016 (see post here), Briana presented a paper with Lauren where they showed that subgoal labeling also improved performance on Parsons Problems.

In her dissertation work, Lauren returned to the challenges of the ICER 2015 paper: Can we make subgoal labeling even more successful? She went back to using App Inventor, to reduce the cognitive load from teaching a textual language.

She compared three different ways of using subgoal labeling.

  • In the passive condition, students were just given subgoal labels like in her first experiments.
  • In the active condition, students were given a list of subgoal labels. The worked example was segmented into sets of steps that achieved a subgoal, but the label was left blank. Students had to pick the right subgoal label for each blank.
  • In the constructive condition, students were just given a blank and asked to generate a subgoal label. She had two kinds of constructive conditions. One was “guided” in that there were blanks above sets of steps. The other was “unguided” — just a long list of steps, and she asked students to write labels in the margins.

Lauren was building on a theory that predicted that the constructive condition would have the best learning, but would also be the hardest. She provided two scaffolds.

  • For the conditions where it made sense (i.e., not the passive condition), she provided feedback: she showed half the participants the same worked examples with the experimenter’s labels.
  • For half the constructive participants, the label wasn’t blank. Instead, there was a hint. All the steps that achieved the same subgoal were labeled “Label 1,” and all the steps that achieved a different subgoal were labeled “Label 2,” and so on.

Here’s the big “interesting/surprising” graph from her dissertation.

[Graph: Lauren’s “interesting/surprising” dissertation results]

As predicted, constructive was better than active or passive. What’s interesting is that the very best performance came from guided constructive participants with feedback but no hints, and from guided constructive participants with hints but no feedback. Now that’s weird. Why would having more support (both hints and feedback) lead to worse performance?

There are several possible hypotheses for these results, and Lauren pursued one of these one step further. Maybe students developed their own cognitive model when they constructed their own labels with hints, and seeing the feedback (experimenter’s labels) created some kind of dissonance or conflict. Without hints, maybe the feedback helped them make sense of the worked example.

Lauren ran one more experiment where she contrasted getting scaffolding with the experimenter’s labels versus getting scaffolding with the student’s labels (put in all the right places in the worked example). Students who were scaffolded with their own labels performed better on later problem solving than those who were scaffolded with experimenter labels. Students scaffolded with experimenter labels did not perform better than those who did not receive any scaffolding at all. Her results support this hypothesis — the experimenter’s labels can get in the way of the understanding that the students are building.

[Graph: Problem-solving performance when scaffolded with learner-generated labels]

There are several implications from Lauren’s dissertation. One is that we can do even better than just giving students labels — getting students to write the labels themselves is even better for learning. Feedback isn’t the most critical part of the learning when subgoal labeling, which is surprising and fascinating. Constructive subgoal labeling lends itself to an online implementation, which is the direction that Lauren is explicitly exploring. How do we build effective programming education online?

Lauren has accepted an Assistant Professor position in the Learning Technologies Division at Georgia State University. I’m so glad for her, and even happier that she’s nearby so that we can continue collaborating!

March 29, 2016 at 9:41 pm 7 comments

Survey explains one big reason there are so few women in technology

Betsy DiSalvo and I did a study of women in computing who chose not to participate in our OMS CS program.  One of the reasons we heard was that these women were experienced with computing education. They all had undergraduate degrees in computing. Every one of them talked about the sexism rampant in their classes and in the industry.  They were unwilling to be in a mostly-male online program.

We used to talk about getting the word out to women about the great jobs available in the tech industry, and about how that would attract more women. I fear that women today who are choosing not to go into the tech industry are doing so because they do know what it’s like.

A new study finds that sexism is rampant in the tech industry, with almost two-thirds of women reporting sexual harassment and nearly 90 percent reporting demeaning comments from male colleagues. The study, called “Elephant in the Valley,” surveyed 200 women who work at tech companies, including large companies like Google and Apple as well as start-ups. The study focused on women who had 10 years of experience in the industry, and most worked in Silicon Valley.

Source: A new survey explains one big reason there are so few women in technology – Vox

March 28, 2016 at 7:26 am 5 comments

Computing Education Research and the Technology Readiness Level

I just learned about this Technology Readiness Level (see Wikipedia page here) and found it interesting.  Does it make sense for computing education research, or any education research at all?  Aren’t we too pragmatic when it comes to education research — we don’t become interested unless it can really work in classrooms?  Or maybe early-stage education research is just called “psychology”?

There’s a useful high-tech concept called the Technology Readiness Level that helps explain why Uber pounced when it did. NASA came up with this scale to gauge the maturity of a given field of applied science. At Level 1, an area of scientific inquiry is so new that nobody understands its basic principles. At Level 9, the related technology is so mature it’s ready to be used in commercial products. ‘‘Basically, 1 is like Newton figuring out the laws of gravity, and 9 is you’ve been launching rockets into space, constantly and reliably,’’ says Jeff Legault, the director of strategic business development at the National Robotics Engineering Center.

Source: Uber Would Like to Buy Your Robotics Department – The New York Times

March 25, 2016 at 8:03 am 2 comments

NYPost: The folly of teaching computer science to high school kids–CS teaching and the teacher shortage

I’ve raised my concerns about where we’re going to find enough teachers for the NYC initiative (see blog post here).  I found it interesting that the New York Post is raising a related concern.  They’re going one step further than I did.  In general, we have a national shortage of teachers.  Will growing the ranks of CS teachers steal teachers away from math and science?

For instance, who the heck is going to teach it? There is already a shortage of qualified math and science teachers across the country. And let’s stipulate that the pool of people able to teach computer science is much smaller than those who can teach biology. And then there’s this: What kind of recent graduate with any knowledge of computer science would volunteer to teach in the New York public schools? They make oodles more money in business and get oodles more respect and opportunities for merit-based advancement in a private or parochial school.

Source: The folly of teaching computer science to high school kids | New York Post

March 23, 2016 at 7:27 am 12 comments

Infographic: What Happened To Women In Computer Science? 

The basic facts of this infographic were things I knew. Some of the details, particularly at the end, were new to me — like I didn’t know that the quit-rate gap between men and women increased with age. (Thanks to Deepak Kumar, who pointed to this infographic on Facebook.)

Source: What Happened To Women In Computer Science? | Women Who Code

March 21, 2016 at 7:50 am 2 comments

Forbes weighs in on Computational Thinking: I’m one of *those* critics!

Based on the Forbes article (quoted below), I can now be referred to as Reviewer #2 (see post explaining that academic meme).  I am one of *those* critics.

I’m not apologizing — I still don’t see evidence that we can teach computational thinking the way it’s been described (as I discussed here).  For example, is it true that “Computational thinking can also help in understanding and explaining how things work”?  By learning about computational things, students will learn how to understand non-computational things?  Maybe, but I don’t see much research trying to achieve that or to measure whether it’s happening.  I do believe that you can use computational things to learn about non-computational things, through modeling and simulation.  But that’s different from saying that “computational thinking” will get you there.

The defense offered in Forbes (“Despite almost a decade of efforts”) is a weak one.  There are lots of things that humans have believed for a lot longer than a decade that are still wrong.  Lamarckian theories of evolution?  Spontaneous generation? Flat Earth?  Length of time of a belief is not a measure of its truth.

Young students in grades K-6 should learn the basic ideas in computing and how to solve problems computationally.  Computational thinking can also help in understanding and explaining how things work. Computational thinking can be taught as a complement to science and to principles of engineering design. It can also be taught to support students’ creative expression and artistic talents.  Despite almost a decade of efforts to define computational thinking, there are still critics that suggest we don’t know what computational thinking means or how to measure it. The previously mentioned work in standards setting and assessment is helping to more clearly define computational thinking and how it can be incorporated in the classroom.

Source: Thawing from a Long Winter in Computer Science Education – Forbes

March 20, 2016 at 7:27 am 3 comments

Brain training, like computational thinking, is unlikely to transfer to everyday problem-solving

In a recent blog post, I argued that problem-solving skills learned for solving problems in computational contexts (“computational thinking”) were unlikely to transfer to everyday situations (see post here).  We see a similar pattern in the recent controversy about “brain training.”  Yes, people get better at the particular exercises (e.g., people can learn to problem-solve better when programming). And they may still be better years later, which is great. That’s an indication of real learning.  But they are unlikely to transfer that learning to non-exercise contexts. Most surprisingly, they are unlikely to transfer that learning even though they are convinced that they do.  Just because you think you’re doing computational thinking doesn’t mean that you are.

Ten years later, tests showed that the subjects trained in processing speed and reasoning still outperformed the control group, though the people given memory training no longer did. And 60 percent of the trained participants, compared with 50 percent of the control group, said they had maintained or improved their ability to manage daily activities like shopping and finances. “They felt the training had made a difference,” said Dr. Rebok, who was a principal investigator.

So that’s far transfer — or is it? When the investigators administered tests that mimicked real-life activities, like managing medications, the differences between the trainees and the control group participants no longer reached statistical significance.

In subjects 18 to 30 years old, Dr. Redick also found limited transfer after computer training to improve working memory. Asked whether they thought they had improved, nearly all the participants said yes — and most had, on the training exercises themselves. They did no better, however, on tests of intelligence, multitasking and other cognitive abilities.

Source: F.T.C.’s Lumosity Penalty Doesn’t End Brain Training Debate – The New York Times

March 18, 2016 at 7:26 am 5 comments

ICER 2016 Call for Papers: Abstracts due April 15

Call for Papers and Submissions

ICER’16: International Computing Education Research Conference

September 8-12, 2016, Melbourne, Australia
http://icer.hosting.acm.org/

The twelfth annual ACM International Computing Education Research (ICER) Conference aims to gather high-quality contributions to the computing education research discipline. We invite submissions across a variety of categories for research investigating how people of all ages come to understand computational processes and devices, and empirical evaluation of approaches to improve that understanding in formal and informal learning environments.

Research areas of particular interest include:

  • discipline based education research (DBER) in computer science (CS), information sciences (IS), and related disciplines
  • learnability/usability of programming languages and the psychology of programming
  • pedagogical environments fostering computational thinking
  • design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge development
  • learning sciences work in the computing content domain
  • learning analytics and educational data mining in CS/IS content areas
  • informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals
  • measurement instrument development and validation (e.g., concept inventories, attitudes scales, etc) for use in computing disciplines
  • research on CS/computing teacher thinking and professional development models at all levels

Submission Types

Continuing ICER’s longstanding commitment to fostering discussion and exploring new research areas, we offer several ways to contribute.

  • Research Papers: Empirical and theoretical contributions to the computing education research literature will be peer-reviewed by members of the international program committee and will be published in conference proceedings in the ACM digital library. (8 pages, plus references)
  • Lightning Talks: Brief, timed talks highlighting a research issue/opportunity, a new project, or other early-stage work. A lightning talk may be accompanied by a poster. (abstract submission)
  • Posters: Posters provide another avenue to disseminate your work in computing education at ICER. (abstract submission)
  • Work-in-Progress Workshop: An in-depth workshop environment providing extensive feedback on in-progress research (application form required)
  • Doctoral Consortium: PhD students pursuing research related to computing education are invited to submit abstracts for participation in the doctoral consortium. Abstracts from accepted participants are published in the conference proceedings (application and 2-page extended abstract)
  • Co-located Workshops: Pre/post conference workshop proposals related to computing education research are welcomed. (contact conference chairs)

For full details and submission information, see the conference website: http://icer.hosting.acm.org/icer-2016/cfp/

Important Deadlines

15 April, 2016 – Research paper abstract submission (mandatory)
22 April, 2016 – Research paper full copy, blind submission
22 April, 2016 – Co-located workshop proposals
20 May, 2016 – Doctoral consortium submissions due
3 June, 2016 – Notification to research paper authors
17 June, 2016 – Lightning talk & poster abstracts
17 June, 2016 – Work in progress workshop application deadline

Conference Chairs

Judy Sheard, Monash University, Australia – judy.sheard@monash.edu
Josh Tenenberg, University of Washington, Tacoma, USA – jtenenbg@uw.edu
Donald Chinn, University of Washington, Tacoma, USA – dchinn@uw.edu
Brian Dorn, University of Nebraska at Omaha, USA – bdorn@unomaha.edu

March 16, 2016 at 8:05 am Leave a comment

The capacity crisis in academic computer science – guest blog post by Eric Roberts

I’ve shared Eric’s insights into computing enrollments in the past (for example here and here). With his permission, I’m sharing his note after the recent SIGCSE 2016 conference.

Welcome back from Memphis and SIGCSE 2016! At this year’s conference, we heard many stories about skyrocketing student interest in computer science and the difficulty many colleges and universities are having in meeting that demand. For several years now, evidence has been building that academic computer science is heading toward a capacity crisis in which the pressures of expanding enrollment overwhelm the ability of institutions to hire the necessary faculty. Those signs are now clearer than ever.

The challenges involved in developing the necessary capacity are not easy. Fortunately, they are also not entirely new. Academic computer science has faced similar capacity crises in the past, most notably in the mid 1980s and the late 1990s. Each of those periods saw an increase in student interest in computer science at a pace so rapid that universities were unable to keep up.

For better or worse, I have had a ringside seat during each of these enrollment surges. In the mid 1980s, I was chairing the newly formed department of Computer Science at Wellesley College. During the dot-com expansion in the late 1990s, in addition to directing the undergraduate program at Stanford, I was a member of the ACM Education Board and a contributor to the National Academies study panel convened to address the situation.

In the current crisis, I have been asked to offer my historical perspective in many different venues. I was one of the authors — along with Ed Lazowska at the University of Washington and Jim Kurose at the National Science Foundation — of a talk on this issue at the 2014 Snowbird Conference and the National Center for Women in Information Technology’s 10th Anniversary Summit earlier that year. Along with Tracy Camp, who is the cochair of the Computing Research Association’s committee to study the impact of rapidly increasing enrollments and who presented a panel discussion at this year’s SIGCSE, I have been appointed to the National Academies’ Committee on the Growth of Computer Science Undergraduate Enrollments, which holds its first face-to-face meeting in two weeks.

After listening to the audience comments at the SIGCSE panel on the CRA effort, it is clear that many people struggling to keep up with the increased enrollments are still having trouble convincing their administrations that the problems we face are real and more than a transient maximum in a cyclical pattern. In many ways, the difficulty administrators have in appreciating the severity of the problem is understandable because our situation is so far outside what is familiar to most academics. It is hard for most people in universities to imagine a field in which the number of open positions exceeds the number of applicants by a factor of five or more. Similarly, it is almost impossible to imagine that a faculty shortage could become so extreme that universities and colleges would be forced to cut enrollments in half, despite high demand from both students and prospective employers. Both of those situations, however, are part of the history of academic computer science. The crisis our field faces today is at least as serious as it has been at any time in the past.

It occurred to me that it might help many of you make the case for more resources if I shared a white paper on the history of the crisis that I wrote earlier in the year, originally to make the case at Stanford but now also to support the deliberations of the National Academies’ committee. I have put the white paper on my web site, both as a single PDF report and as a web document with internal links to facilitate browsing.

I welcome any comments that you have along with ideas about solutions that I can share with the full National Academies’ committee.

Sincerely,

Eric Roberts

Charles Simonyi Professor of Computer Science, emeritus

Stanford University

March 14, 2016 at 8:02 am 9 comments

Helping Adults to Learn while Saving Face: Ukulele and MOOCs at Dagstuhl

I played ukulele every night while at the Dagstuhl seminar on CS learning assessment. Most nights, there was a group of us — some on guitars from the music room, one on piano, and several singers. It was wonderful fun! I don’t often get a chance to play in a group of other instruments and other singers, and I learned a lot about styles of play and synchronizing. The guitar players were all much more experienced, but we were all playing and singing music we were seeing for the first time. We weren’t performance-quality — there were lots of off-key notes and missed entrances/exits. We were a bunch of amateurs having fun. (Thanks to Ben Shapiro, Jan Erik Moström, Lisa Kaczmarczyk, and Shriram Krishnamurthi for sharing these photos.)

[Photo collage: playing music at Dagstuhl]

We were not always a popular group. Some participants groaned when the guitars and ukulele came into the room. One commenter asked if the singing was meant to drown out the playing.  Another complained that our choice of songs was “wrong” for the instruments and voices. Clearly, some of the complaints were for humorous effect, and some were pretty funny.

Here’s the thought experiment: Imagine these were kids playing music and singing. I predict the result would be different. I doubt the listeners would criticize the players and singers in the same way, not even for humorous effect. While adults certainly criticize children when in a teacher-student or mentoring relationship, casual criticism of a child playing or practicing by adult passersby is unusual.

Why is it different for adults?

I’ve talked before about the challenges of adult learning. We expect adults to have expertise. We expect quality. It’s hard for adults to be novices. It’s hard for adults to learn and to save face.  My colleague Betsy DiSalvo points out that we typically critique people at a near-peer level of power — we don’t casually critique those with much less power than us (children) because that’s mean, and we don’t casually critique our bosses and managers (to their faces) because that’s foolish.  Getting critiqued is a sign that you’re recognized as a peer.

After her work at Xerox PARC, Adele Goldberg helped develop learning systems, including systems for the Open University in the UK. She once told me that online systems were particularly important for adult learners. She said, “It’s hard for people with 20 years of expertise in a field to raise their hands and say that they don’t know something.”

Amy Ko framed MOOCs for me in a new way at the Dagstuhl Seminar on Assessment in CS. In the discussion of social and professional practice (see previous blog post), I told her about some ideas I had about helping people to retrain for the second half of life. We live much longer than people did 30-50 years ago. Many college-educated workers can expect a work life into our 70’s. I’ve been wondering what it might be like to support adult students who might retrain in their 40’s or 50’s for another 20-year career. Amy pointed out that MOOCs are perfect for this.

College-educated professionals currently in their careers do have prior education, which is a population with which MOOCs are most successful. MOOCs can allow well-educated students to retrain themselves as time permits and without loss of face. A recent Harvard study shows that students who participate in Georgia Tech’s MOOC-based OMS CS program are in a demographic unlikely to have participated in a face-to-face MS in CS program (see page here). The MOOCs are serving an untapped need — it’s not necessarily reaching those who wouldn’t have access to education otherwise, but it can be a significant help to people who want to re-train themselves.

There are lots of uses of MOOCs that still don’t make sense to me.  Based on the empirical evidence of MOOCs today (in their current forms), I argue that:

  • MOOCs are not going to democratize education.  They have not been effective at motivating novices to learn required content, as opposed to elective or chosen content.
  • MOOCs are unlikely to broaden participation in computing.  Betsy DiSalvo and I ran a study about why women aren’t participating in OMS CS.  Those reasons are unlikely to change soon.
  • MOOCs may not work for adults who are required or asked to retrain, as opposed to those who choose to retrain.  Motivation matters. I have not yet seen convincing evidence that MOOCs can play a significant role in developing new CS teachers.  It’s hard to convince teachers to learn to be CS teachers — they’re not necessarily motivated to do so. Without the intrinsic motivation of choosing to be there, they may not complete.  A teacher who doesn’t complete doesn’t know the whole curriculum.

Adults will still have to have tough skins when practicing their new skills. We expect a lot of expertise out of the starting gate for adults in our society, even when retraining for a second career. MOOCs might be excellent preparation for adults in their second acts.

March 11, 2016 at 8:02 am 2 comments

A Dagstuhl Discussion about Social and Professional Practices

Another of the breakouts that I was in at the recent Dagstuhl seminar on assessment in CS learning focused on how we teach and assess social and professional practices in CS classes. This was a small group: Amy Ko, Lisa Kaczmarczyk, Jan Erik Moström, and me.

Amy and her students have been studying (via interviews and surveys) what makes a great engineer.

  • They’re good at decision-making.
  • They’re good at shifting levels of abstraction, e.g., describing how a line of code relates to a business strategy.
  • They have some particular inter-personal skills. They program ego-lessly. They have empathy, e.g., “not an asshole.”
  • Senior engineers often spend a lot of time being teachers for more junior engineers.

Since I’ve worked with Lijun Ni on high school CS teachers, I know some of the social and professional practices of teachers. They have content knowledge, and they have pedagogical content knowledge. They know how to teach. They know how to identify and diagnose student misunderstandings, and they know techniques for addressing these.

We know some techniques for teaching these practices. We can have students watch professionals, by shadowing or using case-based systems like the Ask systems. We can put students in apprenticeships (like student teaching or internships) or in design teams. We could even use games and other simulations. We have to convey authenticity — students have to believe that these are the real social and professional practices. An interesting question we came up with: How would you know if you covered the set of social and professional practices?

Here’s the big question: How similar are these sets? They seem quite different to me, and these are just two possible communities of practice for students in an intro course. Are there social and professional practices that we might teach in the same intro CS course — for any community of practice that the student might later join? My sense is that the important social and professional practices are not in the intersection. The most important are unique to the community of practice.

How would we know if we got there? How would you assess student learning about social and professional practice? Knowledge isn’t enough — we’re talking about practice. We have to know that they’d do the right things. And if you found out that they didn’t have the right practices, is it still actionable? Can we “fix” practices while in undergrad? Maybe students will just do the right things when they actually get out there?

The countries with low teacher attrition spend a lot of time on teacher on-boarding. In Japan, the whole school helps to prepare a new teacher, and the whole school feels a sense of failure if the first-year teacher doesn’t pass the required certification exam. The US tends not to have much on-boarding — not in schools for teachers, nor in industry for software engineers (as Begel and Simon found in their studies at Microsoft). On-boarding seems like a really good place, to me, for teaching professional practice. And since the student is then doing the job, assessment is job assessment.

The problems of teaching and assessing professional practice are particularly hard when you’re trying to design a new community of practice. We’d like computing to be more diverse, to be more welcoming to women and to people from under-represented groups. We’d want cultural sensitivity to be a practice for software professionals. How would you design that? How do you define a practice for a community that doesn’t exist yet? How do you convince students about the authenticity?

It’s an interesting set of problems, and some interesting questions to explore, but I came away dubious. Is this something that we can do effectively in school?  Perhaps it’s more effective to teach professional practices in the professional context?

March 9, 2016 at 8:00 am 2 comments

Notional Machines and Misconceptions in CS: Developing a Research Agenda at Dagstuhl

Seminar

I facilitated a breakout group at the Dagstuhl Seminar on Assessment in Introductory Computer Science. We started talking about what students know and should know, and several of us started using terms like “notional machines” and “mental models” — and there were some strong disagreements. We decided to have a breakout group to define our terms, and came up with a fascinating set of issues and questions.  It was a large group (maybe a dozen?), and I think there were some differences in attendance between the two days, so I’m not going to try to list everyone here.

Definitions

We agreed on the definition of a notional machine (NM) as a set of abstractions that define the structure and behavior of a computational device. A notional machine includes a grammar and a vocabulary, and is specific to a programming paradigm. It’s consistent and predictive — given a notional machine and a program to run on that machine, we should be able to define the result. The abstract machine of a compiler is a possible notional machine. This definition meshes with du Boulay’s original one and the one that Juha Sorva used in his dissertation (which we could check, because Juha was there).
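To make “consistent and predictive” concrete, here’s a minimal sketch of a notional machine for a toy assignment-only language. This is just my illustration in Python (not something the group wrote down): the machine’s entire state is one dictionary, and its behavior is one rule per statement form.

```python
def run(program, memory=None):
    """A toy notional machine for an assignment-only language.

    Each statement is ("set", name, expr), where expr is an int
    literal or the name of a variable whose current value is copied.
    """
    memory = {} if memory is None else dict(memory)
    for op, name, expr in program:
        assert op == "set"
        value = expr if isinstance(expr, int) else memory[expr]
        memory[name] = value  # assignment: an action that updates state
    return memory

# x = 5; y = x; x = 10  =>  {'x': 10, 'y': 5}
print(run([("set", "x", 5), ("set", "y", "x"), ("set", "x", 10)]))
```

Because run is deterministic, anyone who holds this NM will predict the same final memory for the same program. That’s the sense in which an NM defines structure and behavior.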

Note that a NM doesn’t include function. It doesn’t tell a user, “Why would I use this feature? What is it for?” Carsten Schulte and Ashok Goel both found that students tend to focus on structure and behavior, and significant expertise is needed before students can discern function for a program or a NM component.

In CS education, we care about the student’s understanding of the notional machine. Mental model isn’t the right term for that understanding, because (for some) that implies a consistent, executable model in the student’s head. But modern learning science suggests that students are more likely to have “knowledge in pieces” (e.g., diSessa). Students will try to explain one program using one set of predictions about program behavior, and another program in another way. They respond to different programs differently. When Michael Caspersen tried to replicate the Dehnadi and Bornat paper (the “Camel has two humps” paper, and its retraction), he found that students would use one rule set for interpreting assignment in part of the test, and another set of rules later — and they either didn’t care or didn’t notice that they were inconsistent.

An early form of student understanding of the NM is simply mimicry. “I saw the teacher type commands like this. So if I repeat them exactly, I should get the same behavior.” As they start to realize that the program causes behavior, cognitive load limits how much of the NM students can think about at once. They can’t predict as we would like them to, simply because they can’t think about all of the NM components and all of the program at once. The greatest challenge to understanding the NM is Roy Pea’s Superbug — the belief that the computer is in fact a human-like intelligent agent trying to discern our intentions.

We define student misconceptions (about the NM) as incorrect beliefs about the notional machine that are reliable (the student will use it more than once) and common (more than one student uses it). There are lots of misunderstandings that pop up, but those aren’t interesting if they’re not common and reliable. We decided to avoid the “alternative conception” model in science education because, unlike natural science, we know ground truth. CS is a science of the artificial. We construct notional machines. Conceptions are provably correct or incorrect about the NM.

One of the challenging aspects of student understandings of NM is that our current evidence suggests that students never fix existing models. We develop new understandings, and learn new triggers/indices for when to apply these understandings. Sometimes we layer new understandings so deeply that we can’t reach the old ones. Sometimes, when we are stressed or face edge/corner conditions, we fall back on previous understandings. We help students develop new understandings by constraining their process to an appropriate path (e.g., cognitive tutors, cognitive apprenticeship) or by providing the right contexts and examples (as in Betsy Davis’s paper with Mike Clancy, “Mind Your P’s and Q’s”).

Where do misconceptions come from?

We don’t know for sure, but we have hypotheses and research questions to explore:

  • We know that some misconceptions come from making analogies to natural language.
  • Teaching can lead to misconceptions. Sometimes it’s a slip of the tongue. For example, students often confuse IF and WHILE. How often do we say (when tracing a WHILE loop), “IF the expression is true…”? Of course, the teacher may not have the right understanding. Research Question (RQ): What is the intersection between teacher and student misconceptions? Do teacher misconceptions explain most student misconceptions, or do most student misconceptions come from factors outside of teaching?
  • Under-specification. Students may simply not see enough contexts or examples for them to construct a complete understanding.
  • Students incorrectly applying prior knowledge. RQ: Do students try to understand programs in terms of spreadsheets, the most common computational model that most students see?
  • Notation. We believe that = and == do lead to significant misconceptions. RQ: Do Lisp’s set, Logo’s make/name, and Smalltalk’s back arrow lead to fewer assignment misconceptions? RQ: Dehnadi and Bornat did define a set of assignment misconceptions. How common are they? In what languages or contexts?

RQ: How much do students identify their own gaps in understanding of a NM (e.g., edge conditions, problem sets that don’t answer their questions)? Are they aware of what they don’t understand?  How do they try to answer their questions?

One advantage of CS over natural sciences is that we can design curriculum to cover the whole NM. (Gail Sinatra was mentioned as someone who has designed instruction to fill all gaps in a NM.) Shriram Krishnamurthi told us that he designs problem sets to probe the parts of the Java notional machine that he expects students to miss, and his predictions are often right.

RQ: Could we do this automatically given a formal specification for an NM?  Could we define a set of examples that cover all paths in a NM?  Could we develop a model that predicts where students will likely develop misconceptions?

RQ: Do students try to understand their own computational world (e.g., how behavior in a Web page works, how an ATM works, how Web search works) with what we’re teaching them? Kathi Fisler predicts that they rarely do that, because transfer is hard. But if they’re actively trying to understand their computational world, it’s possible.

How do we find and assess gaps in student understanding?

We don’t know how much students think explicitly about a NM. We know from Juha’s work that students don’t always see visualizations as visible incarnations of the NM — for some students, it’s just another set of confusing abstractions.

Carsten Schulte pointed out that Ira Diethelm has a cool way of finding out what students are confused about. She gives them a “miracle question” — if you had an oracle that knew all, what one question would you ask about how the Internet works, or Scratch, or Java? Whatever they say — that’s a gap.

RQ: How do we define the right set of examples or questions to probe gaps in understanding of a NM? Can we define it in terms of a NM? We want such a set to lead to reflection and self-explanation that might lead to improved understanding of the NM.

Geoffrey Herman had an interesting way of finding gaps in NM understanding: using historical texts. Turns out Newton used the wrong terms for many physical phenomena, or at least, the terms he used were problematic (“momentum” for both momentum and velocity) and we have better, more exact ones today. Terms that have changed meaning or have been used historically in more than one way tend to be the things that are hard to understand — for scholars past, and for students today.

State

State is a significant source of misconceptions for students. They often don’t differentiate input state, output state, and internal states. Visualization of state only works for students who can handle those kinds of abstractions. Specification of a NM through experimentation (trying out example programs) can really help if students see that programs causally determine behavior, and if the cognitive load of computing that behavior isn’t too high (emergent behavior is particularly hard). System state is the collection of smaller states, which is a large tax on cognitive load. Geoffrey told us about three kinds of state problems: control state, data state, and indirection/reference state.
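The indirection/reference problems are easy to illustrate. Here’s a tiny Python example (my illustration, not Geoffrey’s) of why reference state is hard: the change to a is invisible in any line that mentions a.

```python
a = [1, 2, 3]
b = a          # b refers to the *same* list as a -- no copy is made
b.append(4)
print(a)       # [1, 2, 3, 4] -- a changed "at a distance" through b
```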

State has temporality, which is a source of misconceptions for students, like the common misconception that an assignment statement defines a constraint, not an action in time. RQ: Why? Raymond Lister wondered about our understanding of state in the physical world and how that influences our understanding of state in the computational world. Does state in the real world have less temporality? Do students get confused about temporality in state in the physical world?
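Here is the constraint-versus-action confusion in four lines of Python (again, my illustration). A student reading assignment as a spreadsheet-like constraint predicts 11 here; the notional machine says 3:

```python
b = 2
a = b + 1   # an action performed now: a becomes 3
b = 10      # a is still 3; nothing maintains a == b + 1
print(a)    # 3 -- assignment is a step in time, not a standing constraint
```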

Another source of misconceptions is state in code, which is always invisible. The THEN part of an IF has implicit state — that block gets executed only if the expression is true. The block within a loop is different from the block after a condition (executed many times, versus once), but they look identical. RQ: How common are code state misconceptions?
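As a sketch of how invisible that state is (once more in Python): the two blocks below are character-for-character identical, but one runs at most once and the other repeats until its test is false.

```python
x = 0
if x < 3:        # tested once; block runs at most one time
    x = x + 1
print(x)         # 1

x = 0
while x < 3:     # re-tested after every pass; block repeats until false
    x = x + 1
print(x)         # 3
```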

Scratch has state, but it’s implicit in sprites (e.g., position, costume). Deborah Fields and Yasmin Kafai found that students didn’t use variables (explicit state) much, but maybe because they didn’t tackle problems that needed them. RQ: What kinds of problems encourage use of state, and better understanding of state?

RQ: Some functional curricula move students from stateless computation to stateful computation. We don’t know if that’s easier. We don’t know if more/fewer/different misconceptions arise. Maybe the reverse is easier?

RQ: When students get confused about states, how do they think about them? How do they resolve their gaps in understanding?

RQ: What if you start students thinking about data (state) before control? Most introductory curricula start out talking about control structures. Do students develop different understanding of state? Different misconceptions? What if you start with events (like in John Pane’s HANDS system)?

RQ: What if you teach different problem-solving strategies? Can we problematize gaps in NM understanding, so that students see them and actively try to correct them?

March 7, 2016 at 7:59 am 9 comments

Friction Between Programming Professionals and Beginners

It’s not obvious that professional programmers are the best people to answer questions for beginners, yet that’s often recommended as a strategy for providing support to CS students when there are too few teachers.  The article below gathers some stories about the user experience, and offers advice on how to make the interaction between programming professionals and beginners more successful.

Where is the most obvious place to ask a programming question? Stack Overflow.

Stack Overflow is a community of 4.7 million programmers, just like you, helping each other. Join them; it only takes a minute. Join the Stack Overflow community to: Ask programming questions, […] — Stack Overflow (the front page)

It sounds like exactly the right place to be. It even sounds friendly. But actually asking questions on Stack Overflow is often far from friendly, for a beginner programmer.

“I gave up programming after the last time I asked a question on StackOverflow.” — commenter on reddit

Stack Overflow users and moderators are quick to downvote and close questions, for a multitude of reasons. These reasons are often surprising to first-time users.

I’m going to pick on Stack Overflow as an example in this article, because it is the most obvious place to ask questions, but the same problems can be seen anywhere that beginners ask questions.

“I must have gone to a couple dozen IRC rooms, whatever online communities I could find. Everywhere I went people shat on me, and I never got an answer to a single question.” — commenter on reddit

Source: Friction Between Programming Professionals and Beginners – Programming for Beginners

March 4, 2016 at 7:43 am 11 comments

How we actually get to #CSforAll in the US: Jan Cuny wins SIGCSE Outstanding Contribution Award

The President’s new “CS for All” initiative is something the federal government can only influence, not mandate.  In the United States, individual states make all school education decisions.  We just had a meeting of our ECEP cohort (the day after the announcement), and talked about where we’re at.  How close are we to CS for All?  What’s involved in getting there?  I did a Blog@CACM post summarizing the reports.

I’m mentioning this because tomorrow (Friday March 4), Jan Cuny will be recognized by SIGCSE for her Outstanding Contribution to CS Education (see announcement here).  Jan has done more for the CS for All effort than anyone else I know.  Her efforts in the NSF Broadening Participation in Computing program have made significant, long-term progress in promoting CS for everyone, not just the people in CS today. It’s a well-deserved award.

Coincidentally, the day after the President’s announcement, a group of state and territory leaders who belong to the Expanding Computing Education Pathways (ECEP) Alliance presented their five-year plans at a meeting near Washington D.C. Leaders from Alabama, California, Connecticut, Georgia, New Hampshire, South Carolina, Maryland, Massachusetts, Puerto Rico, Texas, and Utah described how they plan to grow CS, broaden participation in computing, and develop teachers. These plans give us insight into the progress toward and challenges to achieving CSforAll.

Source: State of the States: Progress Toward CS for All | blog@CACM | Communications of the ACM

March 3, 2016 at 7:52 am 2 comments
