Posts tagged ‘computing education research’

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and educators. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not about getting teachers access to journal articles, because those articles aren’t written to speak to teachers or address their concerns. There have to be efforts from both directions, to help teachers to grok researchers and researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach before iteration. Other respondents to that post pointed out newer forms of iteration which are much simpler. Anderson and Wiedenbeck’s work was in the 1980s. That sounds great — I would hope that we can do better than what we did 30 years ago. But I do not know of studies showing that the new ways work better or differently than the ways of the 1980s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.
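
To make the contrast concrete, here’s a small Python sketch of the three forms for the same task. This is my own illustration, not material from any of the studies above:

    # Summing a list three ways (illustrative only).

    def sum_indexed(numbers):
        # Index-managed loop, closer in spirit to the iteration
        # forms of the 1980s: an explicit index and bound to track.
        total = 0
        i = 0
        while i < len(numbers):
            total = total + numbers[i]
            i = i + 1
        return total

    def sum_foreach(numbers):
        # The modern FOR EACH form: no index bookkeeping at all.
        total = 0
        for n in numbers:
            total = total + n
        return total

    def sum_recursive(numbers):
        # The recursive form: a base case plus a self-call on a smaller list.
        if not numbers:
            return 0
        return numbers[0] + sum_recursive(numbers[1:])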

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that making predictions before seeing a program execute helps with learning:

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
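
To make that concrete, here’s one hypothetical stimulus of the kind such an experiment might use (my invention, not an item from any actual study):

    # Both groups would see this program; only the experimental group
    # writes down a prediction before seeing it run.
    x = [1, 2, 3]
    y = x          # y names the same list as x; no copy is made
    y.append(4)
    print(x)       # prints [1, 2, 3, 4], which many students don't expect

    # Prediction prompt (experimental group only):
    #   "What will this program print? Write your answer before running it."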

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One offered an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., it wasn’t due to which students UTeach served. A second post responded to Hadi directly to show that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value, across students in different demographic groups? Comparing to the national average (and disaggregated averages) is a reasonable way to do it.
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula, and no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to take a bunch of CSP students and randomly assign them to either UTeach or Code.org, making sure that all relevant variables (like the percentage of women and underrepresented students) are the same in each group. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.
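
To illustrate what this kind of comparison can and can’t tell us, here’s a minimal Python sketch of a two-proportion test on pass rates. The counts are invented for illustration; they are not real UTeach or Code.org numbers:

    import math

    def two_proportion_z(p1, n1, p2, n2):
        # Pooled two-proportion z-test for a difference in pass rates.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical: curriculum A with an 83% pass rate over 1,000 students,
    # versus a 74% pass rate over 5,000 students elsewhere.
    z = two_proportion_z(0.83, 1000, 0.74, 5000)
    print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap isn't chance alone

Even a large z only says the gap is unlikely to be chance. It says nothing about cause: without random assignment, the populations served may simply differ, which is exactly the issue in the Code.org and UTeach exchange.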

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems, and there should be better communication between them. Researchers have to make explicit the things that might confuse teachers, but they may not realize what those things are. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because assumptions differ between fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

June 15, 2018 at 1:00 am 3 comments

Teach two languages if you have to: Balancing ease of learning and learning objectives

My most recent CACM Blog post addresses a common question in computer science education: Should we teach two programming languages in a course to encourage abstraction, or just one? Does it hurt students to teach two? Does it help them to learn a second language earlier? My answer (in really short form) is “Just teach one, because it takes longer to learn one than you expect. If you teach two or more, students are going to struggle to develop deep understanding.”

But if your learning objective is for students to learn two (or more) languages, teach two or more languages. You’re going to have to pay the piper sometime. Delaying is better, because it’s easier and more effective to transfer deep knowledge than to try to transfer surface-level representations.

The issue is like the question of recursion-first or iterative-control-structures-first. (See this earlier blog post.) If your students don’t have to learn iterative control structures, then teach recursion-only. Recursion is easier and more flexible. But if you have to teach both, teach iteration first. Yes, iteration is hard, and learning iteration-first makes recursion harder to learn later, but if you have to do it, iteration-first is the better order.

There’s a lot we know about making computing easier to learn. But sometimes, we just can’t use it, because there are external forces that require certain learning objectives.


I correct, continue, and explore tangents on this blog post here: https://computinged.wordpress.com/2018/06/15/are-you-talking-to-me-interaction-between-teachers-and-researchers-around-evidence-truth-and-decision-making/

June 4, 2018 at 7:00 am 9 comments

A Place to Get Feedback and Develop New Ideas: WIPW at ICER 2018

Everybody’s got an idea that they’re sure is great, or could be great with just a bit of development. Similarly, everyone has hit a tricky crossroads in their research and could use a little nudge to get unstuck. The ICER Work in Progress workshop is the place to get feedback and help on that idea, and give feedback and help to others on their cool ideas. I did it a few years ago at the Glasgow ICER and had a wonderful day. You learn a lot, and you get a bunch of new insights about your own idea. As Workshop Leader (and the inventor of the ICER Work in Progress workshop series) Colleen Lewis put it, “You get the chance to borrow the brains of some really awesome people to work on your problem.”

Colleen is the Senior Chair again this year, and I’m the Junior Chair-in-Training.

The workshop is only one day and super-fun. If you’re attending ICER this year, please apply for the Work in Progress workshop! https://icer.hosting.acm.org/icer-2018/work-in-progress/ The application is due June 8 (it’s just a quick Google form).

Let Colleen or me know if you have questions!

May 30, 2018 at 7:00 am 2 comments

Is there a “hype cycle” for educational programming languages?

As a longtime Smalltalk-er, I loved this piece: “The 50-year Gartner Hype Cycle for Smalltalk.”

Interesting how the hype cycle applies to Smalltalk:

  • Technology Trigger — the hype began with the famous 1981 BYTE cover and continued throughout the 1980s.
  • Peak of Inflated Expectations — in the 1990s, Smalltalk became the biggest OOP language after C++ and even IBM chose it as the centrepiece of their VisualAge enterprise initiative to replace COBOL.
  • Trough of Disillusionment — Java derailed Smalltalk by being: 1) free; and 2) Internet-ready. Free Squeak (1996) and Seaside web framework (2002) were not enough to save it.
  • Slope of Enlightenment — Pharo was released in 2008 and became the future of Smalltalk, thanks to its remarkable pace of evolution. We are still in this phase, which requires continuing and sustained advocacy.
  • Plateau of Productivity — we are waiting for this phase, perhaps in the next decade. I am sanguine.

Educational programming languages (or maybe just programming languages’ use in education) don’t seem to follow this curve at all.  Does a programming language ever “come back” once it has left classrooms?  Logo? Pascal?  Even if there’s a “Trough of Disillusionment” (e.g., when we realized just how hard C++ and Java are), we still see long-term use. Even if we later realize how good something was (e.g., Logo for integration into curriculum), it doesn’t come back.

I wonder what the similar curve looks like for programming languages in education.

May 18, 2018 at 7:00 am 14 comments

Why are CS students so hard to nudge? A theory for why it’s so hard to promote a growth mindset in CS1

Pearson took a lot of heat recently for trying to improve students’ mindset in My Programming Lab.  I’m slightly worried about the ethics of their “embedded experiment.” I’m more worried that it didn’t work.

Titled “Embedding Research-Inspired Innovations in EdTech: An RCT of Social-Psychological Interventions, at Scale,” the study placed 9,000 students using MyLab Programming into three groups, each receiving different messages from the software as they attempted to solve questions. Some students received “growth-mindset messages,” while others received “anchoring of effect” messages. (A third control group received no messaging at all.) The intent was to see if such messages encouraged students to solve more problems. Neither the students nor the professors were ever informed of the experiment, raising concerns of consent.

The “growth mindset messages” emphasized that learning a skill is a lengthy process, cautioning students offering wrong answers not to expect immediate success. One example: “No one is born a great programmer. Success takes hours and hours of practice.” “Anchoring of effect” messages told students how much effort is required to solve problems, such as: “Some students tried this question 26 times! Don’t worry if it takes you a few tries to get it right.”

As Education Week reports, the interventions offered seemingly no benefit to the students. Students who received no special messages attempted to solve more problems (212) than students in either the growth-mindset (174) or anchoring groups (156). The researchers emphasized this could have been due to any of a variety of factors, as the software is used differently in different schools.

Source: Pearson Embedded a ‘Social-Psychological’ Experiment in Students’ Educational Software [Updated]

Beth Simon and her colleagues tried a similar experiment, reported at ICER 2008.  They did get informed consent.  They tried a similar kind of “nudge” to get students to adopt a growth mindset.  It didn’t work for Beth et al., either.

I advised Kantwon Rogers’ MS in HCI project, where he tried to nudge CS1 students (both on-line and off-line) to have a greater sense of “belongingness” in CS.  Similar to these previous studies, he sent email prompts to students — some just encouraged study skills, and others promoted a sense that they belonged and could succeed in CS.  In almost all of his conditions, belongingness dropped.

What’s going on here?  Why are CS students so impervious to these prompts that have been successful in other settings?

I have a theory.  There’s a notion in the behavioral sciences literature that you get more success changing behavior or promoting attitudes by reducing barriers than by prompting for desired behavior or attitudes.  The analogy is to a large boulder that you want to move: You can push it and push it, or you can just dig away the dirt from the bottom.  The latter is likely to get the boulder rolling without as much effort.

Here’s my theory: Introductory CS classes have systemic issues that encourage a fixed mindset and discourage a sense of belonging. There are too many signals to students that they can’t succeed, that they can’t get better, and that they don’t belong — perhaps especially in times of rising enrollment. Mere nudges are not going to move the boulder.  We’re going to have to remove the barriers to belonging, self-efficacy, and the sense that students can succeed at CS.

May 7, 2018 at 7:00 am 17 comments

Teaching Computational Thinking across an Entire University, With Guest Blogger Roland Tormey

During Spring Break, Barbara and I were invited to go to Switzerland.  Sure, when most people go someplace warm for Spring Break, let’s head to the mountains!

Roland Tormey organized a fascinating workshop at EPFL in Lausanne, Switzerland (see workshop page here) to inform a bold and innovative new effort at EPFL. They want to integrate computational thinking across their entire university, from required courses for freshmen, to support for graduate students doing Computational X (where X is everything that EPFL does).  The initiative has the highest level of administrative support, with the President and Vice-President of Education for EPFL speaking at the workshop.  The faculty really bought in — the room held 80-some folks, and it was packed most of the day.

Roland got a good videographer who captured both of the keynotes well.  I had the first keynote on “Improving Computing Education with Learning Sciences: Methods for Teaching Computing Across Disciplines.”  I argued that we need different methods to teach computing across the curriculum — we can’t teach everyone the same way we teach CS majors headed toward software development.  I talked about Media Computation, predictions (and they caught my audio demo with ukulele playing well), subgoal labeling, and Parsons problems.

Shriram Krishnamurthi had the second keynote on “Curriculum Design as an Engineering Problem.”  He talked about the problems of transfer and how Bootstrap works.  I liked how he broke down the problem of transfer into three requirements: deep structural similarities between the problems, explicit instruction, and a process for performing tasks.  He showed how other design disciplines have multi-stage processes, use multiple representations in their designs, and look at problems from multiple viewpoints.  Mostly in CS classes, we just code.  I learned about how Bootstrap scaffolds problem-solving and includes all of those elements.  I recommend the talk.

Barb’s panel on teaching computational thinking wasn’t captured.  She talked about the methods she’s developed for teaching computing, including her great results on Parsons problems.  In a short talk, she gave a lot of pointers to her work and others’ on how to teach CT.
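
For readers who haven’t seen the format, here’s a tiny Parsons problem written out in Python. This is my own toy example, not an item from Barb’s studies:

    # A Parsons problem presents correct code lines in scrambled order,
    # and the student arranges (and indents) them. Given these lines:
    #
    #     return count
    #     count = count + 1
    #     def count_evens(numbers):
    #     if n % 2 == 0:
    #     count = 0
    #     for n in numbers:
    #
    # one correct arrangement is:
    def count_evens(numbers):
        count = 0
        for n in numbers:
            if n % 2 == 0:
                count = count + 1
        return count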

Roland sent me a note with what he took away from the workshop. I thought it was a great list, so with his permission, I’m including it here:

For me, we also had a lot of other valuable take home points from the day:

(1) We need to work on putting Computational thinking (and maybe Math and Physics too) into the context of the students’ own disciplines — at least through the examples and exercises we choose.

(2) The drive to better develop scientific thinking in disciplines like chemistry and life sciences and the development of CT are entirely consistent, but one shouldn’t eclipse the other. It’s not about replacing existing scientific processes with CT. It’s about augmenting them.

(3) We need to help professors gather data on effective methods of teaching as well as help them become aware of methodologies with demonstrated effectiveness (like the Parsons Problems for example).

(4) The exercises and exercise sessions will be crucial for making the link between CT and disciplines, but this implies giving the doctoral and teaching assistants a clear understanding of the goals and methods of CT. They have to understand what we are trying to achieve.

(5) CT provides an understanding of, a language for, and a toolbox for analysing processes, and these can be applied in a lot of domains. However, that is not going to happen unless we explicitly teach CT in ways that promote near and far transfer.

(6) We need to make the most of the EPFL initiative by properly evaluating the impact, which implies the need to collect some pre-intervention data now.

April 20, 2018 at 7:00 am 10 comments

A job is a strange outcome measure: Udacity drops money-back guarantee on finding a job

Udacity has dropped a money-back guarantee that they were offering to students in some of their Nanodegree programs. The guarantee (with stipulations and caveats) was that students would find a job after getting the nanodegree, or they would get their money back.

An article in Inside Higher Ed (quoted below and linked here) describes some of the tensions. Some other for-profit coding schools offer similar or stronger guarantees; others do not. Ryan Craig, quoted below, suggests that Udacity might not have been hitting its targets for job placements. Does that mean that Udacity was doing something wrong?

A job is such a strange outcome measure for any kind of educational program.  I know some techniques for evaluating someone’s knowledge of programming, and I know how to create educational opportunities that might lead to successful evaluation.  There are factors like student attitude and motivation and whether students engage in deliberate practice that are not entirely within my control.  Even then, I’d be willing to say, “I can design a program where the majority of students will achieve this level of proficiency in coding.”  But a job?  Where I can’t control how the students interview, or where they apply, or what the companies are looking for (if they’re looking at all)?

A job is not a well-defined outcome measure for an educational intervention. That may be what the students are seeking, but they are being unrealistic if they think that any school can guarantee them that.

Ryan Craig, managing director of investment company University Ventures, noted that none of the major employers associated with Udacity will publicly commit to hire or interview nanodegree candidates. Craig pointed to a 2017 report from VentureBeat, which stated that of around 10,000 students who had earned nanodegrees since 2014, around 1,000 had found jobs as a result. “A placement rate of around 10 percent should spell the demise of any last-mile training program,” said Craig.

Craig said the effectiveness of Udacity’s job guarantee was likely very limited for students. “Money-back guarantees don’t address the real guarantee that students are seeking: a job,” said Craig.

Daniel Friedman, co-founder of coding school Thinkful, wrote in January 2016 that Udacity’s guarantee was vaguer and weaker than the guarantees offered by his own company and others such as Bloc and Flatiron School. Such guarantees are common at coding schools, though Friedman noted that some schools have had to drop guarantees because they conflicted with state regulations.

April 13, 2018 at 7:00 am 3 comments
