Archive for June, 2018

We might want naive and delusional PhD students

We’re in the midst of cleaning out 25 years of accumulated stuff in our house in order to sell it, buy a new house in Ann Arbor, and move to the University of Michigan by September 1.

As I was cleaning, I found the below — my original statement of purpose that I submitted to the University of Michigan in 1988 to start my doctorate.

I shared it with some friends, ruefully.  It felt silly, as well as grammatically flawed. I really did think that I was going to get a faculty position in “Computer Science and Education” when I graduated in the early 1990’s.  I was naive, maybe even delusional. I had no idea what academic CS was like when I applied. The reality is far different than what I imagined.  At the Home4CS event just this last April, I mentioned that it would be great if we had CS Education faculty slots in Schools of Education today.  As Diane Levitt reported on Twitter, the audience roared with laughter.  How crazy was I to think that we’d have some in the 1990’s?

But now, some positions like that do exist.  There are faculty who have been hired at US higher-education institutions to focus on CS Ed.  My new job at the University of Michigan is a joint position between CS and their Engineering Education Research program.  It took 25 years, but yeah, I’m going to have the kind of job for which I earned my PhD.

Some friends encouraged me to share this statement. Maybe it’s a good thing to have naive new PhD students.  Maybe that’s what we want in PhD students. We want PhD students to think long term, i.e., to have bought into a goal, a set of research questions, or a vision — and be willing to work at it for decades.  Eventually, if the student is really lucky and others are working on similar visions at the same time, the vision no longer seems quite so naive, not quite so delusional.

I’ll be taking some time off from the blog while making the move to Michigan. I may post some guest contributions over the next few weeks, but for now, I’m putting the blog on hiatus.

June 29, 2018 at 7:00 am 5 comments

Visiting NTNU in Trondheim Norway June 3-23

Barbara and I are just back from a three week trip to NTNU in Trondheim, Norway. Katie Cunningham came with us (here’s a blog post about some of her work). Three weeks is enough time to come up with a dozen ideas for blog posts, but I don’t have the cycles for that. So let me just give you the high-level view, with pictures and links to learn more.

We went at the beginning of June because Barb and I (and the University of Michigan) are part of the IPIT network (International Partnerships for Excellent Education and Research in Information Technology) that had its kick-off meeting June 3-5. The partnership is about software engineering and computing education research, with a focus on student and faculty exchange and meetings at each other’s institutions: NTNU, U. Michigan, Tsinghua University, and Nanjing University. I learned a lot about software engineering that I didn’t know before, especially about DevOps.

If you ever get the chance to go to a meeting organized by Letizia Jaccheri of NTNU, GO! She was the organizer for IPIT, co-chair of IDC 2018, and our overall host for our three weeks there. She has a wonderful sense for blending productivity with fun. During the IDC 2018 poster session, she brought in high school students dressed as storybook characters, just to wander around and “bring in a bit of whimsy.” For a bigger example, she wanted IPIT to connect with the NTNU campus at Ålesund, which just happens to be near the Geiranger fjord, one of the most beautiful in Norway. So, she flew the whole meeting to Ålesund from Trondheim! We took a large cruise-ship-like boat with meeting rooms down the fjord. We got in some 5-6 hours of meetings, while also seeing amazing waterfalls and other views, and then visited the Ålesund campus the next day before flying home. We got work done and WOW!

For the next week and a half, we got to know the computing education research folks at NTNU. We were joined at the end of the first week by Elisa Rubegni from the University of Lincoln, and Roberto Martinez-Maldonado came by a couple days later. Barb, Elisa, and I held a workshop on the first Monday after IPIT. A couple days later, we had a half-day meeting with Michalis Giannakos’s group and Roberto, then Elisa led us all in a half-day design exercise (pictured below — Elisa, Sofia, Javi, and Katie). In between, we had individual meetings. I think I met with every one of the PhD students there working in computing education research. (And, in our non-meeting time, Barb and I were writing NSF proposals!)

Michalis’s group is doing some fascinating work. Let me tell you about some of the projects that most intrigued me.

  • Sofia (with Kshitij and Ilias) is lead on a project where they track what kids using Scratch are looking at, both on and off screen. It’s part of this cool project where kids program these beautiful artist-created robots with Scratch. It’s a pretty crazy looking experimental setup, with fiducial markers on notebooks and robots and screens.
  • Kshitij is trying to measure EEG and gaze in order to determine cognitive load in a user interface. Almost all cognitive load measures are based on self-report (including ours). They’re trying to measure cognitive load physiologically, and correlate it with self-report.
  • Katerina and Kshitij are using eye-tracking to measure how undergrads use tools like Eclipse. What I found most interesting was what they did not observe. I noticed in their data that they had no data on using the debugger. They explained that of 40 students, only five people even looked at the debugger. Nobody used data or control flow visualizations at all. I’m fascinated by this — what does it take to get students to actually look at the debuggers and visualizers that were designed to help them learn?
  • Roberto is doing this amazing work with learning analytics in physical spaces, where nurses are working on robot patients. Totally serious — they can gather all kinds of data about where people are standing, how they interact, and when they interact. For tasks like nursing, this is super important for understanding what students are learning.

Then came FabLearn with an amazing keynote by Leah Buechley on art, craft, and computation. I have a long list of things to look up after her talk, including Desmos, computer-controlled cutting machines (which I had never heard of before, and which are way cheaper than 3-D printers but still allow you to do computational craft), and http://blog.recursiveprocess.com/ which is all about learning coding and mathematics. She made an argument that I find fascinating — that art is what helps diverse students reflect their identity and culture in their school, and that’s why students who get art classes (controlling for SES) are more likely to succeed in school and go on to post-secondary schooling. Can computing make it easier to bring art back into school? Can computing then play a role in engaging children with school again?

The next reason we were at NTNU was to attend the EXCITED Centre advisory board meeting. Barb and I were there for the launch of EXCITED in January 2017. It’s a very ambitious project, starting from students making informed decisions to go into CS/IT, helping students develop identities in CS, learning through construction, increasing diversity in CS, and moving into careers. We got to hang out with Arnold Pears, Mats Daniels, and Aletta Nylén of UpCERG (Uppsala Computing Education Research Group), the world’s largest CER group.

Finally, for the last four days, we attended the Interaction, Design and Children Conference, IDC 2018. I wrote my Blog@CACM post for this month about my experiences there. I saw a lot there that’s relevant to people who read this blog. My favorite paper there tested the theory of concreteness fading on elementary school students learning computing concepts. Here’s a picture of a slide (not in the paper) that summarizes the groups in the experiment.

I’ll end with my favorite moment in IDC 2018, not in the Blog@CACM post. We met Letizia’s post-doc, Javier “Javi” Gomez at the end of our first week in Trondheim. Summer weather in Trondheim is pretty darn close to winter in Atlanta. One day, we woke up to 44F and rain. But we lucked out — the weekends were beautiful. On our first Saturday, Letizia invited us all to a festival near her home, and we met Javi and Elisa. That evening (but still bright sunlight), Javi, Elisa, Barb, and I took a wonderful kayaking trip down the Nidelva river. So it was a special treat to be at IDC 2018 to see Javi get TWO awards for his contributions, one for his demo and an honorable mention for his note. The note was co-authored by Letizia, and was her first paper award (as she talks about in the lovely linked blog post). It was wonderful to be able to celebrate the success of our new friends.

On the way back, Barb and I stopped in London to spend a couple days with Alan Kay and his wife, Bonnie MacBird. If I could come up with a dozen blog post ideas from 3 weeks, it’s probably like two dozen per day with Alan and Bonnie, and we had two days with them. Visiting a science museum with an exhibit on early computers (including an Alto!) is absolutely amazing when you’re with Alan. But those blog posts will have to wait until after my blog hiatus.

June 28, 2018 at 7:00 am 2 comments

We can build new programming languages that people will teach, learn, and use: Scratch 3.0 in August

When I come out with blog posts saying that we need new programming languages (like this one), I regularly get a bunch of skepticism.  People will only use industry-approved languages, says one argument.  We need to teach the languages that exist, says another.

Then I just reply, “Scratch.”  It’s real programming, it’s popular, and it’s taught around the world.  We ought to study how Scratch succeeded.  One key insight: Don’t beat your head against the traditional CS1 teachers.  There are a lot more people to teach, and not everyone has to become a software developer.

A new version of Scratch is coming this August!

Source: 3 Things To Know About Scratch 3.0 – The Scratch Team Blog – Medium

June 25, 2018 at 7:00 am 20 comments

It Matters a Lot Who Teaches Introductory Courses if We Want Students to Continue

Thanks to Gary Stager who sent this link to me. The results mesh with Pat Alexander’s Model of Domain Learning. A true novice to a field is not going to pursue studies because of interest in the field — a novice doesn’t know the field. The novice is going to pursue studies because of social pressures, e.g., it’s a requirement for a degree or a job, it’s expected by family or community, or the teacher is motivating.  As the novice becomes an intermediate, interest in the domain can drive further study.  These studies suggest that persistence is more likely to happen if the teacher is a committed, full-time teacher.

The first professor whom students encounter in a discipline, evidence suggests, plays a big role in whether they continue in it.

On many campuses, teaching introductory courses typically falls to less-experienced instructors. Sometimes the task is assigned to instructors whose very connection to the college is tenuous. A growing body of evidence suggests that this tension could have negative consequences for students.

Two papers presented at the American Educational Research Association’s annual meeting in New York on Sunday support this idea.

The first finds that community-college students who take a remedial or introductory course with an adjunct instructor are less likely to take the next course in the sequence.

The second finds negative associations between the proportion of a four-year college’s faculty members who are part-time or off the tenure track and outcomes for STEM majors.

Source: It Matters a Lot Who Teaches Introductory Courses. Here’s Why.

June 22, 2018 at 7:00 am 8 comments

The Story of MACOS: How getting curriculum development wrong cost the nation, and how we should do it better

Man: A Course of Study (MACOS) is one of the most ambitious US curriculum efforts I’ve ever heard about. The goal was to teach anthropology to 10-year-olds. The effort was led by world-renowned educational psychologist Jerome Bruner, and included many developers, anthropologists, and educational psychologists (including Howard Gardner). It won awards from the American Educational Research Association and from other education professional organizations for its innovation and connection to research. At its height, MACOS was in thousands of schools, including whole school districts.

Today, MACOS isn’t taught anywhere. Funding for MACOS was debated in Congress in 1975, and the controversy led eventually to the de-funding of science education nationally.

Peter Dow’s 1991 book Schoolhouse Politics: Lessons from the Sputnik Era is a terrific book which should be required reading for everyone involved in computing education in K-12. Dow was the project manager for MACOS, and he’s candid in describing what they got wrong. It’s worthwhile understanding what happened so that we might avoid it in computing education. I just finished reading it, and here are some of the parts that I found particularly insightful.

First, Dow doesn’t dismiss the critics of MACOS. Rather, he recognizes that the tension is between learning objectives. What do we want for our children? What kind of society do we want to build?

I quickly learned that decisions about educational reform are driven far more by political considerations, such as the prevailing public mood, than they are by a systematic effort to improve instruction. Just as Soviet science supremacy had spawned a decade of curriculum reform led by some of our most creative research scientists during the late 1950s and 1960s, so now a new wave of political conservatism and religious fundamentalism in the early 1970s began to call into question the intrusion of university academics into the schools…Exposure to this debate caused me to recast the account to give more attention to educational politics. No discussion of school reform, it seems, can be separated from our vision of the society that the schools serve.

MACOS was based in the best of educational psychology at the time. Students engaged in inquiry with first-hand accounts, e.g., videos of Eskimos. The big mistake the developers made was they gave almost no thought to how it was going to get disseminated. Dow points out that MACOS was academic researchers intruding into K-12, without really understanding K-12. They didn’t plan for teacher professional development, and worse, didn’t build any mechanism for teachers to tell them how the materials should be changed to work in real classrooms. They were openly dismissive of the publishers who might get the materials into the world.

On teachers: There was ambivalence about teachers at ESI. On the one hand the Social Studies Program viewed its work as a panacea for teachers, a liberation from the drudgery of textbook materials and didactic lessons. On the other, professional educators were seen as dull-witted people who conversed in an incomprehensible “middle language” and were responsible for the uninspired state of American education.

On publishers: These two experienced and widely respected publishing executives listened politely while Bruner described our lofty education aspirations with characteristic eloquence, but the discussion soon turned to practical matters such as the procedures of state adoption committees, “tumbling test” requirements, per-pupil expenditures, readability formulas, and other restrictions that govern the basal textbook market. Spaulding and Kaplan tried valiantly to instruct us about the realities of the educational publishing world, but we dismissed their remarks as the musings of men who had been corrupted by commercialism. Did they not understand that our mission was to change education, not submit to the strictures that had made much of instruction so meaningless? Could not men so powerful in the publishing world commit some of their resources to support curriculum innovation? Had they no appreciation of the intellectual poverty of most social studies classrooms? I remember leaving that room depressed by the monumental conservatism of our visitors and more determined than ever to prove that there were ways to reach the schools with good materials. Our arrogance and naivete were not so easily cured.

By 1971, Dow realizes that the controversies around MACOS could easily have been avoided. They had made choices in their materials that highlighted the challenges of Eskimo life graphically, but the gory details weren’t really necessary to the learning objectives. They simply hadn’t thought enough about their users, which included the teachers, administrators, parents, and state education departments.

My favorite scene in the book is with Margaret Mead who tries to help Dow defend MACOS in Congress, but she’s frustrated by their arrogance and naivete.

Mead’s exasperation grew. “What do you tell the children that for?…I have been teaching anthropology for forty years,” she remarked, “and I have never had a controversy like this over what I have written.”

But Mead’s anger quickly returned. “No, no, you can’t tell the senators that! Don’t preach to them! You and I may believe that sort of thing, but that’s not what you say to these men. The trouble with you Cambridge intellectuals is that you have no political sense!”

Dow spends two chapters describing the controversies around MACOS and their aftermath, including the impact on science education funding at NSF. But he also points out the problems with MACOS as a curriculum. Some of these are likely problems we’re facing in CS for All efforts.

For example, he talks about why MACOS was removed from Oregon schools, using the work of Lynda Falkenstein. (Read the below with an awareness of the Google-Gallup and EdWeek polls showing that administrators and principals are not supportive of CS in schools.)

She concluded that innovations that lacked the commitment of administrators able to provide long-term support and continuing teacher training beyond the initial implementation phase were bound to falter regardless of their quality. Even more than controversy, she found, the greatest barrier to successful innovation was the lack of continuity of support from the internal structure of the school system itself.

I highly recommend Schoolhouse Politics. It has me thinking about what it really takes to get any education reform to work and to scale. The book is light on evaluation evidence that MACOS worked; for example, I’m concerned that MACOS was so demanding that it may have been too much for underprepared students or teachers. Still, I am totally convinced that it was innovative and brilliant. It is one of the best curriculum design efforts I’ve ever read about, in terms of building on theory and design. I am also totally convinced that it wasn’t ready to scale — and the cost of that mistake was enormous. We need to avoid making those mistakes again.

June 18, 2018 at 7:00 am 13 comments

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and educators. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not about getting teachers access to journal articles, because those articles aren’t written to speak to or address teachers’ concerns. There have to be efforts from both directions, to help teachers to grok researchers and researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach before iteration. Other respondents to that post pointed out the newer forms of iteration, which are much simpler. Anderson and Wiedenbeck’s work was in the 1980’s. That sounds great — I would hope that we can do better than what we did 30 years ago. I do not know of studies that show that the new ways work better or differently than the ways of the 1980’s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.
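To make the contrast concrete, here is a small illustration of my own (not from Anderson and Wiedenbeck's materials): the explicit index-and-bound loop of the kind studied in the 1980's, next to the FOR EACH form that modern Python encourages.

    frequencies = [440, 220, 880, 660]

    # The 1980's-style loop: the student manages an index, a bound, and an
    # increment, three separate pieces of machinery that can each go wrong.
    i = 0
    while i < len(frequencies):
        print(frequencies[i] * 2)
        i = i + 1

    # The modern FOR EACH form: the loop variable is the element itself,
    # so there is no index arithmetic for a novice to mismanage.
    for freq in frequencies:
        print(freq * 2)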

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that making predictions before seeing program execution demonstrations helps with learning, something like the following design:

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
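A minimal sketch of how the comparison might be scored, assuming both groups take the same post-test and we simply compare mean scores with an independent-samples t-test (the scores below are made up for illustration; a real study would need a careful design and power analysis):

    from statistics import mean
    from scipy.stats import ttest_ind

    # Hypothetical post-test scores (percent correct) for each group.
    control_scores = [55, 62, 48, 70, 65, 58, 61, 52]      # saw the code, then the execution
    prediction_scores = [68, 72, 59, 75, 66, 70, 74, 63]   # wrote a prediction before the execution

    t_stat, p_value = ttest_ind(prediction_scores, control_scores)
    print(f"control mean = {mean(control_scores):.1f}, "
          f"prediction mean = {mean(prediction_scores):.1f}, p = {p_value:.3f}")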

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One conducted an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., it wasn’t due to which students were served by UTeach. A second post tried to respond to Hadi directly to show that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value, across students in different demographic groups? Comparing to the national average (and disaggregated averages) is a reasonable way to do it.
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula. I know of no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to have a bunch of CSP students and randomly assign them to either UTeach or Code.org, trying to make sure that all relevant variables (like the percentage of women and underrepresented groups) are the same in each. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems. There should be better communication. Researchers have to make explicit the things that teachers might be confused about, but they might not realize what the teachers are confused about. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because their assumptions are different in different fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not to use evidence, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

June 15, 2018 at 1:00 am 5 comments

Workshops for New Computing Faculty in Summer 2018: Both Research and Teaching Tracks

This is our fourth year, and our last NSF-funded year, for the New Computing Faculty Workshops which will be held August 5-10, 2018 in San Diego. The goal of the workshops is to help new computing faculty to be better and more efficient teachers. By learning a little about teaching, we will help new faculty (a) make their teaching more efficient and effective and (b) make their teaching more enjoyable. We want students to learn more and teachers to have fun teaching them. The workshops were described in Communications of the ACM in the May 2017 issue (see article here) which I talked about in this blog post. The workshop will be run by Beth Simon (UCSD), Cynthia Bailey Lee (Stanford), Leo Porter (UCSD), and Mark Guzdial (Georgia Tech).

This year, for the first time, we will offer two separate workshop tracks:

  • August 5-7 will be offered to tenure-track faculty starting at research-intensive institutions.
  • August 8-10 will be offered to faculty starting a teaching-track job at any school, or a tenure-track faculty line at a primarily undergraduate-serving institution where evaluation is heavily based on teaching.

This year we added new organizers, Ben Shapiro (Boulder) for the research-intensive track, and Helen Hu (Westminster) and Colleen Lewis (Harvey Mudd) for the teaching-intensive track.

The new teaching-oriented faculty track is being added this year due to enthusiasm and feedback we heard from past participants and would-be participants. When I announced the workshops last year (see post here), we heard complaints (a little on email, and a lot on Twitter) asking why we were only including research-oriented faculty and institutions. We did have teaching-track faculty come to our last three years of new faculty workshops that were research-faculty focused, and unfortunately those participants were not satisfied. They didn’t get what they wanted or needed as new faculty. Yes, the sessions on peer instruction and how to build a syllabus were useful for everyone. But the teaching-track faculty also wanted to know how to set up their teaching portfolio, how to do research with undergraduate students, and how to get good student evaluations, and didn’t really care about how to minimize time spent preparing for teaching and how to build up a research program with graduate students while still enjoying teaching undergraduate students.

So, this year we made a special extension request to NSF, and we are very pleased to announce that the request was granted and we are able to offer two different workshops. The content will have substantial overlap, but with a different focus and framing in each.

To apply for registration, please apply to the appropriate workshop based on the type of your position: research-focused position http://bit.ly/ncsfw2018-research or teaching-focused position http://bit.ly/ncsfw2018-teaching. Admission will be based on capacity, grant limitations, fit to the workshop goals, and application order, with a maximum of 40 participants. Apply on or before June 21 to ensure eligibility for workshop hotel accommodation. (We will notify respondents by June 30.)


Many thanks to Cynthia Lee who helped a lot with this post

June 12, 2018 at 6:00 am 1 comment

Reflections of a CS Professor and an End-User Programmer

In my last blog post, I talked about the Parsons problems generator that I used to put scrambled code problems on my quiz, study guide, and final exam. I’ve been reflecting on the experience and what it suggests to me about end-user programming.

I’m a computing professor, and while I enjoy programming, I mostly code to build exercises and examples for my students. I almost never code research prototypes anymore. I only occasionally code scripts that help me with something, like cleaning data, analyzing data, or, in this case, generating problems for my students. That makes me a casual end-user programmer — a non-professional programmer writing code to help with some aspect of his job. This is in contrast:

  • To Philip Guo’s work on conversational programmers, who are people who learn programming in order to talk to programmers (see his post describing his papers on conversational programmers). I know how to talk to programmers, and I have been a professional programmer. Now, I have a different job, and sometimes programming is worthwhile in that job.
  • To computational scientists and engineers, which is the audience for Software Carpentry. Computational scientists and engineers might write code occasionally to solve a problem, but more importantly, they write code as part of their research.  I might write a script to handle an odd-job, but most of my research is not conducted with code.

Why did I spend the time writing a script to generate the problems in LaTeX? I was teaching a large class, over 200 students. Mistakes on quizzes and exams at that scale are expensive in terms of emails, complaints, and regrading. Scrambled code problems are tricky. It’s easy to randomly scramble code. It’s harder to keep track of the right ordering. I needed to be able to do this many times.

Was it worthwhile? I think it was. I had a couple Parsons problems on the quiz, maybe five on the study guide, and maybe three on the final exam. (Different numbers at different stages of development.) Each one got generated at least twice as I refined, improved, or fixed the problem. (One discovery: Don’t include comments. They can legally go anywhere, so it only makes grading harder.) The original code only took me about an hour to get working. The script got refined many times as I used it, but the initial investment was well worth it for making sure that the problem was right (e.g., I didn’t miss any lines, and indentation was preserved for Python code) and the solution was correct.

Would it be worthwhile for anyone else to write this script facing the same problems? That’s a lot harder question.

I realized that I brought a lot of knowledge to bear on this problem.

  • I have been a professional programmer.
  • I do not use LiveCode often, but I have used HyperTalk a lot, and the environment is forgiving with lots of help for casual programmers like me. LiveCode doesn’t offer much for data abstraction — basically, everything is a string.  I have experience using the tool’s facility with items, words, lines, and fields to structure data.
  • I know LaTeX and have used the exam class before. I know Python and the fact that I needed to preserve indentation.

Then I realized that it takes almost as much knowledge to use this generator. The few people who might want to use the Parsons problem generator that I posted would have to know about Parsons problems, want to use them, be using LaTeX for exams, and know how to use the output of the generator.

But I bet that all (or the majority?) of end-user programming experiences are like this. End-users are professionals in some domain. They know a lot of stuff. They’ll bring a lot of knowledge to their programming activity. The programs will require a lot of knowledge to write, to understand, and to use.

One of the potential implications is that this program (and maybe most end-user programs?) are probably not useful to many others.  Much of what we teach in CS1 for CS majors, or maybe even in Software Carpentry, is not useful to the occasional, casual end-user programmer.  Most of what we teach is for larger-scale programming.  Do we need to teach end-user programmers about software engineering practices that make code more readable by others?  Do we need to teach end-user programmers about tools for working in teams on software if they are not going to be working in teams to develop their small bits of code? Those are honest questions.  Shriram Krishnamurthi would remind me that end-user programmers, even more than any other class of programmers, are more likely to make errors and less likely to be able to debug them, so teaching end-user programmers practices and tools to catch and fix errors is particularly important for them.  That’s a strong argument. But I also know that, as an end-user programmer myself, I’m not willing to spend a lot of time that doesn’t directly contribute towards my end goal.  Balancing the real needs of end-user programmers with their occasional, casual use of programming is an interesting challenge.

The bigger question that I’m wondering about is whether someone else, facing a similar problem, could learn to code with a small enough time investment to make it worthwhile. I did a lot of programming in HyperTalk when I was a graduate student. I have that investment to build on. How much of an investment would someone else have to make to be able to write this kind of script as easily?

Why LiveCode? Why not Python? Or Smalltalk? I was originally going to write this in Python. Why not? I was teaching Python, and the problems would all be in Python. It’d be good exercise for me.

I realized that I didn’t want to deal with files or a command line. I wanted a graphical user interface. I wanted to paste some code in (not put it in a file), and get some text that I could copy (not find it in one or more files). I didn’t want to have to remember what function(s) to call. I wanted a big button. I simply don’t have the time to deal with the cognitive load of file names and function names. Copy-paste the sorted code, press the button, then copy-paste the scrambled code and copy-paste the solution. I could do that. Maybe I could build a GUI in Python, but every time I have used a GUI tool in Python, it was way more work than LiveCode.
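For what it’s worth, here is roughly what that paste-in, big-button, copy-out workflow might look like as a Tkinter sketch in Python. This is hypothetical (my actual gadget is in LiveCode, and it does much more than shuffle lines), but it shows the kind of interface I wanted:

    import random
    import tkinter as tk

    def scramble():
        # Take the correctly ordered code from the top box, shuffle the
        # non-blank lines, and put the scrambled version in the bottom box.
        lines = [l for l in source.get("1.0", "end").splitlines() if l.strip()]
        output.delete("1.0", "end")
        output.insert("1.0", "\n".join(random.sample(lines, len(lines))))

    root = tk.Tk()
    root.title("Scramble code")
    source = tk.Text(root, height=12, width=60)   # paste the correctly ordered code here
    source.pack()
    tk.Button(root, text="Scramble", command=scramble).pack()   # the big button
    output = tk.Text(root, height=12, width=60)   # copy the scrambled code from here
    output.pack()
    root.mainloop()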

I also know Smalltalk better than most. Here’s a bit of an embarrassing confession: I’ve never really learned to build GUIs in Smalltalk. I’ve built a couple of toy examples in Morphic for class. But a real user interface with text areas that really work? That’s still hard for me. I didn’t want to deal with learning something new. LiveCode is just so easy — select the tool, drag the UI object into place.

LiveCode was the obvious answer for me, but that’s because of who I am and the background that I already have. What could we teach future professionals/end-user programmers that (a) they would find worthwhile learning (not too hard, not too time-consuming) and (b) they could use casually when they needed it, like my Parsons problem generator? That is an interesting computing education research question.

How does a student determine “worthwhile” when deciding what programming to learn for future end-user programming?  Let’s say that we decided to teach all STEM graduate students some programming so that they could use it in their future professional practice as end-user programmers.  What would you teach them?  How would they judge something “worthwhile” to learn for later?

We know some answers to this question.  We know that students judge the authenticity of the language based on what they see themselves doing in the future and what the current practice is in that field (see Betsy DiSalvo’s findings on Glitch and our results on Media Computation).

But what if that’s not a good programming language? What if there’s a better one?  What if the common practice in a field is ill-informed? I’m going to bet that most people, faced with the general problem I was facing (wanting a GUI to do a text-processing task), would use JavaScript.  LiveCode is way better than JavaScript for an occasional, casual GUI task — easier to learn, more stable, more coherent implementation, and better programming support for casual users.  Yet, I predict most people would choose JavaScript because of the Principle of Social Proof.

I’ve been reading Robert Cialdini’s books on social psychology and influence, and he explains that social proof is how people make decisions when they’re uncertain (like how to choose a programming language when they don’t know much about programming) and there are others to copy.

First, we seem to assume that if a lot of people are doing the same thing, they must know something we don’t. Especially when we are uncertain, we are willing to place an enormous amount of trust in the collective knowledge of the crowd. Second, quite frequently the crowd is mistaken because they are not acting on the basis of any superior information but are reacting, themselves, to the principle of social proof.

Robert B. Cialdini, Influence (Collins Business Essentials), Kindle Locations 2570-2573. HarperCollins, Kindle Edition.

How many people know both JavaScript and LiveCode well?  And don’t consider computer scientists. You can’t convince someone by telling them that computer scientists say “X is better than Y.”  People follow social proof from people whom they judge to be similar to them. It’s got to be someone in their field, someone who works like them.

It would be hard to teach the graduate students something other than what’s in common practice in their fields, even if it’s more inefficient to learn and harder to use than another choice.

June 11, 2018 at 2:00 am 2 comments

A Generator for Parsons problems on LaTeX exams and quizzes

I just finished teaching my Introduction to Media Computation a few weeks ago to over 200 students. After Barb finished her dissertation on Parsons problems this semester, I decided that I should include Parsons problems on my last quiz, on the final exam study guide, and on the final exam. Parsons problems are a great fit for this assessment task. We know that Parsons problems are a more sensitive measure of learning than code writing problems, they’re just as effective as code writing or code fixing problems for learning (so good for a study guide), and they take less time than code writing or fixing.

Barb’s work used an interactive tool for providing adaptive Parsons problems. I needed to use paper for the quiz and final exam. There have been several paper-based implementations of Parsons problems, and Barb guided me in developing mine.

But I realized that there’s a challenge to doing a bunch of Parsons problems like this. Scrambling code is pretty easy, but what happens when you find that you got something wrong? The quiz, study guide, and final exam were all going to iterate several times as we developed them and tested them with the teaching assistants. How do I make sure that the scrambled code and the right answer always stay aligned?

I decided to build a gadget in LiveCode to do it.

I paste the correctly ordered code into the field on the left. When I press “Scramble,” a random ordering of the code appears (in a Verbatim LaTeX environment) along with the right answers, to be used in the LaTeX exam class. If you want to list a number of points to be associated with each correct line, you can put a number into the field above the solution field. If empty, no points will be explicitly allocated in the exam document.

I’d then paste both of those fields into my LaTeX source document. (I usually also pasted in the original source code in the correct order, so that I could fix the code and re-run the scramble when I inevitably found that I did something wrong.)
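To give a flavor of what the gadget computes, here is a minimal sketch of the scrambling step in Python. My gadget is written in LiveCode and produces output tailored to the exam class; this sketch only shows the core idea of shuffling lines into a Verbatim block while keeping the answer key aligned (the Verbatim options and answer-key wording here are illustrative, not the exact output of my tool):

    import random

    def scramble_for_latex(code):
        """Return a scrambled Verbatim listing and the matching answer key."""
        lines = [line for line in code.splitlines() if line.strip()]
        order = list(range(len(lines)))
        random.shuffle(order)

        # Scrambled listing: Verbatim preserves the Python indentation of each line.
        scrambled = "\\begin{Verbatim}[numbers=left]\n"
        scrambled += "\n".join(lines[i] for i in order)
        scrambled += "\n\\end{Verbatim}\n"

        # Answer key: for each position in the correct program, the scrambled
        # line number (1-based) that belongs there.
        solution = [order.index(i) + 1 for i in range(len(lines))]
        answer_key = "Correct order of scrambled lines: " + ", ".join(map(str, solution))
        return scrambled, answer_key

    block, key = scramble_for_latex("def halve(freq):\n    return freq / 2\n")
    print(block)
    print(key)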

The wording of the problem was significant. Barb coached me on the best practice. You allow students to write just the line number, but encourage them to write the whole line because the latter is going to be less cognitive load for them.

Unscramble the code below that halves the frequency of the input sound.

Put the code in the right order on the lines below. You may write the line numbers of the scrambled code in the right order, or you can write the lines themselves (or both). (If you include both, we will grade the code itself if there’s a mismatch.)

The problem as the student sees it looks like this:

The exam class can also automatically generate a version of the exam with answers, for use in grading. I didn’t solve any of the really hard problems in my script, like how to deal with lines that could be put in any order. When I found that problem, I just edited the answer fields to list the acceptable options.

I am making the LiveCode source available here: http://bit.ly/scrambled-latex-src

LiveCode generates executables very easily. I have generated Windows, MacOS, and Linux executables and put them in a (20 Mb, all three versions) zip here: http://bit.ly/scrambled-latex

I used this generator probably 10-20 times in the last few weeks of the semester. I have been reflecting on this experience as an example of end-user programming. I’ll talk about that in the next blog post.

June 8, 2018 at 2:00 am 5 comments

Teach two languages if you have to: Balancing ease of learning and learning objectives

My most recent CACM Blog post addresses a common question in computer science education: Should we teach two programming languages in a course to encourage abstraction, or just one? Does it hurt students to teach two? Does it help them to learn a second language earlier? My answer (in really short form) is “Just teach one, because it takes longer to learn one than you expect. If you teach two or more, students are going to struggle to develop deep understanding.”

But if your learning objective is for students to learn two (or more) languages, teach two or more languages. You’re going to have to pay the piper sometime. Delaying is better, because it’s easier and more effective to transfer deep knowledge than to try to transfer surface-level representations.

The issue is like the question of recursion-first or iterative-control-structures-first. (See this earlier blog post.) If your students don’t have to learn iterative control structures, then teach recursion-only. Recursion is easier and more flexible. But if you have to teach both, teach iteration first. Yes, iteration is hard, and learning iteration-first makes recursion harder to learn later, but if you have to do it, iteration-first is the better order.

There’s a lot we know about making computing easier to learn. But sometimes, we just can’t use it, because there are external forces that require certain learning objectives.


I correct, continue, and explore tangents on this blog post here: https://computinged.wordpress.com/2018/06/15/are-you-talking-to-me-interaction-between-teachers-and-researchers-around-evidence-truth-and-decision-making/

June 4, 2018 at 7:00 am 9 comments

Integrating CS into other fields, so that other fields don’t feel threatened: Interview with Jane Prey

I really enjoyed the interview in the last SIGCSE Bulletin with Jane Prey.  Her reason for doing more to integrate CS into other disciplines, at the undergraduate level, is fascinating — one I hadn’t heard before.

Other fields are nervous because they think we’re taking so many students from them, and universities are nervous because they’re afraid of losing us to industry. I would hate to lose any other faculty position to add a CS professor. I really believe it’s important for computing professionals to be well-rounded, to be able to appreciate what they learned in history, biology, and anthropology classes. We need to do a better job of integrating more of a student’s educational experiences. For example, how do we do more work together with the education schools? We just aren’t there. We have to work cross-disciplines to develop a path forward, even though it’s really hard.

June 1, 2018 at 7:00 am Leave a comment

