Archive for May, 2019

Come hang out with Wil and me to talk about new research ideas! ACM ICER 2019 Work in Progress Workshop

Wil Doane and I are co-hosting the ACM ICER 2019 Work in Progress workshop that Colleen Lewis introduced at ICER 2014 in Glasgow (my report on participating). Colleen and I co-hosted last year.

It really is a “hosting” job more than an “organizing” or “presenting” role.  I love Colleen’s informal description of WiP, “You’re borrowing 4 other smart people’s brains for an hour. Then you loan them yours.”  The participants do the presenting. For one hour, your group listens to your idea and helps you think through it, and then you pass the baton. The whole organizing task is “Let’s put these 4 people together, and those 4 people together, and so on. We give them 4 hours, and appropriate coffee/lunch breaks.” (Where the value “4” may be replaced with “5” or “6”.)

Another useful description of WiP is “doctoral consortia for after-graduation.”  Doctoral consortia are these great opportunities to share your research ideas and get feedback on them.  Then there’s this sense that you graduate and…don’t have those ideas anymore? Or don’t need to share them or get feedback on them?  I’ve expressed concern previously about the challenges of learning when you’re no longer seen as a learner. Of course, PhD graduates are supposed to have new research ideas, which go into proposals and papers. But how do you develop ideas when you’re at the early stages, when they’re not ready for proposals or papers?  That’s what the WiP is about.

The WiP page is here (and quoted in part below). To sign up, you just fill out this form, and later give us a drafty concept paper to share with your group.

The WIP Workshop (formerly named the Critical Research Review) is a dedicated 1-day workshop for ICER attendees to provide and receive friendly, constructive feedback on works-in-progress. To apply for the workshop you will specify a likely topic about which you’ll request feedback. WIP participants will be assigned to thematic groups with 4-6 participants.

Two weeks before ICER, participants will submit to the members of their group a 2-4 page primer document to help prepare for the session and identify the types of feedback sought. At WIP, depending upon group size, each participant will have 45-75 minutes to provide context, elicit advice, support, feedback, and critique. Typically, one of the other group members acts as a notetaker during an individual’s time in order to allow the presenter to engage fully in the discussion.

WIP may be the right experience for you, if you would like to provide and receive constructive advice, support, feedback, or critique on computing education research issues such as:

  • A kernel of a research idea
  • A grant proposal
  • A rejected ICER paper
  • A study design
  • A qualitative analysis approach
  • A quantitative analysis approach
  • A motivation for a research project
  • A theoretical framing
  • A challenge in a research project

The goal of the workshop is to provide a space where we can receive support and provide support. The workshop is intended for active CS education researchers. PhD students are instead encouraged to apply for the Doctoral Consortium, held on the same day as WIP.

May 31, 2019 at 7:00 am

Why I say task-specific programming languages instead of domain-specific programming languages

I’ve written several posts about task-specific programming languages over the last few weeks (here’s the first one), culminating in my new understanding of computational thinking (see that blog post).

The programming languages community talks about “domain-specific programming languages.”  That makes a lot of sense, as a contrast with “general purpose programming languages.” Why am I using a different term?

The term is inspired by my interactions with social studies teachers. They talk about “the language used in math class” and about “what language should we use in history?” History and mathematics are domains. If we talk about a programming language for all of history, that’s too big. It will be difficult to design such languages to be easily learned and used.  There are lots of tasks in history that are amenable to using computing to improve learning, including data visualization and testing the rigor of arguments.

“Task-specific programming language” makes clear that we’re talking about a task, not a whole domain. I don’t want teachers rejecting a language because “I can’t use it for everything.”  I want teachers to accept a language because it helps their students learn something. I want it to be so easy to learn and use, that (a) it’s not adding much additional load and (b) it’s obvious that it would help.

I like “task-specific programming language,” too, because the name suggests how we might design them. Human-computer interface researchers and designers have been developing methods to analyze tasks and design interfaces for those tasks for decades. The purpose of that analysis is to create interfaces for users to achieve those tasks easily and with minimal up-front learning.  For 25 years (Soloway, Guzdial, and Hay, 1994), we have been trying to extend those techniques to design for learners, so that users achieve the tasks and learn in the process.

Task-specific programming languages are domain-specific programming languages (from the PL community) that are designed using learner-centered design methods (from HCI).  It’s about integrating two communities to create something that enables the integration of computing across the curriculum.

 

May 27, 2019 at 7:00 am

Learning to code is really learning to code something: One doesn’t just “learn programming” nor “learn tracing”

I asked a group of social studies educators what programming language(s) they might want to use in their classes. One of the interesting themes in the responses was “the same as what’s in math and science classes.” One teacher said that she didn’t want a “weird hierarchy” where there’s one programming language in STEM and another in “history and English” for fear they’d be seen as “dumbed down.” Another said that maybe teaching JavaScript in history class “would make history cool.”

There’s a belief in this theme that I think is wrong. Learning to program in science class probably won’t transfer without a bunch of work to programming in mathematics class, and programming in STEM classes will probably be a very different thing from programming in the humanities classes. Even expert programmers learn to program in a domain, and have a hard time transferring that knowledge of programming between domains. Expertise is expertise in a domain.

My advisor, Elliot Soloway, was involved in some of the early studies that supported this claim. The first paper was “The role of domain experience in software design” by Beth Adelson and Elliot Soloway from 1985. I quote from the abstract:

A designer’s expertise rests on the knowledge and skills which develop with experience in a domain. As a result, when a designer is designing an object in an unfamiliar domain he will not have the same knowledge and skills available to him as when he is designing an object in a familiar domain.

In this study, they took expert software designers from various fields and had them design systems in other fields. They also asked novice designers to do some of the same tasks. For example, maybe we have a software designer who has been building banking software, and another who has been designing real-time control systems. Now, let’s ask both designers to design an elevator control system.

What they found was that the designers in the new domain struggled. They stopped planning (e.g., making notes). When they were in the familiar domain, they would often visualize the working system (“simulation” in the paper). Novices didn’t. The experts didn’t when they were faced with a new domain. Experts in an unfamiliar domain looked much like novices. Now, experts in an unfamiliar domain were better than the novices at noticing constraints on the design, so something transferred.

The second paper is even more striking. “Empirical Studies of Programming Knowledge” (1984) by Elliot Soloway and Kate Ehrlich. From the abstract:

We suggest that expert programmers have and use two types of programming knowledge: (1) programming plans, which are generic program fragments that represent stereotypic action sequences in programming, and (2) rules of programming discourse, which capture the conventions in programming and govern the composition of the plans into programs.

When we teach programming, we tend to focus on the syntax and semantics of the language. We don’t explicitly teach plans — chunks of code that do something useful. But we expect students to figure them out. We rarely teach discourse rules. The domain-specific knowledge lies in both plans and discourse rules.

To test the claim about the importance of these discourse rules, they produced pairs of programs: Alpha and Beta. Alpha is a perfectly fine program. Beta breaks the rules. For example, if you see a variable initialized n := 0;, you would find it weird to later see read(n); (to input a new value for n). It’s not wrong. The code might work just fine — in fact, it does work just fine in the experimental construction of Beta. But the program breaks the rules of discourse. They write:

Notice that both Alpha version and the Beta version are runnable programs that in almost all cases compute the same values. Moreover, to an untrained eye their differences may even not be apparent; they always only differ by a very few textual elements.

Here’s an example of one Alpha–Beta pair — both programs work, though in this case they do not do the same thing.

Beta isn’t wrong. It successfully computes the minimum. However, it uses the variable max, which is confusing. It breaks our discourse rule. The program does work.
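Soloway and Ehrlich’s actual stimuli were short Pascal programs, which aren’t reproduced here. As a hypothetical sketch of the same trick in JavaScript (my reconstruction, not the paper’s code): Alpha computes a maximum with an honestly named variable, while Beta differs by a single character — `<` instead of `>` — so it correctly computes the minimum while still calling its result `max`.

```javascript
// Hypothetical Alpha/Beta pair in the spirit of Soloway & Ehrlich (1984),
// not the paper's actual Pascal programs.

// Alpha: computes the maximum; the variable name matches its role.
function alpha(values) {
  let max = values[0];
  for (const v of values) {
    if (v > max) max = v;  // name and comparison agree
  }
  return max;
}

// Beta: textually identical except '>' became '<'. It correctly
// computes the MINIMUM, but still stores it in a variable named
// 'max' -- violating the discourse rule that names reflect roles.
function beta(values) {
  let max = values[0];
  for (const v of values) {
    if (v < max) max = v;  // misleading: 'max' is tracking the minimum
  }
  return max;
}
```

Beta runs fine; what it defeats is the pattern matching that experts rely on when reading code.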

Different domains use different standards and different styles of programming. Engineers using MATLAB rarely use FOR or WHILE loops, for example. Graphic designers writing JavaScript code use far more exception handling than we ever expected.

Soloway and Ehrlich showed these programs to “experts” (undergraduate juniors to graduate students) and novices (students in their first programming course). When asked questions about Alpha (e.g., “What goes in this missing line in the code?” Or “Do you remember that code that I showed you?”), experts do far better than novices. When asked questions about Beta, experts do essentially the same as novices (no statistically significant differences).

I find it particularly notable that the experts’ drop is steeper.  Experts rely heavily on cues like variable names, even more than novices do. CS expertise is really expertise in the discourse rules.

If expert programmers “knew programming,” they should be able to just trace the code (“be a computer”) and answer the questions correctly. Instead, they struggle to understand what’s going on. They’re pretty much like a student in their first semester of programming. The experts know Alpha well because it’s just like all the other programs they’ve ever seen — they can pattern match, rather than reason about the code itself. The experts struggle with Beta. It’s kind of like the difference between Humans and Econs (Richard Thaler’s terms for real people versus idealized rational agents). Econs can reason through code rationally. Humans rely on expectations.

These results also suggest that the question of “Does tracing come before writing?” is moot.  Tracing what?  The program matters.  Some programs are harder to trace than others — for everyone, and particularly with expertise.  There is no generic “tracing skill.”

Conclusion: People don’t just learn “coding.” Programmers in general know plans and discourse rules. Break the rules and you just have the programming language — and even experts aren’t really good at just applying the syntax and semantics rules. No better than a novice. If you have enough expertise in different domains, then you can work in different domains. But when you start programming in a new domain, you’re not that much different from a new programmer.

The social studies teachers I’m working with have a sense that students can “just know JavaScript.” I don’t think that’s true. I think if I taught students to write JavaScript code to use Google’s Charts service for making data visualizations, it wouldn’t be much easier to teach them Web programming with React, to write scripts for Adobe Photoshop, or to build simulations in Lively Web. It’s all JavaScript, and the syntax and semantics are the same in each — but in terms of what people really know and use (i.e., plans and discourse rules), it’s completely different.
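As a hypothetical illustration (the function name and data shape here are mine, not from any curriculum): the core “plan” a charting student learns is to shape records into a header row plus value rows. None of that plan transfers when the same student sits down to wire up React component state or script Photoshop layers, even though the JavaScript syntax is identical.

```javascript
// Hypothetical sketch of the data-shaping "plan" a student might learn
// for chart libraries in the Google Charts style: a header row followed
// by one row per record.
function toDataTable(records) {
  const rows = records.map(r => [r.country, r.lifeExpectancy]);
  return [['Country', 'Life expectancy'], ...rows];
}

// Same language, very different plan than React or Photoshop scripting:
const table = toDataTable([
  { country: 'Japan', lifeExpectancy: 84.2 },
  { country: 'Kenya', lifeExpectancy: 66.3 },
]);
```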

May 20, 2019 at 7:00 am

Seeking Data: What’s happening at your school as you cap CS major enrollment?

I’m just back from the 2019 NCWIT Summit (see link here), which was amazing — as always. I talked to people at schools who have instituted caps on undergraduate CS enrollment, and I’m hearing stories that I didn’t expect.  I’d love to hear your experience at your school.  Are you seeing these things?

  • One story is that students are taking and re-taking (“2-3 times”) the early classes to earn grades high enough to get past the GPA threshold.  Thus, the GPA-based cap has actually increased enrollment pressure on the earlier classes.
  • Because of these course repeats, students are (presumably) taking longer to graduate. I didn’t talk to anyone with data on that — maybe it’s too soon, since the caps were instituted within the last 3-5 years at most institutions?
  • I was also hearing about incredible pressure that students are feeling because of the grade caps.  We expected to see impacts on enrollment for under-represented groups, but these reports say that everyone has increased stress because of the grade caps. The caps are leading to damage to department climate and even a spike in mental health issues. (I heard some pretty horrible stories.)

These are all just anecdotes. I’m not sure how to cast a wider net for more information, but this blog might be a place to start.  Could you share your reports on how enrollment caps are impacting your course enrollment at the lower levels, time to graduation, and departmental climate (or other issues)? Thanks!

 

May 17, 2019 at 7:00 am

Open Question around Mathematics in Undergraduate Computer Science

I’m always happy to see a new computing education blog, and I’m particularly excited by posts that identify open (research, and otherwise) questions.

At SIGCSE 2019 this past February, we organized a birds of a feather session (a one-hour discussion group) on modernizing mathematics in computer science. We expected a modest number of attendees but were surprised and delighted to host a completely filled room of discrete mathematics, algorithms, and theory of computation educators—60 attendees in total—interested in evolving how we, as a discipline, situate mathematical foundations in our curriculum!

What was even more surprising to us was how the discussion evolved over the hour. Our original intention was to focus on how we might re-shape the foundational portions of the computer curriculum in light of how computing has evolved over the last decade:

The undergraduate computer science curriculum is ever-changing but has seen particular turmoil recently. Topics such as machine learning, data science, and concurrency and parallelism have grown in importance over the last few years. As the content of our curriculum changes, so too does the mathematical foundations on which it rests. Do our current theoretical courses adequately support these foundations or must we consider new pedagogy that is more relevant to our students’ needs? In this BoF, we will discuss what a modern mathematics curriculum for computer scientists should cover and how we should go about accomplishing this in our classrooms. (https://dl.acm.org/citation.cfm?id=3293748)

At this point, we shifted our focus from trying to answer the original “concept” question to identifying the myriad of problems that educators wrestled with along these three dimensions. We outline the problems that people raised below:

From https://cs-foundations-ed.github.io/sigcse/2019/03/29/bof-report.html

May 13, 2019 at 7:00 am

Comparing performance in learning computer science between countries

Imagine that you are a high school chemistry teacher, and you’re convinced that you have developed a terrific way to teach the basic introductory chemistry course. Your students do terrifically on all your assessments and go on to success in chemistry in college. You decide that you want to test yourself — are your students really as good as you think they are?

You reach out to some friends in other schools and ask them to give your final exam to their students. You are careful about picking the other schools so that they’re really comparable along dimensions like student wealth, size of school, and student demographics. Your friends are willing, but they just have a few of their students take the test. You don’t really know how they pick. Maybe it’s the best students. Maybe it’s the students who need remedial help. Maybe it’s a punishment for students in detention. Of course, all of your students take the final exam.

In the end, you have lots of YOUR students who took YOUR exam, and you have a handful of other students. Your friends (who likely don’t teach like you) give you a few tests from their students. Is it at all surprising that your students will likely out-score your friends’ students?

That’s how I read this paper from Proceedings of the National Academy of Sciences of the US: “Computer science skills across China, India, Russia, and the United States.” The authors are quite careful about picking schools to compare, along dimensions of how “elite” the schools are. I’m quite willing to believe that there is a range of schools with different results along an “elite” spectrum.

They over-sample from the United States, compared to the population of these countries:

Altogether, 678 seniors from China (119 from elite programs), 364 seniors from India (71 from elite programs), and 551 seniors from Russia (116 from elite programs) took the examination…We also obtained assessment data on 6,847 seniors from a representative sample of CS programs in the United States (607 from elite programs).

The test they use is the “Major Field Test” from ETS. I don’t know that it’s a bad test. I do suspect that it’s US-centric. It’s like the final exam from our Chemistry teacher in my example. Compare that to the TIMSS assessments that go to great lengths to make sure that the data are contextualized and that the assessments are fair for everyone.

Maybe the results are true. Maybe US computer science students are far better than comparable CS students in Russia, China, and India. I’m just not convinced by this study.

May 6, 2019 at 7:00 am

What’s NOT Computational Thinking? Curly braces, x = x + 1, and else.

In the previous blog post, I suggested that Computational Thinking is the friction necessary to make your problem solvable by a computer. It should be minimized unless it’s generative.  It’s a very different framing for computational thinking.  Rather than “what’s everything that we use in computing that might be useful for kids,” it’s closer to “the day is full and students are already in so many subjects — what do they have to know about computing in order to use it to further their learning and their lives?”

What is NOT Computational Thinking

I have been talking with my students about what’s on the list of things that we typically teach but that don’t fit into this model of computational thinking. Here’s what I’ve thought of so far: curly braces, x = x + 1, and else.

Here are criteria for what should NOT be part of teaching computational thinking:

  • These are hard for students — why go to that extra effort unless it’s worthwhile?
  • We have invented ways of framing problems for a computer that do not use these things, so they’re not necessary.
  • They are not generative. Knowing any of these things does not give you new leverage on thinking about problems within a domain.

If Computational Thinking is something we should teach to everyone, these are items that are not worth teaching to everyone.

Computational thinking includes programming, for me. It is generative.  It allows students to explore causal models that are tested with automation.  It’s the most powerful idea in computational thinking.

What is Computational Thinking for OTHER subjects

Then there are the ideas that are on most lists of computational thinking, like decomposition and abstraction. I absolutely believe that all programmers have those skills. They are absolutely generative. I believe that programming is a terrific place to try out and play with those ideas.

In the Rich et al. paper about learning trajectories that I reference so often, they talk about students learning “Different sets of instructions can produce the same outcome.” That’s a critical idea if you want students to learn that different decompositions can still result in the same outcome.

But do abstraction and decomposition belong in a Computational Thinking class?  They feel more like mathematics, science, and engineering to me.  Yes, use computing there, but don’t break them out into a separate class.  A mathematics teacher may be better prepared to teach decomposition and abstraction than computer science teachers are. It’s better to teach these ideas in a context with a teacher who has the PCK (pedagogical content knowledge) to teach them.

What’s more, it’s clear that you don’t need abstraction and decomposition to use programming as a way to learn.  Task-specific programming languages are usable for learning something else without developing new abilities to abstract or decompose.  Our social studies teachers showed this in our participatory design study in March — they learned things about life expectancy in different parts of the world, using programming that they did themselves, within 10-20 minutes.

What is Computer Science that EVERYONE should know

There’s another list we could make that is ideas in computer science that everyone should know because it helps them to understand the computation in their lives.  Yes, there’s a lot in the school day — but this is worth it for the same reason that Physics or Biology is worth it. This is a different matter than what helps them solve problems (which is the guts of the computational thinking definitions we have seen earlier).  On my list, I’d include:

  • Bits, the atom of information processing.
  • Processes, what programs allow us to define.
  • Programming, as a way to define processes.

Other suggestions?

  • What’s on your list for what’s NOT necessary in Computational Thinking, and
  • What is in Computer Science that everyone needs but is not Computational Thinking?

May 3, 2019 at 7:00 am

