Posts tagged ‘computing education’

High school students learning programming do better with block-based languages, and the impact is greatest for female and minority students

I learned about this study months ago, and I was so glad to see it published in ICLS 2018 this last summer.  The paper is “Blocks or Text? How Programming Language Modality Makes a Difference in Assessing Underrepresented Populations” by David Weintrop, Heather Killen, and Baker Franke.  Here’s the abstract:

Broadening participation in computing is a major goal in contemporary computer science education. The emergence of visual, block-based programming environments such as Scratch and Alice have created a new pathway into computing, bringing creativity and playfulness into introductory computing contexts. Building on these successes, national curricular efforts in the United States are starting to incorporate block-based programming into instructional materials alongside, or in place of, conventional text-based programming. To understand if this decision is helping learners from historically underrepresented populations succeed in computing classes, this paper presents an analysis of over 5,000 students answering questions presented in both block-based and text-based modalities. A comparative analysis shows that while all students perform better when questions are presented in the block-based form, female students and students from historically underrepresented minorities saw the largest improvements. This finding suggests the choice of representation can positively affect groups historically marginalized in computing.

Here’s the key idea as I see it. They studied over 5,000 high school students learning programming, comparing how students answered block-based versus text-based programming questions. Everyone does better with blocks, but the difference is largest for female students and those from under-represented groups.

Here’s the key graph from the paper:

[Key graph from Weintrop, Killen, and Franke (ICLS 2018): student performance on block-based vs. text-based questions]

So, why wouldn’t we start teaching programming with blocks?  There is a risk that students might see blocks as a “toy” and not authentic — Betsy DiSalvo saw that with her Glitch students. But a study with 5,000 students suggests that the advantages of blocks swamp the issues of inauthenticity.

The International Conference on the Learning Sciences (ICLS) 2018 Proceedings are available here.

August 20, 2018 at 7:00 am 5 comments

CRA Memo on Best Practices for Engaging Teaching Faculty in Research Computing Departments

I’m excited to see this memo from the Computing Research Association on the status of teaching faculty in computing departments. Computing departments are increasingly relying on teaching faculty, and it’s important to give them fair and equitable treatment.

I wrote in 2016 that “CS Teaching Faculty are like Tenant Farmers.” This memo addresses some of the issues I raised, though some are buried in its text.  I argued that teaching faculty should be involved in hiring for both traditional and teaching faculty positions, and that teaching faculty should serve in upper-level leadership positions.  The memo does state, about halfway down, “Similarly, teaching faculty should be broadly included in faculty governance on matters related to their roles in the department, including participation in faculty meetings, voting rights on matters impacting the education mission, inclusion in evaluation of the teaching performance of other faculty, and input on hiring decisions.”  This memo is a step in the right direction.

To achieve their educational mission, computing departments at research universities increasingly depend on full-time teaching faculty who choose teaching as a long-term career. This memo discusses the need for teaching faculty, explores the impact of teaching faculty, and recommends best practices.

Essential best practices for departments include:

  • Departments should provide teaching faculty with equitable rights and resources, except in limited areas where differing job responsibilities make that inappropriate.

  • Departments should encourage teaching faculty to be equal and active partners on projects and committees with the goal of contributing to the department’s educational mission.

  • Departments should set course, preparation, student, and service loads of teaching faculty at a level that allows for innovation and quality instruction.

    ….

Source: Laying a Foundation: Best Practices for Engaging Teaching Faculty in Research Computing Departments

August 17, 2018 at 7:00 am 4 comments

CS educators listen to authority more than evidence: Time to move on

My CACM Blog post for July starts from Stuart Reges’s inflammatory June blog post, “Why Women Don’t Code.”  I use his post and other writing as a foil to critique how we make arguments in computing education.  They tend to be arguments from authority, not from evidence.

Why is that? Why do CS educators use evidence and research less than (as quoted in the CACM post) physics educators?  Is it because of the youth of the field, such that when we grow up we’ll think more about research on how to teach well?  Is it because of the economics of the field?  A CS background is so lucrative that students are desperate to succeed in our classes, so we don’t have to teach well — student motivation will make up for what our teaching lacks. Or is it something else — is there something in the nature of CS that leads to opposition to using evidence and research when making educational decisions?

In June, Stuart Reges, principal lecturer in Computer Science and Engineering at the University of Washington, published a blog post, Why Women Don’t Code, that led to several articles and blog posts in response (e.g., in the Seattle Times and GeekWire). Reges argues that women are simply never going to enter computing in significant numbers, and that 20% is about all we’re ever going to get.

Our community must face the difficult truth that we aren’t likely to make further progress in attracting women to computer science. Women can code, but often they don’t want to. We will never reach gender parity. You can shame and fire all of the Damores you find, but that won’t change the underlying reality.

It’s time for everyone to be honest, and my honest view is that having 20 percent women in tech is probably the best we are likely to achieve. Accepting that idea doesn’t mean that women should feel unwelcome. Recognizing that women will be in the minority makes me even more appreciative of the women who choose to join us.

Hank Levy, Director of UW’s CSE school, wrote a great statement in response (see here). Levy disagrees with Reges’s conclusions but supports his right to make the argument, and he puts the current gender ratio in computer science in context by comparing it to other disciplines.

I was most struck by the 20% claim. That’s easily proven wrong. There are many CS educational programs in the US that are more than 20% female (like Computational Media at Georgia Tech), and there are countries where CS is more than 50% female. How can Reges claim that 20% is the best that we can possibly do?

Here’s something important about Stuart Reges that people outside of CS education might not know — he’s a rockstar. He packs the house when he speaks at education conferences. He publishes regularly in the field. He has written a popular book on how to teach Java in introductory computer science (see Building Java Programs). Students love him, and teachers want to be like him. When Stuart Reges speaks, CS educators listen.

In this post, I want to step back and consider how Reges is making his argument, because it says something about how we make decisions in computing education. I am going to characterize the argument style in computing education as argument from authority, which Wikipedia describes as one in which “a claimed authority’s support is used as evidence for an argument’s conclusion.” We need to recognize the form before we can move beyond it.

Click here to read the rest of the CACM Blog Post.

August 6, 2018 at 7:00 am 8 comments

Visiting NTNU in Trondheim Norway June 3-23

Barbara and I are just back from a three week trip to NTNU in Trondheim, Norway. Katie Cunningham came with us (here’s a blog post about some of her work). Three weeks is enough time to come up with a dozen ideas for blog posts, but I don’t have the cycles for that. So let me just give you the high-level view, with pictures and links to learn more.

We went at the beginning of June because Barb and I (and the University of Michigan) are part of the IPIT network (International Partnerships for Excellent Education and Research in Information Technology), which had its kick-off meeting June 3-5. The partnership is about software engineering and computing education research, with a focus on student and faculty exchange and meetings at each other’s institutions: NTNU, U. Michigan, Tsinghua University, and Nanjing University. I learned a lot about software engineering that I didn’t know before, especially about DevOps.

If you ever get the chance to go to a meeting organized by Letizia Jaccheri of NTNU, GO! She was the organizer for IPIT, co-chair of IDC 2018, and our overall host for our three weeks there. She has a wonderful sense for blending productivity with fun. During the IDC 2018 poster session, she brought in high school students dressed as storybook characters, just to wander around and “bring in a bit of whimsy.” For a bigger example, she wanted IPIT to connect with the NTNU campus at Ålesund, which just happens to be near the Geiranger fjord, one of the most beautiful in Norway. So, she flew the whole meeting from Trondheim to Ålesund! We took a large, cruise-ship-like boat with meeting rooms down the fjord. We got in some 5-6 hours of meetings while also seeing amazing waterfalls and other views, then visited the Ålesund campus the next day before flying home. We got work done and WOW!

For the next week and a half, we got to know the computing education research folks at NTNU. We were joined at the end of the first week by Elisa Rubegni from the University of Lincoln, and Roberto Martinez-Maldonado came by a couple days later. Barb, Elisa, and I held a workshop on the first Monday after IPIT. A couple days later, we had a half-day meeting with Michalis Giannakos’s group and Roberto, then Elisa led us all in a half-day design exercise (pictured below — Elisa, Sofia, Javi, and Katie). In between, we had individual meetings. I think I met with every one of the PhD students there working in computing education research. (And, in our non-meeting time, Barb and I were writing NSF proposals!)

Michalis’s group is doing some fascinating work. Let me tell you about some of the projects that most intrigued me.

  • Sofia (with Kshitij and Ilias) leads a project in which they track what kids using Scratch are looking at, both on and off screen. It’s part of a cool project where kids program beautiful artist-created robots with Scratch. It’s a pretty crazy-looking experimental setup, with fiducial markers on notebooks, robots, and screens.
  • Kshitij is trying to measure EEG and gaze in order to determine cognitive load in a user interface. Almost all cognitive load measures are based on self-report (including ours). They’re trying to measure cognitive load physiologically, and correlate it with self-report.
  • Katerina and Kshitij are using eye-tracking to measure how undergrads use tools like Eclipse. What I found most interesting was what they did not observe. I noticed in their data that they had no data on using the debugger. They explained that of 40 students, only five even looked at the debugger. Nobody used the data or control flow visualizations at all. I’m fascinated by this — what does it take to get students to actually look at the debuggers and visualizers that were designed to help them learn?
  • Roberto is doing amazing work with learning analytics in physical spaces where nurses work on robot patients. Totally serious — they can gather all kinds of data about where people are standing, how they interact, and when they interact. For tasks like nursing, this kind of data is super important for understanding what students are learning.

Then came FabLearn, with an amazing keynote by Leah Buechley on art, craft, and computation. I have a long list of things to look up after her talk, including Desmos; computer-controlled cutting machines (which I had never heard of before), which are way cheaper than 3-D printers but still let you do computational craft; and http://blog.recursiveprocess.com/, which is all about learning coding and mathematics. She made an argument that I find fascinating — that art is what helps diverse students reflect their identity and culture in their school, and that’s why students who get art classes (controlling for SES) are more likely to succeed in school and go on to post-secondary schooling. Can computing make it easier to bring art back into school? Can computing then play a role in engaging children with school again?

The next reason we were at NTNU was to attend the EXCITED Centre advisory board meeting. Barb and I were there for the launch of EXCITED in January 2017. It’s a very ambitious project, spanning helping students make informed decisions to go into CS/IT, helping students develop identities in CS, learning through construction, increasing diversity in CS, and moving into careers. We got to hang out with Arnold Pears, Mats Daniels, and Aletta Nylén of UpCERG (the Uppsala Computing Education Research Group), the world’s largest CER group.

Finally, for the last four days, we attended the Interaction, Design and Children Conference, IDC 2018. I wrote my Blog@CACM post for this month about my experiences there. I saw a lot there that’s relevant to people who read this blog. My favorite paper there tested the theory of concreteness fading on elementary school students learning computing concepts. Here’s a picture of a slide (not in the paper) that summarizes the groups in the experiment.

I’ll end with my favorite moment in IDC 2018, not in the Blog@CACM post. We met Letizia’s post-doc, Javier “Javi” Gomez, at the end of our first week in Trondheim. Summer weather in Trondheim is pretty darn close to winter in Atlanta. One day, we woke up to 44F and rain. But we lucked out — the weekends were beautiful. On our first Saturday, Letizia invited us all to a festival near her home, and we met Javi and Elisa. That evening (but still bright sunlight), Javi, Elisa, Barb, and I took a wonderful kayaking trip down the Nidelva river. So it was a special treat to be at IDC 2018 to see Javi get TWO awards for his contributions, one for his demo and an honorable mention for his note. The note was co-authored by Letizia, and was her first paper award (as she talks about in the lovely linked blog post). It was wonderful to be able to celebrate the success of our new friends.

On the way back, Barb and I stopped in London to spend a couple days with Alan Kay and his wife, Bonnie MacBird. If I could come up with a dozen blog post ideas from 3 weeks, it’s probably like two dozen per day with Alan and Bonnie, and we had two days with them. Visiting a science museum with an exhibit on early computers (including an Alto!) is absolutely amazing when you’re with Alan. But those blog posts will have to wait until after my blog hiatus.

June 28, 2018 at 7:00 am 2 comments

Workshops for New Computing Faculty in Summer 2018: Both Research and Teaching Tracks

This is our fourth year, and our last NSF-funded year, for the New Computing Faculty Workshops, which will be held August 5-10, 2018 in San Diego. The goal of the workshops is to help new computing faculty become better and more efficient teachers. By helping new faculty learn a little about teaching, we aim to (a) make their teaching more efficient and effective and (b) make their teaching more enjoyable. We want students to learn more and teachers to have fun teaching them. The workshops were described in Communications of the ACM in the May 2017 issue (see article here), which I talked about in this blog post. The workshop will be run by Beth Simon (UCSD), Cynthia Bailey Lee (Stanford), Leo Porter (UCSD), and Mark Guzdial (Georgia Tech).

This year, for the first time, we will offer two separate workshop tracks:

  • August 5-7 will be offered to tenure-track faculty starting at research-intensive institutions.
  • August 8-10 will be offered to faculty starting a teaching-track job at any school, or a tenure-track faculty line at a primarily undergraduate-serving institution where evaluation is based heavily on teaching.

The new teaching-oriented faculty track is being added this year due to enthusiasm and feedback we heard from past participants and would-be participants. When I announced the workshops last year (see post here), we heard complaints (a little on email, and a lot on Twitter) asking why we were only including research-oriented faculty and institutions. Teaching-track faculty did attend our last three years of research-focused new faculty workshops, and unfortunately those participants were not satisfied. They didn’t get what they wanted or needed as new faculty. Yes, the sessions on peer instruction and how to build a syllabus were useful for everyone. But the teaching-track faculty also wanted to know how to set up their teaching portfolio, how to do research with undergraduate students, and how to get good student evaluations. They didn’t really care about how to minimize time spent preparing for teaching, or how to build up a research program with graduate students while still enjoying teaching undergraduates.

So, this year we made a special extension request to NSF, and we are very pleased to announce that the request was granted and we are able to offer two different workshops. The content will have substantial overlap, but with a different focus and framing in each.

To apply for registration, please apply to the appropriate workshop based on the type of your position: research-focused position http://bit.ly/ncsfw2018-research or teaching-focused position http://bit.ly/ncsfw2018-teaching. Admission will be based on capacity, grant limitations, fit to the workshop goals, and application order, with a maximum of 40 participants. Apply on or before June 21 to ensure eligibility for workshop hotel accommodation. (We will notify respondents by June 30.)


Many thanks to Cynthia Lee, who helped a lot with this post.

June 12, 2018 at 6:00 am Leave a comment

Reflections of a CS Professor and an End-User Programmer

In my last blog post, I talked about the Parsons problems generator that I used to put scrambled code problems on my quiz, study guide, and final exam. I’ve been reflecting on the experience and what it suggests to me about end-user programming.

I’m a computing professor, and while I enjoy programming, I mostly code to build exercises and examples for my students. I almost never code research prototypes anymore. I only occasionally code scripts that help me with something, like cleaning data, analyzing data, or, in this case, generating problems for my students. Here, I’m a casual end-user programmer — a non-professional programmer making code to help him with some aspect of his job. This is in contrast:

  • To Philip Guo’s work on conversational programmers, who are people who learn programming in order to talk to programmers (see his post describing his papers on conversational programmers). I know how to talk to programmers, and I have been a professional programmer. Now, I have a different job, and sometimes programming is worthwhile in that job.
  • To computational scientists and engineers, which is the audience for Software Carpentry. Computational scientists and engineers might write code occasionally to solve a problem, but more importantly, they write code as part of their research.  I might write a script to handle an odd-job, but most of my research is not conducted with code.

Why did I spend the time writing a script to generate the problems in LaTeX? I was teaching a large class, over 200 students. Mistakes on quizzes and exams at that scale are expensive in terms of emails, complaints, and regrading. Scrambled code problems are tricky. It’s easy to randomly scramble code. It’s harder to keep track of the right ordering. I needed to be able to do this many times.
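The key to not getting something wrong is that the scrambled problem and the answer key must be generated from the same random shuffle. Here is a minimal sketch of that bookkeeping in Python (my gadget is written in LiveCode and does more; treat this as an illustration of the idea, not the tool itself):

    import random

    def scramble_with_key(code):
        """Shuffle the lines of a program, and build the answer key
        from the same shuffle so the two can never disagree."""
        # Keep each line's indentation (it matters for Python);
        # drop blank lines, which carry no ordering information.
        lines = [ln for ln in code.splitlines() if ln.strip()]
        order = list(range(len(lines)))
        random.shuffle(order)
        scrambled = [lines[i] for i in order]
        # key[k] is the 1-based scrambled position of original line k,
        # i.e., the correct ordering a student should write down.
        key = [order.index(i) + 1 for i in range(len(lines))]
        return scrambled, key

Because the key is computed from the permutation that actually produced the scramble, fixing a bug in the source and re-running regenerates both halves together — exactly the property I needed for repeated revisions.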

Was it worthwhile? I think it was. I had a couple Parsons problems on the quiz, maybe five on the study guide, and maybe three on the final exam. (Different numbers at different stages of development.) Each one got generated at least twice as I refined, improved, or fixed the problem. (One discovery: Don’t include comments. They can legally go anywhere, so it only makes grading harder.) The original code only took me about an hour to get working. The script got refined many times as I used it, but the initial investment was well worth it for making sure that the problem was right (e.g., I didn’t miss any lines, and indentation was preserved for Python code) and the solution was correct.

Would it be worthwhile for anyone else facing the same problems to write this script? That’s a much harder question.

I realized that I brought a lot of knowledge to bear on this problem.

  • I have been a professional programmer.
  • I do not use LiveCode often, but I have used HyperTalk a lot, and the environment is forgiving with lots of help for casual programmers like me. LiveCode doesn’t offer much for data abstraction — basically, everything is a string.  I have experience using the tool’s facility with items, words, lines, and fields to structure data.
  • I know LaTeX and have used the exam document class before. I know Python, so I knew that indentation had to be preserved.

Then I realized that it takes almost as much knowledge to use this generator. The few people who might want to use the Parsons problem generator that I posted would have to know about Parsons problems, want to use them, be using LaTeX for exams, and know how to use the output of the generator.

But I bet that all (or the majority?) of end-user programming experiences are like this. End-users are professionals in some domain. They know a lot of stuff. They’ll bring a lot of knowledge to their programming activity. The programs will require a lot of knowledge to write, to understand, and to use.

One of the potential implications is that this program (and maybe most end-user programs?) is probably not useful to many others.  Much of what we teach in CS1 for CS majors, or maybe even in Software Carpentry, is not useful to the occasional, casual end-user programmer.  Most of what we teach is for larger-scale programming.  Do we need to teach end-user programmers about software engineering practices that make code more readable by others?  Do we need to teach end-user programmers about tools for working in teams on software if they are not going to be working in teams to develop their small bits of code? Those are honest questions.  Shriram Krishnamurthi would remind me that end-user programmers, even more than any other class of programmers, are more likely to make errors and less likely to be able to debug them, so teaching end-user programmers practices and tools to catch and fix errors is particularly important.  That’s a strong argument. But I also know that, as an end-user programmer myself, I’m not willing to spend a lot of time that doesn’t directly contribute towards my end goal.  Balancing the real needs of end-user programmers with their occasional, casual use of programming is an interesting challenge.

The bigger question that I’m wondering about is whether someone else, facing a similar problem, could learn to code with a small enough time investment to make it worthwhile. I did a lot of programming in HyperTalk when I was a graduate student. I have that investment to build on. How much of an investment would someone else have to make to be able to write this kind of script as easily?

Why LiveCode? Why not Python? Or Smalltalk? I was originally going to write this in Python. Why not? I was teaching Python, and the problems would all be in Python. It’d be good exercise for me.

I realized that I didn’t want to deal with files or a command line. I wanted a graphical user interface. I wanted to paste some code in (not put it in a file), and get some text that I could copy (not find it in one or more files). I didn’t want to have to remember what function(s) to call. I wanted a big button. I simply don’t have the time to deal with the cognitive load of file names and function names. Copy-paste the sorted code, press the button, then copy-paste the scrambled code and copy-paste the solution. I could do that. Maybe I could build a GUI in Python, but every time I have used a GUI tool in Python, it was way more work than LiveCode.
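For comparison, here is roughly what a minimal paste-press-copy interface might look like in Python with Tkinter, reusing the scramble_with_key sketch from earlier (the widget names and layout are my invention; this is a sketch, not the LiveCode gadget):

    import tkinter as tk

    def on_scramble():
        scrambled, key = scramble_with_key(source.get("1.0", "end-1c"))
        output.delete("1.0", "end")
        output.insert("1.0", "\n".join(scrambled))
        answer.delete("1.0", "end")
        answer.insert("1.0", ", ".join(map(str, key)))

    root = tk.Tk()
    root.title("Parsons scrambler")
    source = tk.Text(root, width=60, height=12)  # paste correct code here
    button = tk.Button(root, text="Scramble", command=on_scramble)  # the big button
    output = tk.Text(root, width=60, height=12)  # scrambled code appears here
    answer = tk.Text(root, width=60, height=2)   # answer key appears here
    for widget in (source, button, output, answer):
        widget.pack()
    root.mainloop()

Even a stripped-down version like this means dealing with widget options and text-index conventions ("1.0", "end-1c"); in LiveCode, the equivalent is dragging three fields and a button into place.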

I also know Smalltalk better than most. Here’s a bit of an embarrassing confession: I’ve never really learned to build GUIs in Smalltalk. I’ve built a couple of toy examples in Morphic for class. But a real user interface with text areas that really work? That’s still hard for me. I didn’t want to deal with learning something new. LiveCode is just so easy — select the tool, drag the UI object into place.

LiveCode was the obvious answer for me, but that’s because of who I am and the background that I already have. What could we teach future professionals/end-user programmers that (a) they would find worthwhile learning (not too hard, not too time-consuming) and (b) they could use casually when they needed it, like my Parsons problem generator? That is an interesting computing education research question.

How does a student determine “worthwhile” when deciding what programming to learn for future end-user programming?  Let’s say that we decided to teach all STEM graduate students some programming so that they could use it in their future professional practice as end-user programmers.  What would you teach them?  How would they judge something “worthwhile” to learn for later?

We know some answers to this question.  We know that students judge the authenticity of the language based on what they see themselves doing in the future and what the current practice is in that field (see Betsy DiSalvo’s findings on Glitch and our results on Media Computation).

But what if that’s not a good programming language? What if there’s a better one?  What if the common practice in a field is ill-informed? I’m going to bet that most people, faced with the general problem I was facing (wanting a GUI to do a text-processing task), would use JavaScript.  LiveCode is way better than JavaScript for an occasional, casual GUI task — easier to learn, more stable, a more coherent implementation, and better programming support for casual users.  Yet, I predict most people would choose JavaScript because of the Principle of Social Proof.

I’ve been reading Robert Cialdini’s books on social psychology and influence. He explains that social proof is how people make decisions when they’re uncertain (like choosing a programming language when they don’t know much about programming) and there are others to copy:

First, we seem to assume that if a lot of people are doing the same thing, they must know something we don’t. Especially when we are uncertain, we are willing to place an enormous amount of trust in the collective knowledge of the crowd. Second, quite frequently the crowd is mistaken because they are not acting on the basis of any superior information but are reacting, themselves, to the principle of social proof.

Robert B. Cialdini, Influence (Collins Business Essentials), HarperCollins, Kindle Edition, locations 2570-2573.

How many people know both JavaScript and LiveCode well?  And don’t count computer scientists. You can’t convince someone by telling them that computer scientists say “X is better than Y.”  People follow social proof from people whom they judge to be similar to them. It’s got to be someone in their field, someone who works like them.

It would be hard to teach the graduate students something other than what’s in common practice in their fields, even if the common choice is less efficient to learn and harder to use than another one.

June 11, 2018 at 2:00 am 2 comments

A Generator for Parsons problems on LaTeX exams and quizzes

I just finished teaching my Introduction to Media Computation course a few weeks ago to over 200 students. After Barb finished her dissertation on Parsons problems this semester, I decided that I should include Parsons problems on my last quiz, on the final exam study guide, and on the final exam. Parsons problems are a great fit for this assessment task. We know that Parsons problems are a more sensitive measure of learning than code-writing problems, that they’re just as effective as code-writing or code-fixing problems for learning (so, good for a study guide), and that they take less time than writing or fixing code.

Barb’s work used an interactive tool that provides adaptive Parsons problems. I needed to use paper for the quiz and final exam. There have been several paper-based implementations of Parsons problems, and Barb guided me in developing mine.

But I realized that there’s a challenge to doing a bunch of Parsons problems like this. Scrambling code is pretty easy, but what happens when you find that you got something wrong? The quiz, study guide, and final exam were all going to iterate several times as we developed them and tested them with the teaching assistants. How could I make sure that the scrambled code and the right answer always stayed aligned?

I decided to build a gadget in LiveCode to do it.

I paste the correctly ordered code into the field on the left. When I press “Scramble,” a random ordering of the code appears (in a Verbatim LaTeX environment) along with the right answers, to be used in the LaTeX exam class. If you want to list a number of points to be associated with each correct line, you can put a number into the field above the solution field. If empty, no points will be explicitly allocated in the exam document.

I’d then paste both of those fields into my LaTeX source document. (I usually also pasted in the original source code in the correct order, so that I could fix the code and re-run the scramble when I inevitably found that I did something wrong.)
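For readers who’d rather not run LiveCode, here is a rough Python sketch of the same transformation: correct code in, scrambled problem and answer key out. The exact markup my gadget emits isn’t shown in this post, so the LaTeX details below (fancyvrb’s Verbatim environment with numbers=left, a comma-separated key) are assumptions, not its literal output:

    import random

    def parsons_latex(code, points=None):
        """Scramble code into a LaTeX Verbatim block (fancyvrb) plus a
        solution key, built from the same shuffle so they stay aligned."""
        lines = [ln for ln in code.splitlines() if ln.strip()]
        order = list(range(len(lines)))
        random.shuffle(order)
        problem = ("\\begin{Verbatim}[numbers=left]\n"
                   + "\n".join(lines[i] for i in order)
                   + "\n\\end{Verbatim}")
        # The correct ordering, written as scrambled line numbers.
        key = ", ".join(str(order.index(i) + 1) for i in range(len(lines)))
        if points is not None:
            key += "   [{} points per line]".format(points)
        return problem, key

Both strings then get pasted into the exam source, mirroring the copy-paste workflow above.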

The wording of the problem was significant, and Barb coached me on the best practice: allow students to write just the line numbers, but encourage them to write the whole lines, because the latter imposes less cognitive load on them.

Unscramble the code below that halves the frequency of the input sound.

Put the code in the right order on the lines below. You may write the line numbers of the scrambled code in the right order, or you can write the lines themselves (or both). (If you include both, we will grade the code itself if there’s a mismatch.)

The problem as the student sees it looks like this:

The exam class can also automatically generate a version of the exam with answers, for use in grading. I didn’t solve any of the really hard problems in my script, like how to deal with lines that could be put in any order. When I hit that problem, I just edited the answer fields to list the acceptable options.

I am making the LiveCode source available here: http://bit.ly/scrambled-latex-src

LiveCode generates executables very easily. I have generated Windows, MacOS, and Linux executables and put them in a (20 Mb, all three versions) zip here: http://bit.ly/scrambled-latex

I used this generator probably 10-20 times in the last few weeks of the semester. I have been reflecting on this experience as an example of end-user programming. I’ll talk about that in the next blog post.

June 8, 2018 at 2:00 am 2 comments
