Posts tagged ‘assessment’

Adaptive Parsons problems, and the role of SES and Gesture in learning computing: ICER 2018 Preview


Next week is the 2018 International Computing Education Research Conference in Espoo, Finland. The proceedings are (as of this writing) available here: https://dl.acm.org/citation.cfm?id=3230977. Our group has three of the 28 papers accepted this year.

“Evaluating the efficiency and effectiveness of adaptive Parsons problems” by Barbara Ericson, Jim Foley, and Jochen (“Jeff”) Rick

These are the final studies from Barb Ericson's dissertation (I blogged about her defense here). In her experiment, she compared four conditions: students learning through writing code, through fixing code, through solving Parsons problems, and through solving her new adaptive Parsons problems. This time (unlike in her Koli Calling paper) she had a control group that did turtle graphics between the pre-test and post-test, so that she could be sure there wasn't just a testing effect of a pre-test followed by a post-test. The bottom line was basically what she predicted: learning did occur, with no significant difference between treatment groups, but the Parsons problems groups took less time. Our ebooks now include some of her adaptive Parsons problems, so she can compare performance across many students on adaptive and non-adaptive forms of the same problem. She finds that students solve more of the adaptive problems, and with fewer attempts. So, adaptive Parsons problems lead to the same amount of learning, in less time, with fewer failures. (Failures matter, since self-efficacy is a big deal in computer science education.)
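For readers who haven't seen one: a Parsons problem presents the correct lines of a program in scrambled order, and the task is to rearrange them; the adaptive versions adjust the problem (e.g., by removing distractor blocks) when a student struggles. Here is a tiny example of my own, not one from Barb's studies:

    # A Parsons problem: put these scrambled lines in the right
    # order to compute the average of a list of numbers.
    #   1.     return total / len(numbers)
    #   2. def average(numbers):
    #   3.         total = total + n
    #   4.     total = 0
    #   5.     for n in numbers:
    #
    # Solution (line order 2, 4, 5, 3, 1):
    def average(numbers):
        total = 0
        for n in numbers:
            total = total + n
        return total / len(numbers)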

“Socioeconomic status and Computer science achievement: Spatial ability as a mediating variable in a novel model of understanding” by Miranda Parker, Amber Solomon, Brianna Pritchett, David Illingworth, Lauren Margulieux, and Mark Guzdial

(Link to last version I reviewed.)

This study is a response to the paper Steve Cooper presented at ICER 2015 (see blog post here), where they found that spatial reasoning training erased performance differences between higher and lower socioeconomic status (SES) students, while the comparison class had higher-SES students performing better than lower-SES students. Miranda and Amber wanted to test this relationship at a larger scale.

Why should wealthier students do better in CS? The most common reason I’ve heard is that wealthier students have more opportunities to study CS — they have greater access. Sometimes that’s called preparatory privilege.

Miranda and Amber and their team wanted to test whether access is really the right intermediate variable. They gave students at two different universities four tests:

  • Part of Miranda’s SCS1 to measure performance in CS.
  • A standardized test of SES.
  • A test of spatial reasoning.
  • A survey about the amount of access they had to CS education, e.g., formal classes, code clubs, summer camps, etc.

David and Lauren did the factor analysis and structural equation modeling to compare two hypotheses: Does higher SES lead to greater access, which leads to greater success in CS? Or does higher SES lead to higher spatial reasoning, which leads to greater success in CS? Neither hypothesis accounted for a significant amount of the variance in CS performance, but the spatial reasoning model fit better than the access model.
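To make that model comparison concrete, here is a minimal sketch of fitting the two mediation models in Python with the semopy package. This is not the authors' analysis; the column names (ses, access, spatial, scs1) and the file name are hypothetical stand-ins:

    import pandas as pd
    import semopy

    # Hypothetical dataset: one row per student, standardized scores
    data = pd.read_csv("study_data.csv")  # columns: ses, access, spatial, scs1

    # Hypothesis 1: higher SES -> more access to CS -> higher CS performance
    access_model = semopy.Model("""
        access ~ ses
        scs1 ~ access
    """)
    access_model.fit(data)

    # Hypothesis 2: higher SES -> better spatial reasoning -> higher CS performance
    spatial_model = semopy.Model("""
        spatial ~ ses
        scs1 ~ spatial
    """)
    spatial_model.fit(data)

    # Compare standard fit indices (CFI, RMSEA, AIC, ...) across the models
    print(semopy.calc_stats(access_model))
    print(semopy.calc_stats(spatial_model))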

There are some significant limitations of this study. The biggest is that they gathered data at universities. A lot of SES variance just disappears when you look at college students — they tend to be wealthier than average.

Still, the result is important for challenging the prevailing assumption about why wealthier kids do better in CS. What's more, spatial reasoning is an interesting variable because it's relatively inexpensive to teach. It's expensive to prepare CS teachers and get them into all schools. Steve showed that we can teach spatial reasoning within an existing CS class and reduce SES differences.

“Applying a Gesture Taxonomy to Introductory Computing Concepts” by Amber Solomon, Betsy DiSalvo, Mark Guzdial, and Ben Shapiro

(Link to last version I saw.)

We were a bit surprised (quite pleasantly!) that this paper got into ICER. I love the paper, but it’s different from most ICER papers.

Amber is interested in the role that gestures play in teaching CS. She started this paper from a taxonomy of gestures seen in other STEM classes. She observed a CS classroom and used her observations to provide concrete examples of the gestures seen in other kinds of classes. This isn’t a report of empirical findings. This is a report of using a lens borrowed from another field to look at CS learning and teaching in a new way.

My favorite part of this paper is where Amber points out which CS gestures don't really fit the taxonomy. It's one thing to point to lines of code – that's relatively concrete. It's another thing to “point” to data, e.g., when explaining a sort, gesturing at the two elements you're comparing or swapping. What exactly/concretely are we pointing at? Arrays are neither horizontal nor vertical — that distinction doesn't really exist in memory. Arrays have no physical representation, but we act (usually) as if they're laid out horizontally in front of us. What assumptions are we making in order to use gestures in our teaching? And what if students don't share those assumptions?

August 10, 2018 at 7:00 am

A Generator for Parsons problems on LaTeX exams and quizzes

I finished teaching my Introduction to Media Computation course to over 200 students a few weeks ago. After Barb finished her dissertation on Parsons problems this semester, I decided that I should include Parsons problems on my last quiz, on the final exam study guide, and on the final exam. Parsons problems are a great fit for this assessment task. We know that Parsons problems are a more sensitive measure of learning than code-writing problems, that they're just as effective for learning as code writing or code fixing (so, good for a study guide), and that they take less time than code writing or fixing.

Barb's work used an interactive tool for providing adaptive Parsons problems. I needed to use paper for the quiz and final exam. There have been several paper-based implementations of Parsons problems, and Barb guided me in developing mine.

But I realized that there's a challenge to writing a bunch of Parsons problems like this. Scrambling code is pretty easy, but what happens when you find that you got something wrong? The quiz, study guide, and final exam were all going to go through several iterations as we developed and tested them with the teaching assistants. How would I make sure that the scrambled code and the right answer always stayed aligned?

I decided to build a gadget in LiveCode to do it.

I paste the correctly ordered code into the field on the left. When I press “Scramble,” a random ordering of the code appears (in a Verbatim LaTeX environment) along with the right answers, to be used in the LaTeX exam class. If you want to list a number of points to be associated with each correct line, you can put a number into the field above the solution field. If empty, no points will be explicitly allocated in the exam document.

I'd then paste both of those fields into my LaTeX source document. (I usually also pasted in the original source code in the correct order, so that I could fix the code and re-run the scramble when I inevitably found that I had done something wrong.)
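The core of the generator is simple enough to sketch. Here is a minimal Python version of the same idea (a sketch, not the LiveCode source), assuming that each non-blank line of the pasted code is one unit to scramble:

    import random

    def scramble(code_text, seed=None):
        """Turn correctly ordered code into a scrambled LaTeX verbatim
        block plus an answer key that stays aligned with the scramble."""
        lines = [ln for ln in code_text.strip("\n").splitlines() if ln.strip()]
        order = list(range(len(lines)))
        random.Random(seed).shuffle(order)

        # The problem: numbered lines in shuffled order, as LaTeX verbatim
        problem = "\\begin{verbatim}\n" + "\n".join(
            "%d. %s" % (i + 1, lines[j]) for i, j in enumerate(order)
        ) + "\n\\end{verbatim}"

        # The key: for each line of the correct solution, the number it
        # was given in the scrambled listing
        key = "Correct order: " + ", ".join(
            str(order.index(i) + 1) for i in range(len(lines))
        )
        return problem, key

    problem, key = scramble(
        "total = 0\n"
        "for n in numbers:\n"
        "    total = total + n\n"
        "print(total)\n",
        seed=42,
    )

Because the scrambled listing and the key come from the same shuffle, fixing the source and re-running the scramble can never leave them misaligned, which was the point of building the gadget.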

The wording of the problem was significant. Barb coached me on the best practice: allow students to write just the line numbers, but encourage them to write out the whole lines, because the latter imposes less cognitive load on them.

Unscramble the code below that halves the frequency of the input sound.

Put the code in the right order on the lines below. You may write the line numbers of the scrambled code in the right order, or you can write the lines themselves (or both). (If you include both, we will grade the code itself if there’s a mismatch.)

The problem as the student sees it looks like this:

The exam class can also automatically generate a version of the exam with answers, for use in grading. I didn't solve any of the really hard problems in my script, like how to deal with lines that could be put in any order. When I ran into that problem, I just edited the answer fields to list the acceptable orderings.

I am making the LiveCode source available here: http://bit.ly/scrambled-latex-src

LiveCode generates executables very easily. I have generated Windows, macOS, and Linux executables and put them in a zip (20 MB, all three versions) here: http://bit.ly/scrambled-latex

I used this generator probably 10-20 times in the last few weeks of the semester. I have been reflecting on this experience as an example of end-user programming. I’ll talk about that in the next blog post.

June 8, 2018 at 2:00 am

Attending the amazing 2017 Computing at School conference #CASConf17

On June 17, Barbara and I attended the Computing at School conference in Birmingham, England (which I wrote about here). The slides from my talk are below. I highly recommend the summary from Duncan Hull, which I quote at the bottom.

CAS was a terrifically fun event. It was packed, with 300 attendees. I under-estimated the length of my talk (I tend to talk too fast), so instead of a brief Q&A, almost half the session was available for Q&A. Interacting with the audience to answer teachers' questions was more fun (and hopefully more useful and entertaining) than me talking for longer. The session was well received, based on the tweets I read. In fact, that's probably the best way to get a sense for the whole day — on Twitter, hashtag #CASConf17. (I'm going to try to embed some tweets with pictures below.)

Barbara’s two workshops on Media Computation in Python using our ebooks went over really well.

I enjoyed my interactions all day long. I was asked about research results in just about every conversation — the CAS teachers are eager to see what computing education research can offer them.  I met several computing education research PhD students, which was particularly exciting and fun. England takes computing education research seriously.

Miles Berry demonstrated Project Quantum by having participants answer questions from the database.  That was an engaging and fascinating interactive presentation.

Linda Liukas gave a terrific closing keynote. She views the world from a perspective that reminded me of Mitchel Resnick’s Lifelong Kindergarten and Seymour Papert’s playfulness. I was inspired.

The session that most made me think was from Peter Kemp on the report that he and co-authors have just completed on the state of computing education in England. That one deserves a separate blog post – coming Wednesday.

Check out Duncan’s summary of the conference:

The Computing At School (CAS) conference is an annual event for educators, mostly primary and secondary school teachers from the public and private sector in the UK. Now in its ninth year, it attracts over 300 delegates from across the UK and beyond to the University of Birmingham, see the brochure for details. One of the purposes of the conference is to give teachers new ideas to use in their classrooms to teach Computer Science and Computational Thinking. I went along for my first time (*blushes*) seeking ideas to use in an after school Code Club (ages 7-10) I’ve been running for a few years and also for approaches that undergraduate students in Computer Science (age 20+) at the University of Manchester could use in their final year Computer Science Education projects that I supervise. So here are nine ideas (in random brain dump order) I’ll be putting to immediate use in clubs, classrooms, labs and lecture theatres:

Source: Nine ideas for teaching Computing at School from the 2017 CAS conference | O’Really?

My talk slides:

July 10, 2017 at 7:00 am

Assessing Learning In Introductory Computer Science: Dagstuhl Seminar Report now Available

I have written about this Dagstuhl Seminar (see earlier post). The formal report is now available.

This seminar discussed educational outcomes for first-year (university-level) computer science. We explored which outcomes were widely shared across both countries and individual universities, best practices for assessing outcomes, and research projects that would significantly advance assessment of learning in computer science. We considered both technical and professional outcomes (some narrow and some broad) as well as how to create assessments that focused on individual learners. Several concrete research projects took shape during the seminar and are being pursued by some participants.

Source: DROPS – Assessing Learning In Introductory Computer Science (Dagstuhl Seminar 16072)

September 26, 2016 at 7:26 am

Preview ICER 2016: Ebooks Design-Based Research and Replications in Assessment and Cognitive Load Studies

The International Computing Education Research (ICER) Conference 2016 is September 8-12 in Melbourne, Australia (see website here). There were 102 papers submitted, and 26 papers accepted for a 25% acceptance rate. Georgia Tech computing education researchers are justifiably proud — we submitted three papers to ICER 2016, and we had three acceptances. We’re over 10% of all papers at ICER 2016.

One of the papers extends the ebook work that I've reported on here (see here where we made them available, and our paper on usability and usage from WiPSCE 2015). In Identifying Design Principles for CS Teacher Ebooks through Design-Based Research (click on the title to get to the ACM DL page), Barbara Ericson, Kantwon Rogers, Miranda Parker, Briana Morrison, and I take a Design-Based Research perspective on our ebooks work. We describe our theory for the ebooks, then describe the iterations of what we designed, what happened when we deployed it (data-driven), and how we then re-designed.

Two of our papers are replication studies — I'm grateful to the ICER reviewers and community for seeing the value of replication studies. The first is Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment by Miranda Parker, me, and Shelly Engleman. This is Miranda's paper expanding on her SIGCSE 2016 poster, introducing the SCS1, a validated and language-independent measure of CS1 knowledge. The paper does a great survey of validated measures of learning, explains her process, and then presents what one can and can't claim with a validated instrument.

The second is Learning Loops: A Replication Study Illuminates Impact of HS Courses by Briana Morrison, Adrienne Decker, and Lauren Margulieux. Briana and Lauren have both now left Georgia Tech, but they were still here when they did this paper, so we're claiming them. Readers of this blog may recall Briana and Lauren's confusing SIGCSE 2016 results, which suggest that cognitive load in CS textual programming is so high that it blows away our experimental instructional treatments. Was that an aberration? With Adrienne Decker's help (and student participants), they replicated the study. I'll give away the bottom line: it wasn't an aberration. One new finding is that, with respect to understanding loops, students who did not have high school CS classes caught up in the experiment with those who did.

We're sending three of our Human-Centered Computing PhD students to the ICER 2016 Doctoral Consortium. These folks will be in the DC on Sept 8, and will present posters to the conference on the afternoon of Sept 9.

September 2, 2016 at 7:53 am

Crowd-sourcing high-quality CS Ed Assessments: CAS’s Project Quantum

This bold new project from the UK's Computing at School organization aims to create high-quality assessments for their entire computing curriculum, across grade levels. The goal is to generate crowd-sourced problems, with quality-control checks, to produce a large online resource of free assessments. It's a remarkable idea — I've not heard of anything at this scale before. If it works, it'll be a significant education outcome, as well as an enormous resource for computing educators.

I'm a bit concerned about whether it can work. Let's use open-source software as a comparison. While there are many great open-source projects, most of them die off. There simply aren't enough programmers in open source to contribute to all the great ideas and keep them all going. There are fewer people who can write high-quality assessment questions in computing, and fewer still who will do it for free. Can we get enough assessments made for this to be useful?

Project Quantum will help computing teachers check their students' understanding, and support their progress, by providing free access to an online assessment system. The assessments will be formative, automatically marked, of high quality, and will support teaching by guiding content, measuring progress, and identifying misconceptions. Teachers will be able to direct pupils to specific quizzes and their pupils' responses can be analysed to inform future teaching. Teachers can write questions themselves, and can create quizzes using their own questions or questions drawn from the question bank. A significant outcome is the crowd-sourced quality-checked question bank itself, and the subsequent anonymised analysis of the pupils' responses to identify common misconceptions.

Source: CAS Community | Quantum: tests worth teaching to

May 25, 2016 at 7:51 am

A Dagstuhl Discussion about Social and Professional Practices

Another of the breakouts that I was in at the recent Dagstuhl seminar on assessment in CS learning focused on how we teach and assess social and professional practices in CS classes. This was a small group: Andy Ko, Lisa Kaczmarczyk, Jan Erik Moström, and me.

Andy and his students have been studying (via interviews and surveys) what makes a great engineer.

  • They’re good at decision-making.
  • They’re good at shifting levels of abstraction, e.g., describing how a line of code relates to a business strategy.
  • They have some particular inter-personal skills. They program ego-less-ly. They have empathy, e.g., “not an asshole.”
  • Senior engineers often spend a lot of time being teachers for more junior engineers.

Since I’ve worked with Lijun Ni on high school CS teachers, I know some of the social and professional practices of teachers. They have content knowledge, and they have pedagogical content knowledge. They know how to teach. They know how to identify and diagnose student misunderstandings, and they know techniques for addressing these.

We know some techniques for teaching these practices. We can have students watch professionals, by shadowing or by using case-based systems like the Ask systems. We can put students in apprenticeships (like student teaching or internships) or on design teams. We could even use games and other simulations. We have to convey authenticity — students have to believe that these are the real social and professional practices. An interesting question we came up with: How would you know if you had covered the full set of social and professional practices?

Here's the big question: How similar are these two sets? They seem quite different to me, and these are just two possible communities of practice for students in an intro course. Are there social and professional practices that we might teach in the same intro CS course — ones that serve any community of practice the student might later join? My sense is that the important social and professional practices are not in the intersection. The most important ones are unique to each community of practice.

How would we know if we got there? How would you assess student learning about social and professional practice? Knowledge isn’t enough — we’re talking about practice. We have to know that they’d do the right things. And if you found out that they didn’t have the right practices, is it still actionable? Can we “fix” practices while in undergrad? Maybe students will just do the right things when they actually get out there?

The countries with low teacher attrition spend a lot of time on teacher on-boarding. In Japan, the whole school helps to prepare a new teacher, and the whole school feels a sense of failure if the first-year teacher doesn't pass the required certification exam. The US tends not to do much on-boarding — not in schools for teachers, nor in industry for software engineers (as Begel and Simon found in their studies at Microsoft). On-boarding seems to me like a really good place for teaching professional practice. And since the student is then doing the job, assessment is job assessment.

The problems of teaching and assessing professional practice are particularly hard when you’re trying to design a new community of practice. We’d like computing to be more diverse, to be more welcoming to women and to people from under-represented groups. We’d want cultural sensitivity to be a practice for software professionals. How would you design that? How do you define a practice for a community that doesn’t exist yet? How do you convince students about the authenticity?

It’s an interesting set of problems, and some interesting questions to explore, but I came away dubious. Is this something that we can do effectively in school?  Perhaps it’s more effective to teach professional practices in the professional context?

March 9, 2016 at 8:00 am
