Archive for February, 2020

BDSI – A New Validated Assessment for Basic Data Structures: Guest Blog Post from Leo Porter and colleagues

Leo Porter, Michael Clancy, Cynthia Lee, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, and Daniel Zingaro have developed a new concept inventory that they are making available to instructors and researchers. They have written this guest blog post to describe their new instrument and explain why you should use it. I’m grateful for their contribution!

We recently published a Concept Inventory for Basic Data Structures at ICER 2019 [1] and hope it will be of use to you in your classes and/or research.

The BDSI is a validated instrument for measuring student knowledge of Basic Data Structures concepts [1].  To validate the BDSI, we engaged faculty at a diverse set of institutions to decide on topics, help with question design, and ensure the questions are valued by instructors.  We also conducted over one hundred interviews with students to identify common misconceptions and to ensure students interpret the questions as intended. Lastly, we piloted the instrument at seven different institutions and performed a statistical evaluation to confirm that the questions behave as expected and discriminate well between students of different ability levels.

What Our Assessment Measures

The BDSI measures student performance on Basic Data Structures concepts commonly found in a CS2 course.  To arrive at the topics and content of the instrument, we worked with fifteen faculty at thirteen different institutions to ensure broad applicability.  The resulting topics on the CI include Interfaces, Array-Based Lists, Linked Lists, and Binary Search Trees. If you are curious about the learning goals or want more details on the process we used in arriving at them, please see our SIGCSE 2018 publication [2].

Why Validated Assessments are Great for Instructors

Suppose you want to know how well your students understand various topics in your CS2 course.  How could you figure out how much your students are learning relative to students at other schools? You could, perhaps, get a final exam from another school and use it in your class to compare results, but that exam may not be a good fit for your course.  Moreover, you may find flaws in some of the questions and wonder whether students interpret them properly. Instead, you can use a validated assessment. The advantage of a validated assessment is that there is general agreement that it measures what you want to measure and that it accurately captures student thinking.  As such, you can compare your findings against results from other schools that have used the instrument to determine whether your students are learning particular topics better or worse than cohorts at similar institutions.

Why Validated Assessments are Great for Researchers

As CS researchers, we often experiment with new ways to teach courses.  For example, many people use Media Computation or Peer Instruction (PI), two complementary pedagogical approaches developed over the past several decades.  It’s important to establish whether these changes are helping our students. Do more students pass? Do fewer students withdraw? Do more students continue studying CS?  Does it boost outcomes for under-represented groups? Answering these questions using a variety of courses can give us insight into whether what we do corresponds with our expectations.

One important question is: using our new approach, do students learn more than before?  Unfortunately, answering this is complicated by the lack of standardized, validated assessments.  If students score 5% higher on an exam when studying with PI vs. not studying with PI, all we know is that PI students did better on that exam.  But exams are designed by one instructor, for one course at one institution, not for the purposes of cross-institution, cross-cohort comparisons.  They are not validated. They do not take into account the perspectives of other CS experts. When students answer a question on an exam correctly, we assume that it’s because they know the material; when they answer incorrectly, we assume it’s because they don’t know the material.  But we don’t know: maybe the exam contains incidental cues that subtly influence how students respond.

A Concept Inventory (CI) solves these problems.  Its rigorous design process leads to an assessment that can be used across schools and cohorts, and can be used to validly compare teaching approaches.

How to Obtain the BDSI

The BDSI is available via the Google Group.  If you’re interested in using it, please join the group and add a post with your name, institution, and how you plan to use the BDSI.

How to Use the BDSI

The BDSI is designed to be given as a post-test after students have completed the covered material.  Because the BDSI was validated as a full instrument, it is important to use the entire assessment and not alter or remove any of the questions.  We ask that instructors not make copies of the assessment available to students after giving the BDSI, to avoid the questions becoming public.  We likewise recommend giving students participation credit, but not correctness credit, for taking the BDSI, to avoid incentivizing cheating.  We have found that a successful way to administer the BDSI is to give it as part of a final review session, collect the assessment from students, and then go over the answers.

Want to Learn More?

If you’re interested in learning more about how to build a CI, please come to our talk at SIGCSE 2020 (from 3:45-4:10pm on Thursday, March 12th) or read our paper [3].  If you are interested in learning more about how to use validated assessments, please come to our Birds of a Feather session on “Using Validated Assessments to Learn About Your Students” at SIGCSE 2020 (5:30-6:20pm on Thursday, March 12th) or our tutorial on using the BDSI at CCSC-SW 2020 (March 20-21).

References:

[1] Leo Porter, Daniel Zingaro, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, Cynthia Lee, and Michael Clancy. 2019. BDSI: A Validated Concept Inventory for Basic Data Structures. In Proceedings of the 2019 ACM Conference on International Computing Education Research (ICER ’19).

[2] Leo Porter, Daniel Zingaro, Cynthia Lee, Cynthia Taylor, Kevin C. Webb, and Michael Clancy. 2018. Developing Course-Level Learning Goals for Basic Data Structures in CS2. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE ’18).

[3] Cynthia Taylor, Michael Clancy, Kevin C. Webb, Daniel Zingaro, Cynthia Lee, and Leo Porter. 2020. The Practical Details of Building a CS Concept Inventory. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE ’20).

February 24, 2020 at 7:00 am

Call for participation at the 2nd Excited Summer School on Research in Computing Education

Sharing a note from Birgit Rognebakke Krogstie:

Call for participation at the 2nd Excited Summer School on Research in Computing Education

We are happy to announce the second Excited Summer School on Research in Computing Education, which will take place 8-12 June 2020 at NTNU campus Gløshaugen in Trondheim, Norway. The school is intended for PhD students and post docs. There will be varied sessions led by experts in the computing education field, combining presentations with discussion and hands-on groupwork. Topics range from how to teach beginner-level programming courses to the inclusion of environmental sustainability in IT education. The school is a great arena for network building and informal discussion with fellow students as well as with the invited experts. For more information, see our web site: https://www.ntnu.edu/excited/call-for-participants-excited-summer-school-2020. Application deadline: 10 March 2020.

February 21, 2020 at 9:34 am

Importance of considering race in CS education research and discussion

I was talking with one of my colleagues here at Michigan about the fascinating recent journal article from Tim Weston and collaborators based on NCWIT Aspirations award applicants, which I blogged about here. I was telling him about the results — what correlated with women’s persistence in technology and computing, and what didn’t or was negatively correlated.

He said that he was dubious. I asked why. He said, “What about the Black girls?”

His argument was that the NCWIT Aspirations award applicants tend to be white and tend to come from wealthy, privileged school districts. Would those correlations be the same if you looked only at Black women, or Latina women?

I went back to the Weston et al. paper. They write:

Although all respondents were female, they were diverse in race and ethnicity. Because we know that there are differentiated experiences for students of color in secondary and post-secondary education in the US, and especially women of color, we wanted to make sure we captured any differences in outcomes in our analysis. To do so, we created a variable called Under-represented Minority in Computing (URMC) status that grouped students by race/ethnicity. URMC indicated persons from groups historically under-represented in computing–African-American, Hispanic, or Native American. White, Asian and students of two or more races were coded as “Majority” in this variable. Unfortunately, further disaggregation by specific race/ethnicity was not possible due to low numbers. Thus, even though the numbers in the respondent pool were not high enough to disaggregate by specific race/ethnicity, we could still identify trends by over-representation and under-representation.

Eighteen percent of their respondents were tagged URMC. URMC was included as a variable in their analyses, and their results suggest that being in the URMC group did not significantly influence persistence. If I understand their regressions right, though, that doesn’t tell us whether the correlations themselves differ by race/ethnicity. URMC not being a significant factor in the outcomes is not the same as knowing whether the relationships between the other variables and persistence differ by race and ethnicity. Do Black women have a different relationship with video games or with community than white women, for example? Or than Latina women?
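To make that distinction concrete, here is a minimal, hypothetical sketch (in Python with statsmodels, using made-up variable and file names; this is not the Weston et al. analysis) of the difference between including a group indicator as a main effect and testing whether a relationship differs by group via an interaction term.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per respondent, with a 0/1 persistence outcome,
    # two scale scores (video_games, community), and a 0/1 URMC group indicator.
    df = pd.read_csv("aspirations_responses.csv")  # placeholder file name

    # Main-effect model: asks only whether URMC status shifts persistence overall.
    main_effect = smf.logit("persistence ~ video_games + community + urmc", data=df).fit()

    # Interaction model: asks whether the video_games relationship with persistence
    # itself differs for URMC respondents (a different question).
    interaction = smf.logit("persistence ~ video_games * urmc + community", data=df).fit()

    print(main_effect.summary())
    print(interaction.summary())

A non-significant coefficient on the group indicator in the first model says nothing about whether the interaction terms in the second model matter, and the interaction model is the one that speaks to my colleague's question.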

While the authors did not leave race out of the analysis entirely, there was not enough diversity in the respondent pool to answer my colleague’s question. I do agree with the authors that we would expect differentiated experiences. If our analysis does not include race, can we account for those differentiated experiences?

It’s hard to include race in many of our post-secondary CS ed analyses simply because the number of non-white and non-Asian students is so small. We couldn’t say that Media Computation was successful with a diverse student body until University of Illinois Chicago published their results. Georgia Tech has few students from under-served groups in the CS classes we were studying.

There’s a real danger that we’re going to make strong claims about what works and doesn’t work in computer science based only on what works for students in the majority groups. We need to make sure that we include race in our CS education discussions, that we’re taking into account these differentiated experiences. If we don’t, we risk that any improvements or optimizations we make on the basis of these results will only work with the privileged students, or worse yet, may even exacerbate the differentiated experiences.

February 17, 2020 at 7:00 am

Barbara Ericson’s analysis of the 2019 Advanced Placement CS data

Barb spoke at Cornell Tech’s “To Code and Beyond” workshop on January 10 about her analysis of the Advanced Placement CS data (both CS A and CS Principles). She’s shared the slides and her analysis at her blog.

As usual, the analyses are fascinating and dismal. It’s amazing to see how few people are really getting access to AP CS.

This year, she did a bunch of intersectional analyses that were eye-opening. Here are a couple of the results that I found surprising. Only 9 states had more than 10 Black women pass the AP CS A exam. Only 14 states had more than 10 Hispanic women pass the AP CS A exam. Those aren’t percentages; those are raw counts of exam-takers who passed. The AP CSP numbers are larger, but still disappointing.
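As an illustration of what one of these intersectional counts looks like (a hypothetical sketch with made-up column and file names, not Barb's actual analysis or the College Board's data format), one might compute it roughly like this:

    import pandas as pd

    # Assumed columns, one row per exam-taker: state, race, gender, passed (True/False).
    exams = pd.read_csv("ap_csa_2019.csv")  # placeholder file name

    # Select Black women who passed, count passers per state, then count the
    # states where that raw number exceeds ten.
    black_women_passed = exams[
        (exams["race"] == "Black") & (exams["gender"] == "Female") & exams["passed"]
    ]
    passers_per_state = black_women_passed.groupby("state").size()
    print((passers_per_state > 10).sum(), "states had more than 10 Black women pass AP CS A")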

February 10, 2020 at 7:00 am

Learning the Craft of CS Education: Recommending the CS-Ed Podcast

I’ve just finished listening to the first three episodes of the new podcast on CS education from Dr. Kristin Stephens-Martinez (click here for the podcast homepage). If you teach computer science, I highly recommend it. It’s not about CS education research per se, except in the sense that research often informs the education topics being discussed. Rather, it’s a nuts-and-bolts discussion of issues relevant to the craft of being a CS educator.

Kristin is terrific as the interviewer on the podcast. She stands in for all of us: teaching CS, looking for tips, and trying to get what she can from these experts.

  • The first episode was with David Malan of CS50 at Harvard. It was an in-depth discussion of the tools they’re creating for CS50. I didn’t hear any that I was particularly interested in using, but I did hear about tools that I wanted to recommend to colleagues who teach those topics.
  • Dan Garcia of BJC fame at Berkeley was absolutely delightful. He didn’t talk about Snap! or Beauty and Joy in Computing. Rather, he gave a concrete checklist of how to develop good exams in CS. I’m a fan of checklists, and his were great. I’ll definitely use these tips in the future.
  • Amy Ko of the University of Washington talked about her research, but in really concrete, practitioner-oriented terms. The first part was about how to help students debug, and she gave me insights on how to help students develop debugging skills. The second part was about how Donald Knuth was really a qualitative researcher. Fascinating stuff.

I’m going to show up later in the series. I don’t remember what I said! I hope that Kristin is kind to me in post-production, because I don’t think I’m typically as grounded or as full of concrete advice as these first three guests were. I’ll find out in the next few weeks.

TL;DR: If you teach CS, go listen to the CS-Ed podcast. You’ll get something useful out of it that’s worth your time.

February 3, 2020 at 7:00 am

