Posts tagged ‘computing education research’

Ebooks, Handbooks, Strong Themes, and Undergraduate Research: SIGCSE 2020 Preview

A few items on things that we're doing at SIGCSE 2020. Yes, SIGCSE 2020 is still having a face-to-face meeting. Attendance looks to be down by at least 30% because of coronavirus fears.

Barbara Ericson and Brad Miller (who won't be there) are presenting a paper on their amazingly successful Runestone open-source platform for publishing ebooks: Runestone: A Platform for Free, On-line, and Interactive Ebooks on Sat Mar 14, 2020 11:10 AM – 11:35 AM in D135. They are also hosting a workshop to help others develop with Runestone: Workshop #401: Using and Customizing Ebooks for Computing Courses with Runestone Interactive on Sat Mar 14, 2020 3:30 PM – 6:30 PM in C120.

I’m part of the massive special session on Thursday 1:45 PM – 3:00 PM in B113 that Colleen Lewis is organizing: Session 2H: The Cambridge Handbook of Computing Education Research Summarized in 75 minutes. Colleen, who must have done graduate work in organizational management (or perhaps cat herding), has organized 25 authors (!) to present the entire Handbook in a single session. Even if I weren’t one of the presenters, I’d go just to see if we can all pull it off! It’s going to be kind of like watching NASCAR — you’re on the edge of your seat as everyone tries to avoid crashing into one another.

Bravo to Bob Sloan who got this panel accepted this year: Session 6K: CS + X Meets CS 1: Strongly Themed Intro Courses on Fri Mar 13, 2020 3:45 PM – 5:00 PM in Portland Ball Room 255. The panelists are teachers and developers who have put together contextualized introductions to computing, like Media Computation. The panelists have done interesting classes, and I’m eager to hear what they have to say about them.

I am collaborating with Sindhu Kutty on her interesting summer reading group to engage undergraduates in CS research. (Read as: we meet occasionally to work on assessment, but Sindhu is really doing all the work.) The evidence suggests that she’s able to give undergraduates a better understanding of CS graduate research, at a larger scale (e.g., a couple dozen students per faculty member) than typical undergraduate research programs. It also seems like it might feel safer and easier for female students to try. She was going to present a poster at RESPECT on Wednesday, Undergraduate Student Research With Low Faculty Cost, but RESPECT is now going to be virtual. I’m not sure yet how that’s going to work.

March 11, 2020 at 7:00 am 9 comments

BDSI – A New Validated Assessment for Basic Data Structures: Guest Blog Post from Leo Porter and colleagues

Leo Porter, Michael Clancy, Cynthia Lee, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, and Daniel Zingaro have developed a new concept inventory that they are making available to instructors and researchers. They have written this guest blog post to describe their new instrument and explain why you should use it. I’m grateful for their contribution!

We recently published a Concept Inventory for Basic Data Structures at ICER 2019 [1] and hope it will be of use to you in your classes and/or research.

The BDSI is a validated instrument to measure student knowledge of Basic Data Structures concepts [1]. To validate the BDSI, we engaged faculty at a diverse set of institutions to decide on topics, help with question design, and ensure the questions are valued by instructors. We also conducted over one hundred interviews with students to identify common misconceptions and to ensure students properly interpret the questions. Lastly, we ran pilots of the instrument at seven institutions and performed a statistical evaluation to confirm that the questions behave as intended and discriminate well between students of different ability levels.
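To give a flavor of what that statistical evaluation involves, here is a minimal sketch (an illustration for this post, not the actual BDSI analysis code) of one classical item statistic, the point-biserial discrimination index: it checks that students who do well on the rest of the instrument also tend to answer a given item correctly. The response matrix below is invented.

```python
import numpy as np

def item_discrimination(responses):
    """Point-biserial discrimination for each item.

    responses: 2D array (students x items) of 0/1 item scores.
    Each item is correlated with the rest-of-test score (total
    minus that item), so an item is not correlated with itself.
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    discs = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]
        discs.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return np.array(discs)

# Hypothetical responses: 6 students x 4 items.
resp = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
        [1, 1, 0, 1]]
print(item_discrimination(resp))  # values near or below zero flag weak items
```

An item whose discrimination is near zero or negative is one that strong students miss about as often as weak students do, which is one of the signals that sends a question back for revision through interviews and pilots.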

What Our Assessment Measures

The BDSI measures student performance on Basic Data Structure concepts commonly found in a CS2 course.  To arrive at the topics and content of the exam, we worked with fifteen faculty at thirteen different institutions to ensure broad applicability.  The resulting topics on the CI include: Interfaces, Array-Based Lists, Linked-Lists, and Binary Search Trees. If you are curious about the learning goals or want more details on the process we used in arriving at these goals, please see our SIGCSE 2018 publication [2].

Why Validated Assessments are Great for Instructors

Suppose you want to know how well your students understand various topics in your CS2 course.  How could you figure out how much your students are learning relative to other schools? You could, perhaps, get a final exam from another school and use it in your class to compare results, but that exam may not be a good fit for your course.  Moreover, you may find flaws in some of the questions and wonder whether students interpret them properly. Instead, you can use a validated assessment. The advantage of a validated assessment is that there is general agreement that it measures what you want to measure, and that it accurately captures student thinking.  As such, you can compare your findings against results from other schools that have used the instrument, to determine whether your students are learning particular topics better or worse than cohorts at similar institutions.
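As a concrete, hypothetical illustration of that kind of comparison: given total scores from your class and a published comparison cohort, a simple two-sample test can suggest whether the difference is more than noise. All numbers below are invented.

```python
from scipy import stats

# Invented BDSI-style total scores for your class and for a
# published comparison cohort (purely illustrative data).
my_class = [9, 7, 11, 8, 10, 6, 12, 9, 8, 10]
other_cohort = [7, 8, 6, 9, 7, 10, 8, 6, 7, 9]

# Welch's t-test: does not assume the cohorts have equal variance.
t, p = stats.ttest_ind(my_class, other_cohort, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```

In practice you would look at effect sizes and the full score distributions rather than a lone p-value, but the point stands: it is the shared, validated instrument that makes a comparison like this meaningful at all.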

Why Validated Assessments are Great for Researchers

As CS researchers, we often experiment with new ways to teach courses.  For example, many people use Media Computation or Peer Instruction (PI), two complementary pedagogical approaches developed over the past several decades.  It’s important to establish whether these changes are helping our students. Do more students pass? Do fewer students withdraw? Do more students continue studying CS?  Does it boost outcomes for under-represented groups? Answering these questions using a variety of courses can give us insight into whether what we do corresponds with our expectations.

One important question is: using our new approach, do students learn more than before?  Unfortunately, answering this is complicated by the lack of standardized, validated assessments.  If students score 5% higher on an exam when studying with PI vs. not studying with PI, all we know is that PI students did better on that exam.  But exams are designed by one instructor, for one course at one institution, not for the purposes of cross-institution, cross-cohort comparisons.  They are not validated. They do not take into account the perspectives of other CS experts. When students answer a question on an exam correctly, we assume that it’s because they know the material; when they answer incorrectly, we assume it’s because they don’t know the material.  But we don’t know: maybe the exam contains incidental cues that subtly influence how students respond.

A Concept Inventory (CI) solves these problems.  Its rigorous design process leads to an assessment that can be used across schools and cohorts, and can be used to validly compare teaching approaches.

How to Obtain the BDSI

The BDSI is available via the project's Google Group.  If you're interested in using it, please join the group and add a post with your name, institution, and how you plan to use the BDSI.

How to Use the BDSI

The BDSI is designed to be given as a post-test after students have completed the covered material.  Because the BDSI was validated as a full instrument, it is important to use the entire assessment and not alter or remove any of the questions.  We ask that instructors not make copies of the assessment available to students after giving the BDSI, to avoid the questions becoming public.  We likewise recommend giving students participation credit, but not correctness credit, for taking the BDSI, to avoid incentivizing cheating.  We have found that a successful way to administer the BDSI is to give it as part of a final review session, collect the assessment from students, and then go over the answers.

Want to Learn More?

If you’re interested in learning more about how to build a CI, please come to our talk at SIGCSE 2020 (from 3:45-4:10pm on Thursday, March 12th) or read our paper [3].  If you are interested in learning more about how to use validated assessments, please come to our Birds of a Feather session on “Using Validated Assessments to Learn About Your Students” at SIGCSE 2020 (5:30-6:20pm on Thursday, March 12th) or our tutorial on using the BDSI at CCSC-SW 2020 (March 20-21).

References:

[1] Leo Porter, Daniel Zingaro, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, Cynthia Lee, and Michael Clancy. 2019. BDSI: A Validated Concept Inventory for Basic Data Structures. In Proceedings of the 2019 ACM Conference on International Computing Education Research (ICER ’19).

[2] Leo Porter, Daniel Zingaro, Cynthia Lee, Cynthia Taylor, Kevin C. Webb, and Michael Clancy. 2018. Developing Course-Level Learning Goals for Basic Data Structures in CS2. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE ’18).

[3] Cynthia Taylor, Michael Clancy, Kevin C. Webb, Daniel Zingaro, Cynthia Lee, and Leo Porter. 2020. The Practical Details of Building a CS Concept Inventory. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE ’20).

February 24, 2020 at 7:00 am Leave a comment

Call for participation at the 2nd Excited Summer School on Research in Computing Education

Sharing a note from Birgit Rognebakke Krogstie:

Call for participation at the 2nd Excited Summer School on Research in Computing Education

We are happy to announce the second Excited Summer School on Research in Computing Education, which will take place 8-12 June 2020 at NTNU campus Gløshaugen in Trondheim, Norway. The school is intended for PhD students and postdocs. There will be varied sessions led by experts in the computing education field, combining presentations with discussion and hands-on group work. Topics range from how to teach beginner-level programming courses to the inclusion of environmental sustainability in IT education. The school is a great arena for building networks and for informal discussion with fellow students as well as with the invited experts. For more information, see our website: https://www.ntnu.edu/excited/call-for-participants-excited-summer-school-2020. Application deadline: 10 March 2020.

February 21, 2020 at 9:34 am 3 comments

Importance of considering race in CS education research and discussion

I was talking with one of my colleagues here at Michigan about the fascinating recent journal article from Tim Weston and collaborators based on NCWIT Aspirations award applicants, which I blogged about here. I was telling him about the results — what correlated with women’s persistence in technology and computing, and what didn’t or was negatively correlated.

He said that he was dubious. I asked why. He said, “What about the Black girls?”

His argument was that NCWIT Aspirations award applicants tend to be white and tend to come from wealthy, privileged school districts. Would those correlations be the same if you looked at Black women, or Latina women?

I went back to the Weston et al. paper. They write:

Although all respondents were female, they were diverse in race and ethnicity. Because we know that there are differentiated experiences for students of color in secondary and post-secondary education in the US, and especially women of color, we wanted to make sure we captured any differences in outcomes in our analysis. To do so, we created a variable called Under-represented Minority in Computing (URMC) status that grouped students by race/ethnicity. URMC indicated persons from groups historically under-represented in computing–African-American, Hispanic, or Native American. White, Asian and students of two or more races were coded as “Majority” in this variable. Unfortunately, further disaggregation by specific race/ethnicity was not possible due to low numbers. Thus, even though the numbers in the respondent pool were not high enough to disaggregate by specific race/ethnicity, we could still identify trends by over-representation and under-representation.

18% of their population was tagged URMC. URMC was included as a variable in their analyses, and their results suggest that being in the URMC group did not significantly influence persistence. But if I understand their regressions right, that doesn’t tell us whether the correlations themselves differed by race/ethnicity. Finding that URMC was not a significant factor in the outcomes is not the same as testing whether the other variables’ relationships to persistence differ by race and ethnicity. Do Black women have a different relationship with video games or with community than white women, for example? Or than Latina women?
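To make that distinction concrete, here is a minimal sketch using synthetic data and invented variable names (these are not Weston et al.'s actual measures or models): a regression with URMC as a main effect versus one that interacts URMC with the predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; all variables are invented for illustration.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "video_games": rng.normal(size=n),
    "community": rng.normal(size=n),
    "urmc": rng.integers(0, 2, size=n),
})
logit_p = 0.5 + 0.4 * df.video_games + 0.3 * df.community
df["persist"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Main-effect model: URMC can only shift the baseline persistence rate;
# every predictor is forced to have the same slope for all groups.
main = smf.logit("persist ~ video_games + community + urmc",
                 data=df).fit(disp=0)

# Interaction model: each predictor's slope may differ by URMC status.
inter = smf.logit("persist ~ (video_games + community) * urmc",
                  data=df).fit(disp=0)
print(inter.summary())
```

In the main-effect model, a non-significant urmc coefficient only tells you that the baseline doesn't differ by group; it says nothing about whether video_games or community relate to persistence differently for different groups. Only the interaction terms can answer that question, and fitting them reliably requires enough students in each group.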

While the analysis did not leave race out entirely, there was not enough diversity in the sample to answer my colleague’s question. I do agree with the authors that we would expect differentiated experiences. If our analysis does not include race, can we account for those differentiated experiences?

It’s hard to include race in many of our post-secondary CS ed analyses simply because the number of non-white and non-Asian students is so small. We couldn’t say that Media Computation was successful with a diverse student body until University of Illinois Chicago published their results. Georgia Tech has few students from under-served groups in the CS classes we were studying.

There’s a real danger that we’re going to make strong claims about what works and doesn’t work in computer science based only on what works for students in the majority groups. We need to make sure that we include race in our CS education discussions, that we’re taking into account these differentiated experiences. If we don’t, we risk that any improvements or optimizations we make on the basis of these results will only work with the privileged students, or worse yet, may even exacerbate the differentiated experiences.

February 17, 2020 at 7:00 am 7 comments

Is there a Geek Gene? Are CS grades bi-modal? Moving computing ed research forward

This month’s Communications of the ACM published Elizabeth Patitsas’s ICER paper about bimodality in CS grades (or rather, the lack thereof) as a research highlight, Evidence that Computer Science Grades are not Bimodal. It’s a big deal to have computing education in this position in the ACM flagship publication, and thanks to Shriram Krishnamurthi for his efforts in making this happen.

I wrote about Elizabeth’s paper when it was originally published at ICER at this blog post. Elizabeth wrote a guest blog post here on these topics (see here). These are important issues — Wired has just published an article talking about the Geek Gene with a great discussion of Betsy DiSalvo’s work (see post here about some of Betsy’s work).

I wrote the introductory page to the article (available here). I point out that Elizabeth’s article doesn’t end the debate, but it does move forward how we address questions about how we teach and how students learn:

This paper does not prove there is no Geek Gene. There may actually be bimodality in CS grades at some (or even many) institutions. What this paper does admirably is to use empirical methods to question some of our long-held (but possibly mistaken) beliefs about CS education. Through papers like these, we will learn to measure and improve computing education, by moving it from folk wisdom to evidence-based decision-making.
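For readers curious what testing for bimodality can look like, here is a minimal sketch (my illustration, not the paper's actual analysis): fit one- and two-component Gaussian mixtures to a grade distribution and compare BIC. A clearly lower BIC for two components would be evidence of two subpopulations; similar BICs would not.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic "grades" drawn from a single population -- stand-in data,
# not real course grades.
rng = np.random.default_rng(42)
grades = np.clip(rng.normal(72, 12, size=300), 0, 100).reshape(-1, 1)

# Compare a one-component fit against a two-component fit by BIC
# (lower is better).
for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(grades)
    print(f"k={k}: BIC={gm.bic(grades):.1f}")
```

Patitsas and her co-authors used their own battery of statistical tests (and also examined how instructors perceive ambiguous grade histograms), but the basic logic is the same: bimodality is an empirical claim to be tested, not assumed.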

January 21, 2020 at 7:00 am 11 comments

Abstract Submissions Opens for FIE 2020!

The Frontiers in Education (FIE) 2020 conference will be in Uppsala, Sweden this year. Abstracts are due Feb 2. The post below is from Arnold Pears.

FIE 2020 features an all-new submission site and a new paper review system.

Don’t delay! Secure your place at the 50th Anniversary FIE event by registering your Abstract today!

The 2020 programme features co-located workshops on Computational Thinking Skills for the 21st Century, the Launch Event for the IEEE/ACM Joint Curriculum Project Computing Curricula 2020, a once-in-a-lifetime conference banquet experience in Uppsala Castle, and much, much more.
Submit your Abstract NOW!

FIE 2020

January 18, 2020 at 7:00 am Leave a comment

Computing Education Lessons Learned from the 2010’s: What I Got Wrong

There’s a trend on Twitter over the last few weeks where people (especially the academics I follow) tweet about their accomplishments over the last 10 years. They write about the number of papers published, the number of PhD students graduated, and the amount of grant money they received. It’s a nice reflective activity that highlights many great things that have happened in the 2010’s.

I started this blog in June 2009, so most of it has been written in the 2010’s. The most interesting thing I find in looking back is what I got wrong. There were lots of things that I thought were true, ideas that I worked on, but I later realized were wrong. Since I use this blog as a thinking space, it’s a sign of learning that I now realize that some of that thinking was wrong. And for better or worse, here’s a permanent Internet record.

There are the easy ones — the ones I’ve been able to identify in blog posts as mistakes. There was the time I said Stanford was switching from Java to JavaScript. I should have fought for more CS in the K-12 CS Framework. And I should have been saying “multi-lingual” instead of “language independent” for years. And there was the blog post where I just listed the organizational mistakes I’d made.

The more interesting mistakes are the ones that are more subtle (at least to me), that took me years to figure out, and that maybe I’m still figuring out:

Creating pre-service CS teacher programs would be easy. I thought that we could create programs to develop more pre-service computer science teachers. We just needed the will to do it. You can find posts from me talking about this from 2010 and from 2015. I now realize that this is so hard that it’s unlikely to happen in most US states. My Blog@CACM post this month is about me getting schooled by a group of education faculty in December. We are much more likely to integrate CS into mathematics or science teacher programs than to have standalone CS teacher professional development — and even that will require an enormous effort.

CS for All is about Access. I used to think that the barrier to more students taking CS was getting CS classes into high schools. You can find me complaining about how there were too few high school CS classes in 2016. I really bought into the goal of CS10K (as I talked about in 2014). By 2018, I realized that there was a difference between access and participation. But now we have Miranda Parker’s dissertation and we know that the problem is much deeper than just having teachers and classes. Even if you have classes, you might not get students taking them, or it may just be more of the same kinds of students (as the Roehampton Report has shown us). Diverse participation is really hard.

Constructionism is the way to use computing in education. I grew up as a constructionist, both as a “technically precocious boy” and as a researcher. Seymour Papert wrote me a letter of recommendation when I graduated with my PhD. My post on constructionism is still one of the most-read. In 2011, I thought that the One Laptop Per Child project would work. I read Morgan Ames’ The Charisma Machine, and it’s pretty clear that it didn’t.

The idea of building as a way of learning makes sense. It’s at the heart of Janet Kolodner’s Learning by Design, Yasmin Kafai’s work, Scratch, and lots of other successful approaches. But if you read Seymour carefully, you’ll see that his vision is mostly about learning mathematics and code, through teaching yourself code. That only goes so far. It doesn’t include everyone, and at the worst implementations of his vision, it leaves out teachers.

I was in a design meeting once with Seymour, where he was arguing for making a new Logo implementation much more complicated. “Teachers will hate it!” several of us argued. “But some students will love it,” he countered. Seymour cared about the students who would seek out technical understanding, without (or in spite of) teachers, as he did.

Constructionism in the Mindstorms sense only works for a small percentage of students, which is what Ames’ story tells us. Some students do want to understand the computer soup-to-nuts, and that’s great, and it’s worthwhile making that work for as many students as possible. But I believe that it still won’t be many students. Students care about lots of other things (from business to design, from history to geography) that don’t easily map to a focus on code and mathematics. I still believe in the value of having students program for learning lots of different things, but I’m no longer convinced that the “hard fun” of Logo is the most useful or productive path for using the power of computing for learning. I am less interested in making things for just a few precocious students, especially if teachers hate it. I believe in making things with teachers.

The trick is to define Computational Thinking. I thought that the problem with Computational Thinking was that we didn’t have a clear definition. If we had that, we could do studies in order to measure the value (if any) of CT. I blogged about definitions of it in 2011, in 2012, in 2016, and in 2019. I’ve written and lectured on Computational Thinking. The paper I wrote last Fall with Alan Kay, Cathie Norris, and Elliot Soloway may be the last that I will write on CT. I realized that CT is just not that interesting as a research topic (especially with no well-accepted definition) compared to the challenge of designing computation for better thinking. We can try to teach everyone about computational thinking, but that won’t get as far as improving the computing to help everyone’s thinking. Fix the environment, not the people.

But I could be wrong on that, too.

January 13, 2020 at 7:00 am 46 comments
