NSF Report: Women, Minorities, and Persons with Disabilities in Science and Engineering

A useful report when trying to make an argument for the importance of Broadening Participation in Computing efforts:

Women, Minorities, and Persons with Disabilities in Science and Engineering provides statistical information about the participation of these three groups in science and engineering education and employment. Its primary purpose is to serve as a statistical abstract with no endorsement of or recommendations about policies or programs. National Science Foundation reporting on this topic is mandated by the Science and Engineering Equal Opportunities Act (Public Law 96-516). This digest highlights key statistics drawn from a wide variety of data sources. Data and figures in this digest are organized into five topical areas—enrollment, field of degree, occupation, employment status, and early career doctorate holders.

Source: About this report – nsf.gov – Women, Minorities, and Persons with Disabilities in Science and Engineering – NCSES – US National Science Foundation (NSF)

April 12, 2017 at 7:00 am Leave a comment

University CS graduation surpasses its 2003 peak, with poor diversity

Code.org just blogged that we have set a record in the number of BS in CS graduates.

University CS graduates have set a new record, finally surpassing the number of degrees earned 14 years ago. With a 15% increase in computer science graduates (49,291 bachelor’s degrees), 2015 had the largest number of CS graduates EVER! The previous high point was over a decade ago, in 2003.

Source: University computer science finally surpasses its 2003 peak!

But look at the numbers for women there — they are lower than they were in 2003.  We are graduating only 2/3 as many women today as we did in 2003.  (Thanks to Bobby Schnabel for pointing this out.) We have lost ground.

My most recent Blog@CACM post is on the new CRA “Generation CS” report and the impacts that the rise in enrollment is having on diversity.  One of the positive messages in that report is that departments that have worked to improve their diversity have been successful.  As a national statistic, though, this doesn’t feel like cause for celebration when CS has become less diverse over just 12 years.

 

April 10, 2017 at 7:00 am 3 comments

Research Highlight: CRA Board Member Susanne Hambrusch

I’ve worked with Susanne in a variety of contexts and recommend checking out her page linked below.  She’s taught me a lot about how computing education research connects to the rest of CS, as described in the quote below.  She’s done some interesting work on CS teacher professional development. Most recently, she is one of the authors of the “Generation CS” CRA report.

As computer science evolves into a recognized subject in K-12 curricula, we not only need to know how students learn, but we also need to know how to educate and prepare their teachers. The National Science Foundation’s CS10K effort has been an ambitious project with a significant impact on schools and computer science education research. Online learning opportunities, including MOOCs, Khan Academy, Stack Overflow, and Code.org, help many students learn to code and advance their computing knowledge. Online forums can provide data on clicks, completions, progress, and more. How can this data be used to advance how users learn? How can the background and the goals of the learner be integrated into providing personalized and more meaningful help that advances and enhances learning? To answer questions like this, we need to apply knowledge from a range of areas. Computer science education research is an interdisciplinary field that combines learning sciences and areas of computer science, including software engineering, programming languages, machine learning, human-computer interaction, and natural language processing. Techniques, approaches, and tools developed by researchers in these areas have the potential to create new knowledge about learning and teaching computer science. In turn, this new knowledge has the potential to drive new research in computer science.

Source: Research Highlight: CRA Board Member Susanne Hambrusch – CRA

April 7, 2017 at 7:00 am Leave a comment

The Limitations of Computational Thinking: NYTimes

The New York Times ran a pair of articles on computing education yesterday, one on Computational Thinking (linked above and quoted below) and one on the new AP CS Principles exam.  Shriram and I are quoted as offering a more curmudgeonly view on computational thinking.  (Yes, I fixed the name of my institution in the quote below, from how it is phrased in the actual article.)

Despite his chosen field, Dr. Krishnamurthi worries about the current cultural tendency to view computer science knowledge as supreme, better than that gained in other fields. Right now, he said, “we are just overly intoxicated with computer science.”

It is certainly worth wondering if some applications of computational thinking are trivial, unnecessary or a Stepford Wife-like abdication of devilishly random judgment.

Alexander Torres, a senior majoring in English at Stanford, has noted how the campus’s proximity to Google has lured all but the rare student to computer science courses. He’s a holdout. But “I don’t see myself as having skills missing,” he said. In earning his degree he has practiced critical thinking, problem solving, analysis and making logical arguments. “When you are analyzing a Dickinson or Whitman or Melville, you have to unpack that language and synthesize it back.”

There is no reliable research showing that computing makes one more creative or more able to problem-solve. It won’t make you better at something unless that something is explicitly taught, said Mark Guzdial, a professor in the School of Interactive Computing at Georgia Tech who studies computing in education. “You can’t prove a negative,” he said, but in decades of research no one has found that skills automatically transfer.

April 5, 2017 at 7:00 am 5 comments

Elementary School Computer Science – Misconceptions and Developmental Progressions: Papers from SIGCSE 2017

Seattle hosted the 2017 ACM SIGCSE Technical Symposium on March 8-11. This was the largest SIGCSE ever, with over 1500 attendees. I was there and stayed busy (as I described here). This post isn’t a trip report. I want to talk about two of my favorite papers (and one disappointing one) that I’ve read so far.

We are starting to gather evidence on what makes elementary school computer science different from undergraduate computer science. Most of our research on learning programming and computer science is about undergraduates, published in SIGCSE venues. We know relatively little about elementary school students, and it’s obvious that their learning is going to be different. But how?

Shuchi Grover and Satabdi Basu of SRI are starting to answer that question in their paper “Measuring Student Learning in Introductory Block-Based Programming: Examining Misconceptions of Loops, Variables, and Boolean Logic.” They looked at the problems that 6th, 7th, and 8th graders had when programming in Scratch. They’re reporting misconceptions that I’ve never heard of before at the undergraduate level, like this one:

Students harbored the misconception that a variable is a letter that is used as a short form for an unknown number – an idea that comes from middle school mathematics classes. Together, this led students to believe that repeat(NumberOfTimes) was a new command. One student conjectured it was a command for multiplication by 5 (the value of NumberOfTimes), while another thought it would print each number five times… After being told that NumberOfTimes was indeed a variable, the students could correctly predict the program output, though they continued to take issue with the length of the variable name.

I find their description believable and fascinating. Their paper made me realize that middle school students are expending cognitive load on issues like multi-character variable names that probably no computer scientist even considered. That’s a real problem, but probably fixable — though the fix might be in the mathematics classes, as well as in the CS classes.
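To make the misconception concrete, here is a rough Python analogue of the kind of program the students saw. This is my own reconstruction for illustration only, not the actual Scratch code from the study, and the variable name is taken from the quote above:

```python
# Hypothetical reconstruction of the students' program: a loop whose count
# is stored in a variable with a multi-character name.
NumberOfTimes = 5

# Students read "repeat (NumberOfTimes)" as a brand-new command (e.g., a
# "multiply by 5" instruction) rather than a loop controlled by a variable.
for i in range(NumberOfTimes):
    print(i + 1)   # prints 1 through 5, once each
```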

The paper that most impressed me was from Diana Franklin’s group, “Using Upper-Elementary Student Performance to Understand Conceptual Sequencing in a Blocks-based Curriculum.” They’re studying over 100 students, and starting to develop general findings about what works at each of these grade levels. Three of their findings are quoted here:

Finding 1: Placing simple instructions in sequence and using simple events in a block-based language is accessible to 4th-6th grade students.

Finding 2: Initialization is challenging for 4th and 5th grade students.

Finding 3: 6th grade students are more precise at 2-dimension navigation than 4th and 5th grade students.

I’ve always suspected that there is an interaction between a student’s level of cognitive development and what they are able to do in programming, given how much students are learning about abstraction and representation at these ages. Certainly, programming might influence cognitive development, too. It’s important to figure out what we might expect.

That’s what Diana’s group is doing. She isn’t saying that fourth graders can’t initialize variables and properties. She’s saying it’s challenging for them. Her results are likely influenced by Scratch and by how the students were taught, but it’s still an important result. Diana’s group is offering a starting point for exploring these interactions and understanding what we can expect to be easy and what might be hard for the average elementary school student at different ages.  There may be studies that also tell us about developmental progressions in countries that are ahead of the US in elementary school CS (e.g., maybe Israel or Germany). This is the first study of its kind that I’ve read.

SIGCSE 2017 introduced Best Paper awards in multiple categories and Exemplary Paper awards. I applaud these initiatives. Other conferences have these kinds of awards, and they help our authors stand out in job searches and at promotion time.

For the awards to be really meaningful, though, SIGCSE has to fix its reviewing process. There were hiccups in this year’s reviewing where reviewer expertise didn’t match the paper’s topic, and those hiccups led to papers with significant flaws getting high rankings.

The Best Paper award in the Experience Report category went to “Making Noise: Using Sound-Art to Explore Technological Fluency.” The authors describe a really nifty idea. They implement a “maker” kind of curriculum. One of the options is that students get toys that make noise, then modify and reprogram them. The toys already work, so the activity is about understanding a system, then modifying and augmenting it. The class sounds great, but as Leah Buechley has pointed out, “maker” curricula can be overwhelmingly male. I was surprised that this award-winning paper doesn’t mention females or gender — at all. (There is one picture of a female student in the paper.) I understand that it’s an Experience Report, but gender diversity is a critical issue in CS education, particularly with maker curricula. I consider the omission of even a mention of gender to be a significant flaw in the paper.

April 3, 2017 at 7:00 am 9 comments

The need for better software and systems to support active CS learning

I believe strongly in active learning, such as Peer Instruction (as I have argued here and here).  I have discovered that it is far harder than I thought to do in large CS classes.

I decided to use clickers in CS1315 this semester (n=217), rather than use the colored index cards that I’ve used in the past for Peer Instruction (see blog post here). With cards, I can only take a vote — no histogram of results, and I can’t provide any grade value for the participation. With clickers, I can use the evidence-based practice as developed by Eric Mazur, Cynthia Lee, Beth Simon, Leo Porter, et al. (plugging the Peer Instruction for CS website):

  • Ask everyone to answer to prime their thinking about the question,
  • ask students to discuss the question in groups of 2-3,
  • then vote again (consensus within groups), and
  • show the results and discuss the misconceptions.

To make it worthwhile, I’m giving 10 points of the final course grade for getting over 50% of the second votes correct (only the second vote counts — the first one is just to gather predictions and activate knowledge), and 5 points for scoring over 30%.
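Here’s a minimal sketch of that grading rule. This is not my actual grading script; the function name and the idea of pre-computing each student’s fraction of correct second votes are just for illustration:

```python
def participation_points(fraction_correct_on_second_votes: float) -> int:
    """Course points for Peer Instruction participation.

    Only the second (post-discussion) vote is graded; the first vote just
    primes thinking. Over 50% correct earns 10 points, over 30% earns 5.
    """
    if fraction_correct_on_second_votes > 0.50:
        return 10
    if fraction_correct_on_second_votes > 0.30:
        return 5
    return 0

# Example: a student correct on 42% of the second votes earns 5 points.
print(participation_points(0.42))  # -> 5
```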

I’m trying to do this all with campus-approved standards: TurningPoint clickers, TurningPoint software.  I’d love to use an app-based solution, but our campus Office of Information Technologies warns against it.  They can’t guarantee that, in large classes, the network will support all the traffic for everyone to vote at once.

The process is so complicated: turn on clickers in our learning management software (a form of Sakai called T-Square), download the participant list, open up ResponseWare and define a session (for those using the app version), and plug in the receiver. After class, save the session, integrate the session with the participant list, then integrate the results with T-Square for grades. The default question-creation process in the TurningPoint software automatically shows results and demands a specific format (which makes it hard, for example, to show screenshots as part of a question), so I’m using the “Poll Anywhere” option, which requires me to process the session file after class to delete the first question (where everyone votes to prime their thinking) and to define the correct response(s) for each question.
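For illustration, here is roughly what that after-class cleanup amounts to, sketched in Python over a hypothetical CSV export. TurningPoint’s real session format is different; the column names and answer key here are made up:

```python
import csv

# Hypothetical answer key: the correct response for each graded question.
ANSWER_KEY = {2: "B", 3: "D", 4: "A"}

with open("session.csv") as f:          # hypothetical export of the session
    rows = list(csv.DictReader(f))      # columns: question, student, answer

# Drop question 1: everyone votes on it only to prime their thinking.
graded = [r for r in rows if int(r["question"]) != 1]

# Mark each remaining response right or wrong against the answer key.
for r in graded:
    r["correct"] = (r["answer"] == ANSWER_KEY.get(int(r["question"])))

with open("graded_session.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "student", "answer", "correct"])
    writer.writeheader()
    writer.writerows(graded)
```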

I’m willing to do all that. But it’s more complicated than that.

It turns out that Georgia Tech hasn’t upgraded to the latest version of the TurningPoint software (TurningPoint Cloud).  GT only supports TurningPoint 5. Turning Technologies stopped distributing that version of the software in May 2016, so you have to get it directly from the on-campus Center for Teaching and Learning. I got the software and installed it — and discovered that it doesn’t run on the current version of macOS, Sierra.

I did find a solution. Here’s what I do.  Before each lecture, I move my lecture slides to a network drive.  When I get to class, I load my lecture on the lecture/podium computer (which runs Windows and TurningPoint 5 and has a receiver built-in).  I gather all the session data while I teach with the podium computer and do live coding on my computer (two screens in the massive lecture hall).  I save the session data back to the network drive.  Back in my office, I use an older Mac that still runs an older version of MacOS to download the session data, import it using TurningPoint 5, do all the deletions of priming questions and correct-marking of other questions, then integrate and upload to T-Square.

Counting my laptop where I make up slides and do live coding, my Peer Instruction classes require three computers.

We CS teachers should all be using active learning methods in our classes.  Our classes are huge.  We need better and easier mechanisms to make this work.

 

March 31, 2017 at 7:00 am 7 comments

Visit to researchers at ExcITEd Center at NTNU

In January, Barbara Ericson and I were invited to visit the new ExcITED Center at NTNU in Trondheim, Norway. ExcITED is the Centre for Excellent IT Education. It was a whirlwind trip, fitting it in after the start of our semester at Georgia Tech, but really wonderful. We got there just as NTNU was marking the launch of its new Department of Computer Science with an “IDIovation” event, which included some great research talks and (a highlight for me) a live coding computer music performance. The whole event was recorded and is available here.

Our host for the visit was Michail Giannakos, who is a learning scientist interested in a variety of educational technologies. We got a chance to meet with several of the faculty and many of the students working in ExcITED. Like I said, it was a whirlwind trip, so please excuse me if I only mention a few of the projects we saw — the ones that particularly stuck with me, despite the jet-lag.

One team at ExcITED is logging student interactions with the IDE that they use in their classes at the University, like the BlueJ Blackbox effort. What makes what they’re doing remarkable is that they’re immediately turning the data around, to present a process mirror to the students. They show students a visualization of what they have been doing. The goal is to encourage reflection, to get students to realize when they’re spending too much time on one phase of their work, or maybe not enough (e.g., in testing). The challenge is mapping from the low-level user interactions to higher-level visualizations that might inform students.
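As I understand it, the core of that mapping is something like the following sketch: classify each low-level IDE event into a coarse phase, then total the time per phase to show back to the student. The event names and log format here are hypothetical, not ExcITED’s actual pipeline:

```python
from collections import defaultdict

# Hypothetical log: (timestamp in seconds, low-level IDE event type)
events = [(0, "edit"), (40, "edit"), (95, "compile"),
          (100, "run_tests"), (130, "edit"), (300, "run_tests")]

# Map low-level events to the higher-level phases shown to students.
PHASE = {"edit": "writing code", "compile": "building", "run_tests": "testing"}

time_per_phase = defaultdict(int)
for (t, kind), (t_next, _) in zip(events, events[1:]):
    # Attribute the interval up to the next event to the current event's phase.
    time_per_phase[PHASE[kind]] += t_next - t

for phase, seconds in time_per_phase.items():
    print(f"{phase}: {seconds} s")
```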

There are several projects that are working with children who are programming in Scratch (which can be localized to Norwegian). The one that most captured my attention was where students were programming these beautiful robotic sculptures, created by professional artists. The team is exploring how this influences student motivation. How does motivation change when the robots under the students’ control are neither student-generated nor stereotypically “robotic”?

The Tiles project by Simone Mora, Francesco Gianni, and Monica Divitini aims to engage designers in ubiquitous computing. They have these cool cards that they use in an activity with designers to get them thinking about the kinds of everyday items in which computation might be embedded. They want designers to think about how sensors and actuators might be used to support user activity.


They’re now working to extend these cards with ties to JavaScript code that would actually allow designers to build the things that they designed. It’s an innovative activity to engage designers with embedded computing and then to carry the designs to prototype.

On the weekend after our visit, the chair of the department, Letizia Jaccheri, took Barb and me off to ski in Åre, Sweden. We arrived on a Thursday, spoke at IDIovation that night, met with ExcITED researchers on Friday, traveled to Sweden to ski on Saturday, came back on Sunday, and flew home on Monday. It was an absolutely amazing trip, and we were both grateful to have had the opportunity!

March 29, 2017 at 7:00 am 1 comment
