Archive for December, 2016

Raising the Floor: Sharing What Works in Workplace Diversity, Equity, and Inclusion

A really interesting set of proposals.  I saw many that are applicable to improving diversity in higher-education CS, as well as the stated goal of improving workplace diversity.

Workplace diversity is probably the biggest factor inhibiting women in computing.  We used to say that females avoided CS, not knowing what it is.  I think we can now fairly say that many females avoid CS because they know what it is.

This is a great ending blog post of 2016.  See you in January! Happy Holidays and a Great New Year!

Over the past few months, we and our colleagues at OSTP have had conversations with dozens of Federal agencies, companies, investors, and individuals about their science and technology workforces, and we have consistently heard people express a commitment to bringing more diversity, equity, and inclusion to their workplaces. They understand the strategic importance. Yet often we found that many of the same people who want to create high-performing, innovative teams and workforces do not know the steps and solutions that others are already effectively using to achieve their diversity, equity, and inclusion goals.

In order to help accelerate this work, we have compiled insights and tips into an Action Grid designed to be a resource for those striving to create more diverse, equitable, and inclusive science and technology teams and workforces, so that we can all learn from each other.

Diversity, equity, and inclusion work is not one size fits all. We hope this set of potential actions clustered by leadership engagement, retention and advancement, hiring, and ecosystem support provides ideas and a jumping off point for conversations within your team or organization on steps that you can take to increase diversity and to make your workforce more reflective of the communities you serve, customers you sell to, and talent pools you draw from.

Source: Raising the Floor: Sharing What Works in Workplace Diversity, Equity, and Inclusion | whitehouse.gov

December 21, 2016 at 7:15 am 8 comments

After Leaving Computing, New Majors Tend to Differ by Gender – CRN

I found these differences fascinating, though I’m not sure what to make of them.  After leaving computing, students head to different majors, with a big gender difference: only 5% of women go into an Engineering field after CS, while 32% of men go into some form of Engineering.  Why is that?

As computing departments across the U.S. wrestle with increased enrollment, it is important to recognize that not everyone who becomes a computing major stays a computing major. In 2014, CERP collected data from a cohort of U.S. undergraduate students who agreed to be contacted for follow-up surveys in 2015. While most of the students surveyed remained computing majors (96%), some students changed to a non-computing major. As shown in the graphic above, students in our sample moved to a variety of majors, and the type of new major tended to differ by gender. Most men (69%) who left a computing major switched to engineering, math/statistics, or physical science majors. On the other hand, most women (53%) tended to move to social sciences, or humanities/arts. These data are consistent with existing social science research indicating women tend to choose fields that have clear social applications, such as the social sciences, arts, and humanities. CERP’s future analyses will explore why women, versus men, say they are leaving computing for other fields.

Source: After Leaving Computing, New Majors Tend to Differ by Gender – CRN

December 19, 2016 at 7:22 am 5 comments

Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research

I am posting this on the day that I am honored to “hood” Dr. Briana Morrison. “Hooding” is where doctoral candidates are given the academic regalia indicating their doctoral degree. It’s one of those ancient parts of academia that I find really cool. I like the way that Wikiversity describes it: “The Hooding Ceremony is symbolic of passing the guard from one generation of doctors to the next generation of doctors.”

I’ve written about Briana’s work a lot over the years here:

But what I find most interesting about Briana’s dissertation work are the things that didn’t work:

  • She tried to show a difference in getting program instruction via audio or text. She didn’t find one. The research on modality effects suggested that she would.
  • She tried to show a difference between loop-and-a-half and exit-in-the-middle WHILE loops. Previous studies had found one. She did not.
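For readers who don’t know the two loop structures by name, here is an illustrative sketch in Python (the study’s actual materials and language may have differed; the function names and sentinel example are mine):

```python
def sum_loop_and_a_half(values, sentinel=-1):
    """Loop-and-a-half: the read step is duplicated --
    a "priming" read before the loop, and another at the bottom."""
    total = 0
    it = iter(values)
    value = next(it)              # priming read
    while value != sentinel:
        total += value
        value = next(it)          # repeated read at the bottom
    return total


def sum_exit_in_the_middle(values, sentinel=-1):
    """Exit-in-the-middle: one read inside an intentionally
    infinite loop, with the exit test after it."""
    total = 0
    it = iter(values)
    while True:
        value = next(it)          # single read, mid-loop
        if value == sentinel:
            break                 # exit in the middle
        total += value
    return total
```

Both compute the same result; the question in the prior studies was which structure students comprehend and modify more easily.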

These kinds of results are so cool to me, because they point out what we don’t know about computing education yet. The prior results and theory were really clear. The study was well-designed and vetted by her committee. The results were contrary to what we expected. WHAT HAPPENED?!? It’s for the next group of researchers to try to figure out.

The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question.

Here’s the experimental set-up. We’ve got six groups.

  1. All groups are learning with pairs of a worked example (a completely worked out piece of code) and then a practice problem (maybe a Parson’s Problem, maybe writing some code). We’ll call these WE-P pairs (Worked Example-Practice). Now, some WE-P pairs have the same context (think of it as the story of a story problem), and some have different contexts. Maybe in the same context, you’re asked to compute the average tips for several days of tips as a barista. Maybe in a different context, you compute tips in the worked example, but you compute the average test score in the practice. In general, we predict that different contexts will be harder for the student than having everything the same.
  2. So we’ve got same context vs different context as one variable we’re manipulating. The other variable is whether the participants get the worked example with NO subgoal labels, or GENERATED subgoal labels, or the participant has to GENERATE subgoal labels. Think of a subgoal label as a comment that explains some code, but it’s the same comment that will appear in several different programs. It’s meant to encourage the student to abstract the meaning of the code.

In the GENERATE condition, the participants get blanks, to encourage them to abstract for themselves. Typically, we’d expect (from research with subgoal labels in other parts of STEM) that GENERATE would lead to more learning than GIVEN labels, but it’s harder. We might get cognitive overload.
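To make the conditions concrete, here is an illustrative worked example (not from the study’s actual materials — the tips scenario and labels are my own sketch) where the subgoal labels appear as reusable comments. In the GIVEN condition the comments are present; in the GENERATE condition those lines would be blanks for the student to fill in:

```python
# Worked example (GIVEN subgoal labels): average of a barista's tips.
# Each "Subgoal:" comment is a label that would reappear verbatim in
# other programs following the same plan (e.g., averaging test scores).

tips = [2.50, 3.00, 1.75, 4.25]

# Subgoal: initialize the accumulator and counter
total = 0.0
count = 0

# Subgoal: loop over the data, updating the accumulator
for tip in tips:
    total += tip
    count += 1

# Subgoal: compute and report the result
average = total / count
print(average)   # 2.875
```

A same-context practice problem would reuse the tips story; a different-context one would keep the same subgoal structure but switch to, say, test scores.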

In general, GIVEN labels beats out no labels. No problem — that’s what we expect given all the past work on subgoal labels. But when we consider all six groups, we get this picture.

Why would having the same context do worse with GIVEN labels than no labels? Why would the same context do much better with GENERATE labels, but worse when it’s different contexts?

So, Briana, Lauren, and Adrienne Decker replicated the experiment with Adrienne’s students at RIT (ICER 2016). And they found:

The same strange “W” pattern, where we have this odd interaction between context and GIVEN vs. GENERATE that we just don’t have an explanation for.

But here’s the really intriguing part: they also did the experiment with second semester students at RIT. All the weird interactions disappeared! Same context beat different context. GIVEN labels beat GENERATE labels. No labels do the worst. When students get enough experience, they figure things out and behave like students in other parts of STEM.

The puzzle for the community is WHY. Briana has a hypothesis. Novice students don’t attend to the details that they need to, unless you change the contexts. Without changing contexts, even students who are GIVEN labels don’t learn, because they’re not paying enough attention. Changing contexts gets them to think, “What’s going on here?” GENERATE is just too hard for novices — the cognitive load of figuring out the code and generating labels is overwhelming, so students do badly when we’d expect them to do better.

Here we have a theory-conflicting result, that has been replicated in two different populations. It’s like the Rainfall Problem. Nobody expected the Rainfall Problem to be hard, but it was. More and more people tried it with their students, and still, it was hard. It took Kathi Fisler to figure out how to teach CS so that most students could succeed at the Rainfall Problem. What could we teach novice CS students so that they avoid the “W” pattern? Is it just time? Will all second semester students avoid the “W”?

Dr. Morrison gave us a really interesting dissertation — some big wins, and some intriguing puzzles for the next researchers to wrestle with. Briana has now joined the computing education research group at U. Nebraska – Omaha, where I expect to see more great results.

December 16, 2016 at 7:00 am 8 comments

If you really want a diverse workforce, why not go where there is diversity?

Nick Black, brilliant GT alum and (now former) Google engineer, says it like he sees it.  His critique of Google and its efforts to improve diversity extends to most of Silicon Valley.  If you really want a diverse workforce, open offices where there’s diversity.

Nick’s analysis (and I encourage you to read the whole post below) talks about the density of middle class Black workers. He doesn’t consider where there are Black workers who know computing.  Computing education is still pretty rare in the US.  Let’s use AP CS exam-taking as a measure of where there is CS education.  In Michigan last year, there were 19 Black AP CS exam-takers. 11 in Missouri.  None in Mississippi.  There are middle class Black families in these states.  They may not be getting access to CS education.

Google talks endlessly about diversity, and spends millions of dollars on the cause. My NYC office lends its prodigiously expensive square feet to Black Girls Code. We attempt to hook the recruiting pipeline up to HBCUs. We tweet about social justice and blog about the very real problem of racial inequality in America. Noble endeavors, all. It’s too bad that they’re not taking place where black people actually, you know, live.

According to census.gov’s data as of 2016, Mountain View is 2% black. In 2010, the Bay Area Census Project recorded 1,468 blacks in MTV. I saw more black people than that crossing Peachtree Street today. census.gov reports, as of 2010, blacks making up 25.1% of NYC, 9.6% of Los Angeles, and 6.1% of famously liberal San Francisco. census.gov does not provide data for Dublin or Zürich, but we can make some reasonable assumptions about those other largest Google offices, n’est-ce pas?

And let’s be honest — I doubt much of that 25.1% of NYC is centered around Chelsea.

Atlanta’s a bit down from 67% in 1990, but 54% ain’t so bad.

Source: A dispatch from Terminus – dankwiki, the wiki of nick black

December 12, 2016 at 7:05 am 7 comments

Making Hard Choices in State Computing Education Policy towards #CSforAll #CSEdWeek

At the ECEP Summit, I sat with the team from North Carolina as they were reviewing data that our evaluation team from Sagefox had assembled.  It was fascinating to work with them as they reviewed their state data.  I realized in a new way the difficult choices that a state has to make when deciding how to make progress towards the CS for All goal.  In the discussion that follows, I don’t mean to critique North Carolina in any way — every state has similar strengths and weaknesses, and has to make difficult choices.  I just spent time working with the North Carolina team, so I have their numbers at hand.

North Carolina has 5,000 students taking CS in the state right now.  That was higher than some of the other states in the room.  I had been sitting with the Georgia state team, and knew that Georgia was unsure if we have even one full-time CS teacher in a public high school in the whole state.  The North Carolina team knew for a fact that they had at least 10 full-time high school CS teachers.

Some of the other statistics that Sagefox had gathered:

  • In 2015, only 18% of the Black students in North Carolina who took the AP CS exam passed it. (The rate rose to 28% in 2016, but we didn’t have those results at the summit.) The overall pass rate for AP CS in North Carolina is over 40%.
  • Only 68 teachers in the state took any kind of CS Professional Development (that Sagefox could track).  There are 727 high schools in the state.
  • Knowing that there are 727 high schools in the state, we can put the 5,000 high school students in CS in perspective.  We know that there are at least 10 full-time CS teachers in North Carolina, each teaching six classes of 20 students each.  That accounts for 1,200 of those 5,000.  The remaining 3,800 students, divided by 727 high schools with class sizes typically around 20 students, suggests that many high schools in North Carolina have no CS at all.
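The back-of-the-envelope arithmetic above can be written out explicitly (all figures are the estimates quoted in the bullets, not new data):

```python
# Rough coverage estimate for CS in North Carolina high schools,
# using the figures quoted above.

total_cs_students = 5000
full_time_teachers = 10
classes_per_teacher = 6
students_per_class = 20
high_schools = 727

# Students accounted for by the known full-time teachers
full_time_students = full_time_teachers * classes_per_teacher * students_per_class
print(full_time_students)                 # 1200

# Remaining students, spread across the state's high schools
remaining = total_cs_students - full_time_students
print(remaining / high_schools)           # about 5.2 students per school

# At ~20 students per class, that is only enough for a CS class
# in roughly a quarter of the state's high schools
print(remaining / students_per_class / high_schools)   # about 0.26
```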

Given all of this, if you wanted to achieve CS for All, where would you make a strategic investment?

  • Maybe you’d want to raise that Black student pass rate.  North Carolina is 22% African-American.  If you can improve quality for those students, you can make a huge impact on the state and make big steps towards broadening participation in computing.
  • Maybe you’d want to work towards all high schools having a CS teacher.  Each teacher is only going to reach at most 120 students (that’s full-time), but that would go a long way towards more equitable access to CS education in the state.
  • Maybe you’d want to have more full-time CS teachers — not just one class, but more teachers who just teach CS for the maximum six courses a year.  Then, you reach more students, and you create an incentive for more pre-service education and a pipeline for CS teachers, since then you’d have jobs for them.

The problem is that you can’t do all of these things.  Each of these is expensive.  You can really only go after one goal at a time.  Which one first?  It’s a hard choice, and we don’t have enough evidence to advise which is likely to pay off the most in the long run.  And you can’t achieve any of these goals all at once — as I described in Blog@CACM, you take incremental steps. These are all tough choices.

December 9, 2016 at 7:51 am Leave a comment

NSF Education Research Questions and Warnings for #CSforAll during #CSEdWeek

Joan Ferrini-Mundy spoke at our White House Symposium on State Implementation of CS for All (pictured above). Joan is the Assistant Director at NSF for the Education and Human Resources Directorate. She speaks for Education Research. She phrased her remarks as three research areas for the CS for All initiative, but I think that they could be reasonably interpreted as three sets of warnings. These are the things that could go wrong, that we ought to be paying attention to.

1. Graduation Requirements: Joan noted that many states are making CS “count” towards high school graduation requirements. She mentioned that we ought to consider the comments of organizations such as NSTA (National Science Teachers Association) and NCTM (National Council of Teachers of Mathematics). She asked us to think about how we resolve these tensions, and to track what are the long term effects of these “counting” choices.

People in the room may not have been aware that NSTA had just (October 17) come out with a statement, “Computer Science Should Supplement, not Supplant Science Education.”

The NCTM’s statement (March 2015) is friendlier towards computer science, but it’s still voiced as a concern:

Ensuring that students complete college- and career-readiness requirements in mathematics is essential. Although knowledge of computer science is also fundamental, a computer science course should be considered as a substitute for a mathematics course graduation requirement only if the substitution does not interfere with a student’s ability to complete core readiness requirements in mathematics. For example, in states requiring four years of mathematics courses for high school graduation, such a substitution would be unlikely to adversely affect readiness.

Both the NSTA and NCTM statements are really saying that you ought to have enough science and mathematics. If you only require a couple science or math courses, then you shouldn’t swap out CS for one of those. I think it’s a reasonable position, but Joan is suggesting that we ought to be checking. How much CS, science, and mathematics are high school students getting? Is it enough to be prepared for college and career? Do we need to re-think CS counting as science or mathematics?

2. Teacher Credentialing: Teacher credentials in computer science are a mishmash. Rarely is there a specific CS credential. Most often, teachers have a credential in business or other Career and Technical Education (CTE or CATE, depending on the state), and sometimes mathematics or science. Joan asked us, “How is that working?” Does the background matter? Which works best? It’s not an obvious choice. For example, some CS Ed researchers have pointed out that CTE teachers are often better at teaching diverse audiences than science or mathematics teachers, so CTE teachers might be better for broadening participation in computing. We ought to be checking.

3. The Mix of Curricular Issues: While STEM has a bunch of frameworks and standards to deal with, we know what they are. There’s NGSS (Next Generation Science Standards) and the National Research Council Framework. There’s Common Core. There are the NCTM recommendations.

In Computer Science, everything is new and just developing. We just had the K-12 CS Framework released. There are ISTE Standards, and CSTA Standards, and individual state standards like in Massachusetts. Unlike science and mathematics, CS has almost no assessments for these standards. Joan explicitly asked, “What works where?” Are our frameworks and standards good? Who’s going to develop the assessments? What’s working, and under what conditions?

I’d say Joan is being a critical friend. She wants to see CS for All succeed, but she doesn’t want that to cost achievement in other areas of STEM. She wants us to think about the quality of CS education with the same critical eye that we apply to mathematics and science education.

December 7, 2016 at 7:00 am 4 comments

AP CS A Exam Data for 2016: Barb Ericson’s analysis, Hai Hong’s guest blog post #CSedWeek

As usual, Barbara Ericson went heads-down, focused on the AP CS A data when the 2016 results were released.  But now, I’m only one of many writing about it.  Education Week is covering her analysis (see article here), and Hai Hong of Google did a much nicer summary than the one I usually put together. Barb’s work with Project Rise Up 4 CS and Sisters Rise Up have received funding from the Google Rise program, which Hai is part of. I’m including it here with his permission — thanks, Hai!

Every year, I’m super thankful that Barb Ericson at Georgia Tech grabs the AP CS A data from the College Board and puts it all into a couple of spreadsheets to share with the world.  🙂
Here’s the 2016 data, downloadable as spreadsheets: Overall and By Race & Gender.  For reference, you can find 2015 data here and here.
Below is a round-up of the most salient findings, along with some comparison to last year’s.  More detailed info is in the links above.  Spoiler: Check out the 46% increase in Hispanic AP exam takers!
  • Overall: Continued increases in test-taking, but a dip in pass rates.
    • 54,379 test-takers in 2016.  This reflects a 17.3% increase from 2015 — which, while impressive, is a slower increase than 24.2% in 2015 and 26.3% in 2014.
    • Overall pass rate was 64% (same as last year; 61% in 2014)
  • Girls
    • Female exam takers: 23% (upward trend from 22% in 2015, 20% in 2014)
    • Female pass rate: 61% (same as last year; 57% in 2014)
    • In 8 states, fewer than 10 females took the exam: Alaska (9/60), Nebraska (8/88), North Dakota (6/35), Kansas (4/57), Wyoming (2/6), South Dakota (1/26), Mississippi (0/16), Montana (0/9). Two states had no females take the exam: Mississippi and Montana.
  • Black
    • Black exam takers: 2,027 (Increase of 13% from 1,784 in 2015; last year’s increase was 21% from 1,469 in 2014)
    • Black pass rate: 33% (down from 38% in 2015, but close to 2014 pass rate of 33.4%).
    • Twenty-four states had fewer than 10 African American students take the AP CS A exam. Nine states had no African American students take the AP CS A exam: Maine (0/165), Rhode Island (0/94), New Mexico (0/79), Vermont (0/70), Kansas (0/57), North Dakota (0/35), Mississippi (0/16), Montana (0/9), Wyoming (0/6).
  • Hispanic
    • Hispanic exam takers: 6,256 (46% increase from 4,272 in 2015!)
    • Hispanic pass rate: 41.5% (up from 40.5% in 2015)
    • Fifteen states had fewer than 10 Hispanics take the exam: Delaware, Nebraska, Rhode Island, New Hampshire, Maine, Kansas, Idaho, West Virginia, Wyoming, Vermont, Mississippi, Alaska, North Dakota, Montana, and South Dakota. Three states had no Hispanics take the exam: North Dakota (0/35), Montana (0/9), South Dakota (0/26).
And as a hat-tip to Barb Ericson (whose programs we’ve partnered with and helped grow through the RISE Awards these last 3 years) and the state of Georgia:
  • 2,033 exam takers in 2016 (this represents something like a 410% increase in 12 years!)
  • New record number of African Americans and females pass the exam in Georgia again this year!
  • 47% increase (464 in 2016 vs. 315 in 2015) in girls taking the exam.
  • Nationally, the African American pass rate dropped from 37% to 33%.  In Georgia it increased from 32% to 34%.
  • The pass rate for female students also increased in Georgia from 48% to 51%.
  • Only one African American female scored a 5 on the AP CS A exam in Georgia in 2016 and she was in Sisters Rise Up 4 CS (RISE supported project).

December 5, 2016 at 7:13 am 4 comments

Research+Practice Partnerships and Finding the Sweet Spots: Notes from the ECEP and White House Summit


I wrote back in October about the summit on state implementation of the CS for All initiative which we at Expanding Computing Education Pathways (ECEP) alliance organized with the White House Office of Science and Technology Policy (OSTP). You can see the agenda here and a press release on the two days of meetings here.

I have been meaning to write about some of the lessons I learned in those two days, but have been simply slammed this month. I did finally write about some of the incremental steps that states are taking towards CS for All in my Blog@CACM post for November. That post is about the models of teacher certification that are developing, the CSNYC school-based mandate, and New Hampshire’s micro-certifications.

In this post, I want to tell you about a couple of the RPC ideas that I found most compelling. The first part of the day at the Eisenhower Executive Office Building (EEOB) on the White House grounds was organized by the Research+Practice Collaboratory (RPC). I was the moderator for the first panel of the day, where Phil Bell, Nichole Pinkard, and Dan Gallagher talked about the benefits of combining research plus practice.

I was excited to hear about the amazing work that Nichole Pinkard (pictured above) is doing in Chicago, working with Brenda Wilkerson in Chicago Public Schools. Nichole is a learning scientist who has been developing innovative approaches to engaging urban youth (see her Digital Youth Network website). She has all these cool things she’s doing to make the CS for All efforts in Chicago work. She’s partnering with Chicago parks and libraries — other than schools, they’re the ones who cover the city and connect with all kids. She’s partnering with Comcast to create vans that can go to parks to create hotspots for connectivity. Because she’s a researcher working directly with schools, they can do things that researchers alone would find hard to do — like when a student shows up to a CS activity, she can email the student’s parents to tell them the next steps to make sure that they continue the activity at home.

There was a second panel on “Finding the Sweet Spot: What Problems of Practice are Ripe for Knowledge Generation?” I didn’t know Shelley Pasnik from the Center for Children and Technology, and she had an idea I really liked that connected to one of Nichole’s points. Shelley emphasizes “2Gen learning,” having students bring with them parents or even grandparents so that there are two generations of learners involved. The older generation can learn alongside the student, and keep the student focused on the activity.

I know that the RPC folks are producing a report on their activity at the summit, so I’m sure we’ll be hearing more about their work.

December 2, 2016 at 7:00 am 2 comments
