Archive for June, 2013

Why AP CS:Principles is a good thing: Responding to Gas Station without Pumps

Kevin Karplus recently wrote a post (on his highly-recommended Gas Station without Pumps blog) about why funding the new AP CS:Principles (AP CS:P) is such a bad idea, mentioning my positive comments on the news. I actually agree with many of the Gas Station points, but I have a more optimistic take on them.

CS:P was never meant to give credit towards a computing degree. The attestation effort showed that many schools do offer some kind of course like what’s in CS:P. It’s true at UCSC, too:

My own campus has several intro programming courses, some at the level of the AP CSP course.  I suspect that our campus would offer credit in these low-level courses for the AP CSP exam. These lowest-level courses do not count towards any major, though—they provide elective credit for what should be high-school level courses.  The intent (as is apparently the intent for AP CSP) is to provide an extremely low barrier to entry into the field.

That’s really the main point. We need more CS education in high schools. When there’s only one AP CS teacher for every 12 high schools, there is very little computer science education out there. AP courses are a big lever for getting low-barrier courses out there.

Gas Station then points out that courses like these may not actually have much of an impact downstream.

I don’t know how well the low barrier to entry works, though.  I’ve not seen much evidence on our campus that the lowest level courses produce many students who continue to take higher level CS courses…We still have appallingly low numbers of women finishing in CS (and the new game-design major within CS is even more heavily male), so I can’t say that the lower-level intro courses have done much to address the gender imbalance.

That’s a fair point. We don’t know that it will work to get more students into computing. I just did a Blog@CACM post that suggests that the evidence we have is promising in terms of impact on careers, especially for under-represented minorities. You can’t really use a single campus to test the idea though. The game is at the level of thousands of high schools where there is no computer science at all.

I share the Gas Station concern over the professional development challenge.

The success of CSP also depends on thousands of high schools suddenly deciding to teach the course and getting training for their teachers to do this. I (along with many others) have grave doubts that the schools have the desire or the ability to do this. It is true that the CSP course should be a bit easier to train people for than the current AP CS A course (if only because Java syntax, the core of CS A, is so deadly dull).

The question we need answered is: how important is the “Advanced Placement” lever? Is the payoff so big that having a more accessible AP course in CS (and thus a lower cost to adopt) changes the balance for schools? I just had an all-day meeting with folks from the Georgia Department of Education two weeks ago, and they are building AP CS:P into their curriculum plans because it’s now AP. That designator matters. Does it matter enough to draw more teachers into professional development, and to get more schools to hire CS teachers? I’m optimistic, but I share the Gas Station concern.

We should also be clear that there really isn’t a single “CS:Principles” course yet. There have been several pilots, and some assessment questions tested, but there is no well-defined curriculum yet and no exemplar test. I have exactly the same question as Gas Station:

The new CSP exam is not supposed to be so language-dependent, which may allow for better pedagogy. Of course, I’m curious how the exam will be written to be language-independent, and whether it will be able to make any meaningful measurements of what the students have learned.

The plan is to use a portfolio approach, like what’s used in the AP studio art exams now. I really don’t know if it’ll work. I trust the people working on it, but I do see it as an unsolved problem.

I don’t share the Gas Station concern about “Gresham’s Law for pedagogy” (which I’d not heard of previously):

I suspect that the easier AP CSP will replace AP CS A at many high schools, and that CS A will disappear the way that CS AB did in May 2009 (Gresham’s Law for pedagogy: easier courses drive out harder ones).  Whether this is a good or bad outcome depends on how good the AP CSP course turns out to be.

The fact that CS:P-like courses already exist on many campuses, co-existing with CS1’s (intro CS for majors), is evidence that easier courses don’t always drive out harder ones. On our campus, we offer three CS1’s. The MediaComp course would probably be easier for Engineering students than the challenging MATLAB-based one that they currently require, but the Engineering faculty have not been eager to swap it out. The existence of “Physics for Poets” and of Calculus courses aimed at different kinds of students is more evidence that Gresham’s Law doesn’t always hold for classes.

There are lots of challenges facing CS:P. AP CS Level A is doing better these days, and I’m glad for that. I want both to succeed. I want a lot of CS in lots of high schools. Will the new AP CS:P lead to more CS majors and more people in computing careers? I don’t know — I think so, but I’m not really worried about it. I believe in “computing for everyone” and that lots of people (even non-IT professionals) need to know more about computer science, so having more access to computing education in more schools is a positive end-goal for me.

June 28, 2013 at 1:57 am 23 comments

Zydeco: Supporting Cross-Context Inquiry in Formal and Informal Settings

In a sense, what Chris Quintana is doing here is a connectivist MOOC, but one where the student is guided via software-realized scaffolding through a self-study on a topic of their own interest.  It’s an interesting idea, to help students organize a wide variety of learning opportunities in support of inquiry learning.

We aim to support cross-context inquiry that spans formal and informal settings by developing Zydeco Sci-To-Go, a system integrating mobile devices and cloud technologies for middle school science inquiry. Zydeco enables teachers and students to create science investigations by defining goals, questions, and “labels” to annotate, organize, and reflect on multimodal data (e.g., photos, videos, audio, text) that they collect in museums, parks, home, etc. As students collect this information, it is stored in the cloud so that students and teachers can access that annotated information later and use it with Zydeco tools to develop a scientific explanation addressing the question they are investigating.

via Zydeco | Mobile Devices for Cross-Context Inquiry in Formal and Informal Settings.

June 28, 2013 at 1:21 am Leave a comment

Even for Experts! What Makes Code Hard to Understand?

When I visited Indiana earlier this year, I got a chance to meet with Rob Goldstone, who told me about the fascinating results that Michael Hansen describes in the blog post linked below — that adding two blank lines to a Python program (which causes no change in execution) significantly changes how programmers understand the code. Are his participants getting confused because spacing matters horizontally in Python but not vertically?
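To make the manipulation concrete, here’s a minimal sketch of my own (not the actual stimulus from Hansen’s study): Python treats horizontal whitespace as meaningful, since indentation defines blocks, but it ignores vertical whitespace, so both layouts below execute identically.

```python
# Hypothetical illustration (not the program from the study):
# blank lines between statements change how code *reads*, not what it *does*.

def rect_area(width, height):
    return width * height

# Version 1: statements packed together.
total = rect_area(2, 3)
total = total + rect_area(4, 5)
print(total)  # 26

# Version 2: the same statements with blank lines inserted.
total = rect_area(2, 3)


total = total + rect_area(4, 5)


print(total)  # 26 -- vertical spacing has no effect on execution
```

Indentation, by contrast, does change meaning: shifting a statement left or right changes which block it belongs to (or produces a syntax error), which is exactly the horizontal/vertical asymmetry the question above is getting at.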

The other experiments that Michael describes, like the one I’m quoting below, are also amazing. Michael isn’t dealing with students — most of his participants are programmers with 2–10 years of experience and graduate degrees. How could they get this code so wrong, when the problem is the kind of thing we might give on a CS1 exam? Here’s one hypothesis: we really don’t know just how hard programming is, and both students and programmers understand it far less well than we expect.

Why did 50% of our participants get this program wrong? There is a strong expectation amongst programmers that you don’t include code that won’t be used. Elliot Soloway identified this and other maxims (or rules of discourse) in 1984. Like conversational norms, these unwritten rules can have a powerful influence on interpretation.

via What Makes Code Hard to Understand? | synesthesiam.
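The rule of discourse in the quote above can be illustrated with a small sketch (my own hypothetical example, not the study’s actual program): because readers expect every definition to be used, dead code invites misreading.

```python
# Hypothetical example of violating Soloway's "don't include code that
# won't be used" rule of discourse.

def double(x):
    return x * 2

def triple(x):
    # Never called -- but readers who assume all code matters
    # may trace through it anyway and misjudge the output.
    return x * 3

result = double(5)
print(result)  # prints 10; the unused triple() is a red herring
```

The program is trivially simple, yet an unused definition works against the reader’s conversational expectations, which is one plausible account of why even experienced programmers misread such code.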

June 27, 2013 at 1:31 am 7 comments

Learning for today versus learning for tomorrow: Teaching evaluations

Really interesting set of experiments that give us new insight into the value of teaching evaluations.  The second is particularly striking and points to the difficulty of measuring teaching quality — good today isn’t the same as good tomorrow.

When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But the more experienced and qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes). The authors speculate that the more experienced professors tend to “broaden the curriculum and produce students with a deeper understanding of the material” (p. 430). That is, because they don’t teach directly to the test, they do worse in the short run but better in the long run.

via Do the Best Professors Get the Worst Ratings? | Psychology Today.

June 26, 2013 at 1:20 am 1 comment

Disaggregating Asian-American educational attainment

Computer science is mostly white or Asian and male.  We have lots of data to support that.  What I didn’t realize was how sub-groups within Asian-American differ markedly in their educational attainment.  A new report from NYU and ETS disaggregates the data, and below is the startling graphic that Rick Adrion pointed me to.

[Figure: educational attainment disaggregated across Asian-American subgroups, from the NYU/ETS report]

June 25, 2013 at 1:06 am 32 comments

17th in the Top 100 Influential Education Blogs

I don’t know who Onalytica is or whether they do high-quality rankings, but I found the methodology interesting. This blog came in 17th among the top 100 most influential education blogs. What’s surprising is that it has one of the lowest “popularity” rankings in the top 20, but one of the highest “over-influence” ratios of influence to popularity. As Alfred Thompson suggested to me on Facebook, that points to a small community of CS Ed researchers and bloggers, a high percentage of whom read here. I appreciate that!

For a detailed explanation of the methodology we refer to our previous post. As before, we report the following metrics: Onalytica Influence Index, Popularity and Over-Influence.

Influence index is the impact factor of the blogs, similar to the impact factor of academic journals; Popularity measures how well-known a blog is among other education blogs and Over-Influence seeks to capture how influential a blog is compared to how popular it is.

The movements in the ranking have been caused by a change in the quantity and quality of citations that a blog has received. If a blog has gone up it means that it has been cited by more influential blogs lately and/or has received a higher number of citations. Moreover, there are new influential blogs that we have only recently started monitoring.

Change In Rank | Rank | Name | Influence | Popularity | Over-Influence
New Entry ★ | 17 | Computing Education Blog | 35.9 | 9.0 | 2.7

via What has changed in the Top 100 Influential Education Blogs ranking? | Onalytica Blog.

June 24, 2013 at 1:40 am 3 comments

NCTQ and US News Report on Teacher Prep: Making CS Teacher Prep Better

The National Council on Teacher Quality and US News and World Report have released a state-by-state report on teacher preparation — and it’s pretty dismal. I’ve copied some of the top “take-aways” below.

Important “take-aways”

  • In countries where students outperform the U.S., teacher prep schools recruit candidates from the top third of the college-going population. The Review found only one in four U.S. programs restricts admissions to even the top half of the college-going population.

  • A large majority of programs (71 percent) are not providing elementary teacher candidates with practical, research-based training in reading instruction methods that could reduce the current rate of reading failure (30 percent) to less than 10 percent of the student population.

  • Only 11 percent of elementary programs and 47 percent of secondary programs are providing adequate content preparation for teachers in the subjects they will teach.

via Teacher Prep: Findings.

There is some significant critique of the NCTQ study, particularly on its methodology. This is from Diane Ravitch’s blog:

NCTQ is not a professional association. It did not make site visits. It made its harsh judgments by reviewing course syllabi and catalogs. The criterion that it rated as most important was the institution’s fidelity to the Common Core standards.

As Rutgers’ Bruce Baker pointed out in his response, NCTQ boasts of its regard for teachers but its review of the nation’s teacher-training institutions says nothing about faculty. They don’t matter. They are irrelevant. All that matters is what is in the course catalog.

via That NCTQ Report on Teacher Education: F | Diane Ravitch’s blog.

I’d rather see the NCTQ study as pointing out problems for computing education programs to avoid. Given the results coming in from the UChicago Landscape study, I doubt that we’re doing much better in computer science. From a positive perspective, the best practices identified in the NCTQ report can inform what we do in computing education teacher professional development. As Jeanne Century said at SIGCSE this past year, one advantage we have is that we’re starting from pretty much a clean slate — there’s not much out there. We can try to build it right from the start.

June 24, 2013 at 1:18 am Leave a comment
