Posts tagged ‘computing education’
The papers for ICER 2013 are available in the ACM Digital Library now at http://dl.acm.org/citation.cfm?id=2493394. I think that they remain free for a month (so, until September 12), so grab them quick.
ICER 2013 was a fabulous conference. I learned a lot, and am already using some of the ideas I gained there in my research and in my teaching. I can’t possibly summarize all the papers, so here’s my unofficial list of what struck me.
I was invited to be a discussant in the Doctoral Consortium, and that was an absolute thrill. The students were so bright and had so many interesting ideas. I’m eager to hear about many of the results. We also noted that we had participants from several major research universities this year (Stanford, MIT, Virginia Tech, University of Washington). For some, it was the first time that they’d ever sent someone to the ICER DC. Why? Andy Ko (U. Washington) said that it was because it’s been three years since CE21 funding started, and that’s enough time to have something for a doctoral student to want to show. Really shows the importance of having funding in an area.
One of the big ideas for me at ICER this year was the value of big data — what can you do with lots of data? Neil Brown showed that the Computing At Schools website is growing enormously fast, and he told us that the BlueJ Blackbox data are now available to researchers. Elena Glassman talked about how to use and visualize student activity to support finding different paths to a solution. Colleen Lewis presented with two of her undergraduate collaborators from Berkeley on data mining the AP CS exam answers.
My favorite example of the value of big data for CS Ed came from my favorite paper of the conference. Michael Lee and Andy Ko presented their research on how adding assessments into a programming video game increased persistence in the game. The graph below appears in their paper, but in the talk, Michael annotated it with what was being taught in the levels that led to drop-offs in participation. (Thanks to Michael for providing it to me.) The control and assessment groups split on lists. Variables were another big drop-off, as were objects and functions. Here is an empirical measurement of “how hard is that topic.” I’ve submitted my request to gain access to the Blackbox, because I’m starting to understand what questions we can ask with a bunch of anonymized data.
There were several papers that looked at student artifacts as a proxy for their understanding. I was concerned about that practice. As Scott Klemmer told us in his opening keynote, people program mostly today by grabbing stuff off the Web and copying it — sometimes, without understanding it. Can you really trust that students using some code means that they understand the idea behind that code?
Raymond Lister led a really great one-hour special session around the idea of “geek genes”: whether CS really does generate a bimodal distribution of grades, and whether the learning edge momentum theory describes our results. It was a great session because it played to ICER’s strengths (really intense discussion), and yet generated moments of enormous laughter. I came away thinking that there are no geek genes, we don’t have bimodal distributions, and the jury is still out on learning edge momentum.
Elizabeth Patitsas presented a nice paper comparing introducing algorithms serially (“Here’s algorithm A that solves that problem…and now here’s algorithm B…”) versus as compare-and-contrast (“Here are two algorithms that solve that problem…”). Compare-and-contrast is better, and the benefit for learning algorithms is even larger than the existing education literature suggests. I mentioned this result in class just yesterday. I’m teaching our TA preparation class, and a student who teaches algorithms asked me, “Am I responsible for my students’ learning?” I showed the students Elizabeth’s result, then asked, “If you know that teaching one way leads to more learning than another, aren’t you morally or ethically required to teach using the better method?”
Michelle Friend and Rob Cutler described a group of middle school girls figuring out a complicated algorithm problem (involving finding the maximum height at which an egg drop protection mechanism will work). They showed that, without scaffolding, the girls were able to come up with some fairly sophisticated algorithms and good analyses of the speed of their algorithms. We’re getting somewhere with our understanding of how school-age students learn CS.
And I totally admit that my impression of this ICER is influenced by my paper on Media Computation winning the Chair’s Paper Award. Michael Lee won the popular vote “John Henry Award.” (I voted for him, too.)
I’m skipping a lot: Mike Hewner presenting on his thesis, an interesting replication of the McCracken study, new ideas about PCK and threshold concepts. It was a great event, and I could write a half dozen posts about the ideas from the conference. Next year’s ICER is in Glasgow, 11-12 August. I am very much looking forward to it, and am working on my papers to submit already.
I previously answered the criticism leveled below — it really is the case that many people who aren’t professional programmers are going to need to learn to program as part of their other-than-software jobs. Why are programmers pushing back against people learning to code? (And there seems to be a lot of pushback going on, as this mashup suggests.) Is it a sense of “What I do is important, and if everyone can do it, it lessens the importance”? I don’t really think that they’re afraid for their jobs — it does take a lot of hours and effort to learn to code well.
The argument that it won’t “stick” (as suggested below) doesn’t work for me. Just because we don’t know now how to teach computer science to everyone doesn’t mean that we can’t learn how to teach computer science to everyone who needs it. Our lack of ability is not the same as the lack of need. We don’t teach everyone to read well and understand mathematics yet — does that mean we shouldn’t try?
But if you aren’t dreaming of becoming a programmer—and therefore planning to embark on a lengthy course of study, whether self-directed or formal—I can’t endorse learning to code. Yes, it is a creative endeavor. At its base, it’s problem-solving, and the rewards for exposing holes in your thinking and discovering elegant solutions are awesome. I really think that some programs are beautiful. But I don’t think that most who “learn to code” will end up learning anything that sticks. One common argument for promoting programming to novices is that technology’s unprecedented pervasiveness in our lives demands that we understand the nitty-gritty details. But the fact is that no matter how pervasive a technology is, we don’t need to understand how it works—our society divides its labor so that everyone can use things without going to the trouble of making them. To justify everyone learning about programming, you would need to show that most jobs will actually require this. But instead all I see are vague predictions that the growth in “IT jobs” means that we must either “program or be programmed” and that a few rich companies want more programmers—which is not terribly persuasive.
I saw the below exchange on Twitter, and thought it captured the argument well:
I’ve mentioned before how much we need schools of education to guarantee the future stability of computing education. The new CSTA report on certification makes the point better than I do.
I just wrote a Blog@CACM post explaining why we in CS need collaboration with schools of education. We don’t want to be in the business of certifying teachers. We certainly do not have the background to prepare teachers for a lifelong career in education. That’s what pre-service education faculty do.
How we get from here to there is an interesting question. Michelle Friend suggests that we start by finding (or hiring) faculty in science and mathematics education who are interested in starting computing programs. Few schools would be willing to take the risk of establishing computing education programs or departments today. They might exist one day, but they’ll probably grow out of math or science ed — just as many CS departments grew out of math, science, or engineering roots.
Given that (in the US) we lose close to 50% of our STEM teachers within the first five years of teaching, we have to establish reliable production of CS teachers, if we don’t want CS10K to be only CS5K five years later. To establish that reliable production, we need schools of education.
Posted to the SIGCSE-Members list — I really like this idea! Our work on DCCE showed that communities of teachers were an effective way of improving teachers’ sense of belonging and desire to improve. Will it work for faculty? ASEE is the organization to try!
This is a great opportunity for CS faculty to work with like-minded faculty from across the country to explore and share support for introducing new instructional practices into your classroom. Please consider this for yourself and pass it on to your colleagues.
Engineering education research has shown that many research-based instructional approaches improve student learning but these have not diffused widely. This is because (1) faculty members find it difficult to acquire the required knowledge and skills by themselves and (2) sustaining the on-going implementation efforts without continued encouragement and support is challenging. This project will explore ways to overcome both obstacles through virtual communities.
I couldn’t believe this when Mark Miller sent the below to me. “Maybe it’s true in aggregate, but I’m sure it’s not true at Georgia Tech.” I checked. And yes, it has *declined*. In 2003 (summing Fall/Winter/Spring), the College of Computing had 367 graduates. In 2012, we had 217. Enrollments are up, but completions are down.
What does this mean for the argument that we have a labor shortage in computer science, so we need to introduce computing earlier (in K-12) to get more people into computing? We have more people in computing (enrolled) today, and we’re producing fewer graduates. Maybe our real problem is the productivity at the college level?
I shared these data with Rick Adrion, and he pointed out that degree output necessarily lags enrollment by 4-6 years. Yes, 2012 is at a high for enrollment, but the students who graduated in 2012 came into school in 2008 or 2007, when we were still “flatlined.” We’ll have to watch to see if output rises over the next few years.
Computer-related degree output at U.S. universities and colleges flatlined from 2006 to 2009 and has steadily increased in the years since. But the fact remains: total degree production (associate’s and above) was lower by almost 14,000 degrees in 2012 than in 2003. The biggest overall decreases came in three programs: “computer science,” “computer and information sciences, general,” and “computer and information sciences and support services, other.”
This might reflect the surge in certifications and employer training programs, or the fact that some programmers can get jobs (or work independently) without a degree or formal training because their skills are in-demand.
Of the 15 metros with the most computer and IT degrees in 2012, 10 saw decreases from their 2003 totals. That includes New York City (a 52% drop), San Francisco (55%), Atlanta (33%), Miami (32%), and Los Angeles (31%).
The scientific community must also do the same, by convincing the public that it is worth spending tax dollars on research. Scientists: this isn’t someone else’s job – this is your job, starting immediately. If you personally hope to receive government research funds in the future, public engagement is now part of your job description. And if you and your colleagues don’t convincingly make the case to the public that your discipline should be funded, well then it won’t be. Without a public broadly supportive of funding science, it is all too easy for politicians looking for programs to cut to single out esoteric-sounding research programs as an excuse to further slash science funding.
Katrina Falkner has written up an excellent reflection (with gorgeous example student work) on her new MediaComp course at the University of Adelaide. I loved the artwork she shared, and I was particularly struck by the points she made about the value of “slowness” of the language, the challenges of helping students decontextualize programming after learning MediaComp, and the students complaining about using a curriculum “not invented here.”
The students didn’t really like working with Jython as it was very slow, but this had an unintended consequence, in that they became aware of the efficiency of their algorithms. I don’t think I have ever taught a first year course where students introduced efficiency as a discussion point on their own initiative. However, when working with their own images, which could sometimes be huge, they had to start thinking about whether there was a better way of solving their problems. I think this was a big win.
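The efficiency lesson Katrina’s students stumbled into comes straight from the shape of Media Computation code. Here is a minimal sketch in plain Python (not the actual JES/Jython library) of the kind of per-pixel loop students write; on a tiny image it is instant, but the same O(width × height) pass over a multi-megapixel photo is exactly what pushed her students to ask whether there was a better way.

```python
def negate(image):
    """Return the color-negative of an image stored as rows of (r, g, b) tuples.

    This touches every pixel once, so the cost grows with width * height --
    the property students notice when their own photos are huge.
    """
    result = []
    for row in image:                      # one pass per row
        new_row = []
        for (r, g, b) in row:              # one pass per pixel
            new_row.append((255 - r, 255 - g, 255 - b))
        result.append(new_row)
    return result

# A 1x2 "image": pure black and an orange pixel.
tiny = [[(0, 0, 0), (255, 128, 0)]]
print(negate(tiny))  # [[(255, 255, 255), (0, 127, 255)]]
```

The JES environment provides `getPixels`, `getRed`, and friends instead of raw tuples, but the algorithmic cost is the same.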
The NCAA has now revised their eligibility criteria, in favor of computer science: http://fs.ncaa.org/Docs/eligibility_center/CoreCourseInfo/Common_Core_Course_Questions/engage.html. The NCAA does an audit of an athlete’s high school classes, to decide if they really did complete a high school degree (e.g., rather than four years of gym all day every day). Computer science did not previously count. Under the new criteria, computer science can count if the state recognizes it as counting. This is a win, and as I understand it, it is due to the efforts of Hadi Partovi and Code.org.
I got to meet Cameron Fadjo at the CSTA Conference in July. He’s really excited about the project, with lots of energy. Google says that, if successful, they plan to move it into other areas of the country later.
Six of the fellows are recent STEM graduates. Google is heavily involved with STEM and has a number of national initiatives, including programs in Berkeley County and the surrounding areas.
In addition there are two education researchers: Project Lead Cameron Fadjo and Project Manager Kate Berrio.
“We have fellows from all around the region,” Fadjo said. “The next couple of weeks is introducing them to new things, training them to teach computer science and computer science pedagogy.”
“We envision these folks will be the next leaders in this area,” Berrio said. “We’re adding a leadership element to it. We want to make sure they are well-rounded when they go out into the world.”
I wrote a blog post recently, where I suggested that we in computing need to be careful that TEALS doesn’t end up diminishing demand for high school CS teachers. Kevin Wang, who runs TEALS, contacted me after that post and we had a useful phone conversation.
Kevin sees TEALS as primarily a professional development activity. TEALS provides IT professionals to teach computer science courses and to be teacher-assistants in those courses. TEALS goes into a school only if the school signs a contract with TEALS that (a) there is a teacher assigned to teaching computer science in that school, who will undertake professional development during the time that the course is being taught, and (b) that teacher will take over the course after the engagement with TEALS ends. The professional development is really just the teacher sitting in on the class with the students — no pedagogical development, no teaching methods, no community with other teachers. For most schools, it’s a many-volunteers-to-one-school ratio — a couple of teachers, and some teaching assistants. TEALS is now experimenting with volunteers who provide the teaching via video at a distance.
They don’t have a lot of data yet. TEALS doesn’t know yet how well the teachers learn by sitting in on the class alongside the students. They don’t know yet how well the teachers like doing professional development like this — I wonder if teachers find it demeaning to their professionalism to sit taking the class alongside the students, rather than working in groups of their peers. TEALS doesn’t yet know much about how well the schools succeed at teaching computer science after the professionals leave. They don’t know if students are learning overall (they have great results in some classes), or how the students are doing with IT professionals who have little preparation for teaching, or if the TEALS classes are better or worse than others at engaging women and under-represented minorities.
The quote below is from a blog post that I highly recommend reading. It’s by one of the TEALS volunteers, about his experience in teaching AP CS. The author, Dan Kasun, was a teaching assistant to an existing AP CS teacher. I don’t know how common that model is.
TEALS sounds like it’s trying to make computer science succeed for the long haul. Computing education reform can’t be about the students — or rather, it can’t be about the students here and now. It has to be about the long term. Yes, by providing a set of IT professionals to a school, one can help a class of 35 students to do remarkably well in AP CS. But if you develop a full-time CS teacher to be in multiple classes, and to improve over years, and to stay in that school for a decade or more (or even the five years that only half of STEM teachers last), you get to far more than 30 kids.
I want computer science to be in schools, long after TEALS runs out of volunteers. I believe that Kevin Wang wants that, too. I don’t know if TEALS is helping yet, but am interested to see what we learn from it.
I had the opportunity to support one of the local Loudoun County High Schools this year by volunteering to assist in AP Computer Science as part of the TEALS program (www.tealsk12.org). TEALS provides volunteers who can teach an entire computer science class for schools that do not have access to trained educators, and also provides teacher assistants (TAs) for schools that already have teachers, but would like additional support in their programs. Loudoun already had teachers, so I volunteered as a TA (which was fortunate, as my schedule wouldn’t have supported the responsibility of the full class).
Once upon a time, all computer scientists understood how floating point numbers were represented in binary. Numerical methods was an important part of every computing curriculum. I know few undergraduate programs that require numerical methods today.
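The kind of understanding I mean can be shown in three lines. This is my own illustration, not anything from a particular curriculum: because 0.1 has no exact binary representation, repeated addition quietly drifts, and the old numerical-methods habit was to compare with a tolerance rather than with `==`.

```python
# 0.1 cannot be represented exactly in binary floating point,
# so summing it ten times does not give exactly 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False
print(total)                     # 0.9999999999999999
print(abs(total - 1.0) < 1e-9)   # True: compare with a tolerance instead
```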
Results like those below make me think about what else we teach that will one day become passé, irrelevant, or automated. The second result is particularly striking. If descriptions from programming competitions can lead to automatic program generation, what does that imply about what we’re testing in programming competitions — and why?
The researchers’ recent papers demonstrate both approaches. In work presented in June at the annual Conference of the North American Chapter of the Association for Computational Linguistics, Barzilay and graduate student Nate Kushman used examples harvested from the Web to train a computer system to convert natural-language descriptions into so-called “regular expressions”: combinations of symbols that enable file searches that are far more flexible than the standard search functions available in desktop software.
In a paper being presented at the Association for Computational Linguistics’ annual conference in August, Barzilay and another of her graduate students, Tao Lei, team up with professor of electrical engineering and computer science Martin Rinard and his graduate student Fan Long to describe a system that automatically learned how to handle data stored in different file formats, based on specifications prepared for a popular programming competition.
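To make the first result concrete, here is a hypothetical example of the kind of translation such a system learns: a natural-language description paired with a regular expression. The description and pattern are my own illustration, not taken from the paper.

```python
import re

# Natural-language description (input to such a system, hypothetically):
description = "three-letter words starting with 'c'"

# A regular expression capturing that description:
# \b = word boundary, c = literal 'c', [a-z]{2} = exactly two more letters.
pattern = r"\bc[a-z]{2}\b"

text = "the cat and cow sat on the car near the castle"
print(re.findall(pattern, text))  # ['cat', 'cow', 'car']  (not 'castle')
```

Learning this mapping from web-harvested (description, regex) pairs, rather than hand-coding it, is what makes the result interesting.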
Barbara Ericson has generated her 2012 Advanced Placement Computer Science report. http://home.cc.gatech.edu/ice-gt/321 has all of her reports. http://home.cc.gatech.edu/ice-gt/548 has her more detailed analysis just of 2012. Since one of our concerns with GaComputes and ECEP is pass rates, not just numbers of test-takers, she dug deeper into pass rates. For a point of comparison, she looked up AP Calculus pass rates. What she found is somewhat surprising — the text below is quoted from her page.
Comparison of AP CS A to AP Calculus AB in 2012
- The number of students that take the exam per teacher is much higher for AP Calculus AB: 21 students per teacher versus 11 for Computer Science A
- The number of schools that teach Calculus is 11,694 versus 2,103
- AP CS A had a higher pass rate than Calculus: 63% versus 59%
- AP CS A had a higher female pass rate than Calculus: 56% versus 55%
- AP CS A had a higher Hispanic pass rate than Calculus: 39.8% versus 38.4%
- AP Calculus had a higher black pass rate than CS: 28.7% versus 27.3%
- Calculus had a much higher percentage of women take the exam than CS: 48.3% versus 18.7%
- Calculus had a higher percentage of black students take the exam than CS: 5.4% versus 4.0%
- Calculus had a higher percentage of Hispanic/Latino students take the exam than CS: 11.5% versus 7.7%
Stuart Wray has a remarkable blog that I recommend to CS teachers. He shares his innovations in teaching, and grounds them in his exploration of the literature on the psychology of programming. The quote and link below are an excellent example, where his explanation led me to a paper I’m eager to dive into. Stuart has built an interesting warm-up activity for his class that involves robots. What I’m most intrigued by is his explanation for why it works as it does. The paper that he cites by Jones and Burnett is not one that I’d seen before, but it explores an idea that I’ve been interested in for a while, ever since I discovered the Spatial Intelligence and Learning Center: Is spatial ability a prerequisite for learning in computer science? And if so, can we teach it explicitly to improve CS learning?
The game is quite fun and doesn’t take very long to play — usually around a quarter of an hour or less. It’s almost always quite close at the end, because of course it’s a race between the last robot in each team. There’s plenty of opportunity for delaying tactics and clever blocking moves near the exit by the team which is behind, provided they don’t just individually run for the exit as fast as possible.
But turning back to the idea from James Randi, how does this game work? It seems from my experience to be doing something useful, but how does it really work as an opening routine for a programming class? Perhaps first of all, I think it lets me give the impression to the students that the rest of the class might be fun. Lots of students don’t seem to like the idea of programming, so perhaps playing a team game like this at the start of the class surprises them into giving it a second chance.
I think also that there is an element of “sizing the audience up” — it’s a way to see how the students interact with one another, to see who is retiring and who is bold, who is methodical and who is careless. The people who like clever tricks in the game seem often to be the people who like clever tricks in programming. There is also some evidence that facility with mental rotation is correlated with programming ability. (See Spatial ability and learning to program by Sue Jones and Gary Burnett in Human Technology, vol.4(1), May 2008, pp.47-61.) To the extent that this is true, I might be getting a hint about who will have trouble with programming from seeing who has trouble making their robot turn the correct direction.
Talking to teachers from Texas at the CSTA Conference, I heard that the loan forgiveness program isn’t all that good. But the fact that Texas is listing CS as #2 on their “shortage” list is an indication that it’s something that they want more of.
The Texas Education Agency (TEA) has received approval from the US Department of Education (USDE) for the 2013-2014 teacher shortage areas. Please note the shortage areas have changed from previous years.
The approved shortage areas for the 2013-2014 school year are:
- Bilingual/English as a Second Language
- Computer Science
- Languages Other Than English (Foreign Language)
- Special Education
The approved shortage areas allow the administrator the ability to recruit and retain qualified teachers and to help reward teachers for their hard work using the loan forgiveness opportunities. School principals can act on behalf of the Commissioner of Education to certify that a teacher has met the minimum qualifications required for certain loan forgiveness programs.
I finished Nathan Ensmenger’s 2010 book “The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise” and wrote a Blog@CACM post inspired by it. In my Blog@CACM article, I considered what our goals are for an undergraduate CS degree and how we know if we got there. Ensmenger presents evidence that the mathematics requirements in undergraduate computer science are unnecessarily rigorous, and that computer science has never successfully become a profession. The former isn’t particularly convincing (there may be no supporting evidence that mathematics is necessary for computer programming, but that doesn’t mean it’s not useful or important), but the latter is well-supported. Computer programming has not become a profession like law, or medicine, or even like engineering. What’s more, Ensmenger argues, the efforts to professionalize computer programming may have played a role in driving away the women.
Ensmenger talks about software engineering as a way of making-do with the programmers we have available. The industry couldn’t figure out how to make good programmers, so software engineering was created to produce software with sub-par programmers:
Jack Little lamented the tendency of manufacturers to design languages “for use by some sub-human species in order to get around training and having good programmers.” When the Department of Defense proposed Ada as a solution to yet another outbreak of the software crisis, it was trumpeted as a means of “replacing the idiosyncratic ‘artistic’ ethos that has long governed software writing with a more efficient, cost-effective engineering mind-set.”
What is that “more efficient” mind-set? Ensmenger suggests that it’s for programmers to become factory line workers, nearly-mindlessly plugging in “reusable and interchangeable parts.”
The appeal of the software factory model might appear obvious to corporate managers; for skilled computer professionals, the idea of becoming a factory worker is understandably less desirable.
Ensmenger traces the history of software engineering as a process of dumbing down the task of programming, or rather, of separating the highest-ability programmers, who would analyze and design systems, from the low-ability programmers. Quotes from the book:
- They organized SDC along the lines of a “software factory” that relied less on skilled workers, and more on centralized planning and control…Programmers in the software factory were machine operators; they had to be trained, but only in the basic mechanisms of implementing someone else’s design.
- The CPT, although it was developed at the IBM Federal Systems Division, reflects an entirely different approach to programmer management oriented around the leadership of a single managerially minded superprogrammer.
- The DSL permits a chief programmer to exercise a wider span of control over the programming, resulting in fewer programmers doing the same job.
In the 1980s, even the superprogrammer was demoted.
A revised chief programmer team (RCPT) in which “the project leader is viewed as a leader rather than a ‘super-programmer.’” The RCPT approach was clearly intended to address a concern faced by many traditionally trained department-level managers—namely, that top executives had “abdicated their responsibility and let the ‘computer boys’ take over.”
The attempts to professionalize computer programming were a kind of response to early software engineering. The suggestion is that we programmers are as effective at handling projects as management is. But in the end, he provides evidence from multiple perspectives that the professionalization of computer programming has failed.
They were unable, for example, to develop two of the most defining characteristics of a profession: control over entry into the profession, and the adoption of a shared body of abstract occupational knowledge—a “hard core of mutual understanding”—common across the entire occupational community.
Ensmenger doesn’t actually talk about “education” as such very often, but it’s clearly the elephant in the room. That “control over entry into the profession” is about a CS degree not being a necessary condition for entering a computer programming career. That “adoption of a shared body of abstract occupational knowledge” is about a widely-adopted, shared, and consistent definition of curriculum. There are many definitions of “CS1” (look at the effort Allison Elliott Tew had to go through to define CS1 knowledge), and so many definitions of “CS2” as to make the term meaningless.
The eccentric, rude, asocial stereotype of the programmer dates back to those early days of computing. Ensmenger says hiring that followed that stereotype is the source of many of our problems in developing software. Instead of allowing that eccentricity, we should have hired programmers who created a profession that embraced the user’s problems.
Computer programmers in particular sat in the uncomfortable “interface between the world of ill-stated problems and the computers.” Design in a heterogeneous environment is difficult; design is as much a social and political process as it is technical[^1]; cultivating skilled designers requires a comprehensive and balanced approach to education, training, and career development.
The “software crisis” that led to the creation of software engineering was really about getting design wrong. He sees the industry as trying to solve the design problem by focusing on the production of the software, when the real “crisis” was a mismatch between the software being produced and the needs of the user. Rather than developing increasingly complicated processes for managing the production of software, we should have been focusing on better design processes that helped match the software to the user. Modern software engineering techniques are trying to make software better matched to the user (e.g., agile methods like Scrum, where the customer and the programming team work together closely in a rapid iterative development-and-feedback loop), as are disciplines like user-experience design.
I found Ensmenger’s tale to be fascinating, but his perspective as a labor historian is limiting. He focuses only on the “computer programmer,” and not the “computer scientist.” (Though he does have a fascinating piece about how the field got the name “computer science.”) Most of his history of computing seems to be a struggle between labor and management (including an interesting reference to Karl Marx). With a different lens, he might have considered (for example) the development of the additional disciplines of information systems, information technology, user experience design, human-centered design and engineering, and even modern software engineering. Do these disciplines produce professionals that are better suited for managing the heterogeneous design that Ensmenger describes? How does the development of “I-Schools” (Schools of Information or Informatics) change the story? In a real sense, the modern computing industry is responding to exactly the issues Ensmenger is identifying, though perhaps without seeing the issues as sharply as he describes them.
Even with the limitations, I recommend “The Computer Boys Take Over.” Ensmenger covers history of computing that I didn’t know about. He gave me some new perspectives on how to think about computing education today.
[^1]: Yes, both semi-colons are in the original.