Posts tagged ‘computing education research’
This is part of Briana Morrison’s dissertation work. She’s asking what effect explaining programs in different modalities (e.g., visual vs. spoken text) has on understanding. If you know potential applicants (e.g., maybe advertise it to your whole class?), please forward this to them. We’d appreciate it!
Do you like to watch videos on the internet?
Want to help with a research study?
We need volunteers, age 18 and older, with no computer programming experience to help us determine the best way to explain code using videos.
No more than 2 hours of your time!
Completing a portion of the study allows you to enter a raffle for one of four
$50 Amazon Gift Cards
Completion of entire study allows you to enter a raffle for one
$100 Amazon Gift Card
Interested? Go to the following website:
The ITICSE’14 paper referenced below is getting discussed a good bit in the CS Education community. Is it really the case that enhancing error messages doesn’t help students?
Yes, if you do an ineffective job of enhancing the error messages. I’m disappointed that the paper doesn’t even consider the prior work on how to enhance error messages in a useful way — and more importantly, what has been established as a better process. To start, the SIGCSE’11 best paper presented an empirical process for analyzing the effectiveness of error messages and a rubric for understanding student problems with them — a paper that the ITICSE authors don’t even reference, let alone apply its rubric. That work and the work of Lewis Johnson in Proust point to the importance of bringing more knowledge to bear in creating useful error messages: by studying student intentionality, by figuring out what information they need to be successful. Andy Ko got it right when he said “Programming languages are the least usable, but most powerful human-computer interfaces ever invented.” We make them more usable by doing careful empirical work, not just tossing a bunch of data into a machine learning clustering algorithm.
I worry that titles like “Enhancing syntax error messages appears ineffectual” can stifle useful research. I already spoke to one researcher working on error messages who asked if new work is even useful, given this result. The result just comes from a bad job at enhancing error messages. Perhaps a better title would have been “An approach to enhancing syntax error messages that isn’t effective.”
Debugging is an important skill for novice programmers to acquire. Error messages help novices to locate and correct errors, but compiler messages are frequently inadequate. We have developed a system that provides enhanced error messages, including concrete examples that illustrate the kind of error that has occurred and how that kind of error could be corrected. We evaluate the effectiveness of the enhanced error messages with a controlled empirical study and find no significant effect.
An important new working paper from the ExploringCS group asks: if we achieve CS10K, how do we avoid having only CS5K left after five years? This is exactly the question that Lijun Ni was exploring in her dissertation on CS teacher identity.
Of the 81 teachers who have participated in the ECS program over the last
five years, 40 are currently teaching ECS in LAUSD. These numbers reveal that we
have “lost” more teachers than we have “retained.” Of the 40 teachers who are
currently teaching the ECS course, 5 of them had a 1-2 year interval in which they
did not teach the course. This means that fully 45 of the 81 teachers who have
participated in the ECS program have experienced a teaching “disruption” which has
ended their participation in the ECS teacher community for a year or longer.
In particular, they ask us to consider the dangers of short-term fixes to long-term problems, which is a point I was trying to make when arguing that we may be 100 years behind other STEM subjects in terms of making our discipline-based education available to all.
In response to scaling up challenges, we can expect a rise of “quick-fix”
solutions that have a potential to undercut progress. One quick-fix “solution” to
address CS teacher shortage or the need for deepened teacher content knowledge
are programs that bring industry professionals to assist teachers in CS classrooms.
While we are interested in learning more about the outcomes of these programs,
because there can be value in students hearing from experts in the field, there are
also risks to having industry professionals take on a teaching role in the classroom
without professional development in effective and relevant pedagogy and belief
systems and equitable practices. Will industry professionals deliver content
knowledge the way they were taught, not having had experience working with the
novice learner? Will they focus on working with the students who think more like
they do, to the neglect of the other students? In short, quick fixes like these may
inadvertently perpetuate the persistent divides in the field.
I add to their list of questions: Does bringing in IT professionals reduce the administrative pressure that pushes teachers out of CS? Does it help to create the context and environment that supports CS teachers?
I used this working paper in my post this month for Blog@CACM. Vint Cerf recently gave testimony in the Senate recommending a requirement for CS in all primary and secondary schools. The ECS experience (and Lijun Ni’s work) point toward the need to create a supportive environment for CS teaching if we want to achieve Vint’s recommendation.
Highly recommended read.
The below-linked article is highly recommended. It’s an insightful consideration of the different definitions of “University” we have in the US, and how the goals of helping students become educated for middle class jobs and of being a research university are not the same thing.
This article gave me new insight into the challenges of discipline-based education research, like computing education research. We really are doing research, as one would expect in a research university, e.g., trying to understand what it means for a human to understand computation and how to improve that understanding. But what we study is a kind of activity that occurs at that other kind of university. That puts us in a weird place, between the two definitions of the role of a university. It gives me new insight into the challenges I faced when I was the director of undergraduate studies in the College of Computing and when I was implementing Media Computation. Education research isn’t just thrown over the wall into implementation. The same challenges of technology adoption and, necessarily, technology adaptation have to occur.
At the “TIME Summit on Higher Education” that the Carnegie Corporation of New York and Time magazine co-sponsored in September 2013 along with the Bill & Melinda Gates Foundation and the William and Flora Hewlett Foundation, the disconnect between the views of the research university from inside and outside was vividly on display. A procession of distinguished leaders of higher education mainly emphasized the need to protect—in particular, to finance adequately—the university’s research mission. A procession of equally distinguished outsiders, including the U.S. secretary of education, mainly emphasized the need to make higher education more cost-effective for its students and their families, which almost inevitably entails twisting the dial away from research and toward the emphasis on skills instruction that characterizes the mass higher-education model. Time’s own cover story that followed from the conference hardly mentioned research; it was mainly about how much economically useful material students are learning, even though the research university was explicitly the main focus of the conference.
Guest Post from Shriram Krishnamurthi: Growing respect for Research around Computational Learning and Thinking
Shriram and I had an email correspondence around the blog posts about renaming the field and gaining respect for the study of how people learn and think about computation. He suggested a path forward that was about re-connecting to the fields that the CSEd community broke away from. I invited him to prepare a guest post that conveyed these ideas. Thanks to him for this!
Let me suggest you are probably trying to achieve two very different things here.
1. Create an actual community. There is real value to having all the interesting people from one area in one room. (This is why, despite the trouble it is to get there and back, I almost never say no to a Dagstuhl invitation.)
2. Have your students publish in venues such that, when they go out onto the job market, research universities such as yours (Georgia Tech) and mine (Brown) will notice and respect them, interview them, and make them tenure-track offers so they can have students of their own.
Unfortunately, I believe that right now these are fundamentally conflicting goals. SIGCSE, ITiCSE, and ICER address the former but not really the latter.
One, unrealistic, option is for 1 and 2 to merge. For this to happen, these venues need to become a whole lot better. I hear great things about the structure of ICER, but some of the papers are great while others are at best so-so. Changing the other two is harder than turning around an aircraft carrier. It may be possible to make ICER a stronger conference, but one small conference cannot really a whole area make. Plus, you still need to convince people to pay attention to it.
The only other option I see is to do both. Attend whatever conferences you need to form a community. But get your students to publish at really good venues outside the area. That way, they can write a gung-ho application: “Look, I’m perfectly capable of holding my own in the open competition of conferences you respect”. People like Andy Ko—who published his work in conferences like ICSE—or my colleague Jeff Huang—an HCI person with strong publications in information retrieval—are exemplars whose technical chops can’t be questioned.
In other words, this is a long response that could be abbreviated to “Yes, you should grow CSEd by sending it to more respected venues”, but I’m also showing you some of my thinking (because a good teacher grades the work, not just the answer!). A student who publishes a few papers in some conference already recognized as respectable technical CS is going to stand a far better chance. Once a dozen of those populate good departments and start producing students of their own, you’ve pretty much gotten over any prejudice and can then reset your standards. (Though I would still say it’s unhealthy to drop ties to these other areas and retreat into a CSEd shell.)
Which conferences to target, of course, depends on the student. For students doing HCI work, it might be SIGCHI; for those doing software engineering, it might be ICSE; for machine learning, ICML; for information retrieval, SIGIR; and so on. One good bit of advice to a young CSEd PhD student might be, “Find another area of CS in which you can demonstrate enough depth to publish papers in its good conferences and be able to hold your own in conversations with an expert in that field”.
Here are three other things to consider.
1. Being able to hold one’s own in another field creates natural allies in a department. A non-CSEd faculty member who realizes there will not be hires in their own area is likely to become an advocate for a CSEd candidate who has at least some presence in their area.
2. I feel the CSEd community has let itself be put into the “liberal arts ghetto” or, at the research university level, “instructor ghetto”. The leaders of “research” are tenured professors, but the leaders of “education” are Instructors, Professors of Practice, and so forth. This is a self-perpetuating cycle. For instance, who is the CSEd applicant going to get letters from? Getting a letter from an Instructor naturally makes the tenured faculty think, “Hmm, why should we take this person seriously?”
3. Finally, candidates need to be able to demonstrate a growth path. When I look at a candidate we’ve decided to interview, I’m only so interested in what they did before: their past achievements got them their interview, so now I’m interested in what lies ahead. I care to see what kind of agenda they have mapped out—is it interesting, is it hard, could someone else do it, etc.—and what skills they bring to the table (can they do it, and can they do it better than others).
I imagine this step is hard for some CSEd candidates. If you got a PhD studying some population, it may or may not be interesting to keep studying that population or to study the next such population or whatever. At the very least, then, if you intervened, showed an N% improvement, and have good plans to get to much more, and then show a path to bigger and more interesting problems, now you’ve got my interest. Put differently, think in terms of active interventions that demonstrate impact. Now you become comparable to students who are building or verifying software, deriving inferences from datasets, and so on. I don’t know whether CSEd students are getting advice in terms of presenting themselves this way.
There are lots of claims about the benefits of introducing computing early. This article in the NYTimes (even just the quote below) considers several of them:
- Important for individual students’ future career prospects. It seems unlikely that elementary school CS would lead to better career prospects.
- Influence countries’ economic competitiveness. There might be a stronger argument here. Elementary school is about general literacy. There is likely an economic cost to computing illiteracy.
- Technology industry’s ability to find qualified workers. By putting computing into elementary school? Does industry want to hire kids who know Scratch and Alice? As Elliot suggested, it’s mostly a video game to young kids.
- “Exposing students to coding from an early age helps to demystify an area that can be intimidating.” I strongly agree with that one. We know that kids have weird ideas about CS, and seeing any real CS has a dramatic impact (especially on under-represented groups).
- “Breaks down stereotypes of computer scientists as boring geeks.” Maybe. Not all exposure to real computing leads to breaking down stereotypes. Sometimes they’re enhanced. I think this can happen, but we have to be careful to make it work.
- “Programming is highly creative.” True.
- “Studying it can help to develop problem-solving abilities.” False.
- “Equip students for a world transformed by technology.” Maybe. Does teaching kids about technology when they’re 8 prepare them for entering the workforce 10 years later? If computing literacy matters, that’s true. But I don’t believe that playing with Blockly in 3rd grade “equips” you with much. Most technology doesn’t look like Blockly.
We do have to make our message clear, and it should be a message that’s supported by research. If the computing education policy-and-PR machine ignores the research, we show more disrespect for the field of computing education research and make it even harder to establish reforms.
Around the world, students from elementary school to the Ph.D. level are increasingly getting acquainted with the basics of coding, as computer programming is also known. From Singapore to Tallinn, governments, educators and advocates from the tech industry argue that it has become crucial to hold at least a basic understanding of how the devices that play such a large role in modern life actually work.
Such knowledge, the advocates say, is important not only to individual students’ future career prospects, but also for their countries’ economic competitiveness and the technology industry’s ability to find qualified workers.
Exposing students to coding from an early age helps to demystify an area that can be intimidating. It also breaks down stereotypes of computer scientists as boring geeks, supporters argue. Plus, they say, programming is highly creative: Studying it can help to develop problem-solving abilities, as well as equip students for a world transformed by technology.
Here at Georgia Tech, this week is Finals Week. From here on out is summer.
I’m going to cut back on my blogging for summer. I have a lot of writing to do.
(1) I need to have a complete draft of our CS Principles ebook for testing with teachers by the end of May.
(2) I’m giving a workshop at the NCWIT Summit (also in May) on how to launch state-wide computing education reform.
(3) My biggest project of the summer: I need to turn this:
into the 4th edition of the Python MediaComp book.
(4) I’m teaching at a summer school in Tarragona, Spain in July on teaching computer science and computing education research. I’d like to produce some lecture notes before that.
This Fall will still be a lot of writing. I’ve promised to produce a book on computing education research from a learning sciences perspective for Jack Carroll’s Synthesis Lectures on Human-Centered Informatics.
Blog post writing isn’t all that time consuming. The real time costs are (a) spare writing time goes to the blog rather than to larger, long-term projects and (b) I read all comments (here and on Twitter and Facebook) and think about whether to respond and how. So please excuse fewer posts while I focus on directing my writing energies toward these bigger projects.
Sally Fincher and I are organizing this year’s Doctoral Consortium for students working in computing education. Do come join us in Glasgow!
ICER DC Call for Proposals
The ICER 2014 Doctoral Consortium provides an opportunity for doctoral students to explore and develop their research interests in a workshop environment with a panel of established researchers. We invite students to apply for this opportunity to share their work with students in a similar situation as well as senior researchers in the field. We welcome submissions from students at any stage of their doctoral studies.
Sally Fincher, University of Kent at Canterbury
Mark Guzdial, Georgia Institute of Technology
Contact us at: firstname.lastname@example.org
What is the Doctoral Consortium?
The DC has the following objectives:
- Provide a supportive setting for feedback on students’ research and research direction
- Offer each student comments and fresh perspectives on their work from researchers and students outside their own institution
- Promote the development of a supportive community of scholars
- Support a new generation of researchers with information and advice on research and academic career paths
- Contribute to the conference goals through interaction with other researchers and conference events
The DC will be held on Sunday, August 10, 2014. Students at any stage of their doctoral studies are welcome to apply and attend. The number of participants is limited to 12. Applicants who are selected will receive a limited partial reimbursement of travel, accommodation, and subsistence (i.e., food) expenses of $600 (USD).
Preparing and Submitting your Consortium Proposal Extended Abstract
Candidates should prepare a 2-page research description covering central aspects of their PhD work, following the structure, details, and format specified in the ICER Doctoral Consortium submission template: Word (http://icer.hosting.acm.org/wp-content/uploads/2013/05/ICER2013-dc-template.doc) / LaTeX (http://icer.hosting.acm.org/wp-content/uploads/2013/05/ICER2013_dc_template.zip).
Key points include:
- Your situation, i.e., the university doctoral program context in which your work is being conducted.
- Context and motivation that drives your dissertation research
- Miniature Background/literature review of key works that frames your research
- Hypothesis/thesis and/or problem statement
- Research objectives/goals
- Your research approach and methods, including relevant rationale
- Results to date and your argument for their validity
- Current and expected contributions
Appendix 1. A letter of nomination from your primary dissertation advisor, that supports your participation in the DC, explains how your work connects with the ICER community, and describes the expected timeline for your completion of your doctorate.
Appendix 2. Your concise current Curriculum Vita (1–2 pages)
Once you have assembled – and tested – the PDF file, the entire submission file should be emailed to email@example.com no later than 17:00 PDT on 21 May 2014. When submitting the applications, please put “ICER DC 2014 – <Last Name>” in the Subject line.
Friday 21st May – initial submission
Monday 2nd June – notification of acceptance
Monday 16th June – camera ready copy due
Doctoral Consortium Review Process
The review and decision of acceptance will balance many factors. These include the quality of your proposal and where you are within your doctoral education program. They also include external factors, so that the group of accepted candidates exhibits a diversity of backgrounds and topics. Your institution will also be taken into account; we are unlikely to accept more than two students from the same institution. Confidentiality of submissions is maintained during the review process. All rejected submissions will be kept confidential in perpetuity.
Upon Acceptance of your Doctoral Consortium Proposal
Authors will be notified of acceptance or rejection on 2 June 2014, or shortly after.
Authors of accepted submissions will receive instructions on how to submit publication-ready copy (this will consist of your extended abstract only), and will receive information about attending the Doctoral Consortium, about preparing your presentation and poster, about how to register for the conference, travel arrangements and reimbursement details. Registration benefits are contingent on attending the Doctoral Consortium.
Please note that submissions will not be published without a signed form releasing publishing copyright to the ACM. Attaining permissions to use video, audio, or pictures of identifiable people or proprietary content rests with the author, not the ACM or the ICER conference.
Before the Conference
Since the goals of the Doctoral Consortium include building scholarship and community, participants will be expected to read all of their colleagues’ Extended Abstracts prior to the beginning of the consortium, with a goal of preparing careful and thoughtful critique. Although many fine pieces of work may have to be rejected due to lack of space, being accepted into the Consortium involves a commitment to giving and receiving thoughtful commentary.
At the Conference
All participants are expected to attend all portions of the Doctoral Consortium. We will also be arranging an informal Welcome Dinner for participants and discussants on Saturday, August 9, 2014, before the consortium begins. Please make your travel plans to join us that evening to get acquainted.
Within the DC, each student will present his or her work to the group with substantial time allowed for discussion and questions by participating researchers and other students. Students will also present a poster of their work at the main conference. In addition to the conference poster, each student should bring a “one-pager” describing their research (perhaps a small version of the poster using letter or A4 paper) for sharing with faculty mentors and other students.
After the Conference
Accepted Doctoral Consortium abstracts will be distributed in the ACM Digital Library, where they will remain accessible to thousands of researchers and practitioners worldwide.
AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be one week prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.
Important article that gets at some of my concerns about using MOOCs to inform education research. The sampling bias mentioned in the article below is one of my responses to the claim that we can inform education research by analyzing the results of MOOCs. We can only learn from the data of participants. If 90% of the students go away, we can’t learn about them. Making claims about computing education based on the 10% who complete a CS MOOC (and mostly white/Asian, male, wealthy, and well-educated at that) is bad science.
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”
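The sampling-bias concern above can be made concrete with a small simulation. This is a minimal sketch with invented numbers, not data from any real MOOC: if the probability of completing a course rises with a student’s prior preparation, then any statistic computed over completers alone describes a systematically better-prepared group than the enrolled population.

```python
import random

random.seed(42)

# Hypothetical enrollees: a "preparation" score in [0, 1], uniform.
enrollees = [random.random() for _ in range(10_000)]

# Assume completion probability grows with preparation (0.2 * p^2),
# yielding a completion rate in the single digits, as in many MOOC reports.
completers = [p for p in enrollees if random.random() < 0.2 * p ** 2]

def mean(xs):
    return sum(xs) / len(xs)

print(f"enrolled mean preparation:  {mean(enrollees):.2f}")
print(f"completer mean preparation: {mean(completers):.2f}")
# The completer mean is well above the enrolled mean: conclusions drawn
# from completers alone cannot be generalized to everyone who enrolled.
```

The exact completion model is an assumption made up for illustration; the qualitative point holds for any model where dropout is correlated with the outcome being studied.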
Last month, Steve Cooper organized a remarkable workshop at Stanford on the Future of Computing Education Research. The question was, “How do we grow computing education research in the United States?” We pretty quickly agreed that we have a labor shortage — there are too few people doing computing education research in the US. We need more. In particular, we need more CS Ed PhD students. The PhD students do the new and exciting research. They bring energy and enthusiasm into a field.
We also need these students to fit into Computing departments, where that could be Computer Science, or Informatics, or Information Systems/Technology/Just-Information Departments/Schools/Colleges. Yes, we need a presence in Education Schools at some point, to influence how we develop new teachers, but that’s not how we’ll best push the research.
How do we get there?
Roy Pea came to the event. He could only spare a few hours for us, and he only gave a brief 10 minute talk, but it was one of the highlights of the two days for me. He encouraged us to think about Learning Sciences as a model. Learning Science grew out of cognitive science and computer science. It’s a field that CS folks recognize and value. It’s not the same as Education, and that’s a positive thing for our identity. He told us that the field must grow within Computing departments because Domain Matters. The representations, the practices, the abstractions, the mental models — they all differ between domains. If we want to understand the learning of computing, we have to study it from within computing.
I asked Roy, “But how do we influence teacher education? I don’t see learning science classes in most pre-service teacher development programs.” He pointed out that I was thinking about it all wrong. (Not his words — he was more polite than that.) He described how learning sciences has influenced teacher development, integrated into it. It’s not about a separate course: “Learning science for teachers.” It’s about changing the perspective in the existing classes.
Ken Hay, a learning scientist (and long-time friend and colleague) who is at Indiana University, echoed Roy’s recommendation to draw on the learning sciences as a model. He pointed out that Language Matters. He said that when Indiana tried to hire a “CS Education Researcher,” faculty in the CS department said, “I teach CS. I’m a CS Educator. How is s/he different than me?”
We started talking about how “Computer Science Education Research” is a dead-end name for the research that we want to situate in computing departments. It’s the right name for the umbrella set of issues and challenges with growing computing education in the United States. It includes issues like teacher professional development and K-12 curricula. But that’s not what’s going to succeed in computing departments. It’s the part that looks like the learning sciences that can find a home in computing departments. Susanne Hambrusch of Purdue offered a thought experiment that brought it home for me. Imagine that there is a CS department that has CS Ed Research as a research area. They want to list it on their Research web page. Well, drop the word “Research” — this is the Research web page, so that’s a given. And drop the “CS” because this is the CS department, after all. So all you list is “Education.” That conveys a set of meanings that don’t necessarily belong in a CS department and don’t obviously connect to our research questions.
In particular, we want to separate (a) the research about how people learn and practice computing from (b) making teaching and learning occur better in a computing department. (a) can lead to (b), but you don’t want to demand that all (a) inform (b). We need to make the research on learning and practice in computing be a value for computing departments, a differentiator. “We’re not just a CS department. We embrace the human side and engage in social and learning science research.” Lots of schools offer outreach, and some are getting involved in professional development. But to do those things informed by learning sciences and informing learning sciences (e.g., can get published in ICER and ICLS and JLS and AERA) — that’s what we want to encourage and promote.
I was in a breakout that tried to generate names. Michael Horn of Northwestern came up with several of my favorites. Unfortunately, none of them were particularly catchy:
- Learning Sciences of Computing
- Learning Sciences for Computing
- Computational Learning and Practice (sounds too much like machine learning)
- Learning Sciences in Computing Contexts
- Learning and Practice in Computing
- Computational Learning and Literacy
We do have a name for a journal picked out that I really like: Journal of Computational Thinking and Learning.
I’d appreciate your thoughts on these. What would be a good name for the field which studies how people learn computing, how to improve that learning, how professionals practice computing (e.g., end-user programming, computational science & engineering), and how to help novices join those professional communities of practice?
I can’t remember the last time I learned so much and had my preconceived notions so challenged in just two days. I have a lot more notes on the workshop, and they may make it into some future blog posts. Kudos to Steve for organizing an excellent workshop, and my thanks to all the participants!
The report on the CCC’s workshop on MOOCs and other online education technologies is now out.
In February 2013 the Computing Community Consortium (CCC) sponsored the Workshop on Multidisciplinary Research for Online Education (MROE). This visioning activity explored the research opportunities at the intersection of the learning sciences, and the many areas of computing, to include human-computer interactions, social computing, artificial intelligence, machine learning, and modeling and simulation.
The workshop was motivated and informed by high profile activities in massive, open, online education (MOOE). Point values of “massive” and “open” are extreme values that make explicit, in ways not fully appreciated previously, variability along multiple dimensions of scale and openness.
The report for MROE has been recently completed and is online. It summarizes the workshop activities and format, and synthesizes across these activities, elaborating on 4 recurring themes:
- Next Generation MOOCs and Beyond MOOCs
- Evolving Roles and Support for Instructors
- Characteristics of Online and Physical Modalities
- Physical and Virtual Community
Andy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented” which he explained in a blog post. It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”
How would we make them better? I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, "Know thy users, for they are not you."
We make programming languages today driven by theory — we aim to provide access to Turing/Von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on. Usability is one of the goals, but typically only in a theoretical sense. Quorum is the only programming language I know of that tested usability as part of the design process.
But what if we took Andy Ko’s argument seriously? What if we designed programming languages the way we design good user interfaces — working with specific users on their tasks? A language’s value would become more obvious, and it would be more easily adopted by a community. The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.
What would our design process be? How much more usable and learnable could our programming languages become? How much easier would computing education be if the languages were more usable and learnable? I’d love it if programming language designers could put me out of a job.
Great to see Dan Garcia and his class getting this kind of press! I’m not sure I buy the argument that SFGate is making, though. Do female students at Berkeley find out about this terrific class and then decide to take it? Or are they deciding to take some CS and end up in this class? Based on Mike Hewner’s work, I don’t think that students know much about the content of even great classes like Dan’s before they get there.
It is a predictable college scene, but this Berkeley computer science class is at the vanguard of a tech world shift. The class has 106 women and 104 men.
The gender flip first occurred last spring. It was the first time since at least 1993 – as far back as university enrollment records are digitized – that more women enrolled in an introductory computer science course. It was likely the first time ever.
It’s a small but a significant benchmark. Male computer science majors still far outnumber female, but Prof. Dan Garcia’s class is a sign that efforts to attract more women to a field where they have always been vastly underrepresented are working.
“We are starting to see a shift,” said Telle Whitney, president of the Anita Borg Institute for Women and Technology.
Mihaela Sabin at University of New Hampshire Manchester took Barb’s AP analysis, and produced a version specific to New Hampshire. Quite interesting — would be great to see other states do this!
77% of exam takers passed the test, which is near the upper end of the 43%–83% range reported across all states.
Only twelve girls took the AP CS exam, representing 11.88% of all AP CS exam takers. This percentage of girls taking the exam is 4 times smaller than female representation in the state and nation.
Half of the girls who took the exam passed. 82% of the boys who took the exam passed.
One Hispanic and two Black students took the AP CS exam. The College Board requires that a minimum of five students from a gender, racial, and ethnic group take the test in order to have their passing scores recorded.
2012 NH census data report that Blacks represent 1.4% of the state population and Hispanics 3%. With two Black students taking the test in 2013, their share of 1.98% of all AP CS exam takers is 1.4 times higher than the percentage of the Black population in NH. However, Hispanic participation in the AP CS exam, at 0.99%, is 3 times lower than their 3% representation in the state.
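The ratios above follow directly from the reported counts. A quick sketch of the arithmetic (assuming the implied total of 101 exam takers, inferred from 2 students corresponding to 1.98%):

```python
# Verify the NH AP CS participation ratios from the reported counts.
# Total of 101 exam takers is inferred: 2 Black students = 1.98% of takers.
total_takers = 101
girls, black, hispanic = 12, 2, 1

girls_pct = girls / total_takers * 100        # share of takers who were girls
black_pct = black / total_takers * 100        # share who were Black
hispanic_pct = hispanic / total_takers * 100  # share who were Hispanic

# 2012 NH census figures cited in the post.
black_state_pct = 1.4
hispanic_state_pct = 3.0

print(round(girls_pct, 2))                          # 11.88
print(round(black_pct / black_state_pct, 1))        # 1.4 (overrepresented)
print(round(hispanic_state_pct / hispanic_pct, 1))  # 3.0 (underrepresented)
```

With these counts, the reported 11.88%, the 1.4x Black overrepresentation, and the 3x Hispanic underrepresentation all check out.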
Are we getting better at handling abstraction? – Radiolab podcast on Killing Babies, Saving the World
I’m a fan of Radiolab podcasts. The one referenced below talks about the Flynn effect. Comparison of various tests of IQ over decades suggest that we’ve been getting smarter over the last 100 years. Josh Greene argues that we (as humans in the developing world) may be developing greater ability to handle abstract thinking. Abstraction isn’t everything in computer science (as Bennedsen and Caspersen showed us in 2008), but it is important. Could our problems with computing education resolve over time, because we’re all getting better at abstraction? Might it become easier to teach computer science in future decades, as we develop better cognitive abilities? Given that performance on the Rainfall Problem has not improved over the last thirty years, I doubt it, but it’s an intriguing hypothesis.
Robert talks to Josh Greene, the Harvard professor we had on our Morality show. They revisit some ideas from that show in the context of the big, complicated problems of today (think global warming and nuclear war). Josh argues that to deal with those problems, we’re going to have to learn how to make better use of that tiny part of our brain that handles abstract thinking. Not a simple proposition, but, despite the odds, Josh has hope.