Posts tagged ‘computing education research’
Sally Fincher and I are organizing this year’s Doctoral Consortium for students working in computing education. Do come join us in Glasgow!
ICER DC Call for Proposals
The ICER 2014 Doctoral Consortium provides an opportunity for doctoral students to explore and develop their research interests in a workshop environment with a panel of established researchers. We invite students to apply for this opportunity to share their work with students in a similar situation as well as senior researchers in the field. We welcome submissions from students at any stage of their doctoral studies.
Sally Fincher, University of Kent at Canterbury
Mark Guzdial, Georgia Institute of Technology
Contact us at: firstname.lastname@example.org
What is the Doctoral Consortium?
The DC has the following objectives:
- Provide a supportive setting for feedback on students’ research and research direction
- Offer each student comments and fresh perspectives on their work from researchers and students outside their own institution
- Promote the development of a supportive community of scholars
- Support a new generation of researchers with information and advice on research and academic career paths
- Contribute to the conference goals through interaction with other researchers and conference events
The DC will be held on Sunday, August 10, 2014. Students at any stage of their doctoral studies are welcome to apply and attend. The number of participants is limited to 12. Applicants who are selected will receive a limited partial reimbursement of travel, accommodation, and subsistence (i.e., food) expenses of $600 (USD).
Preparing and Submitting your Consortium Proposal Extended Abstract
Candidates should prepare a 2-page research description covering the central aspects of their PhD work, following the structure, details, and format specified in the ICER Doctoral Consortium submission template: Word<http://icer.hosting.acm.org/wp-content/uploads/2013/05/ICER2013-dc-template.doc> / LaTeX<http://icer.hosting.acm.org/wp-content/uploads/2013/05/ICER2013_dc_template.zip>.
Key points include:
- Your situation, i.e., the university doctoral program context in which your work is being conducted.
- Context and motivation that drives your dissertation research
- Miniature Background/literature review of key works that frames your research
- Hypothesis/thesis and/or problem statement
- Research objectives/goals
- Your research approach and methods, including relevant rationale
- Results to date and your argument for their validity
- Current and expected contributions
Appendix 1. A letter of nomination from your primary dissertation advisor that supports your participation in the DC, explains how your work connects with the ICER community, and describes the expected timeline for the completion of your doctorate.
Appendix 2. Your concise, current Curriculum Vitae (1–2 pages)
Once you have assembled and tested the PDF file, email the entire submission to email@example.com no later than 17:00 PDT on 21 May 2014. When submitting your application, please put “ICER DC 2014 – <Last Name>” in the subject line.
Wednesday 21st May – initial submission
Monday 2nd June – notification of acceptance
Monday 16th June – camera ready copy due
Doctoral Consortium Review Process
The review and acceptance decision will balance many factors, including the quality of your proposal and where you are within your doctoral program. It also includes external factors, so that the group of accepted candidates exhibits a diversity of backgrounds and topics. Your institution will also be taken into account: we are unlikely to accept more than two students from the same institution. Confidentiality of submissions is maintained during the review process, and all rejected submissions will be kept confidential in perpetuity.
Upon Acceptance of your Doctoral Consortium Proposal
Authors will be notified of acceptance or rejection on 2 June 2014, or shortly after.
Authors of accepted submissions will receive instructions on how to submit publication-ready copy (this will consist of your extended abstract only), and will receive information about attending the Doctoral Consortium, about preparing your presentation and poster, about how to register for the conference, travel arrangements and reimbursement details. Registration benefits are contingent on attending the Doctoral Consortium.
Please note that submissions will not be published without a signed form releasing publishing copyright to the ACM. Responsibility for obtaining permission to use video, audio, or pictures of identifiable people or proprietary content rests with the author, not the ACM or the ICER conference.
Before the Conference
Since the goals of the Doctoral Consortium include building scholarship and community, participants will be expected to read all of their colleagues’ Extended Abstracts before the consortium begins, with the goal of preparing careful and thoughtful critique. Although many fine pieces of work may have to be rejected for lack of space, being accepted into the Consortium involves a commitment to giving and receiving thoughtful commentary.
At the Conference
All participants are expected to attend all portions of the Doctoral Consortium. We will also be arranging an informal Welcome Dinner for participants and discussants on Saturday, August 9, 2014, before the consortium begins. Please make your travel plans so you can join us that evening to get acquainted.
Within the DC, each student will present his or her work to the group with substantial time allowed for discussion and questions by participating researchers and other students. Students will also present a poster of their work at the main conference. In addition to the conference poster, each student should bring a “one-pager” describing their research (perhaps a small version of the poster using letter or A4 paper) for sharing with faculty mentors and other students.
After the Conference
Accepted Doctoral Consortium abstracts will be distributed in the ACM Digital Library, where they will remain accessible to thousands of researchers and practitioners worldwide.
AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be one week prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.
Important article that gets at some of my concerns about using MOOCs to inform education research. The sampling bias mentioned in the article below is one of my responses to the claim that we can inform education research by analyzing the results of MOOCs. We can only learn from the data of participants. If 90% of the students go away, we can’t learn about them. Making claims about computing education based on the 10% who complete a CS MOOC (and mostly white/Asian, male, wealthy, and well-educated at that) is bad science.
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge University, they can be “complete bollocks. Absolute nonsense.”
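The sampling concern is easy to demonstrate with a small simulation (all numbers below are invented for illustration, not drawn from any actual MOOC): if completion correlates with preparation, then statistics computed over completers alone can badly misrepresent the full enrollment.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 MOOC registrants. Each student has a
# "preparation" score; better-prepared students are assumed to be more
# likely to finish the course.
population = [random.gauss(50, 15) for _ in range(10_000)]

# Completion is tied to preparation: roughly the best-prepared slice
# finishes, mimicking the ~10% completion rates reported for MOOCs.
completers = [p for p in population if p > 70]

mean_all = sum(population) / len(population)
mean_done = sum(completers) / len(completers)

print(f"completion rate: {len(completers) / len(population):.1%}")
print(f"mean preparation, all registrants: {mean_all:.1f}")
print(f"mean preparation, completers only: {mean_done:.1f}")
# The completers' average sits far above the population average: any
# claim about "MOOC students" based on completer data inherits this bias.
```

Under these assumptions the completer subsample looks dramatically stronger than the registrants as a whole, which is exactly why generalizing from the 10% who finish is bad science.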
Last month, Steve Cooper organized a remarkable workshop at Stanford on the Future of Computing Education Research. The question was, “How do we grow computing education research in the United States?” We pretty quickly agreed that we have a labor shortage — there are too few people doing computing education research in the US. We need more. In particular, we need more CS Ed PhD students. The PhD students do the new and exciting research. They bring energy and enthusiasm into a field.
We also need these students to fit into Computing departments, where that could be Computer Science, or Informatics, or Information Systems/Technology/Just-Information Departments/Schools/Colleges. Yes, we need a presence in Education Schools at some point, to influence how we develop new teachers, but that’s not how we’ll best push the research.
How do we get there?
Roy Pea came to the event. He could only spare a few hours for us, and he only gave a brief 10 minute talk, but it was one of the highlights of the two days for me. He encouraged us to think about Learning Sciences as a model. Learning Science grew out of cognitive science and computer science. It’s a field that CS folks recognize and value. It’s not the same as Education, and that’s a positive thing for our identity. He told us that the field must grow within Computing departments because Domain Matters. The representations, the practices, the abstractions, the mental models — they all differ between domains. If we want to understand the learning of computing, we have to study it from within computing.
I asked Roy, “But how do we influence teacher education? I don’t see learning science classes in most pre-service teacher development programs.” He pointed out that I was thinking about it all wrong. (Not his words — he was more polite than that.) He described how learning sciences has influenced teacher development, integrated into it. It’s not about a separate course: “Learning science for teachers.” It’s about changing the perspective in the existing classes.
Ken Hay, a learning scientist (and long-time friend and colleague) who is at Indiana University, echoed Roy’s recommendation to draw on the learning sciences as a model. He pointed out that Language Matters. He said that when Indiana tried to hire a “CS Education Researcher,” faculty in the CS department said, “I teach CS. I’m a CS Educator. How is s/he different than me?”
We started talking about how “Computer Science Education Research” is a dead-end name for the research that we want to situate in computing departments. It’s the right name for the umbrella set of issues and challenges with growing computing education in the United States. It includes issues like teacher professional development and K-12 curricula. But that’s not what’s going to succeed in computing departments. It’s the part that looks like the learning sciences that can find a home in computing departments. Susanne Hambrusch of Purdue offered a thought experiment that brought it home for me. Imagine that there is a CS department that has CS Ed Research as a research area. They want to list it on their Research web page. Well, drop the word “Research” — this is the Research web page, so that’s a given. And drop the “CS” because this is the CS department, after all. So all you list is “Education.” That conveys a set of meanings that don’t necessarily belong in a CS department and don’t obviously connect to our research questions.
In particular, we want to separate (a) the research about how people learn and practice computing from (b) making teaching and learning occur better in a computing department. (a) can lead to (b), but you don’t want to demand that all (a) inform (b). We need to make the research on learning and practice in computing be a value for computing departments, a differentiator. “We’re not just a CS department. We embrace the human side and engage in social and learning science research.” Lots of schools offer outreach, and some are getting involved in professional development. But to do those things informed by learning sciences and informing learning sciences (e.g., can get published in ICER and ICLS and JLS and AERA) — that’s what we want to encourage and promote.
I was in a breakout that tried to generate names. Michael Horn of Northwestern came up with several of my favorites. Unfortunately, none of them were particularly catchy:
- Learning Sciences of Computing
- Learning Sciences for Computing
- Computational Learning and Practice (sounds too much like machine learning)
- Learning Sciences in Computing Contexts
- Learning and Practice in Computing
- Computational Learning and Literacy
We do have a name for a journal picked out that I really like: Journal of Computational Thinking and Learning.
I’d appreciate your thoughts on these. What would be a good name for the field which studies how people learn computing, how to improve that learning, how professionals practice computing (e.g., end-user programming, computational science & engineering), and how to help novices join those professional communities of practice?
I can’t remember the last time I learned so much and had my preconceived notions so challenged in just two days. I have a lot more notes on the workshop, and they may make it into some future blog posts. Kudos to Steve for organizing an excellent workshop, and my thanks to all the participants!
The report on the CCC’s workshop on MOOCs and other online education technologies is now out.
In February 2013 the Computing Community Consortium (CCC) sponsored the Workshop on Multidisciplinary Research for Online Education (MROE). This visioning activity explored the research opportunities at the intersection of the learning sciences and the many areas of computing, including human-computer interaction, social computing, artificial intelligence, machine learning, and modeling and simulation.
The workshop was motivated and informed by high-profile activities in massive, open, online education (MOOE). The extreme values of “massive” and “open” make explicit, in ways not fully appreciated previously, variability along multiple dimensions of scale and openness.
The report for MROE has recently been completed and is online. It summarizes the workshop activities and format, and synthesizes across them, elaborating on four recurring themes:
- Next Generation MOOCs and Beyond MOOCs
- Evolving Roles and Support for Instructors
- Characteristics of Online and Physical Modalities
- Physical and Virtual Community
Andy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented” which he explained in a blog post. It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”
How would we make them better? I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”
We design programming languages today driven by theory — we aim to provide access to Turing/von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on. Usability is one of the goals, but typically only in a theoretical sense. Quorum is the only programming language I know of that tested usability as part of the design process.
But what if we took Andy Ko’s argument seriously? What if we designed programming languages the way we design good user interfaces — working with specific users on their tasks? A language’s value would become more obvious, and it would be more easily adopted by a community. The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.
What would our design process be? How much more usable and learnable could our programming languages become? How much easier would computing education be if the languages were more usable and learnable? I’d love it if programming language designers could put me out of a job.
Great to see Dan Garcia and his class getting this kind of press! I’m not sure I buy the argument that SFGate is making, though. Do female students at Berkeley find out about this terrific class and then decide to take it? Or are they deciding to take some CS and end up in this class? Based on Mike Hewner’s work, I don’t think that students know much about the content of even great classes like Dan’s before they get there.
It is a predictable college scene, but this Berkeley computer science class is at the vanguard of a tech world shift. The class has 106 women and 104 men.
The gender flip first occurred last spring. It was the first time since at least 1993 – as far back as university enrollment records are digitized – that more women enrolled in an introductory computer science course. It was likely the first time ever.
It’s a small but a significant benchmark. Male computer science majors still far outnumber female, but Prof. Dan Garcia’s class is a sign that efforts to attract more women to a field where they have always been vastly underrepresented are working.
“We are starting to see a shift,” said Telle Whitney, president of the Anita Borg Institute for Women and Technology.
Mihaela Sabin at University of New Hampshire Manchester took Barb’s AP analysis, and produced a version specific to New Hampshire. Quite interesting — would be great to see other states do this!
77% of exam takers passed the test, which is near the upper end of the 43%–83% range reported across all states.
Only twelve girls took the AP CS exam, representing 11.88% of all AP CS exam takers. This percentage of girls taking the exam is 4 times smaller than female representation in the state and nation.
Half of the girls who took the exam passed. 82% of the boys who took the exam passed.
One Hispanic and two Black students took the AP CS exam. The College Board requires that a minimum of five students from a gender, racial, and ethnic group take the test in order to have their passing scores recorded.
2012 NH census data report that Blacks represent 1.4% of the state population and Hispanics represent 3%. Having two Black students take the test in 2013 means that their participation of 1.98% of all AP CS exam takers is 1.4 times higher than the percentage of the Black population in the state of NH. However, Hispanic participation in the AP CS exam of 0.99% is 3 times lower than their representation of 3% in the state.
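As a quick check, the arithmetic behind these percentages can be reproduced. The total number of exam takers is not stated in the summary above, so it is inferred here from the reported 11.88% figure:

```python
# Reproducing the New Hampshire AP CS arithmetic quoted above.
# The total number of exam takers is inferred from
# "twelve girls ... 11.88% of all AP CS exam takers".
girls = 12
total = round(girls / 0.1188)           # inferred total exam takers (~101)

black_takers = 2
hispanic_takers = 1

black_share = black_takers / total      # share of all exam takers
hispanic_share = hispanic_takers / total

print(f"inferred total exam takers: {total}")
print(f"Black share of takers:    {black_share:.2%}")    # ~1.98%
print(f"Hispanic share of takers: {hispanic_share:.2%}") # ~0.99%

# Compare against the 2012 NH census shares quoted above.
print(f"vs. 1.4% Black state population:    {black_share / 0.014:.1f}x higher")
print(f"vs. 3.0% Hispanic state population: {0.03 / hispanic_share:.1f}x lower")
```

The inferred total of about 101 exam takers makes all of the quoted percentages and ratios mutually consistent.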
Are we getting better at handling abstraction? – Radiolab podcast on Killing Babies, Saving the World
I’m a fan of Radiolab podcasts. The one referenced below talks about the Flynn effect. Comparisons of various tests of IQ over decades suggest that we’ve been getting smarter over the last 100 years. Josh Greene argues that we (as humans in the developing world) may be developing a greater ability to handle abstract thinking. Abstraction isn’t everything in computer science (as Bennedsen and Caspersen showed us in 2008), but it is important. Could our problems with computing education resolve over time, because we’re all getting better at abstraction? Might it become easier to teach computer science in future decades, as we develop better cognitive abilities? Given that performance on the Rainfall Problem has not improved over the last thirty years, I doubt it, but it’s an intriguing hypothesis.
Robert talks to Josh Greene, the Harvard professor we had on our Morality show. They revisit some ideas from that show in the context of the big, complicated problems of today (think global warming and nuclear war). Josh argues that to deal with those problems, we’re going to have to learn how to make better use of that tiny part of our brain that handles abstract thinking. Not a simple proposition, but, despite the odds, Josh has hope.
SIGCSE Preview: Project Rise Up 4 CS: Increasing the Number of Black Students who Pass AP CS A — by paying them
I’m guessing that Barbara’s paper on Friday 1:45-3 (in Hanover FG – whole program here) is going to be controversial. She’s working on a problem we’ve had in GaComputes for years. Besides Betsy DiSalvo’s work on Glitch, we’ve made little progress in increasing numbers of Black students taking AP CS A and even less progress in getting more of them to pass the test.
She’s had significant progress this last year using an approach that NMSI used successfully in Texas and elsewhere. She’s offering $100 to Black students who attend extra sessions to help them pass the exam and who do pass the exam. She’s expanding the program now with a Google RISE grant. Her approach is informed by Betsy’s work – it’s about going beyond interests to values and giving students help in navigating past their motivations to not-learn. She does have aspects of the project in place to counteract the disincentives of cash payments for academic achievement. In the final interviews, students didn’t talk about the money. It may be that the money wasn’t an incentive as much as a face-saving strategy. (Barb’s preview talk was also recorded as part of a GVU Brown Bag.)
This paper describes Project Rise Up 4 CS, an attempt to increase the number of Black students in Georgia that pass the Advanced Placement (AP) Computer Science (CS) A exam. In 2012 Black students had the lowest pass rates on the AP CS A exam both in Georgia and nationally. Project Rise Up 4 CS provided Black students with role models, hands-on learning, competitions, a financial incentive, and webinars on AP CS A content. The first cohort started in January of 2013 and finished in May 2013. Of the 27 students who enrolled in the first cohort, 14 met all of the completion requirements, and 9 (69%) of the 13 who took the exam passed. For comparison, in 2012 only 22 (16%) of 137 Black students passed the exam in Georgia. In 2013, 28 (22%) of 129 Black students passed the exam in Georgia. This was the highest number of Black students to pass the AP CS A exam ever in Georgia and a 27% increase from 2012. In addition, students who met the completion requirements for Project Rise Up 4 CS exhibited statistically significant changes in attitudes towards computing and also demonstrated significant learning gains. This paper discusses the motivation for the project, provides project details, presents the evaluation results, and future plans.
2nd Annual ACM NDC Study
Of Non-Doctoral Granting Departments in Computing
Please contact ACM Education Manager Yan Timanovsky (firstname.lastname@example.org) ASAP! Deadline is March 16 (extensions possible upon request).
• As an annual survey, NDC produces timely data on enrollment, degree production, student body composition, and faculty salaries/demographics that can benchmark your institution/program(s) and invite useful conversations with your faculty and administration.
• Those who qualify for and complete NDC in its entirety will be entered in a drawing to receive one of three unrestricted grants of $2,500 toward your department’s discretionary fund.
SIGCSE Preview: Measuring Demographics and Performance in Computer Science Education at a Nationwide Scale Using AP CS Data
Barbara and I are speaking Thursday 3:45-5 (with Neil Brown on his Blackbox work) in Hanover DE on our AP CS analysis paper (also previewed at a GVU Brown Bag). The full paper is available here: http://bit.ly/SIGCSE14-APCS. This is a different story from the AP CS 2013 analysis that Barbara has been getting such press for. This is a somewhat deeper analysis of the 2006–2012 results.
Here are a couple of the figures that I think are interesting. Each data point in these histograms is a state, and each histogram uses the same number of bins, so one can compare across them.
Fitting this story into the six page SIGCSE format was really tough. I wanted to make the figures bigger, and I wanted to tell more stories about the regressions we explored. I focused on the path from state wealth to exam-takers because I hadn’t seen that story in CS Ed previously (though everyone would predict that it was there), but there’s a lot more to tell about these data.
Figure 1: Histograms describing (a) the number of schools passing the audit over the population (measured in 10K), (b) number of exam-takers over the population, and (c) percentage of exam-takers who passed.
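The comparability trick described here — one value per state and an identical bin count in every panel — can be sketched with NumPy; the data below are invented stand-ins, not the paper’s actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-state values standing in for the figure's three measures;
# in the real figure each data point is one US state.
n_states = 50
schools_per_10k = rng.uniform(0, 2, n_states)   # schools passing audit / 10K people
takers_per_10k = rng.uniform(0, 5, n_states)    # exam takers / 10K people
pass_rate = rng.uniform(0.3, 0.9, n_states)     # fraction of takers who passed

# Using the same number of bins for every panel keeps the shapes comparable.
N_BINS = 10
for name, values in [("schools/10K", schools_per_10k),
                     ("takers/10K", takers_per_10k),
                     ("pass rate", pass_rate)]:
    counts, edges = np.histogram(values, bins=N_BINS)
    print(f"{name:12s} counts: {counts.tolist()}  (sum = {counts.sum()})")
```

Each panel gets its own bin edges (fit to its own range), but since the bin count is fixed, the overall shapes of the three distributions can be read side by side.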
Measuring Demographics and Performance in Computer Science Education at a Nationwide Scale Using AP CS Data
Abstract: Before we can reform or improve computing education, we need to know the current state. Data on computing education are difficult to come by, since it’s not tracked in US public education systems. Most of our data are survey-based or interview-based, or are limited to a region. By using a large and nationwide quantitative data source, we can gain new insights into who is participating in computing education, where the greatest need is, and what factors explain variance between states. We used data from the Advanced Placement Computer Science A (AP CS A) exam to get a detailed view of demographics of who is taking the exam across the United States and in each state, and how they are performing on the exam. We use economic and census data to develop a more detailed view of one slice (at the end of secondary school and before university) of computer science education nationwide. We find that minority group involvement is low in AP CS A, but the variance between states in terms of exam-takers is driven by minority group involvement. We find that wealth in a state has a significant impact on exam-taking.
For the first time ever, CS Education research is a field eligible for NSF CAREER. Applicants will be able to select STEM-CP: CE21 as the program for the July deadline. Please help get the word out to potential applicants. We’d like to see some good proposals in this first year of inviting CE21 CAREER proposals.
The National Science Foundation’s Computer and Information Science and Engineering Directorate (CISE) invites proposals this year to the Faculty Early Career Development (CAREER) program for faculty engaging in Computing Education research. That is, if you apply for the CAREER program, you’ll be able to select “STEM-CP: CE21” as your Unit of Consideration. The intent of the CAREER program (http://www.nsf.gov/career) is to provide stable support at a sufficient level and duration to enable awardees to develop careers as outstanding researchers and educators who effectively integrate teaching, learning and discovery.
CISE is organizing a one-day proposal writing workshop (registration and details at: http://cs.gmu.edu/events/nsfcisecareer2014/) for CAREER-eligible faculty on March 31, 2014 in Arlington, VA. The registration deadline is February 28th. Unlike past years, this will be the only CISE CAREER workshop during this calendar year. Please circulate this information among interested faculty. The next deadline for CISE CAREER proposals is July 21, 2014.
Please let me know if you have any questions or concerns.
Jeffrey R.N. Forbes Program Director CISE/CNS Education and Workforce Cluster National Science Foundation email@example.com, +1 (919) 292-4291
I’m completely open to the idea that completion rates are the wrong measures of success for MOOCs. But I do believe that we need some measure. What would success for MOOCs mean? How do we know if it’s being achieved? Or if it’s a waste of time and money?
In the meantime, the Harvard and MIT researchers said they hoped the new studies would help people understand that technology and scale are not the only things that distinguish MOOCs from other kinds of higher education.
“People are projecting their own desires onto MOOCs,” said Mr. Ho, “and then holding them accountable for criteria that the instructors and institutions and, most importantly, students don’t hold for themselves.”
What a cool idea! Rob Moore is building on the subgoal labeling work that we (read: “Lauren”) did, and is using crowd-sourcing techniques to generate the labels.
Subgoal labeling is a technique known to support learning new knowledge by clustering a group of steps into a higher-level conceptual unit. It has been shown to improve learning by helping learners to form the right mental model. While many learners view video tutorials nowadays, subgoal labels are often not available unless manually provided at production time. This work addresses the challenge of collecting and presenting subgoal labels to a large number of video tutorials. We introduce a mixed-initiative approach to collect subgoal labels in a scalable and efficient manner. The key component of this method is learnersourcing, which channels learners’ activities using the video interface into useful input to the system. The presented method will contribute to the broader availability of subgoal labels in how-to videos.
An important and interesting position, which I first learned about from the work of Caroline Simard. There is significant evidence that Silicon Valley is not a meritocracy, but there is significant advantage to the people in power there in maintaining the myth.
But if the tech scene is really a meritocracy, why are so many of its key players, from Mark Zuckerberg to Steve Jobs, white men? If entrepreneurs are born, not made, why are there so many programs attempting to create entrepreneurs? If tech is truly game-changing, why are old-fashioned capitalism and the commodification of personal information never truly questioned?
The myths of meritocracy and entrepreneurialism reinforce ideals of the tech scene that shore up its power structures and privileges.
The myths of authenticity, meritocracy, and entrepreneurialism do have some basis in fact. But they are powerful because they reinforce ideals of the tech scene that shore up its power structures and privileges. Believing that the tech scene is a meritocracy implies that those who obtain great wealth deserve it, and that those who don’t succeed do not. The undue emphasis placed on entrepreneurship, combined with a limited view of who “counts” as an entrepreneur, function to exclude entire categories of people from ascending to the upper echelon of the industry. And the ideal of authenticity privileges a particular type of self-presentation that encourages people to strategically apply business logics to the way they see themselves and others.