Archive for July, 2013

Writing programs using ordinary language: Implications for computing education

Once upon a time, all computer scientists understood how floating point numbers were represented in binary.  Numerical methods was an important part of every computing curriculum.  I know few undergraduate programs that require numerical methods today.

Results like those below make me think about what else we teach that will one day become passé, irrelevant, or automated.  The second result is particularly striking.  If descriptions from programming competitions can lead to automatic program generation, what does that imply about what we’re testing in programming competitions — and why?

The researchers’ recent papers demonstrate both approaches. In work presented in June at the annual Conference of the North American Chapter of the Association for Computational Linguistics, Barzilay and graduate student Nate Kushman used examples harvested from the Web to train a computer system to convert natural-language descriptions into so-called “regular expressions”: combinations of symbols that enable file searches that are far more flexible than the standard search functions available in desktop software.
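To make concrete what these systems produce (this is my own hypothetical illustration, not an example from the paper): a natural-language description like “lines that start with a three-digit number” maps to a compact regular expression, shown here with Python’s re module:

```python
import re

# Hypothetical example (not from the Kushman & Barzilay paper):
# the description "lines that start with a three-digit number"
# corresponds to the regular expression below.
pattern = re.compile(r"^\d{3}\b")

lines = ["404 not found", "error at line 7", "123 main street"]
matches = [line for line in lines if pattern.search(line)]
print(matches)  # ['404 not found', '123 main street']
```

The gap between the plain-English description and the symbolic pattern is exactly what their system learns to bridge.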

In a paper being presented at the Association for Computational Linguistics’ annual conference in August, Barzilay and another of her graduate students, Tao Lei, team up with professor of electrical engineering and computer science Martin Rinard and his graduate student Fan Long to describe a system that automatically learned how to handle data stored in different file formats, based on specifications prepared for a popular programming competition.

via Writing programs using ordinary language – MIT News Office.

July 31, 2013 at 1:36 am 2 comments

Taking a test is better than studying, even if you just guess: We need to flip the flipped classroom

The benefits of testing for learning are fascinating, and the result described below makes me even more impressed with the effect.  It suggests even more strongly that the critical feature of learning is trying to understand, trying to generate an answer, even more than reading an answer.

Suppose, for example, that I present you with an English vocabulary word you don’t know and either (1) provide a definition that you read (2) ask you to make up a definition or (3) ask you to choose from among a couple of candidate definitions. In conditions 2 & 3 you obviously must simply guess. (And if you get it wrong I’ll give you corrective feedback.) Will we see a testing effect?

That’s what Rosalind Potts & David Shanks set out to find, and across four experiments the evidence is quite consistent. Yes, there is a testing effect. Subjects better remember the new definitions of English words when they first guess at what the meaning is–no matter how wild the guess.

via Better studying = less studying. Wait, what? – Daniel Willingham.

These results mesh well with a new study from Stanford.  They found that the order of events in a “flipped” classroom matters — the problem-solving activity (in the classroom) should come before the reading or videos (at home). The general theme is the same in both sets of studies: problem-solving drives learning; studying alone does far less to prepare one for problem-solving.

A new study from the Stanford Graduate School of Education flips upside down the notion that students learn best by first independently reading texts or watching online videos before coming to class to engage in hands-on projects. Studying a particular lesson, the Stanford researchers showed that when the order was reversed, students’ performances improved substantially.

via Classes should do hands-on exercises before reading and video, Stanford researchers say.

July 30, 2013 at 1:47 am 15 comments

The challenges of integrated engineering education

I spent a couple days at Michigan State University (July 11-12) learning about integrated engineering education. The idea of integrated engineering education is to get students to see how the mathematics and physics (and other requirements) fit into their goals of becoming engineers. In part, it’s a response to students learning calculus here and physical principles there, but having no idea what role they play when it comes to design and solving real engineering problems. (Computer science hasn’t played a significant role in previous experiments in integrated engineering education, but if one were to do it today, you probably would include CS — that’s why I was invited, as someone interested in CS for other disciplines.)  The results of integrated engineering education are positive, including higher retention (a pretty consistent result across all the examples we saw), higher GPAs (often), and better learning (some data).

But these programs rarely last. A program at U. Massachusetts-Dartmouth is one of the longest running (9 years), but it’s gone through extensive revision — it’s not clear it’s still the same program. These are hard programs to get set up. It is an even bigger challenge to sustain them.

The programs lie across a spectrum of integration. The most intense was a program at Rose-Hulman that lasted for five years. All the core first year engineering courses were combined in a single 12 credit hour course, co-taught by faculty from all the relevant disciplines. That’s tight integration. On the other end is a program at Wright State University, where the engineering faculty established a course on “Engineering Math” that meets Calculus I requirements for Physics, but is all about solving problems (e.g., using real physical units) that involve calculus. The students still take Calculus I, but later. The result is higher retention and students who get the purpose for the mathematics — but at a cost of greater disconnect between Engineering and mathematics. (No math faculty are involved in the Engineering Math course.)

My most significant insight was: The greater the integration, the greater the need for incentives. And the greater the need for the incentives, the higher in the organization you need support. If you just want to set up a single course to help Engineers understand problem-solving with mathematics, you can do that with your department or school, and you only have to provide incentives to a single faculty member each year. If you want to do something across departments, you need greater incentives to keep it going, and you’ll need multiple chairs or deans. If you want a 12 credit hour course that combines four or five disciplines, maybe you need the Provost or President to make it happen and keep it going.

Overall, I wasn’t convinced that integrated engineering education efforts are worth the costs. Are the results that we have merely a Hawthorne effect?  It’s hard to sustain integrated anything in American universities (as Cuban told us in “How Scholars Trumped Teachers”). (Here’s an interesting review of Cuban’s book.) Retention is good and important (especially of women and under-represented students), but if Engineering programs are already over-subscribed (which many in the workshop were), then why improve retention of students in the first year if there is no space for them in the later years? Integration probably leads to better learning, but there are deeper American university structural problems to fix first, which would reduce the costs of doing the right things for learning.

July 29, 2013 at 1:41 am 4 comments

Call for papers for first ACM Conference on Learning at Scale

The First Annual ACM Conference on Learning at Scale will be held March 4-5,
2014 in Atlanta, GA (immediately prior to and collocated with SIGCSE-14).

The Learning at Scale conference is intended to promote scientific exchange
of interdisciplinary research at the intersection of the learning sciences
and computer science. Inspired by the emergence of Massive Open Online
Courses (MOOCs) and the accompanying huge shift in thinking about education,
this conference was created by ACM as a new scholarly venue and key focal
point for the review and presentation of the highest quality research on how
learning and teaching can change and improve when done at scale.

“Learning at Scale” refers to new approaches for students to learn and for
teachers to teach, when engaging large numbers of students, either in a
face-to-face setting or remotely, whether synchronous or asynchronous, with
the requirement that the techniques involve large numbers of students (where
“large” is preferably thousands of students, but can also apply to hundreds
in in-person settings). Topics include, but are not limited to: Usability
Studies, Tools for Automated Feedback and Grading, Learning Analytics,
Analysis of Log Data, Studies of Application of Existing Learning Theory,
Investigation of Student Behavior and Correlation with Learning Outcomes,
New Learning and Teaching Techniques at Scale.

IMPORTANT DATES
—————
November 8, 2013: Paper submissions due
November 8, 2013: Tutorial proposals due
December 23, 2013: Notification to authors of full papers
January 2, 2014: Works-in-progress submissions due (posters and demos)
January 14, 2014: Notification to authors of acceptance of works-in-progress
January 17, 2014: All revised and camera-ready materials due
March 4-5, 2014: Learning at Scale meeting

Additional information is available at: http://learningatscale.acm.org/

July 29, 2013 at 1:22 am Leave a comment

Congressional Panels Dump on STEM Reshuffling Plan

Will TUES exist again?  Will STEM-C get created?  Looks like it’s all up in the air now.

A bill approved yesterday by the House of Representatives science committee to reauthorize NASA programs, for example, rejects the two key elements of what the administration has proposed—stripping the agency of most of its STEM education agencies and putting the rest under one roof. “The administration may not implement any proposed STEM education and outreach-related changes proposed [for NASA] in the president’s 2014 budget request,” the bill flatly declares. “Funds devoted to education and public outreach should be maintained in the [science, aeronautics, exploration, and mission] directorates, and the consolidation of those activities within the Education Directorate is prohibited.”

Likewise, the House version of the CJS spending bill would restore money for STEM education activities at NASA and the National Oceanic and Atmospheric Administration and put the kibosh on a realignment of undergraduate STEM education programs at NSF. “The committee supports the concept of improving efficiency and effectiveness, through streamlining and better coordination, but does not believe that this particular restructuring proposal achieves that goal,” the legislators explain in a report this week accompanying the spending bill. The report also notes that “the ideas presented in the budget request lack any substantive implementation plan and have little support within the STEM education community.”

via Congressional Panels Dump on STEM Reshuffling Plan – ScienceInsider.

More from the Senate report on the STEM Consolidation:

“While the Committee maintains its support of greater efficiencies and consolidation – as evident by adopting some of the STEM consolidation recommendations made by the administration’s budget request – the Committee has concerns that the proposal as a whole has not been thoroughly vetted with the education community or congressional authorizing committees, and lacks thorough guidance and input from Federal agencies affected by this proposal, from both those that stand to lose education and outreach programs and from those that stand to gain them. The administration has yet to provide a viable plan ensuring that the new lead STEM institutions – the National Science Foundation, the Department of Education, and the Smithsonian Institution – can support the unique fellowship, training, and outreach programs now managed by other agencies. Conversely, what is proposed as a consolidation of existing STEM programs from NOAA, NASA, and NIST into the new lead STEM agencies is really the elimination of many proven and successful programs with no evaluation on why they were deemed duplicative or ineffective.”

via FY 2014 Senate Appropriations: STEM Consolidation and Public Access.

The STEM-C program was recommended by one committee, but not CAUSE (the program created instead of TUES). Said the House report, “Consistent with the Committee’s position on the proposed STEM education restructuring, the recommendation does not support the establishment of the new CAUSE program or the transition of the GRF program into the interagency National GRF.”

July 26, 2013 at 1:57 am Leave a comment

More women pass AP CS than AP Calculus

Barbara Ericson has generated her 2012 Advanced Placement Computer Science report. http://home.cc.gatech.edu/ice-gt/321 has all of her reports. http://home.cc.gatech.edu/ice-gt/548 has her more detailed analysis just of 2012. Since one of our concerns with GaComputes and ECEP is pass rates, not just numbers of test-takers, she dug deeper into pass rates.  For a point of comparison, she looked up AP Calculus pass rates.  What she found is somewhat surprising — below is quoted from her page.

Comparison of AP CS A to AP Calculus AB in 2012

  • The number of students that take the exam per teacher is much higher for AP Calculus AB at 21 students per teacher versus 11 for Computer Science A

  • The number of schools that teach Calculus is 11,694 versus 2,103 that teach CS A

  • AP CS A had a higher pass rate than Calculus – 63% versus 59%

  • AP CS A had a higher female pass rate than Calculus – 56% versus 55%

  • AP CS A had a higher Hispanic pass rate than Calculus – 39.8% versus 38.4%

  • AP Calculus had a higher black pass rate than CS – 28.7% versus 27.3%

  • Calculus had a much higher percentage of women take the exam than CS – 48.3% versus 18.7%

  • Calculus had a higher percentage of black students take the exam than CS – 5.4% versus 4.0%

  • Calculus had a higher percentage of Hispanic/Latino students take the exam than CS – 11.5% versus 7.7%

July 26, 2013 at 1:39 am 1 comment

Starting with Robots: Linking Spatial Ability and Learning to Program

Stuart Wray has a remarkable blog that I recommend to CS teachers.  He shares his innovations in teaching, and grounds them in his exploration of the literature into the psychology of programming.  The quote and link below is an excellent example, where his explanation led me to a paper I’m eager to dive into.  Stuart has built an interesting warm-up activity for his class that involves robots.  What I’m most intrigued by is his explanation for why it works as it does.  The paper that he cites by Jones and Burnett is not one that I’d seen before, but it explores an idea that I’ve been interested in for a while, ever since I discovered the Spatial Intelligence and Learning Center:  Is spatial ability a prerequisite for learning in computer science?  And if so, can we teach it explicitly to improve CS learning?

The game is quite fun and doesn’t take very long to play — usually around a quarter of an hour or less. It’s almost always quite close at the end, because of course it’s a race between the last robot in each team. There’s plenty of opportunity for delaying tactics and clever blocking moves near the exit by the team which is behind, provided they don’t just individually run for the exit as fast as possible.

But turning back to the idea from James Randi, how does this game work? It seems from my experience to be doing something useful, but how does it really work as an opening routine for a programming class? Perhaps first of all, I think it lets me give the impression to the students that the rest of the class might be fun. Lots of students don’t seem to like the idea of programming, so perhaps playing a team game like this at the start of the class surprises them into giving it a second chance.

I think also that there is an element of “sizing the audience up” — it’s a way to see how the students interact with one another, to see who is retiring and who is bold, who is methodical and who is careless. The people who like clever tricks in the game seem often to be the people who like clever tricks in programming. There is also some evidence that facility with mental rotation is correlated with programming ability. (See Spatial ability and learning to program by Sue Jones and Gary Burnett in Human Technology, vol.4(1), May 2008, pp.47-61.) To the extent that this is true, I might be getting a hint about who will have trouble with programming from seeing who has trouble making their robot turn the correct direction.

via On Food and Coding: The Robots Game.

July 25, 2013 at 1:12 am 6 comments

Rupert Murdoch wants to teach your kids in an AP CS MOOC

We have very few AP CS teachers in the United States — about 1 for every 12 high schools, and they’re not evenly distributed.  I do get that an AP CS MOOC may make it more available to more students.  Still, I’m not too excited about a MOOC to teach AP CS.  AP CS is already overwhelmingly white and male.  The demographic data from existing CS MOOCs is even more white and male than our face-to-face classes.  I can’t see how an AP CS MOOC will improve diversity, and we have a desperate need to improve diversity.

But beyond that — Rupert Murdoch?!?  Really?  Why is he interested in CS education?  I do note that he is starting out with a monetizing scheme.  Want your questions answered?  $200 per student per year.  I do see how this AP CS MOOC may deal with some of the shortcomings of other MOOCs, and may even be better with diversity than existing MOOCs, because of the availability of direct support — at a price.

Now, Rupert Murdoch, the billionaire media mogul behind News Corp., wants to do something about the lack of computer science education. Murdoch’s Amplify education unit plans to launch a new advanced placement online computer science course this fall, taught by longtime high-school instructor Rebecca Dovi.

The course is described as a MOOC, short for massive open online course. It is free to high school students, though additional resources will be made available for $200 per student. It is geared toward those who want to take the computer science AP exam in 2014.

via Rupert Murdoch wants to teach your kids computer science with this new online AP course – GeekWire.

July 24, 2013 at 1:40 am 1 comment

Interaction between stereotypes, expectations of success, and learning from failure

An interesting study suggests that role models and how they’re described (in terms of their achievements, or in terms of their struggles) interact with students’ stereotypes about scientists and other professionals in STEM fields.  So there are not just cognitive benefits to learning from failure; there are also affective dimensions to focusing on the struggle (including failures) and not just the success.

But when the researchers exposed middle-school girls to women who were feminine and successful in STEM fields, the experience actually diminished the girls’ interest in math, depressed their plans to study math, and reduced their expectations of future success. The women’s “combination of femininity and success seemed particularly unattainable to STEM-disidentified girls,” the authors conclude, adding that “gender-neutral STEM role models,” as well as feminine women who were successful in non-STEM fields, did not have this effect.

Does this mean that we have to give up our most illustrious role models? There is a way to gain inspiration from truly exceptional individuals: attend to their failures as well as their successes. This was demonstrated in a study by Huang-Yao Hong of National Chengchi University in Taiwan and Xiaodong Lin-Siegler of Columbia University.

The researchers gave a group of physics students information about the theories of Galileo Galilei, Isaac Newton and Albert Einstein. A second group received readings praising the achievements of these scientists. And a third group was given a text that described the thinkers’ struggles. The students who learned about scientists’ struggles developed less-stereotyped images of scientists, became more interested in science, remembered the material better, and did better at complex open-ended problem-solving tasks related to the lesson—while the students who read the achievement-based text actually developed more stereotypical images of scientists.

via why you’re choosing the wrong role models.

July 23, 2013 at 1:31 am 3 comments

Call for proposals on systematic reviews on computer science education

I met Jeff Froyd at the MSU Workshop in Integrated Engineering Education, and he asked me to share this call for a special issue of IEEE Transactions on Education.  The whole notion of a “systematic review” is pretty interesting, and relates to the Blog@CACM post I wrote recently.  His call has detailed and interesting references at the bottom.

Request for Proposals

2015 Special Issue on Systematic Reviews

Overview

The IEEE Transactions on Education solicits proposals for a special issue of systematic reviews on education in electrical engineering, computer engineering, computer science, software engineering, and other fields within the scope of interest of IEEE, to be published in 2015. The deadline for 2,000-word proposals is 9 September 2013. Proposals should be emailed as PDF documents, and questions about proposals directed, to the Editor-in-Chief, Jeffrey E. Froyd, at jefffroyd@ieee.org.

Special Issue Timeline

  • 9 September 2013: Interested interdisciplinary, global teams of authors should submit proposals for full papers by 9 September 2013.
  • 14 October 2013: The editorial team for the special issue will review proposals and notify authors of the status of their submission by 14 October 2013.
  • 31 December 2014: For proposals that are accepted, the authors will be asked to prepare manuscripts that will go through the standard review process for the IEEE Transactions on Education in the Scholarship of Integration. Completed draft manuscripts will be due on 31 December 2014. Papers are expected to be between 8,000 and 10,000 words in length.
  • Xxx – 31 December 2014: Plan (timeline, milestones, activities…) will be collaboratively developed to support manuscript completion by 31 December 2014. Steps in the process of preparing a systematic review include: (i) establishing the research questions, (ii) selecting the databases to be searched and the search strings, (iii) establishing inclusion/exclusion criteria, (iv) selecting articles to be studied, etc. Meetings, in-person or virtual, will be scheduled to provide support for systematic review methodologies. Meetings will be intended to help develop systematic review expertise across the teams and to improve quality of published systematic reviews.
  • 2015: Manuscripts accepted for publication are expected to be published in 2015.

Proposal Guidelines

Proposals for systematic review manuscripts must provide the following sections:

  (i) Contact information and institutional affiliation of the lead author
  (ii) An initial list of the team members who will prepare the systematic review, indicating how these team members provide requisite expertise and global representation. Given the requirements for systematic reviews, it is expected that a qualified, interdisciplinary team will include one or more individuals with expertise in library sciences, one or more individuals with expertise in synthesizing methodologies (qualitative, quantitative, mixed method, or combinations of the three), and one or more individuals with domain expertise in the proposed content area. Given the need to promote global community in the fields in which ToE publishes, it is expected that a qualified team will represent the diverse global regions that comprise the IEEE.
  (iii) Description of the proposed content area, why a systematic review of education in the proposed content area is timely, why a systematic review will enhance development of the field, and how future initiatives might build on the systematic review.
  (iv) Initial description of the proposed systematic review methodology. The project will provide support to promote development of systematic review methodology across all participating teams. However, demonstration of initial familiarity with systematic review methodology will strengthen a proposal.

Brief Overview of Systematic Review Methodology
Diverse fields are developing systematic review, a study of primary (and other) studies to address a crafted set of questions, as a research methodology in and of itself. With risks of considerable oversimplification, systematic review methodology rests on two basic ideas. First, interdisciplinary systematic review teams can use large databases of journals, conference proceedings, and grey literature that have been constructed to search the literature using keywords. Then, the team systematically evaluates returned items using explicit criteria to identify the set of articles that will be reviewed. The first basic idea provides a transparent, unbiased, replicable process to identify relevant articles. Second, teams can apply synthesizing methodologies that have been developed in the last 50 years to extract trends, patterns, themes, relationships, gaps… from the identified set of articles. Synthesizing methodologies draw from a wide variety of quantitative (e.g., statistical meta-analysis, network meta-analysis), qualitative (e.g., meta-ethnography, content analysis), mixed method approaches, and combinations of the three. Systematic, transparent use of literature search and synthesizing methodologies can produce systematic reviews of the literature that may be seminal contributions to the community that has created the literature. Good introductions to systematic reviews can be found in the Resources section below.

ToE has already established review criteria for the scholarship of integration, the area addressed by the proposed special issue. These review criteria can be found at http://sites.ieee.org/review‐criteria‐toe/.

Examples

This section offers examples of systematic reviews that have been done in STEM education. Generally, topics for these examples are outside topical areas that would be considered for the IEEE Transactions on Education, but they show examples of good practices for some steps in systematic reviews.


L. Springer, M. E. Stanne and S. S. Donovan, “Effects of small‐group learning on undergraduates in science, mathematics, engineering, and technology: A meta‐analysis.” Review of Educational Research, vol. 69, no. 1, pp. 21‐51. 1999 (doi: 10.3102/00346543069001021)

F. B. V. Benitti, “Exploring the educational potential of robotics in schools: A systematic review,” Comput. & Educ., vol. 58, no. 3, pp. 978‐988, 2012

N. Meese, and C. McMahon, ”Knowledge sharing for sustainable development in civil engineering: A systematic review,” AI and Soc., vol. 27, no. 4, pp. 437‐449, 2012

N. Salleh, E. Mendes, and J. Grundy, “Empirical studies of pair programming for CS/SE teaching in higher education: A systematic literature review,” IEEE Trans. Softw. Eng., vol. 37, no. 4, pp. 509-525, 2011

R. M. Tamim, R. M. Bernard, E. Borokhovski, P. C. Abrami, and R. F. Schmid, “What forty years of research says about the impact of technology on learning: A second‐order meta‐analysis and validation study,” Review of Educ. Research, vol. 81, no. 1, pp. 4‐28, 2011

Resources

These resources provide guides to systematic review methodologies:

E. Barnett‐Page, and J. Thomas, “Methods for the synthesis of qualitative research: A critical review,” BMC Medical Research Methodology, vol. 9, no. 1, p. 59, 2009
Abstract:

Background: In recent years, a growing number of methods for synthesising qualitative research have emerged, particularly in relation to health‐related research. There is a need for both researchers and commissioners to be able to distinguish between these methods and to select which method is the most appropriate to their situation.

Discussion: A number of methodological and conceptual links between these methods were identified and explored, while contrasting epistemological positions explained differences in approaches to issues such as quality assessment and extent of iteration. Methods broadly fall into ‘realist’ or ‘idealist’ epistemologies, which partly accounts for these differences.

Summary: Methods for qualitative synthesis vary across a range of dimensions. Commissioners of qualitative syntheses might wish to consider the kind of product they want and select their method – or type of method – accordingly.

M. Borrego, E. P. Douglas, and C. T. Amelink, “Quantitative, qualitative, and mixed research methods in engineering education,” Journal of Eng. Educ., vol. 98, no. 1, pp. 53-66, 2009
Abstract: The purpose of this research review is to open dialog about quantitative, qualitative, and mixed research methods in engineering education research. Our position is that no particular method is privileged over any other. Rather, the choice must be driven by the research questions. For each approach we offer a definition, aims, appropriate research questions, evaluation criteria, and examples from the Journal of Engineering Education. Then, we present empirical results from a prestigious international conference on engineering education research. Participants expressed disappointment in the low representation of qualitative studies; nonetheless, there appeared to be a strong preference for quantitative methods, particularly classroom‐based experiments. Given the wide variety of issues still to be explored within engineering education, we expect that quantitative, qualitative, and mixed approaches will be essential in the future. We encourage readers to further investigate alternate research methods by accessing some of our sources and collaborating across education/social science and engineering disciplinary boundaries.

D. A. Cook and C. P. West, “Conducting systematic reviews in medical education: a stepwise approach,” Medical Education, vol. 46, pp. 943-952, 2012
Abstract:

Objectives: As medical education research continues to proliferate, evidence syntheses will become increasingly important. The purpose of this article is to provide a concise and practical guide to the conduct and reporting of systematic reviews.

Results: (i) Define a focused question addressing the population, intervention, comparison (if any) and outcomes. (ii) Evaluate whether a systematic review is appropriate to answer the question. Systematic and non-systematic approaches are complementary; the former summarise research on focused topics and highlight strengths and weaknesses in existing bodies of evidence, whereas the latter integrate research from diverse fields and identify new insights. (iii) Assemble a team and write a study protocol. (iv) Search for eligible studies using multiple databases (MEDLINE alone is insufficient) and other resources (article reference lists, author files, content experts). Expert assistance is helpful. (v) Decide on the inclusion or exclusion of each identified study, ideally in duplicate, using explicitly defined criteria. (vi) Abstract key information (including on study design, participants, intervention and comparison features, and outcomes) for each included article, ideally in duplicate. (vii) Analyse and synthesise the results by narrative or quantitative pooling, investigating heterogeneity, and exploring the validity and assumptions of the review itself. In addition to the seven key steps, the authors provide information on electronic tools to facilitate the review process, practical tips to facilitate the reporting process and an annotated bibliography.

M. Petticrew and H. Roberts, Systematic Reviews in the Social Sciences: A Practical Guide. Malden, MA: Blackwell Publishing, 2006

A. C. Tricco, J. Tetzlaff and D. Moher, “The art and science of knowledge synthesis,” Journal of Clinical Epidemiology, vol. 64, no. 1, pp. 11‐20, 2011
Abstract:

Objectives: To review methods for completing knowledge synthesis.

Study Design and Setting: We discuss how to complete a broad range of knowledge syntheses. Our article is intended as an introductory guide.

Results: Many groups worldwide conduct knowledge syntheses, and some methods are applicable to most reviews. However, variations of these methods are apparent for different types of reviews, such as realist reviews and mixed‐model reviews. Review validity is dependent on the validity of the included primary studies and the review process itself. Steps should be taken to avoid bias in the conduct of knowledge synthesis. Transparency in reporting will help readers assess review validity and applicability, increasing its utility.

Conclusion: Given the magnitude of the literature, the increasing demands on knowledge syntheses teams, and the diversity of approaches, continuing efforts will be important to increase the efficiency, validity, and applicability of systematic reviews. Future research should focus on increasing the uptake of knowledge synthesis, how best to update reviews, the comparability between different types of reviews (eg, rapid vs. comprehensive reviews), and how to prioritize knowledge synthesis topics.

July 22, 2013 at 1:44 am 1 comment

New UK curriculum: Five-year-olds to learn programming and algorithms

I haven’t read the new framework myself yet, but the press coverage suggests that this is really something noteworthy.  I do hope that there is some serious assessment going on with this new curriculum.  I’m curious about what happens when five-year-olds start programming.  How far can they get?  In Yasmin Kafai’s studies of Scratch and in Amy Bruckman’s studies of MOOSE Crossing, almost none of the younger students ever used conditionals or loops.  But those were small studies compared to a national curriculum.  How much transfers forward?  If you do an abstract activity (programming) so early, does it lead to concrete operational reasoning earlier?  Or does it get re-interpreted by the student when she reaches concrete operational?  And, of course, the biggest question right now is: how can they get enough teachers quickly enough?
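For concreteness, here is what “using a conditional or a loop” means in a young student’s program.  This sketch is in Python purely for illustration; the studies cited used block and text environments (Scratch, MOOSE Crossing), and the constructs are the point, not the language:

```python
# A minimal sketch of the two constructs the studies tracked: a loop
# and a conditional.  Python stands in for the block/text environments
# (Scratch, MOOSE Crossing) that the young students actually used.

def count_claps(beats):
    """Clap on every even-numbered beat."""
    claps = 0
    for beat in range(1, beats + 1):   # the loop
        if beat % 2 == 0:              # the conditional
            claps += 1
    return claps

print(count_claps(8))  # claps on beats 2, 4, 6, 8
```

Even a tiny program like this combines sequencing, iteration, and a decision; the finding that most younger students never reached this level is exactly what makes assessment of the new curriculum so important.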

The new curriculum will be mandatory from September 2014, and spans the breadth of all four ‘key stages’, from when a child first enters school at age five to when they end their GCSEs at 16. The initial draft of the curriculum was written by the British Computer Society (BCS) and the Royal Academy of Engineering in October 2012, before being handed back to the DfE for further tweaks.

By the end of key stage one, students will be expected to ‘create and debug simple programs’ as well as ‘use technology safely and respectfully’. They will also be taught to, ‘understand what algorithms are; how they are implemented as programs on digital devices; and that programs execute by following precise and unambiguous instructions’.

via Five-year-olds to learn programming and algorithms in major computing curriculum shake-up – IT News from V3.co.uk.

Not everyone is happy about the new curriculum.  Neil Brown has a nice post talking about some of the issues.  He kindly sent me a set of links to the debate there, and I found this discussion from a transcript of Parliament proceedings fascinating — these are all really good issues.

First, on professional development, the Minister made the point that some money was being made available for some of the professional development work. Does he feel that it will be sufficient? There is a serious issue about ongoing professional development throughout the system, starting at primary level, where updating computer skills will be part of a range of updated skills which all primary teachers will need to deliver the new curriculum. It is also an issue at secondary level, where it may not be easy but is possible to recruit specialist staff with up-to-date computing skills. However, if you are not careful, that knowledge and those skills can fall out of date very quickly.

Secondly, what more are the Government planning to do to attract new specialist computing staff to teach in schools? It is fairly obvious that there would be alternative, better paid jobs for high-class performers in computing. They may not necessarily rush into the teaching profession.

Thirdly, can the Minister confirm that the change in name does not represent a narrowing of the curriculum, and that pupils will be taught some of those broader skills such as internet use and safety, word processing and data processing, so that the subject will actually give people a range of knowledge and skills which the word “computing” does not necessarily encompass?

Fourthly, the teaching will be successful only if it is supported by sufficient funds to modernise IT facilities and to keep modernising them as technology changes. The noble Lord made reference to some low-cost initiatives in terms of facilities in schools. However, I have seen reference to 3D printers. That is fine, it is just one example, but 3D printers are very expensive. The fact is that, for children to have an up-to-date and relevant experience, you would need to keep providing not just low-cost but some quite expensive technological equipment in schools on an ongoing basis. Will sufficient funds be available to do that?

Finally, given that computing skills and the supporting equipment that would be needed are increasingly integral to the teaching of all subjects, not just computing, have the Government given sufficient thought to what computing skills should be taught within the confines of the computing curriculum and what computing skills need to be provided with all the other arts and science subjects that people will be studying, in all of which pupils will increasingly require computing skills to participate fully? Has that division of responsibilities been thought through? I look forward to the Minister’s response.

via Lords Hansard text for 8 July 2013 (pt 0001).

We just had the ECEP Day at the Computer Science Teachers Association (CSTA) Conference on July 14, where I heard representatives from 16 states talk about their efforts to improve computing education.  Special interests, where do state legislators have to be involved, what does “Computing” mean anyway — all of the states reported pretty much the same issues, but each in a completely different context. The issues seem to be pretty much the same in the UK, too.

July 22, 2013 at 1:21 am 7 comments

Get your student loan forgiven: Teach CS in Texas

Talking to teachers from Texas at the CSTA Conference, I heard that the loan forgiveness program isn’t all that good.  But the fact that Texas is listing CS as #2 on their “shortage” list is an indication that it’s something that they want more of.

The Texas Education Agency (TEA) has received approval from the US Department of Education (USDE) for the 2013-2014 teacher shortage areas.  Please note the shortage areas have changed from previous years.

The approved shortage areas for the 2013-2014 school year are:

  • Bilingual/English as a Second Language
  • Computer Science
  • Languages Other Than English (Foreign Language)
  • Mathematics
  • Science
  • Special Education

The approved shortage areas allow the administrator the ability to recruit and retain qualified teachers and to help reward teachers for their hard work using the loan forgiveness opportunities. School principals can act on behalf of the Commissioner of Education to certify that a teacher has met the minimum qualifications required for certain loan forgiveness programs.

via Texas Education Agency – 2013-2014 Teacher Shortage Areas.

July 19, 2013 at 1:47 am 3 comments

If we can’t teach programming, create software engineering for poor programmers

I finished Nathan Ensmenger’s 2010 book “The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise” and wrote a Blog@CACM post inspired by it. In my Blog@CACM article, I considered what our goals are for an undergraduate CS degree and how we know if we got there. Ensmenger presents evidence that the mathematics requirements in undergraduate computer science are unnecessarily rigorous, and that computer science has never successfully become a profession. The former isn’t particularly convincing (there may be no supporting evidence that mathematics is necessary for computer programming, but that doesn’t mean it’s not useful or important), but the latter is well-supported. Computer programming has not become a profession like law, or medicine, or even like engineering. What’s more, Ensmenger argues, the efforts to professionalize computer programming may have played a role in driving away the women.

Ensmenger talks about software engineering as a way of making-do with the programmers we have available. The industry couldn’t figure out how to make good programmers, so software engineering was created to produce software with sub-par programmers:

Jack Little lamented the tendency of manufacturers to design languages “for use by some sub-human species in order to get around training and having good programmers.” When the Department of Defense proposed ADA as a solution to yet another outbreak of the software crisis, it was trumpeted as a means of “replacing the idiosyncratic ‘artistic’ ethos that has long governed software writing with a more efficient, cost-effective engineering mind-set.”

What is that “more efficient” mind-set? Ensmenger suggests that it’s for programmers to become factory line workers, nearly-mindlessly plugging in “reusable and interchangeable parts.”

The appeal of the software factory model might appear obvious to corporate managers; for skilled computer professionals, the idea of becoming a factory worker is understandably less desirable.

Ensmenger traces the history of software engineering as a process of dumbing-down the task of programming, or rather, separating the highest-ability programmers who would analyze and design systems, from the low-ability programmers. Quotes from the book:

  • They organized SDC along the lines of a “software factory” that relied less on skilled workers, and more on centralized planning and control…Programmers in the software factory were machine operators; they had to be trained, but only in the basic mechanisms of implementing someone else’s design.
  • The CPT, although it was developed at the IBM Federal Systems Division, reflects an entirely different approach to programmer management oriented around the leadership of a single managerially minded superprogrammer.
  • The DSL permits a chief programmer to exercise a wider span of control over the programming, resulting in fewer programmers doing the same job.

In the 1980s, even the superprogrammer was demoted.

A revised chief programmer team (RCPT) in which “the project leader is viewed as a leader rather than a ‘super-programmer.’” The RCPT approach was clearly intended to address a concern faced by many traditionally trained department-level managers—namely, that top executives had “abdicated their responsibility and let the ‘computer boys’ take over.”

The attempt to professionalize computer programming was a kind of response to early software engineering. The suggestion was that we programmers could manage projects as effectively as management could. But in the end, he provides evidence from multiple perspectives that the professionalization of computer programming has failed.

They were unable, for example, to develop two of the most defining characteristics of a profession: control over entry into the profession, and the adoption of a shared body of abstract occupational knowledge—a “hard core of mutual understanding”—common across the entire occupational community.

Ensmenger doesn’t actually talk about “education” as such very often, but it’s clearly the elephant in the room. That “control over entry into the profession” is about a CS degree not being a necessary condition for entering into a computing programming career. That “adoption of a shared body of abstract occupational knowledge” is about a widely-adopted, shared, and consistent definition of curriculum. There are many definitions of “CS1” (look at the effort Allison Elliott Tew had to go through to define CS1 knowledge), and so many definitions of “CS2” as to make the term meaningless.

The eccentric, rude, asocial stereotype of the programmer dates back to those early days of computing. Ensmenger says hiring that followed that stereotype is the source of many of our problems in developing software. Instead of allowing that eccentricity, we should have hired programmers who created a profession that embraced the user’s problems.

Computer programmers in particular sat in the uncomfortable “interface between the world of ill-stated problems and the computers.” “Design in a heterogeneous environment is difficult; design is as much a social and political process as it is technical[^1]; cultivating skilled designers requires a comprehensive and balanced approach to education, training, and career development.”

The “software crisis” that led to the creation of software engineering was really about getting design wrong.  He sees the industry as trying to solve the design problem by focusing on the production of the software, when the real “crisis” was a mismatch between the software being produced and the needs of the user.  Rather than developing increasingly complicated processes for managing the production of software, we should have been focusing on better design processes that helped match the software to the user.  Modern software engineering techniques are trying to make software better matched to the user (e.g., agile methods like Scrum where the customer and the programming team work together closely with a rapid iterative development-and-feedback loop) as well as disciplines like user-experience design.

I found Ensmenger’s tale to be fascinating, but his perspective as a labor historian is limiting. He focuses only on the “computer programmer,” and not the “computer scientist.” (Though he does have a fascinating piece about how the field got the name “computer science.”)  Most of his history of computing seems to be a struggle between labor and management (including an interesting reference to Karl Marx). With a different lens, he might have considered (for example) the development of the additional disciplines of information systems, information technology, user experience design, human-centered design and engineering, and even modern software engineering. Do these disciplines produce professionals that are better suited for managing the heterogeneous design that Ensmenger describes?  How does the development of “I-Schools” (Schools of Information or Informatics) change the story?  In a real sense, the modern computing industry is responding to exactly the issues Ensmenger is identifying, though perhaps without seeing the issues as sharply as he describes them.

Even with the limitations, I recommend “The Computer Boys Take Over.” Ensmenger covers history of computing that I didn’t know about. He gave me some new perspectives on how to think about computing education today.

[^1]: Yes, both semi-colons are in the original.

July 19, 2013 at 1:20 am 21 comments

In Massachusetts schools, computer science students are still the outliers – The Boston Globe

Interesting piece on the challenges that our ECEP colleagues are facing in getting CS into Massachusetts schools.

Last year, 23 of the state’s 378 public high schools taught a programming class in which 10 or more students took an Advanced Placement exam in the subject, according to Mass Insight Education, a nonprofit promoting advanced learning.

Of the 85,753 AP exams taken by Massachusetts students last year, only 913 were in computing.

But putting even the most basic programming classes in every school would be a massive undertaking and require years to design new statewide computing standards and curriculum, and to train and hire new teachers, admits MassCAN, the business coalition pushing to expand computer science education.

via In Massachusetts schools, computer science students are still the outliers – Business – The Boston Globe.

July 18, 2013 at 1:20 am 1 comment

More women nix outdated ‘nerd’ stereotype with single CS class

A fascinating set of studies!  (Follow the link below to see the description of the second one.)  It reminds me of our GaComputes findings about the importance of early computing experiences for minority students. Just taking a single CS class changed the women’s definitions of what a computer scientist is.  I’ve written on Blog@CACM about how under-represented minorities were more likely than majority students to have had some CS experience in middle or high school that influenced them.  These studies together support the argument that having some CS in K12 will likely have a significant impact on later attitudes towards computing.

First, they asked undergraduates from the UW and Stanford University to describe computer science majors.

They found students who were not computer science majors believed computer scientists to be intelligent but with poor social skills; they also perceived them as liking science fiction and spending hours playing video games. Some participants went so far as to describe computer scientists as thin, pale (from being inside all the time), and having poor hygiene.

“We were surprised to see the extent to which students were willing to say stereotypical things, and give us very specific descriptions. One student said computer science majors play ‘World of Warcraft’ all day long. And that’s a very specific, and inaccurate, thing to say about a very large group of people,” Cheryan said.

However, women who had taken at least one computer science class were less likely to mention a stereotypical characteristic. There was no difference in men’s descriptions, whether or not they had taken a computer science class.

via More women pick computer science if media nix outdated ‘nerd’ stereotype | UW Today.

July 17, 2013 at 1:46 am 1 comment
