Archive for April, 2015

Call for Participants for ICER 2015 Doctoral Consortium: Come join us!

The ICER 2015 Doctoral Consortium provides an opportunity for doctoral students studying computing education to explore and develop their research interests in a workshop environment with a panel of established researchers. We invite students to apply for this opportunity to share their work with students in a similar situation as well as senior researchers in the field.

Applicants to the Doctoral Consortium should have begun their research, but should not have completed it in its entirety. We want people who have questions to raise with their peers and the more senior mentors. Someone who is all-but-defended probably does not want to be raising any questions.

DC Co-Chairs for 2015:

Mark Guzdial, Georgia Institute of Technology
Anthony Robins, University of Otago, New Zealand
Contact us at: icerdc2015@gmail.com

The DC has the following objectives:

  • Provide a supportive setting for feedback on students’ research and research direction
  • Offer each student comments and fresh perspectives on their work from researchers and students outside their own institution
  • Promote the development of a supportive community of scholars
  • Support a new generation of researchers with information and advice on research and academic career paths
  • Contribute to the conference goals through interaction with other researchers and conference events

The DC will be held on Sunday, August 9, 2015. Students at any stage of their doctoral studies are welcome to apply and attend. The number of participants is limited to 15. Applicants who are selected will receive a partial reimbursement of $600 (USD) toward travel, accommodation, and subsistence (i.e., food) expenses.

Process Timeline:

  • Wednesday 20th May – initial submission
  • Monday 1st June – notification of acceptance
  • Monday 15th June – camera ready copy due

You can find more information on applying at http://icer.hosting.acm.org/icer-2015/doctoral-consortium/

April 29, 2015 at 7:45 am Leave a comment

End the ‘leaky pipeline’ metaphor when discussing women in science: Technical knowledge can be used in many domains

I’m familiar with the argument that we shouldn’t speak of a “pipeline” because students come to STEM (and computing, specifically) in lots of ways, and go from computing into lots of disciplines.  The below-linked essay makes a particular point that I find compelling.  By using the “leaky pipeline” metaphor, we stigmatize and discount the achievements of people (in this article, women in particular) who take their technical knowledge and apply it in non-computing domains.  Sure, we want more women in computing, but we ought not to blame the low numbers on the women who leave.

However, new research of which I am the coauthor shows this pervasive leaky pipeline metaphor is wrong for nearly all postsecondary pathways in science and engineering. It also devalues students who want to use their technical training to make important societal contributions elsewhere.

How could the metaphor be so wrong? Wouldn’t factors such as cultural beliefs and gender bias cause women to leave science at higher rates?

My research, published last month in Frontiers in Psychology, shows this metaphor was at least partially accurate in the past. The bachelor’s-to-Ph.D. pipeline in science and engineering leaked more women than men among college graduates in the 1970s and 1980s, but not recently.

Men still outnumber women among Ph.D. earners in fields like physical science and engineering. However, this representation gap stems from college major choices, not persistence after college.

Other research finds remaining persistence gaps after the Ph.D. in life science, but surprisingly not in physical science or engineering — fields in which women are more underrepresented. Persistence gaps in college are also exaggerated.

via Essay calls for ending the ‘leaky pipeline’ metaphor when discussing women in science @insidehighered.

April 27, 2015 at 8:17 am 6 comments

Showing Google Maps in JES from Susan Elliott Sim

Cool example of using JES to access external data!

I’ve been teaching the CCT 374: Technologies for Knowledge Media course this term. It seemed a natural fit to use a Media Computation approach to teach Python programming. The students have a term project in which they design an application that uses City of Toronto Open Data. Just about every team decided to build something that involved displaying data on a map. So, I had to figure out how to display arbitrary maps programmatically, as simply as possible. Using the Google Maps API would have been beyond most of the students. Then I found a blog post with a Python program to retrieve static images from Google Maps.

I have adapted the code from the blog post to work within JES (Jython Environment for Students) using the Media Computation libraries. I’ve made the code available on a gist.
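The gist itself isn’t reproduced here, but the sketch below shows the same idea under a few assumptions: JES’s built-in Media Computation functions (makePicture and show), Python 2’s urllib (which JES’s Jython provides), and the Google Static Maps URL format (today’s API may also require a key parameter, which this sketch omits). The coordinates in the example are just an illustration.

    import urllib

    def showStaticMap(lat, lon, zoom=14, width=400, height=400):
      # Build a Google Static Maps request for the given center and zoom level
      url = ("http://maps.googleapis.com/maps/api/staticmap?"
             "center=%f,%f&zoom=%d&size=%dx%d&maptype=roadmap"
             % (lat, lon, zoom, width, height))
      filename = "staticmap.png"
      urllib.urlretrieve(url, filename)   # save the returned PNG locally
      pic = makePicture(filename)         # JES Media Computation function
      show(pic)                           # display it in a JES picture window
      return pic

    # Example: show downtown Toronto
    # showStaticMap(43.6532, -79.3832)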

via Showing Google Maps in JES | Susan Elliott Sim.

April 24, 2015 at 8:29 am Leave a comment

Stanford president refines his vision for MOOCs in education

These do sound like the kinds of things that learning scientists were saying at the start of the MOOC hype (like this post), but I’m glad that he now realizes that MOOCs have limited use and that students vary widely.

And as for MOOCs, which many still predict will displace traditional teaching, he said that they “were the answer when we weren’t sure what the question was.”

He said that their massive nature, which attracted so much attention, was ultimately a problem. “When I think about MOOCs, the advantage — the ability to prepare a course and offer it without personal interaction — is what makes them inexpensive and makes them very limited.”

Students “vary widely in terms of their skills and capability,” he said, such that massiveness is simply not an educational advantage. “For some it’s too deep and for some it is too shallow.”

via Stanford president offers predictions on a more digital future for higher education @insidehighered.

April 22, 2015 at 8:25 am 2 comments

Student Course Evaluations Can’t Measure a Teacher’s Knowledge: But We Could

It’s that time of year when Deans and Chairs start prodding students and teachers about course evaluations. What do the students think about their courses? What do the students think about their teachers?

There is a significant body of evidence suggesting that course evaluations are a stable measure of the teachers themselves. For example, a teacher’s scores are consistent across offerings of the same course over time (see Nira Hativa’s work). However, they still might not be measuring something that we consider significant about teaching.

For example, it’s a mistake to think that student course evaluations tell us what a teacher knows about teaching. The teacher’s pedagogical content knowledge is invisible to the student. The student sees only what the teacher has chosen to do in interacting with the students; the student can’t see the knowledge that the teacher used in making that choice.

Three kinds of knowledge that are particularly relevant to a CS teacher are:

  • Knowledge about how to teach. A good teacher knows more than one way to teach a particular subject, and knows when to change methods for a given student or to change the pace of a class. When I see students drifting away in the back of my class, I know that it’s time for a Peer Instruction activity.
  • Knowledge about misconceptions. As was shown in Phil Sadler’s exceptional study (see blog post), a characteristic of teacher expertise is knowledge about what students typically get wrong. Based on that knowledge, teachers can address those misconceptions, and lead students to discover and correct the misconceptions themselves.
  • Knowledge about how to broaden participation in computing, which is particularly relevant to CS teachers. This includes knowing how to teach without invoking stereotype threat or triggering the imposter phenomenon, and how to give everyone a voice in the class rather than letting the loud and boisterous define the teacher’s perception of the course. I can offer a negative example, taken from real life, though it might as well have been invented after reading the negative examples in Unlocking the Clubhouse.

Teacher: How many of you students had Python in a previous class?
(Most students raise their hands, since it’s the language used in the pre-requisite class.)
Teacher: Well, you learned a terrible language. You’ll have to forget everything you know if you want to pass this class.
(Every student suffering imposter syndrome at this point decides to drop.)

This teacher actually has quite high course evaluation scores — and double the drop rate of every other teacher of that class.

Pedagogical content knowledge (PCK) is the key difference between novice and expert teachers, but it is invisible to students. This is another reason why student evaluations of teaching (also known as Student Reviews of Instruction, or SRIs) are inadequate as measures of teaching quality: they can’t measure a key indicator of teacher expertise.

I’ve been wondering how post-secondary teaching might change if we were to take a PCK perspective seriously. The knowledge of good teaching is definable and measurable.

  • We might define courses not just in terms of learning objectives but in terms of what knowledge the teacher should have to teach the class effectively.
  • We could evaluate University and College teachers based on their PCK — literally, taking a test on what they know about teaching the class.
  • PCK tests would finally create an impetus for University and College faculty to pursue professional development — that’s where they’d learn the teaching methods, student misconceptions, and methods for encouraging BPC (broadening participation in computing) that they need to answer the PCK tests. One might even imagine teachers taking a class on how to teach a new class that they’ll be offering in the future — preparing for a course by developing expertise in teaching that course. What an interesting thought that is, that higher education faculty might study how to teach.

April 20, 2015 at 8:30 am 33 comments

BBC is giving away 1 million mini computers so kids can learn to code: Prediction — little impact on broadening participation

I agree that these boards are cool, but I’m a geeky white guy.  I predict that they’ll have little impact on increasing access to computing education or on diversifying computing. Bare-board computers are no more attractive to teachers than existing computers, so they won’t draw more teachers into CS. Nor are they more attractive than existing computers to women who aren’t already interested in computing. Why are people so excited about handing out bare-board computers to grade school children?  Is this just white males emphasizing the attributes that attract them?  Judith Bishop of MSR (whose TouchDevelop will work on these new computers) says that she’s seen girls get engaged by these new computers, but nobody has done any research to see whether that’s more than the 20% of females who get interested in computing now, or whether that happens outside of the pilot classrooms.

Currently in development, the Micro Bit is a small piece of programmable, wearable hardware that helps kids learn basic coding and programming. It could act as a springboard for more advanced coding on products, such as the single-board computer Raspberry Pi, according to the BBC.

Children will be able to plug the device into a computer, and start creating with it immediately.

“BBC Make it Digital could help digital creativity become as familiar and fundamental as writing, and I’m truly excited by what Britain, and future great Britons, can achieve,” BBC director general Tony Hall said in a statement Thursday.

via BBC is giving away 1 million mini computers so kids can learn to code.

April 17, 2015 at 8:32 am 16 comments

Is Computing Just for Men? Where are the women in the enrollment surge?

Nice blog post from Barbara Ericson exploring the lack of women in the new surge in CS undergraduate enrollment.

A Surge in Majors, but Where Are the Women?

While a number of colleges and universities in the United States have recently seen a tremendous increase in the number of students who want to major in computing, the percentage of women who are interested is still low. A study conducted by the Association for Computing Machinery and the WGBH Educational Foundation in 2008 found that only 9 percent of college-bound teen girls thought that a career in computing was a very good choice for them, and only 17 percent thought that it was a good career choice. Teen girls associated computing with typing, math, and boredom. While the percentage of computing bachelor’s degrees awarded to women in the United States did increase from 11.7 percent in 2010–11 to 12.9 percent in 2011–12, women are still dramatically underrepresented.

The Percentage of Women Taking the Computer Science AP Exam Lags

The Advanced Placement (AP) computer science A course is equivalent to a college-level introductory computer science course. It focuses on object-oriented programming in Java. In 2014, only about 20 percent of AP computer science A exam takers were women. While that was an increase from the previous year, when the percentage was 18.5 percent, it is still far below the percentage of women who took the AP calculus AB exam (48.7 percent) and the percentage of women who took the AP biology exam (59 percent). It is even well below the percentage of women who took the physics B exam (34.7 percent).

via Is Computing Just for Men? : AAUW: Empowering Women Since 1881.

April 15, 2015 at 8:32 am 2 comments

Where Have All The Teachers Gone? It’s not just CS!

My first thought when seeing this article was, “Well, I’m glad it’s not just CS.”  (See my post about how recruiting teachers is our biggest challenge in CS10K.) And my second thought was, “WHERE are we going to get all the teachers we need, across subjects?!?”  And how are we going to retain them?

Several big states have seen alarming drops in enrollment at teacher training programs. The numbers are grim among some of the nation’s largest producers of new teachers: In California, enrollment is down 53 percent over the past five years. It’s down sharply in New York and Texas as well.

In North Carolina, enrollment is down nearly 20 percent in three years.

“The erosion is steady. That’s a steady downward line on a graph. And there’s no sign that it’s being turned around,” says Bill McDiarmid, the dean of the University of North Carolina School of Education.

Why have the numbers fallen so far, so fast?

McDiarmid points to the strengthening U.S. economy and the erosion of teaching’s image as a stable career. There’s a growing sense, he says, that K-12 teachers simply have less control over their professional lives in an increasingly bitter, politicized environment.

via Where Have All The Teachers Gone? : NPR Ed : NPR.

April 13, 2015 at 8:24 am 22 comments

STEM as the Goal. STEAM as a Pathway.

Dr. Gary May, Dean of the College of Engineering at Georgia Tech, is one of my role models.  I’ve learned from him how to broaden participation in computing, what academic leadership looks like, and how to make sure that education gets its due attention, even at a research-intensive university.

He wrote an essay (linked below) critical of the idea of “STEAM” (Science, Technology, Engineering, the Arts, and Mathematics).  I just recently wrote a blog post saying that STEAM was a good idea (see link here).  I’m not convinced that I’m at odds with Gary’s point.  I suspect that a single acronym, whether “STEM” or “STEAM,” has too many assumptions built into it.  We probably agree on “STEM,” but may have different interpretations of “STEAM.”

The term “STEM” has come to represent an emphasis on science, technology, engineering, and mathematics education in schools. A recent Washington Post article critiques exactly that focus: Why America’s obsession with STEM education is dangerous.

From Gary’s essay, I think he reads “STEAM” to mean “We need to integrate Arts into STEM education.”  Or maybe, “We need to emphasize Arts as well as STEM in our schools.”  Or even, “All STEM majors must also study Art.” Gary argues that STEM is too important to risk diffusing by adding Art into the mix.

That’s not exactly what I mean when I see a value for STEAM.  I agree that STEM is the goal.  I see STEAM as a pathway.

Media Computation is a form of blending STEM plus Art.  I’m teaching computer science by using the manipulation of media at different levels of abstraction (pixels and pictures, samples and sounds, characters and HTML, frames and video) as an inviting entryway into STEM. There are many possible and equally valid pathways into Computing, as one form of STEM.  I am saying that my STEAM approach may bring people to STEM who might not otherwise consider it.  I do have a lot of evidence that MediaComp has engaged and retained students who didn’t use to succeed in CS, and that part of that success has been because students see MediaComp as a “creative” form of computing (see my ICER 2013 paper).
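To make the “pixels and pictures” entryway concrete, here is a minimal MediaComp-style sketch (my illustration here, not an example taken from the book), using JES’s built-in media functions (pickAFile, makePicture, getPixels, getRed/getGreen/getBlue, makeColor, setColor, and show) to turn a chosen picture grayscale:

    def makeGrayscale(picture):
      for pixel in getPixels(picture):
        # Average the three color channels into a single intensity value
        intensity = (getRed(pixel) + getGreen(pixel) + getBlue(pixel)) / 3
        setColor(pixel, makeColor(intensity, intensity, intensity))

    pic = makePicture(pickAFile())   # let the user pick an image file
    makeGrayscale(pic)
    show(pic)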

I have heard arguments for STEAM as enhancing STEM.  For example, design studio approaches can enhance engineering education (as in Chris Hundhausen’s work — see link here).  In that sense of STEAM, Art offers ways of investigating and inventing that may enhance engineering design and problem-solving.  That’s about using STEAM to enhance STEM, not to dilute STEM or to create new course requirements.  Jessica Hodgins gave an inspiring opening keynote lecture at SIGCSE 2015 (mentioned here) where she talked about classes that combined art and engineering students in teams.  Students learned new perspectives from each other that informed and improved their practice.

“STEM” and “STEAM” as acronyms don’t have enough content to say whether we’re in favor of or against them.  “STEM” carries the connotation of a goal: more kids need to know STEM subjects, and we should emphasize STEM subjects in school.  For me, STEM is an important goal (meaning an emphasis on science, technology, engineering, and mathematics in schools), and STEAM is one pathway (meaning using art to engage STEM learning, or using art as a valuable perspective for STEM learners) to that goal.

No one — least of all me — is suggesting that STEM majors should not study the arts. The arts are a source of enlightenment and inspiration, and exposure to the arts broadens one’s perspective. Such a broad perspective is crucial to the creativity and critical thinking that is required for effective engineering design and innovation. The humanities fuel inquisitiveness and expansive thinking, providing the scientific mind with larger context and the potential to communicate better.

The clear value of the arts would seem to make adding A to STEM a no-brainer. But when taken too far, this leads to the generic idea of a well-rounded education, which dilutes the essential need and focus for STEM.

via Essay criticizes idea of adding the arts to push for STEM education @insidehighered.

April 10, 2015 at 7:55 am 12 comments

Media Computation for CS Principles

At conferences like SIGCSE 2015 and at meetings like the CS Principles Advisory Board meeting in Chicago in February, I’m hearing from pilot teachers of the new AP CS Principles Curriculum (see website here) who are building Media Computation (specifically, in Python) into their classes.  In the preface to the new 4th Edition (see Amazon page here), I went through the Big Ideas and Learning Objectives (as they were on the website at that time) that are being addressed in the new version.  Explicitly, I added content to address CS Principles learning objectives, e.g., measuring two different algorithms by using clock time and manipulating “live” CSV data downloaded from websites.
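As a rough illustration of those two additions (my sketch here, not the book’s actual code), the snippet below times two ways of searching a list using clock time, then pulls a few rows of CSV data from the web. The data URL is a placeholder, and the Python 2 style matches the JES/Jython environment used with Media Computation.

    import time
    import urllib
    import csv

    def linearSearch(values, target):
      # One simple algorithm to compare against Python's built-in membership test
      for value in values:
        if value == target:
          return True
      return False

    def timeIt(function, *args):
      # Measure clock time for a single call
      start = time.time()
      function(*args)
      return time.time() - start

    data = range(100000)
    print "linear search:", timeIt(linearSearch, data, -1)
    print "built-in 'in':", timeIt(lambda values, target: target in values, data, -1)

    # Manipulate "live" CSV data downloaded from a (placeholder) website
    rows = csv.reader(urllib.urlopen("http://example.com/data.csv"))
    for row in list(rows)[:5]:
      print row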

Below is quoted from the preface:

The Advanced Placement exam in CS Principles has now been defined. We have explicitly written the fourth edition with CS Principles in mind. For example, we show how to measure the speed of a program empirically in order to contrast two algorithms (Learning Objective 4.2.4), and we explore multiple ways of analyzing CSV data from the Internet (Learning Objectives 3.1.1, 3.2.1, and 3.2.2).

Overall, we address the CS Principles learning objectives explicitly in this book as shown below:

  • In Big Idea I: Creativity:
      • LO 1.1.1: . . . use computing tools and techniques to create artifacts.
      • LO 1.2.1: . . . use computing tools and techniques for creative expression.
      • LO 1.2.2: . . . create a computational artifact using computing tools and techniques to solve a problem.
      • LO 1.2.3: . . . create a new computational artifact by combining or modifying existing artifacts.
      • LO 1.2.5: . . . analyze the correctness, usability, functionality, and suitability of computational artifacts.
      • LO 1.3.1: . . . use programming as a creative tool.
  • In Big Idea II: Abstraction:
      • LO 2.1.1: . . . describe the variety of abstractions used to represent data.
      • LO 2.1.2: . . . explain how binary sequences are used to represent digital data.
      • LO 2.2.2: . . . use multiple levels of abstraction in computation.
      • LO 2.2.3: . . . identify multiple levels of abstractions being used when writing programs.
  • In Big Idea III: Data and information:
      • LO 3.1.1: . . . use computers to process information, find patterns, and test hypotheses about digitally processed information to gain insight and knowledge.
      • LO 3.2.1: . . . extract information from data to discover and explain connections, patterns, or trends.
      • LO 3.2.2: . . . use large data sets to explore and discover information and knowledge.
      • LO 3.3.1: . . . analyze how data representation, storage, security, and transmission of data involve computational manipulation of information.
  • In Big Idea IV: Algorithms:
      • LO 4.1.1: . . . develop an algorithm designed to be implemented to run on a computer.
      • LO 4.1.2: . . . express an algorithm in a language.
      • LO 4.2.1: . . . explain the difference between algorithms that run in a reasonable time and those that do not run in a reasonable time.
      • LO 4.2.2: . . . explain the difference between solvable and unsolvable problems in computer science.
      • LO 4.2.4: . . . evaluate algorithms analytically and empirically for efficiency, correctness, and clarity.
  • In Big Idea V: Programming:
      • LO 5.1.1: . . . develop a program for creative expression, to satisfy personal curiosity, or to create new knowledge.
      • LO 5.1.2: . . . develop a correct program to solve problems.
      • LO 5.2.1: . . . explain how programs implement algorithms.
      • LO 5.3.1: . . . use abstraction to manage complexity in programs.
      • LO 5.5.1: . . . employ appropriate mathematical and logical concepts in programming.
  • In Big Idea VI: The Internet:
      • LO 6.1.1: . . . explain the abstractions in the Internet and how the Internet functions.

April 8, 2015 at 8:46 am 2 comments

Repeatability as a Core Value in March CACM: For Software and Education

Repeatability presumes evidence (which can be repeated).  Computer scientists have not valued evidence and repeatability as much as we need to for rigor and scientific advancement — in education, too.  One of my favorite papers by Michael Caspersen is his ITiCSE 2007 paper, Mental models and programming aptitude, where he and his colleagues attempt to replicate the results of the famous and controversial Dehnadi and Bornat paper (see here).  Michael and his colleagues are unable to replicate the result, and they propose a research method for understanding the differences.  That’s good science — attempting to replicate another’s result, and then developing the next steps to understand the differences.

Science advances faster when we can build on existing results, and when new ideas can easily be measured against the state of the art. This is exceedingly difficult in an environment that does not reward the production of reusable software artifacts. Our goal is to get to the point where any published idea that has been evaluated, measured, or benchmarked is accompanied by the artifact that embodies it. Just as formal results are increasingly expected to come with mechanized proofs, empirical results should come with code.

If a paper makes, or implies, claims that require software, those claims must be backed up.

via The Real Software Crisis: Repeatability as a Core Value | March 2015 | Communications of the ACM.

April 6, 2015 at 8:24 am Leave a comment

Let me tell you what I know about gender and CS: An undergrad teaches her faculty about diversity

Nice story and presentation from Katie Cunningham about how she informed her faculty about why there are so few women in CS, and what they can do about it.

I based the main arc of my presentation on a book chapter by Whitecraft and Williams that Greg Wilson of Software Carpentry was kind enough to forward to me. It’s an evenhanded look at much of the research in this area, including theories that are often out of favor in most places I frequent. It served as a great overview, though I felt it could have focused more on issues involving differences in prior programming experience pre-college and intimidation brought on by “nerdy strutting”. (Update: I just discovered a fantastic 2012 report by NCWIT that can also serve as a great overview. It covers cultural issues more comprehensively, with more recent research and more focus on the pre-college years.)

via Computer Science, Education, Fog: Let me tell you what I know about gender and CS.

April 3, 2015 at 8:24 am 1 comment

Launching our Teacher Ebook for learning Python and CS Principles

Back in September 2011, I announced that we received NSF funding to try to “beat the book.” (See post here.) Could we create an electronic (Web-based) book that was better for CS teacher learning than reading a physical book? Took us three years, but I’m confident that the answer is now, “Yes.”

Our ebook is hosted by Brad Miller’s Runestone tools and site.  We use worked examples (as mentioned here) interleaved with practice, as Trafton and Reiser recommend.  We have coding in the book as well as Philip Guo’s visualizations.  There are audio tours to provide multi-modality code explanations (see modality effect), and Parsons problems to provide low cognitive load practice (see mention here). We support book clubs that set their own schedule, in order to create social pressure to complete, but at a scale that makes sense for teachers.

2011 was a long time ago.  That original post didn’t even mention MOOCs.  We ran two studies in the Fall, one on learning with novices and one on usability (which involved several of you — thank you for responding to my call for participants!). I’m not going to say anything about those results here, pending review and publication. We have updated the book based on the results of those studies.  I don’t know if we beat the MOOC.  We’re running at about a 50% completion rate, but we’ll only really know when we go to scale.

I am pleased to announce the book is ready for release!

Please send this URL to any teacher you think might want to learn about teaching CS (especially for the AP CS Principles course — see learning objectives here) in Python: http://ebooks.cc.gatech.edu/TeachCSP-Python/  Thanks!

Our next steps are to develop a student ebook.  By Fall, we hope to have a teacher and a student CSP ebook, which may make for an additional incentive for teachers to complete.

April 1, 2015 at 7:49 am 18 comments

