Posts tagged ‘evaluation’

How do we test the cultural assumptions of our assessments?

I’m teaching a course on user interface software development for about 260 students this semester. We just had a Midterm where I felt I bobbled one of the assessment questions because I made cultural assumptions. I’m wondering how I could have avoided that.

I’m a big fan of multiple choice, fill-in-the-blank, and Parsons problems on my assessments. I use my Parsons problem generator a lot (see link here). For example, on this exam, students had to arrange the scrambled parts of an HTML file to achieve a given DOM tree, and there were two programs in JavaScript (using constructors and prototypes) that they had to unscramble.
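To give a flavor of the material (this is an illustrative sketch I'm making up here, not one of the actual exam questions), here is the kind of short constructor-and-prototype program whose lines might be scrambled for a Parsons problem:

```javascript
// Hypothetical example, not from the exam: a small program using a
// constructor function and a shared prototype method, the kind of
// code whose lines could be scrambled for a Parsons problem.

// Constructor: each Button remembers its label and how often it was clicked.
function Button(label) {
  this.label = label;
  this.clicks = 0;
}

// Prototype method shared by all Button instances.
Button.prototype.click = function () {
  this.clicks = this.clicks + 1;
  return this.label + " clicked " + this.clicks + " time(s)";
};

var ok = new Button("OK");
console.log(ok.click()); // "OK clicked 1 time(s)"
console.log(ok.click()); // "OK clicked 2 time(s)"
```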

I typically ask some definitional questions about user interfaces at the start, about ideas like signifiers, affordances, learned associations, and metaphors. Like Dan Garcia (see his CS-Ed Podcast), I believe in starting out the exam with some easy things, to buoy confidence. They’re typically only worth a couple points, and I try to make the distractors fun. Here’s an example:

Since we watched in lecture a nice video starring Don Norman explaining “Norman doors,” I was pretty sure that anyone who actually attended lecture that day would know that the answer was the first one in the list. Still, maybe a half-dozen students chose the second item.

Here’s the one that bothered me much more.

I meant for the answer to be the first item on the list. In fact, almost the exact words were on the midterm exam review, so that students who studied the review guide would know immediately what we wanted. (I do know that working memory doesn’t actually store more for experts — I made a simplification to make the definition easier to keep in mind.)

Perhaps a dozen students chose the second item: “Familiarity breeds contempt. Experts’ contempt for their user interfaces allows them to use them without a sense of cognitive overload.” I had several students ask me during the exam, “What’s contempt?” I realized that many of my students didn’t know the word or the famous phrase (which dates back to Chaucer).

Then one student actually wrote on his exam, “I’m assuming that contempt means learned contentment.” If you make that assumption, the item doesn’t sound ridiculous: “Familiarity breeds learned contentment. Experts’ learned contentment for their user interfaces allows them to use them without a sense of cognitive overload.”

I had accidentally created an assessment that expected a particular cultural context. The midterm was developed over several weeks, and reviewed by my co-instructor, graduate student instructor, five undergraduate assistants, and three undergraduate graders. We’re a pretty diverse bunch. We had found and fixed perhaps a dozen errors in the exam during the development period. We’d never noted this problem.

I’m not sure how I could have avoided this mistake. How does one remain aware of one’s own cultural assumptions? I’m thinking of the McLuhan quote: “I don’t know who discovered water, but it wasn’t a fish.” I feel bad for the students who got this problem wrong because they didn’t know the quote or the meaning of the word “contempt.” What do you think? How might I have discovered the cultural assumptions in my assessment?

March 16, 2020 at 1:57 pm 15 comments

BDSI – A New Validated Assessment for Basic Data Structures: Guest Blog Post from Leo Porter and colleagues

Leo Porter, Michael Clancy, Cynthia Lee, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, and Daniel Zingaro have developed a new concept inventory that they are making available to instructors and researchers. They have written this guest blog post to describe their new instrument and explain why you should use it. I’m grateful for their contribution!

We recently published a Concept Inventory for Basic Data Structures at ICER 2019 [1] and hope it will be of use to you in your classes and/or research.

The BDSI is a validated instrument to measure student knowledge of Basic Data Structure Concepts [1].  To validate the BDSI, we engaged faculty at a diverse set of institutions to decide on topics, help with question design, and ensure the questions are valued by instructors.  We also conducted over one hundred interviews with students in order to identify common misconceptions and to ensure students properly interpret the questions. Lastly, we ran pilots of the instrument at seven different institutions and performed a statistical evaluation of the instrument to ensure the questions are properly interpreted and discriminate between students’ abilities well.
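To give a feel for what “discriminates well” means, here is a minimal, illustrative sketch of one common classical discrimination check — correlating each item’s right/wrong scores with students’ totals on the rest of the test. This is an illustration only, not the specific statistical analysis used to validate the BDSI.

```javascript
// Illustrative sketch only (not the BDSI validation code): a classical
// item-discrimination check correlates each item's 0/1 score with
// students' total score on the remaining items.

function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function pearson(xs, ys) {
  var mx = mean(xs), my = mean(ys);
  var num = 0, dx = 0, dy = 0;
  for (var i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += Math.pow(xs[i] - mx, 2);
    dy += Math.pow(ys[i] - my, 2);
  }
  return num / Math.sqrt(dx * dy);
}

// responses[s][i] is 1 if student s answered item i correctly, else 0.
function itemDiscrimination(responses, item) {
  var itemScores = responses.map(function (r) { return r[item]; });
  var restScores = responses.map(function (r) {
    return r.reduce(function (a, b) { return a + b; }, 0) - r[item];
  });
  return pearson(itemScores, restScores); // higher = better discrimination
}

// Tiny made-up example: three students, three items.
var responses = [[1, 1, 1], [1, 1, 0], [0, 0, 0]];
console.log(itemDiscrimination(responses, 0)); // ~0.87: item 0 tracks overall performance
```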

What Our Assessment Measures

The BDSI measures student performance on Basic Data Structure concepts commonly found in a CS2 course.  To arrive at the topics and content of the exam, we worked with fifteen faculty at thirteen different institutions to ensure broad applicability.  The resulting topics on the CI include: Interfaces, Array-Based Lists, Linked-Lists, and Binary Search Trees. If you are curious about the learning goals or want more details on the process we used in arriving at these goals, please see our SIGCSE 2018 publication [2].

Why Validated Assessments are Great for Instructors

Suppose you want to know how well your students understand various topics in your CS2 course.  How could you figure out how much your students are learning relative to other schools? You could, perhaps, get a final exam from another school and use it in your class to compare results, but an exam designed for another course may not be a good fit.  Moreover, you may find flaws in some of the questions and wonder if students interpret them properly. Instead, you can use a validated assessment. The advantage of a validated assessment is that there is general agreement that it measures what you want to measure, and that it accurately measures student thinking.  As such, you can compare your findings to results from other schools that have used the instrument, to determine if your students are learning particular topics better or worse than cohorts at similar institutions.

Why Validated Assessments are Great for Researchers

As CS researchers, we often experiment with new ways to teach courses.  For example, many people use Media Computation or Peer Instruction (PI), two complementary pedagogical approaches developed over the past several decades.  It’s important to establish whether these changes are helping our students. Do more students pass? Do fewer students withdraw? Do more students continue studying CS?  Does it boost outcomes for under-represented groups? Answering these questions using a variety of courses can give us insight into whether what we do corresponds with our expectations.

One important question is: using our new approach, do students learn more than before?  Unfortunately, answering this is complicated by the lack of standardized, validated assessments.  If students score 5% higher on an exam when studying with PI vs. not studying with PI, all we know is that PI students did better on that exam.  But exams are designed by one instructor, for one course at one institution, not for the purposes of cross-institution, cross-cohort comparisons.  They are not validated. They do not take into account the perspectives of other CS experts. When students answer a question on an exam correctly, we assume that it’s because they know the material; when they answer incorrectly, we assume it’s because they don’t know the material.  But we don’t know: maybe the exam contains incidental cues that subtly influence how students respond.

A Concept Inventory (CI) solves these problems.  Its rigorous design process leads to an assessment that can be used across schools and cohorts, and can be used to validly compare teaching approaches.

How to Obtain the BDSI

The BDSI is available via the Google group.  If you’re interested in using it, please join the group and add a post with your name, institution, and how you plan to use the BDSI.

How to Use the BDSI

The BDSI is designed to be given as a post-test after students have completed the covered material.  Because the BDSI was validated as a full instrument, it is important to use the entire assessment, and not alter or remove any of the questions.  We ask that instructors not make copies of the assessment available to students after giving the BDSI, to try to avoid the questions becoming public.  We likewise recommend giving participation credit, but not correctness credit, to students for taking the BDSI, to avoid incentivizing cheating.  We have found giving the BDSI as part of a final review session, collecting the assessment from students, and then going over the answers to be a successful methodology for having students take it. 

Want to Learn More?

If you’re interested in learning more about how to build a CI, please come to our talk at SIGCSE 2020 (from 3:45-4:10pm on Thursday, March 12th) or read our paper [3].  If you are interested in learning more about how to use validated assessments, please come to our Birds of a Feather session on “Using Validated Assessments to Learn About Your Students” at SIGCSE 2020 (5:30-6:20pm on Thursday, March 12th) or our tutorial on using the BDSI at CCSC-SW 2020 (March 20-21).

References:

[1] Leo Porter, Daniel Zingaro, Soohyun Nam Liao, Cynthia Taylor, Kevin C. Webb, Cynthia Lee, and Michael Clancy. 2019. BDSI: A Validated Concept Inventory for Basic Data Structures. In Proceedings of the 2019 ACM Conference on International Computing Education Research (ICER ’19).

[2] Leo Porter, Daniel Zingaro, Cynthia Lee, Cynthia Taylor, Kevin C. Webb, and Michael Clancy. 2018. Developing Course-Level Learning Goals for Basic Data Structures in CS2. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE ’18).

[3] Cynthia Taylor, Michael Clancy, Kevin C. Webb, Daniel Zingaro, Cynthia Lee, and Leo Porter. 2020. The Practical Details of Building a CS Concept Inventory. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE ’20).

February 24, 2020 at 7:00 am Leave a comment

Attending the amazing 2017 Computing at School conference #CASConf17

On June 17, Barbara and I attended the Computing at School conference in Birmingham, England (which I wrote about here).  The slides from my talk are below. I highly recommend the summary from Duncan Hull, which I quote at the bottom.

CAS was a terrifically fun event, packed with 300 attendees. I misjudged how long my talk would take (I tend to talk too fast), so instead of a brief Q&A, almost half the session was left for questions. Interacting with the audience to answer teachers’ questions was more fun (and hopefully, more useful and entertaining) than me talking for longer. The session was well received, based on the Tweets I read. In fact, that’s probably the best way to get a sense for the whole day — on Twitter, hashtag #CASConf17. (I’m going to try to embed some tweets with pictures below.)

Barbara’s two workshops on Media Computation in Python using our ebooks went over really well.

I enjoyed my interactions all day long. I was asked about research results in just about every conversation — the CAS teachers are eager to see what computing education research can offer them.  I met several computing education research PhD students, which was particularly exciting and fun. England takes computing education research seriously.

Miles Berry demonstrated Project Quantum by having participants answer questions from the database.  That was an engaging and fascinating interactive presentation.

Linda Liukas gave a terrific closing keynote. She views the world from a perspective that reminded me of Mitchel Resnick’s Lifelong Kindergarten and Seymour Papert’s playfulness. I was inspired.

The session that most made me think was from Peter Kemp on the report that he and co-authors have just completed on the state of computing education in England. That one deserves a separate blog post – coming Wednesday.

Check out Duncan’s summary of the conference:

The Computing At School (CAS) conference is an annual event for educators, mostly primary and secondary school teachers from the public and private sector in the UK. Now in its ninth year, it attracts over 300 delegates from across the UK and beyond to the University of Birmingham, see the brochure for details. One of the purposes of the conference is to give teachers new ideas to use in their classrooms to teach Computer Science and Computational Thinking. I went along for my first time (*blushes*) seeking ideas to use in an after school Code Club (ages 7-10) I’ve been running for a few years and also for approaches that undergraduate students in Computer Science (age 20+) at the University of Manchester could use in their final year Computer Science Education projects that I supervise. So here are nine ideas (in random brain dump order) I’ll be putting to immediate use in clubs, classrooms, labs and lecture theatres:

Source: Nine ideas for teaching Computing at School from the 2017 CAS conference | O’Really?

My talk slides:

July 10, 2017 at 7:00 am 1 comment

SIGCSE 2016 Preview: Miranda Parker replicated the FCS1

I’ve been waiting a long time to write this post, though I do so even now with some trepidation.

In 2010, Allison Elliott Tew completed her dissertation on building FCS1, the first language-independent and validated measure of introductory computer science knowledge (see this post summarizing the work). The FCS1 was a significant accomplishment, but it didn’t get used much. Allison had concerns about the test becoming freely available and no longer useful as a research instrument.

Miranda Parker joined our group and replicated the FCS1. She created an isomorphic test (which we’re calling SCS1, for Secondary CS1 instrument — it comes after the first). She then followed a rigorous process for replicating a validated instrument, including think-aloud protocols to check usability (do the problems read as she meant them?), a large-scale counter-balanced study using both tests, and both correlational and item-response theory (IRT) analyses. Her results support the claim that SCS1 is effectively identical to FCS1, but they also point out the weaknesses of both tests and why we need more and better assessments.
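For readers unfamiliar with IRT, here is a minimal sketch (my illustration, not Miranda’s actual analysis code) of the two-parameter logistic model that such analyses typically fit: each item gets a discrimination and a difficulty parameter, and the model predicts the probability that a student of a given ability answers the item correctly.

```javascript
// Sketch only: the two-parameter logistic (2PL) model that IRT analyses
// typically fit. For an item with discrimination a and difficulty b, the
// probability that a student of ability theta answers correctly is:
//   P(correct) = 1 / (1 + exp(-a * (theta - b)))
function probCorrect(theta, a, b) {
  return 1 / (1 + Math.exp(-a * (theta - b)));
}

// An item with high discrimination (a = 2) separates students near its
// difficulty (b = 0) much more sharply than a weakly discriminating item (a = 0.5).
console.log(probCorrect(-1, 2, 0).toFixed(2));   // ~0.12
console.log(probCorrect( 1, 2, 0).toFixed(2));   // ~0.88
console.log(probCorrect(-1, 0.5, 0).toFixed(2)); // ~0.38
console.log(probCorrect( 1, 0.5, 0).toFixed(2)); // ~0.62
```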

(Note: I’m complaining in this paragraph — some readers might just want to skip it.) This was the first time anyone had replicated a validated CS research instrument, so the process itself is a significant result. SIGCSE reviewers did not agree. The Associate Chair’s comment on our rejected paper said, “Two reviewers had concerns about appropriateness of this paper for SIGCSE: #XXX because it didn’t directly address improved learning, and #YYY because replicating the FCS1 wasn’t deemed to be as noteworthy as the original work.” Apparently, an assessment tool doesn’t improve learning, and a first-ever replication is not publishable.

Miranda was hesitant to release SCS1 for use (e.g., posting it on my blog, sending emails to CSEd-Research email lists) until the result was peer-reviewed. That’s a disadvantage my students have suffered for having an advisor who blogs: some reviewers have rejected my students’ papers because my blogging made it discoverable who did the research, and thus our papers couldn’t be sufficiently anonymized to meet those reviewers’ standards. So, I haven’t talked about SCS1, despite my pleasure and pride in Miranda’s accomplishment.

I’m posting this now because Miranda does have a poster on SCS1 at the SIGCSE 2016 Technical Symposium. Come see her at the 3-5 pm Poster Session on Friday. Miranda had a major success in her first year as a PhD student, and the research community now has a new validated research instrument.

Here’s the trepidation part: her paper on the replication process was just rejected for ITICSE. There’s no Associate Chair for ITICSE, so there’s no meta-review that gives the overall reasons.  One reviewer raised some concerns about the statistics, which we’ll have to investigate.  Another reviewer strongly disagreed with the idea of a replication, much like the #YYY reviewer at SIGCSE. One reviewer complained that this paper was awfully similar to a paper by Elliott Tew and Guzdial, so maybe it shouldn’t be published.  I’m not sure how we convince SIGCSE and ITICSE reviewers that replication is important and something that most STEM disciplines are calling for more of. (Particularly aggravating point: because FCS1 is not freely available, the reviewer doesn’t believe that FCS1 is “valid, consistent, and reliable” without inspecting it — as if you could tell those characteristics just by looking at the test.)

I’m talking about SCS1 now because her poster has been accepted, so she has a publication on it.  We really want to publish her process and, in particular, the insights we now have about both instruments.  We’ll have to wait to publish that — and I hope the reviewers of the next conference don’t give us grief because I talked about the result here.

Contact Miranda at scs1assessment@gmail.com for access to the test.

March 2, 2016 at 8:00 am 10 comments

CMU launches initiative to improve student learning with technology

Interesting results, and nice to hear that the new initiative will be named for Herb Simon.

The Science of Learning Center, known as LearnLab, has already collected more than 500,000 hours’ worth of student data since it initially received funding from the National Science Foundation about nine years ago, its director Ken Koedinger said. That number translates to about 200 million times when students of a variety of age groups and subject areas have clicked on a graph, typed an equation or solved a puzzle.

The center collects studies conducted on data gathered from technology-enhanced courses in algebra, chemistry, Chinese, English as a second language, French, geometry and physics in an open wiki.

One such study showed that students performed better in algebra if asked to explain what they learned in their own words, for example. In another study, physics students who took time answering reflection questions performed better on tests than their peers.

via Carnegie Mellon U. launches initiative to improve student learning with technology | Inside Higher Ed.

January 3, 2014 at 1:03 am Leave a comment

Entrepreneurial MOOCs to teach CS: Different values, different evaluation

Lisa Kaczmarczyk wrote a blog post about a bunch of the private, for-profit groups teaching CS who visited the ACM Education Council meeting on Nov. 2.  I quote below the section where the Ed Council asked tough questions about evaluation.  I wonder if the private education efforts mean the same thing by “evaluation” as the academic and research folks do.  Each has different goals and a different value system.  Learning for all in public education is very different from a privatized MOOC where it’s perfectly okay for 1-10% to complete.

Of course there was controversy; members of the Ed Council asked all of the panelists some tough questions. One recurrent theme had to do with how they know what they are doing works. Evaluation how? what kind? what makes sense? what is practical? is an ongoing challenge in any pedagogical setting and when you are talking about a startup as 3 out of the 4 companies on the panel were in the fast paced world of high tech – its tricky. Some panelists addressed this question better than others. Needless to say I spent quite a bit of time on this – it was one of the longer topics of discussion over dinner at my table.

Neil Fraser from Google’s Blockly project said some things that were unquestionably controversial. The one that really got me was when he said, several times and with followup detail, that one of the things they had learned was to ignore user feedback. I can’t remember his exact words after that, but the idea seemed to be that users didn’t know what was best for them. Coming on the heels of earlier comments that were less than tactful about computing degree programs and their graduates … I have to give Neil credit for having the guts to share his views.

via Interdisciplinary Computing Blog: Entrepreneurial MOOCs at the ACM Ed Council Meeting.

November 12, 2013 at 1:07 am Leave a comment

Say Goodbye to Myers-Briggs, the Fad That Won’t Die

Once in our Learning Sciences seminar, we all took the Myers-Briggs test on day 1 of the semester, and again at the end.  Almost everybody’s score changed.  So, why do people still use it as some kind of reliable test of personality?

A test is reliable if it produces the same results from different sources. If you think your leg is broken, you can be more confident when two different radiologists diagnose a fracture. In personality testing, reliability means getting consistent results over time, or similar scores when rated by multiple people who know me well. As my inconsistent scores foreshadowed, the MBTI does poorly on reliability. Research shows “that as many as three-quarters of test takers achieve a different personality type when tested again,” writes Annie Murphy Paul in The Cult of Personality Testing, “and the sixteen distinctive types described by the Myers-Briggs have no scientific basis whatsoever.” In a recent article, Roman Krznaric adds that “if you retake the test after only a five-week gap, there’s around a 50% chance that you will fall into a different personality category.”

via Say Goodbye to MBTI, the Fad That Won’t Die | LinkedIn.
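To make the test-retest figures quoted above concrete, here is an illustrative sketch (with made-up data, not drawn from any study) of how one might summarize retest reliability for a type-based instrument: the fraction of people assigned the same type on both administrations.

```javascript
// Illustrative sketch only: for a type-based test like the MBTI, one rough
// test-retest summary is the fraction of people who get the same type twice.
// The data below are made up for illustration, not from any study.
function retestAgreement(firstTypes, secondTypes) {
  var same = 0;
  for (var i = 0; i < firstTypes.length; i++) {
    if (firstTypes[i] === secondTypes[i]) { same++; }
  }
  return same / firstTypes.length;
}

var week0 = ["INTJ", "ENFP", "ISTP", "ENTJ"];
var week5 = ["INTP", "ENFP", "ESTP", "ENTJ"]; // two of four changed type
console.log(retestAgreement(week0, week5)); // 0.5 — like the ~50% figure quoted above
```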

November 5, 2013 at 1:53 am 5 comments

What to do about laptops in lectures: Worse for the bystanders

Fascinating result: the bystanders have their learning impacted more than the students who opened up their laptops.

There is a fundamental tension here, and I don’t know how to resolve it. On the one hand, I like it when students have their laptops in class. Many of them are more comfortable taking notes this way than longhand. In the middle of a lecture I might ask someone to look something up that I don’t know off the top of my head.

On the other hand, the potential for distraction is terrible. I’ve walked in the back of the classroom of many of my colleagues and seen that perhaps 50% of the students are on the Web.

via What to do about laptops in lectures? – Daniel Willingham.

November 1, 2013 at 1:07 am 7 comments

September 2013 Special Issue of IEEE Computer on Computing Education

Betsy DiSalvo and I were guest editors for the September 2013 special issue of IEEE Computer on Computing Education.  (The cover, copied above, is really nice!)  The five articles in the issue did a great job of pushing computing education beyond our traditional image of CS education.  Below I’m pasting our original introduction to the special issue — before copy-editing, but free for me to share, and it’s a reasonable overview of the issue.

Introduction to the Special Issue

Computing education is in the news regularly these days. England has just adopted a new computer science curriculum. Thousands of people are taking on-line courses in computer science. Code.org’s viral video had millions of people thinking about learning to code.

A common thread in all of this new computer science education is that it’s not how we normally think about computing education. Traditional computing education brings to mind undergraduates working late nights in labs, drinking highly-caffeinated beverages. “CS class” brings to mind images of students gaining valuable vocational skills in classrooms. The new movement is about computing education for everyone, from children to working adults. It’s about people learning about computing in places you wouldn’t expect, from your local elementary school to afterschool clubs. It’s about people making their own computing on things that only a few years ago were not programmable at all, like your personal cellphone and even your clothing.

Computing has changed. In the 1950s and 1960s, computing moved from the laboratory into the business office. In the PC revolution, it moved into our homes. Now, in the early 21st century, it is ubiquitous. We use dozens of computers in our everyday life, often without even recognizing that the processors are there. Knowing about computing today is necessary for understanding the world we live in. Computer science is as valuable as biology, physics, or chemistry to our students. Consider a computer science concept: all information in a computer is digitized into the same kinds of representations, so the same bits could be a picture or text or a virus. That is more relevant to a student today than the difference between meiosis and mitosis, or how to balance an equilibrium equation.

Computing also gives us the most powerful tool for creative expression humans have ever invented. The desktop user interface we use today was created at Xerox PARC in order to make the computer a creative device. Today, we can use computing to communicate, to inform, to delight, and to amaze. That is a powerful set of reasons for learning to control the computer with programming.

The papers in this special issue highlight how computing education has moved beyond the classroom. They highlight computing as porous education that crosses the boundaries of the classroom, and even boundaries of disciplines. These papers help us to understand the implications and the new needs of computing education today.

Maria Knobelsdorf and Jan Vahrenhold write on “Addressing the Full Range of Students: Challenges in K-12 Computer Science Education”. The issues change as computer science education moves down from higher education into primary and secondary education. What curricula should we use in schools? How do we prepare enough teachers? Maria and Jan lay out the challenges, and use examples from Germany on how these challenges might be addressed.

“STEAM-Powered Computing Education using E-Textiles: Impacting Learning and Broadening Participation” by Kylie Peppler talks about integrating art into traditional STEM (Science, Technology, Engineering, and Mathematics) classrooms through use of new kinds of media. Kylie has students sewing computers into fabrics. Her students combine roles of engineers, designers, scientists and artists as they explore issues of fashion and design with electronic circuits and computer programming.

In “The Porous Classroom: Professional practices in the computing curriculum”, Sally Fincher and Daniel Knox consider how computer science students learn beyond the classroom. Learning in the classroom is typically scripted, with careful attention to students’ activities that lead to learning outcomes. The wild and unconstrained world outside the classroom offers many more opportunities to learn, and Sally and Daniel look at how the opportunities outside the school walls influence students as they move between the classroom and the world beyond.

Karen Brennan’s paper “Learning Computing through Creating and Connecting” starts from the programming language Scratch, which was created to introduce computing into afterschool computer clubhouses. Students using Scratch learned through creating wonderful digital stories and animations, sharing them with others, and then learning further by mixing and re-mixing what was shared. Karen then considers porous education from the opposite direction — what does it take to bring an informal learning tool, such as Scratch, into the traditional classroom?

The paper by Allison Elliott Tew and Brian Dorn, “The Case for Validated Tools in Computing Education Research”, describes how to measure the impacts of computing education, in terms of learning and attitudes. This work ties these themes together and back to the traditional classroom. Wherever the learning occurs, we want to know that learning is actually happening.  We need good measurement tools to help us know what’s working and what’s not, and to compare different kinds of contexts for different students. Allison and Brian tell us that “initial research and development investment can pay dividends for the community because validated instruments enable and enhance a host of activities in terms of both teaching and research that would not otherwise be feasible.”   Tools such as these validated instruments may allow us to measure the impact of informal, maker-based, or practice-based approaches.  Work on basic tools for measurement helps us to ground and connect the work that goes on beyond our single classroom, through the porous boundary, to other disciplines and other contexts.

The story that this special issue tells is about computer science moving from subject to literacy. Students sometimes learn computer science because they are interested in computers. More often today, students learn computer science because of what they can do with computers. Computing is a form of expression and a tool for thinking. It is becoming a basic literacy, like reading, writing, and arithmetic. We use reading and writing in all subject areas. We see that students are increasingly using programming in the same way. The papers in this special issue offer a view into that new era of computing education.

September 18, 2013 at 1:54 pm Leave a comment

1st “BOOC” on Scaling-Up What Works about to start at Indiana University

I talked with Dan Hickey about this — it’s an interesting alternative to MOOCs, and the topic is relevant for this blog.

In the fall semester of 2013, IU School of Education researcher and Associate Professor Dr. Daniel Hickey will be leading an online course. The 11-week course will begin on September 9 and is being called a “BOOC,” or “Big Open Online Course.” The main topic being taught is “Educational Assessment: Practices, Principles, and Policies.” Students will develop “WikiFolios,” endorse each other’s work, and earn bona fide Digital Badges based on the work they complete. Additionally, the course provides an opportunity for Dr. Hickey to observe how these activities translate from the same for-credit, online course that initially seated 25 students to the new “BOOC” format hosting 500 participants. Describing his small-scale experimental study, Dr. Hickey stated:

“I feel like I came up with some nice strategies for streamlining the course and making it a little less demanding which I think is necessary for an open, non-credit course. I learned ways to shorten the class, to get it from the normal 15 week semester to the 11 weeks. I condensed some of the assignments and gave students options; they do performance or portfolio assessment, they don’t do both. I thought that was pretty good for students.”

via 1st “BOOC” To Begin In September, Scaling-Up What Works | BOOC at Indiana University.

September 5, 2013 at 1:46 am Leave a comment

Minerva Project Announces Annual $500,000 Prize for Professors: Measured how?

How would one measure extraordinary, innovative teaching?  We have a difficult time measuring regular teaching!

The Minerva Project, a San Francisco venture with lofty but untested plans to redefine higher education, said on Monday that starting next year it would award an annual $500,000 prize to a faculty member at any institution in the world who has demonstrated extraordinary, innovative teaching.

via Minerva Project Announces Annual $500,000 Prize for Professors – NYTimes.com.

May 17, 2013 at 1:48 am 3 comments

SIGCSE 2013 Preview: Measuring attitudes in introductory computing

Brian Dorn and Allison Elliott Tew have been working on a new assessment instrument for measuring attitudes towards computing.  They published a paper at ICER 2012 on its development, and the new SIGCSE 2013 paper is on its initial uses.

In general, we have too few research measures in computing education research.  Allison’s dissertation work stands alone as the only validated language-independent measure of CS1.  Brian and Allison have been following a careful process of developing the Computing Attitudes Survey (CAS).  They’re developing their instrument based on a measure created for Physics. The Physics instrument has already been adapted for Chemistry and Biology, so the process of adaptation is well-defined.

What’s particularly cool about CAS is that it can be used as a pre-test/post-test.  What were the attitude effects of a particular intervention?  The SIGCSE 2013 paper describes use of CAS in a set of pre-test/post-test situations.
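As a rough illustration of how such a pre-test/post-test comparison works, here is a sketch assuming items are scored by agreement with an expert consensus, as in the physics survey the CAS was adapted from — my illustration, not the actual CAS scoring code or data.

```javascript
// Minimal sketch, assuming (like the physics attitude survey the CAS was
// adapted from) that each item is scored by whether the student's response
// matches the expert consensus. Not the actual CAS instrument or its data.
function percentExpertLike(responses, expertKey) {
  var matches = 0;
  for (var i = 0; i < responses.length; i++) {
    if (responses[i] === expertKey[i]) { matches++; }
  }
  return 100 * matches / responses.length;
}

// Pre/post shift for one (made-up) student on a five-item survey.
var expertKey = ["agree", "disagree", "agree", "agree", "disagree"];
var pre  = ["disagree", "disagree", "agree", "disagree", "agree"];
var post = ["agree", "disagree", "agree", "agree", "agree"];
console.log(percentExpertLike(pre, expertKey));  // 40 — before the course
console.log(percentExpertLike(post, expertKey)); // 80 — after the course
```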

Here comes the remarkable part.  In the other fields, an introductory course actually leads to decreased interest in the field (more specifically, to attitudes less like those of experts in the field).  But not in computer science!  The CAS indicates increased interest in the field after the first course.

Why is that?  I like the hypothesis that Brian and Allison suggest.  Students have some idea of what physics, biology, and chemistry are — but it’s probably significantly wrong about real practice, and real practice is more rigorous than they thought.  Students have almost no clue what computer science is. They probably have misconceptions, but those are not tightly held — we’ve found that high school students’ perceptions of what CS is can be changed pretty easily.  After a first CS course, students realize that it’s more interesting than they thought, so attitudes become more expert-like and positive.

February 15, 2013 at 1:49 am 5 comments

Stages of acceptance, reversed: How do you prove something works?

“Gas station without pumps” has a great point here (linked below), but I’d go a bit further.  As he suggests, proponents of an educational intervention (“fad”) rarely admit that it’s a bad idea, rarely gather evidence showing that they’re wrong, and swamp the research literature with evidence that they’re right.

But what if external observers test the idea, and find that it works as hypothesized?  Does that mean that it will work for everyone?  Media Computation has been used successfully to improve retention at several institutions, with both CS majors and non-CS majors, in evaluations not connected to me or my students.  That doesn’t mean that it will work for any and every teacher.  There are so many variables in any educational setting.  Despite the promises of the “What Works Clearinghouse,” even well-supported interventions will sometimes fail, and there are interventions that are not well-supported that sometimes work.  Well-supported interventions are certainly more promising and more likely to work. The only way to be sure, as the blog post below says, is to try it — and to measure it as well as you can, to see if it’s working for you.

I would posit that there is another series of responses to educational fads:

  1. It is great, everyone should do this.
  2. Maybe it doesn’t work that well in everybody’s hands.
  3. It was a terrible idea—no one should ever do that.

Think, for example, of the Gates Foundation’s attempt to make small high schools.  They were initially very enthusiastic, then saw that it didn’t really work in a lot of the schools where they tried it, then they abandoned the idea as being completely useless and even counter-productive.

The difficult thing for practitioners is that the behavior of proponents in stage 1 of an educational fad is exactly the same as in Falkner’s third stage of acceptance.  It is quite difficult to see whether a pedagogical method is robust, well-tested, and applicable to a particular course or unit—especially when so much of the information about any given method is hype from proponents. Educational experiments seem like a way to cut through the hype, but research results from educational experiments are often on insignificantly small samples, on very different courses from the one the practitioner needs to teach, and with all sorts of other confounding variables.  Often the only way to determine whether a particular pedagogic technique works for a particular class is to try it and see, which requires a leap of faith, a high risk of failure, and (often) a large investment in developing new course materials.

via Stages of acceptance, reversed « Gas station without pumps.

February 1, 2013 at 1:19 am 3 comments

Grades are in for a pioneering free Johns Hopkins online class: Adding more to the public good

Some more statistics from another Coursera course.  The final comments are interesting: through MOOCs, “everyone can get at least some fraction of what we believe is fundamental knowledge.”  That’s true.  The interesting question is whether MOOCs give that fraction to more students who didn’t already have it (see the edX data about 80% repeating the course) than a similar face-to-face course does.  It’s not obvious to me either way — there are certainly results that have us questioning the effectiveness of our face-to-face classes.  While few students finish MOOCs, maybe those who do finish learn more than in a face-to-face class, and maybe overall (amount of learning summed across the number of students), MOOCs contribute more to the public good?

Read on for the final metrics on Caffo’s class and a few thoughts from the associate professor at the university’s school of public health.

Number of students who signed up for Caffo’s class: 15,930.

Number who ordinarily sign up for the class when it is taught solely on campus in Baltimore: a few dozen.

Active users in the final week of the class: 2,778

Total unique visitors who watched Caffo’s video lectures: 8,380

Total who submitted a quiz: 2,882

Total who submitted homework: 2,492

Total who passed the course (averaging 70 percent or better on quizzes): 748

Total who passed with distinction (averaging 90 percent or better): 447

And here is Caffo’s take:

“Regardless of how MOOCs wind up, it is awesome to be a professor in a time where teaching is the hottest topic in higher education at research-driven universities. I also have a lot of sympathy for democratizing education and information. Very few people will have the privilege of a Johns Hopkins University Bloomberg School of Public Health education. But, with these efforts [including free online initiatives such as Open Courseware, iTunes U, Coursera] everyone can get at least some fraction of what we believe is fundamental knowledge for attacking the world’s public health problems.”

via Grades are in for a pioneering free Johns Hopkins online class – College, Inc. – The Washington Post.

January 31, 2013 at 1:00 am 7 comments

Most Americans want more online learning, and don’t expect to learn as much

This week’s Time magazine piece on MOOCs is very good. The author was fair and even-handed in identifying strengths and weaknesses both of the current models of higher-education and of MOOCs. I was surprised by the sidebar on the results of a survey by Time and the Carnegie Corporation.

They reported that 68% of the general population believe that “Much of the teaching on college campuses can be replaced by online classes” (only 22% of the senior college administrators surveyed agreed with that statement), and 52% of the general population agree that “Students will not learn as much in online courses as they will in traditional courses” (45% of college leaders agreed). So the majority believes that courses will go online, and that students won’t learn as much. This sounds like evidence for the argument made a while back that quality isn’t really a critical variable in decision-making about higher education. Completion rates and cost are two of the most critical variables in the Time piece.

The article says the economic burden of higher education is so great now, something has to change.

I was optimistic after reading the Time coverage — MOOCs could lead to positive changes in all of higher education. If MOOC completion is going to be accredited, it will have to be tested. If face-to-face colleges are going to demonstrate that they have greater value, they will want to show that they lead to testable performance, at least as good as MOOCs. The demand for better tests might lead to education research to develop more and better assessment methods. Actually measuring learning in higher education classes could be a real step forward, in terms of providing motivation to improve learning against those assessments — for both MOOCs and for face-to-face classes.

October 25, 2012 at 9:00 am 6 comments
