Posts tagged ‘educational psychology’

Learning Myths And Realities From Brain Science

Interesting results, but also concerning. People really believe that intelligence is “fixed at birth” and that teachers don’t need to know content? The article has more of these:

On the topic of “growth mindset,” more than one-quarter of respondents believed intelligence is “fixed at birth”. Neuroscience says otherwise.

Nearly 60 percent argued that quizzes are not an effective way to gain new skills and knowledge. In fact, quizzing yourself on something you’ve just read is a great example of active learning, the best way to learn.

More than 40 percent of respondents believed that teachers don’t need to know a subject area such as math or science, as long as they have good instructional skills. In fact, research shows that deep subject matter expertise is a key element in helping teachers excel.

Source: Learning Myths And Realities From Brain Science : NPR Ed : NPR

May 15, 2017 at 7:00 am 2 comments

Power law of practice in software implementation: Does this explain the “W” going away?

I wonder if this result explains why the second semester students in Briana’s studies (see previous blog post) didn’t have the “W” effect.  If you do enough code, you move down the power law of practice, and now you can attend to things like context and generating subgoal labels.

Different subjects start the experiment with different amounts of ability and past experience. Before starting, subjects took a multiple choice test of their knowledge. If we take the results of this test as a proxy for the ability/knowledge at the start of the experiment, then the power law equation becomes (a similar modification can be made to the exponential equation):

time = a(round + E)^(-b), where E is the subject’s starting ability/knowledge expressed as an equivalent number of prior rounds

That is, the test score is treated as equivalent to performing some number of rounds of implementation. A power law is a better fit than exponential to this data (code+data); the fit captures the general shape, but misses lots of what look like important details.

Source: The Shape of Code » Power law of practice in software implementation
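To make the model concrete, here is a minimal sketch in Python (with synthetic data, not the post’s code+data) of fitting both a power law and an exponential to per-round implementation times, where the pre-test score enters as E equivalent prior rounds:

# A minimal sketch with synthetic data (not the post's code+data): fit a power
# law and an exponential to per-round implementation times, treating the
# pre-test score as E equivalent prior rounds of practice.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
rounds = np.arange(1, 11, dtype=float)   # implementation rounds 1..10
E = 3.0                                  # pre-test score as equivalent prior rounds (assumed)
times = 80.0 * (rounds + E) ** -0.4 + rng.normal(0.0, 1.0, rounds.size)  # synthetic timings

def power_law(n, a, b):
    return a * (n + E) ** (-b)

def exponential(n, a, b):
    return a * np.exp(-b * n)

for name, model, p0 in [("power law", power_law, (80.0, 0.4)),
                        ("exponential", exponential, (60.0, 0.1))]:
    params, _ = curve_fit(model, rounds, times, p0=p0)
    rss = np.sum((times - model(rounds, *params)) ** 2)
    print(f"{name}: a={params[0]:.1f}, b={params[1]:.2f}, residual sum of squares={rss:.2f}")

Comparing the residual sums of squares (or a proper information criterion) is one simple way to see which functional form fits the practice data better.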

January 18, 2017 at 7:26 am 2 comments

Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research

I am posting this on the day that I am honored to “hood” Dr. Briana Morrison. “Hooding” is where doctoral candidates are given the academic regalia indicating their doctoral degree. It’s one of those ancient parts of academia that I find really cool. I like the way that Wikiversity describes it: “The Hooding Ceremony is symbolic of passing the guard from one generation of doctors to the next generation of doctors.”

I’ve written about Briana’s work a lot over the years here.

But what I find most interesting about Briana’s dissertation work are the things that didn’t work:

  • She tried to show a difference in getting program instruction via audio or text. She didn’t find one. The research on modality effects suggested that she would.
  • She tried to show a difference between loop-and-a-half and exit-in-the-middle WHILE loops. Previous studies had found one. She did not.

These kinds of results are so cool to me, because they point out what we don’t know about computing education yet. The prior results and theory were really clear. The study was well-designed and vetted by her committee. The results were contrary to what we expected. WHAT HAPPENED?!? It’s for the next group of researchers to try to figure out.

The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question.

Here’s the experimental set-up. We’ve got six groups.

  1. All groups are learning with pairs of a worked example (a completely worked out piece of code) and then a practice problem (maybe a Parson’s Problem, maybe writing some code). We’ll call these WE-P pairs (Worked Example-Practice). Now, some WE-P pairs have the same context (think of it as the story of a story problem), and some have different contexts. Maybe in the same context, you’re asked to compute the average of several days of tips as a barista. Maybe in a different context, you compute tips in the worked example, but you compute the average test score in the practice. In general, we predict that different contexts will be harder for the student than having everything the same.
  2. So we’ve got same context vs different context as one variable we’re manipulating. The other variable is whether the participants get the worked example with NO subgoal labels, or GENERATED subgoal labels, or the participant has to GENERATE subgoal labels. Think of a subgoal label as a comment that explains some code, but it’s the same comment that will appear in several different programs. It’s meant to encourage the student to abstract the meaning of the code.

In the GENERATE condition, the participants get blanks, to encourage them to abstract for themselves. Typically, we’d expect (from research in other parts of STEM with subgoal labels) that GENERATE would lead to more learning than GIVEN labels, but it’s harder. We might get cognitive overload.
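To make the materials concrete, here is a hypothetical illustration in Python (not the study’s actual materials) of a worked example with GIVEN subgoal labels, and of what the GENERATE version would look like:

# Hypothetical worked example (invented for illustration, not from the study):
# averaging a barista's daily tips, with GIVEN subgoal labels as comments.

# Subgoal: initialize the total and the count
total = 0.0
count = 0

# Subgoal: go through the data, updating the total and the count
tips = [31.50, 24.75, 42.00, 28.25]
for tip in tips:
    total = total + tip
    count = count + 1

# Subgoal: compute and report the result
average = total / count
print("Average tips:", average)

# In the GENERATE condition, the same code would show blanks in place of the
# labels (e.g., "# Subgoal: ____________") for the learner to fill in.

The same labels would reappear, worded identically, in other programs (say, averaging test scores), which is what encourages the learner to abstract the meaning of the code.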

In general, GIVEN labels beat out no labels. No problem — that’s what we expect given all the past work on subgoal labels. But when we consider all six groups, we get this picture.

Why would having the same context do worse with GIVEN labels than no labels? Why would the same context do much better with GENERATE labels, but worse when it’s different contexts?

So, Briana, Lauren, and Adrienne Decker replicated the experiment with Adrienne’s students at RIT (ICER 2016). And they found:

The same strange “W” pattern, where we have this odd interaction between context and GIVEN vs. GENERATE that we just don’t have an explanation for.

But here’s the really intriguing part: they also did the experiment with second semester students at RIT. All the weird interactions disappeared! Same context beat different context. GIVEN labels beat GENERATE labels. No labels did the worst. When students get enough experience, they figure things out and behave like students in other parts of STEM.

The puzzle for the community is WHY. Briana has a hypothesis. Novice students don’t attend to the details that they need, unless you change the contexts. Without changing contexts, even students GIVEN labels don’t learn because they’re not paying enough attention. Changing contexts gets them to think, “What’s going on here?” GENERATE is just too hard for novices — the cognitive load of figuring out the code and generating labels is just overwhelming for students, so they do badly when we’d expect them to do better.

Here we have a theory-conflicting result, that has been replicated in two different populations. It’s like the Rainfall Problem. Nobody expected the Rainfall Problem to be hard, but it was. More and more people tried it with their students, and still, it was hard. It took Kathi Fisler to figure out how to teach CS so that most students could succeed at the Rainfall Problem. What could we teach novice CS students so that they avoid the “W” pattern? Is it just time? Will all second semester students avoid the “W”?

Dr. Morrison gave us a really interesting dissertation — some big wins, and some intriguing puzzles for the next researchers to wrestle with. Briana has now joined the computing education research group at U. Nebraska – Omaha, where I expect to see more great results.

December 16, 2016 at 7:00 am 2 comments

How to Write a Guzdial Chart: Defining a Proposal in One Table

In my School, we use a technique for representing an entire research proposal in a single table. I started asking students to build these logic models when I got to Georgia Tech in the 1990s. In Georgia Tech’s Human-Centered Computing PhD program, they have become pretty common. People talk about building “Guzdial Charts.” I thought that was cute — a local cultural practice that got my name on it.

Then someone pointed out to me that our HCC graduates have been carrying the practice with them. Amy Voida (now at U. Colorado-Boulder) has been requiring them in her research classes (see syllabus here and here). Sarita Yardi (U. Michigan) has written up a guide for her students on how to summarize a proposal in a single table. Guzdial Charts are a kind of “thing” now, at least in some human-centered computing schools.

Here, I explain what a Guzdial Chart is, where it came from, and why it should really be a Blumenfeld Chart [*].

Phyllis Teaches Elliot Logic Models

In 1990, I was in Elliot Soloway’s office at the University of Michigan as he was trying to explain an NSF proposal he was planning with School of Education professor Phyllis Blumenfeld. (When I mention Phyllis’s name to CS folks, they usually ask “who?” When I mention her name to Education folks, they almost always know her — maybe for her work in defining project-based learning or maybe her work in instructional planning or maybe her work in engagement. She’s retired now, but is still a Big Name in Education.) Phyllis kept asking questions. “How many students in that study?” and “How are you going to measure that?” She finally got exasperated.

She went to the whiteboard and said, “Draw me a table like this.” Each row of the table is one study in the overall project.

  • Leftmost column: What are you trying to achieve? What’s the research question?
  • Next column: What data are you going to collect? What measures are you going to use (e.g., survey, log file, GPS location)?
  • Next column: How much data are you going to collect? How many participants? How often are you going to use these measures with these participants (e.g., pre/post, at midterm, after a week’s delay)?
  • Next column: How are you going to analyze these data?
  • Rightmost column: What do you expect to find? What’s your hypothesis for what’s going to happen?

This is a kind of logic model, and you can find guides on how to build logic models. Logic models are used by program evaluators to describe how resources and activities will lead to desired impacts. This is a variation that Phyllis made us use in all of our proposals at UMich. (Maybe she invented it?) This version focused on the research being proposed. Each study reads on a row from left-to-right (see the sketch after this list),

  • from why you were doing it,
  • to what you were doing,
  • to what you expected to find.
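As a concrete (and entirely invented) illustration of the row structure, here is one study row sketched as a Python data record; the research question, sample, and measures are all hypothetical:

# One hypothetical row of a Guzdial Chart, with invented details for illustration.
study_row = {
    "Research question": "Does strategy X improve novices' program-writing performance?",
    "Data and measures": "Pre/post programming quiz; screen recordings of problem solving",
    "Sample and schedule": "60 CS1 students; pre-test, four weekly sessions, post-test",
    "Analysis": "Mixed-design ANOVA on quiz gain scores",
    "Expected outcome": "Treatment group outperforms comparison group on the post-test",
}

# A full chart is just a list of such rows, one per study in the proposal.
chart = [study_row]
for row in chart:
    print(" | ".join(f"{column}: {value}" for column, value in row.items()))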

When I got to Georgia Tech, I made one for every proposal I wrote. I made my students do them for their proposals, too. Somewhere along the way, lots of people started doing them. I think Beth Mynatt first called them “Guzdial Charts,” and despite my story about Phyllis Blumenfeld’s invention, the name stuck. People at Georgia Tech don’t know Phyllis, but they did know Guzdial.

Variations on a Guzdial Chart Theme

The critical part of a Guzdial Chart is that each study is one row, and includes purpose, methods, and expected outcome. There are lots of variations. Here’s an example of one that Jason Freeman (in our School of Music) wrote up for a proposal he was doing on EarSketch. He doesn’t list hypotheses, but it still describes purpose and methods, one row per study.

In Sarita’s variation, she has the students put the Expected Publication in the rightmost column. I like that — very concrete. If you’re in a discipline with some clearly defined publication targets, with a clear distinction between them (e.g., the HCI community where Designing Interactive Systems (DIS) is often about process, and User Interface Software and Technology (UIST) is about UI technologies), then the publication targets are concrete and definable.

My former student, Mike Hewner, did one of the most qualitative dissertations of any of my students. He used a Guzdial Chart, but modified it for his study. Still one row per study, still including research question, hypothesis, analysis, and sampling.

I still use Guzdial Charts, and so do my students. For example, we used one to work through the story for a paper. Here’s one that we started on a whiteboard outside of my office, and we left it there for several weeks, filling in the cells as they made sense to us.

[Photo: the whiteboard Guzdial Chart outside my office]

A Guzdial Chart is a handy way of summarizing a research project and making sure that it makes sense (or to use when making sense), row-by-row, left-to-right.

 

___________

[*] Because Ulysses now makes it super-easy to post to blogs, and I do most of my writing in Ulysses, I accidentally posted this post to Medium — my first ever Medium post. I wanted this to appear in my WordPress blog, also, so I decided to write two blog posts: the Medium one on Blumenfeld Charts, and this one on Guzdial Charts.

October 3, 2016 at 7:05 am 2 comments

Making learning effective, efficient, and engaging: An Interview With an Educational Realist and Grumpy Old Man, Paul Kirschner

I am a fan of Paul Kirschner’s work. This interview is great, with useful insights about education — deep and pragmatic thinking.

I want to fundamentally understand how people can learn in effective, efficient, and enjoyable ways, and how you can teach and design learning materials to achieve this objective. If a learner doesn’t enjoy the learning experience, even if it’s effective and/or efficient, they won’t do it. The same is true for teaching: that is it must also be effective, efficient, and enjoyable for the teacher because if a teacher doesn’t enjoy the teaching process, even if it’s effective and/or efficient, they won’t do it.

Source: GUEST POST: An Interview With an Educational Realist and Grumpy Old Man — The Learning Scientists

September 28, 2016 at 7:59 am 1 comment

Learning Curves, Given vs Generated Subgoal Labels, Replicating a US study in India, and Frames vs Text: More ICER 2016 Trip Reports

My Blog@CACM post for this month is a trip report on ICER 2016. I recommend Andy Ko’s excellent ICER 2016 trip report for another take on the conference. You can also see the Twitter live feed with hashtag #ICER2016.

I write in the Blog@CACM post about three papers (and reference two others), but I could easily write reports on a dozen more. The findings were that interesting and that well done. I’m going to give four more mini-summaries here, where the results are more confusing or surprising than those I included in the CACM Blog post.

This year was the first time we had a neck-and-neck race for the attendee-selected award, the “John Henry” award. The runner-up was Learning Curve Analysis for Programming: Which Concepts do Students Struggle With? by Kelly Rivers, Erik Harpstead, and Ken Koedinger. Tutoring systems can be used to track errors on knowledge concepts over multiple practice problems. Tutoring system developers can show these lovely decreasing error curves as students get more practice, which clearly demonstrate learning. Kelly wanted to see if she could do that with open editing of code, not in a tutoring system. She tried to use AST graphs as a proxy for programming “concepts,” measuring errors in the use of the various constructs. It didn’t work, as Kelly explains in her paper. It was a nice example of an interesting and promising idea that didn’t pan out, but with careful explanation for the next try.
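As a rough sketch of the general idea (and not Rivers et al.’s actual analysis pipeline), one could treat the AST node types a program uses as stand-ins for “concepts” and tally an error rate at each practice opportunity:

# Rough sketch of the idea (not the paper's pipeline): treat AST node types as
# stand-in "concepts" and record, for one student, whether each opportunity to
# use a concept occurred in a submission that failed its tests.
import ast
from collections import defaultdict

def constructs(source):
    """AST node types used in a program, e.g. {'For', 'Assign', 'Call', ...}."""
    return {type(node).__name__ for node in ast.walk(ast.parse(source))}

def learning_curves(submissions):
    """submissions: chronological list of (source_code, passed_tests) pairs.
    Returns {construct: [(opportunity_number, was_error), ...]}."""
    curves = defaultdict(list)
    opportunities = defaultdict(int)
    for source, passed in submissions:
        for construct in constructs(source):
            opportunities[construct] += 1
            curves[construct].append((opportunities[construct], not passed))
    return curves

# Averaging was_error across many students at each opportunity number yields the
# decreasing error curves that tutoring-system developers plot.
example = [("total = 0\nfor x in [1, 2]:\n    total = x", False),
           ("total = 0\nfor x in [1, 2]:\n    total += x", True)]
print(dict(learning_curves(example)))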

I mentioned in this blog previously that Briana Morrison and Lauren Margulieux had a replication study (see paper here), written with Adrienne Decker using participants from Adrienne’s institution. I hadn’t read the paper when I wrote that first blog post, and I was amazed by their results. Recall that they had this unexpected result where changing contexts for subgoal labeling worked better (i.e., led to better performance) for students than keeping students in the same context. The weird contextual-transfer problems that they’d seen previously went away in the second (follow-on) CS class — see the snap from their slides below. The weird result was replicated in the first class at this new institution, so we know it’s not just one strange student population, and now we know that it’s a novice problem. That’s fascinating, but still doesn’t really explain why. Even more interesting was that when the context transfer issues went away, students did better when they were given subgoal labels than when they generated them. That’s not what happens in other fields. Why is CS different? It’s such an interesting trail that they’re exploring!

[Snapshot from Morrison, Decker, and Margulieux’s ICER 2016 slides]

Mike Hewner and Shitanshu Mishra replicated Mike’s dissertation study about how students choose CS as a major, but in Indian institutions rather than in US institutions: When Everyone Knows CS is the Best Major: Decisions about CS in an Indian context. The results that came out of the Grounded Theory analysis were quite different! Mike had found that US students use enjoyment as a proxy for ability — “If I like CS, I must be good at it, so I’ll major in that.” But Indian students already thought CS was the best major. The social pressures were completely different. So, Indian students chose CS — if they had no other plans. CS was the default behavior.

One of the more surprising results was from Thomas W. Price, Neil C.C. Brown, Dragan Lipovac, Tiffany Barnes, and Michael Kölling, Evaluation of a Frame-based Programming Editor. They asked a group of middle school students in a short laboratory study (not the most optimal choice, but an acceptable starting place) to program in Java or in Stride, the new frame-based language and editing environment from the BlueJ/Greenfoot team.  They found no statistically significant differences between the two different languages, in terms of number of objectives completed, student frustration/satisfaction, or amount of time spent on the tasks. Yes, Java students got more syntax errors, but it didn’t seem to have a significant impact on performance or satisfaction. I found that totally unexpected. This is a result that cries out for more exploration and explanation.

There’s a lot more I could say, from Colleen Lewis’s terrific ideas to reduce the impact of CS stereotypes to a promising new method of expert heuristic evaluation of cognitive load.  I recommend reviewing the papers while they’re still free to download.

September 16, 2016 at 7:07 am 4 comments

Preview ICER 2016: Ebooks Design-Based Research and Replications in Assessment and Cognitive Load Studies

The International Computing Education Research (ICER) Conference 2016 is September 8-12 in Melbourne, Australia (see website here). There were 102 papers submitted, and 26 papers accepted for a 25% acceptance rate. Georgia Tech computing education researchers are justifiably proud — we submitted three papers to ICER 2016, and we had three acceptances. We’re over 10% of all papers at ICER 2016.

One of the papers extends the ebook work that I’ve reported on here (see here where we made them available and our paper on usability and usage from WiPSCE 2015). In Identifying Design Principles for CS Teacher Ebooks through Design-Based Research (click on the title to get to the ACM DL page), Barbara Ericson, Kantwon Rogers, Miranda Parker, Briana Morrison, and I take a Design-Based Research perspective on our ebooks work. We describe our theory for the ebooks, then describe the iterations of what we designed, what happened when we deployed (data-driven), and how we then re-designed.

Two of our papers are replication studies — so grateful to the ICER reviewers and communities for seeing the value of replication studies. The first is Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment by Miranda Parker, me, and Shelly Engleman. This is Miranda’s paper expanding on her SIGCSE 2016 poster introducing the SCS1 validated and language-independent measure of CS1 knowledge. The paper does a great survey of validated measures of learning, explains her process, and then presents what one can and can’t claim with a validated instrument.

The second is Learning Loops: A Replication Study Illuminates Impact of HS Courses by Briana Morrison, Adrienne Decker, and Lauren Margulieux. Briana and Lauren have both now left Georgia Tech, but they were still here when they did this paper, so we’re claiming them. Readers of this blog may recall Briana and Lauren’s confusing results from SIGCSE 2016 that suggest that cognitive load in CS textual programming is so high that it blows away our experimental instructional treatments. Was that an aberration? With Adrienne Decker’s help (and student participants), they replicated the study. I’ll give away the bottom line: it wasn’t an aberration. One new finding is that, in the experiment, students who did not have high school CS classes caught up with those who did, with respect to understanding loops.

We’re sending three of our Human-Centered Computing PhD students to the ICER 2016 Doctoral Consortium. These folks will be in the DC on Sept 8, and will present posters to the conference on Sept 9 afternoon.

September 2, 2016 at 7:53 am 16 comments
