Posts tagged ‘educational psychology’

Transfer of learning: Making sense of what education research is telling us

I enjoy reading "Gas station without pumps," and the post quoted below is one I wanted to respond to.

Two of the popular memes of education researchers, “transferability is an illusion” and “the growth mindset”, are almost in direct opposition, and I don’t know how to reconcile them.

One possibility is that few students actually attempt to learn the general problem-solving skills that math, CS, and engineering design are rich domains for.  Most are content to learn one tiny skill at a time, in complete isolation from other skills and ideas. Students who are particularly good at memory work often choose this route, memorizing pages of trigonometric identities, for example, rather than learning how to derive them at need from a few basics. If students don’t make an attempt to learn transferable skills, then they probably won’t.  This is roughly equivalent to claiming that most students have a fixed mindset with respect to transferable skills, and suggests that transferability is possible, even if it is not currently being learned.

Teaching and testing techniques are often designed to foster an isolation of ideas, focusing on one idea at a time to reduce student confusion. Unfortunately, transferable learning comes not from practice of ideas in isolation, but from learning to retrieve and combine ideas—from doing multi-step problems that are not scaffolded by the teacher.

Source: Transfer of learning | Gas station without pumps

The problem with “transferability” is that it’s an ill-defined term.  Certainly, there is transfer of skill between domains.  Sharon Carver showed a long time ago that she could teach debugging Logo programs, and students would transfer that debugging process to instructions on a map (mentioned in post here).  That’s transferring a skill or a procedure.  We probably do transfer big, high-level heuristics like “divide-and-conquer” or “isolate the problem.”  One issue is whether we can teach them.  John Sweller says that we can’t — we must learn them (it’s a necessary survival skill), but they’re learned from abstracting experience (see Neil Brown’s nice summary of Sweller’s SIGCSE keynote).

Whether we can teach them or not, what we do know is that higher-order thinking is built on lots of content knowledge. Novices are unlikely to transfer until they know a lot of stuff, a lot of examples, a lot of situations. For example, novice designers often have “design fixation.”  They decide that the first thing they think of must be the right answer.  We can insist that novice designers generate more designs, but they’re not going to generate more good designs until they know more designs.  Transfer happens pretty easily when you know a lot of content and have seen a lot of situations, and you recognize that one situation is actually like another.

Everybody starts out learning one tiny skill at a time. If you know a lot of skills (maybe because you have lots of prior experience, maybe because you have thought about these skills a lot and have recognized the general principles), you can start chunking these skills and learning whole schema and higher-level skills. But you can't do that until you know lots of skills. Students who want to learn one tiny skill at a time may actually still need to learn one tiny skill at a time. People abstract (e.g., being able to derive a solution rather than memorize it) when they know enough content that it's useful and possible for them to abstract over it. I completely agree that students have to try to abstract. They have to learn a lot of stuff, and then they have to be in a situation where it's useful for them to abstract.

“Growth mindset” is a necessity for any of this to work.  Students have to believe that content is worth knowing and that they can learn it.  If students believe that content is useless, or that they just “don’t do math” or “am not a computer person” (both of which I’ve heard in just the last week), they are unlikely to learn content, they are unlikely to see patterns in it, and they are unlikely to abstract over it.

Kevin is probably right that we don't teach problem solving in engineering or computing well. I blogged on this theme for CACM last month — laboratory experiments work better for a wider range of students than classroom studies. Maybe we teach better in labs than in classrooms? The worked examples effect suggests that we may be asking students to problem solve too much. We should show students more completely worked-out problems. As Sweller said at SIGCSE, we can't expect students to solve novel problems. We have to expect students to match new problems to solutions that they have already seen. We do want students to solve problems, too, and not just review example solutions. Trafton and Reiser showed that these should be interleaved: Example, Problem, Example, Problem… (see this page for a summary of some of the worked examples research, including Trafton & Reiser).

When I used to do Engineering Education research, one of my largest projects was a complete flop. We had all this prior work showing the benefits of a particular collaborative learning technology and technique, then we took it into the engineering classroom and…poof! Nothing happened. In response, we started a project to figure out why it failed so badly. One of our findings was that "learned helplessness" was rampant in our classes, which is a symptom of a fixed mindset. "I know that I'm wrong, and there's nothing that I can do about it. Collaboration just puts my errors on display for everyone," was the kind of response we got. (See here for one of our papers on this work.)

I believe that all the things Kevin sees going wrong in his classes really are happening.  I believe he’s not seeing transfer that he might reasonably expect to see.  I believe that he doesn’t see students trying to abstract across lower-level skills.  But I suspect that the problem is the lack of a growth mindset.  In our work, we saw Engineering students simply give up.  They felt like they couldn’t learn, they couldn’t keep up, so they just memorized.  I don’t know that that’s the cause of the problems that Kevin is seeing.  In my work, I’ve often found that motivation and incentive are key to engagement and learning.

April 25, 2016 at 7:33 am

Optimizing Learning with Subgoal Labeling: Lauren Margulieux Defends her Dissertation

Lauren Margulieux successfully defended her dissertation Using Subgoal Learning and Self-Explanation to Improve Programming Education in March. Lauren’s been exploring subgoal labeling for improving programming education in a series of fascinating and influential papers. Subgoal labels are inserted into the steps of a worked example to explain the purpose for a set of steps.
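To make that concrete, here is a toy sketch of my own (in Python, not from Lauren's actual App Inventor materials); the comments are the subgoal labels, each naming the purpose of the group of steps beneath it.

```python
# Subgoal: Get the input from the user
text = input("Temperature in Fahrenheit: ")
fahrenheit = float(text)

# Subgoal: Convert to the new units
celsius = (fahrenheit - 32) * 5 / 9

# Subgoal: Display the result
print("That is", round(celsius, 1), "degrees Celsius")
```

The labels don't restate the code; they name why that group of steps is there, which is what helps learners see the structure of the solution.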

  • At ICER 2012 (see post here), her paper showed that subgoal labels inserted into App Inventor videos led to improved learning, retention (a week later), and even transfer to new App building problems, all compared to the exact same videos without the subgoal labels. This paper was cited by Rob Moore and his students at MIT in their work developing crowdsourced subgoal labels for videos (see post here).
  • At ICER 2015 (see post here), Lauren and Briana Morrison showed that subgoal labels also improved learning for textual programming languages, but the high cognitive load of a textual programming language made some forms of subgoal labeling less successful than studies in other disciplines would predict. That paper won the Chairs Award at ICER.
  • At SIGCSE 2016 (see post here), Briana presented a paper with Lauren where they showed that subgoal labeling also improved performance on Parsons problems.

In her dissertation work, Lauren returned to the challenges of the ICER 2015 paper: Can we make subgoal labeling even more successful? She went back to using App Inventor, to reduce the cognitive load from teaching a textual language.

She compared three different ways of using subgoal labeling.

  • In the passive condition, students were just given subgoal labels like in her first experiments.
  • In the active condition, students were given a list of subgoal labels. The worked example was segmented into sets of steps that achieved a subgoal, but the label was left blank. Students had to pick the right subgoal label for each blank.
  • In the constructive condition, students were just given a blank and asked to generate a subgoal label. She had two kinds of constructive conditions. One was "guided" in that there were blanks above sets of steps. The other was "unguided" — just a long list of steps, and she asked students to write labels in the margins.

Lauren was building on a theory that predicted that the constructive condition would have the best learning, but would also be the hardest. She provided two scaffolds.

  • For the conditions where it made sense (i.e., not the passive condition), she provided feedback: half the participants were shown the same worked examples with the experimenter's labels.
  • For half the constructive participants, the label wasn't blank; instead, there was a hint. All of the steps that achieved the same subgoal were labeled "Label 1," all of the steps that achieved a different subgoal were labeled "Label 2," and so on.

Here’s the big “interesting/surprising” graph from her dissertation.

[Figure: Lauren-interesting-graph]

As predicted, constructive was better than active or passive. What's interesting is that the very best performance came from guided constructive either without hints but with feedback, or with hints but without feedback. Now that's weird. Why would having more support (both hints and feedback) lead to worse performance?

There are several possible hypotheses for these results, and Lauren pursued one of these one step further. Maybe students developed their own cognitive model when they constructed their own labels with hints, and seeing the feedback (experimenter’s labels) created some kind of dissonance or conflict. Without hints, maybe the feedback helped them make sense of the worked example.

Lauren ran one more experiment where she contrasted getting scaffolding with the experimenter’s labels versus getting scaffolding with the student’s labels (put in all the right places in the worked example). Students who were scaffolded with their own labels performed better on later problem solving than those who were scaffolded with experimenter labels. Students scaffolded with experimenter labels did not perform better than those who did not receive any scaffolding at all. Her results support this hypothesis — the experimenter’s labels can get in the way of the understanding that the students are building.

[Figure: using-learner-labels]

There are several implications from Lauren's dissertation. One is that we can do even better than just giving students labels — getting them to write the labels themselves is even better for learning. Feedback isn't the most critical part of learning with subgoal labeling, which is surprising and fascinating. Constructive subgoal labeling lends itself to an online implementation, which is the direction that Lauren is explicitly exploring. How do we build effective programming education online?

Lauren has accepted an Assistant Professor position in the Learning Technologies Division at Georgia State University. I’m so glad for her, and even happier that she’s nearby so that we can continue collaborating!

March 29, 2016 at 9:41 pm

Brain training, like computational thinking, is unlikely to transfer to everyday problem-solving

In a recent blog post, I argued that problem-solving skills learned for solving problems in computational contexts (“computational thinking”) were unlikely to transfer to everyday situations (see post here).  We see a similar pattern in the recent controversy about “brain training.”  Yes, people get better at the particular exercises (e.g., people can learn to problem-solve better when programming). And they may still be better years later, which is great. That’s an indication of real learning.  But they are unlikely to transfer that learning to non-exercise contexts. Most surprisingly, they are unlikely to transfer that learning even though they are convinced that they do.  Just because you think you’re doing computational thinking doesn’t mean that you are.

Ten years later, tests showed that the subjects trained in processing speed and reasoning still outperformed the control group, though the people given memory training no longer did. And 60 percent of the trained participants, compared with 50 percent of the control group, said they had maintained or improved their ability to manage daily activities like shopping and finances. “They felt the training had made a difference,” said Dr. Rebok, who was a principal investigator.

So that’s far transfer — or is it? When the investigators administered tests that mimicked real-life activities, like managing medications, the differences between the trainees and the control group participants no longer reached statistical significance.

In subjects 18 to 30 years old, Dr. Redick also found limited transfer after computer training to improve working memory. Asked whether they thought they had improved, nearly all the participants said yes — and most had, on the training exercises themselves. They did no better, however, on tests of intelligence, multitasking and other cognitive abilities.

Source: F.T.C.’s Lumosity Penalty Doesn’t End Brain Training Debate – The New York Times

March 18, 2016 at 7:26 am

Notional Machines and Misconceptions in CS: Developing a Research Agenda at Dagstuhl


I facilitated a breakout group at the Dagstuhl Seminar on Assessment in Introductory Computer Science. We started talking about what students know and should know, and several of us started using terms like “notional machines” and “mental models” — and there were some strong disagreements. We decided to have a breakout group to define our terms, and came up with a fascinating set of issues and questions.  It was a large group (maybe a dozen?), and I think there were some differences in attendance between the two days, so I’m not going to try to list everyone here.

Definitions

We agreed on the definition of a notional machine (NM) as a set of abstractions that define the structure and behavior of a computational device. A notional machine includes a grammar and a vocabulary, and is specific to a programming paradigm. It's consistent and predictive — given a notional machine and a program to run on that machine, we should be able to define the result. The abstract machine of a compiler is a possible notional machine. This definition meshes with du Boulay's original one and the one that Juha Sorva used in his dissertation (which we could check, because Juha was there).
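As a small illustration of "consistent and predictive" (my own sketch, using Python's notional machine; this wasn't an example from the group): the NM's rules fix the result of a program before we ever run it.

```python
# Under Python's notional machine (strict evaluation, call by object
# reference, mutable lists), this program has exactly one defined result,
# and we can state it without running the code.
def double_all(items):
    for i in range(len(items)):
        items[i] = items[i] * 2   # mutates the caller's list in place

values = [1, 2, 3]
double_all(values)
print(values)   # the NM predicts [2, 4, 6]
```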

Note that a NM doesn't include function. It doesn't tell a user, "Why would I use this feature? What is it for?" Carsten Schulte and Ashok Goel both found that students tend to focus on structure and behavior, and significant expertise is needed before students can discern function for a program or a NM component.

In CS education, we care about the student's understanding of the notional machine. Mental model isn't the right term for that understanding, because (for some) that implies a consistent, executable model in the student's head. But modern learning science suggests that students are more likely to have "knowledge in pieces" (e.g., diSessa). Students will try to explain one program using one set of predictions about program behavior, and another program in another way. They respond to different programs differently. When Michael Caspersen tried to replicate the Dehnadi and Bornat paper (the "camel has two humps" paper, and its retraction), he found that students would use one rule set for interpreting assignment in part of the test, and another set of rules later — and they either didn't care or didn't notice that they were inconsistent.
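To make the "knowledge in pieces" point concrete, here is my own reconstruction of the kind of assignment question involved (not Dehnadi and Bornat's actual instrument), along with the sorts of rule sets students bring to it.

```python
# After these statements, what are the values of a and b?
a = 10
b = 20
a = b

# Rule sets students apply to questions like this:
#   copy right-to-left (the actual notional machine):  a = 20, b = 20
#   "move" b into a, emptying b:                       a = 20, b = 0
#   swap the two values:                               a = 20, b = 10
# The "knowledge in pieces" observation: the same student may use one rule
# set on one question and a different rule set on the next, without noticing
# or caring that the two are inconsistent.
print(a, b)   # Python prints: 20 20
```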

An early form of student understanding of the NM is simply mimicry. “I saw the teacher type commands like this. So if I repeat them exactly, I should get the same behavior.” As they start to realize that the program causes behavior, cognitive load limits how much of the NM students can think about at once. They can’t predict as we would like them to, simply because they can’t think about all of the NM components and all of the program at once. The greatest challenge to understanding the NM is Roy Pea’s Superbug — the belief that the computer is in fact a human-like intelligent agent trying to discern our intentions.

We define student misconceptions (about the NM) as incorrect beliefs about the notional machine that are reliable (the student will use more than once) and common (more than one student uses it). There are lots of misunderstandings that pop up, but those aren’t interesting if they’re not common and reliable. We decided to avoid the “alternative conception” model in science education because, unlike natural science, we know ground truth. CS is a science of the artificial. We construct notional machines. Conceptions are provably correct or incorrect about the NM.

One of the challenging aspects of student understandings of NM is that our current evidence suggests that students never fix existing models. We develop new understandings, and learn new triggers/indices for when to apply these understandings. Sometimes we layer new understandings so deeply that we can't reach the old ones. Sometimes, when we are stressed or face edge/corner conditions, we fall back on previous understandings. We help students develop new understandings by constraining their process to an appropriate path (e.g., cognitive tutors, cognitive apprenticeship) or by providing the right contexts and examples (like in Betsy Davis's paper with Mike Clancy, "Mind your P's and Q's").

Where do misconceptions come from?

We don’t know for sure, but we have hypotheses and research questions to explore:

  • We know that some misconceptions come from making analogies to natural language.
  • Teaching can lead to misconceptions. Sometimes it's a slip of the tongue. For example, students often confuse IF and WHILE. How often do we say (when tracing a WHILE loop), "IF the expression is true…"? Of course, the teacher may not have the right understanding. Research Question (RQ): What is the intersection between teacher and student misconceptions? Do teacher misconceptions explain most student misconceptions, or do most student misconceptions come from factors outside of teaching?
  • Under-specification. Students may simply not see enough contexts or examples for them to construct a complete understanding.
  • Students incorrectly applying prior knowledge. RQ: Do students try to understand programs in terms of spreadsheets, the most common computational model that most students see?
  • Notation. We believe that = and == do lead to significant misconceptions (see the sketch just after this list). RQ: Do Lisp's set, Logo's make/name, and Smalltalk's back arrow lead to fewer assignment misconceptions? RQ: Dehnadi and Bornat did define a set of assignment misconceptions. How common are they? In what languages or contexts?
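Here is the = / == sketch promised above (my example, not the group's): read as algebra, the second line below is a contradiction; read under the notional machine, it is an instruction.

```python
x = 5
x = x + 1        # assignment: compute x + 1, then rebind x; x is now 6
print(x)         # -> 6
print(x == 5)    # comparison: a True/False test with no rebinding -> False
```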

RQ: How much do students identify their own gaps in understanding of a NM (e.g., edge conditions, problem sets that don’t answer their questions)? Are they aware of what they don’t understand?  How do they try to answer their questions?

One advantage of CS over natural sciences is that we can design curriculum to cover the whole NM. (Gail Sinatra was mentioned as someone who has designed instruction to fill all gaps in a NM.) Shriram Krishnamurthi told us that he designs problem-sets to probe understanding of the Java notional machine that he expects students to miss, and his predictions are often right.

RQ: Could we do this automatically given a formal specification for an NM?  Could we define a set of examples that cover all paths in a NM?  Could we develop a model that predicts where students will likely develop misconceptions?

RQ: Do students try to understand their own computational world (e.g., how behavior in a Web page works, how an ATM works, how Web search works) with what we’re teaching them? Kathi Fisler predicts that they rarely do that, because transfer is hard. But if they’re actively trying to understand their computational world, it’s possible.

How do we find and assess gaps in student understanding?

We don’t know how much students think explicitly about a NM. We know from Juha’s work that students don’t always see visualizations as visible incarnations of the NM — for some students, it’s just another set of confusing abstractions.

Carsten Schulte pointed out that Ira Diethelm has a cool way of finding out what students are confused about. She gives them a “miracle question” — if you had an oracle that knew all, what one question would you ask about how the Internet works, or Scratch, or Java? Whatever they say — that’s a gap.

RQ: How do we define the right set of examples or questions to probe gaps in understanding of a NM? Can we define it in terms of a NM? We want such a set to lead to reflection and self-explanation that might lead to improved understanding of the NM.

Geoffrey Herman had an interesting way of finding gaps in NM understanding: using historical texts. Turns out Newton used the wrong terms for many physical phenomena, or at least, the terms he used were problematic (“momentum” for both momentum and velocity) and we have better, more exact ones today. Terms that have changed meaning or have been used historically in more than one way tend to be the things that are hard to understand — for scholars past, and for students today.

State

State is a significant source of misconceptions for students. They often don't differentiate input state, output state, and internal states. Visualization of state only works for students who can handle those kinds of abstractions. Specification of a NM through experimentation (trying out example programs) can really help if students see that programs causally determine behavior, and if they have enough cognitive capacity to compute the behavior (and emergent behavior is particularly hard). System state is the collection of smaller states, which is a large tax on cognitive load. Geoffrey told us about three kinds of state problems: control state, data state, and indirection/reference state.

State has temporality, which is a source of misconceptions for students, like the common misconception that assignment statements define a constraint, not an action in time. RQ: Why? Raymond Lister wondered about our understanding of state in the physical world and how that influences our understanding of state in the computational world. Does state in the real world have less temporality? Do students get confused about temporality in state in the physical world?
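A minimal sketch of the constraint-versus-action misconception (my example): read as a standing constraint, total would track later changes to tax; read as an action in time, it does not.

```python
price = 10
tax = 2
total = price + tax   # an action performed now: total becomes 12
tax = 5               # total does NOT update; the assignment was not a constraint
print(total)          # -> 12, not 15
```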

Another source of misconceptions is state in code, which is always invisible. The THEN part of an IF has implicit state — that block gets executed only if the expression is true. The block within a loop is different from the block within a conditional (executed many times, versus at most once), but they look identical. RQ: How common are code state misconceptions?
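A sketch of the invisible code state point (again my example, not from the group): the two indented blocks below look identical on the page, but the notional machine gives them different behavior.

```python
count = 0

if count < 3:
    count += 1        # this block runs at most once; count is now 1

while count < 3:
    count += 1        # this block re-runs until the test is false

print(count)          # -> 3
```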

Scratch has state, but it's implicit in sprites (e.g., position, costume). Deborah Fields and Yasmin Kafai found that students didn't make much use of variables, but maybe because they didn't tackle problems that needed them. RQ: What kinds of problems encourage use of state, and better understanding of state?

RQ: Some functional curricula move students from stateless computation to stateful computation. We don’t know if that’s easier. We don’t know if more/fewer/different misconceptions arise. Maybe the reverse is easier?

RQ: When students get confused about states, how do they think about them? How do they resolve their gaps in understanding?

RQ: What if you start students thinking about data (state) before control? Most introductory curricula start out talking about control structures. Do students develop different understanding of state? Different misconceptions? What if you start with events (like in John Pane’s HANDS system)?

RQ: What if you teach different problem-solving strategies? Can we problematize gaps in NM understanding, so that students see them and actively try to correct them?

March 7, 2016 at 7:59 am

Interaction beats out video lectures and even reading for learning

I’m looking forward to these results!  That interaction is better than video lectures is really not surprising.  That it leads to better learning than even reading is quite a surprise.  My guess is that this is mediated by student ability as a reader, but as a description of where students are today (like the prior posts on active learning), it’s a useful result.

Koedinger and his team further tested whether their theory that “learning by doing” is better than lectures and reading in other subjects. Unfortunately, the data on video watching were incomplete. But they were able to determine across four different courses in computer science, biology, statistics and psychology that active exercises were six times more effective than reading. In one class, the active exercises were 16 times more effective than reading. (Koedinger is currently drafting a paper on these results to present at a conference in 2016.)

Source: Did you love watching lectures from your professors? – The Hechinger Report

January 6, 2016 at 8:12 am

Blog Post #2000: Barbara Ericson Proposes: Effectiveness and Efficiency of Adaptive Parsons Problems #CSEdWeek

My 1000th blog post looked backward and forward.  This 2000th blog post is completely forward looking, from a personal perspective.  Today, my wife and research partner, Barbara Ericson, proposes her dissertation.

Interesting side note: One of our most famous theory professors just blogged on the theory implications of the Parsons Problems that Barb is studying. See post here.

Barb’s proposal is the beginning of the end of this stage in our lives.  Our youngest child is a senior in high school. When Barbara finishes her Human-Centered Computing PhD (expected mid-2017), we will be empty-nesters and ready to head out on a new adventure.

Title: EVALUATING THE EFFECTIVENESS AND EFFICIENCY OF PARSONS PROBLEMS AND DYNAMICALLY ADAPTIVE PARSONS PROBLEMS AS A TYPE OF LOW COGNITIVE LOAD PRACTICE PROBLEM

Barbara J. Ericson
Ph.D. student
Human Centered Computing
College of Computing
Georgia Institute of Technology

Date: Wednesday, December 9, 2015
Time: 12pm to 2pm EDT
Location: TSRB 223

Committee
————–
Dr. James Foley, School of Interactive Computing (advisor)
Dr. Amy Bruckman, School of Interactive Computing
Dr. Ashok Goel, School of Interactive Computing
Dr. Richard Catrambone, School of Psychology
Dr. Mitchel Resnick, Media Laboratory, Massachusetts Institute of Technology

Abstract
———–

Learning to program can be difficult and can result in hours of frustration looking for syntactic or semantic errors. This can make it especially difficult to prepare in-service (working) high school teachers who don't have any prior programming experience to teach programming, since it requires an unpredictable amount of time for practice in order to learn programming. The United States is trying to prepare 10,000 high school teachers to teach introductory programming courses by fall 2016. Most introductory programming courses and textbooks rely on having learners gain experience by writing lots of programs. However, writing programs is a complex cognitive task, which can easily overload working memory and impede learning.

One way to potentially decrease the cognitive load of learning to program is to use Parsons problems to give teachers practice with syntactic and semantic errors as well as exposure to common algorithms. Parsons problems are a type of low cognitive load code completion problem in which the correct code is provided, but is mixed up and has to be placed in the correct order. Some variants of Parsons problems also require the code to be indented to show the block structure. Distractor code can also be provided that contains syntactic and semantic errors.

In my research I will compare solving Parsons problems that contain syntactic and semantic errors to fixing code with the same syntactic and semantic errors, and to writing the equivalent code. I will examine learning from pre- to post-test as well as student-reported cognitive load. In addition, I will create dynamically adaptive Parsons problems where the difficulty level of the problem is based on the learner's prior and current progress. If the learner solves one Parsons problem in one attempt, the next problem will be made more difficult. If the learner is having trouble solving a Parsons problem, the current problem will be made easier. This should enhance learning by keeping the problem in the learner's zone of proximal development, as described by Vygotsky. I will compare non-adaptive Parsons problems to dynamically adaptive Parsons problems in terms of enjoyment, completion, learning, and cognitive load.

The major contributions of this work are a better understanding of how variants of Parsons problems can be used to improve the efficiency and effectiveness of learning to program, and how they relate to code fixing and code writing. Parsons problems can help teachers practice programming in order to prepare them to teach introductory computer science at the high school level, and potentially help reduce the frustration and difficulty all beginning programmers face in learning to program.
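For readers who haven't seen the format, here is a minimal sketch of my own (in Python, not from Barb's actual materials) of a Parsons problem with a distractor, plus the flavor of the adaptation rule described in the abstract; the step sizes and thresholds below are placeholders, not hers.

```python
# The learner receives these lines scrambled and must order (and indent)
# them to sum the even numbers in a list. The distractor line has a
# semantic error and should be left out of the solution.
correct_solution = [
    "total = 0",
    "for n in numbers:",
    "    if n % 2 == 0:",
    "        total = total + n",
    "print(total)",
]
distractors = [
    "        total = n",   # overwrites the running total instead of adding to it
]

def adapt(level, solved_in_one_attempt, attempts_so_far):
    """Placeholder version of the adaptive rule: solve a problem in one
    attempt and the next one is made harder; a learner who struggles has
    the difficulty dialed back."""
    if solved_in_one_attempt:
        return level + 1                 # e.g., add a distractor
    if attempts_so_far >= 3:
        return max(1, level - 1)         # e.g., remove a distractor
    return level
```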

 

December 9, 2015 at 7:37 am

Cognitive Load as a Significant Problem in Learning Programming: Briana Morrison’s Dissertation Proposal

Briana Morrison is defending her proposal today.  One chapter of her work is based on her ICER 2015 paper that won the Chairs Award for best paper (see post here). Good luck, Briana!

Title: Replicating Experiments from Educational Psychology to Develop Insights into Computing Education: Cognitive Load as a Significant Problem in Learning Programming

Briana Morrison
Ph.D. student
Human Centered Computing
College of Computing
Georgia Institute of Technology

Date: Wednesday, November 11, 2015
Time: 2 PM to 4 PM EDT
Location: TSRB 223

Committee
————–
Dr. Mark Guzdial, School of Interactive Computing (advisor)
Dr. Betsy DiSalvo, School of Interactive Computing
Dr. Wendy Newstetter, School of Interactive Computing
Dr. Richard Catrambone, School of Psychology
Dr. Beth Simon, Jacobs School of Engineering at University of California San Diego and Principal Teaching and Learning Specialist, Coursera

Abstract
———–
Students often find learning to program difficult. This may be because the concepts are inherently difficult due to the fact that the elements of learning to program are highly interconnected. Instructors may be able to lower the complexity of learning to program by designing instructional materials that use educational psychology principles.

The overarching goal of this research is to gain more understanding and insight into the optimal conditions under which learning programming can be successful, which is defined as students being able to apply their acquired knowledge and skills in new or familiar problem-solving situations. Cognitive load theory (CLT), and its associated effects, describe the role of the learner's memory during the learning process. By minimizing undesirable loads within the instructional materials, the learner's memory can hold more relevant information, thereby improving the effectiveness of the learning process.

This proposal uses cognitive load theory to improve learning in programming. First, an instrument for measuring cognitive load components within introductory programming was developed and initially validated. We have explored reducing the cognitive load by changing the modality in which students receive the learning material. This had no effect on novices' retention of knowledge or their ability to transfer knowledge. We then attempted to reduce the cognitive load by adding subgoal labels to the instructional material. This had some effect on the learning gains under some conditions. Students who learned using subgoal labels demonstrated higher learning gains than the other conditions on the programming assessment task. We also explored using a low cognitive load assessment task, a Parsons problem, to measure learning gains. This low cognitive load assessment task proved more sensitive than the open-ended programming assessment tasks in capturing student learning. Students who were given subgoal labels, regardless of context transfer condition, outperformed those in the other conditions.

In my final, proposed study I change how we teach a programming construct through its format and content in order to reduce cognitive load. The changed construct is presumed to be a more natural cognitive fit for students based on previous research.

November 11, 2015 at 8:48 am
