Posts tagged ‘computing education research’

Teaching Computational Thinking across an Entire University, With Guest Blogger Roland Tormey

During Spring Break, Barbara and I were invited to Switzerland.  Sure, most people go someplace warm for Spring Break; we headed for the mountains!

Roland Tormey organized a fascinating workshop at EPFL in Lausanne, Switzerland (see workshop page here) to inform a bold and innovative new effort at EPFL. They want to integrate computational thinking across their entire university, from required courses for freshmen, to support for graduate students doing Computational X (where X is everything that EPFL does).  The initiative has the highest level of administrative support, with the President and Vice-President of Education for EPFL speaking at the workshop.  The faculty really bought in — the room held 80-some folks, and it was packed most of the day.

Roland got a good videographer who captured both of the keynotes well.  I gave the first keynote on “Improving Computing Education with Learning Sciences: Methods for Teaching Computing Across Disciplines.”  I argued that we need different methods to teach computing across the curriculum — we can’t teach everyone the same way we teach CS majors who are aiming to become software developers.  I talked about Media Computation, predictions (the video even captured my ukulele audio demo well), subgoal labeling, and Parsons problems.

Shriram Krishnamurthi gave the second keynote on “Curriculum Design as an Engineering Problem.”  He talked about the problems of transfer and how Bootstrap works.  I liked how he broke down the problem of transfer — there are three requirements: deep structural similarities between the problems, explicit instruction, and a process for performing tasks.  He showed how other design disciplines have multi-stage processes, use multiple representations in their designs, and look at problems from multiple viewpoints.  Mostly in CS classes, we just code.  I learned how Bootstrap scaffolds problem-solving and includes all of those elements.  I recommend the talk.

Barb’s panel on teaching computational thinking wasn’t captured on video.  She talked about the methods she’s developed for teaching computing, including her great results on Parsons problems.  In a short talk, she gave a lot of pointers to her own work and others’ on how to teach CT.

Roland sent me a note with what he took away from the workshop. I thought it was a great list, so with his permission, I’m including it here:

For me, we also had a lot of other valuable take home points from the day:

(1) We need to work on putting computational thinking (and maybe math and physics too) into the context of the students’ own disciplines — at least through the examples and exercises we choose.

(2) The drive to better develop scientific thinking in disciplines like chemistry and life sciences and the development of CT are entirely consistent, but one shouldn’t eclipse the other. It’s not about replacing existing scientific processes with CT. It’s about augmenting them.

(3) We need to help professors gather data on effective methods of teaching, as well as help them become aware of methods with demonstrated effectiveness (like Parsons problems, for example).

(4) The exercises and exercise sessions will be crucial for making the link between CT and disciplines, but this implies giving the doctoral and teaching assistants a clear understanding of the goals and methods of CT. They have to understand what we are trying to achieve.

(5) CT provides an understanding of, a language for, and a toolbox for analysing processes, and these can be applied in a lot of domains. However, that is not going to happen unless we explicitly teach CT in ways that promote near and far transfer.

(6) We need to make the most of the EPFL initiative by properly evaluating the impact, which implies the need to collect some pre-intervention data now.

April 20, 2018 at 7:00 am 9 comments

A job is a strange outcome measure: Udacity drops money-back guarantee on finding a job

Udacity has dropped the money-back guarantee that it was offering to students in some of its Nanodegree programs. The guarantee (with stipulations and caveats) was that students would find a job after earning the Nanodegree, or they would get their money back.

An article in Inside Higher Ed (quoted below and linked here) describes some of the tensions. Some other for-profit coding schools offer similar or better guarantees; others do not. Ryan Craig, quoted below, suggests that Udacity might not have been hitting its targets for job placements. Does that mean that Udacity was doing something wrong?

A job is such a strange outcome measure for any kind of educational program.  I know some techniques for evaluating someone’s knowledge of programming, and I know how to create educational opportunities that might lead to successful evaluation.  There are factors like student attitude and motivation and whether students engage in deliberate practice that are not entirely within my control.  Even then, I’d be willing to say, “I can design a program where the majority of students will achieve this level of proficiency in coding.”  But a job?  Where I can’t control how the students interview, or where they apply, or what the companies are looking for (if they’re looking at all)?

A job is not a well-defined outcome measure for an educational intervention. That may be what the students are seeking, but they are being unrealistic if they think that any school can guarantee them that.

Ryan Craig, managing director of investment company University Ventures, noted that none of the major employers associated with Udacity will publicly commit to hire or interview nanodegree candidates. Craig pointed to a 2017 report from VentureBeat, which stated that of around 10,000 students who had earned nanodegrees since 2014, around 1,000 had found jobs as a result. “A placement rate of around 10 percent should spell the demise of any last-mile training program,” said Craig.

Craig said the effectiveness of Udacity’s job guarantee was likely very limited for students. “Money-back guarantees don’t address the real guarantee that students are seeking: a job,” said Craig.

Daniel Friedman, co-founder of coding school Thinkful, wrote in January 2016 that Udacity’s guarantee was vaguer and weaker than the guarantees offered by his own company and others such as Bloc and Flatiron School. Such guarantees are common at coding schools, though Friedman noted that some schools have had to drop guarantees because they conflicted with state regulations.

April 13, 2018 at 7:00 am 3 comments

Teaching to develop a mental model of program behavior: How do students learn the notional machine?

“To understand a program you must become both the machine and the program” – Perlis 1982, cited in Sorva 2013

I’ve been thinking for a few years now about an open research question in computing education: How do students come to understand how programs work? Put more technically: How do students develop their mental model of the language’s notional machine?

I have been thinking about this question in terms of Ashok Goel’s Structure-Behavior-Function (SBF) model of how people think about systems.

  • Structure is the parts of the system — for us in CS, think about the code.
  • Function is what the system does — for us in CS, the requirements or the description of what the program is supposed to do.
  • Behavior is how the structural elements interact to achieve the function. It’s the understanding of the semantic model of the programming language (the notional machine) plus how that plays out in a specific program.
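To make the SBF distinction concrete, here is a minimal sketch (my example, not Goel’s) applied to a tiny program:

# Structure: the code itself -- a loop, a test, and a decrement.
def countdown(n):
  while n > 0:
    print(n)
    n = n - 1

# Function: what the program is supposed to do -- print n down to 1.
# Behavior: how the structure achieves the function -- each pass tests
# n > 0, prints n, then decrements n, until the test fails and the
# loop exits.
countdown(3)  # prints 3, then 2, then 1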

There are studies of students learning notional machines (e.g., Raymond Lister and Juha Sorva are some of the top researchers here). What I don’t know is how that understanding develops and how to help it develop. Lister tells us the stages of development (e.g., of tracing skill). Sorva tells us about theories of how to teach the notional machine, but with little evidence. We have models for how people learn to read and write code (e.g., Elliot Soloway’s plans). But we do not have a cognitive model for how people develop a mental model of how the code works.

A Pedagogical Problem That I Faced

I’m teaching Media Computation (n=234) this semester, and students had a disappointing performance on two programming problems on a recent quiz. (We have a 30-minute quiz every other week.) They didn’t really bomb, but an average of 82% on one programming problem (Problem #3 below) and 76% on the second (Problem #4) was lower than I was hoping for. Those averages are mostly made up of partial credit — only 25 of my 234 students got full credit on Problem #4. Worse yet, we had a “simple” matching problem where we offered four pictures and four programs — which program generated which picture? More than half the students got at least two wrong. The score on the matching problem was 72%, even lower than on the programming problems. My conclusion is that my students can’t yet read code and understand it.

How do I teach my students to understand code?

With my researcher hat on, I don’t have a solid answer. With my teacher hat on, I have to do something. So, I drew on what I know from the research to come up with a best guess solution.

I decided to drop the next two lecture topics from the schedule and instead re-focus on manipulation of pictures. I know from the learning sciences literature that it’s much better to go deep rather than broad. Teaching for mastery is more important than teaching for coverage. Things that students know well are more likely to persist and transfer than things that students are merely familiar with.

I decided to do a live-coded session revisiting the quiz. I had graded a bunch of the programming problems on the quiz. I saw several different strategies for solving those two problems. I had a unique teachable moment here — every student had attempted these two problems. They were primed for the right answer. I decided to solve the problems in a live-coding session (starting from a blank editor, and talking aloud as I wrote the code) in each of the ways that I saw students doing it — three ways for the first problem, four ways for the second problem. While I wrote, I drew pictures to describe the behavior, drawing from Sorva’s visualization approach and the SILC emphasis on sketching rather than diagrams. After writing each program, I tested it on a picture. Along the way, I answered questions and wrote additional examples based on those questions.

This idea is based on Marton’s Variation Theory. You have to vary critical aspects of examples for students to figure out the differences. Janet Kolodner talks about a similar idea when she emphasizes contrasting cases for learning informed by case-based reasoning.  In SBF terms, I was keeping the Function constant, but varying the Structure and Behavior. In Goal-Plan-Code terms, I was achieving the same Goal, but varying the underlying Plan and Code.

Could an exploration of these variations/contrasts help students see how code changes relate to behavior changes?  I don’t actually know how to evaluate the result as a researcher, but as a teacher, I got a good response from students.  I’m looking forward to seeing how they do on similar problems on future quizzes.

The rest of this blog post is a static replay of the lecture. I’ll show you the slides I showed, the sketches I made, and the code I wrote (while talking aloud).

Problem clearTopHalf

Solution #1: Iterate through all pixels

def clearTopHalf1(pic):
  h = getHeight(pic)
  for pixel in getPixels(pic):
    y = getY(pixel)
    if y < h/2:
      setColor(pixel,white)

Solution #2: Iterate through half of the pixel indices

def clearTopHalf2(pic):
  all = getPixels(pic)
  for index in range(0,len(all)/2):
    pixel = all[index]
    setColor(pixel,white)

Solution #3: Iterate through all x and y positions in the top half

def clearTopHalf3(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  for y in range(0,h/2):
    for x in range(0,w):
      pixel = getPixel(pic,x,y)
      setColor(pixel,white)

A student asked, “Could we do x first and then y?” Sure!

def clearTopHalf3b(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  for x in range(0,w):
    for y in range(0,h/2):
      pixel = getPixel(pic,x,y)
      setColor(pixel,white)

Pause for Reflection and Discussion

At this point, I asked students to turn to the person next to them and ask, “Which one do you prefer? Which makes the most sense to you?”

I always encourage students to discuss during peer instruction questions. I have never had such an explosion of noise as I did with this invitation. From wandering around the room, what I heard students discussing was, “This is what I did, and this is what I got wrong.”

When we had the whole class discussion, the first and third approaches (all pixels or coordinates) were the preferences. Students who spoke up disliked the index approach — it felt “like there’s too much indirection” (said one student).

Problem copyThirdDown

Solution #1: Iterating through all pixels

def copyAThird1(pic):
  h = getHeight(pic)
  for pixel in getPixels(pic):
    x = getX(pixel)
    y = getY(pixel)
    if y < h/3:
      targetPixel=getPixel(pic,x,y+(2*h/3))
      setColor(targetPixel,getColor(pixel))

Solution #2: Iterate through first 1/3 of pixels

def copyAThird2(pic):
  all = getPixels(pic)
  h = getHeight(pic)
  for index in range(0,len(all)/3):
    pixel = all[index]
    color = getColor(pixel)
    x = getX(pixel)
    y = getY(pixel)
    targetPixel = getPixel(pic,x,y+(2*h/3))
    setColor(targetPixel,color)

Solution #3: Iterate through top 1/3 of picture by coordinates

def copyAThird3(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  for x in range(0,w):
    for y in range(0,h/3):
      pixel = getPixel(pic,x,y)
      color = getColor(pixel)
      # Copies: the target pixel is 2h/3 below the source pixel
      targetPixel = getPixel(pic,x,y+(2*h/3))
      setColor(targetPixel,color)

At this point, someone said that they did it by subtracting y from the height. I showed them that this approach mirrors the top third instead of copying it. This was the first incorrect solution that I demonstrated.

def copyAThird3b(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  for x in range(0,w):
    for y in range(0,h/3):
      pixel = getPixel(pic,x,y)
      color = getColor(pixel)
      # Mirrors instead of copies
      targetPixel = getPixel(pic,x,h-y-1)
      setColor(targetPixel,color)

Solution #4: Iterating through the bottom 1/3 of the picture by x and y coordinates

This was an unusual approach that I saw a few students try: They used nested loops to iterate through the coordinates of the bottom 1/3 of the pixels, and then computed the corresponding pixel in the top 1/3 to copy down. They iterated through the target and computed the source.

def copyAThird4(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  for x in range(0,w):
    for y in range(2*h/3,h):
      targetPixel=getPixel(pic,x,y)
      srcPixel=getPixel(pic,x,y-(2*h/3))
      setColor(targetPixel,getColor(srcPixel))

After I wrote that program, someone asked, “Couldn’t you make an empty picture and copy the top third into the new picture at the top and bottom?” With her guidance, I modified the program above to create a new version, which does exactly what she described, leaving the middle third blank. So, this was the second incorrect version I wrote in response to student queries.


def copyAThirdEmpty(pic):
  h = getHeight(pic)
  w = getWidth(pic)
  canvas = makeEmptyPicture(w,h)
  for x in range(0,w):
    for y in range(0,h/3):
      pixel = getPixel(pic,x,y)
      color = getColor(pixel)
      targetPixel = getPixel(canvas,x,y)
      setColor(targetPixel,color)
      targetPixel = getPixel(canvas,x,y+(2*h/3))
      setColor(targetPixel,color)
  explore(canvas)

When I asked students a second time which version made the most sense to them, there was a bigger split. Indexing the array continued to be the least preferred one, but students liked both versions with nested loops, and many still preferred the first version.

April 6, 2018 at 7:00 am 14 comments

New programming languages are important to develop as we improve our knowledge of how students learn computing

I was at a workshop at Google a couple of weeks ago where someone asked me, “Do you still think that there’s a place for developing new programming languages in computing education?” I said, “ABSOLUTELY!”

We know little about how people learn programming, and developing new programming languages is important for improving usability, learnability, and productivity of programmers (professional, novice, end-user, casual, or conversational). The interplay between design of programming languages and research into how people learn programming languages is a hot and important research topic. (See, for example, the recent Dagstuhl seminar on empirical data for programming language design.)

My Blog@CACM post for this month (see link here) is based on the cover story for the March Communications of the ACM (CACM), on “A Programmable Programming Language.” The (interesting and recommended) article is on building problem-specific programming languages. My post was about the educational questions raised by these languages. Would they be easier or harder to learn if they’re problem-specific? Will novices be willing to put in the effort to learn a programming language that is specific to a problem? Do problem-specific languages make it harder or easier to find (or train) programmers to work on old software (built in these problem-specific languages)? If a programmer learns a problem-specific programming language created at Company X, then leaves for Company Y and creates a similar problem-specific programming language, was intellectual property stolen?

Barbara Ericson’s defense was March 12 (as mentioned here). It was very successful — not only did she pass, but all of her committee signed off on the same day. She’s Dr. Ericson!

Alan Kay was on her committee and asked some insightful questions about her work with Parsons problems. In a Parsons problem, students order mixed-up lines of code into a correct solution. Barb did her research using Python, and she’s also done work with Parsons problems in Java. These are pretty similar languages in terms of notional machines.
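For readers who haven’t seen one, here is a small made-up example of the task (mine, not one of Barb’s actual study problems):

# The learner sees these correct lines in scrambled order:
#
#     return total
#   def sumList(numbers):
#       total = total + number
#     total = 0
#     for number in numbers:
#
# The task is to arrange them into the working function:
def sumList(numbers):
  total = 0
  for number in numbers:
    total = total + number
  return total

print(sumList([1, 2, 3]))  # prints 6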

What’s the influence of the programming language on student success with Parsons problems? What if the underlying notional machine was simpler to understand? Would students find it easier to sequence a program? In general, we explore non-imperative programming paradigms so rarely in computing education research. We change modality (e.g., Scratch), but not the underlying computational model. The work with Racket is a rare example. Alan mentioned HyperCard in his comments, which was explicitly designed to be easy to learn. Would HyperCard programs be easier for students to order correctly?

I hope that we continue to invent new programming languages and explore the educational implications of them. There’s a big space of possible designs, and we have only started evaluating them empirically.

March 26, 2018 at 7:00 am Leave a comment

When more information leads to worse performance: Beware throwing in “something fun and totally optional”

Eliane Wiese gave a talk here this last week. She told a story that I found fascinating. It connects to a story about Kahneman and Tversky that I just read. The theme has important implications for the design of software for CS education.

Story One: In Eliane’s dissertation work she explored how to give grounded feedback that would lead students to learn from mistakes. Here (in summary form) is the result of one of her studies.

In some questions, students were shown graphical representations of fractions. In other questions, they were shown some combination of graphical representations and symbolic fractions. In a fourth kind of question, they were shown only symbolic fractions. The vertical axis in her results chart is performance.

The part that I find amazing is the results for conditions two and three for fraction addition. Getting more information led to worse performance. Symbolic fractions are so confusing that their appearance depresses performance, even when the graphical information is still there. The students don’t just ignore the fractions. The mere presence of the fractions makes the problem harder for students.

(Original paper available here. Her follow-up/replication study can be found here. Thanks to Eliane for reviewing this post and sending me these links!)

Story Two: I just finished reading The Undoing Project (Amazon link) by Michael Lewis, the story of Daniel Kahneman and Amos Tversky’s amazing collaboration and friendship. One of their experiments is particularly relevant to Eliane’s finding.

You tell people that they’re going to pick a person at random from a pool of 100 people, 70 of whom are engineers and 30 of whom are lawyers. What is the probability that you’re going to get an engineer? Participants in the studies correctly guess 70%. You can change it to lawyers, or change around the ratios, and people solve this problem correctly and easily.

Now you tell them that, from the same pool, they have selected “Dick.”

Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

Now, what is the probability that Dick is an engineer? Participants say that the probability is 50% — they can’t tell. Notice that the description of Dick offers no additional information to discern whether he is an engineer or a lawyer. Since the description is uninformative, the base rate still applies, and the right answer is still 70%. Yet, people can’t ignore the useless descriptive information. They can’t just rely on the numbers. Getting more information leads to worse performance. People seem to feel a need to use all available information, even if it’s not useful, even if it leads to worse performance.

What’s the implication for CS Ed? Our programming languages and professional IDEs are complex. How about public static void main(String[] args)? How about all the bells and whistles in Eclipse?

When I point these out to teachers, the most common response I get is, “It’s okay. Students just ignore that part.”

I’m not sure that they do, or that they even can. People try to make sense of the information in front of them. We are drawn to create narratives. It is difficult for us to ignore information and make decisions based on only the relevant information. This is particularly hard for novices who don’t understand the relevant information, let alone separate the relevant from the irrelevant.

Before we toss something into our classes, we should pause and consider these stories. Sure, your CS1 students could use a cool new library that lets them do something cool (whatever — robotics, data visualizations, social network analysis) but has a confusing API and almost no documentation. The new library will consume their time and effort to understand. Sure, you might decide to introduce something (maybe list comprehensions or lambda expressions) into your Python code, just as “something fun” and “totally optional.” But students will try to understand it, and might not learn the things you really want them to learn. Sure, you could throw in a quick algorithm animation or use some super cool new debugger, but if your students are already confused, you’ve now just given them yet another representation or interface to make sense of. Think about the fact that the additional/extra/irrelevant information may be distracting your students from what is important. And that might lead to worse performance.
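To make the list comprehension case concrete, here is a small illustration (my example):

# The loop form that CS1 students have been taught:
squares = []
for n in range(10):
  squares.append(n * n)

# The "fun, totally optional" one-liner tossed into a lecture example:
squares = [n * n for n in range(10)]

# Both build the same list, but the comprehension is a whole new
# notation that novices will still try to make sense of.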

March 23, 2018 at 7:00 am 15 comments

Announcing Barbara Ericson’s Defense on Effectiveness and Efficiency of Parsons Problems and Dynamically Adaptive Parsons Problems: Next stop, University of Michigan

Today, Barbara Ericson defends her dissertation. I usually do a blog post talking about the defending student’s work as I’ve blogged about it in the past, but that’s really hard with Barb.  I’ve written over 90 blog posts referencing Barb in the last 9 years.  That happens when we have been married for 32 years and collaborators on CS education work for some 15 years.

Barb did her dissertation on adaptive Parsons problems, but she could have done it on Project Rise Up or on a deeper analysis of her years of AP CS data. She chose well. Her results are fantastic, and they are summarized below. (Yes, she does have six committee members, including two external members.)

Starting September 1, Barbara and I will be faculty at the University of Michigan. Barb will be an assistant professor in the University of Michigan School of Information (UMSI). I will be a professor in the Computer Science and Engineering (CSE) Division of the Electrical Engineering and Computer Science Department, jointly with their new Engineering Education Research program. Moving from Georgia Tech and Atlanta will be hard — all three of our children will still be here as we leave. We are excited about the opportunities and new colleagues that we will have in Ann Arbor.

Title: Evaluating the Effectiveness and Efficiency of Parsons Problems and Dynamically Adaptive Parsons Problems as a Type of Low Cognitive Load Practice Problem

Barbara J. Ericson

Human-Centered Computing

School of Interactive Computing

College of Computing

Georgia Institute of Technology

Date: Monday, March 12, 2018

Time: 12pm – 3pm

Location: TSRB 222

Committee:

Dr. Jim Foley (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Amy Bruckman (School of Interactive Computing, Georgia Institute of Technology)

Dr. Ashok K. Goel (School of Interactive Computing, Georgia Institute of Technology)

Dr. Richard Catrambone (School of Psychology, Georgia Institute of Technology)

Dr. Alan Kay (Computer Science Department, University of California, Los Angeles)

Dr. Mitchel Resnick (Media Laboratory, Massachusetts Institute of Technology)

Abstract:

Learning to program can be difficult and time consuming.  Learners can spend hours trying to figure out why their program doesn’t compile or run correctly. Many countries, including the United States, want to train thousands of secondary teachers to teach programming.  However, busy in-service teachers do not have hours to waste on compiler errors or debugging.  They need a more efficient way to learn.

One way to reduce learning time is to use a completion task.  Parsons problems are a type of code completion problem in which the learner must place blocks of correct, but mixed up, code in the correct order. Parsons problems can also have distractor blocks, which are not needed in a correct solution.  Distractor blocks include common syntax errors like a missing colon on a for loop or semantic errors like the wrong condition on a loop.

In this dissertation, I conducted three studies to compare the efficiency and effectiveness of solving Parsons problems, fixing code, and writing code. (Editor’s note: I blogged on her first study here.) I also tested two forms of adaptation. For the second study, I added intra-problem adaptation, which dynamically makes the current problem easier.  For the last study, I added inter-problem adaptation which makes the next problem easier or harder depending on the learner’s performance.  The studies provided evidence that students can complete Parsons problems significantly faster than fixing or writing code while achieving the same learning gains from pretest to posttest.  The studies also provided evidence that adaptation helped more learners successfully solve Parsons problems.

These studies were the first to empirically test the efficiency and effectiveness of solving Parsons problems versus fixing and writing code.  They were also the first to explore the impact of both intra-problem and inter-problem adaptive Parsons problems.  Finding a more efficient and just as effective form of practice could reduce the frustration that many novices feel when learning programming and help prepare thousands of secondary teachers to teach introductory computing courses.
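To give a feel for the distractor idea in the abstract above, here is a made-up example (mine, not from Barb’s study materials) with one distractor pair:

# Blocks to arrange; one of the first two is a distractor:
#
#   for i in range(0,5)       <- distractor: missing colon
#   for i in range(0,5):      <- correct block
#     print(i)
#
# Correctly assembled solution:
for i in range(0,5):
  print(i)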

March 12, 2018 at 7:00 am 7 comments

Exploring the question of teaching recursion or iterative control structures first

Someone raised the question on the SIGCSE Members list: Which should we teach first, iteration or recursion?

I offered this response:

The research evidence suggests that one should teach iterative control structures before recursion, IF you’re going to teach both.  If you are only going to teach one, recursion is easier for students.  If you teach recursion first, the evidence (Kessler & Anderson, 1986; Wiedenbeck, 1989) suggests that it becomes harder to learn the iterative control structures.

The push back I got was, “Surely, we have better data than 30-year-old studies?!?”  Here was my reply:

I agree that it would be great to do these studies again.  Given that we have an experiment and a successful replication, it could be an MS or advanced undergrad project to replicate one of those earlier experiments.

For myself, I don’t expect much difference.  As you say, student brains have stayed the same.  While the languages have changed, the basic iterative control structures (for, while, repeat) haven’t changed much in modern languages from what they were in C and even Pascal.  Curriculum may be a factor, and that would be interesting to explore.

Two directions that I think would be great to explore in this space:

(1) The Role of Block-Based Languages: As you say, the previous research found that iterative control structures are syntactically complicated for novices.  But multiple studies have found that block-based iterative structures are much easier for novices than text-based versions.  What if we went recursion->block iteration->text iteration?  Would that scaffold the transition to the more complicated text-based iterative control structures?

(2) The Role of High-Level Functions: I don’t know of any studies exploring high-level functions (like the ones that Kathi Fisler used to beat the Rainfall Problem, or even map/reduce/filter) in the development of understanding of recursion and iterative control structures.  High-level functions have a fixed form, like for/repeat/while, but it’s a simpler, functional form.  Could we teach high-level functions first, to lead into recursion or iterative control structures?  Or maybe even teach recursion or iterative control structures as two different ways of implementing the high-level functions?
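Here is a sketch of that last idea (my illustration, not drawn from any of the studies mentioned): the same high-level function, implemented once with an iterative control structure and once with recursion.

# A classroom version of map, written both ways (illustration only).
def myMapIterative(fn, items):
  result = []
  for item in items:
    result.append(fn(item))
  return result

def myMapRecursive(fn, items):
  if items == []:
    return []
  # Apply fn to the first item, then recurse on the rest.
  return [fn(items[0])] + myMapRecursive(fn, items[1:])

print(myMapIterative(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]
print(myMapRecursive(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]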

In general, there are too many questions to explore and too few people asking these questions with empirical data. We might rely on our teaching experience to inform our answers to these questions, but as Neil Brown showed us (see CACM Blog post this month that talks about this result), higher-education CS teachers are actually way off when it comes to estimating what students find hard.

SIGCSE-Members, please consider asking some of these questions on your campus with your students. There are well-formed questions here that could be answered in a laboratory study that could be encapsulated in a single semester.  The students will get the opportunity to do empirical research with humans, which is a useful skill in many parts of computing.

March 9, 2018 at 7:00 am 3 comments
