Posts tagged ‘contextualized computing education’

Is Liveness a critical factor in learning Computer Science? Context, motivation, and feedback for learning programming

My CACM Blog post for November is on the topic of Direct Instruction, why it’s better than Discovery Learning, and how we should teach programming “directly.”

I wonder about the limitations of Direct Instruction.  I don’t think everything can be learned with direct instruction, even with deliberate practice.

At SIGCSE 2016, John Sweller made a provocative claim (that I haven’t yet found in his published papers).  He said that humans must be able to learn higher-order thinking skills.  We’d be dead if we didn’t. However, we cannot teach them.  Students have to figure them out from experience. Is programming a similar kind of task?

I have been studying Spanish in Duolingo, with a streak of over 600 days now. Duolingo is the best direct instruction I've ever had.  Everything I do is deliberate practice — it's really good at figuring out what I'm not good at and giving me more problems on that.  I am nowhere near fluent.  I know some words. I can read some. I'm getting better at hearing. I am not fluent.  Maybe learning natural languages and programming languages both requires more than direct instruction.

What leads to fluency, in natural languages or programming languages?  I suspect that part of it is context and motivation.  You have to be in a position to want to say something (in a natural or programming language) in order to learn it.

But I also think it’s about feedback.  I don’t really learn Spanish well because I’m rarely in a position to use it. If I did, I’d get a response to what I said. Can anyone learn to program without trying to write some code and getting feedback on whether it works? The issue of feedback came up several times in the recent discussion about the relationship between teaching programming and teaching composition.

Steven Tanimoto talks about the value of "liveness" in a programming environment (see paper here), which is about the ease of writing code and getting different kinds of feedback on it. Maybe liveness encapsulates the kinds of things we need for successful CS learning. Of course, even "liveness" doesn't give the kind of feedback that a human reader can, but it does shorten the feedback loop.

November 30, 2018 at 7:00 am 17 comments

Designing for Wide Walls with Contextualized Computing Education

Nice blog post from Mitchel.  The wide walls metaphor is an argument for contextualized computing education.  Computing is a literacy, and we have to offer a variety of genres and purposes to engage students.

But the most important lesson that I learned from Seymour isn't captured in the low-floor/high-ceiling metaphor. For a more complete picture, we need to add an extra dimension: wide walls. It's not enough to provide a single path from low floor to high ceiling; we need to provide wide walls so that kids can explore multiple pathways from floor to ceiling.

Why are wide walls important? We know that kids will become most engaged, and learn the most, when they are working on projects that are personally meaningful to them. But no single project will be meaningful to all kids. So if we want to engage all kids—from many different backgrounds, with many different interests—we need to support a wide diversity of pathways and projects.

Source: Mitchel Resnick: Designing for Wide Walls | Design.blog

November 9, 2016 at 7:37 am 2 comments

The Future of Computing Education is beyond CS majors: Report from Snowbird #CSforAll

Last week, I attended the Computing Research Association (CRA) Snowbird conference of deans and chairs of computing. (See agenda here with slides linked.)  I presented on a panel on why CS departments should embrace computing education research, and another on what CS departments can do to support the CS for All initiative. I talked in that second session about the leadership role that universities can play in creating state partnerships and influencing state policy (see the handout for my discussion table).

Amy Ko was in both sessions with me, and she’s already written up a blog post about her experiences, which match mine closely (including the feeling of being an imposter).  I recommend reading her post.

Here, I'm sharing a key insight that I took away from Snowbird.  Before the conference even started, our Senior Associate Dean for the College of Computing, Charles Isbell, challenged me to name another field that is overwhelmed with majors AND offers service courses to so many other majors. (Maybe biology, because of pre-meds?)  Computer Science is increasingly the provider of courses to non-CS majors, and those majors want something different from what CS majors want.

The morning of the first day was dedicated to the enrollment surge.  CRA has been gathering data at many institutions on the surge, and Tracy Camp did a great job presenting some of the results.  (Her slides are now available here, so you don't have to rely on my pictures of them.) Here's the bottom line: Student growth has been enormous (across different types of institutions), without a matching growth in faculty.  The workload is increasing.

[Figure: Growth in CS course enrollments across institution types]

But here's the surprise: Much of the growth in course enrollment is not from CS majors.  A large percentage of the growth is in other majors taking CS classes. The graph below is for "mid-level" CS courses, and there are similar patterns in intro and upper-level courses.

[Figure: Growth in non-major enrollment in CS courses]

Tracy also presented a survey of students (slides available here), which was really fascinating.  Below are results from a survey of (a lot of) intro students at several institutions.  All the differences described are significant at p<0.05 (not 0.5, as the slide says).  The difference between what non-majors want and what CS majors want is interesting.  Majors want (significantly more than non-majors) to "make a lot of money."  Non-majors are significantly more likely to want to "Give back to my community" and "Take time off work to care for family."

[Figure: Survey results: community-oriented vs. money-oriented goals]

U. Illinois has the most innovative program I have heard of for meeting these new needs.  They are creating a range of CS+X degree programs.  First, these CS+X programs are significant parts of the “X” departments.

[Figure: CS+X share of other departments]

But these stats blew me away: CS+X is now 30% of all of CS at U. Illinois (which is a top-5 CS department), and 50% of all admitted first years this year! And it’s 28% female.

[Figure: CS+X enrollment statistics]

It's pretty clear to me that the future of computing education is as much about providing service to other departments as it is about our own CS major.  We have suspected for a while that the growth is in the non-majors, but now we have empirical evidence.  I've been promoting the idea of contextualized computing education, and the notion that other majors need a different kind of CS than CS majors do.  We need to take seriously the education of non-CS majors in computer science.

July 25, 2016 at 7:16 am 10 comments

Blog Post #1999: The Georgia Tech School of Computing Education #CSEdWeek

Three and a half years, and 1000 blog posts ago, I wrote my 999th blog post about research questions in computing education (see post here). I just recently wrote a blog post offering my students’ take on research questions in computing education (see post here), which serves to update the previous post. In this blog post, I’m going to go more meta.

In my CS Education Research class (see description here), my students read a lot of work by me and my students, some work on EarSketch by Brian Magerko and Jason Freeman, and some by Betsy DiSalvo. There are other researchers doing work related to computing education in the College of Computing at Georgia Tech, notably John Stasko's work on algorithm visualization, Jim Foley's work on flipped classrooms (predating MOOCs by several years), and David Joyner and Ashok Goel's work on knowledge-based AI in flipped and MOOC classrooms. My students know some of this work, too. I posed this question to my students:

If you were going to characterize the Georgia Tech school of thought in computing education, how would you describe it?

We talked some about the contrasts. Work at CMU emphasizes cognitive science and cognitive tutoring technologies. Work at the MIT Media Lab is constructionist-based.

[Figure: The Georgia Tech school of computing education]

Below is my interpretation of what I wrote on the board as they called out comments.

  • Contextualization. The Georgia Tech School of Computing Education emphasizes learning computing in the context of an application domain or non-CS discipline.
  • Beyond the average white male. We are less interested in supporting the current majority learner in CS.
  • Targeted interventions. Georgia Tech computing education researchers create interventions with particular expectations or hypotheses: we want to attract this kind of learner, we aim to improve learning, or we aim to improve retention. We make public bets before we try something.
  • Broader community. Our goal is to broaden participation in computing, to extend the reach of computer science.
  • We are less interested in making good CS students better. To use an analogy, we are not about raising the ceiling. We’re about pushing back the walls and lowering the floors, and sometimes, creating whole new adjacent buildings.
  • We draw on learning sciences theory, which includes cognitive science and educational psychology (e.g., cognitive load theory).
  • We draw on social theories, especially distributed cognition, situated learning, social cognitive theory (e.g., expectancy-value theory, self-efficacy).

I might have spent hours coming up with a list like this, but in ten minutes, my students came up with a good characterization of what constitutes the Georgia Tech School of Thought in Computing Education.

December 7, 2015 at 7:43 am 1 comment

Why the Maker Movement is important for Schools: Outside the Skinner Box

I liked Gary Stager’s argument in the post below about what’s important about the Maker Movement for schools: it’s authentic in a physical way, and it contextualizes mathematics and computing in an artistic setting.

For too long, models, simulations, and rhetoric limited schools to abstraction. Schools embracing the energy, tools, and passion of the Maker Movement recognize that, for the first time in history, kids can make real things – and, as a result, their learning is that much more authentic. Best of all, these new technologies carry the seeds of education reform dreamed of for a century. Seymour Papert said that John Dewey's educational vision was sound but impossible with the technology of his day. In the early- to mid-20th century, the humanities could be taught in a project-based, hands-on fashion, but the technology would not afford similarly authentic opportunities in mathematics, science, and engineering. This is no longer the case.

Increasingly affordable 3-D printers, laser cutters, and computer numerical control (CNC) machines allow laypeople to design and produce real objects on their computers. The revolution is not in having seventh-graders 3-D print identical Yoda key chains, but in providing children with access to the Z-axis for the first time. Usable 3-D design software allows students to engage with powerful mathematical ideas while producing an aesthetically pleasing artifact. Most important, the emerging fabrication technologies point to a day when we will use technology to produce the objects we need to solve specific problems.

via Outside the Skinner Box.

January 28, 2015 at 7:42 am 1 comment

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford's 40-minute video. Many other musical ideas could be introduced through code. (We're probably talking about music programming rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.
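To give a sense of what "music theory through code" can look like: the video itself uses Clojure and Overtone, so the snippet below is only an illustrative sketch in Python with numpy (the function names are mine, not from the video). It builds an equal-temperament major scale and renders each note as a pure sine wave.

```python
import numpy as np

A4 = 440.0  # concert A, in Hz

def pitch(semitones_from_a4):
    # Equal-temperament tuning: each semitone multiplies frequency by 2**(1/12).
    return A4 * 2 ** (semitones_from_a4 / 12)

# A major scale is a pattern of whole and half steps: 2-2-1-2-2-2-1 semitones.
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11, 12]
a_major = [pitch(s) for s in MAJOR_STEPS]

def sine_tone(freq, seconds=0.5, rate=44100):
    # The simplest possible "instrument": a pure sine wave at the given frequency.
    t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# String the tones of the scale together into one waveform (which could be
# written out as a WAV file or handed to an audio library for playback).
melody = np.concatenate([sine_tone(f) for f in a_major])
print([round(f, 1) for f in a_major])  # the scale's frequencies in Hz
```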

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert's ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We're not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Are music and art the way to get computing taught in schools?  The argument I'm making here is that we can use computing to achieve music education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 22 comments

Teaching intro CS and programming by way of scientific data analysis

This class sounds cool and similar to our "Computational Freakonomics" course, but at the data analysis stage rather than the statistics stage. I found that Allen Downey has taught another similar course, "Think Stats," which dives into the algorithms behind the statistics. It's an interesting set of classes that focus on relevance and on introducing computing through a real-world data context.

The most unique feature of our class is that every assignment (after the first, which introduces Python basics) uses real-world data: DNA files straight out of a sequencer, measurements of ocean characteristics (salinity, chemical concentrations) and plankton biodiversity, social networking connections and messages, election returns, economic reports, etc. Whereas many classes explain that programming will be useful in the real world or give simplistic problems with a flavor of scientific analysis, we are not aware of other classes taught from a computer science perspective that use real-world datasets. (But, perhaps such exist; we would be happy to learn about them.)

via PATPAT: Program analysis, the practice and theory: Teaching intro CS and programming by way of scientific data analysis.

September 10, 2012 at 3:33 pm Leave a comment

Report on “Computational Freakonomics” Class: Olympics, game consoles, the Euro, and Facebook

I’ve told you a bit about how the Media Computation class went this summer, with the new things that I tried.  Let me tell you something about how the “Computational Freakonomics” (CompFreak) class went.

The CompFreak class wasn't new.  Richard Catrambone and I taught it once in 2006.  But we hadn't taught it since then, and I'd never taught it on my own, so it was "new" for me.  There were six weeks in the term at Oxford.  Each week was roughly the same:

  • On Monday, we discussed a chapter from the “Freakonomics” book.
  • We then discussed social science issues related to that chapter, from the nature of science, through t-tests and ANOVA, up to multiple linear regression.  Sometimes we debated issues from the chapter (e.g., "Atlanta is a crime-ridden city" and "Roe v. Wade is the most significant explanation for the drop in crime in the 1990s").
  • Then I showed them how to implement the methods in SciPy to do real analysis of some Internet-based data sets.  I gave them a bunch of example data sets, and showed them how to read data from flat text files and from CSV files. (A sketch of that kind of analysis follows this list.)
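For a sense of what those in-class analyses looked like, here is a minimal sketch in that spirit (the file name and column names are hypothetical, not the actual course data sets): read a CSV file, split the rows into two groups, and run a t-test with SciPy.

```python
# Minimal sketch of the kind of in-class analysis we did (hypothetical file and
# column names, not the actual course data): read a CSV, then compare two groups.
import csv
from scipy import stats

host_growth, other_growth = [], []
with open('olympics_gdp.csv') as f:              # hypothetical data set
    for row in csv.DictReader(f):
        growth = float(row['gdp_growth'])
        if row['hosted_olympics'] == 'yes':
            host_growth.append(growth)
        else:
            other_growth.append(growth)

# Is the difference in GDP growth between hosts and non-hosts statistically significant?
t, p = stats.ttest_ind(host_growth, other_growth)
print('t = %.2f, p = %.3f' % (t, p))
```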

At the end of the course, students do a project where they ask a question, any question they want, from any database.  Then they do it again, but in pairs, after a bunch of feedback from me (both on the first project and on their proposal for the final project).  The idea is that the final projects are better than the first round, since the students get feedback and combine efforts in the pair.  And they were.

  • One team looked at the so-called "medal slump" after a country hosts the Olympics.  The "medal slump" got mentioned in some UK newspapers this summer.  One member of the team had found in his first project that, indeed, the host country wins statistically significantly fewer medals in the following year.  But as a pair, the students found that there was no medal "slump."  Instead, during the Olympics that the country hosted, there was a huge medal "bump"!  When hosting, the country wins more medals, but the prior two and following two Olympics all follow the same trend in terms of medals won.
  • Another team looked at Eurozone countries and how their GDP changes tracked one another after moving to the Euro, then tried to explain that in terms of monetary policy and internal trading.  It is the case that the countries that did move to the Euro found their GDPs correlating with one another, much more than with EU countries that did not adopt the Euro or with other countries of similar GDP size.  But the team couldn't figure out a good explanation for why: was it because internal trading was facilitated, because of joint monetary policy, or something else?
  • One team figured out the Facebook API (which they said was awful) and looked at different companies' "likes" versus their stock prices over time.  The two are strongly correlated, but "likes" are basically linear — almost nobody un-likes a company.  Since stock prices generally rise, it's a clear correlation, but not a meaningful one.
  • Another team looked at the impact of new consoles on the video game market.  Video game consoles are a huge hit on the stock price of the developing company in the year of release, while the game manufacturers' stock rises dramatically.  But the team realized a weakness of their study: They looked at the year of a console's release, while the real benefit of a new console is in its long lifespan.  The year that the PS3 came out, it was outsold by the PS2.  But that's hard to see in stock prices.
  • The last team looked at the impact of the Olympics on the host country's GDP.  There was no correlation at all between hosting and changes in GDP.  The Olympics are a big deal, but they're still a small drop in the country's overall economy.

One of my favorite observations from their presentations: Their honesty.  Most of the groups found nothing significant, or they got it wrong — and they all admitted that.  Maybe it was because it was a class context, versus a tenure-race-influenced conference.  They had a wonderful honesty about what they found and what they didn’t.

I’ve posted the syllabus, course notes, slides that I used (Richard never used PowerPoint, but I needed PowerPoint to prop up my efforts to be Richard), and the final exam that I used on the CompFreak Swiki.  I also posted the student course-instructor opinion survey results, which are interesting to read in terms of what didn’t work.

  • Clearly, I was no Richard Catrambone. Richard is known around campus for how well he explains statistics, and I learned a lot from listening to his lectures in 2006. Students found my discussion of inferential statistics to be the most boring part.
  • They wanted more in-class coding! I had them code in class every week. After each new test I showed them (correlation, t-test, ANOVA, etc.), I made them code it in pairs (with any data they wanted), and then we all discussed what they found in the last five minutes of class. I felt guilty that they were just programming away while I worked with pairs that had questions or read email. I guess they liked that part and wanted more.
  • I get credit from the students for something that Richard taught me to do. Richard pointed out that his reading of the research on cognitive load suggests that nobody can pay attention for 90 minutes straight. Our classes were 90 minutes a day, four days a week. In each 90-minute class, I made them get up halfway through and go outside (when it wasn't raining). They liked that part.
  • Students did learn more about computing, inspired by the questions that they were trying to answer.  They talk in their survey comments about studying more Python on their own and wishing I’d covered more Python and computing.
  • In general, though, they seemed to like the class, and they encouraged us to offer it on campus, which we've not yet done.

Students who talked to me about the class at the end said that they found it interesting to use statistics for something.  It turns out that I happened to get a bunch of students who had taken a lot of statistics before (e.g., high school AP Statistics).  But they still liked the class because of (a) the coding and (b) applying statistics to real datasets.  My students asked all kinds of questions, from what factors influence the money earned by golf pros, to the influences on attendance at Braves games (unemployment is much more significant than how much the team is in contention for the playoffs).  One of the other interesting findings for me: GDP correlates strongly and significantly with the number of Olympic gold medals that a country wins, i.e., rich countries win more medals. However, GDP per capita has almost no correlation. One interpretation: To win in the Olympics, you need lots of rich people (vs. a large middle class).
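That last finding is the kind of result a few lines of SciPy will give you. Here is a hedged sketch of the comparison (the arrays are made-up stand-ins, not the students' actual data): compute the Pearson correlation of gold medal counts against GDP and against GDP per capita.

```python
# Sketch of the GDP vs. medal-count comparison (made-up numbers standing in for
# the real data): Pearson correlation of gold medals against GDP and GDP per capita.
from scipy.stats import pearsonr

gold_medals    = [46, 38, 29, 24, 13, 11, 8, 7]
gdp            = [16.2, 8.2, 2.6, 2.5, 2.1, 1.3, 0.9, 0.5]       # trillions USD (hypothetical)
gdp_per_capita = [51.4, 6.1, 39.4, 38.5, 42.6, 14.6, 22.1, 9.8]  # thousands USD (hypothetical)

r_gdp, p_gdp = pearsonr(gdp, gold_medals)
r_pc, p_pc = pearsonr(gdp_per_capita, gold_medals)
print('GDP vs. medals:            r = %.2f (p = %.3f)' % (r_gdp, p_gdp))
print('GDP per capita vs. medals: r = %.2f (p = %.3f)' % (r_pc, p_pc))
```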

Anyway, I still don’t know if we’ll ever offer this class again, on-campus or study-abroad.  It was great fun to teach.  It’s particularly fun for me as an exploration of other contexts in contextualized computing education.  This isn’t robotics or video games.  This is “studying the world, computationally and quantitatively” as a reason for learning more about computing.

August 16, 2012 at 8:27 am 6 comments

CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts

Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!

The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.

This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.

via CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts.

May 31, 2012 at 7:14 am 2 comments

How can we teach multiple CS1’s?

A common question I get about contextualized approaches to CS1 is: “How can we possibly offer more than one introductory course with our few teachers?”  Valerie Barr has a nice paper in the recent Journal of Computing Sciences in Schools where she explains how her small department was able to offer multiple CS1’s, and the positive impact it had on their enrollment.

The department currently has 6 full time faculty members, and a 6 course per year teaching load. Each introductory course is taught studio style, with integrated lecture and hands-on work. The old CS1 had a separate lab session and counted as 1.5 courses of teaching load. Now the introductory courses (except Programming for Engineers) continue this model, meet the additional time and count as 1.5 courses for the faculty member, allowing substantial time for hands-on activities. Each section is capped at 18 students and taught in a computer lab in order to facilitate the transition between lecture and hands-on work.

In order to make room in the course schedule for the increased number of CS1 offerings, the department eliminated the old CS0 course. A number of additional changes were made in order to accommodate the new approach to the introductory CS curriculum: reduction of the number of proscribed courses for the major from 8 (out of 10) to 5 (this has the added benefit, by increasing the number of electives, of giving students more flexibility and choice within the general guidelines of the major); put elective courses on a rotation schedule so that each one is taught every other or every third year; made available to students a 4-year schedule of offerings so that they can plan according to the course rotation.

May 8, 2012 at 7:23 am 2 comments

A CS Emporium would be wonderful idea: Efficient and Tailored Computing Education

Over the weekend, I read a post by GasStationsWithoutPumps on speeding through college.  The Washington Post has a great article about Virginia Tech's Math Emporium that provides a mechanism to do that: self-paced mathematics instruction, with human instructors available for one-on-one help.  It's efficient, and it lets students learn at their own pace.  I would love to see a computer science version of this.  In particular, it would be great if students could explore problems in a variety of contexts (from media to games to robotics to interactive fiction), and get the time they need to develop some skill and proficiency.

Like the distance education efforts, this is about improving the efficiency of higher education.  Unlike distance education, the Emporium includes 1:1 human interaction and the potential for individualized approaches and curriculum.  And there's potential synergy: the content needed to make a CS Emporium work could also be used in distance education.  Here's my prediction: Without the 1:1 help, I'd expect the distance folks to still have a higher WFD (withdrawal/F/D) rate.

No academic initiative has delivered more handsomely on the oft-stated promise of efficiency-via-technology in higher education, said Carol Twigg, president of the National Center for Academic Transformation, a nonprofit that studies technological innovations to improve learning and reduce cost. She calls the Emporium “a solution to the math problem” in colleges.

It may be an idea whose time has come. Since its creation in 1997, the Emporium model has spread to the universities of Alabama and Idaho (in 2000) and to Louisiana State University (in 2004). Interest has swelled as of late; Twigg says the Emporium has been adopted by about 100 schools. This academic year, Emporium-style math arrived at Montgomery College in Maryland and Northern Virginia Community College.

“How could computers not change mathematics?” said Peter Haskell, math department chairman at Virginia Tech. “How could they not change higher education? They’ve changed everything else.”

Emporium courses include pre-calculus, calculus, trigonometry and geometry, subjects taken mostly by freshmen to satisfy math requirements. The format seems to work best in subjects that stress skill development — such as solving problems over and over. Computer-led lessons show promise for remedial English instruction and perhaps foreign language, Twigg said. Machines will never replace humans in poetry seminars.

via At Virginia Tech, computers help solve a math class problem – The Washington Post.

April 25, 2012 at 8:58 am 4 comments

Nice List: Seven misconceptions about how students learn

I would have written the first one a bit differently for a CS Ed audience.  There's a big push in CS Ed to make sure students learn the "right" basic facts first so that they don't have to "unlearn" bad habits later.  Absolutely, that's a real risk.  But that doesn't mean that we can teach the basic facts first.  Context comes first — students have to know why they're learning something in order to get deep learning.

Here are seven of the biggest myths about learning that, unfortunately, guide the way that many schools are organized in this era of standardized test-based public school reform.

Basic Facts Come Before Deep Learning

This one translates roughly as, “Students must do the boring stuff before they can do the interesting stuff.” Or, “Students must memorize before they can be allowed to think.” In truth, students are most likely to achieve long-term mastery of basic facts in the context of engaging, student-directed learning.

via Seven misconceptions about how students learn – The Answer Sheet – The Washington Post.

March 19, 2012 at 8:01 am 3 comments

Helping Everyone Create with Computing: Video of C5 Talk

A YouTube video of my talk (with Alan’s introduction) at C5 is now available.

February 15, 2012 at 10:33 am 2 comments

Thoughts on Code Year, Codecademy, and Learning to Code (with C5 Side Note)

The blog piece below is the most biting criticism I've read of Codecademy.  (And of course, I'm always glad to read someone else pushing context as important for computing education!)  The author makes a very good point in the quote below.  I'm not sure that we know how to achieve the goals of Code Year.  It's amazing that Codecademy has raised $2.5M to support Code Year, but I do wonder if there's a better use for that money – one that moves us closer to the goal of ubiquitous computing literacy.

Learning anything without context is hardly learning. I wish that Code Year was 2013 and 2012 was "some smart people with good ideas and a lot of money took the time to build a great pedagogically-driven tool to really solve an existing problem for folks who want and need training in this area."

via Thoughts on Code Year, Codecademy, and Learning to Code | thickbook.com.

Side note: I should be visiting with Alan Kay in 4 or 5 hours.  He's introducing my keynote at the C5 Conference (http://www.cm.is.ritsumei.ac.jp/c5-12/), which I'm excited about.  Two of the C's of C5 are "creating" and "computing," and my talk is going to be about the challenges of supporting everyone in creating (for me, that includes "programming") with computing.  I'm going to tell the MediaComp story, talk about Brian Dorn's work with graphics designers, and about Klara Benda's and Lijun Ni's work that tells us about what teachers need in order to learn computer science.

January 18, 2012 at 7:06 am 8 comments
