Ruthe Farmer’s important big idea: The Last Mile Education Fund to increase diversity in STEM

I met Ruthe Farmer (Wikipedia page) when she represented the Girl Scouts in the early days of the NSF Broadening Participation in Computing (BPC) alliances. She played a significant role in NCWIT. I had many opportunities to interact with her in her roles at NCWIT and CSforAll. Ruthe organized the White House summit with ECEP in 2016 (see blog post) when she was with the Office of Science and Technology Policy in the Obama administration. Her latest project may be the one that’s closest to my heart.

Ruthe has founded and is CEO of the Last Mile Education Fund. Their mission is:

The Last Mile Education Fund offers a disruptive approach to increasing diversity in tech and engineering fields by addressing critical gaps in financial support for low-income underrepresented students within four semesters of graduation.

I was still at Georgia Tech when I heard about the completion microgrant program at Georgia State. Georgia State was (and still is) making headlines for their use of big data to boost retention and get students to graduation. Georgia State is the kind of institution where over half of their students are classified as low-income. There is a huge social benefit when GSU can improve its retention statistics. The completion grant program was started in 2011 and focuses on students who could graduate (i.e., their grades were fine) but had run out of money before they finished. The grant program gave no more than $2,500 per student (Inside Higher Ed article). Today, we know that the average grant has actually been $900. That's a shockingly low cost for getting students the rest of the way to their college degree. It's a great idea, and it deserves to be applied more broadly than at one university.

The Last Mile Education Fund especially focuses on getting students from diverse backgrounds into STEM careers. These last gaps in funding are among the barriers that keep girls out of STEM careers (Ruthe's focus at the Girl Scouts and NCWIT), and that also hold back low-income students and people of color.

I was reminded to write about the Last Mile Education Fund by Alfred Thompson’s blog (see post here). He’s got a lot more information about the Last Mile Education Fund there.

I am a first-generation college graduate. My parents and I had no idea how to even apply to college. I am forever grateful that Wayne State University found me in my high school, guided me through applying, and gave me a scholarship to attend. I'm a privileged white guy. Not everybody gets the opportunities I had. It's critical to extend the opportunity of a higher education degree to a broader and more diverse audience.

The Last Mile Education Fund is important for closing the gap for students from diverse backgrounds. I’m a monthly supporter, and I encourage you to consider giving, too.

May 26, 2022 at 7:00 am Leave a comment

Three types of computing education research: for CS, for CS but not professionally, and for everyone

In February, I was invited to give a lecture at the University of Washington’s Allen School. I had a great day visiting there, even though it was all on Zoom. My talk is available on YouTube:

I got a chance to talk to Jeff Heer and Amy Ko before my visit. The U-W CSE department had been thinking about making a push into computing education research. They suggested that I describe the lay of the land — and particularly, to identify where I fit in that space. What I do these days (e.g. Teaspoon languages for history and mathematics classes) isn’t in the mainstream of computing education research, and it was important to tell people unfamiliar with the field, “There’s a lot more out there, and most of it doesn’t look like this.”

CS Education research dates back to the late 1960’s (see the history chapter that Ben du Boulay and I wrote). ACM SIGCSE started in 1968 with a particular focus on how to teach Computer Science and Information Technology majors. Much of what SIGCSE has published is focused even more specifically on the first course, which we now call CS1. This is a big and important space. These majors will be significant drivers of the world’s infrastructure.

There is a growing trend in computing education research to look at people who are learning programming (as in the inner circles), but not for the purpose of becoming technology professionals. This includes K-12 CS teachers, end-user programmers, and conversational programmers. This kind of research sometimes appears in venues like CHI, CSCW, and VL/HCC, and occasionally in venues like SIGCSE, RESPECT, and ITiCSE. These circles aren't scaled correctly by the size of the potential student population. By most measures, the outer circle (of people learning programming but who aren't going to become technology professionals) is at least ten times the size of the student population inside the inner circles.

My research is one level further out. I'm interested in studying what we should be teaching to everyone, whether or not they're going to program like professionals, and how we can facilitate that learning. These students might not use the same tools or languages, and certainly have different goals for studying computing. I offer three reasons for the broader "everyone" to learn computing (drawn from the work of C.P. Snow, Alan Perlis, Peter Naur, and Seymour Papert — see this earlier blog post):

  • To make sure that technology is controlled by a democracy.
  • To support new ways of thinking and learning.
  • To be part of a new computational literacy, a new tool for human expression.

This outer circle is far bigger in terms of number of students potentially impacted than any of the inner circles. But it’s also where we know the least in terms of research results.

Take a look at the talk for more on this way of thinking about the field, and how I connect that to existing research. I’d be interested in your perspective on this framing.

May 25, 2022 at 7:00 am 2 comments

College Board stops sharing data on Advanced Placement Computer Science exams

Barb Ericson has been gathering data on the Advanced Placement exams in Computer Science for a decade. The College Board made available data about who took the exam (demographic statistics) and how well they did, for each state, for AP CS Level A and then for AP CS Principles when that exam started. When she first started in 2010, she would download each state's report, then copy the data from the PDFs into her Excel spreadsheets. By the time she processed the 2020 data, the process was mostly mechanized. Her annual reports on the AP CS exam results were posted here until 2018. She now makes her reports and her archived data collection available at her blog.
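Mechanizing that kind of collection mostly means pulling the table out of each state's PDF report and appending the rows to one spreadsheet. Here is a small illustrative sketch of the idea (not Barb's actual pipeline), using the pdfplumber and pandas libraries; the file names and table layout are placeholders:

    import pdfplumber
    import pandas as pd

    rows = []
    header = None
    for state in ["Alabama", "Alaska", "Arizona"]:            # ...and so on for every state
        with pdfplumber.open(f"{state}_ap_cs_2020.pdf") as pdf:
            table = pdf.pages[0].extract_table()              # header row plus data rows
            header, *body = table
            for row in body:
                rows.append([state] + row)

    df = pd.DataFrame(rows, columns=["State"] + header)
    df.to_excel("ap_cs_2020_all_states.xlsx", index=False)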

However, the 2020 data she has posted are now the last data that are available. The College Board is no longer sharing data on AP CS exams. The archive is gone, and the 2021 data are not posted.

Researchers can request the data. Barb did several months ago. She still hasn’t received it. She was told that they would sign an agreement with the University of Michigan to give her access to the data — but not to her personally. She would also have to promise that she wouldn’t share the data.

Barb talked to someone at the College Board who explained that this is a cost-saving measure — but that doesn’t make much sense. The College Board still produces all the reports and distributes them to the states. They have just stopped making them publicly available.

I agree with Joanna Goode in this tweet from April.

The National Science Foundation paid for the development of the AP CS Principles exam explicitly to broaden participation in computer science. The goal was to create an AP CS exam that any high school could teach, that would be welcoming, and that would encourage more and more diverse students to discover computing. But now, the data showing us whether that’s working are being hidden. Why?

May 17, 2022 at 7:00 am 7 comments

Updates: Workshop on Contextualized Approaches to Introduction to Computing, from the Center for Inclusive Computing at Northeastern University

From Nov 2020 to Nov 2021, I was a Technical Consultant for the Center for Inclusive Computing at Northeastern University, directed by Carla Brodley. (Website here.) CIC works directly with CS departments to create significant improvements in female participation in computer science programs. I’m no longer in the TC role, but I’m still working with CIC and Carla. I’ll be participating in a workshop that they’re running on Monday March 21. I’ll be talking about Media Computation in Python, and probably show some of the things we’re working on for the new classes here at Michigan.

https://www.khoury.northeastern.edu/event/contextual-approaches-to-introduction-to-computing/

Contextual Approaches to Introduction to Computing

Monday 3/21/22, 3pm EST / 12pm PST

Moderator: Carla Brodley;  Speakers: Valerie Barr, Mark Guzdial, Ben Hescott, Ran Libeskind-Hadas, Jakita Thomas

Brought to you by the Center for Inclusive Computing at Northeastern University


In this 1.5 hour virtual workshop, faculty from five different universities in the U.S. will present their approach to creating and offering an introductory computer science class (CS0 or CS1) for students with no prior exposure to computing. The key differentiator of these approaches is that the introduction is contextualized in one area outside of computing throughout the semester. Using the context of areas such as cooking, business, biology, media arts, and digital humanities, these courses appeal to students across the university and have realized spectacular results for student retention in CS0/CS1, persistence to taking additional CS courses, and declaring a major or minor in computing. Attracting students to computing after they enter university is critical to moving the needle on increasing the demographic diversity of students who graduate in computing. Interdisciplinary introductory computing classes provide a pathway to students discovering and enjoying computing after they start university. They also help students with no prior coding experience gain familiarity with computing before taking additional courses required for the CS major. The workshop will begin with a short presentation by each faculty member on their approach to contextualized CS0/CS1 and will touch upon the university politics involved in its creation, the curriculum, and the outcomes. We will then split into smaller breakout sessions five times to enable participants to meet with each of the five presenters for questions and more in-depth conversations.

February 25, 2022 at 7:00 am 1 comment

Updates: Dr. Barbara Ericson awarded ACM SIGCSE 2022 Outstanding Contributions to Education

March 2-5 is the ACM SIGCSE Technical Symposium for 2022 in Providence, RI. (Schedule is here.) I am absolutely thrilled that my collaborator, co-author, and wife is receiving the Outstanding Contributions to Education award! She is giving a keynote on Friday morning. Her abstract is below.

She’s got more papers there, on CS Awesome, on her ebooks, and on Sisters Rise Up. I’m not going to summarize them here. I’ll let you look them up in the schedule.

A couple of observations about the SIGCSE Awards this year that I love: Both Barb and the Lifetime Service to the Computer Science Education Community awardee, Simon, earned their PhDs later in life, within the last 10 years. Barb is the first Assistant Professor to win the Outstanding Contributions award in the 40-year history of the award.

I have one Lightning Talk. The work I'm doing these days is computing education, but it's not in the mainstream of CS education — I focus on computing education for people who don't want to study CS. So, I'm giving a five-minute lightning talk on Teaspoon languages as a provocation to come talk to me about this approach to integrating computing into non-CS subjects. You can see the YouTube version here. This is my attempt to show that each Teaspoon language can be learned in 10 minutes — I fully describe two of them in less than five minutes!

Outstanding Contribution Plenary

Friday, March 4 / 8:15 – 9:45

Ballroom A-E (RICC)

Barbara Ericson (University of Michigan)

Improving Diversity in Computing through Increased Access and Success

My goal is to increase diversity in computing. In this talk I explain why diversity is important to me. My strategy to improve diversity is to increase access and success. This work includes teacher professional development, summer camps, weekend workshops with youth serving organizations, curriculum development, helping states make systemic changes to computing education, publicizing gender and race issues in Advanced Placement Computer Science, creating free and interactive ebooks, testing new types of practice problems/tools, and offering near-peer mentoring programs.

Barbara Ericson is an Assistant Professor in the School of Information at the University of Michigan. She conducts research at the intersection of computing education, the learning sciences and HCI, to improve students’ access to and success in computing. With her husband and colleague, Dr. Mark Guzdial, she received the 2010 ACM Karl V. Karlstrom Outstanding Educator Award for their work on media computation. She was the 2012 winner of the A. Richard Newton Educator Award for her efforts to attract more females to computing. She is also an ACM Distinguished Member for Outstanding Educational Contributions to Computing.

February 24, 2022 at 7:00 am 2 comments

Updates: NSF Funding to Study Learning with Teaspoon Languages for Discrete Mathematics

A few months before the pandemic started, Dr. Elise Lockwood at Oregon State reached out to me. She’d heard that I was interested in programming for teaching non-CS subjects, and that’s what she was doing. I loved what she was doing, and we started having regular chats.

Elise is a mathematics education researcher who has been studying how students come to understand counting problems. Like “If you have three letters and four digits, how many license plates can you make?” Or “How many two letter words can you make from the letters ROCKET, if you don’t allow double letters?” She’s been exploring having students learn counting problems by manipulating Python programs to generate all the possible combinations, then counting them. (Check out her recent papers on her Google Scholar page, especially those with her student Adaline De Chenne.)
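To give a concrete sense of the enumerate-then-count approach, here is a minimal sketch in Python for the ROCKET problem (my illustration, not code from Elise's studies):

    letters = "ROCKET"

    # Enumerate every two-letter word with no repeated letter, then count them.
    words = []
    for first in letters:
        for second in letters:
            if first != second:   # "don't allow double letters"
                words.append(first + second)

    print(len(words))   # 6 choices * 5 remaining choices = 30
    print(words[:5])    # a peek at the generated words: RO, RC, RK, RE, RT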

As I said, I loved what she was doing, but Python seemed heavy-handed for this. I was starting to work on our Teaspoon languages. Could we build lighter-weight languages for the same problems?

As I kept reading Elise’s papers, I started working on two possible designs.

In one of them (called Counting Sheets), we play off of students' understanding of spreadsheets. You can just describe what you want in each column, and the system will exhaustively generate every combination.

Or you can use an "=" formula that knows how to do very simple operations with sets, for example to solve the problem of counting two-letter words from ROCKET without repeated letters.

This is one of the tools in which we've been building support for both Spanish and English keywords (like Pixel Equations, which I talked about last September).

Elise found Counting Sheets intriguing, but she worried whether it would work to make the iterative structures implicit and declarative. Would students need to see the iteration to be able to reason about the counting processes?

So, I built a second Teaspoon language, called Programmed Counting. Here, the loops are explicit, like Python, but the only variable type is a set, and the words and phrases of the language come from counting problems.

Elise was a real sport, trying out the languages as I generated prototypes and finding the holes in what I was doing. We met face-to-face only once, when I went to Portland for SIGCSE 2020 — the one that got cancelled the very morning it was supposed to start. I had lunch with Elise, and we worked for a few hours on the designs. Barb and I went home the next day, and the big pandemic lockdown started right afterwards.

Will these work for learning? We don’t know — but we just got funding from NSF to find out! “We” here is me and PhD student Emma Dodoo, and we’ll be involving Adaline as a consultant. Elise is currently a rotator at NSF, so she’s involved only from the sidelines because of NSF COI issues. Our plan is to run experiments with various combinations of the Teaspoon languages (one or both), standalone and with Python. Do we need Python if we have the Teaspoon languages? Do the Teaspoon languages serve as scaffolding to introduce concepts before starting into Python?

Below is the abstract on the new IUSE grant, as an overview of the project. University of Michigan CSE Communications wrote a nice article about the work, available here. Huge thanks to Jessie Houghton, Angela Li, and Derrick White who turned my LiveCode prototypes into functioning Web versions.

Abstract for NSF

Programming is a powerful tool that scientists, engineers, and mathematicians use to gain insight into their problems. Educators have shown how programming integrated into other subjects can be a powerful tool to enhance learning, from algebra to language arts. However, the cost is learning the programming language. Few students in the US learn programming — less than 5% of high school students nationwide. Most students do not have the opportunity to use programming to support their learning. This project is investigating a new approach to designing and implementing programming languages in classrooms: Task-specific programming (TSP) languages. TSP languages are explicitly designed for integration in specific classes, to meet teacher needs, and to be usable with less than 10 minutes of instruction. TSP languages can make the power of programming to enhance learning more accessible. This project will test the value of TSP languages in discrete mathematics, which is a gateway course in some computer science programs.

The proposed project tests the use of two different TSP languages, contrasting them with a traditional programming language, Python. The proposed work will contribute to understanding of (1) the role of programming in learning in discrete mathematics, (2) the value of task-specific languages to scaffold learning, (3) how alternative representational forms for programming influence student use of TSP languages, and (4) how the use of TSP languages alone or in combination with traditional languages enhances students' sense of authenticity and ability to transfer knowledge.

February 23, 2022 at 7:00 am 1 comment

Updates: Developing the University of Michigan LSA Program in Computing for the Arts and Science

This blog is pretty old. I started it in June 2009 — almost 13 years ago. The pace of posting has varied from every day (today, I can't understand how I ever did that!) to once every couple of months (most recently). There are things happening around here that are worth sharing and might be valuable to some readers, but I'm not finding much time to write. So, the posts for the rest of this week will be quick updates with links for more information.

During most of the pandemic, I co-chaired (with Gus Evrard, a Physics professor and computational cosmologist) the Computing Education Task Force (website) for the University of Michigan's College of Literature, Science, and the Arts (LSA). LSA is huge — about 20K students. (I blogged about this effort in April of last year.) Our job was to figure out what LSA was doing in computing education, and what else was needed. Back in November, I talked here about the three themes that we identified for computing education in LSA: Computing for Discovery, Computing for Expression, and Computing for Justice.

Our report was released last month. You can see the release statement here, and the full report here. It’s a big report, covering dozens of interviews, a hundred survey responses, and a huge effort searching over syllabi and course descriptions to find where computing is in LSA. We made recommendations about creating a new program, new courses, new majors and minors, and coordinating computing education across LSA.

Now, we're in the next phase — acting on the recommendations. LSA bought me out of my teaching for this semester, and it's my full-time job to define a computing education program for LSA and to create the first courses in the program. We're calling it the Program in Computing for the Arts and Science (PCAS). I'm designing courses for the Computing for Expression and Computing for Justice themes, in an active dialogue (drawing on the participatory design methods I learned from Betsy DiSalvo) with advisors from across LSA. (There are courses in LSA that can serve as introductions to the Computing for Discovery theme, and Gus is leading the effort to coordinate them.) The plan is to put up the program this summer, and I'll start teaching the new courses in the Fall.

February 22, 2022 at 7:00 am 6 comments

Helping social studies teachers to teach data literacy with Teaspoon languages

Last year, Tammy Shreiner and I received NSF funding to develop and evaluate computational supports for helping social studies teachers to teach data literacy and computing (see post here). We're excited about what we're doing and what we're learning. Here's an update on where we are on the project.

Teaspoon Languages

We have a chapter in the new book by Aman Yadav and Ulf Dalvad Berthelsen, Computational Thinking in Education: A Pedagogical Perspective. This is the publication where we introduce the idea of Teaspoon Languages. Teaspoon languages are a form of task-specific languages (TSP => Teaspoon — see?). Teaspoon languages:

  • Support learning tasks that teachers (typically non-CS teachers) want students to achieve;
  • Are programming languages, in that they specify computational processes for a computational agent to execute; and
  • Are learnable in less than 10 minutes, so that they can be learned and used in a one hour lesson. If the language is never used again, it wasn’t a significant learning cost and still provided the benefit of a computational lesson.

We say that we're adding a teaspoon of computing to other subjects. We aim to address the goal of "CS for All" by integrating computing into other subjects, placing the non-CS subjects first. We believe that programming can be useful in learning other subjects. Our primary goal is to meet learning objectives outside of CS using programming. Teachers (and students eventually) will be learning foundational CS content — but not necessarily the content we typically teach in CS classes. All students should learn that a program is non-WYSIWYG, that it's a specification of a computational process that gets interpreted by a computational agent, that programming languages can come in many forms, and that all students can be successful at programming.

Our chapter, "Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples" (see link here), offers two use cases of how we imagine teaspoon languages working in classrooms (history and language arts in these examples). The first use case is around DV4L, our Data Visualization for Learning tool. The second is around a chatbot language that we developed — and have long since discarded.

We develop our teaspoon languages in a participatory design process, where teachers try our prototypes in authentic tasks as design probes, and then they tell us what we got wrong and what they really want. Our current iteration is called Charla-bots and is notable for having user-definable languages. We have a variety of Charla-bot languages now, with English, Spanish, and mixed keywords.

Our vision for teaspoon languages is a contrast with the "Hour of Code" approach. The "Hour of Code" is a one-hour programming activity that many schools use in every grade, typically once a year during CS Ed Week (in early December). The idea is to build familiarity and confidence in programming by showing students real computer science every year. The teaspoon languages approach is to imagine one or two little programming learning activities in every social studies, language arts, and mathematics class every year. Each of these languages is tiny and different. The goal is that by the time that US students take a CS class (typically, in high school or undergraduate), they will have had many programming experiences, have seen a variety of types of programming languages, and have a sense that "programming isn't hard."

Meeting the Needs of Social Studies Teachers

The second paper, "Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies" (see link here), was just presented in October by Tammy at CUFA, the College and University Faculty Assembly 2021 of the National Council for the Social Studies. (We have a longer form of this paper that we have just submitted to a journal.) This is an exciting paper for me because it's exactly addressing the critical challenge in our work. We can design and implement all kinds of prototype Teaspoon languages, but to achieve our goals, teachers in disciplines other than CS have to see value in them and adopt them.

The paper is about our workshops with practicing social studies teachers. Tammy has a goal to teach social studies teachers how to teach data literacy. She has built a large open educational resource (OER) on teaching data literacy in social studies. Learning data literacy involves being able to read, comprehend, and argue with data visualizations, but also being able to create them. That's where we come in. Her OER links to several tools for creating data visualizations, like Timeline JS, CODAP, and GapMinder. Most of them were not created for social studies teachers or classes. When we run these workshops, our tools are just in-the-mix. We offer scaffolding for using all of them. These are our design probes. The teachers use the tools and then tell us what they really want. These are our data, and we analyze them in detail — as in this paper.

Let's jump to the bottom line: We're not there yet. The teachers love the OER, but get confused about what they should do in their classes. They find the tools for data visualization fascinating, but overwhelming. They like DV4L a lot:

One pre-service teacher explained that they preferred our prototype over other tools because “(with the prototype DV4L) I found myself asking questions connected to the data itself, rather than asking questions in order to figure out how to work the visual.”

Recently, I held a focus group with some social studies teachers who told me that they won't use any computational tools — they believe in teaching data visualization, but all created with pencil and ruler. That's our challenge: Can we be more powerful, more enticing, and easy enough to beat out pencil and ruler? Our tool, DV4L, is purpose-built for these teachers, and they appreciate its advantages — and yet, few are adopting it. That's where we need to work next.

Opportunities for Social Studies Teachers to Get Involved

If you know a social studies teacher who would want to keep informed about our work and perhaps participate in our workshops or studies, please have them sign up on our mailing list. Thank you!

Often, what teachers tell us they really want suggests new features or entirely new tools. We have two ongoing studies where we are looking for design feedback from social studies teachers. If you know social studies teachers who would like to play with something new (and we’ll pay them for their time), would you please forward these to them?

Timeline Builder

We’re looking for K-12 Social Studies teachers to try out our new timeline visualization tool, TimelineBuilder. TimelineBuilder has been made with teachers and usability in mind. In it, ‘events’ are added to a timeline using a form-based interface. Changes to the timeline can be seen automatically, with events showing up as soon as they are added.

This study will consist of completing 2 surveys and 3 asynchronous activities guided by worksheets. All participants will be compensated with a $20 gift card for survey and activity completion. There is an additional option to be invited to a focus group, which will provide additional compensation.

If you are interested in participating in this study, you can complete the consent form and 1st survey here. (Plain text Link: https://forms.gle/gwxfn5bRgTjyothF6 )

Please contact Mark Guzdial (mjguz@umich.edu) or Tamara Nelson-Fromm (tamaranf@umich.edu) with any questions.

The University of Michigan Institutional Review Board Health Sciences and Behavioral Sciences has determined that this study is exempt from IRB oversight.

DV4L Scripting Study

Through our work with social studies educators thus far, we have designed the tools DV4L-Basic and DV4L-Scripting specifically to support data literacy standards in social studies classrooms. If you are a social studies middle or high school teacher, we would love to hear your feedback. If you can spare less than an hour of your time to participate in our study, we will send you a $50 gift card for your time and valuable feedback.

If you are interested but want more details, please visit/complete the consent form here: https://forms.gle/yo3yWGThQ1wnhu7g7

For questions or concerns, please contact Mark Guzdial (mjguz@umich.edu) or Bahare Naimipour (baharen@umich.edu).

References

Guzdial, M. and Tamara L. Shreiner. 2021. "Integrating Computing through Task-Specific Programming for Disciplinary Relevance: Considerations and Examples." In Computational Thinking in Education: A Pedagogical Perspective, Aman Yadav and Ulf Dalvad Berthelsen (Eds). PDF of submitted version.

Shreiner, Tamara L., Mark Guzdial, and Bahare Naimipour. 2021. "Using Participatory Design Research to Support the Teaching and Learning of Data Literacy in Social Studies." Presented at CUFA, the College and University Faculty Assembly 2021 of the National Council for the Social Studies. PDF

December 22, 2021 at 10:00 am 10 comments

Computer Science was always supposed to be taught to everyone, and it wasn’t about getting a job: A historical perspective

I gave four keynote talks in the last two months, at SIGITE, the Models 2021 Educators' Symposium, VL/HCC, and CSERC. I'm honored to be invited to them, but I do suspect that four keynotes in six weeks suggest some "personal issues" in planning and saying "No." Some of these were recorded, but I don't believe that any of them are publicly available.

The keynotes had a similar structure and themes. (A lot easier than four completely different keynotes!) My activities in computing education these days are organized around two main projects: the LSA computing education effort here at Michigan and our Teaspoon languages.

My goal was to put both of these efforts in a historical context. My argument is that computer science was originally invented to be taught to everyone, but not for economic advantage. I see the LSA effort and our Teaspoon languages connected to the original goals for computer science. The talks were similar to my SIGCSE 2019 keynote (blog post about that talk here, and video version here), but put some of the early history in a different perspective. I'm not going to go into the LSA Computing Education effort or Teaspoon languages here. I'm writing this up because I hope that it's a perspective on the early history that might be useful to others.

I start out with C.P. Snow.

My PhD advisor, Elliot Soloway, would have all of his students read this book, “The Two Cultures.” Snow was a scientist who bemoaned the split between science and humanities in Western culture. Snow mostly blamed the humanities. That wasn’t Elliot’s point for having us read his book. Elliot wanted us to think about “Who could use what we have to teach, but might not even enter our classroom?”

This is George Forsythe. Donald Knuth claims that George Forsythe first published the term "computer science" in a paper in the Journal of Engineering Education in 1961. Forsythe argued (in a 1968 article) that the most valuable parts of a scientific or technical education were facility with natural language, mathematics, and computer science.

In 1961, the MIT Sloan School held a symposium on "Computers and the World of the Future." It was an amazing event. Attendees included Gene Amdahl, John McCarthy, Allen Newell, and Grace Hopper. Martin Greenberger's book in 1962 included transcripts of all the lectures and all the discussants' comments.

C.P. Snow’s chapter (with Norbert Wiener of Cybernetics as discussant) predicted a world where software would rule our lives, but the people who wrote the software would be outside the democratic process. He wrote, “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.” He argued that everyone needed to learn about computer science, in order to have democratic control of these processes.

In 1967, Turing laureate Peter Naur made a similar argument (quoting from Michael Caspersen’s paper): “Once informatics has become well established in general education, the mystery surrounding computers in many people’s perceptions will vanish. This must be regarded as perhaps the most important reason for promoting the understanding of informatics. This is a necessary condition for humankind’s supremacy over computers and for ensuring that their use do not become a matter for a small group of experts, but become a usual democratic matter, and thus through the democratic system will lie where it should, with all of us.” The Danish computing curriculum explicitly includes informing students about the risks of technology in society.

Alan Perlis (first ACM Turing Award laureate) made a different argument in his chapter. He suggested that everyone at University should learn to program because it changes how we understand everything else. He argued that you can’t think about integral calculus the same after you learn about computational iteration. He described efforts at Carnegie Tech to build economics models and learn through simulating them. He was foreshadowing modern computational science, and in particular, computational social science.

Perlis's discussants included J.C.R. Licklider, grandfather of the Internet, and Peter Elias. Michael Mateas has written a fascinating analysis of their discussion (see paper here), which he uses to contextualize his work on teaching computation as an expressive medium.

In 1967, Perlis, with Herb Simon and Allen Newell, published a definition for computer science in the journal Science. They said that CS was "the study of computers and all the phenomena surrounding them." I love that definition, but it's too broad for many computer scientists. I think most people would accept that as a definition for "computing" as a field of study.

Then, we fast forward to 2016 when then-President Obama announced the goal of “CS for All.” He proposed:

Computer science (CS) is a “new basic” skill necessary for economic opportunity and social mobility.

I completely buy the necessity part and the basic skill part, and it’s true that CS can provide economic opportunity and social mobility. But that’s not what Perlis, Simon, Newell, Snow, and Forsythe were arguing for. They were proposing “CS for All” decades before Silicon Valley. There is value in learning computer science that is older and more broadly applicable than the economic benefits.

The first name that many think of when talking about teaching computing to everyone is Seymour Papert. Seymour believed, like Alan Perlis, “that children can learn to program and learning to program can affect the way that they learn everything else.”

The picture in the lower right of this slide is important. On the right is Gary Stager, who kindly shared this picture with me. On the left is Wally Feurzeig who implemented the programming language Logo with Danny Bobrow, when Seymour was a consultant to their group at BBN. In the center is Cynthia Solomon who collaborated with Seymour on the invention of the Turtle (originally a robot, seen at the top) and the development of Logo curriculum.

Cynthia was the lead author of a recent paper describing the history of Logo (see link here), which included the example of early Logo use on the upper right of this slide, which generates random sentences. Logo is named for the Greek word logos for “word.” The first examples of Logo were about manipulating natural language. Logo has always been used as an expressive medium (music, graphics, storytelling, and animation), as well as for learning mathematics (see the great book Turtle Geometry).

This is the context in which I think about the work with the LSA Computing Education Task Force. Our question was: At an R1 University with a Computer Science & Engineering undergraduate degree and an undergraduate BS in Information (with tracks in information analysis and user experience (UX) design), what else might undergraduates need? What are the purposes for computing that are broader and older than the economic advantages of professional software development? We ended up defining three themes of what LSA faculty do with computing and what they want their students to know:

  • Computing for Discovery – LSA computational scientists create models and simulate them (not just analyze data that already exists), just as Alan Perlis suggested in 1961.
  • Computing for Expression – Computing has created new ways for humans to express themselves, which is important to study and to use to explore, invent, and create new forms of expression, as the Logo community did starting in the 1960’s.
  • Computing for Justice – LSA scholars investigate how computing systems can encode and exacerbate inequities, which requires some understanding of computing, just as C.P. Snow talked about in 1961.

We develop our Teaspoon languages to meet the needs of teachers in teaching non-CS and even non-STEM classes. We argue that there are computing education learning objectives that we address with Teaspoon languages, even if they don't include common language features like for, while, and if statements. A common argument against our work in Teaspoon languages is that we're undertaking a Sisyphean task. Computing is what it is, programming languages are what they are, and education is not going to be a driving force for changing anything in computing.

And yet, that’s exactly how the desktop user interface was invented.

Alan Kay (another Turing laureate in this story), Adele Goldberg, and Dan Ingalls led the development of Smalltalk at Xerox PARC in the 1970's. The goal for Smalltalk was to realize Alan's vision of a Dynabook, using the computer as a tool for learning. The WIMP (overlapping Windows, Icons, Menus, and mouse Pointer) interface was invented in order to achieve computing education goals. The user interface that you are using right now was invented for the purposes of education.

The Smalltalk work tells us that we don’t have to accept computing as it is. Computing education today focuses mostly on preparing students to be professional software developers, using the tools of professional software development. That’s important and useful, but often eclipses other, broader goals for learning computing. The earliest goals for computing education are different from those in most of today’s computing education. We should question our goals, our tools, and our assumptions. Computing for everyone is likely going to look different than the computing we have today which has been defined for a narrow set of goals and for far fewer people than “all.”

November 26, 2021 at 10:00 am 22 comments

Media Computation today: Runestone, Snap!, Python 3, and a Teaspoon Language

I don't get to teach Media Computation [1] since I moved to the University of Michigan, so I haven't done as much development on the curriculum and infrastructure as I might like if I were teaching it today. I did get a new version of JES (Jython Environment for Students) released in March 2020 (blog post here), but have rarely even started JES since then.

But using Jython for Media Computation is so 2002. Where is Media Computation going today?

I've written a couple of blog posts about where Media Computation is showing up outside of JES and undergraduate CS. Jens Moenig has been doing amazing things with Media Computation in Snap! — see this blog post from last year on his Snap!Con keynote talk. SAP is now offering a course From Media Computation to Data Science using Snap! (see link here). Barbara Ericson's work with Runestone ebooks (see an example blog post here) includes image manipulation in Python inside the browser at an AP CS Principles level (see example here). The amazing CS Awesome ebook that Beryl Hoffman and Jen Rosato have been doing with Barb for AP CS A includes in-browser coding of Java for the Picture Lab (see example here).

I was contacted this last January by Russ Tuck and Jonathan Senning. They’re at Gordon College where they teach Media Computation, but they wanted to do it in Python 3 instead of Jython. You can find it here. It works SO well! I miss having the image and sound explorers, but my basic demos with both images and sounds work exactly as-is, with no code changes. Bravo to the Gordon College team!

[Screenshot: On the right, Python 3 code doing Media Computation. On the left, two images — the original in the middle, and a red-reduced image on the far left.]
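Here's the flavor of one of those basic demos: the classic "decrease red" filter, written against the JES-style Media Computation functions (makePicture, getPixels, getRed, setRed, show). This is a minimal sketch that assumes those functions are in scope, as they are in JES and in the Gordon College Python 3 version; the file name is a placeholder.

    def decreaseRed(picture):
        # Halve the red channel of every pixel in the picture.
        for pixel in getPixels(picture):
            setRed(pixel, getRed(pixel) // 2)

    picture = makePicture("beach.jpg")   # placeholder file name
    decreaseRed(picture)
    show(picture)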

Most of my research these days is grounded in Task-Specific Programming languages, which I’ve blogged about here (here’s a thread of examples here and here’s an announcement of funding for the work in social studies). We now refer to the project as Teaspoon Computing or Teaspoon Languages — task-specific programming => TSP => Teaspoon. We’re adding a teaspoon of computing into other subjects. Tammy Shreiner and I have contributed a chapter on Teaspoon computing to a new book by Aman Yadav and Ulf Dalvad Berthelsen (see announcement of the book here).

We have a new Teaspoon language, Pixel Equations, that uses Media Computation to support an Engineering course in a Detroit Public School. Here, students choose a picture as input, then (1) enter the boolean equations for what pixels to select and (2) enter equations for new red, green, and blue values for those pixels. The conditionals and pixel loops are now implicit.
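To give a sense of what Pixel Equations makes implicit, here is roughly the equivalent explicit version in plain Python using the Pillow library (my sketch, not the Pixel Equations implementation). The boolean test selects the pixels and the color expressions compute new red, green, and blue values, but the loop over pixels and the conditional have to be written out by hand; the file names are placeholders.

    from PIL import Image

    img = Image.open("input.jpg").convert("RGB")
    pixels = img.load()

    for x in range(img.width):
        for y in range(img.height):
            r, g, b = pixels[x, y]
            if r > g and r > b:                # boolean equation: select the reddish pixels
                pixels[x, y] = (r // 2, g, b)  # equations for the new red, green, blue values

    img.save("output.jpg")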

In several of our tools, we're now exploring bilingual or multilingual interfaces, inspired by Sara Vogel's work on translanguaging (see paper here) and Manuel Pérez-Quiñones's recent work on providing interfaces for bilingual users (see his TED talk here and his ACM Interactions paper here). In Pixel Equations, for example, colors can be referenced by either their English or Spanish names. We're now running participatory design sessions with teachers using Pixel Equations.

I’m planning a series of blog posts on all our Teaspoon languages work, but it’ll take a while until I get there.


  1. For new readers, Media Computation is a way of introducing computing by focusing on data abstractions used in digital media. Students write programs to manipulate pixels of a picture (to create photo filters), samples of a sound (e.g., to reverse sounds), characters of a text, and frames of a video (for video special effects). More at http://mediacomputation.org

September 6, 2021 at 7:00 am 5 comments

ICER 2021 Preview: The Challenges of Validated Assessments, Developing Rich Conceptualizations, and Understanding Interest #icer2021

The International Computing Education Research Conference (ICER) 2021 is this week (website here). It should have been in Charleston, South Carolina (one of my favorite cities), but it will instead be all on-line. Unlike previous years, if you are not already registered, you’re unfortunately out of luck. As seen in Matthias Hauswirth’s terrific guest blog post from last week (see here), getting set up in Clowdr is complicated. ICER won’t have the resources to bring people on-line and get them through the half hour prep sessions on-the-fly. There will be no “onsite” registration.

However, all the papers should be available in the ACM Digital Library (free for some time), and I think all the videos of the talks will be made available after the fact, so you can still gain a lot from the conference. Let me point out a few of the highlights that I’m excited about. (As of this writing, the papers are not yet appearing in the ACM DL — all the DOI links are failing for me. I’ll include the links here in hopes that everything is fixed soon.)

Our keynoter is Tammy Clegg, whom I got to know when she was a PhD student at Georgia Tech. She’s now at U. Maryland doing amazing work around computation and relevant science learning. I’m so looking forward to hearing what she has to say to the ICER community.

Miranda Parker, Allison Elliott Tew, and I have a paper "Uses, Revisions, and the Future of Validated Assessments in Computing Education: A Case Study of the FCS1 and SCS1." This is a paper that we planned to write when Miranda first developed the SCS1 (first published in 2016). We created the SCS1 in order to send it out to the world for use in research. We hoped that we could sometime later do in CS what Richard Hake did in Physics, when he used the FCI to make some strong statements about teaching practices with a pool of 6,000 students (see paper here). Hake's paper had a huge impact, as it started making the case to shift from lecture to active learning. Could we use the collected uses of the SCS1 to make some strong arguments for improving CS learning? We decided that we couldn't. The FCI was used in pretty comparable situations, and it's tightly focused on force. CS1 is far too broad, and the FCS1 and SCS1 are being used in so many different places — not all of which they've been validated for. Our retrospective paper is kind of a systematic literature review, but it's done from the perspective of tracing these two instruments and how they've been used by the research community.

One of the papers that I got a sneak peek at was "When Wrong is Right: The Instructional Power of Multiple Conceptions" by Lauren Margulieux, Paul Denny, Katie Cunningham, Mike Deutsch, and Ben Shapiro. The paper explores the tensions between direct instruction and more student-directed approaches like constructionism and inquiry learning (see a piece I did in 2015 about these tensions). The basic argument of this new paper is that just telling students the right answer is not enough to develop rich understanding. We have to figure out how to help students hold multiple conceptions (not all of which are canonical or held by experts), so that they can compare and contrast them and use the right one at the right time.

I'm chair for a session on interest. While I haven't seen the papers yet, I got to watch the presentations (which are already loaded in Clowdr). "Children's Implicit and Explicit Stereotypes on the Gender, Social Skills, and Interests of a Computer Scientist" by de Wit, Hermans, and Aivaloglou is a report on a really interesting experiment. They look at how kids associate gender with activities (e.g., are boys more connected to video games than girls?). The innovative part is that they asked the questions and timed the answers. A quick answer likely connects to implicit beliefs. If they take a long time to answer, maybe they told you what they thought you wanted to hear? The second paper, "All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing" by Lishinski and Rosenberg, asks what leads to students succeeding and wanting to continue in computing. They look at students' affective state coming into CS1 (e.g., how much do they like computing? How much do they think that they can succeed in computing?), and relate that to students' experiences and affective state after the class. They make some interesting claims about gender — that gender gaps are really self-efficacy gaps.

One of the more unusual sessions is a pair of papers from IT University of Copenhagen that make up a whole session. ICER doesn’t often give over a whole session to a single research group on multiple papers. One is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (Recruitment Perspective)” and the other is “Computing Educational Activities Involving People Rather Than Things Appeal More to Women (CS1 Appeal Perspective).” The pitch is that framing CS1 as being about people rather than things leads to better recruitment (first paper) and more success in CS1 (second paper) in terms of gender diversity. It’s empirical support for a hypothesis that we’ve heard before, and the authors frame the direction succinctly: “CS is about people not things.” Is that succinct enough to get CS faculty to adopt this and teach CS differently?

August 16, 2021 at 7:00 am Leave a comment

The Drawbacks of the One-Second Conference Trip. Or, how to prepare for ICER 2021. Guest Blog Post from Matthias Hauswirth

I miss physical conferences. But there are some things about them I do not miss at all. I don’t miss sprinting through airports to catch a connecting flight. I don’t miss standing in line at immigration for over an hour, just to enter the next long line to get through customs. And I don’t miss sitting in a tight middle seat for ten hours straight.

With today’s virtual conferences the trips are more pleasant. I can travel there with a single mouse click. It’s a one-second trip. And I love that! *

However, by eliminating the trip to the conference, we also eliminated an opportunity to prepare for the conference while being stuck in airports, planes, stations, and trains. My physical conference trips used to provide ample idle time. I used that time to contact colleagues to schedule a dinner, lunch, or coffee at the conference; to read the conference program and highlight the talks I wanted to see; to check out the map of the venue to know where to find the relevant rooms; and even to read a paper or two to prepare for talking to the authors at the conference.

That kind of preparation takes more than a second. And without the time provided by those arduous trips, I might show up ill prepared and miss out on half of the fun.

So here is my plan. For my next one-second conference trip, I will allocate a little bit of extra time to prepare. Not crammed into an airplane seat, but at home, in a comfy chair, with a nice cup of coffee.

Oh, and if your next conference trip takes you to ICER 2021 this coming Monday, here are some suggestions from the ICER Chairs for how to prepare for this conference, which will be hosted in the most recent version of Clowdr:

  • Find the invitation email you received from Clowdr (check your spam folder, too!) and log in (3 minutes).
  • Watch the ICER 2021 Clowdr Intro video (13 minutes). This will teach you the basics of how to navigate the platform. We recommend following along interactively on the Clowdr site as you watch, to familiarize yourself with the navigation.
  • Watch the ICER 2021 Paper Sessions: Participant Experience video (14 minutes). This will teach you how our paper sessions will work. You won’t just be watching videos, you’ll also be interacting while you watch, talking in small groups afterwards, and asking questions.
  • Once logged in, read the ICER Clowdr Experience FAQ page (4 minutes). This has the videos above and more detail for specific types of events.
  • On Clowdr, read the Code of Conduct page (3 minutes). Everyone is responsible for following these rules to ensure everyone feels safe and welcome.
  • On Clowdr, read the How to Set Up Your Profile page and set up your profile (3 minutes). This ensures people know who you are, what your name and pronouns are, where you’re visiting from, and what roles you’re playing at the conference. 

In Clowdr you will find a lot of content, including the entire program. We recommend that inside Clowdr you “star” events you are interested in to create your personal schedule. There is a page for each paper and poster/lightning talk. On each paper page you already find the presentation as an embedded video, on each ICER poster page there’s the poster pitch video and the PDF of the poster, and on each ICER lightning talk page you find the talk slide. Have a quick look to plan your personal schedule. And while you’re there, why not already leave a message or comment for the authors in the chat at the right of the paper/poster’s page? Note that the links to the papers in the ACM DL are not yet active; we expect ACM to make the DOIs work and the papers visible in the DL by the start of the conference.

We are confident that with an hour or so of up-front effort you will get much more out of the conference! (We suspect, though, that you will end up spending more than an hour because the content draws you in!) ICER 2021 is a compact conference packed with exciting content and interaction. Log in now to make the most of it!

*) I also very much love the minimal carbon footprint, low cost, and reduced health risks of virtual conferences.

August 13, 2021 at 1:00 pm 1 comment

Why aren’t more girls in the UK choosing to study computing and technology? Guest blog post by Peter Kemp

The Guardian raised the question in the title in this article in June. Pat Yongpradit sent it to me and Peter Kemp, and Peter’s response was terrific — insightful and informed by data. I asked him if I could share it here as a guest post, and he graciously agreed.

We’ve just started a 3 year project, scaricomp, that aims to look at girls’ performance and participation in computer science in English schools. There’s not much to see at the moment, as we started in April, but we’re hoping to sample 5000+ students across schools with large numbers of students taking CS and/or high numbers of females in the CS cohorts. I’ll let you know when we have some analysis in hand.

You reference The Guardian article's quote: "In 2019, 17,158 girls studied computer science, compared with the 20,577 girls who studied ICT in 2018". It's worth noting that the 2018 ICT figure was the end of the line for ICT; numbers in previous years were much higher, and the female figure was actually ~40% of the overall ICT entries, whilst it represents about 20% of the GCSE CS cohort, i.e. females were proportionally better represented in ICT than in CS. For a fuller picture of the changing numbers and demographics in English computing, see slide 8 of this, or the video presentation. It's also worth noting that since the curriculum change in 2012/13 we've lost the majority of time dedicated to teaching computing (including CS) at age 14-16; I've argued that this has had a disproportionate impact on girls and poorer students (page 45-48).

To add a bit of context from England: Students typically pick 8-10 subjects for GCSE, though their ‘options’ might be limited. Most schools will insist that students take Maths, English Language, English Literature, Physics, Chemistry, Biology, and often: French or German, and History or Geography. This leaves students with one or two actual ‘options’. Many schools are also imposing entry requirements on GCSE CS, only letting the high achieving students (often focusing on maths) onto the course; this will likely have an impact in access to the curriculum for poorer students who are less likely to achieve well in mathematics. Why don’t females pick CS in the same way they picked ICT? This might well be linked to curriculum, role models, contextualisation etc.

One of the reasons given for the curriculum change in 2012 was that students were being "bored to death" by ICT, with ICT generally being the application of software products to solve problems and the implications of technology for the world. The application of technology to the world lends itself to the contextualisation of the curriculum and the assessment materials. There was a lot of project-based assessment with real-world scenarios for students to engage with, e.g. making marketing materials for businesses, using spreadsheets to organise holiday bookings, etc. (see https://web.archive.org/web/20161130183550if_/http://www.aqa.org.uk/subjects/computer-science-and-it/gcse/information-and-communication-technology-4520). The GCSE CS is a different beast. It can be contextualised, but this is probably more difficult to do as there is an awful lot of material to cover and the assessment methodology is entirely exam based and on paper for the largest exam boards. Anecdotally we hear of schools cutting down on programming time on computers, as the exam is handwritten.

Data on what females ‘liked’ in the old ICT curriculum is quite limited, but what does exist places some of the ‘non-CS’ elements quite highly. So the actual curriculum content might have a part to play here. Having taught ICT (and CS) for many years, I found that most students I knew really enjoyed the ICT components. I’d argue that the pre-reform discourse of ICT being “useless, boring, easy” and CS being “useful, exciting, rigorous” was an easy political position to take, and not reflective of the reality in schools with competent teachers. We now find ourselves in a position where we probably have a little too much CS and not enough digital literacy / ICT for the general needs of students. I and people like Miles Berry (p49) have argued for a more generalist qualification which maintains elements of CS, though there appears to be little political will to make this happen.

To add another suggestion as to why we’re seeing females disengaging: within the English context, we see females substantially underachieving at GCSE CS in comparison to their other subjects and to males of similar ‘ability’ (ability here meaning similar grade profiles in other subjects). Why this is remains unclear; we see similar underachievement in Maths and Physics. My fear is that encouraging females to take CS might lead to their self-efficacy being knocked, and therefore make them less likely to pursue further study or a career in tech. We also found that females from poorer backgrounds were more likely to pick GCSE CS than their middle-class peers; we speculate that this might be the result of different cultural/family pressures and a keener engagement with the ‘employability’ and ‘good pay’ discourse that often surrounds the representation of studying CS, however true this might be for these groups in reality. More research on the above is coming soon through scaricomp.

Additionally, in terms of the UK picture, you’ll probably want to check in with Sue Sentance and the Gender Balance in Computing project. One of their theories for the decline in computing is that CS is being timetabled at the same time as other subjects that are (generally) more attractive to females. I’m not sure if they’ve started that part of the research yet, but it’s worth checking in. They are running interventions across the country, but I don’t believe they are trying to do a nationally representative survey.

August 2, 2021 at 7:00 am Leave a comment

Announcing the inaugural Illinois Computer Science Summer Teaching Workshop: Guest blog post from Geoffrey Challen

We are excited to invite you to the inaugural Illinois Computer Science Summer Teaching Workshop: https://teaching-workshop.cs.illinois.edu/. The 2021 workshop will be held virtually over two half-days on August 10–11, 2021. The workshop is free to attend, and teaching faculty, research faculty, and graduate and undergraduate students are all invited to participate, either by presenting or by joining the conversation. The deadline to submit an abstract is Tuesday, July 20.


Our goal is to bring together college instructors who are engaged in teaching computer science to discuss best practices, present new ideas, challenge the status quo, propose new directions, debunk existing assumptions, advocate for new approaches, and present surprising or preliminary results. This year’s theme is “How the Pandemic Transformed Our Teaching”, allowing participants to reflect on the difficult year behind us as we prepare to return to classrooms next fall. We are excited to welcome Professors Margo Seltzer (UBC), Tiffani Williams (Illinois), Susan Rodger (Duke), Nate Derbinsky (Northeastern), and David Malan (Harvard) as invited speakers.

July 17, 2021 at 10:51 am 2 comments

Considering the Danish Informatics Curriculum: Comparing National Computer Science Curricula

Michael Caspersen invited me to review a chapter on the Danish Informatics curriculum (see a link here). He asked me to compare it to the existing school CS curricula with which I’m familiar. That was an interesting exercise: how does anyone relate curricula across diverse contexts, even between nations? I gave it a shot. I most likely missed some important examples, since there are many curricula that I don’t know or don’t know well enough. I welcome comments on other CS curricula.

The Danish Informatics curriculum is unique for its focus on four competence areas:

  • Digital empowerment, which describes the ability to review and critique digital artifacts, asking where the strict demands of a computational system may not serve well the messy world in which humans live.
  • Digital design and design processes, which describes the ways in which designers come to understand the problem domain for which they design digital artifacts.
  • Computational thinking and modeling, which describes how data and algorithms are used to construct digital solutions and artifacts.
  • Technological knowledge and skills, which describes the tools (e.g., programming languages) and infrastructures (e.g., computer systems, networking) used to construct digital solutions and artifacts.

I am not familiar with any curriculum that encompasses all four competencies. I’m most familiar with elementary and high school curricula in the United States. Each US state has control over its own school system (i.e., there is no national curriculum), though many are influenced by recommendations from the Computer Science Teachers Association (CSTA) (see link here) and the K-12 CS Framework (link here).

In the United States, most computing curricula focus on technological knowledge and skills and on computational thinking and modeling. The former is important because the economic argument for computing education in schools is the most salient one in the United States. The latter most often appears as a focus on learning computing skills without programming, e.g., in the CS Unplugged activities from Tim Bell at the University of Canterbury (link).

Modeling is surprisingly rare in most state curricula. Calls for modeling and simulation are common in US mathematics and science education frameworks like the Next Generation Science Standards (link), but these have influenced few state curricula around computing education. Efforts to integrate computing to serve the needs of mathematics and science education are growing, but only a handful of states actively promote computing education as a way to support mandatory subjects. For example, Indiana has included computing learning objectives in its state science education standards in order to develop more integrated approaches.

I don’t know of any state curricula that include digital empowerment or digital design and design processes. These are critically important. Caspersen’s arguments for the Danish Informatics curriculum build on quotes from Henry Kissinger and Peter Naur, but they could also build on the work of C.P. Snow and Alan Perlis (the first ACM Turing Award laureate). In 1961, Snow and Perlis both argued for mandatory computing (though at the university level). Perlis argued that computing gave us new ways to understand the world. He would have recognized the digital design and design processes competency area. Snow warned that everyone should learn computing in order to understand how computing is influencing our world. He wrote: “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.” He would recognize the concerns of Kissinger and Naur, and the importance of digital empowerment.

The Danish Informatics curriculum is unique in its breadth and in its attention to the social aspects of computing artifacts and design. It encompasses important needs for citizens of the 21st century.

July 12, 2021 at 7:00 am 9 comments
