So what’s a notional machine anyway? A guest blog post from Ben Shapiro

Last week, we had a Dagstuhl Seminar about the concept of notional machines, as I mentioned in an earlier blog post about the work of Ben Shapiro and his student Abbie Zimmermann-Niefield. There is an amazing amount being written about the seminar already (see the Twitter stream here), with a detailed description from Andy Ko here in his blog and several posts from Felienne on her blog. I have written my own summary statement on the CACM Blog (see post here). It seems appropriate to let Ben have the summary word here, since I started the seminar with a reference to his work.

I’m heading back to Boulder from a Dagstuhl seminar on Notional Machines and Programming Language Semantics in Education. The natural question to ask is: what is a notional machine?

I don’t think we converged on an answer, but here’s my take: A notional machine is an explanation of the rules of a programmable system. The rules account for what makes a program a valid one and how a system will execute it.

Why this definition? Well, for one, it’s consistent with how du Boulay, coiner of the term notional machine, defined it at the workshop (“the best lie that explains what the computer does”). Two, it has discriminant utility (i.e. precision): the definition allows us to say that some things are notional machines and some are not. Three, it is consistent with a reasonable definition of formal semantics, and thus lets us imagine a continuum of notional machines that include descriptions of formal semantics, but also descriptions that are too imprecise — too informal — to be formal semantics but that still have explanatory value.

The first affordance is desirable because it allows us to avoid a breaking change in nomenclature. It would be good if people reading research papers about notional machines (see Juha Sorva’s nice review), including work on how people understand them, how teachers generate or select them, etc., don’t need to wrestle with what contemporary uses of the term mean in comparison to how du Boulay used the term thirty years ago. It may make it easier for the research community to converge on a shared sense of notional machine, unlike, say, computational thinking, where this has not been possible.

The second affordance, discriminant utility, is useful because it gives us a reason to want a term like notional machine in our vocabulary when we already have other useful and related terms like explanation and model and pedagogical content knowledge. Why popularize a new term when you already have perfectly good ones? A good reason to do so is that you'd like to refer to a distinct set of things from those the existing terms refer to.

The scope of our workshop was explicitly pedagogical: it was about notional machines “in education.” It was common within the workshop for people to refer to notional machines as pedagogical devices. Notional machines are often invented for pedagogical purposes, but other contexts may also give rise to them. Consider the case of Newtonian mechanics. Newton’s laws, and the representations that we construct around them (e.g. free body diagrams), were invented before Einstein described relativity. Newton’s laws weren’t intended as pedagogical tools but as tools to describe the laws of the universe, within the scales of size and velocity that were accessible to humans at the time. Today we sequence physics curricula to offer up Newtonian physics before quantum physics because we believe it is easier to understand. But even experts will continue to use Newtonian physics after they have studied (and hopefully understand) quantum physics. This is because, in many cases, the additional complexity of working within a quantum model offers no additional utility over the simpler abstractions that Newtonian physics provides. It doesn’t help one predict the behavior of a system any better within the context of use, but it likely does impose additional work on the system doing the calculation. So, while pedagogical contexts may be a primary locus for the generation, selection, and learning of notional machines, notional machines are not solely of pedagogical value.

Within the workshop, I noticed that people often seemed to want their definitions, taxonomies, and examples of notional machines to include entities and details beyond those encompassed by the definition I have provided above. For example, some participants suggested that action rules can be, or be part of, notional machines. An example of an action rule might be “use descriptive variable names” or “make sure to check for None when programming in Python.” While both of these practices can be quite helpful, my definition of notional machines accepts neither of them. It rejects them because they aren’t about the rules by which a computer executes a program. In most languages, what one names variables does not matter, so long as one uses a name consistently within the appropriate scope. “Make sure to check for None” is a good heuristic for writing a correct program, but not an account of the rules a programming environment uses to run a program. In contrast, “dereferencing a null pointer causes a crash” is a valid notional machine, or at least a fragment of one.
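To make the distinction concrete, here is a small Python sketch (my own illustration, not an example from the workshop). The notional machine’s rules are the ones the interpreter actually enforces; the action rules are practices the interpreter never checks:

```python
# Variable names are invisible to the machine: only consistent use within
# a scope matters. These two functions are the same program to Python.
def first_word(text):
    return text.split()[0]

def fw(t):  # terse names violate the action rule, not the machine's rules
    return t.split()[0]

assert first_word("hello world") == fw("hello world") == "hello"

# "Make sure to check for None" is an action rule -- a heuristic for
# writing correct programs. The machine's own rule is simpler and
# unconditional: None has no .split attribute, so this raises.
try:
    first_word(None)
except AttributeError:
    print("The machine's rule: None has no attribute 'split'")
```

The `try`/`except` at the end is the notional-machine fragment: it states what the system does, not what the programmer should do.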

Why do I want to exclude these things? Because a) I think it’s valuable to have a term that refers to the ways we communicate about what programming languages are and how the programs written in them will behave. And b) a broader definition would refer to just about everything that has anything to do with the practice of programming. A term that broad doesn’t seem worth adding to our lexicon, and it would be less helpful for designing and interpreting research studies in computing education.

The third affordance is desirable because it may allow us to form stronger bridges to the programming languages research world. It allows us to examine — and value — the kinds of artifacts that they produce (programming languages and semantics for those languages) while also studying the contradictions between the values embedded in the production of those artifacts and the values that drive our own work. Programming languages (PL) researchers are generally quite focused on demonstrating the soundness of the designs they create, but typically pay little attention to the usability of the artifacts they produce. Research languages and formal semantics written in Greek-letter notation have difficult user interfaces, at least to those of us sitting outside that community. How can we create a research community that includes the people, practices, and artifacts of PL and that conducts research on learning? One way is to decide to treat the practices and artifacts of PL researchers, such as writing down formal semantics, as instances of something that computing education researchers care about: producing explanations of how programming systems work. PL researchers describing languages’ semantics aren’t doing something very different in kind from what educators do when they explain how programming languages work. But (I think) they usually do so with greater precision and less abstraction than educators do. Educators’ abstractions may be metaphorical (e.g. “There’s a little man inside the box that reads what you wrote, and follows your instructions, line by line…”), but at least under my definition, they are of the same category as the descriptions that semanticists write down. As such, the range of things that can be notional machines, in addition to the programming languages they describe, may serve as boundary objects to link our communities together. I think we can learn a lot from each other.

That overlap presents opportunities. It’s an opportunity for us to learn from each other and an opportunity to conduct new lines of research. Imagine that we are faced with the desire to explain a programming system. How would a semanticist explain this system? How would an experienced teacher? An inexperienced teacher? What do the teachers’ explanations tell us about what’s important? What does a semanticist’s explanation tell us about the kernel of truth that must be conveyed? How do these overlap? How do they diverge? What actually works for students? Can pedagogical explanations be more precise (and less metaphorical) and still be as helpful to students? Are more precise definitions actually more helpful to students than less precise ones? If so, what does one need to know to write a formal semantics? How does one learn to do that? How does one teach educators to do that? How can we design better programming languages, where better is defined as being easier to understand or use? How can we design better programming languages when we have different theories of what it means to program well? How do we support and assess learning of programming, and design programming languages and notional machines to explain them, when we have different goals for what’s important to accomplish with programming?

There are many other questions we could ask too. Several groups at the workshop held breakout sessions to brainstorm these, but I think it’s best to let them tell their own stories.

In summary, I think the term notional machines has value to computing education research, but only if we can come to a consensus about what the term means, and what it doesn’t. That’s my definition and why I’ve scoped it how I have. What’s your take?

If you’d like to read more (including viewpoints different than mine), make sure to check out Felienne’s and Andy’s blog posts on this same topic.

Thank you to Shriram, Mark, Jan, and Juha for organizing the workshop, and to the other participants in the workshop for many lively and generous conversations. Thanks as well to the wonderful Dagstuhl staff.


July 15, 2019 at 12:00 pm

How to reduce the defensive climate, and what students really need to understand code: ITICSE 2019 Preview

This year, we’re presenting two papers at the 2019 ACM SIGCSE Innovation and Technology in CS Education (ITICSE) conference in Aberdeen, Scotland. I’ve been to ITICSE several times and enjoy it every time, but I can’t always justify the trip. This year, Barbara Ericson, Katie Cunningham, and I are all in Germany for a Dagstuhl Seminar on Notional Machines the week before, so we took the opportunity to submit and are fortunate to be invited to present.

Making CS Learning Visible: Case Studies on How Visibility of Student Work Supports a Community of Learners in CS Classrooms by Amber Solomon, Vanessa Oguamanam, Mark Guzdial, and Betsy DiSalvo

When I taught CS Ed Research this last semester (see the blog post about open questions from the class), the students really resonated with Lecia Barker’s papers about defensive climate (the classic paper is here). The story of how CS classes are “characterized by competitiveness rather than cooperation, judgments about others, superiority, and neutrality rather than empathy” still rings true, 17 years after it was first written. Several of my students proposed research to follow up on the original study.

Amber and Vanessa are also motivated by the concerns about defensive climate in CS classes, but they don’t want to measure it. They are suggesting an intervention.

They suggest that a community of learners approach would reduce defensive climate. Key to creating a community of CS learners, they propose, is making student work and process visible. Vanessa works in maker-oriented curricula, where student work is physical and the work process is visible. Amber did the evaluation of the AR design studio classroom that I’ve written about previously. In both of these case studies, they observed student communication patterns that were different from the defensive climate studies and more in keeping with a community of learners culture. Defensive climate is still a problem, and changing culture and community is the way to address it.

Novice Rationales for Sketching and Tracing, and How They Try to Avoid It by Katie Cunningham, Shannon Ke, Mark Guzdial, and Barbara Ericson

At ICER 2017, Katie presented a study of how students trace their programs on paper (see ICER 2017 paper here, and my blog post summary here). She had some fascinating and surprising results. For example, if students traced their programs only part way and then guessed at the final result, they were more likely to get the problem wrong than if they hadn’t traced at all. But why? Why would students trace the program part way and then stop — only to get it wrong?

That’s what Katie explored in this follow-up paper. She had an innovative experimental design. She replicated her original tracing experiment, then pulled out about a dozen interesting participants, and invited them back for a retrospective interview. She could show them the original problem and what they traced — and then ask them why? Here’s one of the answers: They thought that they “got the hang of it.” They recognized a goal or pattern. They just recognized wrong.

One of my favorite parts of the paper is completely visual. Katie had a terrific idea — let’s ask the teacher of the class to trace the problems. Here’s one of the traces that the teacher did:

Here are some examples of what students did:

Notice a distinct lack of similarity? Why? Why don’t students trace like the instructor did?

This is a qualitative study, so it’s rich with interview data. I recommend reading the whole paper. There’s a neat part at the end where Katie points out, “Program visualizations do X. When students trace, they do Y. Why are these things so different?”

July 14, 2019 at 1:00 pm

Learning to build machine learning applications without code as an example of computing education research


Ben Shapiro shared a nice video that his student Abbie Zimmermann-Niefield made about their new paper at IDC. They built a system that allows kids to build applications with machine learning to detect (in this example) good from bad soccer passes.

The video (and paper) are perfect for IDC. It’s a wonderful example of giving students a new computational medium to build new kinds of applications that they couldn’t build previously. But the video also raised a bunch of questions for me. Abbie at one point talks about users of her system as “learners.” What are they learning?

David Moon put his finger on some of the issues for me with his tweet:

You don’t have to code to build ML applications.  But then, is it programming?  In the About page for this blog, I have defined computing education research as studying how people come to understand computing, and how to improve that process.  What are the students coming to understand in Abbie and Ben’s application?  Is studying how students come to build, understand, and debug their ML applications an example of computing education research?

I exchanged some messages with Ben, and came to an understanding of what he’s doing — which in turn gave me a new understanding of what I do.

In a blog post inspired by Juha Sorva, I suggested a refinement of my original definition.  “Coming to understand computing” means to develop a workable mental model or to learn a notional machine of a computing system. Programming is about intentionally defining a process for a computational agent to execute at another time. A notional machine is an explanation for the behavior of a system — it’s a teacher’s attempt to influence the mental model that the student is forming about the system.  I learned more about notional machines at a later Dagstuhl, and I’m excited to be attending a Dagstuhl Seminar this week where I’ll learn a lot more about notional machines.

Abbie’s participants are developing a mental model of how the system works — it’s not very elaborate, and it’s mostly wrong. One student tells Abbie that she needs to have more of both good and bad examples to make the system more accurate. Okay, but why?

Ben says that they want to reach the point where students develop a working mental model of the system: about why they need to oversample some kinds of events, to be able to choose between different kinds of machine learning models, to be able to judge what makes for a good data set, and to decide how to test the system to determine if it’s classifying all the desired inputs correctly. Really, these are all the kinds of things we want students building any kind of program to think about — did I build this correctly (what I wanted it to be), how do I know, and how do I test to make sure? Whether it’s by constructing data or by writing code, it’s still about intentionally defining a process for a computational agent, and then testing that process to determine if it matches the desired function.

It’s a fascinating question (which I expect we’ll be discussing this week) about what notional machines one uses to explain machine learning models.  It’s an important computing education research question: what mental models do students form about machine learning systems?  A different one is: what notional machines do we teach in order to improve the mental models that students develop about machine learning models?

Now, does it matter if students can’t see the code?  I don’t think so.  It probably matters for CS major undergraduates (which Ben, Peter, and Rebecca have argued elsewhere), but for the general population?  What does it mean to “see the code” anyway?

  • At a high level, I’m a big fan of block-based languages (as mentioned in a recent blog post about one of David Weintrop’s results). Block-based languages are also a higher-level representation of code.  That doesn’t matter. It’s still programming. It’s still computing.
  • At a low level, who really understands what their code does anymore? With code optimization, processor prefetching, cache memory, and branch prediction, it’s really hard to know what’s really going on anyway. Some people do. Most people don’t. And it really doesn’t matter.

The lack of code might make the notional machine harder to teach.  There is no code to point at when explaining an algorithm (see Amber Solomon’s work on the role of gestures in teaching CS).  Maybe you wouldn’t explain an algorithm.  Maybe instead you’d point at examples and point at key features of those examples.  Maybe. It’s an open and interesting research question.

So. Computing education is about helping students to develop mental models of computing systems. These models must be workable to the point of being usable for intentional construction and debugging. Studying how students build machine learning applications without code is also computing education research.

July 8, 2019 at 2:00 am

Iterative and interdisciplinary participatory design sessions: Seeking advice on a research method

Here’s an unusual post for this blog: I’m looking for a research methodology, and I don’t know where to look for it. I’m hoping somebody here will have a suggestion — please do forward this blog post to others you think might have suggestions for me.

We’re running participatory design sessions with teachers — asking teachers to try out programming languages with scaffolded activities, and then tell us about what they’d like for their classroom. I’m collaborating with Tammy Shreiner and Bradford Dykes at Grand Valley State University around having social studies teachers build data visualizations. We’re scouring the book Participatory Design for Learning, and in particular, we’re using Michelle Wilkerson’s chapter (which I’ve read twice now) because it matches the kind of work we’re doing.

Michelle uses a technique called Conjecture Mapping to describe how her team thinks about the components of a participatory design session. A session has a specific embodiment (things you put into the classroom or session), which you hope will lead to mediating processes (e.g., participants exploring data, people talking about their code, etc.). These are processes which should lead to desired outcomes based on theory. A conjecture map is like a logic model in that it connects your design to what you want to have happen, but a conjecture map is less about measuring outcomes. Rather, it’s more about describing mediating processes, which you theorize will lead to desired outcomes. The mediating process column is really key — it tells you what to look for when you run the design session. If you don’t hear the kind of talk you want or see participants succeeding in the activity, something has gone wrong. Fix it before the next iteration. The paper on this technique is Conjecture Mapping: An Approach to Systematic Educational Design Research by William Sandoval.

So here’s the first problem: We have different sets of outcomes for our sessions. They’re interdisciplinary. I want to see the teachers being successful with their programs, my collaborators Tammy Shreiner and Bradford Dykes want to see them talking about their data, and we all want to see participants relating their data visualizations to their history class (e.g., they shouldn’t be just making pretty pictures, and they should be connecting the visual elements to the historical meaning). Should we put all of these mediating processes and outcomes into one big conjecture map?

We are not satisfied with this combined approach, because we’re going to be iterating on our designs over time. Most participatory design approaches are iterative, but I haven’t seen a way of tracking changes (in embodiment or mediating practices) over time. Right now, we’re working with Vega-Lite and JavaScript. In our next iterations, we’ll likely do different examples with Vega-Lite. Over time, we want to be building prototypes of data visualization languages designed explicitly for social studies educators (task-specific programming languages).

We are concerned about two big problems as we iterate:

  • Missing Out. I don’t want to lose any of our mediating processes. I want to make sure that we continue to see success with programming, engagement with data, and meaningful visualizations.
  • Changing the balance. The easiest trap for our work will be to over-emphasize the programming, and swamp out the data literacy and data visualization processes and outcomes. If our sessions become perceived as primarily a programming activity, we’re moving in the wrong direction. We want the data literacy and data visualization to be primary, with programming as a supporting activity.

The diagram at the bottom may help describe the problem — it’s the sketch that my PhD student Bahare Naimipour and I made while talking through this problem. We need to track multiple disciplinary processes and outcomes over time as we iterate across different embodiments. This isn’t about assessing an intervention or design. This is about gathering input as we design and implement technology prototypes. We want to be moving in the right direction, for the processes and outcomes that we want.

Here’s where I’m asking for help: Where should we be looking for exemplars? Who else is doing iterative, multidisciplinary participatory design sessions? What are good methods for us to use?

Thanks!

July 1, 2019 at 7:00 am

What a CS Ed Letter Writer Needs: Evaluating Impact for Promotion and Tenure in Computing Education

I’ve been asked, “When I’m writing a tenure or promotion letter for someone who works in CS education, what should I say?” I’m motivated to finally answer, in response to an excellent post by Andy Ko, On the academic quantified self. I recommend it highly, and suggest you go read that before this post.

Andy’s post is on how to present his scholarly self. His key question is “How can senior faculty like myself model scholarly selves rather than quantified selves?” He critiques his own biographic paragraph, which contains phrases like “is the author of over 80 peer-reviewed publications, 11 receiving best paper awards and 3 receiving most influential paper awards.” He restructures it to emphasize the narrative of his research, with sentences like this:

His most recent investigations have conceptualized the skills involved in programming, theorizing about the interplay between rigorous knowledge of programming language semantics, strategies for addressing the range of problems that arise in programming, and self-regulation skills for managing the selection, execution, and abandonment of strategies; these are impacting how programming is learned and taught.

Andy is the program chair at the University of Washington’s School of Information. He writes as a role model for how to present oneself in academia — not just numbers, but a narrative about knowledge-building.

I have a slightly different perspective. I am frequently a letter writer for promotion or tenure (and often both). I don’t get to set the criteria — those are set by the institution. The challenge gets harder when the criteria were clearly written for traditional Computer Science Scholarship of Discovery (versus the other forms of scholarship described by Boyer, such as the Scholarship of Application or Integration), but the candidate specializes in computing education research or is teaching-track faculty.

The criterion that most departments agree on for academic success is impact. So there’s the question: How do we evaluate the impact of academic work in computing education?

As a letter writer, I need a combination of both of Andy’s biographical paragraphs, but the latter is more valuable for me. Statistics like “80 peer-reviewed publications, 11 receiving best paper awards and 3 receiving most influential paper awards” tells me about perceptions of quality by the reviewers. Peer review (for papers and grants) and paper awards are really important for third year review and sometimes for tenure, to make the argument that the candidate is doing good work and is on a promising trajectory.

A letter writer should not just cite the numbers. The promotion and tenure committees are looking for judgment based on the letter writers’ expertise. Construct a narrative. Make an argument.

An argument for impact has to be about realized potential. Andy’s second paragraph tells me where to look for that impact. Phrases like “these are impacting how programming is learned and taught” inform me where to look for evidence. I want to see that this work is actually changing learning and teaching practices — by someone other than the candidate.

If the candidate is in computing education research, then some of the traditional measures of Scholarship of Discovery still work. One important form of impact is on other researchers. Candidates can help me as a letter writer when they can show in the narrative of their research statement how other researchers and other projects are building on their work. I once was reviewing a candidate in the US who showed that a whole funding program in another country referenced and built upon their work. Indirectly, that candidate impacted every research project that that program funded — that’s amazing impact, but hard to measure. As Andy says, you have to spell out the narrative.

As much as we dislike bean-counting, an H-index (and similar metrics) does provide evidence that other researchers are building on the work of the candidate. It’s not the only measure. It’s just a number, and it has to be put in context with judgment informed by the letter writers’ expertise.

If a candidate is only focused on teaching, I usually turn away the request to write the letter.  I have some research interest in how to measure high-quality teaching (e.g., how to measure CS PCK), but I don’t know how to evaluate the practice of teaching computing.

If the candidate is (a) tenure-track in computing education or (b) teaching track and aims to influence others’ practice, the argument for impact may require some non-traditional measures. Some that I’ve used in my letters:

  • If a candidate can find evidence that even one other instructor adopted curriculum or teaching practices invented by the candidate, that’s impact. That means somebody else looked at the candidate’s work, saw the value in it, and adopted it. Links to syllabi, letters from instructors or schools, and even textbooks that incorporate the candidate’s work (even if not cited directly) are all good forms of evidence.
  • One of the reasons I get asked to write letters is that I’m still active in computing education. I can give evidence of impact from my personal experience. Researchers influence the research discourse, even before it shows up in the research literature. The discourse happens in hallways of conferences, in social media, and in workshops and seminars like Dagstuhl. This is inherently a biased form of evidence — I can’t be everywhere and hear everything. I might not notice everything that gets discussed. An institution only gets my evidence if they ask me. That bias is one reason why any case for promotion and tenure asks for several letters.
  • Sometimes, there is impact by influence and association. I have written a supportive letter for a candidate who had not published a lot, but had been critical in the success of several other people. The candidate’s co-authors and co-investigators on projects had become influential leaders in computing education. I knew from talking to those co-authors that the candidate had been a leader on the projects. The candidate had launched significant projects and advanced the careers of others. That’s an important form of impact.
  • It’s hard to use success of students as an indicator of candidate’s impact. How much did the candidate influence the success of those students? Letters from the students can be helpful, but it’s still hard to make that kind of case. If a candidate works with terrific students, the candidate does not have to make much impact, and the students will still be successful. How do you argue for the value added by the candidate? If a whole class dramatically improves in performance or retention due to the efforts of a candidate — that’s a positive and measurable form of impact.
  • I’m a big fan of using Boyer’s Scholarship of Integration and Application in letters. If a candidate is one of the first to integrate two areas of research, or to apply a new method, or to build a curriculum or tool that meets a unique need, that is a potential form of impact. I still like to see evidence that the work itself had influence (e.g., was adopted by someone else, or changed student demographics, or changed practice of others).

We need to write letters that advance computing education candidates. Other countries are further along than the US in recognizing computing education contributions (see post on that theme here). We need to learn how to tell the stories of impact in computing education, in order to advance the candidates doing that kind of work.

(Thanks to Andy Ko and Shriram Krishnamurthi who gave me feedback on earlier forms of this post.)

June 24, 2019 at 7:00 am

An Ebook for Java AP CS Review: Guest Blog Post from Barbara Ericson

My research partner, co-author, and wife, Barbara Ericson, has been building an ebook (like the ones we’ve been making for AP CSP, as mentioned here and here) for students studying Advanced Placement (AP) CS Level A. We wanted to write a blog post about it, to help more AP CS A students and teachers find it. She kindly wrote this blog post about the ebook.

I started creating a free interactive ebook for the Advanced Placement (AP) Computer Science (CS) A course in 2014. See http://tinyurl.com/JavaReview-new. The AP CSA course is intended to be equivalent to a first course for computer science majors at the college level. It covers programming fundamentals (variables, strings, conditionals, loops), one- and two-dimensional arrays, lists, recursion, searching, sorting, and object-oriented programming in Java.

The AP CSA ebook was originally intended to be used as a review for the AP CSA exam. I had created a website that thousands of students were using to take practice multiple-choice exams, but that website couldn’t handle the load and kept crashing. Our team at Georgia Tech was creating a free interactive ebook for the Advanced Placement Computer Science Principles (CSP) course on the Runestone platform. The Runestone platform was easily handling thousands of learners per day, so I moved the multiple-choice questions into a new interactive ebook for AP CSA. I also added a short description of each topic on the AP CSA exam and several practice exams.

Over the years, my team of undergraduate and high school students and I have added more content to the Java Review ebook and thousands of learners have used it.  It includes text, pictures, videos, executable and modifiable Java code, multiple-choice questions, fill-in-the-blank problems, mixed-up code problems (Parsons problems), clickable area problems, short answer questions, drag and drop questions, timed exams, and links to other practice sites such as CodingBat (https://codingbat.com/java) and the Java Tutor (http://pythontutor.com/java.html#mode=edit). It also includes free response (write code) questions from past exams.

Fill-in-the-blank problems ask a user to type in the answer to a question, and the answer is checked against a regular expression. See https://tinyurl.com/fillInBlankEx. Mixed-up code problems (Parsons problems) provide the correct code to solve a problem, but the code is broken into code blocks and mixed up. The learner must drag the blocks into the correct order. See https://tinyurl.com/ParsonsEx. I studied Parsons problems for my dissertation and invented two types of adaptation to modify the difficulty of Parsons problems to keep learners challenged, but not frustrated. Clickable area questions ask learners to click on either lines of code or table elements to answer a question. See https://tinyurl.com/clickableEx. Short answer questions allow users to type in text in response to a question. See https://tinyurl.com/shortAnsEx. Drag and drop questions allow the learner to drag a definition to a concept. See https://tinyurl.com/y68cxmpw. Timed exams give the learner a set amount of time to finish a practice exam. They show the questions one at a time and don’t give the learner feedback about the correctness of the answers until after the exam. See https://tinyurl.com/timedEx.
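To illustrate the regular-expression checking described above, here is a minimal sketch in Python. This is a hypothetical example, not Runestone’s actual implementation; the function name and the sample pattern are my own, and a real grader would also handle things like partial credit and logging.

```python
import re

def check_fill_in_blank(answer: str, pattern: str) -> bool:
    """Check a learner's typed answer against an instructor-supplied regex.

    fullmatch (rather than search) requires the entire answer to conform,
    so stray extra text is rejected. Leading/trailing whitespace is ignored.
    """
    return re.fullmatch(pattern, answer.strip()) is not None

# A pattern that accepts "int" in any letter case, with surrounding spaces:
print(check_fill_in_blank("  Int ", r"(?i)int"))   # True
print(check_fill_in_blank("integer", r"(?i)int"))  # False
```

Using `fullmatch` instead of `match` or `search` is the key design choice here: it keeps an answer like "integer" from being accepted just because it contains "int".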

I am currently analyzing the log file data from both the AP CSA and CSP ebooks. Learners typically attempt to answer the practice-type questions, but don’t always run the example code or watch the videos. In an observation study I ran as part of my dissertation work, teachers said that they didn’t run the code if they got the related practice question correct. They also didn’t always watch the videos, especially if the video content was also in the text. Usage of the ebook tends to drop from the first chapter to the last instructional chapter, but increases again in the practice exam chapters at the end of the ebook. Usage also drops across the instructional material in a chapter and then increases again in the practice item subchapters near the end of each chapter.

Beryl Hoffman, an Associate Professor of Computer Science at Elms College and a member of the Mobile CSP team, has been creating a new AP CSA ebook based on my AP CSA ebook, but revised to match the changes to the AP CSA course for 2019-2020. See https://tinyurl.com/csawesome. One of the reasons for creating this new ebook is to help Mobile CSP teachers prepare to teach CSA. The Mobile CSP team is currently piloting this book with CSP teachers.

June 17, 2019 at 7:00 am Leave a comment

Blocks and Beyond 2019 and SnapCon19 Call for Papers

#SnapCon19, the first Snap Conference, will be held September 22-25, 2019, in Heidelberg, Germany.  Register by June 24 at this website.


Blocks and Beyond 2019: Beyond Blocks 

VL/HCC workshop in Memphis, TN, USA. Fri Oct 18, 2019
http://cs.wellesley.edu/blocks-and-beyond

Scope and Goals

Blocks programming has become increasingly popular in programming environments targeted at beginner programmers, end users, and casual programmers. Capitalizing on the energy and enthusiasm from the first two Blocks and Beyond workshops, we are pleased to announce the 2019 Blocks and Beyond workshop.

Since blocks are only a small step towards leveraging visual languages and notations for specifying and understanding computation, the emphasis of the 2019 workshop is on the Beyond aspect of Blocks & Beyond: what kinds of visual notations and programming environment scaffolding facilitate: Understanding program semantics? Learning computational concepts? Developing computational identity and fostering computational participation and computational action?

The goal of this workshop is to bring together language designers, educators, researchers, and members of the broader VL/HCC community to answer these questions. We seek participants with diverse expertise, including, but not limited to: design of programming environments, instruction with these environments, human factors, the learning sciences, and learning analytics.

This workshop will engage participants to (1) discuss the state of the art of visual languages targeted at beginners, end users, and casual programmers; (2) assess the usability and effectiveness of these languages and their associated pedagogies; and (3) brainstorm about future directions for these languages.

Suggested Topics for Discussion

  • In what ways have blocks languages succeeded or failed at fulfilling the promise of visual languages to enhance the ability of humans to express computation?
  • How can visual languages and environments better support dynamic semantics and pragmatics, particularly with features for liveness, debugging, and understanding the dynamic execution of programs?
  • How usable and effective are visual environments for teaching computational thinking and programming? For democratizing programming and enabling computational participation and computational action? How do we know?
  • In what ways does visual programming help or hinder those who use them as a stepping stone to traditional text-based languages? What are good ways to support the transition between visual languages and text-based languages? How important is this?
  • How does the two-dimensional nature of visual language workspaces affect the way people create, modify, navigate, and search through their code?
  • What tools are there for creating new visual languages, especially domain-specific ones?
  • What are effective mechanisms for multiple people to collaborate on a visual program when they (1) are co-located or (2) are working together remotely?
  • What are effective pedagogical strategies to use with visual languages, both in traditional classroom settings and in informal and open-ended learning environments?
  • What are the most effective ways to provide help to visual programmers, especially in settings outside the classroom?
  • How can visual environments and associated curricular materials be made more accessible to everyone, especially those with visual and motor impairments and underrepresented populations in computing?
  • What lessons from the visual programming community are worth sharing with other language designers? Are there features of visual languages that should be incorporated into IDEs for traditional programming environments? What features of modern IDEs are lacking in visual languages?
  • How can online communities associated with these environments be leveraged to support users? Are these online communities inclusive and how can they be more inclusive?
  • For these environments, what data can be collected, and how can that data be analyzed to determine answers to questions like those above? How can we use such data to answer larger scale questions about early experiences with programming?

Submission

We invite three kinds of paper submissions to spark discussion at the workshop:

  • A position statement (2 to 3 pages) describing an idea, research question, or work in progress related to the design, teaching, or study of visual programming environments.
  • A short paper (up to 4 pages, excluding references and/or acknowledgments) describing previously unpublished results involving the design, study, or pedagogy of visual programming.
  • A long paper (up to 8 pages, excluding references and/or acknowledgments), with the same goals and content requirements as the short paper, but with a more substantial contribution.

To maximize discussion time at the workshop, paper presentation times will be very short.

All workshop participants (whether or not they have an accepted paper) are encouraged to present a demo and/or poster of their work during the workshop. Anyone wishing to present a demo/poster should submit a 1 to 2 paragraph abstract. There is also an option to submit a 1 to 2 page demo/poster summary document that will appear in the proceedings.

Submission details for papers and demo/poster abstracts and summary documents can be found at the workshop website:  http://cs.wellesley.edu/blocks-and-beyond

As with the first two Blocks and Beyond workshops, we are applying to publish the proceedings of this workshop with the IEEE.

Important Dates

  • Fri 12 Jul 2019: Paper submissions due (due by end of day, anytime on Earth)
  • Fri 09 Aug 2019: Author notification
  • Fri 16 Aug — Fri 20 Sep 2019: Rolling demo/poster abstract submissions
  • Fri 16 Aug — Fri 25 Oct 2019: Rolling demo/poster summary document submissions
  • Mon 09 Sep 2019: Camera ready paper submissions and copyright forms due
  • Fri 13 Sep 2019: Early registration for VL/HCC and B&B ends
  • Fri 18 Oct 2019: Workshop in Memphis
  • Fri 01 Nov 2019: Camera-ready demo/poster summary documents and copyright forms due


June 12, 2019 at 7:00 am Leave a comment
