Iterative and interdisciplinary participatory design sessions: Seeking advice on a research method

Here’s an unusual post for this blog: I’m looking for a research methodology, and I don’t know where to look for it. I’m hoping somebody here will have a suggestion — please do forward this blog post to others you think might have suggestions for me.

We’re running participatory design sessions with teachers — asking teachers to try out programming languages with scaffolded activities, and then tell us about what they’d like for their classroom. I’m collaborating with Tammy Shreiner and Bradford Dykes at Grand Valley State University on having social studies teachers build data visualizations. We’re scouring the book Participatory Design for Learning, and in particular, we’re using Michelle Wilkerson’s chapter (which I’ve read twice now) because it matches the kind of work we’re doing.

Michelle uses a technique called Conjecture Mapping to describe how her team thinks about the components of a participatory design session. A session has a specific embodiment (things you put into the classroom or session), which you hope will lead to mediating processes (e.g., participants exploring data, people talking about their code), which in turn should lead to desired outcomes based on theory. A conjecture map is like a logic model in that it connects your design to what you want to have happen, but a conjecture map is less about measuring outcomes and more about describing the mediating processes that you theorize will lead to those outcomes. The mediating process column is really key: it tells you what to look for when you run the design session. If you don’t hear the kind of talk you want or see participant success in the activity, something has gone wrong; fix it before the next iteration. The paper on this technique is Conjecture Mapping: An Approach to Systematic Educational Design Research by William Sandoval.

So here’s the first problem: We have a different set of outcomes for our sessions, because they’re interdisciplinary. I want to see the teachers being successful with their programs, my collaborators Tammy Shreiner and Bradford Dykes want to see them talking about their data, and we all want to see participants relating their data visualizations to their history class (e.g., they shouldn’t just be making pretty pictures; they should be connecting the visual elements to the historical meaning). Should we put all of these mediating processes and outcomes into one big conjecture map?

We are not satisfied with this combined approach, because we’re going to be iterating on our designs over time. Most participatory design approaches are iterative, but I haven’t seen a way of tracking changes (in embodiment or mediating practices) over time. Right now, we’re working with Vega-Lite and JavaScript. In our next iterations, we’ll likely do different examples with Vega-Lite. Over time, we want to be building prototypes of data visualization languages designed explicitly for social studies educators (task-specific programming languages).
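For readers who haven’t seen Vega-Lite: a chart is specified declaratively as a JSON object, which is part of what makes it attractive for scaffolded teacher activities. Here’s a minimal sketch of the kind of spec a session activity might use. The dataset and field names are my own illustration, not one of our actual session materials:

```javascript
// A minimal Vega-Lite specification, written as a plain JavaScript object.
// The data (immigration counts by decade) is illustrative only -- the kind
// of social-studies dataset our sessions work with, not an actual activity.
const spec = {
  $schema: "https://vega.github.io/schema/vega-lite/v4.json",
  description: "Immigration to the U.S. by decade (illustrative data)",
  data: {
    values: [
      { decade: "1880s", immigrants: 5246613 },
      { decade: "1890s", immigrants: 3687564 },
      { decade: "1900s", immigrants: 8795386 }
    ]
  },
  mark: "bar", // one bar per decade
  encoding: {
    x: { field: "decade", type: "ordinal" },
    y: { field: "immigrants", type: "quantitative" }
  }
};

// In a browser page that loads vega-embed, something like
// vegaEmbed("#vis", spec) would render this as a bar chart.
```

The appeal for participatory design is that the whole visualization is one declarative object: teachers can change a field name or swap `"bar"` for `"line"` and immediately see the effect, without writing imperative drawing code.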

We are concerned about two big problems as we iterate:

  • Missing Out. I don’t want to lose any of our mediating processes. I want to make sure that we continue to see success with programming, engagement with data, and meaningful visualizations.
  • Changing the balance. The easiest trap for our work will be to over-emphasize the programming, and swamp out the data literacy and data visualization processes and outcomes. If our sessions become perceived as primarily a programming activity, we’re moving in the wrong direction. We want the data literacy and data visualization to be primary, with programming as a supporting activity.

The diagram at the bottom may help describe the problem — it’s the sketch that my PhD student Bahare Naimipour and I made while talking through this problem. We need to track multiple disciplinary processes and outcomes over time as we iterate across different embodiments. This isn’t about assessing an intervention or design. This is about gathering input as we design and implement technology prototypes. We want to be moving in the right direction, for the processes and outcomes that we want.

Here’s where I’m asking for help: Where should we be looking for exemplars? Who else is doing iterative, multidisciplinary participatory design sessions? What are good methods for us to use?

Thanks!

July 1, 2019 at 7:00 am 4 comments

What a CS Ed Letter Writer Needs: Evaluating Impact for Promotion and Tenure in Computing Education

I’ve been asked, “When I’m writing a tenure or promotion letter for someone who works in CS education, what should I say?” I’m motivated to finally answer, in response to an excellent post by Andy Ko, On the academic quantified self. I recommend it highly, and suggest you go read that before this post.

Andy’s post is on how to present his scholarly self. His key question is “How can senior faculty like myself model scholarly selves rather than quantified selves?” He critiques his own biographic paragraph, which contains phrases like “is the author of over 80 peer-reviewed publications, 11 receiving best paper awards and 3 receiving most influential paper awards.” He restructures it to emphasize the narrative of his research, with sentences like this:

His most recent investigations have conceptualized the skills involved in programming, theorizing about the interplay between rigorous knowledge of programming language semantics, strategies for addressing the range of problems that arise in programming, and self-regulation skills for managing the selection, execution, and abandonment of strategies; these are impacting how programming is learned and taught.

Andy is the program chair at the University of Washington’s School of Information. He writes as a role model for how to present oneself in academia — not just numbers, but a narrative about knowledge-building.

I have a slightly different perspective. I am frequently a letter writer for promotion or tenure (and often both). I don’t get to set the criteria; those are set by the institution. The challenge gets harder when the criteria were clearly written for traditional Computer Science Scholarship of Discovery (versus the other forms of scholarship described by Boyer, such as the Scholarship of Application or Integration), but the candidate is a computing education researcher or teaching-track faculty.

The criterion that most departments agree on for academic success is impact. So here’s the question: How do we evaluate the impact of academic work in computing education?

As a letter writer, I need a combination of both of Andy’s biographical paragraphs, but the latter is more valuable for me. Statistics like “80 peer-reviewed publications, 11 receiving best paper awards and 3 receiving most influential paper awards” tell me about perceptions of quality by the reviewers. Peer review (for papers and grants) and paper awards are really important for a third-year review and sometimes for tenure, to make the argument that the candidate is doing good work and is on a promising trajectory.

A letter writer should not just cite the numbers. The promotion and tenure committees are looking for judgment based on the letter writers’ expertise. Construct a narrative. Make an argument.

An argument for impact has to be about realized potential. Andy’s second paragraph tells me where to look for that impact. Phrases like “these are impacting how programming is learned and taught” inform me where to look for evidence. I want to see that this work is actually changing learning and teaching practices — by someone other than the candidate.

If the candidate is in computing education research, then some of the traditional measures of Scholarship of Discovery still work. One important form of impact is on other researchers. Candidates can help me as a letter writer when they can show in the narrative of their research statement how other researchers and other projects are building on their work. I once was reviewing a candidate in the US who showed that a whole funding program in another country referenced and built upon their work. Indirectly, that candidate impacted every research project that that program funded — that’s amazing impact, but hard to measure. As Andy says, you have to spell out the narrative.

As much as we dislike bean-counting, an H-index (and similar metrics) does provide evidence that other researchers are building on the work of the candidate. It’s not the only measure. It’s just a number, and it has to be put in context with judgment informed by the letter writers’ expertise.
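For readers unfamiliar with the metric: a candidate’s H-index is the largest h such that h of their papers each have at least h citations. It’s simple enough to compute yourself from a citation list, which is worth doing once to see how coarse the number really is. A quick sketch, with made-up citation counts:

```javascript
// Compute an H-index from per-paper citation counts:
// the largest h such that h papers each have at least h citations.
function hIndex(citations) {
  const sorted = [...citations].sort((a, b) => b - a); // descending
  let h = 0;
  // While the (h+1)-th most-cited paper has at least h+1 citations,
  // the index can grow by one.
  while (h < sorted.length && sorted[h] >= h + 1) {
    h += 1;
  }
  return h;
}

// Made-up citation counts for illustration:
console.log(hIndex([120, 45, 30, 8, 8, 3, 1])); // 5
```

Note what the example shows: one paper with 120 citations and one with 45 contribute exactly as much to the H-index as two papers with 8 citations each. That lossiness is precisely why the number needs the contextual judgment described above.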

If a candidate is only focused on teaching, I usually turn away the request to write the letter.  I have some research interest in how to measure high-quality teaching (e.g., how to measure CS PCK), but I don’t know how to evaluate the practice of teaching computing.

If the candidate is (a) tenure-track in computing education or (b) teaching track and aims to influence others’ practice, the argument for impact may require some non-traditional measures. Some that I’ve used in my letters:

  • If a candidate can find evidence that even one other instructor adopted curriculum or teaching practices invented by the candidate, that’s impact. That means somebody else looked at the candidate’s work, saw the value in it, and adopted it. Links to syllabi, letters from instructors or schools, and even textbooks that incorporate the candidate’s work (even if not cited directly) are all good forms of evidence.
  • One of the reasons I get asked to write letters is that I’m still active in computing education. I can give evidence of impact from my personal experience. Researchers influence the research discourse, even before it shows up in the research literature. The discourse happens in hallways of conferences, in social media, and in workshops and seminars like Dagstuhl. This is inherently a biased form of evidence — I can’t be everywhere and hear everything. I might not notice everything that gets discussed. An institution only gets my evidence if they ask me. That bias is one reason why any case for promotion and tenure asks for several letters.
  • Sometimes, there is impact by influence and association. I have written a supportive letter for a candidate who had not published a lot, but had been critical in the success of several other people. The candidate’s co-authors and co-investigators on projects had become influential leaders in computing education. I knew from talking to those co-authors that the candidate had been a leader on the projects. The candidate had launched significant projects and advanced the careers of others. That’s an important form of impact.
  • It’s hard to use success of students as an indicator of candidate’s impact. How much did the candidate influence the success of those students? Letters from the students can be helpful, but it’s still hard to make that kind of case. If a candidate works with terrific students, the candidate does not have to make much impact, and the students will still be successful. How do you argue for the value added by the candidate? If a whole class dramatically improves in performance or retention due to the efforts of a candidate — that’s a positive and measurable form of impact.
  • I’m a big fan of using Boyer’s Scholarship of Integration and Application in letters. If a candidate is one of the first to integrate two areas of research, or to apply a new method, or to build a curriculum or tool that meets a unique need, that is a potential form of impact. I still like to see evidence that the work itself had influence (e.g., was adopted by someone else, or changed student demographics, or changed practice of others).

We need to write letters that advance computing education candidates. Other countries are further along than the US in recognizing computing education contributions (see post on that theme here). We need to learn how to tell the stories of impact in computing education, in order to advance the candidates doing that kind of work.

(Thanks to Andy Ko and Shriram Krishnamurthi who gave me feedback on earlier forms of this post.)

June 24, 2019 at 7:00 am 3 comments

An Ebook for Java AP CS Review: Guest Blog Post from Barbara Ericson

My research partner, co-author, and wife, Barbara Ericson, has been building an ebook (like the ones we’ve been making for AP CSP, as mentioned here and here) for students studying Advanced Placement (AP) CS Level A. We wanted to write a blog post about it, to help more AP CS A students and teachers find it. She kindly wrote this blog post about the ebook.

I started creating a free interactive ebook for the Advanced Placement (AP) Computer Science (CS) A course in 2014.  See http://tinyurl.com/JavaReview-new. The AP CSA course is intended to be equivalent to a first course for computer science majors at the college level.  It covers programming fundamentals (variables, strings, conditionals, loops), one and two dimensional arrays, lists, recursion, searching, sorting, and object-oriented programming in Java.

The AP CSA ebook was originally intended to be used as a review for the AP CSA exam. I had created a website that thousands of students were using to take practice multiple-choice exams, but that website couldn’t handle the load and kept crashing. Our team at Georgia Tech was creating a free interactive ebook for the Advanced Placement Computer Science Principles (CSP) course on the Runestone platform. The Runestone platform was easily handling thousands of learners per day, so I moved the multiple-choice questions into a new interactive ebook for AP CSA. I also added a short description of each topic on the AP CSA exam and several practice exams.

Over the years, my team of undergraduate and high school students and I have added more content to the Java Review ebook and thousands of learners have used it.  It includes text, pictures, videos, executable and modifiable Java code, multiple-choice questions, fill-in-the-blank problems, mixed-up code problems (Parsons problems), clickable area problems, short answer questions, drag and drop questions, timed exams, and links to other practice sites such as CodingBat (https://codingbat.com/java) and the Java Tutor (http://pythontutor.com/java.html#mode=edit). It also includes free response (write code) questions from past exams.

The practice item types work as follows:

  • Fill-in-the-blank problems ask a user to type in the answer to a question, and the answer is checked against a regular expression. See https://tinyurl.com/fillInBlankEx.
  • Mixed-up code problems (Parsons problems) provide the correct code to solve a problem, but the code is broken into code blocks and mixed up. The learner must drag the blocks into the correct order. See https://tinyurl.com/ParsonsEx. I studied Parsons problems for my dissertation and invented two types of adaptation that modify the difficulty of Parsons problems to keep learners challenged, but not frustrated.
  • Clickable area questions ask learners to click on either lines of code or table elements to answer a question. See https://tinyurl.com/clickableEx.
  • Short answer questions allow users to type in text in response to a question. See https://tinyurl.com/shortAnsEx.
  • Drag and drop questions allow the learner to drag a definition to a concept. See https://tinyurl.com/y68cxmpw.
  • Timed exams give the learner a set amount of time to finish a practice exam. The questions are shown one at a time, and the learner doesn’t get feedback about the correctness of answers until after the exam. See https://tinyurl.com/timedEx.
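The regex-based checking that Barbara describes for fill-in-the-blank problems can be sketched simply. Runestone’s actual implementation is more elaborate than this; the pattern and function names below are my own illustration of the core idea:

```javascript
// Sketch of regex-based answer checking for a fill-in-the-blank question.
// The author writes a pattern describing acceptable answers; the learner's
// response is trimmed and tested against it.
function checkAnswer(response, pattern) {
  return pattern.test(response.trim());
}

// Illustrative pattern: for a question whose answer is 21, accept "21"
// or "21.0" (any number of trailing zeros), ignoring surrounding spaces.
const answerPattern = /^21(\.0+)?$/;

console.log(checkAnswer("21", answerPattern));     // true
console.log(checkAnswer(" 21.0 ", answerPattern)); // true
console.log(checkAnswer("12", answerPattern));     // false
```

The advantage over exact string matching is that the question author decides how forgiving to be — about whitespace, decimal forms, or alternate spellings — by widening or narrowing the pattern.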

I am currently analyzing the log file data from both the AP CSA and CSP ebooks. Learners typically attempt to answer the practice type questions, but don’t always run the example code or watch the videos. In an observation study I ran as part of my dissertation work, teachers said that they didn’t run the code if they got the related practice question correct. They also didn’t always watch the videos, especially if the video content was also in the text. Usage of the ebook tends to drop from the first chapter to the last instructional chapter, but increases again in the practice exam chapters at the end of the ebook. Usage also drops across the instructional material in a chapter and then increases again in the practice item subchapters near the end of each chapter.

Beryl Hoffman, an Associate Professor of Computer Science at Elms College and a member of the Mobile CSP team, has been creating a new AP CSA ebook based on my AP CSA ebook, but revised to match the changes to the AP CSA course for 2019-2020. See https://tinyurl.com/csawesome. One of the reasons for creating this new ebook is to help Mobile CSP teachers prepare to teach CSA. The Mobile CSP team is piloting this book currently with CSP teachers.

June 17, 2019 at 7:00 am Leave a comment

Blocks and Beyond 2019 and SnapCon19 Call for Papers

#SnapCon19, the first Snap Conference, will be held September 22-25, 2019, in Heidelberg, Germany.  Register by June 24 at this website.


Blocks and Beyond 2019: Beyond Blocks 

VL/HCC workshop in Memphis, TN, USA
Fri Oct 18, 2019
http://cs.wellesley.edu/blocks-and-beyond

Scope and Goals

Blocks programming has become increasingly popular in programming environments targeted at beginner programmers, end users, and casual programmers. Capitalizing on the energy and enthusiasm from the first two Blocks and Beyond workshops, we are pleased to announce the 2019 Blocks and Beyond workshop.

Since blocks are only a small step towards leveraging visual languages and notations for specifying and understanding computation, the emphasis of the 2019 workshop is on the Beyond aspect of Blocks & Beyond: what kinds of visual notations and programming environment scaffolding facilitate: Understanding program semantics? Learning computational concepts? Developing computational identity and fostering computational participation and computational action?

The goal of this workshop is to bring together language designers, educators, researchers, and members of the broader VL/HCC community to answer these questions. We seek participants with diverse expertise, including, but not limited to: design of programming environments, instruction with these environments, human factors, the learning sciences, and learning analytics.

This workshop will engage participants to (1) discuss the state of the art of visual languages targeted at beginners, end users, and casual programmers; (2) assess the usability and effectiveness of these languages and their associated pedagogies; and (3) brainstorm about future directions for these languages.

Suggested Topics for Discussion

  • In what ways have blocks languages succeeded or failed at fulfilling the promise of visual languages to enhance the ability of humans to express computation?
  • How can visual languages and environments better support dynamic semantics and pragmatics, particularly with features for liveness, debugging, and understanding the dynamic execution of programs?
  • How usable and effective are visual environments for teaching computational thinking and programming? For democratizing programming and enabling computational participation and computational action? How do we know?
  • In what ways does visual programming help or hinder those who use them as a stepping stone to traditional text-based languages? What are good ways to support the transition between visual languages and text-based languages? How important is this?
  • How does the two-dimensional nature of visual language workspaces affect the way people create, modify, navigate, and search through their code?
  • What tools are there for creating new visual languages, especially domain-specific ones?
  • What are effective mechanisms for multiple people to collaborate on a visual program when they (1) are co-located or (2) are working together remotely?
  • What are effective pedagogical strategies to use with visual languages, both in traditional classroom settings and in informal and open-ended learning environments?
  • What are the most effective ways to provide help to visual programmers, especially in settings outside the classroom?
  • How can visual environments and associated curricular materials be made more accessible to everyone, especially those with visual and motor impairments and underrepresented populations in computing?
  • What lessons from the visual programming community are worth sharing with other language designers? Are there features of visual languages that should be incorporated into IDEs for traditional programming environments? What features of modern IDEs are lacking in visual languages?
  • How can online communities associated with these environments be leveraged to support users? Are these online communities inclusive and how can they be more inclusive?
  • For these environments, what data can be collected, and how can that data be analyzed to determine answers to questions like those above? How can we use such data to answer larger scale questions about early experiences with programming?

Submission

We invite three kinds of paper submissions to spark discussion at the workshop:

  • A 2 to 3 page position statement describing an idea, research question, or work in progress related to the design, teaching, or study of visual programming environments.
  • A short paper (up to 4 pages, excluding references and/or acknowledgments) describing previously unpublished results involving the design, study, or pedagogy of visual programming.
  • A long paper (up to 8 pages, excluding references and/or acknowledgments), with the same goals and content requirements as the short paper, but with a more substantial contribution.

To maximize discussion time at the workshop, paper presentation times will be very short.

All workshop participants (whether or not they have an accepted paper) are encouraged to present a demo and/or poster of their work during the workshop. Anyone wishing to present a demo/poster should submit a 1 to 2 paragraph abstract. There is also an option to submit a 1 to 2 page demo/poster summary document that will appear in the proceedings.

Submission details for papers and demo/poster abstracts and summary documents can be found at the workshop website:  http://cs.wellesley.edu/blocks-and-beyond

As with the first two Blocks and Beyond workshops, we are applying to publish the proceedings of this workshop with the IEEE.

Important Dates

  • Fri 12 Jul 2019: Paper submissions due (due by end of day, anytime on Earth)
  • Fri 09 Aug 2019: Author notification
  • Fri 16 Aug — Fri 20 Sep 2019: Rolling demo/poster abstract submissions
  • Fri 16 Aug — Fri 25 Oct 2019: Rolling demo/poster summary document submissions
  • Mon 09 Sep 2019: Camera ready paper submissions and copyright forms due
  • Fri 13 Sep 2019: Early registration for VL/HCC and B&B ends
  • Fri 18 Oct 2019: Workshop in Memphis
  • Fri 01 Nov 2019: Camera-ready demo/poster summary documents and copyright forms due

 

June 12, 2019 at 7:00 am Leave a comment

Computer Science Teachers as Provocateurs: All learning starts from a problem

One of the surprising benefits of working with social science educators (history and economics) has been new perspectives on my own teaching. I’ve studied education for several years, and have worked with science and mathematics education researchers in the past. It hadn’t occurred to me that history education is so different that it would give me a new way of looking at my own teaching.

Last week, I was in a research meeting with Bob Bain, a history and education professor here at U. Michigan. He was describing how historians understand knowledge, what historians’ practice looks like, and how that should be reflected in the history classroom.

He said that all learning in history starts from a problem. That gave me pause. What’s a “problem” in history?

Bob explained that he defines problem as John Dewey did, as something that disturbs the equilibrium. “Activities at the Dewey School arose from the child’s own interests and from the need to solve problems that aroused the child’s curiosity and that led to creative solutions.” We don’t think until our environment is disturbed, but that environment may just be in your own head.

We each have our own stories that we use to explain the world, and these make up our own personal equilibria. Maybe students have always been told that the American Civil War was about states’ rights, and then they read the Georgia Declaration of Secession. Maybe they’ve thought of Columbus as the explorer who discovered America, and then note that he wasn’t celebrated until 1792, 300 years after his arrival. Why wasn’t he celebrated earlier, and why him and at that time? A good history teacher sets up these conflicts, disequilibria, or problems. Bob says it can be easy enough to create, simply by showing two contrasting accounts of the same historical event.

Research in the learning sciences supports this definition of learning. Roger Schank talked about the importance of learning through “expectation failure.” You learn when you realize that you don’t know something:

The understanding cycle – expectation failure – explanation – reminding – generalization – is a natural one. No one teaches it to us. We are not taught to have goals, nor to attempt to develop plans to achieve those goals by adapting old plans from similar situations. We need not be taught this because the process is so basic to what comprises intelligence. Learning is a natural act.

In progressive education, we’re told that the teacher should be a “Guide on the Side, not the Sage on the Stage.” When Janet Kolodner was developing Learning By Design, she talked about the role of teacher as coach and orchestrator. Those were roles I was familiar with. Bob was describing a different role.

I challenged him explicitly, “You’re a provocateur. You create the problems in the students’ minds.” He agreed.

Bob got me thinking about the role of the teacher in the computer science class. We can sometimes be a guide, a coach, and orchestrator — when students are working away on some problem or project. But sometimes, we have to be the provocateur.

We should always start from a problem. In science education, this is easy. Kids naturally wonder why the sky is blue, why sunsets are more red, why heat travels along metal but not wood, and why stars twinkle. In more advanced computer science, we can also start from questions that students already have. I’m taking a MOOC right now because it explains things I’ve wondered about.

But in introductory classes, students already use a computer without problems. They may not see enough of real computing to wonder about how it works. The teacher has to generate a problem, inculcate curiosity — be a provocateur.

We should only teach something when it solves a problem for the student. A lecture on variables and types should be motivated by a problem that the variables and types solve. A lecture on loops should happen when students need to do something so often that copy-pasting the code repeatedly won’t work. Saying “You’re going to need this later” is not motivation enough — that doesn’t match the cycle that Schank described as natural. Nobody remembers things they will need in the future. Learning results when you need new knowledge to resolve the current problem, disequilibria, or conflict.
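To make the loop case concrete, here is the kind of before-and-after a teacher might show once students have felt the pain of repetition. This is my own illustrative example (sketched in JavaScript), not from a particular curriculum:

```javascript
// The copy-paste version: tolerable for a 3-line triangle of stars,
// hopeless for 300 lines. This is the disequilibrium.
console.log("*");
console.log("**");
console.log("***");

// The loop is the resolution -- it solves the problem the
// repetition created, rather than being taught "for later."
function starTriangle(n) {
  const lines = [];
  for (let i = 1; i <= n; i++) {
    lines.push("*".repeat(i)); // row i has i stars
  }
  return lines;
}

console.log(starTriangle(3).join("\n"));
```

The point of the ordering matters: the copy-paste version comes first, so the loop arrives as the answer to a problem the student already has, matching Schank’s expectation-failure cycle.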

Note: Computer science doesn’t teach problem-solving. Dewey’s and Schank’s point is that problem-solving is a natural way in which people learn. Learning to program still doesn’t teach problem-solving skills.

June 10, 2019 at 7:00 am 1 comment

The gender imbalance in AI is greater than in CS overall, and that’s a big problem

My colleague, Rada Mihalcea, sent me a copy of a new (April 2019) report from the AI Now Institute on Discriminating Systems: Gender, Race, and Power in AI (see link here) which describes the diversity crisis in AI:

There is a diversity crisis in the AI sector across gender and race. Recent studies found only 18% of authors at leading AI conferences are women, and more than 80% of AI professors are men. This disparity is extreme in the AI industry: women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. For black workers, the picture is even worse. For example, only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and investment to redress this imbalance, the current state of the field is alarming.

Without a doubt, those percentages do not match the distribution of gender and ethnicity in the population at large. But we already know that participation in CS does not match the population. How do the AI distributions match the distribution of gender and ethnicity among CS researchers?

A sample to compare to is the latest graduates with CS PhDs. Take a look at the 2018 Taulbee Survey from the CRA (see link here). 19.3% of CS PhDs went to women. That’s terrible gender diversity when compared to the population, and AI (at 10%, 15%, or 18%) is doing worse. Only 1.4% of new CS PhDs were Black. From an ethnicity perspective, Google, Facebook, and Microsoft are doing surprisingly well.

The AI Now Institute report is concerned about intersectionality. “The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others.” I heard this concern at the recent NCWIT Summit (see link here).  The issues of women are not identical across ethnicities. The other direction of intersectionality is also a concern. My student, Amber Solomon, has published on how interventions for Black students in CS often focus on Black males: Not Just Black and Not Just a Woman: Black Women Belonging in Computing (see link here).

I had not previously seen a report on diversity in just one part of CS, and I’m glad to see it. AI (and particularly the sub-field of machine learning) is growing in importance. We know that having more diversity in the design team makes it more likely that a broader range of issues are considered in the design process. We also know that biased AI technologies are already being developed and deployed (see the Algorithmic Justice League). A new Brookings Institution report identifies many of the biases and suggests ways of avoiding them (see report here). AI is one of the sub-fields of computer science where developing greater diversity is particularly important.

 

June 3, 2019 at 7:00 am 1 comment

Come hang out with Wil and me to talk about new research ideas! ACM ICER 2019 Work in Progress Workshop

Wil Doane and I are co-hosting the ACM ICER 2019 Work in Progress workshop that Colleen Lewis introduced at ICER 2014 in Glasgow (my report on participating). Colleen and I co-hosted last year.

It really is a “hosting” job more than an “organizing” or “presenting” role.  I love Colleen’s informal description of WiP, “You’re borrowing 4 other smart people’s brains for an hour. Then you loan them yours.”  The participants do the presenting. For one hour, your group listens to your idea and helps you think through it, and then you pass the baton. The whole organizing task is “Let’s put these 4 people together, and those 4 people together, and so on. We give them 4 hours, and appropriate coffee/lunch breaks.” (Where the value “4” may be replaced with “5” or “6”.)

Another useful description of WiP is “doctoral consortia for after-graduation.”  Doctoral consortia are these great opportunities to share your research ideas and get feedback on them.  Then there’s this sense that you graduate and…not have those ideas anymore? Or don’t need to share them or get feedback on them?  I’ve expressed concern previously about the challenges of learning when you’re no longer seen as a learner. Of course, PhD graduates are supposed to have new research ideas, which go into proposals and papers. But how do you develop ideas when you’re at the early stages, when they’re not ready for proposals or papers?  That’s what the WiP is about.

The WiP page is here (and quoted in part below). To sign up, you just fill out this form, and later give us a drafty concept paper to share with your group.

The WIP Workshop (formerly named the Critical Research Review) is a dedicated 1-day workshop for ICER attendees to provide and receive friendly, constructive feedback on works-in-progress. To apply for the workshop you will specify a likely topic about which you’ll request feedback. WIP participants will be assigned to thematic groups with 4-6 participants.

Two weeks before ICER, participants will submit to the members of their group a 2-4 page primer document to help prepare for the session and identify the types of feedback sought. At WIP, depending upon group size, each participant will have 45-75 minutes to provide context, elicit advice, support, feedback, and critique. Typically, one of the other group members acts as a notetaker during an individual’s time in order to allow the presenter to engage fully in the discussion.

WIP may be the right experience for you, if you would like to provide and receive constructive advice, support, feedback, or critique on computing education research issues such as:

  • A kernel of a research idea
  • A grant proposal
  • A rejected ICER paper
  • A study design
  • A qualitative analysis approach
  • A quantitative analysis approach
  • A motivation for a research project
  • A theoretical framing
  • A challenge in a research project

The goal of the workshop is to provide a space where we can receive support and provide support. The workshop is intended for active CS education researchers. PhD students are instead encouraged to apply for the Doctoral Consortium, held on the same day as WIP.

May 31, 2019 at 7:00 am Leave a comment
