Archive for June, 2018

We might want naive and delusional PhD students

We’re in the midst of cleaning out 25 years of accumulated stuff in our house in order to sell this house, buy a new house in Ann Arbor, and move to the University of Michigan by September 1.

As I was cleaning, I found the below — my original statement of purpose that I submitted to the University of Michigan in 1988 to start my doctorate.

I shared it with some friends, ruefully.  It felt silly, as well as grammatically flawed. I really did think that I was going to get a faculty position in “Computer Science and Education” when I graduated in the early 1990’s.  I was naive, maybe even delusional. I had no idea what academic CS was like when I applied. The reality is far different from what I imagined.  At the Home4CS event just this past April, I mentioned that it would be great if we had CS Education faculty slots in Schools of Education today.  As Diane Levitt reported on Twitter, the audience roared with laughter.  How crazy was I to think that we’d have some in the 1990’s?

But now, some positions like that do exist.  There are faculty who have been hired at US higher-education institutions to focus on CS Ed.  My new job at the University of Michigan is a joint position between CS and their Engineering Education Research program.  It took 25 years, but yeah, I’m going to have the kind of job for which I earned my PhD.

Some friends encouraged me to share this statement. Maybe it’s a good thing to have naive new PhD students.  Maybe that’s what we want in PhD students. We want PhD students to think long term, i.e., to have bought into a goal, a set of research questions, or a vision — and to be willing to work at it for decades.  Eventually, if the student is really lucky and others are working on similar visions at the same time, the vision doesn’t seem quite so naive, not quite so delusional.

I’ll be taking some time off from the blog while making the move to Michigan. I may post some guest contributions over the next few weeks, but for now, I’m putting the blog on hiatus.

June 29, 2018 at 7:00 am 5 comments

Visiting NTNU in Trondheim Norway June 3-23

Barbara and I are just back from a three week trip to NTNU in Trondheim, Norway. Katie Cunningham came with us (here’s a blog post about some of her work). Three weeks is enough time to come up with a dozen ideas for blog posts, but I don’t have the cycles for that. So let me just give you the high-level view, with pictures and links to learn more.

We went at the beginning of June because Barb and I (and the University of Michigan) are part of the IPIT network (International Partnerships for Excellent Education and Research in Information Technology), which had its kick-off meeting June 3-5. The partnership is about software engineering and computing education research, with a focus on student and faculty exchange and meetings at each other’s institutions: NTNU, U. Michigan, Tsinghua University, and Nanjing University. I learned a lot about software engineering that I didn’t know before, especially about DevOps.

If you ever get the chance to go to a meeting organized by Letizia Jaccheri of NTNU, GO! She was the organizer for IPIT, co-chair of IDC 2018, and our overall host for our three weeks there. She has a wonderful sense for blending productivity with fun. During the IDC 2018 poster session, she brought in high school students dressed as storybook characters, just to wander around and “bring in a bit of whimsy.” For a bigger example, she wanted IPIT to connect with the NTNU campus at Ålesund, which just happens to be near the Geiranger fjord, one of the most beautiful in Norway. So, she flew the whole meeting to Ålesund from Trondheim! We took a large, cruise-ship-like boat with meeting rooms down the fjord. We got in some 5-6 hours of meetings, while also seeing amazing waterfalls and other views, and then visited the Ålesund campus the next day before flying home. We got work done and WOW!

For the next week and a half, we got to know the computing education research folks at NTNU. We were joined at the end of the first week by Elisa Rubegni from the University of Lincoln, and Roberto Martinez-Maldonado came by a couple days later. Barb, Elisa, and I held a workshop on the first Monday after IPIT. A couple days later, we had a half-day meeting with Michalis Giannakos’s group and Roberto, then Elisa led us all in a half-day design exercise (pictured below — Elisa, Sofia, Javi, and Katie). In between, we had individual meetings. I think I met with every one of the PhD students there working in computing education research. (And, in our non-meeting time, Barb and I were writing NSF proposals!)

Michalis’s group is doing some fascinating work. Let me tell you about some of the projects that most intrigued me.

  • Sofia (with Kshitij and Ilias) is lead on a project where they track what kids using Scratch are looking at, both on and off screen. It’s part of this cool project where kids program these beautiful artist-created robots with Scratch. It’s a pretty crazy looking experimental setup, with fiducial markers on notebooks and robots and screens.
  • Kshitij is trying to measure EEG and gaze in order to determine cognitive load in a user interface. Almost all cognitive load measures are based on self-report (including ours). They’re trying to measure cognitive load physiologically, and correlate it with self-report.
  • Katerina and Kshitij are using eye-tracking to measure how undergrads use tools like Eclipse. What I found most interesting was what they did not observe. I noticed that their data included nothing on using the debugger. They explained that of 40 students, only five even looked at the debugger. Nobody used data or control flow visualizations at all. I’m fascinated by this — what does it take to get students to actually look at the debuggers and visualizers that were designed to help them learn?
  • Roberto is doing this amazing work with learning analytics in physical spaces, where nurses are working on robot patients. Totally serious — they can gather all kinds of data about where people are standing, how they interact, and when they interact. For tasks like nursing, this kind of data is super important for understanding what students are learning.

Then came FabLearn with an amazing keynote by Leah Buechley on art, craft, and computation. I have a long list of things to look up after her talk, including Desmos, computer-controlled cutting machines (which I had never heard of before) that are way cheaper than 3-D printers but still allow you to do computational craft, and http://blog.recursiveprocess.com/ which is all about learning coding and mathematics. She made an argument that I find fascinating — that art is what helps diverse students reflect their identity and culture in their school, and that’s why students who get art classes (controlling for SES) are more likely to succeed in school and go on to post-secondary schooling. Can computing make it easier to bring art back into school? Can computing then play a role in engaging children with school again?

The next reason we were at NTNU was to attend the EXCITED Centre advisory board meeting. Barb and I were there for the launch of EXCITED in January 2017. It’s a very ambitious project, spanning from students making informed decisions to go into CS/IT, to helping students develop identities in CS, learning through construction, increasing diversity in CS, and moving into careers. We got to hang out with Arnold Pears, Mats Daniels, and Aletta Nylén of UpCERG (Uppsala Computing Education Research Group), the world’s largest CER group.

Finally, for the last four days, we attended the Interaction, Design and Children Conference, IDC 2018. I wrote my Blog@CACM post for this month about my experiences there. I saw a lot there that’s relevant to people who read this blog. My favorite paper there tested the theory of concreteness fading on elementary school students learning computing concepts. Here’s a picture of a slide (not in the paper) that summarizes the groups in the experiment.

I’ll end with my favorite moment in IDC 2018, one not in the Blog@CACM post. We met Letizia’s post-doc, Javier “Javi” Gomez at the end of our first week in Trondheim. Summer weather in Trondheim is pretty darn close to winter in Atlanta. One day, we woke up to 44F and rain. But we lucked out — the weekends were beautiful. On our first Saturday, Letizia invited us all to a festival near her home, and we met Javi and Elisa. That evening (but still bright sunlight), Javi, Elisa, Barb, and I took a wonderful kayaking trip down the Nidelva river. So it was a special treat to be at IDC 2018 to see Javi get TWO awards for his contributions, one for his demo and an honorable mention for his note. The note was co-authored by Letizia, and was her first paper award (as she talks about in the lovely linked blog post). It was wonderful to be able to celebrate the success of our new friends.

On the way back, Barb and I stopped in London to spend a couple days with Alan Kay and his wife, Bonnie MacBird. If I could come up with a dozen blog post ideas from 3 weeks, it’s probably like two dozen per day with Alan and Bonnie, and we had two days with them. Visiting a science museum with an exhibit on early computers (including an Alto!) is absolutely amazing when you’re with Alan. But those blog posts will have to wait until after my blog hiatus.

June 28, 2018 at 7:00 am 2 comments

We can build new programming languages that people will teach, learn, and use: Scratch 3.0 in August

When I come out with blog posts saying that we need new programming languages (like this one), I regularly get a bunch of skepticism.  People will only use industry-approved languages, says one argument.  We need to teach the languages that exist, says another.

Then I just reply, “Scratch.”  It’s real programming, it’s popular, and it’s taught around the world.  We ought to study how Scratch succeeded.  One key insight: Don’t beat your head against the traditional CS1 teachers.  There are a lot more people to teach, and not everyone has to become a software developer.

A new version of Scratch is coming this August!

Source: 3 Things To Know About Scratch 3.0 – The Scratch Team Blog – Medium

June 25, 2018 at 7:00 am 20 comments

It Matters a Lot Who Teaches Introductory Courses if We Want Students to Continue

Thanks to Gary Stager, who sent this link to me. The results mesh with Pat Alexander’s Model of Domain Learning. A true novice to a field is not going to pursue studies because of interest in the field — a novice doesn’t know the field. The novice is going to pursue studies because of social pressures, e.g., it’s a requirement for a degree or a job, it’s expected by family or community, or the teacher is motivating.  As the novice becomes an intermediate, interest in the domain can drive further study.  These studies suggest that persistence is more likely when the introductory teacher is a committed, full-time instructor.

The first professor whom students encounter in a discipline, evidence suggests, plays a big role in whether they continue in it.

On many campuses, teaching introductory courses typically falls to less-experienced instructors. Sometimes the task is assigned to instructors whose very connection to the college is tenuous. A growing body of evidence suggests that this tension could have negative consequences for students.

Two papers presented at the American Educational Research Association’s annual meeting in New York on Sunday support this idea.

The first finds that community-college students who take a remedial or introductory course with an adjunct instructor are less likely to take the next course in the sequence.

The second finds negative associations between the proportion of a four-year college’s faculty members who are part-time or off the tenure track and outcomes for STEM majors.

Source: It Matters a Lot Who Teaches Introductory Courses. Here’s Why.

June 22, 2018 at 7:00 am 8 comments

The Story of MACOS: How getting curriculum development wrong cost the nation, and how we should do it better

Man: A Course of Study (MACOS) is one of the most ambitious US curriculum efforts I’ve ever heard about. The goal was to teach anthropology to 10-year-olds. The effort was led by world-renowned educational psychologist Jerome Bruner, and included many developers, anthropologists, and educational psychologists (including Howard Gardner). It won awards from the American Educational Research Association and from other education professional organizations for its innovation and connection to research. At its height, MACOS was in thousands of schools, including whole school districts.

Today, MACOS isn’t taught anywhere. Funding for MACOS was debated in Congress in 1975, and the controversy led eventually to the de-funding of science education nationally.

Peter Dow’s 1991 book Schoolhouse Politics: Lessons from the Sputnik Era is a terrific book that should be required reading for everyone involved in computing education in K-12. Dow was the project manager for MACOS, and he’s candid in describing what they got wrong. It’s worth understanding what happened so that we might avoid the same mistakes in computing education. I just finished reading it, and here are some of the parts that I found particularly insightful.

First, Dow doesn’t dismiss the critics of MACOS. Rather, he recognizes that the tension is between learning objectives. What do we want for our children? What kind of society do we want to build?

I quickly learned that decisions about educational reform are driven far more by political considerations, such as the prevailing public mood, than they are by a systematic effort to improve instruction. Just as Soviet science supremacy had spawned a decade of curriculum reform led by some of our most creative research scientists during the late 1950s and 1960s, so now a new wave of political conservatism and religious fundamentalism in the early 1970s began to call into question the intrusion of university academics into the schools…Exposure to this debate caused me to recast the account to give more attention to educational politics. No discussion of school reform, it seems, can be separated from our vision of the society that the schools serve.

MACOS was based in the best of educational psychology at the time. Students engaged in inquiry with first-hand accounts, e.g., videos of Eskimos. The big mistake the developers made was that they gave almost no thought to how the curriculum would be disseminated. Dow points out that MACOS was academic researchers intruding into K-12, without really understanding K-12. They didn’t plan for teacher professional development, and worse, didn’t build any mechanism for teachers to tell them how the materials should be changed to work in real classrooms. They were openly dismissive of the publishers who might get the materials into the world.

On teachers: There was ambivalence about teachers at ESI. On the one hand the Social Studies Program viewed its work as a panacea for teachers, a liberation from the drudgery of textbook materials and didactic lessons. On the other, professional educators were seen as dull-witted people who conversed in an incomprehensible “middle language” and were responsible for the uninspired state of American education.

On publishers: These two experienced and widely respected publishing executives listened politely while Bruner described our lofty education aspirations with characteristic eloquence, but the discussion soon turned to practical matters such as the procedures of state adoption committees, “tumbling test” requirements, per-pupil expenditures, readability formulas, and other restrictions that govern the basal textbook market. Spaulding and Kaplan tried valiantly to instruct us about the realities of the educational publishing world, but we dismissed their remarks as the musings of men who had been corrupted by commercialism. Did they not understand that our mission was to change education, not submit to the strictures that had made much of instruction so meaningless? Could not men so powerful in the publishing world commit some of their resources to support curriculum innovation? Had they no appreciation of the intellectual poverty of most social studies classrooms? I remember leaving that room depressed by the monumental conservatism of our visitors and more determined than ever to prove that there were ways to reach the schools with good materials. Our arrogance and naivete were not so easily cured.

By 1971, Dow realizes that the controversies around MACOS could easily have been avoided. They had made choices in their materials that highlighted the challenges of Eskimo life graphically, but the gory details weren’t really necessary to the learning objectives. They simply hadn’t thought enough about their users, which included the teachers, administrators, parents, and state education departments.

My favorite scene in the book is with Margaret Mead, who tries to help Dow defend MACOS in Congress but is frustrated by their arrogance and naivete.

Mead’s exasperation grew. “What do you tell the children that for?…I have been teaching anthropology for forty years,” she remarked, “and I have never had a controversy like this over what I have written.”

But Mead’s anger quickly returned. “No, no, you can’t tell the senators that! Don’t preach to them! You and I may believe that sort of thing, but that’s not what you say to these men. The trouble with you Cambridge intellectuals is that you have no political sense!”

Dow describes over two chapters the controversies around MACOS and their aftermath: the impact on science education funding at NSF. But he also points out the problems with MACOS as a curriculum. Some of these are likely problems we’re facing in CS for All efforts.

For example, he talks about why MACOS was removed from Oregon schools, using the work of Lynda Falkenstein. (Read the below with an awareness of the Google-Gallup and EdWeek polls showing that administrators and principals are not supportive of CS in schools.)

She concluded that innovations that lacked the commitment of administrators able to provide long-term support and continuing teacher training beyond the initial implementation phase were bound to falter regardless of their quality. Even more than controversy, she found, the greatest barrier to successful innovation was the lack of continuity of support from the internal structure of the school system itself.

I highly recommend Schoolhouse Politics. It has me thinking about what it really takes to get any education reform to work and to scale. The book is light on evaluation evidence that MACOS worked. For example, I’m concerned that MACOS was so demanding that it may have been too much for underprepared students or teachers. I am totally convinced that it was innovative and brilliant, one of the best curriculum design efforts I’ve ever read about in terms of building on theory and innovative design. I am also totally convinced that it wasn’t ready to scale — and the cost of that mistake was enormous. We need to avoid making those mistakes again.

June 18, 2018 at 7:00 am 6 comments

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and educators. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not about getting teachers access to journal articles, because those articles aren’t written to speak to or address teachers’ concerns. There have to be efforts from both directions: helping teachers to grok researchers, and helping researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach before iteration. Other respondents to that post pointed out newer forms of iteration that are much simpler. Anderson and Wiedenbeck’s work was in the 1980’s. That sounds great — I would hope that we can do better than what we did 30 years ago. I do not know of studies showing that the new ways work better or differently than the ways of the 1980’s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.
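
To make the contrast concrete, here is a minimal sketch (my own illustration, not an example drawn from any of the studies) of the older index-based iteration, the modern FOR EACH form, and recursion, all computing the same sum in Python:

    # Three ways to sum a list of numbers.

    def sum_indexed(numbers):
        # Index-based iteration, close to the loop forms studied in
        # the 1980's: the student manages an index variable and a
        # termination test, both easy to get wrong.
        total = 0
        i = 0
        while i < len(numbers):
            total = total + numbers[i]
            i = i + 1
        return total

    def sum_for_each(numbers):
        # The modern FOR EACH form: no index variable and no
        # explicit termination test.
        total = 0
        for n in numbers:
            total = total + n
        return total

    def sum_recursive(numbers):
        # Recursion: a base case plus a self-call on a smaller problem.
        if numbers == []:
            return 0
        return numbers[0] + sum_recursive(numbers[1:])

    print(sum_indexed([1, 2, 3]))    # 6
    print(sum_for_each([1, 2, 3]))   # 6
    print(sum_recursive([1, 2, 3]))  # 6

The open empirical question is whether the simpler middle form changes the iteration-before-recursion ordering that the older studies supported.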

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that making predictions before seeing program execution demonstrations helps with learning (a sketch of the kind of program I have in mind follows the list):

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
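
As a concrete illustration, here is the kind of short program one might use as a prediction prompt (a hypothetical example of mine, not one from Mazur’s materials). Students commit to a written guess about the output before the teacher runs the code:

    # A hypothetical prediction prompt: before running this,
    # students write down what they think will be printed.
    a = [1, 2, 3]
    b = a           # b names the same list as a -- it is not a copy
    b.append(4)
    print(a)        # common prediction: [1, 2, 3]; actual output: [1, 2, 3, 4]

The idea is that committing to a prediction makes a surprising result more memorable than the same result passively observed.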

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One conducted an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., it wasn’t due to which students were served by UTeach. A second post tried to respond to Hadi directly to show that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value, across students in different demographic groups? Comparing to the national average (and disaggregated averages) is a reasonable way to do it.
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula. I know of no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to have a bunch of CSP students and randomly assign them to either UTeach or Code.org, trying to make sure that all relevant variables (like the percentages of women and underrepresented groups) are the same in each. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems, and there should be better communication between them. Researchers have to make explicit the things that teachers might be confused about, but researchers may not realize what teachers find confusing. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because their assumptions differ across fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not to use evidence, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

June 15, 2018 at 1:00 am 5 comments

Workshops for New Computing Faculty in Summer 2018: Both Research and Teaching Tracks

This is our fourth year, and our last NSF-funded year, for the New Computing Faculty Workshops, which will be held August 5-10, 2018 in San Diego. The goal of the workshops is to help new computing faculty become better and more efficient teachers. By learning a little about teaching, new faculty can (a) make their teaching more efficient and effective and (b) make their teaching more enjoyable. We want students to learn more and teachers to have fun teaching them. The workshops were described in Communications of the ACM in the May 2017 issue (see article here), which I talked about in this blog post. The workshop will be run by Beth Simon (UCSD), Cynthia Bailey Lee (Stanford), Leo Porter (UCSD), and Mark Guzdial (Georgia Tech).

This year, for the first time, we will offer two separate workshop tracks:

  • August 5-7 will be offered to tenure-track faculty starting at research-intensive institutions.
  • August 8-10 will be offered to faculty starting a teaching-track job at any school, or a tenure-track faculty line at a primarily undergraduate-serving institution where evaluation is heavily based on teaching.

This year we added new organizers, Ben Shapiro (Boulder) for the research-intensive track, and Helen Hu (Westminster) and Colleen Lewis (Harvey Mudd) for the teaching-intensive track.

The new teaching-oriented faculty track is being added this year due to enthusiasm and feedback we heard from past participants and would-be participants. When I announced the workshops last year (see post here), we heard complaints (a little on email, and a lot on Twitter) asking why we were only including research-oriented faculty and institutions. We did have teaching-track faculty come to our last three years of new faculty workshops, which were research-faculty focused, and unfortunately those participants were not satisfied. They didn’t get what they wanted or needed as new faculty. Yes, the sessions on peer instruction and how to build a syllabus were useful for everyone. But the teaching-track faculty also wanted to know how to set up their teaching portfolios, how to do research with undergraduate students, and how to get good student evaluations. They didn’t really care about minimizing time spent preparing for teaching, or about building up a research program with graduate students while still enjoying teaching undergraduates.

So, this year we made a special extension request to NSF, and we are very pleased to announce that the request was granted and we are able to offer two different workshops. The content will have substantial overlap, but with a different focus and framing in each.

To apply for registration, please apply to the appropriate workshop based on the type of your position: research-focused position (http://bit.ly/ncsfw2018-research) or teaching-focused position (http://bit.ly/ncsfw2018-teaching). Admission will be based on capacity, grant limitations, fit to the workshop goals, and application order, with a maximum of 40 participants. Apply on or before June 21 to ensure eligibility for workshop hotel accommodation. (We will notify respondents by June 30.)


Many thanks to Cynthia Lee, who helped a lot with this post.

June 12, 2018 at 6:00 am 1 comment
