Posts tagged ‘teachers’

The Ground Truth of Computing Education: What Do You Know?

Earlier this month, I was a speaker at a terrific event at Cornell Tech, To Code & Beyond: Thinking & Doing, organized by Diane Levitt (see Tweet here). I spoke, and then was on a panel with Kelly Powers, Thea Charles, Aman Yadav, and Diane to discuss what Computational Thinking is.

One of the highlights of the day for me was listening to Margaret Honey, a legendary educational technology designer and researcher (see bio here). She is President and CEO of the New York Hall of Science. One of my favorite parts of her talk was a description of the apps that they’re building to get kids to notice and measure things in their world. I even love the URL for their tools — https://noticing.nysci.org/

At the event, Diane mentioned that she was working on a blog post about her “ground truth” — what she most believed about CS education. She shared it as a tweet right after the event. It’s lovely and deep — find it here.

A couple of my favorite points from her post:

Students thrive when we teach at the intersection of rigor and joy. In computer science, it’s fun to play with the real thing. But sometimes we water it down until it’s too easy—and kids know it. Struggle itself will not turn kids away from computer science. They want relevant learning experiences that lead to building things that matter to them. “I can do hard things!” is one of the most powerful thoughts a student can have.

The biggest lever we have is the one we aren’t using enough yet: preservice education for new teachers. The sooner we start teaching computer science education alongside the teaching of math and reading, during teachers’ professional preparation programs, the sooner we get to scale. It’s expensive and time-consuming to continually retool our workforce. Eventually, if every teacher enters the classroom prepared to include computer science, every student will be prepared for the digital world in which they live. This is what we mean by equity: equal access for every student, regardless of geography, gender, income, ability, or, frankly, interest.

Sara Judd answered Diane’s post with one of her own — find it here. I really enjoy it because she sees computer science like I do. It’s not just about problem-solving, but also about making things and connecting to the world.

Programming makes things.

While programming for its own sake can be fun for some people (me, for instance), generally when people are programming it is because there is a thing that needs to be made. These things can be expressive pieces of visual art or music. These things can be silly fun for fun’s sake. These things can revolutionize the world, they can make our lives easier. The important thing is, they are “things.” CS doesn’t exist in a vacuum. Therefore, classroom CS should not exist in a vacuum.

I encourage more of us to do this — to write down what we believe about CS education, then share the essays. It’s great to hear others’ goals and perspectives, both to learn new ones and to recognize when others think about CS education the way we do. I particularly enjoy reading these from people with different life experiences. I have a privileged life as a university CS professor. Teachers in K-12 struggle with very different things. I’m so pleased when I find that we still have similar goals for and perspectives about CS education.

January 28, 2019 at 7:00 am 1 comment

Do we know how to teach secure programming to K-12 students and end-user programmers?

I wrote my CACM Blog post this month on the terrific discussion that Shriram started in my recent post inspired by Annette Vee’s book (see original post here), “The ethical responsibilities of the student or end-user programmer.” I asked several others, besides the participants in the comment thread, about what responsibility they thought students and end-user programmers bore for their code.

There’s one more issue to consider, which is more computing education-specific than the general question in the CACM Blog post. If we decided that K-12 students and end-user programmers need to know how to write secure programs, could we teach them? Do we know how? We could tell students, “You’re responsible,” but that alone doesn’t do any good.

Simply teaching about security is unlikely to do much good. I wrote a blog post back in 2013 about the failings of financial literacy education (see post here) which is still useful to me when thinking about computing education. We can teach people not to make mistakes, or we can try to make it impossible to make mistakes. The latter tends to be more effective and cheaper than the former.

What would it take to get students to use best practices for writing secure programs and to test their programs for security vulnerabilities? In other words, how could you change the practice of K-12 student programmers and end-user programmers? This is a much harder problem than setting a learning objective like “Students should be able to sum all the elements in an array.” Security is a meta-learning objective. It’s about changing practice in all aspects of other learning objectives.

What would it take to get CS teachers to teach in ways that improve security practices? Consider, for example, an idea generally accepted to be good practice: we could teach students to write and use unit tests. Will they when not required to? Will they write good unit tests and understand why they’re good? In most introductory courses for CS majors, students don’t write unit tests. That’s not because it’s a bad idea. It’s because we can’t convince all the CS teachers that it’s a good idea, so they don’t require it. How much harder will it be to teach K-12 CS teachers (or even science or mathematics teachers who might be integrating CS) to use unit tests — or to teach secure programming practices?
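To make the unit test idea concrete, here is a minimal sketch in Python. The function under test, sum_array, is my own hypothetical example (echoing the array-summing learning objective above); the point is the kind of tests, including edge cases, that we would hope students write without being required to.

```python
import unittest

def sum_array(numbers):
    """Return the sum of all the elements in a list of numbers."""
    total = 0
    for n in numbers:
        total += n
    return total

class TestSumArray(unittest.TestCase):
    def test_typical_list(self):
        self.assertEqual(sum_array([1, 2, 3]), 6)

    def test_empty_list(self):
        # Edge case: an empty input should sum to zero, not crash.
        self.assertEqual(sum_array([]), 0)

    def test_negative_numbers(self):
        self.assertEqual(sum_array([-1, 1, -2]), -2)

if __name__ == "__main__":
    unittest.main()
```

A “good” unit test here is less about the happy path and more about the empty list and the negative numbers, and that is exactly the kind of judgment that is hard to convey if teachers themselves are not convinced testing is worth the class time.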

I have often wondered: Why don’t introductory students use debuggers, or use visualization tools effectively (see Juha Sorva’s excellent dissertation for a description of how students use visualizers)? My hypothesis is that debuggers and visualizers presume that the user has an adequate mental model of the notional machine. The debugging options Step In or Step Over only make sense if you have some understanding of what a function or method call does. If you don’t, then those options are completely foreign to you. You don’t use something that you don’t understand, at least, not when your goal is to develop your understanding.
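For instance, consider this small hypothetical Python program. When a debugger is paused at the call to average, the choice between Step Over and Step In is only meaningful if you already have a model of what a function call does.

```python
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

def report(scores):
    # A debugger paused on the next line offers two choices:
    #   Step Over: execute the entire call to average() and stop afterwards
    #   Step In:   stop on the first line *inside* average()
    avg = average(scores)
    print("Average score:", avg)

report([90, 82, 75])
```

If a student doesn’t yet understand that control jumps into average and comes back with a value, neither menu option describes anything they can picture, so the debugger goes unused.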

Secure programming is similar. You can only write secure programs when you can envision alternative worlds where users type the wrong input, or are explicitly trying to break your program, or worse, are trying to do harm to your users (what security people sometimes call adversarial thinking). Most K-12 and end-user programmers are just trying to get their programs to work in a perfect world. They simply don’t have a model of the world where any of those other things can happen. Writing secure programs is a meta-objective, and I don’t think we know how to achieve it for programmers other than professional software developers.
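Here is a minimal sketch of that difference, using a hypothetical Python example. The first version assumes a perfect world where the user always types a valid number; the second envisions the worlds where they don’t.

```python
# Perfect-world version: works only if the user types a valid integer.
def ask_age_naive():
    return int(input("How old are you? "))  # crashes on "abc" or ""

# Adversarial-thinking version: assumes the input may be missing, wrong, or malicious.
def ask_age_defensive():
    while True:
        text = input("How old are you? ")
        try:
            age = int(text)
        except ValueError:
            print("Please type a whole number.")
            continue
        if 0 <= age <= 150:
            return age
        print("That doesn't look like a real age.")
```

The extra code in the second version isn’t harder to write; what’s hard is imagining, before anything goes wrong, that it needs to be written at all.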

January 14, 2019 at 7:00 am 16 comments

Analyzing CS in Texas school districts: Maybe enough to take root and grow

My Blog@CACM for this month is about Code.org’s decision to gradually shift the burden of paying for CS professional development to the local regions — see link here.  It’s an important positive step that needs to happen to make CS sustainable with the other STEM disciplines in K-12 schools.

We’re at an interesting stage in CS education. 40-70% of high schools have CS, but the classes are pretty empty.  I use Indiana and Texas as examples because they’ve made a lot of their data available.  Let’s drill a bit into the Texas data to get a flavor of it, available here.  I’m only going to look at Area 1’s data, because even just that is deep and fascinating.

Brownsville Independent School District. 13,941 students. 102 in CS.

[Screenshot: Computer Science Regional Data, STEM Center, The University of Texas at Austin]

Of the 10 high schools in Brownsville ISD, only two have anyone in their CS classes.  Brownsville Early College High School has 102 students in CS Programming (no AP CS A, no AP CSP).  That probably means that one teacher has several sections of that course — that’s quite a bit.  The other high school, Porter Early College High School, has fewer than five students in AP CS A.  My bet is that there is no CS teacher there, just those few students doing an on-line class.  That means that for 10 high schools and 13K students, there is really only one high school CS teacher.

Edinburg Consolidated Independent School District, over 10K students, 92 students in CS.

[Screenshot: Computer Science Regional Data, STEM Center, The University of Texas at Austin]

This is a district that could grow CS if there were the will.  There are 6 high schools, but two are special cases: one with fewer than 5 students, and the other in a juvenile detention center.  The other four high schools are huge, with over 2,000 students each.  At Economedes, there are only 9 students in AP CS A — maybe just on-line?  Edinburg North and Robert R. Vela High School each have two classes, AP CS A and CS1.  One has enrollments of 21 and 14; I’m guessing those are two sections.  The other has 43 and 6, which might be two sections of AP CS A and another of CS1, or two sections of AP CS A and 6 students in an on-line class.  In any case, this suggests two high school CS teachers (maybe three) in half of the high schools in the district.  Those teachers aren’t teaching only CS, but with increased demand and support from principals, the CS offerings could grow.

It’s fascinating to wander through the Texas data, to see what’s there and what’s not.  I could be wrong about what’s there, e.g., maybe there’s only one teacher in Edinburg and she’s moving from school to school.  But given these data, it’s unlikely that there’s a CS teacher in every high school who just isn’t teaching any CS. These data are a great snapshot. There is CS in Texas high schools, and maybe there’s enough there to take root and grow.


October 19, 2018 at 7:00 am 2 comments

CRA Memo on Best Practices for Engaging Teaching Faculty in Research Computing Departments

I’m excited to see this memo from the Computing Research Association on the status of teaching faculty in computing departments. Computing departments are increasingly relying on teaching faculty, and it’s important to give them fair and equitable treatment.

I wrote in 2016 that “CS Teaching Faculty are like Tenant Farmers.” This memo addresses some of the issues I raised, though some are buried in the text.  I argued that teaching faculty should be involved in hiring for both traditional and teaching faculty, and that teaching faculty should serve in upper-level leadership positions.  The memo does state, about halfway down: “Similarly, teaching faculty should be broadly included in faculty governance on matters related to their roles in the department, including participation in faculty meetings, voting rights on matters impacting the education mission, inclusion in evaluation of the teaching performance of other faculty, and input on hiring decisions.”  This memo is a step in the right direction.

To achieve their educational mission, computing departments at research universities increasingly depend on full-time teaching faculty who choose teaching as a long-term career. This memo discusses the need for teaching faculty, explores the impact of teaching faculty, and recommends best practices.

Essential best practices for departments include:

  • Departments should provide teaching faculty with equitable rights and resources, except in limited areas where differing job responsibilities make that inappropriate.

  • Departments should encourage teaching faculty to be equal and active partners on projects and committees with the goal of contributing to the department’s educational mission.

  • Departments should set course, preparation, student, and service loads of teaching faculty at a level that allows for innovation and quality instruction.

    ….

Source: Laying a Foundation: Best Practices for Engaging Teaching Faculty in Research Computing Departments

August 17, 2018 at 7:00 am 6 comments

The Story of MACOS: How getting curriculum development wrong cost the nation, and how we should do it better

Man: A Course of Study (MACOS) is one of the most ambitious US curriculum efforts I’ve ever heard about. The goal was to teach anthropology to 10-year-olds. The effort was led by the world-renowned educational psychologist Jerome Bruner, and included many developers, anthropologists, and educational psychologists (including Howard Gardner). It won awards from the American Educational Research Association and from other education professional organizations for its innovation and connection to research. At its height, MACOS was in thousands of schools, including whole school districts.

Today, MACOS isn’t taught anywhere. Funding for MACOS was debated in Congress in 1975, and the controversy led eventually to the de-funding of science education nationally.

Peter Dow’s 1991 book Schoolhouse Politics: Lessons from the Sputnik Era is terrific and should be required reading for everyone involved in computing education in K-12. Dow was the project manager for MACOS, and he’s candid in describing what they got wrong. It’s worthwhile understanding what happened so that we might avoid it in computing education. I just finished reading it, and here are some of the parts that I found particularly insightful.

First, Dow doesn’t dismiss the critics of MACOS. Rather, he recognizes that the tension is between learning objectives. What do we want for our children? What kind of society do we want to build?

I quickly learned that decisions about educational reform are driven far more by political considerations, such as the prevailing public mood, than they are by a systematic effort to improve instruction. Just as Soviet science supremacy had spawned a decade of curriculum reform led by some of our most creative research scientists during the late 1950s and 1960s, so now a new wave of political conservatism and religious fundamentalism in the early 1970s began to call into question the intrusion of university academics into the schools…Exposure to this debate caused me to recast the account to give more attention to educational politics. No discussion of school reform, it seems, can be separated from our vision of the society that the schools serve.

MACOS was based in the best of educational psychology at the time. Students engaged in inquiry with first-hand accounts, e.g., videos of Eskimos. The big mistake the developers made was that they gave almost no thought to how it was going to be disseminated. Dow points out that MACOS was academic researchers intruding into K-12 without really understanding K-12. They didn’t plan for teacher professional development, and worse, didn’t build any mechanism for teachers to tell them how the materials should be changed to work in real classrooms. They were openly dismissive of the publishers who might get the materials into the world.

On teachers: There was ambivalence about teachers at ESI. On the one hand the Social Studies Program viewed its work as a panacea for teachers, a liberation from the drudgery of textbook materials and didactic lessons. On the other, professional educators were seen as dull-witted people who conversed in an incomprehensible “middle language” and were responsible for the uninspired state of American education.

On publishers: These two experienced and widely respected publishing executives listened politely while Bruner described our lofty education aspirations with characteristic eloquence, but the discussion soon turned to practical matters such as the procedures of state adoption committees, “tumbling test” requirements, per-pupil expenditures, readability formulas, and other restrictions that govern the basal textbook market. Spaulding and Kaplan tried valiantly to instruct us about the realities of the educational publishing world, but we dismissed their remarks as the musings of men who had been corrupted by commercialism. Did they not understand that our mission was to change education, not submit to the strictures that had made much of instruction so meaningless? Could not men so powerful in the publishing world commit some of their resources to support curriculum innovation? Had they no appreciation of the intellectual poverty of most social studies classrooms? I remember leaving that room depressed by the monumental conservatism of our visitors and more determined than ever to prove that there were ways to reach the schools with good materials. Our arrogance and naivete were not so easily cured.

By 1971, Dow realizes that the controversies around MACOS could easily have been avoided. They had made choices in their materials that highlighted the challenges of Eskimo life graphically, but the gory details weren’t really necessary to the learning objectives. They simply hadn’t thought enough about their users, which included the teachers, administrators, parents, and state education departments.

My favorite scene in the book is with Margaret Mead who tries to help Dow defend MACOS in Congress, but she’s frustrated by their arrogance and naivete.

Mead’s exasperation grew. “What do you tell the children that for?…I have been teaching anthropology for forty years,” she remarked, “and I have never had a controversy like this over what I have written.”

But Mead’s anger quickly returned. “No, no, you can’t tell the senators that! Don’t preach to them! You and I may believe that sort of thing, but that’s not what you say to these men. The trouble with you Cambridge intellectuals is that you have no political sense!”

Dow describes over two chapters the controversies around MACOS and the aftermath impacts on science education funding at NSF. But he also points out the problems with MACOS as a curriculum. Some of these are likely problems we’re facing in CS for All efforts.

For example, he talks about why MACOS was removed from Oregon schools, using the work of Lynda Falkenstein. (Read the below with an awareness of the Google-Gallup and EdWeek polls showing that administrators and principals are not supportive of CS in schools.)

She concluded that innovations that lacked the commitment of administrators able to provide long-term support and continuing teacher training beyond the initial implementation phase were bound to falter regardless of their quality. Even more than controversy, she found, the greatest barrier to successful innovation was the lack of continuity of support from the internal structure of the school system itself.

I highly recommend Schoolhouse Politics. It has me thinking about what it really takes to get any education reform to work and to scale. The book is light on evaluation evidence that MACOS worked. For example, I’m concerned that MACOS was so demanding that it may have been too much for underprepared students or teachers. I am totally convinced that it was innovative and brilliant. One of the best curriculum design efforts I’ve ever read about, in terms of building on theory and innovative design. I am also totally convinced that it wasn’t ready to scale — and the cost of that mistake was enormous. We need to avoid making those mistakes again.

June 18, 2018 at 7:00 am 6 comments

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and teachers. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not about getting teachers access to journal articles, because those articles aren’t written to speak to or address teachers’ concerns. There have to be efforts from both directions, to help teachers to grok researchers and researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach before iteration. Other respondents to that post pointed out newer forms of iteration which are much simpler. Anderson and Wiedenbeck’s work was in the 1980s. That sounds great — I would hope that we can do better than what we did 30 years ago. I do not know of studies that show that the new ways work better or differently than the ways of the 1980s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.
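To make the contrast concrete, here is a minimal sketch (my own example, in Python) of the same task written three ways: index-based iteration of the kind studied in the 1980s work, the newer FOR EACH form, and recursion.

```python
def sum_indexed(numbers):
    # Index-based iteration: the student manages the loop variable and bounds.
    total = 0
    for i in range(len(numbers)):
        total += numbers[i]
    return total

def sum_for_each(numbers):
    # The newer FOR EACH form: no index variable to manage at all.
    total = 0
    for n in numbers:
        total += n
    return total

def sum_recursive(numbers):
    # Recursive form: a base case plus a smaller subproblem.
    if not numbers:
        return 0
    return numbers[0] + sum_recursive(numbers[1:])

print(sum_indexed([1, 2, 3]), sum_for_each([1, 2, 3]), sum_recursive([1, 2, 3]))
```

Whether the FOR EACH form changes the relative difficulty of iteration and recursion for novices is exactly the kind of question I would love to see tested.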

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that predictions before seeing program execution demonstrations helps with learning.

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
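As an illustration (my own hypothetical example, not taken from any of the studies mentioned here), a prediction exercise for the experimental group might look like this: students see the code, write down what they expect it to print, and only then watch it run.

```python
# Shown to students *before* the program is run.
# Prompt: "Write down exactly what you think this program prints, then we'll run it."

values = [3, 1, 4]
total = 0
for v in values:
    total = total + v
    print(total)          # Prints the running total on each pass through the loop.
print("total =", total)   # Many students predict only this final line appears.
```

The control group would see the same program and its execution, but skip the written prediction step.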

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One conducted an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., it wasn’t due to which students were served by UTeach. A second post responded to Hadi directly, arguing that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value, across students in different demographic groups? Comparing to the national average (and disaggregated averages) is a reasonable way to do it.
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula. I know of no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to take a bunch of CSP students and randomly assign them to either UTeach or Code.org, trying to make sure that all relevant variables (like the percentage of women and underrepresented students) are the same in each. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems. There should be better communication. Researchers have to make explicit the things that teachers might be confused about, but they might not realize what the teachers are confused about. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because their assumptions are different in different fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not to use evidence, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

June 15, 2018 at 1:00 am 5 comments

Workshops for New Computing Faculty in Summer 2018: Both Research and Teaching Tracks

This is our fourth year, and our last NSF-funded year, for the New Computing Faculty Workshops which will be held August 5-10, 2018 in San Diego. The goal of the workshops is to help new computing faculty to be better and more efficient teachers. By learning a little about teaching, we will help new faculty (a) make their teaching more efficient and effective and (b) make their teaching more enjoyable. We want students to learn more and teachers to have fun teaching them. The workshops were described in Communications of the ACM in the May 2017 issue (see article here) which I talked about in this blog post. The workshop will be run by Beth Simon (UCSD), Cynthia Bailey Lee (Stanford), Leo Porter (UCSD), and Mark Guzdial (Georgia Tech).

This year, for the first time, we will offer two separate workshop tracks:

  • August 5-7 will be offered to tenure-track faculty starting at research-intensive institutions.
  • August 8-10 will be offered to faculty starting a teaching-track job at any school, or a tenure-track faculty line at a primarily undergraduate serving institution where evaluation is heavily based in teaching.

This year we added new organizers, Ben Shapiro (Boulder) for the research-intensive track, and Helen Hu (Westminster) and Colleen Lewis (Harvey Mudd) for the teaching-intensive track.

The new teaching-oriented faculty track is being added this year due to enthusiasm and feedback we heard from past participants and would-be participants. When I announced the workshops last year (see post here), we heard complaints (a little on email, and a lot on Twitter) asking why we were only including research-oriented faculty and institutions. We did have teaching-track faculty come to our last three years of new faculty workshops that were research-faculty focused, and unfortunately those participants were not satisfied. They didn’t get what they wanted or needed as new faculty. Yes, the sessions on peer instruction and how to build a syllabus were useful for everyone. But the teaching-track faculty also wanted to know how to set up their teaching portfolio, how to do research with undergraduate students, and how to get good student evaluations, and didn’t really care about how to minimize time spent preparing for teaching and how to build up a research program with graduate students while still enjoying teaching undergraduate students.

So, this year we made a special extension request to NSF, and we are very pleased to announce that the request was granted and we are able to offer two different workshops. The content will have substantial overlap, but with a different focus and framing in each.

To apply for registration, please apply to the appropriate workshop based on the type of your position: research-focused position http://bit.ly/ncsfw2018-research or teaching-focused position http://bit.ly/ncsfw2018-teaching. Admission will be based on capacity, grant limitations, fit to the workshop goals, and application order, with a maximum of 40 participants. Apply on or before June 21 to ensure eligibility for workshop hotel accommodation. (We will notify respondents by June 30.)


Many thanks to Cynthia Lee, who helped a lot with this post.

June 12, 2018 at 6:00 am 1 comment
