Posts tagged ‘computing for all’

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and educators. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not just about getting teachers access to journal articles, because those articles aren’t written to speak to teachers nor to address teachers’ concerns. There have to be efforts from both directions: helping teachers to grok researchers, and helping researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach recursion before iteration. Other respondents to that post pointed out newer forms of iteration that are much simpler. Anderson and Wiedenbeck’s work was done in the 1980s. That sounds great — I would hope that we can do better than what we did 30 years ago. But I do not know of studies showing that the new ways work better or differently than the ways of the 1980s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that making a prediction before seeing a program execution demonstration helps with learning:

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
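To make the comparison concrete, here’s a minimal sketch of how the post-test scores from the two groups might be analyzed, assuming SciPy is available. The scores below are invented; a real study would need many more students and a proper power analysis.

```python
# A hedged sketch of the analysis, NOT real data: the scores are invented.
from scipy import stats

# Hypothetical post-test scores (percent correct) for each condition.
control = [62, 70, 55, 81, 68, 74, 59, 66]  # saw the code, then the execution
predict = [71, 78, 64, 85, 73, 80, 69, 75]  # wrote a prediction before the execution

t, p = stats.ttest_ind(predict, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p would support the prediction effect
```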

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One presented an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., that the difference wasn’t due to which students UTeach served. A second post responded to Hadi directly, showing that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or that only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value across students in different demographic groups? Comparing to the national average (and to disaggregated averages) is a reasonable way to do it (see the sketch after this list).
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula, and I know of no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to take a bunch of CSP students and randomly assign them to either UTeach or Code.org, making sure that all relevant variables (like the percentage of women and underrepresented students) are the same in each group. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.
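On the first point: here is a hedged sketch of what comparing a curriculum’s pass rate to the national average might look like. The pass rates are the reported ones, but the sample sizes are invented placeholders, and the real Ns matter a lot.

```python
# A two-proportion z-test comparing a curriculum's pass rate to the national
# average. The rates are as reported; the Ns are hypothetical placeholders.
from math import sqrt

n_uteach, p_uteach = 1500, 0.83    # hypothetical number of UTeach examinees
n_nation, p_nation = 45000, 0.74   # hypothetical number of examinees nationally

p_pool = (p_uteach * n_uteach + p_nation * n_nation) / (n_uteach + n_nation)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_uteach + 1 / n_nation))
z = (p_uteach - p_nation) / se
print(f"z = {z:.1f}")  # |z| > 1.96 suggests the gap is unlikely to be chance
```

Even a big z tells you only that the gap is unlikely to be chance, not what caused it; selection effects alone could produce it, which is exactly the interpretive problem in this exchange.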

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems, and there should be better communication between them. Researchers have to make explicit the things that teachers might be confused about, but researchers may not realize what teachers find confusing. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because assumptions differ between fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not to use evidence, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

June 15, 2018

Reflections of a CS Professor and an End-User Programmer

In my last blog post, I talked about the Parsons problem generator that I used to put scrambled code problems on my quiz, study guide, and final exam. I’ve been reflecting on the experience and what it suggests to me about end-user programming.

I’m a computing professor, and while I enjoy programming, I mostly code to build exercises and examples for my students. I almost never code research prototypes anymore. I only occasionally write scripts that help me with something, like cleaning data, analyzing data, or in this case, generating problems for my students. In this case, I’m a casual end-user programmer — a non-professional programmer writing code to help with some aspect of my job. This is in contrast:

  • To Philip Guo’s work on conversational programmers, who are people who learn programming in order to talk to programmers (see his post describing his papers on conversational programmers). I know how to talk to programmers, and I have been a professional programmer. Now, I have a different job, and sometimes programming is worthwhile in that job.
  • To computational scientists and engineers, which is the audience for Software Carpentry. Computational scientists and engineers might write code occasionally to solve a problem, but more importantly, they write code as part of their research. I might write a script to handle an odd job, but most of my research is not conducted with code.

Why did I spend the time writing a script to generate the problems in LaTeX? I was teaching a large class, over 200 students. Mistakes on quizzes and exams at that scale are expensive in terms of emails, complaints, and regrading. Scrambled code problems are tricky. It’s easy to randomly scramble code. It’s harder to keep track of the right ordering. I needed to be able to do this many times.

Was it worthwhile? I think it was. I had a couple Parsons problems on the quiz, maybe five on the study guide, and maybe three on the final exam. (Different numbers at different stages of development.) Each one got generated at least twice as I refined, improved, or fixed the problem. (One discovery: Don’t include comments. They can legally go anywhere, so it only makes grading harder.) The original code only took me about an hour to get working. The script got refined many times as I used it, but the initial investment was well worth it for making sure that the problem was right (e.g., I didn’t miss any lines, and indentation was preserved for Python code) and the solution was correct.
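My actual generator was written in LiveCode and emitted LaTeX for the exam class, but the core idea is small enough to sketch. Here’s a minimal Python version (my reconstruction of the idea, not the original script) that drops comments, scrambles lines while preserving indentation, and keeps the answer key:

```python
# A minimal sketch of the core of a Parsons problem generator.
import random

def make_parsons(code, seed=None):
    """Scramble the lines of a program, tracking the correct ordering."""
    lines = [ln for ln in code.rstrip().split("\n")
             if ln.strip() and not ln.strip().startswith("#")]  # drop comments
    order = list(range(len(lines)))
    random.Random(seed).shuffle(order)
    scrambled = [lines[i] for i in order]           # indentation is preserved
    # answer[j] = where original line j landed in the scrambled listing
    answer = sorted(range(len(order)), key=order.__getitem__)
    return scrambled, answer

program = """def total(nums):
    result = 0
    for n in nums:
        result = result + n
    return result"""

scrambled, answer = make_parsons(program, seed=42)
for i, ln in enumerate(scrambled, 1):
    print(f"{i}. {ln}")
print("Answer key:", [pos + 1 for pos in answer])
```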

Would it be worthwhile for anyone else, facing the same problems, to write this script? That’s a much harder question.

I realized that I brought a lot of knowledge to bear on this problem.

  • I have been a professional programmer.
  • I do not use LiveCode often, but I have used HyperTalk a lot, and the environment is forgiving with lots of help for casual programmers like me. LiveCode doesn’t offer much for data abstraction — basically, everything is a string.  I have experience using the tool’s facility with items, words, lines, and fields to structure data.
  • I know LaTeX and have used the exam class before. I know Python and the fact that I needed to preserve indentation.

Then I realized that it takes almost as much knowledge to use this generator. The few people who might want to use the Parsons problem generator that I posted would have to know about Parsons problems, want to use them, be using LaTeX for exams, and know how to use the output of the generator.

But I bet that all (or the majority?) of end-user programming experiences are like this. End-users are professionals in some domain. They know a lot of stuff. They’ll bring a lot of knowledge to their programming activity. The programs will require a lot of knowledge to write, to understand, and to use.

One of the potential implications is that this program (and maybe most end-user programs?) is probably not useful to many others. Much of what we teach in CS1 for CS majors, or maybe even in Software Carpentry, is not useful to the occasional, casual end-user programmer. Most of what we teach is for larger-scale programming. Do we need to teach end-user programmers about software engineering practices that make code more readable by others? Do we need to teach end-user programmers about tools for working in teams on software if they are not going to be working in teams to develop their small bits of code? Those are honest questions.

Shriram Krishnamurthi would remind me that end-user programmers, even more than any other class of programmers, are more likely to make errors and less likely to be able to debug them, so teaching end-user programmers practices and tools to catch and fix errors is particularly important for them. That’s a strong argument. But I also know that, as an end-user programmer myself, I’m not willing to spend a lot of time that doesn’t directly contribute towards my end goal. Balancing the real needs of end-user programmers with their occasional, casual use of programming is an interesting challenge.

The bigger question that I’m wondering about is whether someone else, facing a similar problem, could learn to code with a small enough time investment to make it worthwhile. I did a lot of programming in HyperTalk when I was a graduate student. I have that investment to build on. How much of an investment would someone else have to make to be able to write this kind of script as easily?

Why LiveCode? Why not Python? Or Smalltalk? I was originally going to write this in Python. Why not? I was teaching Python, and the problems would all be in Python. It’d be good exercise for me.

I realized that I didn’t want to deal with files or a command line. I wanted a graphical user interface. I wanted to paste some code in (not put it in a file), and get some text that I could copy (not find it in one or more files). I didn’t want to have to remember what function(s) to call. I wanted a big button. I simply don’t have the time to deal with the cognitive load of file names and function names. Copy-paste the sorted code, press the button, then copy-paste the scrambled code and copy-paste the solution. I could do that. Maybe I could build a GUI in Python, but every time I have used a GUI tool in Python, it was way more work than LiveCode.
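For what it’s worth, here is roughly what that big-button workflow might look like in plain tkinter. This is a hedged sketch that reuses the hypothetical make_parsons function sketched above; it works, but even this much is fussier than dragging widgets into place in LiveCode.

```python
# A hedged tkinter sketch of the paste-code, press-a-button workflow.
# Assumes make_parsons() from the earlier sketch is defined.
import tkinter as tk

def scramble():
    scrambled, answer = make_parsons(source.get("1.0", tk.END))
    output.delete("1.0", tk.END)
    output.insert("1.0", "\n".join(scrambled)
                  + f"\n\nAnswer key: {[p + 1 for p in answer]}")

root = tk.Tk()
root.title("Parsons generator")
source = tk.Text(root, height=12)   # paste the correctly ordered code here
source.pack()
tk.Button(root, text="Scramble", command=scramble).pack()
output = tk.Text(root, height=12)   # copy the scrambled problem from here
output.pack()
root.mainloop()
```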

I also know Smalltalk better than most. Here’s a bit of an embarrassing confession: I’ve never really learned to build GUIs in Smalltalk. I’ve built a couple of toy examples in Morphic for class. But a real user interface with text areas that really work? That’s still hard for me. I didn’t want to deal with learning something new. LiveCode is just so easy — select the tool, drag the UI object into place.

LiveCode was the obvious answer for me, but that’s because of who I am and the background that I already have. What could we teach future professionals/end-user programmers that (a) they would find worthwhile learning (not too hard, not too time-consuming) and (b) they could use casually when they needed it, like my Parsons problem generator? That is an interesting computing education research question.

How does a student determine “worthwhile” when deciding what programming to learn for future end-user programming?  Let’s say that we decided to teach all STEM graduate students some programming so that they could use it in their future professional practice as end-user programmers.  What would you teach them?  How would they judge something “worthwhile” to learn for later?

We know some answers to this question.  We know that students judge the authenticity of the language based on what they see themselves doing in the future and what the current practice is in that field (see Betsy DiSalvo’s findings on Glitch and our results on Media Computation).

But what if that’s not a good programming language? What if there’s a better one? What if the common practice in a field is ill-informed? I’m going to bet that most people, faced with the general problem I was facing (wanting a GUI for a text-processing task), would use JavaScript. LiveCode is way better than JavaScript for an occasional, casual GUI task — easier to learn, more stable, a more coherent implementation, and better programming support for casual users. Yet I predict most people would choose JavaScript, because of the Principle of Social Proof.

I’ve been reading Robert Cialdini’s books on social psychology and influence, and he explains that social proof is how people make decisions when they’re uncertain (like how to choose a programming language when they don’t know much about programming) and there are others to copy.

First, we seem to assume that if a lot of people are doing the same thing, they must know something we don’t. Especially when we are uncertain, we are willing to place an enormous amount of trust in the collective knowledge of the crowd. Second, quite frequently the crowd is mistaken because they are not acting on the basis of any superior information but are reacting, themselves, to the principle of social proof.

Cialdini, Robert B. Influence (Collins Business Essentials), Kindle locations 2570-2573. HarperCollins, Kindle edition.

How many people know both JavaScript and LiveCode well? And computer scientists don’t count: you can’t convince someone by telling them that computer scientists say “X is better than Y.” People follow social proof from those whom they judge to be similar to themselves. It’s got to be someone in their field, someone who works like them.

It would be hard to teach the graduate students something other than what’s in common practice in their fields, even if the common practice is less efficient to learn and harder to use than an alternative.

June 11, 2018

Integrating CS into other fields, so that other fields don’t feel threatened: Interview with Jane Prey

I really enjoyed the interview in the last SIGCSE Bulletin with Jane Prey.  Her reason for doing more to integrate CS into other disciplines, at the undergraduate level, is fascinating — one I hadn’t heard before.

Other fields are nervous because they think we’re taking so many students from them, and universities are nervous because they’re afraid of losing us to industry. I would hate to lose any other faculty position to add a CS professor. I really believe it’s important for computing professionals to be well-rounded, to be able to appreciate what they learned in history, biology, and anthropology classes. We need to do a better job of integrating more of a student’s educational experiences. For example, how do we do more work together with the education schools? We just aren’t there. We have to work cross-disciplines to develop a path forward, even though it’s really hard.

June 1, 2018

Some principals are getting interested in CS, but think pressure for CS is mostly coming from Tech companies

How do high school principals in small, medium and large districts view the Computer Science for All movement?

High school leaders in smaller districts are the most enthusiastic about the trend, a new survey by the Education Week Research Center found. Overall, 30% of all principals say CS is not “on their radar,” and 32% say CS is an “occasional supplement or enrichment opportunity.” I found the two graphs in the article interesting. The majority of principals aren’t particularly excited by CS, and most principals think that it’s the tech firms that are pushing CS onto schools, not parents.

Source: Principals Warm Up to Computer Science, Despite Obstacles

May 28, 2018

Computer science education is far bigger than maker education: A post in lieu of a talk #InfyXRoads

I was scheduled to speak this Thursday in the final plenary panel of the Infosys Foundations USA CrossRoads 2018 conference (see program here). My father passed away on May 10, and we just had the funeral Friday May 18, so I apologized and cancelled the trip. I had already thought about what I wanted to say, so here’s a blog post in lieu of a panel presentation.

The session is “Why Teach CS? Why Teach Making?” with Yasmin Kafai, Quincy Brown, and Colleen Lewis. The session was inspired in part by my blog post listing the reasons for teaching programming, and was framed in our preliminary discussions as a debate. Is there a difference between CS education and Maker education? Yasmin was tasked with making the argument that they are pretty much the same. I disagree with that position. Colleen was moderating, and Quincy was still keeping her cards close to her chest — I don’t know what position she’s going to take Thursday.

If our goal is to teach the basics of programming, then sure, maker education (where we teach students to make physical devices with embedded computation, such as e-textiles, robotics, or Lego Mindstorms devices) and the kind of computing education that I see reflected in the K-12 CS Framework are pretty much the same. There’s some CS education in there. Students learn the basics of sequential execution, conditionals, and looping. But that’s not the same as computer science education.

If our goal is to change students’ attitudes towards technology, then sure, maker education may be even more effective than computing education for getting students to see the technology in their world. By making their own technology, students may increase their self-efficacy and come to feel that they can and should have control over the technology in their lives. But again, that’s not the same as teaching students computer science.

The big ideas of computer science are much bigger than maker education. Here are three examples.

The questions that Alan Turing was trying to answer when he invented the Turing Machine were “What is computable? What are the limits of mathematics? What is not computable? Is even human intelligence computable?” These are as meta as you can get. This is the heart of computer science, as the science of abstraction. These aren’t ideas students currently explore in maker education. Maybe they could, but the ideas certainly don’t require a maker context.

One of the most powerful ideas associated with Turing Machines is that any computer can simulate any other computer, including being many other computers with many processes. That’s the big idea that Alan Perlis was talking about in 1961 when he talked about computer science as the study of process. That’s one of the big ideas behind object-oriented programming as Alan Kay defined it.  We don’t explore simulation in maker education, and it’s hard to imagine how we might.
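To make the simulation idea concrete, here is a toy example of my own (nothing from Perlis or Kay directly): a few lines of Python that will run any one-tape Turing machine you describe to it. One computer simulating another.

```python
# A toy illustration of one computer simulating another: a Python program
# that runs any one-tape Turing machine, given its transition table.
def run_tm(rules, tape, state="start", pos=0, max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape; missing cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(pos, "_"))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit and halts at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flipper, "10110"))  # prints 01001_
```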

 

Ada Lovelace was the world’s first computer programmer. More than that, she was the first to realize that computers could be programmed to manipulate anything, not just numbers. Quoting from her Wikipedia page:

Ada saw something that Babbage in some sense failed to see. In Babbage’s world his engines were bound by number…What Lovelace saw—what Ada Byron saw—was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper.

Maker education isn’t about general computation. It’s about computing associated with sensors and actuators. Computer science education is about computing everything, from numbers to letters to musical notes. Having to connect the computation to a device made by the student limits the space of what you might compute. Computer science is about representation and abstractions on representations. Everything can be defined in terms of bits. That’s a big idea.  You can probably teach that concept in maker education, but it can be taught (and more easily) without tying it to maker education.
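Here’s a tiny illustration of that idea (mine, and deliberately toy-sized): the very same eight bits, read as a number, a letter, or a musical note.

```python
# The same pattern of bits, interpreted three ways.
bits = 0b01000111           # eight bits
print(bits)                 # as a number: 71
print(chr(bits))            # as a character: 'G'
print(f"MIDI note {bits}")  # in MIDI, 71 is the B above middle C
```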

Most of us know Grace Hopper’s name today, but probably more for her iconic status and as the namesake for the Grace Hopper Conference than for what she actually did. Admiral Grace Hopper led the effort to create compiled programming languages, including (eventually) COBOL. There are so many big ideas in here, but let’s just take two.

  • Automatic programming means that you have a program specified in one language (like COBOL or Java or Scratch), and you use that as input to a program that generates a program written in another language (historically machine language, though JavaScript is probably a more common target today). A compiler is a program that inputs a program and generates another program (see the sketch after this list). That is a powerful, meta idea that students do not typically see in maker education. Could we teach about compilers in maker education? Maybe, but “making” is certainly not the easiest and most obvious way to talk about compilers — it’s another way computing education is bigger than maker education.
  • COBOL was about making programming accessible by using words and concepts familiar to the end users. (It was also about designing a compiled language that would work on any underlying computer, which connects back to Turing’s machine.) Designing for others who are not you and have different expertise than you is one of the most fundamental ideas of human-computer interface design today. Do we get to that in maker education? That big idea occurs more often in non-maker contexts, e.g., making apps for others and using user-centered design to get there.
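Here is a toy sketch of that meta idea (my illustration, leaning on Python’s own parser rather than writing one): a dozen lines that take a program in one notation (arithmetic expressions) and emit a program in another (instructions for an imagined stack machine).

```python
# A toy compiler: a program whose input is a program and whose output is
# another program. Here, arithmetic expressions become stack-machine code.
import ast

def compile_expr(source):
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
    def emit(node):
        if isinstance(node, ast.BinOp):
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        if isinstance(node, ast.Constant):
            return [f"PUSH {node.value}"]
        raise ValueError("unsupported syntax")
    return emit(ast.parse(source, mode="eval").body)

print(compile_expr("(3 + 4) * 2"))
# ['PUSH 3', 'PUSH 4', 'ADD', 'PUSH 2', 'MUL']
```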

Bottom line: CS education is so much bigger than maker education. You can explore a lot of computer science using student-made devices as a context. Ben Shapiro has shown that he can have kids playing with powerful modern-day computing ideas, from networking to machine learning, all using student-made devices. That’s serious CS education. But it’s not all of CS education, and you can do CS education apart from student-made devices. Maker education and CS education are not one-to-one.

There is an equity component here. We often talk about Ada Lovelace and Grace Hopper when we talk about the women who were part of the creation of computer science. We do them a disservice if we only remember them as early members of a category “women in computing.” It’s important to recognize what they actually did, what they contributed to computer science — and we should teach that. What Lovelace and Hopper did mattered, and we demonstrate that it mattered by teaching it and explaining why it’s important.  Ideas like data representation and compilers are not today taught in maker education, are not easily taught in maker education, and can certainly be taught without maker education.

The big ideas that Turing, Lovelace, and Hopper created and explored are not new. This shouldn’t be the realm of advanced CS any more.  An important goal of computer science education should be to teach these foundational ideas of computer science.  I don’t think we know how to get there yet, but that should be our goal. We should be teaching the computer science developed by the people we hold up as heroes, leaders, and role models.

We can teach a lot with maker education, but let’s make sure that we don’t miss out on what CS education is about. Maker education is a great idea. It’s a terrific context for learning some of CS. If we only focus on the intersection of maker and CS education, we might miss the other, far bigger ideas that are in computer science.

May 21, 2018

Scale or Fail: Making national CS education work in Switzerland

Alex Repenning has the CACM Viewpoints Education column this month, where he sets out a bold challenge: scale CS education nationally, or fail at making CS education work for all.

K–12 computer science Education (CSed) is an international challenge with different countries engaging in diverse strategies to reach systemic impact by broadening participation among students, teachers and the general population. For instance, the CS4All initiative in the U.S. and the Computing at School movement in the U.K. have scaled up CSed remarkably. While large successes with these kinds of initiatives have resulted in significant impact, it remains unclear how early impact becomes truly systemic. The main challenge preventing K–12 CSed to advance from teachers who are technology enthusiasts to pragmatists is perhaps best characterized by Crossing the Chasm, a notion anchored in the diffusion of innovation literature. This chasm appears to exist for CSed. It suggests it is difficult to move beyond early adopters of a new idea, such as K–12 CSed, to the early majority. Switzerland, a highly affluent, but in terms of K–12 CSed somewhat conservative country, is radically shifting its strategy to cross this chasm by introducing mandatory pre-service teacher computer science education starting at the elementary school level.

Three fundamental CSed stages are characterized by permutations of self-selected/all and students/teachers combinations. It took approximately 20 years to transition through these stages. Each stage is described here from a more general CSed perspective as well as my personal perspective.

Source: Scale or Fail

May 14, 2018

Feeling disadvantaged in CS courses at University of XXX

Even at Berkeley, the home of the great course emphasizing CS teaching for everyone, The Beauty and Joy of Computing, there are students who don’t feel that they belong in CS.  See the post quoted and linked below.

Of course, the story below is not about Berkeley.  This is about the slow pace of change, and how difficult it is to get whole CS departments to buy into the vision of “CS for All.”

CS 61A was a completely different story.

Last fall, I had the opportunity to work as a lab assistant for Data 8: “Foundations of Data Science,” and I couldn’t help but notice the difference in atmosphere between the students in Data 8 and my own experience in CS 61A.

Data 8 is one of the alternative courses offered for UC Berkeley students who are new programmers. Data 8 and CS 10: “The Beauty and Joy of Computing” are offered to students who want to test the waters of programming before jumping into 61A.

Data 8 uses Python, just like 61A. But the concepts are taught more slowly so new programmers can really understand how to use these concepts properly in their code.

Source: Column | Feeling disadvantaged in CS courses at UC Berkeley

May 11, 2018
