Posts tagged ‘computational science’

Computer science wins Chemistry Nobel prize

A big win for computational science, and for the argument that computer science is important, even for people who aren’t going to be professional software developers.

When he conceived his prestigious prizes in 1895, Alfred Nobel never imagined the need to honor an unknown field called computer science.

But the next best thing happened on Wednesday: Computing achieved a historic milestone when the Nobel Prize for chemistry went to a trio of researchers — one of them a Stanford University professor — for their groundbreaking work using computers to model the complex chemistry that sustains life.

“Computers in biology have not been sufficiently appreciated. Now they have been,” said ebullient winner Michael Levitt of Stanford’s School of Medicine, the university’s second Nobel winner this week.

via Stanford’s Nobel chemistry prize honors computer science – San Jose Mercury News.

October 18, 2013 at 1:51 am 6 comments

Pursuing universal computing literacy: Mozilla-as-Teacher, Everyone-as-Coder

Here’s another take on the “Computing for Everyone” theme that is near and dear to me. I’ve been exploring this idea in my talks and papers and here in the blog, all starting from our Media Computation work.  This theme starts from a different question than CS: Principles, which asks what everyone should learn about computing.  The Mozilla-as-teacher post suggests why everyone should learn “coding” (here including HTML coding, not just programming): to make the Web better.

It’s a reasonable answer, in the sense that universal literacy makes the world of letters better.  But how does it make it better?  For me, I’m still attracted to the innovation argument: we use code as a medium to say, share, and test ideas that we can’t express in other media.  That communication, sharing, and debugging of ideas leads to more and better ideas, which results in innovation: new ideas, new extensions of those ideas, new implementations of those ideas.  That’s why it’s important to strive toward near-universal computing literacy, at least among knowledge workers, and why it’s important to require computing in college.

There are other arguments, too.  Another powerful reason for universal computing literacy is that it’s about knowing the world we live in. Why do we teach students the periodic table and the difference between meiosis and mitosis?  It’s mostly not because of job skills.  It’s because people live in a world where chemistry and biology matter.  Today, we all live in a world where computing matters.  Knowing about the inherent limitations of digital representations is more important to most people’s daily lives than knowing about meiosis and mitosis.

Now, if you buy all that: How do we get there?

This has been the premise behind much of what we have done with Mozilla Drumbeat: people who make stuff on the internet are better creators and better online citizens if they know at least a little bit about the web’s basic building blocks. Even if they only learn a little HTML, the web gets better.

via Mozilla as teacher « commonspace.

September 27, 2011 at 9:14 am 6 comments

Physics students grok that they need computing

Danny Caballero has started a blog at Boulder, and a recent post describes a survey he took of upper-division undergraduates in Physics.  They definitely grokked that they need computing in their education.

59 students responded to the survey. Most students were juniors and seniors. That’s because we targeted upper-division courses like Classical Mechanics, E&M and Stat. Mech.

Here’s a brief summary with more details located here:

75% of students said that computational modeling is “Essential” or “Important” to their development as a physicist.

Students mentioned these characteristics as important for the tool they might be taught to use (in decreasing order): ease of use, support/resources for learning, efficiency and power, flexibility and adaptability, and acceptance in the field.

Students were neutral about using open-source software, but stated that it was important for the tool to be free or cheap after they graduate.

As far as implementation, students wanted to see computational modeling as a complement to analytical problem-solving tasks. Ideas included solving more complex problems, helping with algebra, and visualizing problems.

via Think like a physicist… | Ideas and thoughts from an underpaid fool.

September 15, 2011 at 9:36 am 3 comments

Software is the modern language of science

This is a great quote, and really speaks to the importance of computing in modern science and engineering.

“We are thinking about ways to encourage the publication of more modern forms of scientific output,” he said. He suggested that, in organizing scientific data for multiple communities, new approaches that merge databases with wikis, along with social networking tools such as Flickr and Twitter, will be very powerful. He noted that there are even new programs that create openly writable information storage and search platforms, such as those discussed in posters at the conference.

“We need to make the world writable,” Seidel told TeraGrid ’11 participants, adding that “software is the modern language of science these days.”

via HPCwire: NSF’s Seidel: ‘Software is the Modern Language of Science’.

August 31, 2011 at 8:57 am 1 comment

Instruction makes student attitudes on computational modeling worse: Caballero thesis part 3

Note: Danny’s whole thesis is now available on-line.

In Danny Caballero’s first chapter, he makes this claim:

Introductory physics courses can shape how students think about science, how they believe science is done and, perhaps most importantly, can influence whether they continue to pursue science or engineering in the future. Students’ attitudes toward learning physics, their beliefs about what it means to learn physics, and their sentiments about the connection between physics and the “real world” can play a strong role in their performance in introductory physics courses. This performance can affect their decision to continue studying science or engineering.

Danny is arguing that physics plays a key role in retaining student interest in science and engineering. Computing plays a similar role. Computing is the new workbench for science and engineering, where the most innovative and ground-breaking work is going to happen.  Danny realized that students’ attitudes about computational modeling are important, in terms of (a) student performance and learning in physics and (from above) all of science and engineering learning, and (b) influencing student decisions to continue in science and engineering. What we worry about are students facing programming and saying, “Real scientists and engineers do this?  I hate this!  Time to switch to a management degree!”.

There are validated instruments for measuring student attitudes towards computer science and towards physics, but not for measuring student attitudes towards computational modeling.  So Danny built one, included as an appendix to his thesis, which he calls “COMPASS,” for “Computational Modeling in Physics Attitudinal Student Survey.”  He validated it with experts and through correlations with similar instruments for physics attitudes.  It contains statements for students to agree or disagree with, like:

  • I find that I can use a computer model that I’ve written to solve a related problem.
  • Computer models have little relation to the real world.
  • It is important for me to understand how to express physics concepts in a computer model.
  • To learn how to solve problems with a computer, I only need to see and to memorize examples that are solved using a computer.

Danny gave this instrument to a bunch of experts in computational modeling, who generally gave similar answers to all the statements, e.g., strongly agreed or strongly disagreed in all the same places. He then measured student answers in terms of the percentage that were “favorable” (agreed with the experts) on computational modeling and the percentage that were “unfavorable” (differed from the experts).  A student’s COMPASS result is thus a pair: %favorable and %unfavorable.  He gave the survey to several cohorts at Georgia Tech and at North Carolina State University, in week 2 (just as the semester started) and in week 15 (as the semester was wrapping up).  The direction of change from week 2 to week 15 was the same for every cohort:

In the plots in Danny’s thesis, the black square indicates the mean.  The answers after instruction shifted toward more unfavorable attitudes: instruction led to students being more negative about computational modeling.  Danny analyzed where the big shifts were in these answers.  In particular, students after instruction had less personal interest in computational modeling, agreed less with the importance of sense-making (the third bullet above), and agreed more with the importance of rote memorization (the last bullet above).
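To make the scoring concrete, here is a minimal sketch of how favorable/unfavorable percentages can be computed against an expert consensus.  This is my illustration, not Danny’s actual scoring procedure; the item names and the 5-point scale are assumptions.

    # A sketch (my assumption, not Danny's scoring code) of computing
    # COMPASS-style favorable/unfavorable percentages.
    # Expert consensus per item: +1 if experts agree with the statement,
    # -1 if they disagree.
    EXPERT_CONSENSUS = {
        "use_model_for_related_problem": +1,
        "models_unrelated_to_real_world": -1,
        "express_physics_in_models": +1,
        "memorizing_examples_is_enough": -1,
    }

    def score_compass(responses):
        """Return (%favorable, %unfavorable) for one student.

        responses maps item -> Likert value 1..5
        (1 = strongly disagree, 3 = neutral, 5 = strongly agree).
        Neutral answers count toward neither percentage.
        """
        favorable = unfavorable = 0
        for item, value in responses.items():
            if value == 3:               # neutral: skip
                continue
            agrees = value > 3           # student agreed with the statement
            experts_agree = EXPERT_CONSENSUS[item] > 0
            if agrees == experts_agree:  # matches the expert view
                favorable += 1
            else:                        # differs from the expert view
                unfavorable += 1
        n = len(responses)
        return 100.0 * favorable / n, 100.0 * unfavorable / n

    student = {
        "use_model_for_related_problem": 4,   # agrees (favorable)
        "models_unrelated_to_real_world": 2,  # disagrees (favorable)
        "express_physics_in_models": 3,       # neutral (neither)
        "memorizing_examples_is_enough": 5,   # agrees (unfavorable)
    }
    print(score_compass(student))  # (50.0, 25.0)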

Danny chopped up these data in lots of ways.  Does student grade influence the results?  Gender?  Year in school?  The only thing that really mattered was major.  Computing majors (thankfully!) did recognize more value for computational modeling after instruction.

These results are disappointing.  Teaching students about computational modeling makes them like it less?  Makes them see less value in it?  Across multiple cohorts?!? But from a research perspective, this is an important result.  We can’t fix a problem that we don’t know is there.  Danny has not only identified a problem.  He’s given us a tool to investigate it.

The value of COMPASS is in having a yardstick.  We can use it to see how we can influence these attitudes.  Danny wrote it so that “physics” could be swapped out for “biology” or “chemistry” easily, to measure attitudes towards computational modeling in those disciplines, too.  I’ll bet that this is a useful starting place for many folks interested in measuring computational thinking, too.

“So, Guzdial, you spent a lot of time on these three blog posts?  Why?!?”

I strongly believe that the future of computing education lies in teaching more than just those who are going to be software developers.  Scaffidi, Shaw, and Myers estimate that there are four professionals who program but who are not software developers for every software developer in the US.  We computing educators need to understand how people are coming to computing as a tool for thinking, not just a material for engineering.  We need to figure out how to teach these students, what tools to provide them, and how to measure their attitudes and learning.

Danny’s thesis is important in pursuing this goal.  Each of these three studies matters for computing education research as well as for physics education research.  Danny has shown how physics students’ learning differs with computational modeling, where their challenges lie when building computational models in Python, and, here, how their attitudes about computational modeling change.  He has done a terrific job describing the issues of a non-CS programming community (physics learners) in learning to use computational modeling.

This is an important area of research, not just for computer science, but for STEM more generally.  Computing is critical to all of STEM.  We need to produce STEM graduates who can use computers to model and who have positive attitudes about computational modeling.

The challenge for computing education researchers is that Danny’s thesis shows us that we don’t know how to do that yet.  Our tools are wrong (e.g., the VPython errors get in the way), and our instructional practices are wrong (e.g., students come out of instruction more negative about computational modeling than they went in).  We have a long way to go before we can teach all STEM students how to use computing in a powerful way for thinking.

We need to figure it out.  Computational modeling is critical for success in STEM today, and we will only figure it out by continuing to try.  We have to use curricula like Matter and Interactions. We have to figure out the pedagogy.  We have to create new learning tools.  The work to be done is not just for physics educators, but for computer scientists, too.

Finally, I wrote up these blog posts because I don’t think we’ll see work like this in any CS Ed conferences in the near term.  Danny just got a job in Physics at U. Colorado-Boulder.  He’s a physicist.  Why should he try to publish in the SIGCSE Symposium or ICER?  How would that help his tenure case? I wonder if his work could get in.  His results don’t tell us anything about CS1 or helping CS majors become better software developers.  Will reviewers recognize that computational modeling for STEM learning is important for CS Ed, too?  I hope so, and I hope we see work like this in CS Ed forums.  In the meantime, it’s important to find this kind of computing education work in non-CSEd communities and connect to it.  

August 2, 2011 at 10:15 am 31 comments

What students get wrong when building computational physics models in Python: Caballero thesis part 2

Danny’s first study found that students studying Matter and Interactions didn’t do better on the FCI (Force Concept Inventory).  That’s not a condemnation of M&I: the FCI is an older, narrow measure of physics learning, and the other things that M&I covers are very important.  In fact, computational modeling is a critical new learning outcome that science students need.

So, the next thing that Danny studied in his thesis was what problems students were facing when they built physics models in VPython.  He studied one particular homework assignment, in which students were given a piece of VPython code that modeled a projectile.

The grading system gave students the program with the variables filled in with randomly-generated values and with the Force Calculation portion left blank.  It also gave them the correct answer for the given program, assuming the force calculation was implemented correctly.  Finally, the students were given a description of another situation.  They had to complete the force calculation (and could use the given correct answer to check it), and then had to change the constants to model the new situation.  They submitted the final program.
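For readers who haven’t seen this kind of assignment, here is a minimal sketch, in the Matter and Interactions style, of a VPython projectile program with a force-calculation section like the one described above.  This is my illustration, not the actual assignment code from Danny’s thesis; the names and constants are assumptions (and the 2011 course used the older “visual” module rather than the modern “vpython” one).

    # A sketch of an M&I-style projectile model -- NOT the actual
    # assignment code; names and constants are illustrative.
    from vpython import sphere, vector, rate, color

    ball = sphere(pos=vector(0, 0, 0), radius=0.1, color=color.red)
    ball.m = 0.5                       # mass (kg); randomized per student
    ball.p = ball.m * vector(4, 6, 0)  # initial momentum (kg m/s)
    g = vector(0, -9.8, 0)             # gravitational field (N/kg)
    dt = 0.01                          # time step (s)
    t = 0

    while ball.pos.y >= 0:
        rate(100)
        # --- Force Calculation: the portion left blank for students ---
        Fnet = ball.m * g              # net force; must stay INSIDE the loop
        # ---------------------------------------------------------------
        ball.p = ball.p + Fnet * dt                   # momentum update
        ball.pos = ball.pos + (ball.p / ball.m) * dt  # position update
        t = t + dt

    print("Final position:", ball.pos, "at t =", t)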

Danny studied about 1400 of these submitted programs.  Only about 60% of them were correct.

He and his advisor coded the errors.  All of them.  And they had 91% inter-rater reliability, which is amazing!  Danny then used cluster analysis to group the errors.

Here’s what he found (summarized from a figure in his slides, not his thesis):

23.8% of the students couldn’t get the test case to work, and 19.8% got the mapping to the new test condition wrong.  That last one is a common CS error: something that had to be inside the loop was moved before the loop, so it worked once but never got updated.

Notice that a lot of the students got an “Error in Force Calculation.”  Some of these were sign errors, which are as much physics errors as computation errors.  But a lot of the students tried to raise a value to a vector power, which VPython caught as a type error, and the students couldn’t understand the error message.  Some of these students plugged in something that got past the error but wasn’t physically correct.  That’s a pretty common student strategy (one that Matt Jadud has documented in detail): focus on getting rid of the error message without making sure the program still makes sense.
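To make that error class concrete, here is a tiny sketch; the exact expression students wrote is my assumption.  Exponentiation needs scalars, so a vector anywhere in a power triggers a TypeError that talks about types, not physics:

    # A sketch of the error class described above; the exact student
    # expression is an assumption. The physics needs the MAGNITUDE of a
    # vector, a scalar, in the power.
    from vpython import vector, mag

    r = vector(2, 3, 0)
    ok = 5.0 / mag(r)**2   # correct: mag(r) is a scalar
    bad = 5.0 ** r         # TypeError: can't raise a value to a vector power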

Danny suggests that these were physics mistakes.  I disagree: I think these are computation errors, or at best, computational modeling errors.  Many students don’t understand how to map from a situation to a set of constants in a program.  (Given how much difficulty we know students have understanding variables in programs, I wouldn’t be surprised if they don’t really understand what the constants mean or what they do in the program.)  And they don’t understand Python’s error messages, which are about types, not about physics.

Danny’s results help us in figuring out how to teach computational modeling better.

  • These results can inform our development of new computational modeling environments for students. Python is a language designed for developers, not for physics students creating computational models.  Python’s error messages weren’t designed to be understood by that audience: they explain errors in terms of computational ideas, not in terms of modeling ideas.
  • These results can also inform how we teach computational modeling.  I asked, and the lectures never included live coding, which has been identified as a best practice for teaching computer science.  This means that students never saw someone map from a problem to a program, nor saw anyone interpret error messages.

If we want to see computation used in teaching across STEM, we have to know about these kinds of problems, and figure out how to address them.

August 1, 2011 at 9:07 am 15 comments

Wolfram on the importance of computing for understanding the world

Spending too much time in airports lately, I’ve been catching up on some of my TED video watching: the talks that everyone says I have to watch, but that I hadn’t had time for until now. One that I watched recently was Stephen Wolfram’s talk on A New Kind of Science and Wolfram-Alpha. I realized that he’s really making a computing education argument. He is explicitly saying that computing is necessary for understanding the natural world, and that all scientists need to learn about computation in order to make the next round of discoveries about how our universe works.


June 6, 2011 at 10:39 am 3 comments
