Posts tagged ‘contextualized computing education’

Live coding as a path to music education — and maybe computing, too

We have talked here before about the use of computing to teach physics and the use of Logo to teach a wide range of topics. Live coding raises another fascinating possibility: Using coding to teach music.

There’s a wonderful video by Chris Ford introducing a range of music theory ideas through the use of Clojure and Sam Aaron’s Overtone library. (The video is not embeddable, so you’ll have to click the link to see it.) I highly recommend it. It uses Clojure notation to move from sine waves, through creating different instruments, through scales, to canon forms. I’ve used Lisp and Scheme, but I don’t know Clojure, and I still learned a lot from this.
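
To give a flavor of that sine-waves-to-scales progression outside of Clojure (and in Python, the language I usually reach for), here is a minimal sketch that builds a pure sine-wave tone and an equal-tempered major scale and writes them to a WAV file. The frequencies, durations, and file name are just illustrative choices of mine, not anything taken from Chris Ford’s video:

    import math
    import struct
    import wave

    RATE = 44100  # samples per second

    def sine_tone(freq_hz, seconds=0.4, amplitude=0.5):
        """Samples of a pure sine wave at the given frequency."""
        n = int(RATE * seconds)
        return [amplitude * math.sin(2 * math.pi * freq_hz * t / RATE) for t in range(n)]

    def major_scale(root_hz):
        """Equal-tempered major scale as frequencies: the whole/half-step pattern in semitones."""
        return [root_hz * 2 ** (semis / 12) for semis in (0, 2, 4, 5, 7, 9, 11, 12)]

    # Concatenate the eight notes of a C major scale and save them as 16-bit mono audio.
    samples = []
    for f in major_scale(261.63):  # middle C
        samples.extend(sine_tone(f))

    with wave.open("scale.wav", "w") as out:
        out.setnchannels(1)
        out.setsampwidth(2)   # 16-bit samples
        out.setframerate(RATE)
        out.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

Even this little bit of code makes the connection between frequency ratios and scale degrees concrete, which is the kind of idea the video builds on.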

I looked up the Georgia Performance Standards for Music. Some of the standards include a large collection of music ideas, like this:

Describe similarities and differences in the terminology of the subject matter between music and other subject areas including: color, movement, expression, style, symmetry, form, interpretation, texture, harmony, patterns and sequence, repetition, texts and lyrics, meter, wave and sound production, timbre, frequency of pitch, volume, acoustics, physiology and anatomy, technology, history, and culture, etc.

Several of these ideas appear in Chris Ford’s 40 minute video. Many other musical ideas could be introduced through code. (We’re probably talking about music programming, rather than live coding — exploring all of these under the pressure of real-time performance is probably more than we need or want.) Could these ideas be made more constructionist through code (i.e., letting students build music and play with these ideas) than through learning an instrument well enough to explore the ideas? Learning an instrument is clearly valuable (and is part of these standards), but perhaps more could be learned and explored through code.

The general form of this idea is “STEAM” — STEM + Art.  There is a growing community suggesting that we need to teach students about art and design, as well as STEM.  Here, I am asking the question: Is Art an avenue for productively introducing STEM ideas?

The even more general form of this idea dates back to Seymour Papert’s ideas about computing across the curriculum.  Seymour believed that computing was a powerful literacy to use in learning science and mathematics — and explicitly, music, too.  At a more practical level, one of the questions raised at Dagstuhl was this:  We’re not having great success getting computing into STEM.  Is Art more amenable to accepting computing as a medium?  Are music and art the way to get computing taught in schools?  The argument I’m making here is that we can use computing to achieve music education goals.  Maybe computing education goals, too.

October 3, 2013 at 7:15 am 20 comments

Teaching intro CS and programming by way of scientific data analysis

This class sounds cool and similar to our “Computational Freakonomics” course, but at the data analysis stage rather than the statistics stage. I found that Allen Downey has taught a similar course, “Think Stats,” which dives into the algorithms behind the statistics. It’s an interesting set of classes that focus on relevance and on introducing computing through a real-world data context.

The most unique feature of our class is that every assignment (after the first, which introduces Python basics) uses real-world data: DNA files straight out of a sequencer, measurements of ocean characteristics (salinity, chemical concentrations) and plankton biodiversity, social networking connections and messages, election returns, economic reports, etc. Whereas many classes explain that programming will be useful in the real world or give simplistic problems with a flavor of scientific analysis, we are not aware of other classes taught from a computer science perspective that use real-world datasets. (But, perhaps such exist; we would be happy to learn about them.)

via PATPAT: Program analysis, the practice and theory: Teaching intro CS and programming by way of scientific data analysis.
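
For a sense of what the very first of those data-driven assignments might look like in Python, here is a minimal sketch that goes from a raw CSV of ocean measurements to a summary statistic. The file name and column name below are invented placeholders, not the course’s actual data:

    import csv
    from statistics import mean

    # Hypothetical measurements file with one row per water sample.
    with open("ocean_samples.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Skip rows where the salinity field is blank, then summarize.
    salinity = [float(r["salinity_psu"]) for r in rows if r["salinity_psu"]]
    print(len(salinity), "samples, mean salinity =", round(mean(salinity), 2), "PSU")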

September 10, 2012 at 3:33 pm Leave a comment

Report on “Computational Freakonomics” Class: Olympics, game consoles, the Euro, and Facebook

I’ve told you a bit about how the Media Computation class went this summer, with the new things that I tried.  Let me tell you something about how the “Computational Freakonomics” (CompFreak) class went.

The CompFreak class wasn’t new.  Richard Catrambone and I taught it once in 2006.  But we hadn’t taught it since then, and I’d never taught it on my own, so it was “new” for me.  There were six weeks in the term at Oxford.  Each week was roughly the same:

  • On Monday, we discussed a chapter from the “Freakonomics” book.
  • We then discussed social science issues related to that chapter, from the nature of science, through t-tests and ANOVA, up to multiple linear regression.  Sometimes, we held a debate about issues in the chapter (e.g., on “Atlanta is a crime-ridden city” and on “Roe v. Wade is the most significant explanation for the drop in crime in the 1990s.”)
  • Then I showed them how to implement the methods in SciPy to do real analysis of some Internet-based data sets.  I gave them a bunch of example data sets, and showed them how to read data from flat text files and from CSV files (a sketch of that kind of exercise appears after this list).
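
Here is a rough sketch of one of those in-class exercises: read a CSV and run a two-sample test on it. The file and column names below are made up; only the scipy.stats call is real:

    import csv
    from scipy import stats

    # Hypothetical data set: one row per home game, with attendance and whether
    # the team was in playoff contention that week (invented column names).
    with open("attendance.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    attendance = [float(r["attendance"]) for r in rows]
    contending = [a for a, r in zip(attendance, rows) if r["in_contention"] == "yes"]
    not_contending = [a for a, r in zip(attendance, rows) if r["in_contention"] != "yes"]

    # Welch's two-sample t-test: does attendance differ when the team is in contention?
    t, p = stats.ttest_ind(contending, not_contending, equal_var=False)
    print("t =", round(t, 2), " p =", round(p, 4))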

At the end of the course, students did a project where they asked a question, any question they wanted, from any database.  Then they did it again, but in pairs, after a bunch of feedback from me (both on the first project and on their proposal for the final project).  The idea was that the final projects would be better than the first round, since the students got feedback and combined efforts in the pair.  And they were.

  • One team looked at the so-called “medal slump” after a country hosts the Olympics.  The “medal slump” got mentioned in some UK newspapers this summer.  One member of the team had found in his first project that, indeed, the host country wins statistically significantly fewer medals in the Olympics following the one it hosted.  But working as a pair, the students found that there was no medal “slump.”  Instead, in the Olympics that a country hosts, there is a huge medal “bump”!  When hosting, the country wins more medals, but the prior two and following two Olympics all follow the same trend in medals won.
  • Another team looked at Eurozone countries and how their GDP changes tracked one another after moving to the Euro, then tried to explain that in terms of monetary policy and internal trading.  It is the case that countries that moved to the Euro found their GDPs correlating with one another, much more than with European countries that did not adopt the Euro or with other countries of similar GDP size.  But the team couldn’t come up with a good explanation for why: was it because internal trading was facilitated, because of joint monetary policy, or something else?
  • One team figured out the Facebook API (which they said was awful) and looked at different companies’ “likes” versus their stock prices over time.  The two were strongly correlated, but “likes” grow basically linearly — almost nobody un-likes a company.  Since stock prices generally rise too, it’s a clear correlation, but not a meaningful one.
  • Another team looked at the impact of new consoles on the video game market.  A new console has a huge impact on the stock price of the company releasing it in the year of release, while the game manufacturers’ stock rises dramatically.  But the team realized a weakness in their study: they looked only at the year of a console’s release.  The real benefit of a new console is its long lifespan.  The year that the PS3 came out, it was outsold by the PS2.  But that’s hard to see in stock prices.
  • The last team looked at the impact of the Olympics on the host country’s GDP.  There was no correlation at all between hosting and changes in GDP.  The Olympics are a big deal, but they are still a small drop in the host country’s overall economy.

One of my favorite observations from their presentations: Their honesty.  Most of the groups found nothing significant, or they got it wrong — and they all admitted that.  Maybe it was because it was a class context, versus a tenure-race-influenced conference.  They had a wonderful honesty about what they found and what they didn’t.

I’ve posted the syllabus, course notes, the slides that I used (Richard never used PowerPoint, but I needed PowerPoint to prop up my efforts to be Richard), and the final exam on the CompFreak Swiki.  I also posted the student course-instructor opinion survey results, which are interesting to read in terms of what didn’t work.

  • Clearly, I was no Richard Catrambone. Richard is known around campus for how well he explains statistics, and I learned a lot from listening to his lectures in 2006. Students found my discussion of inferential statistics to be the most boring part.
  • They wanted more in-class coding! I had them code in-class every week. After each new test I showed them (correlation, t-test, ANOVA, etc.), I made them code it in pairs (with any data they wanted), and then we all discussed what they found in the last five minutes of class. I felt guilty that they were just programming away while I worked with pairs that had questions or read email. I guess they liked that part and wanted more.
  • I get credit from the students for something that Richard taught me to do. Richard pointed out that his reading of cognitive overload suggests that nobody can pay attention for 90 minutes straight. Our classes were 90 minutes a day, four days a week. In a 90 minute class, I made them get up halfway through and go outside (when it wasn’t raining). They liked that part.
  • Students did learn more about computing, inspired by the questions that they were trying to answer.  They talk in their survey comments about studying more Python on their own and wishing I’d covered more Python and computing.
  • In general, though, they seemed to like the class, and they encouraged us to offer it on-campus, which we’ve not yet done.

Students who talked to me about the class at the end said that they found it interesting to use statistics for something.  It turns out that I happened to get a bunch of students who had taken a lot of statistics before (e.g., high school AP Statistics).  But they still liked the class because of (a) the coding and (b) applying statistics to real datasets.  My students asked all kinds of questions, from what factors influence the money earned by golf pros, to the influences on attendance at Braves games (unemployment is much more significant than how much the team is in contention for the playoffs).  One of the more interesting findings for me: GDP correlates strongly and significantly with the number of Olympic gold medals that a country wins, i.e., rich countries win more medals.  However, GDP per capita has almost no correlation.  One interpretation: To win in the Olympics, you need lots of rich people (vs. a large middle class).
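
For readers who want to see what that kind of check looks like in SciPy, here is a sketch with invented placeholder numbers (not the students’ actual data) showing how both correlations could be computed:

    from scipy import stats

    # Invented placeholder values, one entry per country:
    # GDP in trillions of USD, population in millions, and gold medals won.
    gdp = [15.0, 8.1, 5.9, 2.4, 1.6, 0.9, 0.5, 0.3]
    pop = [310, 1340, 127, 62, 60, 23, 10, 5]
    golds = [39, 38, 7, 11, 8, 7, 3, 1]

    # g trillions of USD over p millions of people works out to (g / p) * 1e6 USD per person.
    gdp_per_capita = [g * 1e6 / p for g, p in zip(gdp, pop)]

    r1, p1 = stats.pearsonr(gdp, golds)
    r2, p2 = stats.pearsonr(gdp_per_capita, golds)
    print("GDP vs golds:            r =", round(r1, 2), " p =", round(p1, 3))
    print("GDP per capita vs golds: r =", round(r2, 2), " p =", round(p2, 3))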

Anyway, I still don’t know if we’ll ever offer this class again, on-campus or study-abroad.  It was great fun to teach.  It’s particularly fun for me as an exploration of other contexts in contextualized computing education.  This isn’t robotics or video games.  This is “studying the world, computationally and quantitatively” as a reason for learning more about computing.

August 16, 2012 at 8:27 am 6 comments

CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts

Boy, do I want to learn more about this! ChucK and Processing, and two semesters — it sounds like Media Computation on steroids!

The National Science Foundation (NSF) has awarded California Institute of the Arts (CalArts) a grant of $111,881 to develop a STEM (Science, Technology, Engineering and Mathematics) curriculum for undergraduate students across the Institute’s diverse arts disciplines. The two-semester curriculum is designed to teach essential computer science skills to beginners. Classes will begin in Fall 2012 and are open to students in CalArts’ six schools—Art, Critical Studies, Dance, Film/Video, Music and Theater.

This innovative arts-centered approach to teaching computer science—developed by Ajay Kapur, Associate Dean of Research and Development in Digital Arts, and Permanent Visiting Lecturer Perry R. Cook, founder of the Princeton University Sound Lab—offers a model for teaching that can be replicated at other arts institutions and extended to students in similar non-traditional STEM contexts.

via CalArts Awarded National Science Foundation Grant to Teach Computer Science through the Arts | CalArts.

May 31, 2012 at 7:14 am 2 comments

How can we teach multiple CS1s?

A common question I get about contextualized approaches to CS1 is: “How can we possibly offer more than one introductory course with our few teachers?”  Valerie Barr has a nice paper in a recent issue of the Journal of Computing Sciences in Colleges where she explains how her small department was able to offer multiple CS1s, and the positive impact that had on their enrollment.

The department currently has 6 full time faculty members, and a 6 course per year teaching load. Each introductory course is taught studio style, with integrated lecture and hands-on work. The old CS1 had a separate lab session and counted as 1.5 courses of teaching load. Now the introductory courses (except Programming for Engineers) continue this model, meet the additional time and count as 1.5 courses for the faculty member, allowing substantial time for hands-on activities. Each section is capped at 18 students and taught in a computer lab in order to facilitate the transition between lecture and hands-on work.

In order to make room in the course schedule for the increased number of CS1 offerings, the department eliminated the old CS0 course. A number of additional changes were made in order to accommodate the new approach to the introductory CS curriculum: reduction of the number of prescribed courses for the major from 8 (out of 10) to 5 (this has the added benefit, by increasing the number of electives, of giving students more flexibility and choice within the general guidelines of the major); put elective courses on a rotation schedule so that each one is taught every other or every third year; made available to students a 4-year schedule of offerings so that they can plan according to the course rotation.

May 8, 2012 at 7:23 am 2 comments

A CS Emporium would be a wonderful idea: Efficient and Tailored Computing Education

Over the weekend, I read a post by GasStationsWithoutPumps on speeding through college.  The Washington Post has a great article about Virginia Tech’s Math Emporium that provides a mechanism for doing that: self-paced mathematics instruction, with human instructors available for one-on-one help.  It’s efficient, and it lets students learn at their own pace.  I would love to see a computer science version of this.  In particular, it would be great if students could explore problems in a variety of contexts (from media to games to robotics to interactive fiction), and put in the time they need to develop some skill and proficiency.  Like the distance education efforts, this is about improving the efficiency of higher education.  Unlike distance education, the Emporium includes 1:1 human interaction and the potential for individualized approaches and curriculum.  And there’s potential synergy: the content needed to make a CS Emporium work could also be used in distance education.  Here’s my prediction: Without the 1:1 help, I’d expect the distance folks to still have a higher WFD (withdrawal, F, or D) rate.

No academic initiative has delivered more handsomely on the oft-stated promise of efficiency-via-technology in higher education, said Carol Twigg, president of the National Center for Academic Transformation, a nonprofit that studies technological innovations to improve learning and reduce cost. She calls the Emporium “a solution to the math problem” in colleges.

It may be an idea whose time has come. Since its creation in 1997, the Emporium model has spread to the universities of Alabama and Idaho (in 2000) and to Louisiana State University (in 2004). Interest has swelled as of late; Twigg says the Emporium has been adopted by about 100 schools. This academic year, Emporium-style math arrived at Montgomery College in Maryland and Northern Virginia Community College.

“How could computers not change mathematics?” said Peter Haskell, math department chairman at Virginia Tech. “How could they not change higher education? They’ve changed everything else.”

Emporium courses include pre-calculus, calculus, trigonometry and geometry, subjects taken mostly by freshmen to satisfy math requirements. The format seems to work best in subjects that stress skill development — such as solving problems over and over. Computer-led lessons show promise for remedial English instruction and perhaps foreign language, Twigg said. Machines will never replace humans in poetry seminars.

via At Virginia Tech, computers help solve a math class problem – The Washington Post.

April 25, 2012 at 8:58 am 4 comments

Nice List: Seven misconceptions about how students learn

I would have written the first one a bit differently for a CS Ed audience.  There’s a big push in CS Ed to make sure students learn the “right” basic facts first so that they don’t have to “unlearn” bad habits later.  Absolutely, that’s a real risk.  But that doesn’t mean that we have to teach the basic facts first.  Context comes first — students have to know why they’re learning something in order to get deep learning.

Here are seven of the biggest myths about learning that, unfortunately, guide the way that many schools are organized in this era of standardized test-based public school reform.

Basic Facts Come Before Deep Learning

This one translates roughly as, “Students must do the boring stuff before they can do the interesting stuff.” Or, “Students must memorize before they can be allowed to think.” In truth, students are most likely to achieve long-term mastery of basic facts in the context of engaging, student-directed learning.

via Seven misconceptions about how students learn – The Answer Sheet – The Washington Post.

March 19, 2012 at 8:01 am 3 comments

Helping Everyone Create with Computing: Video of C5 Talk

A YouTube video of my talk (with Alan’s introduction) at C5 is now available.

February 15, 2012 at 10:33 am 2 comments

Thoughts on Code Year, Codecademy, and Learning to Code (with C5 Side Note)

The blog piece below is the most biting criticism I’ve read of Codecademy.  (And of course, I’m always glad to read someone else pushing context as important for computing education!)  The author makes a very good point in the quote below.  I’m not sure that we know how to achieve the goals of Code Year.  It’s amazing that Codecademy has raised $2.5M to support Code Year, but I do wonder if there’s a better use for that money–one that moves us closer to the goal of ubiquitous computing literacy.

Learning anything without context is hardly learning. I wish that Code Year was 2013 and 2012 was “some smart people with good ideas and a lot of money took the time to build a great pedagogically-driven tool to really solve an existing problem for folks who want and need training in this area.”

via Thoughts on Code Year, Codecademy, and Learning to Code | thickbook.com.

Side note: I should be visiting with Alan Kay in 4 or 5 hours.  He’s introducing my keynote at the C5 Conference (http://www.cm.is.ritsumei.ac.jp/c5-12/), which I’m excited about.  Two of the C’s of C5 are “creating” and “computing,” and my talk is going to be about the challenges of supporting everyone in creating (for me, that includes “programming”) with computing.  I’m going to tell the MediaComp story, talk about Brian Dorn’s work with graphics designers, and talk about Klara Benda’s and Lijun Ni’s work, which tells us about teachers’ needs in learning computer science.

January 18, 2012 at 7:06 am 8 comments

Trip report on Australia visit: The Higher Ed Times are a-changin’

As I mentioned last week, Barb and I spent the week visiting Australia.  We gave talks in Melbourne (keynote at Melbourne Computing Education Conventicle and teacher workshop), Adelaide (presentation on MediaComp at Festival of Teaching and Learning and on CS outreach), and Sydney (joint keynote at Sydney Computing Education Conventicle, then MediaComp talk at alumni end-of-term BBQ).  I’ve uploaded all the talk slides at http://coweb.cc.gatech.edu/mediaComp-plan/1, if you’d like to see any of them.

It was a really interesting time to visit Australia.  Higher education is going through some dramatic changes there.  I don’t understand the current system all that well, so I don’t understand exactly what’s changing.  I did hear a lot about what the new system will look like.

Higher education enrollment used to be “capped,” with a certain number of well-ranked students being given access to Universities.  There are almost no private Universities in Australia.

Under the new system, there are “no caps.”  Universities can take as many students as they wish (where Universities have some say on their goals for their enrollment, e.g., to keep a high test score average for an entering class vs. having increased diversity).  Private Universities will be allowed, maybe even encouraged.  There will be funding for Universities associated with taking students from lower socioeconomic status (SES) high schools.  Funding will be tied to retention and graduation rates.  All of these changes start in 2012.

The new system features TEQSA, the Tertiary Education Quality and Standards Agency.  I heard a presentation in Adelaide about TEQSA and how the University of Adelaide is responding to it.  Currently, there is a huge effort in establishing standards for all their undergraduate and graduate programs, and TEQSA will also take on an accreditation role (a “regulatory function,” in the words of their website) starting in January 2012.  I was a bit worried about the standards process from the presentation I heard.  It’s all “demand-driven” (their emphasis), with interviews with literally hundreds of stakeholders, especially industry.  After my talk on Media Computation, someone asked me, “But is industry asking for Liberal Arts majors to learn to program?”  Of course not, I explained.  Teaching all Liberal Arts majors about programming is about enabling innovation and creating a market differentiation.  I told one of my stories about students in Liberal Arts, Architecture, and Management getting interviews and jobs because they have a unique computer science background that their peers might not.  I explicitly said that that’s the problem of being “demand-driven” in setting standards — how do you plan for the future in which your students are going to live?  (Our host in Adelaide, Katrina Falkner, said that the Dean of Humanities approached her after my talk and asked if the Adelaide CS department could offer a similar course.  Someone from their Education school came up to me and said that her eyes were opened — she’d never thought about teaching CS to high school teachers before.  A good start for a conversation!)

I suspect that some of the higher education changes were drivers for our visit.  Some funding for our visit came from the Australian Council of Deans of ICT (ACDICT).  I heard a presentation in Sydney from Tony Koppi of ACDICT, where he talked about a study that they had just completed on why students leave computing programs.  The most common complaints were the same ones that drove the design of Media Computation, e.g., that students found the CS courses irrelevant and boring.  Tony explicitly called for exploring contextualized computing education.  Retention is clearly on their minds nowadays, and that’s one of the outcome variables that MediaComp has had the most success influencing.  They are also quite concerned with drawing more students into computing, especially women and members of underrepresented minorities.  Barbara was pressed for her lessons learned on CS outreach in all three cities.  I don’t think ACDICT is telling Australian computing departments to do as we do, but they are asking computing departments to think about these issues.

One of the most interesting interventions I heard about from ACDICT was their ACDICT Learning and Teaching Academy (ALTA), whose goal is: “To contribute to improvement in the perception of and the actual quality of learning and teaching across the ICT disciplines.” I thought the “perception of and actual quality” combination was particularly realistic!  I heard a presentation on ALTA in Melbourne, where they emphasized that they want to address “Grand Challenges” in computing education.  I’m interested in watching to see what they identify!

P.S.

Yes, we found Australia interesting and fun.  We took our two daughters.  (Our son is a sophomore at Georgia Tech and had class through the Wednesday before Thanksgiving, so he couldn’t afford the time off.)  We petted kangaroos in Melbourne; had dinner in an Australian home in Sydney that sat on a peninsula directly between Sydney Harbour and the ocean, with beautiful views to both sides; and saw a play at the Sydney Opera House on Friday before heading home.  Melbourne was so interesting (the Royal Botanic Gardens are a must-see), Adelaide was so beautiful (probably the prettiest campus I’ve yet visited), and Sydney is probably really nice when it isn’t raining for four days straight.  The visit to the Sydney Hyde Park Barracks was a highlight for me, learning about how Sydney grew into the amazing city it is from a society of convicts.  I’m really glad that we got the opportunity to visit and learn about Australia.  Thanks to the ACM Distinguished Speakers Program, ACDICT, and funding from our hosts.  Our hosts took great care of us, and we’re grateful: Catherine Lang at Swinburne, Katrina Falkner at U. Adelaide, and Judy Kay at U. Sydney.

November 28, 2011 at 10:03 am Leave a comment

Slow pace of higher-ed reform costs STEM majors: CS needs context

The argument being made here in this NYTimes piece suggests that the sluggish response to calls for higher-education reform has a real cost.  We know how to make STEM classes more successful, in terms of motivation and learning, but higher-education institutions are not willing to change.

What does this mean for Computing Education?  How do we avoid being “too narrow” and having a “sink or swim” mentality?  We are encouraged to have CS education that has “passion” and includes “design projects for Freshmen.”  It sounds to me like contextualized computing education, which includes efforts like Media Computation and robotics, is the kind of thing they’re encouraging.

No one doubts that students need a strong theoretical foundation. But what frustrates education experts is how long it has taken for most schools to make changes.

The National Science Board, a public advisory body, warned in the mid-1980s that students were losing sight of why they wanted to be scientists and engineers in the first place. Research confirmed in the 1990s that students learn more by grappling with open-ended problems, like creating a computer game or designing an alternative energy system, than listening to lectures. While the National Science Foundation went on to finance pilot courses that employed interactive projects, when the money dried up, so did most of the courses. Lecture classes are far cheaper to produce, and top professors are focused on bringing in research grants, not teaching undergraduates.

In 2005, the National Academy of Engineering concluded that “scattered interventions” had not resulted in widespread change. “Treating the freshman year as a ‘sink or swim’ experience and accepting attrition as inevitable,” it said, “is both unfair to students and wasteful of resources and faculty time.”

via Why Science Majors Change Their Minds (It’s Just So Darn Hard) – NYTimes.com.

November 7, 2011 at 8:18 am 5 comments

Fixing Our Math Education with Context

Sounds pretty similar to the contextualized computing education that we’ve been arguing for with IPRE and Media Computation.  The argument being made here is another example of the tension between the cognitive (abstract conceptual learning) and the situative (integrating students into a community of practice).

A math curriculum that focused on real-life problems would still expose students to the abstract tools of mathematics, especially the manipulation of unknown quantities. But there is a world of difference between teaching “pure” math, with no context, and teaching relevant problems that will lead students to appreciate how a mathematical formula models and clarifies real-world situations. The former is how algebra courses currently proceed — introducing the mysterious variable x, which many students struggle to understand. By contrast, a contextual approach, in the style of all working scientists, would introduce formulas using abbreviations for simple quantities — for instance, Einstein’s famous equation E=mc², where E stands for energy, m for mass and c for the speed of light.

Imagine replacing the sequence of algebra, geometry and calculus with a sequence of finance, data and basic engineering. In the finance course, students would learn the exponential function, use formulas in spreadsheets and study the budgets of people, companies and governments. In the data course, students would gather their own data sets and learn how, in fields as diverse as sports and medicine, larger samples give better estimates of averages. In the basic engineering course, students would learn the workings of engines, sound waves, TV signals and computers. Science and math were originally discovered together, and they are best learned together now.

via How to Fix Our Math Education – NYTimes.com.
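
As a tiny worked example of the “finance course” framing of the exponential function (the principal, rate, and horizon below are arbitrary example values of mine, not anything from the article):

    # Compound growth: the balance of an account with interest compounded yearly.
    def balance(principal, annual_rate, years):
        return principal * (1 + annual_rate) ** years

    # $1000 at 5% per year: the exponential function appears as repeated multiplication.
    for t in (0, 10, 20, 30):
        print(t, "years:", round(balance(1000, 0.05, t), 2))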

October 6, 2011 at 8:09 am 8 comments

Visiting CMU: Historical Home of CS1-for-All

It’s Spring Break at Georgia Tech this week.  Last week was crammed full of midterm grading. Now that it’s Spring, I’m traveling north to colder climates.

I’m at Carnegie Mellon University today to speak in their Program for Interdisciplinary Education Research (PIER), and at the “EdBag” tomorrow.  I’m excited (and a bit nervous) — I’ve never spoken here at CMU before.  Leigh Ann Sudol-DeLyser is my student host, and she gave me a great tour of the campus yesterday.  For today’s talk, I’m giving a variation of my “Meeting the Computing Needs for Everyone” talk (describing Brian Dorn’s work and the role of contextualized computing education in this goal), but with special attention to the Alan Perlis lecture from 1961.

I re-read that chapter yesterday.  Wow — it’s really clear that the idea of teaching everyone an introductory course on CS has its home here, at the University that grew out of Carnegie Tech.  While I had remembered that Peter Elias of MIT had pushed back against the idea of such a course as unnecessary, I hadn’t remembered how J.C.R. Licklider and Perlis responded.

  • Elias argues that programming is just a “mental chore” that the computer should be able to take care of for us: “If the computers, together with sufficiently ingenious languages and programming systems, are capable of doing everything that Professor Perlis describes—and I believe they are (and more)—then they should be ingenious enough to do it without the human symbiote being obliged to perform the mechanical chores which are a huge part of current programming effort, and which are a large part of what must now be taught in the introductory course that he proposes.”
  • Licklider emphasizes what might be possible with this new kind of language. “Peter, I think the first apes who tried to talk with one another decided that learning language was a dreadful bore…But some people write poetry in the language we speak.”
  • Perlis makes a really interesting pedagogical rebuttal. He says that what he really wants to teach are abstractions, and a course in programming is the best way he can think of doing that. He also says (not quoted below) that he has no evidence for this, but believes it strongly. “The purpose of a course in programming is to teach people how to construct and analyze processes…A course in programming is concerned with abstraction: the abstraction of constructing, analyzing, and describing processes…The point is to make the students construct complex processes out of simpler ones….A properly designed programming course will develop these abilities better than any other course.”
  • John McCarthy (father of Lisp) also responded in opposition to Elias. “Programming is the art of stating procedures. Prior to the development of digital computers, one did not have to state procedures precisely, and no languages were developed for stating procedures precisely. Now we have a tool that will carry out any procedure, provided we can state this procedure sufficiently well. It is utopian to suppose that either English or some combination of English and mathematics will turn out to be the appropriate language for stating procedures.”  Interesting how McCarthy and Licklider, as in Donald Knuth’s Turing award lecture, talk about programming as art.

Leigh Ann told me that the “EdBag” is a place to play with new and incomplete ideas.  I’m planning to talk about the challenge of producing more high school CS teachers, including alternatives like Dave Patterson’s proposal.  I’ve been thinking a lot about using a worked examples approach, informed by Ashok Goel’s structure-behavior-function model of design cognition.

Tomorrow, I fly to DC and spend two days reviewing NSF proposals.  Still trying to get the rest of my proposals read today, and all the reviews written tonight.  Thursday night, I’ll get back home to Atlanta where it really is Spring already.

March 21, 2011 at 7:16 am 3 comments

Contextualized computing ed works — it’s just not there

CS Ed folk are mailing each other about the Washington Post article on CS Education (just in time for SIGCSE this week!).  Eli’s class at Virginia Tech sounds great, and the project is an excellent example of how context can help to highlight the relevance of computing education — what we’ve been saying with Media Computation and IPRE for years.  Jan Cuny’s comment highlights the more significant bit.  Sarita Yardi noted in her email to Georgia Tech’s CSEd mailing list that the reporters missed Jan’s bigger issue, and I think Sarita is right.

We do know how to engage kids now.  We have NCWIT Best and Promising Practices, and we have contextualized computing education.  The real problem is that, when it comes to high school CS, we’re just not there.  If you choose a high school at random, you are ten times more likely to find one that offers no CS than to find one offering AP CS.  That’s a big reason why the AP numbers are so bad.  It’s not that the current AP CS is such an awful class.  It can be taught well. It’s just not available to everyone!  The AP CS teachers we’re working with are turning kids away because their classes are full. Most kids just don’t have access.

“The sky is falling in a sense that we’re not engaging kids that we could be engaging,” said Jan Cuny of the National Science Foundation, who is helping to formulate a new AP course. While the current program focuses mostly on Java programming, a new class being piloted at several colleges would focus on problem-solving and creating technology instead of just using it.

“We’ll have no problem interesting kids in doing these things,” Cuny said. “The tough part is getting into the schools.”

via Computer science programs use mobile apps to make coursework relevant.

March 7, 2011 at 6:14 pm 2 comments
