Posts tagged ‘end-user programming’

Dijkstra’s Truths about Computing Education Aren’t: The many kinds of programming

ACM Turing Award laureate Edsger Dijkstra wrote several popular pieces about computer science education. I did my Blog@CACM post on one of these (see post here), “On the cruelty of really teaching computer science,” which may be the most-cited computing education paper ever. Modern learning sciences and computing education research have shown him to be mostly wrong. Dijkstra encouraged us to avoid metaphor when learning the “radical novelty” of computing, advice that we now know is likely impossible to follow. Instead, the study of metaphor in computing education gives us new insights into how we learn and teach about programming. So far, I’m not aware of any evidence of anyone teaching or learning CS without metaphor.

After my Blog@CACM post, I learned on Twitter about Briana Bettin’s dissertation about metaphors in CS (see link here). Briana considers the potential damage from Dijkstra’s essay on computing education. How many CS teachers think that analogy and metaphors are bad, citing Dijkstra, when the reality is that they are critical?

The second most popular of his computing education essays is “How do we tell truths that might hurt?” (See link here). This essay is known for zingers like:

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

He goes on to critique those who use social science methods and anthropomorphic terms when describing computing. He’s wrong about those, too (as I described in the Blog@CACM post), but I’ll just take up the Basic comment here.

Today, we can consider Dijkstra’s comments in light of research on brain plasticity (see example article here). It wasn’t until 2002 that we had evidence of how even adult brains can grow and reorganize their neural networks. We can always learn and regenerate, even as adults. Changing minds is always hard. The way to achieve change is through motivating change — being able to show that change is in the person’s best interest (see example here). Maybe people stick with Basic (or for me, with HyperTalk and Smalltalk) because the alternatives aren’t enough better to overcome the inertia of what they already know. The onus isn’t on the adult learner to change. It’s on the teacher to motivate change.

There are computer scientists, like Dijkstra, who believe that innate differences separate those who can program from those who cannot, a difference that is sometimes called the “Geek Gene.” An interview with Donald Knuth (another Turing Award laureate) last year quoted him saying that only one person in 50 will “groove with programming” (see interview here). We have a lot of evidence that there is no Geek Gene (see this blog post here), i.e., we have not yet identified innate differences that prevent someone from learning to program. Good teaching overcomes many innate differences (see blog post here making this argument).

Of course, there are innate differences between people, but that fact doesn’t have to limit who can program. Computers are the most flexible medium that humans have ever created. To argue that only a small percentage of people can “groove with programming” or that learning a specific programming language “mentally mutilates” is to define programming in a very narrow way. There are lots of activities that are programming. Remember that most Scratch programs have only Forever loops (if any loops at all), and Bootstrap:Algebra doesn’t have students write structures to control repetition. Students are still programming in Scratch and Bootstrap:Algebra. Maybe only one in 50 will be able to read and understand all of Knuth’s The Art of Computer Programming (I’m not one of those), and maybe people who programmed in Basic are unlikely to delve into Dijkstra’s ideas about concurrent and distributed programming (that’s me again). Let’s accept a wide range of abilities and interests (and endpoints) without denigrating those who will learn and work differently.

December 7, 2020 at 7:00 am 7 comments

What do I mean by Computing Education Research? The Social Science Perspective

As a new guy at the University of Michigan, I spend a lot of my time explaining who I am and what I mean by computing education research. In this blog post and the next, I am sharing two of those explanations.

The first one was a six-minute lightning talk to the University of Michigan School of Information. My audience was well-versed in social sciences, so I explained who I was in those terms.

I ground a lot of my work in Lave and Wenger’s (1991) theory about Communities of Practice. Let me create a symbol here to represent a community of practice. My symbol has a light boundary where novices engage in legitimate peripheral participation, a darker area where full-fledged members of the community work, and a core where the experts live who represent the values and practices of the community.

When people think about computing education research, they mostly think about this community of practice: software development. How do we help students join the community of practice of software developers? That’s an interesting question, and I have done some work there (e.g., on how students come to understand multi-class object-oriented systems, like Model-View-Controller), but that’s not my focus.

Let’s set that community of practice aside (lower left), and consider another that I’m much more interested in: end-users who program. I’ve taught thousands of liberal arts, architecture/design, and business/management majors to program using Media Computation, a contextualized approach to computing that focuses on their interests and needs. I worked with Brian Dorn as he studied graphics designers who learn to program. End-user programmers are far more numerous than professional software developers. They use programming for different purposes, so we would expect them to use different practices, languages, libraries, and tools. That’s why we developed JES, a Python programming environment for our future end-users who are programming with media. I’m very interested in understanding and supporting the practices of end-user programmers.

Let’s keep that community of practice in consideration, and next consider a different one: High school teachers who teach computer science. They’re not about software development, either. They’re going to do something different (different practices) and have different values. High school teachers need to learn efficiently because they don’t have a lot of time to learn, and they want to learn effective methods for their students. Here’s where we work on ebooks, and subgoal labeling, and Barb’s Parsons problems. I’m interested in how we make computing education efficient and effective, and in understanding the underlying cognitive mechanisms at work. Why do some things work better for learning programming than others?

Here’s another community of practice I care about: scientists and engineers who use programming as a new way to do science. Again, different practices and values than software developers. How do we best support them?  What tools do they need for their practices?

I would like students to have the same advantages as scientists and engineers, to be able to use code as a powerful and executable notation. Lately, I’ve been particularly focused on pre-calculus and economics. I know it’s stretching Lave and Wenger’s notion to think about classrooms as kinds of communities of practice, so maybe the real community of practice is economists and professionals who use computing. My specific interest is the edge of the community of practice, constructing legitimate peripheral participation for students who might use computing to aid in their entrance into the field. When does programming help with learning something else, and why does it help, and how can we make it more effective (e.g., use the best parts of programming that have the greatest leverage on supporting learning)?

So let’s consider these communities of practice.

This is a really weird picture, from Lave and Wenger’s perspective. I’m saying that programming is a practice in all of these communities, but it’s different in each one. We actually do know practices like this: Reading and writing, and the use of mathematics.

I suggest that programming is a literacy. (I’m not the first, of course, and I don’t make the argument nearly as well as my colleague, Yasmin Kafai.) It’s a way of expressing thought, communicating with others, and testing and exploring new ideas.

And that’s what makes computing education a social justice issue. If we have really invented a new literacy, we need to make it available to everyone.

November 5, 2018 at 9:00 am 12 comments

Reflections of a CS Professor and an End-User Programmer

In my last blog post, I talked about the Parsons problems generator that I used to put scrambled code problems on my quiz, study guide, and final exam. I’ve been reflecting on the experience and what it suggests to me about end-user programming.

I’m a computing professor, and while I enjoy programming, I mostly code to build exercises and examples for my students. I almost never code research prototypes anymore. I only occasionally code scripts that help me with something, like cleaning data, analyzing data, or in this case, generating problems for my students. In this case, I’m a casual end-user programmer — I’m a non-professional programmer who is making code to help me with some aspect of my job. This is in contrast:

  • To Philip Guo’s work on conversational programmers, who are people who learn programming in order to talk to programmers (see his post describing his papers on conversational programmers). I know how to talk to programmers, and I have been a professional programmer. Now, I have a different job, and sometimes programming is worthwhile in that job.
  • To computational scientists and engineers, which is the audience for Software Carpentry. Computational scientists and engineers might write code occasionally to solve a problem, but more importantly, they write code as part of their research.  I might write a script to handle an odd-job, but most of my research is not conducted with code.

Why did I spend the time writing a script to generate the problems in LaTeX? I was teaching a large class, over 200 students. Mistakes on quizzes and exams at that scale are expensive in terms of emails, complaints, and regrading. Scrambled code problems are tricky. It’s easy to randomly scramble code. It’s harder to keep track of the right ordering. I needed to be able to do this many times.

Was it worthwhile? I think it was. I had a couple of Parsons problems on the quiz, maybe five on the study guide, and maybe three on the final exam. (Different numbers at different stages of development.) Each one got generated at least twice as I refined, improved, or fixed the problem. (One discovery: Don’t include comments. They can legally go anywhere, so including them only makes grading harder.) The original code only took me about an hour to get working. The script got refined many times as I used it, but the initial investment was well worth it for making sure that the problem was right (e.g., I didn’t miss any lines, and indentation was preserved for Python code) and the solution was correct.
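To make that concrete, here is a minimal Python sketch of the core of such a generator. It is not the script I actually used (that was written in LiveCode, and it also emitted LaTeX for the exam document class); the function and variable names here are purely illustrative. The idea is to scramble the solution’s lines while keeping the answer key, preserve indentation, and skip comment-only lines.

```python
# A minimal sketch (illustrative only; the real script was in LiveCode and
# also produced LaTeX for the exam class): scramble the solution's lines
# while keeping track of the correct ordering.
import random

def make_parsons_problem(solution_code, seed=None):
    """Return (scrambled_lines, answer_key) for a Parsons problem.

    answer_key[k] is the 1-based position, in the scrambled listing, of the
    line that comes (k+1)-th in the correct solution.
    """
    rng = random.Random(seed)
    # Keep indentation exactly as written; drop blank and comment-only lines,
    # since comments can legally go anywhere and only make grading harder.
    lines = [ln for ln in solution_code.splitlines()
             if ln.strip() and not ln.strip().startswith("#")]
    order = list(range(len(lines)))
    rng.shuffle(order)
    scrambled = [lines[i] for i in order]
    answer_key = [order.index(k) + 1 for k in range(len(lines))]
    return scrambled, answer_key

if __name__ == "__main__":
    solution = """def total(values):
    result = 0
    for v in values:
        result = result + v
    return result"""
    scrambled, key = make_parsons_problem(solution, seed=1)
    for n, line in enumerate(scrambled, 1):
        print(f"{n}. {line}")
    print("Correct order:", key)
```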

Would it be worthwhile for anyone else to write this script when facing the same problems? That’s a much harder question.

I realized that I brought a lot of knowledge to bear on this problem.

  • I have been a professional programmer.
  • I do not use LiveCode often, but I have used HyperTalk a lot, and the environment is forgiving with lots of help for casual programmers like me. LiveCode doesn’t offer much for data abstraction — basically, everything is a string.  I have experience using the tool’s facility with items, words, lines, and fields to structure data.
  • I know LaTeX and have used the exam class before. I know Python and the fact that I needed to preserve indentation.

Then I realized that it takes almost as much knowledge to use this generator. The few people who might want to use the Parsons problem generator that I posted would have to know about Parsons problems, want to use them, be using LaTeX for exams, and know how to use the output of the generator.

But I bet that all (or the majority?) of end-user programming experiences are like this. End-users are professionals in some domain. They know a lot of stuff. They’ll bring a lot of knowledge to their programming activity. The programs will require a lot of knowledge to write, to understand, and to use.

One of the potential implications is that this program (and maybe most end-user programs?) is probably not useful to many others. Much of what we teach in CS1 for CS majors, or maybe even in Software Carpentry, is not useful to the occasional, casual end-user programmer. Most of what we teach is for larger-scale programming. Do we need to teach end-user programmers about software engineering practices that make code more readable by others? Do we need to teach end-user programmers about tools for working in teams on software if they are not going to be working in teams to develop their small bits of code? Those are honest questions. Shriram Krishnamurthi would remind me that end-user programmers, even more than any other class of programmers, are more likely to make errors and less likely to be able to debug them, so teaching end-user programmers practices and tools to catch and fix errors is particularly important for them. That’s a strong argument. But I also know that, as an end-user programmer myself, I’m not willing to spend a lot of time that doesn’t directly contribute towards my end goal. Balancing the real needs of end-user programmers with their occasional, casual use of programming is an interesting challenge.

The bigger question that I’m wondering about is whether someone else, facing a similar problem, could learn to code with a small enough time investment to make it worthwhile. I did a lot of programming in HyperTalk when I was a graduate student. I have that investment to build on. How much of an investment would someone else have to make to be able to write this kind of script as easily?

Why LiveCode? Why not Python? Or Smalltalk? I was originally going to write this in Python. Why not? I was teaching Python, and the problems would all be in Python. It’d be good exercise for me.

I realized that I didn’t want to deal with files or a command line. I wanted a graphical user interface. I wanted to paste some code in (not put it in a file), and get some text that I could copy (not find it in one or more files). I didn’t want to have to remember what function(s) to call. I wanted a big button. I simply don’t have the time to deal with the cognitive load of file names and function names. Copy-paste the sorted code, press the button, then copy-paste the scrambled code and copy-paste the solution. I could do that. Maybe I could build a GUI in Python, but every time I have used a GUI tool in Python, it was way more work than LiveCode.
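For illustration, here is a hypothetical tkinter version of that paste-press-copy workflow. It is not what I built (my tool was in LiveCode, and these widget names are made up), and even this minimal version needs explicit widget creation and layout calls, which is exactly the kind of ceremony that LiveCode’s drag-and-drop tools avoid.

```python
# A hypothetical tkinter sketch of the paste-press-copy workflow (not the
# LiveCode tool described above; names are illustrative). Paste the solution
# into the top box, press Scramble, copy the problem and answer key below.
import random
import tkinter as tk

def scramble_lines(code):
    """Shuffle non-blank, non-comment lines; return (scrambled, answer key)."""
    lines = [ln for ln in code.splitlines()
             if ln.strip() and not ln.strip().startswith("#")]
    order = list(range(len(lines)))
    random.shuffle(order)
    scrambled = [lines[i] for i in order]
    answer_key = [order.index(k) + 1 for k in range(len(lines))]
    return scrambled, answer_key

def on_scramble():
    scrambled, key = scramble_lines(source_box.get("1.0", tk.END))
    output_box.delete("1.0", tk.END)
    output_box.insert(tk.END, "\n".join(
        f"{n}. {line}" for n, line in enumerate(scrambled, 1)))
    output_box.insert(tk.END, "\n\nCorrect order: " + ", ".join(map(str, key)))

root = tk.Tk()
root.title("Parsons scrambler (sketch)")
source_box = tk.Text(root, height=12, width=70)
source_box.pack(padx=8, pady=4)
tk.Button(root, text="Scramble", command=on_scramble).pack(pady=4)
output_box = tk.Text(root, height=12, width=70)
output_box.pack(padx=8, pady=4)
root.mainloop()
```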

I also know Smalltalk better than most. Here’s a bit of an embarrassing confession: I’ve never really learned to build GUIs in Smalltalk. I’ve built a couple of toy examples in Morphic for class. But a real user interface with text areas that really work? That’s still hard for me. I didn’t want to deal with learning something new. LiveCode is just so easy — select the tool, drag the UI object into place.

LiveCode was the obvious answer for me, but that’s because of who I am and the background that I already have. What could we teach future professionals/end-user programmers that (a) they would find worthwhile learning (not too hard, not too time-consuming) and (b) they could use casually when they needed it, like my Parsons problem generator? That is an interesting computing education research question.

How does a student determine “worthwhile” when deciding what programming to learn for future end-user programming?  Let’s say that we decided to teach all STEM graduate students some programming so that they could use it in their future professional practice as end-user programmers.  What would you teach them?  How would they judge something “worthwhile” to learn for later?

We know some answers to this question.  We know that students judge the authenticity of the language based on what they see themselves doing in the future and what the current practice is in that field (see Betsy DiSalvo’s findings on Glitch and our results on Media Computation).

But what if that’s not a good programming language? What if there’s a better one? What if the common practice in a field is ill-informed? I’m going to bet that most people, faced with the general problem I was facing (wanting a GUI to do a text-processing task), would use JavaScript. LiveCode is way better than JavaScript for an occasional, casual GUI task — easier to learn, more stable, a more coherent implementation, and better programming support for casual users. Yet, I predict most people would choose JavaScript because of the Principle of Social Proof.

I’ve been reading Robert Cialdini’s books on social psychology and influence, and he explains that social proof is how people make decisions when they’re uncertain (like how to choose a programming language when they don’t know much about programming) and there are others to copy.

First, we seem to assume that if a lot of people are doing the same thing, they must know something we don’t. Especially when we are uncertain, we are willing to place an enormous amount of trust in the collective knowledge of the crowd. Second, quite frequently the crowd is mistaken because they are not acting on the basis of any superior information but are reacting, themselves, to the principle of social proof.

Robert B. Cialdini, Influence (Collins Business Essentials), Kindle Locations 2570-2573. HarperCollins, Kindle Edition.

How many people know both JavaScript and LiveCode well?  And don’t consider computer scientists. You can’t convince someone by telling them that computer scientists say “X is better than Y.”  People follow social proof from people whom they judge to be similar to them. It’s got to be someone in their field, someone who works like them.

It would be hard to teach the graduate students something other than what’s in common practice in their fields, even if the common choice is less efficient to learn and harder to use than an alternative.

June 11, 2018 at 2:00 am 2 comments

End-user programmers are at least half of all programmers

I was intrigued to see this post during CS Ed Week from ChangeTheEquation.org. They’re revisiting the Scaffidi, Shaw, and Myers question from 2005 (mentioned in this blog post).

You may be surprised to learn that nearly DOUBLE the number of workers use computing than originally thought.  Our new research infographic shows that 7.7 million people use complex computing in their jobs — that’s 3.9 million more than the U.S. Bureau of Labor and Statistics (BLS) reports. We examined a major international dataset that looks past job titles to see what skills people actually use on the job. It turns out that the need for complex computer skills extends far beyond what the BLS currently classifies as computer occupations. Even more reason why computer science education is more critical than ever!

Source: The Hidden Half | Change the Equation

ChangeTheEquation.org is coming up with a much lower estimate of end-user programmers than did Scaffidi et al. Why is that? I looked at their methodology:

To estimate the total number of U.S. citizens who use computers in complex ways on the job, CTEq and AIR examined responses to question G_Q06 in the PIAAC survey: What level of computer use is/was needed to perform your job/last job?

  • STRAIGHTFORWARD, for example using a computer for straightforward routine tasks such as data entry or sending and receiving e-mails
  • MODERATE, for example word-processing, spreadsheets or database management
  • COMPLEX, for example developing software or modifying computer games, programming using languages like java, sql, php or perl, or maintaining a computer network

Source: the Hidden Half: Methodology | Change the Equation

Their “Complex” use is certainly programming, but Scaffidi et al would also call building spreadsheet macros and SQL queries programming. ChangeTheEquation has a different definition that I think undercounts significantly.

January 20, 2016 at 8:13 am 8 comments

Should Everybody Learn to Code? Coverage in Communications of the ACM

I spoke to the author, Esther Shein, a few months ago, but didn’t know that this was coming out until now.  She makes a good effort to address both sides of the issues, with Brian Dorn, Jeannette Wing, and me on the pro side, and Chase Felker and Jeff Atwood on the con side.  As you might expect, I disagree with Felker and Atwood.  “That assumes code is the goal.”  No–computational literacy and expression, the ability to use the computer as a tool to think with, and empowerment are the goals.  Code is the medium.

Still, I’m excited about the article.

Just as students are taught reading, writing, and the fundamentals of math and the sciences, computer science may one day become a standard part of a K–12 school curriculum. If that happens, there will be significant benefits, observers say. As the kinds of problems we will face in the future will continue to increase in complexity, the systems being built to deal with that complexity will require increasingly sophisticated computational thinking skills, such as abstraction, decomposition, and composition, says Wing.

“If I had a magic wand, we would have some programming in every science, mathematics, and arts class, maybe even in English classes, too,” says Guzdial. “I definitely do not want to see computer science on the side … I would have computer science in every high school available to students as one of their required science or mathematics classes.”

via Should Everybody Learn to Code? | February 2014 | Communications of the ACM.

February 5, 2014 at 1:28 am 16 comments

The new Wolfram Language: Now available on Raspberry Pi

The new Wolfram Language sounds pretty interesting. I was struck by the announcement that it’s going to run on the $25 Raspberry Pi (thanks to Guy Haas for that). And I liked Wolfram’s cute blog post where he makes his holiday cards with his new language (see below), which features the ability to have pictures as data elements. I haven’t learned much about the language yet — it looks mostly like the existing Mathematica language. I’m curious about what they put in to meet the design goal of having it work as an end-user programming language.

Here are the elements of the actual card we’re trying to assemble:

Now we create a version of the card with the right amount of “internal padding” to have space to insert the particular message:

via “Happy Holidays”, the Wolfram Language Way—Stephen Wolfram Blog.

January 23, 2014 at 1:25 am 1 comment

“Six Learning Barriers in End-User Programming Systems” wins most influential paper award

Congratulations! Well-deserved!  Here’s a link to the original paper.

Brad A. Myers, professor in the Human-Computer Interaction Institute, will be honored for the second year in a row as the author of a Most Influential Paper at the IEEE Symposium on Visual Languages and Human-Centric Computing, (VL/HCC). He is the first person to win the award twice since it was established in 2008.

Myers and his co-authors — former students Andrew Ko, the first author, now an assistant professor at the University of Washington, and Htet Htet Aung, now a principal user experience designer at Harris Healthcare Solutions in the Washington, D.C., area — will receive the Most Influential Paper award at VL/HCC 2013, Sept. 15-19 in San Jose, Calif. The symposium is the premier international forum for research on how computation can be made easier to express, manipulate, and understand.

Their 2004 paper, “Six Learning Barriers in End-User Programming Systems,” focused on barriers to learning programming skills beyond the programming languages themselves. Their study of beginning programmers identified six types of barriers: design, selection, coordination, use, understanding, and information. This deeper understanding of learning challenges, in turn, supported a more learner-centric view of the design of the entire programming system.

via SCHOOL OF COMPUTER SCIENCE, Carnegie Mellon.

October 10, 2013 at 1:59 am Leave a comment

Is Coding the New Second Language? in Smithsonian Magazine

Nice piece in Smithsonian Magazine about the efforts to move computing into primary and secondary schools.  And hey! That’s me they quoted!  (It’s not exactly what I said, but I’ll take it.)

Schools that offer computer science often restrict enrollment to students with a penchant for math and center the coursework around an exacting computer language called Java. And students frequently follow the Advanced Placement Computer Science curriculum developed by the College Board—a useful course but not for everyone. “What the computer science community has been slow to grasp is that there are a lot of different people who are going to need to learn computer science, and they are going to learn it in a lot of different ways,” says Mark Guzdial, a professor of interactive computing at the Georgia Institute of Technology and author of the well-respected Computer Education blog, “and there are a lot of different ways people are going to use it, too. ”

via Is Coding the New Second Language? | Ideas & Innovations | Smithsonian Magazine.

June 3, 2013 at 1:19 am Leave a comment

LiveCode Community Edition is released: HyperCard is free again and runs on anything!

I’m excited about this and find myself thinking, “So what should I do with this first?”  LiveCode isn’t as HyperCard-like as it could be (e.g., you edit in one place, then compile into an application), and it has all of HyperCard’s limitations (e.g., object-based not object-oriented, lines are syntax).  But it’s free, including all engines.  I can program iOS and Android from the same HyperCard stack!  I can build new kinds of programming languages and environments on top of LiveCode (but who in the world would want to do something like that?!?) that could compile into apps and applications!  It’s a compellingly different model for introductory computing, one that sits between visual block programming and professional textual programming. Wow…

LiveCode Community is an Open Source application. This means that you can look at and edit all of the code used to run it, including the engine code. Of course, you do not have to do this, if you just want to write your app in LiveCode there is no need for you to get involved with the engine at all. You write your app using LiveCode, the English-like scripting language, and our drag and drop interface. Fast, easy, productive and powerful.

via Community Edition Overview | RunRev.

April 26, 2013 at 1:28 am 9 comments

Taming the Monolith: Refactoring for an open source HyperCard

LiveCode had an earlier blog piece on how they want to implement “Open Language” so that the HyperTalk syntax could be extended.  This piece (linked below) goes into more detail and is an interesting history of how LiveCode evolved from HyperCard, and how they plan to refactor it so that it’s extensible by an open source community.

LiveCode is a large, mature software product which has been around in some form for over 20 years. In this highly technical article, Mark Waddingham, RunRev CTO, takes us under the hood to look at our plan to modularize the code, making it easy for a community to contribute to the project. The project described in this post will make the platform an order of magnitude more flexible, extensible and faster to develop by both our team and the community.

Like many such projects which are developed by a small team (a single person to begin with – Dr Scott Raney – who had a vision for a HyperCard environment running on UNIX systems and thus started MetaCard from which LiveCode derives), LiveCode has grown organically over two decades as it adapts to ever expanding needs.

With the focus on maintenance, porting to new platforms and adding features after all this time evolving we now have what you’d describe as a monolithic system – where all aspects are interwoven to some degree rather than being architecturally separate components.

via Taming the Monolith.

February 19, 2013 at 5:53 am Leave a comment

Michael Littman’s new blog: End-user programming for household devices

I’m excited about the direction that Michael Littman is taking with his new blog.  It’s a different argument for “Computing for Everyone.”  He’s not making a literacy argument, or a jobs argument.  He’s simply saying that our world is filled with computers, and it should be easy to talk to those computers — for everybody.  Nobody should be prevented from talking to their own devices.

The aspiration of the “Scratchable Devices” team is to help move us to a future in which end-user programming is commonplace.  The short version of the pitch goes like this.  We are all surrounded by computers—more and more of the devices we interact with on a daily basis are general purpose CPUs in disguise.  The marvelous thing about these machines is that they can carry out activities on our behalf: activities that we are too inaccurate or slow or fragile or inconsistent or frankly important to do for ourselves.  Unfortunately, most of us don’t know how to speak to these machines.  And, even those of us who do are usually barred from doing so by device interfaces that are intended to be friendly but in fact tie our hands.

We seem to be on the verge of an explosion of new opportunities.  There are new software systems being created, more ways to teach people about programming, and many many more new devices that we wish we could talk to in a systematic way.  The purpose of this blog is to raise awareness of developments, both new and old, that bear on the question of end-user programming.

via Scratchable Devices Blog | End-user programming for household devices.

February 11, 2013 at 1:09 am 1 comment

Open Source Edition of LiveCode (Modern HyperCard)

HyperCard is likely still the world’s most successful end-user programming environment.  Having an open source version that runs on all modern OS and mobile platforms would be fabulous.  I’m backing.

LiveCode lets you create an app for your smartphone, tablet, desktop computer or server, whether you are a programmer or not. We are excited to bring you this Kickstarter project to create a brand new edition of our award-winning software creation platform.

LiveCode has been available as a proprietary platform for over a decade. Now with your support we can make it open and available to everyone. With your help, we will re-engineer the platform to make it suitable for open source development with a wide variety of contributors.

Support our campaign and help to change coding forever.

via Open Source Edition of LiveCode by RunRev Ltd — Kickstarter.

February 5, 2013 at 1:27 am Leave a comment

25 years of HyperCard—the missing link to the Web | Ars Technica

An interesting argument: That Web browsers were designed based on HyperCard, and that HyperCard’s major flaw was a lack of hypertext links across computers.

How did creator Bill Atkinson define HyperCard? “Simply put, HyperCard is a software erector set that lets non-programmers put together interactive information,” he told the Computer Chronicles in 1987.

When Tim Berners-Lee’s innovation finally became popular in the mid-1990s, HyperCard had already prepared a generation of developers who knew what Netscape was for. That’s why the most apt historical analogy for HyperCard is best adapted not from some failed and forgotten innovation, but from a famous observation about Elvis Presley. Before anyone on the World Wide Web did anything, HyperCard did everything.

via 25 years of HyperCard—the missing link to the Web | Ars Technica.

June 20, 2012 at 8:07 am 4 comments

Excel programming for non-programmers: How mental models are developed

This new system for end-user programming from MIT raises a question for me about users’ mental models, which I think is key for computing education (e.g., for figuring out how to do inquiry learning in computer science).

Imagine that you use the system described below:  You give the system some examples of text you want transformed, before-and-after.  You run the system on some new inputs.  It gets it wrong. What happens then? I do believe the authors that they could train the system in three examples, as described below.  How hard is it for non-programmers to figure out the right three examples?  More interesting: When the system gets it wrong, what do the non-programmers think that the computer is doing, and what examples do they add to clear up the bug?

Technically, the chief challenge in designing the system was handling the explosion of possible interpretations for any group of examples. Suppose that you had a list of times in military format that you wanted to convert to conventional hour-and-minute format. Your first example might be converting “1515” to “3:15.” But which 15 in the first string corresponds to the 15 in the second? It’s even possible that the string “3:15” takes its 1 from the first 1 in “1515” and its 5 from the second 5. Similarly, the first 15 may correspond to the 3, but it’s also possible that all the new strings are supposed to begin with 3’s.

“Typically, we have millions of expressions that actually conform to a single example,” Singh says. “Then we have multiple examples, and I’m going to intersect them to find common expressions that work for all of them.” The trick, Singh explains, was to find a way to represent features shared by many expressions only once each. In experiments, Singh and Gulwani found that they never needed more than three examples in order to train their system.

via Excel programming for nonprogrammers – MIT News Office.
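To see why “millions of expressions” can conform to a single example, here is a toy Python illustration of my own (a sketch, not Singh and Gulwani’s actual algorithm or program language). A “program” in this toy language is a sequence of pieces, each either a literal constant or a copy of the input characters at a fixed position. Even this tiny language yields a surprisingly large candidate set from one before-and-after pair, and intersecting with a second example prunes it sharply. The article’s “1515” to “3:15” case needs arithmetic on the hour, which this toy language cannot express, so I use morning times here.

```python
# A toy illustration of programming by example (my sketch, not the actual
# Excel/FlashFill algorithm). A "program" is a sequence of pieces, each either
# a literal constant or a copy of the input characters at a fixed position.
from itertools import product

def candidate_programs(inp, out):
    """Every program in the toy language that maps inp to out."""
    def ops_for(segment):
        # A segment can be emitted as a constant, or copied from any position
        # in the input where it occurs.
        ops = {("const", segment)}
        start = inp.find(segment)
        while start != -1:
            ops.add(("copy", start, len(segment)))
            start = inp.find(segment, start + 1)
        return ops

    programs = set()
    n = len(out)
    # Consider every way of cutting the output into contiguous segments.
    for cuts in product([False, True], repeat=n - 1):
        bounds = [0] + [i + 1 for i, cut in enumerate(cuts) if cut] + [n]
        segments = [out[a:b] for a, b in zip(bounds, bounds[1:])]
        for choice in product(*(ops_for(s) for s in segments)):
            programs.add(choice)
    return programs

# Two examples of reformatting a morning time in military notation.
ex1 = candidate_programs("0915", "9:15")
ex2 = candidate_programs("0830", "8:30")
consistent = ex1 & ex2   # intersect the candidate sets, as in the quote above
print(len(ex1), "toy programs fit the first example alone")
print(len(consistent), "remain after intersecting with the second example")
for program in consistent:
    print(program)
```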

May 16, 2012 at 8:09 am 2 comments

Study Opens Window Into How Students Hunt for Educational Content Online: But what are they finding?

This reminds me of Brian Dorn’s work, and points out a weakness of this study.  Brian went out to check if the knowledge that the students needed was actually in the places where they looked.  Morgan’s study is telling us where they’re looking.  But it’s not telling us what the students are learning.

It’s nothing new to hear that students supplement their studies with other universities’ online lecture videos. But Ms. Morgan’s research—backed by the National Science Foundation, based on 14 focus-group interviews at a range of colleges, and buttressed by a large online survey going on now—paints a broader picture of how they’re finding content, where they’re getting it, and why they’re using it.

Ms. Morgan borrows the phrase “free-range learning” to describe students’ behavior, and she finds that they generally shop around for content in places educators would endorse. Students seem most favorably inclined to materials from other universities. They mention lecture videos from Stanford and the Massachusetts Institute of Technology far more than the widely publicized Khan Academy, she says. If they’re on a pre-med or health-science track, they prefer recognized “brands” like the Mayo Clinic. Students often seek this outside content due to dissatisfaction with their own professors, Ms. Morgan says.

via ‘Free-Range Learners’: Study Opens Window Into How Students Hunt for Educational Content Online – Wired Campus – The Chronicle of Higher Education.

May 9, 2012 at 8:49 am 1 comment
