Posts tagged ‘end-user programming’
I spoke to the author, Esther Shein, a few months ago, but didn’t know that this was coming out until now. She makes a good effort to address both sides of the issue, with Brian Dorn, Jeannette Wing, and me on the pro side, and Chase Felker and Jeff Atwood on the con side. As you might expect, I disagree with Felker and Atwood. “That assumes code is the goal.” No: computational literacy and expression, the ability to use the computer as a tool to think with, and empowerment are the goals. Code is the medium.
Still, I’m excited about the article.
Just as students are taught reading, writing, and the fundamentals of math and the sciences, computer science may one day become a standard part of a K–12 school curriculum. If that happens, there will be significant benefits, observers say. As the kinds of problems we will face in the future will continue to increase in complexity, the systems being built to deal with that complexity will require increasingly sophisticated computational thinking skills, such as abstraction, decomposition, and composition, says Wing.
“If I had a magic wand, we would have some programming in every science, mathematics, and arts class, maybe even in English classes, too,” says Guzdial. “I definitely do not want to see computer science on the side … I would have computer science in every high school available to students as one of their required science or mathematics classes.”
The new Wolfram Language sounds pretty interesting. I was struck by the announcement that it’s going to run on the $25 Raspberry Pi (thanks to Guy Haas for that). And I liked Wolfram’s cute blog post where he makes his holiday cards with his new language (see below), which features the ability to have pictures as data elements. I haven’t learned much about the language yet — it looks mostly like the existing Mathematica language. I’m curious about what they put in to meet the design goal of having it work as an end-user programming language.
Here are the elements of the actual card we’re trying to assemble:
Now we create a version of the card with the right amount of “internal padding” to have space to insert the particular message:
Congratulations! Well-deserved! Here’s a link to the original paper.
Brad A. Myers, professor in the Human-Computer Interaction Institute, will be honored for the second year in a row as the author of a Most Influential Paper at the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). He is the first person to win the award twice since it was established in 2008.
Myers and his co-authors — former students Andrew Ko, the paper’s first author and now an assistant professor at the University of Washington, and Htet Htet Aung, now a principal user experience designer at Harris Healthcare Solutions in the Washington, D.C., area — will receive the Most Influential Paper award at VL/HCC 2013, Sept. 15-19 in San Jose, Calif. The symposium is the premier international forum for research on how computation can be made easier to express, manipulate, and understand.
Their 2004 paper, “Six Learning Barriers in End-User Programming Systems,” focused on barriers to learning programming skills beyond the programming languages themselves. Their study of beginning programmers identified six types of barriers: design, selection, coordination, use, understanding, and information. This deeper understanding of learning challenges, in turn, supported a more learner-centric view of the design of the entire programming system.
Nice piece in Smithsonian Magazine about the efforts to move computing into primary and secondary schools. And hey! That’s me they quoted! (It’s not exactly what I said, but I’ll take it.)
Schools that offer computer science often restrict enrollment to students with a penchant for math and center the coursework around an exacting computer language called Java. And students frequently follow the Advanced Placement Computer Science curriculum developed by the College Board—a useful course but not for everyone. “What the computer science community has been slow to grasp is that there are a lot of different people who are going to need to learn computer science, and they are going to learn it in a lot of different ways,” says Mark Guzdial, a professor of interactive computing at the Georgia Institute of Technology and author of the well-respected Computer Education blog, “and there are a lot of different ways people are going to use it, too.”
I’m excited about this and find myself thinking, “So what should I do with this first?” LiveCode isn’t as HyperCard-like as it could be (e.g., you edit in one place, then compile into an application), and it has all of HyperCard’s limitations (e.g., object-based not object-oriented, lines are syntax). But it’s free, including all engines. I can program iOS and Android from the same HyperCard stack! I can build new kinds of programming languages and environments on top of LiveCode (but who in the world would want to do something like that?!?) that could compile into apps and applications! It’s a compellingly different model for introductory computing that sits between visual block programming and professional textual programming. Wow…
LiveCode Community is an Open Source application. This means that you can look at and edit all of the code used to run it, including the engine code. Of course, you do not have to do this; if you just want to write your app in LiveCode, there is no need for you to get involved with the engine at all. You write your app using LiveCode, the English-like scripting language, and our drag and drop interface. Fast, easy, productive and powerful.
LiveCode had an earlier blog piece on how they want to implement “Open Language” so that the HyperTalk syntax could be extended. This piece (linked below) goes into more detail and is an interesting history of how LiveCode evolved from HyperCard, and how they plan to refactor it so that it’s extensible by an open source community.
LiveCode is a large, mature software product which has been around in some form for over 20 years. In this highly technical article, Mark Waddingham, RunRev CTO, takes us under the hood to look at our plan to modularize the code, making it easy for a community to contribute to the project. The project described in this post will make the platform an order of magnitude more flexible, extensible and faster to develop by both our team and the community.
Like many such projects which are developed by a small team (a single person to begin with – Dr Scott Raney – who had a vision for a HyperCard environment running on UNIX systems and thus started MetaCard from which LiveCode derives), LiveCode has grown organically over two decades as it adapts to ever expanding needs.
With the focus on maintenance, porting to new platforms and adding features after all this time evolving we now have what you’d describe as a monolithic system – where all aspects are interwoven to some degree rather than being architecturally separate components.
via Taming the Monolith.
I’m excited about the direction that Michael Littman is taking with his new blog. It’s a different argument for “Computing for Everyone.” He’s not making a literacy argument, or a jobs argument. He’s simply saying that our world is filled with computers, and it should be easy to talk to those computers — for everybody. Nobody should be prevented from talking to their own devices.
The aspiration of the “Scratchable Devices” team is to help move us to a future in which end-user programming is commonplace. The short version of the pitch goes like this. We are all surrounded by computers—more and more of the devices we interact with on a daily basis are general purpose CPUs in disguise. The marvelous thing about these machines is that they can carry out activities on our behalf: activities that we are too inaccurate or slow or fragile or inconsistent or frankly important to do for ourselves. Unfortunately, most of us don’t know how to speak to these machines. And even those of us who do are usually barred from doing so by device interfaces that are intended to be friendly but in fact tie our hands.
We seem to be on the verge of an explosion of new opportunities. There are new software systems being created, more ways to teach people about programming, and many many more new devices that we wish we could talk to in a systematic way. The purpose of this blog is to raise awareness of developments, both new and old, that bear on the question of end-user programming.
HyperCard is likely still the world’s most successful end-user programming environment. Having an open source version that runs on all modern OS and mobile platforms would be fabulous. I’m backing.
LiveCode lets you create an app for your smartphone, tablet, desktop computer or server, whether you are a programmer or not. We are excited to bring you this Kickstarter project to create a brand new edition of our award-winning software creation platform.
LiveCode has been available as a proprietary platform for over a decade. Now with your support we can make it open and available to everyone. With your help, we will re-engineer the platform to make it suitable for open source development with a wide variety of contributors.
Support our campaign and help to change coding forever.
An interesting argument: That Web browsers were designed based on HyperCard, and that HyperCard’s major flaw was a lack of hypertext links across computers.
How did creator Bill Atkinson define HyperCard? “Simply put, HyperCard is a software erector set that lets non-programmers put together interactive information,” he told the Computer Chronicles in 1987.
When Tim Berners-Lee’s innovation finally became popular in the mid-1990s, HyperCard had already prepared a generation of developers who knew what Netscape was for. That’s why the most apt historical analogy for HyperCard is best adapted not from some failed and forgotten innovation, but from a famous observation about Elvis Presley. Before anyone on the World Wide Web did anything, HyperCard did everything.
This new system for end-user programming from MIT raises a question for me about users’ mental models, which I think is key for computing education (e.g., for figuring out how to do inquiry learning in computer science).
Imagine that you use the system described below: You give the system some before-and-after examples of text you want transformed. You run the system on some new inputs. It gets it wrong. What happens then? I do believe the authors’ claim that they could train the system with three examples, as described below. How hard is it for non-programmers to figure out the right three examples? More interesting: When the system gets it wrong, what do the non-programmers think the computer is doing, and what examples do they add to clear up the bug?
Technically, the chief challenge in designing the system was handling the explosion of possible interpretations for any group of examples. Suppose that you had a list of times in military format that you wanted to convert to conventional hour-and-minute format. Your first example might be converting “1515” to “3:15.” But which 15 in the first string corresponds to the 15 in the second? It’s even possible that the string “3:15” takes its 1 from the first 1 in “1515” and its 5 from the second 5. Similarly, the first 15 may correspond to the 3, but it’s also possible that all the new strings are supposed to begin with 3’s.
“Typically, we have millions of expressions that actually conform to a single example,” Singh says. “Then we have multiple examples, and I’m going to intersect them to find common expressions that work for all of them.” The trick, Singh explains, was to find a way to represent features shared by many expressions only once each. In experiments, Singh and Gulwani found that they never needed more than three examples in order to train their system.
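The intersection idea can be sketched in a toy way. This is my own illustration, not the authors’ system (which uses a compact representation so that millions of candidate expressions never have to be enumerated one by one), and it swaps in a simpler padding example, since converting “1515” to “3:15” also requires arithmetic that pure character-copying can’t express. Here each candidate “program” maps every output character either to a constant or to a fixed input position, and intersecting the candidate sets from two examples prunes the ambiguity:

```python
from itertools import product

def interpretations(inp, out):
    """Enumerate toy 'programs': each output character comes either from
    a constant or from a fixed position in the input string."""
    choices = []
    for ch in out:
        opts = [("const", ch)]
        opts += [("pos", i) for i, c in enumerate(inp) if c == ch]
        choices.append(opts)
    return set(product(*choices))  # one choice per output character

def run(prog, inp):
    """Apply a candidate program to a fresh input."""
    return "".join(arg if op == "const" else inp[arg] for op, arg in prog)

ex1 = interpretations("1230", "12:30")   # 16 candidate programs
ex2 = interpretations("0915", "09:15")   # 16 candidate programs
consistent = ex1 & ex2                   # programs that explain both examples
```

A single example leaves sixteen candidate programs (every digit could be a copied character or a hard-coded constant); intersecting with a second example cuts that to one, which then generalizes, e.g. `run(prog, "2345")` yields `"23:45"`.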
Study Opens Window Into How Students Hunt for Educational Content Online: But what are they finding?
This reminds me of Brian Dorn’s work, and points out a weakness of this study. Brian went out to check if the knowledge that the students needed was actually in the places where they looked. Morgan’s study is telling us where they’re looking. But it’s not telling us what the students are learning.
It’s nothing new to hear that students supplement their studies with other universities’ online lecture videos. But Ms. Morgan’s research—backed by the National Science Foundation, based on 14 focus-group interviews at a range of colleges, and buttressed by a large online survey going on now—paints a broader picture of how they’re finding content, where they’re getting it, and why they’re using it.
Ms. Morgan borrows the phrase “free-range learning” to describe students’ behavior, and she finds that they generally shop around for content in places educators would endorse. Students seem most favorably inclined to materials from other universities. They mention lecture videos from Stanford and the Massachusetts Institute of Technology far more than the widely publicized Khan Academy, she says. If they’re on a pre-med or health-science track, they prefer recognized “brands” like the Mayo Clinic. Students often seek this outside content due to dissatisfaction with their own professors, Ms. Morgan says.
I’ve talked about RunRev/LiveCode here before. It’s 90% HyperCard, updated to be cross-platform and with enhanced abilities. I mostly agree with the comments below (but not with the critique of Scratch or Logo): It really does seem like an excellent tool for the needs in today’s schools. It’s real programming, you can build things quickly, you can build for desktop or Web or mobile devices, it’s cross platform, and it’s designed to be easily learned. The language is English-like and builds on what we know about how people naively think about programming.
I proposed using this next Fall in a course I’m teaching for graduate students to introduce them to programming. I got shot down. The faculty argued that they didn’t want a “boutique” language. They wanted a “real” language. I do see the point. Audrey Watters and I talked about this a few weeks ago. Students don’t just want knowledge — they want to join a community of practice. The students see or imagine people who do what they want to do, and the students want to learn what they know. Students want to grow to be at the center of some community of practice. Where’s the community of practice for HyperCard-like programming today? Do you see lots of experts who are doing the cool things that you want to do — with HyperCard? The power and expressivity of a language is not enough. Languages today have cultures and communities. To learn a language is to express interest in joining or defining a culture or community. Alan Perlis said, “A language that doesn’t affect the way you think about programming, is not worth knowing.” Today, a language that doesn’t reflect who you want to be, is not worth knowing.
Pascal is still available for modern computers. So is Logo. We know how to teach both of them to novices far better than we know how to teach Java or C++ to novices. These languages were not abandoned on pedagogical or cognitive grounds — they work for teaching computing. So why don’t we use them? It’s because of the perceptions, the expectations, and the culture/community that grew up around them. I’ll bet that some teacher who doesn’t know anything about Logo could discover it, not know about its past, and use it really well to teach K-12 kids about computer science.
Let’s go one step beyond the discussion that Audrey and I had, and I’ll use something that I always warn my students about: introspection.
I don’t know why I’m not particularly drawn to any language community these days. Maybe I am choosing languages based on community, but I’m not aware of it. Maybe I am at the center of my community of practice, which frees me to make choices and go in new directions. I want to be a computing education researcher who can express his ideas in code and can build his own tools. There aren’t many of us, and there isn’t a language central to that community. Maybe, in this community of practice, the tool is incidental and not integral to the community.
I do think (perhaps naively) that it’s important for us in computing to be willing to invent a community of practice, not just join an existing one. If you want to change the way people think about computing, you don’t just join an existing community. The existing communities were created within and support the existing values. We should also be about inventing communities that support different values.
I spoke to dozens of teachers who all told me a similar story. There is a sea change in the air. After thirty years of teaching PowerPoint and Excel spreadsheets, schools are finally returning to the idea that we really need to teach the next generation how to program – but where are the tools to do it? Suddenly ICT teachers up and down the country are being told, “from next year you must teach programming principles” but they have been given no training, tools or guidance on how to achieve this. With little time to learn and a very limited range of choices, these teachers were delighted to discover LiveCode. It seems to be exactly what they are looking for. Easy to learn for both teachers and students, real programming without the limitations of “snap together” tools like Scratch or Logo, no arcane or hard to understand syntax or symbols, and best of all, it lets the students deploy the end results on their iPad, iPhone or Android device.
GasStationWithoutPumps did a blog piece on the newspaper articles that I mentioned earlier this week, and he pointed out something important that I missed. The Guardian’s John Naughton provided a really nice definition of computational thinking:
… computer science involves a new way of thinking about problem-solving: it’s called computational thinking, and it’s about understanding the difference between human and artificial intelligence, as well as about thinking recursively, being alert to the need for prevention, detection and protection against risks, using abstraction and decomposition when tackling large tasks, and deploying heuristic reasoning, iteration and search to discover solutions to complex problems.
I like this one. It’s more succinct than others that I’ve seen, and still does a good job of hitting the key points.
Naughton’s definition includes issues of cyber-security and risk. I don’t see that often in “Computational Thinking” definitions. I was reminded of a list that Greg Wilson generated recently in his Software Carpentry blog about what researchers need to know about programming the Web.
Here’s what (I think) I’ve figured out so far:
- People want to solve real problems with real tools.
- All we can teach people about server-side programming in a few hours is how to create security holes, even if we use modern frameworks.
- People must be able to debug what they build. If they can’t, they won’t be able to apply their knowledge to similar problems on their own.
Greg’s list surprised me, because it was the first time that I’d thought of risk and cyber-security as critical to end-user programmers. Yes, cyber-security plays a prominent role in the CS:Principles framework (as part of Big Idea VI, on the Internet), but I’d thought of that (cynically, I admit) as being a nod to the software development firms who want everyone to be concerned about safe programming practices. Is it really key to understanding the role of computing in our everyday lives? Maybe — the risks and needs for security may be the necessary consequent of teaching end-users about the power and beauty of computing.
Greg’s last point is one that I’ve been thinking a lot about lately. I’ve agreed to serve on the review committee for Juha Sorva’s thesis, which focuses on his excellent program visualization tool, UUhistle. I’m enjoying Juha’s document very much, and I’m not even up to the technology part yet. He has terrific coverage of the existing literature in computing education research, cognitive science, and learning sciences, and the connections he draws between disparate areas are fascinating. One of the arguments that he’s making is that the ability to understand computing in a transferable way requires the development of a mental model — an executable understanding of how the pieces of a program fit together in order to achieve some function. For example, you can’t debug without a mental model of how the program works (to connect to Greg’s list). Juha’s dissertation is making the argument (implicitly, so far in my reading) that you can’t develop a mental model of computing without learning to program. You have to have a notation, some representation of the context-free executable pieces of the program, in order to recognize that these are decontextualized pieces that work in the same way in any program. A WHILE loop has the same structure and behavior, regardless of the context, regardless of the function that any particular WHILE loop plays in any particular program. Without the notation, you don’t have the names or representations for the pieces that are necessary for transfer.
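That claim about context-free pieces is easy to make concrete. In this sketch (my own example, not one from the thesis), two functions solve unrelated problems, yet the WHILE pattern (test a condition, update state, repeat) is identical in structure and behavior in both:

```python
def digit_sum(n):
    """Sum the decimal digits of a non-negative integer."""
    total = 0
    while n > 0:           # test a condition...
        total += n % 10    # ...update some state...
        n //= 10           # ...and repeat
    return total

def countdown_steps(start, step):
    """Count how many subtractions of `step` drive `start` to zero or below."""
    steps = 0
    while start > 0:       # the same loop structure in a different context
        start -= step
        steps += 1
    return steps
```

Here `digit_sum(1515)` returns 12 and `countdown_steps(10, 3)` returns 4. Having a name and a notation for the shared pattern is what lets a learner recognize it as the same piece in both programs, and carry it to the next one.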
Juha is making an argument like Alan Perlis’s argument in 1961: Perlis wasn’t arguing that everyone needed to understand programming for its own sake. Rather, he felt that the systems thinking was the critical need, and that the best way to get to systems thinking was through programming. The cognitive science literature that Juha is drawing on is saying something stronger: That we can’t get to systems thinking (or computational thinking) without programming. I’ll say more about Juha’s thesis as I finish reviewing it.
It’s interesting that there are some similar threads about risk and cyber-security appearing in different definitions of computational thinking (Naughton and Wilson discussed here), and those thinking about how to teach computational thinking (Sorva and Perlis here) are suggesting that we need programming to get there.
Great to see this coverage! Computer science is increasingly becoming a requirement at universities, says a piece in US News. This is likely the most powerful way to get CS into high schools — require it in colleges and universities, to send the message that it’s valued and ought to be part of the general education in high school. My colleague, Charles Isbell (Sage of Threads), gets quoted a good bit in the article.
Every student at Montclair State University in New Jersey must complete a computer science course in order to graduate. For most students, that course is Introduction to Computer Applications: Being Fluent with Information Technology. (Music majors take Music and Computer Technology I.)
The course is designed to teach students majoring in subjects such as fashion, dance, or art history about network security, artificial intelligence, databases, and e-commerce, says Michael Oudshoorn, chairman of the computer science department at Montclair.
“It’s not aimed at making them experts; it’s aimed at making them aware,” Oudshoorn says. “They do live in a digital age … They have an obligation to know something about the technology.”
One of the highlights of SIGCSE 2012 was meeting Mike Richards and learning about the new introductory computing course at the Open University, My Digital Life. It’s an interesting contrast to the Stanford on-line CS courses.
I met Mike at a BOF (Birds of a Feather) meeting on digital books for CS. There were three major themes that I heard at the BOF: What standards are there going to be for electronic books, what features should there be, and what should authoring look like. The answer to the first question was clearly, “Who knows?” It’s too early. There were book authors in the audience who were pretty upset with the answer, insisting, “I need to know what to write in now, and I don’t want to have to change later!” That’s not going to happen yet. The answer to the second question had a disappointing amount of posturing — lots of “Well, I think…” with no data, no evidence, no arguments. Just claims of having the superior vision. The third one was pretty intriguing, with some really important ideas. Cay Horstmann had an interesting take on the value of XML, in any form. Mike described what their authoring process was like.
Mike said that it cost $3M USD to build “My Digital Life.” It consists of a set of books, some really great videos (including one of Alan Kay that he showed in his session), and a terrific computing platform and a modified form of Scratch. The interesting thing was that the majority of that $3M budget went to “proofreaders, testers, and editors.” They hire people to try the lesson, and then they watch where it doesn’t work. Iterative development with formative evaluation — what a great idea!
On Saturday morning, Mike presented his paper with Marian Petre on the SenseBoard that they use in “My Digital Life.” The course is currently 2/3 through its first offering. They have 4,500 students with no pre-requisites. It’s 1/3 female. 2/3 have no previous background knowledge. They have a wait list of 900 people for the next offering. Historically, the Open U gets very high student satisfaction ratings, and over 50% completion rates.
They decided to emphasize that their students don’t want to just observe computation, they want to make things. They decided to focus on ubiquitous computing, because that’s the future of computing from their students’ perspectives. They created the Senseboard, an Arduino-based device with microphone, IR sensor, push buttons, sliders, servo motor connections, stepper motor connections, and sensors for temperature, motion, and light. The board is included as part of the course (for which students pay around $800 USD), but should become available separately soon for about $80 USD. They have created a version of Scratch for programming the Senseboard. They extended Scratch with primitives for string and file handling, sensor handling, networking, and improved debugging.
I loved the way that they designed the curriculum. One of their design mantras is “No blank screens.” Each project comes with parts that already work, with descriptions and background images. The clear descriptions go step-by-step (checked by testers) and include “jump aheads” (“If you already know Booleans, skip to this page”). All projects are provided with full solutions, so nobody ever gets stuck or can’t play with the result because they couldn’t figure it out. What an interesting notion — that the point of the programming assignments is to give students something that they want. Grading is not based on the assignments. Grading is based on seven marked-up assessments with a tutor, and one big final.
The projects looked great: Burglar alarms, a tank game with remote other players, a “tea ready” alarm, a BBC news feed (pull off your favorite articles and stuff them in string boxes), a network seismograph (showing the amount of traffic in your part of the network), live opinion polling, and games that the students found pretty addictive. Mike had a great quote from a student saying that he wanted to learn to make a “ghost detector” by the end of the course.
Mike says that the Stanford and MIT open learning initiatives are forcing the Open U to make some of “My Digital Life” available on their open learning site. But the Open U is worried about that. Stanford and MIT are putting up material that faculty built for free, with no or little testing. The Open U. spent $3M building this course. How do you recoup your investment if you give it away for free? There’s a flip side to that question. The Open U spends the majority of its development cost on guaranteeing quality, in the sense that the assignments are do-able by students (see previous discussion on reasonable effort). What guarantees can you make about free courses? Does course quality matter?