Posts tagged ‘programming languages’

Moving from Scratch to text: Why We Need Sniff

I’m intrigued by this project and would really love to see some analysis.  Do students who use Scratch recognize Sniff as being a text form of Scratch?  If it doesn’t work well, is the problem in the syntax and semantics of Sniff, and could we do better?  Do students transfer their knowledge of Scratch into Sniff?

So if Scratch is so great, why do we need Sniff? The problem is that at some point you need to move beyond Scratch. It could be that you want to tackle a different kind of problem that Scratch can’t handle well. Perhaps you’ve realised that graphical programming is a nice idea, and a great way to start, but in practice it’s clumsy. Clicking and dragging blocks is a tedious and slow way to build large programs. It could be you need something that feels “more grown up” – the cat sprite/logo is cute, and even older children will find it fun for a while, but Scratch is designed to look and feel like a toy even though it’s actually very powerful. For whatever reason, at some point you start to look for something “better”.

via Sniff: Why We Need Sniff.

August 29, 2014 at 8:39 am 13 comments

Enhancing syntax error messages appears ineffectual — if you enhance the error messages poorly

The ITICSE’14 paper referenced below is getting discussed a good bit in the CS Education community.  Is it really the case that enhancing error messages doesn’t help students?

Yes, if you do an ineffective job of enhancing the error messages.  I’m disappointed that the paper doesn’t even consider the prior work on how to enhance error messages in a useful way — and more importantly, what has been established as a better process.  To start, the best paper award at SIGCSE’11 went to a paper on an empirical process for analyzing the effectiveness of error messages, with a rubric for understanding student problems with them — a paper that isn’t even referenced in the ITICSE paper, let alone its rubric applied.  That work, and Lewis Johnson’s work on Proust, points to the importance of bringing more knowledge to bear in creating useful error messages — by studying student intentionality and by figuring out what information students need to be successful.  Andy Ko got it right when he said “Programming languages are the least usable, but most powerful human-computer interfaces ever invented.”  We make them more usable by doing careful empirical work, not by just tossing a bunch of data into a machine learning clustering algorithm.

I worry that titles like “Enhancing syntax error messages appears ineffectual” can stifle useful research.  I already spoke to one researcher working on error messages who asked if new work is even useful, given this result.  The result just comes from doing a bad job of enhancing error messages. Perhaps a better title would have been “An approach to enhancing syntax error messages that isn’t effective.”

Debugging is an important skill for novice programmers to acquire. Error messages help novices to locate and correct errors, but compiler messages are frequently inadequate. We have developed a system that provides enhanced error messages, including concrete examples that illustrate the kind of error that has occurred and how that kind of error could be corrected. We evaluate the effectiveness of the enhanced error messages with a controlled empirical study and find no significant effect.

via Enhancing syntax error messages appears ineffectual.
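
To make concrete what “enhancing” a syntax error message can mean, here’s a minimal sketch (my own illustration, not the system evaluated in the paper): it maps fragments of a raw compiler message to a plain-language explanation plus a small example of a fix, which is roughly the kind of intervention the abstract describes. Whether any particular explanation or example actually helps students is exactly the empirical question the SIGCSE’11 rubric work is designed to answer.

```python
# A minimal sketch (my own illustration, not the ITICSE'14 system) of one way
# to "enhance" compiler error messages: map fragments of the raw message to a
# plain-language explanation plus a concrete example of the fix.

# Hypothetical lookup table for a couple of common javac messages.
ENHANCEMENTS = {
    "';' expected": (
        "Java statements must end with a semicolon.",
        "int x = 5   -->   int x = 5;",
    ),
    "cannot find symbol": (
        "You used a name that hasn't been declared; check spelling, case, and scope.",
        "count = 0;   -->   int count = 0;",
    ),
}

def enhance(raw_message: str) -> str:
    """Return the raw message plus an explanation and example, if we recognize it."""
    for fragment, (explanation, example) in ENHANCEMENTS.items():
        if fragment in raw_message:
            return (f"{raw_message}\n"
                    f"  What it means: {explanation}\n"
                    f"  Example fix:   {example}")
    return raw_message  # unknown errors pass through unchanged

if __name__ == "__main__":
    print(enhance("Foo.java:12: error: ';' expected"))
```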

July 29, 2014 at 8:40 am 5 comments

People problem-solve differently in foreign languages: Implications for programming languages

Since states are making computing courses count as foreign language courses (even if that’s a bad idea), it’s worthwhile to consider the value of learning a foreign language.  A recent Freakonomics podcast (linked below) considers the return on investment of learning a foreign language.  Most intriguing is that people problem-solve differently in their non-native languages.  I wonder what the implications are for programming languages.  We know that people experience negative transfer when their native language abilities conflict with their programming language problem-solving.  Are there ways we could make programming languages better for problem-solving?
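
One well-documented flavor of that negative transfer (my own illustration, not from the podcast): novices who read “while” as the English “whenever” expect the loop to stop the instant the condition becomes false, rather than only when control returns to the test.

```python
# My own illustration of negative transfer from natural language: reading
# "while" as the English "whenever" predicts the loop stops the moment the
# condition becomes false, even in the middle of the body.
x = 5
while x < 10:
    x = x + 3    # on the second pass, x becomes 11 here...
    print(x)     # ...but this line still runs; the test only happens at the top
# Prints 8, then 11. A "whenever" reading predicts 11 is never printed.
```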

Learning a language is of course not just about making money — and you’ll hear about the other benefits. Research shows that being bilingual improves executive function and memory in kids, and may stall the onset of Alzheimer’s disease.

And as we learn from Boaz Keysar, a professor of psychology at the University of Chicago, thinking in a foreign language can affect decision-making, too — for better or worse.

via Freakonomics » Is Learning a Foreign Language Really Worth It? A New Freakonomics Radio Podcast.

July 24, 2014 at 9:31 am Leave a comment

Those Who Say Code Does Not Matter are Wrong

Bertrand Meyer is making a similar point to Andy Ko’s argument about programming languages.  Programming does matter, and the language we use also matters.  Meyer goes on to suggest that those who say “code doesn’t matter” may just be rationalizing their continued use of antiquated languages.  It can’t be that the peak of human-computer programming interfaces was reached back in New Jersey in the 1970s.

Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are simply trying to justify their use of bad languages.

via Those Who Say Code Does Not Matter | blog@CACM | Communications of the ACM.

May 16, 2014 at 9:09 am 8 comments

Happy 50th Birthday to BASIC, the Programming Language That Made Computers Personal

A really fun article, with videos of lots of classic BASIC systems running.

Kemeny believed that these electronic brains would play an increasingly important role in everyday life, and that everyone at Dartmouth should be introduced to them. “Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate,” he said in a 1991 video interview. “It was as simple as that.”

via Fifty Years of BASIC, the Programming Language That Made Computers Personal | TIME.com.

May 12, 2014 at 8:58 am 2 comments

How do we make programming languages more usable and learnable?

Andy Ko recently made a fascinating claim, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented,” which he explained in a blog post.  It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”

How would we make them better?  I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”

We make programming languages today driven by theory — we aim to provide access to Turing/von Neumann machines with a notation that has various desirable features, e.g., type safety, security, provability, and so on.  Usability is one of the goals, but typically only in a theoretical sense.  Quorum is the only programming language I know of that tested usability as part of its design process.

But what if we took Andy Ko’s argument seriously?  What if we designed programming languages the way we design good user interfaces — by working with specific users on their tasks?  The value of a language would become more obvious, and it would be more easily adopted by a community.  The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.

What would our design process be?  How much more usable and learnable could our programming languages become?  How much easier would computing education be if the languages were more usable and learnable?  I’d love it if programming language designers could put me out of a job.

April 1, 2014 at 9:43 am 24 comments

Facts that conflict with identity can lead to rejection: Teaching outside the mainstream

Thought-provoking piece on NPR.  Take parents who believe that the MMR vaccine causes autism.  Show them the evidence that that’s not true.  They might tell you that they believe you — but they become even less likely to vaccinate future children.  What?!?

The explanation (quoted below) is that these parents found a sense of identity in their role as vaccine-deniers.  They rejected the evidence at a deeply personal level, even if they cognitively seemed to buy it.

I wonder if this explains a phenomenon I’ve seen several times in CS education: teaching with a non-traditional but pedagogically-useful tool leads to rejection because it’s not the authentic/accepted tool.  I saw it as an issue of students being legitimate peripheral participants in a community of practice. Identity conflict offers a different explanation for why students (especially the most experienced ones) reject Scheme in CS1, or IDEs other than Eclipse, or even why CS teachers react badly when asked not to use the UNIX command line.  It’s a rejection of their identity.

An example: I used to teach object-oriented programming and user interface software using Squeak.  I had empirical evidence that it really worked well for student learning.  But students hated it — especially the students who already knew something about OOP and UI software.  “Why aren’t we using a real language?  Real OOP practitioners use Java or C++!”  I could point to Alan Kay’s quote, “I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.”  That didn’t squelch their anger and outrage.  I’ve always attributed their reaction to the perceived inauthenticity of Squeak — it’s not what the majority of programmers use.  But I now wonder if it’s about a rejection of an identity.  Students might be thinking, “I already know more about OOP than this bozo of a teacher! This is who I am! And I know that you use Java or C++!”  Even showing them evidence that Squeak was more OOP, or that it could do anything they could do in Java or C++ (and some things that they couldn’t do in Java or C++), didn’t matter.  I was telling them facts, and they were arguing about identity.

What Nyhan seems to be finding is that when you’re confronted by information that you don’t like, at a certain level you accept that the information might be true, but it damages your sense of self-esteem. It damages something about your identity. And so what you do is you fight back against the new information. You try and marshal other kinds of information that would counter the new information coming in. In the political realm, Nyhan is exploring the possibility that if you boost people’s self-esteem before you give them this disconfirming information, it might help them take in the new information because they don’t feel as threatened as they might have been otherwise.

via When It Comes To Vaccines, Science Can Run Into A Brick Wall : NPR.

March 31, 2014 at 1:13 am 32 comments
