Posts tagged ‘HCI’

Why I say task-specific programming languages instead of domain-specific programming languages

I’ve written several posts about task-specific programming languages over the last few weeks (here’s the first one), culminating in my new understanding of computational thinking (see that blog post).

The programming languages community talks about “domain-specific programming languages.”  That makes a lot of sense, as a contrast with “general purpose programming languages.” Why am I using a different term?

It’s inspired by my interactions with social studies teachers. They talk about “the language used in math class” and ask “what language should we use in history?” History and mathematics are domains. A programming language for all of history is too big; it would be difficult to design it to be easily learned and used. There are lots of tasks in history that are amenable to using computing to improve learning, including data visualization and testing the rigor of arguments.

“Task-specific programming language” makes clear that we’re talking about a task, not a whole domain. I don’t want teachers rejecting a language because “I can’t use it for everything.”  I want teachers to accept a language because it helps their students learn something. I want it to be so easy to learn and use that (a) it’s not adding much additional load and (b) it’s obvious that it would help.
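
To make the idea concrete, here is a minimal sketch of what a task-specific language for one history task, building annotated timelines, might look like when embedded in Python. This is entirely hypothetical (not an existing tool), and the Timeline vocabulary is invented for illustration.

```python
# A hypothetical sketch of a task-specific language for ONE history
# task: building annotated timelines. Not an existing tool; the
# Timeline vocabulary is invented for illustration.

class Timeline:
    """Students learn just two verbs, 'event' and 'show'; there are
    no loops, conditionals, or type declarations to master first."""

    def __init__(self, title):
        self.title = title
        self.events = []

    def event(self, year, label):
        self.events.append((year, label))
        return self  # returning self lets students chain calls

    def show(self):
        print(self.title)
        for year, label in sorted(self.events):
            print(f"  {year}: {label}")

# What a student in a history class might write:
Timeline("Road to World War I") \
    .event(1882, "Triple Alliance formed") \
    .event(1907, "Triple Entente formed") \
    .event(1914, "Archduke Franz Ferdinand assassinated") \
    .show()
```

The point of the sketch is the size of the vocabulary: a student can complete the whole task after learning two operations, which is what makes such a language cheap to learn and obviously helpful for that one task.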

I like “task-specific programming language,” too, because the name suggests how we might design them. Human-computer interface researchers and designers have been developing methods to analyze tasks and design interfaces for those tasks for decades. The purpose of that analysis is to create interfaces for users to achieve those tasks easily and with minimal up-front learning.  For 25 years (Soloway, Guzdial, and Hay, 1994), we have been trying to extend those techniques to design for learners, so that users achieve the tasks and learn in the process.

Task-specific programming languages are domain-specific programming languages (from the PL community) that are designed using learner-centered design methods (from HCI).  It’s about integration between two communities to create something that enables integration of computing across the curriculum.

May 27, 2019 at 7:00 am 6 comments

Survey for Human-Computer Interaction (HCI) Instructors

From Lauren Wilcox:

Betsy DiSalvo, Dick Henneman and I have designed a survey about a topic that is near and dear to us as HCI faculty:  topics, learning goals, and learning activities in HCI classrooms!

We hope to do an annual “pulse” of HCI instructors across the globe.

The survey takes about 15 minutes. We plan to share the results with the broader academic HCI community.

We are hoping that you can take the survey, and also share it with your colleagues who teach HCI-related classes.

https://gatech.qualtrics.com/SE/?SID=SV_2aFcLSR3zDcmebz

September 5, 2016 at 7:55 am 4 comments

Earn your Human-Centered Computing PhD at Georgia Tech: Applications due Dec 15

Georgia Tech founded the very first HCC degree program in 2004, focusing on the intersection of computing and people – where computing includes not just computers but also different kinds of computational artifacts, from games to mobile applications, from robots to bionics; and people includes not only individuals but also teams, organizations, societies, and cultures.

Join our 29 faculty in working across the HCC spectrum: learning sciences & technologies, computing education, artificial intelligence, cognitive science, collaboration, design, human-computer interaction, health & wellness, informatics, information visualization & visual analytics, international development, social computing, and ubiquitous & wearable computing.

Join our 39 students, all doing research in one of three broad areas: Cognition, Learning & Creativity; Human-Computer Interaction; and Social Computing. We value diversity in all its dimensions; our students have a broad range of backgrounds, coming from across the world and with a variety of different undergraduate degrees.

Join a vibrant community of faculty and graduate students that encompasses not just the HCC PhD but also the PhDs in Digital Media, Computer Science with specialization in HCI, Psychology with specializations in Engineering Psychology and Cognitive Aging, Music Technology, and Industrial Design, and the interdisciplinary GVU Center with its multitude of research labs.

Join, upon graduation, our alumni who have academic or research careers at Adobe Research, CMU, Drexel, Georgetown, Georgia Tech, Google, Kaiser Permanente, Kaltura, U. Maryland, U. Michigan, Michigan State, U. Minnesota, Oak Ridge National Labs, Northeastern, Penn State, Rose Hulman, Samsung, Sassafras, U. Washington, US Military Academy and Virginia Tech.

Our curriculum is flexible, allowing considerable customizing based on individual interests: three core courses, three specialization courses and three minor courses. You get involved with research during your first semester, and never stop!

Students receive tuition and a competitive stipend during their studies; outstanding US students are eligible for the President’s Fellowship.

Applications are due December 15; see http://www.ic.gatech.edu/future/phdhcc for additional program and application information.

November 19, 2014 at 1:16 pm Leave a comment

Those Who Say Code Does Not Matter are Wrong

Bertrand Meyer is making a similar point to Amy Ko’s argument about programming languages.  Programming does matter, and the language we use also matters.  Meyer goes on to suggest that those saying that “code doesn’t matter” may just be rationalizing that they continue to live with antiquated languages.  It can’t be that the peak of human-computer programming interfaces was reached back in New Jersey in the 1970s.

Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are simply trying to justify their use of bad languages.

via Those Who Say Code Does Not Matter | blog@CACM | Communications of the ACM.

May 16, 2014 at 9:09 am 8 comments

MOOCs: One Size Doesn’t Fit All

My colleague, Amy Bruckman, considers in her blog how HCI design principles lead us to question whether MOOCs can achieve their goals.

Can a MOOC teach course content to anyone, anywhere? It’s an imagination-grabbing idea. Maybe everyone could learn about topics from the greatest teachers in the world! Create the class once, and millions could learn from it!

It seems like an exciting idea. Until you realize that the entire history of human-computer interaction is about showing us that one size doesn’t fit all.

via MOOCs: One Size Doesn’t Fit All | The Next Bison: Social Computing and Culture.

April 18, 2014 at 1:38 am 1 comment

How do we make programming languages more usable and learnable?

Amy Ko made a fascinating claim recently, “Programming languages are the least usable, but most powerful human-computer interfaces ever invented,” which she explained in a blog post.  It’s a great argument, and I followed it up with a Blog@CACM post, “Programming languages are the most powerful, and least usable and learnable user interfaces.”

How would we make them better?  I suggest at the end of the Blog@CACM post that the answer is to follow the HCI dictum, “Know thy users, for they are not you.”

We make programming languages today driven by theory — we aim to provide access to Turing/von Neumann machines with a notation that has various features, e.g., type safety, security, provability, and so on.  Usability is one of the goals, but typically in a theoretical sense.  Quorum is the only programming language that I know of that tested usability as part of the design process.

But what if we took Amy Ko’s argument seriously?  What if we designed programming languages the way we design good user interfaces — working with specific users on their tasks?  A language’s value would become more obvious, and it would be more easily adopted by a community.  The languages might not be anything that the existing software development community even likes — I’ve noted before that the LiveCoders seem to really like Lisp-like languages, and as we all know, Lisp is dead.
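
What would working “with specific users on their tasks” look like in practice? Here is a minimal, hypothetical sketch of the kind of randomized comparison a user-centered language designer might run, in the spirit of (but not reproducing) the usability studies behind Quorum. The participants and the ask callback are invented for illustration.

```python
# A hypothetical sketch: choosing between two candidate loop syntaxes
# by measuring novice comprehension. In the spirit of, but not
# reproducing, the randomized studies used in Quorum's design.
import random

CANDIDATES = {
    "keyword_style": 'repeat 3 times\n    output "hi"\nend',
    "c_style": 'for (int i = 0; i < 3; i++) { print("hi"); }',
}

def run_study(participants, ask):
    """Randomly assign each participant one syntax variant. 'ask' is
    an invented callback that shows the snippet to a participant and
    returns True if they correctly predict what the code does."""
    hits = {name: [] for name in CANDIDATES}
    for person in participants:
        name = random.choice(list(CANDIDATES))
        hits[name].append(ask(person, CANDIDATES[name]))
    # Comprehension rate per candidate; the higher-scoring syntax is
    # the more learnable one for this population of users.
    return {name: sum(h) / len(h) for name, h in hits.items() if h}
```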

What would our design process be?  How much more usable and learnable could our programming languages become?  How much easier would computing education be if the languages were more usable and learnable?  I’d love it if programming language designers could put me out of a job.

April 1, 2014 at 9:43 am 25 comments

Does learning occur differently with physical or digital print?

I’m skeptical about this claim: that your brain interprets text in books differently than text in digital form.  One argument in support of the claim is an observation (not much data) that we have to re-read digital information more often than print information before we remember it, but the article doesn’t offer a theory for why that should be true.  I find this second claim a bit more plausible: that our memories rely on contextual information, and physical books provide us more cues to support recalling what we read.  I wonder, though, whether we might be able to provide more contextual cues through the interface.  I’ve started reading the “Our Choice” app on my iPad, and there are lots of cues in that book to provide a sense of “place” (what page you’re on, what pages are around you, what chapter you’re in).

But without stronger evidence that there is a difference, I’m going to keep reading on my iPad and Kindle (well, once I get a new Kindle — my Kindle’s screen died somewhere during my trip to Venice this last weekend).

In other words, the human brain uses location to recall the words it reads, which helps reinforce the information. To trigger a memory, the brain might recall whether it read the information at the top, middle, or bottom of the page, remember a corresponding picture on the page, or even a page number — essentially creating a mental bookmark to cue recall of the information.

“Anyone who has read an e-book can attest that the page provides fewer spatial landmarks than print,” Changizi continues. “In a sense, the page is scrolled without incident, infinite and limitless, which can be dizzying. On the other hand, printed books give physical reference points, which can be particularly helpful in recalling how far along in the book we are, something that’s more challenging to assess on an e-book.”

via Are E-Books Bad for Your Memory? – Mobiledia.

July 10, 2012 at 4:20 am 4 comments

Shrinking our turf: Defining CS out of Interaction Design

A member of the SIGCSE mailing list asked the other day for recommendations on teaching a course on “HCI or Interaction Design.”  We at Georgia Tech teach a variety of undergraduate and graduate courses like that, and I figured that lots of others do, too.  I was surprised at some of the responses:

  • “Our main theme was that computer scientists should know how to implement interfaces but should not try to design them. Frankly, I’ve not seen any evidence that has changed my mind since then.”
  • “My personal experience with over 20 years of teaching GUIs is that CS students can be taught to be quite good at the software development aspects of GUIs, that they can be taught to at least understand good interaction design techniques, but that it does not really resonate with them and they do not tend to do it well, and that most of them are hopeless with respect to artistic design.”
Not all of the responses had this attitude that I might call “Not on my turf.”  Bill Manaris pointed out that he was part of a workshop that defined an interesting series of courses aimed at teaching HCI and interaction design front-and-center in an undergraduate computer science curriculum.  Overall, I got a strong message from the thread: Computer scientists build software, but don’t expect them to deal well with interactions with people.

My first thought was reflexive. “What?!? We do this all the time!”  Georgia Tech’s Computer Science degree explicitly has a Thread on People. Students who study the People Thread (which is pretty popular) explicitly take courses in psychology and interface design, as well as in how to build user interface software.  Our students go on to take jobs where they do design user interfaces, and they work as part of teams building and evaluating user interfaces.  Not all computer science students at Georgia Tech take the People Thread, but that’s fine.  Not all computer scientists study AI, but I haven’t heard anyone argue that AI isn’t part of computer science.  There are lots of kinds of computer scientists.

My second thought was more reflective. “Why are we defining CS out of the role of interaction design?”  Surely, knowing something about computer science makes for a better interaction designer.  Painters start out by studying what their paints and brushes are capable of.  All designers start by exploring the potential of their design materials.  Surely we can all agree that computer science is connected to interaction design.

I can’t disagree with the experiences described in the messages — I’m sure that the SIGCSE members who posted really did see what they saw.  Those experiences say that the students who currently go into computer science are not interested in interaction design and do not have an aptitude for it.  I can believe that, but that doesn’t define the discipline.  That observation is an outcome of our recruitment efforts, of the lack of computer science in high schools (so only some students are ever introduced to CS), and of the culture of our field, which is mostly white and male and places less value on interaction with people.  The fact that Georgia Tech CS students (and certainly students at other schools, especially those offering the courses that the Manaris workshop designed) can do interaction design, successfully, is an existence proof.

A bigger question for me is, “Why would we want to define computer science out of any design activity involving computing?”  What is the value of saying, “Computer scientists can’t do X,” for any value of X?  Why would we want to shrink our intellectual turf?  Wikipedia defines: “Computer science or computing science (abbreviated CS) is the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems.”  While that isn’t a great definition, “practical techniques” allows for a wide range of activities, including interaction design.  Why would we want to give that up and say that that’s not CS?

In the College of Computing at Georgia Tech, we talk about Computing as becoming as large and broad as Engineering.  The virtual world can be even larger and more complex than the natural world.  Eventually, we expect that there will be recognized Schools of Computing, just as there are recognized Schools of Mechanical, Chemical, Civil, and Electrical Engineering today.  Here at Georgia Tech, we already have three Schools: Computer Science (traditional definition), Interactive Computing, and Computational Science and Engineering.  All of us faculty in the College of Computing see ourselves as computer scientists.

There are going to be branches of computer science.  One of those will include HCI and interaction design.  Let’s grow our branches, not prune them off.

June 23, 2011 at 10:57 am 13 comments

HCI and Computational Thinking are Ideological Foes

A colleague of mine sent me a link to the iConference 2011 website, suggesting that I should consider attending and submitting papers to future instantiations. It looks like an interesting conference, with lots of research in human-computer interaction and computer-supported collaborative work. There was very little about learning. There was a session on Scratch, focused on “end-user programming,” not on learning about computing.

I started to wonder: Have human-computer interaction research and computational thinking become ideological opposites? By “computational thinking” I mean “that knowledge about computing that goes beyond application use and that is useful in any discipline.” Or as Jeannette Wing described it, “Computational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine.” Notice that she points out the limits. Limits suggest things that the computer can’t do, and if you’re going to think about them, you have to be aware of them. They must be visible to you. If Computational Thinking involves, for example, understanding the power and limits of digital representations, and how those serve as metaphors in thinking about other problems, then those representations have to be visible.

Let’s contrast that with Don Norman’s call for the Invisible Computer. Or Mark Weiser’s vision, in which the “highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” Or any number of user-interface design books that tell us that the goal of user-centered design is for the user to focus on the task and make the computer become “invisible.”

Michael Mateas has talked about this in his discussion of a published dialog between Alan Perlis and Peter Elias. Elias claims, like Norman and Weiser, that one day “undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming.” Michael responds, “The problem with this vision is that programming is really about describing processes, describing complex flows of cause and effect, and given that it takes work to describe processes, programming will always involve work, never achieving this frictionless ideal.”

The invisible-computer goal (that not all in HCI share, but I think it’s the predominant goal) aims to create a task-oriented interface for anything that a human will want to do with a computer. No matter what the task, the ad promises: “There’s an app for that!” Is that even possible? Can we really make invisible all the seams between tasks and digital representations of those tasks? Computational thinking is about engaging with what the computer can and cannot do, and explicitly thinking about it.

Computing education may be even more an ideological foe of this HCI design goal. Computing education is explicitly assuming that we can’t create an app for everything that we want to do, that some people (all professionals, in the extreme version that I subscribe to) need to know how to think about the computer in its own terms, in order to use it in new, innovative ways and (at least) to create those apps for others. It’s not clear who builds the apps in the invisible-computer world (because they would certainly need computing education), but whoever they are, they’re invisible, too.

I used to think that computing education was the far end of a continuum that started with HCI design. At some point, you can’t design away the computer; it has to become visible, and then you have to learn about it. After reviewing the iConference program, I suspect that HCI designers who believe in the invisible computer have a goal for that never to happen: All possible tasks are covered by apps. Computing education should never be necessary except for an invisible few. Computational thinking is unnecessary, because we can make all limitations invisible.

Here’s a prediction: We won’t see a panel on “Computational Thinking” at CHI, CSCW, or iConference any time soon.

February 23, 2011 at 2:02 pm 25 comments

Programs as Poetry, and Programming as Art

Ian Bogost has just released a quartet of video games that he calls “poetry.”  I’m familiar with the idea that program code itself is a form of expression.  We have literate programming, and Donald Knuth’s famous Turing Award lecture “Computer Programming as an Art.”  Ian is saying something different here — that the program can be art, by being “expressive within tight constraints.”

Ian’s poems are saying something very interesting about human-computer interaction (HCI).  The poems are all about improving the lives of the humans who play them, in the subtle way of introducing new ideas and encouraging reflection.  However, they are not about usability.  These poems perform no useful, application-driven function.  They are “inscrutable.” The user manual for each program is a single Haiku.

Both literate programming and Ian’s poems introduce an interesting idea for computing teachers: What do we teach students about programming and programs as art? What should we be teaching them, about expressiveness, about craftsmanship, about creating code for reasons other than solving a problem or facilitating a task?

The games are simple, introduced to us who have no standards with which to judge the quality of video game poems. The A Slow Year games were made with the understanding that poetry can resist being obvious, that it can be expressive within tight constraints, that it can, like a video game, challenge its reader to work through it, that it can be vague but specific, harsh yet beautiful. The autumn game is just a slow game of waiting for a leaf to fall off a tree and catching it right on time. The spring game’s goal is to match thunder with lightning in a rainstorm. The summer game is the simple but daunting challenge to take a proper nap, a first-person game seen from behind drooping eyelids.

Each game was made to run on the Atari but will run on Windows or Macintosh computers.

Each is tough and accompanied with only a haiku for instructions.

via What If A Video Game Was Poetry? | Kotaku Australia.

December 21, 2010 at 8:35 am 7 comments

