Posts tagged ‘BPC’

Barriers to Stack Overflow Use for Females

Stack Overflow is one of the most heavily used resources among programmers today. It’s also a barrier to women entering computing. Here’s a blog post summarizing a recent study on why women find Stack Overflow so unwelcoming.

There are many movements to get women into programming, but what about keeping them there? If they don’t feel comfortable using the resources that are available for all programmers then that is a big problem for retention in the field. To do our part in being more proactive in welcoming women into the field, we sought to uncover some reasons for this low participation.

Source: Paradise Unplugged: Barriers to Stack Overflow Use for Females | fordable

September 12, 2016 at 7:08 am 10 comments

Learning CS while Learning English: Scaffolding ESL CS Learners – Thesis from Yogendra Pal

When I visited Mumbai for LaTICE 2016, I mentioned meeting Yogendra Pal. I was asked to be a reader for his thesis, which I found fascinating. I’m pleased to report that he has now graduated and his thesis, A Framework for Scaffolding to Teach Vernacular Medium Learners, is available here: https://www.cse.iitb.ac.in/~sri/students/#yogendra.

I learned a lot from Yogendra’s thesis, like what “vernacular medium learners” means. Here’s the problem that he’s facing (and that Yogendra faced as a student). Students go through primary and secondary school learning in one language (Hindi, in Yogendra’s personal case and in his thesis), and then come to University to study Computer Science. Do you teach them in English, or in Hindi? (The language of teaching is what Yogendra calls the “Medium of Instruction,” or MoI.) Note that English is pervasive in Computer Science, e.g., almost all our programming languages use English keywords.

Here’s Yogendra’s bottom-line finding: “We find that self-paced video-based environment is more suitable for vernacular medium students than a classroom environment if English-only MoI are used.” Yogendra uses a design-based research methodology. He measures the students, tries something based on his current hypothesis, then measures them again. He compares what he thought would happen to what he saw, revises his hypothesis, and then iterates. Some of the scaffolds he tested may seem obvious (like using a slower pace), but a strength of the thesis is that he develops a rationale for each of his changes and tests them. Eventually, he came to this surprising (to me) and interesting result: it’s better to teach in Hindi in the classroom, and in English when students are learning from self-paced videos.

The stories at the beginning of the thesis are insightful and moving. I hadn’t realized what a handicap it is to be learning English in a class that is taught in English. It’s obvious that the learners would struggle with the language. What I hadn’t realized was how hard it is to raise your hand and ask questions. Maybe you have a question only because you don’t know the language. Maybe you’ll expose yourself to ridicule because you pose the question incorrectly.

Yogendra describes solutions that the Hindi-speaking students tried, and where the solutions didn’t work. The Hindi-speaking students used English-to-English dictionaries. They didn’t want English-Hindi dictionaries, because they wanted to become fluent in English, but they needed help with the complicated (especially technical) words. They tried using online videos for additional explanations of concepts, but most of those were made by American or British speakers. When you’re still learning English, switching from an Indian accent to another accent is a barrier to understanding.

The middle chapters are a detailed description of Yogendra’s attempts to scaffold student learning. He tried to teach in all-Hindi, but some English technical terms like “execute” have no direct Hindi translation. He selected Hindi words to stand in for the technical terms, but his chosen translations were unusual and not well known to the students. Perhaps the most compelling insight for me in these chapters was how important it was to both the students and the teachers that the students learn English, even when the Hindi materials were measurably better for learning in some conditions.

In the end, he found that Hindi language screencasts led to better learning (statistically significantly) when the learners (who had received primary and secondary school instruction in Hindi) were in a classroom, but that the English language screencasts led to better learning (again, statistically significantly) when the learners were watching the screencasts self-paced. When the students are self-paced, they can rewind and re-watch things that are confusing, so it’s okay to struggle with the English. In the classroom, the lecture just goes on by. It works best if it’s in Hindi for the students who learned in Hindi in school.

Yogendra tells a convincing story. It’s an interesting question of how these lessons transfer to other contexts. For example, what are the issues for Spanish-speaking students learning CS in the United States? In a general form, can we use the lessons from this thesis to make CS learning accessible to more ESL (English as a Second Language) learners?

September 8, 2016 at 5:50 pm 5 comments

Why ‘U.S. News’ should rank colleges and universities according to diversity: Essay from Dean Gary May #CSforAll

Georgia Tech’s Dean of Engineering Gary May was one of the advisors on “Georgia Computes!”  He makes a terrific point in his essay linked below.  Want broadened participation in computing (BPC)? CS for All?  Make diversity count — and rankings are what “counts” in higher education today.

U.S. News & World Report, that heavyweight of the college rankings game, recently hosted a conference focused partially on diversity in higher education. I did an interview for the publication prior to the forum and spoke on a panel at the event. I was happy to do it. As dean of one of the country’s most diverse engineering schools, I am particularly invested in these issues. My panel focused on how to help women and underrepresented minority students succeed in STEM fields, and I’m grateful to U.S. News for leading the discussion. But the publication, for all its noble intentions, could do more to follow through where it counts. Diversity is currently given no weight in the magazine’s primary university and disciplinary rankings, and it’s time for that to change. As U.S. News goes, so goes higher education.

Source: Why ‘U.S. News’ should rank colleges and universities according to diversity (essay)

August 31, 2016 at 7:29 am 1 comment

Women 1.5 Times More Likely to Leave STEM Pipeline after Calculus Compared to Men: Lack of Mathematical Confidence a Potential Culprit

When you read this paper, consider Nathan Ensmenger’s assertion that (a) mathematics has been shown to predict success in CS classes but not in computing careers, and (b) increasing mathematics requirements in undergraduate CS may have been a factor in the decline in female participation in computing.

Our analyses show that, while controlling for academic preparedness, career intentions, and instruction, the odds of a woman being dissuaded from continuing in calculus is 1.5 times greater than that for a man. Furthermore, women report they do not understand the course material well enough to continue significantly more often than men. When comparing women and men with above-average mathematical abilities and preparedness, we find women start and end the term with significantly lower mathematical confidence than men. This suggests a lack of mathematical confidence, rather than a lack of mathematical ability, may be responsible for the high departure rate of women. While it would be ideal to increase interest and participation of women in STEM at all stages of their careers, our findings indicate that if women persisted in STEM at the same rate as men starting in Calculus I, the number of women entering the STEM workforce would increase by 75%.

Source: PLOS ONE: Women 1.5 Times More Likely to Leave STEM Pipeline after Calculus Compared to Men: Lack of Mathematical Confidence a Potential Culprit
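A caution when reading the abstract: odds are not probabilities, so “1.5 times the odds” does not mean “1.5 times as many women leave.” Here’s a minimal sketch of the conversion in Python, with a made-up baseline rate (my number, not the paper’s):

    # Hypothetical baseline, chosen only to illustrate the odds arithmetic.
    p_men_leave = 0.20                             # assume 20% of men leave after Calculus I
    odds_men = p_men_leave / (1 - p_men_leave)     # 0.25
    odds_women = 1.5 * odds_men                    # apply the paper's reported odds ratio
    p_women_leave = odds_women / (1 + odds_women)  # back to a probability: ~27%
    print(f"men: {p_men_leave:.0%} leave; women: {p_women_leave:.0%} leave")

With a 20% baseline, 1.5 times the odds works out to about 27% of women leaving, not 30%; the size of the gap depends on the baseline rate.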

August 24, 2016 at 7:06 am 8 comments

C.P. Snow keeps getting more right: Why everyone needs to learn about algorithms #CS4All

When I give talks about teaching computing to everyone, I often start with Alan Perlis and C.P. Snow in 1961. They made the first two public arguments for teaching computer science to everyone in higher education. Alan Perlis’s talk was the more upbeat of the two, talking about all the great things we can think about and do with computers. He offered the carrot. C.P. Snow offered the stick.

C.P. Snow foresaw that algorithms were going to run our world, and people would be creating those algorithms without oversight by the people whose lives would be controlled by them. Those who don’t understand algorithms don’t know how to challenge them, to ask about them, to fight back against them. Quoting from Martin Greenberger’s edited volume, Computers and the World of the Future (MIT Press, 1962), we hear from Snow:

Decisions which are going to affect a great deal of our lives, indeed whether we live at all, will have to be taken or actually are being taken by extremely small numbers of people, who are nominally scientists. The execution of these decisions has to be entrusted to people who do not quite understand what the depth of the argument is. That is one of the consequences of the lapse or gulf in communication between scientists and non-scientists.  There it is. A handful of people, having no relation to the will of society, have no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.

I was reminded of Snow’s quote when I read the article linked below in the NYTimes. Increasingly, AI algorithms are controlling our lives, and they are programmed by data. If the training data overwhelmingly represent people who are white and male, the algorithms are going to treat everyone else as outliers. And it’s all “decisions in secret.”

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Source: Artificial Intelligence’s White Guy Problem – The New York Times
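The “data problem” framing is easy to demonstrate for yourself. Here’s a minimal, hypothetical sketch (a toy model of mine, not anything from the article): fit the simplest possible “model” to a training set that is 95% one group, and the model sits close to that group and far from everyone else.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: 95% group A, 5% group B, with different
    # one-dimensional feature distributions standing in for "what a face looks like."
    group_a = rng.normal(0.0, 1.0, 950)
    group_b = rng.normal(3.0, 1.0, 50)

    # The "model" is just the mean of the training data -- dominated by group A.
    model = np.concatenate([group_a, group_b]).mean()

    # New samples from each group: group B sits much farther from the model.
    test_a = rng.normal(0.0, 1.0, 1000)
    test_b = rng.normal(3.0, 1.0, 1000)
    print(f"mean distance from model, group A: {abs(test_a - model).mean():.2f}")
    print(f"mean distance from model, group B: {abs(test_b - model).mean():.2f}")

Real vision systems are vastly more complicated, but the failure mode is the same: whoever dominates the training data defines “normal.”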

One of our superstar alumnae, Joy Buolamwini, wrote about a similar set of experiences. She’s an African-American woman who works with computer vision, and the standard face-recognition libraries don’t recognize her. She lays the responsibility for fixing these problems on the backs of “those who have the power to code systems.” C.P. Snow would go further: he’d say that it’s all our responsibility, as part of a democratic process. Knowing about algorithms and demanding transparency when they affect people’s lives is one of the responsibilities of citizens in the modern world.

The faces that are chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.

So what? As a result when I work on projects like the Aspire Mirror (pictured above), I am reminded that the training sets were not tuned for faces like mine. To test out the code I created for the Aspire Mirror and subsequent projects, I wore a white mask so that my face can be detected in a variety of lighting conditions.

The mirror experience brings back memories from 2009. While I was working on my robotics project as an undergraduate, I “borrowed” my roommate’s face so that I could test the code I was writing. I assumed someone would fix the problem, so I completed my research assignment and moved on.

Several years later in 2011, I was in Hong Kong taking a tour of a start-up. I was introduced to a social robot. The robot worked well with everyone on the tour except for me. My face could not be recognized. I asked the creators which libraries they used and soon discovered that they used the code libraries I had used as an undergraduate. I assumed someone would fix the problem, so I completed the tour and moved on.

Seven years since my first encounter with this problem, I realize that I cannot simply move on as the problems with inclusion persist. While I cannot fix coded bias in every system by myself, I can raise awareness, create pathways for more diverse training sets, and challenge us to examine the Coded Gaze — the embedded views that are propagated by those who have the power to code systems.

Source: InCoding — In The Beginning — Medium
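You can reproduce the shape of Buolamwini’s experience with off-the-shelf tools. Here’s a minimal sketch using OpenCV’s stock pre-trained face detector (assuming OpenCV is installed and a test image named face.jpg is at hand); whether a face is found depends entirely on how well it resembles the detector’s training set:

    import cv2

    # The Haar-cascade detector that ships with OpenCV was pre-trained on a
    # fixed set of face images, so its notion of "a face" is whatever that
    # training set contained.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("face.jpg")  # hypothetical test image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s)")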

August 22, 2016 at 7:31 am 9 comments

Supports for blind CS students: Guest blog post from Andreas Stefik

After my post last week on learning CS and programming by blind students, Andreas Stefik sent me an email. Stefik has been working for years on these issues, and created the first programming language explicitly designed for blind programmers, Quorum. He provided additional information on some of the things I’d talked about, and corrected me, too. I asked if I could turn his message into a blog post, and he kindly agreed. Thanks!

Mark,

I came across your latest blog (about learning CS blind) and thought I would add a couple thoughts. Your student has it pretty spot on for the most part, but there’s a lot of variation in this community, and I thought I would add my perspective, for what it’s worth. First, if you aren’t aware of it, there’s a mailing list called prog-l that many blind people use to discuss the issues they face in blind programming. I’ve lurked on there for years to try to get a sense of the diversity of needs in the community. It has everyone from beginners to experienced pros, with various levels of vision. People vary quite a bit in this community, so it’s a nice place to probe people’s brains and get opinions.

Second, blind CS students should know there’s a conference they can participate in called EPIQ. That’s our national Quorum conference, which is heavily attended by TVIs (teachers for the visually impaired) and blind folks. This year, the conference was mostly on writing 3D games in Quorum (audio + visual). It’s the first time we’ve tried to make something as complicated as 3D gaming accessible, but I think it went really well. If students want to go, they should apply. We almost always have funding to help students come out.

In terms of the post, there’s only one thing I would mention that is maybe questionable. That is when you say:

The second surprise was about their tools. They showed me Visual Studio and EdSharp, a plain text editor developed by a blind programmer for blind programmers. I asked what features made an editor good for blind programmers. They said, “It works with screen readers.” And really, that’s it. They don’t want specialized tools with non-standard interfaces because of the cognitive load of switching between the standard screen reader interfaces and a novel interface.

This is a tricky issue and in my view is not correct. Screen readers are no more universal than programming languages are, and blind programmers vary massively in their tool preferences. Different programming languages also connect to them in different ways (some good, some less so). Further, there is no such thing as a universal “screen reader API.” That doesn’t exist. I want to make this clear because it sounds like there would be, or at least should be, and it’s counterintuitive that it’s not true. On the web, it is true (it is called ARIA), but not for desktops. A few examples:

  1. JAWS: Windows screen reader. Popular. Expensive. JAWS doesn’t have an API. It has a custom programming language you can learn to adjust settings, but this language isn’t very powerful. It works with some versions of some software on Windows. Visual Studio works mostly ok’ish with it.
  2. NVDA: Free screen reader on Windows. Less popular than JAWS, but free. NVDA does have an API, and it is extremely flexible: by far the most flexible reader on the market, using Python as a backend for customization (see the sketch after this list). It also works with some versions of some software. There are lots of problems here too, but I won’t get into them.
  3. Voice Over: The primary reader on Mac. It’s about as flexible as a piece of cement after it has dried, but works really well for applications written by Apple. It’s also free. There are other versions of Voice Over (e.g., tablets, Apple TV), but they are different. To my knowledge, there’s no API to adjust it. If you are writing custom software on Mac, you are at the whim of the programming language you are using, and UI toolkit, as to whether you even “can” support accessibility with it. Even if you can, “how well” is another issue.

    This isn’t to say that Apple doesn’t put a lot of work into making an API for accessibility: https://developer.apple.com/accessibility/ They do and it’s fine. But, the moment you stray from their API, which is in their languages, on their hardware, with their rules, it all breaks. Even if you connect into their API, voice over itself doesn’t have the kind of scripting capabilities that something like NVDA has, to my knowledge.
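To give a flavor of NVDA’s flexibility, here’s a minimal sketch of a global plugin (assuming it is installed as a Python file under globalPlugins/ inside an NVDA add-on). It binds one keystroke to one spoken message; real add-ons script far richer behavior:

    # Minimal NVDA global plugin sketch. NVDA discovers GlobalPlugin
    # subclasses in installed add-ons and loads them for all applications.
    import globalPluginHandler
    import ui

    class GlobalPlugin(globalPluginHandler.GlobalPlugin):
        def script_sayHello(self, gesture):
            # ui.message speaks (and brailles) the given text.
            ui.message("Hello from a custom NVDA script")

        # Bind NVDA+Shift+H to the script above.
        __gestures = {
            "kb:NVDA+shift+h": "sayHello",
        }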

Now, this is even trickier once you start taking other platforms into account. Most platforms have some mechanism by which they claim accessibility works. Oracle’s Java has an accessibility API. Does it work? Not very well. Other languages (e.g., Smalltalk): total crapshoot. Java on Android? Totally different. Microsoft’s API is one of the better ones, yet somehow even Microsoft Edge isn’t accessible (yet), even though IE was. The language wars impact this community, if nothing else because they make this stuff such a mess at the global level.

So, when I hear that an individual thinks connecting to a screen reader is good enough for an editor, I think that’s not quite right. That’s true today only because the field as a whole is incredibly inaccessible across the board, so when you do get something, anything, working well, you suffer through and learn it. This is why some of my blind friends just use Notepad and the console. However, we know from research that a plain old editor for code, where you move up and down line by line, is incredibly tedious and inefficient. Ignoring my own work on the topic (e.g., blind debugging) for a moment, check out this wonderful paper by one of Richard Ladner’s students (Catherine Baker): http://dl.acm.org/citation.cfm?id=2702589
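To make “navigation aid” concrete, here’s a minimal sketch (mine, not from her paper) of the kind of structural index an editor can expose, letting a screen-reader user jump function-to-function instead of reading line by line:

    import ast

    def function_index(source: str):
        """Return (line, name) pairs for every function definition in a
        Python file -- a structural alternative to line-by-line reading."""
        tree = ast.parse(source)
        return [(node.lineno, node.name)
                for node in ast.walk(tree)
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

    # e.g., function_index(open("some_file.py").read())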

The lesson: even simple navigation aids in an editor make a big difference. The same holds for debugging, compiler errors, and so many other issues, although those aren’t in her paper. We try to combine all of the literature into Sodbeans (especially version 6), but our tools have their accessibility flaws as well. Our biggest flaw is caused largely by the fact that we connect through Java, which has accessibility problems caused by the JDK itself. Even with that flaw, though, it’s used heavily in residential schools for the blind nowadays.

Anyway, this is a lot more than I was planning to write, but of course, I’m fascinated by blind programming and like this community a lot. I just felt like sharing, so there you go.

Stefik

 

August 15, 2016 at 7:44 am Leave a comment

Programming and learning CS when legally blind

Since I’ve been using blocks-based languages lately (see my posts on GP and MOHQ), I’ve been thinking more about the challenges of using blocks-based languages, and programming and learning CS more generally, when legally blind.  One of our PhD students in the Human-Centered Computing PhD program is legally blind, and he generously came to visit me and brought with him one of his students who is legally blind and learning programming.

The first and biggest surprise for me was that most legally blind people (about 85%) can actually see to some degree. One of the people I worked with can see light and dark (which doesn’t help with programming, but does help him with way-finding and spatial navigation). The other loves to program in App Inventor using high magnification on her Mac. She’s low-vision and finds the large splotches of color useful in figuring out her code.

The implication, they explained to me, is that some tactile-based affordances for blind people don’t work because low-vision blind people would prefer to use audio and what sight they have, rather than learn a touch-based encoding. I was surprised to learn that most blind people don’t learn Braille because it’s a complicated code, and low vision people would rather magnify the screen than learn the encoding.

Blind programmers who know Braille will often use an audio screen reader along with a Braille reader for a single line of text. It’s easier to scan a line (especially for syntax errors) with Braille than with a screen reader.

The second surprise was about their tools. They showed me Visual Studio and EdSharp, a plain text editor developed by a blind programmer for blind programmers. I asked what features made an editor good for blind programmers. They said, “It works with screen readers.” And really, that’s it. They don’t want specialized tools with non-standard interfaces because of the cognitive load of switching between the standard screen reader interfaces and a novel interface.

I didn’t realize how few tools go to the trouble of accessing the screen reader APIs and providing good mappings from the interface to text. Processing (all platforms) and NetBeans (on Windows) are completely unusable for blind people because they are inaccessible to screen readers. Visual Studio has become a new favorite IDE, not because of any special features, but because, as they put it, “it doesn’t crash and I can access it with a screen reader.”

I was particularly interested in the low-vision programmer’s use of App Inventor. We talked about what didn’t work for her and brainstormed what would make it better. One of the tougher parts of block-based languages is that scripts could be anywhere in a 2-D space. It’s hard to scan a 2-D space with a zoomed interface, and there’s no obvious interface for screen-readers. Having blocks snap to a grid would help a lot to make it easier to find scripts for both types of blind programmers.
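The snap-to-grid idea is simple to state precisely. A minimal sketch of the coordinate arithmetic (hypothetical, not App Inventor’s actual code):

    def snap_to_grid(x, y, cell=20):
        """Snap a block's top-left corner to the nearest grid cell, so scripts
        land at predictable positions that a zoomed view or a screen reader
        can enumerate row by row."""
        return (round(x / cell) * cell, round(y / cell) * cell)

    print(snap_to_grid(137, 203))  # -> (140, 200)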

We talked about how CS classes might be better designed for legally blind students. I was surprised to learn how much they dislike active learning activities in classrooms.  They said that when the whole class breaks into small group discussions, they can’t hear their group.  The definition of the group is by physical proximity, but they discern “close” by “loud.”  They end up listening in to whichever group is loudest around them.  They need a different kind of active learning activity.

August 8, 2016 at 7:55 am 1 comment
