Posts filed under ‘Uncategorized’

Women 1.5 Times More Likely to Leave STEM Pipeline after Calculus Compared to Men: Lack of Mathematical Confidence a Potential Culprit

When you read this paper, consider Nathan Ensmenger’s assertion that (a) mathematics has been shown to predict success in CS classes but not in computing careers, and (b) increasing mathematics requirements in undergraduate CS may have been a factor in the decline in female participation in computing.

Our analyses show that, while controlling for academic preparedness, career intentions, and instruction, the odds of a woman being dissuaded from continuing in calculus is 1.5 times greater than that for a man. Furthermore, women report they do not understand the course material well enough to continue significantly more often than men. When comparing women and men with above-average mathematical abilities and preparedness, we find women start and end the term with significantly lower mathematical confidence than men. This suggests a lack of mathematical confidence, rather than a lack of mathematical ability, may be responsible for the high departure rate of women. While it would be ideal to increase interest and participation of women in STEM at all stages of their careers, our findings indicate that if women persisted in STEM at the same rate as men starting in Calculus I, the number of women entering the STEM workforce would increase by 75%.

Source: PLOS ONE: Women 1.5 Times More Likely to Leave STEM Pipeline after Calculus Compared to Men: Lack of Mathematical Confidence a Potential Culprit

August 24, 2016 at 7:06 am 5 comments

C.P. Snow keeps getting more right: Why everyone needs to learn about algorithms #CS4All

When I give talks about teaching computer science to everyone, I often start with Alan Perlis and C.P. Snow in 1961. They made the first two public arguments for teaching computer science to everyone in higher education. Alan Perlis’s talk was the more upbeat of the two, talking about all the great things we can think about and do with computers. He offered the carrot. C.P. Snow offered the stick.

C.P. Snow foresaw that algorithms were going to run our world, and people would be creating those algorithms without oversight by the people whose lives would be controlled by them. Those who don’t understand algorithms don’t know how to challenge them, to ask about them, to fight back against them. Quoting from Martin Greenberger’s edited volume, Computers and the World of the Future (MIT Press, 1962), we hear from Snow:

Decisions which are going to affect a great deal of our lives, indeed whether we live at all, will have to be taken or actually are being taken by extremely small numbers of people, who are nominally scientists. The execution of these decisions has to be entrusted to people who do not quite understand what the depth of the argument is. That is one of the consequences of the lapse or gulf in communication between scientists and non-scientists.  There it is. A handful of people, having no relation to the will of society, have no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.

I was reminded of Snow’s quote when I read the article linked below in the NYTimes.  Increasingly, AI algorithms are controlling our lives, and they are programmed by data.  If all those data are white and male, the algorithms are going to treat everyone else as outliers. And it’s all “decisions in secret.”

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Source: Artificial Intelligence’s White Guy Problem – The New York Times

One of our superstar alumnae, Joy Buolamwini, wrote about a similar set of experiences. She’s an African-American woman who works with computer vision, and the standard face-recognition libraries don’t recognize her. She lays the responsibility for fixing these problems on the backs of “those who have the power to code systems.” C.P. Snow would go further; he’d say that it’s all our responsibility, as part of a democratic process. Knowing about algorithms and demanding transparency when they affect people’s lives is one of the responsibilities of citizens in the modern world.

The faces that are chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.

So what? As a result when I work on projects like the Aspire Mirror (pictured above), I am reminded that the training sets were not tuned for faces like mine. To test out the code I created for the Aspire Mirror and subsequent projects, I wore a white mask so that my face can be detected in a variety of lighting conditions.

The mirror experience brings back memories from 2009. While I was working on my robotics project as an undergraduate, I “borrowed” my roommate’s face so that I could test the code I was writing. I assumed someone would fix the problem, so I completed my research assignment and moved on.

Several years later in 2011, I was in Hong Kong taking a tour of a start-up. I was introduced to a social robot. The robot worked well with everyone on the tour except for me. My face could not be recognized. I asked the creators which libraries they used and soon discovered that they used the code libraries I had used as an undergraduate. I assumed someone would fix the problem, so I completed the tour and moved on.

Seven years since my first encounter with this problem, I realize that I cannot simply move on as the problems with inclusion persist. While I cannot fix coded bias in every system by myself, I can raise awareness, create pathways for more diverse training sets, and challenge us to examine the Coded Gaze — the embedded views that are propagated by those who have the power to code systems.

Source: InCoding — In The Beginning — Medium
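
For illustration, a face-detection call in a standard library looks something like this minimal sketch, which uses OpenCV’s bundled, pretrained Haar cascade (the filename is a placeholder, and this is not code from Buolamwini’s projects). Whether it finds a face at all depends on the faces represented in its training data.

    # Illustrative only: detect faces with OpenCV's bundled, pretrained Haar cascade.
    # "selfie.jpg" is a placeholder. The detector only finds faces that resemble
    # those in its training set; faces unlike that data may yield no detections.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("selfie.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    print("Detected %d face(s)" % len(faces))   # 0 if faces like this one weren't in the training data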

August 22, 2016 at 7:31 am 9 comments

Where are the Python 3 Libraries for Media Computation?

My Blog@CACM post for this month is on JES, the Jython Environment for Students, which, at 14 years old and with over 10,000 downloads, is probably one of the oldest, most used, and (by some definition) most successful pedagogical Python IDEs.

The SIGCSE Members list recently had a discussion about moving from Python 2 to Python 3. Here’s a description of differences. Some writers asked about MediaComp. With respect to the Media Computation libraries, one wrote:

I’m sad about this one, because we use and like this textbook, but I think it’s time to move to Python 3.  Is there a compatible library providing the API used in the text?

Short answer: No. There are no compatible Media Computation libraries for CPython 2 or 3.

We keep trying. The latest attempt to build Media Computation libraries in CPython is here: https://github.com/sportsracer48/mediapy. It doesn’t work on all platforms yet, e.g., I can’t get it to load on MacOS.

We have yet to find a set of Python libraries that work identically across platforms for sample-level manipulations of sounds. For example, PyGame’s mixer object doesn’t behave exactly the same on all platforms (sampling rates aren’t handled the same way everywhere, so the same code plays output at different speeds on different platforms). I can do pixel-level manipulations using PIL. We have not yet tried to find libraries for frame-level manipulations of video (as individual images). I have just downloaded the relevant libraries for Python 3 and plan to explore them in the future, but since we can’t make it work yet in Python 2 (which has more mature libraries), I doubt it will work in Python 3.
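
Pixel-level work, by contrast, is in good shape. A minimal sketch of a Media Computation-style grayscale manipulation using PIL/Pillow (the filename and function are placeholders, not code from our libraries):

    from PIL import Image

    def grayscale(filename):
        # Load the picture and get direct access to its pixels.
        picture = Image.open(filename).convert("RGB")
        pixels = picture.load()
        width, height = picture.size
        for x in range(width):
            for y in range(height):
                r, g, b = pixels[x, y]
                # Simple average of the channels, in the Media Computation spirit.
                luminance = (r + g + b) // 3
                pixels[x, y] = (luminance, luminance, luminance)
        picture.save("gray-" + filename)

    grayscale("beach.jpg")   # placeholder filename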

I complained about this problem in my blog in 2011 (see post here). The situation is better in other languages, but not yet in Python.

  • I have been building Media Computation examples in GP, a blocks-based language (see post here).
  • Jeff Gray’s group at U. Alabama has built Blockly-like languages Pixly and Tunely for pixel and sample level manipulations.
  • Cynthia Lee at Stanford has been doing Media Computation in her classes in MATLAB and in C++.
  • The Calico project supports Media Computation in IronPython (based on Python 3) and many other languages, because it builds on .NET/MONO which has good multimedia support.
  • We’re able to do more and more in JavaScript-based Python implementations (like Pythy and Runestone), because JavaScript has excellent cross-platform multimedia support.

When we did the 4th edition of our Python Media Computation textbook, I looked into what we’d have to change in the book to move to Python 3. There really wasn’t much. We would have to introduce print as a function. We do very little integer division, so we’d have to explain that. The focus in our course (non-technical majors, first course) is at a higher level than the differences between Python 2 and 3. I am confident that, at the end of our course, the majority of our students would understand the differences between Python 2 and 3. As we move more to browser-based IDEs, I can support either Python 2 or 3 syntax and semantics. Preparing students for industry jobs using exactly CPython 3 is simply not a priority in our course.
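
To make those two differences concrete, a small sketch with illustrative values:

    # Python 3: print is a function, not a statement.
    print("Hello, Media Computation")   # Python 2 allowed: print "Hello, Media Computation"

    # Python 3: / is true division; // is floor (integer) division.
    print(7 / 2)    # 3.5 in Python 3 (Python 2 gave 3)
    print(7 // 2)   # 3 in both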

August 19, 2016 at 7:46 am Leave a comment

From Computational Thinking to Computational Participation in K-12 Education: Yasmin Kafai in CACM

Yasmin Kafai has been a friend and mentor to me for years — she introduced me to my PhD advisor, Elliot Soloway.  Her book with Quinn Burke, Connected Code, updates thinking about the role of computing and programming in schools. They emphasize an idea they call Computational Participation as a contrast with computational thinking.  I asked Yasmin to do a CACM Viewpoint on the idea, and it’s published this month. Yasmin has shared the paper on Academia.edu.

In the 1980s many schools featured Basic, Logo, or Pascal programming computer labs. Students typically received weekly introductory programming instruction. These exercises were often of limited complexity, disconnected from classroom work, and lacking in relevance. They did not deliver on promises. By the mid-1990s most schools had turned away from programming. Pre-assembled multimedia packages burned onto glossy CD-ROMs took over. Toiling over syntax typos and debugging problems were no longer classroom activities.

Computer science is making a comeback in schools. We should not repeat earlier mistakes, but leverage what we have learned. Why are students interested in programming? Under what circumstances do they do it, and how? Computational thinking and programming are social, creative practices. They offer a context for making applications of significance for others, communities in which design sharing and collaboration with others are paramount. Computational thinking should be reframed as computational participation.

Source: From Computational Thinking to Computational Participation in K-12 Education | August 2016 | Communications of the ACM

August 17, 2016 at 7:06 am 1 comment

Supports for blind CS students: Guest blog post from Andreas Stefik

After my post last week on learning CS and programming by blind students, Andreas Stefik sent me an email. Stefik has been working for years on these issues, and he created the first programming language explicitly designed for blind programmers, Quorum. He provided additional information on some of the things I’d talked about, and corrected me, too. I asked if I could turn his message into a blog post, and he kindly agreed. Thanks!

Mark,

I came across your latest blog (about learning CS blind) and thought I would add a couple thoughts. Your student has it pretty spot on for the most part, but there’s a lot of variation in this community and thought I would add my perspective, for what it’s worth. First, if you aren’t aware of it, there’s a mailing list that many blind people use to discuss issues they face in blind programming called prog-l. I’ve lurked on there for years to try to get a sense of the diversity of needs in the community. It has everyone from total learners to experienced pros, with various levels of vision. People vary quite a bit in this community, so it’s a nice place to probe people’s brains and get opinions.

Second, blind CS students should know there’s a conference they can participate in called EPIQ. That’s our national Quorum conference, which is heavily attended by TVIs (teachers for the visually impaired) and blind folks. This year, the conference was mostly on writing 3D games in Quorum (audio + visual). It’s the first time we’ve tried to make something as complicated as 3D gaming accessible, but I think it went really well. If students want to go, they should apply. We almost always have funding to help students come out.

In terms of the post, there’s only one thing I would mention that is maybe questionable. That is when you say:

The second surprise was about their tools. They showed me Visual Studio and EdSharp, a plain text editor developed by a blind programmer for blind programmers. I asked what features made an editor good for blind programmers. They said, “It works with screen readers.” And really, that’s it. They don’t want specialized tools with non-standard interfaces because of the cognitive load of switching between the standard screen reader interfaces and a novel interface.

This is a tricky issue and in my view is not correct. Screen readers are not universal, in the same way programming languages are not, and blind programmers vary massively in their tool preferences. Different programming languages also connect to them in different ways (some good, some less so). Further, there is no such thing as a universal “screen reader API.” That doesn’t exist. I want to make this clear because it sounds like there would be, or at least should be, and it’s counterintuitive that it’s not true. On the web, it is true (it is called ARIA), but not for desktops. A few examples:

  1. JAWS: Windows screen reader. Popular. Expensive. JAWS doesn’t have an API. It has a custom programming language you can learn to adjust settings, but this language isn’t very powerful. It works with some versions of some software on Windows. Visual Studio works mostly ok’ish with it.
  2. NVDA: Free screen reader on Windows. Less popular, but free. NVDA does have an API and it is extremely flexible; by far, it is the most flexible reader on the market, and it uses Python as a backend for customization (see the sketch after this list). It also works with some versions of some software. There are lots of problems here too, but I won’t get into them.
  3. Voice Over: The primary reader on Mac. It’s about as flexible as a piece of cement after it has dried, but works really well for applications written by Apple. It’s also free. There are other versions of Voice Over (e.g., tablets, Apple TV), but they are different. To my knowledge, there’s no API to adjust it. If you are writing custom software on Mac, you are at the whim of the programming language you are using, and UI toolkit, as to whether you even “can” support accessibility with it. Even if you can, “how well” is another issue.

    This isn’t to say that Apple doesn’t put a lot of work into making an API for accessibility: https://developer.apple.com/accessibility/. They do, and it’s fine. But the moment you stray from their API, which is in their languages, on their hardware, with their rules, it all breaks. Even if you connect into their API, Voice Over itself doesn’t have the kind of scripting capabilities that something like NVDA has, to my knowledge.
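
To make item 2 above concrete: NVDA add-ons are ordinary Python modules. A minimal, hypothetical global-plugin sketch, assuming NVDA’s documented add-on API (the key binding and spoken message are made up for illustration):

    import globalPluginHandler
    import ui

    class GlobalPlugin(globalPluginHandler.GlobalPlugin):
        # A custom script, bound below to NVDA+Shift+H, that speaks a message.
        def script_sayHello(self, gesture):
            ui.message("Hello from a custom NVDA script")

        __gestures = {
            "kb:NVDA+shift+h": "sayHello",
        }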

Now, this is even trickier once you start taking other platforms into account. Most platforms have some mechanism by which they claim accessibility works. Oracle’s Java has an accessibility API. Does it work? Not very well. Other languages (e.g., Smalltalk): total crapshoot. Java on Android? Totally different. Microsoft’s API is one of the better ones, yet somehow even Microsoft Edge isn’t accessible (yet), even though IE was. The language wars impact this community, if nothing else because they make this stuff such a mess at the global level.

So, when I hear that an individual thinks connecting to a screen reader is good enough for an editor, I think that’s not quite right. That’s true today just because the field as a whole is incredibly inaccessible across the board, so when you get something — anything — working well, you suffer through and learn it. This is why some of my blind friends just use notepad and the console. However, we know from research that just a plain old editor for code, where you move up and down line-by-line is incredibly tedious and inefficient. Ignoring my own work on the topic (e.g., blind debugging) for a moment, check out this wonderful paper by one of Richard Ladner’s students (Catherine Baker): http://dl.acm.org/citation.cfm?id=2702589&CFID=653320883&CFTOKEN=24820033

The lesson: Even simple navigation aids in an editor make a big difference. For debugging, compiler errors, and so many other issues, this is true as well, although not in her paper. We try to combine all of the literature into Sodbeans (especially version 6), but our tools have their accessibility flaws as well. Our biggest flaw is caused largely by the fact that we connect through Java, which has accessibility problems caused by the JDK itself. Even with that flaw though, it’s used heavily in residential schools for the blind nowadays.

Anyway, this is a lot more than I was planning to write, but of course, I’m fascinated by blind programming and like this community a lot. I just felt like sharing, so there you go.

Stefik

 

August 15, 2016 at 7:44 am Leave a comment

New ISTE Standards emphasize computational thinking with a better definition

ISTE has just released their ed-tech-influenced standards for students for 2016, and they include computational thinking, with a better definition than the more traditional ones. It’s not about changing how students think. It’s about giving students the tools to solve problems with technology. I liked the frequent use of the term “algorithmic thinking” to emphasize the connections to the history of the ideas. This definition doesn’t get to systems and processes (for example), but it’s more realistic than the broad claims about transferable thinking skills.


Students develop and employ strategies for understanding and solving problems in ways that leverage the power of technological methods to develop and test solutions.

Source: For Students 2016

August 12, 2016 at 7:53 am 4 comments

Programming and learning CS when legally blind

Since I’ve been using blocks-based languages lately (see my posts on GP and MOHQ), I’ve been thinking more about the challenges of using blocks-based languages, and programming and learning CS more generally, when legally blind.  One of our PhD students in the Human-Centered Computing PhD program is legally blind, and he generously came to visit me and brought with him one of his students who is legally blind and learning programming.

The first and biggest surprise for me was that most legally blind people (about 85%) can actually see. One of the people I worked with can see light/dark (which doesn’t help with programming, but does help him with way-finding and spatial navigation). The other loves to program in App Inventor using high magnification on her Mac. She’s low-vision and finds the large splotches of color useful in figuring out her code.

The implication, they explained to me, is that some tactile-based affordances for blind people don’t work because low-vision blind people would prefer to use audio and what sight they have, rather than learn a touch-based encoding. I was surprised to learn that most blind people don’t learn Braille because it’s a complicated code, and low vision people would rather magnify the screen than learn the encoding.

Blind programmers who know Braille will often use an audio screen reader along with a Braille reader for a single line of text. It’s easier to scan a line (especially for syntax errors) with Braille than with a screen reader.

The second surprise was about their tools. They showed me Visual Studio and EdSharp, a plain text editor developed by a blind programmer for blind programmers. I asked what features made an editor good for blind programmers. They said, “It works with screen readers.” And really, that’s it. They don’t want specialized tools with non-standard interfaces because of the cognitive load of switching between the standard screen reader interfaces and a novel interface.

I didn’t realize how few tools go to the trouble of accessing the screen reader APIs and providing good mappings from the interface to text. Processing (all platforms) and NetBeans (on Windows) are completely unusable for blind people because they are inaccessible to screen readers. Visual Studio has become a new favorite IDE, not because of any special features, but because, as they put it, “it doesn’t crash and I can access it with a screen reader.”

I was particularly interested in the low-vision programmer’s use of App Inventor. We talked about what didn’t work for her and brainstormed what would make it better. One of the tougher parts of block-based languages is that scripts can be anywhere in a 2-D space. It’s hard to scan a 2-D space with a zoomed interface, and there’s no obvious interface for screen readers. Having blocks snap to a grid would help a lot to make it easier to find scripts for both types of blind programmers.
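
As a sketch of the snap-to-grid idea (not App Inventor code): quantize a block’s free-form position to the nearest grid cell, so scripts land in predictable, easier-to-scan locations.

    def snap_to_grid(x, y, cell_size=40):
        """Return the grid-aligned position closest to (x, y)."""
        snapped_x = round(x / cell_size) * cell_size
        snapped_y = round(y / cell_size) * cell_size
        return snapped_x, snapped_y

    print(snap_to_grid(137, 83))   # -> (120, 80)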

We talked about how CS classes might be better designed for legally blind students. I was surprised to learn how much they dislike active learning activities in classrooms.  They said that when the whole class breaks into small group discussions, they can’t hear their group.  The definition of the group is by physical proximity, but they discern “close” by “loud.”  They end up listening in to whichever group is loudest around them.  They need a different kind of active learning activity.

August 8, 2016 at 7:55 am 1 comment
