
C.P. Snow keeps getting more right: Why everyone needs to learn about algorithms #CS4All

When I give talks about teaching computer science to everyone, I often start with Alan Perlis and C.P. Snow in 1961. They made the first two public arguments for teaching computer science to everyone in higher education. Alan Perlis's talk was the more upbeat of the two, describing all the great things we can think about and do with computers. He offered the carrot. C.P. Snow offered the stick.

C.P. Snow foresaw that algorithms were going to run our world, and people would be creating those algorithms without oversight by the people whose lives would be controlled by them. Those who don’t understand algorithms don’t know how to challenge them, to ask about them, to fight back against them. Quoting from Martin Greenberger’s edited volume, Computers and the World of the Future (MIT Press, 1962), we hear from Snow:

Decisions which are going to affect a great deal of our lives, indeed whether we live at all, will have to be taken or actually are being taken by extremely small numbers of people, who are nominally scientists. The execution of these decisions has to be entrusted to people who do not quite understand what the depth of the argument is. That is one of the consequences of the lapse or gulf in communication between scientists and non-scientists.  There it is. A handful of people, having no relation to the will of society, have no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.

I was reminded of Snow's quote when I read the article linked below in the NYTimes. Increasingly, AI algorithms are controlling our lives, and they are effectively programmed by data. If those data come overwhelmingly from people who are white and male, the algorithms will treat everyone else as outliers. And it's all "decisions in secret."

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Source: Artificial Intelligence’s White Guy Problem – The New York Times
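
To make the quoted point about training data concrete, here is a minimal sketch with entirely synthetic data. This is my own illustration, not code from the article or from any of the proprietary systems it describes; the group sizes, feature shift, and labeling rule are all made up for the demonstration. It shows how a model trained mostly on one group can come out much less accurate on an underrepresented group.

    # Synthetic illustration: a classifier trained on a skewed sample.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # n synthetic feature vectors for one group; labels are defined
        # relative to that group's own center.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
        y = (X.sum(axis=1) > 5 * shift).astype(int)
        return X, y

    # Training set: group A dominates (950 examples), group B is rare (50).
    X_a, y_a = make_group(950, shift=0.0)
    X_b, y_b = make_group(50, shift=2.0)
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

    # Fresh, equal-sized test sets: accuracy comes out much higher for the
    # majority group than for the underrepresented one.
    Xa_test, ya_test = make_group(1000, shift=0.0)
    Xb_test, yb_test = make_group(1000, shift=2.0)
    print("accuracy, group A:", model.score(Xa_test, ya_test))
    print("accuracy, group B:", model.score(Xb_test, yb_test))

The decision boundary the model learns is dominated by the majority group, so the minority group's examples are effectively treated as outliers, which is exactly the dynamic the article describes.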

One of our superstar alumnae, Joy Buolamwini, wrote about a similar set of experiences. She's an African-American woman who works with computer vision, and the standard face-recognition libraries don't recognize her face. She lays the responsibility for fixing these problems on the backs of "those who have the power to code systems." C.P. Snow would go further: he'd say that it's all our responsibility, as part of a democratic process. Knowing about algorithms and demanding transparency when they affect people's lives is one of the responsibilities of citizens in the modern world.

The faces that are chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.

So what? As a result when I work on projects like the Aspire Mirror (pictured above), I am reminded that the training sets were not tuned for faces like mine. To test out the code I created for the Aspire Mirror and subsequent projects, I wore a white mask so that my face can be detected in a variety of lighting conditions.

The mirror experience brings back memories from 2009. While I was working on my robotics project as an undergraduate, I “borrowed” my roommate’s face so that I could test the code I was writing. I assumed someone would fix the problem, so I completed my research assignment and moved on.

Several years later in 2011, I was in Hong Kong taking a tour of a start-up. I was introduced to a social robot. The robot worked well with everyone on the tour except for me. My face could not be recognized. I asked the creators which libraries they used and soon discovered that they used the code libraries I had used as an undergraduate. I assumed someone would fix the problem, so I completed the tour and moved on.

Seven years since my first encounter with this problem, I realize that I cannot simply move on as the problems with inclusion persist. While I cannot fix coded bias in every system by myself, I can raise awareness, create pathways for more diverse training sets, and challenge us to examine the Coded Gaze — the embedded views that are propagated by those who have the power to code systems.

Source: InCoding — In The Beginning — Medium
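
For readers who want to see where the training-set decision hides in practice: the application programmer typically just loads a pre-trained detector, so the choice of which faces went into training was made upstream and invisibly. Here's a small sketch using OpenCV's stock Haar-cascade face detector. OpenCV is just one widely used library (I don't know which specific libraries Buolamwini's projects used), and "portrait.jpg" is a placeholder filename.

    # Sketch of a typical face-detection call using OpenCV's bundled,
    # pre-trained Haar cascade. The application code never sees the training
    # data; whatever faces the cascade was trained on determine who gets
    # detected. "portrait.jpg" is a placeholder image path.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("portrait.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("faces detected:", len(faces))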

August 22, 2016 at 7:31 am 11 comments
