Posts tagged ‘AI’

The gender imbalance in AI is greater than in CS overall, and that’s a big problem

My colleague, Rada Mihalcea, sent me a copy of a new (April 2019) report from the AI Now Institute, Discriminating Systems: Gender, Race, and Power in AI (see link here), which describes the diversity crisis in AI:

There is a diversity crisis in the AI sector across gender and race. Recent studies found only 18% of authors at leading AI conferences are women, and more than 80% of AI professors are men. This disparity is extreme in the AI industry: women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. For black workers, the picture is even worse. For example, only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and investment to redress this imbalance, the current state of the field is alarming.

Without a doubt, those percentages do not match the distribution of gender and ethnicity in the population at large. But we already know that participation in CS overall does not match the population. How do the AI numbers compare with the distribution of gender and ethnicity among CS researchers?

One comparison sample is the most recent crop of CS PhD graduates. Take a look at the 2018 Taulbee Survey from the CRA (see link here). 19.3% of CS PhDs went to women. That's terrible gender diversity compared to the population, and AI (at 10%, 15%, or 18%) is doing even worse. Only 1.4% of new CS PhDs were Black. From an ethnicity perspective, then, Google, Facebook, and Microsoft are doing surprisingly well.

The AI Now Institute report is concerned about intersectionality. “The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others.” I heard this concern at the recent NCWIT Summit (see link here).  The issues of women are not identical across ethnicities. The other direction of intersectionality is also a concern. My student, Amber Solomon, has published on how interventions for Black students in CS often focus on Black males: Not Just Black and Not Just a Woman: Black Women Belonging in Computing (see link here).

I had not previously seen a report on diversity in just one part of CS, and I'm glad to see it. AI (and particularly the sub-field of machine learning) is growing in importance. We know that having more diversity on a design team makes it more likely that a broader range of issues will be considered in the design process. We also know that biased AI technologies are already being developed and deployed (see the Algorithmic Justice League). A new Brookings Institution report identifies many of these biases and suggests ways of avoiding them (see report here). AI is one of the sub-fields of computer science where developing greater diversity is particularly important.

 

June 3, 2019 at 7:00 am 1 comment

C.P. Snow keeps getting more right: Why everyone needs to learn about algorithms #CS4All

When I give talks about teaching computer science to everyone, I often start with Alan Perlis and C.P. Snow in 1961. They made the first two public arguments for teaching computer science to everyone in higher education. Alan Perlis's talk was the more upbeat of the two, describing all the great things we can think about and do with computers. He offered the carrot. C.P. Snow offered the stick.

C.P. Snow foresaw that algorithms were going to run our world, and people would be creating those algorithms without oversight by the people whose lives would be controlled by them. Those who don’t understand algorithms don’t know how to challenge them, to ask about them, to fight back against them. Quoting from Martin Greenberger’s edited volume, Computers and the World of the Future (MIT Press, 1962), we hear from Snow:

Decisions which are going to affect a great deal of our lives, indeed whether we live at all, will have to be taken or actually are being taken by extremely small numbers of people, who are nominally scientists. The execution of these decisions has to be entrusted to people who do not quite understand what the depth of the argument is. That is one of the consequences of the lapse or gulf in communication between scientists and non-scientists.  There it is. A handful of people, having no relation to the will of society, have no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.

I was reminded of Snow's quote when I read the article linked below in the NYTimes. Increasingly, AI algorithms are controlling our lives, and they are programmed by data. If those data are overwhelmingly white and male, the algorithms will treat everyone else as outliers. And it's all "decisions in secret."

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Source: Artificial Intelligence’s White Guy Problem – The New York Times
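
ProPublica's finding is, at bottom, a comparison of error rates across groups. Since the company's data and formula are secret, the numbers below are invented; this is just a minimal Python sketch of the metric behind "twice as likely to mistakenly flag": the false positive rate, computed separately per group.

    # Minimal sketch with invented numbers (not ProPublica's data):
    # computing a per-group false positive rate for a binary risk score.
    # Each record: (group, predicted_high_risk, actually_reoffended)
    records = [
        ("A", True, False), ("A", True, True), ("A", False, False),
        ("A", True, False), ("B", True, True), ("B", False, False),
        ("B", False, False), ("B", True, False),
    ]

    def false_positive_rate(records, group):
        # FPR = fraction flagged high-risk among those who did NOT reoffend
        negatives = [r for r in records if r[0] == group and not r[2]]
        return sum(1 for r in negatives if r[1]) / len(negatives)

    for g in ("A", "B"):
        print(g, false_positive_rate(records, g))  # A: 0.67, B: 0.33

If group A's rate comes out at twice group B's, as in these made-up numbers, then innocent members of group A are twice as likely to be wrongly labeled high risk, which is exactly the asymmetry ProPublica reported.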

One of our superstar alumnae, Joy Buolamwini, wrote about a similar set of experiences. She's an African-American woman who works in computer vision, and the standard face-recognition libraries don't recognize her face. She lays the responsibility for fixing these problems on "those who have the power to code systems." C.P. Snow would go further: he'd say that it's all our responsibility, as part of a democratic process. Knowing about algorithms and demanding transparency when they affect people's lives is one of the responsibilities of citizens in the modern world.

The faces that are chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.

So what? As a result when I work on projects like the Aspire Mirror (pictured above), I am reminded that the training sets were not tuned for faces like mine. To test out the code I created for the Aspire Mirror and subsequent projects, I wore a white mask so that my face can be detected in a variety of lighting conditions.

The mirror experience brings back memories from 2009. While I was working on my robotics project as an undergraduate, I “borrowed” my roommate’s face so that I could test the code I was writing. I assumed someone would fix the problem, so I completed my research assignment and moved on.

Several years later in 2011, I was in Hong Kong taking a tour of a start-up. I was introduced to a social robot. The robot worked well with everyone on the tour except for me. My face could not be recognized. I asked the creators which libraries they used and soon discovered that they used the code libraries I had used as an undergraduate. I assumed someone would fix the problem, so I completed the tour and moved on.

Seven years since my first encounter with this problem, I realize that I cannot simply move on as the problems with inclusion persist. While I cannot fix coded bias in every system by myself, I can raise awareness, create pathways for more diverse training sets, and challenge us to examine the Coded Gaze — the embedded views that are propagated by those who have the power to code systems.

Source: InCoding — In The Beginning — Medium
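
One small, concrete version of "creating pathways for more diverse training sets" is simply auditing what a training set contains before training on it. Here is a minimal sketch (the file names and label scheme are invented for illustration; this is not Buolamwini's code):

    # Hypothetical sketch: tally the demographic balance of a labeled
    # face dataset before training a detector on it.
    from collections import Counter

    training_set = [
        {"image": "img001.png", "skin_type": "lighter"},
        {"image": "img002.png", "skin_type": "lighter"},
        {"image": "img003.png", "skin_type": "darker"},
        # ...a real dataset would have thousands of entries
    ]

    counts = Counter(example["skin_type"] for example in training_set)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} ({n / total:.0%})")

A heavily skewed tally here predicts exactly the failure Buolamwini describes: the model learns to treat one kind of face as "normal" and everyone else as an outlier.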

August 22, 2016 at 7:31 am 11 comments

Marvin Minsky and understanding things in more than one way

Marvin Minsky died last month.  I never met Marvin. I met his daughter, and worked with people who knew him well.   He must have been a remarkable person.

The NYTimes piece has several quotes from Alan Kay about Marvin.  Below is my favorite.  I’ve heard it before, and I think about it often when designing classes and lessons.

I want students to understand what I do in class, not just memorize it. I want them to understand it in more than one way. That's why I emphasize revision and multiple iterations so often in a class. I want them to understand well enough to transfer the knowledge, at least to near contexts.

 

For Dr. Kay, Professor Minsky’s legacy was his insatiable curiosity. “He used to say, ‘You don’t really understand something if you only understand it one way,’” Dr. Kay said. “He never thought he had anything completely done.”

Source: Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88 – The New York Times

February 12, 2016 at 8:15 am Leave a comment

First Workshop on AI-Supported Education for Computer Science

Shared by Leigh Ann Sudol-DeLyser (Visiting Scholar, New York University) with the SIGCSE list.

Dear SIGCSE-ers!

I would like to announce the First Workshop on AI-Supported Education for Computer Science, to be held at the Artificial Intelligence in Education conference this summer in Memphis, and invite the submission of papers from the SIGCSE community. Please see the website: https://sites.google.com/site/aiedcs2013/. Submissions are due by April 12, 2013.

Workshop Description:

Designing and deploying AI techniques within computer science learning environments presents numerous important challenges. First, computer science focuses largely on problem solving skills in a domain with an infinitely large problem space. Modeling the possible problem solving strategies of experts and novices requires techniques that represent a large and complex solution space and address many types of unique but correct solutions to problems. Additionally, with current approaches to intelligent learning environments for computer science, problems that are provided by AI-supported educational tools are often difficult to generalize to new contexts. The need is great for advances that address these challenging research problems. Finally, there is growing need to support affective and motivational aspects of computer science learning, to address widespread attrition of students from the discipline. Addressing these problems as a research community, AIED researchers are poised to make great strides in building intelligent, highly effective AI-supported learning environments and educational tools for computer science and information technology.

Topics of Interest:

  • Student modeling for computer science learning
  • Adaptation and personalization within computer science learning environments
  • AI-supported tools that support teachers or instructors of computer science
  • Intelligent support for pair programming or collaborative computer science problem solving
  • Automatic question generation or programming problem generation techniques
  • Affective and motivational concerns related to computer science learning
  • Automatic computational artifact analysis or goal/plan recognition to support adaptive feedback or automated assessment
  • Discourse and dialogue research related to classroom, online, collaborative, or one-on-one learning of computer science
  • Online or distributed learning environments for computer science

March 15, 2013 at 1:41 am Leave a comment

The Great Pretender: Turing as a Philosopher of Imitation – Ian Bogost – The Atlantic

“Everyone pretends.”  My favorite piece that I’ve read on Turing in honor of his Centenary. Ian has a wonderful insight into what’s powerful about Turing’s work.

But the computer itself reveals another example of pretense for Turing, thanks to his own theory of abstract computation and its implementation in the device known as the Turing machine. In the form Turing proposed, this machine is a device that manipulates symbols on a strip of tape. Through simple instructions like move forward, erase, write, and read, such a machine can enact any algorithm — and indeed, the design of modern CPUs is based directly on this principle.

Unlike other sorts of machines, the purpose of a Turing machine is not to carry out any specific task like grinding grain or stamping iron, but to simulate any other machine by carrying out its logic through programmed instructions. A computer, it turns out, is just a particular kind of machine that works by pretending to be another machine. This is precisely what today’s computers do–they pretend to be calculators, ledgers, typewriters, film splicers, telephones, vintage cameras and so much more.

via The Great Pretender: Turing as a Philosopher of Imitation – Ian Bogost – The Atlantic.
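
Turing's idea in that passage fits in a few lines of code. Below is a minimal sketch of a Turing machine simulator in Python: the run function is the fixed machine that "pretends," and the transition table is the machine it pretends to be (I've chosen binary increment as an example program; any other table would work with the same simulator).

    # Minimal Turing machine simulator. The simulator is fixed; swapping
    # in a different transition table makes it "pretend" to be a
    # different machine.
    def run(tape, state, head, table):
        cells = dict(enumerate(tape))      # sparse tape: position -> symbol
        while state != "halt":
            symbol = cells.get(head, "_")  # "_" is the blank symbol
            write, move, state = table[(state, symbol)]
            cells[head] = write
            head += -1 if move == "L" else 1
        return "".join(cells[i] for i in sorted(cells))

    increment = {                          # binary +1, head starts on last digit
        ("inc", "1"): ("0", "L", "inc"),   # carry: 1 becomes 0, keep moving left
        ("inc", "0"): ("1", "L", "halt"),  # absorb the carry and stop
        ("inc", "_"): ("1", "L", "halt"),  # carried past the left edge
    }

    print(run("1011", "inc", 3, increment))  # prints "1100", i.e., 11 + 1 = 12

Nothing about run knows it is an incrementer; feed it a different table and the same code is a different machine. That is the pretense Bogost is pointing at.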

July 19, 2012 at 2:57 am 1 comment

