Archive for September, 2012
Learnable Programming: Thinking about Programming Languages and Systems in a New Way
Bret Victor has written a stunningly interesting essay on how to make programming more learnable, and how to draw on more of the great ideas of what it means to think with computing. I love that he ends with: “This essay was an immune response, triggered by hearing too many times that Inventing on Principle was “about live coding”, and seeing too many attempts to “teach programming” by adorning a JavaScript editor with badges and mascots. Please read Mindstorms. Okay?” The essay is explicitly a response to the Khan Academy’s new CS learning supports, and includes many ideas from his demo/video on making programming systems more visible and reactive for the programmer, but goes beyond that video in elaborating a careful argument for what programming ought to be.
There’s so much in his essay that I strongly agree with. He’s absolutely right that we teach programming systems that have been designed for purposes other than learnability, and we need to build new ones that have a different emphasis. He uses HyperTalk as an example of a more readable, better designed programming language for learning, which I think is spot-on. His video examples are beautiful and brilliant. I love his list of characteristics that we must require in a good programming language for learners.
I see a boundary to Bret’s ideas, though. There are things that we want to teach about computing where his methods won’t help us. I recognize that this is just an essay, and maybe Bret’s broader vision does cover these additional learning objectives, too, but the ones I’m struggling with are not made easier by his visual approach.
Let me give two examples — one as a teacher, and the other as a researcher.
As a teacher: I’m currently teaching a graduate course on prototyping interactive systems. My students have all had at least one course in computer programming, but it might have been a long time ago. They’re mostly novices. I’m teaching them how to create high-fidelity prototypes — quite literally, programming artifacts to think with. The major project of the course is building a chat system.
- The first assignment involved implementing the GUI (in Jython with Swing). The tough part was not the visual part, laying out the GUI. The tough part was linking the widgets to the behaviors, i.e., the callbacks, the MVC part. It’s not visible, and it’s hard to imagine making visible the process of dealing with whatever-input-the-user-might-provide and connecting it to some part of your code which gets executed non-linearly. (“This handler here, then later, that handler over there.”) My students struggled with understanding and debugging the connections between user events (which occur sometime in the future) and the code that they’re writing now.
- They’re working on the second assignment now: Connecting the GUI to the Server. You can’t see the network, and you certainly can’t see all the things that can go wrong in a network connection. But you have to program for them. (A minimal sketch of both of these invisible pieces follows this list.)
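Here is a minimal sketch, in Jython with Swing, of those two invisible pieces: the callback wiring that is written now but runs at some future click, and the network call that can fail in ways you cannot see. The chat host, port, and widget names are hypothetical, not taken from the course assignments.

```
# A minimal Jython + Swing sketch. The server host and port are made up.
from javax.swing import JFrame, JButton, JTextField, JTextArea, JScrollPane, BoxLayout
from java.io import PrintWriter, IOException
from java.net import Socket

def sendMessage(event):
    # Runs later, whenever the user clicks Send -- not at the point it is defined.
    try:
        sock = Socket("chat.example.com", 5000)   # hypothetical chat server
        PrintWriter(sock.getOutputStream(), True).println(field.getText())
        sock.close()
        transcript.append("me: " + field.getText() + "\n")
    except IOException, e:                        # the network you cannot see
        transcript.append("Could not reach server: " + str(e) + "\n")

frame = JFrame("Chat prototype", defaultCloseOperation=JFrame.EXIT_ON_CLOSE)
frame.contentPane.layout = BoxLayout(frame.contentPane, BoxLayout.Y_AXIS)
field = JTextField(20)
transcript = JTextArea(8, 20)
frame.add(field)
frame.add(JButton("Send", actionPerformed=sendMessage))   # the invisible link to the handler
frame.add(JScrollPane(transcript))
frame.pack()
frame.visible = True
```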
As a researcher: I’ve written before about the measures that we have that show how badly we do at computing education, and about how important it is to make progress on those measures: like the rainfall problem, and what an IP address is and whether it’s okay to have Wikipedia record yours. What makes the rainfall problem hard is not just the logic of it, but not knowing what the input might be. It’s the invisible future.
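For readers who haven’t met it, here is a minimal Python sketch of one common phrasing of the Rainfall Problem (exact phrasings vary across the studies): average the non-negative values entered before a sentinel. The sentinel value and the test inputs are illustrative only.

```
def rainfall(values):
    total, count = 0, 0
    for v in values:
        if v == 99999:        # sentinel: stop reading input
            break
        if v < 0:             # bad data: skip it
            continue
        total += v
        count += 1
    if count == 0:            # the case that is easiest to forget: no valid data at all
        return None
    return total / float(count)

print(rainfall([12, -3, 5, 99999, 40]))   # 8.5
print(rainfall([99999]))                  # None
```

The arithmetic is trivial; the difficulty is deciding what to do about negative values, the sentinel, and a run with no valid data at all, which is exactly the input you cannot see when you write the code.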
I disagree with a claim that Bret makes (quoted below), that the programmer doesn’t have to understand the machine. The programmer does have to understand the notional machine (maybe not the silicon one), and that’s critical to really understanding computing. A program is a specification of the future behavior of some notional machine in response to indeterminate input. We can make it possible to see all of a program’s execution only if we limit the scope of what it is to be a program. To really understand programming, you have to imagine the future.
It’s possible for people to learn things which are invisible. Quantum mechanics, theology, and the plains of Mordor (pre-Jackson) are all examples of people learning about the invisible. It’s hard to do. One way we teach that is with forms of cognitive apprenticeship: modeling, coaching, reflection, and eliciting articulation.
Bret is absolutely right that we need to think about designing programming languages to be learnable, and he points out a great set of characteristics that help us get there. I don’t think his set gets us all the way to where we need to be, but it would get us much further than we are now. I’d love to have his systems, then lay teaching approaches like cognitive apprenticeship on top of them.
Thus, the goals of a programming system should be:
to support and encourage powerful ways of thinking
to enable programmers to see and understand the execution of their programs
A live-coding Processing environment addresses neither of these goals. JavaScript and Processing are poorly-designed languages that support weak ways of thinking, and ignore decades of learning about learning. And live coding, as a standalone feature, is worthless.
Alan Perlis wrote, “To understand a program, you must become both the machine and the program.” This view is a mistake, and it is this widespread and virulent mistake that keeps programming a difficult and obscure art. A person is not a machine, and should not be forced to think like one.
Planet CAS: Blogs About Computing At School in the UK
I’ve been enjoying the Community pages for the UK Computing at Schools effort — so much going on there! I just saw a link from Michael Kölling referencing the blog linked below — an aggregator of UK computing education blogs. Really interesting set!
This is a blog aggregator collecting together the latest content from various blogs relating to computing in schools in the UK. The aggregator is maintained by Neil Brown (@twistedsq), who decides which blogs to include. Roughly, the inclusion criteria are that the blog should:
be related to computing (not just ICT or the use of technology) at school, with a UK focus,
be at least semi-regularly updated (a new post at least every 90 days),
have several posts already,
feature original content (not just links or quotes).
New ACM Classification System doesn’t get Computing Education
The new ACM classification system was just released. The goal is to create a taxonomy for all of computing research. It’s a significant improvement on the old one. Human-Centered Computing is one of the top-level branches now, which is terrific.
Unfortunately, computing education is classified as a “Professional Topics” issue. What’s particularly odd about that is that “computing literacy” and “K-12 education” and even “computational thinking” appear (correctly, in my opinion) under “computing education,” but none of those are about creating professionals or even about conveying professional practice. Computing education research is a Human-Centered Computing research issue. It’s disappointing that it’s been moved into a branch of the taxonomy that doesn’t reflect that.
Computing education is not about being a computing professional, especially today when much of the world is trying to understand how computer science fits into schools. Consider some of the relevant computing education research questions: What should (say) a fourth grader learn about computing, how should we teach it, and what challenges will we face? None of those questions are about being, becoming, or communicating about computing professionals. Think about it from the perspective of STEM education more generally — students don’t study biology in order to become biologists.
Does it really matter? I think it does. A research taxonomy is a reification of how the field thinks about itself. It’s supposed to be a reflection of how “Computing” thinks about its constituent elements, and how we describe ourselves to the world. That’s why the placement of computing education is important. Placing it under “Professional Topics” suggests that computing education is about “creating more professionals” or “making more of us.”
There’s certainly a time and place to make the argument that we need “more of us.” When the CCC argues for the value of computer science, they are arguing that what computing professionals and researchers do is important and requires more funding. This is definitely saying that we need more of us to do the work. In some sense, this is what Physics does when they are arguing for some super Ballistic Supercollider (some super BS) — “we are important, we need more of us, society needs what we do.”
But that’s not why physics is taught in most high schools. It’s not because we need thousands of physicists to find the Higgs Boson. Rather, we need citizens who understand why it’s important to find the Higgs Boson, and more importantly, how physics helps them to understand their own world (and maybe why the Higgs Boson is part of understanding our world). The argument that ACM and NSF are making about computing education is in this latter category. See Cameron Wilson’s blog post on “All Hands on Deck! Scaling K-12 Computer Science Education”. The argument for computer science in K-12 (or “computing for everyone/all”) is not that we need to make lots of professionals. My argument is that computing education informs human-computer interaction — that we as humans can do more, do better, and understand our world more if we (everyone/all) understand something about computing.
That’s why putting “Computing Education” under “Professional Topics” (along with “History of Computing,” “Computing Industry,” and “Computing Profession”) is wrong. It implies that Computing Education is about “us” when really it’s about “everyone.”
Where Computing Education appears in the Classification isn’t important in any practical sense. It’s important for how we think about ourselves and how we explain ourselves to others.
Who completes a MOOC?
We’ve wondered on this blog before: Who completes a MOOC? Who doesn’t? edX has released some data on who completed their Circuits & Electronics course, and it’s pretty interesting. These aren’t newbies: 37% had a bachelor’s degree, 28% had a master’s, and 6% had doctorates. This is only one course, and it’s only the completers, but I’m betting that it’s comparable to other MOOCs when considering (for example) all the folks who got perfect scores on the Udacity CS101 final exam.
The findings are limited and have not been formally compiled or analyzed — Agarwal relayed them to Inside Higher Ed after logging into the platform’s back end from his Cambridge, Massachusetts office. But perhaps the most interesting piece of data is that 80 percent of respondents said they had taken a “comparable” course at a traditional university prior to working their way through Circuits & Electronics.
…
One way to read the finding is to say that although the Circuits & Electronics course was open to anyone, anybody who had not already paid for traditional education would be ill-equipped to succeed in the course.
To some extent, Agarwal expected that would be the case for Circuits & Electronics, which listed certain physics and math courses as prerequisites. The survey findings affirmed that the successful students were well-educated: about 78 percent of the respondents said they had previously taken a course on vectors or differential equations. Only 4 percent said they had never taken calculus.
via edX explores demographics of most persistent MOOC students | Inside Higher Ed.
Are computing educators professionally and legally required to change and improve their practice?
I did a Blog@CACM post this weekend asking a question that I’ve been wondering: Are we as computing educators professionally and legally required to change our practice in order to diversify our classes? In the United States, we have a law called Title IX that says:
No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance.
Because of this law, many female-only athletic programs have been created in the US, with scholarships, so that women would have the same opportunities as men do in sports like football and baseball. Does this same law apply to academics, and specifically, computer science?
There are huge benefits for having a computing degree. Jane Margolis says that those who don’t get access to computing get “Stuck in the Shallow End” of the economic pool. If we are constructing our degrees in such a way that women don’t get access, are we “denying them the benefits of” our education programs?
“But women can take our programs! There are women in our programs!” one might reply. Yes, but few. Why? If the reason is bias (even if unconscious), then I think that Title IX would require us to change. I wonder if Title IX creates (at least) a legal obligation to monitor gender participation in computing programs, and to seek to improve that participation.
I do believe that higher education computing teachers ought to uphold professional standards, but it’s not obvious to me what that entails, and if there are legal requirements in addition to the professional obligations. I wrote the blog post to explore the question. What are our obligations, as computing education professionals?
New National Academies report on Discipline-Based Education Research
I’ve just started looking at this report — pretty interesting synthesis of work in physics education research, chemistry ed research, and others.
The National Science Foundation funded a synthesis study on the status, contributions, and future direction of discipline-based education research (DBER) in physics, biological sciences, geosciences, and chemistry. DBER combines knowledge of teaching and learning with deep knowledge of discipline-specific science content. It describes the discipline-specific difficulties learners face and the specialized intellectual and instructional resources that can facilitate student understanding.
Discipline-Based Education Research is based on a 30-month study built on two workshops held in 2008 to explore evidence on promising practices in undergraduate science, technology, engineering, and mathematics (STEM) education. This book asks questions that are essential to advancing DBER and broadening its impact on undergraduate science teaching and learning. The book provides empirical research on undergraduate teaching and learning in the sciences, explores the extent to which this research currently influences undergraduate instruction, and identifies the intellectual and material resources required to further develop DBER.
Discipline-Based Education Research provides guidance for future DBER research. In addition, the findings and recommendations of this report may invite, if not assist, post-secondary institutions to increase interest and research activity in DBER and improve its quality and usefulness across all natural science disciplines, as well as guide instruction and assessment across natural science courses to improve student learning. The book brings greater focus to issues of student attrition in the natural sciences that are related to the quality of instruction. Discipline-Based Education Research will be of interest to educators, policy makers, researchers, scholars, decision makers in universities, government agencies, curriculum developers, research sponsors, and education advocacy groups.
Female CS PhDs’ research areas
I was surprised by these data. Why would databases have one of the highest percentages of women? Why would graphics and visualization have one of the lowest? My guess is that it doesn’t have to do with features of the area and the match to women’s concerns and interests, but instead, is about how welcoming the culture of the field is to women.
The overall percent of women receiving PhDs in computing was 20.3%, but this representation is unevenly distributed across specialty areas. Table 1 shows numbers and percentages of specialty area by gender over the four years. Representation as measured by the percentage of graduates within a specialty who are women is one measure of women’s participation; raw numbers of women completing PhDs in a specialty is another. Because there are higher numbers of students graduating with specialties in Artificial Intelligence and Software Engineering, these areas have relatively high total numbers of women even though the percent of women in these areas is about average.
Highest Representation of Women (% within area): Information Science (38%), Human-Computer Interaction (36%), and Databases/Information Retrieval (26%).
Highest Numbers of Women: Artificial Intelligence, Databases/Information Retrieval, and Software Engineering
Lowest Representation of Women (% within area): Programming Languages/Compilers, Operating Systems, and Graphics/Visualization (all 14%).
via Computing Research News – Online – Computing Research Association.
Website that takes your online classes for you
I’m not sure that this is real — I tried to “get a quote” and couldn’t get the submit form to work right. And the “Read More” page is gobbledygook. Even if it’s satire, it raises a real point: there’s certainly a market for ‘We Take Your Online College Classes for You and Get You an “A”’.
You are struggling with your online classes or homework and you want someone to do it for you. We can handle almost any subject and customer service is a priority. Our company culture revolves around making sure you feel safe and satisfied knowing that your work is being done by an expert within your specified deadline. We are here to serve you around the clock by email, live chat, and phone. For all of your academic needs, WeTakeYourClass wants to be the one you turn to time and time again.
via WETAKEYOURCLASS.COM- We Take Your Online Class! We Do Your Homework, Tests, Classes For You!.
New NSF Initiative: Graduating 10,000 New Engineers and Computer Scientists
Interesting new initiative between the White House and NSF to increase the number of graduates in computing and engineering by focusing on retention. (I strongly agree, because retention is where we’ve been focusing our attention.)
This letter announces a cooperative activity between NSF and members of the Jobs Council’s High Tech Education working group, led by Intel and GE, to stimulate comprehensive action at universities and colleges to help increase the annual number of new B.S. graduates in engineering and computer science by 10,000. Proposals for support of projects would be submitted under a special funding focus, “Graduate 10K+,” within the NSF Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP); see http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5488.
Studies have shown that retention during the critical first two years in a student’s major, or along the path towards declaration of a major, is an excellent predictor of eventual graduation with a STEM degree. Recognizing that the correlation between retention and graduation is particularly strong for students in engineering and computer science, we invite proposals from institutions that can demonstrate their commitment to: (i) significant improvement in first and second year retention rates in these particular majors, beyond current levels; and (ii) sustained, institutionally-embraced practices (e.g., http://www.asee.org/retention-project) that lead, ultimately, to increased graduation. Jobs Council members anticipate providing support for this special funding focus, with the number of awards to be made contingent on the availability of funds.
Outrage over Udacity Statistics 101: But is it really worse than others?
AngryMath’s blog post on Udacity Statistics 101 (linked below) is detailed, compelling, and damning. It’s certainly not the best statistics course anywhere. But I have to wonder: Is it worse than average? It’s hard to teach statistics well (I really did try this last summer). It’s hard to teach anything well, and there’s evidence that we need to improve our teaching in computer science. This doesn’t feel like an indictment of MOOCs overall.
In brief, here is my overall assessment: the course is amazingly, shockingly awful. It is poorly structured; it evidences an almost complete lack of planning for the lectures; it routinely fails to properly define or use standard terms or notation; it necessitates occasional massive gaps where “magic” happens; and it results in nonstandard computations that would not be accepted in normal statistical work. In surveying the course, some nights I personally got seriously depressed at the notion that this might be standard fare for the college lectures encountered by most students during their academic careers.
Brief Trip Report on ICER 2012: Answering the global needs for computing education research
ICER 2012 in Auckland, New Zealand, was notable for having the highest number of submitted papers of any ICER (this was the 8th year), and almost as many attendees as last year, despite being a long trip for all the US and European delegates. In the end, there were 13 research papers accepted, 8 discussion papers (shorter papers, with shorter presentations and longer discussion periods), 16 attendees at the Doctoral Consortium, and 19 Lightning Talks. Despite all the successes, I’m worried about whether ICER can meet the global needs for computing education research.
The first talk of the conference, from Ian Utting and the BlueJ crew, ended up winning the People’s Choice award, voted on by the delegates (it used to be called the “Fool’s Award,” but has been renamed the “John Henry Award,” given to a paper with great “potential to be transformative”). BlueJ, probably the most successful and popular pedagogical Java IDE, is going to be outfitted in the Spring with event logging. We’ll know what the students are doing in BlueJ, at a large scale (probably about 1Tb/year). All of that data is going to get stored (anonymously) for use by researchers. The interesting discussion point is: What are web-scale questions in CS Education?
The Chair’s Award (new to ICER, kind of a best-paper award) was won by Colleen Lewis for her detailed explanation of how a middle schooler at a summer camp, working in Scratch, did his debugging. In a sense, Colleen (who just graduated from Berkeley and just started at Harvey Mudd College) was answering a comment from Ray Lister that was often quoted during the conference: that students sometimes demonstrate “bizarro programming behaviors.” Colleen carefully reconstructed the activity of the student and pieced together a story of how he thought through the problem, and how his behavior did make sense.
I tweeted some of my favorite one-liners from the conference. I’ll mention just a couple highlights here.
- Quintin Cutts presented an intriguing paper suggesting a new way of looking at questions that spur learning, with data drawn from Beth Simon’s CS:Principles course for non-majors. The idea is called the Abstraction Transition Taxonomy, and it distinguishes how we talk about problems (in natural language), how we talk about CS (“arrays”), and how we talk in code (e.g., “a[i]”); a made-up illustration appears after this list. They hypothesize that questions that lead to transitions between levels may be the most successful scaffolding for novice learning. So, how do we test that hypothesis?
- My former students, Drs. Brian Dorn and Allison Elliott Tew, are working on a new validated measure of attitudes towards computing, based on similar instruments developed for physics and biology. They presented their validation scheme at ICER. I’ve already read a draft of a future paper where they’re actually using the instrument, and I think that this is going to be a big deal.
- Lauren’s subgoal paper drew some oohs when I showed the results, a few shakes of heads (I don’t think everyone believed it), and some challenging questions. “Why aren’t you using this in your intro classes?” asked one questioner. “Or your advanced classes?” asked another. Yup. Good questions.
- One of the lightning talks had an interesting idea: form pairs for pair programming based on perception of efficacy. Put non-confident students together! The idea is that low self-efficacy grows in an information vacuum: “I’m doing worse than everyone else. I just know.” Having someone else with low confidence provides evidence that you’re not alone in struggling. No data were presented, but it’s an intriguing idea.
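To make those three levels concrete, here is a made-up illustration (mine, not an example from Quintin’s paper) of the same question asked at each level of the taxonomy.

```
# The same question at the three levels the Abstraction Transition Taxonomy
# distinguishes (a hypothetical illustration, not drawn from the paper):
#
#   Problem (natural language): "What was the score on the third quiz?"
#   CS vocabulary:              "What element is at index 2 of the scores array?"
#   Code:
scores = [88, 92, 75, 60]
print(scores[2])   # 75 -- a transition question asks students to move between these levels
```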
One of my mentors here at Georgia Tech is Jim Foley, who recommends structuring research around BHAGs — Big, Hairy, Audacious Goals. The BHAG for computing education is teaching computer science in all schools. What’s particularly scary about this BHAG is that it’s already happening. The US has the CS10K effort. The Computing At School effort is going strong in the UK. New Zealand and Denmark have both instituted new nationwide CS curricula in the last couple of years. There is an enormous need for research on how to help teachers learn to teach computer science, on the challenges of teaching computer science to school children (who have not declared a computer science major, and who are not necessarily motivated to learn computing for a career), and for evaluations of successful models for supporting learning by both teachers and school children. Maybe we’re just going to do it, and figure out later what works. But maybe there’s a better way.
How much of the ICER 2012 research could possibly inform these efforts? There’s Colleen’s interesting paper on a pre-teen debugging, and there’s Briana’s work on professional development efforts. From my read of the papers, that’s pretty much it that is directly relevant to the computing-at-schools/CS10K efforts. There were a few papers that addressed non-majors (like Quintin’s, and our statewide survey paper), but at the undergraduate level. The rest of ICER’s papers were seeking to understand and teach undergraduate CS majors.
It’s important to understand undergraduate CS majors and to improve their understanding. My personal research agenda is more about the latter than the former — it’s more important to me to learn how to teach better than to understand the effects of teaching that might itself be better if we built on everything that we know about teaching. But I do get the value of understanding understanding (or lack of understanding, or even misconceptions). There are far more high school teachers and schoolchildren than there are undergraduate majors, and they’re different. The oncoming problems are much bigger than the ones we’re currently facing.
How do we inform the broader need for research on computing education? Is ICER the place to look for that research? Or will ICER (and SIGCSE) always be a mostly undergraduate-oriented conference (and organization)? If not ICER and SIGCSE, where should we look? I was a reviewer for AERA’s new engineering and computing education division, and while I was excited about those papers, they’re coming at the problems almost entirely from the education perspective. There was little from ICER and the computing education research community. The problems that we need solved will require work from both communities/disciplines/fields. How do we get there?
Survey on Scaling K-12 Computer Science Education: Please Complete!
NSF has reached out to the education side (yay — we really need that!) to start to get a handle on what it will take to scale CS education across the US in schools. Cameron Wilson wrote a blog post on the effort (quoted and linked below). The University of Chicago “landscape survey” that they’re asking everyone involved in K-12 CS Education to take is here. Please do fill it out and help U. Chicago get a picture of what’s going on now.
It’s a comprehensive survey — be sure to leave enough time for it. The goal is to get a handle on our overall capacity to offer professional development. So, the survey is asking for details on every offering of every professional development session across the country, including uploaded agendas (i.e., you can’t provide a URL to a webpage). We’re still trying to understand some of the terms in the survey, e.g., an on-line component seems to imply a webinar or using a tool like Piazza outside of the face-to-face time.
Ensuring wide-spread access to rigorous and engaging K-12 computer science education is a grand challenge, and this challenge revolves around key questions: How much professional development around new curricular approaches do we need and what models are out there? How are we going to directly engage with states, school districts and teachers on these issues? What will campaigns of sustained advocacy and awareness look like that will ensure the policy environment supports reform? If we are successful in scaling, how do we sustain reform?
The University of Chicago’s Urban Education Institute (UEI) and the University of Chicago’s Center for Elementary Mathematics and Science Education (CEMSE) are carrying out an 18-month study for ACM’s partnership to better understand the answers to these questions and the availability and nature of computer science professional development for K-12 teachers.
via All Hands on Deck! Scaling K-12 Computer Science Education | blog@CACM | Communications of the ACM.
A Question that Everyone should Ask their CS Professors: Why do we have to learn this?
There’s a meme going around my College these days, about the 10 questions you should never ask your professor (linked below). Most of them are spot-on (e.g., “Did we do anything important while I was out?”). I disagreed with the one I quote below.
One of our problems in computer science is that we teach things for reasons that even we don’t know. Why do we teach how hash tables are constructed? Does everybody need to know that? I actually like the idea of teaching everybody about hash functions, but it’s a valid question, and one that we rarely answer to ourselves and definitely need to answer to our students.
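For readers wondering what “how hash tables are constructed” covers, here is a minimal, illustrative Python sketch of a bucket-based (chaining) hash table; it is not a claim about what any particular course teaches.

```
# A minimal chaining hash table: hash the key, pick a bucket, search within it.
class HashTable(object):
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an integer; modulo picks one of the buckets
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, v) in enumerate(bucket):
            if k == key:                 # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # colliding keys simply share a bucket

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("rainfall", 8.5)
print(table.get("rainfall"))   # 8.5
```

Whether every student needs to see that construction, as opposed to simply using a built-in dictionary, is exactly the question we rarely answer for ourselves.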
Why we’re teaching what we’re teaching is a critical question to answer for broadening participation, because we have to explain to under-represented minorities why it’s worth sticking with CS. Even more important for me is explaining this to non-majors, and in turn, to our colleagues in other departments. Computing is a fundamental 21st Century literacy, and we have to explain why it’s important for everyone to learn. “Stuck in the Shallow End” suggests that making computing education ubiquitously available can help to broaden participation, but we can only get it ubiquitously available by showing that it’s worth it.
I’m back from New Zealand and ICER: today, yesterday, tomorrow — days get confused crossing a dateline. (We landed in both Sydney and Los Angeles at 10:30 am on 13 September.) I spent several hours of the trip reading Mike Hewner’s complete dissertation draft. Mike has been asking the question, “How do CS majors define the field of computer science, and how do their misconceptions lead to unproductive educational decisions?” He did a Grounded Theory analysis with 37 interviews (and when I tell this to people who have done Grounded Theory, their jaws drop — 37 interviews is a gargantuan number of interviews for Grounded Theory) at three different institutions. One of his findings is that even CS majors really have no idea what’s going on in a class before they get there. The students’ ability to predict the content of future courses, even courses that they’d have to take, was appallingly bad. Even our own majors don’t know why they’re taking what they’re taking, or what will be in the class when they go to take it.
We will have to tell them.
“Why do we have to learn this?” In some instances, this is a valid question. If you are studying medical assisting, asking why you have to learn a procedure can be a gateway to a better question about when such a procedure would be necessary. But it should be asked that way–as a specific question, not a general one. In other situations, like my history classes, the answer is more complicated and has to do with the composition of a basic liberal arts body of knowledge. But sometimes, a student asks this because they do not think they should have to take a class and want a specific rationale. In that case, I respond, “Please consult your course catalog and program description. If you don’t already know the answer to that question, you should talk to your advisor about whether or not this is the major for you.”
via 10 Questions You Should Never Ask Your Professor! – Online Colleges.
Does Google get that teachers innovate?
I love this post! Google’s Eric Schmidt doesn’t grok educators. Since Schmidt highlights “graduate students,” I think he’s dissing professors as well as K-12 teachers when he says that innovation doesn’t come from “established institution” educators.
@EricSchmidt: Innovation never comes from the established institutions. It’s always a graduate student or a crazy person or somebody with a great vision. Sal is that person in education in my view. He built a platform. If that platform works it could completely change education in America.
Mr. Chairman, I hate to say it but you are dead wrong, insultingly wrong, about educators.
Educators (who are probably some of @Google product’s biggest fans) are indeed innovators. What is the main difference between daily innovations and Khan Academy software? Funding. Bill Gates and Google (e.g. you) stumbled upon Khan’s youtube videos, (first made in his closet, by himself) and thought to fund it. Now, with a team, offices, software designers, backed by tons of financial support, Sal Khan can run as far as dreams can take him. I applaud him, don’t misconstrue my point here. I think he’s a really smart guy, doing really smart things, that hit a very lucky break that helps him continue to grow.
via Dear @Google Chairman @EricSchmidt, You Are WRONG About Educators « Christopher Lehman.
SAT is less predictive for females?
I’d not heard this claim before, seen below in an interesting USA Today piece on trying to get more women into STEM fields. Is it really the case that math SAT scores are not as predictive for females as for males? I found one study about the SAT’s predictive power, but it doesn’t seem to say that the SAT is less predictive for women. I found other pieces complaining about the predictive power of the SAT, but I didn’t see anything about the role of gender.
Not to be ignored is the school’s decision in 2007 to make SAT scores optional in admissions. Tichenor says math SAT scores were not accurately predicting the success of its female students. Historically, average math SAT scores for women have been lower than those for men.
Celina Dopart, who graduated this spring from Worcester Polytechnic with a degree in aerospace engineering and is headed to the Massachusetts Institute of Technology this fall for graduate work, says she submitted her scores, but liked the message sent by the test-optional policy.
via Math and science fields battle persistent gender gap – USATODAY.com.