Posts tagged ‘educational technology’

Do we really want computerized personalized tutoring systems? Answer: Yes

An excerpt from Mitchel Resnick’s new book Lifelong Kindergarten: Cultivating Creativity through Projects, Passion, Peers, and Play appears in the Hechinger Report (quoted below). Mitchel argues against computerized personal tutoring systems because they are only good for “highly structured and well-defined knowledge,” and because we don’t know how to build tutoring systems that teach important topics like creativity and ethics.

Agreed, but we are not currently reaching all students with the “highly structured and well-defined knowledge” that we want them to have. We prefer students to have well-educated teachers, and we want students to learn creativity and ethics, too. But if we can teach topics like mathematics well with personalized tutoring systems, why shouldn’t we use them? Here in Atlanta, students are not learning mathematics well (see blog post referencing an article by Kamau Bobb). We have good results teaching students algebra with cognitive tutors.

Here’s my concern: Wealthy schools can reject computerized personal tutoring systems because they can afford well-trained teachers, which means that there is less of a demand for computerized personal tutoring systems. Lower demand means higher costs, which means that less-wealthy schools can’t afford them. If we encourage more computerized personal tutoring systems where they are appropriate, more of them get created, they get better, and they get cheaper.

But I’m skeptical about personalized tutoring systems. One problem is that these systems tend to work only in subject areas with highly structured and well-defined knowledge. In these fields, computers can assess student understanding through multiple-choice questions and other straightforward assessments. But computers can’t assess the creativity of a design, the beauty of a poem, or the ethics of an argument. If schools rely more on personalized tutoring systems, will they end up focusing more on domains of knowledge that are easiest to assess in an automated way?

Source: OPINION: Do we really want computerized systems controlling the learning process? – The Hechinger Report

January 3, 2018 at 7:00 am 6 comments

Disrupt This!: MOOCs and the Promises of Technology by Karen Head

Over the summer, I read the latest book from my Georgia Tech colleague, Karen Head. In 2013, Karen taught a MOOC on freshman composition, as part of a project funded by the Gates Foundation. They wanted to see if MOOCs could be used to meet general education requirements. Karen wrote a terrific series of articles in The Chronicle of Higher Education about the experience (you can see my blog post on her last article in the series here). Her experience is the basis for her new book Disrupt This! (link to Amazon page here). There is an interview with her at Inside Higher Education that I also recommend (see link here).

In Disrupt This!, Karen critiques the movement to “disrupt education” with a unique lens. I’m an education researcher, so I tend to argue with MOOC advocates with data (e.g., my blog post in May about how MOOCs don’t serve to decrease income inequality). Karen is an expert in rhetoric. She analyzes two of the books at the heart of the education disruption literature: Clayton Christensen and Henry Eyring’s The Innovative University: Changing the DNA of Higher Education from the Inside Out and Richard DeMillo’s Abelard to Apple: The Fate of American Colleges and Universities. She critiques these two books from the perspective of how they argue — what they say, what they don’t say, and how the choice of each of those is designed to influence the audience. For example, she considers why we like the notion of “disruption.”

Disruption appeals to the audience’s desire to be in the vanguard. It is the antidote to complacency, and no one whose career revolves around the objectives of critical thinking and originality—the pillars of scholarship—wants to be accused of that… Discussions of disruptive innovation frequently conflate “is” (or “will be”) and “ought.” In spite of these distinctions, however, writers often shift from making dire warnings to an apparently gleeful endorsement of disruption. This is not unrelated to the frequent use of millenarian or religiously toned language, which often warns against a coming apocalypse and embraces disruption as a cleansing force.

Karen is not a luddite. She volunteered to create the Composition MOOC because she wanted to understand the technology. She has high standards and is critical of the technology when it doesn’t meet those standards. She does not suffer gladly the fools who declare the technology or the disruption as “inevitable.”

The need for radical change in today’s universities—even if it is accepted that such change is desirable—does not imply that change will inevitably occur. To imply that because the church should have embraced the widespread publication of scripture, modern universities should also embrace the use of MOOCs is simply a weak analogy.

Her strongest critique focuses on who these authors are. She argues that the people who are promoting change in education should (at least) have expertise in education. Her book mostly equates expertise with experience. My colleagues and I work to teach faculty about education, to develop their expertise before they enter the classroom (as in this post). I suspect Karen would agree with me about different paths to develop expertise, but she particularly values getting to know students face-to-face. She’s angry that the authors promoting education disruption do not know students.

It is a travesty that the conversation about the reform or disruption of higher education is being driven by a small group of individuals who are buffered from exposure to a wide range of students, but who still claim to speak on their behalf and in their interests.

Disrupt This! gave me a new way to think about MOOCs and the hype around disruptive technologies in education. I often think in terms of data. Karen shows how to critique the rhetoric — the data are less important if the argument they are supporting is already broken.

October 6, 2017 at 7:00 am 2 comments

The Father Of Mobile Computing Is Not Impressed: The Weight of Redefining the Normal

I have been fortunate to have heard Alan Kay talk on the themes in this interview many times, but either he’s getting better at it or I’m learning enough to understand him better, because this was one of my favorites. (Thanks to Ben Shapiro for sending it to me.)  He ties together Steve Jobs, Neil Postman, and Maria Montessori to explain what we should be doing with education and technology, and critiques the existing technology as so much less than what we ought to be doing.  In the quote below, he critiques Tim Berners-Lee for giving us a World Wide Web which was less than what we already knew how to do.  The last paragraph quoted below is poignant: It’s so hard to fix the technology once it’s established because of “the weight of this redefining of the normal.”

What I understood this time, which I hadn’t heard before, was the trade-off between making technology easier and making people better.  I’ve heard Alan talk about using technology to improve people, to help them learn, to challenge their thinking.  But Alan led the team that invented the desktop user interface — he made computing easier.  Can we have both?  What’s the balance that we need? That’s where Neil Postman and Bertrand Russell come in, as gifted writers who drew us in and then changed our minds. That’s why we need adults who know things to create a culture where children learn 21st century thinking and not oral culture (that’s the Maria Montessori part), and why the goal should be about doing what’s hard — not doing what’s universal, not doing what pre-literate societies were doing.  Alan critiques the iPhone as not much better than the television for learning, when the technology in the iPhone could have made it so much more.

He tosses out another great line near the end of the interview, “How stupid is it, versus how accepted is it?”  How do we get unstuck?  The iPhone was amazing, but how do we roll back the last ten years to say, “Why didn’t we demand better? How do we shuck off the ‘the weight of this redefining of the normal’ in order to move to technology that helps us learn and grow?”

And so, his conception of the World Wide Web was infinitely tinier and weaker and terrible. His thing was simple enough with other unsophisticated people to wind up becoming a de facto standard, which we’re still suffering from. You know, [HTML is] terrible and most people can’t see it.

FC: It was standardized so long ago.

AK: Well, it’s not really standardized because they’re up to HTML 5, and if you’ve done a good thing, you don’t keep on revving it and adding more epicycles onto a bad idea. We call this reinventing the flat tire. In the old days, you would chastise people for reinventing the wheel. Now we beg, “Oh, please, please reinvent the wheel.” At least give us what Engelbart did, for Christ’s sake.

But that’s the world we’re in. We’re in that world, and the more stuff like that world that is in that world, the more the world wants to be that way, because that is the weight of this redefining of the normal.

Source: The Father Of Mobile Computing Is Not Impressed

September 22, 2017 at 7:00 am 3 comments

How to Write a Guzdial Chart: Defining a Proposal in One Table

In my School, we use a technique for representing an entire research proposal in a single table. I started asking students to build these logic models when I got to Georgia Tech in the 1990s. In Georgia Tech’s Human-Centered Computing PhD program, they have become pretty common. People talk about building “Guzdial Charts.” I thought that was cute — a local cultural practice that got my name on it.

Then someone pointed out to me that our HCC graduates have been carrying the practice with them. Amy Voida (now at U. Colorado-Boulder) has been requiring them in her research classes (see syllabus here and here). Sarita Yardi (U. Michigan) has written up a guide for her students on how to summarize a proposal in a single table. Guzdial Charts are a kind of “thing” now, at least in some human-centered computing schools.

Here, I explain what a Guzdial Chart is, where it came from, and why it should really be a Blumenfeld Chart [*].

Phyllis Teaches Elliot Logic Models

In 1990, I was in Elliot Soloway’s office at the University of Michigan as he was trying to explain an NSF proposal he was planning with School of Education professor, Phyllis Blumenfeld. (When I mention Phyllis’s name to CS folks, they usually ask “who?” When I mention her name to Education folks, they almost always know her — maybe for her work in defining project-based learning or maybe her work in instructional planning or maybe her work in engagement. She’s retired now, but is still a Big Name in Education.) Phyllis kept asking questions. “How many students in that study?” and “How are you going to measure that?” She finally got exasperated.

She went to the whiteboard and said, “Draw me a table like this.” Each row of the table is one study in the overall project.

  • Leftmost column: What are you trying to achieve? What’s the research question?
  • Next column: What data are you going to collect? What measures are you going to use (e.g., survey, log file, GPS location)?
  • Next column: How much data are you going to collect? How many participants? How often are you going to use these measures with these participants (e.g., pre/post? Midterm? After a week delay?)?
  • Next column: How are you going to analyze these data?
  • Rightmost column: What do you expect to find? What’s your hypothesis for what’s going to happen?

This is a kind of a logic model, and you can find guides on how to build logic models. Logic models are used by program evaluators to describe how resources and activities will lead to desired impacts. This is a variation that Phyllis made us use in all of our proposals at UMich. (Maybe she invented it?) This version focused on the research being proposed. Each study reads on a row from left-to-right,

  • from why you were doing it,
  • to what you were doing,
  • to what you expected to find.
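To make the row structure concrete, here is a small sketch in Python. The chart is just one dict per study, one key per column; the study shown (its question, measures, sample size, and analysis) is invented for illustration, not taken from any actual proposal.

```python
# A minimal sketch of a Guzdial Chart as data: one dict per study (row),
# with one key per column, read left-to-right from "why" to "expected finding".
# The example study below is hypothetical.
COLUMNS = ["Research Question", "Data/Measures", "Participants/Schedule",
           "Analysis", "Expected Outcome"]

chart = [
    {
        "Research Question": "Does the new tutor improve algebra problem-solving?",
        "Data/Measures": "Pre/post written test; tutor log files",
        "Participants/Schedule": "60 students, pre/post over one semester",
        "Analysis": "Paired t-test on test gains",
        "Expected Outcome": "Tutor group shows significant learning gains",
    },
]

def render(chart, columns=COLUMNS):
    """Print each study as one row, columns separated left-to-right."""
    lines = []
    for i, study in enumerate(chart, 1):
        lines.append(f"Study {i}: " + " | ".join(study[c] for c in columns))
    return "\n".join(lines)

print(render(chart))
```

The point of the exercise, as Phyllis’s whiteboard version made clear, is that every row has to fill every column: if you can’t say what you’ll measure or what you expect to find, the row (and the study) isn’t ready.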

When I got to Georgia Tech, I made one for every proposal I wrote. I made my students do them for their proposals, too. Somewhere along the way, lots of people started doing them. I think Beth Mynatt first called them “Guzdial Charts,” and despite my story about Phyllis Blumenfeld’s invention, the name stuck. People at Georgia Tech don’t know Phyllis, but they did know Guzdial.

Variations on a Guzdial Chart Theme

The critical part of a Guzdial Chart is that each study is one row, and includes purpose, methods, and expected outcome. There are lots of variations. Here’s an example of one that Jason Freeman (in our School of Music) wrote up for a proposal he was doing on EarSketch. He doesn’t list hypotheses, but it still describes purpose and methods, one row per study.

In Sarita’s variation, she has the students put the Expected Publication in the rightmost column. I like that — very concrete. If you’re in a discipline with some clearly defined publication targets, with a clear distinction between them (e.g., the HCI community where Designing Interactive Systems (DIS) is often about process, and User Interface Software and Technology (UIST) is about UI technologies), then the publication targets are concrete and definable.

My former student, Mike Hewner, did one of the most qualitative dissertations of any of my students. He used a Guzdial Chart, but modified it for his study. Still one row per study, still including research question, hypothesis, analysis, and sampling.

I still use Guzdial Charts, and so do my students. For example, we used one to work through the story for a paper. Here’s one that we started on a whiteboard outside of my office, and we left it there for several weeks, filling in the cells as they made sense to us.

[Photo: a Guzdial Chart on the whiteboard outside my office]

A Guzdial Chart is a handy way of summarizing a research project and making sure that it makes sense (or to use when making sense), row-by-row, left-to-right.

 

___________

[*] Because Ulysses now makes it super-easy to post to blogs, and I do most of my writing in Ulysses, I accidentally posted this post to Medium — my first ever Medium post.  I wanted this to appear in my WordPress blog, too, so I decided to publish two blog posts: the Medium one on Blumenfeld Charts, and this one on Guzdial Charts.

October 3, 2016 at 7:05 am 2 comments

NSF director unveils big ideas, with an eye on the next President and Congress

Interesting and relevant for this list. There’s a lot in the NSF big ideas document (see link here) about using technology for learning, but there’s also some on what we want students to know (including about computing technology), e.g., “the development and evaluation of innovative learning opportunities and educational pathways, grounded in an education-research-based understanding of the knowledge and skill demands needed by a 21st century data-capable workforce.”

The six “research” ideas are intended to stimulate cross-disciplinary activity and take on important societal challenges. Exploring the human-technology frontier, for example, reflects NSF’s desire “to weave in technology throughout the fabric of society, and study how technology affects learning,” says Joan Ferrini-Mundy, who runs NSF’s education directorate. She thinks it will also require universities to change how they educate the next generation of scientists and engineers.

Source: NSF director unveils big ideas, with an eye on the next president and Congress | Science | AAAS

June 27, 2016 at 8:03 am Leave a comment

The Indian Education Context is Completely Different from the US Education Context

At LaTICE 2016, I attended a session on teacher professional development. I work at preparing high school CS teachers. I felt like I’d be able to relate to the professional development work. I was wrong.

One of the large projects presented at LaTICE 2016 was the T10kT project (see link here) whose goal is to use technology to train 10,000 teachers. What I didn’t realize at first was that the focus is on higher-education teachers, not high school teachers. The only high school outreach activity I learned about at LaTICE 2016 was from the second keynote, on an Informatics Olympiad from Madhavan Mukund (see slides here) which is only for a select group of students.

India has 500 universities, and over 42,000 higher education institutions. They have an enormous problem trying to maintain the quality of their higher-education system (see more on the Wikipedia page). They rely heavily on video, because videos can be placed on a CD or USB drive and mailed. The T10kT instructors can’t always rely on Internet access even to higher-education institutions. They can’t expect travel even to regional hubs because many of the faculty can’t travel (due to expense and family obligations).

[Slide: T10kT participation numbers]

As can be seen in the slide above, they have a huge number of participants.  I asked at the session, “Why?”  Why would all these higher-education faculty be interested in training to become better teachers?  The answer was that participants get certificates for participating in T10kT, and those certificates do get considered in promotion decisions.  That’s significant, and something I wish we had in the US.

I tried to get a sense for how many primary and secondary schools there are in India, and found estimates ranging from 740K to 1.3M. Compulsory education was only established in 2010 (goes to age 14), and is not well enforced. I heard estimates that about 50% of school-age children go to school because only enrollment is checked, not attendance.

Contrast this with the CS10K effort in the United States. There are about 25-30,000 high schools in the US. Having 10K CS teachers wouldn’t reach every school, but it would make a sizable dent. A goal to get 10K CS teachers in Indian high schools would be laughable. When you increase the number of high schools by two orders of magnitude, 10,000 teachers barely moves the needle. Given the difficulty of access and uncertain Internet, it’s certainly not cheaper to provide professional development in India. They have an enormous shortage of teachers — not just CS. They lack any teachers at all in many schools. The current national focus is on higher-education because the secondary and primary school problems are just so large.

Alan Kay has several times encouraged me to think about how to provide educational technology to support students who do not have access to a teacher. I resisted, because I felt that any educational technology was a poor substitute for a real teacher. Now I realize what a privilege it is to have any teacher at all, and how important it is to think about technology-based guided learning for the majority of students worldwide who do not have access to a teacher.

How do we do it?  How do we design technology-based learning supports for Indian students who may not have access to a teacher?  I attended a session on IITBx, the edX-hosted MOOCS developed by IIT-Bombay. I tweeted:

One of the IIT-Bombay graduate students responded:

Here’s the exchange as a screencap, just in case the Twitter feed doesn’t work right above:

[Screencap of the Twitter exchange]

I’m sure that Aditi (whose work was described in the previous blog post) is right. Developers in the US can’t expect to build technologies for India and expect them to work, not without involving Indian learners, teachers, and researchers. One of the themes in my book Learner-Centered Design of Computing Education is that motivation is everything in learning, and motivation is tied tightly to notions of identity, community of practice, and context. I learned that I don’t know much about any of those things for India, nor anywhere else in the developing world. The problems are enormous and worth solving, and US researchers and developers have a lot to offer — as collaborators. In the end, it requires understanding on the ground to get the context and motivation right, and nothing works if you don’t get that right.

April 20, 2016 at 7:22 am 4 comments

New OECD Report Slams Computers And Says Why They Can Hurt Learning: It’s all about the pedagogy

My PhD advisor, Elliot Soloway, considers a new report on the value of computers in education, and gets to the bottomline.  To swipe a line from Bill Clinton, “It’s the pedagogy, stupid!”  Of course, I agree with Elliot, and it’s why Lecia Barker’s findings are so disturbing.  We have to be willing to change pedagogy to improve learning.

The findings are the findings, but what is really interesting is a statement that Andreas Schleicher, the director of OECD, made as to why the impact of technology is negative. In the foreword to the OECD report, he writes, “…adding 21st century technologies to 20th century teaching practices will just dilute the effectiveness of teaching.” WOW! In this one sentence, Schleicher names clearly what he sees as the root cause of the lack of technology’s impact on student achievement. While the NYT’s articles danced around the issues, Schleicher doesn’t pull any punches: The reason computers are not having a positive impact lies in the use of outmoded teaching practices that do not truly exploit the opportunities that a 1-to-1 classroom affords.

Source: New OECD Report Slams Computers — and Actually Says Why They Can Hurt Learning — THE Journal

October 16, 2015 at 8:06 am 2 comments
