Posts tagged ‘Media Computation’
I’m an advisor on the EarSketch project, and it’s really cool. Recommended.
Next month, the EarSketch team will be offering a workshop at SIGCSE in Kansas City. This is a great opportunity to learn more about EarSketch, get hands-on experience with the curriculum and environment, and learn how to use EarSketch in your classroom. This year’s workshop will also offer advice on integrating EarSketch into Computer Science Principles courses, though the workshop is relevant to anyone teaching an introductory computing course.
For more information about SIGCSE, visit http://sigcse2015.sigcse.org/index.html
To register for the workshop, please visit https://www.regonline.com/register/login.aspx?eventID=1618015&MethodId=0&EventsessionId=
Please contact Jason Freeman (email@example.com) with any questions.
Workshop #20: Computer Science Principles with EarSketch
Saturday, March 7th, 2015
3 pm – 6 pm
Jason Freeman, Georgia Institute of Technology
Brian Magerko, Georgia Institute of Technology
Regis Verdin, Georgia Institute of Technology
Fourth edition of Python Media Computation released today: Teacher resources and desirable difficulties
According to Amazon, the Fourth Edition of the Python Media Computation book is released today (see page here). I’ve been working on the 4th edition for most of the summer. Some of the bigger changes are:
- Before we manipulate pictures, we manipulate letters, words, and language, e.g., building “MadLib” and “koan” generators and encoding and decoding keyword ciphers. Language is a medium, too, and it’s easier to get started (for some folks) with the smaller-iteration loops of text before getting to the thousands-of-iterations loops of pixels in a picture. It’s an optional chapter — everything introduced there gets introduced again later.
- Since the new version of JES fixed a round-off error in the Turtle class, we can do recursive turtle manipulations now (which tended to get messed up in earlier forms of JES).
- I juggled content around so that we do more with conditionals and querying the pixel for its position, before we introduce nested loops. Nested loops are really hard for students, and I learned (from seeing the code that my students wrote) that they can do far more than I’d guessed with single loops — even with multiple pictures. I included more of that.
- I have tried (for the last two editions) to provide screen-scraping examples, e.g., writing code to pull weather, news, or friends’ information from websites. It’s getting harder and harder to write that kind of code. Instead, I decided to provide more code that parses CSV files, as can be found at Open Data Journalism sites (like at The Guardian) and sources like the US Census. The examples are still about parsing out useful information, but it’s a lot easier to parse CSV, and these sites encourage exactly that kind of use.
- There are more end-of-chapter problems and new pictures, and I’ve tried to catch all the errors in the Third Edition that master teachers like Susan Schwartz (at West Point) found.
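The keyword ciphers mentioned in the first bullet above can be sketched in plain Python. This is illustrative code, not the book's actual example: a substitution alphabet is built from a keyword, and encoding is a lookup from the plain alphabet into it.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def makeCipherAlphabet(keyword):
    # Start the cipher alphabet with the keyword's letters (first use only),
    # then append the rest of the alphabet in order.
    seen = []
    for ch in keyword.lower() + ALPHABET:
        if ch not in seen:
            seen.append(ch)
    return "".join(seen)

def encode(message, keyword):
    cipher = makeCipherAlphabet(keyword)
    result = ""
    for ch in message.lower():
        if ch in ALPHABET:
            result += cipher[ALPHABET.index(ch)]
        else:
            result += ch  # leave spaces and punctuation alone
    return result

def decode(message, keyword):
    cipher = makeCipherAlphabet(keyword)
    result = ""
    for ch in message.lower():
        if ch in cipher:
            result += ALPHABET[cipher.index(ch)]
        else:
            result += ch
    return result
```

Because the cipher alphabet is a permutation of the plain one, decoding simply reverses the lookup, which makes a nice small-loop exercise before pixel loops.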
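The single-loop-with-a-conditional approach from the bullet on conditionals and pixel positions can be sketched like this, modeling a picture as a flat list of pixel dictionaries. JES itself would use getPixels(picture) and getY(pixel) on real Pixel objects; the names here are my own, illustrative ones.

```python
def makePicture(width, height, red):
    # Model a picture as the flat list of pixels that getPixels() returns,
    # with each pixel knowing its own (x, y) position.
    return [{"x": x, "y": y, "red": red}
            for y in range(height) for x in range(width)]

def darkenTopHalf(pixels, height):
    # One loop over all pixels; querying the pixel for its position
    # replaces the nested row/column loops.
    for p in pixels:
        if p["y"] < height // 2:
            p["red"] = p["red"] // 2
```

The same pattern (one loop, position test inside) covers mirroring, cropping regions, and many other effects that look like they should need nested loops.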
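The CSV-parsing style described in the bullet above can be sketched with Python's csv module. The data and function name here are made up for illustration; none of this comes from the book.

```python
import csv
from io import StringIO

# Hypothetical open-data-style CSV, the kind you might download
# from a data journalism site or the US Census.
data = """city,population
Atlanta,498044
Savannah,147780
Athens,127315
"""

def largestCity(csvText):
    # DictReader maps each row to a dict keyed by the header line.
    reader = csv.DictReader(StringIO(csvText))
    biggest = None
    for row in reader:
        if biggest is None or int(row["population"]) > int(biggest["population"]):
            biggest = row
    return biggest["city"]
```

Parsing out "the most useful row" this way exercises the same loops and conditionals the screen-scraping examples did, without depending on a website's HTML staying stable.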
We’re working on teacher resources now. Currently in development (aiming to have these ready in the next couple of months) are the PowerPoint slides for each chapter of the book, a collection of all the code in the book for teachers, and a solutions manual for every end-of-chapter problem. These are surprisingly controversial. There are lots of (mostly university) teachers who think that I shouldn’t provide any of these resources — teachers should be able to develop all of those themselves. Most of the high school and community college teachers I know appreciate having them.
In searching for the Fourth Edition on Amazon, I read the comments on the Third Edition (see here). Authors probably shouldn’t read the reviews of the book — they’re painful. But I did, and even worse, I actually responded.
Here is a quote from one, titled “False Advertising.”
Its biggest problem: false advertising. This is NOT a book on Python, it’s about JYTHON – A Java based imitation of Python.
Why? Well, there’s some pretty software, available to download, which uses the JRE. The author chose to stick with this “easy learning environment” and basically cripple anyone wanting to write Python code for Blender, Maya, Android etc.
You may learn to program from this text, but don’t expect a trouble-free life when you get exposed to the real language.
Here was my response:
Everything in this book is useful when wanting to write Python code for Blender, Maya, Android, etc. This is an introductory book on data, loops, conditionals, and objects. Those parts of Python are identical in this book and in the Python that you’ll use in Blender, Maya, and Android. For introductory Python programming, Jython and CPython are exactly the same.
I was surprised to see the original commenter responded. His point was that some kinds of friction in dealing with the “real world” are desirable:
As an introductory book, I would expect a section on how to install and configure Python. Written covering Windows, Linux, and the Mac OSX. There is no such section; the whole point of Jython is to “hide” this technical level. Which is fine for learning loops etc. but leaves a student lost when encountering Python out in the real world.
It’s an interesting perspective, kind of a “rugged individualism” approach. I do agree with the notion of desirable difficulties in learning (see more here), but don’t agree that installing Python is one of those. Do most Python programmers install Python themselves, or is it already installed on the servers, computers, etc. that they will be programming? Is it a critical part of learning a language? Is it significantly different than installing JES (try that here)? Are you “lost” and unable to program if you don’t install it yourself first?
A sad addendum to this story: Our Media Computation data structures book (see the Amazon page here) has gone out of print. The publisher didn’t notify us. Someone approached us about using the book, and was told that it was out of print. When I queried Pearson, they admitted it. More, because it’s not out of print everywhere (I guess it’s available in some non-US markets), Pearson won’t let us post the content anywhere. It’s a dead book now.
I am sympathetic to this argument for the value of STEAM (STEM+Art), rather than just STEM. I strongly believe in the value of creative expression in learning STEM subjects. That’s core to our goals for Media Computation. I believe that the STEAM perspective is why MediaComp has measurably improved motivation, engagement, and retention.
As a researcher, it’s challenging to measure the value of including art in learning STEM. I’m particularly concerned about the argument below. Singapore and Japan are less creative because they have less art in school? If we include more art in our schools, our students will be more innovative? If we’re already more innovative and we have too few art classes, why should we believe that adding more art will increase our innovation?
But STEM leaves out a big part of the picture. “It misses the fact that having multiple perspectives are an invaluable aspect of how we learn to become agile, curious human beings,” Maeda said. “The STEM ‘bundle’ is suitable for building a Vulcan civilization, but misses wonderful irrationalities inherent to living life as a human being and in relation to other human beings.” A foundation in STEM education is exceptional at making us more efficient or increasing speed, all within set processes, but it’s not so good at growing our curiosity or imagination. Its focus is poor at sparking our creativity. It doesn’t teach us empathy or what it means to relate to others on a deep emotional level. Singapore and Japan are two great examples. “[They] are looked to as exemplar STEM nations, but as nations they suffer the ability to be perceived as creative on a global scale,” Maeda said. Is the United States completely misinformed and heading down the wrong track? Not entirely. Science, technology, engineering and math are great things to teach and focus on, but they can’t do the job alone. In order to prepare our students to lead the world in innovation, we need to focus on the creative thought that gives individuals that innovative edge.
A computer science degree is neither necessary nor sufficient for success in teaching computing. The slides below miss the live demo of Media Computation. My TEDxGeorgiaTech talk (video on YouTube) has many of the same components, but lacks the ukulele playing that I did today. There was no recording made of my talk.
- First, we’re on github! Come join us in stomping out bugs and making JES even better!
- Upgrading the Jython interpreter to version 2.5, making available new language features and speeding up many user programs. I have been working on the 4th edition of the Python MediaComp book this summer, and have introduced the time library so that users can actually time their algorithms (one of those CS Principles ideas), so I had ready-made programs to run in both JES4.3 and JES5.0. The speed doubled.
- Adding code to JES and the installers to support double-clicking .py files to open them in JES, on all supported platforms.
- Bundling JMusic and the Jython Music libraries, allowing JES to be used with the text “Making Music with Computers” by Bill Manaris and Andrew Brown. This is super exciting to me. All of their examples (like these) work as-is in JES 5 — plus you can do sampled sound manipulations using the MediaComp libraries. The combination makes for a powerful and fun platform for exploring computation and sound. My thanks to Bill who worked with us in making everything work in JES.
- Adding a plugin system that allows developers to easily bundle libraries for use with JES.
- Fixing the Watcher, so that user programs can be executed at arbitrary speeds. This has been broken for a long time, and it’s great to have it back. When you’re looking for a bug in a program that loops over tens of thousands of pixels or sound samples, the last thing you want is a breakpoint.
- Adding new color schemes for the Command Window, which let users visually distinguish return values from print output. This was a suggestion from my colleague Bill Leahy. When first learning return, students can’t see how it does something different from printing. Now, we can use color to make the result of each more distinctive. Thanks to Richard Ladner at ACCESS Computing, who helped us identify color palettes for colorblind students, so we can offer this distinction in multiple color sets.
- Fixing numerous bugs, especially threading issues. When we first wrote JES, threading just wasn’t a big deal. Today it is, and Matthew stomped on lots of threading problems in JES 5. We got lots of suggestions and bug reports from Susan Schwartz, Brian Dorn, and others which we’re grateful for.
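The algorithm-timing exercise mentioned in the Jython 2.5 item above can be sketched like this. The function names are my own, illustrative ones, not the book's actual code; the point is that time.time() before and after a call gives an elapsed-seconds measurement students can compare across inputs or across JES versions.

```python
import time

def timeIt(function, argument):
    # Measure wall-clock seconds taken by one call.
    start = time.time()
    function(argument)
    end = time.time()
    return end - start

def sumTo(n):
    # A deliberately simple algorithm to time.
    total = 0
    for i in range(n):
        total = total + i
    return total
```

Running timeIt(sumTo, n) for growing n lets students see running time grow with input size, which is the CS Principles idea this exercise targets.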
Thanks to Matthew for pulling this all together! Matthew’s effort was supported by NSF REU funding.
The below-linked article is highly recommended. It’s an insightful consideration of the different definitions of “University” we have in the US, and how the goals of helping students become educated for middle class jobs and of being a research university are not the same thing.
This article gave me new insight into the challenges of discipline-based education research, like computing education research. We really are doing research, as one would expect in a research university, e.g., trying to understand what it means for a human to understand computation and how to improve that understanding. But what we study is a kind of activity that occurs at that other kind of university. That puts us in a weird place, between the two definitions of the role of a university. It gives me new insight into the challenges I faced when I was the director of undergraduate studies in the College of Computing and when I was implementing Media Computation. Education research isn’t just thrown over the wall into implementation. The same challenges of technology adoption and, necessarily, technology adaptation have to occur.
At the “TIME Summit on Higher Education” that the Carnegie Corporation of New York and Time magazine co-sponsored in September 2013 along with the Bill & Melinda Gates Foundation and the William and Flora Hewlett Foundation, the disconnect between the views of the research university from inside and outside was vividly on display. A procession of distinguished leaders of higher education mainly emphasized the need to protect—in particular, to finance adequately—the university’s research mission. A procession of equally distinguished outsiders, including the U.S. secretary of education, mainly emphasized the need to make higher education more cost-effective for its students and their families, which almost inevitably entails twisting the dial away from research and toward the emphasis on skills instruction that characterizes the mass higher-education model. Time’s own cover story that followed from the conference hardly mentioned research; it was mainly about how much economically useful material students are learning, even though the research university was explicitly the main focus of the conference.
I got to see this at SIGCSE and was really impressed — both the effect, and how it’s written up. Thanks for letting me share it, Dwight!
A much better effect would be to combine the images to give the impression that Bogart’s character, Rick, is thinking about Bergman’s character, Ilsa. This requires blending the images together. Blending combines the colors of corresponding pixels of the two images: the RGB values of the pixels to be blended are added together using a percentage of the color of each pixel. For an even blend, 50% of each RGB value of the source pixel is added to 50% of each RGB value of the target pixel to make the color of the blended pixel. In the Bergman/Bogart merging above, we do not want an even blend; instead, we will use 33% of the Bergman pixel color and 67% of the Bogart pixel color. You should now be able to write a blend33() function to perform this blending.
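The per-pixel arithmetic can be sketched in plain Python, modeling a pixel as an (r, g, b) tuple rather than using JES’s Pixel objects (in JES you would loop with getPixels and use getRed/getGreen/getBlue and setColor instead):

```python
def blendPixel(bergmanPixel, bogartPixel):
    # Take 33% of the first pixel's color and 67% of the second's,
    # channel by channel, truncating to an integer color value.
    r1, g1, b1 = bergmanPixel
    r2, g2, b2 = bogartPixel
    return (int(r1 * 0.33 + r2 * 0.67),
            int(g1 * 0.33 + g2 * 0.67),
            int(b1 * 0.33 + b2 * 0.67))
```

A full blend33() would simply apply this arithmetic to every pair of corresponding pixels; since the two weights sum to 100%, each channel stays within the valid 0–255 range.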
via Image Blending.