Posts tagged ‘Media Computation’
- First, we’re on github! Come join us in stomping out bugs and making JES even better!
- Upgrading the Jython interpreter to version 2.5, making available new language features and speeding up many user programs. I have been working on the 4th edition of the Python MediaComp book this summer, and have introduced the time library so that users can actually time their algorithms (one of those CS Principles ideas), so I had ready-made programs to run in both JES 4.3 and JES 5.0. The speed doubled.
- Adding code to JES and the installers to support double-clicking .py files to open them in JES, on all supported platforms.
- Bundling JMusic and the Jython Music libraries, allowing JES to be used with the text “Making Music with Computers” by Bill Manaris and Andrew Brown. This is super exciting to me. All of their examples (like these) work as-is in JES 5 — plus you can do sampled sound manipulations using the MediaComp libraries. The combination makes for a powerful and fun platform for exploring computation and sound. My thanks to Bill, who worked with us to make everything work in JES.
- Adding a plugin system that allows developers to easily bundle libraries for use with JES.
- Fixing the Watcher, so that user programs can be executed at arbitrary speeds. This has been broken for a long time, and it’s great to have it back. When you’re looking for a bug in a program that loops over tens of thousands of pixels or sound samples, the last thing you want is a breakpoint.
- Adding new color schemes for the Command Window, which allow users to visually see the difference between return values and print output. This was a suggestion from my colleague Bill Leahy. When first learning return, students can’t see how it does something different from printing. Now, we can use color to make the result of each more distinctive. Thanks to Richard Ladner at ACCESS Computing who helped us identify color palettes to use for colorblind students, so we can offer this distinction in multiple color sets.
- Fixing numerous bugs, especially threading issues. When we first wrote JES, threading just wasn’t a big deal. Today it is, and Matthew stomped on lots of threading problems in JES 5. We got lots of suggestions and bug reports from Susan Schwartz, Brian Dorn, and others which we’re grateful for.
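The timing idea mentioned in the Jython 2.5 bullet above can be sketched with Python’s standard time module. The counting function here is a hypothetical stand-in for a user’s program; any MediaComp function would be timed the same way.

```python
import time

def count_up(n):
    # Stand-in for a user program; a picture or sound loop works the same way.
    total = 0
    for i in range(n):
        total = total + i
    return total

start = time.time()          # seconds since the epoch, as a float
count_up(1000000)
elapsed = time.time() - start
print("That took", elapsed, "seconds")
```

Students can run the same program in JES 4.3 and JES 5.0 and compare the elapsed times directly.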
Thanks to Matthew for pulling this all together! Matthew’s effort was supported by NSF REU funding.
The below-linked article is highly recommended. It’s an insightful consideration of the different definitions of “University” we have in the US, and how the goals of helping students become educated for middle class jobs and of being a research university are not the same thing.
This article gave me new insight into the challenges of discipline-based education research, like computing education research. We really are doing research, as one would expect in a research university, e.g., trying to understand what it means for a human to understand computation and how to improve that understanding. But what we study is a kind of activity that occurs at that other kind of university. That puts us in a weird place, between the two definitions of the role of a university. It helps explain the challenges I faced when I was the director of undergraduate studies in the College of Computing and when I was implementing Media Computation. Education research isn’t just thrown over the wall into implementation. The same challenges of technology adoption and, necessarily, technology adaption have to occur.
At the “TIME Summit on Higher Education” that the Carnegie Corporation of New York and Time magazine co-sponsored in September 2013 along with the Bill & Melinda Gates Foundation and the William and Flora Hewlett Foundation, the disconnect between the views of the research university from inside and outside was vividly on display. A procession of distinguished leaders of higher education mainly emphasized the need to protect—in particular, to finance adequately—the university’s research mission. A procession of equally distinguished outsiders, including the U.S. secretary of education, mainly emphasized the need to make higher education more cost-effective for its students and their families, which almost inevitably entails twisting the dial away from research and toward the emphasis on skills instruction that characterizes the mass higher-education model. Time’s own cover story that followed from the conference hardly mentioned research; it was mainly about how much economically useful material students are learning, even though the research university was explicitly the main focus of the conference.
I got to see this at SIGCSE and was really impressed — both the effect, and how it’s written up. Thanks for letting me share it, Dwight!
A much better effect would be to combine the images to give the impression that Bogart’s character, Rick, is thinking about Bergman’s character, Ilsa. This requires blending the images together: combining the colors of corresponding pixels from the two images. The RGB values of the pixels to be blended are added together, each weighted by a percentage. For an even blend, 50% of each RGB value of the source pixel is added to 50% of the corresponding RGB value of the target pixel to make the color of the blended pixel. In the Bergman/Bogart merge above we do not want an even blend; instead we use 33% of the Bergman pixel color and 67% of the Bogart pixel color. You should now be able to write a blend33() function to perform this blending.
via Image Blending.
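The 33/67 weighting described above can be sketched in plain Python with RGB tuples instead of JES pixel objects (in JES you would loop over getPixels() and use the getRed/setRed family of functions; the names and structure here are my own):

```python
def blend33(bergman_rgb, bogart_rgb):
    # 33% of the Bergman (source) color plus 67% of the Bogart (target)
    # color, channel by channel; int() keeps each value in 0..255.
    return tuple(int(0.33 * s + 0.67 * t)
                 for s, t in zip(bergman_rgb, bogart_rgb))

# Blending pure white into pure black leans the result toward black:
print(blend33((255, 255, 255), (0, 0, 0)))  # each channel is int(0.33 * 255) = 84
```

Applying blend33 to every pair of corresponding pixels produces the ghosted “thinking of Ilsa” effect.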
Pearson has asked me to update our Python Media Computation book, “Introduction to Computing and Programming: A Multimedia Approach.” This will be the fourth edition. I plan to address the errata (as well as the ones I haven’t yet posted to the website), add new assignments, and change out the pictures (a lot of those pictures are 12 years old now). I think I’m going to give up on trying to do screen-scraping off a live website — they keep changing too fast. Instead, I might add something about how to parse CSV files, which are common and useful.
I have a couple of bigger ideas for changes, and I’d appreciate feedback from readers. (And I’m certainly interested in other advice you might give me.)
(1) CPython cross-platform libraries have come a long way since the 3rd edition was written. It’s likely that we could write a media library for CPython that works much like the media library in JES. A CPython version of Media Computation would likely be faster. We probably would not re-create JES in CPython. It will take some time to develop a CPython version, so a Jython/JES-based 4th edition could be available in early 2015 (aiming to be out before SIGCSE 2015), but a CPython version would probably be mid-2015.
- (a) Is a CPython version something that you would find interesting and worth adopting?
- (b) Would you have a preference for one or the other? Or would you see value in having both versions?
(2) At Georgia Tech, we have started teaching the book with a brief excursion into strings and lists before introducing pictures. We talk about the medium as being language or text, and we manipulate characters in the strings using algorithms like those we later use with pixels in a picture or samples in a sound. For example, we can “mirror” words as we later mirror sounds or pictures. The advantage is that students can see all the characters in the string, and print out every step of the loop — where neither of those is reasonable to do with pictures or sounds.
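For instance, mirroring a word copies its first half onto its second half in reverse, using the same index arithmetic we later use on pixels across a picture’s vertical midline. A sketch (the function name is my own):

```python
def mirror_word(word):
    # Copy the first half onto the second half, reversed -- the same
    # loop we later run over pixels when mirroring a picture.
    chars = list(word)
    n = len(chars)
    for i in range(n // 2):
        chars[n - 1 - i] = chars[i]
    return "".join(chars)

print(mirror_word("hello"))  # -> "heleh"; every step is easy to print and inspect
```

Because the whole string is visible, students can print the list of characters on every loop iteration, which isn’t reasonable with thousands of pixels or samples.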
We’re considering adding an OPTIONAL chapter at the beginning of the book in the 4th edition. We wouldn’t remove the introduction to loops in Chapter 3. We would move some of the string processing from Chapter 10 into this new Chapter 2.5, but leave methods and file I/O for Chapter 10. You would be able to use the book as-is, but if you want to start with characters and words as a text medium first, we would support that path, too.
- Does that seem like a chapter that you would find useful? Or would you rather just keep the book with the chapters as they are now?
Thanks for any advice you would like to give me on producing the 4th edition of the book!
I got a chance to review and write a foreword for:
I’m really pleased to see that it’s finally out! Recommended.
Interesting economic argument being made in the below piece — that we don’t have large numbers of manufacturing jobs, but we have large numbers of jobs that involve creating using digital technologies.
In the start of our Media Computation book, we make the argument that comes after this. Photoshop, Final Cut Pro, and Audacity are wonderful tools that can do a lot — if you know how to use them. Knowing programming gives you the ability to make with digital media, even if you don’t know how to get the tools to do it. Knowing programming lets you say things with digital media, even if the tools don’t support it.
“We have moved from the industrial age to the knowledge economy,” said Facebook’s CIO Tim Campos at the HP Discover conference in Barcelona last month. An economy, that is, in which a company’s “core asset” lies not in material infrastructure but rather “the thoughts and ideas that come from our workforce.”
The blog post linked below felt close to home, though I measure it differently than lines of code. The base point is that we tend to start introductory programming courses assuming way more prior knowledge than students actually have. My experience this semester is that we tend to expect students to gain more knowledge more quickly than they do (and maybe, than they can).
I’m teaching Python Media Computation this semester, on campus (for the first time in 7 years). As readers know, I’ve become fascinated with worked examples as a way of learning programming, so I’m using a lot of those in this class. In Ray Lister terms, I’m teaching program reading more than program writing. In Bloom’s taxonomy terms, I’m teaching comprehension before synthesis.
As is common in our large courses at Georgia Tech (I’m teaching in a lecture of 155 students, and there’s another parallel section of just over 100), the course is run by a group of undergraduate TA’s. Our head TA took the course, and has been TA-ing it for six semesters. The TA’s create all homeworks and quizzes. I get to critique (which I do), and they do respond reasonably. I realize that all the TA’s expect that the first thing to measure in programming is writing code. All the homeworks are programming from a blank sheet of paper. Even the first quiz is “Write a function to…”. The TA’s aren’t trying to be difficult. They’re doing as they were taught.
One of the big focal research areas in the new NSF STEM-C solicitation is “learning progressions.” Where can we reasonably expect students to start in learning computer science? How fast can we reasonably expect them to learn? What is a reasonable order of topics and events? We clearly need to learn a lot more about these to construct effective CS education.
I’m not going to articulate the next few orders of magnitude, both because they are not relevant to beginner or intermediate programmers, and because I’m climbing the 1K → 10K transition myself, so I’m not able to articulate that transition well. But they have to do with elegance, abstraction, performance, scalability, collaboration, best practices, and code as craft.
The 3am realization is that many, many “introduction” to programming materials start at the 1 → 10 transition. But learners start at the 0 → 1 transition — and a 10-line program has the approachability of Everest at that point.