Posts tagged ‘software engineering’
I disagree with the claim below that “In the future, everyone is going to be a software engineer, but only a few will learn how to code,” but we need a better definition of what it means to “code” and to “program” (as discussed with respect to recent ITICSE 2016 papers). If you’re using tools like HyperCard (“low-code” platforms), isn’t that still programming? It’s certainly more than the loop-free, conditional-free, variable-free programs we often see in elementary school students’ use of Scratch. But those tools are not software engineering tools. Just because you’re developing software doesn’t mean that you’re doing software engineering.
We need a range of tools from no-code to low-code to software engineering support. It’s an insult to those who carefully engineer software to say that anyone who assembles software is an engineer.
A new industry is emerging to serve the Morts of the world by designing and selling what are called no-code or low-code platforms. Companies like Caspio, QuickBase, Appian, and Mendix are creating visual interfaces that enable people to essentially snap together blocks of software, and bypass the actual lines of code underlying those blocks (skilled developers can also dive into the code). With basic training, a non-technical employee can rapidly assemble software tools that solve business problems ranging from simple database queries to applications lashing together multiple legacy enterprise applications.
Forrester reports the sector earned $1.7 billion in 2015 and is on track to bring in $15 billion by 2020 as the majority of large companies adopt “Citizen Development” policies similar to the bring-your-own-device rules. Employees will be empowered to choose tools, and even partially assemble software, to solve their own business problems without IT approval.
Another of the breakouts that I was in at the recent Dagstuhl seminar on assessment in CS learning focused on how we teach and assess social and professional practices in CS classes. This was a small group: Andy Ko, Lisa Kaczmarczyk, Jan Erik Moström, and me.
Andy and his students have been studying (via interviews and surveys) what makes a great engineer.
- They’re good at decision-making.
- They’re good at shifting levels of abstraction, e.g., describing how a line of code relates to a business strategy.
- They have some particular inter-personal skills. They program ego-less-ly. They have empathy, e.g., “not an asshole.”
- Senior engineers often spend a lot of time being teachers for more junior engineers.
Since I’ve worked with Lijun Ni on high school CS teachers, I know some of the social and professional practices of teachers. They have content knowledge, and they have pedagogical content knowledge. They know how to teach. They know how to identify and diagnose student misunderstandings, and they know techniques for addressing these.
We know some techniques for teaching these practices. We can have students watch professionals, by shadowing or by using case-based systems like the Ask systems. We can put students in apprenticeships (like student teaching or internships) or in design teams. We could even use games and other simulations. We have to convey authenticity — students have to believe that these are the real social and professional practices. An interesting question we came up with: How would you know if you covered the set of social and professional practices?
Here’s the big question: How similar are these sets? They seem quite different to me, and these are just two possible communities of practice for students in an intro course. Are there social and professional practices that we might teach in the same intro CS — for any community of practice that the student might later join? My sense is that the important social and professional practices are not in the intersection. The most important are unique to the community of practice.
How would we know if we got there? How would you assess student learning about social and professional practice? Knowledge isn’t enough — we’re talking about practice. We have to know that they’d do the right things. And if you found out that they didn’t have the right practices, is it still actionable? Can we “fix” practices while in undergrad? Maybe students will just do the right things when they actually get out there?
The countries with low teacher attrition spend a lot of time on teacher on-boarding. In Japan, the whole school helps to prepare a new teacher, and the whole school feels a sense of failure if the first year teacher doesn’t pass the required certification exam. US schools tend not to have much on-boarding — at schools for teachers, or in industry for software engineers (as Begel and Simon found in their studies at Microsoft). On-boarding seems like a really good place, to me, for teaching professional practice. And since the student is then doing the job, assessment is job assessment.
The problems of teaching and assessing professional practice are particularly hard when you’re trying to design a new community of practice. We’d like computing to be more diverse, to be more welcoming to women and to people from under-represented groups. We’d want cultural sensitivity to be a practice for software professionals. How would you design that? How do you define a practice for a community that doesn’t exist yet? How do you convince students about the authenticity?
It’s an interesting set of problems, and some interesting questions to explore, but I came away dubious. Is this something that we can do effectively in school? Perhaps it’s more effective to teach professional practices in the professional context?
I’m leaving May 24 for a two-week trip to Germany. Both one-week parts are interesting and worth talking about here. I’ve also been reflecting on my own thinking about the piece in between, and how it relates to computing education themes.
I’m attending a seminar at Schloss Dagstuhl on Human-Centric Development of Software Tools (see seminar page here). Two of the seminar leaders are Shriram Krishnamurthi, of Bootstrap fame, who is a frequent visitor and even a guest blogger here (see post here), and Andy Ko, whose seminal work with Michael Lee on Gidget has been mentioned here several times (for example, here). I’ve only been to Dagstuhl once before, at the live-coding seminar (see description here), which was fantastic and has influenced my thinking literally years later. The seminar next week puts me in the same relative-outsider role that I had at the live-coding seminar. Most of the researchers coming to this event are programming language and software engineering researchers. Only a handful of us are social scientists or education researchers.
The Dagstuhl seminar ends Thursday after lunch. Saturday night, I’m to meet up with a group in Oldenburg Germany and then head up Sunday to Stadland (near the North Sea) for a workshop where I will be advising STEM Education PhD students. I don’t have a web link to the workshop, but I do have a page about the program I’ll be participating in — see here. My only contact there is Ira Diethelm, whom I’ve met several times and saw most recently at WIPSCE 2014 in Berlin (see trip report here). I really don’t know what to expect. Through the ICER DC and WIPSCE, I’ve been impressed by the Computing Education PhD students I’ve met in Germany, so I look forward to an interesting time. I come back home on Friday June 5 from Bremen.
There’s a couple day gap between the two events, from Thursday noon to Saturday evening. I got a bunch of advice on what to do on holiday. Shriram gave me the excellent advice of taking a boat cruise partway north, stopping at cities along the way, and then finishing up with a train on Saturday. Others suggested that I go to Cologne, Bremen, Luxembourg, or even Brussels.
I’ve decided to take a taxi to Trier from Dagstuhl, tour around there for a couple days, then take a seven hour train ride north on Saturday. Trier looks really interesting (see Tripadvisor page), though probably not as cool as a boat ride.
Why did I take the safer route?
The science writer Kayt Sukel was once a student of mine at Georgia Tech; we even have a pub together. I am so pleased to see the attention she’s received for her book Dirty Minds/This is Your Brain on Sex. She has a new book coming out on risk, and that’s had me thinking more about the role of risk in computing education.
In my research group, we often refer to Eccles’ model of academic achievement and decision-making (1983), pictured below. It describes how students’ academic decisions involve issues like gender roles and stereotypes (e.g., do people who are like me do this?), expectation for success (e.g., can I succeed at this?), and the utility function (e.g., will this academic choice be fun? useful? money-making?). It’s a powerful model for thinking about why women and under-represented minorities don’t take computer science.
Eccles’ model doesn’t say much about risk. What happens if I don’t succeed? What do I need to do to reduce risk? How will I manage if I fail? How much am I willing to suffer/pay for reduced risk?
That’s certainly playing into my thinking about my in-between days in Germany. I don’t speak German. If I get into trouble in those in-between days, I know nobody I could call for help. I still have another week of a workshop with a keynote presentation after my couple days break. I’ve already booked a hotel in Trier. I plan on walking around and taking pictures, and then I will take a train (which I’ve already booked, with Shriram’s help) to Oldenburg on Saturday. A boat ride with hops into cities sounds terrific, but more difficult to plan with many more opportunities for error (e.g., lost luggage, pickpockets). That’s managing risk for me.
I hear issues of risk coming into students’ decision-making processes all the time, combined with the other factors included in Eccles’ model. My daughter is pursuing pre-med studies. She’s thinking like many other pre-med students, “What undergrad degree do I get now that will be useful even if I don’t get into med school?” She tried computer science for one semester, as Jeannette Wing recommended in her famous article on Computational Thinking: “One can major in computer science and go on to a career in medicine, law, business, politics, any type of science or engineering, and even the arts.” CS would clearly be a good fallback undergraduate degree. She was well-prepared for CS — she had passed the AP CS exam in high school, and was top of her engineering CS1 in MATLAB class. After one semester in CS for CS majors, my daughter hated it, especially the intense focus on enforced software development practices (e.g., losing points on homework for indenting with tabs rather than spaces) and the arrogant undergraduate teaching assistants. (She used more descriptive language.) Her class was particularly unfriendly to women and members of under-represented groups (a story I told here). She now rejects the CS classroom culture, the “defensive climate” (re: Barker and Garvin-Doxas). She never wants to take another CS course. The value of a CS degree in reducing risks on a pre-med path does not outweigh the costs of CS classes for her. She’s now pursuing psychology, which has a different risk/benefit calculation (i.e., a psychology undergraduate degree is not as valuable in the marketplace as a CS undergraduate degree), but has reduced costs compared to CS or biology.
Risk is certainly a factor when students are considering computer science. Students have expectations about potential costs, potential benefits, and about what could go wrong. I read it in my students’ comments after the Media Computation course. “The course was not what I expected! I was expecting it to be much harder.” “I took a light load this semester so that I’d be ready for this.” Sometimes, I’m quite sure, the risk calculation comes out against us, and we never see those students.
The blog will keep going while I’m gone — we’re queued up for weeks. I may not be able to respond much to comments in the meantime, though.
Hackathons seem the antithesis of what we want to promote about computer science. On the one hand, they emphasize the Geek stereotype (it’s all about caffeine and who needs showers?), so they don’t help to attract the students who aren’t interested in being labeled “geeky.” On the other hand, it’s completely against the idea of designing and engineering software. “Sure, you can do something important by working for 36 hours straight with no sleep or design! That’s how good software ought to be written!” It’s not good when facing the public (thinking about the Geek image) or when facing industry and academia.
So why try to make them “female-friendly”?
OK, so there are a number of valid reasons women tend to stay away from hackathons. But what can hackathon planners do to get more females to attend their events? I found some women offering advice on this subject. Here are some suggestions for making your hackathon more female-friendly.
Amy Quispe, who works at Google and ran hackathons while a student at Carnegie Mellon University, writes that having a pre-registration period just for women makes them feel more explicitly welcome at your event. Also, shy away from announcing that it’s a competition (to reduce the intimidation factor), make sure the atmosphere is clean and not “grungy,” and make it easy for people to ask questions. “A better hackathon for women was a better hackathon for everyone,” she writes.
Excellent post and interesting discussion at Neil Brown’s blog, on the question of the role of types for professional software developers and for students. I agree with his points: I see why professional software developers find types valuable, but I see little value for novice programmers or for end-user programmers. I have yet to use a type system that I found useful that wasn’t just making me specify details (int vs. Integer vs. Double vs. Float) that were far lower-level than I cared about or wanted to care about.
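To illustrate the kind of low-level numeric detail I mean, here is a minimal Java sketch (my own example, not from Neil’s post): the language makes the programmer choose among int, Integer, float, and double even when all they want is “a number,” and the choice has visible consequences.

```java
// A minimal sketch of low-level numeric detail that a static type system
// can force on a programmer who only wants "a number".
public class TypeDetail {
    public static void main(String[] args) {
        int primitive = 3;         // a machine integer
        Integer boxed = primitive; // the same value, auto-boxed into an object
        float f = 0.1f;            // 32-bit: the literal needs an 'f' suffix
        double d = 0.1;            // 64-bit: the default for decimal literals

        // The distinction is not cosmetic: the "same" value differs by width.
        System.out.println(f == (float) d);  // narrowing d reproduces f
        System.out.println((double) f == d); // widening f does NOT reproduce d
    }
}
```

For a novice or an end-user programmer, all four of these are just “numbers”; the distinctions only pay off at a level of representation and performance detail that such programmers rarely need to care about.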
I finished Nathan Ensmenger’s 2010 book “The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise” and wrote a Blog@CACM post inspired by it. In my Blog@CACM article, I considered what our goals are for an undergraduate CS degree and how we know if we got there. Ensmenger presents evidence that the mathematics requirements in undergraduate computer science are unnecessarily rigorous, and that computer science has never successfully become a profession. The former isn’t particularly convincing (there may be no supporting evidence that mathematics is necessary for computer programming, but that doesn’t mean it’s not useful or important), but the latter is well-supported. Computer programming has not become a profession like law, or medicine, or even like engineering. What’s more, Ensmenger argues, the efforts to professionalize computer programming may have played a role in driving away the women.
Ensmenger talks about software engineering as a way of making-do with the programmers we have available. The industry couldn’t figure out how to make good programmers, so software engineering was created to produce software with sub-par programmers:
Jack Little lamented the tendency of manufacturers to design languages “for use by some sub-human species in order to get around training and having good programmers.” When the Department of Defense proposed ADA as a solution to yet another outbreak of the software crisis, it was trumpeted as a means of “replacing the idiosyncratic ‘artistic’ ethos that has long governed software writing with a more efficient, cost-effective engineering mind-set.”
What is that “more efficient” mind-set? Ensmenger suggests that it’s for programmers to become factory line workers, nearly-mindlessly plugging in “reusable and interchangeable parts.”
The appeal of the software factory model might appear obvious to corporate managers; for skilled computer professionals, the idea of becoming a factory worker is understandably less desirable.
Ensmenger traces the history of software engineering as a process of dumbing-down the task of programming, or rather, separating the highest-ability programmers who would analyze and design systems, from the low-ability programmers. Quotes from the book:
- They organized SDC along the lines of a “software factory” that relied less on skilled workers, and more on centralized planning and control…Programmers in the software factory were machine operators; they had to be trained, but only in the basic mechanisms of implementing someone else’s design.
- The CPT, although it was developed at the IBM Federal Systems Division, reflects an entirely different approach to programmer management oriented around the leadership of a single managerially minded superprogrammer.
- The DSL permits a chief programmer to exercise a wider span of control over the programming, resulting in fewer programmers doing the same job.
In the 1980s, even the superprogrammer was demoted.
A revised chief programmer team (RCPT) in which “the project leader is viewed as a leader rather than a ‘super-programmer.’” The RCPT approach was clearly intended to address a concern faced by many traditionally trained department-level managers—namely, that top executives had “abdicated their responsibility and let the ‘computer boys’ take over.”
The attempt to professionalize computer programming was a kind of response to early software engineering. The suggestion was that programmers could handle projects as effectively as management could. But in the end, Ensmenger provides evidence from multiple perspectives that the professionalization of computer programming has failed.
They were unable, for example, to develop two of the most defining characteristics of a profession: control over entry into the profession, and the adoption of a shared body of abstract occupational knowledge—a “hard core of mutual understanding”—common across the entire occupational community.
Ensmenger doesn’t actually talk about “education” as such very often, but it’s clearly the elephant in the room. That “control over entry into the profession” is about a CS degree not being a necessary condition for entering into a computer programming career. That “adoption of a shared body of abstract occupational knowledge” is about a widely-adopted, shared, and consistent definition of curriculum. There are many definitions of “CS1” (look at the effort Allison Elliott Tew had to go through to define CS1 knowledge), and so many definitions of “CS2” as to make the term meaningless.
The eccentric, rude, asocial stereotype of the programmer dates back to those early days of computing. Ensmenger argues that hiring to that stereotype is the source of many of our problems in developing software. Instead of indulging that eccentricity, we should have hired programmers who would have created a profession that embraced the user’s problems.
Computer programmers in particular sat in the uncomfortable “interface between the world of ill-stated problems and the computers.” Design in a heterogeneous environment is difficult; design is as much a social and political process as it is technical[^1]; cultivating skilled designers requires a comprehensive and balanced approach to education, training, and career development.
The “software crisis” that led to the creation of software engineering was really about getting design wrong. Ensmenger sees the industry as trying to solve the design problem by focusing on the production of the software, when the real “crisis” was a mismatch between the software being produced and the needs of the user. Rather than developing increasingly complicated processes for managing the production of software, we should have been focusing on better design processes that helped match the software to the user. Modern software engineering techniques are trying to make software better matched to the user (e.g., agile methods like Scrum, where the customer and the programming team work together closely in a rapid iterative development-and-feedback loop), as are disciplines like user-experience design.
I found Ensmenger’s tale to be fascinating, but his perspective as a labor historian is limiting. He focuses only on the “computer programmer,” and not the “computer scientist.” (Though he does have a fascinating piece about how the field got the name “computer science.”) Most of his history of computing seems to be a struggle between labor and management (including an interesting reference to Karl Marx). With a different lens, he might have considered (for example) the development of the additional disciplines of information systems, information technology, user experience design, human-centered design and engineering, and even modern software engineering. Do these disciplines produce professionals that are better suited for managing the heterogeneous design that Ensmenger describes? How does the development of “I-Schools” (Schools of Information or Informatics) change the story? In a real sense, the modern computing industry is responding to exactly the issues Ensmenger is identifying, though perhaps without seeing the issues as sharply as he describes them.
Even with the limitations, I recommend “The Computer Boys Take Over.” Ensmenger covers history of computing that I didn’t know about. He gave me some new perspectives on how to think about computing education today.
[^1]: Yes, both semi-colons are in the original.
LiveCode had an earlier blog piece on how they want to implement “Open Language” so that the HyperTalk syntax could be extended. This piece (linked below) goes into more detail and is an interesting history of how LiveCode evolved from HyperCard, and how they plan to refactor it so that it’s extensible by an open source community.
LiveCode is a large, mature software product which has been around in some form for over 20 years. In this highly technical article, Mark Waddingham, RunRev CTO, takes us under the hood to look at our plan to modularize the code, making it easy for a community to contribute to the project. The project described in this post will make the platform an order of magnitude more flexible, extensible and faster to develop by both our team and the community.
Like many such projects which are developed by a small team (a single person to begin with – Dr Scott Raney – who had a vision for a HyperCard environment running on UNIX systems and thus started MetaCard from which LiveCode derives), LiveCode has grown organically over two decades as it adapts to ever expanding needs.
With the focus on maintenance, porting to new platforms, and adding features after all this time evolving, we now have what you’d describe as a monolithic system – where all aspects are interwoven to some degree rather than being architecturally separate components.
via Taming the Monolith.