LaTiCE 2017 in Hong Kong

LaTiCE was announced to be held in Saudi Arabia (see previous blog post), but that didn’t work out. I don’t know why. It will now be held in Hong Kong.
FIRST ANNOUNCEMENT AND CALL FOR PAPERS

Learning and Teaching in Computing and Engineering (LaTiCE 2017)
April 20-23, 2017
Hong Kong
http://www.latice-conference.org/

The Fifth International Conference on Learning and Teaching in Computing and Engineering (LaTiCE 2017) aims to create a platform for sharing rigorous research and current practices in computing and engineering education. The previous four LaTiCE conferences were successfully held in Macau (2013), Malaysia (2014), Taiwan (2015), and Mumbai (2016). The fifth LaTiCE conference will be held at the University of Hong Kong, from April 20th to 23rd, 2017.

LaTiCE 2017 is jointly organized by the University of Hong Kong, Hong Kong, and the Uppsala Computing Education Research Group (UpCERG), Uppsala University, Sweden. It is technically co-sponsored by the Special Technical Community for Education Research (STC Education), which is an IEEE Computer Society initiative to connect those interested in all forms of educational research and pedagogy in the field of computing and engineering.

The conference is preceded by a doctoral consortium on April 20th. The conference is a gathering for presentations of research papers, practice-sharing papers, work-in-progress papers, and displays of posters and demos.

MAIN CONFERENCE THEMES
– Computer Science and Engineering Education research
– Secondary School Computer Science
– ICT in Education

CONFERENCE SUB-THEMES
– Computing and engineering education research, theories, and methodologies
– Cross-cultural aspects of computing and engineering education
– Educational technology, software, and tools
– Teaching innovations, best practices, experience sharing in computing and engineering education
– Course module design, proficiency assessment, and module cross-accreditation
– Improving student engagement in computing and engineering
– Collaborative learning in computing and engineering: team and project skills
– “Flipped” classrooms and active learning
– Work-integrated learning and project-based learning

PAPER SUBMISSION
Research Papers
Research papers (6-8 pages) present original, unpublished work relevant to the conference themes. Papers may be theoretical or based on empirical investigations. Papers are evaluated with respect to their theoretical contribution and the quality and relevance of the research.

Practice / Work-in-progress Papers
Practice / work-in-progress papers (3-5 pages) present original, unpublished practice sharing or work in progress, with a focus on innovative and valued practices within specific institutions. They can present preliminary results or raise issues of significance to the discipline.

Poster/Demo
A poster/demo (2-page abstract) should present innovative ideas for early-stage work related to research, teaching practice, or tools. Demonstrations of tools should stress the methodology and can include some hands-on work for participants.

Papers should be submitted to the EasyChair review management system. All papers will undergo double-blind peer review.
Conference content will be submitted for inclusion in IEEE Xplore as well as other Abstracting and Indexing (A&I) databases. All papers should follow the IEEE Xplore Conference Publishing formatting guidelines.

IMPORTANT DATES
Paper submission: January 15th, 2017
Notification of acceptance: February 15th, 2017
Camera-ready deadline: March 1st, 2017
Author registration deadline: March 1st, 2017
Doctoral consortium: April 20th, 2017
LaTiCE conference: April 21st-23rd, 2017

PROGRAMME COMMITTEE CO-CHAIRS
Roger Hadgraft, University of Technology, Sydney, Australia, Roger.Hadgraft@uts.edu.au
James Harland, RMIT University, Australia, james.harland@rmit.edu.au

January 4, 2017 at 7:21 am

Why the Software Industry Needs Computing Education Research

Interesting argument from Andy Ko and Susanne Hambrusch about why we need more computing education research.

To fill the available jobs with skilled software developers, learners need to actually be learning. Unfortunately, recent research shows that many students simply aren’t. For example, a 2004 study conducted across seven countries and 12 universities found that even after passing college-level introductory programming courses, the majority of students could not predict the output of even basic computer programs. In some of our research on coding bootcamps, we are seeing similar trends, with students failing to learn and failing to get jobs.

If learning outcomes are as bad as these studies show, we need to be deeply concerned. Existing and new programs may be training tens of thousands of new software developers who aren’t quite good enough to get even an entry level position. This leaves the status quo of top companies fighting over top coders, leaving many jobs unfilled while they wait for more skilled developers. Worse yet, the demand for developers may be so high that they do get jobs, but write poor-quality code, putting at risk the software-based infrastructure that society increasingly needs to be robust, secure, and usable.

Source: Why the Software Industry Needs Computing Education Research | The Huffington Post
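
To make concrete what “could not predict the output of even basic computer programs” means, here is a hypothetical tracing question in Python, in the spirit of that multi-institution study (my own illustration, not an actual study item). Students are shown a short program and asked what it prints:

    # Illustrative tracing question: what does this code print?
    values = [3, 7, 2, 9]
    total = 0
    for v in values:
        if v > 2:
            total = total + v
    print(total)  # a student who traces correctly answers 19 (3 + 7 + 9)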

January 2, 2017 at 7:26 am

Raising the Floor: Sharing What Works in Workplace Diversity, Equity, and Inclusion

A really interesting set of proposals.  I saw many that are applicable to improving diversity in higher-education CS, as well as to the stated goal of improving workplace diversity.

Workplace diversity is probably the biggest factor inhibiting women in computing.  We used to say that females avoided CS, not knowing what it is.  I think we can now fairly say that many females avoid CS because they know what it is.

This is a great blog post to end 2016 on.  See you in January! Happy Holidays and a Great New Year!

Over the past few months, we and our colleagues at OSTP have had conversations with dozens of Federal agencies, companies, investors, and individuals about their science and technology workforces, and we have consistently heard people express a commitment to bringing more diversity, equity, and inclusion to their workplaces. They understand the strategic importance. Yet often we found that many of the same people who want to create high-performing, innovative teams and workforces do not know the steps and solutions that others are already effectively using to achieve their diversity, equity, and inclusion goals.

In order to help accelerate this work, we have compiled insights and tips into an Action Grid designed to be a resource for those striving to create more diverse, equitable, and inclusive science and technology teams and workforces, so that we can all learn from each other.

Diversity, equity, and inclusion work is not one size fits all. We hope this set of potential actions clustered by leadership engagement, retention and advancement, hiring, and ecosystem support provides ideas and a jumping off point for conversations within your team or organization on steps that you can take to increase diversity and to make your workforce more reflective of the communities you serve, customers you sell to, and talent pools you draw from.

Source: Raising the Floor: Sharing What Works in Workplace Diversity, Equity, and Inclusion | whitehouse.gov

December 21, 2016 at 7:15 am

After Leaving Computing, New Majors Tend to Differ by Gender – CRN

I found these differences fascinating, though I’m not sure what to make of them.  After leaving computing, students head to different majors, and there is a big gender difference in where they go.  Only 5% of women go into an Engineering field after CS, while 32% of men go into some form of Engineering.  Why is that?

As computing departments across the U.S. wrestle with increased enrollment, it is important to recognize that not everyone who becomes a computing major stays a computing major. In 2014, CERP collected data from a cohort of U.S. undergraduate students who agreed to be contacted for follow-up surveys in 2015. While most of the students surveyed remained computing majors (96%), some students changed to a non-computing major. As shown in the graphic above, students in our sample moved to a variety of majors, and the type of new major tended to differ by gender. Most men (69%) who left a computing major switched to engineering, math/statistics, or physical science majors. On the other hand, most women (53%) tended to move to social sciences, or humanities/arts. These data are consistent with existing social science research indicating women tend to choose fields that have clear social applications, such as the social sciences, arts, and humanities. CERP’s future analyses will explore why women, versus men, say they are leaving computing for other fields.

Source: After Leaving Computing, New Majors Tend to Differ by Gender – CRN

December 19, 2016 at 7:22 am

Graduating Dr. Briana Morrison: Posing New Puzzles for Computing Education Research

I am posting this on the day that I am honored to “hood” Dr. Briana Morrison. “Hooding” is the ceremony where doctoral candidates are given the academic regalia indicating their doctoral degree. It’s one of those ancient parts of academia that I find really cool. I like the way that Wikiversity describes it: “The Hooding Ceremony is symbolic of passing the guard from one generation of doctors to the next generation of doctors.”

I’ve written about Briana’s work a lot over the years on this blog.

But what I find most interesting about Briana’s dissertation work were the things that didn’t work:

  • She tried to show a difference between receiving program instruction via audio versus via text. She didn’t find one. The research on modality effects suggested that she would.
  • She tried to show a difference between loop-and-a-half and exit-in-the-middle WHILE loops (the two structures are sketched below). Previous studies had found one. She did not.
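
For readers who haven’t seen the contrast, here is a minimal Python sketch of the two sentinel-loop structures as they are commonly presented (terminology varies across textbooks, and the study’s actual materials may have differed):

    # Test at the top, with a "priming read" before the loop
    # and another read at the bottom of the body.
    line = input()
    while line != "done":
        print("processing:", line)
        line = input()

    # Exit-in-the-middle: a single read at the top of an
    # unconditional loop, with a break when the sentinel appears.
    while True:
        line = input()
        if line == "done":
            break
        print("processing:", line)

Both loops do the same work; the question was whether one structure is easier for novices to read and learn.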

These kinds of results are so cool to me, because they point out what we don’t know about computing education yet. The prior results and theory were really clear. The study was well-designed and vetted by her committee. The results were contrary to what we expected. WHAT HAPPENED?!? It’s for the next group of researchers to try to figure out.

The most interesting result of that kind in Briana’s dissertation is one that I’ve written about before, but I’d like to pull it all together here because I think that there are some interesting implications of it. To me, this is a Rainfall Problem kind of question.

Here’s the experimental set-up. We’ve got six groups.

  1. All groups are learning with pairs of a worked example (a completely worked out piece of code) and then a practice problem (maybe a Parsons Problem, maybe writing some code). We’ll call these WE-P pairs (Worked Example-Practice). Now, some WE-P pairs have the same context (think of it as the story of a story problem), and some have different contexts. Maybe in the same context, you’re asked to compute the average tips for several days of tips as a barista. Maybe in a different context, you compute tips in the worked example, but you compute the average test score in the practice. In general, we predict that different contexts will be harder for the student than having everything the same.
  2. So we’ve got same context vs. different context as one variable we’re manipulating. The other variable is whether the participants get the worked example with NO subgoal labels, with GIVEN subgoal labels, or whether the participant has to GENERATE subgoal labels. Think of a subgoal label as a comment that explains some code, but it’s the same comment that will appear in several different programs. It’s meant to encourage the student to abstract the meaning of the code.

In the GENERATE condition, the participants get blanks, to encourage them to abstract for themselves. Typically (from research with subgoal labels in other parts of STEM), we’d expect GENERATE to lead to more learning than GIVEN labels, but it’s harder; we might get cognitive overload.
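
Here is a hypothetical Python sketch of what the label conditions might look like around one small worked example (my own illustration, not Morrison’s actual materials):

    # GIVEN condition: subgoal labels appear as comments over each chunk.

    # Subgoal: initialize the accumulator
    total = 0

    # Subgoal: loop through the data, updating the accumulator
    tips = [2.50, 3.00, 1.75]
    for tip in tips:
        total = total + tip

    # Subgoal: compute and report the result
    print(total / len(tips))

    # GENERATE condition: the same code, but each label is a blank
    # ("Subgoal: ____________") that the learner must fill in.
    # NO-label condition: the code appears with no subgoal comments at all.

The same labels (“initialize the accumulator,” and so on) would appear over the matching chunks of other programs, which is what encourages students to abstract the pattern.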

In general, GIVEN labels beat out no labels. No problem — that’s what we expect given all the past work on subgoal labels. But when we consider all six groups, a strange picture emerges.

Why would having the same context do worse with GIVEN labels than with no labels? Why would the same context do much better with GENERATE labels, but worse when the contexts are different?

So, Briana, Lauren Margulieux, and Adrienne Decker replicated the experiment with Adrienne’s students at RIT (ICER 2016). And they found:

The same strange “W” pattern, where we have this odd interaction between context and GIVEN vs. GENERATE that we just don’t have an explanation for.

But here’s the really intriguing part: they also ran the experiment with second-semester students at RIT. All the weird interactions disappeared! Same context beat different context. GIVEN labels beat GENERATE labels. No labels did the worst. When students get enough experience, they figure things out and behave like students in other parts of STEM.

The puzzle for the community is WHY. Briana has a hypothesis. Novice students don’t attend to the details that they need unless you change the contexts. Without changing contexts, even students GIVEN labels don’t learn, because they’re not paying enough attention. Changing contexts gets them to think, “What’s going on here?” GENERATE is just too hard for novices — the cognitive load of figuring out the code and generating labels is overwhelming, so they do badly when we’d expect them to do better.

Here we have a theory-conflicting result, that has been replicated in two different populations. It’s like the Rainfall Problem. Nobody expected the Rainfall Problem to be hard, but it was. More and more people tried it with their students, and still, it was hard. It took Kathi Fisler to figure out how to teach CS so that most students could succeed at the Rainfall Problem. What could we teach novice CS students so that they avoid the “W” pattern? Is it just time? Will all second semester students avoid the “W”?

Dr. Morrison gave us a really interesting dissertation — some big wins, and some intriguing puzzles for the next researchers to wrestle with. Briana has now joined the computing education research group at U. Nebraska – Omaha, where I expect to see more great results.

December 16, 2016 at 7:00 am

If you really want a diverse workforce, why not go where there is diversity?

Nick Black, brilliant GT alum and (now former) Google engineer, tells it like he sees it.  His critique of Google and its efforts to improve diversity extends to most of Silicon Valley.  If you really want a diverse workforce, open offices where there’s diversity.

Nick’s analysis (and I encourage you to read the whole post below) talks about the density of middle-class Black workers. But he doesn’t consider where there are Black workers who know computing.  Computing education is still pretty rare in the US.  Let’s use AP CS exam-taking as a measure of where there is CS education.  In Michigan last year, there were 19 Black AP CS exam-takers; in Missouri, 11; in Mississippi, none.  There are middle-class Black families in these states.  They may not be getting access to CS education.

Google talks endlessly about diversity, and spends millions of dollars on the cause. My NYC office lends its prodigiously expensive square feet to Black Girls Code. We attempt to hook the recruiting pipeline up to HBCUs. We tweet about social justice and blog about the very real problem of racial inequality in America. Noble endeavors, all. It’s too bad that they’re not taking place where black people actually, you know, live.

According to census.gov’s data as of 2016, Mountain View is 2% black. In 2010, the Bay Area Census Project recorded 1,468 blacks in MTV. I saw more black people than that crossing Peachtree Street today. census.gov reports, as of 2010, blacks making up 25.1% of NYC, 9.6% of Los Angeles, and 6.1% of famously liberal San Francisco. census.gov does not provide data for Dublin or Zürich, but we can make some reasonable assumptions about those other largest Google offices, n’est-ce pas?

And let’s be honest — I doubt much of that 25.1% of NYC is centered around Chelsea.

Atlanta’s a bit down from 67% in 1990, but 54% ain’t so bad.

Source: A dispatch from Terminus – dankwiki, the wiki of nick black

December 12, 2016 at 7:05 am

Making Hard Choices in State Computing Education Policy towards #CSforAll #CSEdWeek

At the ECEP Summit, I sat with the team from North Carolina as they were reviewing data that our evaluation team from Sagefox had assembled.  It was fascinating to work with them as they reviewed their state data.  I realized in a new way the difficult choices that a state has to make when deciding how to make progress towards the CS for All goal.  In the discussion that follows, I don’t mean to critique North Carolina in any way — every state has similar strengths and weaknesses, and has to make difficult choices.  I just spent time working with the North Carolina team, so I have their numbers at hand.

North Carolina has 5,000 students taking CS in the state right now.  That was higher than some of the other states in the room.  I had been sitting with the Georgia state team, and knew that Georgia was unsure whether it has even one full-time CS teacher in a public high school in the whole state.  The North Carolina team knew for a fact that they had at least 10 full-time high school CS teachers.

Some of the other statistics that Sagefox had gathered:

  • In 2015, only 18% of the Black students in North Carolina who took the AP CS exam passed it. (The rate rose to 28% in 2016, but we didn’t have those results at the summit.) The overall pass rate for AP CS in North Carolina is over 40%.
  • Only 68 teachers in the state took any kind of CS professional development (that Sagefox could track).  There are 727 high schools in the state.
  • Knowing that there are 727 high schools in the state, we can put the 5,000 high school students in CS in perspective.  We know that there are at least 10 full-time CS teachers in North Carolina, each teaching six classes of 20 students.  That accounts for 1,200 of those 5,000.  The remaining 3,800 students, spread across 727 high schools with class sizes typically around 20 students, suggest that many high schools in North Carolina have no CS at all. (The arithmetic is sketched below.)
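
Here is the back-of-the-envelope arithmetic from that last bullet, as a quick sketch (using the estimates above: 10 full-time teachers, six classes each, 20 students per class):

    # Rough estimate of how far 5,000 CS students spread across NC high schools.
    total_cs_students = 5000
    full_time_teachers = 10
    classes_per_teacher = 6
    class_size = 20
    high_schools = 727

    covered = full_time_teachers * classes_per_teacher * class_size  # 1,200 students
    remaining = total_cs_students - covered                          # 3,800 students
    remaining_classes = remaining // class_size                      # 190 classes
    print(remaining_classes, "classes spread across", high_schools, "schools")
    # 190 classes can cover at most 190 schools, leaving over 500 with none.

Even if every one of those remaining classes were in a different school, more than 500 of North Carolina’s 727 high schools would have no CS class at all.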

Given all of this, if you wanted to achieve CS for All, where would you make a strategic investment?

  • Maybe you’d want to raise that Black student pass rate.  North Carolina is 22% African-American.  If you can improve quality for those students, you can make a huge impact on the state and make big steps towards broadening participation in computing.
  • Maybe you’d want to work towards all high schools having a CS teacher.  Each teacher is only going to reach at most 120 students (that’s full-time), but that would go a long way towards more equitable access to CS education in the state.
  • Maybe you’d want to have more full-time CS teachers — not just one class, but more teachers who just teach CS for the maximum six courses a year.  Then, you reach more students, and you create an incentive for more pre-service education and a pipeline for CS teachers, since then you’d have jobs for them.

The problem is that you can’t do all of these things.  Each of these is expensive.  You can really only go after one goal at a time.  Which one first?  It’s a hard choice, and we don’t have enough evidence to advise which is likely to pay off the most in the long run.  And you can’t achieve the whole goal all at once — as I described in Blog@CACM, you take incremental steps. These are all tough choices.

December 9, 2016 at 7:51 am
