Hump-Jumping: How Computer Science Education Can Be Saved, err, maybe.

In a paper called 'The Camel has two humps' the authors described a test that can be applied to students before they've begun a computer science course and that fairly accurately predicts who will do well and who will not.

I've never agreed with their summary of the results.

They say they are dividing:

"programming sheep from non-programming goats"

...which implies the difference between passing and failing is as pronounced as speciation. That's a big claim, and well outside the scope of their research.

And I think they misrepresent their study when they say:

"This test predicts ability to program with very high accuracy before the subjects have ever seen a program or a programming language."

...as, crucially, their study failed to check whether the students "had ever seen a program or programming language."

(Unless I'm misinterpreting this sentence from the paper, taken verbatim:

"We believe (without any interview evidence at all) that they had had no contact with programming but had received basic school mathematics teaching."

... I've written to one of the authors to seek clarity.)

I asked Alan Kay for his opinion when he commented here on a different topic; he was very kind in providing a lengthy and thoughtful answer.

His opening phrase really sticks in my mind:

"They could be right, but there is nothing in the paper that substantiates it."

Then, this morning, I saw a fresh comment on that article, from David Smith:

"If there were a definitive test of ultimate programming capability I could apply on the first day of class, what would I say to those who 'failed'?"

That's a very human response to the topic, from an educator directly affected by it. And I don't have a sufficient answer to it.

But a different approach to the whole problem has occurred to me since:

Let's suppose that this test is indeed an accurate predictor of those who will and won't succeed in Computer Science 101.

We put aside all worries about what biases or inconsistencies the study might have. Just accept that the test is effective. Stick with me here.

So we give the test to all students before they start Computer Science 101. At the end of the subject, we see that, as predicted, those who do poorly in the subject tend to be those who did poorly on the pre-test. But instead of looking for correlation, what if we looked for outliers?

Which students did poorly on the pre-test, but did well in computer science? Those are the students with the most to teach us. Why did they buck the trend?
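
To make the idea concrete, here's a minimal sketch of that outlier search in Python. The field names, scores and thresholds are all invented for illustration; real data would come from whatever records the pre-test and the course already produce.

    # Purely illustrative: find "hump-jumpers", students who scored poorly
    # on the pre-test but went on to do well in the course.
    students = [
        {"name": "A", "pretest": 12, "course": 85},
        {"name": "B", "pretest": 78, "course": 90},
        {"name": "C", "pretest": 15, "course": 40},
    ]

    PRETEST_FAIL = 40  # at or below this, the pre-test predicts a struggle
    COURSE_PASS = 70   # at or above this, the student clearly did well

    hump_jumpers = [
        s for s in students
        if s["pretest"] <= PRETEST_FAIL and s["course"] >= COURSE_PASS
    ]

    for s in hump_jumpers:
        print(f'{s["name"]}: pre-test {s["pretest"]}, course {s["course"]}')

Finding them is the trivial part; the interesting work is in the interviews that follow.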

Okay, so maybe some of them cheated. (I remember a high incidence of cheating in the early computer subjects I took, particularly amongst those who didn't continue in the field.) And maybe some people deliberately blew the pre-test, even though they did well in the subject.

But once we find the genuine hump-jumpers, we focus on what it was that helped computer science 'click' for them. Did they find there were specific misconceptions they had to overcome? Did they have extra tuition? Were there specific problems that helped them get their thinking in order? Was it just hard work? And, regardless of the answer, would they like to become tutors next semester, specifically working with those who perform poorly on the pre-test?

It might be necessary to look at a lot of such hump-jumpers before useful lessons emerge. It might be that every one of them has a different story to tell and there's no common pattern. (As Tolstoy said in his Turing Award speech: "Happy programmers are all alike; but every unhappy programmer is unhappy in her own way.")

So that's my answer for David. I don't know what you could say to those who fail the pre-test. But I think that over time a good pre-test could be used to develop new and better teaching methods, and maybe that's the best we can do.

