Specifically, I'm thinking about binary. Converting to binary shows up in a lot of standards. "We" think it's an important fundamental of computer science. Yet for most students - and many teachers, I'd wager - it has no visible connection to the rest of computing. We tell them that computers work in binary, but then they use web browsers and interpreters and word processors and there's nary a 0 or 1 to be found.
Wouldn't it be awesome if there were some kind of book that put all these things into context? There are certainly more examples. Maybe if you comment with some of the ones that occur to you, someone will put them into a book.
4 comments:
Sounds like a fun concept. Personally I find different number systems just plain fun and have since the 5th grade. But I agree that we too often teach some concepts in isolation, and that is not good. Recursion is another one: I think we should teach stacks and recursion together.
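To make that connection concrete, something like this side-by-side might work (just a sketch; the names are mine). The recursive version and the explicit-stack version do the same bookkeeping; the machine just hides the stack in the recursive one.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SumDemo {
        // Recursive version: the deferred additions live on the call stack.
        static int sumRecursive(int n) {
            return n == 0 ? 0 : n + sumRecursive(n - 1);
        }

        // Same computation with the stack made explicit.
        static int sumExplicit(int n) {
            Deque<Integer> pending = new ArrayDeque<>();
            while (n > 0) pending.push(n--);           // defer each term, like a call frame
            int total = 0;
            while (!pending.isEmpty()) total += pending.pop();
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sumRecursive(5));       // 15
            System.out.println(sumExplicit(5));        // 15
        }
    }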
I agree, Alfred - I love converting between number systems. But we're the geeks. There are a lot of kids who don't think it's fun and don't understand why we make them do it, which makes them not want to do it.
Another concept I thought of last night was stacks and queues. I complained bitterly about how every language I've ever used combines them into a single "list" that can be popped, pushed, and, um, the other methods. I know other people think they're Very Important, but I never got it.
Then someone pointed out that you're using a stack every time you "undo" in an application or use the back and forward buttons on the browser. And what would happen if they behaved differently? Such a light bulb moment.
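Here's a little sketch of that light bulb, browser-style, built on two stacks (the class and method names are just illustrative):

    import java.util.ArrayDeque;
    import java.util.Deque;

    class History {
        private final Deque<String> back = new ArrayDeque<>();
        private final Deque<String> forward = new ArrayDeque<>();
        private String current = "home";

        void visit(String page) {
            back.push(current);        // remember where we were
            current = page;
            forward.clear();           // visiting somewhere new wipes the forward stack
        }

        void goBack() {
            if (back.isEmpty()) return;
            forward.push(current);
            current = back.pop();      // last-in, first-out: the page we just left
        }

        void goForward() {
            if (forward.isEmpty()) return;
            back.push(current);
            current = forward.pop();
        }
    }

If those were queues instead (first-in, first-out), hitting "back" would jump to the oldest page you ever visited rather than the one you just left - which is exactly the "what would happen if they behaved differently" question.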
I use binary only as a means of demonstrating, or explaining why certain rules exist. Bytes in Java can have a value of -128 to 127. Why those numbers, why not a different set of numbers? So I explain how binary works and they see where the values come from. Why do I get a negative number when I add two relatively large byte values? More binary. The idea of an old odometer rolling over from 99999.9 to 00000.0 and the leftmost 1 "going away" seems to help with the concept.
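For instance, a quick sketch of the kind of demo I mean (the class name is just for illustration):

    public class ByteOverflow {
        public static void main(String[] args) {
            byte a = 100, b = 100;
            byte sum = (byte) (a + b);   // byte arithmetic promotes to int, so cast back down
            System.out.println(sum);     // prints -56, not 200
            // 200 is 11001000 in binary; read as a signed byte, that leftmost 1
            // makes it negative: 200 - 256 = -56. Same idea as the odometer rolling over.
        }
    }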
I spend a lot less time on it than I used to, and introduce it later in the semester than in previous years. I use it to explain things we've already talked about, rather than presenting it just to present it. I also stick to bytes even though we rarely use them in class. If they get bytes, larger data types just add busy work.
I also teach a CS1 course at a university so YMMV.
As a CS PhD student, I am far too "close" to the material to see things that don't seem to make sense. I end up looking at binary numbers (well, more commonly hexadecimal) when debugging really gross, low-level problems. Thus, I would *love* to see the "questions" part of this book, and would be happy to try and contribute "answers."
For the binary example, I think Tom is bang on: it only matters for the "edge cases" of numbers. I would say this is not terribly important when beginning to learn how to write programs. However, this issue does crop up in non-programming computer systems. For example, Excel uses floating-point math, which can sometimes give people disconcerting results.
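The classic demonstration (in Java here rather than a spreadsheet, but it's the same underlying binary floating point):

    public class FloatSurprise {
        public static void main(String[] args) {
            // 0.1 and 0.2 have no exact binary representation,
            // so the sum isn't exactly 0.3.
            System.out.println(0.1 + 0.2);           // 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3);    // false
        }
    }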