I'm Ryan Lowe, a Software Engineering graduate living in Ottawa, Canada. I like agile software development and Ruby on Rails.
I write this blog in Canadian English and don't use a spell checker. Typos happen.
# Testing is a First Class Job
Programmers naturally assume that, in general, "things work." ... Testers, on the other hand, are all descendants of Murphy and assume, in general, that things "don't work right." -- from Testing Extreme Programming
At a job I had as a tester, I gained a unique perspective on the "traditional" software process, in which features were added by developers and checked by testers. If a defect was found in the system, it went through the following process:
Sounds pretty organized, right? Yup, it was -- but slow! It wasn't uncommon for each of those steps to occur on a different day. That's five business days minimum to completely fix a problem, and even the slightest little problem had to go through this process. Most defects took much, much longer, and the testing team always had a backlog of defects to verify as fixed.
Developers on this team fixed problems only as they arrived as defect reports. The testing team had a unit test suite, but it was incomplete and only about half automated; the rest were manual GUI tests done by the testers every few months. The automated portion of the suite was not executed on each build (see the Eclipse project for an excellent example of how to do that). The developers did not run the test suite at all -- not even a small portion of it to check for regressions before they checked their code in.
So it was quite easy for a developer to check in a regression that a unit test could have caught almost immediately, before the code was ever checked in. Instead it took at least a full week to fix a regression -- and that's if it was caught immediately (unlikely). Predictably, the testing team ran into the same regressions over and over again.
So what's the problem? Not enough management direction? No, I think it's the developers. They are ignorant of the problems they are causing because they have no feedback. By the time they get a defect report for a regression, it's two weeks later and they have no idea how they caused it. With a unit test suite, that feedback is immediate.
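To make that concrete, here's a tiny sketch of what such immediate feedback looks like. This is a hypothetical example -- the class, method and checks are invented for illustration, not taken from any real project -- but it shows how a pre-check-in test turns a two-week defect cycle into a two-second one:

```java
// Hypothetical example: a tiny self-checking "unit test" a developer can run
// before check-in, catching a regression in seconds instead of weeks.
public class PriceCalculatorTest {

    // The production code under test (names invented for illustration).
    static int discountedPrice(int price, int discountPercent) {
        return price - (price * discountPercent / 100);
    }

    public static void main(String[] args) {
        // Each check encodes behaviour we never want to silently lose.
        check(discountedPrice(100, 10) == 90, "10% off 100 should be 90");
        check(discountedPrice(200, 0) == 200, "0% discount should change nothing");
        check(discountedPrice(80, 100) == 0, "100% discount should be free");
        System.out.println("All regression checks passed");
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError("REGRESSION: " + message);
    }
}
```

Run it before every check-in; if someone later "fixes" the rounding and breaks the 100% discount case, the build yells at them the same day, not two weeks later.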
As well, the "dotCom boom" era created a niche for a lazy developer position, centred on giving highly paid and talented coders sexy work to keep them around. If you were one of these guys and your manager told you to start writing tests with your code, you'd probably quit. So there's pressure to keep giving them sexy work, which ultimately gives them less immediate accountability for regressions. Just write your code and the testers will find your mistakes, right? There's no way to tell you just "broke the build" by putting in a regression, because there's no test for it! How convenient.
Developers need to learn from their own mistakes to become better developers. They need to have an ingrained sense of quality in their work, instead of a casual oh I'll fix the defects as they come into my queue attitude. Only then will they be better managed -- when that wise man said "managing programmers is like herding cats" he wasn't kidding.
I've learned more about my own coding from test-driven development than from any amount of straight-up development experience. It's humbling to have to write your own tests -- and for recruiters and managers, that's probably the worst part. Where do you find people with the good sense of quality and humility to do it?
# Ahhh the Memories
I'm getting a few hits from Dave Winer about this problem again. He likes to link to posts from the same day in years past. It was my first link from a mainstream blogger and it pretty much got me hooked on link trolling. ;) hehe
So what's happened with this problem? People are still running into it (with old versions of MT, presumably) -- I can tell by my hit counter. Just goes to show you: don't underestimate the reach your software can have; it'll bite you. This was a particularly silly booboo, too, given it affected Internet Explorer, a browser with over 90% market share at the time.
What was the root of the problem? A broken implementation of a standard: a bad library plus poor checking by the developer equals unhappy end users. It wasn't the CSS coder's fault that IE broke the spec, but it was his fault that he didn't verify it. Read my previous post today for more of my thoughts on issues with third-party libraries.
Oh and by the way, let's not forget that there was another IE-specific problem with that CSS stylesheet too. MovableType is a great product but those kinds of errors in a default stylesheet were just not cool. Lesson learned.
# Using Third Party Libraries for a Software Project
Where would we be, as programmers, without third party libraries? We'd be in the freakin stone ages. So they're important, very important. It's also important to know the right time to use a third party library and that's what I'm going to talk about here. Please feel free to leave comments so we can all learn from each other.
Let's make sure we're all on the same footing here. You're the programmer on a project made for a customer. The project has users, who may or may not be related to the customer. As far as the users are concerned, they are using one piece of software: yours. In reality they are probably using a couple dozen layers of library code written by other people topped off with a thin layer of customization that you wrote. If that doesn't give the customer a nice warm feeling I don't know what will.
Third party libraries can include but are not limited to the following: compilers, programming languages, runtimes, virtual machines, OS-dependent APIs, standards implementations, file formats and on and on. Anything involved with your program is probably coded somewhere ... and most of that code wasn't written by you.
Guess what happens when any of those layers breaks? The users and the customer blame the programmer. And they should, you know. The programmer made the choice to use the library* and he'll have to live with that choice as long as he's maintaining it. Then his replacement will have to live with his choice too, and his replacement and so on until the project gets stale and is used for french toast to make room for the new software.
So the choice is important. It's not always as easy as saying "oh, I'll just do it myself" or "forget that, it's already done over here I just have to tweak it a bit". It's also not as easy as saying "I better avoid Not Invented Here Syndrome at all costs!" either. Don't fool yourself into an antipattern of an antipattern. It's really all about risk, my friend. So add them up.
Now before you read any further, go read Joel Spolsky's opinion on Not Invented Here Syndrome. He makes a lot of great points and it doesn't make sense for me to regurgitate them. Don't worry I'll wait for you.
OK you're back. Let's continue ...
How risky is this library I'm considering using on my project? What attributes determine risk? That all depends on who you are. Here are a few considerations (feel free to add more in the comments):
--> is the library open source and if not, can I get the source if their business goes under?
--> does the library accurately follow an agreed on and published API, standard or specification? what deviations are in the library?
--> if it follows a standard, how many replacements are there? how do they stack up against this library?
--> what are the licensing terms of the library? or alternatively: how much will this library cost? and also: what are the distribution terms of this library? (huge consideration factors)
--> is the library extensible to my present needs? to my known future needs? to my unknown future needs?
--> what's the code quality? how many defects are logged against this library and what's the severity of those defects? can I fix the defects? can the library author fix them himself? what if I send him a patch, how quickly will it be incorporated?
--> is the individual or company maintaining this library easy to collaborate with or are they a pain in the ass? do they embrace user feedback or dread it?
--> how many stable releases are there? how many partial or complete rewrites have happened? (hint: check the major version number)
--> is there a test suite against the library? was it made by another third party?
--> is the library written in a useful/usable language? can I technically even use the library?
--> what's the release schedule like? is the code rotting and in disrepair? or does it change too often? is there a stable maintenance branch for the long haul or will I be forced to feature upgrade to get bug fixes?
--> what other applications are using this library? what's their track record with it? what do users think of the end products that use the library?
--> is the library a leaky abstraction? if so, can I manage it? even in the long term? am I really sure about that?
--> is the library very well known or standard? can i leverage the programmers' knowledge of the library to speed development up and for maintenance later, rather than having new programmers learn all custom code when they are brought in?
--> and after all of those, finally: does the library perform well enough? can I improve performance if it doesn't perform well?
How to weigh each of these risks is up to you. You have to decide if using the library will actually save you time and effort or if it's just the cool thing to do. Using a third party library might seem like a time-saving decision, but a poor decision can actually cost you time in the long, long, long run. It can also save you time, especially if the library is well known (Java, XML, Win32). Sometimes, like Joel says, a custom made in-house library is the way to go even though the problem has been 95% solved somewhere else. Ultimately it's a long-term decision ... it's a very big commitment.
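One crude way to "add them up", as I said earlier, is literally to score each question and sum the scores. The sketch below is purely illustrative -- the criteria names, the 0-to-5 scale and the example scores are all invented, and real weighing is a judgment call -- but even a simplistic tally forces you to actually ask each question:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: one crude way to "add up" library risk.
// Criteria names and weights are invented; real weighing is a judgment call.
public class LibraryRiskSketch {

    // Each criterion is scored 0 (no risk) to 5 (serious risk);
    // the total is just the sum -- simplistic, but it forces the questions.
    public static int riskScore(Map<String, Integer> criteria) {
        int total = 0;
        for (int score : criteria.values()) {
            total += score;
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> hypotheticalXmlParser = new LinkedHashMap<>();
        hypotheticalXmlParser.put("source availability", 0);      // open source
        hypotheticalXmlParser.put("standards compliance", 1);     // minor deviations
        hypotheticalXmlParser.put("maintainer responsiveness", 2);
        hypotheticalXmlParser.put("leaky abstraction", 3);
        System.out.println("Risk score: " + riskScore(hypotheticalXmlParser)); // prints 6
    }
}
```

Compare the totals for two candidate libraries (or a candidate versus writing it yourself) and at least your gut feeling has some structure behind it.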
One last thing before I leave this topic: if you choose a library and use it, you're not in a very good position to bellyache about its effectiveness afterwards. That's not to say you're not allowed to make suggestions, but don't fault the library for your application's bad behaviour. You are responsible for work-arounds now. You made the bed and now you have to sleep in it. Maybe you'll make it more comfortably the next time around.
By the way, that includes web browsers too! They are just another layer, boys and girls. Use them properly .... but that's a rant for another day.
*let's not get into semantics on this point, his manager(s) may have chosen the underlying technology he uses. The choice is made by the project team.
# Post^H^H^H^H Essay Writing
I was never much of a writer, I don't care what you guys say. Sure I know spelling and grammar but that's just memorization. Not very hard, if you ask me (and I know some of you would disagree) ;). I still comma splice all the freakin time ... I have to watch it. And I just dot dot dot whenever I can't think up the right punctuation. It comes from having a conversational style of writing and trying to get that down in written form. Like chatting on an instant messenger or IRC ... sometimes it just doesn't fit right.
Anyway, now I have these big honkin blog posts to deal with. I try not to ramble and make a coherent point but it's hard when the posts get long. So I've been trying a few different things to organize myself.
The first is writing out, in point form, what I want to cover. I try to identify a key point. Then I expand those points into sentences as quickly as I can ... I go with the writing flow, without blocking myself -- a kind of freestyle writing, I guess. Then I go back and edit the sentences, filling in the gaps to give the post some legs to stand on, fix spelling mistakes (I don't use a spell checker) and so on.
Then I publish it. The funny thing is even though Movable Type has a preview feature I'm still not happy with the post until I see it published. Then I find more mistakes, fix them, and republish. Often I'll republish half a dozen times or more but as soon as someone quotes the post I stop editing it.
That's my approach, given I haven't had to write an essay since my last year of high school. How do you write essays or long posts?
HA! Ironically this one was done completely off the top of my head .... still got it ;)
# id3v2 Issues
Reading id3v2 tags for MP3 files isn't too bad, but I can already tell that writing will be a bit of a challenge. id3v2 tags are divided into frames, with each frame having a sort of name/value pair. In id3v1, all of the values are strings. In id3v2 values can be any data at all.
One of the goals of the id3v2 component is to keep it separate from AudioMan so it may be used elsewhere, so I didn't want dependencies on AudioMan-specific things. As well, the id3v2 spec has changed quite a bit over the years (four revisions so far), which can lead to problems using some of its own information.
Something that changed a lot in the id3v2 spec was the frame id strings, which started as three characters and then went to four. These inconsistencies between versions make using the id strings in the API difficult, so I've decided not to do that.
Instead I've used a simple Map of name/value pairs. The name is a string for the field, like "TRACK_NUMBER", and the value is also a string. So when the API is used, the data must be converted from AudioData to this format to id3v2, or the reverse. This creates a very loose dependency from AudioData to id3v2 and a loose dependency between revisions of id3v2.
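Here's a rough sketch of that idea. The class and method names are invented for illustration (this is not AudioMan's actual API); the point is that callers only ever see version-neutral field names, while the reader translates whichever version-specific frame id the file happens to use (id3v2.2's three-character "TRK" versus id3v2.3's four-character "TRCK" for the track number):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the version-neutral tag map described above. Names are invented
// for illustration; this is not AudioMan's actual API.
public class TagMapSketch {

    public static Map<String, String> readTag() {
        Map<String, String> fields = new HashMap<>();
        // A real reader would translate the file's version-specific frame id
        // ("TRK" in id3v2.2, "TRCK" in id3v2.3) into one neutral field name.
        fields.put("TRACK_NUMBER", "7");
        fields.put("TITLE", "Some Song");
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> tag = readTag();
        // Callers never touch frame ids, only the neutral names.
        System.out.println("Track: " + tag.get("TRACK_NUMBER"));
        System.out.println("Title: " + tag.get("TITLE"));
    }
}
```

The writer does the reverse translation, so a new id3v2 revision only changes the translation tables, not every caller.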
Another issue is when I write data to the id3v2 tag. There will be a lot of frames I won't have to change, so I'll just copy them from the original instead of remaking them from scratch. This will prevent me from deleting data I don't understand and don't have any business trying to rewrite.
# Effective Placebo?
I stopped biting my nails again. It's something I do without even realising it, which makes it hard to control. Sometimes I go through spurts of not biting, especially if I really concentrate on it. Otherwise, they just go back to the way they were. I've been biting my nails as long as I can remember; as far back as age 5 or earlier.
The thing that helped this time was some audio suggestion therapy from Loren Parks of the Psychological Research Foundation. The recording includes a loud buzzing sound that supposedly disconnects you from a scary childhood experience related to your nail biting.
I'm very skeptical but it seems to be working. It could be the suggestion itself or it could be a placebo effect, where my mind is actually willing itself to believe that it worked. Either way, it has worked so it doesn't really matter if it is a placebo effect ... it's effective nonetheless so far.
The hard part, however, will be making it last longer than a few weeks. I'm betting that as soon as the suggestion gets to the back of my mind the nail biting will return. We'll see.
# id3v2 Tags and Feedback
I've come to the (rather late) realization that file formats rule AudioMan. I can't keep adding new features to the software until it understands how to read and write the fundamental audio tagging formats that people use day to day.
Today the format that dominates is MP3, but there are many variants. MP3 tag flavours come in id3v1 and id3v2 but v2 already has 4 variations to support. I haven't confirmed it yet but it appears that id3v2 tags are also used on AAC files sold on the iTunes Music Store.
Read and write support for id3v1 is done and was fairly straightforward, right Jim? :). Once I get support for id3v2 done, which will take longer, people will be able to use AudioMan day to day. This is really really important in an iterative process. If people always have to use your product in "test mode" they won't run into a lot of real problems because they won't use it as much.
When I get support for id3v2 done I'm guessing I'll get more feedback and interest in AudioMan.
Kibbee linked to an essay by Paul Graham called Why Nerds are Unpopular. Paul wrote an article a while back called The Hundred-Year Language that was also interesting. He seems to write mostly about programming but also goes into other topics in his essays.
First off, I have to be honest about my nerd status. Life at the small high school where I spent the majority of my time doesn't sound as bad as the situation he describes in suburbia. The reason, I think, is that people have much less chance of being typecast at a small high school because they can do lots of different things. It wasn't that difficult to make the sports teams, and it was easy to get to know everyone in your grade and people in many of the other grades.
Small towns, as opposed to suburban neighbourhoods, are also more tightly knit, with long-standing family histories. It isn't uncommon to go to high school with first and second cousins and other distant relatives. The community is much stronger, and therefore the social consequence/stigma of misbehaviour and bullying is there. In general, the fact that the community exists keeps everyone in line. Suburban neighbourhoods lack this sense of community.
At a larger suburban high school, like the one I went to in grade nine with almost 2000 students, the situation is much different. People played one sport, or were a member of one school club or band and hung out with a small group of people. It was strangely different and I'm very glad I didn't have to go through that all of the way through high school. So in a lot of ways I was lucky to have moved when I did.
Before and during my transition to the smaller high school I realised that education and marks weren't everything I was looking for. To be honest, I was a nerd then; however, during grade 8 I turned down a chance to take grade 9 math in high school because I didn't want to miss gym, of all things. To an adult that might seem like an irrational and unambitious decision, but it's probably one of the best I've ever made. I had played sports since grade 6, and in high school I played soccer, football and basketball. I played all of them at an average level, but it was the sport, the teamwork, the challenge and the camaraderie that were most important in all of that. The fact that nerds miss out on all of these aspects of sport is a great loss.
Don't worry, I'm getting to a point here. Now as an engineer I'm not going to start an argument with philosophy majors, I just want to give my opinion and my perspective. Paul's perspective is interesting because most of us never had to go through it, thank goodness. We can all give our own sides to this story here and share insight.
The general sentiment of the essay is a reassurance to nerds on one hand and a damning/resentfulness of the "system" on the other. I see this as a typical minority stance and I give my sympathy, given I was a nerd in the past. The reality is that the system won't change for the sake of minority groups because it just doesn't have the resources to do it, especially in a government-funded area like education. Unless the problems he describes start affecting more people they will not be addressed. This is a sad but cheap way to keep the system moving.
That said, I don't see vigorous social climbing as an aberration only occurring in high school or in extreme circumstances (he compares with prison a lot). Social skills are used in almost every aspect of human life. While nerds in high school are more concerned about being smart, they should realise that being an outcast is actually quite counterproductive. It's much easier to blend in than be an outcast. Therefore some minor effort to fit in should be made, at the expense of bowing to the system. If you think about it, you're already bowing to the system in other ways to get to university anyway. (right, why don't I just sell my soul to Satan while I'm at it? I agree, this snowballs)
Some people's hatred for The System precludes them from even attempting to fit in. But they have to realise the consequence of their intentional rebellion. It seems as though they don't, and the rebellion is more a knee-jerk reaction than a logical choice. They see the high school social scene as elaborate and unnecessary, but it's a preparation for the harsh real world where not everyone is your buddy, not some strange detour.
Nerds' ignorance of the social scene hurts them the most. They see the fringe of it; they are not inside of it. So they don't see the benefits of it, they just see the negativity. It's no mystery why they don't want to try to fit in with the same bullies who make fun of them, but the ironic thing is that if they tried, they could. High school kids have pretty short memories from one year to the next.
When I say social skills there are many aspects to this. I'm not just talking about being able to communicate but also to fit in in general. The way you walk, dress, act, interact, etc. Humans are tribal, right? Maybe that's where this quite natural and childish behaviour to single out people comes from. In order to prevent being singled out you conform and become part of the tribe. So it's self-reinforcing.
I don't see adult life being much different from high school life. How could it be? People don't magically change into adults after high school. Maturation is a slow process of convincing yourself you were wrong. College students are still fairly immature (many high school traits are transferred); they are just divided into groups of similar social stature, where the conflicts between groups aren't as commonplace.
The logical way to look at it is to say that all of the smart kids are now rich, successful adults, so we're in control now! But no, I wouldn't say that's what happens. Nerds don't get control and power because they have no control over people. They have no social skills, no charisma, because these are learnt skills they don't have practice with. They may not see the point of wearing a business suit outside of protocol, for example. This is what successful business leaders are made of. This is what politicians are made of. Where are these business leaders made? In the social scene and parties of high school and college and after. Where do they play their game now? In an even bigger, but equally frustrating, System.
So the advice and consolation that Paul gives is unfortunate. Telling nerds that once they get through high school everything will be OK is not a solution to their plight. In my opinion nerds are missing out on other things besides the pursuits of intelligence. An effort to integrate yourself socially, rather than just accepting that people will make fun of you because they are immature, is a more proactive approach. My recommendation: learn a sport and get good at it, hack your body/appearance along with expanding your mind, and integrate with a social scene. These are challenging things too, sometimes even harder. Being well-rounded is much more important to your physical and mental health.
# Thoughts on the Physics of Inline Skating
I'm rusty at physics but I'll try to do the best I can here. I'm going to try to talk about inline skating, pacing and friction.
Inline skates usually have 4 or 5 wheels on each skate. Each wheel contains ball bearings which let it spin with a minimum amount of friction. These bearings are greased or oiled to minimize friction. There is, however, still friction there -- and it gets worse as the ball bearings age and wear out with use and the lubricant wears out or is contaminated by dirt.
The other friction occurs where the skates meet the road. You'll notice that skating on a smooth surface is much easier than a rough one and takes less effort. This is the road on wheel friction working against you.
You, of course, push against these two forces of friction with kinetic energy released by the muscles in your legs. Skating is not a completely forward motion -- it has a bit of side-to-side lateral rocking to it. So the total force exerted by a person breaks down into forward and side-to-side components. Any energy used for side-to-side motion is wasted in the fight against friction, but it's nevertheless useful for things like balance and the fluidity/ease of the skating motion.
So we have three forces involved: the force of friction of the road, the force of friction of the ball bearings in the skate's wheels and the forward force of the person that is skating forward.
To sustain movement on inline skates, a person has to constantly fight against the two opposing forces because those forces work to decelerate him. The person has to exert more force to get up to speed from rest (zero velocity), but once at pace he only has to exert enough force to beat friction in order to maintain his speed.
So there seems to be a certain maximum efficient speed (call it MES) at which a person exerts the minimum amount of effort to beat friction while still maintaining a high speed. The MES is, of course, dependent on the level of the two forces of friction: the road and the bearings. There are a lot of other factors too, like how the skate is put together and its lubricants, which affect energy transfer.
The road you have no control over, but you do have some control over how much friction the ball bearings see, because the faster you go the more friction they are subjected to. So from this you can say that there's a maximum efficient speed for the bearings themselves, which is directly proportional to your MES.
People tend to think that this is due largely to the quality of the bearings: a lot of inline skate bearings are rated on the ABEC scale, and the higher the rating, the more expensive the bearings. Apparently bearing quality has little to do with speed -- it is more a factor of the choice of bearing lubricant and the fit of the skate components. Of course, a more expensive wheel may have a better lubricant as well, to complement a higher ABEC number.
So the trick to pacing yourself is this: try to max out the efficiency of the bearings as much as possible. For me this is much less than the speed that I could be skating at but if I did go faster I'd just be wasting energy fighting the increased friction of the bearings. This increased friction only decelerates me quicker and tires me out. There's a certain point where if you want to go faster you'll have to buy a higher quality bearing or skate (or just get stronger and have more stamina to constantly fight the losing battle against the increased friction).
Speedskaters sometimes use skates with more wheels. Do more wheels increase speed? More wheels could improve the skater's balance, but they could also get in the way of a stride. Speedskaters tend to lean over more, with a lower centre of gravity, so they already have a balance advantage over most people. Speedskaters (on road or ice) also seem to use a much different and longer stride than, say, hockey players or recreational skaters.
From a physics point of view, more wheels means more friction, right? Well, not really. The friction on each wheel is partly a function of the load on that wheel (the downward force of gravity). If you have more wheels, the person's weight is more distributed and the downward force on each wheel is less. So more wheels actually decrease the friction experienced on each wheel. This is especially true for the road friction, but maybe also marginally true for the bearing friction, where gravity has less of an impact.
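The per-wheel load argument is easy to check with back-of-the-envelope numbers. The figures below are idealized assumptions (a 70 kg skater, g = 9.8 m/s^2, weight split evenly across all wheels), not measurements:

```java
// Back-of-the-envelope sketch of the per-wheel load argument above.
// Idealized assumptions: 70 kg skater, g = 9.8 m/s^2, weight split evenly.
public class WheelLoadSketch {

    // Normal force (in newtons) pressing down on each wheel.
    public static double loadPerWheel(double massKg, int totalWheels) {
        double g = 9.8; // gravitational acceleration, m/s^2
        return massKg * g / totalWheels;
    }

    public static void main(String[] args) {
        // 8 wheels (two 4-wheel skates) vs 10 wheels (two 5-wheel skates).
        System.out.println("8 wheels:  " + loadPerWheel(70, 8) + " N per wheel");
        System.out.println("10 wheels: " + loadPerWheel(70, 10) + " N per wheel");
        // More wheels -> less load on each wheel, so less friction per wheel.
    }
}
```

Going from 8 wheels to 10 drops the per-wheel load from roughly 86 N to roughly 69 N, which is the "more distributed" effect in numbers.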
Another way for skaters to be more efficient is to concentrate on skating technique. Like I said above, a lot of exerted energy is "wasted" on lateral motion. That energy never goes into the fight against friction, but if it did, you would surely go faster with less fatigue. So an improved technique that translated a larger percentage of the exerted energy into forward force would also help.
Personally, I try to use a longer stride and keep the lead skate (the skate not pushing) as straight ahead as possible to lessen side-to-side motion. As your balance improves with practice you can take longer strides (balancing on one skate, essentially) and have less lateral motion. A good way to practise this is to purposely balance on one skate for a few metres, especially while leaning forward as in a normal stride.
Incidentally, this whole discussion could easily be applied to ice skating too. The only difference is that ice skating has just the one force of friction, between the skate blade and the ice, and that opposing force is much less than road or bearing friction. Of course, that all depends on the quality of the ice. :)
So that's my rather long interpretation. If you made it this far, any thoughts?
# Sports and Their Personalities
Interviews in sports are just one of those things that come with the territory. Even mentioning that everything athletes say in interviews is a cliche is itself a cliche (and it's an old joke -- see Bull Durham). People just go with it. Only when something truly extraordinary happens do athletes give real opinions, and it's usually only the veterans who say something meaningful. Not that announcers and sports commentators say meaningful things either, but I digress.
So given that there's so much of this cliche flinging going on in sports, what's with all of the media interest? They seem to be waiting for the one disastrous quote that blows a "controversy" wide open. Athletes have the media at their disposal so often, it's a shame they don't have something more useful/informative/insightful to say. But what the hell would they say? Would sports fans listen? What? Where's the beer?
Yup, that's not entertainment ... sports fans want controversy and car wrecks. We want grown men insulting each other to get under each other's skin. Sports media are the transportation mechanism of sports psychology war. You think the WWE is bad? Modern sports preceded all of that by at least 50 years; "pro wrestling" is just a more extreme version of it.
Sports are becoming soap operas that are socially acceptable for men to like. It's fun to know the personalities of the people playing the games, but what about the game itself? Are the personalities of the athletes so entrenched that we cannot separate them from the sport? Is it the media's duty to report on the minutiae of superstar athletes? Hard to say, but at least I have the power to wade through the personalities to get to the news -- even though it's becoming an increasingly difficult job. Reading sports news is like sifting through a spam-ridden email inbox.
# What's Up With AudioMan?
AudioMan development has stalled for the time being, unfortunately. There are a few roadblocks that I hit:
AudioMan could use the Rich Client Platform (RCP) that the Eclipse team is working on. RCP is probably considered to be in beta form at this stage and while I've played around with it a bit, I'm not that familiar with plugins which puts me at a great disadvantage. It would be a great opportunity to learn about all of that but it would delay AudioMan further. This option isn't necessary at this time, but there could be a number of useful things in RCP that would be good for AudioMan.
The direction of the project is also in question. Am I trying to make my holy grail collection manager, that includes support for music stored offline on disc? Am I trying to replace Windows Explorer? Or iTunes? All of these questions should be answered better and a direction should be established. If you want a say in AudioMan, just let me know your opinion.
The current development process is very informal and it would take people a while to get into it. Even though the relative overhead of a formal process is much higher with just me working on the project, I think it's beneficial if I follow an established process. The main benefit here is that it will force me to think more about how to run a project, instead of just ad-hoc'ing it. I will be able to track how well the process works and adjust it.
# BulletBlogger Plugin for Mozilla
Scoble mentioned a tool like this a while back, but here's a specific implementation I need. If I knew more about Mozilla development and Perl (for MovableType) I could probably build it.
You'll notice I have a "BulletBlog" down the right side of the page. This is a list of links or stuff I think is interesting or I want to read later. The current way of adding to it is:
1. Go to the page I want to link to.
It's a lot of steps for a relatively simple thing to do. I'm pretty much just bookmarking this page on my BulletBlog. Here's what I'd like to be able to do with Mozilla:
1. Go to the page I want to link to.
The link for the page is automatically grabbed for you, so that's one less thing to do. As well you don't have to log right into the MovableType system to make the post.
I was thinking it could be done with some Mozilla development on the client side and some understanding of the Perl in MovableType on the server side, but I just remembered how MT does trackbacks (with the bookmarked popup window) and this type of thing may be doable in the same way .... hmmm ....
# White Box Testing Graphical User Interfaces (GUIs)
I've gotten quite a bit out of the book Testing Extreme Programming. The XP Series books are somewhat general in nature and they have a certain level of non-prescriptive freedom -- more like suggestions in the way that some aspects of quality are handled.
For example, the issue of customer testing is brought up in a less than explicit way (and I found that in the original XP book -- Extreme Programming Explained -- even more so). I think I found a page or two explaining exactly how to approach customer testing, and they were buried amongst other things. Given the importance of customer testing in the process this was pretty surprising.
So let's review a bit: XP advises automating testing as much as possible so that testing can be done very often. It also advises that the customer write acceptance tests to verify that stories are complete. Now, sometimes this isn't practical. What if your customer doesn't know how to program? Well, he can still write out the situations that the story should solve and with the help of an XP tester figure out all of the error possibilities and variants. Then someone else can write the customer tests on the customer's behalf.
So now you have a pile of variants to test. Great, more GUI testing right? Not really.
Most GUI-based software has a clear boundary between the GUI and the underlying business logic. As much code should be put in the business logic as possible so it can be unit tested. In XP, the top-most level just below the GUI code has integration unit tests. Since they are the top-most level that you can unit test, they are also automatic customer acceptance tests.
Traditional quality assurance (QA) is usually a bunch of people banging on the working program trying to break it. They could be running through use cases (and variants), they could be running build verification tests (usage scripts meant to test basic use cases) or they could just be banging on the software trying to break it. Usually these testers have no clue how the software is implemented, they just try to test it to death, find bugs and ultimately improve the quality of the product. This is called black-box testing because the tester can't see inside the software he's testing.
XP doesn't assume that the programmer writing the customer acceptance tests lacks access to the code -- why shouldn't he have access to all of it? So the test writer can do white-box (also called clear-box) customer acceptance testing. How does this help?
Let's say you have a method called by the GUI that has one correct use and six error cases. In all of the error cases, an error message is returned by the method and the GUI displays that error message in a modal alert dialog. With black box testing you would have to figure out how to trigger the proper error situation in the GUI to generate each of the six errors and verify that the proper error dialog appears for each one. Then you have to make sure that the non-error way of using the method works as well, so that's seven manual GUI tests.
But if you look at the code, you'll realize that the only thing the GUI is doing is calling the method and checking to see if an error occurred. If one did occur, it takes the error message and displays it in a dialog. So that's two cases to check in the GUI (the correct way and one error), not seven. Then you use automated customer acceptance tests to test all six error cases and the one good case for that lower level method. You verify that the error is returned for each use case variant in an automatic unit test instead of manually in the GUI. So now you have two manual tests and seven automatic tests -- and the automatic ones will be run multiple times a day to catch regressions.
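The split above can be sketched in code. This is a minimal illustration (in Python, with hypothetical names -- the post doesn't name a real method), showing a lower-level method with one success path and six error cases, and the automated tests that cover all seven below the GUI:

```python
# Hypothetical lower-level method called by the GUI. It returns None on
# success or an error message string; the GUI's only job is to show any
# returned message in a modal dialog -- so the GUI needs two manual checks.

def validate_rename(filename, new_name):
    """Return None on success, or an error message string."""
    if not new_name:
        return "Name cannot be empty"
    if len(new_name) > 255:
        return "Name is too long"
    if any(c in new_name for c in '\\/:*?"<>|'):
        return "Name contains illegal characters"
    if new_name.startswith(" ") or new_name.endswith(" "):
        return "Name cannot start or end with a space"
    if new_name == filename:
        return "Name is unchanged"
    if new_name.lower() in ("con", "nul", "prn"):
        return "Name is reserved by the operating system"
    return None  # success

# Automated acceptance tests: all six error cases plus the success case,
# verified below the GUI on every build instead of by hand.
def test_validate_rename():
    assert validate_rename("a.mp3", "") == "Name cannot be empty"
    assert validate_rename("a.mp3", "x" * 256) == "Name is too long"
    assert validate_rename("a.mp3", "a:b") == "Name contains illegal characters"
    assert validate_rename("a.mp3", " b") == "Name cannot start or end with a space"
    assert validate_rename("a.mp3", "a.mp3") == "Name is unchanged"
    assert validate_rename("a.mp3", "CON") == "Name is reserved by the operating system"
    assert validate_rename("a.mp3", "b.mp3") is None
```

These seven checks run automatically on every build; only the two GUI-level behaviours (dialog appears on error, nothing appears on success) are left for manual testing.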
This kind of savings can really add up. Not only do you get the added benefit of testing all of the variants automatically instead of manually but you also save a lot of time doing manual test runs as well. You still have to do some manual testing but because it's done in a white box fashion you only need to manually test what the GUI code is doing, not what all of the lower layers are doing -- that's already handled by the automatic unit tests, why do it again?
By the way, even though you are only testing one error case manually you might want to rotate through the six different error situations for some variety. That way you won't get a strange false positive on one error when one of the other five is really broken. Then the worst case is that a break goes unnoticed for a week or so, depending on how often you manually test. With the white-box example I gave that kind of oversight might seem pretty unlikely but real GUIs are often far more complicated.
As a member of the XP team, you can't spend all of your time setting up and running manual tests anyway, it's just not worth it. Given the code turnover of the XP process, your tests could get nuked at any moment -- that's why you want them to be automatic, they are much easier to create and less painful to delete, making the code and tests more easily refactorable. Some developers resist refactoring code they have perfected; you never want to do that in XP.
Your customer/users will pick up the majority of the remaining GUI regressions in casual use during the iteration anyway. As long as you fix these regressions quickly, you can get away with missing a few minor regressions with your less-than-perfect (but more agile) manual testing once in a while. But don't leave too much of this regression catching to your customer -- they could get annoyed and it distracts them from what they are trying to do: give you great feedback for the next iteration.
# Oscar Lima India Victor Echo Sierra
The Subway restaurant franchise has reinstated green olives on the menu in at least one location. The green olives were eradicated in a failed battle with Atkins dieters. Julienned carrots were not sacrificed as a result of the green olives' return. Awaiting confirmation of reinstatement at other locations. Over and out.
Update Thursday 4:55 PM: False alarm. :(
# The Missing Lynx
Mark Pilgrim quite eloquently states:
"I can read my Yahoo mail in Lynx. I can shop at Amazon in Lynx. If your web site doesn't work in Lynx, your web site is thoroughly, thoroughly fucked."
I couldn't agree more. I don't know if it's for exactly the same reasons though; I'm a browser purist where Mark seems to be an accessibility guy, but I think we're both after the same thing. I won't try to speak for him with my own little rant about web applications, below.
The bottom line for me is this: web browsers are meant to show pages of text, not host elaborate client-side applications. Sure, forms are built into browsers and they are OK but there's a fine line of interactivity there. Accessibility concerns remind us how far we are actually straying from the main browser idea. If you have to wonder if your web site is accessible, it's probably not designed right for everyone else either.
I'm not even going to start bleating about web standards because the standards are a steaming pile. What good are standards if the browser that over 80% of us use doesn't follow them? We can bitch all we want about IE's standards breakage but just because we prescribe standards doesn't mean they have to be followed.
So the best thing to do is follow the leaders in web applications: eBay, Amazon, Yahoo and Google and design for the least common denominator. Sure, your web application won't look all cool or do ultra neat things but it will get the job done in any browser you stick it in. That's something you can pat yourself on the back about.
I think it's for the best because it prevents smart people from putting too much effort into writing applications for a browser "platform" that never was. We need a real web platform, and if open source can't create one Longhorn will beat them to it. XUL is getting very warm indeed.
# Question for My RSS Feed Readers
I'm making longer posts these days as you've probably noticed. To make my RSS feed smaller I'm only including an excerpt of each post. MovableType does the excerpt by default but I changed it to show the whole post and I've now changed it back to excerpt again.
For people that read my RSS feed (all three of you): do you mind coming to my page to read the whole post? My comment feed will continue to have the full text of comments. I don't want to piss people off too much or they won't subscribe to my feed. :) What do you think?
Update Friday 4:26 PM The full body text in the RSS feed is back. Let me know if it doesn't work properly.
# Technically Elite Good Intention of the Month: Gmail Privacy Outcry
I find it a little funny that there is all of this hubbub about Gmail and privacy concerns. Never mind the fact that no one is holding a gun to your head to take a free email account anyway, though some would argue that's not the point. I think it is -- as long as Google says explicitly and not in fine print in a long contract "we'll be targeting ads based on the content of your email messages" and the user agrees to it, what's the problem?
Some of the technically elite feel the need to raise alarm bells to protect the technically ignorant. Frankly it's starting to sound like disinfectant commercials getting people all worried about germs our bodies are designed to handle naturally. But I digress...
Email isn't secure, there's the bombshell of the month. Neither is talking on the telephone or sending a postcard. All of these communications media can be and probably are being monitored by government agencies. Call me paranoid if you want to but that's exactly how Mohammed Momin Khawaja from Orleans, Ontario was implicated in the plot to set off a bomb in London, England. He was subsequently arrested by Canadian police, at the same time as nine of his accused co-conspirators.
When you send an email over the Internet it's routed by mail servers called mail transfer agents (MTAs) until it reaches the destination inbox. During that travel the email is reassembled at each MTA so the email header can be read and the email routed to the next MTA. The reassembled email is in plain text and readable to any person operating the MTA machine. In the case of Khawaja, the reassembled email resided on a server in the United States and could be read quite easily (though the article talks about a court order) by monitoring software, probably looking for suspicious keywords: bomb London Afghanistan.
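To see just how readable a message sitting on an MTA is, here's a quick sketch using Python's standard `email` module (the message is made up; the point is only that the stored format is plain text):

```python
# An email queued on an MTA is just plain text: headers, a blank line,
# then the body. Anyone with access to the machine can parse it.
from email.parser import Parser

raw_message = (
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Subject: lunch plans\n"
    "\n"
    "Meet you at noon.\n"
)

msg = Parser().parsestr(raw_message)
print(msg["From"])        # the header the MTA reads to route the message
print(msg["Subject"])     # equally readable to monitoring software
print(msg.get_payload())  # the body, in the clear
```

No decryption, no special tools: keyword-scanning software on any relay in the path can read every header and every line of the body.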
It wouldn't surprise me if most people didn't know email was so insecure. But when you think about it, who cares? Does anyone really want to read the letters I send to my mother? No, of course not. But if you had a government controlled MTA out there, you could monitor all of the emails routed by it and flag interesting ones based on keywords to catch terrorists. Since 9/11, you can bet there has been more pressure on the intelligence community (CIA, NSA) to monitor communications. That's where people should be worried about their privacy!
Do people care if Gmail's algorithms read their mail and target ads to them? Probably not, they are just happy to get a gigabyte of free email storage. Leave the poor folks alone and find a more worthy privacy crusade ... there are plenty of them out there.
# Channel 9
Here comes Channel 9, an open window on Microsoft. Will it work? Hmmm, well I think the best and most important part about it is that at least they are trying it out. If it flops, ok fine they tried it. Not much lost. But if it works? There will be lots of congratulatory pats on the back for not missing the next thing. When you have billions in the bank, you can spend a few hundred thousand on a hunch.
The worst part about modern software development and long software projects is the increasing disconnect between inception and production release/use. I'm talking about people using the software allll day every day, not just beta testing it -- that's where you notice extra mouse drags and clicks and usability problems and worse, missing features that your customer wants/needs.
It's difficult to get feedback into your products when that inception-release delay increases -- and the delay will increase as software becomes more complex and ambitious. Folding in this feedback reduces the risk of releasing the product (never mind that extreme programming addresses these problems in its own ways, but it's not mainstream so it's irrelevant here). But first you have to gather the feedback and manage it, channel it to people that can make the changes.
So you want to build a community around what you are doing and get your customers/users involved in a personal way. Don't send them a newsletter or a CD-ROM every month, have a conversation with them. Make them a real stakeholder and they'll give you good feedback because they know you'll use it in the next release, when it really matters to them. Treat them like a first class citizen on the project.
I could see this being just the start (and it may be a slow start, people should try to be patient -- the old system is entrenched) to a new customer feedback driven software revolution. It could also usher in more customer-centric feedback driven processes like extreme programming. Sounds good, bring it on.
# Ruining the Fun of Traditional Software Development
James (and I'm not sure which one) had an interesting comment:
Just a thought. You commented on having to unlearn a lot to grasp the utility of XP. So it may be picked up "easier" by a freshman developer. Can you think of any problems a XP developer would have transitioning to another technique or would they just pick up the aspects of development that would normally have to be unlearned?
I'm thinking the more you get used to a system the less likely you are to want to get out of it. It depends on your personality. I think that's why Kent Beck's first book is subtitled Embrace change. The "freshmen" developers are usually more open minded to new techniques (though they might not understand the drawbacks of the old way and realise the advantages of the new one). Of course the other side of the argument is that an old-hat might be so familiar with the bad sides of traditional development he may be thirsty for change. Generalizations are bad. :)
Sure, there are many problems that someone familiar with XP might have in a more traditional environment. Personally I feel uncomfortable NOT unit testing my code (and I think Andrew agrees with me), which is status quo on 'old school' projects. Developers pass the testing buck to QA and don't learn from their own coding mistakes. Where's the feedback? I would probably insist that I unit test, even though my manager might see it as a waste of my precious development time. Once you get used to test-driven development and unit testing and you really honestly drink the kool-aid, it's very hard to go back. You'll wonder how anyone could trust untested code.
Refactoring support in IDEs is another thing I can't live without now. How did people ever develop code when they had to hand-edit name refactorings/renamings in multiple files, keeping track of context, etc etc? It just seems like a complete waste of time: TYPING. It's pretty silly, developers should be making great software not learning how to touch type. These are things IDEs should be doing for us. It's the reason why I feel far more comfortable in Eclipse than Visual Studio .NET ... but I hear refactoring support is coming in new versions of VS.NET, so that's good.
Yes, a freshman developer that drinks the XP kool aid may never be able to go back to traditional development -- or may do so reluctantly and awkwardly. Learn XP at your own risk. I'm serious, it will make traditional development much less fun.
Two of the main shifts I personally had when I learned XP were thinking more about risk management (which managers do all the time, so they should be able to relate to it) and, ironically enough, long term thinking.
XP doesn't tell you to think long term, but you actually are. Enabling refactoring: quick changes, agility at any time in the future. High quality code for the next guy that has to edit it. Unit testing, customer acceptance testing as core ideas. These are things that traditional development projects shrug off. Get the code out the door! Worry about the bugs at the end in our slack time! Once it's out, we sign off and get the heck outta there! They're your bugs now! Heehee ho ho! XP brings bugs right up front and center and fails your tests. The project literally shuts down until you are running 100% pass again. That's an amazing commitment to quality.
The biggest barrier I see to XP is its advocacy of pair programming. People are really worried about collaborating with other people. What if your co-worker thinks you're a bumbling idiot and tells the manager? XP doesn't let people hide at all and hack away by themselves. If you're having a conversation with someone, chances are half the team can hear you talking and jump in at any time. Seeing this as a bad thing is strange -- I see it as a great learning opportunity and a communication tool.
We don't have to be cordoned off into cubicles and isolated from our co-workers. We should be collaborating and being creative. Showing off our coding and design skills! XP is a really great chance to be creative and not be limited by an architecture passed down from above. No code ownership gives you carte blanche and a lot of responsibility, but also the ability to be a team player. You can't be a team player isolated in a box off by yourself.
Anyway, that's enough ranting. If you're comfortable where you are, don't read about XP -- it will ruin it for you! As for James' question about change: people don't like it. Do you really want to have to learn a new programming language and be retrained every 5 years? No you don't. Unless you do it on your own time out of interest -- that's another story altogether. There will probably be XP zealots in 10 years that don't want to move onto the next thing. They're either just disinterested or lazy. Leave 'em in your dust.
# The XP Leaps
It's hard to believe in extreme programming (XP) because it requires several leaps of faith at once. When you see how these leaps complement each other and work together you'll realise the total benefit of the process. Until then the individual counter-intuitive leaps won't seem safe.
The book that bridged the leap for me was Kent Beck's Extreme Programming Explained. Thanks Andrew.
Here are a few of these leaps (with doubts) so you can recognize them:
- Using developer time to write tests: test driven development
- Very few artifacts
- Enabling refactoring
- Pair programming
- You Aren't Gonna Need It (YAGNI)
- All of the unit tests and customer acceptance tests run 100% all the time
- No code ownership
- Regular integration and automatic test runs
- Customer on site
- Release early and often
There is no singular argument to these doubts. I'll admit that learning XP is sometimes an exercise in futility ... you have to unlearn a lot and it takes a while to digest all of the principles and recognize their interconnections. There is no silver bullet. All it takes is an open mind and a desire to listen to your customers and bring them into the process.
# iBook Power Adapter Dead
My iBook's power adapter died today and I can't recharge/plug in my machine until I get a new one. The iBook has 2 hours of battery life to spare. Luckily the power adapter is covered by the extended warranty I bought in December otherwise it would be $129 to replace.
The most annoying thing about Apple warranty claims is that they don't trust their service providers to have replacement stock. When you go in for a claim the service rep checks out the machine to make sure it's actually broken and then they have to order the replacement part from Apple and wait for it to arrive before they can fix the machine.
So I have to wait 3-5 business days for the new power adapter. I couldn't even buy a new power adapter (I've been wanting two for a while now) because B.Mac didn't have any in stock. These aren't rare power adapters either -- they are used on iBooks and Powerbooks. Another store, The Mac Group has one in stock but for $149! $129 plus tax is crazy enough, thanks.
I understand the reseller warranty system might be in place to prevent fraud but what if this Mac was my only computer? Isn't there some customer service pain-in-the-ass factor and fraud risk break even point?
# Google Ownz Me
As you can see on the right, I've signed up for Google's AdSense program. I'll get some pocket change every time people click on the links. The money will go towards paying for the hosting and domain costs for this site, and I don't expect to make much more than that. For an idea of what AdSense is like, see Tim Bray's impressions after a month of using it.
Google's also been in the news lately about their new web email service called Gmail, which gives you one free gigabyte of space to store email. Right now, Hotmail gives me 2 megabytes for free. That makes Gmail 512 times larger (1024 MB versus 2 MB). Wow. The program is in testing right now but goes live pretty soon.
You'll also be able to search your mail with Google's search technology, which is a great feature the others don't have. How can Google afford to give away a gigabyte for free? They'll use advertising targeted to you based on the content of the email you are looking at, which is pretty much what AdSense does to this page.
I'll probably snag one of these accounts. But the real bonus is the competition that Hotmail and Yahoo will feel. I expect you'll get more free space at other sites to compete with this.