# Database Perf and Object Creation

Premature optimization is bad, sure. But no optimization is bad too, if what you end up with is unusable. I've been thinking a lot about the performance implications of using a database with AudioMan, especially with object creation and destruction.

You see, with the old repository, objects were kept in collections. "Indexes" -- which allowed me to get certain tracks based on artist or album -- were just lists of pointers to objects. The track objects already existed.

The database abstraction consists of methods that take parameters and return lists of objects. The implementation would build a SQL query, execute it, and turn the resulting recordset into a list of track objects. All of these objects would be new, and creating them would be a serious performance issue.

The worst case for performance is when the "Whole Collection" is visible in the track list. Collections can be 500, 1000 or even 5000 songs or more. Creating this many new objects in a reasonable amount of time just doesn't seem possible -- I don't even think I need to try it to prove that to myself. In fact, the performance of loading existing objects into the track list is already borderline at about 1000 songs.

So if I can't create these objects on the fly, I'd better keep them around in RAM, already created. However, the great thing about the database is that I can do all sorts of SQL queries on the tracks' values, which I can't do easily with collections. So I want to keep the database and SQL queries around too. But then I had an idea ...

I'll query the database with SQL, except instead of returning all of the track data the query will return only the tracks' unique key numbers. I'll also keep a map of all of the track objects in the collection. Then I can take the key numbers returned from the SQL query and get the objects I need from the map without having to recreate them.
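Here's roughly what I have in mind, sketched in Java 1.4-style code. The class, method and table names here are made up for illustration -- this isn't AudioMan's actual API:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TrackRepository {
    private final Map tracksByKey = new HashMap(); // Integer key -> Track object
    private final Connection connection;

    public TrackRepository(Connection connection) {
        this.connection = connection;
    }

    // The query returns only primary keys; the existing objects come
    // from the in-memory map instead of being recreated from the recordset.
    public List findTracksByArtist(String artist) throws SQLException {
        List tracks = new ArrayList();
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery(
                "SELECT track_id FROM tracks WHERE artist = '" + artist + "'");
        while (rs.next()) {
            tracks.add(tracksByKey.get(new Integer(rs.getInt(1))));
        }
        rs.close();
        stmt.close();
        return tracks;
    }
}
```

(A real implementation would use a PreparedStatement instead of string concatenation, but the shape of the idea is the same.)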

The only problem with that solution of course is that the objects in the map have to be kept in sync with the data in the database. There are two ways to do this:

  1. When you make a change to a track, write the change to the object in the map as well as the database, or
  2. When you make a change to a track, write the change to the database and then reread the object in the map from the database.

To me, the second option is preferable because there's less chance of a bug causing synchronization problems -- the object will always have the same data as the database.
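In the sketch above, the second option would look something like this; rereadTrack is a hypothetical helper that loads one row and copies its values into the mapped object:

```java
// Option 2: write the change to the database first, then refresh the
// in-memory object from the database so the two can't drift apart.
public void setTrackTitle(int trackId, String title) throws SQLException {
    Statement stmt = connection.createStatement();
    stmt.executeUpdate("UPDATE tracks SET title = '" + title
            + "' WHERE track_id = " + trackId);
    stmt.close();
    rereadTrack(trackId); // reload the row into the object in tracksByKey
}
```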

This is starting to look more and more like a persistence layer, isn't it? Are there any ideas I should borrow from known implementations/APIs of persistence layers? I don't want to reinvent the wheel here. What do you think?

posted at June 28, 2004 at 01:02 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (4)

# Unit Testing a Database Abstraction

In the AudioMan project there's a need for data storage: the metadata is cached so that it can be displayed quickly instead of read from files all of the time.

Right now this data storage is implemented using the Java collection framework. While it was easier than hooking up a database, it's limited because all of the "indexes" have to be set up manually to "query" the collections.

Expanding the functionality of AudioMan requires better access to this data: access to all of the "columns" in the "tables" without having to create specific "indexes" for them. This analogy to a database is the easiest way to describe how limited the Java collection-based repository actually is.

The database abstraction is an API that has methods for specific queries that return arrays of objects. This separates the database implementation from the rest of the code, reducing coupling. Sure, every time you want to make a new query you need to make a new method, but the abstraction is invaluable.

So I'm going to replace the collections repository with a real relational database. The good news is that the repository abstraction already has 171 regression tests against it. These tests will ensure that the new repository functions the same way as the old one, assuming the tests were comprehensive. That's a dangerous assumption to make of course, so an audit would be a good idea too.

So now we're at unit testing this repository. The best way is with a clean database for every unit test method, so that data from previous tests doesn't contaminate future ones. Even worse, some people set up tests like dominoes: if an early test fails, the later tests fail too because the database doesn't contain the right data. It's better to make the tests completely independent, even though it takes more time to set up each test and typically longer to run the whole suite.

But I won't have to clear the database completely between unit test methods; I can just blank the tables and the schemas can stick around. As well, since I'm not making a web application I don't have to worry about separate production and testing databases like Andrew has been -- they will be the same one, because the deployed application starts with a fresh database. Web apps sometimes don't have this luxury, their deployment being a sticky combination of production and development web servers and databases outside your jurisdiction.
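For what it's worth, here's a minimal sketch of how I picture that per-test cleanup in JUnit. The table names and the TestDatabase helper are made up for illustration:

```java
import java.sql.Connection;
import java.sql.Statement;
import junit.framework.TestCase;

public abstract class RepositoryTestCase extends TestCase {
    protected Connection connection;

    // Runs before every test method: blank the tables but keep the schema,
    // so each test starts with an empty, uncontaminated database.
    protected void setUp() throws Exception {
        connection = TestDatabase.open(); // hypothetical helper
        Statement stmt = connection.createStatement();
        stmt.executeUpdate("DELETE FROM tracks");
        stmt.executeUpdate("DELETE FROM playlists");
        stmt.close();
    }

    protected void tearDown() throws Exception {
        connection.close();
    }
}
```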

posted at June 25, 2004 at 05:13 AM EST
last updated December 5, 2005 at 05: 1 PM EST

»» permalink | comments (5)

# Positively Fifth Street

While I was on my way to Las Vegas, bored in the Newark, New Jersey airport, I went to the bookstore. In the business section of all places, I found a book by the name of Positively Fifth Street by James McManus.

The back cover has the following description of the book:

In the spring of 2000, Harper's Magazine sent James McManus to Las Vegas to cover the World Series of Poker, in particular the progress of women in the $23 million event, and the murder of Ted Binion, the tournament's prodigal host, purportedly done in by a stripper and her boyfriend. But when McManus arrives, the lure of the tables compels him to risk his entire Harper's advance in a long-shot attempt to play in the tournament himself. This is his deliciously suspenseful account of the tournament--the players, the hand-to-hand combat, his own unlikely progress in it--and the delightfully seedy carnival atmosphere that surrounds it.

...and that got me. It sucked me in. The $15 US price helped too. :) I'm not much of a reader, but being on the way to Vegas and all I decided this book would put me in the right frame of mind. Besides, Rounders is one of my favourite movies. ;)

What really surprised me about this book, because it wasn't mentioned in the description, is the amount of poker strategy it contains, especially for the World Series of Poker game: No Limit Texas Holdem. If you're a fan of poker or gambling, a large portion of the book will appeal to you directly.

The other parts are about Las Vegas, the murder and its trial, the World Series of Poker, famous poker players and autobiographical entries. James mixes all of these things together in a great way, without boring the reader or getting too self-indulgent. During the tournament I really found myself pulling for him.

If you're curious about gambling, Las Vegas or even strange murders, this book is definitely for you.

posted at June 25, 2004 at 03:12 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (1)

# NextGen GUI Choices

In my previous post I talked a bit about the direction DHTML is taking. OK, that's one way to make an application in the future. How about the other ways? Let's take a look:

Sun's Swing/AWT

Swing has been around for quite a while, but hasn't been adopted as much as Sun had hoped. It works by taking over window drawing at the OS level and hand-drawing all of its own widgets. UIs are created with API method calls.
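To illustrate, a bare-bones Swing window is nothing but Java API calls -- a throwaway example, not from any real app:

```java
import javax.swing.JButton;
import javax.swing.JFrame;

public class SwingHello {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Hello");                   // a top-level window
        frame.getContentPane().add(new JButton("Press me"));  // a Swing-drawn widget
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
```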

The obvious drawback of this is that Java/Swing apps didn't look like other Windows apps, and they performed poorly. A good chunk of Java's reputation as a poor performer on the client can be blamed on Swing.

Recent improvements in Swing allow apps to be skinned to look just like Windows apps, and Java's performance is getting a lot better on the client. Will people give Swing a second look or just move on?

Mozilla's XUL

XUL interfaces are created in an XML markup language and rendered with the Gecko rendering engine, the same one that renders HTML for Mozilla and its derivatives. XUL UIs use JavaScript to handle events and make calls to lower-level C++ methods through a COM-like interface called XPCOM. These lower-level methods do things like read files.
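To give a feel for it, a bare-bones XUL window looks something like this (a sketch from memory, not taken from any real app):

```xml
<?xml version="1.0"?>
<window title="Hello"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <!-- the widget is markup; the event handler is JavaScript -->
  <button label="Press me" oncommand="alert('pressed');"/>
</window>
```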

XUL interfaces are skinned as they are rendered. While there are skins that look like standard Windows widgets, some (e.g. Modern) are slightly modified and give apps a different look and often a different feel. The skins do look the same across the different platforms, however.

Since the Gecko engine has been ported to many platforms, XUL interfaces can be used on all of these platforms. The lower-level code, however, must be written carefully in portable C++ code in order to be compiled and executed on all of the different platforms. This makes XUL a less than ideal cross-platform GUI tool so far.

Java integration would save XUL from this problem though.

Eclipse's SWT

SWT is IBM/Eclipse's answer to Java/Swing's cross-platform performance problems. It uses a common API but native widgets to display the GUI. The performance, especially on Windows, is quite good.

The problem is that all of the SWT implementations are separate, with the Windows version containing the most features. All of the other platforms play catch-up. This will likely get better in the future when work on the Windows version slows down, however.

SWT is used via API method calls. The trend, though -- with things like DHTML, XAML and XUL -- seems to be toward XML markup languages for representing UIs. It's much easier to manipulate UIs this way, I think, and SWT is probably going to be persuaded to move in this direction too. A WYSIWYG editor alone will not be good enough.

Lately SWT is also moving in the skinning direction, straying from the "hey look, we're native!" bragging. This complicates SWT GUIs beyond the cross-platform common denominator approach (with custom widgets to fill in the gaps sometimes).

Microsoft's XAML/Avalon

Avalon is Microsoft's new .NET GUI API for all of the new hardware-accelerated widgets. As managed code, it's also an opportunity to introduce some new tools like XAML, an XML-to-object markup language.

XAML has many uses, but for UIs it can serve as a form-and-function separation between GUI and application code. This lets UI designers focus on just the GUI XML while developers fool with other things, like the event listeners and underlying method calls in code. At compile time the XAML markup and C# source code are combined into one class definition.

While XAML is often compared to XUL, it appears to me that it's nothing like it. The XAML XML is converted to classes/objects that make standard Avalon API calls to display widgets. The widgets aren't rendered by an engine like XUL/Gecko, though they are rendered in the sense that they are hardware accelerated -- likely via a DirectX desktop, in the same way that Apple's OS X uses OpenGL.

This makes Avalon decidedly not cross-platform, since a renderer cannot simply be ported like XUL/Gecko. In order to implement Avalon support on another platform, one would have to reimplement all of the Avalon method calls (think Wine), drawing widgets on a hardware accelerated desktop.

A hardware-accelerated desktop is something that Linux does not have in mainstream use at the moment, though one is in development. If the Mono project wants to do its own version of Avalon, it could be a lot of work. What I think it might do instead is make an Avalon-to-regular-widget translation, losing some of the eye candy and functionality but retaining the general idea. It depends how much real functionality is lost, I guess.

More Thoughts ...

Developers seem much less concerned about a common look and feel these days. Was this ushered in by web apps, which all look different but use simple widgets? Skinning takes this to the extreme, sometimes producing distinct-looking UIs with equally complicated custom widgets. This can't be good for usability, can it? While new types of widgets are interesting, they only increase the training effort needed to use the software.

XML markup languages are all the rage, beating limited and hard-to-use WYSIWYG tools. I would say that with a minor amount of training and/or DHTML experience, a good graphic designer could become familiar with these markups. Designers of these markup languages should keep in mind that graphic designers and UI experts are using them, and should make them look HTML-like while separating code from the GUI as much as possible.

XAML does this well; XUL -- which is scattered with JavaScript method calls on event handlers -- does not. UI designers need to be in the driver's seat with GUIs and need to be able to manipulate them quickly and freely. XML markup languages will enable this and free GUIs from API calls, putting GUIs back into the control of graphic designers and usability experts, where they belong.

Cross-platform will become more, not less, of an issue. If Microsoft insists on keeping Avalon Windows-only, it could be hanging itself ... especially since, as Joel said, Avalon won't be available in Longhorn until 2006 at the earliest, and won't be in mainstream use until years later.

That's a lot of time for cross-platform solutions to catch on and for people to get used to them. Never mind all of the (cross-platform) web apps that will be created in that time. XUL will lose out to cross-platform languages and runtimes/VMs -- like Java, and possibly C# with Mono -- because XUL apps are just too darned complicated at the lower levels. XUL needs to simplify there.

Eclipse's RCP is in a position to become a web browser for rich cross-platform applications with native widgets. Sun has no such platform for Swing, which could be its death knell as a GUI toolkit (developers could see it as a dead end). If Eclipse wants to get more web app developers using SWT it needs to make it easier to use and deploy -- an SWT XML markup language and (Gecko-like) renderer, or even an XML-to-code translator/precompiler, could be the answer.

Java is already well entrenched with J2EE developers, so it wouldn't be much of a switch from Java/J2EE/DHTML to Java/J2EE/SWT-XML. Eclipse could leverage all of this server-side Java experience to switch these developers off the limited DHTML train and onto fully-featured SWT native widgets and rich cross-platform applications.

posted at June 24, 2004 at 05:07 AM EST
last updated December 5, 2005 at 05: 1 PM EST

»» permalink | comments (2)

# DHTML is a Rubber Band?

While I was away in Las Vegas, Joel Spolsky did what he does best -- stir the pot -- by writing about How Microsoft lost the API War. At the end of it he describes the problem: Microsoft is losing to web apps in DHTML. Even though he laments the progression to web apps, Joel writes a list of ways that DHTML could be improved and made more useful.

I agree that DHTML isn't quite there yet, but ironically that's one of its strong suits. The fact that a developer has to use the lowest common denominator of DHTML (which itself is pretty basic) to remain cross-browser compatible keeps UIs simple. This is why ordinary people prefer web apps like Hotmail to Outlook. They are just simpler.

The disadvantage of DHTML is that developers don't realise this. They continue to hack on top of it, making browser-specific apps and JavaScript tricks. Is this OK for intranets? Maybe. The next update of IE could break your app, though, and those updates are usually out of your control as a developer. Then you get blamed when the hacks break, and you have to figure the problems out and fix them. Is this OK for Internet sites? Definitely not.

An example of stretching DHTML is Google's GMail, which doesn't work in Safari. While it's an impressive web app, it seems overcomplicated to me. There is a lot of eye candy and a lot of non-web-app features. Ordinary people familiar with web apps will have to take more time to get used to GMail's unfamiliar features because it's not simple DHTML any more.

GMail has crossed into the grey area between web apps and standard apps. Is this a good thing? I think it's a good gamble on their part. If an established webmail provider like Hotmail tried to make this change, people would bitch. With a brand new service comes a brand new way of doing things. People will look at GMail and say "gee, I didn't know this was possible with web apps". Good or bad, this is the direction it looks like we're heading in.

posted at June 24, 2004 at 04:47 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (4)

# Unexpected Exercise

Hey, I'm back from Las Vegas. It was a lot of fun -- and an eye-opening experience. I'll just say that I didn't expect Vegas to be like it was, but it was still sweet. I'm also going to say that I won't elaborate on that. :) Getting there and back, however, was a frickin' adventure. That's what I'm going to talk about.

We flew round trip on Continental Airlines. On the way there we went through Continental's eastern hub in Newark, New Jersey from Toronto to Las Vegas. On the way back we went through their south-western hub in Houston, Texas. You might have noticed that yes, we went further east to go west and further south to go north. The reason: it was much cheaper. I didn't think much of it at the time.

Everything was fine until we landed in Newark and got on the connecting flight to Vegas. It started to rain on the runway and the captain told us a storm system was coming from the west, so we would have to wait it out. The flight attendants responded by giving everyone free headsets (usually $5) and booze. This seemed to keep people mostly satisfied. After four and a half hours sitting in the plane on the taxiway, we finally left for the five-hour flight. In total, we were on planes for 13 hours that day.

The bonus of being delayed is that we landed in Vegas at night instead of at 7:30pm. Las Vegas is in the middle of the desert, and looks like a bright island in a sea of absolute blackness. I've never seen anything like it. The strip is right next to the airport and we could see it clearly as we landed.

On the way back our flight left on time from Las Vegas and landed on time in Houston. However, right after we landed a huge thunderstorm poured rain on the runway and the airport wouldn't let us get to our gate. Unfortunately we had scheduled only a 38 minute changeover in Houston, and by the time we got off the plane and talked to the Continental representative at the arriving gate, she said "your connecting flight is on time and has probably already left".

Why they wouldn't delay our connecting flight for us when all of the other flights at the airport couldn't take off either, I don't know. But our first instinct was to run to the connecting gate on the other side of the airport. If you've never run across an airport, I highly recommend it -- it was a pretty cool rush! All of those years of soccer and football really paid off (as did the fact that I was coincidentally wearing actual running shoes).

We dodged people, baggage and those motorized carts that taxi people from gate to gate, running as fast as we could. I passed my friend near the end (he was wearing sandals) and came up to gate C-31 in Houston yelling "stop 31!". The Continental rep stopped me and said "we've been waiting for you, we need five more" and that was that. We didn't need to run; they had actually delayed the flight for us. The Continental rep we talked to at the arrival gate had been using old (printed) information. Maybe she needed a tablet or PDA? :)

Once we were in our seats, winded and sweaty, the pilot told us we were waiting for baggage from connecting flights. We ended up leaving about 20-30 minutes late -- plenty of time to walk from gate to gate, but running was much more fun.

As we waited for our baggage at the carousel in Toronto, we heard our names over the PA system. They had left our luggage in Houston and it would be coming on a noon flight the next day. We filled out the forms and went home. At 10pm the next day our bags finally came; I left Toronto at 11pm and arrived in Ottawa at 3am this morning.

Even with all of this flight crap (to be fair, mostly due to bad weather), the trip was completely worth it. Go check out Las Vegas!

Lesson learned: 38 minutes is not enough time for a changeover, I don't care what Continental says.

posted at June 22, 2004 at 07:05 PM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (3)

# Hasta Las Vegas, Baby

Work on AudioMan and posting to this blog will be light this week so I can prepare for the trip to Las Vegas, Nevada. I'm leaving for Toronto on Wednesday, flying out Thursday and coming back Sunday night.

Don't expect to hear too much about this trip when I get back to Ottawa on Monday. What happens in Vegas, stays in Vegas .... and that includes spur of the moment drunken marriages to celebrities with an Elvis as my best man (Travis will be too busy, uh ... gambling). ;)

posted at June 15, 2004 at 02:14 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (2)

# Job Search Prep

After a well deserved break (well, my aunt thinks so ... she's awesome) I think I'm ready to prepare to start looking for full time work. By prepare I mean get all of my "portfolio" in order to make an impression on the people I'm applying to.

Once you apply, that's it. That's what people see: your file. And they don't see it often, maybe once or twice. I have a few things that I will link to on my resume, such as this blog and AudioMan. But the rest will be a bunch of static documents that will never change in Company X's files for a year. Then maybe I can re-apply. So I have to get my stuff together and make it count.

The first job is fixing up my resume. Most of the work experience there is well detailed, but I think I'm going to focus it more in a "things accomplished, goals achieved" direction. This is what employers care about.

The experience matrix that I borrowed from Mark Pilgrim (I think I did anyway, he has since removed his resume from his web site and I can't find it, even with Google) is the most controversial part of my resume. How useful is it?

For someone without a lot of work experience it's useful for demonstrating experience with tools, because plain lists just aren't effective. There's no way to tell how good someone is with C if they just list it along with 4 other languages, even if they are ordered by proficiency. The amount of time spent using a language is also a rough measure of skill, but it makes skill much easier to ballpark.

On the other hand, I'm leaning more towards software project management positions, or being mentored for those positions, where soft skills matter more than tools. It may be a little premature to plan for this career move at this point, though. I admit I could use at least 5 more years of straight development experience before I start managing even small projects on my own. As well, I'm still interested in development and hope to continue developing even while I'm managing projects in the future. I'm on the fence and shouldn't be, really.

The additional information section is a good place to show some personality but at the same time I should do it briefly. People are only going to look at it if they like the work experience and just want to see if I'm a normal person. :)

Travis made a good point that resumes have to be in the format that HR departments want, otherwise they are automatically and unceremoniously chucked. For most places this is Microsoft Word DOC format. Even PDFs are no good sometimes. I need my resume in DOC, PDF and the present web site format, with links to all three.

So that's the resume. Then I have to clean up my web sites a bit and make sure they are consistent. I have enough experience making web sites that even the code behind them should look good, so I have to get on that. Most of the time I'm diligent about it anyway just out of personal preference.

This blog has turned from being just a regular tech fascination blog into a more software development oriented blog, talking about experience while working on projects and interesting new tools and methodologies. I realise that most of the places I apply to won't read this blog but if they do they'll get a good feel about my opinions on software development. This blog is for the interested people: the ones that know about blogs and want to use them to evaluate candidates pre or post-interview.

I hope that when I get a job I can continue to post to this blog, though some companies are understandably secretive. I don't talk about work on my blog anyway; I just write about software development in general. But if the right job comes along and I cannot continue blogging, then so be it. I'm willing to give it up and only blog about flowers or cocktails.

AudioMan will always be a work in progress, but it should be kept organized. If I'm going to try to display my project management skills while I'm learning I should put that right into the web site and project itself. Public image is part of running a good project and I don't have a marketing department to help out here.

The fun part will be making a list of companies I'd like to work for. I've already done a lot of thinking about this over the years, but it mainly comes down to the question "am I willing to relocate"? Otherwise, I limit my choices severely.

Yes, I am willing to relocate, and depending on the city I'm looking forward to it. Ottawa is a great city and I would love to work here, but that may not be possible or the right career move at this point. If a great job is available in another great city I would move to take it. I'm young and in a position to take risks, so I should take them.

I'll try to write a bit about my experiences using job web sites, especially corporate HR web sites. They are usually a losing battle compared with a personal recommendation from someone who already works there, but let's see how things go ... I'm estimating I won't have a job for at least three months, and I've planned my break accordingly.

I'll also be reading more of Gretchen and Zoe's interesting jobs blog to get a better feel for how HR and recruiting departments work at large companies. Then I can better plan a strategy for getting noticed at those larger companies -- the ones with people way smarter than I am that I want to work with and most importantly learn from.

Be the sponge.

posted at June 15, 2004 at 01:17 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (7)

# Smalltalk Sessions Week #1

I finally got serious with Smalltalk and installed VisualWorks 7.2 from Cincom on my iBook. Then I went through the walkthrough included in the documentation.

The installation basically went smoothly, as long as you read the installation instructions. :) This is a bit of a change for Mac software, but it's forgivable given that it's development software. The installation is on par with Eclipse.

One thing that might be confusing to people just starting out with Smalltalk is that you need to pick an image right away -- or even what an image IS, for that matter. If you start VisualWorks without an image, it opens a file-open dialog (and doesn't specifically say what it wants).

I picked the image file visualnc.im and that seemed to work OK. Opening the image file directly works even better; it starts VisualWorks with that image. Reading the manual helps to clear this up a bit, but instructions for all of the operating systems are mixed together.

The walkthrough document is very well written, and explains a great deal of the environment and tools available by making a GUI application. I like how it explicitly shows the built-in "edit and continue" capabilities of Smalltalk -- you can modify a method while the application is running without requiring a recompile/restart. It's also neat how all of the methods are separated in the browser and you don't have all of the class' methods mixed together in one big long window.

I noticed that the SUnit unit testing framework and the test runner GUI come with VisualWorks. That'll come in handy when I start doing test-driven development.

My only gripe about the Mac OS X version of VisualWorks is the rendering of the OS X widgets, which definitely don't look native. They also work but aren't complete -- for example, disabled text boxes look the same as enabled ones. I wonder if there is a more native-looking GUI toolkit for Macs, or something on par with SWT for Smalltalk: native looks on all platforms with a cross-platform API.

Ambrai Smalltalk uses native widgets and they look much better. However the environment doesn't seem to be as complete as VisualWorks, and it's in beta at the moment.

posted at June 11, 2004 at 02:59 AM EST
last updated December 5, 2005 at 05: 1 PM EST

»» permalink | comments (1)

# Software Processes and AudioMan

I've been meaning to write about this, I just haven't had time lately. If you're a software process geek like myself, you've probably looked at open source software and wondered: how the hell does that work? How do they get a quality product from this? My interest in open source software is largely about answering this question, because I believe it will usher in new ways to quickly develop effective software.

If you've read The Cathedral and the Bazaar you'll know all about "given enough eyeballs, all bugs are shallow." This is a distinguishing feature of open source software: users at an early stage. It's a lot like having the customer onsite with Extreme Programming (XP), except with a popular and useful tool (like, say, Mozilla or the Linux kernel) you can get many, many users testing very early.

These users find bugs, usability problems and missing features early. Very early. So the risk of implementing something bad goes down, and if you do, your users politely (sometimes) backlash while the impact is still minimal. Implicit risk management is a distinct advantage of open source projects.

Closed source shops, on the other hand, develop software on their own. It's tested by beta testers and quality assurance personnel, but not at the volume of open source software. Neither is the testing as early as open source: once you've released a beta you've pretty much committed to a feature set, and you're in no place to be adding new features even if you get great feedback from your beta testers. You just want to fix bugs. What good is customer feedback this late?

That's where XP "adopted" open source to get feedback as early as possible. This minimizes the risk of going down the wrong path, saves time and makes the project more agile (changeable).

The difference between XP and open source is the focus on quality. XP tries to instill high quality all of the time, which leads to a high respect for quality throughout the project lifecycle (see Broken Window theory). With XP, you should be able to take the contents of the source code repository (CVS) and use it at any time with the assurance that it will work properly. Unit testing for regressions has a big hand in making this possible.

Open source projects on the other hand want to attract contributors. To do this, most open source projects sacrifice quality at the beginning of an iteration in order to lower the barrier of contribution. Developers don't have to worry about testing their code, they just hack on it.

To a developer this can be a very attractive prospect -- if you write a feature that is really cool, you can get it into an open source project with all of the fun and none of the maintenance or testing. Open source project managers often encourage this behaviour in order to accrue a greater number of new features, which in a snowball effect attracts more users and developers.

Then later on in the iteration the project stops accepting new features and stabilizes. This stabilization stage can take a while as bugs are discovered by users, debugged and fixed. Meanwhile a new development stream might be going on in parallel, introducing new features into the product while the stream before it stabilizes. This is what the Linux kernel does, except there is one development stream (2.5, soon to be 2.7) and at least 3 stable ones (2.6, 2.4 and 2.0).

In the software engineering world, we would call this the Build-It Fix-It (BIFI) methodology. It only works in open source because of the number of users that open source projects have, a few of whom are developers with the inclination to create new features or even look at the source. The rest couldn't care less about the source code. Without these users BIFI falls flat on its face -- a closed source shop would need far too many quality personnel to sustain a methodology like that, which probably explains the poor quality and high failure rate of closed source BIFI projects.

So in the closed source world we have to deal with a lack of users. We compensate by carefully designing the product up front and hiring QA staff to pound on the product with scenarios (sometimes organized, sometimes not). Once you design the product like that, though, you sacrifice agility for quality because everything is carefully planned. Deviating from the plan is possible but extremely painful, especially to planned schedules that managers are depending on in order to make business decisions. This in turn creates an atmosphere where no one wants to deviate from the plan -- and that creates agility problems.

Open source stays agile with little to no up front design because the users keep the quality in check in the long run. The developers are also more conscious about the quality of their code, since it can be read and critiqued by many people. Developers in these situations don't want to come off as stupid, so they put extra effort into it. They take pride in it because that is the context under which they are contributing in the first place: it's an ego thing.

So the features go in with no explicit quality barrier, but even so they have a higher level of quality because of the ego factor: developers want to look smart, so they write good code. This makes the stabilization period a little smoother than in closed source shops, where a common policy of code ownership means that the code gets little to no auditing, even informally.

Having said all of that, it brings me to AudioMan, my little pet project. First of all, AudioMan is not a typical open source project. The only reason AudioMan is licensed under the GPL is so that I can talk about the code on this blog and in tools (like jcoverage, which shows code) without having to worry about people stealing it outright. Also, since this is an academic exercise I thought it would be beneficial to license it under the viral GPL instead of the more "free" BSD-type licenses, so that closed source projects couldn't use the source code.

AudioMan for me then becomes a project to test software processes and tools. Once you start introducing processes or tools to an open source project, the barrier of entry increases dramatically. People don't want to contribute because they don't want to write unit or acceptance tests. They aren't concerned about software quality when they are just adding new features, like at the beginning of an iteration of an open source project. In that sense, AudioMan is not an open source project. I'm not trying to attract contributors by lowering the standards of contribution.

AudioMan is managed more like a closed-source project, in the XP-inspired style of "always green": unit and acceptance tests always passing. You can't do effective XP in an open source distributed project -- you need people in the same room to lower the communication overhead and facilitate quick decisions for agility -- so I borrowed only a few of the practices.

I'm not in a hurry to release AudioMan, so the process will often take precedence over the product. This is not always true in the real world, where the process is sometimes sacrificed to get the product out the door in the short term. This is a learning experience for me, so exploring the process and tools is more important than the end result. However, I have to keep the real-world realities in mind as I work.

At the same time, AudioMan will give me a good project management foundation to develop closed source projects in the future, as long as I'm aware of the differences in process style and why they exist. Developing and managing AudioMan will help me recognize these situations.

posted at June 09, 2004 at 12:43 PM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (0)

# UI Candy

Here are three new things I've implemented in AudioMan that will be in 0.5.0. I've already checked them into the CVS repository. Download the nightly build tomorrow if you're anxious and want to check them out.

1. Select Multiple Artists/Albums

You can select more than one artist and/or album and all of the tracks matching those artists/albums will appear in the track list. Hold down Control to select more than one item in the list.

2. Drag and Drop out of AudioMan

Grab a track from the track list and drag it to Windows Explorer to copy it, or MSN Messenger to send it. Works with any application that understands the drag and drop copy command.

3. Drag and Drop into AudioMan

Drag a file from Windows Explorer to the track list of AudioMan and the file is added to your AudioMan collection.
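For the curious, here's roughly how the receiving side can be wired up with SWT's drag-and-drop support, assuming the track list is an SWT Table. TrackListDropSupport and addFileToCollection are made-up names for illustration, not AudioMan's real code:

```java
import org.eclipse.swt.dnd.DND;
import org.eclipse.swt.dnd.DropTarget;
import org.eclipse.swt.dnd.DropTargetAdapter;
import org.eclipse.swt.dnd.DropTargetEvent;
import org.eclipse.swt.dnd.FileTransfer;
import org.eclipse.swt.dnd.Transfer;
import org.eclipse.swt.widgets.Table;

public class TrackListDropSupport {
    // Accept file drops (e.g. from Windows Explorer) onto the track list.
    public static void install(Table trackList) {
        DropTarget target = new DropTarget(trackList, DND.DROP_COPY);
        target.setTransfer(new Transfer[] { FileTransfer.getInstance() });
        target.addDropListener(new DropTargetAdapter() {
            public void drop(DropTargetEvent event) {
                String[] paths = (String[]) event.data; // the dropped file paths
                for (int i = 0; i < paths.length; i++) {
                    addFileToCollection(paths[i]); // hypothetical collection call
                }
            }
        });
    }

    private static void addFileToCollection(String path) {
        // ... read the file's metadata and add it to the collection ...
    }
}
```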

A question about this last one though: if the user has a playlist selected in AudioMan and a file is dragged over and dropped, should the track be added to the playlist too or just the collection?

On the one hand, it's simpler to add it just to the collection. The user won't have to worry about selecting "Whole Collection" before dragging files over if he doesn't want the new files added to a playlist. But on the other hand, the user might expect the track to be added to what he's currently looking at (a playlist).

I *could* also pop up a dialog asking "add these tracks to the currently selected playlist too?" That might be the best idea. Let me know what you think.

posted at June 06, 2004 at 09:01 PM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (5)

# A Toe in Smalltalk Lake

On Ambrai's web site I found a list of free Smalltalk books, and I've been reading The Art and Science of Smalltalk. It starts off with no assumption of OO programming experience, and is old enough (1995) that it doesn't even mention Java, just C and C++.

There are some interesting things about Smalltalk I've picked up from this book so far (please correct me if I go astray):

Everything is an Object

Everything in Smalltalk is an object. Unlike Java, there are no "primitive types" for performance.

Variables Don't Have Type

Variables, just as in Java, point to instances. Rather than saying that variables are "pointers", like in C or C++, most people say that a variable is a reference to an instance. As in Java, instances are passed to methods by reference.

In Java and Smalltalk the "type" is the class that the instance comes from. The difference between Java and Smalltalk is that Java enforces types -- it makes sure that the type of the variable is the same as the type of the instance, or that the variable's type is a superclass or interface of that instance.

Smalltalk does not enforce types and any variable may point to any type of instance.


Messages, Not Method Calls

Classes and instances still have methods, but you don't "call" them as in C, C++ or Java. Instead you send the instance a message. If the instance has a method that understands the message, the message is handled by that method. Otherwise a doesNotUnderstand error is raised -- at runtime.

This ties in with the lack of variable types. Let's say instance A sends a message X to instance B. A might assume that B is of a certain type and will understand the message, but if B doesn't understand X, the failure only shows up at runtime. In Java, trying to call a method that doesn't exist in an instance wouldn't even compile.

At first glance this seemed dangerous to me, but the book says it's part of the power of Smalltalk, so I'll wait to be convinced. I could see it being handy for refactoring: in Java it's a lot harder to refactor from object 1 to object 2 because you have to correct all of the method calls just to make it compile. That type-checking might save your ass, but it's also a pain because it's so restrictive. With good unit testing, this issue could go away.

Encapsulation is Enforced

There's no such thing as a public instance variable in Smalltalk. The only way to get to an instance or class variable is to expose it with a method. I can see this being a great thing, instead of allowing a free-for-all on public variables like in Java when a programmer doesn't understand the concept of encapsulation.

Blocks of Code

You can store a block of code in a variable, send the reference around and execute it later. That seems a lot nicer than function pointers in C/C++ or the "handler" design pattern often used in Java.
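For contrast, the Java "handler" version of storing code to run later needs an interface and an anonymous class -- a throwaway sketch:

```java
public class HandlerDemo {
    public static void main(String[] args) {
        // The closest Java gets to a Smalltalk block: wrap the code in an
        // object and pass the reference around.
        Runnable block = new Runnable() {
            public void run() {
                System.out.println("executed later");
            }
        };
        block.run(); // execute the stored code whenever you like
    }
}
```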

Small Language

The language itself is quite small; it's the extensive class library that contains most of the functionality. This makes the language itself easier to understand.

If you're curious about more differences, James Robertson recently pointed to a post from David Buck that goes through some advantages of Smalltalk over Java.

posted at June 03, 2004 at 04:54 PM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (4)

# What's Up, Doc?

I've been working on AudioMan a lot lately, getting it back up and running again. The nightly builds are pretty useful, though the Linux builds are still badly broken. Not a big deal.

The focus there was to allow a project manager to see the current status of the project easily, without having to do a build himself. The JUnit and jcoverage tools expose this quality information quite well. With this information the manager can make educated decisions about scheduling, resource allocation, etc.

AudioMan hasn't had a stable release in a while, and given that I've put a lot of new things in since 0.2.0 I decided to release 0.4.0 instead of another development release (0.3.2). It will be out this week, pending one last bug smash. Thanks James and Roy for testing.

After I release AudioMan I'm going to start programming in Smalltalk as a break from Java. I'm really curious what all the fuss is about and I want to get first hand experience. I think I'll try to make a Blackjack game, test-first naturally. I'll be documenting the process as I go along.

Update 11:13 AM: The latest integration build is going to be AudioMan 0.4.0 tomorrow if there are no problems with it. If you have some time, give it a test run. Thanks.

posted at June 02, 2004 at 07:48 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (7)

# Write Once, Don't Run on Linux

I'm having an interesting problem with the daily builds of AudioMan. For now AudioMan is only meant to run on Java for Windows, but it's built with Java for Windows or Linux on my build machine. This isn't a problem because both SDKs (javac) produce the same bytecode, right?

It's not a problem until you try to test it. On Windows and Mac OS X the AudioMan test suite passes 100%. The Windows JVM I use is from Sun and the Mac JVM is from Apple. The Linux JVM is also from Sun, and out of 899 tests, 9 of AudioMan's tests are failing there for a variety of reasons -- mostly OS-specific file handling. See the builds page for the JUnit test results and errors.

If you look at the code provided with Java, you'll see that many of the classes are shared between platforms. Only the classes that have OS-dependent idiosyncrasies are implemented separately for each platform, like the File class.
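To illustrate the kind of OS-dependent File behaviour that can sink tests (a contrived example, not necessarily one of AudioMan's nine failures):

```java
import java.io.File;

public class SeparatorDemo {
    public static void main(String[] args) {
        File f = new File("music\\track.mp3");
        // Windows prints "track.mp3"; Linux prints "music\track.mp3",
        // because on Linux backslash is an ordinary character, not a
        // path separator, so its File implementation sees no separator.
        System.out.println(f.getName());
    }
}
```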

My problem really puts a damper on the "write once, run anywhere" mantra, doesn't it? It's the same reason people dump on SWT: the implementations, though they use the same API, are developed independently. Given the size of the API, there are bound to be inconsistencies between those implementations -- and bugs, don't forget those. Even worse, different bugs between JVM implementations on different platforms.

Even with these problems, can you see it being done another way?

posted at June 01, 2004 at 07:37 AM EST
last updated December 5, 2005 at 02: 2 PM EST

»» permalink | comments (0)
