All posts by John Fjellstad

Password

I forgot my password.

I had this brilliant idea some time ago of protecting the documents on my laptop with encryption, in case it ever got stolen. And of course, encryption is useless without a good password. And now I've forgotten the password. And the backup doesn't cover the latest stuff…

At least I decided to cut down on the number of unique passwords I have to remember. If only I could figure out what the password was for this particular set…

JavaZone ’08

I went to the JavaZone conference this week, my first conference in Norway.

The first thing that struck me was how small everything was. Now, I didn't expect Comdex or JavaOne size, but still, I kind of expected something bigger, especially since it billed itself as “the largest developer conference in Scandinavia”. Even the smaller conferences in the US are probably twice, if not three times, as big as this one.

The second thing that struck me was how rubbish most of the speakers were. Not rubbish in the sense that the topics weren't interesting, but rubbish in the sense that the speakers weren't any good at presenting them. The best speakers were those who came from the US and England. That was somewhat weird, considering some of the speakers billed themselves as “professional speakers”. I had a talk with someone at work about it, and he mentioned that the speakers in Norway probably hadn't had formal training yet, and were probably amateurs. Still, if you want to make a living as a speaker, hire a speaking coach, or take a class. I would think it would be worth it.

In any case, here are some of my favorite speakers from the conference:

Jim Webber had a presentation entitled “Guerrilla SOA”. It basically put words to everything I have found weird about working with SOA over the last couple of months.

Mary Poppendieck had a presentation on Lean Software Development, which I found really interesting. Lean software development seems to be what all these agile programming people are really striving for. It got me thinking about what kind of decisions you need to make right now…

Michael Feathers' presentation on how to “see” good and bad code.

Robert C. Martin's talk on writing good functions.

Not only are all these people really good presenters, but the topics were pretty interesting too. They are highly recommended if you ever get a chance to listen to them speak.

Amending the checkin

After finishing a particular software feature, I like to check in my work. The problem is, I get into a dilemma. Do I check in now, or wait until I'm sure the feature works as advertised? If I check in now, I might discover that I forgot to add something to the checkin. If I don't check in now, I might start on another feature, and then it all gets messed up.

git has a wonderful feature that fixes this: git commit --amend. Basically, you commit your changes. Later on, you realize there were some other files that should have been committed alongside the first commit. So you just amend the previous commit with your new files. It's a really nice feature, and should make the history logs much nicer to look at.
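A quick sketch of the amend workflow in a throwaway repository (the file names and commit messages are made up):

```shell
# set up a scratch repository to play in
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"

# first commit: forgot to add one file
echo 'core code' > feature.c
git add feature.c
git commit -q -m 'Add feature'

# oops -- the header should have gone in too
echo 'declarations' > feature.h
git add feature.h
git commit -q --amend --no-edit

# the history still shows a single commit, now containing both files
git log --oneline
git ls-tree --name-only HEAD
```

Because amending rewrites the commit, this is only safe before you've pushed the commit anywhere.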

Nokia E-65

I received a Nokia E-65 for work. These days, at least in Norway, people don't get a desk phone. Rather, they get a mobile phone. It does make it easier if you have to move around according to which project you are on, since your phone number then always stays with you.

The initial impression is that it's a really good phone. It feels nice with the leather back, is really lightweight, and has a really clear screen.

Functionality-wise, I don't think there are any complaints. It syncs with my Notes calendar, which makes it much easier to remember all the meetings you have to go to. The email functionality has also been upgraded compared to the Nokia N-70. That is, you can custom-define the different ports you need to use for email. Wireless works great, and it's a great way to save money on my surfing habits. I installed Google Maps on it, and it seems to work pretty well.

The E-65 comes with mapping software, but you need to have a GPS receiver, which kinda defeats the purpose, I think. If you have a GPS, you don't really need the E-65 to show you the maps, do you?

The one thing I don't really like about the phone is the camera. No, not because it has a “low” quality camera, but because there is a camera on it in the first place. It seems to me that a business phone, like the E-series tries to be, shouldn't have a camera. There are places where you aren't allowed to bring a camera into the office. And a camera doesn't really fit into the functionality of a business phone. Not that the camera is that good, either. So it seems Nokia put a camera there just to get past the reviewers who would otherwise complain, but not a camera good enough to make the phone a problem in a business setting (no zoom, no flash, etc.).

Looking at the Nokia E-series offerings, they all have cameras. I would love to get a phone with the functionality of the E-65 without the camera. But other than that small thing, it's a fine phone.

updating website

After months of procrastination, I decided to update my website. One of the things that had been bothering me was that it was getting too hard to update. Not the look; with CSS, updating the look is somewhat easy. The problem was more that if I wanted to update, say, the menu system, I had to edit a ton of pages. Small changes like updating the copyright just weren't happening unless they happened to be on a page I was editing a lot.

At first, I considered installing a CMS, or at the very least letting WordPress manage the site (which it is quite capable of). But the software engineer in me doesn't really like a solution that basically creates static pages dynamically. Waste of resources. Most CMSs also force your pages into their look and feel, and though I could work at it until I got my own look 'n' feel back, it would be too much work for too little gain.

But what is a CMS, anyway (at least the web kind)? Well, it helps manage the files. You create templates, so that you can focus on the content and still get a consistent look 'n' feel.

Managing files is easy. Being a software engineer, I'm used to source control systems, so that part was pretty much taken care of. Creating templates isn't that hard either, if you are on a Unix-like system. The scripting support on those systems is superb, so it wouldn't really take much effort to write a template system to generate the static files.

Since I was going to make this change anyway, I decided to do it in Perl while I was at it. Mostly because I haven't worked much in Perl, and it would be an interesting challenge. It actually went really well. It took me a couple of hours, mostly because I needed to look up different functions in Perl, but in the end, I think I got the flexibility I wanted.
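I did it in Perl, but to give a feel for how little such a template system needs, here is a minimal shell sketch. The layout (templates/, content/, out/) and the file names are hypothetical, not my actual site:

```shell
cd "$(mktemp -d)"

# Hypothetical layout: a shared header and footer, plus one file per page.
mkdir -p templates content out
printf '<html><body><p>menu</p>\n' > templates/header.html
printf '<p>copyright</p></body></html>\n' > templates/footer.html
printf '<h1>Hello</h1>\n' > content/index.html

# Generating a page is just concatenation. Change the menu or the
# copyright once in templates/, regenerate, and every page picks it up.
for page in content/*.html; do
    cat templates/header.html "$page" templates/footer.html \
        > "out/$(basename "$page")"
done
```

A real template system would also substitute variables (page title, dates), but the core idea is the same: one source of truth for the shared markup.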

KDE4

I've been playing with KDE4 for a while now, since Kubuntu released 8.04. Although it's a little more polished and usable than when I tested the 4.0 release, it's still not quite there yet. That is, it doesn't give a compelling reason to switch from KDE 3. That said, KDE4 is beautiful.

One of the complaints against KDE was that it wasn't very beautiful. It was very functional, but beauty… To me, it was like most enterprise-level software: functional, does what you need it to do, but not a thing of beauty (if you want beauty, check out what Apple is doing with their stuff). KDE4 is beautiful. The artwork team in KDE has done a really good job. And with the addition of 3D-accelerated graphics, it actually feels smoother too.

I really look forward to what the KDE team has in store for v4.1 and beyond. It’s going to be an exciting year.

cross-platform development

One of the problems with doing cross-platform development is that you don't necessarily know what the environment will look like for the person compiling your software. So you need to be able to check this, and maybe notify the user about what additional software needs to be installed. You also want your build system to be flexible enough to work around platform issues and options.

I used to write the build system for the Unix development at my previous job. Since the target was limited to HP-UX and Solaris, it was somewhat easy to just create a script which figured out which platform we were on, and then copied the platform-specific Makefile to the right location. When I started working on my own private projects, I wanted to make it more robust. Initially I looked at the automake/autoconf system. I found it a little too complicated for my needs, although autoconf by itself might have been useful. I still find it somewhat troubling to have to learn another language (M4) to write the configuration files. To me, the configuration files that generate the Makefiles should be as simple as possible.

While I was investigating and trying out build systems, I heard that KDE had started using cmake. This made me at least a little curious about it.

The initial impression was that cmake makes it really easy to get started. I was up and running faster with cmake than I was with the autotools toolchain, and almost as fast as with writing my own Makefiles. Now, my projects aren't as big as the KDE project, but if it works for them, it should work for me.

One of the problems I found is the same problem I have found with most cross-platform tools: it has to target the least common denominator. Some of the features seem to indicate that cmake was first developed on Windows, or that its developers were primarily Windows developers. For instance, there wasn't a good way to clean up custom generated files using cmake; you couldn't give the clean target in Make a dependency on something else, so that that something else also got removed when clean was called. Reading the FAQ, this seems to have been fixed in 2.4.

Although I haven’t done anything complicated with cmake yet, so far I have been pretty happy with it. It isn’t much more complicated than handwritten Makefiles, and it should save me tons of work on porting.
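To give a flavor of that simplicity, here is roughly what a minimal CMakeLists.txt for a small C project might look like. The project name, source files, and the libcurl dependency are made-up examples, not from my actual projects:

```cmake
# Hypothetical project: two sources and one external library check.
cmake_minimum_required(VERSION 2.4)
project(myproject C)

# Fail early, with a readable message, if a dependency is missing --
# this is the "check the environment and notify the user" part.
find_library(CURL_LIB curl)
if(NOT CURL_LIB)
    message(FATAL_ERROR "libcurl not found -- please install it first")
endif()

add_executable(myapp main.c util.c)
target_link_libraries(myapp ${CURL_LIB})
```

From this one file, cmake generates Makefiles on Unix and project files on Windows, which is where the porting savings come from.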

Kubuntu 8.04 (Hardy Heron)

I've decided to reinstall my Linux system, and in the process upgrade to the next Kubuntu version, 8.04, also known as Hardy Heron. I usually don't like to reinstall my OS when I upgrade to a new version. One of the reasons I decided to move to a Debian-based distribution was that I didn't have to reinstall during upgrades. Unfortunately, when I initially installed Kubuntu 6.06 (Dapper), I made the / partition too small. Although it was big enough for normal usage, during upgrades, with multiple versions of the kernel, firmware and kernel modules, it got too tight. Hopefully, the current size of 1 GB will be enough.

In general, I hate reinstalling an OS. Mostly because I spend some time customizing it, writing scripts to help me in my day-to-day work. Most of the time, I forget to back up these scripts, so I have to recreate them once I notice they are missing. Just finding everything to make the desktop look and feel exactly as it did before the upgrade takes time. Of course, I'm also finding functionality that I'd never noticed, either because it had been hidden by my customizations, or because I just hadn't looked.

Although I've stopped installing beta software (unless forced to), I've been pleasantly surprised by the stability of Hardy. Stuff like hibernation and suspend, which I previously had problems with, works now. I never got the kernel in Gutsy to boot, so I had to be content with the kernel from Edgy, but 2.6.24 works perfectly now. And Kontact seems more stable than previous versions. The address book has gotten a much-needed speed boost (when using LDAP), although it still doesn't expose all the fields. This version seems to be a keeper.

Distributed Version Control Systems

I recently moved all my development over to a distributed version control system (DVCS) called git. I'm not going to write about DVCSs and the differences/advantages/disadvantages between them and centralized version control systems like Subversion. There are several good articles and blogs on that topic already [1][2][3]. This is more about my journey.

I started my (development) career using SCCS and RCS, slowly moving towards CVS. They worked, with some warts. Some of the weaknesses are well known, but for tracking individual files, they work really well, and I still use them. Once Subversion came on the scene and stabilized (with the introduction of the fsfs repository type), I was hooked. One of the things I really liked was the concept of a change set, that is, a set of changes that logically belong together. Like, say, a header file and the source file: change the header file, and the source file needs to be changed too.

I recently got in trouble with my svn repository when my working copy got into an invalid state as far as my repository was concerned. What happened was, when I moved back to Norway, I shipped my computers. I had made a backup, but had continued developing. And since the shipping had taken time, I had created a new repository from my backup. But since my working copy was at a different revision than my backup repository, svn wouldn't let me check in. Frustrated, I started looking at other VCSs.

What first got me looking at git was the Google talk by Linus Torvalds. I then read up on the current state of DVCSs before I started testing out git, and I was hooked.

What I find interesting is that git feels ‘natural’. The impact on my development process is minimal. As an example, to create a tag in svn:

svn copy svn+ssh://server/path/to/project/trunk svn+ssh://server/path/to/project/tag/{version}

The same command in git:

git tag {version}

Branching is similarly easy in git. But I think the main advantage is that I can now have multiple repositories for the same project.
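For instance, creating and merging a branch is a couple of one-liners. A sketch in a scratch repository (names and messages are made up):

```shell
# scratch repository to play in
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo base > file.txt
git add file.txt
git commit -q -m 'initial'

# a branch is one command, and switching to it is part of the same one
git checkout -q -b experiment
echo tweak >> file.txt
git commit -q -am 'try something'

# back on the original branch, the experiment can be merged in
# (or just deleted, if it didn't pan out)
git checkout -q -
git merge -q experiment
```

Compare that with svn, where a branch is another server-side copy plus a working-copy switch.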

My current work process is like this:
edit edit edit
checkin into local repository
edit edit edit
checkin into local repository
happy state
push to private server
edit edit …
once stable state
push to public repository

At any time during the editing phase, if I'm unhappy with the track I'm on, I can always throw out the changes without messing up my private or public repository.
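The first half of that workflow can be sketched like this, with a local bare repository standing in for the (hypothetical) private server:

```shell
# a bare repository standing in for the private server
server=$(mktemp -d)
git init -q --bare "$server"

# the local repository where the actual editing happens
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"
git remote add private "$server"

# edit, commit locally -- nothing has left the machine yet
echo 'work in progress' > notes.txt
git add notes.txt
git commit -q -m 'first pass'

# happy state: publish the local history to the private server
git push -q private HEAD:master
```

Pushing to the public repository would be the same push command against a second remote, done only once the work is stable.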

I really think you should at least take a look at DVCSs and see if one of them is something for you.

[1] Wincent Colaiuta’s development blog
[2] Dave Dribin’s Choosing a Distributed Version Control System
[3] Ben Collins-Sussman’s The Risks of Distributed Version Control

mail labeling

Following up on the previous post, I've been playing around with labeling my email. Mutt has really good labeling support, mostly because of its flexibility. With it, I can do pretty much anything I need with labels (add, delete, modify, limit views to a given label, etc.), so it has become my primary email client these days.

I used to use Kontact, and I still like some of its functionality (like setting up search folders), but not being able to edit and add labels makes it hard to use. It's also not that stable when you're using IMAP. I'm looking forward to trying it out again when KDE 4 turns stable. Hopefully, it will be more stable there.

Being the lazy bum that I am, I really don't want to manually add labels to email from people I know when the computer can do as good a job as I can. Enter maildrop. One thing I noticed was that I wanted the system to add multiple labels to a given email when multiple people were on the mail list. I therefore wrote the following maildrop rule (procmail users can probably write a similar rule for their system):

# Get address from the From, To, and Cc line
foreach /^(From|To|Cc): .*/
{
    foreach (getaddr $MATCH) =~ /.+/
    {
        ADDR=tolower($MATCH)
        # check if the address is in the label file
        # label file has a key/value pairing looking like this
        # example@example.com exlabel
        # where first part is the address and second part is the label
        TMPLABEL=`grep "^$ADDR " labels`
        if ( $RETURNCODE == 0 )
        {
            LABEL=`echo $TMPLABEL | cut -d' ' -f2`
            # if the message already has a label, keep it and append the new one
            if ( /^X-Label: (.*)/ )
            {
                xfilter "/usr/bin/reformail -I 'X-Label: $MATCH1, $LABEL'"
            }
            else
            {
                xfilter "/usr/bin/reformail -I 'X-Label: $LABEL'"
            }
        }
    }
}

So far, it’s been working really well.