A LUCASFILM Reboot

A Star Wars reboot that would totally disturb the Force into a giant Black Hole of Ultimate Sadness? Star Wars Episode VII: Young Indiana Jones and the First Class… Indiana Jones as the first Jedi. Ew. Ick.

More seriously, what I would like to see is the true story of Yoda. This could mark the return of M. Night Shyamalan with the wickedest plot twist in the history of cinema. Over nine centuries, Yoda infiltrated the Jedi and modified their training, turning them from a mind-controlling police force into a religion that leaves them truly weak. At their weakest, he organizes the Great Jedi Purge, handing the Stormtroopers the location of every single group of Jedi so they are exterminated overnight (showing a few of his Sith allies who's the boss along the way). How can the top target of the Purge survive without a scratch? Isn't it odd that we almost never see Yoda use mind tricks? He does – he's just so good nobody notices.

Is Yoda evil, the greatest Sith that ever lived, or does he merely want to take the Thought Police away from Imperial power? Episode VII: Rise of the Whills would totally rock. Heck, it could even be filmed as found footage from Yoda's point of view. 😉
(Or Ep VII: Your childhood, break I. Although that title also applies to the episodes I-III :()

(assuming only the MOVIES as canon. And mind tricks can bend anything else into shape. Please go and re-watch the movies with evil Yoda in mind 🙂)

Coding by mistakes

All software design companies recognize how important it is to let their programmers make mistakes, as long as there is an established process. The branch-and-merge mechanism in version control software reflects this: the designer is expected to experiment, make mistakes and fix everything before merging the code back into the main line. While programmers are not encouraged to make mistakes, making them in as safe a way as possible very often grants a team the freedom to make happy mistakes, where they learn a new trick, a simpler interface emerges, or new approaches or functionality are invented.
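As a sketch of why branching makes mistakes cheap, here is a toy model of the experiment-then-merge discipline. This is my own illustration in Python, not a real version control system; all names are made up for the example.

```python
import copy

def try_on_branch(mainline, change, passes_review):
    """Toy model of branch-and-merge: apply a risky change to a
    private copy, and merge back only if it survives review/tests."""
    branch = copy.deepcopy(mainline)   # 'branch': a private sandbox
    change(branch)                     # experiment freely here
    if passes_review(branch):
        mainline.clear()
        mainline.update(branch)        # 'merge': only good work lands
        return True
    return False                       # discard; the mistake cost nothing

# Usage: a failed experiment never touches the mainline.
code = {"parser.py": "v1"}
ok = try_on_branch(code,
                   lambda b: b.update({"parser.py": "broken"}),
                   lambda b: b["parser.py"] != "broken")
# ok is False and code is unchanged
```

The point of the sketch is the asymmetry: a bad experiment costs only the branch, while a good one is kept, which is exactly the safety net that makes happy mistakes affordable.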

The team still has to work on a schedule, but the confidence to experiment with new things is the first step toward learning and improving. I believe it's the best way of turning a team from good to great, especially with an agile approach. I recently saw a funny video about 'The Institute of Mixing Random Things Just In Case the Result is Good' – it had a better name, but you get the idea. Random tests, mutations and errors are 99.999999% bad, but sometimes, very rarely, they are useful. Being aware of this is important, and it encourages the open mind necessary to realize that an unexpected result might have a positive impact.

Have you ever tried something out, or given someone a task, and they came back with something that isn't at all what you expected, but makes the rest of the project better? I could give you a few examples. An intern created a simple launcher (what I asked for) but stored more parameters than he needed to (a waste of resources at the time); it saved our bacon at a conference when the loaned machines needed a different configuration. A developer created a user-defined attribute system on top of the business objects (what I asked for), but the simple lightweight system meant that most of the previous storage code could be made irrelevant. Investigating performance problems in a tagging feature led to removing a very slow, convoluted and totally useless loop in unrelated database code. Have you ever experienced this rare, but happy, event?

This morning I was wondering if the capacity to 'revert' things could be applied to anything else. I know I often depend on 'undo' in my word processor to stop most of my logorrhea, or at least hold it back. I make a backup of drawings before I try to color or edit them. It lets me screw up with confidence. But there are still plenty of places where I feel like I don't know what I'm doing, or whether there's a better way of doing it, and I can't find out because I'm afraid to screw up. Take this WordPress platform, for instance. There are a ton of options and features I have never used and might never even try, just because reverting to a state I like might be impossible.

So where else could I use recoverability? What software packages don't have it but would be easier to learn if they did? Can I implement recoverability with simple, generalized backups in most cases? And do I actually learn to use my stuff better if there's no chance of messing up? Can I apply this to everything? Can I make a team better by giving them a safety net? If I let a dev team self-manage and edit their bug list instead of making it my exclusive domain, would they screw up, or would they learn quickly and get more efficient at it than I would alone?
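On the "simple, generalized backups" question, a minimal sketch of what I have in mind: a pair of helpers that snapshot a file before a risky edit and restore it on demand. The names and layout here are my own invention for illustration, not from any particular tool.

```python
import shutil
import time
from pathlib import Path

def backup(path, archive_dir="backups"):
    """Snapshot a file into a timestamped archive before a risky edit."""
    src = Path(path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.name}.{stamp}.bak"
    shutil.copy2(src, dest)  # copy2 preserves timestamps and metadata
    return dest

def restore(backup_path, target):
    """Revert: copy the snapshot back over the damaged original."""
    shutil.copy2(backup_path, target)
```

Wrap any scary operation between `backup()` and, if needed, `restore()`, and you can screw up with confidence on anything that lives in a file.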

Any truly Agile process needs to adapt; being able to experiment (and having the freedom to screw up from time to time) is a basic need. I think it’s important to adapt your tools, processes and thinking to learning by mistakes and being able to revert at any time.

Knowing enough to get the job and staying afloat: learning by porting

Do you have all the skills you need to change jobs? While you’re busy working with one technology or managing a project, new technologies pop up.  Even if you make an effort to stay in touch, chances are you don’t get trained for the next job; only for your current one. So even if you do have a ton of experience, you might end up starting from the bottom again.

Job offers often repeat the same patterns of required knowledge and experience. In Québec, job offers usually require you to have 2 years of experience in a specific programming language or technology for a full-time job. A position for an experienced software developer requires at least 4 years, in multiple technologies, and expert positions very often require 10 years of experience in a specific technology, even if it has not been widely used for that long.

I believe that sometimes you can replace depth of knowledge with breadth. For example, if you want to apply for a C# job without work experience in it but you have worked with Java and other OO languages, an employer could still pay attention to your resume. Can you learn enough on your own to qualify for a job requiring 2 or 4 years of experience? I'm leaving out 'expert' level; becoming an expert is a very different process from getting to a testable level of proficiency. To become an expert you need to spend more time 'hacking X' than 'using X'. It means spending time investigating a technology, not merely using it. So the actual number of years of 'usage' is not as relevant as being able to show you have investigated, reported on and tried to improve various parts of a technology. If you are an expert in something, you know that you are, and don't need my opinion.

I've spent some time hiring software developers; I know that if some skills are missing or incomplete, I will still consider a candidate who can demonstrate a good spread of skills, because he or she has the capacity to learn. How fast? What is the real difference between 2 years of experience and 4? The important thing is to be able to show competence. Can you fill in the blanks in your knowledge quickly? Can you think your way through a problem or interview question?

The difference between a craftsman and a theoretician is practice. So you need to gain experience in new technologies quickly. What is the most efficient way? You can pay for certifications and seminars, but if you're like me you're an autodidact who doesn't enjoy spending thousands of dollars and whole weekends when all you really want is to download a software package, Google for docs and learn at your own pace. What to do? Pick a problem that uses one of the technologies you want to add to your resume, and come up with a project to solve it. Design and implement it with the new technology. When that is done, solve the same problem with another technology. Repeat until you're confident you can show you won't need 3 months to get up to speed in a potential employer's environment.

For example, I've designed a method for tracking requirements. The project isn't the important thing; identifying which technologies I want to add to my resume is what matters. I had never used C# or .Net in a project, so once I'm done designing the architecture and database, I code it in C#. I can learn the basics of the language and the core libraries. Once I'm happy with the results, I can start over in J2EE. The second run in the second language is much faster since I don't have to redesign the software and database, and the design bugs have already been worked out. Then I can try again in Ruby, Python, and so forth. For a low cost in time I can learn new skills. It may be a bit boring, but it's hardly redundant or wasted time.

If this is obvious or common sense, it should be easy to test. I've actually coded the above project in C#, and I've learned a few things specific to .Net. Setting up and learning how to configure web.config, using code-behind and file layout correctly, and understanding very obscure and unhelpful error messages took the most time; figuring out the libraries, running queries and using controls was very easy with Visual Studio. I plan to reprogram the same thing using J2EE and Eclipse; I won't have to focus at all on the design, only on learning the differences in coding and libraries. I will note how long it takes to do the second version and the following ones. Who knows, I might change my thinking about how quickly one can learn a new technology and become proficient with it – perhaps much, much faster than 'common knowledge' suggests. Let's not forget that most studies on this date from pre-Internet days, when programmers might expect to use the same programming languages for a very long time.

I’ll post the results as soon as I have anything interesting to report.  If you have any questions or suggestions please add your comments, I am eager to hear from you!

Google robot postmen?

Google can track packages.  Google can put them on robot trucks and deliver them to your house without having a human driver.  The truck can come by the house again tirelessly until you are there to sign for the package.  The only thing Google doesn’t have yet is a picture of your face so it can be sure that you’re the one signing for the package.

Or does it?   It shouldn’t be too long before you see Gods (Google Delivery Services) Trucks outside your door.  What could be better than getting a message straight from God?

EJC11 – Stories and Digital Games – Narrative and Dramatic Spaces

At the beginning of the month I attended a talk on "Stories and Digital Games" at Les Conférences Jacques-Cartier. These conferences are interesting in that they are academic panels open to the public. The organizers and panelists took special pains to make sure the conversations were accessible to all, running simultaneously in French and English.

I wanted to get a feel for how well the worlds of storytelling and computer games are getting along, as well as to see if some product ideas I had in mind were worth pursuing. It was also a chance to see bloggers such as Professor Montfort from MIT. (He's exactly as I expected.)

I felt a bit of a stranger there, as one of the few white beards in a room full of young, eager students of game design. It seemed like attending the conference was a class assignment. But I found it fun to hear how seriously they take all aspects of game design. I don’t think I would have taken a sentence like “Could it be a method for analyzing the semiotics of inactivity?” seriously in any other context. (That’s not quoted verbatim, but close enough.)

The theme was 'new methods of Interactive Storytelling'. I only attended half the conference, but I feel the theme was not realized: the panels did not so much find new methods of storytelling as validate old ones.

There are always presentations of little or no interest, like using academic obfuscation (the word 'circulation') to try to redefine Assassin's Creed 3 as something other than a pure platformer (isn't being in a specific place at a specific time the very definition of a platformer?). More impressive still was the panel on copying the chapter headings of any RPG rulebook and passing them off as an academic framework for analyzing games, or getting paid to play through Mass Effect 2 five times at university-level salaries…

But most of the panels outlined how what you can’t do in the game defines the narrative. You can only tell a story if the game mechanics support it, and you play only as long as there is drama created by how you can resolve a situation. The fun, the narrative of a game emerges not from technical limits, but narrative limits; the crises and drama the creator of the game wants you to explore.  The artistry of a video game and its narrative relate exactly like a statue relates to its original marble block. You must remove potentialities to reveal the true shape of the creation beneath.

How to select a good narrative was outside the scope of the day. The only mention of it was the comment that when someone tells you "You shouldn't do that!" then you should probably try it. For example, how many games are about self-sacrifice? It's contrary to the typical 'survival' credo of most games, but in Lemmings you had to purposefully sacrifice some of your critters to win. And it was a success!

Several people asked during the day if game designers wish games had enough AI to be open-ended. They don’t. That would put writers out of a job.  Nobody really plays ‘god games’ for very long, because being omnipotent and omniscient removes all drama. But limits, either physical (Being in a wheelchair, not seeing in the dark, etc) or mental (Being in love, not knowing the enemy strategy, being a pacifist) are the underpinnings of drama and narrative.

Finally, and this is the most difficult part for the design team, the game mechanics must support the narrative limits. This means literally taking away control from the player.  For instance, the fun of Donkey Kong is in jumping over barrels to reach the top of a building. If you could fly in Donkey Kong, there would be no game. This chiseling away of ‘powers’ and capabilities is how you support the narrative.  You have to watch out for emergent behaviours and side effects.  For some people, this is the fun part. For others, it’s called debugging.

So are there new ways to inject narratives into video games?  It doesn’t seem so; but more and more the narrative is there because of limits set by the writers, not the software engineers.  And this means games have a hope of becoming serious works of the same calibre as classics of literature or cinema.

The human resources disconnect

Do IT recruiters really understand what candidates have to offer? It seems to me that open positions demand a surprising amount of qualifications and experience, while the people in charge of filling them don't know how to match the need. For example, it's not rare to meet a recruiter who knows only the acronyms of the IT field, without knowing what they mean or how they relate. If you're a J2EE expert, aren't you also a Java expert? Doesn't a Scrum or eXtreme expert necessarily know an Agile method?

How much practice does it take before you can say you know a programming language? With a B.Sc. in Computer Science and a working knowledge of a range of programming languages, do you really need 2 years in a specific technology to be useful? At that level, it isn't necessary to spend weeks learning a new technology before contributing to a project. For example, it is perfectly possible to debug without knowing every detail of the language used. What takes long to learn are the function libraries. A new employee must learn the architecture and features of the product, whether or not he knows the technology inside out. Studies on ramp-up time date from an era when software was far less complex; I'm not saying that time no longer matters, but rather that experience on a different project is no guarantee the new project will take less time.

Why then waste precious time hunting for a specialist with a nearly unfindable level of experience, when the average candidate can become effective in less time?

Is it because human resources people generally come from a background other than computer science? When I recently met a recruiting expert, she told me that a little IT experience can turn the average headhunter into a director of recruitment.

Is it because positions are filled to match the expertise of a departing employee? Yet if he's leaving, it's probably because he developed an expertise too advanced for the position in question. Moreover, the in-house experts have everything to gain from inflating their own value by making it harder to recruit new people.

I can mention the case of a consulting firm that can no longer fill its mandates because it looks only for J2EE experts. The recruiters explained to me that this expertise was necessary to show their clients that they hire only effective consultants. In the same breath, these recruiters told me their minimum salary is $100,000. All this without being able to explain why J2EE is required, or by what method they can reliably distinguish the expertise of a candidate with 2 years from one with 10, other than an interview with the technical director. Clearly this company isn't bending over backwards to offer continuous training to the experts it already has…

How can you not believe that companies following this model are shooting themselves in the foot in the long run? Why this push to turn IT people into consultants and contractors, focusing only on existing clients? It can only keep shrinking the pool of available candidates while driving up the salaries of 'experts'.

Smooth like Btrfs…

If you have any spare hard drives lying around and you're a data squirrel like me, you've looked at the Drobo or Windows Home Server (WHS) with envy. Drobo implements a storage technology called BeyondRAID, which lets you add and swap hard drives at will; the system reconfigures itself depending on the space available and the number of hard disks installed, implementing mirroring when it can without regard to the actual size of the disks. WHS can do the same, at the file level. While these are sweet high-tech toys, they are not free. Even after moving here and recycling or dumping the worst of my old hardware, I have plenty left just lying around. And I hoard data faster than I can hoard money. Is there a way to build a Drobo with old hardware and free software?

I've looked at many solutions such as unRAID and FreeNAS, but they need dedicated servers, and I wanted to run on my old Ubuntu server, since it also runs my VMs. I've also looked at Btrfs, assuming that, as a freshly minted file system, it would support a logical volume manager internally (or at least intelligently manage volumes of different sizes). But it doesn't; the current load balancer is very basic. It supports RAID0, RAID1 and RAID10, and for the moment at least (as of 2.6.35) it is limited by the smallest volume in the array (so you get the same capacity as plain RAID1 mirroring; I find RAID5 more appealing). In the process I discovered that VMware really doesn't like running on top of Btrfs – suspending my VMs would leave them in an unrecoverable state with no helpful error message or log entry.

In the end, I went with LVM on top of RAID5. Although it is not as easy to understand and configure as a Drobo or WHS, I can add and remove hard disks at will (provided I leave enough for the RAID5) and grow or shrink the file system even while it is live. And I can run VMware on that Ubuntu server without a hitch.

As far as I can tell, the only other free option would be ZFS, but that would mean switching to FreeBSD (and to VirtualBox for my VMs). I guess I could take another old box out of storage and muck around. However, as far as I can tell – correct me if I'm wrong! – once you define a ZFS array you cannot yet add devices, only replace disks with bigger ones, which is not as flexible as I want. So I'm not ready to invest too much time in the attempt.

Now, LVM+RAID5 isn't as simple to configure and use as a Drobo or WHS, but it works, has decent performance, and is totally free. You might have some trouble getting all the commands right the first time you define it, and you'll wish it had a working GUI, but once I got the configuration right, it worked like a charm. Even adding a device and growing the volume, while a multi-step process, proved easy enough. Best of all, since LVM takes charge of load balancing on the array, you can add and remove disks WHILE it is in use. Unfortunately, there is still one thing it doesn't do: manage different-sized disks without waste. Installed on top of a RAID5 array, each disk's usage is limited to the smallest partition in the array (to get the final capacity, just ignore one of your partitions when adding up). However, you can add as many partitions as you like to grow the array (taking care not to use two partitions on the same disk).
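That capacity arithmetic can be sketched in a few lines. This is my own illustrative helper (sizes in GB or any consistent unit): each member partition is clamped to the smallest one, and one member's worth of space goes to parity.

```python
def raid5_usable_capacity(partition_sizes):
    """Usable space of a RAID5 array built from mixed-size partitions:
    every member is limited to the smallest partition, and one
    member's worth is spent on parity -- i.e. 'just ignore one of
    your partitions when adding up'."""
    if len(partition_sizes) < 3:
        raise ValueError("RAID5 needs at least three members")
    smallest = min(partition_sizes)
    return smallest * (len(partition_sizes) - 1)

# Three partitions of 500, 500 and 1000 GB yield 1000 GB usable;
# the 1000 GB disk wastes 500 GB -- the trade-off described above.
```

Run it against your own pile of disks before building the array; it makes the waste from one oddly large disk obvious in advance.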

Thus I ended up with an LVM-on-RAID5 array, expandable almost to infinity and with some data safety. When Btrfs implements RAID5 (possibly in 2.6.39) and supports an online re-balance, it will be worth another look, as you would no longer need LVM. The file system would grow without a problem as long as you add one volume at a time.

All this exploration has left me feeling that there’s a need for a file system that is space-conscious and redundant (for select files).  Would it be possible to use WrapFS to do this?  Is anyone aware of a project or file system that already does this?

So for now it looks like I can’t make a DIY Drobo, but at least I got to experience the magic of LVM.

A Nerd and a Non-Nerd Are in a Boat…

A few weeks ago I attended, virtually, a job fair on new technologies, complete with webcam, chat and contacts with potential employers. I'm withholding the name of the fair not because the journalists who covered the event have anything to be ashamed of, but simply because their enthusiasm far outstrips the reality of the thing. The enthusiasm is such, in fact, that an entirely virtual fair is in preparation, Technofil2011. No doubt the idea is interesting, but is it actually useful?

For a day that welcomed more than a thousand candidates, a total of about thirty people showed up in the virtual room. 3% is an admirable result, except that only 4 people, myself included, actually exchanged chat messages. Are weeks of advertising and preparation worth it to communicate with only 0.3% of the job-seekers in your field?

And of that number, what is their motivation for not showing up in person? One can imagine that some are secretly using their employer's network connection to change jobs. Others have limited mobility. But how can an employer really be interested in a candidate who is too shy to leave home? Finding a contractor over the Internet is not much of a problem, especially for telework, but you have to meet someone face to face before considering them for your permanent team. If you want more information about a candidate, why not call them? If your problem is having too many candidates (a nice problem to have!), isn't a single online form they can fill out at will more efficient?

And will employers really use a virtual fair as a tool? At the fair, every IT person noticed the imbalance between the number of Engineering jobs and those in Information Technology: roughly 1 IT job for every 20 openings. The difference was explained to us by the fact that IT people are Nerds and therefore shy. I think the real explanation lies in the employers' motivation for being at the fair at all. Human resources would rather receive a CV by email to filter candidates than meet them in person. It's not unusual at a fair to encounter companies that refer you to their websites rather than hold an impromptu interview. That looks more like a public relations operation than recruitment.

The ones who really need to be seen are the SMEs, since everyone already knows CGI, Loto-Québec, the STM and the other big Montreal employers. SMEs have staffing needs too, and yet are practically absent from these fairs. Probably because having a presence there is an expense they can't afford; ask the development director to spend two days out of the office? Impossible!

In short, a virtual job fair could be interesting for IT SMEs, but the job sites don't offer virtual fairs. Why? A CV is equivalent to a chat request, at the employer's discretion. Is there any advantage to a live system? I don't see one.

It's amusing to see decisions being made based on the Nerd brand image, or at least on people's inaccurate perception of them. Maybe it means Nerds are now a demographic to be reckoned with, though still badly misunderstood. "They're shy, they don't like social interaction, quick! Something virtual!" Yet modern computing would not exist if hackers and nerds weren't fundamentally gregarious… among themselves.

Maybe they hope to create a new online job market, but I see no future for this idea. Do you have a vision I'm missing?

However, if anyone wants to offer me a full-time job chatting with nerds about their passions and new technologies, comfortably seated in front of my computer, leave me a message!

Too pretty to code

These days, if you are not a Cruise, Jolie, Depp, Jobs, Raymond or any other kind of Beautiful Person, then leading a major software development project is not for you. Or if you do succeed, it's because you have a charismatic associate: lately it seems that you really need to add 'well-groomed' to the long list of required talents of a project lead.

One of the most surprising aspects of software development, whether you're trying to help your team make technical advances or help people use better software, is how much charisma is necessary. Nerds and computer-oriented people are still often depicted as clueless and lacking in social graces. Yet what else but leadership is going to motivate your team to code the next great app, or get angels to invest in something that can't be seen or touched?

Let me prove it with the following completely fictional tale:

The startup is well-run. It is housed in a red brick building in the historical part of downtown, but clearly no money is wasted on trivialities: creaky wood floors, a receptionist surrounded by dead plants and sometimes, during winter storms, snow that makes its way through gaps in the windows and drops into the back of your shirt.

The team is top-notch too; experts in their field, tons of degrees (even PhD!), an efficient management team; the only people dragging their feet are interns feasting for hours in the cafeteria.

And the project itself is perfect; the software is useful, there is a demand for it, and it has really picked up speed since you signed on. There are just enough technical challenges to make it both deliverable as planned yet interesting enough to keep the team focused. And wonders of wonders, the project is actually funded!

It's a happy, sunny little coding shop. Until the launch. Until doomsday. Until Fate destroyed the Gods using the villainies of bad diction, adult acne and fat.

The plan is to announce the software and wow investors into funding the marketing effort. As the software itself is spectacularly boring to look at (mostly because that’s all the graphics skills that could be mustered in the absence of real designers) there isn’t even going to be a real demo.

Picture the scene, a full team of geeks and nerds dressed in their Sunday best welcoming journalists to their seats.  Big shot investors sitting in the front row, beaming from the expected ROI. And then… No drum rolls, just an uncomfortable quiet as a bald, rotund man wallows up to the microphone.

“Heh. Greetings, huh,  everyone… Today We… hmm.. Our company and hm… We… and I, I mean… I guess…”, the speaker fumbles, hesitates, pines for the miraculous oratory abilities he dreamed of last night. (It’s the Kung Fu Panda Theory – It will be okay as long as you think you’re special!)  Less than five seconds into the speech and a pall has fallen on your future.  “Who is that idiot?” is heard from the front row.  “The CEO, the guy you wanted to fund.”

And there it is: thousands of man-hours, creativity and talent wasted because the only person not required to have actual software skills couldn't be bothered to practice a speech. And you wouldn't, or couldn't, replace him. Couldn't afford to dress your software up in something sexy. We rarely have the option of choosing our boss, and startups don't usually have the gumption to hire image experts. Is it possible that now, on top of being able to herd cats, hit impossible deadlines and debug code written in a language you picked up two hours earlier, you also have to be too pretty to be a nerd for your startup to succeed?

It seems that companies have caught on to the power of star designers. Sexy nerds are still in.  But frankly if you’re in a technical management position, you practice just-in-time learning for most things, code at random hours of day and night and have a significant other so that at least one person knows you own a tie. You don’t have the time to hit the gym or check on sexy trends. And if you did, you’d be the CEO, not the code gnome.

Sexy: you're damned if you do, damned if you don't. The sad truth is that if you are a software nerd, chances are you really aren't that great at decoding people or making them hang on your every word.

If you can’t be sexy yourself (are there classes for nerds?), can you afford not to make your software sexy instead? If you can make it sexy, will management fund it? Can ANY software be made sexier?

What is sexy software anyway?

So many questions, so much knowledge missing from the Development curriculum! But if you really intend to design insanely great software, I believe somehow you have to figure out how to look insanely great yourself.

The Four-Sided Triangle

More than ten years ago, my boss explained the Software Management Triangle to me. As he taught project management to engineers, I expected timeless and precious wisdom. He wandered into my nobicle (it only had one wall, so not quite a cubicle) and revealed that our project was in trouble. But he had a cunning technique to get back on track.

"Time, Cost or Scope," he said in a decisive tone. "Pick one." The project was his baby, so I knew reducing the scope was out of the question. We also liked the loose deadline. Thus only the Cost changed; it spiraled out of control and the project was shut down. That was my introduction to the Triangle.

What my would-be mentor did not mention is that the price of cutting any of those three items is Quality, which we wanted to keep high. However, there is another, hidden side of the Triangle that can affect all the others. It is known by many names, but I like to call it simply "Location! Location! Location!".

I believe that even in this era of telecommuting and high-speed networks, being in the right place is still an important aspect of success.  There is still no better way to understand someone’s requirements than to meet them face to face. It is still difficult to get people interested in your projects or what you can do if they can’t see you.  And synergy and other partnerships just can’t get started without a good handshake or some other way to get a ‘gut feeling’ about the other party. Off-shoring and outsourcing are all about changing the location factor; this change can improve or reduce quality.

That's why I want to announce that I've made the jump – from a small city with lots of resources (especially top-level programmers and analysts) but no real funds or large enterprises, to Montreal, where there are so many openings that rifling through IT job search engines takes about 20 hours.

Now all I need is some time, money and scope to manage!