Getting back to business, part 2: Theory

Note: You may have seen a phantom of this post before it was actually published, due to me accidentally hitting the “publish” button instead of the “save” one while I was drafting it.

So, as discussed in the previous post, I decided to teach myself a thing or two in order to improve the fate of TOSP and my other programming projects. Not so much about juggling with code (I believe I'm good enough at that), but about the other stuff that people typically learn along with coding at school: software architecture, efficient testing, project management, and so on. It appeared to me that if I deal well with small "one-month" projects but run into problems whenever I have to scale up in size, that might be where the problem lies. So, here's how it went in practice.

Shopping for books

Technical bookstores are a fantastic place. Their spatial layout lets you roughly target a subject and end up in front of shelves full of books that gravitate more or less precisely around it. Setting aside the shelves' internal categorization, which often merely highlights the limits of what vendors and customers know, one can then dive into the depth of the subjects on display, even discovering whole domains one didn't know about yet. Then comes a more detailed exploration of the books which seem interesting, to see which ones are most worth buying at this point.

Most of the time, I choose based on a combination of

  • Whether I can understand what the core subject is in 10 minutes (if not, that probably means the author didn’t have a clear idea either)
  • Whether it sounds like something worth learning (otherwise, I won’t have the patience to get through the book anyway)
  • Whether it is explained well (clear and well-structured text, simple schematics, table-of-contents titles that make sense)

Oh, just a quick rant for French readers regarding the latter point: does Dunod really expect to sell technical books by filling them with such bland page layouts, unstructured lumps of text, and impossibly loaded schematics? To date, I have found only a few books of theirs that I felt I could read in full without being forced to. I can't imagine using any single one of their products as anything but reference documentation on a subject I already know very well. Perhaps they should take a few lessons from Eyrolles on how learning material should be written and edited?

Anyway, suffice it to say that after taking a long look at a remarkably large book which claimed to explain what's so good about object-oriented programming and how it can be applied to any problem, I ended up with something about agile project management. Management skills sound like what I need most right now, and as for agile, well, I wouldn't want to work in a place where every single thing is planned down to the minute and dictated to me day after day, regardless of real-world considerations, so why would I want to inflict such annoyances upon myself?

Besides the three aforementioned core qualities of any self-teaching book, this one also had the rare quality of not focusing on a single fashionable methodology such as Scrum. Instead, it offered a general presentation of the various project management jobs (efficiently defining needs, scheduling work, following project evolution, and dealing with people), then covered a wide range of methods, stressing the importance of adapting to the unique characteristics of each project. That sounded smarter, and more future-proof, than picking One True Management Method and shoehorning it into every project, which in my view would just repeat the errors made in the past with waterfall management.

For those who feel interested (and can read French), I'm talking about "Gestion de projet agile" by V. Messager, published by Eyrolles, ISBN 978-2-212-13666-1. With around 275 sub-A4 pages, it's quite short and straight to the point: I went through it in about a week, without much hurry, while learning C# on the side.

What did I learn?

Gathering and prioritizing needs

It appears to me that in TOSP (and in other projects too), I don't spend enough time defining what I am doing and why I am doing it. That's a problem, because it means that I'm writing code rather blindly, with few design clues. If I were a professional developer, that would probably lead me to create impossibly deep class hierarchies for the simplest things, in the hope of remaining able to independently reuse and tweak every single part of the code. But at my amateur level, it is also a source of tremendous inefficiency, making me write code that I don't need now and perhaps never will, only to rewrite it later when I actually need it and discover that it doesn't do the job as required.

So, at this point, it appears to me that some things need to be clarified in my design docs. The problem is knowing which ones, and how. If I don't know that, I can't really move much beyond complaining. So let's clarify.

First, in general, I think I should clarify TOSP's vision somewhere, by concisely and precisely answering the following questions:

  • What’s the point of this project, why am I doing this?
  • How far towards that goal do I envision going, and under which constraints?
  • Who could become involved with this project at some point? (as users, developers, etc…)
  • How, roughly, do I plan to do this? (basic technical concepts and methodology, available time)

Once I'm done with this, I should be able to explain it in two minutes if I need to. A possible path towards that goal is to start with the problems that made me start this project, gradually improve the precision of that description (How much? Who? When?), then clarify the points above in a similar way, and conclude by trying to describe the finished OS as if it were available today, again in a similar way. The point of clarifying the vision in this way is that it can help significantly with the subsequent task of accepting, rejecting, and prioritizing feature ideas from me and others.

Another step in that direction would be to design features in terms of concrete expected usage patterns. This is also great because visualizing features in action ahead of time can help prevent some kinds of usability disasters. After all, an office suite feature which helps you with the tasks you seem to be failing at can sound helpful, until you picture it as a paperclip with eyes kicking your screen from the inside while spouting lumps of irrelevant text in yellow bubbles. There are two main approaches to formalizing this, use cases and user stories, which mainly differ in the amount of written documentation they involve. Use cases strive for extensive, unambiguous descriptions of everything that can happen regarding a given feature, while user stories instead aim for a very short and readable description that focuses on the main points, combined with writing functional tests for the feature from day one.

From this description, one could infer that use cases are highly painful to write but can be a safer path, while user stories are easier to write but just as easy to get wrong. To avoid that outcome, user stories can be checked against the six criteria of the INVEST acronym, according to which a story should be:

  • Independent from other user stories, so that stories can be implemented in any order. This facilitates planning.
  • Negotiable, in the sense that it focuses on needs and leaves implementation details for future discussion around iterative prototypes. Contrast, as an example, "I want to know how much money is left in my bank account" with "By clicking the 'amount' button located in the bottom right corner of the account management tab, I want a transient pop-up to inform me of how much money is left in my bank account".
  • Vertically integrated, in the sense that it does not worry about the underlying layers of the implemented application.
  • Estimable, in that the description is precise enough to roughly evaluate development costs.
  • Small, making implementation easy enough that fine-grained planning at a sub-month scale is possible. When that's not the case, the task should be sliced into smaller functional blocks.
  • Testable, in that it is accompanied by functional tests that can be used to decide whether the job is done as intended (a small sketch follows this list).
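To make that last point a bit more concrete, here is a minimal sketch of what a story plus its functional test could look like. The story, the `free_disk_space` helper, and the test are all invented for illustration; they come neither from the book nor from TOSP's actual backlog.

```python
import shutil

# Hypothetical user story (invented for illustration):
#   "As a user, I want to see how much disk space is left,
#    so that I can decide whether to delete old files."

def free_disk_space(path="/"):
    """Return the number of free bytes on the volume containing `path`."""
    return shutil.disk_usage(path).free

# The "Testable" criterion asks for a functional test written alongside the story:
def test_free_disk_space_is_reported():
    free = free_disk_space("/")
    assert isinstance(free, int)   # a plausible byte count...
    assert free >= 0               # ...that is never negative

if __name__ == "__main__":
    test_free_disk_space_is_reported()
    print("functional test passed")
```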

When multiple stories are simultaneously considered for implementation, which will happen pretty much all the time, another important aspect is that of task storage and prioritization. For storage, all that is needed is a hierarchical list, which can be trivially produced with an Excel file or even a mere hand-formatted text file. Prioritization, however, is trickier, since one needs to know three things to do it: how valuable a story is, how costly it is to implement, and how risky implementing it is. Risk matters because if the risky stuff is sorted out first, the visibility of the project's advancement gets better as time passes instead of worse, as usually happens. And that's quite valuable.

Value can be expressed by considering both how desirable having a given feature is and how annoying it is not to have it. These can be, as an example, quantified on an arbitrary relative scale ranging from 1 to 9. Cost can then be expressed in terms of how much work it would take to do something, either relative to other tasks or in terms of ideal work hours. From these, priorities can either be set qualitatively (such as using the "MoSCoW" scale of stuff which Must, Should, Could, and Won't be implemented), or be computed numerically, for example using Karl Wiegers' relative weighting method (task priority = relative value / relative cost).
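To make the arithmetic concrete, here is a minimal sketch of that relative weighting idea, in Python since that is easy to run. The stories, the scores, and the choice of combining desirability and penalty by simple addition are all invented for illustration, not taken from the book or from TOSP's backlog.

```python
# Invented backlog entries: (story, desirability 1-9, penalty if absent 1-9, relative cost)
backlog = [
    ("Boot to a minimal shell",     9, 9, 8),
    ("Memory allocator debug view", 4, 2, 3),
    ("Fancy boot splash screen",    3, 1, 2),
]

# Relative weighting: task priority = relative value / relative cost
for story, desirability, penalty, cost in backlog:
    value = desirability + penalty   # one simple way to combine the two aspects of value
    priority = value / cost
    print(f"{story:30s} value={value:2d} cost={cost} priority={priority:.2f}")

# Sorting the backlog by descending priority then suggests an implementation order.
```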

Sketching and following plans

Keeping a prioritized backlog of tasks around is only the tip of the planning iceberg, however. One also has to take into account that it is unrealistic to predict all the features of a complex product, like an OS, from day one. For one, needs will emerge and disappear as the project moves on, making such extensive initial planning obsolete. Besides, I can't guarantee that I'll be available to work on TOSP for more than a decade or so, and I need to ship something at some point if I don't want to be left with the sour taste of unfinished business in my mouth. Yet humans can't accurately predict the future for more than a month, much less for years, so I cannot plan all of that accurately from day one.

For all these reasons, it seems worthwhile to organize TOSP’s evolution not only day after day and on the whole project’s scale, but on multiple time scales, with different planning practices at each scale. Here’s a proposal from the Scrum world which I’ve found interesting on this front:

  • The largest time scale to consider is the project itself. At this level, all I have to do is define the project's vision, a backlog of high-level functionality which I'd like to implement, and how much time I have to do that.
  • To get a feeling that things are moving in the right direction, it's interesting to slice this up into large-scale milestone releases, produced over 3 to 6 months, which roughly group high-level functionality into thematic blocks. At this scale, however, planning remains highly coarse-grained, so milestone plans should not be set in stone.
  • Below the milestone scale, high-level functionality is broken up into smaller functional chunks that can be implemented in less than a month and are thus suited to fine-grained planning. These smaller functionalities are spread across sub-monthly iterations, typically lasting 2 to 4 weeks.
  • At the beginning of each iteration, low-level functionality is associated with functional tests, then broken up into individual technical activities and associated unit tests. This allows readjusting the initial assessment of how long they will take, and gives a concrete roadmap for daily development.
  • On the daily scale, technical goals are set (“Today, I want to do this”), and progress is quickly evaluated using automated tools so as to notice problems early on.

For iteration planning, it is important to know how long implementing a given user story is going to take. In the last part, I discussed how cost can be evaluated in ideal work hours or as a relative cost in "story points", but how this translates into real work hours is harder to assess. Assuming the relative cost evaluation is accurate, however, it should translate directly into real work hours through a multiplicative constant, the "velocity". By comparing the work expected at the beginning of an iteration to the work that has actually been done, one can compute the velocity with a precision that gradually increases across iterations.
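As a minimal sketch of that computation, with invented numbers (the real figures would come from actual iterations):

```python
# What actually got done during the last iteration (invented numbers)
completed_points = 15    # story points actually finished
iteration_hours  = 30    # real work hours the iteration provided

# Velocity: the multiplicative constant linking story points to real work hours
velocity = completed_points / iteration_hours
print(f"velocity: {velocity:.2f} points per real work hour")

# It can then be used to estimate how many points fit into the next iteration
next_iteration_hours = 25
print(f"next iteration capacity: about {velocity * next_iteration_hours:.1f} points")
```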

Another thing to keep in mind when applying this methodology to TOSP in particular is that since I'm doing it alone and in my spare time, I have far fewer work hours per day than a professional development team. Consequently, I'll probably need to adjust the various timescales at work here upward before they make sense. However, considering that I can currently only set aside one hour in the morning for TOSP, scaling down to technical activity planning does seem relevant: if I have a precise idea of what I have to do, I can get to work more quickly, and thus make more efficient use of the time I have. For their part, time-consuming activities like planning are best left to weekends, when I have much more time at hand but don't always feel like coding for hours.

Monitoring and directing work

So, I have already enumerated ideas for defining what I want to do, and how and when it should be done. Another task which I've mentioned multiple times, without defining it precisely so far, is knowing how well I'm doing. On this front, the three main things which should probably be monitored are performance (how fast I am going), quality (how well I am doing), and risks (which potential problems show up and disappear, and how they are handled).

Performance is the easiest thing to monitor and understand, assuming planning steps like those discussed above have been taken. Since we know how much work is planned for a given iteration and how much total work time said iteration provides, all that is left is to draw a chart which features, for every iteration day, both the amount of work expected to be left and the amount actually left. Once that's done, a daily update on the remaining work coupled with a look at the chart is enough to know whether things are going as quickly as planned, faster, or slower.
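Here is a minimal sketch of such a chart in text form, with invented numbers; a spreadsheet would do just as well.

```python
# Invented iteration: 20 points of work planned over 10 days;
# `remaining` holds the work actually left at the end of each day so far.
planned_total  = 20
iteration_days = 10
remaining      = [20, 18, 17, 17, 14]

for day, actual in enumerate(remaining):
    expected = planned_total * (1 - day / iteration_days)   # ideal linear burn-down
    status = "on schedule" if actual <= expected else "behind schedule"
    print(f"day {day}: expected {expected:4.1f} left, actually {actual:2d} left ({status})")
```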

Quality is a combination of two factors, perceived quality (how well the software satisfies its users' needs) and intrinsic quality (how robust and future-proof the software is). It is significantly trickier to evaluate, but one can assess some aspects of it through functional, technical, and interface tests, which respectively check the ability to meet a story's goals, the correctness of the implementation, and the way the software communicates with its environment. Said tests can be implemented as unit tests within the codebase (checking that every function works as expected) or as validation tests (manually checking whether the feature meets expectations). Tests should be written early and updated quickly once a flaw is found.

Risks are hard to plan for in advance, so one shouldn't try to make paranoid plans which take every single one of them into account. However, when one is spotted in the distance during development, it should be noted and handled. A proper handling process involves evaluating the probability of the problem actually happening and its possible impact, then sketching an answer strategy and filing all of this in an easily accessible place. Possible strategies for answering a risk include avoiding it, handing it off to someone else, trying to reduce its odds or impact, accepting it and saving up extra resources to handle it, and preparing risk-specific actions which are triggered when certain conditions are met.
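As a minimal sketch of what one entry of such a risk log could look like (the risk itself, the 1-5 scales, and the exposure formula are my own illustration, not taken from the book):

```python
# One invented risk entry; probability and impact use an arbitrary 1-5 scale.
risk = {
    "description": "Toolchain update breaks the build",
    "probability": 3,            # 1 (unlikely) .. 5 (almost certain)
    "impact":      4,            # 1 (negligible) .. 5 (project-threatening)
    "strategy":    "reduce",     # avoid / hand off / reduce / accept / contingency
    "plan":        "Pin toolchain versions and keep the last known-good build around",
    "trigger":     "The build fails right after a toolchain upgrade",
}

# One common way to rank risks: exposure = probability * impact
exposure = risk["probability"] * risk["impact"]
print(f"{risk['description']}: exposure {exposure}, strategy: {risk['strategy']}")
```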

Monitoring is only useful if it's done frequently, preferably daily. Consequently, it must be a very efficient process, and whatever can be automated, either through code (progress monitoring, unit tests…) or through a written formal procedure (functional tests…), should be automated. Beyond this quick daily action, extra attention should be paid at the end of iterations and releases, to make sure that we can close the books on what happened and move on to the next step. For performance, this includes considering whether, overall, the amount of work was under- or over-estimated, and whether some specific tasks were very badly evaluated and why. For quality, this includes checking things in deeper detail and taking note of everything that hasn't been covered by tests. For risks, knowing how each one was managed helps with handling the next ones. The conclusions of these extensive checks should be put in a document suitable for archival and further reference.

For daily monitoring, I think I'll also try using at home the helpful research practice of logging things on paper as they happen. It provides material for this blog (or, in the case of research, for my supervisor), and it helps with wrapping things up at the end of workdays or iterations, without being as much of a disturbance as computer logging.

Working effectively

Now that I have a solid theory to base my work upon, it's time to introduce more practical considerations into it. The first one is that at this point, I am doing this alone, so I have to wear many different hats: customer, project manager, developer, tester… But as a human, I can only do one thing at a time. So, when should I wear each of these hats? It seems to me that the best answer is to focus on development and testing for everyday work, testing first and development second, and to save the manager and customer hats for the beginning and end of iterations.

Another question is that of a sustainable work schedule. Experience shows that for long projects, it's best if I try to always do things at the same time and for the same duration, on a daily or weekly basis. I also need to take a break away from work sometimes, and I tend to be more awake in the morning than in the evening. For all these reasons, I think it's best if I manage to set aside about one hour every morning for TOSP work, and I'm currently trying to adjust my sleep schedule to this end. When longer periods of time are available, such as during weekends, one thing which I'm awful at is breaks and fatigue management. I tend to start doing something, do it until I'm exhausted, then switch to something else, in the hope of going back to the first activity later. Perhaps I should try more efficient time management techniques?

If I focus on coding itself now, here’s a list of things which I think I should focus on:

  • Try to consider things in a calmer, more deliberate fashion, through more formal implementation planning.
  • Keep wondering why I'm coding stuff, and give myself some rewards when I do it well.
  • Design the overall program architecture (such as classes, structures, methods, and states) in advance, and use static variables instead of globals when possible.
  • Write simpler functions, and design unit tests for them beforehand, following the Test-Driven Development philosophy (a small sketch follows below).
  • Automate testing. Have tests run by default on debug-quality code, doing nothing when they pass and complaining violently when they fail.
  • Be extremely careful with things which can’t be easily tested, such as UI. Simplify their design and maximize their tweakability.
  • Distinguish errors, which the caller of a function can recover from, from exceptions, which crash stuff by default and are thus best left for catastrophic events.

…and I think that would already be a good start!
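To illustrate the test-first and error-versus-exception points above, here is a minimal sketch; the `parse_port` helper is invented for illustration and is not actual TOSP code.

```python
def parse_port(text):
    """Parse a TCP port number.

    Recoverable problems (bad user input) are reported as an error value,
    while programming mistakes (wrong argument type) raise an exception.
    """
    if not isinstance(text, str):
        raise TypeError("parse_port expects a string")    # caller bug: crash loudly
    if not text.isdigit() or not (0 < int(text) < 65536):
        return None                                       # recoverable input error
    return int(text)

# Tests written before (or alongside) the implementation, TDD-style:
def test_parse_port():
    assert parse_port("8080") == 8080
    assert parse_port("not a number") is None    # error: the caller can recover
    assert parse_port("70000") is None
    try:
        parse_port(8080)                         # exception: a genuine bug
    except TypeError:
        pass
    else:
        assert False, "expected a TypeError"

if __name__ == "__main__":
    test_parse_port()
    print("all tests passed")
```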

Conclusions

After going through various subjects in a bookstore, it appeared to me that what I needed to learn most right now was project management. So I took a look at an interesting book about agile development methods, which seem to fit my character best. From it, I learned a number of "good practices" which I could probably put to good use in TOSP (along with a lot of other things which I didn't mention here because they make no sense in the context of this project). Now, what I have to do is put this theory into practice, by deciding which parts of it I'll try out, when, and how, and then actually doing it.

See you next post for a more detailed discussion of those practical considerations!
