Category Archives: Software

Still learning one language per year

Quick update about my “one language per year” lifelong initiative:

1992: QBasic
1993: Turbo Pascal
1994: C
1995: Delphi
1996: Java
1997: JavaScript
1998: VBScript
1999: Transact-SQL
2000: C# and Prolog
2001: C++
2002: PHP
2003: Objective-C
2004: Visual Basic.NET
2005: Ruby
2006: LINQ
2007: Erlang
2008: Python
2009: Go
2010: Lisp
2011: Haskell
2012: Lua
2013: C++11

This year it’s time to re-learn C++. The language has substantially changed, and I’m glad to revisit it.

Size matters

One of the facts I vividly remember from studying physics in university (this was in the mid-’90s in Geneva, Switzerland) was a certain disconnection between Relativity and Quantum Mechanics. The former is a theory used to describe phenomena at the macro level, such as galaxies, planets and stellar systems, while the latter describes interactions at the micro level: atoms, light, particles, energy.

When you apply some relativity equations to atoms, you get results that are not supported by experimentation; and the same happens when you apply some quantum theory equations to objects like planets. It is not that all relativistic facts fail to apply at the micro level, or that all quantum facts fail at the macro level; it is that there is no unifying theory that explains everything, and this quest is the holy grail of modern physics.

Fast-forward 5 years.

One of the facts I vividly remember from studying Economics in university (this time in Buenos Aires) was a certain disconnection between Microeconomics and Macroeconomics. The models that describe the behavior of the consumer (the heart of Microeconomics) yield wrong conclusions when applied to issues like unemployment, foreign trade or other matters that are usually better explained by Macroeconomics; and similarly, well, you get the picture.

As a matter of fact, my Macroeconomics teacher would say that whatever we learnt in Microeconomics class was wrong, and that he had the right answers; of course, the Micro teacher had told us the same thing the year before.

Fast-forward 5 years.

One of the facts that I vividly remember from studying Computer Science during my master’s degree program was a certain disconnection between small and big software projects. What works in small, simple applications and systems, including human and technical factors, does not usually work in bigger, more complex projects.

Working on a startup project with some friends in a garage to create the next social networking site, where coordination is easy, most of the required tools are available for free, and projects rarely have any dedicated quality assurance team, is not the same as working in, say, a bigger organization like Microsoft, together with 1500 other engineers and testers, all dedicated full time to writing and testing the next version of Windows.

The hardware requirements are not the same, either; in small projects you could use a couple of Mac Minis and a cloud hosting service and you are done; at Google they have MapReduce and a couple hundred thousand computers in a datacenter with air conditioning and security 24/7, and they still require more infrastructure every day.

However, and this is my main point, there is no proven recipe that can help a company grow from 10 to 10’000 people and from 10 to 10 million customers in a snap; there are some good techniques and principles, here and there, to make your software grow; but nobody actually knows of a generic recipe for every software company.

There are so many factors in macro problems that their interactions have to be taken into account; not only the factors themselves, but also their interdependencies. I guess you see where I am going with this. This problem is usually called scaling in computer circles, and I think that the word can be applied to economics and physics as well.

As humans, we have trouble scaling. Scale is important in our eyes, because we tend to think that bigger is better. Bigger means more money, in general, but not necessarily better; we have trouble going from small to big and vice versa. Not only in practice, but also in concepts. We cannot foresee the implications of scaling; at least, not completely, and not very far ahead.

This disconnection creates lots of problems in our society. Politicians forget the human being altogether, buried beneath tons of numbers and statistics. Voters do not understand that managing a country is not like managing your household economy. Schools do not teach how to solve scalability problems; heck, they do not even properly teach kids how to work in teams to solve small, micro problems.

Small companies do not understand that scaling is neither automatic nor a required process, and that not all companies should grow; some companies work better when small than when they grow up, and that’s why they sometimes fail. Venture capitalists that are not familiar with technology cannot understand this fact, and will sometimes sacrifice good working teams for just making more money or for getting into the stock market.

The knowledge we have about the problem of scaling is limited; I sometimes ask myself whether there is a solution to it at all, one that would justify the search for a unified theory in physics, a general theory in economics, or a generic scaling procedure for companies and software systems.

I do not have the answer; the fact is that size matters, and that this pattern has to do with the world we are living in; it does not matter whether you are a physicist, an economist or a programmer; this is how the world works.

MoMA and Software as an Art

What would be the place, in a museum like MoMA, of a collection of art dedicated to software?

If there is one thing MoMA can do, it is to boost your imagination. Anything is possible; the myriad of options for the expression of human creativity has no end, and the mind boggles.

My dream has been, for years, to explain software and its intricacies, to make this part of our world accessible to anyone. Software rules our world; it is one of the most complex creations of man, yet it remains understood (and only partly) by just a few.

There are many dimensions to software; the first to explore is size. When you tell anyone outside the field that Windows 2000 took a team of 1400 developers 5 years to complete, and that the whole thing is about 29 million lines of code, it is still not enough; however, if you printed the whole code of Windows and bound it into a series of books, how many books would that be?

On Kawara has created a piece called “One Million Years”, on display at MoMA; the whole thing is a series of books where the pages show, one after the other, as the name implies, one million years.

At 80 lines per page and 1000 pages per volume, the source code of Windows 2000 would take… 363 volumes. Given that the Encyclopædia Universalis or the Encyclopædia Britannica consist of 20 or 30 volumes each, we are talking about a single company pulling 12 encyclopædias out of the hat for a single version of a single product. I’m not talking about quality or other characteristics; just size, raw and simple.
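For the curious, the arithmetic behind that number, sketched in Python with the figures quoted above:

    # About 29 million lines of code, per the figure quoted above.
    lines = 29_000_000
    lines_per_page = 80
    pages_per_volume = 1_000

    pages = lines / lines_per_page        # 362,500 pages
    volumes = pages / pages_per_volume    # 362.5, i.e. about 363 volumes
    print(volumes, volumes / 30)          # roughly 12 Britannica-sized sets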

That’s the magnitude of software. Now we can begin to understand the magnitudes, the cost, the implications.

Other magnitudes worth exploring would be cost, number of people involved, number of errors… Infographics could explain in detail the interconnections and the different dimensions, their relations, their impact. But again, the whole thing remains so virtual, so out of reach, so different from anything else, that we just run out of analogies in no time.

What other dimensions could be used?

Learning one new language every year

Here’s an update of the current status of my “one language per year” lifelong initiative:

  1. 1992: QBasic
  2. 1993: Turbo Pascal
  3. 1994: C
  4. 1995: Delphi
  5. 1996: Java
  6. 1997: JavaScript
  7. 1998: VBScript
  8. 1999: Transact-SQL
  9. 2000: C# + Prolog
  10. 2001: C++
  11. 2002: PHP
  12. 2003: Objective-C
  13. 2004: Visual Basic.NET
  14. 2005: Ruby
  15. 2006: LINQ
  16. 2007: Erlang
  17. 2008: Python
  18. 2009: Go
  19. 2010: Lisp
  20. 2011: Haskell

The trend has roughly been an evolution from procedural languages during the ’90s, to object-oriented ones at the beginning of the 2000s, and finally to functional languages right now.

And thus I realize, I’ve been programming for 20 years this year, 15 of which for a living.

Reflexions on the Software Business

There are basically two things you can do to earn a living when you write code:

  1. Consulting
  2. Products

When doing consulting, you write code, and somebody else owns it; you are blamed for its bugs, rarely praised for its benefits, and usually you only sell one copy of your work. When working on products, you write code, and you actually own it; you can brag about it on your blog without pissing anyone off, and if you are lucky you sell as many copies of it as you want, all for basically the same production cost.

Now, here’s an insider tip: if your objective is living a nightmare, tearing yourself apart and swearing never to touch a keyboard again, choose option 1. If your objective is enjoying a healthy life, making money, and living long and prospering, choose option 2.

This fact is explained by economists as an “economy of scale”: variable costs are very low relative to fixed costs, which means that the cost of creating a new copy of your finished product is virtually zero. You only have to invest in building the product, not in replicating it. Actually this is not 100% true, because you should spend on marketing anyway, and you might as well add new features on the way, but the truth is that well-run software companies make more money than drug dealers, and guess what: software is an activity usually considered legal.
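A toy model makes the point visible; all the numbers below are hypothetical, chosen only to show the shape of the curve:

    # Toy product economics (all numbers hypothetical, in CHF).
    fixed_cost = 200_000   # building the product once
    variable_cost = 0.50   # producing one more copy
    price = 100.0          # per license

    def profit(copies_sold):
        return copies_sold * (price - variable_cost) - fixed_cost

    # The average cost per copy collapses as volume grows:
    # that collapse is the economy of scale.
    for n in (1_000, 10_000, 100_000):
        print(n, profit(n), fixed_cost / n + variable_cost)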

However, there is a tacit consensus in Switzerland, apparently, by which there can’t be successful companies doing software on this side of the world. And most companies choose option 1 above. Which has interesting side effects.

Consulting

Consulting, just like the airline industry, succeeds at one particular thing: it pisses off everyone involved in it. Let’s be frank; clients are seldom happy with the end result, while consultants have to deal with horrible working environments (read: open spaces). The only ones actually enjoying this market are the (usually non-technical) owners of consulting companies, who take pride in selling an “expert” to a company for around CHF 1000 per day (much more in the case of SAP), while they pay less than CHF 300 to the same consultant. The remaining 700 go to “operational costs”, of course, including the bonuses paid to managers of these companies on the backs of the workers.

Welcome to “Capitalism 101”. You have to afford that new Porsche somehow.

Not only are consultants screwed from day one, with the typical speech of “we are a human company, people are our first priority”; they also get fired first whenever the market shrinks. They have to beg for training and to be sent to conferences, while their managers go to corporate retreats in Davos or Zermatt. Heck, sometimes consultants even have to ask for a proper computer to do their jobs, or are cynically asked to use their own personal equipment.

Oh, and consultants have to fill timesheets, and get punished if they don’t do it. Timesheets are worth an article of their own, in the sense that they are only used as command-and-control tools, and not, as one would think, as the basis for future estimations of upcoming projects. Timesheets are just black holes of information, where you might as well log 8 hours in the “whatever” category and nobody would really care. And estimations are usually done by your non-technical boss, anyway, so screw those historical data.

(Sometimes consultants not only have to fill their employer’s timesheet, but also the customer’s. I remember that at one time I had to fill 3 different timesheets. I could easily spend 2 hours a week making sure everything was right and coherent. And no, there wasn’t any “timesheet filling” entry in the timesheet software. And even worse, timesheet software – web based or not – usually sucks big time.)

OK, I’m probably being unfair here. There are a couple of benefits to being a consultant. I suppose. I hope. But this is not my point.

Products

As shown, the consulting landscape does not look very promising; on one side, many companies compete for a small consulting market using the same shitty practices. On the other side, thankfully, there are companies that have understood that you can earn a very decent living by creating a nice product and selling licenses (or subscriptions) of it.

As previously, there are interesting side effects to choosing this strategy:

  • Creating products has the ultimate goal of generating a steady income stream. This frees up energy and resources in your team to build new products, which generate more revenue, which you can spend creating new products… and so on and so forth. You get the idea.
  • Having to maintain only a few products means that you can afford to know their quirks by heart; you don’t have to context switch from project to project like most consultants do, and you can continuously fix bugs and add new features. You feel like the product is your child, and you help it grow and become stronger, more resilient, more powerful. Which helps you sell more copies, etc, etc (see the previous point).
  • Google’s much touted “20% project time” becomes, in the case of owning your own products, a “100% project time”. You enter a state of continuous creation. You don’t have to explain your choices to a non-technical (read: incompetent) boss: you respond, at most, to what your market demands (read: your customers).
  • You can create a product suite; the synergy from one product to the other might suffice to drive up sales of both products all by itself.
  • You can have direct contact with your clients, answering their requests and problems, instead of relying on a (usually non-technical) man-in-the-middle.
  • You acknowledge the fact that 8 hours of coding work is an illusion. I know no developer capable of sitting for 8 hours in front of a computer and writing coherent code, which is what most consultants are expected to deliver for CHF 1000 per day (the customer doesn’t usually know that the consultant only gets 30% of that sum). A maximum of 5 or 6 hours of pure concentration is already a big win, and the rest should be spent doing paperwork, playing Wii Sports or doing the groceries. Freeing your mind helps you have more ideas, which in turn become products that generate new revenue streams. When you are in consulting mode, you cannot have this liberty. Actually you have no liberty at all.
  • You can have a real quality strategy. I know no consulting firm which really pays attention to quality (even if most pay endless lip service to the Q word). Refactoring, unit testing, user testing, writing requirements and specs are just nonexistent tasks in most consulting companies. When you are creating products, you can take the time to do these with the depth that you want; and actually, you do, and you enjoy it.

Again, I’m really being unfair here. I am concentrating maybe too much on this “virtuous circle” of “product -> revenue -> freedom -> product -> rinse and repeat”. Things are never that easy; when you create a product, you have to choose a platform, find a market for it, invest in the creation part, advertise it, maintain it, support your customers, update your website, burn the CD-ROMs, write in your blog, test your product in the next version of the operating system (or browser), fix that weird Unicode bug, set up the eShop for selling your product, troubleshoot PayPal issues, add entries to the FAQ, participate in trade shows, send demos to magazines, fix the damn coffee machine, and many, many other things.

However hard it might seem, the underlying truths are fundamental: when you own the product, your commitment to quality and your enthusiasm will be unparalleled. And your code will be better just because of that.

Conclusion

I would say that consulting is a viable option when starting up, as a short-term strategy. There’s a lot of demand for custom software out there, and generating cash with your brain that way can be a quick entry point to bootstrap your own company.

However, in the medium and long term, the only viable strategy for sustained growth in the software industry is the creation and sale of software products. This is the only way to create true value in your company: it helps you build a healthy environment for your staff, fosters creativity, engages customers with a real experience, and creates a win-win situation for you and your customers.

Of course, creating and managing a product requires skills and objectives which are not the same as those of your usual consulting project; this is the reason why most consulting companies fail when trying to adopt a product mindset. This will be the subject of a future article.

On the Importance of Yerba Mate in the Software Development Process

This paper will highlight the results of extensive research conducted since the mid-’90s on the effects of the consumption of beverages based on the plant known as Ilex paraguariensis, in the framework of software development process activities in South America and some small parts of Europe.

This paper will provide an introduction to the herb commonly referred to as “Yerba Mate”, and will later delve into the advantages and disadvantages of its consumption, in the context of the creation of software products.

Introduction

Yerba Mate is defined by Wikipedia as follows:

Yerba mate or yerba-mate (Br.) (Spanish: yerba mate, Portuguese: erva-mate), Ilex paraguariensis, is a species of holly (family Aquifoliaceae) native to subtropical South America in northeastern Argentina, eastern Paraguay and southern Brazil. It was first scientifically classified by Swiss botanist Moses Bertoni, who settled in Paraguay in 1895.

The Yerba Mate (usually and wrongly spelled as “Yerba Maté” in English-speaking texts) is used in the preparation of a caffeinated beverage described by Wikipedia as follows:

Mate (Spanish pronunciation: [ˈmate]), also known as chimarrão (Portuguese: [ʃimaˈxɐ̃ũ]) or cimarrón, is a traditional South American infused drink. It is prepared from steeping dried leaves of yerba mate (Ilex paraguariensis, known in Portuguese as erva mate) in hot water. It is the national drink in Argentina, Paraguay, and Uruguay, and drinking it is a common social practice in parts of Brazil, Chile, eastern Bolivia, Lebanon and Syria. In Brazil, it is considered to be a tradition typical of the “Gaúchos”, name given to those born in Rio Grande do Sul. The drink contains caffeine. (…) The multicultural Yerba Mate Association of the Americas states that it is always improper to accent the second syllable, since doing so confuses the word with the unrelated Spanish word meaning “I killed.”

One of the phrases in the quoted paragraphs from Wikipedia brings to mind the importance of such a drink in the creation of software products (no, not the phrase about killing, the previous one). Caffeine is known for its capabilities in waking up inert areas of the brain, particularly during brain-damaging activities.


We consider it unfortunate to qualify software development as a brain-damaging activity (although some research arrives at this particular conclusion); it is, however, certainly a brain-intensive one, and as such, Yerba Mate has proven, in our tests, to be a particularly interesting alternative to coffee.

Preparation

To prepare “Mate” (the beverage), three basic elements are required:

  1. A vessel, usually also referred to as “mate” (to add to the confusion), but also called “guampa”, “cuia”, “calabaza”, and other names without any translation to English whatsoever. Among these names is also “porongo”, as it is known in Uruguay; this word is usually avoided in Argentina, for the exact same reason the name “Mitsubishi Pajero” has been a commercial failure there. This element can be made of wood or metal, or can even be the hollow shell of a dried calabash.
  2. A metallic straw, usually also referred to as “bombilla” or, less commonly, “bomba”. This element can be made out of metal or wood, and is used to drink the infusion without swallowing the leaves of Yerba Mate at the same time. The best ones have their top part covered in gold, which protects the lips from the intense heat carried by the water into the metal, and also provides a sense of luxury to an otherwise rather humble activity.
  3. Hot water, never boiled, at around 70 to 80 degrees Celsius (158 – 176 degrees Fahrenheit). It is very, very, VERY important to serve the water at the exact temperature, without boiling the water inadvertently. Usually, the best way to keep the water hot is with a thermos or vacuum flask, of which the latest industry benchmarks highlight the Uruguayan brand “Lumilagro” as the most reliable, competitive and durable in the market. European customers are best served by the standard thermos sold by Ikea.

Once all the elements are ready, the preparation process is fairly simple:

  1. Add the Yerba Mate leaves into the mate (the vessel);
  2. Put your right hand on top of the mate (the vessel), covering the opening; using your left hand, turn the vessel upside down and shake it a little; then return the vessel to its normal position and dust the mate powder off your hand (it is strongly recommended not to sniff it);
  3. Insert the straw into the vessel, creating a small hole in the Yerba at the same time;
  4. Pour in hot water, very slowly, into the hole caved in the previous step; on the first serve it is best to avoid filling the mate completely, to give the yerba time to get moist and release its flavor slowly;
  5. Drink the mate by sipping at the straw, taking care not to burn your mouth or throat;
  6. Pass the mate around, which helps create and spread a sense of teamwork, brings an ambience of relaxation and self-contemplation, and also spreads many known viruses.

Advantages

In the context of software engineering, such a practice has the following advantages:

  • Health benefits: The ingestion of mate (the beverage) contributes positively to the recommended daily intake of water (around 2 to 3 liters a day), and thus to the maintenance of a proper hydration level in the brain, which is recognized by several studies as a major contributor to its productivity. Some recent papers even indicate that the habit of Mate drinking can reduce the risks of cancer; in any case, Yerba Mate is also a major source of many important elements for a healthy daily diet:
    It contains vitamins A, C, E, B1, B2, Niacin (B3), B5, B… and complex minerals like Calcium, Manganese, Iron, Selenium, Potassium, Magnesium, Phosphorus, and Zinc. It also contains Carotene, Fatty Acids, Chlorophyll, Flavonols, Polyphenols, Inositol, Trace Minerals, Antioxidants, Tannins, Pantothenic Acid, and 15 Amino Acids.
  • Prolonged working hours: Instead of having to leave the desk to get yet another cup of coffee, the knowledge worker can sit in front of his computer for hours, particularly when using a thermos with a capacity of at least 1 or 1.5 liters (around a third of a gallon). Mate (the beverage) is also known for reducing appetite, which helps reduce costs in the case of companies providing food to their employees.
  • Teamwork benefits: Given the inherently social origins of the habit of drinking mate, in the case of teams, or in the case of agile practices such as pair programming, sharing the mate (the vessel) helps team managers create a sense of unity and common purpose.
  • Increased sensitivity: As with all caffeinated drinks, the intake of mate can lead to an improvement in the overall awareness of the mate drinker.

Disadvantages

The following disadvantages of Mate (the herb, the beverage and the vessel) are worth considering:

  • Cold water effects: Although common practice in Paraguay (where the infusion of Yerba Mate with cold water is known as Tereré), this variant is known for causing violent reactions in the digestive system of the person drinking it, and it is strongly recommended to never drink it more than 20 meters away from the nearest toilet.
  • Bitterness: The strong taste of Yerba Mate is also a factor of considerable debate. Most mate drinkers usually start drinking it with sugar (some even with saccharine or other sweeteners), while most experienced drinkers will dismiss this practice and downplay those doing it as amateurish or otherwise ignorant. It is strongly recommended to have everyone agree on a mate variant beforehand to avoid shallow discussions on the relative merits of different approaches to mate drinking.
  • Mate lavado: When the same Yerba has been used for several servings (usually above 10 or 12, depending on the quality of the Yerba), it loses part of its taste and must be replaced with fresh Yerba. Depending on how many people share the same mate (the vessel), this can be a significant problem, leading to reduced productivity and major anxiety and dismay.
  • “Matetiquette”: Mate (the beverage) comes with a whole etiquette of its own, tied to the history of the southern part of South America. As such, please be aware that serving a “mate lavado” (see the previous item for an explanation of the concept) is considered rude, and is strongly discouraged. Serving mate with cold water, as explained above, can also be seen negatively, particularly if the person preparing the mate is not from Paraguay. Finally, talking in front of your recently-filled mate instead of drinking it is also frowned upon; you might be greeted with an “it’s not a microphone” protest if you do it.

Conclusion

The importance of Yerba Mate in the process of creating software has been largely overlooked by major research efforts, and we think that more research and more mate drinking are needed. In our tests, Yerba Mate has proven to foster creativity, teamwork, overall happiness, and trips to the toilets.

Saving a Failing Project

In 2006 I had the opportunity to work as a “project leader” on a small failing project. Three developers were working on an ad hoc basis, creating a software application for an important client (a government office in Lausanne), without any kind of detailed formal specification, without any kind of design documentation, and with strong pressure from management to release the application, even if not in a usable state. Needless to say, the project was also over budget.

I had joined this company just a couple of days earlier, and the management asked me to take charge of the project. Not an easy task, particularly because it was my first experience of this kind.

The client was pushing to get the software it had paid for (a desktop reporting application for the Police department), and had not yet seen any kind of preview. So the first thing I did was pick up my copy of the book “Leading a Software Development Team” and give chapter 2, “I’m taking over the leadership of an existing project: where do I start?”, a thorough read:

The first thing that you should start to do is to review the situation. This involves more than just absorbing impressions; you need to organize these impressions into a framework. Try to organize your thoughts into the following areas, and in each area try to separate technical issues from personnel ones:
  • Where is the team now? (…)
  • Where is it supposed to be getting to? (…)
  • How does the team currently intend to continue?

(Whitehead, page 17)

Another highly pragmatic resource was Joel Spolsky, and his “Joel Test”:

The neat thing about The Joel Test is that it’s easy to get a quick yes or no to each question. You don’t have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each “yes” answer. (…) A score of 12 is perfect, 11 is tolerable, but 10 or lower and you’ve got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

(Spolsky, 2000)

The “Joel Test” result for this team was 2 when I joined (they just had source control and good tools). When I left the company, they were running at 9 (we still did not have candidates writing code during interviews, nor testers, nor hallway usability testing).
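As a minimal sketch of the scoring (the twelve questions are paraphrased from Spolsky’s article; the code itself is just an illustration):

    # The twelve Joel Test questions, paraphrased from Spolsky (2000).
    QUESTIONS = [
        "source control", "one-step build", "daily builds", "bug database",
        "bugs fixed before new code", "up-to-date schedule", "a spec",
        "quiet working conditions", "best tools money can buy", "testers",
        "candidates write code in interviews", "hallway usability testing",
    ]

    def joel_score(answers):
        """One point per yes; 12 is perfect, 10 or lower means trouble."""
        return sum(1 for a in answers if a)

    # The team described above, on the day I joined:
    yes = {"source control", "best tools money can buy"}
    print(joel_score(q in yes for q in QUESTIONS))  # -> 2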

For this project, I made the following decisions:

  • Since the priority for the client was to see results, I asked the developers to concentrate on stabilizing “visible” features, particularly a visual report editor that used a complex set of controls, similar to those of a drawing application, to create reports. Doing this, we had a stabilized preview version that we could show to the client as early as one week after my arrival on the project.
  • In agreement with the developers, we set up a daily build procedure, and I also asked them to provide a “client build” every Wednesday, which would be placed in a public directory available to the client. It turned out that the client never downloaded the binaries, but they liked to see the version numbers grow and the binaries being delivered. Every week, Wednesday was the “public build” day, Thursday was the “bug correction” day, and Friday, Monday and Tuesday were “new features” days. Small stand-up meetings every day allowed us to know what was going on.
  • Another important concern on the developers’ side was having a quiet environment to work in. They were constantly interrupted by the (quite nervous) managers wanting to see their progress, so I decided to stand between the two sides; I asked the managers not to interrupt the developers for any reason, and to come to me for updates. I became a “proxy” between both, which reduced the tensions and brought some peace to the developers.
  • I created a quick project plan on our intranet (there wasn’t any, so tracking the project was next to impossible) by asking the developers about the tasks they needed to do to finish the project, along with the estimated time to do them, and by setting some milestones. Since the plan lived on a wiki page, the developers could change the time estimates if they felt they had made a mistake, the only condition being that they notify me of these changes.
  • Using that information, I could create a couple of reports for everyone to see, and bring more visibility to the project:
    1. I wrote a weekly report stating the week’s achievements, the status of the project (number of open bugs, new functionality available, etc).
    2. In the intranet, I set up a couple of graphs and report tables, which were automatically updated every day.
  • I did not make any technical decisions about the project; I gave full authority on this matter to the lead developer, who in turn appreciated this trust and spontaneously took the decision to document and unit test the system thoroughly, putting in extra hours every day. This boosted the morale of the team, and the quality of the application as well. The other two developers contributed to these tasks too, and the pace and quality of releases increased within a couple of months. It turned out that the architecture of the system was particularly well done, and as such, adding new features was a relatively simple task once the underlying framework was finished; of course, while that framework was being built no visible results were available, which made everyone nervous.

Looking back, the only technical decision I needed to make during this project was to use the company wiki; there I could add information pages that everyone contributed to, reducing the number of communication channels and the misunderstandings between project team members. I cannot stress enough how much this helped; it provided a complete dashboard for everyone to refer to.

The most important problems in this project were human and customer related. By providing more visibility into the project, and by increasing the signal-to-noise ratio in the communication channels between developers and management, the team was able to provide the customer with a more reliable and full-featured product.

References

Joel Spolsky, “The Joel Test: 12 Steps for Better Code”, Wednesday, August 9th, 2000 [Internet] http://www.joelonsoftware.com/articles/fog0000000043.html (Accessed June 8th, 2007)

Whitehead, R.; “Leading a Software Development Team – A Developer’s Guide to Successfully Leading People & Projects”, Addison-Wesley, 2001, ISBN 0-201-67526-9

Adding Manpower

Published in 1975, “The Mythical Man-Month” is considered an all-time classic in the software engineering field. The book’s author, Frederick P. Brooks Jr., used his experience as the project manager of the IBM System/360 and its software, the Operating System/360, to explain a common set of problem patterns, applicable to other software projects as well.

One of the most famous quotes in the book is the one regarding the consequences of adding human resources to a late project; this article will provide a couple of thoughts about this assertion, and highlight some contrary opinions.

The Mythical Man-Month

The second chapter of Brooks’ masterpiece bears the same name as the book, “The Mythical Man-Month”; the core argument of this chapter is that the most frequent factor in project failure is schedule and time estimation. Brooks states that this is due to the fact that

Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them. This is true of reaping wheat or picking cotton; it is not even approximately true of systems programming. When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. The bearing of a child takes nine months, no matter how many women are assigned.

(Brooks, pages 16 & 17)

The final phrase of the above paragraph is often used as a graphical depiction of the nature and meaning of Brooks’ law. It points to the strong need for communication and integration in software projects; being a social process, software development requires a strong network of communication between team members, allowing them to coordinate the set of interdependencies inherent in every project.

After an interesting analysis of common time overrun situations, Brooks ends this chapter with the following conclusion, which contains the enunciation of the law itself:

Oversimplifying outrageously, we state Brooks’s Law: Adding manpower to a late software project makes it later. This is then the demythologizing of the man-month. The number of months of a project depends upon its sequential constraints. The maximum number of men depends upon the number of independent subtasks. From these two quantities one can derive schedules using fewer men and more months. (The only risk is product obsolescence.) One cannot, however, get workable schedules using more men and fewer months. More software projects have gone awry for lack of calendar time than for all other causes combined.

(Brooks, pages 25 & 26)

This “law” is known and cited throughout the industry as an example of a common pattern, observed time and again in different projects all over the world:

Fact 3: Adding people to a late project makes it later. (…) Intuition tells us that, if a project is behind schedule, staffing should be increased to play schedule catch-up. Intuition, this fact tells us, is wrong. The problem is, as people are added to a project, time must be spent on bringing them up to speed. (…) Furthermore, the more people there are on a project, the more the complexity of its communication rises.

(Glass, page 16)
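The combinatorics behind that last remark are easy to sketch; in a fully connected team, pairwise communication paths grow quadratically (this is the n(n-1)/2 intercommunication formula Brooks himself uses):

    # n people need n * (n - 1) / 2 pairwise communication channels.
    def channels(n):
        return n * (n - 1) // 2

    for n in (3, 10, 50, 1500):
        print(n, channels(n))  # -> 3, 45, 1225, 1124250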

As a personal note, I must say that reading this book opened my eyes more than many, many other books. It is a funny read, but also an enlightening one: many anecdotes told by Brooks correspond strangely well to my own experience, and this one is no exception. I have seen projects go badly late simply because more people were added; in one particular case, the project was cancelled altogether. These projects had several factors in common, though:

  • Bad documentation, or the lack thereof; the only way for newcomers to know what was going on was to interrupt the other developers, disrupting the current operations on the project. I think that a good set of documents, describing both the high-level architecture and the low-level APIs, is needed for new developers to jump in and catch up. It’s maybe not enough, but a good leap forward anyway.
  • Lack of architectural vision; projects that do not have an architect, providing vision and technical leadership to the team, are in my opinion exposed to problems when more developers join the project. The architect can act as a proxy person, guiding new developers while they familiarize themselves with the project, isolating other developers from this task.
  • Bad decomposition of the project into components; if the system to be developed is sufficiently large, and the decomposition into components is not properly done, the overlap and extended communication paths among team members might affect the whole project negatively. A good decomposition breaks the whole project down into a set of smaller ones, with the corresponding set of interfaces, which lets the team work separately on different subsystems. In these, the risk of slipping further when adding manpower is proportionally reduced.
  • Bad working conditions; I positively think that open spaces are a common disease in our industry. Teams working in open spaces suffer more from noise and visual distractions, and this is even more evident when new team members join the project.

Criticism

However famous, Brooks’ law has drawn a good deal of criticism as well, regarding the specific characteristics of projects that might be affected when new people are assigned to them. The OS/360 project, which served as the basis for Brooks’ work, might not be similar to other projects, and as such, the law would not necessarily apply to them:

For Brooks’ Law to be true, the amount of training effort required from existing staff must be significant. The amount of effort lost to training must exceed the productivity contributed by new staff when they eventually become productive. (…) “Late” chaotic projects are likely to be much later than the project manager thinks – project completion isn’t three weeks away, it’s six months away. Go ahead and add staff. You’ll have time for them to become productive. Your project will still be later than your plan, but that’s not a result of Brooks’ Law. It’s a result of underestimating the project in the first place. (…) Controlled projects are less susceptible to Brooks’ Law than chaotic projects. Their better tracking allows them to know when they can safely add staff and when they can’t. Their better documentation and better designs make tasks more partitionable and training less labor intensive. They can add staff later in the project with less risk to the project.

(McConnell, 1999)

Scott Berkun gives a more concrete analysis on why the law could be wrong:

  • It depends who the manpower is. The law assumes that all added manpower is equal, which is not true.
  • Some teams can absorb more change than others. Some teams are more resilient to change.
  • There are worse things than being later. (…) That can be ok if you also get higher quality.
  • There are different ways to add manpower. (…) The more experience everyone has with mid-stream personnel changes, the better.
  • It depends on why the project was late to begin with. (…) no amount of programming staff modifications will resolve the psychiatric needs of team leaders or the dysfunctions of executives.
  • Adding people can be combined with other management action. (…) if you’re removing your worst, and most disruptive, programmer and adding one of your best, it can be a reasonable choice.

(Berkun, 2006)

And what about open source projects? Many of these (Linux, Apache, MySQL) are potentially among the biggest software projects ever undertaken, and they don’t appear to suffer from the problems pictured by Brooks’ law:

But proponents of open source and free software development, including Linux developers, are not completely satisfied with the Law. Most famously (among geeks at any rate), Eric Raymond in his “The Cathedral and the Bazaar,” declared Brooks’ Law obsolete, if not simply limited, saying “if Brooks’ Law were the whole picture, Linux would be impossible.” Although Raymond now says that he has somewhat modified his views or was misunderstood, some still would say he is given to oversimplifying and outrageousness himself. “I don’t consider Brooks’ Law ‘obsolete’ any more than Newtonian physics is obsolete; it’s just incomplete. Just as you get non-Newtonian effects at high energies and velocities, you get non-Brooksian effects when transaction costs go low enough. Under sufficiently extreme conditions, these secondary effects dominate the system — you get nuclear explosions, or Linux.”

(Jones, 2000)

Conclusion

So far, the discussion seems to be open. There might be a scale factor that exposes certain projects to Brooks’ law more than others. I think more research is needed to arrive at a conclusion, even if it turns out to be a statistical one.

Other important topics highlighted in the book are the “second-system effect”, the productivity advantage of using high-level languages, and the importance of building a prototype – “one to throw away”. I can only recommend this book to everyone interested in the field of software engineering (which I did in my own review of classic books in this blog: http://kosmaczewski.net/2005/11/20/my-bookshelf-part-iii/ )

References

Berkun, S.; “Exceptions to Brooks’ Law”, January 11th, 2006, [Internet] http://www.scottberkun.com/blog/2006/exceptions-to-brooks-law/ (Accessed June 8th, 2007)

Brooks Jr., F. P.; “The Mythical Man-Month – Essays on Software Engineering, Anniversary Edition”, 1995, Addison Wesley, ISBN 0-201-83595-9

Glass, R. L.; “Facts and Fallacies of Software Engineering”, Addison-Wesley, 2003, ISBN 0321117425

Jones, P.; “Brooks’ Law and open source: The more the merrier?”, IBM, May 1st, 2000, [Internet] http://www.ibm.com/developerworks/linux/library/os-merrier.html (Accessed June 8th, 2007)

McConnell, S.; “Brooks’ Law Repealed?”, IEEE Software, November/December 1999 [Internet] http://stevemcconnell.com/ieeesoftware/eic08.htm (Accessed June 8th, 2007)

Certification

While several other professions have a long-established, standard certification procedure, the title “software engineer” is applied both to self-made developers who have become experts in some technique, and to people with PhD degrees and a long history of academic and professional achievements.

While in some situations it is not legally possible to use the title “software engineer” without an engineering degree of some kind (for example, in some states of the USA, or for institutions like the IEEE – http://www.ieeeusa.org/policy/positions/titleengineer.html), the term “software developer” is usually applied to people in charge of designing, writing and/or maintaining software-based systems. I will use the terms developer and engineer interchangeably in this discussion, which some people might consider incorrect.

The discussion about the need for a formal certification process is a relatively new one:

Professional certification in the IT industry is a relatively recent phenomenon. It was begun in the late 1980s by Novell, Inc., an upstart networking vendor from Provo, Utah, in an effort to build market share and manage support costs for its products by building the skill levels of the people who worked with those products. Novell was one of the first companies to recognize the links between education/skills and product success. They knew that they could not build an education infrastructure that would support their worldwide marketing plans with their own resources. However, they also recognized that if they did not provide for skills acquisition for their highly technical products, they could never meet their product revenue goals.

(Shore)

However, no consensus about whether or not certification is needed has been reached yet. This article will highlight some of the problems raised by software engineering certification, which might explain this lack of consensus:

  • The first one has to do with the sheer breadth of the software engineering field: are all software developers equal?
  • The second one has to do with the large number of available certifications: which one to choose? Which ones are “reliable” indicators of expertise, and in which fields?

What is a “Software Engineer”?

In my career, I’ve found self-made people (I’m one of them, actually), building architects, lawyers, mathematicians, economists and even geophysicists writing code for a living. What I’ve seen so far is that the most successful software developers are those who like doing it, no matter which profession they’ve followed. And the opposite is also true: many people with a computer science degree discover, some time after they start their careers, that they definitely do not like that code thing.

One of the biggest problems with certifications is that there is no such thing as a “single kind” of software developer:

  • There are those who write games, and spend most of their time writing in low-level languages for game consoles, optimizing for speed and space, and creating three-dimensional worlds using as little memory as possible…
  • There are those who write web-based applications, and spend their time creating 3-tier architectures, talking to a database, using some kind of object-oriented platform, and luckily exposing some data using XML web services, dealing with cross-browser issues, and wondering what is all that fuss about Web 2.0…
  • There are those who write operating systems, and work for some embedded software company, or hack Linux kernel device drivers every night, or work for Microsoft or Sun or Apple, and spend most of their time discussing whether microkernels are better than monolithic architectures…
  • There are those who have the ill fate of working as consultants, and spend more time switching from project to project every day, or dealing with corporate politics, than with code…
  • There are those who manage projects and spend more time in their mailing list or in Microsoft Project rather than being able to code (and then complain about this in their blogs)…
  • There are those who have a software engineering degree, but work for ZDNet writing about industry trends…
  • There are those who turn into human resource consultants, and try to keep up to date on the new trends, but feel completely lost given what they learnt in university…
  • There are those who do a little bit of everything I’ve mentioned above, and may or may not be really good at all of it…
  • And finally there are those who might fit any of the descriptions above, but would have preferred not to listen to their parents, and to open that scuba-diving shop in Honolulu instead.

Available Certifications

This diversity explains the existence of more common product-specific certifications: you can be certified to use Microsoft technologies (http://www.microsoft.com/learning/mcp/default.mspx), MySQL databases (http://www.mysql.com/certification/), Apple servers (http://www.apple.com/xserve/raid/certifications.html), various IBM products (http://www-03.ibm.com/certify/certs/index.shtml), Java development stacks (http://www.sun.com/training/certification/java/index.xml), Cisco routers (http://forums.cisco.com/eforum/servlet/CCNP?page=main), RedHat Linux installations (https://www.redhat.com/training/certification/) or UML diagrams (http://www.omg.org/uml-certification/index.htm)

However, given that technology companies have an interest in having many people take their certifications, and given their affordability and low entry barriers, many of these certifications become much easier to get than they should be; as a result, they lose credibility and do not help IT recruiters properly filter software developers during selection processes. I’ve heard many complaints from project managers regarding these certifications, and I think it’s a generalized feeling:

“Certified skills pay has not just flat lined, it’s in the negative. This is big news if you’re certified and you’re thinking about getting recertified,” said Foote. “This trend is in the fourth quarter, that pay for certifications is on the wane, while non-certified skills are growing in pay.” (…) “Certifications are losing value because employers are looking for more in their workers than the ability to pass an exam; they want business-articulate IT pros.”

(Rothberg, 2006)

Bruce Schneier, a well-known security researcher, has written about security certifications as well, with mixed feelings:

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

(Schneier, 2006)

The conclusion of all of this is that the debate is pretty much still open, and that there is not a simple answer to it.

Market Fragmentation

There is an interesting anonymous comment in Schneier’s website as well:

Another thought on certification is they are not all equal. There are Vendor Certs. Microsoft’s MCP/MCSE, CISCO CCNA/CCNP/CCIE Pro: The canidate is likley to know how to work on your specific platform. Con: The canidate is likely to think in only the vendor’s interest. There are Certs to assure knowledge of standard security terminlogy. ISC CISSP Pro: Can talk strategy and evaluate the nine domains to evaluate how the company is doing overall Cons: Most likely could not tell you what the nineth byte of an ip packet means or if OpenSSL is out of date on Red Hat Linux. Topic specific, vendor neutral. SANS GIAC Pro: Vendor neutral. A lot of focus on specific skills in NIDS or Hardening Windows, Incident Handeling, etc. Con: Concentration on open source tools since they are easily available, but it does not seem to impress all employers.

(@nonymou5, in Schneier, 2006, spelling mistakes not corrected)

I think that this comment summarizes pretty well another problem with certifications: there is a great level of fragmentation in today’s market. Every single important technology in the IT world requires a huge investment in time and practice to master, and this translates into huge complexity for developers trying to choose the right certification. All of this without even taking into account the large number of IT-related university degrees available, online or not.

Conclusion

The term “software engineer” is sufficiently vague, and the number of “certifications” sufficiently large, as to preclude a single “yes or no” answer to whether professionals in the software sector should be certified. I personally think that I would rather avoid vendor-specific certifications as far as possible, and choose university-related or problem-domain-related certifications instead, to keep my career options open, and my mind free of marketing.

References

Rothberg, Deborah; “Another Nail in the IT Certification Coffin”, November 3rd, 2006, [Internet] http://www.eweek.com/article2/0,1895,2051272,00.asp (Accessed June 3rd, 2007)

Schneier, Bruce; “Security Certifications”, July 20th, 2006, [Internet] http://www.schneier.com/blog/archives/2006/07/security_certif.html (Accessed June 3rd, 2007)

Shore, Julie; “Why Certification? The Applicability of IT Certifications to College and University Curricula”, [Internet] http://www.developer.ibm.com/university/scholars/certification/ebusiness/pdfs/why-certification.pdf (Accessed June 3rd, 2007)

Challenges for Software Engineers

Software Engineering is the youngest of all the professions, born around 50 years ago, but it has been continually evolving since. Practitioners have fiercely debated it through the years, given the extremely fast pace of innovation in the field, and the extremely difficult and inherently dynamic nature of software. Many trends have appeared and vanished, and many others will come.

In this article I will provide a short overview of two kinds of challenges that I believe software engineers will have to confront in the next 20 years: the human and the technical.

The Human Factor

A quick look at the agenda of the 29th International Conference on Software Engineering (held in Minneapolis last year, from the 20th to the 26th of May 2007) shows the key themes considered by the software engineering research community as the major challenges today:

  • “Improving Software Practice through Education: Challenges and Future Trends”
  • “Research Collaborations between Industry and Academia”
  • “Model-driven Development of Complex Systems: A Research Roadmap”
  • “Source Code Analysis: A Road Map”
  • “Software Reliability Engineering: A Roadmap”
  • “Global Software Engineering: The Future of Socio-technical Coordination”
  • “Collaboration in Software Engineering: A Roadmap”
  • “Self-Managed Systems: An Architectural Challenge”
  • “Software Project Economics: A Road Map”

(Source: ICSE 2007)

Mixed in with technical concerns, some presentations highlighted core problems that appear in the current state of software engineering: communication, collaboration and human issues.

The core substance of software deserves more eyes and more minds, thinking ways to describe not only the big picture (something that you can do with fancy diagrams) but also to give solutions to the problems that developers find daily while building systems up. Software is a process, but not any kind of process: a human one, maybe the most intangible of all processes; and as such, it is filled with all human brightnesses and failures.

(Myself, in 2006)

I have the deep, strong conviction that software development cannot and must not be separated from the human-side problems of forming, keeping and training teams, enhancing internal and external communications, and improving individual creativity as well as the ways of reaching team consensus. As a powerful example, the seminal Peopleware book by DeMarco and Lister showed that many of the most successful software companies have been those that excelled at creating human-centric environments:

In 1982, (Mitchell Kapor) founded Lotus Development Corporation, for which he is most noted. While there, he revolutionized corporate workplace culture by making diversity and inclusivity top priorities in his goal for creating an environment that attracted and retained employees. There were many “firsts” for Lotus, including being the first company to sponsor an AIDS Walk event in the mid-80’s and refusing to do business with South Africa due to Apartheid.

(Sterling-Hoffman)

Thanks to a sharp hiring process, a series of innovations in their flagship spreadsheet product, and a progressive corporate culture, Lotus dominated the software landscape of the 80s. Today, Google follows Lotus’ steps very closely (Google, 2007a), and their brilliant results in the last few years seem to confirm this trend. Google, for example, allows their employees to use 20% of their time on their own projects (Google, 2007b). This results in an incredible amount of code, used internally and also released as open-source projects:

Google is a fantastic company to work for. I could cite numerous reasons why. Take the concept of “20 percent time.” Google engineers are encouraged to spend 20 percent of their time pursuing projects they’re passionate about. I started one such exciting project some time back, and I’m pleased to announce that Google is releasing the fruits of this project as an open source contribution to the Macintosh community. That project is MacFUSE, a Mac OS X version of the popular FUSE (File System in User Space) mechanism, which was created for Linux and subsequently ported to FreeBSD.

(Google Mac Blog, 2007)

The empowerment of both the individual and the team (the emphasis is important here) is key to a successful software project.

Parallelization

Herb Sutter has put it very clearly: technically speaking, since the beginning of the decade there has been no way to get more processing power without jumping to multicore architectures:

The key question is: When will it end? After all, Moore’s Law predicts exponential growth, and clearly exponential growth can’t continue forever before we reach hard physical limits; light isn’t getting any faster. (…) If you’re a software developer, chances are that you have already been riding the “free lunch” wave of desktop computer performance. (…) Right enough, in the past. But dead wrong for the foreseeable future.

(Sutter, 2005)

The problem is that more cores do not necessarily mean more computing power, because the jump made by chip manufacturers has not (yet) been fully matched by the software community. Of course there is the concept of “threads”, and multi-threaded applications can benefit from performance boosts when running on multicore hardware platforms; however, a number of myths have to be debunked, such as the common “2 x 3GHz = 6GHz” (as explained by Sutter here: http://www.ddj.com/showArticle.jhtml?documentID=ddj0503a&pgno=3); and, even more importantly, creating multithreaded applications is not easy. At all.
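
To see why the “2 x 3GHz = 6GHz” arithmetic fails, Amdahl’s law is the classic back-of-the-envelope tool (the illustration below is mine, not taken from Sutter’s article): if only a fraction p of a program can run in parallel, n cores yield at best a speedup of 1 / ((1 - p) + p / n). A minimal C++ sketch:

    #include <iostream>

    // Amdahl's law: with a fraction p of the work parallelizable,
    // the best possible speedup on n cores is 1 / ((1 - p) + p / n).
    double amdahl_speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        // Even if 80% of a program runs in parallel, two cores
        // deliver only ~1.67x -- not the 2x the myth suggests.
        std::cout << "2 cores:    " << amdahl_speedup(0.8, 2)    << "x\n";
        std::cout << "8 cores:    " << amdahl_speedup(0.8, 8)    << "x\n";
        // The serial 20% caps the speedup at 1 / 0.2 = 5x,
        // no matter how many cores are available.
        std::cout << "1000 cores: " << amdahl_speedup(0.8, 1000) << "x\n";
    }

The serial portion, however small, caps the total speedup; this is precisely why raw core counts do not translate directly into performance.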

A couple of months ago, in the “Questions and Answers” section of LinkedIn.com, I answered an interesting question about parallelization; the following excerpt of my answer pretty much summarizes my opinion on the current state of multithreading, as well as some of the challenges that lie ahead:

The problem is simply that the “mainstream” programming languages do not provide good ways to code multithreaded applications. (…) Not at all. The problem is real, since multithreaded applications are extremely complicated to reason about, let alone develop properly. A line of code in a high-level language could mean several hundred instructions in a processor; and depending on the sharing algorithm used at the CPU level, each one of these instructions might be executed separately, sharing resources with other processes. So what happens when? (…) What I mean is that the fact that the JVM and the CLR support threads does not make good .NET or Java developers good multithreading developers by default. It’s a different mindset: who is accessing your resources? (…) I think that as long as programming languages do not take multitasking and multithreading as base features (and not as mere library or API add-ons) we will continue struggling with single-threaded applications that collide with each other.

(Myself, this time on LinkedIn Answers, 2007)
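
To make that “different mindset” concrete, consider the classic lost-update problem: an innocent-looking statement such as counter++ is really a read-modify-write sequence, and two threads interleaving those steps silently lose increments. A minimal sketch, using the threading facilities that later landed in C++11 (the code and names below, such as unsafe_counter, are my own illustration, not part of the original answer):

    #include <atomic>
    #include <iostream>
    #include <thread>

    int unsafe_counter = 0;            // plain int: a data race (undefined behavior)
    std::atomic<int> safe_counter{0};  // atomic: each increment is indivisible

    void work() {
        for (int i = 0; i < 100000; ++i) {
            ++unsafe_counter;  // read, add, write: three interleavable steps
            ++safe_counter;    // one indivisible operation
        }
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // The unsafe total usually falls short of 200000; the atomic one never does.
        std::cout << "unsafe: " << unsafe_counter << "\n";
        std::cout << "safe:   " << safe_counter  << "\n";
    }

Compiled with g++ -std=c++11 -pthread, the unsafe counter routinely loses updates while the atomic one always reaches 200000; the fix only looks trivial because the example is.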

I think that the challenge of parallelization is not only an extremely tough one, requiring what Thomas Kuhn calls a “paradigm shift”, but also a huge business opportunity; after all, the Chinese word for “crisis” is often said to combine the characters for “danger” and “opportunity” (Mary R. Bast, 1999).

Very Large Systems

I also think that software systems will invariably get bigger and bigger. And given the historically high failure rate of software projects, modern society’s dependency on software, the pervasiveness of the Internet, the low price of connectivity and the overall globalization of the industry, it is more important than ever to get ready for those challenges.

In July 2006, the well-known Software Engineering Institute at Carnegie Mellon University published an impressive (and freely downloadable) report called “Ultra-Large-Scale (ULS) Systems: The Software Challenge of the Future”:

The study brought together experts in software and other fields to answer a question posed by the Office of the Assistant Secretary of the U.S. Army (Acquisition, Logistics & Technology): “Given the issues with today’s software engineering, how can we build the systems of the future that are likely to have billions of lines of code?” Increased code size brings with it increased scale in many dimensions, posing challenges that strain current software foundations. The report details a broad, multi-disciplinary research agenda for developing the ultra-large-scale systems of the future.

(SEI, CMU, 2006)

The 150-page report gives an extremely detailed vision of the challenges raised by complex systems in the following areas:

  • Design
  • Monitoring
  • Human interaction
  • Computational engineering
  • Deployment
  • Legal issues

The report provides interesting conclusions, highlighting the methodologies and techniques that will be required to tackle these systems efficiently: among them, the role of the W3C, the forthcoming trends of grid computing and parallelization, the Model-Driven Architecture (MDA) initiative of the OMG, and finally the development of larger Service-Oriented Architecture (SOA) platforms, such as .NET or J2EE (page 41 of the report).

The report also places a strong emphasis on the concept of socio-technical ecosystems, and I think it is worth reading for everyone interested in software engineering.

Conclusion

Given the youth of the discipline, we have yet to see the most important developments in software engineering. However, it is extremely difficult to predict the future in this industry: Bill Gates himself published a book in 1995, “The Road Ahead”, in which he barely mentions the World Wide Web:

“The Road Ahead” appeared in December 1995, just as Gates was unveiling Microsoft’s master plan to “embrace and extend” the Internet. Yet the book’s first edition, with its clunky accompanying CD-ROM, mentioned the Web a mere seven times in nearly 300 pages. Though later editions tried to correct this gaffe, “The Road Ahead” remains a landmark of bad techno-punditry — and a time-capsule illustration of just how easily captains of industry can miss a tidal wave that’s about to engulf them.

(Salon.com, 2000)

In any case, I think that there are three important challenges in our industry: the need for better human management, the jump to multicore architectures and multiprocessing, and the ever-growing size of software projects. These three elements will undoubtedly change the shape of the industry in the years to come, and raise new challenges in turn.

References

Adrian Kosmaczewski on LinkedIn Answers, “For the software architects out there, do you feel there is an impending paradigm shift in the software development model, towards “parallel computing” models?”, January 2007, [Internet] http://www.linkedin.com/answers?viewQuestion=&questionID=7804&askerID=4194838 (Accessed June 3rd, 2007)

Adrian Kosmaczewski on Kosmaczewski.net, “What will the Software Architecture discipline look like in 10 years’ time?”, March 16th, 2006 [Internet] http://kosmaczewski.net/2006/03/16/software-architecture-future/ (Accessed June 3rd, 2007)

Bast, Mary R; “Crisis: Danger & Opportunity”, 1999 [Internet], http://www.breakoutofthebox.com/crisis.htm (Accessed June 3rd, 2007)

DeMarco, Tom & Lister, Timothy, “Peopleware – Productive Projects and Teams, 2nd Edition”, 1999, Dorset House Publishing, ISBN 0-932633-43-9

Google, “Top 10 Reasons to Work at Google”, 2007a [Internet] http://www.google.com/jobs/reasons.html (Accessed June 3rd, 2007)

Google, “What’s it like to work in Engineering, Operations, & IT?”, 2007b, [Internet] http://www.google.com/support/jobs/bin/static.py?page=about.html (Accessed June 3rd, 2007)

Google Mac Blog, “Taming Mac OS X File Systems”, January 11th, 2007, [Internet] http://googlemac.blogspot.com/2007/01/taming-mac-os-x-file-systems.html (Accessed June 3rd, 2007)

ICSE, “Future of Software Engineering”, 2007, [Internet] http://web4.cs.ucl.ac.uk/icse07/index.php?id=104 (Accessed June 3rd, 2007)

Salon.com, “Why Bill Gates still doesn’t get the Net”, 2000, [Internet] http://archive.salon.com/21st/books/1999/03/cov_30books.html (Accessed June 3rd, 2007)

Software Engineering Institute, Carnegie Mellon University, “Ultra-Large-Scale (ULS) Systems – The Report”, July 2006, [Internet] http://www.sei.cmu.edu/uls/ (Accessed June 3rd, 2007)

Sterling-Hoffman, “Opening Doors To Higher Education”, [Internet] http://www.sterlinghoffman.com/newsletter/articles/article140.html (Accessed June 3rd, 2007)

Sutter, Herb; “A Fundamental Turn Toward Concurrency in Software”, 2005, [Internet] http://www.ddj.com/dept/architect/184405990 (Accessed June 3rd, 2007)

Wikipedia, “Thomas Kuhn” [Internet], http://en.wikipedia.org/wiki/Thomas_Kuhn (Accessed June 3rd, 2007)