Before There Was Lean, Agile Or Waterfall There Was Theory X, Y And Z

When we talk about Lean or Agile or even Waterfall we talk about software development processes, but software development is a relatively young industry, so we have been able to piggyback on the work done in other industries (such as manufacturing) to create processes and management frameworks. This is both a good thing and a bad thing: we did not have to start from scratch, but we have also inherited some things that clearly don’t fit, and we now have to weed those out while retaining and innovating practices that work for our industry (we want to throw out the proverbial bath water, but keep the proverbial baby :)).

The whole idea of introducing processes is all about seeking efficiency; this is just as true in software development as it is in any other industry, and leading/managing people is central to it. If we want to effectively prune the processes and practices that don’t fit while retaining the ones that do, we need to be aware of what has worked (and what hasn’t) previously; after all, we don’t want to make the same mistakes that others have made before us. That is to say, it behooves us to be aware of some historical facts regarding where management and process theory comes from. I am a firm believer in learning from the successes and failures of others (not just from your own), and so it is to this end that I’d like to share with you a little bit of process and scientific management history. Most of it largely predates software development, but you will undoubtedly see in it the seeds of the processes you use today, as well as where certain management attitudes come from.

The First Foray Into Process – Frederick Taylor

Frederick W. Taylor is considered by many to be the “father of scientific management”. Taylor developed his scientific management theories in the late 1800s and early 1900s. He studied, measured and documented the behavior of steel workers, attempting to find the most efficient way to perform a task by breaking each task down into smaller component tasks (he called this process job fractionalization). He performed a great many studies (called time and motion studies), using a stopwatch to find the “one right way” of doing particular tasks. The idea was that by successfully combining the most efficient elements, the best production methods could be adopted.

Taylor developed four principles of scientific management:

  1. Replace rule-of-thumb work methods with methods based on a scientific study of the tasks
  2. Rather than letting employees gain experience by themselves, scientifically select, train and develop each employee
  3. Provide “Detailed instruction and supervision of each worker in the performance of that worker’s discrete task”
  4. Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks.

In Taylor’s opinion it was a worker’s nature to slack off, and furthermore a worker wasn’t really capable of understanding what they were doing in the first place. It was therefore the job of management to control and force the lazy workers to be productive and efficient by giving them clear and unambiguous direction (does that sound like some managers you might have known? :)).

It is interesting to note that Taylor had several disciples who pushed his theories and were reasonably successful in getting them implemented by industry. One of the more famous of his disciples was Henry Gantt – yep you guessed it, the guy who invented the Gantt chart :).

A More Humane Approach – Elton Mayo And The Hawthorne Effect

Elton Mayo is believed to have pioneered the human relations movement of production management. Mayo believed that the emotional state of workers is just as important as finding the best combination of movements when it comes to achieving maximum productivity. Mayo believed that workers form social groups at work and therefore cannot be treated in isolation but must be seen as members of a group. He believed that financial incentives are less important to a worker than the need to belong to a group.

Mayo is best known for a series of experiments he conducted in 1927 at the Hawthorne Works of the Western Electric Company.

The Hawthorne Experiment

During the experiment Mayo varied the intensity of light on the shop floor in order to find the optimum level of lighting that would result in maximum productivity. He found that regardless of the degree of light, worker productivity increased. Knowing that they were the subjects of a study made the workers change their behavior; this phenomenon became known as the Hawthorne Effect. Interviewing the workers after the experiment, Mayo found that they performed better because they were treated better by their supervisors during the experiment. The workers were also more motivated as their tasks acquired greater meaning as part of an experiment.

Theory X, Y And Z

In 1960 Douglas McGregor, a management professor at the MIT Sloan School of Management, repackaged and renamed Taylor’s and Mayo’s theories, calling them Theory X and Theory Y respectively. At the time the majority of managers were proponents of Theory X, i.e. they took a pessimistic view of human behavior and believed that people were inherently lazy and needed to be pushed towards better productivity with rewards and punishments.

McGregor was believed to be a proponent of Theory Y; he hoped that his work to repackage Taylor’s and Mayo’s theories would prompt the managers of that time to question the ideas that underpin both theories, and so achieve greater understanding.

In the 1980s William Ouchi took Theory Y a step further. He studied the benevolent version of Theory Y used by Japanese management and called it Theory Z. At the time Theory Z was thought by many to be the secret of the Japanese competitive advantage. Using Theory Z the Japanese were able to bring together management and workers in cohesive work groups. Everyone was part of the decision making process, workers and management worked together in quality circles. Everyone was involved in kaizen – a continuous effort to improve all aspects of the company and of self. Of course we know that another thing that stems directly from this is the idea of Lean software development.

It is interesting to note that the ideas employed by Japanese manufacturing were not actually invented in Japan, but instead stem from the work done by people such as Mayo and W. Edwards Deming.

I hope this has given everyone a high-level overview of where the idea of process comes from and what shaped (and continues to shape) the attitudes of the managers we have today. Of course, as software developers we are even more interested in process as applied to our own field (Agile, Lean, Waterfall etc.). And since Lean seems to be the flavor of the month at the moment, I will try to explore what I believe to be the origins of the Lean movement (not the software one, but the real one, i.e. the Toyota Way etc.) in a subsequent post. It is sometimes better to go back to the roots of a process (or movement) and build on top of the basic principles rather than trying to retrofit an existing process from one industry to another (e.g. manufacturing to software), but you will have to judge for yourself. Don’t forget to grab my RSS feed so you don’t miss out on that discussion.

Image by Beadmobile

More Advanced Ruby Method Arguments – Hashes And Block Basics

I’ve previously given an overview of basic method arguments in Ruby (at least in Ruby 1.9). There is quite a lot you can do with just the basic method arguments, so I purposely left the more advanced topics out of that post (which many people were quick to point out :)). However, if you want to have an in-depth knowledge of Ruby you will need to know how to use hashes as method arguments, as well as where blocks fit into the picture, and this is what I am going to cover here.

Using Hashes As Arguments

A Hash is just a regular object in Ruby, so normally, using it as an argument is no different from using any other object as an argument e.g.:

```ruby
def some_method(a, my_hash, b)
  p a
  p my_hash
  p b
end

some_method "Hello", {:first=>"abc", :second=>"123"}, "World"
```

This would produce the following output:

"Hello"
{:first=>"abc", :second=>"123"}
"World"

The interesting thing about hashes as arguments is that, depending on their location in the argument list, you can get some interesting benefits (and sometimes interesting detriments).

Hashes As The Last Argument

When you use a hash as the last argument in the list, Ruby allows you to forgo the curly braces, which can surprisingly make argument lists look a lot nicer. Consider this:

```ruby
def print_name_and_age(age, name_hash)
  p "Name: #{name_hash[:first]} #{name_hash[:middle]} #{name_hash[:last]}"
  p "Age: #{age}"
end

print_name_and_age 25, :first=>'John', :middle=>'M.', :last=>'Smith'
```

This produces the following output:

"Name: John M. Smith"
"Age: 25"

We don’t need to use curly braces as the hash is the last argument, which makes our method call look a little neater. This also has the very neat side-effect of making the method arguments somewhat self-documenting (the hash key tells you what the argument value relates to). If we weren’t using a hash, we would have to know the exact order of the method arguments to pass them in correctly (e.g. does the last name come first in the argument list, or the first name?):

```ruby
def print_name_and_age(age, last, middle, first)
  p "Name: #{first} #{middle} #{last}"
  p "Age: #{age}"
end

print_name_and_age 25, 'John', 'M.', 'Smith'
```

This looks correct, but we no longer know for sure if we accidentally confused the order of the parameters (which we did in this case) unless we look at the method definition, so our output is not what we expect:

"Name: Smith M. John"
"Age: 25"

This feature of not having to use curly braces also works when we use a hash as the only argument to our method. Because of this and the self-documenting properties of passing in a hash as an argument, a case can be made for exclusively using a hash to pass all arguments to a method rather than having an argument list. Not only do we get the benefits of self-documentation and neater syntax, but we can also pass arguments in any order since the hash doesn’t care due to the key/value association.
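For example, here is a minimal sketch of this hash-only style (the method name and keys are made up purely for illustration):

```ruby
# All arguments arrive through a single hash; since the hash is the
# only argument, we can leave off the curly braces at the call site
def print_person(details)
  p "Name: #{details[:first]} #{details[:last]}"
  p "Age: #{details[:age]}"
end

# the keys document each value, and the order no longer matters
print_person :age=>25, :last=>'Smith', :first=>'John'
```

This prints the same name and age no matter what order the keys are listed in.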

So Why Not Do This All The Time

When you use a hash as an argument you always have the extra overhead of dealing with the hash, i.e. inside the method you have to pull the values out of the hash using the [] operator. It is probably not such a big deal, but it is there nonetheless, so it may not be worth doing this for methods that have only one or two simple arguments.

There is one more caveat to be aware of with hashes. Just like you get some niceness when you use a hash as the last (or only) argument in an argument list, you get some nastiness when you use it as the first. If you use a hash as the first argument, you can’t leave out the curly braces; not only that, but because there are other arguments in the list, you also can’t leave out the parentheses from the method call like you usually would in Ruby e.g.:

```ruby
def print_name_and_age(name_hash, age)
  p "Name: #{name_hash[:first]} #{name_hash[:middle]} #{name_hash[:last]}"
  p "Age: #{age}"
end

print_name_and_age({:first=>'John', :middle=>'M.', :last=>'Smith'}, 25)
```

You can probably guess that if you did try to leave off the parentheses, Ruby would try to interpret your hash as a block (as curly braces are also used for block syntax), so we must use the parentheses to disambiguate our method call.
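As a quick illustration, here is roughly what happens with and without the parentheses (the failing call is commented out, since it won’t even parse):

```ruby
# With the parentheses, the braces are unambiguously a hash literal
print_name_and_age({:first=>'John', :middle=>'M.', :last=>'Smith'}, 25)

# Without them, Ruby parses the braces as a block attached to the
# method call, and the hash contents are not valid block code, so
# the following line is a syntax error:
# print_name_and_age {:first=>'John', :middle=>'M.', :last=>'Smith'}, 25
```

This leads me neatly into discussing blocks and exactly what they are (and aren’t).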

Blocks Are NOT Method Arguments

Despite what some people might believe, blocks are not method arguments. Blocks and arguments are two separate constructs. In fact, if you have even a basic understanding of blocks you know that blocks can have arguments of their own. Does it really make sense for arguments to have arguments?

But to get back to block basics. There are many methods in Ruby that iterate over a range of values. Most of these iterators are written in such a way as to be able to take a code block as part of their calling syntax. The method can then yield control to the code block (i.e. execute the block) as many times as is necessary for the iteration to complete (e.g. if we are iterating over an array, we can execute the block as many times as there are array values).

There are two types of block syntax: curly brace and do..end. If we want to supply a one-liner block to a method we normally use the curly brace syntax; otherwise we use the do..end syntax. For example:

```ruby
[1,2,3,4].each { p 'hello' }
```

or

```ruby
[1,2,3,4].each do
  print 'i am printing '
  puts 'hello'
end
```

The two versions of block syntax are not exactly alike, but I will cover that aspect of blocks in a later post (which I plan to devote completely to blocks).

The other side of the coin when it comes to blocks is the yield keyword. Any method that is called with a block can use the yield keyword to execute that block at any time. It’s as simple as that.
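For instance, here is a minimal sketch of yield in action (the method name is made up purely for illustration):

```ruby
# A method that executes whatever block it is called with, twice
def run_twice
  yield
  yield
end

run_twice { puts 'hello' }
# prints "hello" twice, once for each yield
```

This is another thing that I would like to explore further in a later post.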

One final thing to remember about block basics is the fact that blocks can also take parameters e.g.:

```ruby
[1,2,3,4].each do |x|
  print "i am printing #{x} "
  puts "hello"
end
```

The above will produce the following output:

i am printing 1 hello
i am printing 2 hello
i am printing 3 hello
i am printing 4 hello

Depending on what you’re iterating over, the parameters that get passed to the block can have different values (and there can even be a different number of parameters). In the above case, because we are iterating over an array, each time the block is executed the parameter contains the value of the current element of the array. As you can see, you don’t pass parameters to blocks using parentheses like you would with methods; instead you pass them in between two pipes (i.e. |x|).
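For example, iterating over a hash passes two values to the block on each execution, so the block can take two parameters (a minimal sketch in the same style as the array example above):

```ruby
# Hash#each hands the block a key and a value on every iteration,
# so the block takes two parameters between the pipes
{:first=>'John', :last=>'Smith'}.each do |key, value|
  puts "#{key} is #{value}"
end
# first is John
# last is Smith
```

There is once again much more to be said about blocks and block arguments, but since we are only covering block basics here, I’ll leave that for later.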

The main things to take away from this are as follows:

  • blocks are not method arguments but are in fact a separate construct
  • blocks are used to support iterator-type methods
  • there are two types of block syntax
  • blocks can take parameters of their own

I will explore all of those in more depth when I dig into the how and why of blocks in Ruby. I hope you found at least some of this helpful/interesting and as always if you have something to add (or if you just want to say hello :)), feel free to leave a comment below.

Image by Aislinn Ritchie

Does YAGNI Mean You Ignore The Obvious

You’ve probably heard of the YAGNI principle; it stands for “You Ain’t Gonna Need It”. Essentially it is meant as a mantra for developers, to prevent us from anticipating functionality and building things before we know that they are actually necessary. You can think of it as an Agile principle or an unspoken (or sometimes vociferously spoken :)) rule if you like. Regardless, it can sometimes be a good idea to keep this principle in mind and measure much of what you do as a developer against it, to make sure you’re not building a ‘mountain’ when a ‘mole hill’ will do just fine.

I like the idea of YAGNI and often try to apply it to what I do, but just like any principle that has been around for a while, I have seen its message (or the idea behind it) diluted to the point where some people use it without thinking and without understanding what it is all about.

Applying YAGNI Without Thinking

You know you’re applying YAGNI without thinking when it becomes the first and only yardstick for anything you do as a software developer. Implement a pattern? No way – YAGNI, a simple loop will do. Use a utility library? What for – YAGNI, it’s just one simple method, we can roll our own. How about building in more automation around our deployment process? Bah – YAGNI, we only do this once every 3 months, we can do without.

Don’t get me wrong, I am not advocating that you should always use patterns, or libraries, or automate everything without thought. What I am trying to say is that you can’t unilaterally apply YAGNI to everything without considering the larger context. If I do use a pattern here, will it make my code more readable, maintainable and testable? If the answer is yes, then perhaps the pattern is a good idea. The point is, YAGNI in and of itself is not an objective; it is just a practice, a tool. No matter what task you do, you should always be aiming to make the system easier to understand and easier to use, more testable and maintainable, cleaner and more robust. If that means writing a little bit more code and putting in a little bit more thought, then YAGNI will just have to deal with it.

The Ideas Behind YAGNI

To give it even more context, applying YAGNI to everything you do is a recipe for a dish of spaghetti code. In my opinion the two main ideas you need to consider before reaching for the YAGNI stick are:

  • granularity
  • balance

Granularity

It makes a lot more sense to me to apply YAGNI to much larger concerns. Let’s make our application scale to 1000 requests per second. Whoa, YAGNI – it is only ever gonna be used internally and there are only 100 users, not necessary. We as developers sometimes tend to let our love of playing with cool technology run away with us, and end up using a proverbial bazooka to clobber a fly. YAGNI can help us avoid this, but it is only really relevant when we are talking about things that can take significant time, money or work. I am referring to coarse-grained concerns – major features, changing project direction, radical technology changes. If you’re dealing with any of those (or similar), then by all means apply YAGNI and see if it still makes sense; otherwise there are perhaps other practices that should take precedence.

Balance

You have to balance the YAGNI mentality against reality. You often CAN anticipate which way the project is likely to go in the near future, and it might make sense to build particular things now to cater for this. You may have capacity now but will not necessarily have it later. You may have some expertise within your team at the moment that will not be available down the track. These things can weigh in against YAGNI. And when we are talking about finer-grained features (i.e. the actual code and low-level design of the system), the *abilities (maintainability, testability, usability, readability etc.) should always take precedence.

The Agile Practice Stupidity Threshold

No matter what practice you use, you should always understand the implications behind it and the context in which the practice is best applied. For example, TDD is a great practice and you can use it to evolve better designs through writing the tests first. However, would you ever use TDD to evolve attribute readers or writers? Of course not! More than that, there is a certain level of code that is just too simple to require being evolved through TDD; there is simply no need. But if you use TDD without thinking, you can find yourself doing exactly that, which would just be stupid and a bad use of TDD.

Every practice you use, including TDD and YAGNI, has a certain threshold which I call the stupidity threshold. While you’re using the practice as a tool, giving due consideration to all other concerns, you’re fine. But as soon as the use of a particular practice becomes an objective in and of itself, you have crossed the practice stupidity threshold and will eventually find yourself in trouble.

Image by hiddedevries