kief.com

Sporadically delivered thoughts on Continuous Delivery

Iterations Considered Harmful


The iteration is a cornerstone of agile development. It provides a heartbeat for the team and its stakeholders, and a structure for various routine activities that help keep development work aligned with what the customer needs. However, the way many teams run their iterations creates serious pitfalls which can keep them from delivering software as effectively as they could.

The orthodox approach to the iteration is to treat it as a timebox for delivering a batch of stories, which is the approach most Scrum teams take with sprints (the Scrum term for an iteration). In recent years many teams have scrapped this approach, either using iterations more as a checkpoint, as many ThoughtWorks teams do, or scrapping them entirely with Kanban and Lean software development.

For the purpose of this post, I will refer to these two general approaches to running iterations as the “orthodox” or “timeboxed-batch” iteration model on the one hand, and the “continuous development” model on the other hand. Although orthodox iterations have value, certainly over more old-school waterfall project management approaches, continuous development approaches which do away with timeboxing and avoid managing stories in batches allow teams to more effectively deliver higher quality software.

Orthodox iteration model (or “timeboxed-batch” model): Each iteration works on a fixed batch of stories, all of which must be started and finished within a single iteration.

Continuous development model: Stories are developed in a continuous flow, avoiding the need to stop development in order to consolidate a build containing only fully complete stories.

The anatomy of the orthodox iteration

In the classically run iteration or sprint, the Product Owner (PO) and team choose a set of stories that it commits to deliver at the end of the iteration. All of the stories in this batch should be sufficiently prepared before the iteration begins. The level and type of preparation varies between teams, but usually includes a level of analysis including the definition of acceptance criteria. This analysis should have been reviewed by the PO and developers to ensure there is a common understanding of the story. The PO should understand what they can expect to have when the story is implemented, and the technical team should have enough of an understanding of the story to estimate it and identify potential risks.

The iteration begins with an iteration kickoff meeting (IKO) where the team reviews the stories and confirms their confidence that they can deliver the stories within the iteration. The developers then choose stories to work on, discussing each story with the PO, Business Analyst (BA), and/or Quality Analyst (QA) as appropriate, then breaking it down into implementation tasks. Implementation takes place with continual reviews, or even pairing with these other non-developers, helping to keep implementation on track, and minimizing the amount of rework needed when the story is provisionally finished and goes into testing.

The QA and BA/PO then test and review each story as its implementation is completed. This is in addition to the automated testing which has been written, and is run repeatedly, following TDD and CI practices. Only once the story is signed off do the developers move on to another one of the stories in the iteration’s committed batch.

As the end of the iteration approaches, developers and QAs should be wrapping up the last stories and preparing a releasable build for the showcase, which is typically held on the final day of the iteration. In the showcase, the team demonstrates the functionality of the completed stories to the PO and other stakeholders, and the stories are signed off. The team holds a retrospective to consider how they can work better, then on the next working day they hold the IKO to start the following iteration.

When the iteration ends the team has a complete, fully tested and releasable build of the application, regardless of whether the software actually will go into production at this point.

The start and end dates of the iteration are firmly fixed. If there are stories (or defect fixes) which aren’t quite ready at the end of the iteration, the iteration end date is never slipped. Instead, the story is not counted as completed, so must be carried over to the next iteration.

The benefits of the orthodox iteration

This style of iteration offers many benefits over traditional waterfall methodologies. A short, rigid cycle for producing completely tested and releasable code forces discipline on the team, keeping the code in a near-releasable state throughout the project, and avoiding the temptation to leave work (e.g. testing) for “later”, building up unmanageable burdens of work, stress, and defects to be dealt with under the pressure of the final release phase.

The timeboxed iteration also forces the team to learn how to define stories of a manageable size. If stories are routinely too big to complete in one iteration, this is a clear sign that the team needs to improve the way it defines and prepares stories.

This demonstrates another benefit of the iteration: frequent feedback. By getting feedback quickly, the team is able to evaluate not only the quality of their code and its relevance to the business, but also how effectively they are working, and to try out ideas for improvement continually throughout the project.

Fundamental problems with the orthodox iteration

The timeboxed-batch approach to iterations has value, particularly for teams inexperienced with agile. However, it has fundamental problems. At core, this approach is waterfall written small, with many of the same flaws, albeit with a small enough cycle that issues can be dealt with more quickly than with a full waterfall project.

To understand why this is so, let’s flesh out the idealized anatomy of the iteration from above with some of the things which often happen in practice.

  • At the start of the iteration, no development is taking place, because everyone is working on preparing the new batch of stories. The BA’s are extremely busy now, because they have a full working set of stories to hand over to developers (i.e. however many stories the team can work on at once, that’s how many stories the BA’s must hand over all at the same time). The QA’s are less busy now, although they may be helping the BA’s out, and planning their testing for the iteration’s stories.
  • Actually, I lied. Development is taking place, and QA’s are extremely busy. Testing and bugfixing stories left over from the previous iteration is still going on. See points below to understand why. As a result, preparation and starting of the stories for this iteration is sluggish because of work carried over from the previous iteration. This actually isn’t necessarily bad, since it helps to stagger the story preparation work, preventing the BA’s from becoming a bottleneck. However, depending on whether carryover work was factored into the number of stories chosen for the current iteration (business pressures often make this difficult), this may mean the team is already at risk of failing to meet its commitment.
  • Once the previous iteration’s work has settled down, QA’s have little to do until the end of the iteration approaches, at which point they come under enormous pressure. Developers are scrambling to get their stories done in time, leaving QA’s with a pile of stories to be tested in time for the showcase. Any defects they find increase this pressure even more, with very little time to get the fixes in and then re-tested (maybe needing even more fixing and re-testing!)
  • If developers complete a story with a bit of time left in the iteration, they aren’t able to start new stories because the stories for the following iteration won’t be ready to work on until the IKO.
  • In the end, some stories don’t get fully tested during the iteration. They may be tested in the following iteration, after having already been signed off as “complete” by the unsuspecting Product Owner. If so, developers need to be pulled away to fix the defects found, or else the defects are added to a backlog to be fixed “later” (also known as “probably never”). Other developed code is left completely untested or under-tested, with the vague hope that any defects will be found in later testing phases, or that maybe there aren’t any important bugs anyway. In fact, these defects will be found, but at a time far less convenient to the team.
  • If any serious issues are raised by stakeholders during the showcase there is not time to fix them until the next iteration, which means it will take the full length of an iteration before a truly releasable build is created.

The root problem of the orthodox iteration

At the end of the day, the orthodox iteration suffers from two problems which are inherent in its very definition: it organizes work into batches, and it enforces a timebox.

Batching work is the antithesis of flow. The Lean approach to working aims to maximize the flow of work for the members of a team, which in software development translates to getting stories flowing easily through creation, analysis, implementation, validation, and release. When a developer finishes one story and it is signed off, she should have another story ready for her to pick up and start on. This shouldn’t need to wait on an arbitrary ceremony, and certainly shouldn’t have to wait for everyone else on the team to finish their stories and get them all signed off.

The batching focus of orthodox iterations doesn’t only cause developers to block, it also turns BA’s, QA’s, and the PO into bottlenecks. As described above, the start and end of the iteration each put a full working set of stories in the same state at once, all needing the same activity carried out on them at once.

Imagine an assembly line which starts up to assemble twenty cars, then stops while they are all inspected at once. Only once all of the cars are inspected and their defects fixed does the line start up again to begin assembling another twenty cars.

Timeboxing is also a source of problems for iterations. The main problem is the arbitrary deadline creates pressure to get stories “over the line” so they can count towards the velocity for the iteration. Unless management is enlightened (or uninterested) enough to avoid focusing on fluctuations of velocity from iteration to iteration (and even the most enlightened managers I’ve worked with do get worked up over velocity) this leads to the temptation to rush and cut corners, or to play games with stories.

Rushing obviously endangers the quality of the code, which almost certainly leads to delays down the line when the defects surface. Playing games, such as closing unfinished stories and opening defects to complete the work, or counting some points towards an unfinished story, undermines the team’s ability to measure and predict its work honestly. These bad habits will catch up one way or another.

Expecting code to be complete at the end of the iteration, fully tested, fixed, and ready for deployment, is unrealistic unless the iteration is structured with significant padding at the end. This padding must come after all reviews, including the stakeholder showcase, to allow time to make corrections, unless those reviews are mere rubber stamp sessions, with no genuine feedback permitted. This then means the team will be underutilized during the padding time. Otherwise, if so much rework is done during this period that the entire team is fully engaged, the risk of introducing new defects is too high to be confident of stable code by the end.

The alternative is to break the strict timeboxed-batched iteration model by interleaving work on the next iteration with the cleanup work from the previous iteration. This turns out to not be such a bad idea, and leads to evolving away from the timeboxed-batch iteration model towards the continuous development model.

The continuous development model

The continuous development model may be purely iteration-less, e.g. Kanban, or it may still retain the iteration as a period for measuring progress and for scheduling activities such as showcases. Once development is underway stories are prepared, developed, and tested using a “pull” approach, being worked on as team members become available, so that stories are constantly flowing, and everyone is constantly working on the highest value work available at the moment. This requires some different approaches to managing work flow than are used with other approaches. For more information, look into Kanban and Lean software development.

Since joining a year or so ago I’ve found that although no two ThoughtWorks projects run in exactly the same way, there is a strong tendency to use iterations which look a lot like Kanban, but retaining a one or two week iteration. Iterations are used to report progress (including velocity), and to schedule showcases and other regular meetings, but stories are not moved through the process in batches. Teams don’t start and stop work as a whole other than the start and end of a release. If the showcase is two days away, nothing stops a developer pair from starting on a new story knowing full well it will be incomplete when the codebase is demoed to the stakeholders, and possibly even deployed to later stage environments.

Although we do make projections and aim to have certain stories done by the next showcase, the team doesn’t promise to deliver a specific batch of stories. If it makes sense, stories can be dropped, added, or swapped as needed. This gives the business more flexibility to adapt their vision of the software as it is developed. It also reduces the pressure to mark a given story as “done” by a hard deadline, since there is no disruption from letting work carry on over the end of an iteration.

I’ve seen a Scrum team become ornery and rebellious when a PO made a habit of asking to swap stories after a sprint had started, even though work hadn’t been started on the particular stories involved. This was made worse because bugfixes were scheduled into sprints alongside stories, meaning that any serious defect found in production completely disrupted the team. Another factor that aggravated the situation was that the stories for each sprint were agreed before the end of the previous iteration. So if the showcase raised ideas for improvements to the functionality completed in iteration N, new stories could only be started in iteration N + 2 at the soonest. This hardly created a situation where the PO or the business felt the development team was responsive to business needs.

Also see Oren Teich’s post Go slow to go fast, which points out the problems with deadlines, and that iterations are simply a shorter deadline.

Challenges and rewards of continuous development

There are certainly challenges in moving to continuous development over the timeboxed-batch model. There is more risk of stories dragging on across multiple iterations. This can be mitigated by monitoring cycle time and keeping things visible, so that the team can discuss the issue and make changes to their processes if it becomes a problem.
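Cycle time monitoring can be as simple as recording when each story is started and finished, and flagging the ones that drag. A minimal sketch in Python (the story data, field names, and ten-day threshold are all hypothetical, for illustration only):

```python
from datetime import date

def cycle_times(stories):
    """Map story id to cycle time in days, for finished stories only."""
    return {
        s["id"]: (s["finished"] - s["started"]).days
        for s in stories
        if s.get("finished") is not None
    }

def flag_slow_stories(stories, threshold_days=10):
    """Ids of stories whose cycle time exceeds the threshold."""
    return sorted(
        sid for sid, days in cycle_times(stories).items()
        if days > threshold_days
    )

stories = [
    {"id": "STORY-1", "started": date(2011, 3, 1), "finished": date(2011, 3, 4)},
    {"id": "STORY-2", "started": date(2011, 3, 2), "finished": date(2011, 3, 18)},
    {"id": "STORY-3", "started": date(2011, 3, 7), "finished": None},  # still in flight
]
```

Putting this on a wall chart or dashboard keeps the issue visible, so the team can discuss outliers in retrospectives rather than discovering them after the fact.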

For teams which are new to agile and still struggle to create appropriately sized stories, the timeboxed model may be more helpful to build the discipline and experience needed before being able to move to a continuous model. However, for experienced teams, timeboxing and batching stories simply has too many negative effects.

Continuous development, with a looser approach to iterations, maximizes the productivity of the team, avoids pitfalls that put quality at risk, and offers the business and the team more flexibility.

Configuration Drift


In my previous article on the server lifecycle I mentioned ConfigurationDrift, a term that I’ve either coined or picked up somewhere I’ve forgotten, although most likely I got it from the Puppet Labs folks.

Configuration Drift is the phenomenon where running servers in an infrastructure become more and more different as time goes on, due to manual ad-hoc changes and updates, and general entropy.

A nice automated server provisioning process as I’ve advocated helps ensure machines are consistent when they are created, but during a given machine’s lifetime it will drift from the baseline, and from the other machines.

There are two main methods to combat configuration drift. One is to use automated configuration tools such as Puppet or Chef, and run them frequently and repeatedly to keep machines in line. The other is to rebuild machine instances frequently, so that they don’t have much time to drift from the baseline.
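At the heart of the first method is a convergence check: compare each machine’s actual state against the desired baseline and correct, or at least report, the differences. Tools like Puppet and Chef do this far more thoroughly, but the idea can be sketched as follows (the setting names and machine states here are made up for illustration):

```python
def drifted_settings(baseline, machine_state):
    """Return the settings where a machine differs from the baseline."""
    drift = {}
    for key, expected in baseline.items():
        actual = machine_state.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

# Hypothetical desired state, shared by all machines of this class.
baseline = {"ntp_server": "ntp1.example.com", "sshd_root_login": "no", "apache_workers": 25}

# Hypothetical actual state, as reported by each running machine.
machines = {
    "web1": {"ntp_server": "ntp1.example.com", "sshd_root_login": "no", "apache_workers": 25},
    "web2": {"ntp_server": "ntp1.example.com", "sshd_root_login": "yes", "apache_workers": 50},
}

drift_report = {name: drifted_settings(baseline, state) for name, state in machines.items()}
```

The point of running the check frequently and repeatedly is that drift is caught while the differences are still small and easy to reason about.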

The challenge with automated configuration tools is that they only manage a subset of a machine’s state. Writing and maintaining manifests/recipes/scripts is time consuming, so most teams tend to focus their efforts on automating the most important areas of the system, leaving fairly large gaps.

There are diminishing returns for trying to close these gaps, where you end up spending inordinate amounts of effort to nail down parts of the system that don’t change very often, and don’t matter very much day to day.

On the other hand, if you rebuild machines frequently enough, you don’t need to worry about running configuration updates after provisioning happens. However, this may increase the burden of fairly trivial changes, such as tweaking a web server configuration.

In practice, most infrastructures are probably best off using a combination of these methods. Use automated configuration, continuously updated, for the areas of machine configuration where it gives the most benefit, and also ensure that machines are rebuilt frequently.

The frequency of rebuilds will vary depending on the nature of the services provided and the infrastructure implementation, and may even vary for different types of machines. For example, machines that provide network services such as DNS may be rebuilt weekly, while those which handle batch processing tasks may be rebuilt on demand.
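A per-role rebuild policy like this can be captured as data and checked mechanically. A rough sketch (the roles, intervals, and machine records are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical policy: how often each role's machines should be rebuilt.
REBUILD_INTERVALS = {
    "dns": timedelta(days=7),    # network services rebuilt weekly
    "web": timedelta(days=14),
    "batch": None,               # rebuilt on demand, never on a schedule
}

def due_for_rebuild(machines, today):
    """Names of machines whose last rebuild is older than their role's interval."""
    due = []
    for m in machines:
        interval = REBUILD_INTERVALS.get(m["role"])
        if interval is not None and today - m["last_rebuild"] >= interval:
            due.append(m["name"])
    return due

machines = [
    {"name": "dns1", "role": "dns", "last_rebuild": date(2011, 3, 1)},
    {"name": "web1", "role": "web", "last_rebuild": date(2011, 3, 8)},
    {"name": "batch1", "role": "batch", "last_rebuild": date(2010, 1, 1)},
]
```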

Automated Server Management Lifecycle


One of the cornerstones of a well-automated infrastructure is a system for provisioning individual servers. A system that lets us reliably, quickly, and repeatably create new server instances that are consistent across our infrastructure means we spend less time fiddling with individual servers. Instead, servers become disposable components that are easily swapped, replaced, and expanded as we focus our attention on the bigger picture of the services we’re providing.

The first step in achieving this is making sure server instances are built using an automated process. This ensures every server is built the same way, that improvements can be easily folded into the server build process, and that it is a simple matter to spin up new instances and to scrap and replace broken ones. Automating this process also means your team of highly skilled, well-paid professionals don’t need to spend large amounts of their time on the brainless rote-work of menu-clicking through OS installation work.

I first used automated installation by PXE-booting physical rack servers in 2002, following the advice I found on the then-current infrastructures.org site, and in later years applied the same concepts with virtualized servers and then IaaS cloud instances.

The machine lifecycle

I think of this as the machine lifecycle (which I tend to call the ‘server lifecycle’ because that’s what I normally work with, although it’s just as applicable to desktops). This involves a number of activities required to set up and manage a single machine instance, such as partitioning storage, installing software, and applying configuration.

Basic Server Lifecycle phases

These activities are applied during one or more phases of the machine lifecycle. There are three phases: “Package Image”, “Provision Instance”, and “Update Instance”. There are a number of different strategies for deciding which activities to do in each phase.

The various activities may be applied during one or more phases, depending on the strategy used to manage the machine’s lifecycle. Some strategies carry out more activities during the packaging phase, for instance, while other approaches might have a simpler packaging phase but do more in the provisioning and/or updating phase.

Machine lifecycle phases

Image packaging phase

In the image packaging phase, some or all elements of a machine instance are pre-packaged into a machine image in a way that can be reused to create multiple running instances.

This could be as simple as using a bare OS installation CD or ISO from the vendor. Alternately, it could be a snapshot of a fully installed, fully configured runnable system, such as a Symantec Ghost image, VMWare template, or EC2 AMI. Either way, these images are maintained in a Machine Image Library for use in the instance provisioning phase.

With the ManualInstanceBuilding pattern, everything happens during provisioning

Different machine lifecycle strategies use different approaches to image packaging. ManualInstanceBuilding and ScriptedInstanceBuilding both tend to use stock OS images, which involves less up-front work and maintenance of the Machine Image Library, since the images are taken straight from the vendor. However, work is still needed to create, test, and maintain the checklists or scripts used to configure instances when provisioning.

On the other hand, CloningExistingMachineInstances and TemplatedMachineInstances both create pre-configured server images, which need only minor changes (e.g. hostnames and IP addresses) to provision new instances. This is appealing because less work is done to provision a new instance, but the drawback is that creating and updating images takes more work. Admins tend to make updates and fixes to running instances which may not make it into the templates, which contributes to ConfigurationDrift, especially if changes are made ad-hoc.

What happens in each phase with the TemplatedMachineInstances pattern

CloningExistingMachineInstances, which usually takes the shape of copying an existing server to create new ones as needed, tends to make ConfigurationDrift worse, as new servers inherit the runtime cruft and debris (log files, temporary files, etc.) of their parents, and it is difficult to bring various servers into line with a single, consistent configuration. TemplatedMachineInstances are a better way to keep an infrastructure consistent and easily managed.

The tradeoffs between scripted installs and packaged images depend partly on the tools used for scripting and / or packaging, which in turn often depends on the hosting platform. Amazon AWS requires the use of templates (AMIs), for example. In either case, exploiting automation more fully in the provisioning phase favours the case for keeping the packaging phase as lightweight as possible.

Instance Provisioning Phase

In the provisioning phase, a machine instance is created from an image and prepared for operational use.

Examples of activities in this phase include instantiating a VM or cloud instance, preparing storage (partitioning disks, etc.), installing the OS, installing relevant software packages and system updates, and configuring the system and applications for use.

There are two main strategies for deciding which activities belong in the packaging versus the provisioning phases. One is RoleSpecificTemplates, and the other is GenericTemplate.

With RoleSpecificTemplates, the machine image library includes images that have been pre-packaged for specific roles, such as web server, application server, mail server, etc. These have the necessary software and configuration created in the packaging phase, so that provisioning is a simple matter of booting a new instance and tweaking a few configuration options. There are two drawbacks of this approach. Firstly, you will have more images to maintain, which creates more work. When the OS used for multiple roles is updated, for example, the images for all of those roles must be repackaged. Secondly, this pattern gives you less flexibility, since you can’t easily provision an instance that combines multiple roles, unless you create - and then maintain - images for every combination of roles that you might need.

What happens in each phase with the GenericTemplate pattern

With the GenericTemplate pattern, each image is kept generic, including only the software and configuration that is common to all roles. The role for each machine instance is assigned during the provisioning phase, and software and configuration are applied accordingly then. The goal is to minimise the number of images in the machine image library, to reduce the work needed to maintain them. Typically, a separate template is needed for each hardware and OS combination that can’t be supported from a single OS install. The JeOS (Just Enough Operating System) concept takes this to the extreme, making the base template as small as possible.
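The flexibility of the GenericTemplate pattern comes from composing role definitions at provision time. A toy sketch of the idea (the package names and roles are illustrative, and a real implementation would delegate the installation to a tool like Puppet or Chef):

```python
# Hypothetical base template: only what every role needs.
BASE_TEMPLATE = {"openssh-server", "ntp", "monitoring-agent"}

# Hypothetical role definitions, applied during provisioning rather than packaging.
ROLE_PACKAGES = {
    "web": {"apache2", "mod_ssl"},
    "app": {"tomcat6"},
    "mail": {"postfix"},
}

def provision(hostname, roles):
    """Compute the full package set installed when provisioning from a generic template."""
    packages = set(BASE_TEMPLATE)
    for role in roles:
        packages |= ROLE_PACKAGES[role]
    return {"hostname": hostname, "packages": packages}

# Combining roles on one instance is trivial, which RoleSpecificTemplates
# can't do without maintaining an image for every combination.
combined = provision("webapp1", ["web", "app"])
```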

The GenericTemplate pattern does require more robust automated configuration during provisioning, and may mean provisioning an instance takes longer than using more fully-built images, since more packages will need to be installed during provisioning.

Instance Updating Phase

Once a machine instance is running and in use, it is continuously updated. This includes activities such as applying system and software updates, new configuration settings, user accounts, etc.

Many teams carry out these updates manually, but it requires a high level of discipline and organization to maintain systems this way, especially as the number of systems grows. The number of machines a team can manage depends closely on the size of the team, so the ratio of servers to sysadmins is low. In practice, teams using manual updates tend to be reactive, updating machines opportunistically when carrying out other tasks, or in order to fix problems that crop up. This leads to ConfigurationDrift, with machines becoming increasingly inconsistent with one another, creating various problems including unreliable operation (software that works on one machine but not another), and extra work to troubleshoot and maintain.

Breaking Into Automated Infrastructure Management


Automated management of infrastructure is vital for delivering highly effective IT services. But although there are plenty of tools available to help implement automation, it’s still common to see operations teams manually installing and managing their servers, leading to a high-maintenance infrastructure that soaks up the team’s time on firefighting and other reactive tasks.

Doing it by hand

I’ve met many smart and skilled systems administrators in this situation. These folks know automation can make their life easier, but they can’t afford to take time away from turning cranks, greasing wheels, and unjamming the gears to keep their infrastructure puffing along in order to focus on improving their situation.

I’m convinced this is largely due to habit. Even though these teams understand that automation would be useful to them, when the pressure is on (and the pressure is always on), they roll up their sleeves, ssh into the servers and knock them into shape, because that’s the fastest way to get stuff done. Manual infrastructure management is what they’re used to. I find that most of these teams haven’t had personal experience of well-automated infrastructures, and don’t tend to believe it’s something they can realistically implement for their own operations.

Sysadmins who have worked in teams with mature, comprehensive automation, on the other hand, can’t go back. Sure, they might log into a box to diagnose and fix something that needs fixing right now, but they can’t relax until they’ve baked the fix into their automated configuration, and made sure that their monitoring will alert them ahead of time if the problem happens again.

Breaking out of manual infrastructure management and setting up an effective automation regime is difficult. Although there are loads of tools out there to make it work, it helps to understand good strategies for implementing them. I recommend looking over the material on the infrastructures.org site. It hasn’t been updated in a few years, so it doesn’t take into account many of the advances since then, including virtualization, cloud, and newer tools like Chef and Puppet, but there is still rich material there.

Another must-read, which is more up to date, is Web Operations by John Allspaw, Jesse Robbins, and a bunch of other smart peeps.

I’m also planning to share a few of the practices I’ve seen and used for automation in upcoming posts.

Successful Software Delivery in Spite of Evil IT


In my previous post, I glibly said that SLAs represent waste that an organization has identified and formalized. Reader Kenfin commented on my post, rightly calling me out to provide alternatives.

If you believe that SLA’s ‘formalise waste’ this way how would you approach my situation where communications are beyond poor (atrocious) and the org structure is silo’d and no one is accountable for their work?

Kenfin’s example illustrates my point quite well - the organization’s structure is an obstacle to effective delivery. Since he’s not in a position to fix this problem, he’s turned to SLA’s as a way to manage the problem. They won’t make the issues go away, but they may give him a handle to manage them, and importantly, make them more predictable.

Turtle on a keyboard, like slow IT people. It's a metaphor.

But it’s a fair question, what can someone in Kenfin’s shoes do in the face of an IT organization which is inherently not aligned to effectively providing the services he needs to deliver software to his users effectively?

A common strategy, and one that I’ve helped teams inside these kinds of organizations do, is to completely bypass the existing IT organization. The goal is to put control of everything that the product team needs in order to deliver into its hands, rather than leaving it at the mercy of a group (or multiple groups) who have other priorities.

Outsource it!

One way to do this is outsourcing, finding another company that specializes in the functions that the IT group would provide, whether this means development, integration, hosting, or something else. This works best if the project is not seen as core to the business, so that it avoids fear of entrusting sensitive data or business critical functions to outsiders. It also helps if the project needs skills that can’t be found in-house.

My friends at Cognifide have built their business on this, building technically complex content-focused websites for corporate clients, delivering far more quickly, and with greater expertise, than most corporate IT organizations can manage. This is also the premise that Software as a Service (SaaS) is based on. By choosing SalesForce for CRM a company completely bypasses the massive IT project that would be required to implement an off the shelf, self-hosted CRM package (integration with other applications aside).

There are pitfalls to outsourcing to bypass IT. Many outsourcers are no more responsive than an in-house IT department, using SLAs and change control processes to make their workload, risks, and profitability more manageable.

Do it yourself!

The strategy I’ve most often been involved with myself (although I didn’t really think of it this way at the time) is product departments building their own IT capabilities. Again, this is about having control of the services and resources the group needs in order to deliver to its own customers.

The typical pattern is an “online” (or often, “digital”) department of a company where online was originally on the fringes of the main business, but has in recent years grown into a major channel for sales, customer service, or even delivery of products (for example in publishing).

The online team leverages their growing importance, as well as the specialized needs they have compared with typical corporate IT customers, to get approval from top management to create their own “digital operations team” or similar. This team may outsource elements of infrastructure, such as hosting (with IaaS cloud providers as an increasingly appealing option), but they are able to respond immediately to the needs of the online product group, because a) they don’t have to juggle requests from other departments and teams, and b) they report directly to the manager of that group.

But what if I have to use the crappy IT guys who don’t care about my project?

Those strategies are not feasible for every team. I’ve certainly had to support projects where we had no alternative but to struggle along with unresponsive IT. In these cases, SLAs may well have to do, even though they represent waste and inefficiency.

There are a few other things you might at least try in these cases. Your goal is still to have the resources you need to get things done at your disposal, as far as possible. So identify the services which are especially critical, particularly those which are likely to change frequently, and see if you can get some dedicated resource assigned to your project. You want someone who will sit with your team, be incentivized by the success of your project, and who has the skills, authority, and system privileges to carry out the tasks you need.

If full time secondment of people to your team is not quite feasible due to budget, lack of available resource, etc., see if you can at least get commitments of time from the right people. Can someone come to daily standups? Weekly meetings? Regular release management meetings? Ask for as much as possible to start with, then see what you can get.

Also, maybe you can hire someone into your own team with qualifications and background that will help them effectively liaise with difficult IT teams. Your own DBA, security consultant, etc. can engage with the IT groups using their own language, couching things in terms that address their concerns. They may be able to take certain tasks off the IT group’s plates, which ends up giving you the ability to get things done more quickly, while at the same time making IT grateful that their workload is lighter.

But the right thing to do is …

These are all ways to work around the core problems. The best solution is of course for the organization to restructure itself in a way that aligns its resources with its goals. Most companies, especially large ones, insist on organizing themselves in ways that are self-defeating. It’s a shame that many people who work in large companies accept this as normal, often even as desirable.

Grouping everyone with a given function into a single group forces them to focus on juggling the competing needs of many stakeholders and managing their own risks (especially the risk of getting blamed when projects fail). They will inevitably favor the abstract principles of their own technical practices over what is most effective in making the business succeed. Much better to group people into units that have complete ownership of delivering business value, and find ways to connect staff of a given function with each other so they can develop their skills and working practices.

Unfortunately most of us are rarely in a position to influence this, so I hope that my suggestions will be helpful to some people in making things a little less painful.

Running Multiple Tomcat Instances on One Server

| Comments

Here’s a brief step by step guide to running more than one instance of Tomcat on a single machine.

Step 1: Install the Tomcat files

Download Tomcat 4.1 or 5.5, and unzip it into an appropriate directory. I usually put it in /usr/local, so it ends up in a directory called /usr/local/apache-tomcat-5.5.17 (5.5.17 being the current version as of this writing), and make a symlink named /usr/local/tomcat to that directory. When later versions come out, I can unzip them and relink, leaving the older version in case things don’t work out (which rarely if ever happens, but I’m paranoid).
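As a runnable sketch of this unpack-and-symlink routine: the paths mirror the article, but it runs under /tmp so it works without root, mkdir stands in for actually downloading and unzipping a release, and the 5.5.20 “later version” is hypothetical.

```shell
#!/bin/sh
# Unpack a Tomcat release and point a stable symlink at it.
PREFIX=/tmp/usr-local
mkdir -p "$PREFIX/apache-tomcat-5.5.17"   # stand-in for: unzip apache-tomcat-5.5.17.zip
ln -sfn "$PREFIX/apache-tomcat-5.5.17" "$PREFIX/tomcat"

# Upgrading to a (hypothetical) later release is just re-pointing the symlink;
# the old directory stays behind in case you need to roll back.
mkdir -p "$PREFIX/apache-tomcat-5.5.20"
ln -sfn "$PREFIX/apache-tomcat-5.5.20" "$PREFIX/tomcat"
readlink "$PREFIX/tomcat"
```

Because the symlink is the only thing that changes, rolling back an upgrade is a single `ln -sfn` back to the old directory.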

Step 2: Make directories for each instance

For each instance of Tomcat you’re going to run, you’ll need a directory that will be CATALINA_BASE. For example, you might make them /var/tomcat/serverA and /var/tomcat/serverB.

In each of these directories you need the following subdirectories: conf, logs, temp, webapps, and work.
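A sketch of that directory layout for the two example instances (the article puts these under /var/tomcat; this sketch uses /tmp so it runs without root):

```shell
#!/bin/sh
# Create the CATALINA_BASE tree for each Tomcat instance.
TOMCAT_DIR=/tmp/var-tomcat
for instance in serverA serverB; do
  for subdir in conf logs temp webapps work; do
    mkdir -p "$TOMCAT_DIR/$instance/$subdir"
  done
done
ls "$TOMCAT_DIR/serverA"
```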

Put a server.xml and web.xml file in the conf directory. You can get these from the conf directory of the directory where you put the tomcat installation files, although of course you should tighten up your server.xml a bit.

The webapps directory is where you’ll put the web applications you want to run on the particular instance of Tomcat.

I like to have the Tomcat manager webapp installed on each instance, so I can play with the webapps, and see how many active sessions there are. See my instructions for configuring the Tomcat manager webapp.

Step 3: Configure the ports and/or addresses for each instance

Tomcat listens to at least two network ports, one for the shutdown command, and one or more for accepting requests. Two instances of Tomcat can’t listen to the same port number on the same IP address, so you will need to edit your server.xml files to change the ports they listen to.

The first port to look at is the shutdown port. This is used by the command line shutdown script (actually, by the Java code it runs) to tell the Tomcat instance to shut itself down. This port is defined at the top of the server.xml file for the instance.

<Server port="8001" shutdown="_SHUTDOWN_COMMAND_" debug="0">

Make sure each instance uses a different port value. The port value will normally need to be higher than 1024, and shouldn’t conflict with any other network service running on the same system. The shutdown string is the value that is sent to shut the server down. Note that Tomcat won’t accept shutdown commands that come from other machines.

Unlike the other ports Tomcat listens to, the shutdown port can’t be configured to listen on a different IP address. It always listens on 127.0.0.1.

The other ports Tomcat listens to are configured with the <Connector> elements, for instance the HTTP or JK listeners. The port attribute configures which port to listen to. Setting this to a different value on the different Tomcat instances on a machine will avoid conflict.

Of course, you’ll need to configure whatever connects to that Connector to use the different port. If a web server is used as the front end using mod_jk, mod_proxy, or the like, then this is simple enough - change your web server’s configuration.

In some cases you may not want to do this; for instance, you may not want to use a port other than 8080 for HTTP connectors. If you want all of your Tomcat instances to use the same port number, you’ll need to use different IP addresses. The server system must be configured with multiple IP addresses, and the address attribute of the <Connector> element for each Tomcat instance will be set to the appropriate IP address.
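As a sketch (the IP addresses here are illustrative, not from the article), the two server.xml files might then both use port 8080 but bind to separate addresses:

```xml
<!-- serverA/conf/server.xml: HTTP connector bound to one address -->
<Connector port="8080" address="192.168.1.10" />

<!-- serverB/conf/server.xml: same port, a different address -->
<Connector port="8080" address="192.168.1.11" />
```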

Step 4: Startup

Startup scripts are a whole other topic, but here’s the brief rundown. The main difference from running a single Tomcat instance is that you need to set CATALINA_BASE to the directory you set up for the particular instance you want to start (or stop). Here’s a typical startup routine:

JAVA_HOME=/usr/java
JAVA_OPTS="-Xmx800m -Xms800m"
CATALINA_HOME=/usr/local/tomcat
CATALINA_BASE=/var/tomcat/serverA
export JAVA_HOME JAVA_OPTS CATALINA_HOME CATALINA_BASE

$CATALINA_HOME/bin/catalina.sh start

Definition of an SLA

| Comments

SLA: Waste that an organization has identified in a critical business process and decided to formalize rather than eliminate.

Monitor Your Development Infrastructure as if It Were Business Critical

| Comments

A development team’s infrastructure - development and QA environments, CI servers, SCM servers, etc. - is indisputably business critical, but it is rarely given the kind of monitoring attention that production environments receive. This is a missed opportunity, not only to ensure the continuity of development work, but also to gain valuable insight.

Reasons to monitor your application in every environment it’s deployed to:

1 Keep your development team moving

This is the obvious one. You need to know before you run out of disk space, RAM, etc.

2 Optimize your CI / deployment pipeline

Do you know what the limiting factors are on the time it takes your automated tests to run? The shorter you make your dev/test/fix feedback loop, the more productive your team will be, so why not analyze and optimize it as you would any other key software system? If checkout from SCM takes 20% of the time to test results, what can you do to reduce it? Are your unit tests constrained by CPU, RAM, or disk I/O?
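A minimal sketch of how you might start answering those questions: time each stage of the pipeline to find where the feedback loop is slowest. The sleep commands are placeholders (assumptions); substitute your real checkout, compile, and test commands.

```shell
#!/bin/sh
# Time each pipeline stage and report its duration.
stage() {
  name=$1; shift
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  end=$(date +%s)
  echo "$name took $((end - start))s"
}

stage checkout   sleep 1
stage unit-tests sleep 1
```

Even crude wall-clock numbers like these, collected on every CI run, show you which stage to attack first.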

3 Understand your applications

We’re conditioned to think that measuring performance and resource consumption is only useful in an environment that mirrors our production hardware. But if we build up an awareness of how our application uses memory and other resources as a part of every execution and every environment, we’ll have a deep and intuitive understanding of what makes it tick.

4 Develop and test your monitoring

By having monitoring running against applications while they are still in development, you will find ways to improve how you monitor (“let’s measure the activity on our queues”), catch holes in your monitoring (“why didn’t our monitoring tell us when the dev server queue broker went down?”), and test changes to your monitoring.

Once you put monitoring in place in development and testing and make a habit of using it, it becomes a ubiquitous and indispensable part of your team’s working processes, similar to the shift to using CI well.

Don’t leave it as a low priority task, something to get around to at some point after you get around to setting up a perfect performance testing environment. Put it at the center of your team’s toolset for understanding your work.

The Build Monkey Antipattern

| Comments

A common pattern in software development teams is to have a person who owns the build system. This may be a deliberate decision, or it may evolve organically as a particular team member gravitates towards dealing with the build scripts, automated testing and deployment, etc. While it’s normal for some team members to have a deeper understanding of these things than others, it’s not a good idea for the knowledge and responsibility for the build to become overly concentrated in one person.

I prefer the term build gorilla myself. The build system should be looked at as a module or component of the software application or platform being developed, so the same philosophy taken towards code ownership applies.

If a single person owns the build system, everyone else becomes dependent on them to fix issues with it, and to extend it to meet new needs. There is also a risk, especially for projects which are big enough that maintaining the build system becomes a full time job, that a bit of a siloed mentality can develop.

If developers have a poor understanding of how their software is built and deployed, their software is likely to be difficult and costly to deploy. On the flip side, if build and test tools are implemented and maintained entirely by people who don’t develop or test the software, it isn’t likely to make the life of those who do as easy as it could be.

In the past few months I’ve taken on a role which is largely focused on this area, and have been helping a development team get their build and delivery system in place. Pairing with developers to implement aspects of the system has worked well, as has letting them take on the setup of particular areas of the build and test tooling. This follows what Martin Fowler calls “Weak Code Ownership”, allowing everyone to take part in working on the build and test system.

Special attention is needed for stages of the path to production as they get further from the developer’s workstation. Developers are keen to optimize their local build and deployment, but can often be fuzzy on what happens when things are deployed in server environments. This is exacerbated when the platforms are different (e.g. developers working on Windows, code deployed on Linux).

Even without platform differences, developers understandably focus on the needs of their own local build over those of production system deployment. This is natural when server deployment is not a part of their daily world. So the best way to compensate for this is to keep developers involved in implementing and maintaining server deployment.

Driving the implementation of the build and deployment system according to the needs of business stories has also been useful. So rather than setting up tooling to test parts of the system that haven’t been developed yet, wait until the design of the code to be tested starts to be understood, and the code itself has actually started being developed. This helps ensure the tooling closely fits the testing and deployment needs, and avoids waste and re-work.

Maven: Great Idea, Poor Implementation (Part 3)

| Comments

In the first post in this series, I explained why I think Maven is a good idea. Most projects need pretty much the same thing from a build system, but using Ant normally results in a complex, non-standard build system which becomes a headache to maintain.

In theory, Maven should be a better way to run a build. By offering a standardised build out of the box, it would massively reduce the setup and learning curve for new joiners to the development team, let you take advantage of a healthy ecosystem of plugins that can simply be dropped into your build, and save loads of setup and maintenance hassle.


Although it goes pretty far towards delivering the first two of these advantages, in my second post I described how Maven’s configuration is too complex even for simple things.

I note that the Maven site doesn’t currently mention “convention over configuration”, although I’m sure it used to in the past, and there are plenty of references to it around the web. The Wikipedia entry for convention over configuration lists Maven as a poster-child, and Sonatype, the commercial company supporting Maven, names a chapter of their reference book after the concept.

But it’s a load of bollocks.

Anyway.

My final point (for this series, anyway) is on flexibility. The normal tradeoff for configuration complexity is flexibility. This is certainly the case with Ant; the complexity which makes every Ant-based build system a painfully unique snowflake buys you the capability to do damn near anything you want with it.

But Maven’s complexity does not buy us flexibility.

My team wants to divide up its testing into multiple phases, following the “Agile testing pyramid” concept as mentioned by Mike Cohn.

So we’d like to have four layers to our pyramid: unit tests running first; database integration tests running next; web service tests third; and only if all of these pass do we run web UI tests. These test groups run in order of increasing heaviness, so we get feedback on the simple stuff quickly.

Maven supports two levels of testing, unit tests and integration tests. The failsafe plugin which provides integration testing support seems fairly new, and is actually pretty good if you only need one phase of integration testing. It lets you configure setup and teardown activities, so you can fire up an app server before running tests, and make sure it gets shut down afterwards.

If we could get failsafe to run three times during the build, each time carrying out different setup and teardown activities, and running different groups of tests, my team would be fairly happy with Maven.

It is possible to use build profiles to set up different integration tests in this way, but to get them to run, you need to run the build three times, and each time the preceding steps will be re-run - compiling, unit tests, packaging, etc. So it’s kind of nasty, brutish, and too long.
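As a sketch of what that profile approach looks like (the profile id and include pattern are illustrative, not from the post), each profile binds failsafe to a different group of tests:

```xml
<profiles>
  <profile>
    <id>db-integration</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <configuration>
            <includes>
              <include>**/*DbIT.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- similar profiles for the web service and web UI test layers -->
</profiles>
```

Each layer then needs its own full build invocation (e.g. `mvn verify -Pdb-integration`), which is exactly where the repeated compile, unit test, and packaging overhead comes from.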

The right way to achieve what we’re after is probably to customise the build lifecycle, or create a new build lifecycle. Either way, it involves creating custom plugins, or extensions, or both. I’ve taken a stab at working out how, but after burning a couple of evenings without getting anywhere, I’ve shelved it.

I have no doubt it can be done, but it’s just easier to do it in Ant and move on to other tasks. And that’s pretty much the bottom line for me. I still like the idea of Maven, I have hopes it will continue to improve (it’s a thousand times more usable than it was a few years ago), and maybe even go through a shift to embrace convention over configuration for real.

In the meantime, I’m likely to reach for Maven when I need something quickly for a project that seems likely to fit its paradigm, but for heavier projects (most of the ones I’m involved with for client projects), Ant is the pragmatic choice.