kief.com

Sporadically delivered thoughts on Continuous Delivery

Pat Kua’s New Book on Agile Retrospectives

ThoughtWorkers write loads of books, and I’m too lazy to make a habit out of reading, reviewing, and plugging them all. So given that I’ve gotten off my ass (erm, well, not literally of course) to tout Pat Kua’s new book, The Retrospective Handbook, you can be assured it’s not a rote act of loyalty to my colleagues.

As Pat says, if you were to pick only one agile practice to adopt, retrospectives are the one. They’re the engine a team uses to identify and address ways to improve its performance, so regular retrospectives become the forum for working out which other practices would be helpful, how to adjust the way they’re being used, and which ones are getting in the way or are just unnecessary.

If you’ve tried retrospectives but not gotten as much out of them as the bold claim above suggests, Pat’s book could be for you. Everything in it is refreshingly practical and actionable for such a potentially hand-wavy, touchy-feely subject. It ranges from high-level topics and techniques, through dealing with common problems such as a lack of action afterwards, to nuts-and-bolts details about the materials to use.

If you want a more detailed review of the book, check out our other colleague Mark Needham’s review. Then get the book itself!

And, yeah, check out the stuff our other colleagues have written as well. I may be too lazy to write them all up, but they’re quality stuff.

Organizing for Continuous Delivery - the Reading List

I presented a webinar, Organizing for Continuous Delivery, earlier this week, which was a lot of fun. The recording of me droning over the slides is available at that link. I mentioned a number of books that influenced my thinking for the presentation, so I’d like to share the list here, along with some additional ones I’d recommend for people interested in this stuff. (Disclaimer: these are Amazon affiliate links.)

  • Beyond Performance: How Great Organizations Build Ultimate Competitive Advantage by Scott Keller and Colin Price
  • Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and Dave Farley
  • Freedom from Command and Control: Rethinking Management for Lean Service by John Seddon
  • Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management by Jeffrey Pfeffer and Robert I. Sutton
  • Implementing Lean Software Development: From Concept to Cash by Mary and Tom Poppendieck
  • The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries
  • The Modern Firm: Organizational Design for Performance and Growth by John Roberts

Presenting a Webinar About Organizational Structures and Continuous Delivery

Today I’m presenting the 11th installment of ThoughtWorks’ Continuous Delivery webinar series. My talk is titled “Organizing for Continuous Delivery”, and it’s about the people and organizational aspects of CD. In short, it’s intended to help answer the question, “How should we structure our people into teams to make Continuous Delivery work?”

You can either sign up (if it’s before the day), or view the recorded webinar (if you’re reading this from the future), on the ThoughtWorks website.

The Conflict Between Continuous Delivery and Traditional Agile

In working with development teams at organizations adopting Continuous Delivery, I have found there can be friction over practices that many developers have come to consider the right way for Agile teams to work. I believe the root of the conflict between what I’ve come to think of as traditional agile and CD is the approach to making software “ready for release”.

Evolution of software delivery

A usefully simplistic view of the evolution of ideas about making software ready for release is this:

  • Waterfall believes a team should only start making its software ready for release when all of the functionality for the release has been developed (i.e. when it is “feature complete”).
  • Agile introduces the idea that the team should get their software ready for release throughout development. Many variations of agile (which I refer to as “traditional agile” in this post) believe this should be done at periodic intervals.
  • Continuous Delivery is another subset of agile, in which the team keeps its software ready for release at all times during development. It is different from “traditional” agile in that it does not involve stopping and making a special effort to create a releasable build.

Continuous Delivery is not about shorter cycles

Going from traditional Agile development to Continuous Delivery is not about adopting a shorter cycle for making the software ready for release. Making releasable builds every night is still not Continuous Delivery. CD is about moving away from making the software ready as a separate activity, and instead developing in a way that means the software is always ready for release.

Ready for release does not mean actually releasing

A common misunderstanding is that Continuous Delivery means releasing into production very frequently. This confusion is made worse by holding up organizations that release software multiple times every day as poster children for CD. Continuous Delivery doesn’t require frequent releases; it only requires ensuring the software could be released with very little effort at any point during development. (See Jez Humble’s article on Continuous Delivery vs. Continuous Deployment.) Although developing this capability opens opportunities which may encourage the organization to release more often, many teams find more than enough benefit in CD practices to justify using them even when releases are fairly infrequent.

Friction points between Continuous Delivery and traditional Agile

As I mentioned, there are sometimes conflicts between Continuous Delivery and practices that development teams take for granted as being “proper” Agile.

Friction point: software with unfinished work can still be releasable

One of these points of friction is the requirement that the codebase not include incomplete stories or bugfixes at the end of the iteration. I explored this in my previous post on iterations. This requirement comes from the idea that the end of the iteration is the point where the team stops and does the extra work needed to prepare the software for release. But when a team adopts Continuous Delivery, there is no additional work needed to make the software releasable.

More to the point, the CD team ensures that their code could be released to production even when they have work in progress, using techniques such as feature toggles. This in turn means that the team can meet the requirement that they be ready for release at the end of the iteration even with unfinished stories.

This can be a bit difficult for people to swallow. The team can certainly still require all work to be complete at the iteration boundary, but this starts to feel like an arbitrary constraint that breaks the team’s flow. Continuous Delivery doesn’t require non-timeboxed iterations, but the two practices are complementary.
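
To make the toggle idea mentioned above concrete, here’s a minimal sketch of a config-file feature toggle. The file name, property names, and environments are all invented for illustration, and real toggle mechanisms usually live in application code or frameworks; the point is just that unfinished work ships in the build but stays switched off:

#!/bin/sh
# Hypothetical sketch: generate a feature toggle file at deploy time.
# Assumes the application reads feature_toggles.properties and hides
# any feature whose toggle is "off".
ENVIRONMENT="$1"    # e.g. "staging" or "production"

NEW_CHECKOUT="on"
if [ "$ENVIRONMENT" = "production" ]; then
  # The half-built checkout flow ships in the build, but stays dark in production.
  NEW_CHECKOUT="off"
fi

cat > /etc/myapp/feature_toggles.properties <<EOF
# Finished features are switched on everywhere.
improved_search=on
# Unfinished work in progress stays hidden outside development environments.
new_checkout_flow=$NEW_CHECKOUT
EOF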

Friction point: snapshot/release builds

Many development teams divide software builds into two types, “snapshot” builds and “release” builds. This is not specific to Agile, but has become strongly embedded in the Java world due to the rise of Maven, which puts the snapshot/release concept at the core of its design. This approach divides the development cycle into two phases, with snapshots being used while software is in development, and a release build being created only when the software is deemed ready for release.

This division of the release cycle clearly conflicts with the Continuous Delivery philosophy that software should always be ready for release. The way CD is typically implemented involves only creating a build once, and then promoting it through multiple stages of a pipeline for testing and validation activities, which doesn’t work if software is built in two different ways as with Maven.

It’s entirely possible to use Maven with Continuous Delivery, for example by creating a release build for every build in the pipeline. However, this leads to friction with Maven tools and infrastructure that assume release builds are infrequent and intended for production deployment. For example, artefact repositories such as Nexus and Artifactory have housekeeping features to delete old snapshot builds, but don’t allow release builds to be deleted. So an active CD team, which may produce dozens of builds a day, can easily chew through gigabytes or even terabytes of disk space on the repository.
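
As a sketch of the release-build-per-pipeline-run approach: stamp every build with a unique, non-snapshot version, so the same artefact can be promoted through the pipeline stages. This assumes the versions-maven-plugin and a CI server that exports a BUILD_NUMBER variable; the version scheme is illustrative:

#!/bin/sh
# Give this pipeline build a unique release-style version.
VERSION="1.2.${BUILD_NUMBER}"
mvn versions:set -DnewVersion="${VERSION}"

# Build and publish the artefact once; later pipeline stages promote
# this same artefact rather than rebuilding it.
mvn clean deploy

The housekeeping problem described above still applies, of course - a repository used this way needs some way to prune old “release” builds.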

Friction point: heavier focus on testing deployability

A standard practice with Continuous Delivery is automatically deploying every build that passes basic Continuous Integration to an environment that emulates production as closely as possible, using the same deployment process and tooling. This is essential to proving whether the code is ready for release on every commit, but it is more rigorous than many development teams are used to having in their CI.

For example, pre-CD Continuous Integration might run automated functional tests against the application by deploying it to an embedded application server using a build tool like Ant or Maven. This is easier for developers to use and maintain, but is probably not how the application will be deployed in production.

So a CD team will typically add an automated deployment to an environment which more fully replicates production, including separate web/app/data tiers, and the deployment tooling that will be used in production. However, this more production-like deployment stage is more likely to fail due to its added complexity, and may be more difficult for developers to maintain and fix, since it uses tooling more familiar to system administrators than to developers.
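
As a sketch of what such a pipeline stage might look like (the hostnames, paths, and use of scp/ssh are purely illustrative - the point is to reuse whatever deployment mechanism production uses, not a developer-only embedded server):

#!/bin/sh
set -e    # fail the stage if any step fails

ARTEFACT="myapp-${BUILD_NUMBER}.war"

# Push the build to the production-like environment using the same
# mechanism as production deployments.
scp "target/${ARTEFACT}" deploy@staging-app01:/opt/myapp/releases/

# Run the same deployment script that operations use in production.
ssh deploy@staging-app01 "/opt/myapp/bin/deploy.sh ${ARTEFACT}"

# A basic smoke test proves the deployment actually worked.
curl --fail http://staging-web01/myapp/healthcheck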

This can be an opportunity to work more closely with the operations team to create a more reliable, easily supported deployment process. But it is likely to be a steep curve to implement and stabilize this process, which may impact development productivity.

Is CD worth it?

Given these friction points, what makes moving from traditional Agile to Continuous Delivery worthwhile, especially for a team that is unlikely to actually release into production more often than every iteration? CD:

  • decreases risk, by uncovering deployment issues earlier,
  • increases flexibility, by giving the organization the option to release at any point with minimal added cost or risk,
  • involves everyone concerned with production releases - such as QA, operations, etc. - in making the full process more efficient, as the entire organization must identify difficult areas of the process and find ways to fix them through automation, better collaboration, and improved working practices,
  • makes the organization more competent at releasing, by continuously rehearsing the release process, so that releasing becomes autonomic, like breathing, rather than traumatic, like giving birth, and
  • improves the quality of the software, by forcing the team to fix problems as they are found rather than leaving things for later.

Dealing with the friction

The friction points I’ve described seem to come up fairly often when Continuous Delivery is being introduced. My hope is that understanding the source of this friction will be helpful in discussing it when it comes up, and in working through the issues. If developers who are initially uncomfortable breaking with the “proper” way of doing things, or who find a CD pipeline overly complex or difficult, come to understand the aims and value of these practices, hopefully they will be more open to giving them a chance. Once these practices become embedded and mature in an organization, team members often find it difficult to go back to the old ways of doing things.

Edit: I’ve rephrased the definition of the “traditional agile” approach to making software ready for release. This definition is not meant to apply to all agile practices, but rather applies to what seems to me to be a fairly mainstream belief that agile means stopping work to make the software releasable.

Iterations Considered Harmful

The iteration is a cornerstone of agile development. It provides a heartbeat for the team and its stakeholders, and a structure for various routine activities that help keep development work aligned with what the customer needs. However, the way many teams run their iterations creates serious pitfalls which can keep them from delivering software as effectively as they could.

The orthodox approach to the iteration is to treat it as a timebox for delivering a batch of stories, which is the approach most Scrum teams take with sprints (the Scrum term for an iteration). In recent years many teams have scrapped this approach, either using iterations more as a checkpoint, as many ThoughtWorks teams do, or scrapping them entirely with Kanban and Lean software development.

For the purpose of this post, I will refer to these two general approaches to running iterations as the “orthodox” or “timeboxed-batch” iteration model on the one hand, and the “continuous development” model on the other. Although orthodox iterations have value, certainly compared with old-school waterfall project management, continuous development approaches, which do away with timeboxing and avoid managing stories in batches, allow teams to deliver higher quality software more effectively.

  • Orthodox iteration model (or “timeboxed-batch” model): each iteration works on a fixed batch of stories, all of which must be started and finished within a single iteration.
  • Continuous development model: stories are developed in a continuous flow, avoiding the need to stop development in order to consolidate a build containing only fully complete stories.

The anatomy of the orthodox iteration

In the classically run iteration or sprint, the Product Owner (PO) and team choose a set of stories that they commit to deliver at the end of the iteration. All of the stories in this batch should be sufficiently prepared before the iteration begins. The level and type of preparation varies between teams, but usually includes some analysis, including the definition of acceptance criteria. This analysis should have been reviewed by the PO and developers to ensure there is a common understanding of each story. The PO should understand what they can expect to have when the story is implemented, and the technical team should understand the story well enough to estimate it and identify potential risks.

The iteration begins with an iteration kickoff meeting (IKO) where the team reviews the stories and confirms their confidence that they can deliver the stories within the iteration. The developers then choose stories to work on, discussing each story with the PO, Business Analyst (BA), and/or Quality Analyst (QA) as appropriate, then breaking it down into implementation tasks. Implementation takes place with continual reviews, or even pairing with these other non-developers, helping to keep implementation on track, and minimizing the amount of rework needed when the story is provisionally finished and goes into testing.

The QA and BA/PO then test and review each story as its implementation is completed. This is in addition to the automated testing which has been written and run repeatedly, following TDD and CI practices. Only once the story is signed off do the developers move on to another of the stories in the iteration’s committed batch.

As the end of the iteration approaches, developers and QAs should be wrapping up the last stories and preparing a releasable build for the showcase, which is typically held on the final day of the iteration. In the showcase, the team demonstrates the functionality of the completed stories to the PO and other stakeholders, and the stories are signed off. The team holds a retrospective to consider how they can work better, then on the next working day they hold the IKO to start the following iteration.

When the iteration ends the team has a complete, fully tested and releasable build of the application, regardless of whether the software actually will go into production at this point.

The start and end dates of the iteration are firmly fixed. If there are stories (or defect fixes) which aren’t quite ready at the end of the iteration, the iteration end date is never slipped. Instead, the story is not counted as completed, so must be carried over to the next iteration.

The benefits of the orthodox iteration

This style of iteration offers many benefits over traditional waterfall methodologies. A short, rigid cycle for producing completely tested and releasable code forces discipline on the team, keeping the code in a near-releasable state throughout the project, and avoiding the temptation to leave work (e.g. testing) for “later”, building up unmanageable burdens of work, stress, and defects to be dealt with under the pressure of the final release phase.

The timeboxed iteration also forces the team to learn how to define stories of a manageable size. If stories are routinely too big to complete in one iteration, this is a clear sign that the team needs to improve the way it defines and prepares stories.

This demonstrates another benefit of the iteration: frequent feedback. Not only is the team able to quickly evaluate the quality of their code and its relevance to the business, they are also able to evaluate how effectively they are working, and to try out ideas for improvement continually throughout the project.

Fundamental problems with the orthodox iteration

The timeboxed-batch approach to iterations has value, particularly for teams inexperienced with agile. However, it has fundamental problems. At its core, this approach is waterfall writ small, with many of the same flaws, albeit with a cycle short enough that issues can be dealt with more quickly than on a full waterfall project.

To understand why this is so, let’s flesh out the idealized anatomy of the iteration from above with some of the things which often happen in practice.

  • At the start of the iteration, no development is taking place, because everyone is working on preparing the new batch of stories. The BAs are extremely busy now, because they have a full working set of stories to hand over to developers (i.e. however many stories the team can work on at once, that’s how many stories the BAs must hand over all at the same time). The QAs are less busy now, although they may be helping the BAs out, and planning their testing for the iteration’s stories.
  • Actually, I lied. Development is taking place, and the QAs are extremely busy. Testing and bugfixing of stories left over from the previous iteration is still going on. See the points below to understand why. As a result, preparation and starting of the stories for this iteration is sluggish because of work carried over from the previous iteration. This isn’t necessarily bad, since it helps to stagger the story preparation work, preventing the BAs from becoming a bottleneck. However, depending on whether carryover work was factored into the number of stories chosen for the current iteration (business pressures often make this difficult), it may mean the team is already at risk of failing to meet its commitment.
  • Once the previous iteration’s work has settled down, QAs have little to do until the end of the iteration approaches, at which point they come under enormous pressure. Developers are scrambling to get their stories done in time, leaving QAs with a pile of stories to be tested in time for the showcase. Any defects they find increase this pressure even more, with very little time to get the fixes in and re-tested (maybe needing even more fixing and re-testing!)
  • If developers complete a story with a bit of time left in the iteration, they aren’t able to start new stories, because the stories for the following iteration won’t be ready to work on until the IKO.
  • In the end, some stories don’t get fully tested during the iteration. They may be tested in the following iteration, after having already been signed off as “complete” by the unsuspecting Product Owner. If so, developers need to be pulled away to fix the defects found, or else the defects are added to a backlog to be fixed “later” (also known as “probably never”). Other code is left completely untested or under-tested, with the vague hope that any defects will be found in later testing phases, or that maybe there aren’t any important bugs anyway. In fact, these defects will be found, just not at a time that is any more convenient for the team.
  • If any serious issues are raised by stakeholders during the showcase, there is no time to fix them until the next iteration, which means it will take the full length of an iteration before a truly releasable build is created.

The root problem of the orthodox iteration

At the end of the day, the orthodox iteration suffers from two problems which are inherent in its very definition: it organizes work into batches, and it enforces a timebox.

Batching work is the antithesis of flow. The Lean approach to working aims to maximize the flow of work for the members of a team, which in software development translates to getting stories flowing easily through creation, analysis, implementation, validation, and release. When a developer finishes one story and it is signed off, she should have another story ready to pick up and start on. This shouldn’t need to wait on an arbitrary ceremony, and certainly shouldn’t have to wait for everyone else on the team to finish their stories and get them all signed off.

The batching focus of orthodox iterations doesn’t only cause developers to become blocked, it also turns BAs, QAs, and the PO into bottlenecks. As described above, the start and end of the iteration each put a full working set of stories into the same state at once, all needing the same activity carried out on them at the same time.

Imagine an assembly line which starts up to assemble twenty cars, then stops while they are all inspected at once. Only once all of the cars are inspected and their defects fixed does the line start up again to begin assembling another twenty cars.

Timeboxing is also a source of problems for iterations. The main problem is that the arbitrary deadline creates pressure to get stories “over the line” so they can count towards the velocity for the iteration. Unless management is enlightened (or uninterested) enough to avoid focusing on fluctuations of velocity from iteration to iteration (and even the most enlightened managers I’ve worked with do get worked up over velocity), this leads to the temptation to rush and cut corners, or to play games with stories.

Rushing obviously endangers the quality of the code, which almost certainly leads to delays down the line when the defects surface. Playing games, such as closing unfinished stories and opening defects to complete the work, or counting some points towards an unfinished story, undermines the team’s ability to measure and predict its work honestly. These bad habits will catch up one way or another.

Expecting code to be complete at the end of the iteration, fully tested, fixed, and ready for deployment, is unrealistic unless the iteration is structured with significant padding at the end. This padding must come after all reviews, including the stakeholder showcase, to allow time to make corrections, unless those reviews are mere rubber stamp sessions, with no genuine feedback permitted. This then means the team will be underutilized during the padding time. Otherwise, if there is so much rework done during this period that the entire team is fully engaged, then the risk of introducing new defects is too high to be confident in stable code by the end.

The alternative is to break the strict timeboxed-batched iteration model by interleaving work on the next iteration with the cleanup work from the previous iteration. This turns out to not be such a bad idea, and leads to evolving away from the timeboxed-batch iteration model towards the continuous development model.

The continuous development model

The continuous development model may be purely iteration-less, e.g. Kanban, or it may still retain the iteration as a period for measuring progress and for scheduling activities such as showcases. Once development is underway, stories are prepared, developed, and tested using a “pull” approach, being picked up as team members become available, so that stories are constantly flowing and everyone is constantly working on the highest value work available at the moment. This requires some different techniques for managing the flow of work; for more information, look into Kanban and Lean software development.

Since joining a year or so ago, I’ve found that although no two ThoughtWorks projects run in exactly the same way, there is a strong tendency to use an approach which looks a lot like Kanban, but retains a one- or two-week iteration. Iterations are used to report progress (including velocity) and to schedule showcases and other regular meetings, but stories are not moved through the process in batches, and the team doesn’t start and stop work as a whole other than at the start and end of a release. If the showcase is two days away, nothing stops a developer pair from starting on a new story, knowing full well it will be incomplete when the codebase is demoed to stakeholders, and possibly even deployed to later-stage environments.

Although we do make projections and aim to have certain stories done by the next showcase, the team doesn’t promise to deliver a specific batch of stories. If it makes sense, stories can be dropped, added, or swapped as needed. This gives the business more flexibility to adapt their vision of the software as it is developed. It also reduces the pressure to mark a given story as “done” by a hard deadline, since there is no disruption from letting work carry on over the end of an iteration.

I’ve seen a Scrum team become ornery and rebellious when a PO made a habit of asking to swap stories after a sprint had started, even though work hadn’t been started on the particular stories involved. This was made worse because bugfixes were scheduled into sprints alongside stories, meaning that any serious defect found in production completely disrupted the team. Another factor that aggravated the situation was that the stories for each sprint were agreed before the end of the previous iteration. So if the showcase raised ideas for improvements to the functionality completed in iteration N, new stories could only be started in iteration N + 2 at the soonest. This hardly created a situation where the PO or the business felt the development team was responsive to business needs.

Also see Oren Teich’s post Go slow to go fast, which points out the problems with deadlines, and that iterations are simply a shorter deadline.

Challenges and rewards of continuous development

There are certainly challenges in moving to continuous development over the timeboxed-batch model. There is more risk of stories dragging on across multiple iterations. This can be mitigated by monitoring cycle time and keeping things visible, so that the team can discuss the issue and make changes to their processes if it becomes a problem.

For teams which are new to agile and still struggle to create appropriately sized stories, the timeboxed model may be more helpful to build the discipline and experience needed before being able to move to a continuous model. However, for experienced teams, timeboxing and batching stories simply has too many negative effects.

Continuous development, with a looser approach to iterations, maximizes the productivity of the team, avoids pitfalls that put quality at risk, and offers the business and the team more flexibility.

Configuration Drift

In my previous article on the server lifecycle I mentioned ConfigurationDrift. Configuration Drift is the phenomenon where servers in an infrastructure become more and more different from one another as time goes on, due to manual ad-hoc changes and updates, and general entropy.

An automated server provisioning process, as I’ve advocated, helps ensure machines are consistent when they are created, but during a given machine’s lifetime it will drift from the baseline, and from the other machines.

There are two main methods to combat configuration drift. One is to use automated configuration tools such as Puppet or Chef, and run them frequently and repeatedly to keep machines in line. The other is to rebuild machine instances frequently, so that they don’t have much time to drift from the baseline.
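
For the first method, the usual trick is to run the configuration agent on a schedule, so drift is corrected soon after it appears. A sketch (the half-hourly interval is arbitrary, and you’d use one of these, not both):

# crontab entries to re-apply configuration every 30 minutes
*/30 * * * * /usr/bin/puppet agent --onetime --no-daemonize    # Puppet
*/30 * * * * /usr/bin/chef-client                              # Chef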

The challenge with automated configuration tools is that they only manage a subset of a machine’s state. Writing and maintaining manifests/recipes/scripts is time consuming, so most teams tend to focus their efforts on automating the most important areas of the system, leaving fairly large gaps.

There are diminishing returns for trying to close these gaps, where you end up spending inordinate amounts of effort to nail down parts of the system that don’t change very often, and don’t matter very much day to day.

On the other hand, if you rebuild machines frequently enough, you don’t need to worry about running configuration updates after provisioning. However, this may increase the burden of making fairly trivial changes, such as tweaking a web server configuration.

In practice, most infrastructures are probably best off using a combination of these methods. Use automated configuration, continuously updated, for the areas of machine configuration where it gives the most benefit, and also ensure that machines are rebuilt frequently.

The frequency of rebuilds will vary depending on the nature of the services provided and the infrastructure implementation, and may even vary for different types of machines. For example, machines that provide network services such as DNS may be rebuilt weekly, while those which handle batch processing tasks may be rebuilt on demand.

Automated Server Management Lifecycle

One of the cornerstones of a well-automated infrastructure is a system for provisioning individual servers. A system that lets us reliably, quickly, and repeatably create new server instances that are consistent across our infrastructure means we spend less time fiddling with individual servers. Instead, servers become disposable components that are easily swapped, replaced, and expanded as we focus our attention on the bigger picture of the services we’re providing.

The first step in achieving this is making sure server instances are built using an automated process. This ensures every server is built the same way, that improvements can be easily folded into the server build process, and that it is a simple matter to spin up new instances and to scrap and replace broken ones. Automating this process also means your team of highly skilled, well-paid professionals don’t need to spend large amounts of their time on the brainless rote-work of menu-clicking through OS installation work.

I first used automated installation by PXE-booting physical rack servers in 2002, following the advice I found on the then-current infrastructures.org site, and in later years applied the same concepts with virtualized servers and then IaaS cloud instances.

The machine lifecycle

I think of this as the machine lifecycle (which I tend to call the ‘server lifecycle’ because that’s what I normally work with, although it’s just as applicable to desktops). This involves a number of activities required to set up and manage a single machine instance, such as partitioning storage, installing software, and applying configuration.

Basic Server Lifecycle phases

These activities are applied during one or more phases of the machine lifecycle. There are three phases: “Package Image”, “Provision Instance”, and “Update Instance”. There are a number of different strategies for deciding which activities to do in each phase.

The various activities may be applied during one or more phases, depending on the strategy used to manage the machine’s lifecycle. Some strategies carry out more activities during the packaging phase, for instance, while other approaches might have a simpler packaging phase but do more in the provisioning and/or updating phase.

Machine lifecycle phases

Image packaging phase

In the image packaging phase, some or all elements of a machine instance are pre-packaged into a machine image in a way that can be reused to create multiple running instances.

This could be as simple as a bare OS installation CD or ISO from the vendor. Alternatively, it could be a snapshot of a fully installed, fully configured runnable system, such as a Symantec Ghost image, VMware template, or EC2 AMI. Either way, these images are maintained in a Machine Image Library for use in the instance provisioning phase.
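
On EC2, for example, adding a prepared instance to the image library can be as simple as snapshotting it into an AMI. A sketch, assuming Amazon’s command line tools and an instance that has already been built and cleaned up ready for imaging (the instance ID and names are placeholders):

#!/bin/sh
# Snapshot a prepared instance into the machine image library as an AMI.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "base-webserver-template" \
    --description "Baseline web server image for the machine image library"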

With the ManualInstanceBuilding pattern, everything happens during provisioning

Different machine lifecycle strategies use different approaches to image packaging. ManualInstanceBuilding and ScriptedInstanceBuilding both tend to use stock OS images, which involves less up-front work and less maintenance of the Machine Image Library, since the images are taken straight from the vendor. However, work is still needed to create, test, and maintain the checklists or scripts used to configure instances when provisioning.

On the other hand, CloningExistingMachineInstances and TemplatedMachineInstances both create pre-configured server images, which need only minor changes (e.g. hostnames and IP addresses) to provision new instances. This is appealing because less work is done to provision a new instance, but the drawback is that creating and updating images takes more work. Admins tend to make updates and fixes to running instances which may not make it into the templates, which contributes to ConfigurationDrift, especially if changes are made ad-hoc.

What happens in each phase with the TemplatedMachineInstances pattern

CloningExistingMachineInstances, which usually takes the shape of copying an existing server to create new ones as needed, tends to make ConfigurationDrift worse, as new servers inherit the runtime cruft and debris (log files, temporary files, etc.) of their parents, and it is difficult to bring various servers into line with a single, consistent configuration. TemplatedMachineInstances are a better way to keep an infrastructure consistent and easily managed.

The tradeoff between scripted installs and packaged images depends partly on the tools used for scripting and/or packaging, which in turn often depends on the hosting platform. Amazon EC2 requires the use of template images (AMIs), for example. In either case, exploiting automation more fully in the provisioning phase favours keeping the packaging phase as lightweight as possible.

Instance Provisioning Phase

In the provisioning phase, a machine instance is created from an image and prepared for operational use.

Examples of activities in this phase include instantiating a VM or cloud instance, preparing storage (partitioning disks, etc.), installing the OS, installing relevant software packages and system updates, and configuring the system and applications for use.

There are two main strategies for deciding which activities belong in the packaging versus the provisioning phases. One is RoleSpecificTemplates, and the other is GenericTemplate.

With RoleSpecificTemplates, the machine image library includes images that have been pre-packaged for specific roles, such as web server, application server, mail server, etc. These have the necessary software and configuration created in the packaging phase, so that provisioning is a simple matter of booting a new instance and tweaking a few configuration options. There are two drawbacks of this approach. Firstly, you will have more images to maintain, which creates more work. When the OS used for multiple roles is updated, for example, the images for all of those roles must be repackaged. Secondly, this pattern gives you less flexibility, since you can’t easily provision an instance that combines multiple roles, unless you create - and then maintain - images for every combination of roles that you might need.

What happens in each phase with the GenericTemplate pattern

With the GenericTemplate pattern, each image is kept generic, including only the software and configuration that is common to all roles. The role for each machine instance is assigned during the provisioning phase, and software and configuration are applied accordingly. The goal is to minimise the number of images in the machine image library, to reduce the work needed to maintain them. Typically, a separate template is needed for each hardware and OS combination that can’t be supported from a single OS install. The JeOS (Just Enough Operating System) concept takes this to the extreme, making the base template as small as possible.

The GenericTemplate pattern does require more robust automated configuration during provisioning, and may mean provisioning an instance takes longer than with more fully-built images, since more packages will need to be installed at provisioning time.
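
A sketch of GenericTemplate provisioning on EC2 (the AMI ID is a placeholder, and the bootstrap script is a deliberately trivial stand-in for whatever applies the machine’s role): a generic base image is launched, and a small script passed as user-data applies the role-specific configuration on first boot.

#!/bin/sh
# Write a minimal first-boot script that hands the machine over to
# the configuration management system to apply its role.
cat > /tmp/bootstrap.sh <<'EOF'
#!/bin/sh
/usr/bin/puppet agent --onetime --no-daemonize
EOF

# Launch an instance from the generic base image with that bootstrap.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m1.small \
    --user-data file:///tmp/bootstrap.sh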

Instance Updating Phase

Once a machine instance is running and in use, it is continuously updated. This includes activities such as applying system and software updates, new configuration settings, user accounts, etc.

Many teams carry out these updates manually; however, it requires a high level of discipline and organization to maintain systems this way, especially as the number of systems grows. The number of machines a team can manage this way depends closely on the size of the team, so the ratio of servers to sysadmins stays low. In practice, teams using manual updates tend to be reactive, updating machines opportunistically when carrying out other tasks, or in order to fix problems that crop up. This leads to ConfigurationDrift, with machines becoming increasingly inconsistent with one another, creating various problems including unreliable operation (software that works on one machine but not another) and extra work to troubleshoot and maintain.

Breaking Into Automated Infrastructure Management

Automated management of infrastructure is vital for delivering highly effective IT services. But although there are plenty of tools available to help implement automation, it’s still common to see operations teams manually installing and managing their servers, which leads to a high-maintenance infrastructure that soaks up the team’s time in firefighting and other reactive tasks.

Doing it by hand

I’ve met many smart and skilled systems administrators in this situation. These folks know automation can make their life easier, but they can’t afford to take time away from turning cranks, greasing wheels, and unjamming the gears to keep their infrastructure puffing along in order to focus on improving their situation.

I’m convinced this is largely due to habit. Even though these teams understand that automation would be useful to them, when the pressure is on (and the pressure is always on), they roll up their sleeves, ssh into the servers and knock them into shape, because that’s the fastest way to get stuff done. Manual infrastructure management is what they’re used to. I find that most of these teams haven’t had personal experience of well-automated infrastructures, and don’t tend to believe it’s something they can realistically implement for their own operations.

Sysadmins who have worked in teams with mature, comprehensive automation, on the other hand, can’t go back. Sure, they might log into a box to diagnose and fix something that needs fixing right now, but they can’t relax until they’ve baked the fix into their automated configuration, and made sure that their monitoring will alert them ahead of time if the problem happens again.

Breaking out of manual infrastructure management and setting up an effective automation regime is difficult. Although there are loads of tools out there to make it possible, it helps to understand good strategies for implementing them. I recommend looking over the material on the infrastructures.org site. It hasn’t been updated in a few years, so it doesn’t take many of the advances since then into account, including virtualization, cloud, and newer tools like Chef and Puppet, but there is still rich material there.

Another must-read, which is more up to date, is Web Operations by John Allspaw, Jesse Robbins, and a bunch of other smart peeps.

I’m also planning to share a few of the practices I’ve seen and used for automation in upcoming posts.

Successful Software Delivery in Spite of Evil IT

In my previous post, I glibly said that SLAs represent waste that an organization has identified and formalized. Reader Kenfin commented on my post, rightly calling me out to provide alternatives.

If you believe that SLA’s ‘formalise waste’ this way how would you approach my situation where communications are beyond poor (atrocious) and the org structure is silo’d and no one is accountable for their work?

Kenfin’s example illustrates my point quite well - the organization’s structure is an obstacle to effective delivery. Since he’s not in a position to fix this problem, he’s turned to SLAs as a way to manage it. They won’t make the issues go away, but they may give him a handle on them, and importantly, make them more predictable.

But it’s a fair question: what can someone in Kenfin’s shoes do in the face of an IT organization which is inherently not aligned to providing the services he needs to deliver software to his users effectively?

A common strategy, and one that I’ve helped teams inside these kinds of organizations pursue, is to completely bypass the existing IT organization. The goal is to put control of everything the product team needs in order to deliver into its own hands, rather than leaving it at the mercy of a group (or multiple groups) with other priorities.

Outsource it!

One way to do this is outsourcing, finding another company that specializes in the functions that the IT group would provide, whether this means development, integration, hosting, or something else. This works best if the project is not seen as core to the business, so that it avoids fear of entrusting sensitive data or business critical functions to outsiders. It also helps if the project needs skills that can’t be found in-house.

My friends at Cognifide have built their business on this, building technically complex, content-focused websites for corporate clients and delivering far more quickly, and with greater expertise, than most corporate IT organizations can manage. This is also the premise that Software as a Service (SaaS) is based on. By choosing Salesforce for CRM, a company completely bypasses the massive IT project that would be required to implement an off-the-shelf, self-hosted CRM package (integration with other applications aside).

There are pitfalls to outsourcing to bypass IT. Many outsourcers are no more responsive than an in-house IT department, using SLAs and change control processes to make their workload, risks, and profitability more manageable.

Do it yourself!

The strategy I’ve most often been involved with myself (although I didn’t really think of it this way at the time) is product departments building their own IT capabilities. Again, this is about having control of the services and resources the group needs in order to deliver to its own customers.

The typical pattern is an “online” (or often, “digital”) department of a company where online was originally on the fringes of the main business, but has in recent years grown into a major channel for sales, customer service, or even delivery of products (for example in publishing).

The online team leverages their growing importance, as well as the specialized needs they have compared with typical corporate IT customers, to get approval from top management to create their own “digital operations team” or similar. This team may outsource elements of infrastructure, such as hosting (with IaaS cloud providers an increasingly appealing option), but they are able to respond immediately to the needs of the online product group, because a) they don’t have to juggle requests from other departments and teams, and b) they report directly to the manager of that group.

But what if I have to use the crappy IT guys who don’t care about my project?

Those strategies are not feasible for every team. I’ve certainly had to support projects where we had no alternative but to struggle along with unresponsive IT. In these cases, SLAs may well have to do, even though they represent waste and inefficiency.

There are a few other things you might at least try in these cases. Your goal is still to have the resources you need to get things done at your disposal, as far as possible. So identify those services which are especially critical, particularly the ones likely to change frequently, and see if you can get some dedicated resource assigned to your project. You want someone who will sit with your team, be incentivized by the success of your project, and who has the skills, authority, and system privileges to carry out the tasks you need.

If full-time secondment of people to your team is not feasible due to budget, lack of available people, etc., see if you can at least get commitments of time from the right people. Can someone come to daily standups? Weekly meetings? Regular release management meetings? Ask for as much as possible to start with, then see what you can get.

Also, maybe you can hire someone into your own team with qualifications and background that will help them effectively liaise with difficult IT teams. Your own DBA, security consultant, etc. can engage with the IT groups using their own language, couching things in terms that address their concerns. They may be able to take certain tasks off the IT group’s plates, which ends up giving you the ability to get things done more quickly, while at the same time making IT grateful that their workload is lighter.

But the right thing to do is …

These are all ways to work around the core problems. The best solution is of course for the organization to restructure itself in a way that aligns its resources with its goals. Most companies, especially large ones, insist on organizing themselves in ways that are self-defeating. It’s a shame that many people who work in large companies accept this as normal, often even as desirable.

Grouping everyone with a given function into a single unit forces them to focus on juggling the competing needs of many stakeholders and managing their own risks (especially the risk of getting blamed when projects fail). They will inevitably favor the abstract principles of their own technical practices over what is most effective in making the business succeed. Much better to group people into units that have complete ownership of delivering business value, and find ways to connect staff of a given function with each other so they can develop their skills and working practices.

Unfortunately most of us are rarely in a position to influence this, so I hope that my suggestions will be helpful to some people in making things a little less painful.

Running Multiple Tomcat Instances on One Server

Here’s a brief step by step guide to running more than one instance of Tomcat on a single machine.

Step 1: Install the Tomcat files

Download Tomcat 4.1 or 5.5, and unzip it into an appropriate directory. I usually put it in /usr/local, so it ends up in a directory called /usr/local/apache-tomcat-5.5.17 (5.5.17 being the current version as of this writing), and make a symlink named /usr/local/tomcat pointing to that directory. When later versions come out, I can unzip them and re-link, leaving the older version in place in case things don’t work out (which rarely if ever happens, but I’m paranoid).

Step 2: Make directories for each instance

For each instance of Tomcat you’re going to run, you’ll need a directory that will be CATALINA_BASE. For example, you might make them /var/tomcat/serverA and /var/tomcat/serverB.

In each of these directories you need the following subdirectories: conf, logs, temp, webapps, and work.
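
For example, to create both instance trees in one go (plain sh; adjust the paths to taste):

# Create the per-instance directory trees for two Tomcat instances.
for instance in serverA serverB; do
    mkdir -p /var/tomcat/$instance/conf /var/tomcat/$instance/logs \
             /var/tomcat/$instance/temp /var/tomcat/$instance/webapps \
             /var/tomcat/$instance/work
done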

Put server.xml and web.xml files in the conf directory. You can get these from the conf directory of the Tomcat installation files, although of course you should tighten up your server.xml a bit.

The webapps directory is where you’ll put the web applications you want to run on the particular instance of Tomcat.

I like to have the Tomcat manager webapp installed on each instance, so I can play with the webapps, and see how many active sessions there are. See my instructions for configuring the Tomcat manager webapp.

Step 3: Configure the ports and/or addresses for each instance

Tomcat listens to at least two network ports, one for the shutdown command, and one or more for accepting requests. Two instances of Tomcat can’t listen to the same port number on the same IP address, so you will need to edit your server.xml files to change the ports they listen to.

The first port to look at is the shutdown port. This is used by the command line shutdown script (actually, by the Java code it runs) to tell the Tomcat instance to shut itself down. This port is defined at the top of the instance’s server.xml file.

<Server port="8001" shutdown="_SHUTDOWN_COMMAND_" debug="0">

Make sure each instance uses a different port value. The port value will normally need to be higher than 1024, and shouldn’t conflict with any other network service running on the same system. The shutdown string is the value that is sent to shut the server down. Note that Tomcat won’t accept shutdown commands that come from other machines.

Unlike the other ports Tomcat listens to, the shutdown port can’t be configured to listen on a different IP address; it always listens on 127.0.0.1.

The other ports Tomcat listens to are configured with the <Connector> elements, for instance the HTTP or JK listeners. The port attribute configures which port to listen to. Setting this to a different value on the different Tomcat instances on a machine will avoid conflict.

Of course, you’ll need to configure whatever connects to that Connector to use the different port. If a web server is used as the front end using mod_jk, mod_proxy, or the like, then this is simple enough - change your web server’s configuration.

In some cases you may not want to do this; for instance, you may not want to use a port other than 8080 for HTTP connectors. If you want all of your Tomcat instances to use the same port number, you’ll need to use different IP addresses. The server system must be configured with multiple IP addresses, and the address attribute of the <Connector> element for each Tomcat instance must be set to the appropriate IP address.
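
For example, the two instances could each keep port 8080 but bind to different addresses (the IP addresses here are just examples):

<!-- In serverA's conf/server.xml -->
<Connector port="8080" address="192.168.1.10" />

<!-- In serverB's conf/server.xml -->
<Connector port="8080" address="192.168.1.11" />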

Step 4: Startup

Startup scripts are a whole other topic, but here’s the brief rundown. The main difference from running a single Tomcat instance is that you need to set CATALINA_BASE to the directory you set up for the particular instance you want to start (or stop). Here’s a typical startup routine:

JAVA_HOME=/usr/java
JAVA_OPTS="-Xmx800m -Xms800m"      # heap settings for this instance
CATALINA_HOME=/usr/local/tomcat    # the shared Tomcat installation
CATALINA_BASE=/var/tomcat/serverA  # this instance's own directory
export JAVA_HOME JAVA_OPTS CATALINA_HOME CATALINA_BASE

$CATALINA_HOME/bin/catalina.sh start
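
Stopping the instance works the same way; with the same variables set, run the stop command instead:

$CATALINA_HOME/bin/catalina.sh stop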