DevOps introduction in the form of a lighthouse project
As written in DevOps Strategies and commented on in my blog post DevOps and Continuous Delivery in corporations – the myth of prescribed agility, it is not a promising approach to simply "direct" DevOps and Continuous Delivery from above and, for example, to establish them across an organization in a single step.
The exact opposite approach would be to start the DevOps implementation with only a single system in focus. However, this is not a free lunch either – a number of points have to be taken into account.
Although the truth lies somewhere between these two approaches, the idea of a single lighthouse project is often very popular because it promises to focus the introduction of DevOps and Continuous Delivery on a single product and its success.
The lighthouse approach creates particularly high expectations of success for DevOps and Continuous Delivery practices.
Moreover, the lighthouse project is expected to prove that it is possible to create and operate a continuous delivery pipeline at the level of complexity the organization requires. And this initial proof is then supposed to hold for all other products as well.
This is not least because a large number of critics from the "old world" have to be convinced. The same applies to the requirements for the end product – especially if an important legacy system is to be replaced as part of the DevOps implementation.
The second system syndrome
There are often several reasons for replacing a successful legacy system in this way:
- The architecture and design of the legacy system can no longer cope with growing demands in terms of throughput, number of users and regional distribution – it no longer scales.
- The legacy system generates a large part of the revenue and has a solid customer base that has aligned its own business processes with the legacy system.
In the case of a single DevOps project, on which all eyes are then directed, the organization must manage these expectations.
This is not new with DevOps: in the past I have encountered this pattern several times, where the introduction of a new technology or process is to be carried out within a lighthouse project.
In addition to the actual tasks of the DevOps/Continuous Delivery introduction (automation, culture change), the following is supposed to happen "in passing":
- Introduction of a new development paradigm. Among other things, strict agility, which must not only be lived by the developers but also understood by all parties involved.
- New technology and related architecture paradigms. Many legacy systems have client-server architectures and are based on RDBMS; in the DevOps environment, microservice architectures and NoSQL data stores are very popular because they are well suited for continuous deployment.
- Complete automation of deployment and test processes. This places new and higher demands on employees, especially in the QA department.
- Business-critical applications are created or replaced.
In the case of a reimplementation, the second system syndrome has a grave impact on the project:
- In addition to the functional features that are to be re-implemented in the DevOps project, new requirements gradually arise with the aim of compensating for the shortcomings of the old system. In theory, an organization can prevent such scope creep, or at least reduce it to an unavoidable minimum, through stringent requirements management combined with agile processes such as Scrum. In practice, however, stakeholders exert considerable pressure for improvements, especially when the lighthouse project is delayed. As a counterbalance, promises of new features are often made.
- Furthermore, even without scope creep, re-implementing a complex traditional system with all its relevant functionality and inherent processes is extremely difficult. A system developed over many years has undergone a large number of iterations, maintenance and correction cycles. This represents an enormous amount of know-how built up and implemented over time, which – despite all existing documentation – is no longer accessible due to turnover in the teams responsible for the legacy system. In the team responsible for the new system, the situation is even worse.
Although this should be evident, such lighthouse projects keep appearing in connection with the introduction of new procedures and technologies.
Another approach could be to dismantle the legacy system by "switching off" certain parts of it and replacing them with DevOps and Continuous Delivery products.
Another problem with lighthouse projects is the uniqueness of the "DevOps project", which exists in a bubble.
Thus, it is often the case that everyone involved initially accepts or even demands that the DevOps project be carried out on a "greenfield" basis. This means there should be no complex interfaces to legacy systems, with the exception of, say, an initial master data transfer.
In such situations, however, the green field has to be assumed as a prerequisite for feasibility within budget and schedule. Over time, especially in the old world outside the DevOps bubble, this assumption becomes obsolete – and with it the entire time and budget plan of the lighthouse project.
The bubble shrinks
The following factors may cause the DevOps bubble to contract:
Island situation regarding personnel
The personnel situation can change if members of the lighthouse project leave, drop out, or are needed more urgently elsewhere. In this case, there is no know-how within the organization outside of the DevOps bubble with regard to DevOps, Continuous Delivery, the new technology or the (agile) development processes that characterize the lighthouse project.
If, in such a case, no replacement staff can be recruited, or if new staff cannot be onboarded swiftly, inefficiency and ineffectiveness cause delays and budget overruns – the DevOps bubble shrinks.
This process continues if, due to the growing need to integrate interfaces to legacy systems, synchronization with the release processes of those legacy systems becomes necessary – in conflict with DevOps and CALMS principles.
The development processes in the DevOps bubble are then increasingly diluted or, in some cases, aligned with those of the old world. The principle of automation in deployment and testing is broken – manual changes are made to the production environment, for example. The DevOps bubble continues to shrink.
In addition, the culture and principles underlying DevOps (CALMS: Culture, Automation, Lean, Measurement, Sharing) are lived only in the DevOps bubble and are almost unknown or even rejected outside of it. This undermines the realization of the CALMS principles, and not only inside the DevOps bubble. The DevOps bubble shrinks even further.
No inflow of trust
As long as the initial DevOps/Continuous Delivery project is still the only one of its kind and the product delivered so far does not meet the expectations of the final product, DevOps has not (yet) justified its existence in the organization. Given the exaggerated stakeholder expectations mentioned above, or in the event of delays, such a lighthouse project quickly comes under fundamental critical fire.
At the end of the day, the trust in the lighthouse project falls short of the minimum necessary for its success – this leads to the discontinuation of the project or to a "slow heartbeat" (employee turnover, continual budget cuts). The DevOps bubble bursts.