
21 March, 2022
The cloud computing explosion has led to the development of software programs and applications at an exponential rate. The ability to deliver features faster is now a competitive edge.
To achieve this, your DevOps teams, structure, and ecosystem need to be well-oiled. It is therefore critical to understand how to build an ideal CI/CD pipeline that helps deliver features at a rapid pace.
In this blog, we explore important cloud concepts, execution playbooks, and best practices for setting up CI/CD pipelines on public clouds such as AWS, Azure, and GCP, as well as in hybrid and multi-cloud environments.
Here’s a bird’s-eye view of what an ideal CI/CD pipeline looks like:
Let’s take a closer look at what each stage of the CI/CD involves:
Source Code:
This is the starting point of any CI/CD pipeline. It is where all the packages and dependencies relevant to the application being developed are categorized and stored. At this stage, it is vital to have a review mechanism that gives designated reviewers control over what enters the codebase. This prevents developers from merging arbitrary bits of code into the source. It is the reviewer's job to approve pull requests before the code progresses to the next stage. Although this involves leveraging several different technologies, it certainly pays off in the long run.
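As a minimal sketch of how such a review gate might be enforced, the snippet below checks a pull request for the required number of approvals before letting the pipeline continue. It assumes a GitHub-hosted repository; the repository name, token variable, approval threshold, and PR number are illustrative placeholders, not part of the original pipeline described here.

```python
# Minimal sketch of a review gate: stop the pipeline until a pull request
# has enough APPROVED reviews. Repository name, token, and threshold are
# illustrative assumptions.
import os

import requests

GITHUB_API = "https://api.github.com"
REPO = "example-org/example-app"      # hypothetical repository
TOKEN = os.environ["GITHUB_TOKEN"]    # access token supplied by the CI system
REQUIRED_APPROVALS = 2                # assumed team policy


def is_approved(pr_number: int) -> bool:
    """Return True if the pull request has the required number of approvals."""
    url = f"{GITHUB_API}/repos/{REPO}/pulls/{pr_number}/reviews"
    resp = requests.get(url, headers={"Authorization": f"token {TOKEN}"})
    resp.raise_for_status()
    approvers = {r["user"]["login"] for r in resp.json() if r["state"] == "APPROVED"}
    return len(approvers) >= REQUIRED_APPROVALS


if __name__ == "__main__":
    pr = int(os.environ.get("PR_NUMBER", "1"))
    if not is_approved(pr):
        raise SystemExit(f"PR #{pr} lacks the required approvals; stopping the pipeline.")
    print(f"PR #{pr} approved; code may progress to the Build stage.")
```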
Build:
Once a change has been committed to the source and approved by the reviewers, it automatically progresses to the Build stage.
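A minimal sketch of such an automated build step is shown below: it runs the project's dependency, test, and packaging commands in order and fails fast on the first error. The specific commands are placeholders for whatever build tooling the project actually uses.

```python
# Minimal sketch of an automated Build stage: run each build step in order
# and fail the pipeline on the first non-zero exit code. The commands below
# are placeholders for the project's real build tooling.
import subprocess
import sys

BUILD_STEPS = [
    ["pip", "install", "-r", "requirements.txt"],  # resolve dependencies
    ["pytest", "-q", "tests/unit"],                # fast unit tests
    ["python", "-m", "build"],                     # package the artifact
]


def run_build() -> None:
    for step in BUILD_STEPS:
        print("running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(f"build step failed: {' '.join(step)}")
    print("build succeeded; artifact ready for the Test environment")


if __name__ == "__main__":
    run_build()
```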
Test Environment:
This is the first environment the code enters. It is where changes to the code are tested and confirmed to be ready for the next stage, one step closer to production.
If the application depends on any other APIs, it is vital to test those as well. For instance, if the ‘create order’ API depends on a customer profile service, it should be tested to confirm that the customer profile service receives the expected information. This exercises the end-to-end workflows of the entire system and builds confidence that the core APIs and core logic used in the pipeline are working as expected. It is worth noting that developers spend most of their time in this stage of the pipeline.
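The snippet below is a hypothetical end-to-end check for the ‘create order’ workflow described above, run against a test environment. The base URL, endpoints, payloads, and field names are assumptions made for illustration; the point is simply that the order API is exercised together with the customer profile service it depends on.

```python
# Hypothetical end-to-end test: the 'create order' API should pull the
# expected data from the customer profile service it depends on.
# Base URL, endpoints, payloads, and field names are illustrative assumptions.
import requests

TEST_ENV = "https://test.example.internal"   # assumed test-environment URL


def test_create_order_uses_customer_profile():
    # The order service depends on this profile data being available.
    profile = requests.get(f"{TEST_ENV}/customers/42/profile").json()
    assert "shipping_address" in profile

    # Create an order and verify the dependent data flowed through end to end.
    order = requests.post(
        f"{TEST_ENV}/orders",
        json={"customer_id": 42, "items": [{"sku": "ABC-1", "qty": 1}]},
    ).json()
    assert order["status"] == "CREATED"
    assert order["shipping_address"] == profile["shipping_address"]
```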
On/Off Switches to Control Code Flow
Production:
The next stage after testing is usually production. However, moving directly from testing to a production environment is generally only viable for small to medium organizations, where a couple of environments are used at most. The larger an organization gets, the more environments it may need, which makes it harder to maintain consistency and quality of code across them. To manage this, it is better for code to move from the testing stage to a pre-production stage and only then to production. This is especially useful when many developers are testing at different times, for example during QA or while a specific new feature is being validated. The pre-production environment allows developers to create a separate branch or additional environments for a specific test.
This pre-production environment will be known as ‘Prod 1 Box’ for the rest of this article.
Pre-Production (Prod 1 Box):
A key aspect to remember when moving code out of the testing environment is to ensure it does not introduce a bad change into the main production environment, where all the hosts are situated and all the customer traffic flows. The Prod 1 Box receives only a fraction of production traffic, ideally less than 10% of the total. This allows developers to detect when anything goes wrong during a code push, for example if latency spikes. Such a breach triggers alarms, alerting developers that a bad deployment is occurring and allowing them to roll back that particular change instantly.
The purpose of the Prod 1 Box is simple. If code moved directly from the testing stage to production and caused a bad deployment, every host in the environment would have to be rolled back, which is tedious and time-consuming. If a bad deployment occurs in the Prod 1 Box instead, only one host needs to be rolled back. This is a straightforward and extremely quick process: the developer simply disables that particular host, and production reverts to the previous version of the code without any harm or lingering changes. Although simple in concept, the Prod 1 Box is a very powerful tool, offering developers an extra layer of safety before any change hits the production stage.
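As a sketch of the rollback decision on the Prod 1 Box during its bake period, the snippet below compares error count and latency against thresholds and disables the single host if either is breached. The metric values, thresholds, and the disable/promote hooks are assumptions; in practice the metrics would come from whatever monitoring system the team uses.

```python
# Sketch of a Prod 1 Box bake-period check: roll back the single host if
# error count or latency breaches a threshold, otherwise promote the change.
# Thresholds and metric sources are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class BakeMetrics:
    error_count: int        # errors observed during the bake period
    p99_latency_ms: float   # 99th-percentile latency in milliseconds


MAX_ERRORS = 5              # assumed rollback threshold
MAX_P99_LATENCY_MS = 800.0  # assumed rollback threshold


def should_roll_back(metrics: BakeMetrics) -> bool:
    """Trigger the rollback alarm if the new deployment misbehaves."""
    return metrics.error_count > MAX_ERRORS or metrics.p99_latency_ms > MAX_P99_LATENCY_MS


def bake_one_box(metrics: BakeMetrics, disable_host, promote) -> None:
    if should_roll_back(metrics):
        disable_host()   # revert the single Prod 1 Box host to the previous version
    else:
        promote()        # safe to continue to the main production fleet


# Example: a latency breach on the 1-box triggers an immediate rollback.
bake_one_box(
    BakeMetrics(error_count=1, p99_latency_ms=1200.0),
    disable_host=lambda: print("rolling back Prod 1 Box"),
    promote=lambda: print("promoting to main production"),
)
```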
Main Production:
When everything is functioning as expected in the Prod 1 Box, the code can move on to the next stage: the main production environment. For instance, if the Prod 1 Box was hosting 10% of the traffic, the main production environment hosts the remaining 90%. All the safeguards used in the Prod 1 Box, such as rollback alarms, the bake period, anomaly detection, error-count and latency-breach checks, and canaries, must be carried over to this stage with the same checks, only at a much larger scale.
The main question most developers face is: how is 10% of traffic supposed to be directed to one host while 90% goes to another? While there are several ways of accomplishing this, the easiest is to handle it at the DNS level. Using DNS weights, developers can shift a certain percentage of traffic to one endpoint and the rest to another. The exact process varies with the technology being used, but DNS weighting is the most common approach.
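Below is a sketch of this DNS-level split using weighted records, with Amazon Route 53 via boto3 as one possible implementation. The hosted zone ID, record name, and target hostnames are placeholders; any DNS provider that supports weighted records could be used the same way.

```python
# Sketch of DNS-level traffic splitting with weighted records, using
# Amazon Route 53 via boto3 as one possible implementation. The hosted
# zone ID, record name, and targets are illustrative placeholders.
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z123EXAMPLE"   # placeholder hosted zone


def set_weight(set_id: str, target: str, weight: int) -> None:
    """Upsert a weighted CNAME so `weight` controls this target's share of traffic."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": set_id,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            }]
        },
    )


# Roughly 10% of lookups resolve to the Prod 1 Box, 90% to the main fleet.
set_weight("prod-1-box", "one-box.example.com", 10)
set_weight("main-prod", "fleet.example.com", 90)
```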
Detailed Ideal CI/CD Pipeline
Summary
The ultimate goal of an ideal CI/CD pipeline is to enable teams to generate quick, reliable, accurate, and comprehensive feedback from their SDLC. Regardless of the tools and configuration of the CI/CD pipeline, the focus should be to optimize and automate the software development process.
Let’s go over the key points covered one more time. These are the key concepts and elements that make up an ideal CI/CD pipeline:
NeoSOFT’s DevOps services are geared towards delivering our signature exceptional quality and boosting efficiency wherever you are in your DevOps journey. Whether you want to build a CI/CD pipeline from scratch, your existing CI/CD pipeline is not delivering the required results, or your CI/CD pipeline is in development but needs to be accelerated, our robust engineering solutions will enable your organization to get there.
NeoSOFT’s DevOps Services Impact on Organizations
Solving Problems in the Real World
Over the past few years, we’ve applied the best practices mentioned in this article.
Organizations often find themselves requiring assistance at different stages in the DevOps journey; some wish to develop an entirely new DevOps approach, while others start by exploring how their existing systems and processes can be enhanced. As their products evolve and take on new characteristics, organizations need to re-imagine their DevOps processes and ensure that these changes aren’t affecting their efficiencies or hampering the quality of their product.
DevOps helps eCommerce Players to Release Features Faster
When it comes to eCommerce, DevOps is instrumental for increasing overall productivity, managing scale & deploying new and innovative features much faster.
For a global eCommerce platform with millions of daily visitors, NeoSOFT built the CI/CD pipeline. Large computational resources were made to work efficiently, delivering a smooth online customer experience. The infrastructure carried out a number of mission-critical functions while producing substantial savings in both time and money.
With savings of up to 40% on compute and storage resources, matched with enhanced developer throughput, an ideal CI/CD pipeline is critical to the eCommerce industry.
Robust CI/CD Pipelines are Driving Phenomenal CX in the BFSI Sector
DevOps’ ability to meet continually growing user needs while rapidly deploying new features has driven its broader adoption across the BFSI industry, albeit at varying maturity levels.
When executing a digital transformation project for a leading bank, NeoSOFT upgraded the entire infrastructure with the objective of achieving continuous delivery. Introducing emerging technologies like Kubernetes enabled the institution to move at startup speed, driving go-to-market (GTM) at a 10x faster rate.
As technology leaders in the BFSI segment look to compete through digital capabilities, DevOps & CI/CD pipelines start to form their cornerstone of innovation.
A well-oiled DevOps team, structure, and ecosystem can be the difference-maker in driving business benefits and leveraging technology as your competitive edge.
Begin your DevOps Journey Today!
Speak to us. Let’s build.