CI/CD Pipeline: Understanding What it is and Why it Matters

The cloud computing explosion has led to the development of software programs and applications at an exponential rate. The ability to deliver features faster is now a competitive edge.

To achieve this, your DevOps teams, structure, and ecosystem need to be well-oiled. It is therefore critical to understand how to build an ideal CI/CD pipeline that helps deliver features at a rapid pace.

In this blog, we will explore important cloud concepts, execution playbooks, and best practices for setting up CI/CD pipelines on public cloud environments like AWS, Azure, and GCP, as well as hybrid and multi-cloud environments.

HERE’S A BIRD’S EYE VIEW OF WHAT AN IDEAL CI/CD PIPELINE LOOKS LIKE

Let’s take a closer look at what each stage of the CI/CD pipeline involves:

Source Code:

This is the starting point of any CI/CD pipeline. It is where all the packages and dependencies relevant to the application being developed are categorized and stored. At this stage, it is vital to have a mechanism that designates reviewers for the project. This prevents developers from merging arbitrary bits of code into the source branch. It is the reviewer’s job to approve pull requests before the code progresses to the next stage. Although this involves leveraging several different technologies, it certainly pays off in the long run.

Build:

Once a change has been committed to the source and approved by the reviewers, it automatically progresses to the Build stage.

1) Compile Source and Dependencies The first step in this stage is straightforward: compile the source code along with all of its dependencies.

2) Unit Tests This involves running unit tests with high coverage. Many tools today show whether or not a given line of code is covered by a test. In an ideal CI/CD pipeline, the goal is to commit source code into the build stage with the confidence that any defect will be caught in one of the later steps of the process. If high-coverage unit tests are not run on the source code, defects will progress directly into the next stage, leading to errors and forcing the developer to roll back to a previous version, which is often a painful process. This makes it crucial to maintain a high level of unit-test coverage to be certain that the application is functioning correctly. A brief sketch of these checks follows step 3 below.

3) Check and Enforce Code Coverage (90%+) This ties into the testing frameworks above; however, it deals with the code coverage percentage reported for a specific commit. Ideally, developers want to achieve a minimum of 90%, and no subsequent commit should fall below this threshold. The goal should be an increasing percentage with every future commit – the higher the better.
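
To make steps 2 and 3 concrete, here is a minimal sketch in Python using pytest and coverage.py. The orders module, its calculate_order_total helper, and the exact wiring of the 90% gate are illustrative assumptions, not a prescription for any particular toolchain.

```python
# test_orders.py – a minimal unit-test sketch (pytest), assuming a hypothetical
# orders.calculate_order_total(subtotal, tax_rate) helper.
import pytest

from orders import calculate_order_total  # hypothetical module under test


def test_total_includes_tax():
    # A 100.00 subtotal at 10% tax should come to 110.00.
    assert calculate_order_total(subtotal=100.00, tax_rate=0.10) == pytest.approx(110.00)


def test_negative_subtotal_is_rejected():
    with pytest.raises(ValueError):
        calculate_order_total(subtotal=-1.00, tax_rate=0.10)
```

```python
# run_build_checks.py – run the unit tests and fail the build if coverage drops below 90%.
import sys

import coverage
import pytest

cov = coverage.Coverage(source=["orders"])  # hypothetical package name
cov.start()
exit_code = pytest.main(["tests/unit"])
cov.stop()
cov.save()

total_percent = cov.report()  # coverage.py returns the total percentage as a float
if exit_code != 0 or total_percent < 90.0:
    sys.exit(1)  # block promotion to the next stage
```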

Test Environment:

This is the first environment the code enters after the build. Here, changes are tested and confirmed to be ready for the next stage, which is something closer to production.

1) Integration Tests The primary prerequisite is to run integration tests. There are different interpretations of what exactly constitutes an integration test and how it compares to a functional test, so to avoid confusion it is important to outline exactly what is meant by the term here.

In this case, let’s assume there is an integration test that executes a ‘create order’ API with an expected input. This should be immediately followed by a ‘get order’ API call and a check that the order contains all the elements expected of it. If it does not, then something is wrong. If it does, the test passes – congratulations.

Integration tests also verify the behavior of the application in terms of business logic. For instance, if the developer calls the ‘create order’ API and there is a business rule within the application that prevents the creation of any order valued above $10,000, an integration test must check that the application enforces that rule. In this stage, it is not uncommon to run around 50-100 integration tests depending on the size of the project, but the focus should mainly be on testing the core functionality of the APIs and checking that they work as expected. A brief sketch of such tests follows.
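
Here is a hedged sketch of the two integration tests described above: the create/get round trip and the $10,000 business-rule check. The base URL, endpoint paths, payload fields, and status codes are illustrative assumptions, not a real API contract.

```python
# integration_tests.py – illustrative integration tests for the 'create order' / 'get order' flow.
import requests

BASE_URL = "https://orders.test.example.com/api"  # hypothetical test-environment endpoint


def test_create_then_get_order():
    payload = {"customer_id": "cust-123", "items": [{"sku": "ABC", "qty": 2}], "total": 59.98}
    create_resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert create_resp.status_code == 201

    order_id = create_resp.json()["order_id"]
    get_resp = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert get_resp.status_code == 200

    order = get_resp.json()
    # The retrieved order should contain every element supplied at creation time.
    assert order["customer_id"] == payload["customer_id"]
    assert order["items"] == payload["items"]


def test_order_above_10000_is_rejected():
    # Business rule from the example above: orders valued above $10,000 must not be created.
    payload = {"customer_id": "cust-123", "items": [{"sku": "XYZ", "qty": 1}], "total": 10500.00}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert resp.status_code == 400  # assumed rejection status; adjust to the real contract
```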

2) On/Off Switches At this point, let’s backtrack a little to include an important mechanism that should sit between the source and build stages, as well as between the build and test stages. This mechanism is a simple on/off switch that allows the developer to enable or disable the flow of code at any point. It is a great technique for keeping code that doesn’t need to be built right away out of the build or test stage, or for preventing code from interfering with something that is already being tested in the pipeline. This ‘switch’ enables developers to control exactly what gets promoted to the next stage of the pipeline.
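
A promotion switch can be as simple as a flag checked before code moves between stages. The sketch below is a minimal illustration, assuming the flags live in a small key-value structure (here an in-memory dict; in practice this might be a parameter store or feature-flag service), and the artifact name is hypothetical.

```python
# promotion_switches.py – a minimal on/off gate between pipeline stages.
PROMOTION_SWITCHES = {
    "source_to_build": True,
    "build_to_test": False,      # switched off to hold changes out of the test environment
    "test_to_prod_1box": True,
}


def promote(gate: str, artifact: str) -> bool:
    """Promote the artifact only when the switch for that stage transition is on."""
    if not PROMOTION_SWITCHES.get(gate, False):
        print(f"Gate '{gate}' is off; holding back {artifact}")
        return False
    print(f"Gate '{gate}' is on; promoting {artifact}")
    return True


if __name__ == "__main__":
    promote("build_to_test", "orders-service build #142")  # hypothetical artifact name
```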

If any of the APIs have dependencies, it is vital to test those as well. For instance, if the ‘create order’ API depends on a customer profile service, that service should be tested and checked to ensure it is receiving the expected information. This exercises the end-to-end workflows of the entire system and offers added confidence that all the core APIs and core logic used in the pipeline are working as expected. It is worth noting that developers will spend most of their time in this stage of the pipeline.

ON/OFF SWITCHES TO CONTROL CODE FLOW

Production:

The next stage after testing is usually production. However, moving directly from testing to a production environment is usually only viable for small to medium organizations that use a couple of environments at most. The larger an organization gets, the more environments it might need, which makes it difficult to maintain consistency and quality of code across environments. To manage this, it is better for code to move from the testing stage to a pre-production stage and then on to production. This becomes useful when many different developers are testing things at different times, such as QA runs or a specific new feature. The pre-production environment allows developers to create a separate branch or additional environments for conducting a specific test.

This pre-production environment will be known as ‘Prod 1 Box’ for the rest of this article.

Pre-Production (Prod 1 Box):

A key aspect to remember when moving code out of the testing environment is to ensure it does not introduce a bad change into the main production environment, where all the hosts are situated and where all the customer traffic occurs. The Prod 1 Box serves a fraction of the production traffic – ideally less than 10% of the total. This allows developers to detect when anything goes wrong while pushing code, such as latency spiking. That triggers the alarms, alerting developers that a bad deployment is occurring and allowing them to roll back that particular change instantly.

The purpose of the Prod 1 Box is simple. If the code moved directly from the testing stage to the production stage and resulted in a bad deployment, all the hosts in that environment would need to be rolled back, which is tedious and time-consuming. If a bad deployment occurs in the Prod 1 Box instead, only one host needs to be rolled back. This is a straightforward and extremely quick process: the developer simply disables that particular host, and the production environment reverts to the previous version of the code without any harm. Although simple in concept, the Prod 1 Box is a very powerful tool, as it offers developers an extra layer of safety when they introduce changes to the pipeline before those changes hit the production stage.

1) Rollback Alarms When promoting code from the test stage to the production stage, several things can go wrong in the deployment. It can result in:

  • An elevated number of errors
  • Latency spikes
  • Faltering key business metrics
  • Various other abnormal and unexpected patterns

This makes it crucial to incorporate the concept of alarms into the production environment – specifically rollback alarms. Rollback alarms monitor a particular environment and are integrated into the deployment process. They allow developers to watch specific metrics of a particular deployment and that particular version of the software for issues such as elevated latency or errors, or key business metrics falling below a certain threshold. The rollback alarm is the indicator that tells the developer to roll back the change to a previous version. In an ideal CI/CD pipeline, these configured metrics are monitored directly and the rollback is initiated automatically: the automatic rollback is baked into the system and triggered whenever any of these metrics exceeds or falls below its expected threshold.
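
The sketch below shows one way such an automatic rollback check might look. The metric names, thresholds, and rollback hook are assumptions for illustration; a real pipeline would wire these to its monitoring and deployment systems.

```python
# rollback_alarm.py – illustrative automatic-rollback check on deployment metrics.
from dataclasses import dataclass


@dataclass
class DeploymentMetrics:
    error_rate: float          # errors per request, 0.0 to 1.0
    p99_latency_ms: float
    orders_per_minute: float   # example key business metric


THRESHOLDS = {
    "max_error_rate": 0.01,
    "max_p99_latency_ms": 500.0,
    "min_orders_per_minute": 20.0,
}


def should_roll_back(m: DeploymentMetrics) -> bool:
    """True when any monitored metric exceeds or falls below its threshold."""
    return (
        m.error_rate > THRESHOLDS["max_error_rate"]
        or m.p99_latency_ms > THRESHOLDS["max_p99_latency_ms"]
        or m.orders_per_minute < THRESHOLDS["min_orders_per_minute"]
    )


def watch_deployment(fetch_metrics, trigger_rollback):
    """Poll current metrics once; in practice this runs repeatedly during the deployment."""
    if should_roll_back(fetch_metrics()):
        trigger_rollback()  # revert automatically to the previous known-good version
```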

2) Bake Period The Bake Period is more of a confidence-building step that allows developers to check for anomalies. The ideal duration of a Bake Period is around 24 hours, but it isn’t uncommon for developers to shorten it to around 12 or even 6 hours during high-volume time frames.

Quite often, when a change is introduced to an environment, errors do not pop up right away. Errors and latency spikes might be delayed, and unexpected behavior of an API or a particular code path may not surface until another system calls it. This is why the Bake Period is important: it allows developers to be confident in the changes they’ve introduced. Once the code has sat for the set period and nothing abnormal has occurred, it is safe to move it on to the next stage.

3) Anomaly Detection or Error Counts and Latency Breaches During the Bake Period, developers can use anomaly detection tools to detect issues; however, that is an expensive endeavor for most organizations and often an overkill solution. Another effective option, similar to the one described earlier, is simply to monitor error counts and latency breaches over a set period. If the sum of the issues detected exceeds a certain threshold, the developer should roll back to a version of the code that was working.
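
As a lighter-weight alternative to full anomaly detection, the bake-period check can simply sum error counts and latency breaches over the window, as in the minimal sketch below. The sample format and thresholds are assumptions for illustration.

```python
# bake_check.py – sum errors and latency breaches over the bake window; False means roll back.
def bake_period_passed(samples, max_errors=50, max_latency_breaches=20) -> bool:
    total_errors = sum(s["error_count"] for s in samples)
    total_breaches = sum(s["latency_breaches"] for s in samples)
    return total_errors <= max_errors and total_breaches <= max_latency_breaches


# Example: per-minute samples collected during a 6-hour bake (shortened here for brevity).
samples = [{"error_count": 0, "latency_breaches": 1}, {"error_count": 2, "latency_breaches": 0}]
print(bake_period_passed(samples))  # True – within thresholds, safe to promote
```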

4) Canary A canary continuously tests the production workflow with an expected input and an expected outcome. Let’s consider the ‘create order’ API used earlier. In the integration test environment, the developer should set up a canary on that API along with a ‘cron job’ that triggers every minute.

The cron job should monitor the ‘create order’ API with an expected input and a hard-coded expected output, calling or checking the API every minute. This allows the developer to know immediately when the API begins failing or its output turns into an error, signalling that something has gone wrong within the system. A minimal sketch of such a canary follows below.

The concept of the canary must be integrated with the Bake Period, the key alarms, and the key metrics – all of which ultimately link back to the rollback alarm that reverts the pipeline to a previous software version that was working.
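
A minimal canary for the ‘create order’ API might look like the sketch below. The endpoint, payload, and expected response are hypothetical; in practice the script would be run every minute by a scheduler and would emit a success/failure metric that the rollback alarm watches.

```python
# canary_create_order.py – illustrative per-minute canary against the 'create order' API.
import requests

CANARY_URL = "https://prod-1box.example.com/api/orders"  # hypothetical Prod 1 Box endpoint
EXPECTED_STATUS = 201


def run_canary() -> bool:
    payload = {"customer_id": "canary-user", "items": [{"sku": "CANARY-SKU", "qty": 1}]}
    try:
        resp = requests.post(CANARY_URL, json=payload, timeout=5)
        body = resp.json()
        # Compare against the hard-coded expectation described above.
        return resp.status_code == EXPECTED_STATUS and body.get("items") == payload["items"]
    except (requests.RequestException, ValueError):
        return False


if __name__ == "__main__":
    # Exit non-zero on failure so the scheduler/alerting layer can raise the rollback alarm.
    raise SystemExit(0 if run_canary() else 1)
```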

Main Production:

When everything is functioning as expected within the Prod 1 Box, the code can move on to the next stage: the main production environment. For instance, if the Prod 1 Box was hosting 10% of the traffic, then the main production environment hosts the remaining 90%. All the elements and metrics used within the Prod 1 Box – rollback alarms, the Bake Period, anomaly detection or error count and latency breaches, and canaries – must be included in this stage exactly as they were in the Prod 1 Box, with the same checks, just on a much larger scale.

The main question most developers face is: how is 10% of traffic supposed to be directed to one set of hosts while 90% goes to another? While there are several ways of accomplishing this, the easiest is to split traffic at the DNS level. Using DNS weights, developers can shift a certain percentage of traffic to one endpoint and the rest to another. The exact process varies depending on the technology being used, but DNS weighting is the most common approach developers prefer.
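
As one concrete option, Amazon Route 53 supports weighted record sets that can be updated through boto3. The sketch below splits traffic roughly 10/90 between the Prod 1 Box fleet and the main production fleet; the hosted zone ID, record name, and addresses are placeholders, and other DNS providers offer equivalent mechanisms.

```python
# dns_weights.py – hedged sketch of a 10/90 weighted DNS split using Route 53 (boto3).
import boto3

route53 = boto3.client("route53")


def set_weighted_split(zone_id: str, record_name: str, one_box_ip: str, main_ip: str) -> None:
    changes = []
    for set_id, weight, ip in (("prod-1box", 10, one_box_ip), ("prod-main", 90, main_ip)):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": set_id,   # distinguishes the two weighted records
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        })
    route53.change_resource_record_sets(HostedZoneId=zone_id, ChangeBatch={"Changes": changes})


# Example (placeholders): set_weighted_split("Z123EXAMPLE", "api.example.com.", "10.0.1.10", "10.0.2.10")
```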

DETAILED IDEAL CI/CD PIPELINE

Summary

The ultimate goal of an ideal CI/CD pipeline is to enable teams to generate quick, reliable, accurate, and comprehensive feedback from their SDLC. Regardless of the tools and configuration of the CI/CD pipeline, the focus should be to optimize and automate the software development process.

Let’s go over the key points covered one more time. These are the key concepts and elements that make up an ideal CI/CD pipeline:

  • The Source Code is where all the packages and dependencies are categorized and stored. It involves the addition of reviewers for the curation of code before it gets shifted to the next stage.
  • Build steps involve compiling code, unit tests, as well as checking and enforcing code coverage.
  • The Test Environment deals with integration testing and the creation of on/off switches.
  • The Prod 1 Box serves as a soft testing environment for production, handling a small portion of the traffic.
  • The Main Production environment serves the remainder of the traffic.

NeoSOFT’s DevOps services are geared towards delivering our signature exceptional quality and boosting efficiency wherever you are in your DevOps journey. Whether you want to build a CI/CD pipeline from scratch, your existing CI/CD pipeline is ineffective and not delivering the required results, or your CI/CD pipeline is in development but needs to be accelerated, our robust and signature engineering solutions will enable your organization to:

  • Scale rapidly across locations and geographies,
  • Deliver with quicker turnaround times,
  • Accelerate DevOps implementation across tools.

NEOSOFT’S DEVOPS SERVICES IMPACT ON ORGANIZATIONS

Solving Problems in the Real World

Over the past few years, we’ve applied the best practices mentioned in this article.

Organizations often find themselves requiring assistance at different stages in the DevOps journey; some wish to develop an entirely new DevOps approach, while others start by exploring how their existing systems and processes can be enhanced. As their products evolve and take on new characteristics, organizations need to re-imagine their DevOps processes and ensure that these changes aren’t affecting their efficiencies or hampering the quality of their product.

DevOps helps eCommerce Players to Release Features Faster

When it comes to eCommerce, DevOps is instrumental in increasing overall productivity, managing scale, and deploying new and innovative features much faster.

NeoSOFT built the CI/CD pipeline for a global e-commerce platform with millions of daily visitors. Huge computational resources were made to work efficiently, delivering a pleasing online customer experience. The infrastructure was able to carry out a number of mission-critical functions with substantial savings in both time and money.

With savings of up to 40% on computing and storage resources, paired with enhanced developer throughput, an ideal CI/CD pipeline is critical to the eCommerce industry.

Robust CI/CD Pipelines are Driving Phenomenal CX in the BFSI Sector

DevOps’ ability to meet continually growing user needs while rapidly deploying new features has facilitated its broader adoption across the BFSI industry, with varying levels of maturity.

When executing a digital transformation project for a leading bank, NeoSOFT upgraded the entire infrastructure with the objective of achieving continuous delivery. The introduction of emerging technologies like Kubernetes enabled the institution to move at startup speed, driving go-to-market 10x faster.

As technology leaders in the BFSI segment look to compete through digital capabilities, DevOps and CI/CD pipelines form the cornerstone of their innovation.

A well-oiled DevOps team, structure, and ecosystem can be the difference-maker in driving business benefits and leveraging technology as your competitive edge.

Begin your DevOps Journey Today!

Speak to us – let’s build.

Thriving in a Digital Society — Modernizing Legacy Banking Applications

For more than half a century, banks have been at the frontier of embracing automation and introducing digital systems to gain operational excellence. Today, their demands have grown, and banks now look beyond the legacy core banking systems that have, to date, been leveraged for conventional services such as opening new accounts, processing deposits and transactions, and initiating loans.

Digital innovations are disrupting the marketplace, and the continuous evolution and rapid emergence of new technologies have set these legacy systems back in the race. New players are entering the market without the burden of outdated technologies.

The rise of Fintech startups, fierce competition, and the fast-paced digital momentum have exponentially elevated consumer expectations and forced banks to modernize their digital assets.

What is Core Banking Modernization?

Core banking modernization is the replacement, upgrade, or outsourcing of a bank’s existing core banking systems and IT environment so that they can be scaled and sustained to perform mission-critical operations for the bank, empowering it to harness advancements in technology and design.

Banking Yesterday, Banking Today, and Banking Tomorrow

The core banking solutions of the future will accommodate global perspectives, making it easier for banks to deploy systems across multiple geographies. Compared with legacy systems, these new systems will be leaner, more scalable, process-centric, economical, and deployed over the cloud, empowering banks to be agile and meet changing business requirements.

EVOLUTION OF CORE BANKING SYSTEMS BY DECADE

In pursuit of innovative features and better customer experience, banks appear keen to adopt data-driven, cutting-edge technologies and lean, agile processes. This transformation is disruptive, and banks need to strike the right balance between revitalizing their core systems and creating new products and services to thrive in a digital society.

To address the challenges of the near future and the next normal, it is necessary to conduct a thorough assessment of the current core banking platform and the external environment. Modernizing legacy applications is a critical process, and it requires a disciplined, well-thought-out approach. Banks will need to understand whether a full replacement or a systematic upgrade offers a better value-to-risk ratio.

Modernization Objectives and Drivers

Core banking modernization is driven by the need to respond to internal business imperatives such as growth and efficiency as well as the external ones such as regulations, competition, and customer experience expectations.

As new banking products, channels, and technologies enter the marketplace, complexity grows and the need to modernize old legacy core banking systems becomes more pressing. The internal and external drivers pushing banks to transform are worth considering.

Internal Drivers:

  • Product and Channel Growth
    Managing high volumes of product-channel transactions and payments demands scalable and sustainable modern core banking systems. The ever-increasing introduction of custom solutions and products to satisfy a wide segment of customers, further amplified by a multitude of channels, creates an opportunity for banks to re-strategize their old digital assets.
  • Legacy Systems Management
    As the technologies used to build legacy systems become obsolete, finding resources to manage these outdated systems becomes difficult. Moreover, introducing new technologies into the systems helps banks stay relevant and achieve flexibility and cost-effectiveness.
  • Cost Reduction
    Modernizing core applications involves consolidating the stand-alone applications that sit peripheral to the core. This optimizes overall cost and helps banks reduce the high maintenance costs associated with legacy systems.

External Drivers:

  • Regulatory Compliance
    It is imperative for banks to enhance their IT infrastructure and operations in order to comply with increasing regulations such as Basel III, the Foreign Account Tax Compliance Act (FATCA), and the Dodd-Frank Act, all of which are aimed at 1) enhancing risk management, 2) strengthening governance procedures, and 3) improving the transparency of banking operations, including customer interactions.
  • Increasing Competition
    Competitive pressure compels banks to innovate and embrace new core banking platforms. New entrants in financial services are expected to give banks a tough run and call their very relevance into question.
  • Customer Centricity
    Customer experience is a derivative of many components, and banks need to re-strategize their positioning. Moving from a product-centric to a customer-centric approach is essential. Focus on customer service, relationship-based pricing, and digital experience will be the crucial elements of the transformation journey.

OBJECTIVES OF CORE SYSTEMS TRANSFORMATION

Best Practices in Core Banking Modernization

  • Evaluate Technical Debt: Banks should be able to closely identify and calculate their technical debt so that they can properly prioritize the debt and its impact on the legacy system processes. To get an accurate assessment, banks will need to factor in the prospective cost of adding or altering features and functionality later.
  • Outline the Organization’s Objectives and Analyze Risk Tolerance: When going for legacy system modernization, the bank must assess various business variables like customer satisfaction levels, modernization objectives, cost savings, business continuity, and risk management. These thorough assessments will help to provide context for the selection of the most efficient and effective modernization approach.
  • Choose Futuristic & Advanced Solutions: Technology refinements are taking place at an unprecedented scale, which demands that organizations be agile in adopting future technologies. For this, it is critical to build solutions that support future adaptability.
  • Define the Post-Modernization Release Strategy: The most crucial modernization practice is to create a follow-up plan that includes successfully training employees, ensuring systematic and streamlined processes, keeping a timely update schedule, and undertaking other maintenance tasks.

Legacy modernization will empower traditional banks to offer a wide range of modern banking services that are robust and scalable. Moreover, the digitalization of traditional banks will address the changing needs of customers through seamless digital services and drive an excellent customer experience.

Legacy Modernization Benefits

  • Faster Customer Onboarding: Deploy cutting-edge technologies such as Artificial Intelligence, Blockchain, Data Science, etc. to speed up the customer onboarding process. Remember that customer experience is a derivative of the way banks engage with customers and make their lives easier and better.
  • Omnichannel Banking Experience: Your online and mobile banking software should not only match but surpass the banking experience offered at your physical branches. Simply put, your customer’s virtual banking experience should be seamless, personalized, and secure.
  • Scalability and Flexibility: Your banking application should be able to onboard any number of users and handle massive concurrent user access. Cloud adoption is proving to improve efficiency and security while reducing costs.

IMPACT AREAS OF LEGACY MODERNIZATION

The Way Forward

As the world tunes in to the new normal, the solution to legacy systems is the modernization of core banking systems. Banks looking to enhance their IT efficiency are turning to innovative technologies such as AI/ML, IoT, cloud computing, blockchain, and RPA. The integration of new technologies will help unlock banks’ growth and revenue potential while building a loyal and satisfied customer base. It also enables real-time systems that are agile, scalable, flexible, and cost-effective.

Now is not the time to mull over the prospect of modernizing legacy banking software. It is survival of the fittest, and to stay fit, banks and financial institutions must weather the storm and adapt to the rapid evolution of Fintech. This, however, can’t be a solitary journey!

Get in touch with NeoSOFT’s Application Modernization experts for a free consultation on taking the first step in your modernization journey.