DevOps Terminologies

1. Servers

A server is a piece of computer hardware or software (computer program) that provides  functionality for other programs or devices, called “clients”. Servers can provide various  functionalities, often called “services”, such as sharing data or resources among multiple  clients, or performing computation for a client. A single server can serve multiple clients,  and a single client can use multiple servers. A client process may run on the same device  or may connect over a network to a server on a different device. Typical servers  are database servers, file servers, mail servers, print servers, web servers, game servers,  and application servers. 


2. Clients

– A client is a piece of computer hardware or software that accesses a service made available  by a server as part of the client–server model of computer networks. The server is often (but not  always) on another computer system, in which case the client accesses the service by way of a  network. 

– Put another way, a client is a computer or a program that, as part of its operation, relies on sending requests to another program or computer, typically a server, in order to access a service.

3. Agents

– A software agent is a computer program that acts for a user or other program in a relationship  of agency. 

– Such “action on behalf of” implies the authority to decide which, if any, action is  appropriate. Agents are colloquially known as bots, from robot. 

4. Source Code Management

– Source code management (SCM) is used to track modifications to a source code repository. SCM  tracks a running history of changes to a code base and helps resolve conflicts when merging  updates from multiple contributors. SCM is also synonymous with Version control. 

5. Code Version Control System

– Version control systems are a category of software tools that help record changes made to files by keeping track of modifications done to the code.

– As we know, a software product is developed collaboratively by a group of developers who may be located in different places, each contributing some specific kind of functionality or feature. To contribute to the product, they modify the source code (either by adding or removing code). A version control system is a kind of software that helps the developer team efficiently communicate and manage (track) all the changes that have been made to the source code, along with information about who made each change and what was changed. A separate branch is created for every contributor's changes, and the changes aren't merged into the original source code until they have been analyzed; as soon as the changes are green-signaled, they are merged into the main source code. It not only keeps source code organized but also improves productivity by making the development process smooth.

6. Centralized Version Control

In a centralized version control system (CVCS), a server acts as the main repository and stores every version of the code. Using centralized source control, every user commits directly to the main branch, so this type of version control often works well for small teams: team members can communicate quickly enough that no two developers end up working on the same piece of code simultaneously. Strong communication and collaboration are important to ensure a centralized workflow is successful.

– Centralized source control systems, such as CVS, Perforce, and SVN, require users to pull the  latest version from the server to download a local copy on their machine. Contributors then push  commits to the server and resolve any merge conflicts on the main repository.

7. Distributed Version Control

– Distributed version control is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. Compared to centralized version control, this enables automatic management of branching and merging, speeds up most operations (other than pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Git, the world's most popular version control system, is a distributed version control system.

Advantages of DVCS (compared with centralized systems) include: 

  • Allows users to work productively when not connected to a network.
  • Common operations (such as commits, viewing history, and reverting changes) are faster for DVCS, because there is no need to communicate with a central server. With DVCS, communication is necessary only when sharing changes among other peers.
  • Allows private work, so users can use their changes even for early drafts they do not want to publish.

Disadvantages of DVCS (compared with centralized systems) include: 

  • Initial checkout of a repository is slower than checkout in a centralized version control system, because all branches and the full revision history are copied to the local machine by default.
  • Most DVCSs lack the locking mechanisms that are part of most centralized VCSs and that still play an important role for non-mergeable binary files such as graphic assets, or for overly complex single-file binary or XML packages (e.g. office documents, Power BI files, SQL Server Data Tools BI packages).
  • Additional storage is required for every user to keep a complete copy of the codebase's history.
  • Increased exposure of the code base, since every participant has a locally vulnerable copy.
8. Continuous Integration

Continuous integration (CI) is the practice of automating the integration of code changes from multiple contributors into a single software project. It's a primary DevOps best practice, allowing developers to frequently merge code changes into a central repository where builds and tests then run.

– One of the primary benefits of adopting CI is that it will save you time during your development cycle by identifying and addressing conflicts early. It's also a great way to reduce the amount of time spent on fixing bugs and regressions by putting more emphasis on having a good test suite.

Finally, it helps share a better understanding of the codebase and the features that you’re  developing for your customers. 
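As a language-agnostic illustration (not tied to any particular CI product), the sketch below shows the essence of a CI job: run a fixed series of build and test steps on every change and fail fast if any step breaks. The specific commands and repository layout are hypothetical.

```python
# Minimal CI step-runner sketch (hypothetical commands and project layout).
import subprocess
import sys

STEPS = [
    ("install dependencies", ["pip", "install", "-r", "requirements.txt"]),
    ("run unit tests", ["pytest", "-q"]),
    ("build package", ["python", "-m", "build"]),
]

def run_pipeline() -> int:
    for name, cmd in STEPS:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Step failed: {name}")
            return result.returncode  # fail fast, like a CI server marking the build red
    print("All steps passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```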

9. Continuous Build

– The build cycle compiles the code and runs unit tests on it. On success, artifacts and reports are published. Changesets may trigger only parts of the build cycle, but merges to master should trigger a full build, since the outcome represents an artifact fit for production if you are working towards continuous deployment. Certain criteria in the reports, such as critical security vulnerabilities, may fail the build.

10. Build Artifact

– Build artifacts are files produced by a build. Typically, these include distribution packages, WAR  files, reports, log files, and so on. When creating a build configuration, you specify the paths to  the artifacts of your build on the Configuring General Settings page. 

11. Artifact Repository

– An artifact repository is a software application designed to store these artifacts, and an artifact manager helps your team interact with the packages. Using an artifact repository and manager provides consistency to your continuous integration/continuous delivery (CI/CD) workflow. It saves teams time and increases build performance.

Examples include Nexus, JFrog, CloudRepo. 

12. Continuous Testing

– Continuous testing in DevOps is a type of software testing that involves testing at every stage  of the development life cycle. The goal of continuous testing is to evaluate the quality of the  software as part of a continuous delivery process, by testing early and often. 

Traditional testing involves software being handed off by one team to another, with a project  having clearly defined development and quality assurance (QA) phases. The QA team would  require extensive time to ensure quality, and quality is generally prioritized over the project  schedule. 

Businesses of today, however, are requiring faster delivery of software to end users. The newer  software is, the more marketable it is, and therefore the more likely it is to offer companies the  opportunity for improved revenues. 

13. Unit Testing

– Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. This testing methodology is done during the development process by the software developers and sometimes QA staff. The main objective of unit testing is to isolate written code to test and determine if it works as intended.

Unit testing is an important step in the development process, because if done correctly, it can  help detect early flaws in code which may be more difficult to find in later testing stages. 

Unit testing is a component of test-driven development (TDD), a pragmatic methodology that  takes a meticulous approach to building a product by means of continual testing and revision.  This testing method is also the first level of software testing, which is performed before other  testing methods such as integration testing. Unit tests are typically isolated to ensure a unit does  not rely on any external code or functions. Testing can be done manually but is often automated. 
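A minimal sketch of a unit test using Python's built-in unittest module; the function under test is invented purely for illustration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```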

14. Integration Testing

– Integration testing is done to test modules/components once they are integrated, to verify that they work as expected; that is, to confirm that modules which work fine individually do not cause issues when combined.

When testing a large application with the black-box testing technique, many tightly coupled modules are combined. Integration testing concepts can be applied to test these kinds of scenarios.

The main function or goal of this testing is to test the interfaces between the units/modules.
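By way of contrast with the unit test above, here is a small, hypothetical sketch of an integration-style test: two made-up components are wired together and exercised through their interface rather than in isolation.

```python
# Integration-style test sketch: two hypothetical components tested together
# through their interface rather than each in isolation.
import unittest

class InMemoryProductRepository:
    """Stand-in for a data-access module."""
    def __init__(self):
        self._prices = {"sku-1": 100.0, "sku-2": 40.0}

    def price_of(self, sku: str) -> float:
        return self._prices[sku]

class CartService:
    """Stand-in for a business-logic module that depends on the repository."""
    def __init__(self, repository: InMemoryProductRepository):
        self._repository = repository

    def total(self, skus: list[str]) -> float:
        return sum(self._repository.price_of(sku) for sku in skus)

class CartServiceIntegrationTest(unittest.TestCase):
    def test_total_uses_prices_from_repository(self):
        service = CartService(InMemoryProductRepository())
        self.assertEqual(service.total(["sku-1", "sku-2"]), 140.0)

if __name__ == "__main__":
    unittest.main()
```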

15. Functional Testing 

– Functional testing is a kind of black-box testing that is performed to confirm that the  functionality of an application or system is behaving as expected. It is done to verify all the  functionality of an application. 

– This is specified in a functional or requirement specification: a document that describes what a user is permitted to do, so that the conformance of the application or system to it can be determined. Additionally, this can sometimes also entail validating actual business-side scenarios.

Therefore, functionality testing can be carried out via two popular techniques:

  • Testing based on Requirements: Contains all the functional specifications which  form a basis for all the tests to be conducted. 
  • Testing based on Business scenarios: Contains the information about how the system will be perceived from a business process perspective.

Testing and quality assurance are a huge part of the SDLC process. As testers, we need to be aware of all the types of testing, even if we're not directly involved with them daily.

16. Performance Testing

– Performance testing is a non-functional software testing technique that determines how the stability, speed, scalability, and responsiveness of an application hold up under a given workload. It's a key step in ensuring software quality but, unfortunately, it is often treated as an afterthought, performed in isolation, and begun only once functional testing is completed and, in most cases, after the code is ready to release.

The goals of performance testing include evaluating application output, processing speed, data  transfer velocity, network bandwidth usage, maximum concurrent users, memory utilization,  workload efficiency, and command response times. 

There are 5 main types of performance testing. 

  • Capacity Testing
  • Load Testing
  • Volume Testing
  • Stress Testing
  • Soak Testing
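As a toy illustration of the idea (real performance tests are normally run with dedicated tools such as JMeter, k6, or Locust), the sketch below measures response times of a stand-in operation under a fixed concurrent workload.

```python
# Toy load-test sketch: measure latency of a hypothetical operation under
# concurrent load. This only illustrates the idea of measuring response times
# at a given workload; it is not a real performance-testing tool.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test() -> None:
    """Hypothetical unit of work standing in for a request to the system."""
    time.sleep(0.01)  # simulate I/O latency

def timed_call(_: int) -> float:
    start = time.perf_counter()
    operation_under_test()
    return time.perf_counter() - start

if __name__ == "__main__":
    concurrent_users = 20
    requests_per_user = 10
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(timed_call, range(concurrent_users * requests_per_user)))
    latencies.sort()
    print(f"requests: {len(latencies)}")
    print(f"mean latency:    {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"95th percentile: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```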

17. Quality Testing

– Quality assurance testing is a process that ensures an organization delivers the best products  or services possible. QA aims to deliver consistent results through a set of standardized  procedures, which means organizations also need to make sure their processes for achieving  the desired results hit specific quality benchmarks. 

In brief, you might say QA includes all activities centered around implementing standards and  procedures associated with ensuring software meets certain requirements before it’s released  to the public. The key thing to keep in mind is that QA doesn’t involve the actual testing of  products. Instead, it focuses on the procedures to ensure the best outcome. QA activities are  ultimately process-oriented. 

18. Security Testing

– Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, and risks in a software application and prevents malicious attacks from intruders. The purpose of security tests is to identify all possible loopholes and weaknesses of the software system which might result in a loss of information, revenue, or reputation at the hands of employees or outsiders of the organization.

The main goal of Security Testing is to identify the threats in the system and measure its potential vulnerabilities, so that the threats can be countered and the system neither stops functioning nor can be exploited. It also helps in detecting all possible security risks in the system and helps developers fix the problems through coding.

Types Of Security Testing 

  • Vulnerability Scanning
  • Penetration Testing
  • Web Application Security Testing
  • API Security Testing
  • Configuration Scanning
  • Security Audits
  • Risk Assessment

19. Provisioning Automation

– Automated provisioning means having the ability to deploy information technology or telecommunications services using predefined automated procedures, without requiring human intervention. It's essentially a design principle which facilitates the implementation of end-to-end automation using specifications, policies, and analytics.

Automated Provisioning Benefits 

  • Improve Accuracy
  • Reduce Onboarding Costs
  • Increase Visibility in Environment
  • Achieve Compliance Requirements
  • Improve User Onboarding Experience

20. Infrastructure Orchestration

– Orchestration is the automated configuration, management, and coordination of computer  systems, applications, and services. Orchestration helps IT to manage complex tasks and  workflows more easily. 

IT teams must manage many servers and applications, but doing so manually isn’t a scalable  strategy. The more complex an IT system, the more complex managing all the moving parts can  become. The need to combine multiple automated tasks and their configurations across groups  of systems or machines increases. That’s where orchestration can help.

21. Configuration Automation

– Configuration automation is the methodology or process of automating the deployment and  configuration of settings and software for both physical and virtual data center equipment. 

The role of configuration management is to maintain systems in a desired state. Traditionally,  this was handled manually or with custom scripting by system administrators. Automation is the  use of software to perform tasks, such as configuration management, in order to reduce cost,  complexity, and errors. 

Through automation, a configuration management tool can provision a new server within  minutes with less room for error. You can also use automation to maintain a server in the desired  state, such as your standard operating environment (SOE), without the provisioning scripts  needed previously. 

Examples include Ansible, Chef, Puppet, Salt. 
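The tools above use their own declarative formats; purely as an illustration of the underlying desired-state idea, here is a Python sketch that only changes a (hypothetical) configuration file when it is not already in the desired state.

```python
# Sketch of the "desired state" idea behind configuration management tools.
# The file path and setting are hypothetical; real tools (Ansible, Puppet, etc.)
# express this declaratively instead of with ad-hoc scripts like this one.
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Ensure `line` is present in `path`. Returns True if a change was made."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False            # already in the desired state: do nothing
    existing.append(line)
    path.write_text("\n".join(existing) + "\n")
    return True

if __name__ == "__main__":
    changed = ensure_line(Path("app.conf"), "max_connections = 100")
    print("changed" if changed else "already compliant")
```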

22. Deployment Automation

– Deployment automation refers to a software deployment approach that allows organizations  to increase their velocity by automating build processes, testing, and deployment workflows for  developers. In other words, it allows organizations to release new features faster and more  frequently. 

Deployment automation is what enables you to deploy your software to testing and production  environments with the push of a button. Automation is essential to reduce the risk of production  deployments. It’s also essential for providing fast feedback on the quality of your software by  allowing teams to do comprehensive testing as soon as possible after changes. 

An automated deployment process has the following inputs: 

  • Packages created by the continuous integration (CI) process (these packages should be deployable to any environment, including production). 
  • Scripts to configure the environment, deploy the packages, and perform a deployment test (sometimes known as a smoke test). 
  • Environment-specific configuration information.

How to implement deployment automation 

When you design your automated deployment process, we recommend that you follow these best practices (a small sketch follows the list):

  • Use the same deployment process for every environment, including production. This rule helps ensure that you test the deployment process many times before you use it to deploy to production. 
  • Allow anyone with the necessary credentials to deploy any version of the artifact to any environment on demand in a fully automated fashion. If you have to create a ticket and wait for someone to prepare an environment, you don’t have a fully automated  deployment process. 
  • Use the same packages for every environment. This rule means that you should keep environment-specific configuration separate from packages. That way, you know that the packages you are deploying to production are the same ones that you tested. 
  • Make it possible to recreate the state of any environment from information stored in version control. This rule helps ensure that deployments are repeatable, and that in the event of a disaster recovery scenario, you can restore the state of production in a  deterministic way. 
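
Purely as an illustration of the inputs and practices above, here is a minimal Python sketch of a deployment entry point that uses one code path for every environment and keeps environment-specific configuration outside the package. All file names and settings are hypothetical.

```python
# Sketch of an automated deployment entry point: one code path for every
# environment, with environment-specific configuration kept outside the
# package. All names and paths are hypothetical.
import json
import sys
from pathlib import Path

def load_environment_config(environment: str) -> dict:
    """Environment-specific settings live in version control, next to the code."""
    return json.loads(Path(f"config/{environment}.json").read_text())

def deploy(package_path: str, environment: str) -> None:
    config = load_environment_config(environment)
    print(f"Deploying {package_path} to {environment} "
          f"({config.get('host', 'unknown host')})")
    # 1. copy the same, already-tested package to the target environment
    # 2. apply the environment-specific configuration loaded above
    # 3. run a smoke test and fail the deployment if it does not pass
    print("Smoke test passed, deployment complete")

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("usage: python deploy.py <package> <environment>")
        sys.exit(1)
    # e.g. python deploy.py dist/app-1.4.2.tar.gz staging
    deploy(sys.argv[1], sys.argv[2])
```
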
23. Continuous Delivery

– Continuous Delivery is the ability to get changes of all types—including new features,  configuration changes, bug fixes and experiments—into production, or into the hands of  users, safely and quickly in a sustainable way. 

The goal is to make deployments—whether of a large-scale distributed system, a complex  production environment, an embedded system, or an app—predictable, routine affairs that can  be performed on demand. 

Why continuous delivery? 

– Low risk releases. The primary goal of continuous delivery is to make software  deployments painless, low-risk events that can be performed at any time, on demand. 

– Faster time to market. It’s not uncommon for the integration and test/fix phase of the  traditional phased software delivery lifecycle to consume weeks or even months. 

– Higher quality. When developers have automated tools that discover regressions within  minutes, teams are freed to focus their effort on user research and higher level testing  activities such as exploratory testing, usability testing, and performance and security  testing.

– Lower costs. Any successful software product or service will evolve significantly over the  course of its lifetime. 

– Happier teams. Peer-reviewed research has shown continuous delivery makes releases  less painful and reduces team burnout. 

24. Continuous Deployment

Continuous Deployment (CD) is a software release process that uses automated testing to  validate if changes to a codebase are correct and stable for immediate autonomous deployment  to a production environment. 

The software release cycle has evolved over time. The legacy process of moving code from one machine to another and checking whether it works as expected used to be an error-prone and resource-heavy process. Now, tools can automate this entire deployment process, which allows engineering organizations to focus on core business needs instead of infrastructure overhead.

Continuous delivery lets developers automate testing beyond just unit tests so they can verify  application updates across multiple dimensions before deploying to customers. These tests may  include UI testing, load testing, integration testing, API reliability testing, etc. This helps  developers more thoroughly validate updates and pre-emptively discover issues. With the cloud,  it is easy and cost-effective to automate the creation and replication of multiple environments  for testing, which was previously difficult to do on-premises. 

25. Continuous Monitoring

Continuous monitoring is a technology and process that IT organizations may implement to  enable rapid detection of compliance issues and security risks within the IT infrastructure.  Continuous monitoring is one of the most important tools available for enterprise IT  organizations, empowering SecOps teams with real-time information from throughout public and  hybrid cloud environments and supporting critical security processes like threat intelligence,  forensics, root cause analysis, and incident response. 

The goal of continuous monitoring and the reason that organizations implement continuous  monitoring software solutions is to increase the visibility and transparency of network activity,  especially suspicious network activity that could indicate a security breach, and to mitigate the  risk of cyber-attacks with a timely alert system that triggers rapid incident response. 

Continuous monitoring can also play a role in monitoring the operational performance of  applications. A continuous monitoring software tool can help IT operations analysts detect  application performance issues, identify their cause and implement a solution before the issue  leads to unplanned application downtime and lost revenue.

Ultimately, the goal of continuous monitoring is to provide the IT organizations with near immediate feedback and insight into performance and interactions across the network, which  helps drive operational, security and business performance. 

26. Monitoring as Code

Monitoring-as-Code learns from IaC and brings your monitoring config closer to your application  and your development workflows. How? By having it also declared as code, much like you would  do with any kind of IT infrastructure. 

Why Monitoring-as-Code 

What does one gain when moving from a manual to a Monitoring-as-Code approach? The main  advantages are: 

  1. Better scalability through faster provisioning and easier maintenance.
  2. Better history and documentation: config files can be checked into source control.
  3. Shared monitoring setup visibility (and easier shared ownership) in DevOps teams.

End-to-end monitoring as code should include: 

  • Instrumentation: Installation and configuration of plugins and exporters.
  • Scheduling and orchestration: Management of monitoring jobs (e.g. collect, scrape).
  • Diagnosis: Collection of additional context (e.g. automated triage, including validating configuration, examining log files, etc.).
  • Detection: Codified evaluation, filtering, deduplication, and correlation of observability events.
  • Notification: Codified workflows for alerts and incident management, including automatically creating and resolving incidents.
  • Processing: Routing of metrics and events to data platforms like Elasticsearch, Splunk, InfluxDB, and TimescaleDB for storage and analysis.
  • Automation: Codifying remediation actions, including integrations with runbook automation platforms like Ansible Tower, Rundeck, and Saltstack.
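
Monitoring as code is normally expressed in the configuration language of a specific monitoring tool; as a tool-agnostic sketch of the idea, the checks below are declared as data that could live in version control, and a small evaluator turns them into alerts. Thresholds, metric names, and the metrics source are all hypothetical.

```python
# Tool-agnostic sketch of monitoring declared as code: checks are data that can
# live in version control, and a small evaluator turns them into alerts.
CHECKS = [
    {"name": "api_error_rate", "warn_above": 0.01, "crit_above": 0.05},
    {"name": "p95_latency_ms", "warn_above": 300, "crit_above": 800},
]

def current_metrics() -> dict:
    """Stand-in for a real metrics backend (Prometheus, CloudWatch, ...)."""
    return {"api_error_rate": 0.02, "p95_latency_ms": 250}

def evaluate(checks: list, metrics: dict) -> list:
    alerts = []
    for check in checks:
        value = metrics.get(check["name"])
        if value is None:
            continue
        if value > check["crit_above"]:
            alerts.append((check["name"], "critical", value))
        elif value > check["warn_above"]:
            alerts.append((check["name"], "warning", value))
    return alerts

if __name__ == "__main__":
    for name, severity, value in evaluate(CHECKS, current_metrics()):
        print(f"[{severity}] {name} = {value}")
```
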
27. Infrastructure as Code

– Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. Just as the same source code generates the same binary, an IaC model generates the same environment every time it is applied. IaC is a key DevOps practice and is used in conjunction with continuous delivery.

Infrastructure as Code evolved to solve the problem of environment drift in the release pipeline.  Without IaC, teams must maintain the settings of individual deployment environments. 

Idempotence is a principle of Infrastructure as Code. Idempotence is the property that a  deployment command always sets the target environment into the same configuration,  regardless of the environment’s starting state. Idempotency is achieved by either automatically  configuring an existing target or by discarding the existing target and recreating a fresh  environment. 
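A minimal sketch of the idempotence property described above, assuming nothing more than the standard library: the same "apply" step can be run any number of times and always leaves the (stand-in) environment in the same state. Real IaC tools such as Terraform or CloudFormation provide this declaratively.

```python
# Sketch of idempotence: running the same provisioning step twice leaves the
# environment in the same state. A local directory stands in for a piece of
# infrastructure purely for illustration.
from pathlib import Path

DESIRED_DIRECTORIES = ["data", "logs", "tmp"]

def apply(base: Path) -> None:
    for name in DESIRED_DIRECTORIES:
        (base / name).mkdir(parents=True, exist_ok=True)  # no-op if already present
    print(f"environment under {base} matches the declared state")

if __name__ == "__main__":
    root = Path("./environment")
    apply(root)   # first run: creates whatever is missing
    apply(root)   # second run: changes nothing -- idempotent
```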

28. Orchestration as Code

– Orchestration is the automated configuration, management, and coordination of computer  systems, applications, and services. Orchestration helps IT to manage complex tasks and  workflows more easily. 

IT teams must manage many servers and applications but doing so manually isn’t a scalable  strategy. The more complex an IT system, the more complex managing all the moving parts can  become. The need to combine multiple automated tasks and their configurations across groups  of systems or machines increases. That’s where orchestration can help. 

IT orchestration also helps you to streamline and optimize frequently occurring processes and  workflows, which can support a DevOps approach and help your team deploy applications more  quickly. 

You can use orchestration to automate IT processes such as server provisioning, incident  management, cloud orchestration, database management, application orchestration, and many  other tasks and workflows. 

29. Configuration as Code

– Configuration as code is the practice of managing configuration files in a repository. Config files establish the parameters and settings for applications, server processes, and operating systems. By managing your config files alongside your code, you get all the benefits of version control for your entire product.

Why Teams Use Configuration as Code 

  • Scalability
  • Standardization
  • Traceability
  • Increased Productivity

30. Everything as Code

– Everything as Code is the practice of treating all parts of the system as code. This means, storing  configuration along with Source Code in a repository such as git or svn. Storing the configuration  from bottom to top (communication switches, bare metal servers, operating systems, build  configurations, application properties and deployment configurations…) as code means they are  tracked and can be recreated at the click of a button. 

Everything as Code includes system design, also stored as code. In old world IT, infrastructure  required specialized skills and physical hardware and cables to be installed. Systems were  precious or were not touched / updated often as the people who created them no longer work  for the company. The dawn of cloud computing and cloud native applications has meant it is  cheap and easy to spin up virtual infrastructure. By storing the configuration of virtual  environments as code, they can be life-cycled and recreated whenever needed. 

Why Do Everything-as-Code? 

  1. Traceability – storing your config in git implies controls are in place to track who / why a  config has changed. Changes can be applied and reverted and are tracked to a single user  who made the change. 
  2. Repeatable – moving from one cloud provider to another should be simple in modern  application development. Picking a deployment target should be like shopping around for  the best price that week. By storing all things as code, systems can be re-created in  moments in various providers. 
  3. Tested – Infra and code can be rolled out, validated and promoted into production  environments with confidence and assurance it will behave as expected. 
  4. Phoenix server – No more fear of a server's configuration drifting. If a server needs to be patched or randomly dies, it's OK. Just create it again from the stored configuration.
  5. Shared understanding – When a cross-functional team uses 'Everything as Code' to describe all the parts of the product they are developing together, they increase the shared understanding between Developers and Operations: they speak the same language to describe the state of the product and use the same frameworks to accomplish their goals.

31. Infrastructure as a Service

– Infrastructure as a service (IaaS) is a type of cloud computing service that offers essential  compute, storage and networking resources on demand, on a pay-as-you-go basis. IaaS is one of  the four types of cloud services, along with software as a service (SaaS), platform as a service  (PaaS) and serverless. 

IaaS lets you bypass the cost and complexity of buying and managing physical servers and  datacenter infrastructure. Each resource is offered as a separate service component, and you  only pay for a particular resource for as long as you need it. A cloud computing service  provider like Azure manages the infrastructure, while you purchase, install, configure and  manage your own software—including operating systems, middleware and applications. 

An IaaS provider provides the following services:

  1. Compute: Computing as a Service includes virtual central processing units and virtual main memory for the VMs that are provisioned to the end users.
  2. Storage: the IaaS provider provides back-end storage for storing files.
  3. Network: Network as a Service (NaaS) provides networking components such as routers, switches, and bridges for the VMs.
  4. Load balancers: load balancing capability is provided at the infrastructure layer.

32. Platform as a Service

– Platform as a service (PaaS) is a complete development and deployment environment in the  cloud, with resources that enable you to deliver everything from simple cloud-based apps to  sophisticated, cloud-enabled enterprise applications. You purchase the resources you need from  a cloud service provider on a pay-as-you-go basis and access them over a secure Internet  connection.

Like IaaS, PaaS includes infrastructure—servers, storage and networking—but also middleware,  development tools, business intelligence (BI) services, database management systems and more.  PaaS is designed to support the complete web application lifecycle: building, testing, deploying,  managing and updating. 


Example: Google App Engine, Force.com, Joyent, Azure. 

PaaS providers provide programming languages, application frameworks, databases, and other tools.

33. Software as a Service

Software as a service (or SaaS) is a way of delivering applications over the Internet—as a service.  Instead of installing and maintaining software, you simply access it via the Internet, freeing  yourself from complex software and hardware management. 

SaaS applications are sometimes called Web-based software, on-demand software, or hosted  software. Whatever the name, SaaS applications run on a SaaS provider’s servers. The provider  manages access to the application, including security, availability, and performance. 

SaaS provides a complete software solution which you purchase on a pay-as-you-go basis from  a cloud service provider. You rent the use of an app for your organisation and your users connect  to it over the Internet, usually with a web browser. All of the underlying infrastructure,  middleware, app software and app data are located in the service provider’s data center. The  service provider manages the hardware and software and with the appropriate service  agreement, will ensure the availability and the security of the app and your data as well. SaaS  allows your organisation to get quickly up and running with an app at minimal upfront cost. 

Advantages of SaaS 

Gain access to sophisticated applications. To provide SaaS apps to users, you don’t need to  purchase, install, update or maintain any hardware, middleware or software. SaaS makes even  sophisticated enterprise applications, such as ERP and CRM, affordable for organisations that lack  the resources to buy, deploy and manage the required infrastructure and software themselves. 

Pay only for what you use. You also save money because the SaaS service automatically scales  up and down according to the level of usage. 

Use free client software. Users can run most SaaS apps directly from their web browser without  needing to download and install any software, although some apps require plugins. This means  that you don’t need to purchase and install special software for your users.

Mobilise your workforce easily. SaaS makes it easy to “mobilise” your workforce because users  can access SaaS apps and data from any Internet-connected computer or mobile device. You  don’t need to worry about developing apps to run on different types of computers and devices  because the service provider has already done so. In addition, you don’t need to bring special  expertise onboard to manage the security issues inherent in mobile computing. A carefully  chosen service provider will ensure the security of your data, regardless of the type of device  consuming it. 

Access app data from anywhere. With data stored in the cloud, users can access their  information from any Internet-connected computer or mobile device. And when app data is  stored in the cloud, no data is lost if a user’s computer or device fails. 

34. Everything as a Service

– XaaS is short for Everything-as-a-Service and sometimes Anything-as-a-Service. XaaS reflects  how organizations across the globe are adopting the as-a-Service method for delivering just  about, well, everything. Initially a digital term, XaaS can now apply to the real, non-digital world,  too. 

Many B2B organizations provide as-a-Service offerings. These offerings are neatly sliced up and  portioned out to create customized services that meet the specific needs of each client at a price  that makes sense for them. In this way, XaaS could be simply thought of as a combination of SaaS,  PaaS, and IaaS offerings.

35. API

– An API (Application Programming Interface) is a set of functions that allows applications to  access data and interact with external software components, operating systems, or  microservices. To simplify, an API delivers a user response to a system and sends the system’s  response back to a user. 

How an API works 

An API is a set of defined rules that explain how computers or applications communicate with  one another. APIs sit between an application and the web server, acting as an intermediary layer  that processes data transfer between systems. 

Here's how an API works (illustrated in the sketch after these steps):

  1. A client application initiates an API call to retrieve information—also known as a request. This request is processed from an application to the web server via the API’s Uniform Resource Identifier (URI) and includes a request verb, headers, and sometimes, a request  body. 
  2. After receiving a valid request, the API makes a call to the external program or web server.
  3. The server sends a response to the API with the requested information.
  4. The API transfers the data to the initial requesting application.
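The same four steps, sketched with the widely used requests library; the endpoint is a placeholder rather than a real API, so the call is illustrative only.

```python
# The request/response cycle above, expressed with the `requests` library.
# The endpoint is a placeholder, not a real API.
import requests

def fetch_user(user_id: int) -> dict:
    # 1. the client builds a request: URI, verb (GET), and headers
    response = requests.get(
        f"https://api.example.com/users/{user_id}",
        headers={"Accept": "application/json"},
        timeout=5,
    )
    # 2.-3. the API forwards the request and relays the server's response
    response.raise_for_status()
    # 4. the client receives the data and uses it
    return response.json()

if __name__ == "__main__":
    print(fetch_user(42))
```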

36. Everything as API

– Abstracting this definition from computer programming, an API is the interface of a system  towards other systems which defines the operations it can perform, the expected input and the  resulting output. 

Every aspect, or certain aspects, of a system can be exposed that way to allow linking it with  other systems for a larger purpose. Whatever happens inside of the system after taking the input  and before returning the output, is irrelevant for the use of the API.

37. Software Defined Storage

– Software-defined storage (SDS) enables users and organizations to uncouple or abstract storage  resources from the underlying hardware platform for greater flexibility, efficiency and faster  scalability by making storage resources programmable. 

This approach enables storage resources to be an integral part of a larger software-designed data  center (SDDC) architecture, in which resources can be easily automated and orchestrated rather  than residing in siloes. 

How software-defined storage works 

Software-defined storage is an approach to data management in which data storage resources  are abstracted from the underlying physical storage hardware and are therefore more flexible.  Resource flexibility is paired with programmability to enable storage that rapidly and  automatically adapts to new demands. This programmability includes policy-based management  of resources and automated provisioning and reassignment of the storage capacity. 

Benefits of software-defined storage 

  • Future-proof with independence from hardware vendor lock-in 
  • Programmability and automation 
  • Faster changes and scaling up and down 
  • Greater efficiency 
38. Software Defined Network

– Software-Defined Networking (SDN) is an emerging architecture that is dynamic, manageable,  cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today’s  applications. This architecture decouples the network control and forwarding functions enabling  the network control to become directly programmable and the underlying infrastructure to be  abstracted for applications and network services. The OpenFlow protocol is a foundational  element for building SDN solutions. 

39. Software Defined Data Center

Software-defined data center (SDDC; also: virtual data center, VDC) is a marketing term that extends virtualization concepts such as abstraction, pooling, and automation to all data center resources and services to achieve IT as a service (ITaaS). In a software-defined data center, "all elements of the infrastructure — networking, storage, CPU and security – are virtualized and delivered as a service."

SDDC support can be claimed by a wide variety of approaches. Critics see the software-defined  data center as a marketing tool and “software-defined hype,” noting this variability. 

The software-defined data center encompasses a variety of concepts and data-center  infrastructure components, with each component potentially provisioned, operated, and  managed through an application programming interface (API). Core architectural components  that comprise the software-defined data center include the following: 

  • computer virtualization – a software implementation of a computer
  • software-defined networking (SDN), which includes network virtualization – the process of merging hardware and software resources and networking functionality into a software-based virtual network 
  • software-defined storage (SDS), which includes storage virtualization, suggests a service interface to provision capacity and SLAs (Service Level Agreements) for storage, including performance and durability 
  • management and automation software, enabling an administrator to provision, control, and manage all software-defined data-center components

A software-defined data center differs from a private cloud, since a private cloud only has to  offer virtual-machine self-service, beneath which it could use traditional provisioning and  management. Instead, SDDC concepts imagine a data center that can encompass private, public,  and hybrid clouds.  

40. Containerization

– Containerization is a form of virtualization where applications run in isolated user spaces, called  containers, while using the same shared operating system (OS). A container is essentially a fully  packaged and portable computing environment: 

Everything an application needs to run—its binaries, libraries, configuration files, and  dependencies—is encapsulated and isolated in its container. The container itself is abstracted  away from the host OS, with only limited access to underlying resources—much like a lightweight  virtual machine (VM). As a result, the containerized application can be run on various types of  infrastructure—on bare metal, within VMs, and in the cloud—without needing to refactor it for  each environment. 

With containerization, there’s less overhead during startup and no need to set up a separate  guest OS for each application since they all share the same OS kernel. Because of this high  efficiency, containerization is commonly used for packaging up the many individual microservices  that make up modern apps.
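As a small illustration, the sketch below starts a containerized process from code. It assumes the Docker Engine is running and the docker Python SDK is installed; the image and command are arbitrary examples.

```python
# Illustration of running a packaged application in an isolated container from
# code. Assumes the Docker daemon is running and the `docker` Python SDK is
# installed (pip install docker); the image and command are just examples.
import docker

client = docker.from_env()

# Everything the process needs ships in the image, while the OS kernel is
# shared with the host; the container is removed once it exits.
output = client.containers.run(
    "alpine:3.19", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
```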

41. Virtualization

– Virtualization is the process of running a virtual instance of a computer system in a layer  abstracted from the actual hardware. Most commonly, it refers to running multiple operating  systems on a computer system simultaneously. To the applications running on top of the  virtualized machine, it can appear as if they are on their own dedicated machine, where the  operating system, libraries, and other programs are unique to the guest virtualized system and  unconnected to the host operating system which sits below it. 

There are many reasons why people utilize virtualization in computing. To desktop users, the  most common use is to be able to run applications meant for a different operating system  without having to switch computers or reboot into a different system. 

42. Virtualization vs containerization

Virtualization runs complete guest operating systems on top of a hypervisor, so each virtual machine carries its own OS kernel; this gives strong isolation at the cost of heavier images and slower startup. Containerization shares the host OS kernel and isolates only user space, so containers are more lightweight, start faster, and pack more densely onto the same hardware, with somewhat weaker isolation than a full VM.

43. Immutable Infrastructure based Deployment

– Immutable infrastructure refers to servers (or VMs) that are never modified after deployment.  With an immutable infrastructure paradigm, servers work differently. We no longer want to  update in-place servers. Instead, we want to ensure that a deployed server will remain intact,  with no changes made. 

When you do need to update your server, you’ll replace it with a new version. For any updates,  fixes, or modifications, you’ll: 

  • Build a new server from a common image, with appropriate changes, packages, and services included
  • Provision the new server to replace the old one
  • Validate the server
  • Decommission the old server

Every update (environment) is exact, versioned, timestamped, and redeployed. The previous  servers are still available if you need to roll back your environment. This change almost entirely  removes troubleshooting for broken instances, and these new servers are quick to deploy thanks  to OS-level virtualization. 

Benefits of immutable infrastructure

Perhaps the biggest benefit of immutable infrastructure is how quickly engineers can replace a  problematic server, keeping the application running with minimal effort. But that’s just the  beginning—immutable infrastructure offers several benefits: 

  • Infrastructure is consistent and reliable, which makes testing more straightforward.
  • Deployment is simpler and more predictable.
  • Each deployment is versioned and automated, so environment rollback is a breeze.
  • Errors, configuration drifts, and snowflake servers are mitigated or eliminated entirely.
  • Deployment remains consistent across all environments (dev, test, and prod).
  • Auto-scaling is effortless thanks to cloud services.
44. Deployment Rollout and Rollback

– A deployment rollout is a deployment strategy that slowly replaces previous versions of an  application with new versions of an application by completely replacing the infrastructure on  which the application is running. 

– Deployment rollback means going back to the previous version of the deployment if there is an issue with the current deployment.

45. Monolithic Application

– A monolithic application describes a single-tiered software application in which the user  interface and data access code are combined into a single program from a single platform. 

A monolithic application is self-contained and independent from other computing applications.  The design philosophy is that the application is responsible not just for a particular task, but can  perform every step needed to complete a particular function. Today, some personal finance  applications are monolithic in the sense that they help the user carry out a complete task, end to  end, and are private data silos rather than parts of a larger system of applications that work  together. Some word processors are monolithic applications. These applications are sometimes  associated with mainframe computers. 

In software engineering, a monolithic application describes a software application that is  designed without modularity. Modularity is desirable, in general, as it supports reuse of parts of  the application logic and facilitates maintenance by allowing repair or replacement of parts of  the application without requiring wholesale replacement. 

For example, consider a monolithic ecommerce SaaS application. It might contain a web server, a load balancer, a catalog service that serves up product images, an ordering system, a payment function, and a shipping component.

46. Distributed Application

– A distributed application consists of one or more local or remote clients that communicate  with one or more servers on several machines linked through a network. With this type of  application, business operations can be conducted from any geographical location. For  example, a corporation may distribute the following types of operations across a large region,  or even across international boundaries: 

  • Forecasting sales 
  • Ordering supplies 
  • Manufacturing, shipping, and billing for goods 
  • Updating corporate databases 

For example, the basic parts of a distributed application might be spread across three machines (original diagram omitted).

47. Service Oriented Architecture

– Service-Oriented Architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. Its principles are independent of vendors and other technologies. In service-oriented architecture, several services communicate with each other in one of two ways: by passing data, or by two or more services coordinating an activity. This is just one definition of Service-Oriented Architecture; an article on Wikipedia goes into much more detail.

Characteristics Of Service-Oriented Architecture 

While the defining concepts of Service-Oriented Architecture vary from company to company,  there are six key tenets that overarch the broad concept of Service-Oriented Architecture. These  core values include: 

  • Business value
  • Strategic goals
  • Intrinsic inter-operability
  • Shared services
  • Flexibility
  • Evolutionary refinement

48. Microservices

– Microservices are an architectural and organizational approach to software development where  software is composed of small independent services that communicate over well-defined APIs.  These services are owned by small, self-contained teams. 

Microservices architectures make applications easier to scale and faster to develop, enabling  innovation and accelerating time-to-market for new features.

Benefits of Microservices 

  • Agility
  • Flexible Scaling
  • Easy Deployment
  • Technological Freedom
  • Resilience
  • Reusable Code

49. Blue Green Deployment

Blue green deployment is an application release model that gradually transfers user traffic  from a previous version of an app or microservice to a nearly identical new release—both  of which are running in production. 

The old version can be called the blue environment, while the new version can be known as the green environment. Once production traffic is fully transferred from blue to green, blue can stand by in case a rollback is needed, or it can be pulled from production and updated to become the template upon which the next update is made.

There are downsides to this continuous deployment model. Not all environments have  the same uptime requirements or the resources to properly perform CI/CD processes like  blue green. But many apps evolve to support such continuous delivery as the enterprises  supporting them digitally transform.
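A minimal sketch of the cut-over at the heart of blue green deployment: two environments and a single pointer (in practice a load balancer or DNS record) that decides which one receives production traffic. The hostnames are hypothetical.

```python
# Minimal blue-green switch sketch: two identical environments and a single
# pointer deciding which one serves production traffic. Hostnames are made up.
ENVIRONMENTS = {
    "blue": "https://blue.internal.example.com",    # current version
    "green": "https://green.internal.example.com",  # new version
}

active = "blue"

def route_request() -> str:
    """Return the backend that should serve the next production request."""
    return ENVIRONMENTS[active]

def cut_over(target: str) -> None:
    """Switch all traffic to `target` once it has passed its checks."""
    global active
    active = target

if __name__ == "__main__":
    print("before cut-over:", route_request())
    cut_over("green")                      # green verified, flip the pointer
    print("after cut-over: ", route_request())
    cut_over("blue")                       # rollback is just flipping back
    print("after rollback: ", route_request())
```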

50. Canary Deployment

– A canary deployment is a deployment strategy that releases an application or service incrementally to a subset of users. All infrastructure in a target environment is updated in small phases (e.g. 2%, 25%, 75%, 100%). Because of this control, a canary release is the least risky of all the deployment strategies.
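A small sketch of the routing side of a canary release: a configurable fraction of requests is sent to the new version, and that fraction is raised in phases while the canary is watched. The percentages and the random split are illustrative only.

```python
# Canary routing sketch: a configurable fraction of traffic goes to the new
# version, raised in phases (e.g. 2% -> 25% -> 75% -> 100%) while error rates
# are watched. The split here is random purely for illustration.
import random

canary_fraction = 0.02  # start by exposing 2% of traffic to the new version

def choose_version() -> str:
    return "canary" if random.random() < canary_fraction else "stable"

if __name__ == "__main__":
    sample = [choose_version() for _ in range(10_000)]
    print(f"canary share: {sample.count('canary') / len(sample):.1%}")
```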

51. A/B Testing

– In A/B testing, different versions of the same service run simultaneously as “experiments” in  the same environment for a period. Experiments are either controlled by feature flags toggling,  A/B testing tools, or through distinct service deployments. It is the experiment owner’s  responsibility to define how user traffic is routed to each experiment and version of an  application. Commonly, user traffic is routed based on specific rules or user demographics to  perform measurements and comparisons between service versions. Target environments can  then be updated with the optimal service version.
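One common way to implement the routing rules mentioned above is deterministic bucketing: hash the user id so that a given user always sees the same variant. The experiment name and split below are hypothetical.

```python
# A/B assignment sketch: hash the user id so each user consistently lands in
# the same experiment variant; variants can then be measured and compared.
import hashlib

def variant_for(user_id: str, experiment: str = "new-checkout", split: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # stable value in [0, 1]
    return "B" if bucket < split else "A"

if __name__ == "__main__":
    for user in ("alice", "bob", "carol"):
        print(user, "->", variant_for(user))    # same user, same variant every time
```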

52. Zero Downtime Designs

– Zero downtime design is a deployment method in which your website or application is never down or in an unstable state during the deployment process. To achieve this, the web server doesn't start serving the changed code until the entire deployment process is complete.
