Cloud Vendors Release CI/CD Tools

This was also released under a slightly different name on Amalgam Insights.

Development organizations continue to feel increasing pressure to produce better code more quickly. To help accomplish that faster-better philosophy, a number of methodologies have emerged that help organizations quickly merge individual code changes, test them, and deploy to production. While DevOps is actually a management methodology, it is predicated on an integrated pipeline that drives code from development to production deployment smoothly. To achieve these goals, companies have adopted continuous integration and continuous deployment (CI/CD) tool sets. These tools, from companies such as Atlassian and GitLab, help developers merge individual code into the deployable code bases that make up an application and then push it out to test and production environments.
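
The mechanics these tools automate can be sketched in a few lines. The following is a hypothetical illustration, in Python, of the merge-test-deploy sequence at the heart of a CI pipeline; the branch name, test command, and deploy script are all invented for the example, and real tools add triggers, parallel stages, and rollback.

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline stage, failing fast if it exits non-zero."""
    print("-->", " ".join(cmd))
    subprocess.run(cmd, check=True)

def pipeline():
    # Integrate: pull the latest changes into the local code base.
    run(["git", "pull", "origin", "main"])
    # Verify: run the test suite; a failure stops the pipeline here.
    run(["python", "-m", "pytest", "tests/"])
    # Deliver: hand off to a (hypothetical) deploy script.
    run(["./deploy.sh", "staging"])

if __name__ == "__main__":
    try:
        pipeline()
    except subprocess.CalledProcessError as err:
        print("Pipeline stopped:", err, file=sys.stderr)
        sys.exit(1)
```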

Cloud vendors have lately been releasing their own CI/CD tools to their customers. In some cases, these are extensions of existing tools, such as Microsoft Visual Studio Team Services on Azure. Google’s recently announced Cloud Build, as well as AWS CodeDeploy and CodePipeline, are CI/CD tools developed specifically for their respective cloud environments. Cloud CI/CD tools are rarely all-encompassing and often rely on other open source or commercial products, such as Jenkins or Git, to achieve a full CI/CD pipeline.
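
To make the degree of integration concrete, the sketch below uses AWS’s boto3 SDK to start a run of an existing CodePipeline pipeline and report the state of its stages. The pipeline name is hypothetical, and the snippet assumes AWS credentials and a region are already configured.

```python
import boto3

# Assumes credentials and region are configured (e.g., via environment
# variables or ~/.aws/config). The pipeline name is hypothetical.
codepipeline = boto3.client("codepipeline")

# Trigger a new run of an existing pipeline.
execution = codepipeline.start_pipeline_execution(name="my-app-pipeline")
print("Started execution:", execution["pipelineExecutionId"])

# Report the current state of each stage (Source, Build, Deploy, ...).
state = codepipeline.get_pipeline_state(name="my-app-pipeline")
for stage in state["stageStates"]:
    latest = stage.get("latestExecution", {})
    print(stage["stageName"], latest.get("status", "not yet run"))
```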

These products represent more than just new entries into an increasingly crowded CI/CD market. They are clearly part of a longer-term strategy by cloud service providers to become so integrated into the DevOps pipeline that moving to a new vendor or adopting a multi-cloud strategy would be much more difficult. Many developers start with a single cloud service provider in order to explore cloud computing and deploy their initial applications. Adopting the cloud vendor’s CI/CD tools embeds that vendor deeply in the development process. The cloud service provider is no longer sitting at the end of the development pipeline; it is integrated into and vital to the development process itself. Even when a cloud service provider’s CI/CD tools support hybrid cloud deployments, they are designed first and foremost for the cloud vendor’s own offerings. Google Cloud Build and Microsoft Visual Studio Team Services certainly follow this model.

These native products pose a danger to commercial CI/CD vendors outside the cloud platforms, which now find themselves competing with tools integrated into the sales and technical environment of the cloud vendor. Purchasing products from a cloud vendor is as easy as buying anything else from the cloud portal, and the tools are immediately aware of the services the cloud vendor offers. No fuss, no muss.

This isn’t a problem for companies committed to a particular cloud service provider. Using native tools designed for the primary environment offers better integration, less work, and an ease of use that is hard to achieve with external tools. The cost of these tools is often utility-based and, hence, elastic based on the amount of work product flowing through the pipeline. The trend toward native cloud CI/CD tools also helps explain Microsoft’s purchase of GitHub. GitHub, while cloud agnostic, will be much more powerful when completely integrated into Azure – for Microsoft customers anyway.

Building tools that strongly embed a particular cloud vendor into the DevOps pipeline is clearly strategic even if it promotes monoculture. There will be advantages for customers as well as cloud vendors. It remains to be seen if the advantages to customers overcome the inevitable vendor lock-in that the CI/CD tools are meant to create.

Monitoring Containers: Do you know what’s happening inside your cluster?

This was originally published on May 18th on Amalgam Insights. For reasons I can’t fathom, I forgot to push the publish button.

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code, including security, load balancing, network management, and web servers. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in a network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge, including deadlocked communication and slow connections. Instead of occurring in monolithic network appliances, these problems are now distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that are dying and respawning. A container can crash, which may cause an orchestrator such as Kubernetes to respawn it. A badly behaving container may do this many times; a sketch of how to spot such restarts follows this list.
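
Kubernetes exposes the restart behavior described above through its API. Here is a minimal sketch using the official Kubernetes Python client to flag containers that have been respawned; the "default" namespace is an assumption for the example, and a real monitor would watch continuously rather than poll once.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# The "default" namespace is an assumption for this example.
for pod in v1.list_namespaced_pod("default").items:
    for status in pod.status.container_statuses or []:
        # A climbing restart count is the telltale sign of a
        # crash-and-respawn loop (e.g., CrashLoopBackOff).
        if status.restart_count > 0:
            print(f"{pod.metadata.name}/{status.name}: "
                  f"{status.restart_count} restarts")
```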

These are just a few examples of the types of problems that a container cluster can have that negatively affect a production system. None of these are new to applications in general; applications and services can fail, lock up, or slow down in other architectures too. There are simply a lot more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with the containers and Kubernetes themselves.

To combat these issues, a generation of products and open source projects is emerging, either retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the market for application monitoring, but with containers in mind from the outset. Just as exciting are the open source projects that are gaining steam. Prometheus (application monitoring), OpenTracing (a vendor-neutral distributed tracing API), and Jaeger (distributed transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.
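
To make the idea concrete, here is a minimal sketch of instrumenting a service with the official Prometheus Python client (prometheus_client); the metric names, port, and simulated workload are arbitrary choices for the example. The service exposes an HTTP endpoint that a Prometheus server running in the cluster can scrape.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names here are arbitrary choices for the example.
REQUESTS = Counter("myapp_requests_total", "Total requests handled")
LATENCY = Histogram("myapp_request_seconds", "Request latency in seconds")

@LATENCY.time()  # Record how long each call takes.
def handle_request():
    REQUESTS.inc()  # Count every request.
    time.sleep(random.uniform(0.01, 0.1))  # Simulated work.

if __name__ == "__main__":
    # Expose metrics at http://localhost:8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        handle_request()
```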

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the application components, and take advantage of container and Kubernetes APIs. This helps sysops have a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.

As IT departments start to roll scalable container clusters into production, knowing what is happening within them is essential. Thankfully, the ecosystem for monitoring is evolving quickly, driven equally by companies and open source communities.