
Monitoring Containers: Do you know what's happening inside your cluster?


This was originally published on May 18th on the Amalgam Insights site. For reasons I can’t fathom, I forgot to push the publish button.

 

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies move container clusters into production, however, one issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break monolithic components apart into smaller services. This means that what was once a single big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code: security, load balancing, network management, web servers, and so on. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in the network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge, including deadlocked communication and slow connections. Instead of being confined to monolithic network appliances, these problems are now distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that die and respawn. A container can crash, which may cause an orchestrator such as Kubernetes to respawn it. A badly behaving container may crash and respawn repeatedly.
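The crash-and-respawn pattern above is detectable in the status data an orchestrator reports. As a rough sketch, here is a watchdog check that flags containers whose restart count exceeds a threshold. The field names (`containerStatuses`-style entries with a `restartCount`) mirror what Kubernetes reports for a pod, but the sample data and the threshold are invented for illustration:

```python
# Sketch: flag containers stuck in a crash/respawn loop.
# The structure mimics Kubernetes pod status data ("containerStatuses"
# entries carrying a "restartCount"), but the sample below is fabricated.

def find_crash_looping(container_statuses, threshold=3):
    """Return names of containers whose restartCount exceeds threshold."""
    return [
        status["name"]
        for status in container_statuses
        if status.get("restartCount", 0) > threshold
    ]

statuses = [
    {"name": "web", "restartCount": 0},
    {"name": "auth", "restartCount": 7},   # respawning repeatedly
    {"name": "cache", "restartCount": 2},
]

print(find_crash_looping(statuses))  # only "auth" exceeds the default threshold
```

A real watchdog would pull live pod status from the Kubernetes API rather than a hardcoded list, but the filtering logic is essentially this.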

These are just a few examples of the problems a container cluster can have that negatively affect a production system. None of them are new to applications in general; applications and services can fail, lock up, or slow down in other architectures too. There are simply many more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with the containers and Kubernetes themselves.

To combat these issues, a generation of products and open source projects is emerging that are either retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the application monitoring market, but with containers in mind from the outset. Just as exciting are the open source projects that are gaining steam. Prometheus (application monitoring), OpenTracing (network tracing), and Jaeger (transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.
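To make the Prometheus model concrete: each service exposes numeric metrics over HTTP in a simple text format that a Prometheus server periodically scrapes. A minimal sketch of building that exposition payload follows; a production service would use the official prometheus_client library and serve this over HTTP, and the metric name and labels here are illustrative only:

```python
# Sketch: render a counter metric in the Prometheus text exposition format.
# Real services use the official prometheus_client library; this only
# shows the shape of the payload a scrape returns.

def render_metrics(name, help_text, samples):
    """Format a counter metric with labeled samples for scraping."""
    lines = [
        f"# HELP {name} {help_text}",
        f"# TYPE {name} counter",
    ]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Hypothetical request counts for one service in the cluster.
requests_total = {
    (("service", "auth"), ("code", "200")): 1024,
    (("service", "auth"), ("code", "500")): 3,
}

print(render_metrics("http_requests_total",
                     "Total HTTP requests handled.", requests_total))
```

Because every container can expose an endpoint like this, the monitoring data lives right next to the application components, which is exactly the property discussed below.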

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the application components, and take advantage of container and Kubernetes APIs. This gives sysops a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.
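The tracing side can be sketched in the same spirit. Distributed tracers such as Jaeger tie timed “spans” of work together across services by propagating a shared trace ID, so a slow or deadlocked hop in the container network shows up as one long span in an otherwise fast request. This toy version is not the real OpenTracing API, just the underlying idea:

```python
import time
import uuid

# Toy sketch of distributed-tracing spans: each unit of work records its
# timing plus a shared trace_id so a collector (e.g. Jaeger) can
# reassemble the request's path across services. Not the actual
# OpenTracing API -- only an illustration of the concept.

class Span:
    def __init__(self, operation, trace_id=None, parent=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared across services
        self.parent = parent
        self.start = None
        self.duration = None

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.duration = time.monotonic() - self.start
        return False

# A request enters the "checkout" service, which calls a downstream service.
with Span("checkout") as root:
    with Span("charge-card", trace_id=root.trace_id, parent=root) as child:
        time.sleep(0.01)  # stand-in for a slow downstream call

print(root.trace_id == child.trace_id)  # prints True: spans share one trace ID
```

Because the child span carries the root’s trace ID, a collector can reconstruct which downstream service made the whole request slow.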

As IT departments start to roll scalable container clusters into production, knowing what is happening within them is essential. Thankfully, the monitoring ecosystem is evolving quickly, driven equally by companies and open source communities.

Microsoft Azure Plus Informatica Equals Cloud Convenience


This was originally published on June 4, 2018 on the Amalgam Insights site.

 

Two weeks ago (May 21, 2018), at Informatica World 2018, Informatica announced a new phase in its partnership with Microsoft. Slated for release in the second half of 2018, Informatica’s Integration Platform as a Service, or IPaaS, will be available on Microsoft Azure as a native service. This is a different arrangement than Informatica has with other cloud vendors such as Google or Amazon AWS; in those cases, Informatica is more of an engineering partner, developing connectors for their on-premises and cloud offerings. Instead, Informatica IPaaS will be available from the Azure Portal and integrated with other Azure services, especially Azure SQLServer, Microsoft’s cloud database, and Azure SQL Data Warehouse.

For Informatica customers who already use Azure, this creates real convenience. Instead of creating server instances on Azure and then installing Informatica software from scratch, customers will be able to create an IPaaS instance directly from the Azure portal. This lets customers stand up an IPaaS instance much faster and with less effort. Microsoft Azure customers, especially mid-market customers, who may have found an Informatica server IPaaS installation time-consuming or daunting will now have an easier option too. Until now, the only way to get an Informatica installation without hand-installing it was to purchase a cloud instance directly from Informatica, which would have required two different cloud relationships – Informatica for IPaaS and Microsoft for everything else. Amalgam Insights predicts that this will make Informatica IPaaS much more attractive to the existing Microsoft Azure customer base. The potential is especially high for customers who deploy SQLServer and are actively looking to move those databases to Azure SQLServer.

This partnership also provides Informatica Intelligent Cloud Services customers with a true multi-cloud option. Customers that we spoke to at Informatica World 2018 were interested in multi-cloud – many were already architecting for multi-cloud – and clearly excited by the potential to support their existing Informatica cloud offerings with an easy alternative. While the reasons companies use multi-cloud strategies vary – backup, extra capacity, segmenting architecture, or simply the unique value in different clouds – most Informatica customers pursuing multi-cloud were excited to have another cloud option that didn’t require manual installation.

Informatica and Microsoft are natural partners. PowerBI makes an excellent front end for the line-of-business users Informatica is pursuing. Similarly, PowerBI users need well-integrated, conditioned data to create meaningful dashboards and visualizations. SQLServer is a popular data source for Informatica’s platform; having Informatica IPaaS on Azure will make the combination of Azure SQLServer and PowerBI more powerful by providing clean data from many databases in one view. This partnership is a win for both Informatica and Microsoft customers, especially their shared customers. We look forward to more partnerships like this with other cloud vendors in the future.