Unless you've been hiding under a rock in a datacenter from the last century, chances are you've heard of Docker, the leading software container solution on the market. If so, you've likely heard of its chief competitor, CoreOS, as well. Let's see how the two stack up in this comparison.
We've covered more than a handful of IT monitoring solutions, but few dominate their categories like SolarWinds and Microsoft SCOM, the two contenders in this match-up. From the network down to the servers and applications, SolarWinds' suite of solutions ensures that the whole stack is performing optimally; similarly, SCOM (part of Microsoft's System Center 2016) provides monitoring across applications, workloads, and infrastructure. Let's see how they stack up in this head-to-head comparison.
The enterprise's infrastructure monitoring needs have evolved drastically over the years; increasingly, firms need operational intelligence regarding the health and performance of a myriad of IT assets: physical and virtual servers, applications and services, security devices, and more. System Center Operations Manager (SCOM) and Splunk are two leading solutions on the market for monitoring datacenter health and performance; let's see how they compare for keeping the enterprise IT ship afloat.
More often than not, catastrophic outages and security compromises can be traced back to simple misconfigurations and unpatched systems. This isn't to say that elements like pilot error and the workings of nefarious actors are not common—they certainly are—but IT asset misconfigurations tend to be the lowest common denominator in most of these scenarios. That being the case, a plethora of solutions focus on systems management for maintaining strong security and quality of service. Tanium and Microsoft System Center Configuration Manager (SCCM) are two such solutions competing in this space.
Log management solutions play a crucial role in an enterprise's layered security framework: without them, firms have little visibility into the actions and events occurring inside their infrastructures that could either lead to data breaches or signify a security compromise in progress. Splunk and ELK (a.k.a. BELK or the Elastic Stack) are two of the leading enterprise solutions in this category; let's see how they stack up in this comparison.
Services are the programs that run in the background on servers. All operating systems come with a set of base services, and most software utilizes services as well. Effectively managing servers means controlling these services: knowing what is there, what should and shouldn't be running, whether services will automatically start on (re)boot, and which accounts services should and shouldn't run as. We'll go through each of these pieces to see how a strong service management policy can improve reliability and security in the data center, and why configuration management and testing are key.
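To make the "what should and shouldn't be running" idea concrete, here is a minimal sketch of a service audit. The service names, policy sets, and the systemctl-style sample output below are all hypothetical; on a real Linux host the raw data would come from something like `systemctl list-units --type=service` (or `Get-Service` on Windows):

```python
# Illustrative service-policy audit. The policy sets and the sample
# status output below are made up for demonstration purposes.

EXPECTED_RUNNING = {"sshd", "nginx", "chronyd"}   # must always be up
FORBIDDEN = {"telnet", "rsh"}                     # must never be up

sample_output = """\
sshd.service      loaded active running
nginx.service     loaded active running
telnet.service    loaded active running
cups.service      loaded inactive dead
"""

def audit_services(raw):
    """Return (missing, unexpected) service name sets from raw status text."""
    running = {
        line.split()[0].removesuffix(".service")
        for line in raw.splitlines()
        if "running" in line.split()
    }
    missing = EXPECTED_RUNNING - running   # expected but not running
    unexpected = running & FORBIDDEN       # running but explicitly banned
    return missing, unexpected

missing, unexpected = audit_services(sample_output)
print("missing:", sorted(missing))              # chronyd isn't up
print("forbidden but running:", sorted(unexpected))  # telnet is up
```

In practice, a configuration management tool would both detect and correct this drift on every run, which is exactly why the article treats CM and testing as inseparable from service policy.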
This article is part of our ongoing How-To series, which focuses on ways to keep your environment ready and yourself sane in real-world scenarios.
When we speak of the DevOps and continuous integration/continuous delivery (CI/CD) toolchain, we're referring to a superset of tools—many with overlapping capabilities—for helping organizations achieve faster and safer deployment velocity. This encompasses a broad range of solutions: provisioning tools, orchestration tools, testing frameworks, configuration management (CM) and automation platforms, and more. Comparisons between CM products usually steal the show (e.g., Puppet vs. Chef), but in this case we'll compare two orchestration and management tools for provisioning infrastructures: Terraform and CloudFormation.
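To ground the comparison, here is a minimal, hypothetical Terraform configuration declaring a single AWS instance; the resource names, region, and AMI ID are placeholders, not values from either product's documentation:

```hcl
# Minimal illustrative Terraform config; all values are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-web"
  }
}
```

CloudFormation would express the same infrastructure as an `AWS::EC2::Instance` resource in a JSON or YAML template; the key operational difference is that Terraform plans and applies changes from its own state, while CloudFormation delegates execution to the AWS service itself.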
Puppet and Chef have both evolved significantly since we covered them last—suffice it to say, we're long overdue in revisiting these two heavy-hitters. In this article we'll take a fresh look at their core components along with the new integrations and expansions that continue to position them as leading enterprise IT automation platforms.
Puppet Enterprise is a great platform for automating the configuration and deployment of applications to servers, but as a sophisticated infrastructure management tool with numerous interconnected moving parts, it can be a challenge to troubleshoot when things go awry. This is especially true when dealing with cascading errors that are hard to isolate for resolution. What follows is a short list of some of the more common issues one may encounter, and a few tips on how to troubleshoot and resolve them.
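As a starting point, a few first-pass diagnostic commands can quickly narrow things down. These are standard Puppet CLI invocations, but exact flags and output vary by version, and `puppetserver ca` assumes Puppet 6 or later:

```shell
# Run the agent once in the foreground with verbose debug output:
puppet agent --test --debug

# Confirm which server and environment the agent is pointed at:
puppet config print server environment

# On the primary server, check for certificate signing problems:
puppetserver ca list --all
```

A surprising share of "cascading" failures trace back to the basics these commands expose: an agent pointed at the wrong server, a stale environment, or an unsigned certificate.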
The biggest players in the web server business, Apache and IIS, had the field to themselves for a long time. Now, however, they have to contend with a few seriously capable upstarts, the most prominent of which is Nginx (pronounced "engine-x"). This young turk, first developed in 2002, boasts a growing, dedicated following among webmasters. Nginx's popularity stems mainly from its open-source licensing and its desirable combination of high performance and low resource consumption; owing to that shared open-source philosophy, it is most often compared to Apache.
For today's busy sysadmin, systems health and performance monitoring tools like Microsoft's System Center Operations Manager (SCOM) and the open-source Nagios are invaluable. They enable at-a-glance monitoring of large numbers of servers throughout a network, which is doubly critical for geographically dispersed networks such as WANs or MANs. Though they broadly achieve the same goals, SCOM and Nagios come at it from quite different directions.
Today's sysadmin and DevOps professionals have to manage, on average, far more servers hosting far more applications than their counterparts from as recently as the 1990s. Blame this on the exponential growth of organizational computing, coupled with the emergence of new technologies such as virtualization and cloud computing.
With the huge growth in virtualization and cloud computing, there has also been a corresponding increase in the average number of virtual machines (VMs) that today's admin has to manage. Manually creating a full VM on today's hypervisors, like VMware and Hyper-V, is a real pain: it means capturing a snapshot of the entire machine configuration and then replicating it to another machine. As you can imagine, VM images eat up a lot of space and time.
Cyber resilience represents a fundamental change in understanding and accepting the true relationship between technology and risk. IT risk (or cyber risk, if you prefer) is, and always has been, business risk. The cybersecurity industry, for what it's worth, has generally avoided this concept because it runs counter to the narrative that its respective offerings—whether a firewall, IDS, monitoring tool, or otherwise—are the one-size-fits-all silver bullet that keeps businesses safe. But reality tells a different story.