I recently attended the 2013 PuppetConf in San Francisco and spent most of the Thursday in what we affectionately call the "neckbeard" session. It was the "Product and Technologies" stream and seemed to be highly tailored to the relative minority of developers at the conference, or at least to the people in charge of developing and maintaining the low-level detail contained in Puppet manifests. Those at one with the Puppet DSL. As a developer this seemed like the only stream I would be interested in, seeing as four of the other sessions had sysadmin written all over them and the last one seemed to be targeted at use cases for sales people. In fact, one of the other devs here at UpGuard asked us at the end of the first day if we'd been called sysadmins all day. Thankfully, I hadn't. It is also a common generalization that Puppet is designed for sysadmins, having a model-based way of defining infrastructure, as opposed to the code-based approaches employed by products like Chef. I went into the start of Thursday's talks with this generalization clouding my judgement.
At UpGuard we've got many decades of experience in large enterprises and are very familiar with the sorts of problems that arise in those sorts of environments. Even for those who have lived through it though, it can be hard to explain to people who haven't. That's why we require all our new employees to read The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr and George Spafford. It does a great - and surprisingly entertaining - job of describing these issues. It also explains how the lessons learnt from years of Lean Manufacturing apply directly to IT. We know that no tool is a silver bullet, but if the employees at Parts Unlimited had UpGuard then it may have been an entirely different story. I've chosen some key excerpts from the book so that we could see how things may have been different.
OK, it's Labor Day weekend. I don't suppose any of you want to read about application configuration. Time to bring a bit of culture into matters then. Arts and culture are very important to us here at UpGuard. OK, so that's a stretch. We may not be brogrammers but we have a lot of Australians working here. Art appreciation often only extends as far as stubby holder (koozie) design. Having said that, and contrary to some rumors that are currently doing the rounds, we can read. I'm a bit of a Cormac McCarthy fan myself (insert disclaimer here that I was into his stuff before Oprah tarnished his cool), and my favorite book of his is Blood Meridian. I won't go into too much detail other than to say if you're into epic tales of debauchery you should check it out.
What is Quality Assurance? Well, in time-honoured fashion I shall quote directly from Wikipedia: "Quality assurance (QA) refers to the engineering activities implemented in a quality system so that requirements for a product or service will be fulfilled." What does this mean for DevOps, though? Well, the end product is the software or application being provided, so most people focus on its requirements when talking QA and DevOps.
Information Technology Service Management (ITSM) may not have the sex appeal of Agile or the buzz of DevOps, but it lays a crucial foundation for each within the Enterprise today. So, whether you consider it a necessary evil or the only way to run your IT department, here are a few resources that may come in handy.
When I attended the DevOpsDays event in Mountain View (well, Santa Clara really) a couple of months ago I started writing a blog post on my impressions. I was a bit distracted at the time though after having had a minor twitter spat with a well known DevOps proponent on the first morning. I won't go into any detail here other than to say that it was sparked after I made a comment that I felt "DevOps" vendors need to be doing more to ease the transition for large Enterprises.
There is no doubt that the DevOps movement has gone mainstream. When even IBM and HP are dedicating sites to it there is no longer any question. If we were to place it on the Gartner Hype Cycle even the most devoted proponents would have to admit that it’s rapidly approaching the “Peak of Inflated Expectations”. What does this mean for you as a CIO? Should you steer clear of the movement entirely until things calm down a bit? Not at all. Should you be cautious in your approach to “implementing” DevOps though? Absolutely.
There's a hidden killer lurking below the surface of every Enterprise IT project. No, it's not Kevin, that sysadmin who spends a disturbing amount of time in the bathroom each day. It's not even that 400 page requirements document, although from a conservationist's point of view the PM's insistence on reprinting it every few days can't be doing the world too much good. So what is it? Well, let me give you a clue:
Most Enterprise CMDB offerings are a joke. They've always been a joke. Just another white elephant system sucking time and money out of IT Budgets. What most, if not all, become are simply inventory systems. They're not even good for that half the time.
Whether you use it or not, we're all familiar with the popular microblogging service Twitter. With over 200 million users, maintaining its infrastructure is no easy task, and it has been plagued by several outages in recent times, including one this week. A product with a die-hard user base can face severe backlash for even the slightest of outages.
As there's a lot of interest out there in the various IT automation tools on offer I thought I'd do a series of blogs covering getting started on each. In particular I wanted to put them to the test regarding how simple it is to go from zero to "Hello World" *. This way I get to play the truly dumb user (not much of a stretch, I know), which is kinda fun too.
You're never safe in Enterprise IT. Just when you feel you've gotten a handle on the last hot topic you're hit with another. SOA, BPM, Agile, ITIL; you feel like screaming "Enough!" but you know resistance is futile. Gartner has said it's important, so you know full well that you'll be asked to "do" it by management.
Designing and building a race car using the typical lifecycle process used within an Enterprise IT department. Sounds like a good idea, no? No. It's a terrible idea, but it's fun to paint a picture of how it may work out to illustrate what goes wrong today in so many Enterprises. For this exercise I'm going to assume that there are four main groups. The design team (analogous to IT Architects), the manufacturing team (development), the safety team (security) and the mechanics (operations). Here is how things may turn out.
After taking a week off, the weekly updates are back! Here's some of the news that interested us over the past week:
The convergence of IT development and operations into DevOps has come a long way, and yet the two should have grown together like Siamese twins. Developers need sysadmins as much as sysadmins need developers. Collaboration is the way winning software and infrastructure are built. And that's all the market wants: effective systems with which to run businesses. DevOps can claim substantial ground today, thanks to the persistence of players from both sides of the sysadmin-developer divide. While the segment is still evolving, various tools have been developed to help the Devs and the Ops collaborate more effectively.
Here's some of the IT news that caught our attention over the past week:
Conference season continues this week, notably with Opscode's #ChefConf in San Francisco (which is going on as I'm typing this up). Here's the latest from #ChefConf and other IT news that interested us this week.
IT testing automation is an important concern of businesses, and a growing field in which IT professionals are able to make a name for themselves. If you are not already involved in automated IT testing, here are a few of the most important skills to have when holding an automation related position.
With Devopsdays London recently concluded and the Open Networking Summit having just wrapped up, here's some of the IT news that interested us this week.
Here’s some of the news we came across that interested us this week. The OpenDaylight Project – a pretty big development for Software-Defined Networking:
It's been really interesting to watch the dramatic uptick in activity around the automation space the last year or two. I don't need to go into too much detail on the benefits that automation offers here; consistency and scalability are two of the more prominent that come to mind. What has struck me, though, is that it feels like the way that companies are going about it is missing a key step.
There has never been a hotter time to be in financial markets: nanosecond response times, the ability to affect global markets in real time, and lucrative spot deals in dark pools are all the rage. For companies doing business in these conditions, it is a technical arms race, worthy of a Reagan-era analogy.
Those of us who haven't worked in the Enterprise probably don't know a lot about ITIL (Information Technology Infrastructure Library). ITIL may even be a source of amusement for them. C'mon, they would say, how much practical use can you get from a methodology that is defined through a set of books that is actually referred to as a "library"?
In this blog, we're constantly covering and discussing the concept of DevOps. At this point, most folks in departments related to a company's infrastructure (i.e. developers, system administrators) have some understanding of the idea. But where do these people learn about this relatively young concept?
DevOps is a concept that has materialized fairly recently, yet is already adored by so many people. Obviously, the fact that it bridges the chasm between software development and operations is pretty exciting, but there seems to be something extra that people love. So without throwing around too many corporate buzzwords (besides “DevOps”, of course), what could that extra something something be?
Software-Defined Networking (SDN) has become a hot topic of late, and with good reason. This technology has the potential to dramatically improve the configuration of networking solutions. Traditionally, network intelligence has been static and distributed, baked into individual routers and switches. That model is problematic with today's vast and ever-expanding data pool, where centralized, automated management is quickly becoming the ideal. SDN is an answer to this challenge, and a good one.
Many enterprise network workers are now adopting automation technology as a means of completing operational tasks, and of creating a more efficient environment within an IT enterprise. One of the advantages of adopting IT automation is that it helps to deliver optimal IT management, without the need for any significant capital investment.
Configuration testing should not only be an essential step in the overall development process, but also an important part of installing new apps on web and application servers. Without proper testing, apps can often fail or be left open to vulnerabilities, and exposure to attack by hackers or viruses can lead to needless expense and excessive time spent correcting these problems. It is not unusual for app developers to overlook the need for configuration testing. This is because they believe that using automated methods like Chef and Puppet (or other systems that handle the deployment of their products) will work just fine. They feel that by using these fully automated processes they can test consistency, reproduce outputs adequately, and determine whether things are working as predicted. This kind of thinking can delay a timely product delivery, produce unnecessary costs, and create additional workloads to address vulnerabilities that surface later in production.
There are two constants in the world of High Frequency Trading (HFT): massive volumes of data, and the need for programs that process this data and act on it at blisteringly fast speeds. These systems change frequently as the needs of the companies using them change and as the rules and regulations of market organizations and governments change. The potential for market instability is a big concern for both companies and regulatory bodies, and major market incidents caused simply by algorithm errors have put a sharp focus on the quality and performance of HFT software. The DevOps philosophy can provide serious advantages to HFT companies, and this article will take a look at some of the main issues and concerns of the business and summarize how DevOps can help.
Industry conferences are one of the best opportunities for networking and keeping up to date on the latest trends in software development, ITIL best-practice implementation, innovative approaches to automation, new methods of tackling ever-expanding configuration drift, and increasingly tricky compliance issues. IT professionals focused on automation will need to read between the lines to find a convention well suited to their interests, as these events are massive productions and are usually more broadly focused.
OK, so I probably just closed out 100 games of Bulls**t Bingo in the title of this blog post but I'll stand by it. You want actual agility in what you do? You need a safety net. That safety net is automated testing.
We've made some additions to the platform that we're pretty excited about and would like to share: an even easier way to add tests, service/daemon support for the application, and job scheduling for those of you who like to know that your configuration is gold even when you're not watching.
By applying Chef or Puppet to automate your system architecture, you can provision your environment piece by piece and start up applications in a heartbeat. This is, ideally, the pinnacle of configuration management achievement: a time-saving mechanism that is highly repeatable.
OK, so I was supposed to be blogging this weekend but I was bored of blogging so I instead decided to combine two things I'm terrible at, illustration and comedy, and do a comic instead. I deserve to be punished for this so please, flame away :)
Why IT Automation Needs Configuration Testing
While there are many benefits to cloud computing, one of the major difficulties is migrating from the in-house servers to a cloud computing platform. Configuration issues can develop when a company does not have the right tools, and when it lacks clear communication.
There is no disputing the fact that cloud computing has led to a number of remarkable changes in the way many companies do business. Cloud-based solutions have been instrumental in streamlining IT functions and other business processes, resulting in considerable savings in terms of both time and money.
Cloud CMDB - Where to Next?
Cloud providers and IT shops must engage in unit testing for infrastructure management. A cloud provider is an organization that provides a component of cloud computing to businesses or individuals. The cost is usually based on a per-use model.
The Sinkhole That is Manual Configuration Testing
Testing is a crucial part of software development: it involves the execution of a program with the goal of locating errors. Successful tests are able to uncover new errors that can then be corrected before the software is released.
Manually testing environment configurations in the enterprise with scripts is difficult simply because there are so many factors involved. Application, hardware, and device compatibility issues can arise at any point during implementation, and many of them are hard to pin down in the pre-implementation stage. Worse yet, the larger the network infrastructure, the more time-consuming and complicated the testing and implementation processes become. This is where Environment Drift and Stateless Systems come into play.
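One way to reason about environment drift is as a diff between a recorded baseline of expected state and what a node actually reports now. A toy sketch in Python; the snapshot keys and values are made up for illustration:

```python
def find_drift(expected, actual):
    """Compare an expected configuration snapshot against actual state.

    Returns a dict mapping each drifted key to an (expected, actual) pair;
    keys missing from `actual` are reported with an actual value of None.
    """
    drift = {}
    for key, want in expected.items():
        have = actual.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift

# Illustrative snapshots, as a scanning tool might capture on two runs.
baseline = {"ntp.server": "pool.ntp.org", "ssh.port": "22",
            "app.version": "1.4.2"}
node_now = {"ntp.server": "pool.ntp.org", "ssh.port": "2222"}

for key, (want, have) in sorted(find_drift(baseline, node_now).items()):
    print(f"{key}: expected {want!r}, found {have!r}")
```

Running this reports the changed SSH port and the missing application version, which is exactly the kind of gap that manual scripted testing tends to miss as the number of nodes and settings grows.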
Before delivery to the intended party, a system should be tested to determine whether the requirements set out in the contract have been met. Configuration acceptance testing is the fundamental means of dispelling doubts that the system will fall short of its intended purpose. It is an essential part of the testing phase of the Software Development Life-Cycle (SDLC), and perhaps the most vital in its category. How the components of the system interact is the surest means of determining its susceptibility to frequent errors and, ultimately, how strongly its implementation will be resisted. Configuration acceptance testing is pivotal to the SDLC, and as such should be an integral part of any firm's Application Lifecycle Management (ALM) policy. It reveals bugs and inadequacies in the system, aiding error correction and the formulation of a suitable plan of action in the event that undiscovered errors manifest and affect the system after it has been implemented.
So I was stumbling around the web this morning and I found myself in the LinkedIn DevOps group. Browsing around I came across several discussions on "DevOps" tools. Now a lot of companies and projects out there use the DevOps keyword but not many of them would label themselves a "DevOps Tool". For good reason too. It doesn't take much googling to be assured that DevOps, like Agile, is not about tools. DevOps is about principles, methods and practices.
We've been saying it for a while now here at UpGuard but there's something pretty special about Australia's Wollongong when it comes to tech. Talented engineers abound in this not so sleepy New South Wales coastal idyll.
This is a pretty common response we get from people we're explaining our product to. There is logic to it but we don't believe it's necessarily reasonable. To illustrate our viewpoint on this we thought we'd paraphrase a conversation we had with a prospective client recently.
OK. Time to take a deep breath. Time to reflect on what has been a crazy six months and an even crazier week. As you may have heard, we got funded. Funded to the tune of $1.2M, and by a list of investors we wouldn't have dared to dream having on board when we started our journey with Startmate at the beginning of the year. One name in particular has been hard to miss in the coverage we've received and we are truly proud to have Peter Thiel involved through Valar's investment in UpGuard, but one investment did not the round make. Also on board are:
You've used Chef/Puppet to automate your infrastructure; you can provision your virtual environment from scratch and deploy all your applications in minutes. It's magical. You've achieved Configuration Management Nirvana. What you've built is repeatable, saves time, increases efficiency and removes human error.
Exciting times for us here at UpGuard as we've just launched and are now set up for people to request early access to our platform. We should be live for this purpose in the next couple of weeks, so there's not much time left to get your name on the list.
Cyber resilience is a fundamental change in understanding and accepting the true relationship between technology and risk. IT risk (or cyber risk, if you prefer) is actually business risk, and always has been. And the cybersecurity industry, for what it's worth, has generally avoided this concept because it goes against the narrative that their respective offerings—whether it's a firewall, IDS, monitoring tool, or otherwise—would be the one-size-fits-all silver bullet that can keep businesses safe. But reality tells a different story.