Call it an experiment gone wrong: a bug in an experimental feature of the OpenSSH client was disclosed today, one highly vulnerable to exploitation and potentially capable of leaking cryptographic keys to malicious attackers. First discovered and announced by the Qualys Security Team, the vulnerability affects OpenSSH versions 5.4 through 7.1. Here's what you need to know about the bug, including remediation tips.
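The widely published mitigation is to disable the experimental roaming feature in the SSH client configuration until a patched release (7.1p2 or later) is installed. A minimal config fragment:

```
# /etc/ssh/ssh_config (system-wide) or ~/.ssh/config (per-user)
Host *
    UseRoaming no
```

The option can also be passed ad hoc with `ssh -oUseRoaming=no user@host` for a one-off connection.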
Yes, it's that time of the year again. Time for global electronics vendors and eager enthusiasts from far and wide to converge at the world's largest annual consumer electronics/technology tradeshow. CES 2016 is in full swing, and IoT innovations have unsurprisingly taken center stage once again. Of course, who can forget the debut of Samsung's "Smart" Fridge at last year's show, followed by the publicized hacking of the device soon thereafter? Judging by this year's exhibitor turnout, consumers can expect to see more hacked IoT devices making headlines in 2016. The following are the top 7 hackable IoT devices to watch out for at CES this year.
2015 may have come and gone, but the effects of last year's data breaches are far-reaching, both for the millions of affected consumers and internet users and for the companies and organizations whose systems were breached. Such events are no less devastating in terms of brand damage, and 2016 will undoubtedly bring a heightened collective security awareness to organizations and consumers alike.
It's been barely a month since the VTech data breach resulted in the theft of over 6.4 million children's records, and yet another massive compromise affecting kids' data privacy is upon us—this time involving venerable children's toy and accessory brand Sanrio (of Hello Kitty fame). The data leak resulted in the exposure of details from more than 3 million user accounts: first/last names, birth dates, genders, countries, and email addresses, all openly available to the public. With children becoming prime targets for cyber criminals seeking low-hanging fruit, companies that deal with and manage minors' data are increasingly under pressure to bolster their security controls and practices.
Methodologies and frameworks may come and go, but at the end of the day, tools are what make the IT world go 'round. DevOps is no exception: as the term/practice/movement/[insert-your-descriptor-here] rounds its 6th year since entering the public IT vernacular, a bounty of so-called DevOps tools has emerged for bridging development and operations, ostensibly to maximize collaborative efficiencies in the IT and service delivery lifecycle. Consequently, a common issue these days is not a dearth of competent tools, but how to integrate available tooling into one cohesive toolchain.
Advertising-based revenue models may be a standard facet of today's internet businesses, but firms peddling free/freemium services are still on the hook for providing strong information security to their user bases. In fact, they arguably have an even greater responsibility to protect user data than paid-for services. So how do events like yesterday's massive data breach involving free web hosting provider 000webhost transpire? In a word, negligence. Gross negligence, to be precise.
The Network Time Protocol (NTP) has been seeing quite a bit of publicity this year, starting with June's NTP Leap Second Bug, which promised (but greatly under-delivered) digital calamity of Y2K proportions. Ultimately, the fallout amounted to little more than sporadic Twitter interruptions, but last week newly discovered critical vulnerabilities in the timeworn clock synchronization protocol increased the urgency of NTP-hardening projects like NTPSec.
It's practically a national tradition that Americans collectively spend about one year out of every four obsessing over the group of people who are in the running for a job which is undoubtedly awful to actually have. Every part of their campaign is put under heavy scrutiny: their clothes, their hair, their past, their associations, and today, their websites. Let's examine how candidates are faring online using data from tools such as BuiltWith, Alexa, Google, and Twitter.
Known vulnerability assessment (evaluating a machine's state for the presence of files, packages, configuration settings, and other items known to be exploitable) is a solved problem. There are nationally maintained databases of vulnerabilities and freely available repositories of tests for their presence. Search for "free vulnerability scanner" and you'll see plenty of options. So why are breaches due to known vulnerabilities still so common? Why, according to the Verizon Data Breach Investigations Report, were 99.9% of the vulnerabilities exploited in data breaches last year over a year old?
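The mechanics of such an assessment are simple: compare what's installed against a feed of known-bad versions. A minimal sketch in Python, where both the installed-package list and the vulnerability feed are hypothetical stand-ins for real data sources (a package manager query and a database like the NVD):

```python
# Minimal known-vulnerability check: flag installed packages whose
# versions fall at or below a known-vulnerable version.
# Both data sets below are illustrative stand-ins, not real feeds.

def parse_version(v):
    """Turn '1.0.1f'-style strings into comparable tuples."""
    parts = []
    for chunk in v.replace("-", ".").split("."):
        num = "".join(c for c in chunk if c.isdigit())
        suffix = chunk[len(num):]  # letter suffix, e.g. '1f' -> (1, 'f')
        parts.append((int(num) if num else 0, suffix))
    return tuple(parts)

def find_vulnerable(installed, known_bad):
    """Return names of packages installed at or below a known-bad version."""
    hits = []
    for name, version in installed.items():
        if name in known_bad and parse_version(version) <= parse_version(known_bad[name]):
            hits.append(name)
    return sorted(hits)

installed = {"openssl": "1.0.1f", "bash": "4.3.30", "nginx": "1.9.9"}
known_bad = {"openssl": "1.0.1g", "bash": "4.3.27"}  # vulnerable at or below these
print(find_vulnerable(installed, known_bad))  # -> ['openssl']
```

The hard part in practice isn't this comparison; it's keeping the feed current and actually scanning every node, which is exactly where organizations fall behind.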
UpGuard's core functionality solves a really basic problem (how is everything configured, and is it all the same across like nodes?) by scanning configuration state and visualizing anomalies. We're pretty happy with how we've solved that problem, so we've started expanding to other fundamental problems that deserve elegant solutions. One of those is vulnerability management. Sure, there are ways to detect vulnerabilities today, but they suck to use and are overpriced. Since we have the core architecture in place to scan and evaluate machine state, testing for vulnerabilities is a natural addition.
Though the widely publicized failure of the ObamaCare website (a.k.a Healthcare.gov) back in October of 2013 has all but faded from memory, the public sector’s persistent lag in technological innovation coupled with recent calamitous data breaches means there is no shortage of press fodder for critics. What will it take for the U.S. government to transcend its current dearth of agility and innovation?
By now, news of the Experian/T-Mobile hack has traveled far and wide, stirring up public ire and prompting demands for a broader investigation around the data breach. And while the event is just one of many high profile compromises to make headlines lately, it stands out from the rest for a number of reasons. How does the rising tide of cyber threats impact consumers in a world that revolves so heavily around credit?
Though still a relatively new player on the market, group messaging upstart Slack has steadily expanded its footprint into the business and enterprise arena with its polished, streamlined offering for team collaboration. For the uninitiated, Slack is essentially a tool for collaborating amongst teams—chat rooms on steroids, if you will. And like UpGuard, Slack’s integration capabilities are among its most lauded features. When used in conjunction with each other, the two together can give organizations a highly effective feedback loop for staying on top of system/configuration changes and vulnerabilities.
Technology professionals walk a perpetual tight rope between innovation and security—new computing paradigms emerge and IT security scrambles behind to catch up. Nowhere is this more evident than in cloud computing and the rising frequency of data breaches targeting cloud infrastructures. And as computing enters another transitional epoch—namely the age of the Internet of Things (IoT)—similar challenges are emerging, but with much more at stake this time around.
A rising concern amongst IT professionals is the degree to which security vendors and products are themselves susceptible to compromises. This past weekend critical flaws were discovered in the products of not one, but two leading security vendors: FireEye and Kaspersky Lab. Because all systems are exploitable—even security products—a layered approach to security is crucial for maintaining a strong security posture in today's cyber landscape. Enterprises heavily reliant on a single monolithic solution are best advised to diversify their security strategies to combat ongoing threats.
For those still holding out for a better alternative to SSL, it’s time to give up the ghost. Though implementations like OpenSSL have seen many a vulnerability as of late, the protocol remains the best ubiquitous technology we have for end-to-end encryption. And with Google’s announcement last year regarding SSL’s impact on a website’s search rankings, the question stands: why are so many organizations still holding out on implementing SSL site-wide?
In a news flash buried beneath a slew of other notable security news items, UCLA Health revealed last week it was the victim of a massive data breach that left 4.5 million patient records compromised. Like previous attacks on Anthem and Premera Blue Cross, the intrusion gave hackers access to highly sensitive information: patient names, addresses, dates of birth, Social Security numbers, medical conditions, and more. And while matters around healthcare IT have taken center stage as of late, the ineffective security at leading institutions of higher education and research is equally distressing.
For those of you harboring secrets behind a website paywall, a word of warning: your skeletons are now easy targets for cyber criminals and nefarious 3rd parties around the globe. The recent data breach and compromise of 3.5 million Ashley Madison user accounts may turn out to be the largest case of broad-scale extortion the world has ever seen, but for many, the outcome is hardly surprising.
The OpenSSL Project Team announced a high severity bug in their open source implementation of SSL today that could allow the bypassing of checks on untrusted certificates (read: man-in-the-middle attacks). Find out which versions of OpenSSL are impacted, and what you need to patch this critical vulnerability.
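If the flaw in question is the alternative-chains certificate bug OpenSSL disclosed in July 2015 (CVE-2015-1793), only a narrow band of releases carries it; earlier versions never shipped the faulty logic. A quick membership check sketched below uses the affected-version list as published in the OpenSSL advisory (worth verifying against the advisory itself before relying on it):

```python
# Releases affected by the alternative certificate chain verification bug,
# per the OpenSSL security advisory (fixed in 1.0.1p and 1.0.2d).
AFFECTED = {"1.0.1n", "1.0.1o", "1.0.2b", "1.0.2c"}

def needs_patch(version):
    """True if this OpenSSL release carries the cert-check bypass."""
    return version in AFFECTED

print(needs_patch("1.0.2c"))  # True: upgrade to 1.0.2d
print(needs_patch("1.0.1p"))  # False: already patched
```

You can find the running version on a host with `openssl version` and compare it against the set above.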
For those of you planning on enjoying the sunset on June 30, 2015, an extra second of bliss awaits, compliments of the Earth's inconsistent wobble. However, if Y2K sent you running for the hills, start packing again. Analysts predict technological fallout ranging from undeliverable tweets to outright digital armageddon, but for faithful IT folks with more grounded concerns like SLAs and business continuity, keeping critical systems up and running trumps all else. Fortunately, resolving potential issues related to the Leap Second Bug is a fairly straightforward matter, as long as you know what to look for and where to find it.
Full stack development is all the rage these days, and for good reason: developers with both front-end web development skills and back-end/server coding prowess clearly offer substantially more value to their respective organizations. The ability to traverse the entire stack competently also makes interacting and cooperating with operations and security an easier affair—a key tenet of DevOps culture.
The question is indeed a contentious one, never failing to incite heated arguments from all camps. Many ways exist to cut the cake in this regard—WhiteHat Security took a stab at it in a recent edition of its Website Security Statistics Report, where it analyzed statistics around web programming languages and their comparative strengths in security.
When it comes to IT security, how do you roll? Many tools exist, but the fact is that in most cases, to do it right, you have to roll your own. This is especially true in today's environments, where infrastructures can vary widely in composition from organization to organization. The truth is that factors such as degree of DevOps and Agile adoption, skill set of IT staff, corporate culture, and even line of business come into play when crafting a security solution for an organization. How well these tools align with the organization ultimately dictates the success or failure of a company's security architecture. And when existing tools don't fit or don't work well, sometimes the only option is to build them yourself.
Home Depot. Target. Neiman Marcus. Albertsons. Michaels. Most Americans have shopped at one of these national chains recently. If you're one of them, your credit card information may already be on the black market. And if you're a retailer using a POS system, proposed legislation like the Consumer Privacy Protection Act may hold you financially accountable in the event of a data breach. Here's the skinny on RAM scraping, and what can be done to prevent it.
Every year, Verizon compiles data from a list of prominent contributors for its annual report highlighting trends and statistics around data breaches and intrusions from the past year. The 70-page Data Breach Investigations Report (DBIR) covers a myriad of data points related to victim demographics, breach trends, attack types, and more. Reviewing these shifting security trends can give indications as to how well-postured one’s organization is against future threats. And just in case you’ve got your hands full patching server vulnerabilities, we’ve done the legwork of expanding on a few critical key points from the report.
The Ponemon Institute just released some unsurprisingly bleak findings in its annual study on healthcare data privacy/security, including data showing deliberate criminal attacks now accounting for most medical data breaches. The report goes on to illustrate how the healthcare industry, sitting on a treasure trove of valuable data, is ill-equipped to counter these attacks. Perhaps forward-thinking enterprise healthcare leaders should start considering DevSecOps as a viable strategy for surviving the perils of the information age.
Yesterday, open source content management system (CMS) WordPress made headlines with the announcement of yet another critical zero-day vulnerability. The newly discovered flaw is markedly different from other WordPress vulnerabilities surfacing as of late: in this case, the problem exists in WordPress' core engine and codebase rather than in 3rd party plugins and extensions. WordPress.org was quick to release a patch for the vulnerability and has since advised users to upgrade to WordPress 4.2.1, the latest version of the CMS.
In a widely publicized report released last week titled "FAA Needs a More Comprehensive Approach to Address Cybersecurity As Agency Transitions to NextGen," the US Government Accountability Office (GAO) details the potential vulnerabilities and dangers of offering in-flight wifi services during air transit. By essentially granting customers IP networking capabilities for their devices, airlines may be opening up their avionics systems for attacks:
As a group of concepts, DevOps has converged on several prominent themes including continuous software delivery, automation, and configuration management (CM). These integral pieces often form the pillars of an organization’s DevOps efforts, even as other bigger pieces like overarching best practices and guidelines are still being tried and tested. Being that DevOps is a relatively new paradigm - movement - methodology - [insert your own label here], standards around it have yet to be codified and set in stone. Organizations are left to identify tools and approaches most suitable for their use cases, and will either swear by or disparage them depending on their level of success.
Sarbanes-Oxley (SOX) compliance—it’s like checking for holes in your favorite pair, but with consequences beyond public embarrassment. For publicly traded companies, the ordeal is a bit like income tax preparation for the rest of us: a painful, time-consuming evil that—if not carried out judiciously—may result in penalties and fines. Throw in an additional bonus of prison time for good measure, if you’re a C-level executive and discrepancies are found on your watch. Yes, the SEC is serious about SOX compliance, and you should be, too—especially if you’re in IT.
Audits are one of life's greatest pleasures, right up there with root canals and childbirth. Firms love them, too; alongside tax audits, financial audits, records audits, and compliance audits make life splendid for businesses. Unfortunately, compliance is an unwieldy but necessary evil - that is, unless you're America's 2nd biggest health insurer.
We rewrote the UpGuard agent as a connection manager to reap the benefits of agentless monitoring. Why get rid of agents? Because agents must be updated. They are like a free puppy: it's easy to take them home, but you have to feed them, take them to the vet, and clean up after them for years afterward. The new connection manager allows for an agentless architecture while keeping all SSH/WinRM activity behind your firewall. It's fast, light, easy to maintain, and secure.
We know you're sick of updating OpenSSL so we'll keep this short. There is a new SSL vulnerability named FREAK with a published proof of concept. FREAK affects a significant portion of websites, including big names like American Express and the NSA. Like POODLE, FREAK takes advantage of support for legacy cryptographic protocols.
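FREAK works by tricking a client into accepting weak, export-grade RSA keys that many servers still offer for legacy compatibility. A rough first check is to look for export-grade suites in a server's advertised cipher list (OpenSSL names them with an `EXP-` prefix); the list below is an invented example, and in practice you'd obtain it from a scan such as `openssl s_client -connect host:443 -cipher EXPORT`:

```python
# Flag export-grade cipher suites in a server's advertised list.
# OpenSSL prefixes 1990s export-strength suites with "EXP-".
def export_grade(ciphers):
    return [c for c in ciphers if c.startswith("EXP-")]

advertised = [
    "ECDHE-RSA-AES128-GCM-SHA256",
    "EXP-RC4-MD5",          # 40-bit RC4: FREAK-exploitable
    "EXP-DES-CBC-SHA",      # 40-bit DES: FREAK-exploitable
    "AES256-SHA",
]
print(export_grade(advertised))  # -> ['EXP-RC4-MD5', 'EXP-DES-CBC-SHA']
```

Any hit means the server should have export ciphers disabled in its TLS configuration, independent of patching the client-side library.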
In Part 1 of this article, we presented an overview of Amazon AWS and UpGuard, and discussed how the two marry the best in cloud computing and DevOps. We also learned how UpGuard is not just the premier solution for configuration monitoring, control and automation of AWS offerings like EC2 and S3, but can also work with any number of RESTful services. But enough waxing philosophical—time to put theory into action. And what better way than to follow a fictional organization as it sets up UpGuard monitoring for its AWS cloud infrastructure?
Over the years, Amazon has become the poster child for all things cloud-related, and for good reason: as one of the first vendors to embrace the cloud computing paradigm, the company was the first to offer widely accessible commercial cloud infrastructure services, launching EC2 and S3 as part of AWS back in 2006. And now, almost a decade later, the tech giant continues to dominate with a 27% share of the cloud services market. It's therefore not surprising that for many, Amazon comes to mind first when thinking of cloud computing.
When we set out to create a cloud-based tool for configuration monitoring, we used the tools we knew and wrote UpGuard using JRuby. For our application, JRuby had many good qualities: getting started only required a one line install, the agent only needed to talk out on port 443, and it was platform agnostic. Using JRuby we demonstrated the value of system visibility, attracted our first cohort of customers, and raised the funds to expand UpGuard. Now we're not only scrapping that agent, we're moving away from agent-based architecture altogether. Here's why.
UpGuard was initially designed to solve the problems we faced every day in the world of enterprise IT. Technical debt, documentation rot, and configuration drift consumed untold hours of our lives. UpGuard was designed to make those problems a thing of the past.
Email is a mission-critical application that is relied on to power business communication and collaboration capabilities on a day-to-day basis. It is a vital component of modern business and being able to send and receive email securely and reliably is of paramount importance. If you were to make a list of applications to track and control configuration changes of, email would be at the top of that list.
Today we're proud to show off one of the newest features of UpGuard: support for your CloudFlare-powered website. As a next-generation CDN (Content Delivery Network), CloudFlare purports to make your site load faster, optimize your content, provide a swathe of ridiculously powerful and easy-to-understand security mechanisms, deliver exclusive analytics insights, and even offer an app marketplace. To give you an idea of just how big this Cisco competitor has become: as of 2016, CloudFlare delivers over 1 trillion page views per month, has at least half a million customers, and claims to have protected those customers from hundreds of billions of incidents. Adding your CloudFlare site to UpGuard is easy and enables you to discover, track, and control all of your CloudFlare DNS and Zone configuration settings, including A, CNAME, MX, and SPF records.
Having just started working for UpGuard as a software engineer, my journey toward understanding UpGuard and its place in the IT automation ecosystem is just beginning. This puts me in a unique position to write a series of blog posts that starts from the ground up in getting started with UpGuard. Today we'll work through the steps required to connect and scan an Ubuntu Linux server using SSH.
It's a topic that comes up frequently for us here at UpGuard. Our customers are always keen to know how they can take control and simplify their configuration management processes. We've all experienced at some time or another that issue that was the result of a database migration that didn't complete, a column that has mysteriously changed data type or an old version of a stored proc or view being restored to a new database.
I was perusing Twitter-land recently and ran across a tweet about a DevOps meetup underway in the Los Angeles area. It went on to note that the first question posed to the entire group was: What are the minimum requirements for DevOps? Huh?!
DevOps is a relatively new concept in comparison to Agile development, so it should come as little surprise that IT enterprises have a myriad of experiences with and instances of Agile approaches. And there is no need to throw everything out and start over: Agile and DevOps are complementary. But what if, after careful deliberation inside your enterprise, you've decided to evolve from Agile to DevOps? How can you ensure that you keep all the good things Agile provided yet apply some of the learnings from early adopters of DevOps principles? Building a DevOps state of mind requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming "yes, we're a DevOps shop." At the end of the day, all aspects of the people, process, and technology continuums are impacted by DevOps. Here are 5 key steps to work through when implementing DevOps in an IT enterprise where Agile rules:
The rise of DevOps teams is upon us. The most recent State of DevOps survey found that 16% of respondents were part of a DevOps department with 55% of respondents self-identifying as DevOps engineers or systems engineers. Interesting. And if you simply Google ‘DevOps jobs’ you get over 4.5 million hits. So like it or not, this DevOps thing is going mainstream.
Most leading IT enterprises have some form of Agile development in place in their organization. Accordingly, many organizations, websites, blogs, and companies exist to provide information about and support for Agile development. Here is a list of 10 key online resources to support your Agile journey.
Puppet Labs just released the 2014 State of DevOps Report. The research team interviewed companies from multiple industries and various sizes, from startups to global firms with over 10,000 employees and had over 9,200 respondents in all. The report shows us that not only is DevOps working within the enterprise, but it is also driving higher employee satisfaction.
It goes without saying that automation in the enterprise is critical to keeping up with today's dynamic business demands. Unfortunately, automation isn't a set-it-and-forget-it process. You need to carefully monitor the environment to know exactly how much to automate and when to adjust for environment changes. To exacerbate the issue, the concept of DevOps is still confusing to many, and some still inappropriately equate DevOps to automation. But that isn't stopping leading enterprises from creating automation initiatives, spinning up DevOps skunkworks projects, and naming whole teams DevOps for the sake of it.
For the past 3 months I've been publishing a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable. As much information as is contained here, the reality remains that teamwork ultimately comes down to practicing a small set of principles over a long period of time. DevOps success is not a matter of mastering subtle, sophisticated theory, but rather embracing common sense with uncommon levels of discipline and persistence.
This is the fifth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
It is almost summertime, so it's time to dust off your reading material and cozy up with a good book. Recently I asked the expert panel from our most recent DevOps webcast what number one resource they would recommend to a friend who wanted to brush up on the ins and outs of Enterprise DevOps. In truth, they had a hard time narrowing it down to just a few. But if you're looking to stock up your bookshelf on all things DevOps, you can't go wrong with this Top DevOps Reading List.
As has been said many times: DevOps is not a technical problem, it is a business problem. The struggle for large, entrenched enterprise IT shops shouldn't be underestimated, and the legacy factor has to be dealt with (a.k.a. "why fix something that isn't broken?"). However, there is mounting evidence to suggest that independent, discrete teams are in fact becoming more common in these large enterprises. While the fully-embedded model (sometimes called NoOps because there is no visible/distinct Ops team) that the unicorns have deployed works for them, a discrete team that learns how to 'do DevOps' makes a lot of sense for the larger enterprise.
As an IT manager or engineer, you can sometimes get so deep in the details that it becomes challenging to step back and answer the fundamental questions. Sure, you wrote the scripts that automate your systems. You also train users to understand the tools that implement configuration management. However, you still struggle to answer why your business should have configuration management teams, automation, and tools. Don't worry: if this has ever happened to you, you're not alone.
We received a lot of positive feedback regarding our last article on Controlling SQL Configuration Drift so thought it might be a good idea to continue along that same theme of analysis and follow it up with an article about DNS configuration and some simple steps you can take to prevent configuration drift.
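The drift-detection idea carries over directly to DNS: snapshot the records you expect, observe what's actually being served, and diff the two. A minimal sketch in Python, with both snapshots hardcoded for illustration (in practice the "actual" side would come from a resolver query against your nameservers):

```python
# Compare an expected DNS snapshot against observed records and
# report drift per (record name, record type) pair.
def dns_drift(expected, actual):
    drift = {}
    for key in set(expected) | set(actual):
        if expected.get(key) != actual.get(key):
            drift[key] = (expected.get(key), actual.get(key))
    return drift

expected = {
    ("www", "A"): "203.0.113.10",
    ("@", "MX"): "10 mail.example.com.",
    ("@", "TXT"): "v=spf1 include:_spf.example.com ~all",
}
actual = {
    ("www", "A"): "203.0.113.99",   # drifted!
    ("@", "MX"): "10 mail.example.com.",
    ("@", "TXT"): "v=spf1 include:_spf.example.com ~all",
}
print(dns_drift(expected, actual))
```

Run on a schedule, a check like this turns a silently changed A record or SPF entry into an alert instead of an outage post-mortem.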
This is the fourth in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
Many large enterprises over the last decade made a deliberate shift to an agile development process as a response to the ever-changing demands of the business. One key tenet of the agile development process is to deliver working software in smaller and more frequent increments, as opposed to the "big bang" approach of the waterfall method. This is most evident in the agile goal of having potentially shippable features at the end of each sprint.
If you do not feel you have a good handle on all the ways DevOps can benefit your enterprise and bring positive return on investment, you are not alone. While the concept of DevOps dates back to 2009 (prehistoric times in our world!), the procedures and tools that facilitate its use are still evolving. As has been discussed countless times, DevOps is not something you buy, it is something you do. And in order to 'do DevOps' you need to connect it to your business in a meaningful way to ensure long-term success. But let's pretend for a moment (shouldn't be hard to imagine) that your non-technical resources / upper-level management is holding out on making any changes that bring you closer to the DevOps principles of collaboration, culture and communication. How do you get them to invest in DevOps in your enterprise?
Trying to translate the concept of Configuration Management for those who do not understand its efficacy is like explaining surfing to an Inuit. It is simply not an inherent part of their culture. Without question, the benefits of Configuration Management can be challenging for the uninformed to grasp. One of the best ways to understand the benefits and use cases is to learn from other enterprises' experiences.
Controlling database configuration drift is a tricky subject. It's a topic that comes up frequently for us here at UpGuard and customers are always keen to know how they can go about taking control and simplify their configuration management processes. We've all experienced at some time or another that issue that was the result of a database migration that didn't complete, a column that has mysteriously changed data type or an old version of a stored proc or view being restored to a new database.
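One workable mental model for the drift scenarios above is snapshot-and-diff: record the schema at a known-good point, record it again later, and compare. A deliberately toy sketch of catching the "column that mysteriously changed data type" case, with hand-written schema dicts standing in for what a real scan would read from INFORMATION_SCHEMA:

```python
# Diff two schema snapshots shaped as {table: {column: type}}.
def schema_drift(before, after):
    changes = []
    for table in before:
        for col, col_type in before[table].items():
            new_type = after.get(table, {}).get(col)
            if new_type is None:
                changes.append(f"{table}.{col} dropped")
            elif new_type != col_type:
                changes.append(f"{table}.{col}: {col_type} -> {new_type}")
    return changes

before = {"orders": {"id": "int", "total": "decimal(10,2)"}}
after  = {"orders": {"id": "int", "total": "float"}}  # migration half-applied
print(schema_drift(before, after))  # -> ['orders.total: decimal(10,2) -> float']
```

The same comparison extends naturally to stored procedures and views by snapshotting their definitions alongside the column metadata.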
This is the third in a series of posts around DevOps culture and lessons learned from Patrick Lencioni’s leadership book The Five Dysfunctions of a Team - A Leadership Fable.
One of the easiest ways to build applications programmatically into containers through Docker is to use a Dockerfile. Dockerfiles are the Makefiles of the Docker world. A ton of blog posts and tutorials have sprung up over the last few months about how to set up Docker, or how to set up a LAMP stack and even a LEMP stack in Docker.
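For readers who haven't seen one, a Dockerfile is just a linear build recipe. A minimal illustrative example (image tag and file names are placeholders) that bakes a web server into an image:

```dockerfile
# Build with: docker build -t my-nginx .
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Like a Makefile, each instruction is a repeatable step, which is what makes Dockerfiles a natural target for automation and comparison.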
DevOps is a human problem and a leadership problem. Building a DevOps culture requires more than giving developers root, installing a configuration management tool, using a source code repository, and proclaiming "yes, we're a DevOps shop." At the end of the day, all aspects of the people, process, and technology continuums are impacted by DevOps. However, there is little doubt that the people aspect has the most to gain (and the biggest challenges) for anyone who is considering, or already on, the journey to becoming a DevOps ninja.
I always love going into those meeting rooms where different color post-it notes cover every surface, looking like a 3M sales rep threw up everywhere. For the longest time I just considered it one of those strange things R&D did. Then one day I was extremely early for a meeting and actually got to spend some time studying what was cluttered all over the glass wall, and I began to realize there was a definite method to the madness. This Kanban board concept wasn't just for the engineers; it was for everyone to see where work was being performed and its status. I loved the visual nature of it, and the fact that I could get accurate information without reading release notes or technical requirements documents was refreshing.
Gmail is amazing, but it isn't perfect. In both 2014 and 2016, the popular service suffered severe outages.
My mom used to tell me 'quality over quantity' back in high school when I was dating girls. Of course that meant that I completely ignored her and would date a girl if she was breathing. What in the hell would you expect an awkward 17 year old boy to do?! I've heard that same sentence used in lots of other ways too: when writing, when speaking, when eating, when working out, and so on. What does that have to do with DevOps? As I continue on my journey through the DevOps movement, it seems to me that we have a bit of a conflict here - the goal is to release at a higher velocity (quantity) with well tested code (quality). Is this really possible? I know that some of the 'high-performers' like Amazon, Etsy, Flickr and Netflix are proving that it can be accomplished, but I keep wondering if slowing down can actually help us deliver more extraordinary things.
I've been thinking a lot lately about the intersection of DevOps and Information Security. I'm definitely not the first to have considered the implications, but I am undoubtedly a complete cynic when it comes to InfoSec and how it can align itself to the DevOps movement. Why am I a cynic, you may ask? Well, I spent almost 10 years in the security/governance arena interacting with CISOs and their teams, trying to help them 'reduce risk' and 'pass audits', and I've watched countless organizations fail miserably. What is the main reason? Because the business fails to see the value of security and doesn't understand it. Better said: the business invests in what the business understands.
We all know that DevOps is the glimmer in our executives' eyes, the savior that will solve world hunger, and the most important thing to happen since the wheel was invented. But all joking aside, there is little doubt about the business benefits it can bring to organizations big and small. So now what?! You've decided (or been told) that DevOps is critical to your 2014 success, but where do you start, and what foundational elements must you work through before claiming victory? Here are 4 prerequisites for DevOps success that you can use as your blueprint for making sure you achieve your business objectives.
Tonight I gave a talk on comparing containers and generating Dockerfiles. Instead of providing the slides, which are pretty lame by themselves, I thought I'd write up the talk in its proper context. UpGuard has a number of use cases; the one I highlighted in the talk was migrating the configuration of environments from one location to another. Traditionally, we have helped customers scan their configuration state, generate executable tests based on those configuration items, and compare scanned configuration from multiple machines.
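To make the "generating Dockerfiles from scanned configuration" idea concrete, here is a minimal sketch. The snapshot shape (`base_image`, `packages`, `files` keys) and the `to_dockerfile` function are illustrative assumptions for this post, not UpGuard's actual scan format or API.

```python
# Hypothetical sketch: turn a scanned configuration snapshot into a Dockerfile.
# The snapshot keys and package names below are assumptions for illustration.

def to_dockerfile(snapshot):
    """Render a flat configuration snapshot as Dockerfile instructions."""
    lines = [f"FROM {snapshot['base_image']}"]
    if snapshot.get("packages"):
        # Sort packages so regenerating the Dockerfile is deterministic.
        lines.append(
            "RUN apt-get update && apt-get install -y "
            + " ".join(sorted(snapshot["packages"]))
        )
    for src, dest in snapshot.get("files", {}).items():
        lines.append(f"COPY {src} {dest}")
    return "\n".join(lines)

scan = {
    "base_image": "ubuntu:14.04",
    "packages": {"nginx", "curl"},
    "files": {"nginx.conf": "/etc/nginx/nginx.conf"},
}
print(to_dockerfile(scan))
```

The same snapshot structure also makes machine-to-machine comparison trivial: two scans can be diffed key by key before either is rendered into a Dockerfile.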
Information Technology Service Management (ITSM) may not have the sex appeal of Agile or the buzz of DevOps, but it lays a crucial foundation for both within the enterprise today. So, whether you consider it a necessary evil or the only way to run your IT department, here are a few resources that may come in handy.
There has never been a hotter time to be in financial markets - nanosecond response times, the ability to affect global markets in real time, and lucrative spot deals in dark pools are all the rage. For companies doing business in these conditions, it is a technical arms race worthy of a Reagan-era analogy.
Attending industry conferences is one of the best opportunities for networking and keeping up to date on the latest trends in software development, ITIL best practice implementation, innovative approaches to automation, new methods of tackling ever-expanding configuration drift, and navigating increasingly tricky compliance issues. These events are massive productions and are usually broadly focused, so IT professionals interested in automation will need to read between the lines to find a convention well-suited to their interests.
There is no disputing that cloud computing has led to a number of remarkable changes in the way many companies do business. Cloud-based solutions have been instrumental in streamlining IT functions and other business processes, resulting in considerable savings in both time and money.
The Sinkhole That is Manual Configuration Testing
Testing is a crucial part of software development: it involves the execution of a program with the goal of locating errors. Successful tests uncover new errors that can then be corrected before the software is released.
Manually testing environment configurations in the enterprise with scripts is difficult simply because there are so many factors involved. These include application, hardware, and device compatibility issues that can arise at any point during implementation and may be hard to anticipate beforehand. Worse yet, the larger the network infrastructure, the more time-consuming and complicated the testing and implementation processes become. This is where environment drift and stateless systems come into play.
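As a minimal sketch of what an executable configuration test looks like, the snippet below checks a node's scanned state against a known-good baseline. The flat key/value snapshot shape and the `assert_no_drift` helper are assumptions for illustration, not the format of any particular scanning tool.

```python
# Minimal sketch: turn a scanned baseline into an executable drift test.
# The snapshot keys and values are illustrative assumptions, not a real
# tool's scan format.

def assert_no_drift(baseline, actual):
    """Fail with a readable report if any item drifted from the baseline."""
    drifted = [
        f"{key}: expected {baseline[key]!r}, got {actual.get(key)!r}"
        for key in baseline
        if actual.get(key) != baseline[key]
    ]
    assert not drifted, "configuration drift detected:\n" + "\n".join(drifted)

baseline = {"nginx.version": "1.9.4", "ntp.enabled": True, "ulimit.nofile": 65536}
node = {"nginx.version": "1.9.4", "ntp.enabled": True, "ulimit.nofile": 65536}
assert_no_drift(baseline, node)  # passes while the node matches its baseline
```

Run against every node on a schedule, a check like this surfaces drift as a failing test instead of a surprise during implementation.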
Before delivery to the intended party, a system should be tested to determine whether the requirements set forth in the contract have been met. Configuration acceptance testing is the fundamental means of dispelling doubts that the system will fall short of its intended purposes. It is an essential part of the testing phase of the Software Development Life Cycle (SDLC), and perhaps the most vital in its category. How the components of the system interact determines its susceptibility to errors and, ultimately, how much resistance its implementation will meet. Configuration acceptance testing is pivotal to the SDLC, and as such should be an integral part of any firm's Application Lifecycle Management (ALM) policy. It reveals bugs and inadequacies in the system, supporting error correction and the formulation of a suitable plan of action should undiscovered errors manifest after the system has been implemented.
Cyber resilience is a fundamental change in understanding and accepting the true relationship between technology and risk. IT risk (or cyber risk, if you prefer) is actually business risk, and always has been. And the cybersecurity industry, for what it's worth, has generally avoided this concept because it goes against the narrative that their respective offerings—whether it's a firewall, IDS, monitoring tool, or otherwise—would be the one-size-fits-all silver bullet that can keep businesses safe. But reality tells a different story.